# Linear Canonical Transform On Boehmian Space
S. K. Panchal and Pravinkumar V. Dole
Department of Mathematics,
Dr. Babasaheb Ambedkar Marathwada University,
Aurangabad-431004 (M.S.) India.
E-mail: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
Abstract: The aim of this paper is to construct a Boehmian space, to define the
linear canonical transform for Boehmians, and to study its properties.
AMS Subject Classification: 44A35, 44A40, 46F12, 46F99.
Key Words: Linear canonical transform, Convolution, Distributions, Boehmians.
## 1 Introduction
The theory of Boehmians is among the most recent generalizations of functions.
The construction of Boehmians was inspired by the concept of regular operators
introduced by Boehme [1]. Regular operators form a subalgebra of the field of
Mikusinski operators and include only those functions whose support is bounded
from the left. In a concrete case, the space of Boehmians contains all regular
operators, all distributions, and some objects which are neither operators nor
distributions. The space of Boehmians is a new class of generalized functions
which opened a new door to research in mathematics. The construction of
Boehmians is given by Mikusinski and Mikusinski [5, 6, 8, 9]. Mikusinski and
Nemzer studied the Fourier and Laplace transforms for Boehmians in [7] and [11]
respectively. Zayed [15] extended the fractional Fourier transform to a class
of integrable Boehmians. Singh studied fractional integrals of the fractional
Fourier transform for integrable Boehmians in [13]. The Fourier, Laplace and
fractional Fourier transforms are special cases of the linear canonical
transform (LCT), which has many applications in several areas, such as signal
processing and optics [2]. This led to the study of the linear canonical
transform for integrable Boehmians in [4]. In this paper we construct Boehmian
spaces on the one hand and define the linear canonical transform for Boehmians
on the other. Further, we show that the transform is one-to-one, onto and
continuous from one Boehmian space to another, and we obtain other basic
properties in the space of Boehmians.
The linear canonical transform of a real-valued function $f$ is defined [3, 10]
as
$\displaystyle\mathcal{L}_{A}[f(t)](u)=F_{A}(u)=\begin{cases}\sqrt{\frac{1}{2\pi ib}}\int_{-\infty}^{\infty}e^{\frac{i}{2}[\frac{a}{b}t^{2}-\frac{2}{b}ut+\frac{d}{b}u^{2}]}f(t)dt&\text{for }b\neq 0,\\ \sqrt{d}\,e^{\frac{i}{2}cdu^{2}}f(du)&\text{for }b=0,\end{cases}$ (1.3)
where $\mathcal{L}_{A}$ is the unitary linear canonical transform operator
with parameter $A=(a,b,c,d)$, and $a,b,c,d$ are real numbers satisfying
$ad-bc=1$. The inverse of the linear canonical transform is the linear
canonical transform with parameter $A^{-1}=(d,-b,-c,a)$, and
$\mathcal{L}_{A^{-1}}$ denotes the inverse LCT operator. For the parameter
values $a=\cos\theta$, $b=\sin\theta$, $c=-\sin\theta$, $d=\cos\theta$ the LCT
becomes the fractional Fourier transform; in particular, for
$\theta=\frac{\pi}{2}$ the LCT becomes the Fourier transform, and for $a=0$,
$b=i$, $c=i$, $d=0$ it becomes the Laplace transform.
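For instance, substituting $\theta=\frac{\pi}{2}$, i.e. $a=d=0$ and $b=1$, into (1.3) makes the chirp terms vanish, and we recover the Fourier transform up to a constant phase factor:
$\displaystyle\mathcal{L}_{A}[f(t)](u)=\sqrt{\frac{1}{2\pi i}}\int_{-\infty}^{\infty}e^{-iut}f(t)dt.$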
Let $\mathcal{L}^{1}(\mathbb{R})$ be the space of all complex-valued
absolutely integrable functions on $\mathbb{R}$ with norm
$||f||_{1}=\int_{\mathbb{R}}|f(t)|dt\leq M_{1}$, and let
$\mathcal{L}^{2}(\mathbb{R})$ be the space of all complex-valued
square-integrable functions on $\mathbb{R}$ with norm
$||g||_{2}=\big{(}\int_{\mathbb{R}}|g(t)|^{2}dt\big{)}^{\frac{1}{2}}\leq
M_{2}$, for some $M_{1},M_{2}>0$. The space
$\mathcal{L}^{1}(\mathbb{R})\cap\mathcal{L}^{2}(\mathbb{R})$ is denoted by
$\mathcal{L}^{1,2}(\mathbb{R})$.
###### Definition 1.1
[12] (Regular Distributions) Let $f$ be a locally integrable function, i.e.
absolutely integrable on every finite interval of $\mathbb{R}$; then the
distribution generated by $f$ is called a regular distribution.
We observe that if $f\in\mathcal{L}^{1,2}(\mathbb{R})$ then $\mathcal{L}_{A}(f)$
and $\mathcal{L}_{A^{-1}}(f)$ are members of $\mathcal{L}^{1,2}(\mathbb{R})$.
###### Definition 1.2
[3] Let the weight function be $W(t,\tau)=e^{i\tau(\tau-t)\frac{a}{b}}$. For
any two functions $f$ and $g$ the convolution operation $*^{A}$ is defined as
$\displaystyle
h(t)=(f*^{A}g)(t)=\int_{-\infty}^{\infty}f(\tau)g(t-\tau)W(t,\tau)d\tau$ (1.4)
###### Theorem 1.1
[3] (New Convolution Theorem)
Let $h(t)=(f*^{A}g)(t)$, and let $H_{A}(u),F_{A}(u),G_{A}(u)$ denote the linear
canonical transforms of $h(t)$, $f(t)$ and $g(t)$ respectively; then
$\displaystyle H_{A}(u)=\sqrt{2i\pi
b}\,e^{-i(\frac{du^{2}}{2b})}F_{A}(u)G_{A}(u).$ (1.5)
## 2 Preliminary Results
In this section we obtain some results which are required to construct the
Boehmian space.
###### Lemma 2.1
Let $f\in\mathcal{L}^{1}(\mathbb{R})$ and $g\in\mathcal{L}^{2}(\mathbb{R})$;
then $(f*^{A}g)$ is in $\mathcal{L}^{2}(\mathbb{R})$.
###### Lemma 2.2
The space $(\mathcal{L}^{1,2}(\mathbb{R}),*^{A})$ is a commutative semigroup.
###### Theorem 2.1
(Plancherel-type theorem) Let $\\{f_{n}\\}$ be a sequence in
$\mathcal{L}^{1,2}(\mathbb{R})$ with $f_{n}\rightarrow f$ in
$\mathcal{L}^{2}(\mathbb{R})$; then
$\mathcal{L}_{A}(f_{n})\rightarrow\mathcal{L}_{A}(f)$ in
$\mathcal{L}^{2}(\mathbb{R})$ as $n\rightarrow\infty$.
###### Definition 2.1
Analogous to the Plancherel-type theorem, for $f\in\mathcal{L}^{2}(\mathbb{R})$
we define $\mathcal{L}_{A}(f)$ by
$\mathcal{L}^{2}\text{-}\lim_{n\rightarrow\infty}\mathcal{L}_{A}(f_{n})$, where
$f_{n}\in\mathcal{L}^{1,2}(\mathbb{R})$.
Let $\bigtriangledown$ be the set of all sequences $\\{\delta_{n}\\}$ of
continuous real functions from $\mathcal{L}^{1,2}(\mathbb{R})$ having compact
support in $\mathbb{R}$ with the following properties:
1. (i)
$\quad\int_{\mathbb{R}}e^{i\frac{at^{2}}{2b}}\delta_{n}(t)dt=1$,
$\forall\,n\in\mathbb{N}$,
2. (ii)
$\quad\lim_{n\rightarrow\infty}\int_{|t|>\epsilon}|\delta_{n}(t)|dt=0$ for
each $\epsilon>0$.
The members of $\bigtriangledown$ are called delta sequences.
###### Example 2.1
Let $a,b\in\mathbb{R}$, $b\neq 0$, and consider the sequence
$\displaystyle\delta_{n}(t)=\begin{cases}e^{-i\frac{at^{2}}{2b}}\,n^{2}t&\text{for }0\leq t\leq\frac{1}{n},\\ e^{-i\frac{at^{2}}{2b}}\,n^{2}(\frac{2}{n}-t)&\text{for }\frac{1}{n}\leq t\leq\frac{2}{n},\\ 0&\text{otherwise}.\end{cases}$
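One can verify property (i) directly for this sequence: the chirp factors cancel and the two pieces each contribute $\frac{1}{2}$,
$\displaystyle\int_{\mathbb{R}}e^{i\frac{at^{2}}{2b}}\delta_{n}(t)dt=\int_{0}^{\frac{1}{n}}n^{2}t\,dt+\int_{\frac{1}{n}}^{\frac{2}{n}}n^{2}\Big{(}\frac{2}{n}-t\Big{)}dt=\frac{1}{2}+\frac{1}{2}=1,\quad\forall\,n\in\mathbb{N},$
while property (ii) holds because the support $[0,\frac{2}{n}]$ shrinks to a point as $n\rightarrow\infty$.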
###### Lemma 2.3
Let $\\{\varphi_{n}\\},\\{\psi_{n}\\}\in\bigtriangledown$; then
$(\varphi_{n}*^{A}\psi_{n})\in\bigtriangledown$ for all $n\in\mathbb{N}$.
###### Lemma 2.4
Let $f\in\mathcal{L}^{1,2}(\mathbb{R})$ and
$\\{\psi_{n}\\}\in\bigtriangledown$; then $f*^{A}\psi_{n}\rightarrow f$ as
$n\rightarrow\infty$ in $\mathcal{L}^{2}(\mathbb{R})$.
## 3 LCT For Boehmians
A pair of sequences $(f_{n},\varphi_{n})$ is called a quotient of sequences,
denoted by $f_{n}/\varphi_{n}$, where for each $n\in\mathbb{N}$,
$f_{n}\in\mathcal{L}^{1,2}(\mathbb{R})$ and
$\\{\varphi_{n}\\}\in\bigtriangledown$, such that
$f_{m}*^{A}\varphi_{n}=f_{n}*^{A}\varphi_{m}$ holds for all
$m,n\in\mathbb{N}$. Two quotients of sequences $f_{n}/\varphi_{n}$ and
$g_{n}/\psi_{n}$ are equivalent if $f_{n}*^{A}\psi_{n}=g_{n}*^{A}\varphi_{n}$
for all $n\in\mathbb{N}$. This is an equivalence relation, and the equivalence
class of a quotient of sequences is called a Boehmian. The space of all
Boehmians is denoted by
$\mathcal{B}_{\mathcal{L}^{1,2}}=\mathcal{B}_{\mathcal{L}^{1,2}}(\mathcal{L}^{1,2}(\mathbb{R}),\bigtriangledown,*^{A})$
and its members are denoted by $F=[f_{n}/\varphi_{n}]$. A function
$f\in\mathcal{L}^{1,2}(\mathbb{R})$ can be identified with the Boehmian
$[(f*^{A}\delta_{n})/\delta_{n}]$, where $\\{\delta_{n}\\}$ is a delta
sequence. If $F=[f_{n}/\varphi_{n}]$, then
$F*^{A}\varphi_{n}=f_{n}\in\mathcal{L}^{1,2}(\mathbb{R})$ for all
$n\in\mathbb{N}$.
###### Definition 3.1
A sequence of Boehmians $F_{n}$ is called $\Delta-$convergent to a Boehmian
$F$ ($\Delta-\lim F_{n}=F$) if there exists a delta sequence $\\{\delta_{n}\\}$
such that $(F_{n}-F)*^{A}\delta_{n}\in\mathcal{L}^{1,2}(\mathbb{R})$ for every
$n\in\mathbb{N}$, and $\|(F_{n}-F)*^{A}\delta_{n}\|_{2}\rightarrow 0$ as
$n\rightarrow\infty$.
###### Definition 3.2
A sequence of Boehmians $F_{n}$ is called $\delta-$convergent to a Boehmian
$F$ ($\delta-\lim F_{n}=F$) if there exists a delta sequence $\\{\delta_{n}\\}$
such that $F_{n}*^{A}\delta_{k}\in\mathcal{L}^{1,2}(\mathbb{R})$ and
$F*^{A}\delta_{k}\in\mathcal{L}^{1,2}(\mathbb{R})$ for every
$n,k\in\mathbb{N}$, and $\|(F_{n}-F)*^{A}\delta_{k}\|_{2}\rightarrow 0$ as
$n\rightarrow\infty$ for each $k\in\mathbb{N}$.
Let $\\{\delta_{n}\\}$ be a delta sequence; then $\delta_{n}/\delta_{n}$
represents a Boehmian. Since the Boehmian $[\delta_{n}/\delta_{n}]$
corresponds to the Dirac delta distribution $\delta$, all the derivatives of
$\delta$ are also Boehmians. If $\\{\delta_{n}\\}$ is infinitely differentiable
with bounded support, then the $k^{th}$ derivative of $\delta$ is defined by
$\delta^{(k)}=[\delta_{n}^{(k)}/\delta_{n}]\in\mathcal{B}_{\mathcal{L}^{1,2}}$,
for each $k\in\mathbb{N}$. The $k^{th}$ derivative of a Boehmian
$F\in\mathcal{B}_{\mathcal{L}^{1,2}}$ is defined by
$F^{(k)}=F*^{A}\delta^{(k)}$. Scalar multiplication, addition and
convolution in $\mathcal{B}_{\mathcal{L}^{1,2}}$ are defined as
$\displaystyle\lambda[f_{n}/\varphi_{n}]=[\lambda f_{n}/\varphi_{n}],$
$\displaystyle[f_{n}/\varphi_{n}]+[g_{n}/\psi_{n}]=[(f_{n}*^{A}\psi_{n}+g_{n}*^{A}\varphi_{n})/(\varphi_{n}*^{A}\psi_{n})],$
$\displaystyle[f_{n}/\varphi_{n}]*^{A}[g_{n}/\psi_{n}]=[(f_{n}*^{A}g_{n})/(\varphi_{n}*^{A}\psi_{n})].$
###### Lemma 3.1
Let $\Delta-\lim F_{n}=F$ in $\mathcal{B}_{\mathcal{L}^{1,2}}$; then
$\Delta-\lim F_{n}^{(k)}=F^{(k)}$ in $\mathcal{B}_{\mathcal{L}^{1,2}}$ for all
$k\in\mathbb{N}$.
Let
$\bigtriangledown_{0}=\\{\mathcal{L}_{A}(\delta_{n});\\{\delta_{n}\\}\in\bigtriangledown\\}$
be the space of complex-valued functions on $\mathbb{R}$, let the operation
$\cdot$ be pointwise multiplication, and let $C_{0}(\mathbb{R})$ be the space
of all continuous functions on $\mathbb{R}$ vanishing at infinity. We then
construct another space of Boehmians, denoted by
$\mathcal{B}_{\bigtriangledown}=\mathcal{B}_{\bigtriangledown}(\mathcal{L}^{2}(\mathbb{R}),C_{0}(\mathbb{R})\cap\mathcal{L}^{2}(\mathbb{R}),\cdot,\bigtriangledown_{0})$.
This is the range of the linear canonical transform on
$\mathcal{B}_{\mathcal{L}^{1,2}}$, and each element of
$\mathcal{B}_{\bigtriangledown}$ is denoted by
$\mathcal{L}_{A}(f_{n})/\mathcal{L}_{A}(\delta_{n})$, $n\in\mathbb{N}$,
where $\\{f_{n}\\}\subset\mathcal{L}^{1,2}(\mathbb{R})$.
###### Lemma 3.2
Let $f,g\in\mathcal{L}^{2}(\mathbb{R})$; $\varphi,\psi\in C_{0}(\mathbb{R})$
and $\lambda\in\mathbb{C}$; then
(i) $f\cdot\varphi\in\mathcal{L}^{2}(\mathbb{R})$
(ii) $(f+g)\cdot\varphi=f\cdot\varphi+g\cdot\varphi$
(iii) $(\lambda f)\cdot\varphi=\lambda(f\cdot\varphi)$
(iv) $f\cdot(\varphi\cdot\psi)=(f\cdot\varphi)\cdot\psi$.
###### Lemma 3.3
Let $f_{n}\rightarrow f$ as $n\rightarrow\infty$ in
$\mathcal{L}^{2}(\mathbb{R})$ and $\varphi\in C_{0}(\mathbb{R})$; then
$f_{n}\cdot\varphi\rightarrow f\cdot\varphi$ in $\mathcal{L}^{2}(\mathbb{R})$.
###### Lemma 3.4
Let $\\{\delta_{n}\\}\in\bigtriangledown$; then $\mathcal{L}_{A}(\delta_{n})$
converges to the constant function $1$ uniformly on each compact subset of
$\mathbb{R}$.
###### Lemma 3.5
Let $f_{n}\longrightarrow f$ as $n\longrightarrow\infty$ in
$\mathcal{L}^{1,2}(\mathbb{R})$ and
$\mathcal{L}_{A}(\varphi_{n})\in\bigtriangledown_{0}$; then
$f_{n}\cdot\mathcal{L}_{A}(\varphi_{n})\rightarrow f$ in
$\mathcal{L}^{2}(\mathbb{R})$.
###### Lemma 3.6
Let
$\mathcal{L}_{A}(\varphi_{n}),\mathcal{L}_{A}(\psi_{n})\in\bigtriangledown_{0}$;
then
$\mathcal{L}_{A}(\varphi_{n})\cdot\mathcal{L}_{A}(\psi_{n})\in\bigtriangledown_{0}$.
Proof: Let $\mathcal{L}_{A}(\varphi_{n}),\mathcal{L}_{A}(\psi_{n})\in
C_{0}(\mathbb{R})$. From Theorem 1.1 and Lemma 2.3 we get
$\mathcal{L}_{A}(\varphi_{n})\cdot\mathcal{L}_{A}(\psi_{n})=\frac{e^{\frac{i}{2}(\frac{d}{b})u^{2}}}{\sqrt{2\pi
ib}}\mathcal{L}_{A}(\varphi_{n}*^{A}\psi_{n})\in\bigtriangledown_{0}$.$\hfill\blacksquare$
###### Definition 3.3
Let $\\{f_{n}\\}\in\mathcal{L}^{1,2}(\mathbb{R})$ and
$\\{\delta_{n}\\}\in\bigtriangledown$, we define the linear canonical
transform
$\mathcal{L}_{A}:\mathcal{B}_{\mathcal{L}^{1,2}}\longrightarrow\mathcal{B}_{\bigtriangledown}$
as
$\displaystyle\mathcal{L}_{A}[f_{n}/\delta_{n}]=\mathcal{L}_{A}(f_{n})/\mathcal{L}_{A}(\delta_{n})\qquad
for\quad[f_{n}/\delta_{n}]\in\mathcal{B}_{\mathcal{L}^{1,2}}.$ (3.1)
The linear canonical transform on $\mathcal{B}_{\mathcal{L}^{1,2}}$ is well
defined. Indeed, if $[f_{n}/\delta_{n}]\in\mathcal{B}_{\mathcal{L}^{1,2}}$,
then $f_{n}*^{A}\delta_{m}=f_{m}*^{A}\delta_{n}$ for all $m,n\in\mathbb{N}$.
Applying the linear canonical transform on both sides, we get
$\mathcal{L}_{A}(f_{n})\mathcal{L}_{A}(\delta_{m})=\mathcal{L}_{A}(f_{m})\mathcal{L}_{A}(\delta_{n})$
for all $m,n\in\mathbb{N}$ and hence
$\mathcal{L}_{A}(f_{n})/\mathcal{L}_{A}(\delta_{n})\in\mathcal{B}_{\bigtriangledown}$.
Further, if
$[f_{n}/\psi_{n}]=[g_{n}/\delta_{n}]\in\mathcal{B}_{\mathcal{L}^{1,2}}$ then
we have $f_{n}*^{A}\delta_{n}=g_{n}*^{A}\psi_{n}$ for all $n\in\mathbb{N}$.
Again applying the linear canonical transform on both sides, we get
$\mathcal{L}_{A}(f_{n})\mathcal{L}_{A}(\delta_{n})=\mathcal{L}_{A}(g_{n})\mathcal{L}_{A}(\psi_{n})$
for all $n\in\mathbb{N}$, i.e.,
$\mathcal{L}_{A}(f_{n})/\mathcal{L}_{A}(\psi_{n})=\mathcal{L}_{A}(g_{n})/\mathcal{L}_{A}(\delta_{n})$
in $\mathcal{B}_{\bigtriangledown}$.
###### Lemma 3.7
Let $[f_{n}/\varphi_{n}]\in\mathcal{B}_{\mathcal{L}^{1,2}}$; then the sequence
of linear canonical transforms
$\displaystyle\mathcal{L}_{A}[f_{n}](u)=\sqrt{\frac{1}{2\pi ib}}\,e^{\frac{i}{2}(\frac{d}{b})u^{2}}\int_{-\infty}^{\infty}e^{\frac{-i}{b}ut}e^{\frac{i}{2}\frac{a}{b}t^{2}}f_{n}(t)dt$ (3.2)
converges uniformly on each compact set in $\mathbb{R}$.
###### Definition 3.4
In view of the proof of Lemma 3.7, the linear canonical transform of a
Boehmian, in the space of continuous functions on $\mathbb{R}$, is defined as
$\displaystyle\mathcal{L}_{A}[F]=\lim_{n\rightarrow\infty}\mathcal{L}_{A}(f_{n}).$
###### Theorem 3.1
The linear canonical transform
$\mathcal{L}_{A}:\mathcal{B}_{\mathcal{L}^{1,2}}\longrightarrow\mathcal{B}_{\bigtriangledown}$
is consistent with
$\mathcal{L}_{A}:\mathcal{L}^{2}(\mathbb{R})\longrightarrow\mathcal{L}^{2}(\mathbb{R})$.
###### Theorem 3.2
The linear canonical transform
$\mathcal{L}_{A}:\mathcal{B}_{\mathcal{L}^{1,2}}\longrightarrow\mathcal{B}_{\bigtriangledown}$
is a bijection.
###### Theorem 3.3
Let $F,G\in\mathcal{B}_{\mathcal{L}^{1,2}}$; then
1. (a)
$\quad\mathcal{L}_{A}[F+\lambda
G]=\mathcal{L}_{A}(F)+\lambda\mathcal{L}_{A}(G)$, for any complex $\lambda$.
2. (b)
$\quad\mathcal{L}_{A}[e^{ikt}F](u)=e^{\frac{-idk(2u-bk)}{2}}\mathcal{L}_{A}[F](u-bk)$,
for $k\in\mathbb{R}$.
3. (c)
$\quad\mathcal{L}_{A}[F(t+\tau)](u)=e^{i(2u+a\tau)\frac{\tau}{2b}}\mathcal{L}_{A}[e^{\frac{-ia}{b}x\tau}F(x)](u)$.
4. (d)
$\quad\mathcal{L}_{A}[F^{(2)}](u)=\bigg{[}\bigg{(}\frac{iu}{b}\bigg{)}^{2}+\frac{ia}{b}\bigg{]}\mathcal{L}_{A}[F(t)](u).$
###### Theorem 3.4
Let $F,G\in\mathcal{B}_{\mathcal{L}^{1,2}}$; then
$\mathcal{L}_{A}(F*^{A}G)=\mathcal{L}_{A}(F)\mathcal{L}_{A}(G)$.
###### Theorem 3.5
Let $\delta-\lim F_{n}=F$ for $F_{n},F\in\mathcal{B}_{\mathcal{L}^{1,2}}$; then
$\mathcal{L}_{A}(F_{n})\rightarrow\mathcal{L}_{A}(F)$ uniformly on each
compact set of $\mathbb{R}$.
Proof: Let $\\{\delta_{m}\\}$ be a delta sequence such that
$F_{n}*^{A}\delta_{m},F*^{A}\delta_{m}\in\mathcal{L}^{1,2}(\mathbb{R})$ for
all $n,m\in\mathbb{N}$ and $\|(F_{n}-F)*^{A}\delta_{m}\|_{2}\rightarrow 0$ as
$n\rightarrow\infty$ for each $m\in\mathbb{N}$. Let $M$ be a compact set in
$\mathbb{R}$; then $\mathcal{L}_{A}(\delta_{m})>0$ on $M$ for almost all
$m\in\mathbb{N}$. Since $\mathcal{L}_{A}(\delta_{m})$ is a continuous function
and
$\mathcal{L}_{A}(F_{n})\cdot\mathcal{L}_{A}(\delta_{m})-\mathcal{L}_{A}(F)\cdot\mathcal{L}_{A}(\delta_{m})=(\mathcal{L}_{A}(F_{n})-\mathcal{L}_{A}(F))\cdot\mathcal{L}_{A}(\delta_{m})$,
it follows that
$\|(\mathcal{L}_{A}(F_{n})-\mathcal{L}_{A}(F))\cdot\mathcal{L}_{A}(\delta_{m})\|_{2}\rightarrow
0$ as $n\rightarrow\infty$ for each $m\in\mathbb{N}$. Thus
$\mathcal{L}_{A}(F_{n})\rightarrow\mathcal{L}_{A}(F)$ uniformly on $M$.
$\hfill\blacksquare$
## References
* [1] T. K. Boehme; The support of Mikusinski operators, Trans. Amer. Math. Soc., 176, 319-334, (1973).
* [2] Deng Bing, Tao Ran and Wang Yue. Convolution theorems for the linear canonical transform and their applications. Sci. China Series F: Inf. Sci., 49(5), 592-603, (2006).
* [3] Deyun Wei, Qiwen Ran and Yong Li; New convolution theorem for the linear canonical transform and its translation invariance property, Optik 123, 1478-1481, (2012).
* [4] Pravinkumar V. Dole and S. K. Panchal, Linear canonical transform for Integrable Boehmians, Int. J. Pure Appl. Math., 116(1), 91-96, (2017).
* [5] J. Mikusinski and P. Mikusinski; Quotients de suites et leurs applications dans l'analyse fonctionnelle, C. R. Acad. Sci. Paris Ser. I Math., 293, 463-464, (1981).
* [6] P. Mikusinski; Convergence of Boehmians, Japan. J. Math., 9(1), 169-179, (1983).
* [7] P. Mikusinski; Fourier transform for integrable Boehmians, Rocky Mountain J. Math., 17(3), 577-582, (1987).
* [8] P. Mikusinski; Boehmians and generalized functions, Acta Math. Hungarica, 51, 159-179, (1988).
* [9] P. Mikusinski; Transform of Boehmians, Different Aspects of Differentiability, Dissertationes Mathematicae, 340, 201-206, (1995).
* [10] M. Moshinsky and C. Quesne; Linear canonical transformations and their unitary representations, J. Math. Phys., 12(8), 1772-1783, (1971).
* [11] Dennis Nemzer; Laplace transforms on a class of Boehmians, Bull. Austral. Math. Soc., 46, 347-352, (1992).
* [12] R. S. Pathak; _A Course in Distributional Theory and Applications_, Narosa Publishing House, New Delhi, 2001.
* [13] A. Singh and P. K. Banerji; Fractional integrals of fractional Fourier transform for integrable Boehmians, Proceedings of the National Academy of Sciences, India Section A: Physical Sciences, 2017.
* [14] Walter Rudin; Real and Complex Analysis, Third Edition, McGraw-Hill, New York, 1987.
* [15] A. I. Zayed; Fractional Fourier transform of generalized functions, Integ. Trans. Spl. Funct., 7, 299-312, (1998).
# Multiple importance sampling for stochastic gradient estimation
Corentin Salaün, Xingchang Huang, Iliyan Georgiev, Niloy J. Mitra, Gurprit Singh
###### Abstract
We introduce a theoretical and practical framework for efficient importance
sampling of mini-batch samples for gradient estimation from single and
multiple probability distributions. To handle noisy gradients, our framework
dynamically evolves the importance distribution during training by utilizing a
self-adaptive metric. Our framework combines multiple, diverse sampling
distributions, each tailored to specific parameter gradients. This approach
facilitates the importance sampling of _vector-valued_ gradient estimation.
Rather than naively combining multiple distributions, our framework optimally
weights data contributions across the distributions. This adaptive combination
of multiple importance sampling distributions yields superior gradient
estimates, leading to faster training convergence. We demonstrate the effectiveness of
our approach through empirical evaluations across a range of optimization
tasks like classification and regression on both image and point cloud
datasets.
## 1 Introduction
Stochastic gradient descent (SGD), in tandem with gradient backpropagation, is
fundamental in optimizing complex neural networks. This iterative optimization
process relies on the efficient estimation of gradients to update model
parameters and minimize the optimization objective. A significant challenge in
methods based on SGD lies in the influence of stochasticity on gradient
estimation, impacting both the quality of the estimates and convergence speed.
This stochasticity introduces errors in the form of noise, and addressing and
minimizing such noise in gradient estimation continues to be an active area of
research.
Various approaches have been introduced to reduce gradient estimation noise,
including data diversification (Zhang et al., 2019; Faghri et al., 2020; Ren
et al., 2019), adaptive mini-batch sizes (Balles et al., 2017; Alfarra et al.,
2021), momentum-based estimation (Rumelhart et al., 1986; Kingma & Ba, 2014),
and adaptive sampling strategies (Santiago et al., 2021). These methods
collectively expedite the optimization by improving the gradient-estimation
accuracy.
Another well-established technique for noise reduction in estimation is
importance sampling (IS) (Loshchilov & Hutter, 2015; Katharopoulos & Fleuret,
2017, 2018), which involves the non-uniform selection of data samples for
mini-batch construction. Data samples that contribute more significantly to
gradient estimation are selected more often. This allows computational
resources to focus on the most critical data for the optimization task.
However, these algorithms are quite inefficient and add significant overhead
to the training process. Another limitation of importance sampling, in
general, lies in determining the best sampling distribution to achieve maximal
improvement, often necessitating a quality trade-off due to the simultaneous
estimation of numerous parameters.
We propose an efficient importance sampling algorithm that does not require
resampling, in contrast to (Katharopoulos & Fleuret, 2018). Our importance
function dynamically evolves during training, utilizing a self-adaptive metric
to effectively manage initial noisy gradients. Further, unlike existing IS
methods in machine learning where importance distributions assume scalar-
valued gradients, we propose a multiple importance sampling (MIS) strategy to
manage _vector-valued_ gradient estimation. We propose the simultaneous use of
multiple sampling strategies combined with a weighting approach following the
principles of MIS theory, well studied in the rendering literature in computer
graphics (Veach, 1997). Rather than naively combining multiple distributions,
our proposal involves estimating importance weights w.r.t. data samples across
multiple distributions by leveraging the theory of optimal MIS (OMIS)
(Kondapaneni et al., 2019). This optimization process yields superior gradient
estimates, leading to faster training convergence. In summary, we make the
following contributions:
* •
We develop an efficient IS algorithm with a self-adaptive metric for
importance sampling.
* •
We introduce an MIS estimator to importance sample vector-valued gradients.
* •
We present a practical approach to compute the optimal weights for multiple
sampling strategies to maximize the quality of vector-valued gradient
estimation.
* •
We demonstrate the effectiveness of our approach on various machine-learning
tasks.
## 2 Related work
Figure 1: We visualize different importance sampling distributions for a
simple classification task. We propose to use the output layer gradients for
importance sampling, as shown in the network diagram (a). For a given ground-
truth classification (top) and training dataset (bottom) shown in (b), it is
possible to importance sample from the $L_{2}$ norm of the output-layer
gradients (c) or from three different sampling distributions derived from the
gradient norms of individual output nodes (d). The bottom row shows sample
weights from each distribution.
### Importance sampling for gradient estimation.
Importance sampling (IS) (Kahn, 1950; Kahn & Marshall, 1953; Owen & Zhou,
2000) has emerged as a powerful technique in high energy physics, Bayesian
inference, rare event simulation for finance and insurance, and rendering in
computer graphics. In the past few years, IS has also been applied in machine
learning to improve the accuracy of gradient estimation and enhance the
overall performance of learning algorithms (Zhao & Zhang, 2015).
By strategically sampling data points from a non-uniform distribution, IS
effectively focuses training resources on the most informative and impactful
data, leading to more accurate gradient estimates. Bordes et al. (2005)
developed an online algorithm (LASVM) that uses importance sampling to train
kernelized support vector machines. Loshchilov & Hutter (2015) suggested
employing data rankings based on their respective loss values. This ranking is
then employed to create an importance sampling strategy that assigns greater
importance to data with higher loss values. Katharopoulos & Fleuret (2017)
proposed importance sampling the loss function. Subsequently, Katharopoulos &
Fleuret (2018) introduced an upper bound to the gradient norm that can be
employed as an importance function. Their algorithm involves resampling and
computing gradients with respect to the final layer. Despite the importance
function demonstrating improvement over uniform sampling, their algorithm
exhibits significant inefficiency.
Appendix B summarizes the theory behind (multiple) importance sampling. It
also states the optimal MIS estimator and how to compute it.
### Multiple importance sampling.
The concept of Multiple Importance Sampling (MIS) emerged as a robust and
efficient technique for integrating multiple sampling strategies (Owen & Zhou,
Its core principle lies in assigning weights to the various importance sampling
estimators, allowing each data sample to utilize the most appropriate
strategy. Veach (1997) introduced this concept of MIS to rendering in computer
graphics and proposed the widely adopted _balance heuristic_ for importance
(weight) allocation. The balance heuristic determines weights based on a data
sample’s relative importance across all sampling approaches, effectively
mitigating the influence of outliers with low probability densities. While MIS
is straightforward to implement and independent of the specific function,
Variance-Aware MIS (Grittmann et al., 2019) advanced the concept by using
variance estimates from each sampling technique for further error reduction.
Moreover, Optimal MIS (Kondapaneni et al., 2019) derived optimal sampling
weights that minimize MIS estimator variance. Notably, these weights depend
not only on probability density but also on the function values of the
samples.
## 3 Problem statement
The primary goal of machine-learning optimization is to find the optimal
parameters $\theta$ for a given model function $m(x,\theta)$ by minimizing a
loss function ${\mathcal{L}}$ over a dataset ${\Omega}$:
$\displaystyle\theta^{*}=\underset{\theta}{\mathrm{argmin}}\,\underbrace{\frac{1}{|{\Omega}|}\int_{{\Omega}}{\mathcal{L}}(m(x,\theta),y)\,\mathrm{d}x}_{L_{\theta}}.$
(1)
The loss function ${\mathcal{L}}$ quantifies the dissimilarity between the
model predictions $m(x,\theta)$ and observed data $y$. The factor in front of
the integral normalizes the overall loss $L_{\theta}$ with respect to the
dataset size. In the common case of a discrete dataset, the integral becomes a
sum.
In practice, the total loss is minimized via iterative gradient descent. In
each iteration $t$, the gradient $\nabla L_{\theta_{t}}$ of the loss with
respect to the current model parameters $\theta_{t}$ is computed, and the
parameters are updated as
$\theta_{t+1}=\theta_{t}-\lambda\underbrace{\int_{{\Omega}}\nabla{\mathcal{L}}(m(x,\theta),y)\,\mathrm{d}x}_{\nabla
L_{\theta_{t}}},$ (2)
where $\lambda>0$ is the learning rate.
### Monte Carlo gradient estimator.
In practice, the parameter gradient is estimated from a small batch
$\\{x_{i}\\}_{i=1}^{B}$ of randomly selected data points:
$\langle\nabla
L_{\theta}\rangle=\sum_{i=1}^{B}\frac{\nabla{\mathcal{L}}(m(x_{i},\theta),y_{i})}{{B}p(x_{i})}\approx\nabla
L_{\theta},\quad x_{i}\sim p.$ (3)
The data points are sampled from a probability density function (pdf) $p$, or
a probability mass function in the discrete case. The mini-batch gradient descent
substitutes the true gradient $\nabla L_{\theta_{t}}$ with an estimate
$\langle\nabla L_{\theta_{t}}\rangle$ in Equation 2 to update the model
parameters in each iteration.
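As an illustration of this estimator (our own minimal NumPy sketch, not from the paper; the array `grad_loss` stands in for per-sample gradients $\nabla{\mathcal{L}}$ that would normally come from backpropagation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in per-sample gradients with widely varying magnitudes.
grad_loss = rng.normal(size=1000) * np.linspace(0.1, 5.0, 1000)
N, B = grad_loss.size, 32

def mc_gradient(p):
    """Estimate the total gradient via Equation 3: (1/B) sum grad/p(x_i), x_i ~ p."""
    idx = rng.choice(N, size=B, p=p)
    return np.mean(grad_loss[idx] / p[idx])

uniform = np.full(N, 1.0 / N)
importance = np.abs(grad_loss) / np.abs(grad_loss).sum()  # p proportional to |grad|

# Both are unbiased estimates of grad_loss.sum(); the second has far lower variance.
print(mc_gradient(uniform), mc_gradient(importance), grad_loss.sum())
```

With $p\propto|\nabla{\mathcal{L}}|$ the per-sample ratio $\nabla{\mathcal{L}}/p$ is constant up to sign, which is what makes the gradient norm the variance-optimal importance function discussed in Section 4.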
We want to estimate $\nabla L_{\theta_{t}}$ accurately and also efficiently,
since the gradient-descent iteration (2) may require many thousands of
iterations until the parameters converge. These goals can be achieved by
performing the optimization in small batches whose samples are chosen
according to a carefully designed distribution $p$. For a simple
classification problem, Figure 1c shows an example importance sampling
distribution derived from the output layer of the model. In Figure 1d we
derive multiple distributions from the individual output nodes. Below we
develop theory and practical algorithms for importance sampling using a single
distribution (Section 4) and for combining multiple distributions to further
improve gradient estimation (Section 5).
## 4 Mini-batch importance sampling
Mini-batch gradient estimation (3) notoriously suffers from Monte Carlo noise,
which can make the parameter-optimization trajectory erratic and convergence
slow. That noise comes from the often vastly different contributions of
different samples $x_{i}$ to that estimate.
Typically, the selection of samples that go into a mini-batch is done with
uniform probability $p(x_{i})=1/{|\Omega|}$. Importance sampling is a
technique that uses a non-uniform pdf to strategically pick samples in
proportion to their contribution to the gradient, reducing estimation
variance.
### Practical algorithm.
We propose an importance sampling algorithm for mini-batch gradient descent,
outlined in Algorithm 1. Similarly to Schaul et al. (2015), we use an
importance function that relies on readily available quantities for each data
point, introducing only negligible memory and computational overhead over
classical uniform mini-batching. We store a set of persistent _un-normalized
importance_ scalars $q=\\{q_{i}\\}_{i=1}^{|\Omega|}$ that are updated
continuously during the optimization.
The first epoch is a standard SGD one, during which we additionally compute
the initial importance of each data point (line 3). In each subsequent epoch,
at each mini-batch optimization step $t$ we normalize the importance values to
a valid distribution $p$ (line 6). We then choose ${B}$ data samples (with
replacement) according to $p$ (line 7). The loss ${\mathcal{L}}$ is evaluated
for each selected data sample (line 8), and backpropagated to compute the loss
gradient (line 9). The per-sample importance is used in the gradient
estimation (line 10) to normalize the contribution. In practice lines 9-10 can
be done simultaneously by backpropagating a weighted loss
${\mathcal{L}}(x)\cdot(\nicefrac{{1}}{{(p(x)\cdot B)}})^{T}$. Finally, the
network parameters are updated using the estimated gradient (line 11). On line
12, we update the importance of the samples in the mini-batch; we describe our
choice of importance function below. The blending parameter $\gamma$ ensures
stability of the persistent importance as discussed in Appendix E. At the end
of each epoch (line 13), we add a small value to the un-normalized weights of
all data to ensure that every data point will be eventually evaluated, even if
its importance is deemed low by the importance metric.
Algorithm 1 Mini-batch importance sampling for SGD.
1: $\theta\leftarrow$ random parameter initialization
2: $B\leftarrow$ mini-batch size, $N\leftarrow|\Omega|$ $\triangleright$ Dataset size
3: $q,\theta\leftarrow\text{Initialize}(x,y,\Omega,\theta,B)$ $\triangleright$ Algorithm 4
4: until convergence do $\triangleright$ Loop over epochs
5: for $t\leftarrow 1$ to $N/B$ do $\triangleright$ Loop over mini-batches
6: $p\leftarrow q/\text{sum}(q)$ $\triangleright$ Normalize importance to pdf
7: $x,y\leftarrow{B}$ data samples $\\{x_{i},y_{i}\\}_{i=1}^{B}\propto p$
8: ${\mathcal{L}}(x)\leftarrow{\mathcal{L}}(m(x,\theta),y)$
9: $\nabla{\mathcal{L}}(x)\leftarrow$ Backpropagate$({\mathcal{L}}(x))$
10: $\langle\nabla L_{\theta}\rangle\leftarrow(\nabla{\mathcal{L}}(x)\cdot(\nicefrac{{1}}{{p(x)}})^{T})/B$ $\triangleright$ Equation 3
11: $\theta\leftarrow\theta-\eta\,\langle\nabla L_{\theta}\rangle$ $\triangleright$ SGD step
12: $q(x)\leftarrow\gamma\cdot q(x)+(1-\gamma)\cdot\left\|\frac{\partial{\mathcal{L}}(x)}{\partial m(x,\theta)}\right\|$ $\triangleright$ Accumulate importance
13: $q\leftarrow q+\epsilon$ $\triangleright$ Keep all data reachable (end of epoch)
14: return $\theta$
It is important to note that the first epoch is done without importance
sampling, in order to initialize the importance of each sample. This does not
add overhead, as it is equivalent to a classical epoch running over all data
samples. While similar schemes have been proposed in the past (Loshchilov & Hutter, 2015),
they often rely on a multitude of hyperparameters, making their practical
implementation challenging. This has led to the development of alternative
methods like re-sampling (Katharopoulos & Fleuret, 2018; Dong et al., 2021;
Zhang et al., 2023). Tracking importance across batches and epochs minimizes
the computational overhead, further enhancing the efficiency and practicality
of the approach.
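A minimal PyTorch-style sketch of this loop (our reading of Algorithm 1, not the authors' implementation; `model`, a per-sample `loss_fn` with `reduction='none'`, the dataset tensors `X`, `Y`, and the persistent importance vector `q` are assumed):

```python
import torch

def is_sgd_epoch(model, loss_fn, X, Y, q, B=32, lr=1e-2, gamma=0.9, eps=1e-3):
    """One epoch of Algorithm 1: mini-batch SGD with persistent importance q."""
    N = X.shape[0]
    for _ in range(N // B):
        p = q / q.sum()                                   # line 6: normalize to a pmf
        idx = torch.multinomial(p, B, replacement=True)   # line 7: draw B indices ~ p
        out = model(X[idx])
        out.retain_grad()                                 # keep dL/d(output) for line 12
        losses = loss_fn(out, Y[idx])                     # line 8: per-sample losses
        # lines 9-10: backpropagate the 1/(B p)-weighted loss (Equation 3,
        # up to a constant dataset-size factor absorbed into the learning rate)
        (losses / (B * p[idx])).sum().backward()
        with torch.no_grad():
            for w in model.parameters():                  # line 11: SGD step
                if w.grad is not None:
                    w -= lr * w.grad
                    w.grad = None
            # line 12: blend in the per-sample output-gradient norm; undo the
            # 1/(B p) weighting baked into out.grad (duplicates: last write wins)
            g = out.grad * (B * p[idx]).unsqueeze(-1)
            q[idx] = gamma * q[idx] + (1 - gamma) * g.norm(dim=-1)
    q += eps                                              # line 13: keep all data reachable
```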
### Importance function.
In combination with the presented algorithm, we propose an importance function
that is efficient to evaluate. While the gradient $L_{2}$ norm has been shown
to be optimal (Zhao & Zhang, 2015; Needell et al., 2014; Wang et al., 2017;
Alain et al., 2015), calculating it can be computationally expensive as it
requires full backpropagation for every data point. To this end, we compute
the gradient norm only for a subset of the parameters, specifically the output
nodes of the network: $q(x)=\left\|\frac{\partial\mathcal{L}(x)}{\partial
m(x,\theta)}\right\|$. This choice is based on an upper bound of the gradient
norm, using the chain rule and the Cauchy–Schwarz inequality (Katharopoulos &
Fleuret, 2018):
$\displaystyle\left\|\frac{\partial\mathcal{L}(x)}{\partial\theta}\right\|=\left\|\frac{\partial\mathcal{L}(x)}{\partial m(x,\theta)}\cdot\frac{\partial m(x,\theta)}{\partial\theta}\right\|\leq\left\|\frac{\partial\mathcal{L}(x)}{\partial m(x,\theta)}\right\|\cdot\left\|\frac{\partial m(x,\theta)}{\partial\theta}\right\|\leq\underbrace{\left\|\frac{\partial\mathcal{L}(x)}{\partial m(x,\theta)}\right\|}_{q(x)}\cdot\,C\,,$ (4)
where $C$ is a Lipschitz constant bounding $\left\|\frac{\partial
m(x,\theta)}{\partial\theta}\right\|$. That is, our importance function is a
bound on the gradient magnitude based on the output-layer gradient norm.
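Evaluating $q(x)$ only needs the derivative of the loss with respect to the network output, which is much cheaper than a full backpropagation through all parameters. A small sketch of that computation (our illustration; per-sample `loss_fn` assumed):

```python
import torch

def output_layer_importance(model, loss_fn, x, y):
    """Per-sample q(x) = || dL/d(model output) ||, without parameter gradients."""
    out = model(x)                                 # forward pass, shape (B, C)
    losses = loss_fn(out, y)                       # per-sample losses, shape (B,)
    # Each loss_i depends only on out_i, so one call yields all per-sample rows.
    (g,) = torch.autograd.grad(losses.sum(), out)
    return g.norm(dim=-1)                          # shape (B,)
```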
We tested the relationship between four different importance distributions:
uniform, our proposed importance function, the loss function as importance
(Katharopoulos & Fleuret, 2017), and the method of Katharopoulos & Fleuret
(2018) using another gradient-norm bound. The inline figure plots the $L_{2}$
difference between these importance distributions and the ground-truth
gradient-norm distribution across epochs for an MNIST classification task. It
shows that our IS distribution has the smallest difference, i.e., it achieves
high accuracy while requiring only a small part of the gradient.
For some specific tasks, when the output layer has a predictable shape, it is
possible to derive a closed-form definition of the proposed importance metric.
Appendix D derives the closed-form importance for a classification task using
the cross-entropy loss.
Note that any importance heuristic can be used on line 12 of Algorithm 1, such
as the gradient norm (Zhao & Zhang, 2015; Needell et al., 2014; Wang et al.,
2017; Alain et al., 2015), the loss (Loshchilov & Hutter, 2015; Katharopoulos
& Fleuret, 2017; Dong et al., 2021), or more advanced importance
(Katharopoulos & Fleuret, 2018). For efficiency, our importance function
reuses the forward-pass computations from line 8, updating $q$ only for the
current mini-batch samples.
## 5 Multiple importance sampling
The parameter gradient $\nabla L_{\theta}$ is a vector with dimension equal to
the number of model parameters. The individual parameter derivatives vary
uniquely across the data points, and estimation using a single distribution
(Section 4) inevitably requires making a trade-off, e.g., only importance
sampling the overall gradient magnitude. Truly minimizing the estimation error
requires estimating each derivative using a separate importance sampling
distribution tailored to its variation. However, there are two practical
issues with this approach: First, it would necessitate sampling from all of
these distributions, requiring “mini-batches” of size equal at least to the
number of parameters. Second, it would lead to significant computation waste,
since backpropagation computes all parameter derivatives but only one of them
would be used per data sample. To address this issue, we propose using a small
number of distributions, each tailored to the variation of a parameter subset,
and combining _all_ computed derivatives into a low-variance estimator, using
multiple importance sampling theory. As an example, Figure 1d shows three
sampling distributions for a simple classification task, based on the
derivatives of the network’s output nodes, following the boundary of each
class.
### MIS gradient estimator.
Combining multiple sampling distributions into a single robust estimator has
been well studied in the Monte Carlo rendering literature. The best known
method is _multiple importance sampling_ (MIS) (Veach, 1997). In our case of
gradient estimation, the MIS estimator takes the form
$\langle\nabla
L_{\theta}\rangle_{\mathrm{MIS}}=\sum_{j=1}^{J}\sum_{i=1}^{n_{j}}w_{j}(x_{ij})\frac{\nabla{\mathcal{L}}(m(x_{ij},\theta),y_{ij})}{n_{j}p_{j}(x_{ij})},$
(5)
where $J$ is the number of sampling distributions, $n_{j}$ the number of
samples from distribution $j$, and $x_{ij}$ the $i$th sample from the $j$th
distribution. Each sample is modulated by a weight $w_{j}(x_{ij})$; the
estimator is unbiased as long as $\sum_{j=1}^{J}w_{j}(x)=1$ for every data
point $x$ in the dataset.
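For concreteness, a small NumPy sketch of Equation 5 with the common balance-heuristic weights $w_{j}(x)=n_{j}p_{j}(x)/\sum_{k}n_{k}p_{k}(x)$ (our illustration, using scalar stand-in gradients):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
grad = rng.normal(size=N)                  # stand-in per-sample gradients
pdfs = [np.full(N, 1.0 / N),               # p_1: uniform
        np.abs(grad) / np.abs(grad).sum()] # p_2: proportional to |grad|
counts = [16, 16]                          # n_j samples per distribution

est = 0.0
for pj, nj in zip(pdfs, counts):
    idx = rng.choice(N, size=nj, p=pj)
    denom = sum(nk * pk[idx] for pk, nk in zip(pdfs, counts))  # sum_k n_k p_k(x)
    # Balance heuristic: w_j * grad/(n_j p_j) simplifies to grad / denom.
    est += np.sum(grad[idx] / denom)
print(est, grad.sum())                     # unbiased estimate vs. true total
```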
### Optimal weighting.
Various MIS weighting functions $w_{j}$ have been proposed in literature, the
most universally used one being the balance heuristic (Veach, 1997). In this
work we use the recently derived optimal weighting scheme (Kondapaneni et al.,
2019) which minimizes the estimation variance for a given set of sampling
distributions $p_{j}$:
$\displaystyle w_{j}(x)=\alpha_{j}\frac{p_{j}(x)}{\nabla{\mathcal{L}}(m(x,\theta),y)}+\frac{n_{j}p_{j}(x)}{\sum_{k=1}^{J}n_{k}p_{k}(x)}\Bigg{(}1-\frac{\sum_{k=1}^{J}\alpha_{k}p_{k}(x)}{\nabla{\mathcal{L}}(m(x,\theta),y)}\Bigg{)}.$ (6)
Here, $\boldsymbol{\alpha}=[\alpha_{1},\ldots,\alpha_{J}]$ is the solution to
the linear system
$\small\boldsymbol{A}\boldsymbol{\alpha}=\boldsymbol{b}\text{, with }\begin{dcases}a_{j,k}=\int_{{\Omega}}\frac{p_{j}(x)p_{k}(x)}{\sum_{i}^{J}n_{i}p_{i}(x)}\,\mathrm{d}(x,y),\\ b_{j}=\int_{{\Omega}}\frac{p_{j}(x)\nabla{\mathcal{L}}(m(x,\theta),y)}{\sum_{i}^{J}n_{i}p_{i}(x)}\,\mathrm{d}(x,y),\end{dcases}$ (7)
where $a_{j,k}$ and $b_{j}$ are the elements of the matrix
$\boldsymbol{A}\in\mathbb{R}^{J\times J}$ and vector
$\boldsymbol{b}\in\mathbb{R}^{J}$ respectively.
Instead of explicitly computing the optimal weights in Equation 6 using
Equation 7 and plugging them into the MIS estimator (5), we can use a shortcut
evaluation that yields the same result (Kondapaneni et al., 2019):
$\langle\nabla L_{\theta}\rangle_{\mathrm{OMIS}}=\sum_{j=1}^{J}\alpha_{j}.$
(8)
In Appendix B we provide an overview of MIS and the aforementioned weighting
schemes. Importantly for our case, the widely adopted balance heuristic does
not bring practical advantage over single-distribution importance sampling
(Section 4) as it is equivalent to sampling from a mixture of the given
distributions; we can easily sample from this mixture by explicitly averaging
the distributions into a single one. In contrast, the optimal weights are
different for each gradient dimension as they depend on the gradient value.
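The following NumPy sketch (again our own illustration, for a single scalar gradient dimension) estimates the linear system of Equation 7 from the drawn samples and applies the shortcut of Equation 8:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
grad = rng.normal(size=N)                    # stand-in per-sample gradients
pdfs = [np.full(N, 1.0 / N),
        np.abs(grad) / np.abs(grad).sum()]   # two sampling distributions p_j
counts = [16, 16]                            # n_j samples per distribution
J = len(pdfs)

A, b = np.zeros((J, J)), np.zeros(J)
for pj, nj in zip(pdfs, counts):
    idx = rng.choice(N, size=nj, p=pj)                     # n_j samples ~ p_j
    S = sum(nk * pk[idx] for pk, nk in zip(pdfs, counts))  # S(x) = sum_k n_k p_k(x)
    W = np.stack([pk[idx] / S for pk in pdfs])             # rows W_j(x) = p_j(x)/S(x)
    A += W @ W.T                                           # <A>_jk += sum_i W_j W_k
    b += W @ (grad[idx] / S)                               # <b>_j  += sum_i grad * W_j / S

alpha = np.linalg.solve(A, b)   # sample estimate of Equation 7
print(alpha.sum(), grad.sum())  # Equation 8: OMIS estimate vs. true total gradient
```

In Algorithm 2 below, the same accumulation is additionally blended across mini-batches with a momentum factor $\beta$ to stabilize the inversion of $\langle\boldsymbol{A}\rangle$.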
Algorithm 2 Optimal multiple importance sampling SGD.
1: $\theta\leftarrow$ random parameter initialization
2: $B\leftarrow$ mini-batch size, $J\leftarrow$ number of pdfs
3: $N\leftarrow|\Omega|$ $\triangleright$ Dataset size
4: $n_{j}\leftarrow$ sample count per technique, for $j\in\\{1,..,J\\}$
5: $\boldsymbol{q},\theta\leftarrow\text{InitializeMIS}(x,y,\Omega,\theta,B)$ $\triangleright$ Algorithm 5
6: $\langle\boldsymbol{A}\rangle\leftarrow 0^{J\times J}$, $\langle\boldsymbol{b}\rangle\leftarrow 0^{J}$ $\triangleright$ OMIS linear system
7: until convergence do $\triangleright$ Loop over epochs
8: for $t\leftarrow 1$ to $N/B$ do $\triangleright$ Loop over mini-batches
9: $\langle\boldsymbol{A}\rangle\leftarrow\beta\langle\boldsymbol{A}\rangle$, $\langle\boldsymbol{b}\rangle\leftarrow\beta\langle\boldsymbol{b}\rangle$
10: for $j\leftarrow 1$ to $J$ do $\triangleright$ Loop over distributions
11: $p_{j}\leftarrow q_{j}/\text{sum}(q_{j})$
12: $x,y\leftarrow n_{j}$ data samples $\\{x_{i},y_{i}\\}_{i=1}^{n_{j}}\propto p_{j}$
13: ${\mathcal{L}}(x)\leftarrow{\mathcal{L}}(m(x,\theta),y)$
14: $\nabla{\mathcal{L}}(x)\leftarrow$ Backpropagate$({\mathcal{L}}(x))$
15: $S(x)\leftarrow\sum_{k=1}^{J}n_{k}p_{k}(x)$
16: $\boldsymbol{W}\leftarrow\nicefrac{{n_{j}p_{j}(x)}}{{S(x)}}$ $\triangleright$ Momentum estimation (lines 17-18)
17: $\langle\boldsymbol{A}\rangle\leftarrow\langle\boldsymbol{A}\rangle+(1-\beta)\sum_{i=1}^{n_{j}}\boldsymbol{W}_{i}\boldsymbol{W}_{i}^{T}$
18: $\langle\boldsymbol{b}\rangle\leftarrow\langle\boldsymbol{b}\rangle+(1-\beta)\sum_{i=1}^{n_{j}}\nabla{\mathcal{L}}(x_{i})\boldsymbol{W}_{i}/S(x_{i})$
19: $\boldsymbol{q}(x)\leftarrow\gamma\boldsymbol{q}(x)+(1-\gamma)\frac{\partial\mathcal{L}(x)}{\partial m(x,\theta)}$
20: $\langle\boldsymbol{\alpha}\rangle\leftarrow\langle\boldsymbol{A}\rangle^{-1}\langle\boldsymbol{b}\rangle$
21: $\langle\nabla L_{\theta}\rangle_{\mathrm{OMIS}}\leftarrow\sum_{j=1}^{J}\langle\alpha_{j}\rangle$ $\triangleright$ Equation 8
22: $\theta\leftarrow\theta-\eta\,\langle\nabla L_{\theta}\rangle_{\mathrm{OMIS}}$ $\triangleright$ SGD step
23: return $\theta$
### Practical algorithm.
Implementing the optimal-MIS estimator (8) amounts to drawing $n_{j}$ samples
from each distribution, computing $\boldsymbol{\alpha}$ for each dimension of
the gradient and summing its elements. The integrals in $\boldsymbol{A}$ and
$\boldsymbol{b}$ (sums in the discrete-dataset case) can be estimated as
$\langle\boldsymbol{A}\rangle$ and $\langle\boldsymbol{b}\rangle$ from the
drawn samples, yielding the estimate
$\langle\boldsymbol{\alpha}\rangle=\langle\boldsymbol{A}\rangle^{-1}\langle\boldsymbol{b}\rangle$.
Algorithm 2 shows a complete gradient-descent algorithm. The main differences
with Algorithm 1 are the use of multiple importance distributions
$\boldsymbol{q}=\\{q_{j}\\}_{j=1}^{J}$ (line 5) and the linear system used to
compute the OMIS estimator (line 6). This linear system is updated (lines
15-18) using the mini-batch samples and solved to obtain the gradient
estimation (line 21). Since the matrix $\langle\boldsymbol{A}\rangle$ is
independent of the gradient estimation (see Equation 7), its inversion can be
shared across all parameter estimates.
Figure 2: Convergence comparison for polynomial regression of order 6 using
different methods. Gradient descent with the exact gradient and classical SGD
serve as baselines. For our methods, we compare importance sampling and OMIS
using $n=2$ or $4$ importance distributions; balance-heuristic MIS is also
shown. Our method using OMIS achieves the same convergence as the exact
gradient.
### Momentum-based linear-system estimation.
If the matrix estimate $\langle\boldsymbol{A}\rangle$ is inaccurate, its
inversion can be unstable and yield a poor gradient estimate. The simplest way
to tackle this problem is to use a large number of samples per distribution,
which produces accurate estimates of both $\boldsymbol{A}$ and
$\boldsymbol{b}$ and thus a stable solution to the linear system. However,
this approach is computationally expensive. Instead, we keep the sample counts
low and reuse the estimates from previous mini-batches via momentum-based
accumulation, shown in lines 17–18, where $\beta$ is the parameter controlling
the momentum; we use $\beta=0.7$. This accumulation provides stability, yields
an estimate of the momentum gradient (Rumelhart et al., 1986), and allows us
to use 1–4 samples per distribution in a mini-batch.
### Importance functions.
To define our importance distributions, we expand on the approach from Section
4. Instead of taking the norm of the entire output layer of the model, we take
the different gradients separately as
$\boldsymbol{q}(x)=\frac{\partial\mathcal{L}(x)}{\partial m(x,\theta)}$ (see
Figure 1d). Similarly to Algorithm 1, we apply momentum-based accumulation of
the per-data importance (line 19 in Algorithm 2). If the output layer has more
nodes than the desired number $J$ of distributions, we select a subset of the
nodes. Many other ways exist to derive the distributions, e.g., clustering the
nodes into $J$ groups and taking the norm of each; we leave such exploration
for future work.
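A sketch of this per-node construction (our illustration, reusing the per-sample output gradients from the snippet in Section 4):

```python
import torch

def per_node_importances(model, loss_fn, x, y, J):
    """Build J un-normalized importance vectors, one per selected output node."""
    out = model(x)                                 # shape (B, C)
    losses = loss_fn(out, y)                       # per-sample losses, shape (B,)
    (g,) = torch.autograd.grad(losses.sum(), out)  # dL/d(output), shape (B, C)
    nodes = torch.arange(min(J, g.shape[1]))       # take a subset of output nodes
    return g[:, nodes].abs().T                     # shape (J, B): one q_j per node
```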
## 6 Experiments
### Implementation details.
We evaluate our importance sampling (IS) and optimal multiple importance
sampling (OMIS) methods on a set of classification and regression tasks with
different data modalities (images, point clouds). We compare them to classical
SGD (which draws mini-batch samples uniformly without replacement), DLIS
(Katharopoulos & Fleuret, 2018), and LOW (Santiago et al., 2021). DLIS uses a
resampling scheme that samples an initial, larger mini-batch uniformly and
then selects a fraction of them for backpropagation and a gradient step. This
resampling is based on an importance sampling metric computed by running a
forward pass for each initial sample. LOW applies adaptive weighting to
uniformly selected mini-batch samples to give importance to data with high
loss. All reported metrics are computed on data unseen during training, with
the exception of the regression tasks.
All experiments are conducted on a single NVIDIA Tesla A40 graphics card.
Details about the optimization setup of each experiment can be found in
Appendix A.
### Convex problem.
We performed a basic convergence analysis of IS and OMIS on a convex
polynomial-regression problem. Figure 2 compares classical SGD, our IS, and
three MIS techniques: balance heuristic (Veach, 1997) and our OMIS using two
and four importance distributions. The exact gradient serves as a reference
point for optimal convergence. Balance-heuristic MIS exhibits similar
convergence to IS. This can be attributed to the weights depending solely on
the relative importance distributions, disregarding differences in individual
parameter derivatives. This underscores the unsuitability of the balance
heuristic as a weighting method for vector-valued estimation. Both our OMIS
variants achieve convergence similar to that of the exact gradient. The four-
distribution variant achieves the same quality as the exact gradient using
only 32 data samples per mini-batch. This shows the potential of OMIS to
achieve low error in gradient estimation even at low mini-batch sizes.
Figure 3: Classification-error convergence on MNIST for various methods.
Katharopoulos & Fleuret (2018) (DLIS) is a resampling SGD approach. In
comparison, our methods use the presented algorithm without resampling. While
DLIS performs similarly to our IS at equal epochs, its overhead makes our IS
and OMIS noticeably better at equal time.
### Classification.
In Figure 3, we compare our algorithms to the DLIS resampling algorithm of
Katharopoulos & Fleuret (2018) on MNIST classification. Our IS performs
slightly better than DLIS, and our OMIS does best. The differences between our
methods and the rest are more pronounced when comparing equal-time
performance. DLIS has a higher computational cost as it involves running a
forward pass on a large mini-batch to compute resampling probabilities. Our
OMIS requires access to the gradient of each mini-batch sample; obtaining
these gradients in our current implementation is inefficient due to technical
limitations in the optimization framework we use (PyTorch). Nevertheless, the
method manages to make up for this overhead with a higher-quality gradient
estimate. In Figure 3 we compare classification error; loss-convergence plots
are shown in Appendix F (Figure 8).
Figure 4: On the CIFAR-100 classification dataset, instead of comparing
against the DLIS resampling algorithm, we use the DLIS importance metric in
our Algorithm 1. We display zoom-ins of the end of the curves to highlight the
differences. At equal epochs (left), our methods (Our IS & Our AS) show
improvements compared to LOW (Santiago et al., 2021) and DLIS weights. At
equal time (right), LOW and the DLIS weights take longer to converge. Overall,
our approach shows faster convergence with a lower importance-computation
cost. Figure 5: Comparisons on CIFAR-10 using a Vision Transformer (ViT)
(Dosovitskiy et al., 2020). The results show our importance sampling scheme
(Our IS) can improve over classical SGD, LOW (Santiago et al., 2021) and DLIS
(Katharopoulos & Fleuret, 2018) on a modern transformer architecture.
In Figure 4, we compare our IS against using the DLIS importance function in
Algorithm 1 and LOW (Santiago et al., 2021) on CIFAR-100 classification. At
equal number of epochs, the difference between the methods is small (see
close-up view). Our IS achieves similar classification accuracy as LOW and
outperforms the DLIS variant. At equal time the difference is more pronounced,
as our method has a lower computational cost. This experiment shows that our
importance function achieves better performance than that of DLIS within the
same optimization algorithm.
Figure 5 shows a similar experiment on CIFAR-10 using a vision transformer
(Dosovitskiy et al., 2020). Our IS method achieves consistent improvement over
the state of the art. The worse convergence of (original, resampling-based)
DLIS can be attributed to its resampling tending to exclude some training data
with very low importance, which can cause overfitting.
Figure 6: Comparison of our two methods (Our IS, Our OMIS) on point-cloud classification using the PointNet (Qi et al., 2017) architecture. Our OMIS achieves lower classification error at equal epochs, though it introduces computational overhead, as shown in the equal-time comparisons. At equal time, our method using importance sampling achieves the best performance.
[Figure 7 images: Reference | Uniform | DLIS | Our IS | Our OMIS]
Figure 7: Comparison at equal steps for 2D image regression. The left side
shows the convergence plot while the right displays the regression result and
a close-up view. Our method using MIS achieves the lowest error on this
problem while IS and DLIS perform similarly. In the images it is visible that
our OMIS recovers the finest details of the fur and whiskers.
Figure 6 shows point-cloud classification, where our IS is comparable to
classical SGD and our OMIS outperforms other methods in terms of
classification error at equal epochs. Equal-time comparison demonstrates that
our IS is as efficient as SGD in complex cases where importance sampling
does not improve convergence. DLIS and our OMIS both suffer from computational
overhead.
We also perform an ablation study for linear-system momentum in Algorithm 2.
We apply the same momentum to the gradient for classical SGD, DLIS and our IS.
Appendix F (Figure 9) shows this comparison. Our OMIS still outperforms other
methods for this task at equal steps.
### Regression.
Figure 7 shows results on image regression, comparing classical SGD, DLIS, and
our IS and OMIS. Classical SGD yields a blurry image, as seen in the zoom-ins.
DLIS and our IS achieve similar results, with increased whisker
sharpness but still blurry fur, though ours has slightly lower loss and is
computationally faster, as discussed above. Our OMIS employs three sampling
distributions based on the network’s outputs which represent the red, green
and blue image channels. This method achieves the lowest error and highest
image fidelity, as seen in the zoom-in.
## 7 Limitations and future work
We have showcased the effectiveness of importance sampling and optimal
multiple importance sampling (OMIS) in machine-learning optimization, leading
to a reduction in gradient-estimation error. Our current OMIS implementation
incurs some overhead as it requires access to individual mini-batch sample
gradients. Modern optimization frameworks can efficiently compute those
gradients in parallel but only return their average. This is the main
computational bottleneck in the method. The overhead of the linear system
computation is negligible; we have tested using up to 10 distributions.
Our current OMIS implementation is limited to sequential models; hence its
absence from our ViT experiment in Figure 5. However, there is no inherent
limitation that would prevent its use with more complex architectures. We
anticipate that similar improvements could be achieved, but defer the
exploration of this extension to future work.
In all our experiments we allocate the same sampling budget to each
distribution. Non-uniform sample distribution could potentially further reduce
estimation variance, especially if it can be dynamically adjusted during the
optimization process.
Recent work from Santiago et al. (2021) has explored a variant of importance
sampling that forgoes sample-contribution normalization, i.e., the division by
the probability $p(x)$ in Equation 3 (and on line 10 of Algorithm 1). This
heuristic approach lacks proof of convergence but can achieve practical
improvement over importance sampling in some cases. We include such a variant
of our IS method in Appendix F.
## 8 Conclusion
This work proposes a novel approach to improve gradient-descent optimization
through efficient data importance sampling. We present a method that
incorporates a gradient-based importance metric which evolves during training.
It adds minimal computational overhead while effectively exploiting the
gradient of the network output. Furthermore, we introduce the use of (optimal)
multiple importance sampling for vector-valued gradient estimation. Empirical
evaluation on typical machine learning tasks demonstrates the tangible
benefits of combining several importance distributions in achieving faster
convergence.
## Ethics and impact
This paper presents work that aims to advance the field of Machine Learning.
There are many potential societal consequences of our work, none of which we
feel must be specifically highlighted here.
## References
* Alain et al. (2015) Alain, G., Lamb, A., Sankar, C., Courville, A., and Bengio, Y. Variance reduction in sgd by distributed importance sampling. _arXiv preprint arXiv:1511.06481_ , 2015.
* Alfarra et al. (2021) Alfarra, M., Hanzely, S., Albasyoni, A., Ghanem, B., and Richtarik, P. Adaptive learning of the optimal batch size of sgd, 2021.
* Balles et al. (2017) Balles, L., Romero, J., and Hennig, P. Coupling adaptive batch sizes with learning rates. In _Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI)_ , pp. ID 141, August 2017. URL http://auai.org/uai2017/proceedings/papers/141.pdf.
* Bordes et al. (2005) Bordes, A., Ertekin, S., Weston, J., and Bottou, L. Fast kernel classifiers with online and active learning. _Journal of Machine Learning Research_ , 6(54):1579–1619, 2005. URL http://jmlr.org/papers/v6/bordes05a.html.
* Deng (2012) Deng, L. The mnist database of handwritten digit images for machine learning research [best of the web]. _IEEE signal processing magazine_ , 29(6):141–142, 2012.
* Dong et al. (2021) Dong, C., Jin, X., Gao, W., Wang, Y., Zhang, H., Wu, X., Yang, J., and Liu, X. One backward from ten forward, subsampling for large-scale deep learning. _arXiv preprint arXiv:2104.13114_ , 2021.
* Dosovitskiy et al. (2020) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_ , 2020.
* Faghri et al. (2020) Faghri, F., Duvenaud, D., Fleet, D. J., and Ba, J. A study of gradient variance in deep learning. _arXiv preprint arXiv:2007.04532_ , 2020.
* Grittmann et al. (2019) Grittmann, P., Georgiev, I., Slusallek, P., and Křivánek, J. Variance-aware multiple importance sampling. _ACM Trans. Graph._ , 38(6), nov 2019. ISSN 0730-0301. doi: 10.1145/3355089.3356515. URL https://doi.org/10.1145/3355089.3356515.
* He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 770–778, 2016.
* Kahn (1950) Kahn, H. Random sampling (monte carlo) techniques in neutron attenuation problems–i. _Nucleonics_ , 6(5):27, passim, 1950.
* Kahn & Marshall (1953) Kahn, H. and Marshall, A. W. Methods of reducing sample size in monte carlo computations. _Journal of the Operations Research Society of America_ , 1(5):263–278, 1953.
* Katharopoulos & Fleuret (2017) Katharopoulos, A. and Fleuret, F. Biased importance sampling for deep neural network training. _ArXiv_ , abs/1706.00043, 2017. URL https://api.semanticscholar.org/CorpusID:38367260.
* Katharopoulos & Fleuret (2018) Katharopoulos, A. and Fleuret, F. Not all samples are created equal: Deep learning with importance sampling. In Dy, J. and Krause, A. (eds.), _Proceedings of the 35th International Conference on Machine Learning_ , volume 80 of _Proceedings of Machine Learning Research_ , pp. 2525–2534. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr.press/v80/katharopoulos18a.html.
* Kingma & Ba (2014) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014.
* Kondapaneni et al. (2019) Kondapaneni, I., Vévoda, P., Grittmann, P., Skřivan, T., Slusallek, P., and Křivánek, J. Optimal multiple importance sampling. _ACM Transactions on Graphics (TOG)_ , 38(4):37, 2019.
* Krizhevsky et al. (2009) Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. Technical report, Toronto, ON, Canada, 2009.
* Langley (2000) Langley, P. Crafting papers on machine learning. In Langley, P. (ed.), _Proceedings of the 17th International Conference on Machine Learning (ICML 2000)_ , pp. 1207–1216, Stanford, CA, 2000. Morgan Kaufmann.
* Loshchilov & Hutter (2015) Loshchilov, I. and Hutter, F. Online batch selection for faster training of neural networks. _arXiv preprint arXiv:1511.06343_ , 2015.
* Needell et al. (2014) Needell, D., Ward, R., and Srebro, N. Stochastic gradient descent, weighted sampling, and the randomized kaczmarz algorithm. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., and Weinberger, K. (eds.), _Advances in Neural Information Processing Systems_ , volume 27. Curran Associates, Inc., 2014. URL https://proceedings.neurips.cc/paper_files/paper/2014/file/f29c21d4897f78948b91f03172341b7b-Paper.pdf.
* Owen & Zhou (2000) Owen, A. and Zhou, Y. Safe and effective importance sampling. _Journal of the American Statistical Association_ , 95(449):135–143, 2000. doi: 10.1080/01621459.2000.10473909. URL https://www.tandfonline.com/doi/abs/10.1080/01621459.2000.10473909.
* Qi et al. (2017) Qi, C. R., Su, H., Mo, K., and Guibas, L. J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 652–660, 2017.
* Ren et al. (2019) Ren, H., Zhao, S., and Ermon, S. Adaptive antithetic sampling for variance reduction. In _International Conference on Machine Learning_ , pp. 5420–5428. PMLR, 2019.
* Rumelhart et al. (1986) Rumelhart, D. E., Hinton, G. E., and Williams, R. J. Learning representations by back-propagating errors. _nature_ , 323(6088):533–536, 1986.
* Santiago et al. (2021) Santiago, C., Barata, C., Sasdelli, M., Carneiro, G., and Nascimento, J. C. Low: Training deep neural networks by learning optimal sample weights. _Pattern Recognition_ , 110:107585, 2021.
* Schaul et al. (2015) Schaul, T., Quan, J., Antonoglou, I., and Silver, D. Prioritized experience replay. _arXiv preprint arXiv:1511.05952_ , 2015.
* Sitzmann et al. (2020) Sitzmann, V., Martel, J., Bergman, A., Lindell, D., and Wetzstein, G. Implicit neural representations with periodic activation functions. _Advances in neural information processing systems_ , 33:7462–7473, 2020.
* Veach (1997) Veach, E. _Robust Monte Carlo methods for light transport simulation_ , volume 1610. Stanford University PhD thesis, 1997.
* Wang et al. (2017) Wang, L., Yang, Y., Min, R., and Chakradhar, S. Accelerating deep neural network training with inconsistent stochastic gradient descent. _Neural Networks_ , 93:219–229, 2017.
* Wu et al. (2015) Wu, Z., Song, S., Khosla, A., Yu, F., Zhang, L., Tang, X., and Xiao, J. 3d shapenets: A deep representation for volumetric shapes. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 1912–1920, 2015.
* Zhang et al. (2019) Zhang, C., Öztireli, C., Mandt, S., and Salvi, G. Active mini-batch sampling using repulsive point processes. In _Proceedings of the AAAI conference on Artificial Intelligence_ , volume 33, pp. 5741–5748, 2019.
* Zhang et al. (2023) Zhang, M., Dong, C., Fu, J., Zhou, T., Liang, J., Liu, J., Liu, B., Momma, M., Wang, B., Gao, Y., et al. Adaselection: Accelerating deep learning training through data subsampling. _arXiv preprint arXiv:2306.10728_ , 2023.
* Zhao & Zhang (2015) Zhao, P. and Zhang, T. Stochastic optimization with importance sampling for regularized loss minimization. In Bach, F. and Blei, D. (eds.), _Proceedings of the 32nd International Conference on Machine Learning_ , volume 37 of _Proceedings of Machine Learning Research_ , pp. 1–9, Lille, France, 07–09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/zhaoa15.html.
## Appendix A Optimization details
### Classification.
The classification tasks include image classification (MNIST (Deng, 2012),
CIFAR-10/100 (Krizhevsky et al., 2009) and point-cloud (ModelNet40 (Wu et al.,
2015)) classification.
The MNIST database contains 60,000 training images and 10,000 testing images.
We train a 3-layer fully-connected network (MLP) for MNIST over 50 epochs with
an Adam optimizer (Kingma & Ba, 2014). CIFAR-10, introduced by Krizhevsky et
al. (2009), is a dataset that consists of 60,000 color images of size 32x32.
These images belong to 10 different object classes, each class having 6,000
images. On the other hand, CIFAR-100 (Krizhevsky et al., 2009) contains 100
classes with 600 images each. For each class, there are 500 training images
and 100 testing images. In our experiments, we train the ResNet-18 network (He
et al., 2016) on both datasets. We apply random horizontal flip and random
crops to augment the data during training. ModelNet40 contains 9,843 point
clouds for training and 2,468 for testing. Each point cloud has 1,024 points.
We train a PointNet (Qi et al., 2017) with 3 shared MLP layers and 2 fully-
connected layers for 300 epochs on point-cloud classification. We use the Adam
optimizer (Kingma & Ba, 2014) with batch size 64, weight decay 0.001, and an
initial learning rate of 0.00002, divided by 2 after epochs 100 and 200.
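For concreteness, this schedule could be expressed with standard PyTorch utilities; the following is only an illustrative sketch, with a placeholder model standing in for the actual PointNet:

```python
import torch

# Placeholder model standing in for the PointNet classifier
# (1,024 points x 3 coordinates in, 40 ModelNet40 classes out).
model = torch.nn.Linear(1024 * 3, 40)

# Adam with weight decay 0.001 and initial learning rate 0.00002,
# halved after epochs 100 and 200 (batch size 64 is set in the data loader).
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5, weight_decay=1e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[100, 200], gamma=0.5)
```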
### Regression.
Polynomial regression consists of optimizing the coefficients of a 1D
polynomial of a given order to fit randomly drawn data from a reference
polynomial of the same order. The reference data are generated on the interval
$[-2;2]$. Optimization is done using an Adam optimizer (Kingma & Ba, 2014)
with a mini-batch size of 32 elements.
The image regression task consists of learning the mapping between a 2D
coordinate input (pixel coordinate) and the 3-color output of the image for
this pixel. We use a network with 5 fully-connected layers associated with
positional encodings using SIREN activations (Sitzmann et al., 2020). The
training is done over 500 epochs using an Adam (Kingma & Ba, 2014) optimizer
and each mini-batch is composed of $256$ pixels for a $512^{2}$ reference
image.
## Appendix B Multiple importance sampling in brief
### Importance sampling.
An importance sampling Monte Carlo estimator $\langle F\rangle_{\mathrm{IS}}$
of a function $f$ is defined as:
$\langle
F\rangle_{\mathrm{IS}}=\sum_{i=1}^{n}\frac{f(x_{i})}{np(x_{i})},\qquad
x_{i}\propto p(x).$ (9)
where $x_{i}$ is the $i^{th}$ data sample, drawn from the probability
distribution function $p(x)$.
The effectiveness of this estimator depends on the relation between the
functions $f(x)$ and $p(x)$. The variance of such an estimator is:
$\mathrm{Var}{[\langle
F\rangle_{\mathrm{IS}}]}=\frac{1}{n}\mathrm{Var}{[\nicefrac{{f}}{{p}}]}.$ (10)
Reducing variance in the estimation depends on the proportionality between the
function $f$ and the probability density $p$.
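For illustration, a minimal NumPy sketch of the estimator (9) on a toy integrand (the example and names are ours, not part of the training pipeline):

```python
import numpy as np

def is_estimate(f, sample, pdf, n=10_000, seed=0):
    """Importance-sampling estimate of integral f(x) dx (Eqs. 9-10)."""
    rng = np.random.default_rng(seed)
    x = sample(rng, n)                # x_i ~ p
    return np.mean(f(x) / pdf(x))     # (1/n) sum_i f(x_i) / p(x_i)

# Toy check: integral of x * exp(-x) on [0, inf) equals 1. Sampling from
# p(x) = exp(-x) makes f/p = x, so the estimator variance is that of Exp(1).
est = is_estimate(f=lambda x: x * np.exp(-x),
                  sample=lambda rng, n: rng.exponential(1.0, size=n),
                  pdf=lambda x: np.exp(-x))
print(est)  # ~ 1.0
```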
When dealing with multivariate functions, finding a probability density
proportional to every parameter is often impractical. A trade-off is required:
a single probability distribution that maximizes proportionality with all the
parameters of the function simultaneously. Several studies, such as (Zhao &
Zhang, 2015; Needell et al., 2014; Wang et al., 2017; Alain et al., 2015), have
shown that the optimal sampling distribution is proportional to the $L_{2}$
norm of the function $f$.
### Multiple importance sampling.
Multiple Importance Sampling (MIS) is a technique that combines multiple
sampling strategies with associated weightings, unlike importance sampling
which relies on a single strategy. This approach allows for a more versatile
gradient estimation. The MIS Monte Carlo estimator, denoted as $\langle
F\rangle_{\mathrm{MIS}}$, is calculated by summing over all samples drawn
independently for each strategy, and then using a weighted estimator. The
equation for $\langle F\rangle_{\mathrm{MIS}}$ is given by:
$\langle
F\rangle_{\mathrm{MIS}}=\sum_{j=1}^{J}\sum_{i=1}^{n_{j}}w_{j}(x_{ij})\frac{f(x_{ij})}{n_{j}p_{j}(x_{ij})}$
(11)
Here, $x_{ij}$ represents the $i^{th}$ sample from the $j^{th}$ technique,
$w_{j}(x)$ is a weighting function such that $f(x)\neq
0\Rightarrow\sum^{J}_{j=1}w_{j}(x)=1$, and $p_{j}(x)=0\Rightarrow w_{j}(x)=0$.
$J$ is the number of sampling techniques, and $n_{j}$ is the number of samples
generated by the $j^{th}$ technique. The variance of a Monte Carlo estimator
using MIS, denoted as $\mathrm{Var}[\langle F\rangle_{\mathrm{MIS}}]$, can be
expressed as:
$\mathrm{Var}[\langle
F\rangle_{\mathrm{MIS}}]=\sum_{j=1}^{J}\int_{D}\frac{w_{j}(x)^{2}f(x)^{2}}{n_{j}p_{j}(x)}dx-\sum_{j=1}^{J}\frac{1}{n_{j}}\langle
w_{j},f\rangle^{2}$ (12)
The balance heuristic (Veach, 1997) is the most commonly used MIS heuristic.
It sets the weight of the samples from each technique according to the
following equation:
$w_{j}(x)=\frac{n_{j}p_{j}(x)}{\sum_{k=1}^{J}n_{k}p_{k}(x)}$ (13)
This weighting strategy effectively mitigates the impact of events with low
probability when samples are drawn from a low-probability distribution. It
prevents a large increase in the contribution of such events in the Monte
Carlo estimator (11) where the function value would be divided by a very low
value. The balance heuristic compensates for this and avoids extreme cases.
Overall, this weighting strategy increases the robustness of the importance
sampling estimator, but it is limited by its independence from the function
value.
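A short sketch of the MIS estimator (11) with balance-heuristic weights (13); note that with the balance heuristic the summand $w_{j}f/(n_{j}p_{j})$ simplifies to $f/\sum_{k}n_{k}p_{k}$ (function signatures are illustrative):

```python
import numpy as np

def mis_balance(f, techniques, rng):
    """MIS estimator (11) with balance-heuristic weights (13).

    techniques: list of (sample_fn, pdf_fn, n_j); every pdf must be
    evaluable at samples drawn from any technique.
    """
    total = 0.0
    for sample_j, _, n_j in techniques:
        x = sample_j(rng, n_j)                                   # x_{ij} ~ p_j
        denom = sum(n_k * pdf_k(x) for _, pdf_k, n_k in techniques)
        # With w_j = n_j p_j / denom, the term w_j f / (n_j p_j) = f / denom.
        total += np.sum(f(x) / denom)
    return total
```

Setting $J=1$ in this sketch recovers the plain importance sampling estimator (9).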
### Optimal weighting.
Following the discussion in Section 5, it can also be deduced from Equations
11 and 6 that $\langle F\rangle_{\mathrm{OMIS}}=\sum_{j=1}^{J}\alpha_{j}$.
Given a set of probability distribution functions $p_{1}$, …, $p_{J}$, we can
formulate the optimal MIS solver as Algorithm 3. $\boldsymbol{W}_{ij}$
represents the vector containing the balance weights (13) w.r.t. the $J$ sampling
techniques and the normalization factor
$S(x_{ij})=\sum_{k=1}^{J}n_{k}p_{k}(x_{ij})$.
Algorithm 3 Optimal multiple importance sampling solver.
1:$\langle\boldsymbol{A}\rangle\leftarrow 0^{J\times
J},\langle\boldsymbol{b}\rangle\leftarrow 0^{J}$
2:for $t\leftarrow 1$ to $T$ do
3: for $j\leftarrow 1$ to $J$ do
4: $\\{x_{ij}\\}_{i=1}^{n_{j}}\leftarrow$ draw $n_{j}$ samples from technique
$p_{j}$
5:
$\langle\boldsymbol{A}\rangle\leftarrow\langle\boldsymbol{A}\rangle+\sum_{j=1}^{J}\sum_{i=1}^{n_{j}}\boldsymbol{W}_{ij}\boldsymbol{W}_{ij}^{T}$
6:
$\langle\boldsymbol{b}\rangle\leftarrow\langle\boldsymbol{b}\rangle+\sum_{j=1}^{J}\sum_{i=1}^{n_{j}}f(x_{ij})\boldsymbol{W}_{ij}/S(x_{ij})$
7:$\langle\boldsymbol{\alpha}\rangle\leftarrow\text{solve linear system
}\langle\boldsymbol{A}\rangle\langle\boldsymbol{\alpha}\rangle=\langle\boldsymbol{b}\rangle$
8:return $\sum_{j=1}^{J}\langle\boldsymbol{\alpha_{j}}\rangle$
The algorithm proceeds through three key stages. The first stage involves
initializing the linear system defined in Equation 7 (line 1). The second
stage iteratively updates the system for each drawn data sample (lines 5-6).
Upon completion of this process, the matrix $\boldsymbol{A}$ and vector
$\boldsymbol{b}$ provide Monte Carlo approximations of the quantities
specified in Equation 7. The third and final stage involves solving the linear
system to obtain the vector $\boldsymbol{\alpha}$ (line 7). The estimated
value of $\langle F\rangle_{\mathrm{MIS}}^{o}$ is then returned.
It can be noted that the linear system size scales with the number of sampling
techniques. More importantly, each sampling technique must be sampled in order
to build a solvable linear system. The number of samples per technique does not
have to be the same, but it must be fixed at the start of the algorithm. Also,
the presented algorithm works for a scalar-valued function. In the case of a
multivariate function, multiple contribution vectors $\boldsymbol{b}$ need to
be constructed (one per parameter) and the linear system needs to be solved for
each.
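A NumPy transcription of Algorithm 3 for a scalar-valued $f$ may look as follows (a sketch under our own conventions; in practice $\langle\boldsymbol{A}\rangle$ can be ill-conditioned and may need regularization before solving):

```python
import numpy as np

def omis_solver(f, techniques, T, rng):
    """Sketch of Algorithm 3 (optimal MIS solver) for scalar-valued f.

    techniques: list of (sample_fn, pdf_fn, n_j) triples.
    """
    J = len(techniques)
    ns = np.array([n_j for _, _, n_j in techniques], dtype=float)
    A = np.zeros((J, J))
    b = np.zeros(J)
    for _ in range(T):
        for sample_j, _, n_j in techniques:
            x = sample_j(rng, n_j)
            P = np.stack([pdf(x) for _, pdf, _ in techniques])  # (J, n_j)
            S = ns @ P                       # S(x) = sum_k n_k p_k(x)
            W = ns[:, None] * P / S          # balance weights (13), one column per sample
            A += W @ W.T                     # <A> update (line 5)
            b += W @ (f(x) / S)              # <b> update (line 6)
    alpha = np.linalg.solve(A, b)            # line 7
    return alpha.sum()                       # <F>_OMIS = sum_j alpha_j
```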
## Appendix C Algorithm details
This section presents the two initialization subroutines for Algorithm 1 and
Algorithm 2. Their role is to run a first epoch in a classical SGD loop in
order to process every data sample once. For each sample, the importance metric
is recorded in the memory $q$ and returned together with the current model
parameters. This approach avoids a separate pass over the data to compute the
importance, reusing the forward step that SGD already requires.
Algorithm 4 SGD-based initialization of persistent per-data importance $q$ in
Algorithm 1.
1:function Initialize($x$,$y$,$\Omega$,$\theta$,$B$)
2: for $t\leftarrow 1$ to $|\Omega|/B$ do
3: $x,y\leftarrow\\{x_{i},y_{i}\\}_{i=(t-1)\cdot{B}+1}^{t\cdot{B}}$
4: $l(x)\leftarrow{\mathcal{L}}(m(x,\theta),y)$
5: $\nabla l(x)\leftarrow$ Backpropagate$(l(x))$
6: $\langle\nabla L_{\theta}\rangle(x)\leftarrow$ $\nabla l(x)/B$ $\leftarrow$
Equation 3
7: $\theta\leftarrow\theta-\eta\,\langle\nabla L_{\theta}\rangle(x)$
$\leftarrow$ Equation 2
8: $q(x)\leftarrow\left\|\frac{\partial\mathcal{L}(x)}{\partial
m(x,\theta)}\right\|$
9: return $q$,$\theta$
Algorithm 5 Subroutine for initialization in Algorithm 2
1:function InitializeMIS($x$,$y$,$\Omega$,$\theta$,$B$)
2: Initialize $\boldsymbol{q}$ in a classical SGD loop
3: for $t\leftarrow 1$ to $|\Omega|/B$ do
4: $x,y\leftarrow\\{x_{i},y_{i}\\}_{i=(t-1)\cdot{B}+1}^{t\cdot{B}}$
5: See all samples in the first epoch
6: $l(x)\leftarrow{\mathcal{L}}(m(x,\theta),y)$
7: $\nabla l(x)\leftarrow$ Backpropagate$(l(x))$
8: $\langle\nabla L_{\theta}\rangle(x)\leftarrow$ $\nabla l(x)/B$ $\leftarrow$
Equation 3
9: $\theta\leftarrow\theta-\eta\,\langle\nabla L_{\theta}\rangle(x)$
$\leftarrow$ Equation 2
10: $\boldsymbol{q}(x)\leftarrow\frac{\partial\mathcal{L}(x)}{\partial
m(x,\theta)}$
11: return $\boldsymbol{q}$,$\theta$
Algorithm 6 Subroutine for cross entropy loss importance metric
1:$x_{i}=$ data sample, $y_{i}=$ class index of $x_{i}$
2:function Importance($x_{i}$,$y_{i}$)
3: $s\leftarrow\exp(m(x_{i},\theta))/\sum_{k=1}^{C}\exp(m(x_{i},\theta)_{k})$
$\leftarrow$ Eq.14
4: $q\leftarrow\left\|\boldsymbol{s}-\mathbf{1}_{y_{i}}\right\|$ $\leftarrow$ Eq.16
5: return $q$
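A vectorized sketch of this subroutine, using the closed-form gradient (16) and its norm as in Appendix D (the batched interface is our assumption):

```python
import numpy as np

def cross_entropy_importance(logits, y):
    """Importance metric from the analytic gradient (16): q_i = ||s - 1_{y_i}||.

    logits: (B, C) raw network outputs m(x, theta); y: (B,) integer labels.
    """
    z = logits - logits.max(axis=1, keepdims=True)         # stabilized softmax
    s = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)   # Eq. (14)
    s[np.arange(len(y)), y] -= 1.0                         # s_j - 1{j == y_i}
    return np.linalg.norm(s, axis=1)                       # per-sample norm
```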
## Appendix D Cross-entropy loss gradient
Machine learning frameworks take data $x$ as input and propagate it through
layers of matrix multiplications with added biases. The output layer is then
fed to the softmax function to obtain values ${s}$ that enter the loss function.
$y$ represents the target values. We focus on the categorical cross-entropy
loss function for the classification problem (with $C$ categories) given by:
${\mathcal{L}}_{\text{cross-ent}}=-\sum_{i}y_{i}\log{s}_{i},\;\;\;{s}_{i}=\frac{\exp(m(x,\theta)_{i})}{\sum_{l=1}^{C}\exp(m(x,\theta)_{l})}.$
(14)
For backpropagation, we need to calculate the derivative of the $\log{s}$ term
w.r.t. the weighted input $z$ of the output layer. We can easily derive the
derivative of the loss from first principles as shown below:
$\begin{split}\frac{\partial{\mathcal{L}}_{\text{cross-ent}}}{\partial
m(x_{i},\theta)_{j}}&=-\frac{\partial}{\partial
m(x_{i},\theta)_{j}}\left(\sum_{i}^{C}y_{i}\log{s}_{i}\right)\\\
&=-\sum_{i}^{C}y_{i}\frac{\partial}{\partial
m(x_{i},\theta)_{j}}\log{s}_{i}\\\
&=-\sum_{i}^{C}\frac{y_{i}}{{s}_{i}}\frac{\partial{s}_{i}}{\partial
m(x_{i},\theta)_{j}}\\\
&=-\sum_{i}^{C}\frac{y_{i}}{{s}_{i}}{s}_{i}\cdot(\mathbf{1}\\{i==j\\}-{s}_{j})\\\
&=\sum_{i}^{C}{y_{i}}\cdot{s}_{j}-\sum_{i}^{C}y_{i}\cdot(\mathbf{1}\\{i==j\\})\\\
&={s}_{j}\sum_{i}^{C}{y_{i}}-y_{j}={s}_{j}-y_{j}\end{split}$ (15)
The partial derivative of the cross-entropy loss function w.r.t. output layer
parameters has the form:
$\displaystyle\frac{\partial{\mathcal{L}}_{\text{cross-ent}}}{\partial
m(x_{i},\theta)_{j}}$ $\displaystyle={s}_{j}-y_{j}$ (16)
For classification tasks, we directly use this analytic form of the derivative
and compute its norm as weights for adaptive and importance sampling.
## Appendix E Importance momentum
Updating the persistent per-sample importance $q$ directly sometimes leads to a
sudden decrease of accuracy during training. To make the training process more
stable, we update $q$ by linearly interpolating the importance at the previous
and current steps:
$q(x)=\gamma\cdot q_{prev}(x)+(1-\gamma)\cdot q(x)$ (17)
where $\gamma$ is a constant for all data samples. In practice, we use
$\gamma\in\\{0.0,0.1,0.2,0.3\\}$ as it gives the best trade-off between
importance update and stability. This can be seen as a momentum evolution of
the per-sample importance to avoid high variation. Utilizing an exponential
moving average to update the importance metric prevents the incorporation of
outlier values. This is particularly beneficial in noisy setups, such as
situations with a high number of classes or a low total number of data samples.
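The update is a one-liner; a sketch for clarity:

```python
def update_importance(q_prev, q_new, gamma=0.2):
    """Momentum update of the per-sample importance, Eq. (17).

    gamma in {0.0, 0.1, 0.2, 0.3} gave the best trade-off in our experiments;
    gamma = 0 disables the momentum and keeps the freshly computed value.
    """
    return gamma * q_prev + (1.0 - gamma) * q_new
```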
## Appendix F Additional results
This section provides additional results, including an ablation study as shown
in Figure 9 for linear-system momentum used in Algorithm 2 and results of our
adaptive sampling method. Figures 9 and 6 demonstrate that classical SGD, DLIS
and Our IS work similarly with and without momentum. Our OMIS outperforms
other methods in both cases.
Figures 8, 10 and 11 show that our adaptive sampling variant (our AS) can
achieve better results than our IS or our OMIS in practice. Our AS is a
heuristic and we leave its theoretical formulation as future work.
Figure 8: We compare loss for the MNIST dataset between the resampling
algorithm by Katharopoulos & Fleuret (2018) (DLIS) and our algorithm. At equal
epochs, DLIS works better than both classical and resampling SGD. However, at
equal time, the resampling cost is too high, making DLIS even slower than
standard SGD.

Figure 9: Ablation study on point-cloud classification using linear-system
momentum as described in Algorithm 2, with baselines represented as dashed
lines. Our OMIS still outperforms the other baselines at equal epochs, similar
to the results shown in Figure 6.

Figure 10: Comparisons on CIFAR-10 using a Vision Transformer (ViT)
(Dosovitskiy et al., 2020). The results show that our importance sampling
scheme (Our IS) and the adaptive sampling variant (Our AS) can improve over
classical SGD, LOW (Santiago et al., 2021) and DLIS (Katharopoulos & Fleuret,
2018) on a modern transformer architecture.

Figure 11: On the CIFAR-100 classification dataset, instead of comparing
against the DLIS resampling algorithm, we use the DLIS importance in our
Algorithm 1. We display a zoom-in of the end of the curves to highlight the
differences. At equal epochs (left), our methods (Our IS & Our AS) show
improvements compared to LOW (Santiago et al., 2021) and the DLIS weights. At
equal time (right), LOW and the DLIS weights take longer to converge. Overall,
our approach shows faster convergence with lower importance-computation cost.
# Beyond Gaussian fluctuations of quantum anharmonic nuclei.
The case of rotational degrees of freedom
Antonio Siciliano <EMAIL_ADDRESS> Dipartimento di Fisica,
Università di Roma La Sapienza, Piazzale Aldo Moro 5, 00185 Roma, Italy
Lorenzo Monacelli Dipartimento di Fisica, Università di Roma La Sapienza,
Piazzale Aldo Moro 5, 00185 Roma, Italy Theory and Simulation of Materials
(THEOS), and National Centre for Computational Design and Discovery of Novel
Materials (MARVEL), École Polytechnique Fédérale de Lausanne, 1015 Lausanne,
Switzerland Francesco Mauri Dipartimento di Fisica, Università di Roma La
Sapienza, Piazzale Aldo Moro 5, 00185 Roma, Italy
###### Abstract
The atomic motion in molecular crystals, such as high-pressure hydrogen or
hybrid organic-inorganic perovskites, is very complex due to quantum
anharmonic effects. In addition, these materials accommodate rotational
degrees of freedom. All the approximate methods that describe the nuclear
thermodynamics using Cartesian coordinates lead to an unphysical hybridization
of roto-librations with other high-energy modes. Hence, they do not accurately
account for the free energy contributions of these degrees of freedom. So, a
reliable description of a molecular crystal’s phase diagram is only possible
with Path Integral Molecular Dynamics (PIMD) at a high computational cost.
This work shows how to include roto-librational modes in the Self-Consistent
Harmonic Approximation (SCHA) framework. SCHA approximates the nuclear
Cartesian fluctuations as Gaussian, thus neglecting curvilinear motion.
Keeping its low computational cost, we employ the generalization of SCHA,
called nonlinear SCHA (NLSCHA). Our method relies on a Gaussian ansatz for the
nuclei density matrix on a curved manifold, allowing us to map roto-librations
into harmonic modes defined on a surface. By optimizing the surface’s
curvature variationally, we minimize the free energy, allowing the spontaneous
activation of these degrees of freedom without external parameters. Notably,
in the limit of vanishing curvature, we recover the standard SCHA.
## I Introduction
Thanks to recent methodological advances [1, 2, 3, 4, 5] in the field of
computational condensed matter, the pivotal role of quantum fluctuations,
anharmonic effects, and finite temperature excitations on the equilibrium
ionic properties has been unveiled. We emphasize that a reliable free-energy
calculation should encompass all degrees of freedom. Indeed, the crystal
configuration can accommodate new types of atomic motion as it changes with
temperature and pressure.
In the simplest structures, the only degrees of freedom are those we refer to
as ’linear vibrations’, i.e. Gaussian fluctuations in a Cartesian space. In
this case, the lattice excitations, such as breathing modes or molecular
stretching, are defined in a flat space. Certain materials exhibit rotations
and librations,
i.e. partial rotations, showcasing unique characteristics as atom movement is
confined to a curved surface. Indeed, the free rotation of a diatomic molecule
is the correlated motion of two atoms on a sphere, where the diameter
corresponds to the average bond length. The situation is more complex in the
case of a molecular crystal, where a group of atoms forms a strongly bonded
rigid structure. The latter displays low-energy modes as it can rotate freely
or partially without distortions. This type of motion has a low impact on the
internal energy but, on the contrary, makes a crucial contribution to the
total entropy, increasing the phase space available for the system. If we miss
these degrees of freedom, we may not detect phase transitions driven by the
competition between internal energy and entropy.
A prototypical example is found in the Rigid Unit Modes (RUMs) of framework
materials [6], formed by stiff connected polyhedrons of atoms, e.g. SiO4 and
AlO4 tetrahedra. RUMs have been identified as the soft modes responsible for
the displacive transitions, where the rigid units rotate and translate from
one phase to another [7]. Similar behavior is shown by the methyl group CH3 as
it can behave as a spinning top [8, 9] depending on the environment
surrounding it. This molecule is found in many pharmaceutical products, e.g.
paracetamol, and biological compounds, such as proteins and deoxyribonucleic
acid (DNA). Another interesting case is the high-pressure hydrogen phase
diagram. In phase I, the H2 centers form an h.c.p. lattice, and the quantum
distribution of protons is almost spherical, suggesting that the molecules
behave as free rotators [10]. The increasing pressure leads to a larger
intermolecular interaction, and phase II is stabilized at around $110$ GPa
[11]. Smaller molecular librations replace free rotations, hence the
orientational disorder is reduced. Rotations are also thermally activated as
in the III-IV phase transition where the molecules behave as free rotators
above $300$ K and $220$ GPa [12].
If used for a systematic and unbiased investigation of molecular crystal phase
diagrams, a reliable free energy method should accurately describe linear
vibrations and roto-librational modes. Only Molecular Dynamics (MD)
simulations can achieve this at the classical level. Indeed, MD offers the
advantage of including non-perturbative anharmonic effects while accurately
representing all the degrees of freedom, including rotations. Nevertheless, it
does not account for quantum effects, which Path-Integral Molecular Dynamics
(PIMD), the non-perturbative, reference-exact method, includes at finite
temperatures. However, PIMD carries a high computational cost, requiring the
simultaneous evolution of a number of the system’s replicas inversely
proportional to temperature. For this reason, the development of approximate
ionic free energy methods became of paramount importance.
The most commonly used and straightforward is the harmonic approximation (HA).
The HA fails when phonon interactions dominate due to substantial deviations
of atoms from equilibrium positions, and extensive regions of the BO energy
surface (BOES) are explored. Such a condition is relevant for light nuclei,
leading to sizeable zero-point motion (ZPM), near the melting point, or during
a second-order phase transition. In addition, the HA is not suited for
molecular crystals as it completely misses rotational modes. Indeed, in this
case, the small oscillations assumption of the HA breaks down, meaning that it
completely disregards the rotations of molecules or groups of atoms within a
molecular crystal.
If sufficiently small, anharmonic corrections can be incorporated via
perturbation theory starting from the HA. However, this approach becomes
cumbersome and is ineffective for hydride superconductors, ferroelectrics, and
charge-density wave compounds. The growing interest in these materials
prompted the community to develop other methods, Self-Consistent Phonons (SCP)
theories [1, 2, 4], Vibrational Self-Consistent Field (VSCF) [5] and the
Temperature-Dependent Effective Potential (TDEP) method [3]. All of them, at
different levels of approximation, describe anharmonicity for linear
vibrations. However, none of these methods can effectively describe roto-
librations, as they all depend on Cartesian coordinates, which are
insufficient for capturing the atomic motion on a surface. So, the call for
approximate methods to handle rotations is more urgent than ever.
The self-consistent description of anharmonic phonons [1, 13, 14, 15, 16, 17,
18, 19] is a family of variational methods that constrain the nuclear
probability distribution to be Gaussian in Cartesian coordinates. This
approximation allows the enforcement of symmetries, which is an excellent help
for identifying phase transition, and interpolation techniques are available
to describe the thermodynamic limit, avoiding the simulation of big
supercells. Remarkably, none of these advantages are present in MD/PIMD.
Nevertheless, when atoms rotate freely or partially, the probability
distribution deviates significantly from a Gaussian shape [8, 12]. Therefore,
self-consistent phonon (SCP) theories [1, 13, 14, 15, 16] are not reliable in
such cases.
Here, we show how the Nonlinear Self-Consistent Harmonic Approximation
(NLSCHA) [20] can overcome this problem. We employ an ad-hoc change of
variables to deform the Gaussian ansatz and allow normal modes to occur in a
curved manifold, capturing the rotations of molecules and rigid body clusters.
The key aspect is that we variationally optimize the curvature, allowing the
system to spontaneously activate these modes only if the quantum free energy
is minimized. Remarkably, if the curvature is zero, the surface on which we
constrain the atomic motion becomes flat, and in this limit, we describe only
linear vibrations with the same accuracy as standard SCP theories.
In section II, we discuss the failure of SCP theories [1, 2, 4] in accounting
for roto-librational degrees of freedom with a 2D model for H2, as already
noted in Refs [8, 12]. To address such limitation, section III shows how to
incorporate these modes in the NLSCHA framework with negligible computational
cost. In the end, in sections IV-VI, we benchmark our method at zero and
finite temperature, and in section VII we show how to generalize our method to
the three-dimensional case of molecules and crystals.
## II Failure on rotational modes
Here we show the failure of the SCP methods with molecular rotations. In
particular, among these methods, we consider the Self-Consistent Harmonic
Approximation (SCHA) [21, 1]. To this purpose, we solve the H2 molecule
rotating in two dimensions. In the center of mass reference frame, the only
degree of freedom is the relative coordinate,
$\bm{R}=\bm{R}_{1}-\bm{R}_{2}=(x,y)$, with an effective mass $m=m_{\ch{H}}/2$.
The BOES is given by a Morse potential fitted ab initio with DFT-BLYP on
$\ch{H2}$ (see appendix A) plus an empirical crystal field $E$ along the $x$
direction to control the rotational disorder
$V^{\text{(BO)}}(\bm{R})=V_{0}+d\left\\{1-\exp\left[-a(|\bm{R}|-R_{\text{eq}})\right]\right\\}^{2}+Ex$
(1)
where $|\bm{R}|=\sqrt{x^{2}+y^{2}}$
$V_{0}=-1.172\text{ Ha},\qquad d=0.137\text{ Ha},\qquad a=1.217\text{ Bohr}^{-1},\qquad R_{\text{eq}}=1.393\text{ Bohr}$ (2)
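For reference, the model surface of Eqs (1)-(2) is straightforward to evaluate; here is a sketch in atomic units (the squared bracket follows the standard Morse form):

```python
import numpy as np

# Morse parameters of Eq. (2), in Hartree atomic units.
V0, d, a, R_eq = -1.172, 0.137, 1.217, 1.393

def V_BO(x, y, E):
    """Model BO surface of Eq. (1): Morse potential plus crystal field E*x."""
    r = np.hypot(x, y)  # |R|
    return V0 + d * (1.0 - np.exp(-a * (r - R_eq)))**2 + E * x
```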
Figure 1: Fig. a: exact, harmonic, and SCHA zero-point energy (ZPE), i.e. the
difference between the ground state energy and the potential minimum. Figs
b-g: exact and SCHA probability distributions (in Bohr-2) for three values of
the crystal field marked by horizontal lines in the upper panel
($E=0,0.005,0.6$ Ha/Bohr). SCHA can not describe roto-librations as its
Gaussian trial wavefunction unphysically hybridizes low-energy rotations with
high-energy vibrations.
In Fig. 1 a, we compare the exact, harmonic, and SCHA zero-point energy (ZPE)
as a function of the crystal field $E$, which controls the rotational freedom.
In the harmonic approximation,
we expand $V^{\text{(BO)}}(\bm{R})$ around its minimum, and from the second-
order terms we extract the normal modes. The details of the exact
diagonalization and SCHA simulations are in appendix A and appendix B. The ZPE
is the difference between the ground state energy and the minimum of the
potential, representing the energy excess due to quantum uncertainty. In the
harmonic approximation, the ZPE is
$\text{ZPE}_{\text{harm}}=\frac{\hbar\left(\omega_{\text{harm,vib}}+\omega_{\text{harm,rot}}\right)}{2}$
(3)
where $\omega_{\text{harm,vib}},\omega_{\text{harm,rot}}$ are the frequencies
of the vibrational and rotational modes (respectively polarized along x and
y). In Fig. 1 b, d, f, we represent the probability distribution of the
relative coordinate $\bm{R}$ increasing the value of the crystal fields. In
the absence of $E$, the H2 molecule behaves like a free rotator (Fig. 1 b),
then rotations are progressively suppressed as $E$ increases (Fig. 1 d) until
the molecule is locked with a fixed orientation (Fig. 1 f).
In the harmonic approximation, $\omega_{\text{harm,rot}}$ is zero at $E=0$
Ha/Bohr, indicating the presence of a free rotator mode. So, there is a
direction along which the propagation costs zero energy, thereby undermining
the assumption of small oscillations on which the HA relies. Consequently, in
the case of full rotational invariance, the HA is not justified at finite
temperatures. In Fig. 2, we compare the exact, SCHA and harmonic free energies
at $1000$ K for low values of $E$. In the limit of vanishing crystal field,
the harmonic entropy diverges logarithmically as the rotational frequency
tends to zero. Consequently, the HA is unreliable at finite temperatures for
low values of $E$.
Figure 2: Exact, SCHA and harmonic free energies at $1000$ K for low values of
the crystal field $E$. As we reduce $E$, the rotational frequency is vanishing
$\omega_{\text{harm,rot}}\rightarrow 0$, hence the harmonic free energy
diverges as $f_{0}\log(E/E_{0})$ (dotted line).
In contrast, the SCHA method effectively handles large nuclear fluctuations by
adjusting the width and center of the Gaussian, minimizing the total energy.
Thus, we always get well-defined solutions. However, the SCHA method samples
the BOES using a Gaussian distribution in Cartesian coordinates, Fig. 1 c, e,
g. The approximation employed by SCHA results in an overestimation of the
rotational barrier, as it hybridizes vibrational and rotational modes, causing
an unphysical stiffening of the latter. This is evident when comparing SCHA
and exact probability distribution, Fig. 1 b-e. So a Gaussian trial density
matrix lacks the flexibility to describe free or partially rotating molecules,
thus sampling high-energy regions of the BOES. As a result, the SCHA ZPEs are
unreliable at low crystal fields. They can even be worse than the predictions
of the HA, which, however, provides a non-variational energy. Therefore, the SCHA
method fails to meet the gold-standard free energy method requirements when
rotations are present [8, 12]. Nevertheless, as $E$ is increased, rotations
are suppressed, and the SCHA method performs better, adeptly capturing the
anharmonic effects of the Morse potential and reducing its error in comparison
to the exact solution, as seen in Fig. 1 f-g.
Here, we demonstrated again that the SCHA is a valuable tool when linear
vibrations dominate as already noted in Refs [8, 12]. However, as soon as
rotational modes become active, the method falters due to the trial density
matrix’s lack of flexibility. All the other methods belonging to the SCP
family have the same problem. So, SCP methods can not detect the presence of a
rotational mode, leading to an overestimation of the total energy. To address
these issues, we employ the NLSCHA theory [20], which adapts a new trial
density matrix by automatically activating a roto-librational mode if it
lowers the total energy.
## III The solution of nonlinear Self-Consistent Harmonic Approximation
The SCP approach falls short in considering roto-librations, as these modes
generate non-Gaussian fluctuations (refer to Fig. 1 b-d), surpassing the
capabilities of these methods. We employ the NLSCHA theory to modify the
Gaussian ansatz with an invertible nonlinear transformation so that it can
accommodate rotations.
### III.1 The nonlinear change of variables
We aim to disentangle rotations from stretching modes as the SCHA leads to
unphysical hybridization. The solution of NLSCHA is to variationally optimize
a harmonic density matrix in a suitable auxiliary space that separates these
modes.
Figure 3: The stereographic transformation represents the Earth’s surface
(Fig. a) on a 2D plane (Fig. b). Each point on the Earth, identified by
$(\theta,\phi)$, is mapped to a unique point $(x_{\text{s}},y_{\text{s}})$ on
a plane tangent to the North Pole (N) (Fig. c). To build this transformation,
we connect S to a point $(\theta,\phi)$; then the stereographic projection
$(x_{\text{s}},y_{\text{s}})$ is given by the intersection of this line with
the tangent plane as Fig. c shows. This change of variables applied to the
atomic coordinates disentangles rotational modes (e.g. the motion on the
stereographic plane) from the vibrational ones (e.g. the motion perpendicular
to the surface). The plot was made with the cartopy package [22].
The most straightforward choice to describe rotations is spherical coordinates
$(r,\theta,\phi)$ [12]
$\displaystyle r$ $\displaystyle=\sqrt{x^{2}+y^{2}+z^{2}}$ (4a)
$\displaystyle\phi$ $\displaystyle=\atan\left(\frac{y}{x}\right)$ (4b)
$\displaystyle\theta$
$\displaystyle=\arccos\left(\frac{z}{\sqrt{x^{2}+y^{2}+z^{2}}}\right)$ (4c)
Note that it is possible to define small oscillations (i.e. the harmonic
approximation) only in 2D $(r,\phi)$ but not in 3D $(r,\phi,\theta)$ as small
variations of the polar angle $\theta$ around $\theta\simeq 0$ lead to large
changes of the azimuthal one $\phi$. In addition, Eqs (4) have a different
topology from the Cartesian space, indeed $(r,\phi,\theta)$ are not defined in
$\mathbb{R}^{3}$. To use the NLSCHA framework, we must work with an auxiliary
space with the same topology as the Cartesian one so that small oscillations
are well-defined and the entropy is an analytical quantity [20]. Working with
$r$ is not a problem, as it represents the interatomic distance and the region
$r\rightarrow 0$ has a low probability. Indeed, $r\rightarrow 0$ has a
substantial energetic cost as it would correspond to nuclear fusion. So, we
have to solve the problem of the angular variables’ bounded range. If we adopt
the stereographic projection, the topology of $(\phi,\theta)$ becomes the same
as the Cartesian space. This technique represents the Earth on a 2D plane with
range $\mathbb{R}^{2}$. All the Earth’s surface points are projected on a
tangent plane at the North Pole (N) using the South Pole (S) as the projecting
point. Note that we can not represent S in this way and that no invertible
transformation can achieve this.
The NLSCHA auxiliary manifold we define here derives from a parameterization
of the stereographic transformation as it allows to explore more phase space,
i.e. it represents both North and South Hemispheres in cartography. For our
purposes, this property implies that we describe large angular fluctuations.
To better illustrate our approach, we consider the H2 molecule in 2D. We
define the auxiliary variables $\bm{u}=(u_{1},u_{2})$ from
$\bm{R}=\bm{\xi}(\bm{u})$ (5)
where $\bm{R}=(x,y)$ is the H2 relative coordinate in the center of mass
reference frame and $\bm{\xi}(\bm{u})$ is the stereographic projection
$\displaystyle x(\bm{u})$
$\displaystyle=x_{\text{C}}+(u_{1}+r_{0})\cos(\phi(u_{2}))$ (6a)
$\displaystyle y(\bm{u})$
$\displaystyle=y_{\text{C}}+(u_{1}+r_{0})\sin(\phi(u_{2}))$ (6b)
$\displaystyle\phi(u_{2})$
$\displaystyle=\phi_{0}+2\atan\left(\frac{u_{2}}{2r_{0}}\right)$ (6c)
Note that $\bm{\xi}(\bm{u})$ is parametrized by $x_{\text{C}}$,
$y_{\text{C}}$, $r_{0}$, and $\phi_{0}$. They represent the center of
curvature, $\bm{\mathcal{R}}_{\text{C}}$, and the curvature vector,
$\bm{\mathcal{R}}_{\text{T}}$,
$\displaystyle\bm{\mathcal{R}}_{\text{C}}$
$\displaystyle=\begin{bmatrix}x_{\text{C}}&y_{\text{C}}\end{bmatrix}$ (7a)
$\displaystyle\bm{\mathcal{R}}_{\text{T}}$
$\displaystyle=\begin{bmatrix}r_{0}\cos(\phi_{0})&r_{0}\sin(\phi_{0})\end{bmatrix}$
(7b)
where $\bm{\mathcal{R}}_{\text{T}}$ contains the information on the curvature
$\kappa$ of the nonlinear transformation
$\kappa=|\bm{\mathcal{R}}_{\text{T}}|^{-1}=r_{0}^{-1}$ (8)
Figure 4: A geometrical representation of the nonlinear transformation, Eq.
(6), with curvature $\kappa>0$. $\bm{R}=\bm{\xi}(\bm{u})$ corresponds to a
stereographic projection in 2D. The center and radius of the circle are
defined respectively by $\bm{\mathcal{R}}_{\text{C}}$ (red arrow) and
$|\bm{\mathcal{R}}_{\text{T}}|$ (green arrow).
$\bm{\mathcal{R}}_{\text{C}}+\bm{\mathcal{R}}_{\text{T}}$ indicates the
position of the tangent line, i.e. the North Pole. The stereographic
projection maps the points $u_{2}$ on angles $\phi$, with the South Pole
$\bm{\mathcal{R}}_{\text{C}}-\bm{\mathcal{R}}_{\text{T}}$ being the projecting
point.
We examine Fig. 4 to illustrate the geometrical meaning of Eq. (6) with
$\kappa>0$. The vector $\bm{\mathcal{R}}_{\text{C}}$ identifies the circle’s
center (center of the Earth) with radius $|\bm{\mathcal{R}}_{\text{T}}|$.
$u_{1}=|\bm{R}-\bm{\mathcal{R}}_{\text{C}}|-r_{0}$, the radial coordinate, and
$u_{2}$, the stereographic projection of $\phi-\phi_{0}$, identify the
position of a point $\bm{R}=(x,y)$ in the new space. We perform the
stereographic projection on the tangent plane at
$\bm{\mathcal{R}}_{\text{C}}+\bm{\mathcal{R}}_{\text{T}}$ (the North Pole)
using $\bm{\mathcal{R}}_{\text{C}}-\bm{\mathcal{R}}_{\text{T}}$ (the South
Pole) as the projecting point. Note that the North and South Pole are free
parameters of the transformation. As
$|\bm{\mathcal{R}}_{\text{T}}|\rightarrow+\infty$, the curvature
$\kappa\rightarrow 0$ so we recover a linear transformation (see section
III.2)
$\bm{R}\simeq\bm{\mathcal{R}}_{\text{C}}-\bm{\mathcal{R}}_{\text{T}}+\bm{u}=\bm{\mathcal{R}}+\bm{u}$
(9)
which is the one employed by standard SCHA [1]
($\bm{\mathcal{R}}_{\text{C}}-\bm{\mathcal{R}}_{\text{T}}=\bm{\mathcal{R}}$ is
the average atomic position in SCHA).
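To make Eqs (6)-(8) concrete, here is a small sketch of the change of variables together with the analytic Jacobian determinant that enters Eq. (11) below (our transcription; parameter defaults are arbitrary):

```python
import numpy as np

def xi(u, xC=0.0, yC=0.0, r0=1.393, phi0=0.0):
    """2D stereographic change of variables R = xi(u), Eq. (6).

    u1 is the radial displacement from the circle of radius r0 centered at
    (xC, yC); u2 is the stereographic coordinate of the angle.
    """
    u1, u2 = u
    phi = phi0 + 2.0 * np.arctan2(u2, 2.0 * r0)
    return np.array([xC + (u1 + r0) * np.cos(phi),
                     yC + (u1 + r0) * np.sin(phi)])

def jacobian_det(u, r0=1.393):
    """Analytic Jacobian determinant of this map, cf. Eq. (11):
    J = (u1 + r0) * dphi/du2, with dphi/du2 = (1/r0) / (1 + (u2/(2 r0))**2)."""
    u1, u2 = u
    return (u1 + r0) / (r0 * (1.0 + (u2 / (2.0 * r0))**2))
```

For $|u_{2}|\ll r_{0}$ the angular map is linear in $u_{2}$, consistent with the flat-space limit of Eq. (9).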
### III.2 The trial density matrix
Here, we show how to incorporate the ad-hoc change of variables, Eq. (6), with
NLSCHA [20]. Within our framework, we variationally minimize the quantum free
energy with a trial density matrix corresponding to a Gaussian probability
distribution in $\bm{u}$, not in $\bm{R}$ as in the SCHA. Indeed, depending on
$\kappa$, we have a probability distribution that supports roto-librations.
The NLSCHA density operator is defined by the matrix elements (see appendix C)
$\bra{\bm{R}}\hat{\rho}_{\text{nl}}\ket{\bm{R}^{\prime}}=\frac{\overline{\rho}_{\text{nl}}(\bm{u},\bm{u}^{\prime})}{\sqrt{\mathcal{J}(\bm{u})\mathcal{J}(\bm{u}^{\prime})}}$
(10)
where $\mathcal{J}$ is the Jacobian’s determinant
$\mathcal{J}(\bm{u})=\det\left(\partialderivative{\bm{\xi}(\bm{u})}{\bm{u}}\right)>0$
(11)
and $\overline{\rho}_{\text{nl}}(\bm{u},\bm{u}^{\prime})$ satisfies
($\beta^{-1}=k_{\text{B}}T$)
$\displaystyle\overline{\rho}_{\text{nl}}(\bm{u},\bm{u}^{\prime})$
$\displaystyle=\frac{\bra{\bm{u}}\exp{-\beta\hat{\overline{\mathcal{H}}}_{\text{nl}}}\ket{\bm{u}^{\prime}}}{\overline{\mathcal{Z}}_{\text{nl}}}$
(12a) $\displaystyle\overline{\mathcal{Z}}_{\text{nl}}$
$\displaystyle=\int_{-\infty}^{+\infty}d\bm{u}\bra{\bm{u}}\exp{-\beta\hat{\overline{\mathcal{H}}}_{\text{nl}}}\ket{\bm{u}}$
(12b)
In NLSCHA the auxiliary harmonic Hamiltonian
$\hat{\overline{\mathcal{H}}}_{\text{nl}}$ is defined in $\bm{u}$-space
$\bra{\bm{u}}\hat{\overline{\mathcal{H}}}_{\text{nl}}\ket{\bm{u}^{\prime}}=\delta(\bm{u}-\bm{u}^{\prime})\left(-\frac{\hbar^{2}}{2}\partialderivative{\bm{u}}\cdot\overset{-1}{\bm{\mathcal{M}}}\cdot\partialderivative{\bm{u}}+\frac{1}{2}\bm{u}\cdot\bm{\Phi}_{\text{nl}}\cdot\bm{u}\right)$
(13)
The variational parameters of $\hat{\rho}_{\text{nl}}$ (Eq. (10)) are the
force constant $\bm{\Phi}_{\text{nl}}$, the mass tensor $\bm{\mathcal{M}}$ and
the free parameters of the nonlinear transformation $\bm{\xi}$
$\bm{\Gamma}_{\text{nl}}=(\bm{\mathcal{R}}_{\text{C}},\bm{\mathcal{R}}_{\text{T}})$
(14)
Note that both $\bm{\Phi}_{\text{nl}}$ and $\bm{\mathcal{M}}$ are symmetric
and positive definite.
Figure 5: Figs a-b show respectively $\overline{\rho}_{\text{nl}}(\bm{u})$
and the corresponding physical probability distribution
$\bra{\bm{R}}\hat{\rho}_{\text{nl}}\ket{\bm{R}}$ with finite curvature
$\kappa=|\bm{\mathcal{R}}_{\text{T}}|^{-1}>0$. Figs c-d report the same
quantities but with zero curvature $\kappa\rightarrow 0$. When $\kappa$ is
finite, $\bra{\bm{R}}\hat{\rho}_{\text{nl}}\ket{\bm{R}}$ describes roto-
vibrational modes, see Fig. b. Otherwise, if $\kappa\rightarrow 0$, the metric
of the nonlinear transformation is trivial so
$\bra{\bm{R}}\hat{\rho}_{\text{nl}}\ket{\bm{R}}$ becomes a SCHA-like
distribution, see Fig. d.
We prove that $\bra{\bm{R}}\hat{\rho}_{\text{nl}}\ket{\bm{R}}$ describes roto-
librational modes. In Fig. 5 we compare $\overline{\rho}_{\text{nl}}(\bm{u})$,
Eqs (12), with the corresponding physical probability distribution
$\bra{\bm{R}}\hat{\rho}_{\text{nl}}\ket{\bm{R}}$, Eq. (10), showing the effect
of the nonlinear transformation when $\kappa>0$ (Fig. 5 a, b) or
$\kappa\rightarrow 0$ (Fig. 5 c, d). Note that in Fig. 5, we keep
$\bm{\Phi}_{\text{nl}}$ and $\bm{\mathcal{M}}$ fixed. In the auxiliary
$\bm{u}$-space $\overline{\rho}_{\text{nl}}(\bm{u})$ is a Gaussian, see Fig. 5
a-c. When the nonlinear change of variables is applied, the shape of
$\bra{\bm{R}}\hat{\rho}_{\text{nl}}\ket{\bm{R}}$ changes depending on the
curvature $\kappa$, see Fig. 5 b, d. As expected from section III.1, a finite
$\kappa$ bends the probability distribution thanks to the stereographic
projection. So $\bra{\bm{R}}\hat{\rho}_{\text{nl}}\ket{\bm{R}}$ describes a
molecule that simultaneously rotates and vibrates, see Fig. 5 b. On the
contrary, when $\kappa\rightarrow 0$, we obtain a Gaussian probability
distribution, recovering the standard SCHA, see Fig. 5 d.
From Fig. 5 we deduce that the newly introduced variational manifold is a
superset of the SCHA one, ensuring that NLSCHA systematically outperforms the SCHA for
roto-librations. We variationally estimate the exact BO free energy (see
appendix D)
$\displaystyle F^{\text{(BO)}}$ $\displaystyle\leq F_{\text{nl}}$ (15a)
$\displaystyle F_{\text{nl}}$
$\displaystyle=\Tr\left[\hat{\rho}_{\text{nl}}\hat{H}^{\text{(BO)}}\right]+k_{\text{B}}T\Tr\left[\hat{\rho}_{\text{nl}}\log(\hat{\rho}_{\text{nl}})\right]$
(15b)
where the entropic term
$-k_{\text{B}}\Tr\left[\hat{\rho}_{\text{nl}}\log(\hat{\rho}_{\text{nl}})\right]$
has a harmonic form and coincides with the temperature derivative of
$F_{\text{nl}}$ only if we optimize all the free parameters [20]. We minimize
$F_{\text{nl}}$ with respect to the free parameters
$\displaystyle\partialderivative{F_{\text{nl}}}{\bm{\Phi}_{\text{nl}}}$
$\displaystyle=\bm{0}$ (16a)
$\displaystyle\partialderivative{F_{\text{nl}}}{\bm{\mathcal{M}}}$
$\displaystyle=\bm{0}$ (16b)
$\displaystyle\partialderivative{F_{\text{nl}}}{\bm{\Gamma}_{\text{nl}}}$
$\displaystyle=\left(\partialderivative{F_{\text{nl}}}{\bm{\mathcal{R}}_{\text{T}}},\partialderivative{F_{\text{nl}}}{\bm{\mathcal{R}}_{\text{C}}}\right)=\bm{0}$
(16c)
where the gradient in Eqs (16) depends solely on the BO energies and forces as
in the SCHA, see appendix E and Ref. [20] for details. Once the equilibrium
conditions, Eqs (16), are reached the system’s real interacting normal modes
are described, in a self-consistent framework, by the NLSCHA auxiliary phonons
$\bm{D}_{\text{nl}}=\overset{-T}{\sqrt{\bm{\mathcal{M}}}}\cdot\bm{\Phi}_{\text{nl}}\cdot\overset{-1}{\sqrt{\bm{\mathcal{M}}}}=\sum_{\mu=1}^{2}\omega_{\text{nl,}\mu}^{2}\bm{e}_{\text{nl},\mu}\bm{e}_{\text{nl},\mu}$
(17)
where $-1$ and $-T$ indicate the inverse and its transpose. The variational
optimization of its curvature $\kappa$ allows the description of both linear
vibrations (see Fig. 5 d for $\kappa\rightarrow 0$) and roto-librations (see
Fig. 5 b for $\kappa>0$), so the crystal/molecule activates the minimum-free
energy degrees of freedom without any external constraint.
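As an illustration of Eq. (17), the auxiliary phonons follow from a symmetric eigenproblem once $\bm{\Phi}_{\text{nl}}$ and $\bm{\mathcal{M}}$ are given; since $\bm{\mathcal{M}}$ is symmetric, its inverse square root is symmetric too (a minimal sketch):

```python
import numpy as np

def nlscha_phonons(Phi, M):
    """Auxiliary NLSCHA phonons of Eq. (17).

    Phi, M: symmetric positive-definite force-constant and mass tensors.
    Returns the frequencies omega_nl,mu and polarization vectors e_nl,mu.
    """
    m_val, m_vec = np.linalg.eigh(M)
    M_inv_sqrt = m_vec @ np.diag(m_val**-0.5) @ m_vec.T  # M^{-1/2} (symmetric)
    D = M_inv_sqrt @ Phi @ M_inv_sqrt                    # Eq. (17), symmetric
    w2, e = np.linalg.eigh(D)                            # eigenvalues omega^2
    return np.sqrt(w2), e
```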
## IV Results at zero temperature
The NLSCHA results obtained by solving Eqs (16) are reported in Fig. 6. In Fig.
6 a, we report the exact, SCHA and NLSCHA zero-point energy (ZPE) as the
crystal field $E$ varies. Figs 6 b-i compare the exact probability distributions with
the SCHA and NLSCHA ones for three values of the crystal field $E$.
For low values of $E$, the NLSCHA outperforms the SCHA thanks to the finite
curvature $\kappa$ allowing the bending of the vibration in the angular
variable. The description of rotations is excellent, in particular, at
$E=0.005$ Ha/Bohr (see lower panels of Fig. 6) the SCHA error is $21.6$ meV
while the NLSCHA error is one order of magnitude smaller, being just $3.0$
meV. As already discussed in section II, as $E$ is increased, only linear
vibrations survive. Note that, as the angular fluctuation is reduced, NLSCHA
reproduces the SCHA results (Fig. 6); see appendix F.
Figure 6: Fig. a: exact, SCHA and NLSCHA zero-point energy (ZPE), i.e. the
difference between the ground state energy and the minimum of the potential.
Fig. b-e: exact, SCHA, and NLSCHA probability distributions (in Bohr-2) for
three values of the crystal field marked by horizontal lines $E=0,0.005,0.06$
Ha/Bohr. At low values of $E$, NLSCHA describes roto-librations since the
stereographic projection disentangles rotations from vibrations (Figs d-g). At
high $E$, NLSCHA reproduces the SCHA results (Figs i-l).
Figure 7: Fig. a: exact, SCHA and NLSCHA ZPE for low values of the crystal
field (from $E=0$ Ha/Bohr up to $E=5\cdot 10^{-3}$ Ha/Bohr) which do not
suppress roto-librations. Figs b-l: exact, SCHA, and NLSCHA probability
distributions (in Bohr-2) for three values of the crystal field marked by
horizontal lines in the upper panel $E=5\cdot 10^{-3},2.5\cdot 10^{-3},5\cdot
10^{-3}$ Ha/Bohr. Contrary to SHCA, NLSCHA captures the smallest changes in
the rotational degree of freedom thanks to the curvature optimization.
In Fig. 7, we compare the exact solution with SCHA and NLSCHA for extremely
low crystal field values, where the molecule rotates almost freely. Our method
detects even the subtlest alterations in the rotational degree of freedom, see
Fig. 7 b-e. Conversely, SCHA is inaccurate as it yields an almost crystal
field-independent ZPE. While NLSCHA completely captures semi-free rotations
(up to $\pi/2$) even for small values of the crystal field, it fails when
$E=0$ Ha/Bohr (see Fig. 6). Indeed, large angular fluctuations, i.e.
$\omega_{\text{nl,}\text{rot}}\rightarrow 0$, introduce spurious distortions
into the probability distribution preventing a comprehensive characterization
of free rotations, see Fig. 8.
Note that Fig. 8 has a connection with cartography. The limit
$\omega_{\text{nl,}\text{rot}}\rightarrow 0$ implies diverging angular
fluctuations, that push the probability weight at the boundaries of the
stereographic plane where the Jacobian (Eq. (11)) deforms the distribution
(Fig. 8). Similarly, the stereographic projection deforms the geographical
areas closer to the South Pole (Fig. 3). Both in cartography and NLSCHA, this
effect is due to the Jacobian (Eq. (11)) which is dominant for continents
below the Equator (Fig. 3), and large angular fluctuations (Fig. 8).
Importantly, we emphasize that a multi-peak and sharp distribution as in Fig.
8 has a very high kinetic energy, thus making it inaccessible during the
minimization of $F_{\text{nl}}$ (Eq. (15b)).
Figure 8: The NLSCHA distribution
$\bra{\bm{R}}\hat{\rho}_{\text{nl}}\ket{\bm{R}}$ (Bohr-2) for
$\omega_{\text{nl,}\text{rot}}\rightarrow 0$. The limit of vanishing angular
frequency, $\omega_{\text{nl,}\text{rot}}\rightarrow 0$, pushes the spectral
weight at the boundaries of the stereographic plane so that the NLSCHA
Jacobian (Eq. (11)) deforms the probability distribution. Hence, NLSCHA can
not describe free rotations.
## V Results at finite temperature
Figure 9: Fig. a compares the exact free energy with the SCHA and NLSCHA
results ($E=0.01$ Ha/Bohr). Fig. b-g: exact, SCHA and NLSCHA probability
distribution (in Bohr-2) for various temperatures ($0-500-1000$ K). Finite
temperature increases the rotational degree of freedom. SCHA completely misses
the activation of this degree of freedom whereas in NLSCHA the angular
fluctuation is optimized to variationally minimize the free energy.
Temperature plays a major role in the thermodynamics of molecules, as it can
activate their rotations. Here, we investigate the thermal effect on the H2
model comparing the SCHA and NLSCHA with exact results. Fig. 9 a presents the
exact, SCHA, and NLSCHA free energies in the temperature range from $0$ K up
to $1000$ K. Furthermore, Figs 9 b-l display the exact, SCHA, and NLSCHA
probability distributions for $0$, $500$, and $1000$ K. Figs 9 b, c, d show
how the amplitude of the rotation increases upon heating.
Notably, as the temperature rises, the SCHA error also increases since its trial
probability distribution does not bend like the NLSCHA one. The NLSCHA approach
effectively captures the temperature-induced activation of rotations by
appropriately determining the optimal $\kappa$ for each temperature $T$.
Notably, at $500$ K, the NLSCHA error is $5.3$ meV while the SCHA completely
fails with an error of $23.8$ meV. The NLSCHA error grows with temperature as
the entropy is generated by a harmonic Hamiltonian; there is, of course, room
for future work to relax this hypothesis. The free parameters of NLSCHA
and SCHA are reported respectively in appendix F and appendix B.
## VI Phase transition
Figure 10: The model for III-IV phase transition of pure hydrogen. Exact,
SCHA, and NLSCHA free energy difference $\Delta
F=F_{\text{rot}}-F_{\text{vib}}+\delta$ as a function of temperature $T$.
$F_{\text{rot}}$ is the free energy for $E=0.01$ Ha/Bohr and $F_{\text{vib}}$
for $E=0.06$ Ha/Bohr. The critical temperature $T_{\text{c}}$ (defined by
$\Delta F=0$ meV) depends on the value of the shift $\delta$, which differs
for all the methods used. NLSCHA finds the correct critical temperature as it
accurately reproduces the temperature-induced activation of rotations (see
Fig. 9).
Now, we discuss a qualitative model for the III-IV phase transition of high-
pressure hydrogen. In phase III, the orientation of the H2 molecules is fixed
[23]; by increasing temperature, free rotations are activated, and phase IV is
stabilized at around $300$ K [24, 25, 12].
We consider the exact, SCHA and NLSCHA free energy difference for two values
of the crystal field $E=0.01$, $0.06$ Ha/Bohr. $E=0.01$ Ha/Bohr models phase
IV as rotations are thermally activated (see Fig. 9), while $E=0.06$ Ha/Bohr
represents phase III as these modes are always locked (see appendix G). Fig.
10 shows $\Delta F=F_{\text{rot}}-F_{\text{vib}}+\delta$ where $\delta$ is a
different shift for all the methods. We define the critical temperature
$T_{\text{c}}$ for each method as the temperature where $\Delta F=0$ meV.
Remarkably, the NLSCHA $T_{\text{c}}$ is very close to the exact one with an
error of $14\%$; on the contrary, in the SCHA, there is an overestimation of
$75\%$ due to the unphysical hybridization of the rotations with the linear
vibrations. Hence, we expect SCHA to be completely unreliable for
investigating high-pressure hydrogen at finite temperatures and should be
replaced by the more powerful NLSCHA. Remarkably, as in the SCHA, the entropy
of NLSCHA does not need any additional complex calculation, contrary to
MD/PIMD. Indeed, it is analytical and depends solely on the NLSCHA phonons
(Eq. (17)) [20].
## VII 3D case
In this section, we extend the nonlinear change of variables discussed in
section III.1 to the case of $N$ atoms in three dimensions. Here, we employ
the stereographic projection on the sphere, so we transform the Cartesian
coordinates $\bm{R}_{i}$ of each atom $i$ in the following way
$\displaystyle\bm{R}_{i}=\bm{\mathcal{R}}_{\text{C},i}+(u_{i,1}+r_{0,i})\begin{bmatrix}\cos(\phi(\bm{u}_{i}))\sin(\theta(\bm{u}_{i}))\\\
\sin(\phi(\bm{u}_{i}))\sin(\theta(\bm{u}_{i}))\\\
\cos(\theta(\bm{u}_{i}))\end{bmatrix}$ (18a)
$\displaystyle\phi(\bm{u}_{i})=\phi_{0,i}+\atan\left(\frac{u_{i,2}}{u_{i,3}}\right)$
(18b)
$\displaystyle\theta(\bm{u}_{i})=\theta_{0,i}+2\atan\left(\frac{\sqrt{u_{i,2}^{2}+u_{i,3}^{2}}}{2r_{0,i}}\right)$
(18c)
As in section III.1, the free parameters are the center of the curvature,
$\bm{\mathcal{R}}_{\text{C},i}$, and the curvature vector,
$\bm{\mathcal{R}}_{\text{T},i}$, defined as
$\bm{\mathcal{R}}_{\text{T},i}=r_{0,i}\begin{bmatrix}\cos(\phi_{0,i})\sin(\theta_{0,i})&\sin(\phi_{0,i})\sin(\theta_{0,i})&\cos(\theta_{0,i})\end{bmatrix}$
(19)
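A direct transcription of Eqs (18)-(19) for a single atom reads as follows (a sketch; the two-argument arctangent is our choice of branch for $\phi$):

```python
import numpy as np

def xi_3d(u, RC, RT):
    """3D stereographic change of variables, Eqs (18)-(19), for one atom.

    u:  (u1, u2, u3) auxiliary coordinates.
    RC: center of curvature; RT: curvature vector, with r0 = |RT|.
    """
    u1, u2, u3 = u
    r0 = np.linalg.norm(RT)                        # inverse curvature, Eq. (19)
    theta0 = np.arccos(RT[2] / r0)
    phi0 = np.arctan2(RT[1], RT[0])
    phi = phi0 + np.arctan2(u2, u3)                                 # Eq. (18b)
    theta = theta0 + 2.0 * np.arctan2(np.hypot(u2, u3), 2.0 * r0)   # Eq. (18c)
    n = np.array([np.cos(phi) * np.sin(theta),
                  np.sin(phi) * np.sin(theta),
                  np.cos(theta)])
    return np.asarray(RC) + (u1 + r0) * n          # Eq. (18a)
```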
Note that the nonlinear transformation does not mix the coordinates of
different atoms; consequently, the number of extra free parameters scales
linearly with $N$. $\bm{\mathcal{R}}_{\text{C},i}$ is a linear shift of the
positions. $\bm{\mathcal{R}}_{\text{T},i}$ defines the position of the
stereographic plane and its length gives the inverse curvature on each atom
$r_{0,i}=|\bm{\mathcal{R}}_{\text{T},i}|=\kappa_{i}^{-1}$. So $\kappa_{i}>0$
means that atom $i$ is part of a group that rotates (e.g. an organic molecule
inside the cage of a molecular perovskite) otherwise, if
$\kappa_{i}\rightarrow 0$, the atom only vibrates (e.g. the atoms of a cage in
a molecular perovskite). Note that we can not use $(\phi,\theta)$ in NLSCHA as
small oscillations are not well-defined. Indeed, $\theta\simeq 0$ implies large
$\phi$ fluctuations. On the contrary, we always define the harmonic
approximation in the stereographic coordinates (see Fig. 3). In Fig. 11, we
present the NLSCHA probability distribution for a 3D rotating diatomic
molecule in the center-of-mass reference frame $\bm{R}=\bm{R}_{1}-\bm{R}_{2}$.
Fig. 11 highlights the applicability of our approach to the study of extended
crystals and molecules.
Figure 11: 3D isosurfaces of the NLSCHA probability distribution for a
diatomic molecule’s center of mass $\bm{R}=\bm{R}_{1}-\bm{R}_{2}$. The plot
was made with mayavi package [26].
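To make the transformation concrete, the following minimal NumPy sketch evaluates Eqs. (18a)-(18c) for a single atom; `np.arctan2` replaces the plain arctangent of Eq. (18b) for quadrant safety, and all input values are illustrative rather than taken from our simulations.

```python
import numpy as np

def cartesian_from_u(u, R_C, R_T):
    """Map auxiliary displacements u = (u1, u2, u3) of one atom to Cartesian
    coordinates via the 3D stereographic change of variables, Eq. (18)."""
    r0 = np.linalg.norm(R_T)                      # inverse curvature, r0 = 1/kappa
    theta0 = np.arccos(R_T[2] / r0)               # polar angle of R_T, Eq. (19)
    phi0 = np.arctan2(R_T[1], R_T[0])             # azimuthal angle of R_T
    # Stereographic angles, Eqs. (18b)-(18c)
    phi = phi0 + np.arctan2(u[1], u[2])
    theta = theta0 + 2.0 * np.arctan(np.sqrt(u[1] ** 2 + u[2] ** 2) / (2.0 * r0))
    direction = np.array([np.cos(phi) * np.sin(theta),
                          np.sin(phi) * np.sin(theta),
                          np.cos(theta)])
    return R_C + (u[0] + r0) * direction          # Eq. (18a)

# Illustrative values: a unit curvature vector along z and a small radial shift
print(cartesian_from_u(np.array([0.1, 0.0, 0.0]),
                       R_C=np.zeros(3), R_T=np.array([0.0, 0.0, 1.0])))
```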
## VIII Conclusions
We have demonstrated that NLSCHA is a very promising method for a systematic and unbiased investigation of molecular crystals’ phase diagrams, as it takes into account linear vibrations with the same accuracy as SCHA. Moreover, it accurately describes roto-librations, unlike any other currently available approximation. The improved flexibility is due to the curvature $\kappa$, which minimizes the free energy, allowing the system to spontaneously activate its degrees of freedom.
It is important to emphasize that the additional computational overhead of
NLSCHA is negligible. The SCHA and NLSCHA minimize the free energy with
respect to an auxiliary force constant matrix, but in NLSCHA we also have the
mass tensor $\bm{\mathcal{M}}$. The dimension of these matrices increases quadratically with the number of atoms, making their optimization the most computationally intensive step in the minimization. However, while the SCHA optimizes a single centroid position $\bm{\mathcal{R}}$, representing the average atomic
position, the NLSCHA optimizes two centroids $\bm{\mathcal{R}}_{\text{T}}$ and
$\bm{\mathcal{R}}_{\text{C}}$ for each atom. The dual centroid structure
arises due to the nonlinear transformation and encapsulates information about
the particle’s average position and the curvature $\kappa_{i}$. Of particular
significance is that the nonlinear transformation acts solely on single-
particle coordinates; as such, the coordinates of different atoms remain
separate (section VII). Hence, the dimensions of $\bm{\mathcal{R}}_{\text{T}}$
and $\bm{\mathcal{R}}_{\text{C}}$ scale linearly with the number of atoms.
Consequently, the NLSCHA computational cost in the thermodynamic limit scales
quadratically with the number of atoms as in the SCHA. This is primarily due
to the dominating influence of the optimization process concerning the
auxiliary force constant matrix and the mass tensor.
In conclusion, the NLSCHA equations can be solved stochastically, as discussed in Refs. [20, 1]. This makes our method the most promising competitor of PIMD, as it incorporates both vibrations and roto-librational degrees of freedom in a quantum framework, finally resolving the SCHA weakness pointed out in Refs. [8, 12].
## Acknowledgements
A.S. and F.M. acknowledge support from European Union under project ERC-SYN
MORE-TEM (grant agreement No 951215). L.M. acknowledges funding from the
European Research Council, Marie Curie, project THERMOH.
## Appendix A Exact diagonalization
The Morse potential for H2 is fitted using Quantum Espresso [27, 28] combined with the BLYP exchange-correlation functional [29]. The cutoffs on plane waves and charge
density are $80$ Ry and $320$ Ry, the k-point grid is $(10,10,10)$. The
simulation box has size $20$ Bohr.
Figure 12: Fit of DFT-BLYP energy profile for H2 obtained with Quantum
Espresso.
The exact diagonalization of the Schrödinger equation is performed on a uniform square grid in $\bm{R}$ space of size $N=1200$ between $\pm 3$ Bohr. To get the eigenvalues and eigenfunctions of $\hat{H}^{\text{(BO)}}$, we use the Implicitly Restarted Lanczos Method [30] as implemented in the scipy function scipy.sparse.linalg.eigsh [31]. The Lanczos algorithm repeatedly applies the target Hamiltonian $\hat{H}^{\text{(BO)}}$ to a starting normalized wavefunction $\ket{\psi_{\text{init}}}$ to get the ground state and the first excited states. To avoid storing the Hamiltonian as an $N^{2}\times N^{2}$ sparse matrix, we used scipy.sparse.linalg.LinearOperator, which allows computing $\hat{H}^{\text{(BO)}}\ket{\psi}$ on the fly, where $\bra{\bm{R}}\ket{\psi}$ is defined on the 2D $N\times N$ grid.
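A minimal sketch of this matrix-free diagonalization follows; the reduced grid and the placeholder harmonic potential stand in for the actual $N=1200$ grid and the fitted Born-Oppenheimer surface, and the unit-mass prefactor is an assumption for illustration.

```python
import numpy as np
import scipy.sparse.linalg as sla

# Illustrative reduced grid; the paper uses N = 1200 between +/- 3 Bohr
N = 200
L = 3.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
hbar2_2m = 0.5                      # hbar^2/(2m) for unit mass (placeholder units)

X, Y = np.meshgrid(x, x, indexing="ij")
V = 0.5 * (X ** 2 + Y ** 2)         # placeholder potential instead of the fitted BOES

def apply_H(psi_flat):
    """Apply H = -hbar^2/(2m) Laplacian + V to a flattened 2D wavefunction."""
    psi = psi_flat.reshape(N, N)
    lap = np.zeros_like(psi)
    # 3-point finite-difference Laplacian with zero boundary conditions
    lap[1:-1, :] += (psi[2:, :] - 2 * psi[1:-1, :] + psi[:-2, :]) / dx ** 2
    lap[:, 1:-1] += (psi[:, 2:] - 2 * psi[:, 1:-1] + psi[:, :-2]) / dx ** 2
    return (-hbar2_2m * lap + V * psi).ravel()

# LinearOperator avoids storing H as an N^2 x N^2 sparse matrix
H = sla.LinearOperator((N * N, N * N), matvec=apply_H, dtype=np.float64)

# Ground state and first excited states via implicitly restarted Lanczos
vals, vecs = sla.eigsh(H, k=4, which="SA")
print(vals)
```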
## Appendix B SCHA simulations
The SCHA simulations were performed on a uniform square grid in $\bm{R}$ space
of size $1200$ between $\pm 3$ Bohr. The conjugate gradient (CG) minimization
of the SCHA free energy was performed with the scipy [31] function
scipy.optimize.minimize, setting gtol to $10^{-9}$ and maxiter to $400$. We report
the SCHA free parameters’ values both at zero (appendix B.1) and finite
temperature (appendix B.2).
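For reference, a minimal sketch of the corresponding scipy call; the objective below is a placeholder quadratic standing in for the actual grid-evaluated SCHA free energy, and the parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def free_energy(params):
    """Placeholder for the SCHA free energy F(R_x, omega_rot, omega_vib)."""
    target = np.array([1.4, 0.02, 0.02])   # illustrative equilibrium values
    return float(np.sum((params - target) ** 2))

x0 = np.array([1.0, 0.01, 0.01])           # initial guess for the free parameters
res = minimize(free_energy, x0, method="CG",
               options={"gtol": 1e-9, "maxiter": 400})
print(res.x, res.fun)
```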
### B.1 Zero temperature
In Fig. 13 we report the SCHA free parameters at equilibrium from $E=0$
Ha/Bohr to $E=0.06$ Ha/Bohr. $\mathcal{R}_{x}$ represents the $x$ component of
the SCHA centroid, $\bm{\mathcal{R}}$, while $\omega_{\text{rot}}$ and
$\omega_{\text{vib}}$ are the rotational and vibrational SCHA frequencies. We
remark that $\mathcal{R}_{y}$ is zero by symmetry.
Figure 13: The free parameters of SCHA at zero temperature from $E=0$ Ha/Bohr
to $E=0.06$ Ha/Bohr.
In Fig. 14 we report the SCHA free parameters at equilibrium at low values of
the crystal field from $E=0$ Ha/Bohr to $E=0.005$ Ha/Bohr.
Figure 14: The free parameters of SCHA at zero temperature from $E=0$ Ha/Bohr
to $E=0.005$ Ha/Bohr.
### B.2 Finite temperature
The free parameters $\mathcal{R}_{x}$, $\omega_{\text{rot}}$, and $\omega_{\text{vib}}$ at finite temperature for $E=0.01$ and $E=0.06$ Ha/Bohr are reported in Figs. 15 and 16.
Figure 15: The free parameters of SCHA at finite temperature for $E=0.01$
Ha/Bohr.
Figure 16: The free parameters of SCHA at finite temperature for $E=0.06$
Ha/Bohr.
## Appendix C The trial density matrix
According to Ref. [20], the nonlinear SCHA trial density matrix is given by Eq. (10), where $\overline{\rho}_{\text{nl}}(\bm{u},\bm{u}^{\prime})$ is generated by a harmonic Hamiltonian, Eq. (13),
$\displaystyle\overline{\rho}_{\text{nl}}(\bm{u},\bm{u}^{\prime})=\sqrt{\det\left(\frac{\bm{\Upsilon}_{\text{nl}}}{2\pi}\right)}\exp\left\\{-\frac{1}{4}\sum_{ab=1}^{2}u_{a}\Theta_{\text{nl},ab}u_{b}\right.$
(20)
$\displaystyle\left.-\frac{1}{4}\sum_{ab=1}^{2}u^{\prime}_{a}\Theta_{\text{nl},ab}u^{\prime}_{b}+\sum_{ab=1}^{2}u_{a}A_{\text{nl},ab}u^{\prime}_{b}\right\\}$
The NLSCHA tensors are related by
$\bm{\Upsilon}_{\text{nl}}=\bm{\Theta}_{\text{nl}}-2\bm{A}_{\text{nl}}$ and
are defined by
$\displaystyle\Upsilon_{\text{nl},ab}$
$\displaystyle=\sum_{ij=1}^{2}\sqrt{\mathcal{M}}_{ai}\overline{\Upsilon}_{\text{nl},ij}\sqrt{\mathcal{M}}^{T}_{jb}$
(21a) $\displaystyle\overline{\Upsilon}_{\text{nl},ij}$
$\displaystyle=\sum_{\mu=1}^{2}\frac{2\omega_{\text{nl},\mu}}{\hbar(1+2n_{\text{nl},\mu})}e_{\text{nl},\mu}^{i}e_{\text{nl},\mu}^{j}$
(21b)
and
$\displaystyle A_{\text{nl},ab}$
$\displaystyle=\sum_{ij=1}^{2}\sqrt{\mathcal{M}}_{ai}\overline{A}_{\text{nl},ij}\sqrt{\mathcal{M}}^{T}_{jb}$
(22a) $\displaystyle\overline{A}_{\text{nl},ij}$
$\displaystyle=\sum_{\mu=1}^{2}\frac{2\omega_{\text{nl},\mu}n_{\text{nl},\mu}(1+n_{\text{nl},\mu})}{\hbar(1+2n_{\text{nl},\mu})}e_{\text{nl},\mu}^{i}e_{\text{nl},\mu}^{j}$
(22b)
where $n_{\text{nl},\mu}$ is the Bose-Einstein occupation number
$n_{\text{nl},\mu}=\frac{1}{e^{\beta\hbar\omega_{\text{nl},\mu}}-1}$ (23)
The square root of the mass tensor $\bm{\mathcal{M}}$ satisfies
$\mathcal{M}_{ab}=\sum_{i=1}^{2}\sqrt{\mathcal{M}}_{ai}\sqrt{\mathcal{M}}^{T}_{ib}$
(24)
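As an illustration, the following sketch builds $\bm{\Upsilon}_{\text{nl}}$ and $\bm{A}_{\text{nl}}$ from Eqs. (21)-(24); it assumes the eigenvectors $e_{\text{nl},\mu}$ are supplied as the columns of a matrix, and all inputs are placeholders.

```python
import numpy as np

def nlscha_tensors(omega, evecs, sqrt_M, beta, hbar=1.0):
    """Upsilon_nl and A_nl from Eqs. (21)-(23); evecs[:, mu] = e_nl,mu."""
    omega = np.asarray(omega)
    n = 1.0 / np.expm1(beta * hbar * omega)   # Bose-Einstein occupations, Eq. (23)
    # Scale each eigenvector column by its mode coefficient, then contract
    ups_bar = (evecs * (2 * omega / (hbar * (1 + 2 * n)))) @ evecs.T            # Eq. (21b)
    a_bar = (evecs * (2 * omega * n * (1 + n) / (hbar * (1 + 2 * n)))) @ evecs.T  # Eq. (22b)
    # Dress with the square root of the mass tensor, Eqs. (21a) and (22a)
    return sqrt_M @ ups_bar @ sqrt_M.T, sqrt_M @ a_bar @ sqrt_M.T
```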
It seems that there is a contradiction within nonlinear SCHA because $\overline{\rho}_{\text{nl}}(\bm{u},\bm{u}^{\prime})$ is normalized as a Gaussian [20] while the radial variable $u_{1}$ is defined on $[0,+\infty)$ (see Eq. (6)). So we approximate the normalization condition by extending the range of $u_{1}$ to $(-\infty,+\infty)$
$\displaystyle\Tr\left[\hat{\rho}_{\text{nl}}\right]$
$\displaystyle=\int_{-\infty}^{+\infty}d\bm{R}\bra{\bm{R}}\hat{\rho}_{\text{nl}}\ket{\bm{R}}$
(25)
$\displaystyle\simeq\int_{-\infty}^{+\infty}d\bm{u}\overline{\rho}_{\text{nl}}(\bm{u},\bm{u})=1$
The above assumption is justified by the following argument. In a diatomic molecule, the linear vibration along the radial coordinate $u_{1}$ is a high-energy mode, so the bond length is always well defined; hence the corresponding component of the wave function is very localized, i.e. it decays rapidly to zero, otherwise the atoms of the molecule would collapse into each other. No approximation is needed for the normalization along $u_{2}$, as we used the stereographic projection that maps the angles $[0,2\pi]$ onto $(-\infty,+\infty)$ (see Eq. (6c)).
## Appendix D The nonlinear SCHA free energy
In appendix D.1 we compute all the necessary quantities to get the NLSCHA
kinetic energy according to Ref. [20]. In appendix D.2 we present the full
NLSCHA free energy following Ref. [20].
### D.1 Preliminary definitions
We define the Jacobian of $\bm{\xi}$ (Eq. (6))
$J^{i}_{j}=\partialderivative{R_{i}}{u_{j}}=\begin{bmatrix}\partialderivative{x}{u_{1}}&\partialderivative{y}{u_{1}}\\\
\partialderivative{x}{u_{2}}&\partialderivative{y}{u_{2}}\end{bmatrix}=\begin{bmatrix}\frac{x-x_{\text{C}}}{u_{1}+r_{0}}&\frac{y-y_{\text{C}}}{u_{1}+r_{0}}\\\
-\frac{y-y_{\text{C}}}{r_{0}f^{2}(u_{2})}&-\frac{x-x_{\text{C}}}{r_{0}f^{2}(u_{2})}\end{bmatrix}$
(26)
where $f(u_{2})$ is
$f(u_{2})=\sqrt{1+\left(\frac{u_{2}}{2r_{0}}\right)^{2}}$ (27)
The determinant of Eq. (26) is
$\mathcal{J}=\left|\det\left(\bm{J}\right)\right|=\frac{u_{1}+r_{0}}{r_{0}f^{2}(u_{2})}$
(28)
The inverse metric tensor $\bm{g}$ is
$g^{ab}=\sum_{i=1}^{2}\partialderivative{u_{a}}{R_{i}}\partialderivative{u_{b}}{R_{i}}=\begin{bmatrix}1&0\\\
0&(r_{0}f(u_{2}))^{2}h(\bm{u})\end{bmatrix}$ (29)
where $h(\bm{u})$ is
$h(\bm{u})=\left(\frac{f(u_{2})}{u_{1}+r_{0}}\right)^{2}$ (30)
In addition, we define the vector $\bm{d}$ as
$\bm{d}=\frac{1}{2}\partialderivative{\log(\mathcal{J})}{\bm{u}}=\begin{bmatrix}\frac{1}{2(u_{1}+r_{0})}\\\
-\frac{u_{2}}{(2r_{0}f(u_{2}))^{2}}\end{bmatrix}$ (31)
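These geometric quantities are simple to evaluate numerically; a minimal sketch of Eqs. (26)-(31), with illustrative inputs:

```python
import numpy as np

def geometry(u, r0):
    """Jacobian determinant, inverse metric, h, and d of Eqs. (26)-(31)."""
    u1, u2 = u
    f2 = 1.0 + (u2 / (2.0 * r0)) ** 2                 # f^2(u2), Eq. (27)
    jac = (u1 + r0) / (r0 * f2)                       # |det J|, Eq. (28)
    h = f2 / (u1 + r0) ** 2                           # h(u), Eq. (30)
    g = np.diag([1.0, r0 ** 2 * f2 * h])              # inverse metric, Eq. (29)
    d = np.array([0.5 / (u1 + r0),
                  -u2 / ((2.0 * r0) ** 2 * f2)])      # Eq. (31)
    return jac, g, h, d
```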
### D.2 Free energy calculation
According to Ref. [20] the nonlinear SCHA kinetic energy is
$\Tr\left[\frac{\hat{\bm{P}}^{2}}{2m}\hat{\rho}_{\text{nl}}\right]=\left\langle\mathcal{K}\right\rangle_{\text{nl}}=\int_{-\infty}^{+\infty}d\bm{u}\mathcal{K}\overline{\rho}_{\text{nl}}(\bm{u})$
(32)
$\mathcal{K}$ is the kinetic kernel [20]
$\mathcal{K}=\sum_{a=1}^{2}\left(\mathcal{K}^{(2)}_{a}\mathcal{L}^{(2)}_{a}+\mathcal{K}^{(1)}_{a}\mathcal{L}^{(1)}_{a}\right)+\mathcal{K}^{(0)}$
(33)
where $\bm{\mathcal{L}}^{(1)}$, $\bm{\mathcal{L}}^{(2)}$ are defined by
$\displaystyle\mathcal{L}^{(1)}_{a}$
$\displaystyle=-\partialderivative{\log\left(\overline{\rho}_{\text{nl}}(\bm{u})\right)}{\widetilde{u}_{a}}=\frac{1}{\sqrt{m}}\sum_{i=1}^{2}\Upsilon_{\text{nl},ai}u_{i}$
(34a) $\displaystyle\mathcal{L}^{(2)}_{a}$
$\displaystyle=-\partialderivative{\log\left(\overline{\rho}_{\text{nl}}(\bm{u})\right)}{\widetilde{u}_{a}}{\widetilde{u}_{a}}$
(34b)
$\displaystyle=\frac{1}{m}\left(\Upsilon_{\text{nl},aa}-\sum_{ij=1}^{2}\Upsilon_{\text{nl},ai}\Upsilon_{\text{nl},aj}u_{i}u_{j}\right)$
and $\mathcal{K}^{(0)}$, $\bm{\mathcal{K}}^{(1)}$, and
$\bm{\mathcal{K}}^{(2)}$ by
$\displaystyle\mathcal{K}^{(0)}$
$\displaystyle=\frac{\hbar^{2}}{2m}\left\\{\Tr\left[\bm{g}\cdot\left(\frac{\bm{\Upsilon}_{\text{nl}}}{4}+\bm{A}_{\text{nl}}\right)\right]+\bm{d}\cdot\bm{g}\cdot\bm{d}\right\\}$
$\displaystyle=\frac{\hbar^{2}}{2m}\Tr\left[\bm{g}\cdot\left(\frac{\bm{\Upsilon}_{\text{nl}}}{4}+\bm{A}_{\text{nl}}\right)\right]+\frac{\hbar^{2}h(\bm{u})}{8m}$
(35a) $\displaystyle\bm{\mathcal{K}}^{(1)}$
$\displaystyle=\frac{\hbar^{2}}{4}\bm{g}\cdot\bm{d}=\frac{\hbar^{2}}{4\sqrt{m}}\begin{bmatrix}\frac{1}{2(u_{1}+r_{0})}\\\
-\frac{1}{4}u_{2}h(\bm{u})\end{bmatrix}$ (35b)
$\displaystyle\bm{\mathcal{K}}^{(2)}$
$\displaystyle=-\frac{\hbar^{2}}{8}\text{diag}(\bm{g})=-\frac{\hbar^{2}}{8}\begin{bmatrix}1\\\
(r_{0}f(u_{2}))^{2}h(\bm{u})\end{bmatrix}$ (35c)
where $\bm{g}$ and $\bm{d}$ are defined in Eqs. (29) and (31). The potential energy
is
$\displaystyle\Tr\left[\hat{V}^{\text{(BO)}}\hat{\rho}_{\text{nl}}\right]$
$\displaystyle=\int_{-\infty}^{+\infty}d\bm{u}V^{\text{(BO)}}(\bm{\xi}(\bm{u}))\overline{\rho}_{\text{nl}}(\bm{u})$
(36) $\displaystyle=\left\langle V^{\text{(BO)}}\right\rangle_{\text{nl}}$
So the NLSCHA free energy is
$F_{\text{nl}}=\left\langle\mathcal{K}\right\rangle_{\text{nl}}+\left\langle
V^{\text{(BO)}}\right\rangle_{\text{nl}}-TS_{\text{nl}}$ (37)
where the entropy is harmonic, as discussed in Ref. [20],
$\displaystyle S_{\text{nl}}=-k_{\text{B}}\Tr\left[\hat{\rho}_{\text{nl}}\log(\hat{\rho}_{\text{nl}})\right]$
(38)
$\displaystyle=k_{\text{B}}\sum_{\mu=1}^{2}\left[(1+n_{\text{nl},\mu})\log(1+n_{\text{nl},\mu})-n_{\text{nl},\mu}\log(n_{\text{nl},\mu})\right]$
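With $k_{\text{B}}$ kept explicit, the entropy reduces to a one-line function of the NLSCHA frequencies; a minimal sketch, with the unit constants as placeholders:

```python
import numpy as np

def harmonic_entropy(omega, beta, hbar=1.0, kB=1.0):
    """Harmonic NLSCHA entropy of Eq. (38) from the frequencies omega."""
    n = 1.0 / np.expm1(beta * hbar * np.asarray(omega))   # occupations, Eq. (23)
    return kB * np.sum((1 + n) * np.log1p(n) - n * np.log(n))
```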
## Appendix E Gradient of the nonlinear SCHA free energy
We start with the gradient with respect to the center of the curvature
$\bm{\mathcal{R}}_{\text{C}}$ (Eq. (7a))
$\partialderivative{F_{\text{nl}}}{\bm{\mathcal{R}}_{\text{C}}}=\left\langle\partialderivative{V^{\text{(BO)}}}{\bm{R}}\right\rangle_{\text{nl}}$
(39)
The gradient with respect to curvature vector $\bm{\mathcal{R}}_{\text{T}}$
(Eq. (7b)) is the following
$\displaystyle\partialderivative{F_{\text{nl}}}{\bm{\mathcal{R}}_{\text{T}}}=\left\langle\partialderivative{\mathcal{K}}{\bm{\mathcal{R}}_{\text{T}}}\right\rangle_{\text{nl}}+\left\langle\partialderivative{V^{\text{(BO)}}}{\bm{\mathcal{R}}_{\text{T}}}\right\rangle_{\text{nl}}$
(40)
$\displaystyle=\frac{\bm{\mathcal{R}}_{\text{T}}}{|\bm{\mathcal{R}}_{\text{T}}|}\left\langle\partialderivative{\mathcal{K}}{r_{0}}\right\rangle_{\text{nl}}+\sum_{i=1}^{2}\partialderivative{R_{i}}{\bm{\mathcal{R}}_{\text{T}}}\left\langle\partialderivative{V^{\text{(BO)}}}{{R_{i}}}\right\rangle_{\text{nl}}$
noting that $\mathcal{K}(\bm{u})$ (Eq. (33)) depends only on the curvature
$|\bm{\mathcal{R}}_{\text{T}}|=r_{0}=\kappa^{-1}$. The derivative of the
kinetic energy kernel $\mathcal{K}$ (Eq. (33)) is
$\displaystyle\partialderivative{\mathcal{K}}{r_{0}}=\sum_{a=1}^{2}\left(\partialderivative{\mathcal{K}^{(2)}_{a}}{r_{0}}\mathcal{L}^{(2)}_{a}+\partialderivative{\mathcal{K}^{(1)}_{a}}{r_{0}}\mathcal{L}^{(1)}_{a}\right)+\partialderivative{\mathcal{K}^{(0)}}{r_{0}}$
(41)
where the $r_{0}$ derivatives of the coefficients (Eqs. (35)) are
$\displaystyle\partialderivative{\mathcal{K}^{(0)}}{r_{0}}=$
$\displaystyle\frac{\hbar^{2}}{2m}\Tr\left[\partialderivative{\bm{g}}{r_{0}}\cdot\left(\frac{\bm{\Upsilon}_{\text{nl}}}{4}+\bm{A}_{\text{nl}}\right)\right]$
$\displaystyle+\frac{\hbar^{2}}{8m}\partialderivative{h(\bm{u})}{r_{0}}$ (42a)
$\displaystyle\partialderivative{\bm{\mathcal{K}}^{(1)}}{r_{0}}=$
$\displaystyle\frac{\hbar^{2}}{4\sqrt{m}}\begin{bmatrix}\partialderivative{r_{0}}\frac{1}{2(u_{1}+r_{0})}\\\
-\frac{1}{4}u_{2}\partialderivative{r_{0}}h(\bm{u})\end{bmatrix}$ (42b)
$\displaystyle\partialderivative{\bm{\mathcal{K}}^{(2)}}{r_{0}}=$
$\displaystyle-\frac{\hbar^{2}}{8}\begin{bmatrix}0\\\
\partialderivative{[(r_{0}f(u_{2}))^{2}h(\bm{u})]}{r_{0}}\end{bmatrix}$ (42c)
$\displaystyle\partialderivative{\bm{g}}{r_{0}}=$
$\displaystyle\begin{bmatrix}0&0\\\
0&\partialderivative{[(r_{0}f(u_{2}))^{2}h(\bm{u})]}{r_{0}}\end{bmatrix}$
(42d) $\displaystyle\partialderivative{f^{2}(u_{2})}{r_{0}}=$
$\displaystyle-\frac{u_{2}^{2}}{2r_{0}^{3}}$ (42e)
$\displaystyle\partialderivative{h(\bm{u})}{r_{0}}=$
$\displaystyle\frac{1}{(u_{1}+r_{0})^{2}}\partialderivative{f^{2}(u_{2})}{r_{0}}-\frac{2h(\bm{u})}{r_{0}+u_{1}}$
(42f)
The gradient of the Cartesian coordinates $\bm{R}$ with respect to the
curvature vector $\bm{\mathcal{R}}_{\text{T}}$ is
$\partialderivative{R_{i}}{\bm{\mathcal{R}}_{\text{T}}}=\frac{\bm{\mathcal{R}}_{\text{T}}}{|\bm{\mathcal{R}}_{\text{T}}|}\partialderivative{R_{i}}{r_{0}}+\partialderivative{\phi_{0}}{\bm{\mathcal{R}}_{\text{T}}}\partialderivative{R_{i}}{\phi_{0}}$
(43)
where the derivatives of the Cartesian coordinates $\bm{R}=(x,y)$ are
$\begin{bmatrix}\partialderivative{x}{r_{0}}&\partialderivative{y}{r_{0}}\\\
\partialderivative{x}{\phi_{0}}&\partialderivative{y}{\phi_{0}}\end{bmatrix}=\begin{bmatrix}\frac{x-x_{\text{C}}}{u_{1}+r_{0}}+\frac{u_{2}(y-y_{\text{C}})}{r_{0}^{2}f^{2}(u_{2})}&\frac{y-y_{\text{C}}}{u_{1}+r_{0}}-\frac{u_{2}(x-x_{\text{C}})}{r_{0}^{2}f^{2}(u_{2})}\\\
-(y-y_{\text{C}})&+(x-x_{\text{C}})\end{bmatrix}$ (44)
and the derivative of $\phi_{0}$ is computed from the definition of
$\bm{\mathcal{R}}_{\text{T}}$ (Eq. (7b))
$\partialderivative{\phi_{0}}{\bm{\mathcal{R}}_{\text{T}}}=-\frac{1}{|\bm{\mathcal{R}}_{\text{T}}|^{2}}\begin{bmatrix}\mathcal{R}_{\text{T},2}&-\mathcal{R}_{\text{T},1}\end{bmatrix}$
(45)
To compute the gradient with respect to $\bm{\Phi}_{\text{nl}}$ we use the
following formula introduced by Ref. [32]
$\displaystyle\partialderivative{\left\langle
O\right\rangle_{\text{nl}}}{\bm{\Phi}_{\text{nl}}}=\left\langle\partialderivative{O}{\bm{\Phi}_{\text{nl}}}\right\rangle_{\text{nl}}$
(46)
$\displaystyle+\frac{1}{2}\sum_{ijk=1}^{2}\partialderivative{\overset{-1}{\Upsilon}_{\text{nl},ij}}{\bm{\Phi}_{\text{nl}}}\Upsilon_{\text{nl},ik}\left\langle
u_{k}\partialderivative{O}{u_{j}}\right\rangle_{\text{nl}}$
The first term of Eq. (46) takes into account the explicit dependence of $O$
on $\bm{\Phi}_{\text{nl}}$ while the second considers the change in the
probability distribution $\overline{\rho}_{\text{nl}}(\bm{u})$. With Eq. (46),
the derivative of $F_{\text{nl}}$ with respect to the auxiliary force constant
matrix $\bm{\Phi}_{\text{nl}}$ is
$\displaystyle\partialderivative{F_{\text{nl}}}{\bm{\Phi}_{\text{nl}}}=\left\langle\partialderivative{\mathcal{K}(\bm{u})}{\bm{\Phi}_{\text{nl}}}\right\rangle_{\text{nl}}$
(47)
$\displaystyle+\frac{1}{2}\sum_{ijk=1}^{2}\partialderivative{\overset{-1}{\Upsilon}_{\text{nl},ij}}{\bm{\Phi}_{\text{nl}}}\Upsilon_{\text{nl},ik}\left\langle
u_{k}\left(\partialderivative{\mathcal{K}}{u_{j}}+\partialderivative{V^{\text{(BO)}}}{u_{j}}\right)\right\rangle_{\text{nl}}$
$\displaystyle-T\partialderivative{S_{\text{nl}}}{\bm{\Phi}_{\text{nl}}}$
where the derivative of the entropy $S_{\text{nl}}$ with respect to $\bm{\Phi}_{\text{nl}}$ is computed in Ref. [20].
The first term is the derivative of $\mathcal{K}(\bm{u})$ (Eq. (33)) with
respect to $\bm{\Phi}_{\text{nl}}$
$\partialderivative{\mathcal{K}}{\bm{\Phi}_{\text{nl}}}=\sum_{a=1}^{2}\left(\mathcal{K}^{(2)}_{a}\partialderivative{\mathcal{L}^{(2)}_{a}}{\bm{\Phi}_{\text{nl}}}+\mathcal{K}^{(1)}_{a}\partialderivative{\mathcal{L}^{(1)}_{a}}{\bm{\Phi}_{\text{nl}}}\right)+\partialderivative{\mathcal{K}^{(0)}}{\bm{\Phi}_{\text{nl}}}$
(48)
The derivatives of $\bm{\mathcal{L}}^{(1)}$, $\bm{\mathcal{L}}^{(2)}$ (Eqs
(34)) are
$\displaystyle\partialderivative{\mathcal{L}^{(1)}_{a}}{\bm{\Phi}_{\text{nl}}}$
$\displaystyle=\frac{1}{\sqrt{m}}\left(\sum_{i=1}^{2}\partialderivative{\Upsilon_{\text{nl},ai}}{\bm{\Phi}_{\text{nl}}}u_{i}\right)$
(49a)
$\displaystyle\partialderivative{\mathcal{L}^{(2)}_{a}}{\bm{\Phi}_{\text{nl}}}$
$\displaystyle=\frac{1}{m}\left(\partialderivative{\Upsilon_{\text{nl},aa}}{\bm{\Phi}_{\text{nl}}}-2\sum_{ij=1}^{2}\partialderivative{\Upsilon_{\text{nl},ai}}{\bm{\Phi}_{\text{nl}}}\Upsilon_{\text{nl},aj}u_{i}u_{j}\right)$
(49b)
The derivative of $\mathcal{K}^{(0)}$ (Eq. (35)) is
$\partialderivative{\mathcal{K}^{(0)}}{\bm{\Phi}_{\text{nl}}}=\frac{\hbar^{2}}{2m}\Tr\left[\bm{g}\cdot\left(\frac{1}{4}\partialderivative{\bm{\Upsilon}_{\text{nl}}}{\bm{\Phi}_{\text{nl}}}+\partialderivative{\bm{A}_{\text{nl}}}{\bm{\Phi}_{\text{nl}}}\right)\right]$
(50)
The derivatives of $\bm{\Upsilon}_{\text{nl}}$/$\bm{A}_{\text{nl}}$ can be
computed with the expressions of Ref. [20].
The derivative of the kinetic energy kernel $\mathcal{K}(\bm{u})$ with respect
to the auxiliary displacements $\bm{u}$ is
$\displaystyle\partialderivative{\mathcal{K}}{\bm{u}}=$
$\displaystyle\sum_{a=1}^{2}\left(\partialderivative{\mathcal{K}^{(2)}_{a}}{\bm{u}}\mathcal{L}^{(2)}_{a}+\mathcal{K}^{(2)}_{a}\partialderivative{\mathcal{L}^{(2)}_{a}}{\bm{u}}\right)$
(51)
$\displaystyle+\sum_{a=1}^{2}\left(\partialderivative{\mathcal{K}^{(1)}_{a}}{\bm{u}}\mathcal{L}^{(1)}_{a}+\mathcal{K}^{(1)}_{a}\partialderivative{\mathcal{L}^{(1)}_{a}}{\bm{u}}\right)+\partialderivative{\mathcal{K}^{(0)}}{\bm{u}}$
The derivatives with respect to $\bm{u}$ of $\mathcal{K}^{(0)}$,
$\bm{\mathcal{K}}^{(1)}$, and $\bm{\mathcal{K}}^{(2)}$ (Eqs (35)) are
$\displaystyle\partialderivative{\mathcal{K}^{(0)}}{\bm{u}}=$
$\displaystyle\frac{\hbar^{2}}{2m}\left\\{\Tr\left[\partialderivative{\bm{g}}{\bm{u}}\cdot\left(\frac{\bm{\Upsilon}_{\text{nl}}}{4}+\bm{A}_{\text{nl}}\right)\right]\right.$
$\displaystyle\left.+\frac{1}{4}\partialderivative{h(\bm{u})}{\bm{u}}\right\\}$
(52a) $\displaystyle\partialderivative{\bm{\mathcal{K}}^{(1)}}{\bm{u}}=$
$\displaystyle\frac{\hbar^{2}}{4\sqrt{m}}\begin{bmatrix}\partialderivative{\bm{u}}\frac{1}{2(u_{1}+r_{0})}\\\
-\frac{1}{4}u_{2}\partialderivative{\bm{u}}h(\bm{u})\end{bmatrix}$ (52b)
$\displaystyle\partialderivative{\bm{\mathcal{K}}^{(2)}}{\bm{u}}=$
$\displaystyle-\frac{\hbar^{2}}{8}\begin{bmatrix}0\\\
r_{0}^{2}\partialderivative{(f^{2}(u_{2})h(\bm{u}))}{\bm{u}}\end{bmatrix}$
(52c) $\displaystyle\partialderivative{\bm{g}}{\bm{u}}=$
$\displaystyle\begin{bmatrix}0&0\\\
0&r_{0}^{2}\partialderivative{[f^{2}(u_{2})h(\bm{u})]}{\bm{u}}\end{bmatrix}$
(52d) $\displaystyle\partialderivative{f^{2}(u_{2})}{\bm{u}}=$
$\displaystyle\begin{bmatrix}0\\\ \frac{u_{2}}{2r_{0}^{2}}\end{bmatrix}$ (52e)
$\displaystyle\partialderivative{h(\bm{u})}{\bm{u}}=$
$\displaystyle\begin{bmatrix}-\frac{2}{(r_{0}+u_{1})}h(\bm{u})\\\
\frac{1}{(r_{0}+u_{1})^{2}}\partialderivative{f^{2}(u_{2})}{u_{2}}\end{bmatrix}$
(52f)
and the derivatives of $\bm{\mathcal{L}}^{(1)}$, $\bm{\mathcal{L}}^{(2)}$ (Eqs
(34)) are
$\displaystyle\partialderivative{\mathcal{L}^{(1)}_{a}}{u_{b}}=$
$\displaystyle\frac{1}{\sqrt{m}}\Upsilon_{\text{nl},ab}$ (53a)
$\displaystyle\partialderivative{\mathcal{L}^{(2)}_{a}}{u_{b}}=$
$\displaystyle\frac{1}{m}\left(-2\Upsilon_{\text{nl},ab}\sum_{i=1}^{2}\Upsilon_{\text{nl},ai}u_{i}\right)$
(53b)
The derivative of the BOES with respect to the auxiliary displacements
$\bm{u}$ is
$\partialderivative{V^{\text{(BO)}}(\bm{R})}{u_{j}}=-\sum_{i=1}^{2}J^{i}_{j}f^{\text{(BO)}}_{i}$
(54)
where $\bm{J}$ is defined in Eq. (26).
We minimize with respect to $\sqrt{\bm{\mathcal{M}}}$ so that we can enforce
the positive definiteness of
$\bm{\mathcal{M}}=\sqrt{\bm{\mathcal{M}}}^{T}\cdot\sqrt{\bm{\mathcal{M}}}$
$\displaystyle\partialderivative{F_{\text{nl}}}{\sqrt{\bm{\mathcal{M}}}}=\left\langle\partialderivative{\mathcal{K}}{\sqrt{\bm{\mathcal{M}}}}\right\rangle_{\text{nl}}$
(55)
$\displaystyle+\frac{1}{2}\sum_{ijk=1}^{2}\partialderivative{\overset{-1}{\Upsilon}_{\text{nl},ij}}{\sqrt{\bm{\mathcal{M}}}}\Upsilon_{\text{nl},ik}\left\langle
u_{k}\left(\partialderivative{\mathcal{K}}{u_{j}}+\partialderivative{V^{\text{(BO)}}}{u_{j}}\right)\right\rangle_{\text{nl}}$
$\displaystyle-T\partialderivative{S_{\text{nl}}}{\sqrt{\bm{\mathcal{M}}}}$
This equation is a straightforward generalization of Eq. (47). The derivative of $\overset{-1}{\bm{\Upsilon}}_{\text{nl}}$ is computed in Ref. [20]. The
derivative of the kinetic kernel is
$\partialderivative{\mathcal{K}}{\sqrt{\bm{\mathcal{M}}}}=\sum_{a=1}^{2}\left(\mathcal{K}^{(2)}_{a}\partialderivative{\mathcal{L}^{(2)}_{a}}{\sqrt{\bm{\mathcal{M}}}}+\mathcal{K}^{(1)}_{a}\partialderivative{\mathcal{L}^{(1)}_{a}}{\sqrt{\bm{\mathcal{M}}}}\right)+\partialderivative{\mathcal{K}^{(0)}}{\sqrt{\bm{\mathcal{M}}}}$
(56)
Then, to compute
$\displaystyle\partialderivative{\mathcal{L}^{(2)}_{a}}{\sqrt{\bm{\mathcal{M}}}}$
$\displaystyle=\partialderivative{\Upsilon_{\text{nl},aa}}{\sqrt{\bm{\mathcal{M}}}}-2\sum_{ij=1}^{2}\partialderivative{\Upsilon_{\text{nl},ai}}{\sqrt{\bm{\mathcal{M}}}}\Upsilon_{\text{nl},aj}u_{i}u_{j}$
(57a)
$\displaystyle\partialderivative{\mathcal{L}^{(1)}_{a}}{\sqrt{\bm{\mathcal{M}}}}$
$\displaystyle=\sum_{i=1}^{2}\partialderivative{\Upsilon_{\text{nl},ai}}{\sqrt{\bm{\mathcal{M}}}}u_{i}$
(57b)
and
$\partialderivative{\mathcal{K}^{(0)}}{\sqrt{\bm{\mathcal{M}}}}=\frac{\hbar^{2}}{2m}\Tr\left[\bm{g}\cdot\left(\frac{1}{4}\partialderivative{\bm{\Upsilon}_{\text{nl}}}{\sqrt{\bm{\mathcal{M}}}}+\partialderivative{\bm{A}_{\text{nl}}}{\sqrt{\bm{\mathcal{M}}}}\right)\right]$
(58)
we employ the formulas derived in Ref. [20]. The derivative of the entropy is
$\displaystyle\partialderivative{S_{\text{nl}}}{\sqrt{\mathcal{M}}_{ab}}=\sum_{ij=1}^{2}\partialderivative{S_{\text{nl}}}{D_{\text{nl}}{}_{,ij}}\partialderivative{D_{\text{nl}}{}_{,ij}}{\sqrt{\mathcal{M}}_{ab}}$
(59)
where $\partialderivative{\bm{D}_{\text{nl}}}{\sqrt{\bm{\mathcal{M}}}}$ and
$\partialderivative{S_{\text{nl}}}{\bm{D}_{\text{nl}}}$ are given by Ref.
[20].
## Appendix F Nonlinear SCHA simulations
The NLSCHA simulations were performed on a uniform square grid in $\bm{u}$
space of size $1000$ between $\pm 3$ Bohr. The conjugate gradient (CG)
minimization of the NLSCHA free energy was performed with the scipy [31]
function scipy.optimize.minimize, setting gtol to $10^{-8}$ and maxiter to $4000$.
We report the free parameters of NLSCHA both at zero (appendix F.1) and finite
temperature (appendix F.2).
### F.1 Parameters at zero temperature
In Figs. 17 and 18 we report the nonlinear SCHA free parameters from $E=0$ to $E=0.06$ Ha/Bohr and from $E=0$ to $E=0.005$ Ha/Bohr. We show the $x$-component of
$\bm{\mathcal{R}}_{\text{C}}$, denoted by $x_{\text{C}}$, the radius of the
curvature, $r_{0}$, the rotational and vibrational frequencies,
$\omega_{\text{rot}}$ and $\omega_{\text{vib}}$, with the corresponding
effective mass eigenvalues,
$\mathcal{M}_{\text{rot}},\mathcal{M}_{\text{vib}}$, divided by the physical
value $\mu_{\ch{H2}}=m_{\ch{H}}/2$. The symmetry of the potential (see Eq. (1)) fixes the values $y_{\text{C}}=0$ Bohr and $\phi_{0}=\pi$.
Note that at $T=0$ K we find that $\bm{\mathcal{M}}$ coincides with the
physical one.
Figure 17: The free parameters of NLSCHA
($X_{C},r_{0},\omega_{\text{rot}},\omega_{\text{vib}},\mathcal{M}_{\text{rot}}/\mu_{\ch{H2}},\mathcal{M}_{\text{vib}}/\mu_{\ch{H2}}$)
at $T=0$ K from $E=0$ Ha/Bohr to $E=0.06$ Ha/Bohr.
Figure 18: The free parameters of NLSCHA
($X_{C},r_{0},\omega_{\text{rot}},\omega_{\text{vib}},\mathcal{M}_{\text{rot}}/\mu_{\ch{H2}},\mathcal{M}_{\text{vib}}/\mu_{\ch{H2}}$)
at $T=0$ K and low values of the crystal field $E$ from $E=0$ Ha/Bohr to
$E=0.005$ Ha/Bohr.
### F.2 Parameters at finite temperature
Figs. 19 and 20 report the NLSCHA free parameters at finite temperature from $T=0$
K to $T=1000$ K for $E=0.01$ Ha/Bohr and $E=0.06$ Ha/Bohr. For $E=0.01$
Ha/Bohr, the eigenvalue of $\bm{\mathcal{M}}$ corresponding to the rotational
mode, $\mathcal{M}_{\text{rot}}$, is lower than the physical one so that
rotational fluctuations are enhanced. Note that for the vibron mode, the
eigenvalue of $\bm{\mathcal{M}}$ is
$\mathcal{M}_{\text{vib}}\simeq\mu_{\ch{H2}}$. For $E=0.06$ Ha/Bohr,
$\mathcal{M}_{\text{vib}}<\mu_{\ch{H2}}$ at $200$ K compensates for the
increase of $\omega_{\text{rot}}$.
Figure 19: The free parameters of NLSCHA
($X_{C},r_{0},\omega_{\text{rot}},\omega_{\text{vib}},\mathcal{M}_{\text{rot}}/\mu_{\ch{H2}},\mathcal{M}_{\text{vib}}/\mu_{\ch{H2}}$)
from $T=0$ K to $T=1000$ K with $E=0.01$ Ha/Bohr.
Figure 20: The free parameters of NLSCHA
($X_{C},r_{0},\omega_{\text{rot}},\omega_{\text{vib}},\mathcal{M}_{\text{rot}}/\mu_{\ch{H2}},\mathcal{M}_{\text{vib}}/\mu_{\ch{H2}}$)
from $T=0$ K to $T=1000$ K with $E=0.06$ Ha/Bohr.
In Fig. 21 we compare the NLSCHA free energies ($E=0.01$ Ha/Bohr) obtained
optimizing $\bm{\mathcal{M}}$ and keeping it fixed
$\bm{\mathcal{M}}=\mu_{\ch{H2}}\bm{1}$. Fig. 21 b-l report the corresponding
probability distributions compared with the exact result. Note that at
$T=1000$ K optimizing $\bm{\mathcal{M}}$ gives an error of $8.5$ meV while
keeping it fixed yields $9.3$ meV.
Figure 21: Fig. a shows the exact, and NLSCHA free energies as a function of
temperature for $E=0.01$ Ha/Bohr. We report the NLSCHA solutions with
$\bm{\mathcal{M}}=\mu_{\text{\ch{H2}}}\bm{1}$ and
$\bm{\mathcal{M}}\neq\mu_{\text{\ch{H2}}}\bm{1}$. Figs b-l report the exact
(Figs b-e-h), and NLSCHA (Figs c-f-i with
$\bm{\mathcal{M}}\neq\mu_{\text{\ch{H2}}}\bm{1}$ and Figs d-g-l with
$\bm{\mathcal{M}}=\mu_{\text{\ch{H2}}}\bm{1}$) probability distribution
$\rho(R)$ (Bohr-2) at $0-500-1000$ K.
## Appendix G Phase transition model
In Figs. 22 and 23 we report the exact, harmonic, SCHA and NLSCHA free energies for $E=0.01$ and $E=0.06$ Ha/Bohr, from $T=0$ K up to $1000$ K and $600$ K, respectively, along with the probability distributions. Note that when rotations are locked ($E=0.06$ Ha/Bohr) the NLSCHA solution coincides with the SCHA one.
Figure 22: Fig. a shows the exact, harmonic, SCHA and NLSCHA free energies as
a function of temperature for $E=0.01$ Ha/Bohr. Figs b-o report the exact
(Figs b-f-l), harmonic (Figs d-h-n), SCHA (Figs c-g-m) and NLSCHA (Figs e-i-o)
probability distribution $\rho(\bm{R})$ (Bohr-2) at $0-500-1000$ K.
Figure 23: Fig. a shows the exact, harmonic, SCHA and NLSCHA free energies as
a function of temperature for $E=0.06$ Ha/Bohr. Figs b-o report the exact
(Figs b-f-l), harmonic (Figs d-h-n), SCHA (Figs c-g-m) and NLSCHA (Figs e-i-o)
probability distribution $\rho(\bm{R})$ (Bohr-2) at $0-300-600$ K.
## References
* Monacelli _et al._ [2021] L. Monacelli, R. Bianco, M. Cherubini, M. Calandra, I. Errea, and F. Mauri, The stochastic self-consistent harmonic approximation: calculating vibrational properties of materials with full quantum and anharmonic effects, Journal of Physics: Condensed Matter 33, 363001 (2021).
* Oba _et al._ [2019] Y. Oba, T. Tadano, R. Akashi, and S. Tsuneyuki, First-principles study of phonon anharmonicity and negative thermal expansion in ScF3, Phys. Rev. Materials 3, 033601 (2019).
* Hellman _et al._ [2011] O. Hellman, I. A. Abrikosov, and S. I. Simak, Lattice dynamics of anharmonic solids from first principles, Phys. Rev. B 84, 180301 (2011).
* Souvatzis _et al._ [2008] P. Souvatzis, O. Eriksson, M. I. Katsnelson, and S. P. Rudin, Entropy driven stabilization of energetically unstable crystal structures explained from first principles theory, Phys. Rev. Lett. 100, 095901 (2008).
* Monserrat _et al._ [2013] B. Monserrat, N. D. Drummond, and R. J. Needs, Anharmonic vibrational properties in periodic systems: energy, electron-phonon coupling, and stress, Phys. Rev. B 87, 144302 (2013).
* Gambhir _et al._ [1999] M. Gambhir, M. T. Dove, and V. Heine, Rigid unit modes and dynamic disorder: SiO2 cristobalite and quartz, Physics and Chemistry of Minerals 26, 484 (1999).
* Giddy _et al._ [1993] A. P. Giddy, M. T. Dove, G. S. Pawley, and V. Heine, The determination of rigid-unit modes as potential soft modes for displacive phase transitions in framework crystal structures, Acta Crystallographica Section A 49, 697 (1993), https://onlinelibrary.wiley.com/doi/pdf/10.1107/S0108767393002545 .
* Kapil _et al._ [2019] V. Kapil, E. Engel, M. Rossi, and M. Ceriotti, Assessment of approximate methods for anharmonic free energies, Journal of Chemical Theory and Computation 15, 5845 (2019), pMID: 31532997, https://doi.org/10.1021/acs.jctc.9b00596 .
* Rossi _et al._ [2016] M. Rossi, P. Gasparotto, and M. Ceriotti, Anharmonic and quantum fluctuations in molecular crystals: A first-principles study of the stability of paracetamol, Phys. Rev. Lett. 117, 115702 (2016).
* Kitamura _et al._ [2000] H. Kitamura, S. Tsuneyuki, T. Ogitsu, and T. Miyake, Quantum distribution of protons in solid molecular hydrogen at megabar pressures, Nature 404, 259 (2000).
* Kohanoff _et al._ [1997] J. Kohanoff, S. Scandolo, G. L. Chiarotti, and E. Tosatti, Solid molecular hydrogen: The broken symmetry phase, Phys. Rev. Lett. 78, 2783 (1997).
* Morresi _et al._ [2022] T. Morresi, R. Vuilleumier, and M. Casula, Hydrogen phase-iv characterization by full account of quantum anharmonicity, Phys. Rev. B 106, 054109 (2022).
* Georgescu and Mandelshtam [2012] I. Georgescu and V. A. Mandelshtam, Self-consistent phonons revisited. i. the role of thermal versus quantum fluctuations on structural transitions in large lennard-jones clusters, The Journal of Chemical Physics 137, 144106 (2012), https://doi.org/10.1063/1.4754819 .
* Brown _et al._ [2013a] S. E. Brown, I. Georgescu, and V. A. Mandelshtam, Self-consistent phonons revisited. ii. a general and efficient method for computing free energies and vibrational spectra of molecules and clusters, The Journal of Chemical Physics 138, 044317 (2013a), https://doi.org/10.1063/1.4788977 .
* Tadano and Tsuneyuki [2018] T. Tadano and S. Tsuneyuki, First-principles lattice dynamics method for strongly anharmonic crystals, Journal of the Physical Society of Japan 87, 041015 (2018), https://doi.org/10.7566/JPSJ.87.041015 .
* Brown _et al._ [2013b] S. E. Brown, I. Georgescu, and V. A. Mandelshtam, Self-consistent phonons revisited. II. A general and efficient method for computing free energies and vibrational spectra of molecules and clusters, The Journal of Chemical Physics 138, 044317 (2013b), https://pubs.aip.org/aip/jcp/article-pdf/doi/10.1063/1.4788977/15461101/044317_1_online.pdf .
* Monacelli and Mauri [2021] L. Monacelli and F. Mauri, Time-dependent self-consistent harmonic approximation: Anharmonic nuclear quantum dynamics and time correlation functions, Phys. Rev. B 103, 104305 (2021).
* Siciliano _et al._ [2023] A. Siciliano, L. Monacelli, G. Caldarelli, and F. Mauri, Wigner gaussian dynamics: Simulating the anharmonic and quantum ionic motion, Phys. Rev. B 107, 174307 (2023).
* Lihm and Park [2021] J.-M. Lihm and C.-H. Park, Gaussian time-dependent variational principle for the finite-temperature anharmonic lattice dynamics, Phys. Rev. Research 3, L032017 (2021).
* Siciliano _et al._ [2024] A. Siciliano, L. Monacelli, and F. Mauri, Beyond gaussian fluctuations of quantum anharmonic nuclei (2024), arXiv:2407.03802 [cond-mat.mtrl-sci] .
* Hooton [1955] D. Hooton, LI. A new treatment of anharmonicity in lattice thermodynamics: I, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 46, 422 (1955), https://doi.org/10.1080/14786440408520575 .
* Met Office [2015] Met Office, _Cartopy: a cartographic python library with a Matplotlib interface_, Exeter, Devon (2010 - 2015).
* Monacelli _et al._ [2023] L. Monacelli, M. Casula, K. Nakano, S. Sorella, and F. Mauri, Quantum phase diagram of high-pressure hydrogen, Nature Physics 10.1038/s41567-023-01960-5 (2023).
* Howie _et al._ [2012a] R. T. Howie, C. L. Guillaume, T. Scheler, A. F. Goncharov, and E. Gregoryanz, Mixed molecular and atomic phase of dense hydrogen, Phys. Rev. Lett. 108, 125501 (2012a).
* Howie _et al._ [2012b] R. T. Howie, T. Scheler, C. L. Guillaume, and E. Gregoryanz, Proton tunneling in phase iv of hydrogen and deuterium, Phys. Rev. B 86, 214104 (2012b).
* Ramachandran and Varoquaux [2011] P. Ramachandran and G. Varoquaux, Mayavi: 3D Visualization of Scientific Data, Computing in Science & Engineering 13, 40 (2011).
* Giannozzi _et al._ [2009] P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. D. Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, and R. M. Wentzcovitch, QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials, Journal of Physics: Condensed Matter 21, 395502 (2009).
* Giannozzi _et al._ [2017] P. Giannozzi, O. Andreussi, T. Brumme, O. Bunau, M. B. Nardelli, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, M. Cococcioni, N. Colonna, I. Carnimeo, A. D. Corso, S. de Gironcoli, P. Delugas, R. A. DiStasio, A. Ferretti, A. Floris, G. Fratesi, G. Fugallo, R. Gebauer, U. Gerstmann, F. Giustino, T. Gorni, J. Jia, M. Kawamura, H.-Y. Ko, A. Kokalj, E. Küçükbenli, M. Lazzeri, M. Marsili, N. Marzari, F. Mauri, N. L. Nguyen, H.-V. Nguyen, A. O. de-la Roza, L. Paulatto, S. Poncé, D. Rocca, R. Sabatini, B. Santra, M. Schlipf, A. P. Seitsonen, A. Smogunov, I. Timrov, T. Thonhauser, P. Umari, N. Vast, X. Wu, and S. Baroni, Advanced capabilities for materials modelling with quantum ESPRESSO, Journal of Physics: Condensed Matter 29, 465901 (2017).
* Miehlich _et al._ [1989] B. Miehlich, A. Savin, H. Stoll, and H. Preuss, Results obtained with the correlation energy density functionals of becke and lee, yang and parr, Chemical Physics Letters 157, 200 (1989).
* Lehoucq _et al._ [1998] R. B. Lehoucq, D. C. Sorensen, and C. Yang, _ARPACK Users’ Guide_ (Society for Industrial and Applied Mathematics, 1998) https://epubs.siam.org/doi/pdf/10.1137/1.9780898719628 .
* Virtanen _et al._ [2020] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, İ. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy 1.0 Contributors, SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python, Nature Methods 17, 261 (2020).
* Bianco _et al._ [2017] R. Bianco, I. Errea, L. Paulatto, M. Calandra, and F. Mauri, Second-order structural phase transitions, free energy curvature, and temperature-dependent anharmonic phonons in the self-consistent harmonic approximation: Theory and stochastic implementation, Phys. Rev. B 96, 014111 (2017).
# Pfeed: Generating near real-time personalized feeds using precomputed
embedding similarities
Binyam Gebre (Bol, The Netherlands,<EMAIL_ADDRESS>), Karoliina Ranta (Booking.com, The Netherlands,<EMAIL_ADDRESS>), Stef van den Elzen (Eindhoven University of Technology, The Netherlands,<EMAIL_ADDRESS>), Ernst Kuiper (Bol, The Netherlands,<EMAIL_ADDRESS>), Thijs Baars (Last Mile Solutions, The Netherlands,<EMAIL_ADDRESS>), and Tom Heskes (Radboud University Nijmegen, The Netherlands,<EMAIL_ADDRESS>)
(2022)
###### Abstract.
In personalized recommender systems, embeddings are often used to encode
customer actions and items, and retrieval is then performed in the embedding
space using approximate nearest neighbor search. However, this approach can
lead to two challenges: 1) user embeddings can restrict the diversity of
interests captured and 2) the need to keep them up-to-date requires an
expensive, real-time infrastructure. In this paper, we propose a method that
overcomes these challenges in a practical, industrial setting. The method
dynamically updates customer profiles and composes a feed every two minutes,
employing precomputed embeddings and their respective similarities. We tested
and deployed this method to personalise promotional items at Bol, one of the
largest e-commerce platforms of the Netherlands and Belgium. The method
enhanced customer engagement and experience, leading to a significant 4.9%
uplift in conversions.
deep learning, joint embeddings, dual encoders, contrastive learning,
personalization
## 1\. Introduction
Bol, like many other e-commerce platforms, faces the challenge of providing customers with an easy and efficient way to navigate its vast catalog and find products that match their interests. The traditional approach
of relying on customer controlled text-based search engines or browsing
through categories is often limited and cumbersome, particularly during the
customer’s discovery phase. To overcome these limitations and enhance
customers’ overall discovery experience, Bol has launched personalized feeds
called Top deals for you, Top picks for you, and New for you.
These personalized feed systems utilize a combination of the customer’s
historical and recent behavior to display the best recommendations on the
customer’s home page across both app and desktop platforms. In this paper, we
present the methodology behind these feeds. We begin by presenting the
challenges inherent to creating personalized feed systems. Subsequently, we
delve into the prevailing industry approach (related work) that tackles these
challenges, concluding with the presentation of our proposed solution and the
evaluation outcomes.
### 1.1. Four Challenges in Personalized Feed Systems
Personalized feed systems can be viewed as search engines, where customers are
the search queries and items in the catalog are the search results. In this
view, there are four challenges that need to be overcome to provide customers
with a personalized set of items that align with their interests and
preferences: customer, item, candidate retrieval and ranking challenges.
#### 1.1.1. Customer representation challenge
Customers show complex behaviors while shopping on e-commerce sites before
making a purchase, e.g., searching for items, viewing items, reading reviews,
and making item comparisons. The challenge is distilling these interactions
into a concise customer representation. In addition to their dynamic
interactions, the representation may also need to incorporate static
attributes of customers, such as customer ID, gender, and clothing size.
#### 1.1.2. Item representation challenge
Items have rich structured information such as item ID, title, description,
specifications, and other metadata. Items also have historical customer
interactions: views, clicks, customer ratings, reviews, etc. The item
representation challenge is identifying the most relevant data for
representing various items, a task complicated by two factors. The first is
the diversity of item attributes. For instance, author and title are key
attributes for books, whereas size and gender are more critical for fashion
items. The second factor is the cold-start problem associated with new
products; these have no historical interactions.
#### 1.1.3. Candidate retrieval challenge
Candidate retrieval entails determining which items best match a given
customer’s preferences. Here, the challenges are of two varieties: 1) training
customer and item representations in the same embedding space and 2) the
inference challenge, which aims to efficiently retrieve the best matches from
a corpus containing millions to billions of items.
#### 1.1.4. Ranking challenge
The candidate retrieval stage is followed by a ranking stage, where the
retrieved candidates are re-ranked using a more complex model and more complex
features of both the retrieved candidates and queries. The goal of this stage
is to select and rank the top K items per customer (for example, the top 100
items) using learning-to-rank algorithms (Cheng et al., 2016; Zhou et al.,
2018; Guo et al., 2017; Wang et al., 2021b).
In this paper, we focus on addressing the first three challenges: the customer
representation, item representation, and candidate retrieval challenges.
### 1.2. Our Contributions
The most dominant approach to tackling the aforementioned challenges relies on
a user-item framework (see figure 1). Two neural networks, called dual
encoders, are each trained to generate embeddings for user and item data
(Pancha et al., 2022; Pal et al., 2020; Covington et al., 2016; Yi et al.,
2019; Wang et al., 2021a). The user embedding model receives input in the form
of a sequence or bag of interactions on items, along with context and user
data (Wang et al., 2021a). On the other hand, the item embedding model
utilizes various item metadata types including item IDs (Covington et al.,
2016; Pancha et al., 2022; Cen et al., 2020) or output embeddings from pre-
trained models (Ying et al., 2018; Pancha et al., 2022; Pal et al., 2020).
However, despite its widespread use, the user encoding model in this framework
has two significant drawbacks: the single vector representation bottleneck and
the high infrastructure and maintenance costs.
#### 1.2.1. Single vector representation bottleneck
Using a single vector to represent users introduces challenges due to the
diversity and complexity of their interests, compromising both the capacity to
accurately represent users and the interpretability of the representation by
obscuring which interests are represented and which are not. While attempts to
use multiple embeddings have been made to overcome these limitations, the
exact number of vectors needed and the method for obtaining them remain topics
of research (Li et al., 2019; Pal et al., 2020).
#### 1.2.2. High infrastructure and maintenance costs
Generating and maintaining up-to-date user embeddings requires substantial
investment in terms of infrastructure and maintenance (see, for example, the
SOAP platform from Meta (Zhang et al., 2023)). Each new user action
necessitates executing the user encoder to generate fresh embeddings and
recommendations. Furthermore, the user encoder must be large in order to
effectively model a sequence of interactions, leading to expensive training
and inference requirements.
Figure 1. User-to-item framework: Single vectors from the user encoder limit representation and interpretability. Keeping them fresh demands high-maintenance infrastructure.
Figure 2. Query-to-item framework: Query embeddings and their similarities are precomputed. Users are represented by a dynamic set of queries that can be updated as needed.
Our approach overcomes these drawbacks by modelling item-to-item
relationships, as illustrated in figure 2. Here, the first item represents the
query context (an item that has been bought or viewed), while the second item
is the target (the item that is subsequently bought). We utilize dual encoders
to effectively capture relationships between viewed and bought items, as well
as between items bought together. Specifically, our contributions include:
1. We demonstrate how a transformer-based two-tower architecture, also known as dual encoders, can be utilized to generate multiple embeddings per item in one model run. Generating multiple embeddings is effective for capturing the various roles of items, and generating them with one model run provides inference efficiency.
2. We show how we represent customers with multiple queries, where each query corresponds to a product that the customer has interacted with, either through a view or a buy. This approach of representing customers by a set of queries allows us to precompute query embeddings and their respective similarities, facilitating the generation of personalized feeds in near real-time (updates occurring every 2 minutes). This approach offers the benefits of efficiency, as queries are shared, and interpretability, as each recommendation is associated with a specific query.
3. We showcase real-world applications of our approach in deployed systems at Bol, namely, Top deals for you, Top picks for you, and New for you. By indexing products that are on sale, new or popular and matching them with selected customer query representations, we generate the Top deals for you, New for you, and Top picks for you recommendations.
## 2\. Related Work
In the pre-deep-learning era, matrix factorization methods were used for personalized
recommendations (see (Hu et al., 2008; Koren et al., 2009; Su and
Khoshgoftaar, 2009; Koren et al., 2022)). Since the AlexNet paper (Krizhevsky
et al., 2012), which showed the value of deep learning in image recognition,
deep learning has also been applied in recommender systems (He et al., 2017;
Zhang et al., 2019). Among this rich literature (see survey (Zhang et al.,
2019)), the papers most related to our work come from industrial recommender
systems such as those of eBay (Wang et al., 2021a), Youtube (Covington et al.,
2016; Yi et al., 2019), Google Play (Yang et al., 2020a), Pinterest (Pancha et
al., 2022; Pal et al., 2020), and Alibaba (Li et al., 2019; Cen et al., 2020;
Wang et al., 2018). We examine how these papers address the customer representation, item representation, and retrieval challenges.
### 2.1. Customer Representation Challenge
The YouTube paper (Covington et al., 2016) uses a Multilayer Perceptron (MLP)
model to encode both user and video entities into the same space. The user
encoding model takes as inputs embedded video watches (50 recent watches),
embedded search tokens (50 recent searches) and user attributes such as age
and gender. A vocabulary of 1M videos and 1M search tokens is embedded with
256 floats.
The eBay paper (Wang et al., 2021a) uses a recurrent (GRU) model to generate
user embeddings. The inputs to the GRU model are item or query embeddings
along with their respective event type embeddings. The event type embeddings
are defined by four dimensions and serve to capture various actions on the
items. The item embeddings are based on content-based features such as item
titles, categories (e.g., mobile phones), and structured aspects (e.g., brand:
Apple, network: Verizon). The user embedding has 64 dimensions.
The Pinterest paper (Pancha et al., 2022) uses a transformer model to
represent the user in 256 dimensions. The inputs to the model are: a sequence
of Pins, represented by their PinSage embedding (256-dimensional) and metadata
features: action type, surface, timestamp, and action duration (Pal et al.,
2020).
To capture the diverse and multifaceted interests of users, prior work from
Pinterest and JD.com used multiple embeddings per user (Pancha et al., 2022;
Pal et al., 2020; Li et al., 2019). While the notion of employing multiple
embeddings to represent users is similar to our method, it also differs. In
our solution, the embeddings that constitute customer representations are not
unique to each individual customer but rather, are shared among users.
### 2.2. Item Representation Challenge
The YouTube paper (Covington et al., 2016) represents videos with embeddings
of 256 dimensions based on Item IDs. The eBay study (Wang et al., 2021a)
employs a 3-layer MLP to create item embeddings with a 64-dimensional output.
These embeddings are derived from inputs that include title, aspect, and
category embeddings. Each of these embeddings is formulated as a Continuous-
Bag-of-Words (CBOW) representation, corresponding to the tokens found in the
title, aspect, and category. The Pinterest paper (Pancha et al., 2022) uses an
MLP model to represent items (more specifically, Pins) based only on PinSage
embeddings of dimension 256.
Our work utilizes textual metadata (such as the title and category of a
product) to embed item entities. In the YouTube paper, item IDs are used as
input to the neural network model, leading to a larger model size due to the
need to store an embedding table of significant size. In contrast, our
approach generates embeddings directly from input metadata, eliminating the
need for a separate table. This is similar to the eBay paper, which also
utilizes metadata alone to represent items (Wang et al., 2021a).
### 2.3. Candidate Retrieval Challenge
#### 2.3.1. Training challenge
The most common training strategy for learning user and item embeddings is
based on a two-tower user-item framework (see papers from eBay, YouTube and
Pinterest (Wang et al., 2021a; Covington et al., 2016; Yi et al., 2019; Pancha
et al., 2022)). The user-item framework tackles the twin challenges of user
representation and training using two neural networks in one go. The first
network represents user activity of item views and searches whereas the second
network represents target items. Variations exist in both the models employed
for user and item representation, as well as in the input types fed into the
model. Additionally, variations arise in the negative sampling approach
utilized during training.
Our training strategy also builds upon the two-tower model and negative
sampling techniques. However, it emphasizes capturing item-to-item
relationships, rather than the more common user-to-item relationships. During
training for the retrieval stage, our work eliminates the necessity for user-specific data and user modeling, focusing solely on aggregated item-to-item relationships, specifically view-buy or buy-buy interactions.
#### 2.3.2. Inference challenge
The approach to overcoming the inference challenge is essentially the same for
all large-scale recommender systems. Embeddings of items are indexed and
approximate nearest neighbor search is used to efficiently retrieve the most
relevant items for given queries represented by user embeddings. Most systems
differ in the tools used, e.g., the vector database. For example, eBay uses
FAISS (Wang et al., 2021a), an open source library from Facebook. Youtube and
Pinterest use their own implementations (Covington et al., 2016; Pancha et
al., 2022; Guo et al., 2020). Our work uses the FAISS library (Johnson et al.,
2019) for indexing and search operations. Since all potential query embeddings
(item views and buys) are known in advance, we precompute their similarities
and store the query results in a lookup table. Personalized recommendations
are then generated by identifying relevant queries for a user and retrieving
the corresponding recommendations.
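Since FAISS is the library we use, a minimal sketch of this precomputation step follows; the embedding dimension, corpus sizes, and random vectors are placeholders for the real query and item embeddings.

```python
import numpy as np
import faiss  # similarity search library (Johnson et al., 2019)

d = 256                                   # embedding dimension (illustrative)
rng = np.random.default_rng(0)
query_emb = rng.standard_normal((10_000, d)).astype("float32")  # all known queries
item_emb = rng.standard_normal((50_000, d)).astype("float32")   # candidate items
faiss.normalize_L2(query_emb)
faiss.normalize_L2(item_emb)

index = faiss.IndexFlatIP(d)              # inner product = cosine after normalization
index.add(item_emb)

# Precompute the top-k items for every possible query once; serving then
# reduces to looking up the queries in a user's recent history.
scores, topk = index.search(query_emb, 20)
lookup = {qid: topk[qid] for qid in range(len(query_emb))}
```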
## 3\. Methodology
Our method for creating personalized feed recommendations, which we call
Pfeed, involves two phases. In the first phase, we train and produce multi-
vector item embeddings (see figures 3(a) and 3(b)). In the second phase, these
embeddings are applied to generate personalized product recommendations (see
figures 3(c) and 3(d)). The goal of the first phase is to capture item-to-item
relationships through embeddings. We use “query-to-item” and “query-to-target” interchangeably to refer to the same concept of item-to-item relationships.
(a) Step 1: Contrastive pre-training
(b) Step 2: Generating embeddings
(c) Step 3: Indexing and precomputing similarities
(d) Step 4: Generating personalized feed recommendations
Figure 3. The major steps involved in generating near real-time personalized
recommendations
### 3.1. Representing an Item with Three Embeddings
In Pfeed, an item can play one of three roles: 1) view query, 2) buy query,
and 3) target item. View queries are items clicked during a session leading to
the purchase of specific items, thus creating view-buy relationships. Buy
queries, on the other hand, are items frequently purchased in conjunction with
or shortly before other items, establishing buy-buy relationships. The items
that come after view or buy queries are the target items. Our goal is to
capture the three roles of an item - view query, buy query, and target - using
three distinct embeddings, all generated by a single encoder.
### 3.2. Model Architecture - Generating Three Item Embeddings with One Model
Run
We use a transformer encoder (Vaswani et al., 2017) to generate three
embeddings for a given item, each corresponding to the view, buy, or target
role. To achieve this, we first tokenize the item metadata into a sequence of
tokens using the sentencepiece library (Kudo and Richardson, 2018). We then
prepend three special tokens: [Q_V], [Q_B] and [TGT] as shown in figure 4.
These special tokens play a similar role as the [CLS] special token in BERT
(Devlin et al., 2019). The first three embeddings from the transformer’s final
layer, corresponding to the special tokens [Q_V], [Q_B], and [TGT],
respectively represent the item’s view query, buy query, and target
embeddings. Because all these three embeddings are generated in one model run,
we call the model a Single Input Multi Output (SIMO) embedding model. The SIMO
model achieves threefold efficiency compared to a SISO (Single Input and
Single Output) embedding model, which requires executing the model three times
with distinct prompts for each of the three item roles (view query, buy query,
and target roles).
Figure 4. The SIMO (Single Input Multi Output) embedding model generates three embeddings per item in one model run using three special tokens: [Q_V], [Q_B], and [TGT].
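A minimal PyTorch sketch of such a SIMO encoder follows; the architecture sizes, tokenizer, and vocabulary are illustrative placeholders rather than the production configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SIMOEncoder(nn.Module):
    """Single Input Multi Output encoder: one transformer pass yields the
    view-query, buy-query, and target embeddings of an item."""

    def __init__(self, vocab_size, d_model=256, n_heads=4, n_layers=2, max_len=128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len); positions 0, 1, 2 hold the special
        # tokens [Q_V], [Q_B], [TGT], followed by the item metadata tokens.
        pos = torch.arange(token_ids.size(1), device=token_ids.device)
        h = self.encoder(self.tok_emb(token_ids) + self.pos_emb(pos))
        # Read the three role embeddings off the first three positions and
        # L2-normalize them, as the contrastive losses assume unit norm.
        q_view, q_buy, target = (F.normalize(h[:, i], dim=-1) for i in range(3))
        return q_view, q_buy, target
```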
### 3.3. Training with Contrastive Learning
#### 3.3.1. Training data
We train the SIMO embedding model with query-target pairs consisting of the
two types of relationships. The first set consists of item pairs of view-buy
relationship (i.e., {$q$, view, $t$}). The second set consists of items pairs
of buy-buy relationship (i.e., {$q$, buy, $t$}). We combine the sets to form
one set $\\{(q_{i},r_{i},t_{i})\\}^{N}_{i=1}$, where $(q_{i},r_{i},t_{i})$
corresponds to a positive example, indicating that item $q_{i}$ and
interaction (or relation) $r_{i}$ led to the purchase of item $t_{i}$. In
addition to query-target item pairs, we also sample random items to reduce
bias (Yang et al., 2020b).
#### 3.3.2. Dual encoders
The objective of our training is to get a model that produces similar
embeddings for matching query-target $(q_{i},r_{i},t_{i})$ inputs and
dissimilar embeddings for non-matching inputs such as
$(\tilde{q_{i}},r_{i},t_{i})$ or $(q_{i},r_{i},\tilde{t_{i}})$. To achieve
this objective, we employ dual encoders. We feed the query input $q_{i}$ and
the target input $t_{i}$ into two instances of the transformer encoder $E$.
The encoder $E$ maps $q_{i}$ and $t_{i}$ independently and outputs the three embeddings $Q_{i}^{v}$, $Q_{i}^{b}$, and $T_{i}$. From the target encoder, we take the $T_{i}$ embedding and compute its dot product with the query embedding from the query encoder, which is $Q_{i}^{v}$ or $Q_{i}^{b}$ depending on the relation $r_{i}$ (see figure 5). When the training samples also include randomly
sampled items, called random negatives, we use the same encoder $E$ to
generate embeddings: $\tilde{Q}_{i}^{v}$, $\tilde{Q}_{i}^{b}$, and
$\tilde{T}_{i}$ . These embeddings are mixed with the embeddings of in-batch
negatives during training (Yang et al., 2020b).
Figure 5. Inputs to dual SIMO encoders: the query encoder takes in the
metadata of the query item and generates three embeddings and the target
encoder takes in the metadata of the target item and generates three
embeddings. During training, the loss is determined by the target embedding
derived from the target item $t_{i}$ encoder and pairing it with a query
embedding from the query item $q_{i}$ encoder, selected by the relation
$r_{i}$ indicator.
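As an assumption-level sketch of this relation-dependent pairing (reusing the hypothetical SIMOEncoder above; names are illustrative), the query-side embedding can be selected with a mask before the dot product:

```python
import torch
import torch.nn.functional as F

def paired_embeddings(encoder, query_tokens, target_tokens, relations):
    # relations: (batch,) tensor, 0 for view-buy pairs, 1 for buy-buy pairs
    q_v, q_b, _ = encoder(query_tokens)   # query-side forward pass
    _, _, t = encoder(target_tokens)      # target-side forward pass
    q = torch.where(relations.unsqueeze(1).bool(), q_b, q_v)
    q, t = F.normalize(q, dim=1), F.normalize(t, dim=1)  # L2 normalization
    return q, t, (q * t).sum(dim=1)       # per-pair dot products
```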
#### 3.3.3. Training objectives
The training objective consists of two contrastive loss terms. The first loss
term employs a query-target softmax formulation (see equation 1). In this
formulation, we sample negative targets for a given query-target pair. The
second loss term employs a target-query softmax (see equation 2), where
negative queries are sampled for the same query-target pair. We use four types
of negative sampling strategies: 1) in-batch negatives, 2) uniformly sampled
negatives, 3) mixed negatives (Yang et al., 2020b), which combine in-batch and uniformly sampled negatives, and 4) self-negatives.
(1)
$\displaystyle\mathcal{L}_{1}=-\frac{1}{|\mathcal{B}|}\sum_{i=1}^{|\mathcal{B}|}\underbrace{\log\frac{e^{\beta\mathbf{Q}_{i}\cdot\mathbf{T}_{i}}}{\sum_{j=1}^{|\mathcal{B}|}e^{\beta\mathbf{Q}_{i}\cdot\mathbf{T}_{j}}+\sum_{j=1}^{|\mathcal{N}|}e^{\beta\mathbf{Q}_{i}\cdot\tilde{\mathbf{T}}_{j}}}}_{\text{query $\rightarrow$ target softmax}}$
(2)
$\displaystyle\mathcal{L}_{2}=-\frac{1}{|\mathcal{B}|}\sum_{i=1}^{|\mathcal{B}|}\underbrace{\log\frac{e^{\beta\mathbf{Q}_{i}\cdot\mathbf{T}_{i}}}{\sum_{j=1}^{|\mathcal{B}|}e^{\beta\mathbf{Q}_{j}\cdot\mathbf{T}_{i}}+\sum_{j=1}^{|\mathcal{N}|}e^{\beta\tilde{\mathbf{Q}}_{j}\cdot\mathbf{T}_{i}}}}_{\text{target $\rightarrow$ query softmax}}$
In equations 1 and 2, $\mathcal{B}$ represents a batch of embedding pairs for
positive samples:
$\\{(\mathbf{Q}_{1},\mathbf{T}_{1}),(\mathbf{Q}_{2},\mathbf{T}_{2}),\ldots,(\mathbf{Q}_{|\mathcal{B}|},\mathbf{T}_{|\mathcal{B}|})\\}$.
$\mathcal{N}$ represents a set of embeddings from negative items that are
uniformly sampled from the catalog and appear as $\tilde{\mathbf{T}}_{j}$ or
$\tilde{\mathbf{Q}}_{j}$, depending on the direction of the softmax
computation (query-to-target or target-to-query). Each embedding is L2
normalized (i.e., $\|\mathbf{Q}\|_{2}=1$ and $\|\mathbf{T}\|_{2}=1$). The scale parameter $\beta$ is trained jointly with the model parameters; we initially tried a few manually fixed values (e.g., 10 and 100) and found that its value affects performance significantly.
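A compact sketch of the combined objective is given below. It assumes the batch embeddings are already L2-normalized, and it renders equations 1 and 2 as two cross-entropy terms with the positive pairs on the diagonal; the initial value of the learnable scale is arbitrary here.

```python
import torch
import torch.nn.functional as F

beta = torch.nn.Parameter(torch.tensor(10.0))  # trained with the model

def contrastive_loss(Q, T, Q_neg, T_neg):
    # Q, T: (B, d) positive pairs; Q_neg, T_neg: (N, d) uniform negatives
    logits_qt = beta * torch.cat([Q @ T.t(), Q @ T_neg.t()], dim=1)  # eq. 1
    logits_tq = beta * torch.cat([T @ Q.t(), T @ Q_neg.t()], dim=1)  # eq. 2
    labels = torch.arange(Q.size(0))  # positive pair i sits at column i
    return F.cross_entropy(logits_qt, labels) + F.cross_entropy(logits_tq, labels)
```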
### 3.4. Inference
Pfeed has three inference steps: precomputing embeddings, precomputing
similarities and generating personalized feeds.
#### 3.4.1. Precomputing embeddings
After successful training using the approach described above, we use the
resulting trained encoder to generate embeddings for all items in the catalog
(see figure 3(b)). For each item, we generate three embeddings. The first two
embeddings are query embeddings for when the item is viewed (indicated as
embedding $Q_{i}^{v}$ in figure 4) or bought (indicated as embedding
$Q_{i}^{b}$ in figure 4). The third embedding is for when the item is used as
a target item (indicated as embedding $T_{i}$ in figure 4).
#### 3.4.2. Precomputing similarities
The target embeddings of all items in the catalog (or selected part of it) are
indexed with a vector indexing library (in our case, we use FAISS) and we
search against the index using the view query and buy query embeddings of all
items in the catalog. If the catalog has $N$ items, then we get $2\times N$
queries (view and buy for every item in the catalog). For each of the $2\times
N$ queries, we get the $M$ most similar items, resulting in a table with
$2\times N\times M$ entries (see figure 3(c)). Only entries with a score greater than a fixed threshold are stored in a lookup table. We set this threshold using known item-to-item scores from the validation split: similarity scores above its first percentile (approximately 15% of the original entries) are kept in the lookup database.
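A minimal sketch of this indexing step follows, using FAISS's flat inner-product index (the paper names FAISS; the random placeholder embeddings and the threshold value are illustrative assumptions):

```python
import numpy as np
import faiss

N, d, M = 100_000, 128, 10
rng = np.random.default_rng(0)
def unit(x):  # L2-normalize rows so inner product equals cosine similarity
    return (x / np.linalg.norm(x, axis=1, keepdims=True)).astype("float32")

targets = unit(rng.standard_normal((N, d)))   # stand-in target embeddings
view_q = unit(rng.standard_normal((N, d)))    # stand-in view query embeddings
buy_q = unit(rng.standard_normal((N, d)))     # stand-in buy query embeddings

index = faiss.IndexFlatIP(d)                  # exact inner-product index
index.add(targets)                            # index all target embeddings
queries = np.vstack([view_q, buy_q])          # 2N queries (view + buy)
scores, item_ids = index.search(queries, M)   # (2N, M) lookup-table entries
keep = scores > 0.5                           # threshold from validation data
```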
#### 3.4.3. Generating personalized feeds
The process for generating a ranked list of items per customer includes: 1)
selecting queries for each customer (up to 100), 2) retrieving up to 10
potential next items-to-buy for each query, and 3) combining these items and
applying ranking, diversity, and business criteria (see figure 3(d)). This
process is executed daily for all customers and every two minutes for those
active in the last two minutes. Recommendations resulting from recent queries
are prioritized over those from historical ones.
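The assembly step can be sketched as a merge over precomputed lookups; `lookup_table` below is a hypothetical mapping from a query key to its precomputed (item, score) list, standing in for the lookup database described above.

```python
def build_feed(recent_query_keys, lookup_table, max_items=100):
    candidates = {}
    for key in recent_query_keys[:100]:            # up to 100 queries per user
        for item_id, score in lookup_table.get(key, [])[:10]:  # up to 10 items
            # keep the best score when several queries retrieve the same item
            candidates[item_id] = max(candidates.get(item_id, 0.0), score)
    ranked = sorted(candidates, key=candidates.get, reverse=True)
    return ranked[:max_items]  # ranking/diversity/business rules apply after
```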
### 3.5. Case Study: Personalized Item Feeds at Bol
We applied Pfeed to generate multiple personalized feeds at Bol, one of the
largest e-commerce platforms of the Netherlands and Belgium. The feeds can be
seen on the app or website and have titles such as Top deals for you, Top
picks for you, and New for you. These feeds differ on at least one of two
factors: the specific items targeted for personalization and/or the particular
queries selected to represent customer interests.
#### 3.5.1. Top deals for you
This feed personalizes items with promotional offers or discounted prices.
Pfeed takes the most recent 100 unique customer item views/buys (per category)
as query keys. For each key, it retrieves up to 10 potential discounted
items for the customer to buy. This is achieved by accessing precomputed query
results and merging them, ensuring near real-time response in the process.
This is done daily for all customers and every 2 minutes for recently active
customers (see figure 3(d)).
#### 3.5.2. New for you
This feed personalizes newly released items. New items, often marked by
limited interactions, present a challenge to recommender systems reliant on
item IDs or interaction data. However, Pfeed circumvents this cold-start issue
because it generates item embeddings using textual catalog metadata (Li et
al., 2023). The New for you feed works similarly to the Top deals for you
feed, with the distinction being the type of items selected for
personalization. In New for you, items are designated as new if they fall
within the most recent 10% of items based on their release date, relative to
their specific categories. This approach guarantees that each category
features its own set of new items, accommodating the varying time scales
across different categories.
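As a sketch under stated assumptions (a pandas catalog with `category` and `release_date` columns; not the production selection job), the per-category recency rule can be expressed as a grouped percentile:

```python
import pandas as pd

def mark_new_items(catalog: pd.DataFrame) -> pd.Series:
    # Rank items by release date within their own category (1.0 = newest).
    pct = catalog.groupby("category")["release_date"].rank(pct=True)
    return pct >= 0.9  # True for the most recent 10% of each category
```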
#### 3.5.3. X for you
In general, Pfeed generates X for you by limiting the search index or the
search output to consist of only items of $X$. In addition to Top deals for
you and New for you, Pfeed has been used to generate other feeds, namely Top
picks for you and Select deals for you. Items for Top picks for you come from
those that have a certain level of popularity and match the customers’ most
recent queries from their most frequently interacted with categories. Items
for Select deals for you come from items that are curated to reward customer
loyalty and apply only to customers who are Select members.
## 4\. Experiments
To evaluate Pfeed, we run both offline and online experiments. The offline
experiments are used to evaluate the quality of the embeddings and to
illustrate the effects of different design choices on performance. To
understand the impact of the embeddings on the personalized feed system, we
report results from an online A/B testing experiment. The experiments are
specifically designed to answer the following questions.
Q1: How does the model that produces three embeddings in one run (SIMO model) compare in terms of performance to the model that generates each embedding in three separate runs (SISO model)?
Q2: How effective is the SIMO model for cold-start product recommendation? And for popular items?
Q3: How sensitive is the SIMO model to the training strategy, particularly concerning negative sampling and model sizes?
Q4: How effective are these query-target relationships in generating personalized feeds (online A/B testing)?
### 4.1. Dataset
We create view-buy and buy-buy datasets, comprising approximately two million positive training/testing samples from around a million unique items
(see table 1). These datasets are constructed from customer item views and
item buys.
Table 1. Bol dataset statistics

Dataset | # of positive pairs | # of distinct items
---|---|---
view-buy | 0.99M | 1.08M
buy-buy | 0.96M | 0.27M
Negative | - | 2.00M
Combined | 1.95M | 3.28M
#### 4.1.1. view-buy dataset
The view-buy dataset consists of item pairs with view-buy relationships. The pairs are constructed from converting customer sessions, i.e., sessions that end in a purchase. Items that are purchased become target items, and the items that were viewed in the same session become the view queries. Of all the view-buy pairs aggregated from
sessions from the last four months, we choose the top one million pairs that
meet a minimum occurrence threshold and have a high association strength as
measured by a cosine metric (Huang et al., 2015).
#### 4.1.2. buy-buy dataset
The buy-buy dataset consists of item pairs with buy-buy relationships. The
pairs are constructed from customer purchases. Items that are purchased later
in time become target items and the items that were purchased earlier in time
become the buy queries. From all the possible buy-buy pairs constructed from
the customer purchases, we select the top one million pairs that meet a
minimum occurrence threshold and have a high association strength as measured
by a cosine metric.
#### 4.1.3. Negative dataset
In addition to view-buy or buy-buy datasets, we also use a negative dataset
that consists of uniformly sampled random items (about two million). The
purpose of this dataset is to reduce selection bias (Yang et al., 2020b).
### 4.2. Offline Evaluation
We use the recall metric to compare different design choices. Our dataset is
split into training, validation and test sets in the proportions of 80%, 10%,
and 10%. To the target items $t_{i}$ in the test samples
$(q_{i},r_{i},t_{i})$, we add a distractor set $\tilde{C}$ of one million
items, randomly sampled from the item catalog (a similar approach is used in
ItemSage from Pinterest (Baltescu et al., 2022)). We consider a design choice
to be better when its recall@K is higher, i.e., the proportion of
$(q_{i},r_{i},t_{i})$ samples for which the retrieved item $t_{i}$ is ranked
within the top K among $\tilde{C}\cup t_{i}$.
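For reference, a minimal NumPy rendering of this protocol (assuming L2-normalized embeddings; a sketch, not the evaluation pipeline used in the paper):

```python
import numpy as np

def recall_at_k(Q, T_pos, T_distractors, k=10):
    # Q, T_pos: (S, d) query / true-target embeddings for the test samples;
    # T_distractors: (1_000_000, d) items randomly sampled from the catalog
    pos_scores = (Q * T_pos).sum(axis=1)             # score of the true target
    distractor_scores = Q @ T_distractors.T          # scores of distractors
    # rank of the true target t_i within the distractor set plus itself
    rank = (distractor_scores > pos_scores[:, None]).sum(axis=1) + 1
    return float((rank <= k).mean())
```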
### 4.3. Model Architecture Details
We use a transformer encoder model with four layers and eight attention heads.
The model is identified as SIMO-128, where 128 represents the size of the
hidden dimension. Depending on the input sequence we feed to the model, we
have either a SIMO or a SISO embedding model.
### 4.4. Model Training Details
We use Pytorch and Pytorch lightning for training the transformer model. The
model is optimized with Lamb optimizer (You et al., 2019) with a learning rate
of 0.01 on four V100 GPUs using Distributed Data Parallel (DDP) strategy. Each
GPU runs three instances of the model, each handling a batch size of 1024.
These instances handle input sequences from query, target, and negative item
sequences after tokenization using the sentencepiece library (Kudo and
Richardson, 2018) with a vocabulary size of 20k. Prior to loss computation,
all forward passes from each GPU are gathered, resulting in a total batch size
of $1024\times 4(=4096)$. The loss is computed by incorporating both in-batch
and uniformly sampled negatives, amounting to a total of $8192-1$ negatives per positive sample (Yang et al., 2020b). To stabilize training,
gradients are clipped to 0.5. The context length of the input sequence is
fixed to a maximum of 64 tokens, sufficient for encoding item titles and
essential metadata such as categories but excluding descriptions.
### 4.5. Retrieval Performance and Efficiency (Q1)
The query-target retrieval system, based on the embeddings generated by a
transformer model that generates three embeddings with a single run (SIMO
embedding model), performs comparably to the model that generates the
embeddings separately (SISO embedding model). The SIMO embedding model
generates embeddings three times faster than the SISO embedding model (see
table 2).
Table 2. Recall@10 (%) on view-buy and buy-buy datasets

Model | view-buy dataset | buy-buy dataset | Efficiency
---|---|---|---
SIMO-128 | 41.86 | 36.41 | 3x
SISO-128 | 41.57 | 36.12 | x
### 4.6. Retrieval Performance on Cold-start and Popular Items (Q2)
The query-target retrieval system, based on the SIMO-128 model, shows varying
performance depending on the nature of the dataset and the level of popularity
of the items. On the buy-buy dataset, recall scores are lower for head items.
On the view-buy dataset, recall scores are slightly higher for head items (see
table 3). This recall score difference between the two datasets is attributed
to the differing distributions of query-to-target relationship categories. On
the buy-buy dataset, approximately 75% of the relationships are either one-to-
many, many-to-one, or many-to-many (complex relationships). In contrast, on
the view-buy dataset, such relationships constitute less than 21% (see table
4). A detailed analysis of recall scores segmented by relationship category
reveals a consistent trend across both datasets: scores on item pairs with
complex relationships are lower (see table 5). The reasons for this are
twofold: First, single vectors face difficulties in capturing complex
relationships. Second, during training, the model is inaccurately penalized
for failing to replicate the exact query-target pairs provided, rather than
being evaluated on its ability to identify any valid query-target pairs.
Table 3. Impact of item popularity on Recall@10 (%)

Popularity | view-buy dataset | buy-buy dataset
---|---|---
Cold-start | 38.52 | 59.76
Tail | 41.66 | 55.88
Head | 42.32 | 25.54
All | 41.86 | 36.41
Table 4. Relationship categories and their distributions (%)

Relationship category | view-buy dataset | buy-buy dataset
---|---|---
$1\times 1$ | 80.5 | 24.7
$1\times n$ | 6.9 | 16.2
$m\times 1$ | 11.5 | 20.5
$m\times n$ | 1.1 | 38.6
All | 100.0 | 100.0
Table 5. Relationship categories and Recall@10 (%)

Relationship category | view-buy dataset | buy-buy dataset
---|---|---
$1\times 1$ | 42.08 | 58.01
$1\times n$ | 40.22 | 41.98
$m\times 1$ | 41.71 | 35.55
$m\times n$ | 37.63 | 20.72
All | 41.86 | 36.41
### 4.7. Sensitivity of the Retrieval Performance (Q3)
We conduct a sensitivity analysis of our method by varying the hidden
dimensions of the SIMO model and altering particular aspects of the training
strategy, particularly the negative sampling strategy.
#### 4.7.1. Hidden dimension
We vary the hidden dimension of the model between 64, 128, 256, 384, and 512
while keeping the rest of the transformer model and training strategy the
same. Performance increases as the dimension increases until 384. At dimension
512, the model’s performance drops (see table 6).
Table 6. Impact of hidden dimension (vector size) on Recall@10 (%)

Vector size | Parameter # | view-buy dataset | buy-buy dataset
---|---|---|---
64 | 1.5M | 37.87 | 32.09
128 | 3.6M | 41.86 | 36.41
256 | 9.1M | 44.31 | 40.73
384 | 16.6M | 44.71 | 41.61
512 | 26.0M | 41.23 | 38.93
#### 4.7.2. Negative sampling strategy
We use four types of negative sampling strategies: in-batch negative sampling,
uniform negative sampling, mixed negative sampling, and self-negative
sampling. The best performance is achieved with mixed negative sampling, where
both in-batch and uniform sampled negatives are used (Yang et al., 2020b). In-
batch negative sampling is second best (see table 7).
Table 7. Impact of negative sampling strategy on Recall@10 (%)

Negative sampling | view-buy dataset | buy-buy dataset
---|---|---
Mixed | 41.86 | 36.41
In-batch | 40.87 | 35.88
Uniform | 39.15 | 31.73
Mixed + self-negatives | 40.45 | 31.24
Self-negatives refer to instances where the target embeddings of query items
serve as their own negatives (or the query embeddings of target items serve as
their own negatives). Self-negatives are advantageous for handling
asymmetrical buy-buy relationships or instances of non-repeat purchases. When
we add self-negatives from query-item pairs having buy-buy relations to the
mixed negatives, we observe a decline in the overall recall score. This
suggests that such relationships are less prevalent in the dataset.
### 4.8. Online A/B testing (Q4)
We ran an online A/B testing experiment where we compared a treatment group
receiving personalized Top deals for you item lists (generated by Pfeed)
against a control group that received a non-personalized Top deals list,
curated by promotion specialists. This experiment was conducted over a two-
week period with an even 50-50 split between the two groups. The results
showed a statistically significant increase in performance for the treatment
group: there was a 4.9% increase in conversion rates and a 27% increase in the
number of items added to wish lists (see table 8). Following these results,
Pfeed has been deployed and can be found on both the mobile app and the
website of Bol.
Table 8. Online A/B test

Model | Wish list additions | Conversion
---|---|---
Non-personalized deals | 0.00 | 0.00
Top deals for you | +27% | +4.9%
## 5\. Conclusions
In this paper, we introduced Pfeed, a method for generating personalized
product feeds on e-commerce platforms. The method has been deployed at Bol
with services called _Top deals for you_, _Top picks for you_, and _New for you_, and achieved a significant conversion uplift. Pfeed uses a query-to-item
framework, as opposed to the user-item framework that dominates personalized recommender systems. We highlighted three benefits of the query-
to-item framework. 1) Simplification of real-time deployment, as query results
can be precomputed and user interests can dynamically be updated in real-time,
all without requiring model inference or the unlearning of past preferences.
2) Enhanced interpretability, as each recommendation in the feed can be traced
to specific queries. 3) Increased computational efficiency due to the reuse of
queries among users. Additionally, we demonstrated the use of multiple special
tokens as input in the transformer model, enabling a single model run to
generate multiple embeddings for each item.
## 6\. Future Work
Pfeed’s embedding approach can be enhanced in two ways: 1) better handling of
query-to-item training samples having many-to-many relations and 2) explicit
modeling of memorization and generalization features.
Query-to-item with many-to-many relationships: Pfeed’s current method of
representing users with a set of individual queries provides flexibility but
falls short in modeling sequential user behavior. This isn’t inherently an
issue, as the ranking phase can incorporate sequential information. However,
it requires the embedding-based retrieval phase to be expressive enough to
handle an increased set of relevant items, including those that might
otherwise be excluded by sequential modeling. For example, if a user buys
diapers, there are numerous potential next purchases such as items related to
baby toys or clothes. Pfeed’s embedding strategy struggles to model such
complex relations (one-to-many, many-to-one and many-to-many relations). In
practice, Pfeed settles with the most probable next purchase and thus provides
less variety per query. Future enhancements could involve multi-vector query
representations, allowing for a wider range of item choices.
Explicit modeling of memorization and generalization features: Pfeed’s
embedding strategy leverages item content, like titles, which is good for
generalization, but it does not explicitly incorporate memorization features
such as item IDs or popularity. This limitation could impact the system’s
performance, particularly with popular items. Future work could focus on
designing an architecture that can adaptively use memorization features when
available, while still relying on generalization features in their absence.
This improvement would enable the system to more accurately predict next-item
choices, covering both popular and long tail items.
## 7\. Acknowledgments
We are grateful to Tim Akkerman, Cuong Dinh, Isaac Sijaranamual, Paulo Barchi,
Barrie Kersbergen, Haider Ali Afzal, Bart van de Garde, and Charles de Leau
for their suggestions, comments, corrections, and inspiration.
## References
* Baltescu et al. (2022) Paul Baltescu, Haoyu Chen, Nikil Pancha, Andrew Zhai, Jure Leskovec, and Charles Rosenberg. 2022. ItemSage: Learning Product Embeddings for Shopping Recommendations at Pinterest. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_ (Washington DC, USA) _(KDD ’22)_. Association for Computing Machinery, New York, NY, USA, 2703–2711. https://doi.org/10.1145/3534678.3539170
* Cen et al. (2020) Yukuo Cen, Jianwei Zhang, Xu Zou, Chang Zhou, Hongxia Yang, and Jie Tang. 2020. Controllable Multi-Interest Framework for Recommendation. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ (Virtual Event, CA, USA) _(KDD ’20)_. Association for Computing Machinery, New York, NY, USA, 2942–2951. https://doi.org/10.1145/3394486.3403344
* Cheng et al. (2016) Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, and Hemal Shah. 2016. Wide & Deep Learning for Recommender Systems. In _Proceedings of the 1st Workshop on Deep Learning for Recommender Systems_ (Boston, MA, USA) _(DLRS 2016)_. Association for Computing Machinery, New York, NY, USA, 7–10. https://doi.org/10.1145/2988450.2988454
* Covington et al. (2016) Paul Covington, Jay Adams, and Emre Sargin. 2016. Deep neural networks for YouTube recommendations. In _Proceedings of the 10th ACM conference on recommender systems_. 191–198.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , Jill Burstein, Christy Doran, and Thamar Solorio (Eds.). Association for Computational Linguistics, Minneapolis, Minnesota, 4171–4186. https://doi.org/10.18653/v1/N19-1423
* Guo et al. (2017) Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. 2017. DeepFM: A Factorization-Machine Based Neural Network for CTR Prediction. In _Proceedings of the 26th International Joint Conference on Artificial Intelligence_ (Melbourne, Australia) _(IJCAI’17)_. AAAI Press, 1725–1731.
* Guo et al. (2020) Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, and Sanjiv Kumar. 2020. Accelerating Large-Scale Inference with Anisotropic Vector Quantization. https://arxiv.org/abs/1908.10396
* He et al. (2017) Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collaborative filtering. In _Proceedings of the 26th international conference on world wide web_. 173–182.
* Hu et al. (2008) Yifan Hu, Yehuda Koren, and Chris Volinsky. 2008. Collaborative filtering for implicit feedback datasets. In _Data Mining, 2008. ICDM’08. Eighth IEEE International Conference on_. IEEE, 263–272.
* Huang et al. (2015) Yanxiang Huang, Bin Cui, Wenyu Zhang, Jie Jiang, and Ying Xu. 2015. TencentRec: Real-Time Stream Recommendation in Practice. In _Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data_ (Melbourne, Victoria, Australia) _(SIGMOD ’15)_. Association for Computing Machinery, New York, NY, USA, 227–238. https://doi.org/10.1145/2723372.2742785
* Johnson et al. (2019) Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. _IEEE Transactions on Big Data_ 7, 3 (2019), 535–547.
* Koren et al. (2009) Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. _Computer_ 42, 8 (2009).
* Koren et al. (2022) Yehuda Koren, Steffen Rendle, and Robert Bell. 2022. Advances in collaborative filtering. _Recommender systems handbook_ (2022), 91–142.
* Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In _Advances in Neural Information Processing Systems_, F. Pereira, C.J. Burges, L. Bottou, and K.Q. Weinberger (Eds.), Vol. 25. Curran Associates, Inc. https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf
* Kudo and Richardson (2018) Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_. Association for Computational Linguistics, Brussels, Belgium, 66–71. https://doi.org/10.18653/v1/D18-2012
* Li et al. (2019) Chao Li, Zhiyuan Liu, Mengmeng Wu, Yuchi Xu, Huan Zhao, Pipei Huang, Guoliang Kang, Qiwei Chen, Wei Li, and Dik Lun Lee. 2019. Multi-Interest Network with Dynamic Routing for Recommendation at Tmall. In _Proceedings of the 28th ACM International Conference on Information and Knowledge Management_ (Beijing, China) _(CIKM ’19)_. Association for Computing Machinery, New York, NY, USA, 2615–2623. https://doi.org/10.1145/3357384.3357814
* Li et al. (2023) Jiacheng Li, Ming Wang, Jin Li, Jinmiao Fu, Xin Shen, Jingbo Shang, and Julian McAuley. 2023. Text Is All You Need: Learning Language Representations for Sequential Recommendation. In _Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_ (Long Beach, CA, USA) _(KDD ’23)_. Association for Computing Machinery, New York, NY, USA, 1258–1267. https://doi.org/10.1145/3580305.3599519
* Pal et al. (2020) Aditya Pal, Chantat Eksombatchai, Yitong Zhou, Bo Zhao, Charles Rosenberg, and Jure Leskovec. 2020. PinnerSage: Multi-Modal User Embedding Framework for Recommendations at Pinterest. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ (Virtual Event, CA, USA) _(KDD ’20)_. Association for Computing Machinery, New York, NY, USA, 2311–2320. https://doi.org/10.1145/3394486.3403280
* Pancha et al. (2022) Nikil Pancha, Andrew Zhai, Jure Leskovec, and Charles Rosenberg. 2022. PinnerFormer: Sequence Modeling for User Representation at Pinterest. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_ (Washington DC, USA) _(KDD ’22)_. Association for Computing Machinery, New York, NY, USA, 3702–3712. https://doi.org/10.1145/3534678.3539156
* Su and Khoshgoftaar (2009) Xiaoyuan Su and Taghi M Khoshgoftaar. 2009. A survey of collaborative filtering techniques. _Advances in artificial intelligence_ 2009 (2009).
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In _Advances in Neural Information Processing Systems_, I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30. Curran Associates, Inc. https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
* Wang et al. (2018) Jizhe Wang, Pipei Huang, Huan Zhao, Zhibo Zhang, Binqiang Zhao, and Dik Lun Lee. 2018. Billion-Scale Commodity Embedding for E-Commerce Recommendation in Alibaba. In _Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ (London, United Kingdom) _(KDD ’18)_. Association for Computing Machinery, New York, NY, USA, 839–848. https://doi.org/10.1145/3219819.3219869
* Wang et al. (2021b) Ruoxi Wang, Rakesh Shivanna, Derek Cheng, Sagar Jain, Dong Lin, Lichan Hong, and Ed Chi. 2021b. DCN V2: Improved Deep & Cross Network and Practical Lessons for Web-Scale Learning to Rank Systems. In _Proceedings of the Web Conference 2021_ (Ljubljana, Slovenia) _(WWW ’21)_. Association for Computing Machinery, New York, NY, USA, 1785–1797. https://doi.org/10.1145/3442381.3450078
* Wang et al. (2021a) Tian Wang, Yuri M Brovman, and Sriganesh Madhvanath. 2021a. Personalized embedding-based e-commerce recommendations at ebay. _arXiv preprint arXiv:2102.06156_ (2021).
* Yang et al. (2020a) Ji Yang, Xinyang Yi, Derek Zhiyuan Cheng, Lichan Hong, Yang Li, Simon Xiaoming Wang, Taibai Xu, and Ed H. Chi. 2020a. Mixed Negative Sampling for Learning Two-Tower Neural Networks in Recommendations. In _Companion Proceedings of the Web Conference 2020_ (Taipei, Taiwan) _(WWW ’20)_. Association for Computing Machinery, New York, NY, USA, 441–447. https://doi.org/10.1145/3366424.3386195
* Yang et al. (2020b) Ji Yang, Xinyang Yi, Derek Zhiyuan Cheng, Lichan Hong, Yang Li, Simon Xiaoming Wang, Taibai Xu, and Ed H Chi. 2020b. Mixed negative sampling for learning two-tower neural networks in recommendations. In _Companion Proceedings of the Web Conference 2020_. 441–447.
* Yi et al. (2019) Xinyang Yi, Ji Yang, Lichan Hong, Derek Zhiyuan Cheng, Lukasz Heldt, Aditee Kumthekar, Zhe Zhao, Li Wei, and Ed Chi. 2019. Sampling-Bias-Corrected Neural Modeling for Large Corpus Item Recommendations _(RecSys ’19)_. Association for Computing Machinery, New York, NY, USA, 269–277. https://doi.org/10.1145/3298689.3346996
* Ying et al. (2018) Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L. Hamilton, and Jure Leskovec. 2018. Graph Convolutional Neural Networks for Web-Scale Recommender Systems. In _Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ (London, United Kingdom) _(KDD ’18)_. Association for Computing Machinery, New York, NY, USA, 974–983. https://doi.org/10.1145/3219819.3219890
* You et al. (2019) Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. 2019. Large batch optimization for deep learning: Training BERT in 76 minutes. _arXiv preprint arXiv:1904.00962_ (2019).
* Zhang et al. (2019) Shuai Zhang, Lina Yao, Aixin Sun, and Yi Tay. 2019. Deep learning based recommender system: A survey and new perspectives. _ACM Computing Surveys (CSUR)_ 52, 1 (2019), 1–38.
* Zhang et al. (2023) Wei Zhang, Dai Li, Chen Liang, Fang Zhou, Zhongke Zhang, Xuewei Wang, Ru Li, Yi Zhou, Yaning Huang, Dong Liang, et al. 2023. Scaling User Modeling: Large-scale Online User Representations for Ads Personalization in Meta. _arXiv preprint arXiv:2311.09544_ (2023).
* Zhou et al. (2018) Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, and Kun Gai. 2018. Deep Interest Network for Click-Through Rate Prediction. In _Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ (London, United Kingdom) _(KDD ’18)_. Association for Computing Machinery, New York, NY, USA, 1059–1068. https://doi.org/10.1145/3219819.3219823
# Low-Resolution Near-infrared Stellar Spectra Observed by the Cosmic Infrared
Background Experiment (CIBER)
Min Gyu Kim$^{1,2}$, Hyung Mok Lee$^{1}$, Toshiaki Arai$^{3}$, James Bock$^{4,5}$, Asantha Cooray$^{6}$, Woong-Seob Jeong$^{2}$, Seong Jin Kim$^{2}$, Phillip Korngut$^{4,5}$, Alicia Lanz$^{4}$, Dae Hee Lee$^{2}$, Myung Gyoon Lee$^{1}$, Toshio Matsumoto$^{3}$, Shuji Matsuura$^{3,7}$, Uk Won Nam$^{2}$, Yosuke Onishi$^{3,8}$, Mai Shirahata$^{3}$, Joseph Smidt$^{6,9}$, Kohji Tsumura$^{10}$, Issei Yamamura$^{3}$, and Michael Zemcov$^{11,5}$<EMAIL_ADDRESS>
$^{1}$Dept. of Physics and Astronomy, Seoul National University, Seoul 08826, Korea
$^{2}$Korea Astronomy and Space Science Institute (KASI), Daejeon 34055, Korea
$^{3}$Department of Space Astronomy and Astrophysics, Institute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency (JAXA), 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210, Japan
$^{4}$Department of Astronomy, California Institute of Technology, Pasadena, CA 91125, USA
$^{5}$Jet Propulsion Laboratory (JPL), 4800 Oak Grove Dr., Pasadena, CA 91109, USA
$^{6}$Center for Cosmology, University of California, Irvine, Irvine, CA 92697, USA
$^{7}$Department of Physics, Kwansei Gakuin University, Hyogo 669-1337, Japan
$^{8}$Department of Physics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8550, Japan
$^{9}$Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
$^{10}$Frontier Research Institute for Interdisciplinary Science, Tohoku University, Sendai 980-8578, Japan
$^{11}$Center for Detectors, School of Physics and Astronomy, Rochester Institute of Technology, Rochester, NY 14623, USA
###### Abstract
We present near-infrared (0.8-1.8 $\micron$) spectra of 105 bright (${m_{J}}$
$<$ 10) stars observed with the low resolution spectrometer on the rocket-
borne Cosmic Infrared Background Experiment (CIBER). As our observations are
performed above the Earth’s atmosphere, our spectra are free from telluric
contamination, which makes them a unique resource for near-infrared spectral
calibration. Two-Micron All Sky Survey (2MASS) photometry information is used
to identify cross-matched stars after reduction and extraction of the spectra.
We identify the spectral types of the observed stars by comparing them with
spectral templates from the Infrared Telescope Facility (IRTF) library. All
the observed spectra are consistent with late F to M stellar spectral types,
and we identify various infrared absorption lines.
catalogs — infrared: stars — stars: general — techniques: spectroscopic
## 1 Introduction
Precise ground-based measurements of stellar spectra are challenging in the
near-infrared (IR) because of the contaminating effects of telluric lines from
species like water, oxygen, and hydroxyl in the Earth’s atmosphere. Telluric
correction using standard stars is generally used to overcome this problem,
but these corrections are problematic in wavelength regions marked by strong
line contamination, such as from water and hydroxyl. In contrast, space-based
spectroscopy in the near-IR does not require telluric correction, so can
provide new insights into stellar atmospheres (e.g. Matsuura et al. 1999;
Tsuji et al. 2001), especially near $1\micron$ where starlight is not
reprocessed by dust in the circumstellar environment (Meyer et al., 1998). In
particular, near-IR spectra can be used to study the age and mass of very
young stars (Joyce et al., 1998; Peterson et al., 2008), and the physical
properties of very cool stars (Sorahana & Yamamura, 2014).
Of particular interest in the study of the atmospheres of cool stars is water.
According to early models of stellar photospheres (Russell, 1934), H2O existed
only in later than M6 type stars, and until recently observations have
supported this. In 1963, the balloon-borne telescope Stratoscope II observed
H2O in two early M2-M4 giant stars (Woolf et al., 1964) at 1.4 and
$1.9\,\micron$. Several decades later, Tsuji et al. (1997) measured H2O
absorption in an M2.5 giant star using the Infrared Space Observatory (Kessler
et al. 1996), and Matsuura et al. (1999) observed water at 1.4, 1.9, 2.7, and
$6.2\micron$ for 67 stars with the Infrared Telescope in Space (Murakami et
al. 1996; Matsumoto et al. 2005). Surprisingly, Tsuji et al. (2001) discovered
water features in late K-type stars. These results required a new stellar
photosphere model to explain the existence of H2O features in hotter than M6
type stars (Tsuji et al., 2015).
The low resolution spectrometer (LRS; Tsumura et al. 2013) on the Cosmic
Infrared Background Experiment (CIBER; Bock et al. 2006; Zemcov et al. 2013)
observed the diffuse infrared background from 0.7 to 2.0 $\micron$ during four
flights above the Earth atmosphere. The LRS was designed to observe the near-
IR background (Hauser & Dwek, 2001; Madau & Pozzetti, 2000), and as a result
finds excess extragalactic background light above all known foregrounds
(Matsuura et al. 2016, ApJ, submitted, 2016). Furthermore, we precisely
measure astrophysical components contributing to the diffuse sky brightness
(see Leinert et al. 1998 for a review). For example, Tsumura et al. (2010)
observed a component of the zodiacal light absorbed by silicates in a broad
band near $800\,$nm. By correlating the LRS with a 100 $\micron$ dust map
(Schlegel, 1998), Arai et al. (2015) measured the smooth diffuse galactic light spectrum from the optical band to the near-IR and constrained the size distribution of interstellar dust, which is dominated by small particles
(half-mass radius $\sim$0.06 $\micron$).
The LRS also observed many bright galactic stars, enabling us to study their
near-IR spectral energy distributions (SEDs). In this paper, we present flux-calibrated near-IR spectra of 105
stars from $0.8\leq\lambda\leq 1.8\,\micron$ with spectral resolution
$15\leq\lambda/\Delta\lambda\leq 30$ over the range. The paper is organized as
follows. In Section 2, the observations and instrumentation are introduced. We
describe the data reduction, calibration, astrometry, and extraction of the
stellar spectra in Section 3. In Section 4, the spectral typing and features
are discussed. Finally, a summary and discussion are given in Section 5.
## 2 Instrument
The LRS is one of the four optical instruments of the CIBER payload (Zemcov et
al., 2013); the others are a narrowband spectrometer (Korngut et al. 2013) and
two wide-field imagers (Bock et al., 2013). The LRS (Tsumura et al., 2013) is
a prism-dispersed spectrometer with five rectangular $5.35^{\circ}\times 2.8\arcmin$ slits imaging a $5.8^{\circ}\times 5.8^{\circ}$ field of view. The detector
has $256\times 256$ pixels at a pixel scale of $1.36\arcmin\times
1.36\arcmin$. CIBER has flown four times (2009 February, 2010 July, 2012
March, and 2013 June) with apogees and total exposure times of over 325 km and
$\sim$ 240 s, respectively, in the first three flights and of 550 km and 335 s
in the final, non-recovered flight. Due to spurious signal contamination from
thermal emission from the shock-heated rocket skin, we do not use the first
flight data in this work (Zemcov et al., 2013). Eleven target fields were
observed during the three subsequent flights, as listed in Table 1. Details of
the field selection are described in Matsuura et al. (2016, ApJ, submitted).
During the observations, the detector array is read out nondestructively at $\sim 4\,$Hz. Each field is observed for many tens or hundreds of frames,
and an image for each field is obtained by computing the slope of the
accumulated values for each pixel (Garnett & Forrest, 1993). Figure 1 shows an
example image of the North Ecliptic Pole (NEP) region obtained during the
second flight. More than 20 bright stars ($m_{J}$ $<$ 11) are observed. The
stellar spectra are characterized by a small amount of field distortion as
well as an arc-shaped variation in constant-wavelength lines along the slit
direction. The latter is known as a “smile” and is a known feature of prism
spectrometers (Fischer et al., 1998). Details of the treatment of these
distortions are described in Sections 3.3 and 3.4.
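As an illustration of the slope-image construction (a minimal sketch assuming evenly spaced frames; the flight pipeline follows Garnett & Forrest 1993), the per-pixel photocurrent is the least-squares slope of the accumulated reads:

```python
import numpy as np

def slope_image(frames):
    # frames: (n_frames, 256, 256) array of nondestructive reads
    t = np.arange(frames.shape[0], dtype=float)
    dt = t - t.mean()
    # closed-form least-squares slope, evaluated independently per pixel
    num = (dt[:, None, None] * (frames - frames.mean(axis=0))).sum(axis=0)
    return num / (dt ** 2).sum()
```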
## 3 Data Analysis
In this section, we describe how we perform background subtraction,
calibration, photometric estimation, astrometric registration, and spectral
extraction from the LRS-observed images.
### 3.1 Pixel response correction
We measure the relative pixel response (flat field) in the laboratory before
each flight (Arai et al., 2015). The second- and the third-flight data are
normally corrected with these laboratory flats. However, for the fourth flight
from the laboratory calibrations do not extend to the longest wavelengths
($\lambda\geq 1.4\micron$) because the slit mask shifted its position with
respect to the detector during the flight. We therefore use the second-flight
flat field to correct the relative response for the fourth-flight data, as
this measurement covers $\lambda>1.6\micron$. To apply this flat field, we
need to assume that the intrinsic relative pixel response does not vary
significantly over the flights. To check the validity of this assumption, we
subtract the second-flight flat image from the fourth-flight flat image for overlapping pixels and calculate the pixel response difference. We find that only 0.3 % of the pixels with response measured in both flats differ by more than $2\sigma$, where $\sigma$ is the standard deviation of the pixel response. Finally, we mask 0.06 % of the detector array to remove those pixels with known responsivity pathologies and
those prone to transient electronic events (Lee et al., 2010).
### 3.2 Calibration
For each flight, the absolute brightness and wavelength irradiance
calibrations have been measured in the laboratory in collaboration with the
National Institute of Standards and Technology. The details of these
calibrations can be found in Tsumura et al. (2013). The total photometric
uncertainty of the LRS brightness calibration is estimated to be $\pm 3$%
(Tsumura et al., 2013; Arai et al., 2015).
### 3.3 Background Removal
The raw image contains not only spectrally dispersed images of stars but also
the combined emission from zodiacal light $\lambda I_{\lambda}^{\rm ZL}$,
diffuse galactic light $\lambda I_{\lambda}^{\rm DGL}$, the extragalactic
background $\lambda I_{\lambda}^{\rm EBL}$, and instrumental effects $\lambda
I_{\lambda}^{\rm inst}$ (Leinert et al., 1998). The measured signal $\lambda
I_{\lambda}^{\rm meas}$ can be expressed as
$\lambda I_{\lambda}^{\rm meas}=\lambda I_{\lambda}^{\ast}+\lambda
I_{\lambda}^{\rm ZL}+\lambda I_{\lambda}^{\rm ISL}+\lambda I_{\lambda}^{\rm
DGL}+\lambda I_{\lambda}^{\rm EBL}+\lambda I_{\lambda}^{\rm inst},$ (1)
where we have decomposed the intensity from stars into a resolved component
$\lambda I_{\lambda}^{\ast}$ and an unresolved component arising from the
integrated light of stars below the sensitivity of the LRS $\lambda
I_{\lambda}^{\rm ISL}$. It is important to subtract the sum of all components
except $\lambda I_{\lambda}^{\ast}$ from the measured brightness to isolate
the emission from detected stars. At this point in the processing, we have
corrected for multiplicative terms affecting $\lambda I_{\lambda}^{\rm meas}$.
Dark current, which is the detector photocurrent measured in the absence of
incident flux, is an additional contribution to $\lambda I_{\lambda}^{\rm
inst}$. The stability of the dark current in the LRS has been shown to be 0.7 nW m$^{-2}$ sr$^{-1}$ over each flight, which is a negligible variation from the typical dark current (i.e., 20 nW m$^{-2}$ sr$^{-1}$; Arai et al. 2015). As a result, we
subtract the dark current as part of the background estimate formed below.
The relative brightnesses of the remaining background components are
wavelength-dependent, so an estimate for their mean must be computed along
constant-wavelength regions, corresponding to the vertical columns in Figure
1. Furthermore, because of the LRS’s large spatial PSF, star images can extend
over several pixels in the imaging direction and even overlap one another.
This complicates background estimation in pixels containing star images and
reduces the number of pixels available to estimate the emission from the
background components.
To estimate the background in those pixels containing star images, we compute
the average value of pixels with no star images along each column, as
summarized in Figure 2. We remove bright pixels that may contain star images,
as described in Arai et al. (2015). The spectral smile effect shown in Figure
1 introduces spectral curvature along a column. We estimate it causes an error
of magnitude $\delta\lambda/\lambda<10^{-2}$, which is small compared to the
spectral width of a pixel. Approximately half of the rows remain after this
clipping process; the fraction ranges from 45 % to 62 % depending on the
stellar density in each field. This procedure removes all stars with $J>13$,
and has a decreasing completeness above this magnitude (Arai et al., 2015).
To generate an interpolated background map, each candidate star pixel is
replaced by the average of nearby pixels calculated along the imaging
direction from the $\pm 10$ pixels on either side of the star image. We again
do not explicitly account for the spectral smile. This interpolated background
image is subtracted from the measured image, resulting in an image containing
only bright stellar emission. The emission from faint stars, and from bright stars that only inefficiently illuminate a slit, contributes to $\lambda I_{\lambda}^{\rm ISL}$ and is naturally removed in this process.
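A simplified sketch of this estimate (ignoring the spectral smile, as above; the window size and mask construction are assumptions) replaces each masked star pixel by the mean of unmasked neighbors within $\pm 10$ pixels along the imaging direction:

```python
import numpy as np

def background_map(image, star_mask, half_window=10):
    # image: (rows, cols); each column is a constant-wavelength region
    bg = image.copy()
    for c in range(image.shape[1]):
        col, mask = image[:, c], star_mask[:, c]
        for r in np.where(mask)[0]:           # pixels flagged as star light
            lo = max(0, r - half_window)
            hi = min(len(col), r + half_window + 1)
            clean = col[lo:hi][~mask[lo:hi]]  # unmasked neighbors only
            bg[r, c] = clean.mean() if clean.size else col[~mask].mean()
    return bg
```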
### 3.4 Star Selection
The bright lines dispersed in the spectral direction in the background-
subtracted images are candidate star spectra. To calculate the spectrum of
candidate sources, we simply isolate individual lines of emission and map the
pixel values onto the wavelength using the ground calibration. However, this
procedure is complicated both by the extended spatial PSF of the LRS and by
source confusion.
To account for the size of the LRS spatial PSF (FWHM $\sim$1.2 pixels) as well
as optical distortion from the prism that spreads the star images slightly
into the imaging direction, we sum five rows of pixels in the imaging
direction for each candidate star. Since the background emission has already
been accounted for, this sum converges to the total flux as the number of
summed rows is increased. By summing five rows, we capture $>99.9$% of a
candidate star’s flux. The wavelengths of the spectral bins are calculated
from the corresponding wavelength calibration map in the same way.
From these spectra, we can compute synthetic magnitudes in the $J$- and $H$-bands, which facilitate comparison to Two-Micron All-Sky Survey (2MASS) measurements. We first convert surface brightness in nW m$^{-2}$ sr$^{-1}$ to flux in nW m$^{-2}$ Hz$^{-1}$, and then integrate the monochromatic intensity over the 2MASS band, applying the filter transmissivity of the $J$- and $H$-bands (Cohen et al., 2003). To determine the appropriate zero magnitude, we integrate the $J$- and $H$-band intensity of Vega’s spectrum (Bohlin & Gilliland, 2004) with the same filter response. The $J$- and $H$-band magnitudes of each source are then
calculated, allowing both flux and color comparisons between our data and the
2MASS catalog.
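Schematically (a sketch assuming the star spectrum and Vega template are resampled onto a common wavelength grid; not the exact pipeline code), the synthetic Vega magnitude follows from two filter-weighted integrals:

```python
import numpy as np

def synthetic_vega_mag(wl, f_nu_star, f_nu_vega, filt_wl, filt_t):
    # interpolate the 2MASS filter transmissivity onto the spectrum grid
    t = np.interp(wl, filt_wl, filt_t, left=0.0, right=0.0)
    f_star = np.trapz(f_nu_star * t, wl) / np.trapz(t, wl)  # band-averaged flux
    f_vega = np.trapz(f_nu_vega * t, wl) / np.trapz(t, wl)  # zero point
    return -2.5 * np.log10(f_star / f_vega)
```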
Candidate star spectra may be comprised of the blended emission from two or
more stars, and these must be rejected from the catalog. Such blends fall into
one of two categories: (i) stars that are visually separate but are close
enough to share flux in a 5 pixel-wide photometric aperture, and (ii) stars
that are close enough that their images overlap so as to be indistinguishable.
We isolate instances of case (i) by comparing the fluxes calculated by summing
both three and five rows along the imaging direction for each source. If the
magnitude or $J-H$ color difference between the two apertures is larger than
the statistical uncertainty (described in Section 3.6), we remove those
spectra from the catalog. To find instances of case (ii), we use the 2MASS
star catalog registered to our images using the procedure described in Section
3.5. Candidate sources that do not meet the criteria presented below are
rejected.
To ensure the catalog spectra are for isolated stars rather than for
indistinguishable blends, we impose the following requirements on candidate
star spectra: (i) each candidate must have $J<11$; (ii) the $J$-band magnitude
difference between the LRS candidate and the matched 2MASS counterpart must be
$<1.5$; (iii) the $J-H$ color difference between the LRS candidate star and
the matched 2MASS counterpart must be $<0.3$; and (iv) among the candidate
2MASS counterparts within the 500$\arcsec$ ($=6$ pixel) radius of a given LRS
star, the second-brightest 2MASS star must be fainter than the brightest one
by more than 2 mag at the J band. Criterion (i) excludes faint stars that may
be strongly affected by residual backgrounds, slit mask apodization, or source
confusion. The second and third criteria mitigate mismatching by placing
requirements on the magnitude and color of each star. In particular, the $J-H$
color of a source does not depend on the slit apodization or the position in
image space (see Figure 3), so any significant change in $J-H$ color as the
photometric aperture is varied suggests that more than a single star could be
contributing to the measured brightness. Finally, it is possible that two
stars with similar $J-H$ colors lie close to each other, so the last criterion
is applied to remove stars for which equal-brightness blending is an issue.
Approximately one in three candidate stars fails criterion (iv). The number of
candidate stars rejected at each criterion is described in Table 2.
In addition, three of the LRS candidate stars are identified as variables in the SIMBAD database (http://simbad.u-strasbg.fr/simbad/). We also identify two
stars as binary and multiple-star systems as well as four high proper motion
stars. Through these stringent selection requirements, we conservatively
include only the spectra of bright, isolated stars in our catalog. Finally,
105 star spectra survive all the cuts, and the corresponding stars are
selected as catalog members.
### 3.5 Astrometry
We match the synthesized LRS $J$, $H$, and $J-H$ information with the 2MASS
point source catalog (Skrutskie et al., 2006) to compute an astrometric
solution for the LRS pointing in each sky image. This is performed in a
stepwise fashion by using initial estimates for the LRS’s pointing to solve
for image registration on a fine scale.
As a rough guess at the LRS pointing, we use information provided by the
rocket’s attitude control system (ACS), which controls the pointing of the
telescopes (Zemcov et al., 2013). This provides an estimated pointing solution
that is accurate within 15 $\arcmin$ of the requested coordinates. However,
since the ACS and the LRS are not explicitly aligned to one another, a finer
astrometric registration is required to capture the pointing of the LRS to
single-pixel accuracy.
To build a finer astrometric solution, we simulate images of each field in the
2MASS J-band using the positional information from the ACS, spatially
convolved to the LRS PSF size. Next, we apodize these simulated 2MASS images
with the LRS slit mask, compute the slit-masked magnitudes of three reference
stars, and calculate the $\chi^{2}$ statistic using
$\chi^{2}_{p,q}=\sum_{i}\left(\frac{F_{LRS,i}-F_{2MASS,i}}{\sigma_{LRS,i}}\right)^{2},$
(2)
where index $i$ represents each reference star and subscripts $p$ and $q$ index the horizontal and vertical positions of the slit mask, respectively. $F_{LRS,i}$
and $F_{2MASS,i}$ are the fluxes in the LRS and 2MASS $J$-band, and
$\sigma_{LRS,i}$ is the statistical error of the LRS star (see Section 3.6).
The minimum $\chi^{2}$ gives the most likely astrometric position of the slit
mask. Since, on average, there are around five bright stars with $J<9$ per
field, spurious solutions are exceedingly unlikely, and all fields give a
unique solution.
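The registration can be sketched as a brute-force grid search over slit-mask offsets $(p, q)$; `simulate_masked_fluxes` below is a hypothetical stand-in for the apodized 2MASS image simulation described above.

```python
import numpy as np

def register_slit_mask(F_lrs, sigma_lrs, simulate_masked_fluxes,
                       p_range, q_range):
    # F_lrs, sigma_lrs: fluxes and errors of the three reference stars
    best, best_chi2 = None, np.inf
    for p in p_range:
        for q in q_range:
            F_model = simulate_masked_fluxes(p, q)  # slit-masked 2MASS fluxes
            chi2 = np.sum(((F_lrs - F_model) / sigma_lrs) ** 2)
            if chi2 < best_chi2:
                best, best_chi2 = (p, q), chi2
    return best, best_chi2  # most likely slit-mask position
```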
Using this astrometric solution, we can assign coordinates to the rest of the
detected LRS stars. We estimate that the overall astrometric error is
120$\arcsec$ by computing the mean distance between the LRS and 2MASS
coordinates of all matched stars. The error corresponds to 1.5 times the pixel
scale. We check the validity of the astrometric solutions by comparing the
colors and fluxes between the LRS and matched 2MASS stars. In Figures 3 and 4,
we show the comparison of the $J-H$ colors and fluxes of the cross-matched
stars in each field. Here, we multiply the LRS fluxes in the $J$- and $H$-bands by 2.22 and 2.17, respectively, to correct for the slit apodization. The derivation of the correction factors is described in Section 5. On the whole, they
match well within the error range.
### 3.6 Spectral Error Estimation
Even following careful selection, the star spectra are subject to various
kinds of uncertainties and errors, including statistical uncertainties, errors
in the relative pixel response, absolute calibration errors, wavelength
calibration errors, and background subtraction errors.
Statistical uncertainties in the spectra can be estimated directly from the
flight data. We calculate the $1\sigma$ slope error from the line fit (see
Section 2) as we generate the flight images; this error constitutes the
estimate for the statistical photometric uncertainty for each pixel. In this
statistical error, we include contributions from the statistical error in the
background estimate and the relative pixel response. The error in the
background signal estimate is formed by computing the standard deviation of
the $\pm$10 pixels along the constant-$\lambda$ direction for each pixel to
match the background estimate region. This procedure captures the local
structure in the background image, which is a reasonable measure of the
variation we might expect over a photometric aperture. Neighboring pixels in
the wavelength direction have extremely covariant error estimates in this
formulation, which are acceptable since the flux measurements are also
covariant in this direction. A statistical error from the relative pixel
response correction is applied by multiplying 3% of the relative response by
the measured flux in each field (Arai et al., 2015). To compute the total
statistical error, each constituent error is summed in quadrature for each
pixel.
Several instrumental systematic errors are present in these measurements,
including those from wavelength calibration, absolute calibration, and
relative response correction. In this work, we do not explicitly account for
errors in the wavelength calibration, as the variation is $\pm$ 1 nm over 10
constant-wavelength pixels, which is $<0.1R$. In all flights, an absolute calibration error of $<$3% is applied (Arai et al., 2015). For the longest-wavelength regions ($\lambda > 1.6\,\micron$) of the fourth-flight data, which are not covered even by the second-flight flat, we could not perform a flat correction. Instead, we apply a systematic error amounting to 5.3% of the measured sky brightness. This error is estimated from pixels in the short-wavelength regions ($\lambda < 1.4\,\micron$) of the fourth-flight flat: we calculate the deviations from unity for those pixels, whose mean is 5.3%. The linear
sum of systematic errors is then combined with statistical error in
quadrature.
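The combination rule stated above reduces to a few lines (a sketch; the array shapes are assumed to broadcast):

```python
import numpy as np

def total_error(stat_err, sys_errs):
    """Linear sum of the systematic terms, combined in quadrature with
    the statistical error, as described in Section 3.6."""
    sys_total = np.sum(sys_errs, axis=0)        # linear sum of systematics
    return np.sqrt(stat_err**2 + sys_total**2)  # quadrature combination
```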
## 4 The Spectra
The 105 stellar spectra that result from this processing can be used to test
spectral type determination algorithms and study near-IR features that are
invisible from the ground. Despite the relatively low spectral resolution of
our stellar spectra, we identify several molecular bands, particularly for the
late-type stars. We present the $J-$band-normalized LRS spectra for each of
the catalog stars in Figure 5.
General information for each spectrum is summarized in Table 3 with the
corresponding star ID. All spectra are publicly available in electronic form
corresponding star ID. All spectra are publicly available in electronic form at http://astro.snu.ac.kr/$\sim$mgkim/. The spectra are presented without interstellar extinction corrections, since such a correction requires both a color index and the integrated Galactic extinction along the line of sight; without knowing the stars' distances, the appropriate correction cannot be determined. For the CIBER fields, the typical extinction ranges from 0.005 to 0.036 mag at the J-band, assuming an extinction coefficient R(J) of 0.72 (Yuan et al., 2013).
### 4.1 Spectral type determination
The star spectral types are determined by fitting known spectral templates to
the measured LRS spectra. We use the Infrared Telescope Facility (IRTF) and
Pickles (Pickles, 1998) templates for the SED fitting. The IRTF templates were observed with SpeX, a medium-resolution spectrograph (R $=$ 2000) installed on the IRTF. The template library contains spectra for 210 cool stars (F to M
type) with wavelength coverage from 0.8 to 2.5 $\micron$ (Cushing, 2005;
Rayner, 2009). The Pickles library is a synthetic spectral library that
combines spectral data from various observations to achieve wavelength
coverage from the UV (0.115 $\micron$) to the near-IR (2.5 $\micron$). It
contains 131 spectral templates for all star types (i.e., O to M type) with a
uniform sampling interval of 5 $\AA$.
To perform the SED fit, we degrade the template spectra to the LRS spectral
resolution using a mean box-car smoothing kernel corresponding to the slit
function of the LRS. Both the measured and template spectra are normalized to
the $J$-band flux. We calculate the flux differences between the LRS and
template spectra using
$\chi^{2}=\sum_{\lambda}\left(\frac{F_{LRS,\lambda}-F_{ref,\lambda}}{\sigma_{LRS,\lambda}}\right)^{2},$ (3)
where $F_{LRS,\lambda}$ and $F_{ref,\lambda}$ are the fluxes of the observed
and template spectra at wavelength $\lambda$ normalized at $J$-band and
$\sigma_{LRS,\lambda}$ is the statistical error of the observed spectrum. The
best-fitting spectral type is determined by finding the minimum $\chi^{2}$.
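A minimal sketch of this template-fitting step follows. The J-band window, the interpolation onto the LRS wavelength grid, and the data structures are assumptions; the box-car smoothing and the Equation (3) minimization follow the text.

```python
import numpy as np

def fit_spectral_type(wl, f_lrs, sigma_lrs, templates, j_band=(1.1, 1.4), width=5):
    """Return the name of the template minimizing Eq. (3).

    wl, f_lrs, sigma_lrs : LRS wavelength grid, fluxes, statistical errors
    templates            : list of (name, wl_t, flux_t), wl_t increasing
    width                : box-car width approximating the LRS slit function
    """
    in_j = (wl >= j_band[0]) & (wl < j_band[1])     # assumed J-band window
    norm = f_lrs[in_j].mean()
    f_obs, sig = f_lrs / norm, sigma_lrs / norm     # J-band normalization
    kernel = np.ones(width) / width                 # mean box-car kernel
    chi2 = []
    for _, wl_t, flux_t in templates:
        smooth = np.convolve(flux_t, kernel, mode="same")  # degrade to LRS R
        f_ref = np.interp(wl, wl_t, smooth)         # resample onto LRS grid
        f_ref = f_ref / f_ref[in_j].mean()          # J-band normalization
        chi2.append(np.sum(((f_obs - f_ref) / sig) ** 2))  # Eq. (3)
    best = int(np.argmin(chi2))
    return templates[best][0], chi2[best]
```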
No early-type (i.e., O, B, A) stars are found in our sample; all stars have
characteristics consistent with those of late-type stars (F and later).
Because the IRTF library has about twice the spectral type resolution of the
Pickles library, we provide the spectral type determined from the IRTF
template in Table 3. Since the IRTF library does not include a continuous set
of spectral templates, we observe discrepancies between the LRS and best-fit
IRTF templates, even though the $J-H$ colors are consistent between 2MASS and
the LRS within the uncertainties. The Pickles and IRTF fits are consistent
within the uncertainty in the classification ($\sim$ 0.42 spectral subtypes).
A color-color diagram for the star sample is shown in Figure 6. Although the
color-color diagram does not allow us to clearly discriminate between spectral
types, qualitatively earlier-type stars are located in the bluer region, while
later-type stars are located in the redder region, consistent with
expectations. The LRS stars closely follow the color-color distributions of typical 2MASS stars in the LRS fields, as indicated by the gray dots.
To estimate the error in our spectral type determination, we compare our
identifications with the SIMBAD database (Wenger et al., 2000), where 63 of
the 105 stars have prior spectral type determinations. Figure 7 shows the
spectral types determined from the IRTF fit versus those from the SIMBAD
database. The 1$\sigma$ error of type difference is estimated to be 0.59
spectral subtypes, which is comparable with those in other published works
(Gliese, 1971; Jaschek & Jaschek, 1973; Jaschek, M., 1978; Roeser, 1988; Houk
et al., 1999). The error can be explained with two factors: (i) the low
spectral resolution of the LRS and (ii) the SED template libraries, which do
not represent all star types.
Five stars are observed twice in different flights (BA2_5 and BB4_6, N2_6 and
N3_5, BA2_1 and BA3_4, BB2_1 and BB3_1, and BB2_4 and BB3_4; see Figure 8),
enabling us to investigate the interflight stability of the spectra. For BA2_5
and BB4_6, the spectral type is known to be F8, while our procedure yields F7V
and F1II from the second- and fourth-flight data, respectively. For N2_6 and
N3_5, the known type is K5 while we determine M0.5V for both flights. For
BA2_1 and BA3_4, the known type is F5 while we determine F7III and F2III-IV in
the second and third flights. For BB2_1 and BB3_1, the fitted types are
G8IIIFe5 and K4V for a K1-type star; the type of BB2_4/BB3_4 is not known, but both flights are fitted to F9V. The determined spectral types are
consistent within an acceptable error window, though the longer-wavelength
data exhibit large differences, which can be attributed to calibration error.
We present the spectra of each star from both flights in Table 3. This
duplication results in our reporting of 110 spectra in the catalog, even though only 105 individual stars are observed.
## 5 Discussion
We determined the spectral type of 105 stars as well as the associated typing
error (0.59 spectral subtypes) assessed by comparing the type against a set of
63 previously determined spectral types. Representative examples of the
measured spectra for different spectral types are shown in Figure 9. Absorption features are evident in these spectra, including the Ca II triplet and various CN molecular bands.
Since we observed stars above the earth's atmosphere, observations of the H2O molecular band are possible. However, CN and H2O cannot be distinguished at 1.4 $\micron$ since both have the same bandhead and appear in late-type stars (Wing & Spinrad, 1970). For example, the spectral features of
M2-M4 (super)giant stars observed by Stratoscope II, previously identified as
CN, were identified as H2O (Tsuji et al., 2000). Several subsequent
observations show clear evidence that water features exist even in K type
stars, requiring modifications of present stellar photosphere models (Tsuji et
al., 2000).
In our spectral catalog, most K and M type stars exhibit a broad absorption
band around 1.4 $\micron$. Although it is not possible to identify specific
molecular bands with our data, we cannot exclude the presence of H2O in the
spectra of these stars. Future mid-IR measurements at $6.3\micron$ would help
disentangle the source of the spectral features by removing the spectral
degeneracies between CN and H2O (Tsuji et al., 2001).
As these spectra are free from telluric contamination and the LRS is
calibrated against absolute irradiance standards (Arai et al., 2015), in
principle these measurements could be used as near-IR spectral standards.
However, our lack of knowledge of the instrument response function (IRF) on
the spectral plane complicates the use of these measurements for the absolute
photometric calibration of stars. Specifically, the LRS’s IRF depends on the
end-to-end optical properties of the instrument. Because we use a slit mask at
the focus of an optical coupler (Tsumura et al., 2013), the full IRF knowledge
of the focusing element of the optical coupler is difficult to disentangle
from other effects. As a result, we would need to know the precise IRF to
assign an absolute error estimate to an absolute calibration of the star
images. This response function was not characterized during ground testing.
Nevertheless, we consider it instructive to check the validity of the photometric results, i.e., whether the estimated magnitudes of the LRS stars are reasonable compared to previous measurements. We perform an empirical
simulation as follows. For each LRS star, we generate a point source image
with the flux of the 2MASS counterpart convolved to the LRS PSF. Instrumental
noise and source confusion from faint stars ($J>13$) based on the 2MASS stars
around a target star are also added. We measure the photometric flux of the
simulated star image in the same way as for the LRS stars as described in this
paper. An aperture correction is applied to the LRS stars, since stars that
are clipped by the slit mask will appear to have a reduced flux measurement.
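A sketch of one realization of this simulation is given below; the function signature and the faint-star confusion input are illustrative assumptions.

```python
import numpy as np

def simulate_lrs_star(flux_2mass, psf, slit_mask, noise_rms, confusion, rng):
    """Simulate one slit-masked star image and return its measured flux.

    psf       : normalized LRS point-spread function (2-D array)
    confusion : image of faint 2MASS stars (J > 13) around the target
    """
    img = flux_2mass * psf                         # source at the 2MASS flux
    img = img + confusion                          # faint-star confusion
    img = img + rng.normal(0.0, noise_rms, img.shape)  # instrumental noise
    return (img * slit_mask).sum()                 # photometry as for LRS stars

# the aperture correction is the reciprocal of the mean simulated/true ratio:
# rng = np.random.default_rng(0)
# ratios = [simulate_lrs_star(f, psf, mask, rms, conf, rng) / f for f in fluxes]
# correction = 1.0 / np.mean(ratios)
```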
Figures 10 and 11 show the ratios of the band-synthesized flux of each LRS
star to the flux of the corresponding 2MASS star with statistical errors. The
range explained by our simulations is illustrated as a color-shaded area. The
LRS stars fall within the expected flux range. Also, the flux ratios of the stars agree well between flights, validating the stability of the photometric calibrations across the three CIBER flights. The larger scatter for faint stars is caused by background noise, from both adjacent faint stars and the instrument.
The statistical J- and H-band flux errors are 3.89 % and 4.51 %, with
systematic errors of 2.98 % and 3.82 %. We conclude that the achievable
uncertainties on the absolute photometric amplitudes of these spectra are not
competitive with other measurements (e.g. the existing 2MASS J and H-band flux
errors are 1.57 % and 2.36 %, respectively).
The slit mask apodization correction ultimately limits the accuracy of our
absolute calibration measurement and can lead to subtle biases. However, by tying these spectra to precise external spectral measurements, we can improve the accuracy of the LRS stellar spectra. The European Space Agency's Gaia (Perryman et
al., 2001; Jordi et al., 2010) mission is a scanning all-sky survey that uses
a blue photometer (0.33$\micron$ $<$ $\lambda$ $<$ 0.68$\micron$) and a red
(0.64$\micron$ $<$ $\lambda$ $<$ 1.05$\micron$) one to cover 0.33$\micron$ to
1.05$\micron$ with spectral resolution similar to that of the LRS. Because the
Gaia photometers spectrally overlap with the LRS, we expect to eventually be
able to unambiguously correct for the slit mask apodization and achieve an
absolute flux calibration with less than 2 % accuracy over the full range
$0.4\leq\lambda\leq 1.6\,\mu$m for our 105 stars.
In addition, the data reduction procedure described here may be a useful guide
for the Gaia analysis. Since Gaia uses prism-based photometers for source detection, its data will show a nonlinear spatial variation of the constant-wavelength bands and flux losses from the finite window size, as in our
measurements. The background estimation will also require careful treatment
with precise estimation of the end-to-end Gaia PSF.
This work was supported by NASA APRA research grants NNX07AI54G, NNG05WC18G,
NNX07AG43G, NNX07AJ24G, and NNX10AE12G. Initial support was provided by an
award to J.B. from the Jet Propulsion Laboratory’s Director’s Research and
Development Fund. Japanese participation in CIBER was supported by KAKENHI
(20·34, 18204018, 19540250, 21340047, 21111004, and 26800112) from Japan
Society for the Promotion of Science (JSPS) and the Ministry of Education,
Culture, Sports, Science, and Technology. Korean participation in CIBER was
supported by the Pioneer Project from the Korea Astronomy and Space Science
Institute. M.G.K. acknowledges support from the Global PhD Fellowship Program
through the NRF, funded by the Ministry of Education (2011-0007760). H.M.L.
and M.G.L. were supported by NRF grant 2012R1A4A1028713. M.Z. and P.K.
acknowledge support from NASA postdoctoral program fellowships, and A.C.
acknowledges support from NSF CAREER awards AST-0645427 and NSF AST-1313319.
We thank the dedicated efforts of the sounding rocket staff at the NASA
Wallops Flight Facility and White Sands Missile Range and also thank Dr. Allan
Smith, Dr. Keith Lykke, and Dr. Steven Brown (NIST) for the laboratory
calibration of the LRS. This publication makes use of data products from the
2MASS, which is a joint project of the University of Massachusetts and the
Infrared Processing and Analysis Center/California Institute of Technology,
funded by the NASA and the NSF. This research has made use of the SIMBAD
database, operated at CDS, Strasbourg, France, and the SpeX library.
## References
* Arai et al. (2015) Arai, T., Matsuura, S., Bock, J., et al. 2015, ApJ, 806, 69
* Bock et al. (2006) Bock, J., Battle, J., Cooray, A., et al. 2006, New A Rev., 50, 215
* Bock et al. (2013) Bock, J., Sullivan, I., Arai, T., et al. 2013, ApJS, 207, 32
* Bohlin & Gilliland (2004) Bohlin, R. C., & Gilliland, R. L., 2004, AJ, 127, 3508 (BG)
* Cohen et al. (2003) Cohen, M., Wheaton, Wm. A., & Megeath, S. T. 2003, AJ, 126, 1090
* Cushing (2005) Cushing, M. C. 2005, ApJ, 623, 1115
* Fischer et al. (1998) Fisher, J., Baumback, M. M., Bowles, J. H., Grossmann, J. M., & Antoniades, J. A. 1998, Proc SPIE, 3438, 23
* Garnett & Forrest (1993) Garnett, J. D., & Forrest, W. J. 1993, Proc. SPIE, 1946, 395
* Gliese (1971) Gliese, W. 1971, Veroeffentlichungen des Astronomischen Rechen-Instituts Heidelberg, 24, 1
* Hauser & Dwek (2001) Hauser, M.G. & Dwek, E., 2001, ARA&A, 39, 249
* Houk et al. (1999) Houk, N., & Swift, C. 1999, University of Michigan Catalogue of Two-Dimensional Spectral Types for the HD Stars, Vol. 5 (Ann Arbor: Univ. Michigan)
* Jaschek, M. (1978) Jaschek, M. 1978, CDS Inf. Bull. 15, 121
* Jaschek & Jaschek (1973) Jaschek, C., & Jaschek, M. 1973, in IAU Symp. 50, Spectral Classification and Multicolour Photometry, ed. C. Fehrenbach & B. E. Westerlund (Dordrecht:Reidel), 43
* Jordi et al. (2010) Jordi, C., Gebran, M., Carrasco, J. M., et al. 2010, A&A, 523, A48
* Joyce et al. (1998) Joyce, R. R., Hinkle, K. H., Wallace, L., Dulick, M., & Lambert, D. L., 1998, AJ, 116, 2520
* Kessler et al. (1996) Kessler, M. F., Steinz, J. A., & Anderegg, M. E., et al. 1996, A&A, 315, L27
* Korngut et al. (2013) Korngut, P. M., Renbarger, T., Arai, T., et al. 2013, ApJS, 207, 34
* Lee et al. (2010) Lee, D. H., Kim, M. G., Tsumura, K., et al. 2010, Journal of Astronomy and Space Sciences, 27, 401
* Leinert et al. (1998) Leinert, Ch., Bowyer, S., Haikala, L. K., et al. 1998, A&AS, 127, 1L
* Madau & Pozzetti (2000) Madau, P. & Pozzetti, L., 2000, MNRAS, 312, L9-L15
* Matsumoto et al. (2005) Matsumoto, T., Matsuura, S., Murakami, H., et al. 2005, ApJ, 626, 31
* Matsuura et al. (1999) Matsuura, M., Yamamura, I., Murakami, H., Freund, M. M., & Tanaka, M. 1999, A&A, 348, 579
* Matsuura et al. (2016) Matsuura, S., Arai, T., Bock, J., et al. 2016, ApJ, submitted
* Meyer et al. (1998) Meyer, M. R., Edwards, S., Hinkle, K. H., & Strom, S. E., 1998, ApJ, 508, 397
* Murakami et al. (1996) Murakami, H., Freund, M. M., Ganga, K., et al. 1996, PASJ, 48, L41
* Peterson et al. (2008) Peterson, D. E., Megeath, S. T., Luhman, K. L., et al. 2008, ApJ, 685, 313
* Perryman et al. (2001) Perryman, M. A. C., de Boer, K. S., Gilmore, G., et al. 2001, A&A, 369, 339
* Pickles (1998) Pickles, A. J. 1998, PASP, 110, 863
* Rayner (2009) Rayner, J. T. 2009, ApJS, 185, 289
* Roeser (1988) Roeser, S. & Bastian, U., 1988, A&A, 74, 449
* Russell (1934) Russell, H. N. 1934, ApJ, 79, 317
* Schlegel (1998) Schlegel, D. J. 1998, ApJ, 500, 525
* Skrutskie et al. (2006) Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, AJ, 131, 1163S
* Sorahana & Yamamura (2014) Sorahana, S., Yamamura, I. 2014, ApJ, 793, 47
* Tsumura et al. (2010) Tsumura, K., Battle, J., Bock, J., et al. 2010, ApJ, 719, 394
* Tsumura et al. (2013) Tsumura, K., Arai, T., Battle, J., et al. 2013, ApJS, 207, 33
* Tsuji et al. (1997) Tsuji, T., Ohnaka, K., Aoki, W., & Yamamura, I. 1997, A&A, 320, L1
* Tsuji et al. (2000) Tsuji, T. 2000, ApJ, 538, 801
* Tsuji et al. (2001) Tsuji, T. 2001, A&A, 376, L1
* Tsuji et al. (2015) Tsuji, T. 2015, PASJ, 67, 26T
* Wenger et al. (2000) Wenger, M., Ochsenbein, F., Egret, D., et al. 2000, A&AS, 143, 9
* Wing & Spinrad (1970) Wing, R. F., & Spinrad, H. 1970, ApJ, 159, 973
* Woolf et al. (1964) Woolf, N. J., Schwarzschild, M., & Rose, W. K. 1964, ApJ, 140, 833
* Yuan et al. (2013) Yuan, H. B., Liu, X. W., & Xiang, M. S. 2013, MNRAS, 430, 2188
* Zemcov et al. (2013) Zemcov, M., Arai, T., Battle, J., et al. 2013, ApJS, 207, 31
Figure 1: An example CIBER-LRS image toward the NEP field. The five
illuminated columns are dispersed spectra from the five slits of the LRS, and
the bright horizontal lines in each column are images of individual stars. As
an example, we highlight a single horizontal light trail by a red box; this is
the light from a single star dispersed from 0.7 to 2.0 $\micron$. The bright
dots are pixels hit by cosmic rays. The yellow boxes highlight representative
examples of stellar spectra distorted by the prism. Note that the distortion
direction is different between the upper and lower parts of the image, and the
distortion becomes negligible at the center line of the image.
Figure 2: Flow chart of the background image construction. (a) Same as Figure
1. The red box indicates the set of rows to be averaged. (b) Histogram of
averaged values for each row. The average values for each slit are drawn in different colors. (c) Image after iterative sigma clipping of bright rows from
(b). The red box indicates the size of $\pm$ 10 pixels that are averaged. (d)
Reconstructed background image including all instrumental noise and undetected
faint stars.
Figure 3: LRS J-H color comparison with cross-matched 2MASS J-H color. Each
color corresponds to a different flight. The dashed line shows a linear fit,
exhibiting a slight systematic offset from unity. The J-H colors of LRS stars
are conserved regardless of the slit apodization effect.
Figure 4: The 2MASS J- and H-band fluxes are shown as a function of the LRS J-
and H-band. Each color represents the data obtained on a different flight.
The slit apodization effect is corrected for all LRS stars. Correction factors are derived from the slit simulation over the magnitude ranges covered by the LRS stars, as shown in Figures 10 and 11.
Figure 5: LRS spectra of stars identified in this survey. The blue curve
represents the IRTF template degraded to fit the observed LRS spectrum,
indicated by a red curve. All spectra are normalized at the J-band. The
original template (gray color) is superimposed for comparison. The LRS ID and
best-fit IRTF type are indicated at the upper right of each panel. (b)-(f) LRS spectra identified in this work; the color code is the same as in Figure 5.
Figure 6: Color-color diagram for all identified stars. The J-H and
K-H color information is from 2MASS, and the type information is from the IRTF
fit. The background gray dots indicate stars drawn from the 2MASS catalog of
each CIBER field. The colors represent different stellar types. The scatter of
types over the J-H color can be explained either by the noncontinuous IRTF
library or by uncertainties in spectral subclass.
Figure 7: Type comparison determined from the IRTF fit and the literature for
63 stars whose types are already known. The dashed and dotted lines represent
the 1$\sigma$ error and $\pm$1 spectral type, respectively. The colors
represent the different flights’ data. Two A-type stars, indicated by an
arrow, are fitted to F-type stars. Fit types based on the Pickles library also
give the same results.
Figure 8: Five stars are serendipitously observed in two independent flights.
Each panel shows two spectra extracted from each flight. Top left panel: 2nd
flight (BA2_5), 4th flight (BB4_6). Top right panel: 2nd flight (N2_6), 3rd
flight (N3_5). Middle left panel: 2nd flight (BA2_1), 3rd flight (BA3_4).
Middle right panel: 2nd flight (BB2_1), 3rd flight (BB3_1). Bottom left panel:
2nd flight (BB2_4), 3rd flight (BB3_4). The large discrepancies arise from
calibration error above 1.6 $\micron$ but show consistency of in-flight
calibration below 1.6 $\micron$.
Figure 9: Representative examples of LRS spectra from this work. The color
code is the same as that in Figure 5. F, G, K, and M stellar types are shown
in each panel. Compared to other types, a typical F-type spectrum (top left
panel) does not show any obvious absorption features across the wavelength
range. We identified several features in our LRS spectra that correspond to
typical absorption lines in the near-IR (i.e., CaII with bandhead at 0.85
$\micron$, CN with bandhead at 0.95, 1.15, and 1.5 $\micron$). The strongest
feature in the F-type stars (top left) is the CaII triplet line, indicated
with an arrow at 0.85 $\micron$. From types later than G (top right), CN bands
appear with bandheads at 1.1, 0.91, 0.94, and 1.4$\micron$. We also identified
M-type stars, as indicated in the bottom right panel. Since M-type stars have
dominant molecular bands in their spectra, the identified lines are blended
with other strong molecular bands, such as TiO (bandhead at 0.82$\micron$),
ZrO (bandhead at 0.93$\micron$), FeH (bandhead at 0.99$\micron$), and H2O
(bandhead at 1.4$\micron$). The strength of each line depends on the spectral
type.
Figure 10: Flux ratios of all LRS stars to the matched 2MASS stars in the
J-band. Each color represents the stars observed from each flight. Since the
LRS flux is apodized by the slit mask, an aperture correction has been made to yield a ratio of unity in the ideal case (dotted line). The averaged original flux
ratio is drawn as a dashed line, and its reciprocal is used for aperture
correction. The color-shaded area shows the range of relation we expect from
an instrument simulation, representing the upper and lower bounds of the
absolute calibrations of the LRS.
Figure 11: Same as Figure 10 but for the H-band.
Table 1: Rocket-commanded coordinates for the observed fields. Arabic numbers after the hyphen in the Elat field names indicate the flight number.
Field | R.A. | Decl.
---|---|---
Elat10-2 | 15:07:60.0 | -2:00:00
Elat30-2 | 14:44:00 | 20:00:00
Elat30-3 | 15:48:00 | 9:30:00
Elat10-4 | 12:44:00 | 8:00:00
Elat30-4 | 12:52:00 | 27:00:00
NEP | 18:00:00 | 66:20:23.987
SWIRE | 16:11:00 | 55:00:00
BootesA | 14:33:54.719 | 34:53:2.396
BootesB | 14:29:17.761 | 34:53:2.396
Lockman | 10:45:12.0 | 58:00:00
DGL | 16:47:60.0 | 69:00:00
Table 2: Number of stars rejected by each criterion.
Flight | Total Candidates | Crit. (i) | Crit. (ii) | Crit. (iii) | Crit. (iv) | Total in Final Catalog
---|---|---|---|---|---|---
2nd flight | 198 | 15 | 43 | 8 | 145 | 38
3rd flight | 177 | 14 | 41 | 6 | 127 | 30
4th flight | 171 | 23 | 43 | 5 | 117 | 42
Table 3: Star catalog
Flight | Field | ID | Name | R.A.$^{a}$ | Decl.$^{b}$ | LRS J$^{c}$ | LRS H$^{d}$ | 2MASS J$^{e}$ | 2MASS H$^{f}$ | SIMBAD type$^{g}$ | Best-fit IRTF Type | $\chi^{2}$ | Note
---|---|---|---|---|---|---|---|---|---|---|---|---|---
| Elat10 | E102_1 | TYC5000-614-1 | 15:06:50.134 | -00:02:47.746 | 9.020 | 8.283 | 8.283 | 7.608 | K2 | K3III | 0.720 | …
| Elat10 | E102_2 | … | 14:59:05.568 | -01:08:23.294 | 9.095 | 8.279 | 8.350 | 7.484 | … | M0IIIb | 4.582 | …
| Elat10 | E102_3 | HD131553 | 14:54:20.898 | -01:52:19.938 | 9.576 | 9.241 | 8.673 | 8.472 | F0V | G0Ib-II | 0.522 | …
| Elat10 | E102_4 | HD134456 | 15:09:58.320 | -00:52:47.269 | 7.872 | 7.754 | 6.982 | 6.854 | F2III | F2III-IV | 0.076 | …
| Elat10 | E102_5 | TYC5001-847-1 | 15:14:43.328 | -01:31:43.763 | 9.940 | 9.633 | 9.226 | 8.898 | … | F8Ib | 0.416 | …
| Elat10 | E102_6 | BD-01-3038 | 15:14:15.481 | -01:37:09.268 | 8.273 | 7.633 | 7.477 | 6.862 | K0 | M0.5V | 0.462 | …
| Elat10 | E102_7 | HD133213 | 15:03:28.468 | -03:10:05.732 | 8.802 | 8.751 | 8.066 | 8.030 | A2III | F5II-III | 0.086 | …
| Elat30 | E302_1 | BD+22-2745 | 14:46:03.405 | 22:04:37.528 | 8.065 | 7.499 | 7.158 | 6.664 | G5 | K7V | 0.304 | …
| Elat30 | E302_2 | HD127666 | 14:32:02.149 | 22:04:47.600 | 8.645 | 8.396 | 7.866 | 7.676 | G5 | F8V | 0.045 | …
| Elat30 | E302_3 | HD131132 | 14:51:16.019 | 18:38:59.284 | 6.648 | 6.111 | 5.803 | 5.334 | K0 | G8IIIFe5 | 0.260 | …
| Elat30 | E302_4 | BD+19-2867 | 14:49:56.793 | 18:37:29.741 | 10.875 | 10.668 | 10.195 | 9.928 | G5 | G1II-IIIFe-1CH0.5 | 0.705 | …
| Elat30 | E302_5 | BD+19-2857 | 14:45:32.922 | 18:40:20.255 | 7.342 | 6.643 | 6.466 | 5.815 | K2 | M0V | 0.234 | …
| Elat30 | E302_6 | TYC1481-620-1 | 14:46:48.921 | 17:30:12.359 | 10.208 | 9.644 | 9.620 | 9.138 | … | K4V | 0.551 | …
| Elat30 | E302_7 | BD+18-2928 | 14:45:45.544 | 17:30:17.950 | 6.555 | 5.752 | 5.752 | 5.050 | M0 | K3IIIFe-0.5 | 1.488 | …
2nd | NEP | N2_1 | BD+68-954 | 17:43:43.944 | 68:24:26.593 | 10.067 | 9.742 | 9.394 | 9.168 | F5 | F0II | 0.064 | …
NEP | N2_2 | … | 17:38:56.867 | 66:22:12.587 | 10.726 | 10.216 | 10.440 | 9.937 | … | G5IIIa | 0.240 | …
NEP | N2_3 | BD+67-1039A | 17:52:45.953 | 67:00:12.935 | 8.925 | 8.587 | 8.571 | 8.130 | … | F8Ib | 0.045 | …
| NEP | N2_4 | TYC4208-116-1 | 17:49:23.407 | 65:28:22.606 | 7.646 | 6.837 | 6.840 | 6.047 | … | K4III | 0.807 | …
| NEP | N2_5 | BD+67-1067 | 18:20:50.229 | 67:55:01.776 | 8.199 | 7.694 | 7.430 | 6.939 | K0 | K3V | 0.119 | …
| NEP | N2_6$^{i}$ | HD166779 | 18:07:35.504 | 63:54:12.298 | 6.544 | 5.874 | 5.706 | 5.078 | K5 | M0.5V | 0.221 | …
| SWIRE | S2_1 | HD144245 | 16:01:58.920 | 56:36:03.496 | 6.921 | 6.238 | 6.173 | 5.505 | K5 | K3III | 0.201 | …
| SWIRE | S2_2 | HD144082 | 16:01:09.819 | 56:26:23.172 | 7.929 | 7.644 | 7.135 | 6.944 | F5 | G1VFe-0.5 | 0.051 | …
| SWIRE | S2_3 | HD147733 | 16:20:51.242 | 54:23:10.320 | 8.172 | 8.125 | 7.414 | 7.351 | A3 | F8IV | 0.059 | …
| SWIRE | S2_4 | HD234317 | 16:32:27.630 | 54:20:14.320 | 8.713 | 8.283 | 7.999 | 7.564 | G5 | K1V | 0.081 | …
| SWIRE | S2_5 | HD146736 | 16:15:15.896 | 52:01:48.338 | 8.929 | 8.618 | 8.140 | 7.884 | G5 | F9IIIa | 0.060 | …
| BootesA | BA2_1$^{j}$ | HD126878 | 14:27:13.534 | 34:43:19.996 | 8.631 | 8.385 | 7.783 | 7.640 | F5 | F7III | 0.046 | …
| BootesA | BA2_2 | TYC2557-719-1 | 14:41:46.727 | 33:34:23.452 | 10.800 | 10.557 | 10.045 | 9.783 | … | F2III-IV | 0.331 | …
| BootesA | BA2_3 | TYC2556-652-1 | 14:33:46.073 | 33:34:53.886 | 10.341 | 9.620 | 9.352 | 8.717 | K9V | M1.5V | 1.125 | high-proper-motion
| BootesA | BA2_4 | BD+34-2527 | 14:25:57.827 | 33:34:32.984 | 9.846 | 9.426 | 9.250 | 8.973 | G5III | G5V | 0.120 | …
| BootesA | BA2_5$^{h}$ | HD126210 | 14:23:24.060 | 33:34:19.099 | 8.480 | 8.274 | 7.653 | 7.492 | F8 | F7V | 0.039 | …
| BootesA | BA2_6 | BD+34-2522 | 14:21:54.490 | 33:34:35.580 | 7.311 | 6.514 | 6.307 | 5.545 | K5 | K3IIIFe-0.5 | 0.584 | …
| BootesA | BA2_7 | … | 14:41:50.085 | 32:24:33.790 | 10.848 | 10.330 | 10.178 | 9.587 | … | M2V | 1.521 | …
| BootesA | BA2_8 | TYC2553-127-1 | 14:29:10.917 | 32:27:40.871 | 10.252 | 9.490 | 9.130 | 8.483 | … | K2III | 1.255 | …
| BootesB | BB2_1$^{k}$ | TYC2560-1157-1 | 14:38:39.909 | 35:31:13.224 | 9.347 | 8.799 | 8.611 | 8.100 | K1 | G8IIIFe5 | 0.143 | …
| BootesB | BB2_2 | BD+36-2489 | 14:24:52.634 | 35:32:12.714 | 9.026 | 8.530 | 8.773 | 8.484 | G5 | G7IV | 0.107 | …
| BootesB | BB2_3 | BD+32-2490 | 14:34:03.366 | 32:06:02.588 | 9.640 | 9.089 | 8.835 | 8.414 | K0 | G8IIIFe1 | 0.127 | …
| BootesB | BB2_4$^{l}$ | BD+31-2630 | 14:33:01.264 | 30:56:33.554 | 10.240 | 9.793 | 9.504 | 9.246 | … | F9V | 0.336 | …
| BootesB | BB2_5 | TYC2553-961-1 | 14:24:21.497 | 30:58:03.684 | 10.323 | 9.713 | 9.351 | 8.864 | … | G8IIIFe1 | 0.580 | …
| Elat30 | E303_1 | BD+11-2874 | 15:52:08.230 | 10:52:28.103 | 7.882 | 7.169 | 6.692 | 6.012 | K5V | M0.5V | 0.330 | spectroscopic binary
| Elat30 | E303_2 | HD141631 | 15:49:47.057 | 10:48:24.520 | 8.251 | 7.922 | 7.555 | 7.096 | K2 | G4O-Ia | 0.206 | …
| Elat30 | E303_3 | TYC947-300-1 | 15:50:53.577 | 09:41:15.828 | 10.379 | 9.841 | 9.861 | 9.310 | … | K1IIIFe-0.5 | 0.595 | …
| Elat30 | E303_4 | HD141531 | 15:49:16.496 | 09:36:42.408 | 7.718 | 7.052 | 6.971 | 6.337 | K | M1V | 0.089 | …
| NEP | N3_1 | HD164781 | 17:57:03.647 | 68:49:19.744 | 8.948 | 8.601 | 7.733 | 7.423 | K0 | G8V | 0.076 | …
| NEP | N3_2 | TYC4428-1122-1 | 17:54:46.231 | 68:06:42.016 | 9.753 | 9.250 | 9.009 | 8.353 | … | K1IIIbCN1.5Ca1 | 0.629 | …
| NEP | N3_3 | BD+67-1050 | 18:06:45.898 | 67:50:40.686 | 8.273 | 7.722 | 7.485 | 6.976 | K2 | K1IIIbCN1.5Ca1 | 0.134 | …
| NEP | N3_4 | BD+65-1248 | 18:12:21.398 | 65:36:17.381 | 7.214 | 6.492 | 6.359 | 5.635 | K5 | K5III | 0.919 | …
| NEP | N3_5$^{i}$ | HD166779 | 18:07:35.504 | 63:54:12.298 | 6.711 | 6.077 | 5.706 | 5.078 | K5 | M0.5V | 0.455 | …
| NEP | N3_6 | TYC4226-812-1 | 18:25:26.020 | 66:00:38.783 | 9.655 | 9.417 | 8.924 | 8.714 | … | F8Ia | 0.293 | …
| SWIRE | S3_1 | BD+55-1802 | 16:01:45.359 | 54:48:40.882 | 10.325 | 10.033 | 9.570 | 9.330 | G0 | G2IV | 0.392 | …
| SWIRE | S3_2 | TYC3870-1085-1 | 15:54:21.929 | 53:36:47.786 | 10.417 | 10.198 | 9.554 | 9.300 | … | G2II-III | 0.871 | …
3rd | SWIRE | S3_3 | TYC3870-366-1 | 15:53:29.099 | 53:28:36.008 | 8.669 | 8.062 | 7.928 | 7.281 | … | M1V | 0.285 | …
SWIRE | S3_4 | TYC3877-704-1 | 16:10:22.667 | 54:28:38.784 | 9.017 | 8.472 | 8.258 | 7.715 | … | K1IIIbCN1.5Ca1 | 0.239 | …
SWIRE | S3_5 | TYC3877-1592-1 | 16:01:43.031 | 53:06:25.855 | 10.233 | 9.746 | 9.566 | 9.077 | … | G9III | 0.136 | …
| SWIRE | S3_6 | TYC3878-216-1 | 16:25:31.829 | 53:25:25.453 | 9.065 | 8.709 | 8.364 | 8.020 | … | G1IIICH1 | 0.214 | …
| Lockman | L3_1 | V*DM-UMa | 10:55:43.521 | 60:28:09.613 | 7.975 | 7.476 | 7.194 | 6.621 | K0III | G2Ib | 0.233 | …
| Lockman | L3_2 | HD94880 | 10:58:21.518 | 59:16:53.422 | 7.787 | 7.482 | 6.900 | 6.629 | G0 | G0Ib-II | 0.115 | …
| Lockman | L3_3 | HD92320 | 10:40:56.905 | 59:20:33.065 | 7.947 | 7.662 | 7.148 | 6.852 | G0 | F2-F5Ib | 0.109 | high-proper-motion
| Lockman | L3_4 | HD237955 | 10:57:44.114 | 58:10:01.103 | 9.799 | 9.619 | 8.705 | 8.508 | G0 | F5III | 0.038 | …
| Lockman | L3_5 | TYC3827-847-1 | 11:01:59.570 | 56:58:11.510 | 9.498 | 9.094 | 8.816 | 8.279 | … | M2V | 0.479 | …
| Lockman | L3_6 | HD237961 | 11:00:12.007 | 56:59:49.481 | 9.267 | 9.049 | 8.495 | 8.271 | G0 | G1VFe-0.5 | 0.304 | …
| BootesA | BA3_1 | BD+362491 | 14:26:05.241 | 35:50:00.776 | 8.897 | 8.498 | 8.095 | 7.676 | K0 | G3II | 0.515 | …
| BootesA | BA3_2 | HD128368 | 14:35:32.053 | 34:41:11.540 | 7.436 | 6.789 | 6.530 | 5.942 | K0 | M0.5V | 0.215 | …
| BootesA | BA3_3 | BD+35-2576 | 14:32:31.567 | 34:42:09.493 | 9.291 | 8.834 | 9.058 | 8.737 | K0 | F5Ib-G1Ib | 0.143 | …
| BootesA | BA3_4$^{j}$ | HD126878 | 14:27:13.534 | 34:43:19.996 | 9.190 | 9.091 | 7.783 | 7.640 | F5 | F2III-IV | 0.060 | …
| BootesB | BB3_1$^{k}$ | TYC2560-1157-1 | 14:38:39.909 | 35:31:13.224 | 9.416 | 8.918 | 8.611 | 8.100 | K1 | K4V | 0.124 | …
| BootesB | BB3_2 | BD+32-2503 | 14:41:07.455 | 32:04:45.095 | 9.628 | 9.449 | 8.853 | 8.624 | … | F8Ib | 0.198 | …
| BootesB | BB3_3 | BD+32-2456 | 14:18:52.718 | 32:06:31.003 | 9.191 | 8.531 | 7.992 | 7.444 | K2III | K0.5IIICN1 | 0.534 | …
| BootesB | BB3_4$^{l}$ | BD+31-2630 | 14:33:01.264 | 30:56:33.554 | 10.170 | 9.940 | 9.504 | 9.246 | … | F9V | 0.438 | …
| Elat10 | E104_1 | HD111645 | 12:50:42.449 | 08:52:30.238 | 8.908 | 8.691 | 8.124 | 7.920 | F8 | F7III | 0.041 | …
| Elat10 | E104_2 | BD+11-2491 | 12:46:07.870 | 11:09:25.744 | 10.229 | 9.992 | 9.486 | 9.201 | F8 | F2-F5Ib | 0.162 | …
| Elat10 | E104_3 | … | 12:41:28.720 | 10:52:57.907 | 10.959 | 10.368 | 10.599 | 10.096 | … | K5V | 0.702 | …
| Elat10 | E104_4 | HD110777 | 12:44:20.102 | 06:51:16.916 | 8.442 | 8.212 | 7.663 | 7.418 | G0 | F8Ia | 0.148 | …
| Elat10 | E104_5 | BD+10-2440 | 12:33:51.920 | 09:31:54.156 | 8.139 | 7.372 | 6.662 | 5.860 | … | K3II-III | 1.012 | …
| Elat10 | E104_6 | HD109824 | 12:37:48.044 | 04:59:07.195 | 6.860 | 6.296 | 6.092 | 5.542 | K0 | K0.5IIb | 0.570 | …
| Elat30 | E304_1 | … | 13:02:54.144 | 26:23:27.762 | 8.966 | 8.441 | 8.267 | 7.756 | … | K1IIIbCN1.5Ca1 | 0.478 | …
| Elat30 | E304_2 | BD+27-2207 | 13:02:50.671 | 26:50:00.402 | 10.924 | 10.630 | 10.141 | 9.899 | F8 | F8Ib | 0.262 | …
| Elat30 | E304_3 | TYC1995-264-1 | 13:02:50.439 | 27:29:22.283 | 10.212 | 10.004 | 9.586 | 9.251 | … | G1VFe-0.5 | 0.121 | …
| Elat30 | E304_4 | BD+27-2197 | 12:57:45.577 | 27:01:51.600 | 10.562 | 10.374 | 9.873 | 9.672 | F5 | F2Ib | 0.098 | …
| Elat30 | E304_5 | TYC1995-1123-1 | 12:57:25.736 | 28:18:25.992 | 9.837 | 9.006 | 8.997 | 8.229 | … | M1.5V | 0.608 | …
| Elat30 | E304_6 | LP322-154 | 12:57:04.818 | 29:30:36.860 | 10.454 | 9.808 | 9.740 | 9.096 | K5V | M0.5V | 1.460 | high-proper-motion
| Elat30 | E304_7 | TYC2532-820-1 | 12:56:45.236 | 30:44:22.556 | 10.678 | 10.006 | 9.838 | 9.324 | K1V | M1V | 0.344 | …
| NEP | N4_1 | BD+68-951 | 17:38:51.760 | 68:13:16.536 | 9.137 | 8.449 | 7.942 | 7.438 | K0 | K1.5IIIFe-0.5 | 0.273 | multiple-star
| NEP | N4_2 | HD161500 | 17:41:10.318 | 65:13:10.301 | 7.442 | 6.860 | 6.633 | 6.119 | K2 | K1IIIbCN1.5Ca1 | 0.312 | …
| NEP | N4_3 | G227-20 | 17:52:11.850 | 64:46:08.720 | 9.077 | 8.391 | 8.249 | 7.615 | M0.5V | M1.5V | 0.449 | high-proper-motion
4th | NEP | N4_4 | TYC4208-1599-1 | 17:52:05.421 | 64:37:15.827 | 10.278 | 9.725 | 9.929 | 9.259 | … | M2V | 0.486 | …
NEP | N4_5 | BD+64-1227A | 17:52:17.178 | 64:14:16.411 | 8.816 | 8.500 | 8.400 | 8.125 | … | F8Ib | 0.046 | …
NEP | N4_6 | TYC4213-161-1 | 18:03:24.923 | 67:12:41.681 | 10.171 | 9.868 | 9.327 | 9.115 | … | F7III | 0.109 | …
| NEP | N4_7 | BD+66-1074 | 18:03:15.008 | 66:20:29.069 | 7.609 | 6.866 | 6.739 | 6.046 | K5 | K3II-III | 1.262 | …
| NEP | N4_8 | HD170592 | 18:25:24.759 | 65:45:34.470 | 7.474 | 7.143 | 6.722 | 6.409 | K0 | G5V | 0.148 | …
| SWIRE | S4_1 | TYC3870-1026-1 | 15:55:16.319 | 54:45:12.510 | 10.127 | 9.564 | 9.332 | 8.829 | … | K3V | 0.261 | …
| SWIRE | S4_2 | TYC3496-1361-1 | 15:56:04.610 | 52:13:29.543 | 8.240 | 7.566 | 7.519 | 6.825 | … | K3III | 0.421 | …
| SWIRE | S4_3 | TYC3880-1133-1 | 16:03:15.627 | 56:02:35.210 | 8.711 | 7.821 | 7.791 | 6.995 | … | M2.5IIIBa0.5 | 2.347 | …
| SWIRE | S4_4 | TYC3877-484-1 | 16:03:12.065 | 54:44:27.658 | 9.047 | 8.361 | 7.846 | 7.288 | … | K2IIIFe-1 | 0.147 | …
| SWIRE | S4_5 | HD234308 | 16:26:05.554 | 52:18:08.266 | 8.652 | 8.101 | 7.932 | 7.407 | K0 | K1IIIFe-0.5 | 0.237 | …
| DGL | D4_1 | TYC4419-1623-1 | 16:14:22.875 | 69:55:54.455 | 10.093 | 9.624 | 9.419 | 8.810 | … | M2V | 0.373 | …
| DGL | D4_2 | TYC4419-1631-1 | 16:18:10.929 | 69:16:36.761 | 9.923 | 9.466 | 9.229 | 8.916 | … | K1V | 0.124 | …
| DGL | D4_3 | BD+67-943 | 16:29:52.210 | 66:47:45.154 | 9.390 | 9.120 | 8.606 | 8.417 | F8 | F8Ia | 0.110 | …
| DGL | D4_4 | TYC4196-2280-1 | 16:34:34.354 | 65:36:05.818 | 10.424 | 9.946 | 9.783 | 9.339 | … | G4V | 0.232 | …
| DGL | D4_5 | HD151286 | 16:40:37.776 | 70:34:14.772 | 7.110 | 6.668 | 6.237 | 5.794 | … | G3II | 0.070 | …
| DGL | D4_6 | BD+69-873 | 16:47:31.365 | 68:51:02.603 | 8.338 | 7.820 | 7.495 | 7.010 | K0 | G7.5IIIa | 0.111 | …
| DGL | D4_7 | HD154273 | 16:58:40.137 | 69:38:05.431 | 7.022 | 6.508 | 6.197 | 5.746 | K0 | G7.5IIIa | 0.106 | …
| DGL | D4_8 | TYC4424-1380-1 | 17:08:33.058 | 71:00:28.044 | 9.242 | 8.911 | 9.008 | 8.727 | … | G2IV | 0.109 | …
| DGL | D4_9 | TYC4421-2278-1 | 17:16:54.688 | 67:38:26.279 | 8.993 | 8.460 | 8.269 | 7.792 | … | K1IIIFe-0.5 | 0.174 | …
| BootesB | BB4_1 | TYC2557-870-1 | 14:40:08.540 | 34:40:29.669 | 10.107 | 9.545 | 9.249 | 8.768 | … | M2V | 0.331 | …
| BootesB | BB4_2 | HD128094 | 14:34:10.846 | 30:59:10.356 | 7.857 | 7.240 | 6.963 | 6.405 | K0 | K2III | 0.226 | …
| BootesB | BB4_3 | TYC2559-388-1 | 14:34:47.808 | 35:34:09.419 | 9.761 | 9.346 | 9.011 | 8.550 | G8V | G6III | 0.184 | …
4th | BootesB | BB4_4 | TYC2553-947-1 | 14:28:52.868 | 31:30:30.316 | 8.505 | 7.763 | 7.642 | 6.917 | … | K2III | 0.170 | …
BootesB | BB4_5 | V*KT-Boo | 14:29:02.513 | 33:50:38.929 | 8.699 | 8.271 | 7.846 | 7.465 | G | G0Ib-II | 0.074 | …
BootesB | BB4_6$^{h}$ | HD126210 | 14:23:24.060 | 33:34:19.099 | 8.764 | 8.749 | 7.653 | 7.492 | F8 | F1II | 0.194 | …
| BootesB | BB4_7 | TYC2549-413-1 | 14:23:23.452 | 34:33:24.854 | 9.399 | 8.885 | 8.510 | 7.947 | … | K1IIIbCN1.5Ca1 | 0.269 | …
$^{a,b}$ The J2000.0 right ascension (R.A.) and declination (Decl.) of the star, in sexagesimal format, from 2MASS data.
$^{c,d}$ Vega magnitudes measured by the LRS.
$^{e,f}$ Vega magnitudes from the matched 2MASS point source catalog.
$^{g}$ Spectral type given in the SIMBAD database.
$^{h,i,j,k,l}$ A star observed in two independent flights.
# Response-conditioned Turn-taking Prediction
Bing’er Jiang Erik Ekstedt Gabriel Skantze
Division of Speech, Music and Hearing, KTH Royal Institute of Technology
{binger, erikekst<EMAIL_ADDRESS>
###### Abstract
Previous approaches to turn-taking and response generation in conversational
systems have treated it as a two-stage process: First, the end of a turn is
detected (based on conversation history), then the system generates an
appropriate response. Humans, however, do not take the turn just because it is
likely, but also consider whether what they want to say fits the position. In
this paper, we present a model (an extension of TurnGPT) that conditions the
end-of-turn prediction on both conversation history and what the next speaker
wants to say. We found that our model consistently outperforms the baseline
model in a variety of metrics. The improvement is most prominent in two
scenarios where turn predictions can be ambiguous solely from the conversation
history: 1) when the current utterance contains a statement followed by a
question; 2) when the end of the current utterance semantically matches the
response. Treating the turn-prediction and response-ranking as a one-stage
process, our findings suggest that our model can be used as an incremental
response ranker, which can be applied in various settings.
## 1 Introduction
A fundamental component of a spoken dialog system (SDS) is turn-taking, i.e., deciding when to take the turn at appropriate places, without causing long response delays or interrupting the user. In other words, the system must be able to correctly identify when the user is yielding the turn and it is appropriate to make a response, and when the user is simply making a mid-utterance pause (Skantze, 2021). Traditionally, this has been done using a
simple silence threshold. However, silence is not a very good indicator of
turn-shifts and more modern approaches instead use various cues known to be
important in human-human turn-taking, such as lexico-syntactic cues, prosody,
or gaze (Gravano and Hirschberg, 2011; Ishii et al., 2016; Lala et al., 2019;
Ekstedt and Skantze, 2022).
Ekstedt and Skantze (2020) proposed TurnGPT, a transformer-based language
model that incrementally processes words in the user’s utterance and predicts
the probability of a turn-shift after each word. This is similar to the notion
of syntactic or pragmatic completion points that have been identified in
conversation analysis (Ford and Thompson, 1996). In their analysis of TurnGPT,
Ekstedt and Skantze (2020) found that the 20% of the model’s attention is
directed towards utterances earlier than the current one, indicating that it
is sensitive to pragmatic aspects of dialogue.
While such models are indeed a step forward, there is still an important
component missing that we will address in this paper. When humans make a
decision to take the turn, it is not just based on whether there are enough
turn-yielding cues in the interlocutor’s utterance. Sacks et al. (1974) use
the notion of transition-relevant places, or TRP, for places where a
transition could potentially take place (but does not have to). Thus, many
places for turn-shifts are highly optional. To partly address this problem,
Ishii et al. (2022) annotated the willingness of the next speaker to take the
turn, and built a model that could predict this willingness based on
multimodal cues.
Whether a turn-shift takes place or not also depends on the intention of the
next speaker, and what they want to say. For dialogue systems, this means that
the system should not automatically take the turn once the transition-
probability passes a certain threshold, and only then decide what to respond. Instead, the system should take the potential response into account
when deciding whether it is appropriate to take the turn or not.
We call this response-conditioned turn-taking prediction, which is illustrated
in Figure 1. In this paper, we investigate to what extent and under what
scenarios such response-conditioning would help to predict turn-shifts. We
present a model called RC-TurnGPT, which is an extension of TurnGPT.
Figure 1: Response-conditioned turn-taking prediction.
Note that the current study does not intend to address how and when the next
speaker comes up with what they would like to say. This depends of course on
the exact implementation of the dialogue system, which could for example be
response-ranking (Gao et al., 2020) or an intent-based planning approach (FAIR
et al., 2022). Regardless of this, the model proposed here could be used to
incrementally rank or score potential responses to see whether they fit well
from a turn-taking perspective.
## 2 Methods
TurnGPT is a unidirectional transformer-based language model (LM) optimized
through cross-entropy to predict the next token in a sequence. It is a pre-
trained GPT-2 (base) model (Radford et al., 2019), finetuned on unpunctuated
dialog corpora, with a special turn-shift token (TS) that delimits consecutive
turns. RC-TurnGPT is an extension of this model, by also conditioning the
prediction on the response.
While the RC-TurnGPT model is architecturally equivalent to TurnGPT, it
differs in the training objective through a simple data transformation. This
transformation permutes the ordering of turns, in an approach similar to the FIM pre-training objective of Bavarian et al. (2022). We consider turn-based
dialog sequences to consist of three parts: the context/history (H), the
current utterance (CU) and the next response (R). The task is to correctly
predict the location of the turn-shift token in the current utterance,
$CU_{i}$, given the history, $H_{i}$, and the next response, $R_{i}$, over all
samples $i$ in the dataset, $D$. The samples $i\in D$ are extracted by
applying a turn-based sliding window approach with a step size of 1 and a
window size of 3 turns.
However, instead of the uniform left-to-right next-token prediction task of regular LMs, the RC-TurnGPT model is trained on ordered sequences of {R, H, CU}, masking the loss over R and H so as to learn only over the CU turns. This enables the model to use information from both H and R while keeping the original left-to-right next-token prediction setup.
Finally, the TurnGPT model utilized three special tokens in addition to the
original GPT-2 vocabulary, the aforementioned TS token and two speaker tokens.
The speaker tokens are similar to positional embeddings and are added to the
word embeddings to encode the speaker identity over each word. Because of the
permuted ordering of the RC-TurnGPT setup we also include a fourth special
response-token that are added to the words of the response to distinguish them
from the actual context. Both the base model and the datasets were implemented
using Huggingface Wolf et al. (2020); Lhoest et al. (2021).
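To make the training objective concrete, a minimal sketch of the {R, H, CU} data transformation is shown below. It is a simplification under stated assumptions: the special-token strings, the use of a plain delimiter token in place of the added response embeddings described above, and the HuggingFace convention of masking labels with -100 are our own illustrative choices, not the authors' code.

```python
import torch

TS, RESP = "<ts>", "<resp>"   # illustrative special tokens

def make_rc_example(tokenizer, history, current_utt, response):
    """Build one {R, H, CU} training sequence with the loss masked to CU.

    The response R is placed first, then the history H, then the current
    utterance CU ending in the turn-shift token; labels are -100 (ignored
    by the cross-entropy loss) everywhere except over CU.
    """
    r_ids = tokenizer.encode(RESP + " " + response)
    h_ids = tokenizer.encode(" ".join(history))
    cu_ids = tokenizer.encode(current_utt + " " + TS)
    input_ids = r_ids + h_ids + cu_ids
    labels = [-100] * (len(r_ids) + len(h_ids)) + cu_ids   # learn only on CU
    return torch.tensor(input_ids), torch.tensor(labels)
```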
### 2.1 Data
We train RC-TurnGPT and the baseline TurnGPT on two types of data sets based
on Ekstedt and Skantze (2020): Assistant and Written Social. The former
consists of three task-oriented dialog corpora: Taskmaster Byrne et al.
(2019), MetaLWOZ Lee et al. (2019), and MultiWoz Zang et al. (2020). The
latter includes two corpora of human-human written dialogs: CuriosityDialogs Rodriguez et al. (2020) and DailyDialog Li et al. (2017). All
datasets are written dialogs with clearly defined turns. The resulting full
dataset contains 106,830 dialogs for training, 9,362 for validation, and 7,897
for test, with an average number of turns being 13.69.
### 2.2 Evaluation
To evaluate the models, we propose five turn-level metrics that measure turn-shift performance in various ways. The models are considered to make
a turn-shift prediction when the probability exceeds a certain threshold
optimized for performance over the validation split, for each model
independently.
First, we define turn-level accuracy (TL-Acc) to be the percentage of turns
where the turn-shift probability exceeds the threshold at, and only at, the
ground-truth end of turn. Second, the no response rate (NRR) is the percentage
of turns where the threshold is never exceeded and the model fails to make a
response. The third metric is defined to measure the barge-in rate (BR), the
percentage of turns where the models would make a turn-shift prediction before
the actual turn-shift.
We also investigate instances where the two models make different turn-taking
decisions to see how well the response would fit, using perplexity as a
measure. We use the TurnGPT model to calculate the average perplexity over the
response (R-PPL).
Lastly, we define the ordinal spike rate (OSR) to be the percentage of turns
where the probability is the greatest at the end of the turn. This metric does
not consider a threshold but simply measures how many times the highest
probability is located at the correct turn-shift location.
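As an illustration of how these threshold-based metrics relate to the per-word probabilities, a sketch is given below. The data layout (one probability array per turn, with the ground-truth end index) is an assumption; only the definitions of TL-Acc, NRR, BR, and OSR follow the text.

```python
import numpy as np

def turn_metrics(prob_seqs, end_idxs, threshold):
    """Compute TL-Acc, NRR, BR and OSR from per-word turn-shift probabilities.

    prob_seqs : list of 1-D arrays, P(turn-shift) after each word of a turn
    end_idxs  : index of the ground-truth end-of-turn word in each array
    """
    tl_acc = nrr = br = osr = 0
    for p, end in zip(prob_seqs, end_idxs):
        above = p >= threshold
        if above.any() and above.argmax() < end:
            br += 1                      # fired before the true turn-shift
        if not above.any():
            nrr += 1                     # never fired: no response made
        if above[end] and not above[:end].any():
            tl_acc += 1                  # fired at, and only at, the end
        if p.argmax() == end:
            osr += 1                     # highest probability at the end
    n = len(prob_seqs)
    return tl_acc / n, nrr / n, br / n, osr / n
```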
## 3 Results
### 3.1 Aggregate results
Table 1 shows that RC-TurnGPT performs better on all evaluation metrics, although the overall improvement is not large. While 55.77% turn-level accuracy may not seem very high, it should be noted that predictions that differ from the ground-truth turn-shift can also be valid in everyday conversations, especially in long utterances where several completion points are likely. While the threshold-based binary metric is low, the probability-based OSR is much higher, indicating that the model does tend to assign the highest probability at the true end of turn. Furthermore, the perplexity of the response also decreases, showing that when one or both of the two models make a mistake, the response fits better with the context at the turn-shifts RC-TurnGPT takes.
Metric | Turn-GPT | RC-TurnGPT
---|---|---
TL-Acc $\uparrow$ | 53.93% | 55.77%
NRR $\downarrow$ | 20.90% | 19.23%
BR $\downarrow$ | 25.17% | 24.75%
R-PPL $\downarrow$ | 1.923 | 1.918
OSR $\uparrow$ | 88.57% | 89.17%
Table 1: The turn-level accuracy (TL-Acc), no response rate (NRR), barge-in
rate (BR), response perplexity (R-PPL) and the ordinal spike rate (OSR)
performance for TurnGPT and RC-TurnGPT. The best performance is in bold.
### 3.2 Model analysis
In order to better understand when conditioning on the response helps turn-
shift prediction and when it does not, we proceed to analyse cases where only
RC-TurnGPT makes the correct prediction, and where both models are successful.
We extract all turns in the test set where TurnGPT makes a premature turn-shift prediction but RC-TurnGPT correctly predicts the end of the turn.
sort the turns by the difference in probability assigned by the two models at
the TurnGPT-predicted turn-shift. We then investigate the difference between
the top and bottom 1000 cases. By comparing these two subsets, we can better
understand when conditioning on the response makes the biggest difference. We
identified two scenarios which we hypothesized would be important: 1)
statement to question; 2) semantic matching.
#### Statement to question
refers to cases where the current utterance consists of at least one statement
and ends with a question. As there is more than one natural completion point, TurnGPT will be greedy, while RC-TurnGPT will take the response into consideration and choose a later completion point as the turn shift. Consider the
following dialogue in Figure 2 (Current Utterance plotted, Response in
caption):
Figure 2: Different turn-taking predictions: TurnGPT predicts the turn-shift
at the end of a statement; RC-TurnGPT predicts the end of a question.
Response: sure first of all it’s very important for you not to be late
Figure 2 shows that without conditioning on the response, TurnGPT spikes at an
early completion point interrupting the current speaker. However, as the
response clearly corresponds to an answer to a request, RC-TurnGPT waits until
the speaker finishes their request.
In order to quantify this effect, we use punctuation to calculate how often TurnGPT makes a mistake by missing a question. We use the top/bottom subsets and ask GPT-3 (model version "text-curie-001"; Brown et al., 2020) to insert punctuation over the ground-truth turns (advice in this example) and the
incomplete TurnGPT predicted turns (week in this example). We then calculate
the ratio of cases where the former ends with a question mark while the latter
does not. The top cases contain 36.3% statements to questions and the bottom
11.7%. The higher ratio in the top cases indicates that the RC-TurnGPT model
recognizes this pattern and uses the response conditioning to wait for the
appropriate moment to take the turn.
#### Semantic matching
refers to cases where the response semantically corresponds to the
specification made in the later parts of the current utterance. Consider the
dialogue in Figure 3:
Figure 3: Different turn-taking predictions: RC-TurnGPT’s prediction allows
closer semantic matching between current utterance and response. Response:
sure vietnam achieved an 8% gdp growth between 1990 and 1997
As the response clearly addresses the topic of economy, Figure 3 shows that
RC-TurnGPT would spike only after economy is specified, whereas TurnGPT has
two spikes at both places and would predict the turn shift after v-iet-nam. It
is important to note that while the response has no lexical overlap, the model
still manages to find the semantic correlation.
In order to investigate whether RC-TurnGPT consistently recognizes this pattern, we use Sentence-BERT Reimers and Gurevych (2019) to measure the
Semantic Textual Similarity between the Response and the last part of the
actual turns missed by TurnGPT (here, ’s economy). The average cosine distance
for the top and bottom subsets are 0.293 and 0.209 respectively. This
indicates that where RC-TurnGPT outperforms TurnGPT, it does consider the
semantic content of the response and delays predicting a turn-shift until the
relevant semantic information has been stated.
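A sketch of this similarity measurement follows, assuming the sentence-transformers package and an arbitrary pretrained S-BERT checkpoint (the paper does not specify which model was used):

```python
from sentence_transformers import SentenceTransformer, util

# any pretrained S-BERT checkpoint; the choice here is an assumption
model = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_match(response, missed_tail):
    """Cosine similarity between the response and the end-of-turn words
    that TurnGPT cut off (e.g., "'s economy")."""
    emb = model.encode([response, missed_tail], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

# e.g. semantic_match("sure vietnam achieved an 8% gdp growth ...", "'s economy")
```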
#### Non-ambiguous turn-completions.
In addition, there are also a large number of cases where the current
utterance has a fairly simple structure and hence it is not ambiguous where to
take the turn. In those cases, conditioning on the next response obviously
makes a very small difference. As illustrated in Figure 4, given that there is
only one completion point, both models predict the turn shift correctly. This
also explains why there are no drastic improvements for RC-TurnGPT when
looking at aggregate results on the whole test set, as most of the task-
oriented dialogues contain such simple utterances, which TurnGPT can perform
well on.
Figure 4: Similar turn-taking predictions for a simple utterance. Response: it
is the capital of france
## 4 Discussion and conclusion
In this study, we examined how turn-taking prediction can be improved when
conditioned on the response. We found that the response conditioning is
particularly helpful under two circumstances, mainly by preventing greedy
turn-taking at an earlier completion point: 1) when the current utterance
contains statements followed by questions; 2) when the end of the current
utterance semantically matches the response. However, for simple utterances
with fewer completion points, TurnGPT is already capable of predicting the
correct turn shift, and there is no additional help from conditioning on the
response.
We should again stress that this paper does not address the question of how
and when the system comes up with a potential response. However, this analysis
shows that it is indeed possible to find a more suitable transition-point,
when conditioning on the response. As we have suggested, the decision of what to say and when to say it should be considered a joint decision rather than a
two-step process. In this regard, the RC-TurnGPT model could be used as an
incremental response ranker, which does not only consider different responses
at each step, but which can also decide not to respond and wait for more
input. For instance, it can be applied in an interview setting where the model
(interviewer) asks questions (ranked from a list of interview questions) and takes the turn at appropriate places. For future work, it would also be
interesting to involve the utility of the candidate responses (from the
system’s perspective). In the interview scenario, this could for example mean
that the system can find moments where certain important questions can be
asked, and which also fit well from a turn-taking perspective.
## Limitations
As mentioned above, the current study is limited to the question of whether
(and when) conditioning turn-taking prediction on the response improves the
performance. It does not yet show how the model could be incorporated in a
spoken dialogue system. Moreover, this study focuses only on written
conversations without incorporating spoken dialogues. Thus, the
interpretations can be limited to dialogues that are relatively ‘formal’
without hesitations, repetitions, etc. Note also that we only analyse lexical
cues to turn-taking (just like with TurnGPT), and leave out other modalities
for future work.
## Ethics Statement
The current study does not involve any human subjects and we do not foresee
any ethical consequences.
## References
* Bavarian et al. (2022) Mohammad Bavarian, Heewoo Jun, Nikolas Tezak, John Schulman, Christine McLeavey, Jerry Tworek, and Mark Chen. 2022. Efficient training of language models to fill in the middle.
* Brown et al. (2020) Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.
* Byrne et al. (2019) Bill Byrne, Karthik Krishnamoorthi, Chinnadhurai Sankar, Arvind Neelakantan, Ben Goodrich, Daniel Duckworth, Semih Yavuz, Amit Dubey, Kyu-Young Kim, and Andy Cedilnik. 2019. Taskmaster-1: Toward a realistic and diverse dialog dataset. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 4516–4525, Hong Kong, China. Association for Computational Linguistics.
* Ekstedt and Skantze (2020) Erik Ekstedt and Gabriel Skantze. 2020. TurnGPT: a transformer-based language model for predicting turn-taking in spoken dialog. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , pages 2981–2990, Online. Association for Computational Linguistics.
* Ekstedt and Skantze (2022) Erik Ekstedt and Gabriel Skantze. 2022. How Much Does Prosody Help Turn-taking? Investigations using Voice Activity Projection Models. In _Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue_ , pages 541–551, Edinburgh, UK. Association for Computational Linguistics.
* FAIR et al. (2022) FAIR, Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew Goff, Jonathan Gray, Hengyuan Hu, Athul Paul Jacob, Mojtaba Komeili, Karthik Konath, Minae Kwon, Adam Lerer, Mike Lewis, Alexander H. Miller, Sasha Mitts, Adithya Renduchintala, Stephen Roller, Dirk Rowe, Weiyan Shi, Joe Spisak, Alexander Wei, David Wu, Hugh Zhang, and Markus Zijlstra. 2022. Human-level play in the game of Diplomacy by combining language models with strategic reasoning. _Science_ , 378(6624):1067–1074.
* Ford and Thompson (1996) C Ford and S Thompson. 1996. Interactional units in conversation: syntactic, intonational, and pragmatic resources for the management of turns. In E Ochs, E Schegloff, and A Thompson, editors, _Interaction and grammar_ , Studies in interactional sociolinguistics 13, chapter 3, pages 134–184. Cambridge University Press, Cambridge.
* Gao et al. (2020) Xiang Gao, Yizhe Zhang, Michel Galley, Chris Brockett, and Bill Dolan. 2020. Dialogue response ranking training with large-scale human feedback data. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 386–395, Online. Association for Computational Linguistics.
* Gravano and Hirschberg (2011) Agustín Gravano and Julia Hirschberg. 2011. Turn-taking cues in task-oriented dialogue. _Computer Speech & Language_, 25(3):601–634.
* Ishii et al. (2016) Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, and Junji Yamato. 2016. Prediction of who will be the next speaker and when using gaze behavior in multiparty meetings. _ACM Transactions on Interactive Intelligent Systems_.
* Ishii et al. (2022) Ryo Ishii, Xutong Ren, Michal Muszynski, and Louis-Philippe Morency. 2022. Trimodal prediction of speaking and listening willingness to help improve turn-changing modeling. _Frontiers in Psychology_ , 13:774547.
* Lala et al. (2019) Divesh Lala, Koji Inoue, and Tatsuya Kawahara. 2019. Smooth turn-taking by a robot using an online continuous model to generate turn-taking cues. In _2019 International Conference on Multimodal Interaction_ , ICMI ’19, page 226–234, New York, NY, USA. Association for Computing Machinery.
* Lee et al. (2019) Sungjin Lee, Hannes Schulz, Adam Atkinson, Jianfeng Gao, Kaheer Suleman, Layla El Asri, Mahmoud Adada, Minlie Huang, Shikhar Sharma, Wendy Tay, and Xiujun Li. 2019. Multi-domain task-completion dialog challenge. In _Dialog System Technology Challenges 8_.
* Lhoest et al. (2021) Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. 2021. Datasets: A community library for natural language processing. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 175–184, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Li et al. (2017) Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset. In _Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 986–995, Taipei, Taiwan. Asian Federation of Natural Language Processing.
* Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. _OpenAI Blog_ , 1(8):9.
* Reimers and Gurevych (2019) Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics.
* Rodriguez et al. (2020) Pedro Rodriguez, Paul Crook, Seungwhan Moon, and Zhiguang Wang. 2020. Information seeking in the spirit of learning: a dataset for conversational curiosity. In _Empirical Methods in Natural Language Processing_.
* Sacks et al. (1974) H Sacks, Emanuel Schegloff, and G Jefferson. 1974. A simplest systematics for the organization of turn-taking for conversation. _Language_ , 50:696–735.
* Skantze (2021) Gabriel Skantze. 2021. Turn-taking in Conversational Systems and Human-Robot Interaction : A Review. _Computer Speech & Language_, 67.
* Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 38–45, Online. Association for Computational Linguistics.
* Zang et al. (2020) Xiaoxue Zang, Abhinav Rastogi, Srinivas Sunkara, Raghav Gupta, Jianguo Zhang, and Jindong Chen. 2020. Multiwoz 2.2: A dialogue dataset with additional annotation corrections and state tracking baselines. In _Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, ACL 2020_ , pages 109–117.
# Blume-Emery-Griffiths dynamics in social networks
Yao-Hui Yang Department of Mathematics and Physics, Chongqing University of
Science and Technology, Chongqing $401331$, China
###### Abstract
We introduce the Blume-Emery-Griffiths (BEG) model on a social network to
describe the three-state dynamics of opinion formation. It is shown that the
probability distribution function of the time series of opinion is a Gaussian-
like distribution. We also study the response of the BEG model to an external
periodic perturbation. One can observe that both the interior thermo-noise and
the external field induce a phase transition, which manifests as a splitting
of the opinion distributions. The effects of the amplitude of the external
field and of the thermo-noise on the opinion system are opposite.
###### pacs:
02.50.-r, 87.23.Ge, 89.75.-k, 05.45.-a,
## I INTRODUCTION
Over the last few years, the study of opinion formation in complex networks
has attracted a growing amount of work and has become a major trend of
sociophysics Intro-1 . Many models have been proposed, like those of Deffuant
Intro-2 , Galam Intro-3 , Krause-Hegselmann (KH) Intro-4 , and Sznajd Intro-5
. But most models in the literature consider two-state opinion agents, in
favor ($+1$) or against ($-1$) of a certain topic. In Galam’s majority rule
and Sznajd’s updating rule, the interaction between the agents is randomly
changed during the evolution, and the time to reach consensus is associated
with the initial fraction $p$ of the $+1$ state. The consensus time $T$
reaches its maximal value at $p=0.5$. In the Sznajd model, a pair of nearest
neighbors convinces its neighbors to adopt the pair opinion if and only if
both members have the same opinion; otherwise the pair and its neighbors do
not change opinion. In the KH consensus model, opinions take values between
$0$ and $1$, and a confidence bound parameter is introduced. During the
evolution, agent $i$ takes the average opinion of all neighboring agents that
are within the confidence bound. In the Deffuant model, the opinions of two
randomly selected neighboring agents $i$ and $j$ remain unchanged if their
opinions $\sigma_{i}$ and $\sigma_{j}$ differ by more than a fixed threshold
parameter; otherwise, each opinion moves in the direction of the other by an
amount $\mu\times\mid\sigma_{i}-\sigma_{j}\mid$.
Additionally, complex networks have received much attention in recent years.
Topologically, a network consists of nodes and links. Complex network
models, such as the lattice network, the random network Intr_6 ; Intr_7 ;
Intr_8 , the small-world network Intr_9 ; Intr_10 , and the scale-free network
Intr_11 , are studied in many branches of science. It is meaningful to mention
that opinion formation models can be set up on complex networks.
In the present work, we investigate the implications of a social network in a
stochastic opinion formation model. We introduce the Blume-Emery-Griffiths
(BEG) model Intr_12 ; Intr_13 ; Intr_14 to describe the dynamics of opinion
formation, and the complex network model we use is a social network, which is
more realistic. Our simulations focus on the average opinion in different
situations, and we also simulate the system under the influence of an external
field.
In the rest of this paper we give a description of this dynamic model and of
how the underlying networks are generated. In Sec. III, we show the simulation
results without an external field. In Sec. IV we present the results under the
influence of an external field. The final section presents further discussion
and conclusions.
## II The model
Generally speaking, social networks include some essential characteristics,
such as short average path lengths, high clustering, assortative mixing
Model-1 ; Model-2 , the existence of community structure, and broad degree
distributions Model-3 ; Model-4 . As a result, we use Riitta Toivonen’s social
network model in our present work Model-5 . This network is structured by two
processes: $1)$ attachment to random vertices, and $2)$ attachment to the
neighborhood of the random vertices, giving rise to implicit preferential
attachment. These processes produce the essential characteristics of social
networks: the second process gives rise to assortativity, high clustering, and
community structure, and it also determines the degree distribution through
the number of edges it generates for each random attachment.
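To make the two growth processes concrete, the following is a minimal sketch in Python (using networkx); the function name and structure are ours, with the attachment parameters taken from the simulation settings given below.

```python
import random
import networkx as nx

def toivonen_network(n_total=10000, seed_len=10):
    # Grown from a chain of seed_len nodes, as in our simulations.
    G = nx.path_graph(seed_len)
    for new in range(seed_len, n_total):
        # Process 1: attachment to random vertices (initial contacts),
        # with p(n_init = 1) = 0.25 and p(n_init = 2) = 0.75.
        n_init = 1 if random.random() < 0.25 else 2
        initial = random.sample(list(G.nodes), n_init)
        contacts = set(initial)
        # Process 2: attachment to the neighbourhood of each initial
        # contact, with n_2nd ~ U[0, 3] secondary contacts per contact.
        for v in initial:
            nbrs = [u for u in G.neighbors(v) if u not in contacts]
            n_2nd = random.randint(0, 3)
            contacts.update(random.sample(nbrs, min(n_2nd, len(nbrs))))
        G.add_node(new)
        G.add_edges_from((new, v) for v in contacts)
    return G
```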
Figure 1: Degree distribution of networks with $N=10000$. Result is averages
over $20$ simulation runs. The number of initial contacts is distributed as
$p(n_{init}=1)=0.25$, $p(n_{init}=2)=0.75$, and the number of secondary
contacts from each initial contact $n_{2nd}\sim U[0,3]$.
In this paper, the network is grown from a chain with $10$ nodes. The number
of initial contacts is distributed as $p(n_{init}=1)=0.25$,
$p(n_{init}=2)=0.75$, and the number of secondary contacts from each initial
contact $n_{2nd}\sim U[0,3]$ (uniformly distributed between $0$ and $3$). The
total number of nodes in the social network structure is $N=10000$. The degree
distribution of the simulated networks is displayed in Fig. 1. We note that
the degree distribution $P(k)$ has a power-law functional form with a peak
around degree $k=5$, consistent with real-world observations Intr_11 ;
Model-6 .
Now, we consider a system with $N$ agents, which is represented by nodes on a
social network. For each node, we consider three states, represented by $+1$,
$0$, and $-1$. A practical example could be the decision to agree,
$\sigma_{i}(t)=+1$, disagree, $\sigma_{i}(t)=-1$, or remain neutral, $\sigma_{i}(t)=0$.
The states are updated according to the stochastic parallel spin-flip dynamics
defined by the transition probabilities
$Prob\left(\sigma_{i,t+1}=s^{\prime}|\sigma_{N}(t)\right)=\frac{\exp\left\\{-\beta\epsilon_{i}\left[s^{\prime}|\sigma_{N}(t)\right]\right\\}}{\sum_{s}\exp\left\\{-\beta\epsilon_{i}[s|\sigma_{N}(t)]\right\\}}$
(1)
where $s,s^{\prime}\in\\{+1,0,-1\\}$ and $\beta=a/T$; here $a$ represents the
active degree of the system, defined as $a=\left<\sigma_{N}^{2}(t)\right>$. The
energy potential $\epsilon_{i}\left[s|\sigma_{N}(t)\right]$ is defined by
$\epsilon_{i}\left[s|\sigma_{N}(t)\right]=-sh_{i}\left(\sigma_{N}(t)\right)-s^{2}\theta_{i}\left(\sigma_{N}(t)\right),$
(2)
where the following local fields at node $i$ carry all the information:
$h_{N,i}(t)=\sum_{j\neq i}J_{ij}\sigma_{j}(t),\qquad\theta_{N,i}(t)=\sum_{j\neq i}K_{ij}\sigma_{j}^{2}(t).$
Here the couplings $J_{ij}$ and $K_{ij}$ are positive numbers less than or
equal to $1$, drawn from a Gaussian distribution. $h_{N,i}(t)$ represents the
time-dependent interaction strength between node $i$ and its $n_{i}$ nearest
neighboring nodes, $\theta_{N,i}(t)$ represents the strength of the feedback,
and $T$ is the interior thermo-noise. The average opinion is then defined by
$r(t)=\frac{1}{N}\sum_{j=1}^{N}\sigma_{j}(t).$ (3)
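A compact sketch of one parallel update of these dynamics is given below (Python/NumPy). The variable names are ours, and the coupling matrices are assumed to carry the network structure with zero diagonal, so that the sums over $j\neq i$ in the local fields become matrix-vector products.

```python
import numpy as np

def beg_step(sigma, J, K, T, rng):
    # One parallel spin-flip update according to Eqs. (1)-(3).
    # sigma: length-N array with entries in {-1, 0, +1};
    # J, K: N x N coupling matrices with zero diagonal.
    a = np.mean(sigma ** 2)                   # active degree a = <sigma_N^2>
    beta = a / T
    h = J @ sigma                             # local field h_{N,i}
    theta = K @ (sigma ** 2)                  # feedback strength theta_{N,i}
    states = np.array([-1, 0, 1])
    # Energy potential eps_i[s] = -s*h_i - s^2*theta_i for each candidate s.
    eps = -np.outer(h, states) - np.outer(theta, states ** 2)
    w = np.exp(-beta * eps)
    prob = w / w.sum(axis=1, keepdims=True)   # Gibbs weights of Eq. (1)
    # Sample the new state of every node from its own distribution.
    cum = np.cumsum(prob, axis=1)
    u = rng.random(len(sigma))[:, None]
    return states[(u < cum).argmax(axis=1)]

# Average opinion of Eq. (3): r = sigma.mean() after each step.
```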
## III Simulation results
Figure 2: (a) Time series of the average opinion with total time steps
$t=10000$, (b) the distribution functions $P(R)$, and (c) the autocorrelation
function $c(\tau)$. The parameters used in the simulation are
$p(n_{init}=1)=0.95$, $N=10000$, $T=1.0$ and $L=10000$. The parameters $J_{ij}$
and $K_{ij}$ are positive numbers not larger than $1$ in all simulations. All
the results in this paper are obtained over $20$ realizations of the social
networks.
At first we investigate the time series of the average opinion, as illustrated
in Fig. 2(a). It shows that there exist fluctuations around the average opinion
$r=0$. In order to compare fluctuations of different scales, the time
series has been normalized according to
$R(t)=\left(r(t)-\left<r(t)\right>_{\tau}\right)/\delta\left(r(t)\right),$
where $\left<r(t)\right>_{\tau}$ and $\delta(r(t))$ denote the average and the
standard deviation over the period considered, respectively. In Fig. 2(b), we
present the distribution function $P(R)$ associated with the time series. It
is clear that $P(R)$ is of Gaussian form.
We calculate the autocorrelation function $c(\tau)$ of our model. For a time
series of $L$ samples, $r(t)$ for $t=1,2,\ldots,L$, $c(\tau)$ is defined by
$c(\tau)=\frac{\sum_{t=1}^{L-\tau}(r(t)-\bar{r})(r(t+\tau)-\bar{r})}{\sum_{t=1}^{L-\tau}(r(t)-\bar{r})^{2}},$
(4)
where $\tau$ is the time delay and $\bar{r}$ represents the average over the
period under consideration. Fig. 2(c) shows the resulting autocorrelation
function of our model. It is found that $c(\tau)$ decreases rapidly within a
very small range of $\tau$, which means the system has short-time memory
effects. As is now well known, the stock market has nontrivial memory effects
simulation-2 . For example, the autocorrelation function of the Dow Jones (DJ)
index also decreases rapidly from $1$ to $0$ within a small range of $\tau$.
From this point of view, our model may be helpful for understanding financial
markets.
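For reference, Eq. (4) amounts to the following short computation (a sketch; we take $\bar{r}$ to be the mean of the full series, one reading of "the period under consideration"):

```python
import numpy as np

def autocorrelation(r, tau):
    # c(tau) of Eq. (4) for an average-opinion series r(1), ..., r(L).
    d = r - r.mean()
    return np.sum(d[:len(r) - tau] * d[tau:]) / np.sum(d[:len(r) - tau] ** 2)
```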
## IV The influence of external field
Figure 3: Time series of the average opinion with different values of
amplitude $A=0.08$, $0.12$, $0.16$, $0.22$, $0.28$, $0.32$. Parameters are
$T=1.0$, $\omega=\pi/3$, and $\varphi=0$.
In order to explore what phenomena may occur in the system under the influence
of an external field, we add a periodic external field to the energy potential
$\epsilon_{i}$,
$\epsilon_{i}\left[s|\sigma_{N}(t)\right]=-sh_{i}\left(\sigma_{N}(t)\right)-s^{2}\theta_{i}\left(\sigma_{N}(t)\right)-s\left[A\cos(\omega
t+\varphi)\right],$ (5)
where $A$ is the amplitude of the periodic external field, $\omega$ is its
frequency and $\varphi$ denotes its initial phase.
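In the sketch of Sec. II, this modification amounts to a single extra term in the energy potential; a minimal illustration (same state ordering as before):

```python
import numpy as np

def eps_with_field(h, theta, t, A, omega, phi):
    # Energy potential of Eq. (5): Eq. (2) plus the driving term
    # -s * A * cos(omega * t + phi), identical for every node.
    states = np.array([-1, 0, 1])
    field = A * np.cos(omega * t + phi)
    return -np.outer(h, states) - np.outer(theta, states ** 2) - field * states
```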
We investigate the effect of the amplitude $A$ while fixing the other
parameters. In Fig. 3 we plot the time series of the average opinion $r(t)$
for different values of $A$. It is obvious that the distribution functions
change remarkably with increasing $A$: with increasing strength of the
external field, the average opinion splits into several discrete parts. For a
small amplitude $A=0.02$, $P(R)$ is still of Gaussian form. When $A=0.08$, two
fluctuations around nonzero symmetric values of the average opinion begin to
appear. Then, four nonzero average opinions appear at $A=0.16$. Note that the
intervals among the discrete average opinions increase with the strength $A$
of the external field. Fig. 3 shows the process from two wave crests to four
independent parts, and the average opinion of the whole system jumps from one
part to the others at all times.
Figure 4: (a) The distribution functions $P(R)$ of average opinion time series
under different amplitudes $A$. Parameters are $T=1.0$, $\omega=\pi/3$, and
$\varphi=0$. (b) $P(R)$ for different frequencies $\omega$. Parameters are
$A=0.06$, $\varphi=\pi/2$, and $T=1.0$.
In Fig. 4, we present the distribution function $P(R)$ of the average opinion.
Again, it is easy to verify that the average opinions oscillate among several
separate symmetric nonzero values under the external periodic driving force
[see Fig. 4(a)]. A similar oscillation behavior is observed in the simulation
of the influence of the frequency $\omega$, shown in Fig. 4(b). Note that
$P(R)$ for the frequency $\omega=\pi/3$ is the same as for $\omega=2\pi/3$,
and the same distribution is observed between $\omega=\pi/6$ and
$\omega=5\pi/6$. But there are distinct differences between $\omega=0$ and
$\omega=\pi/2$. This indicates a possible period of $\pi$ when the other
parameters are fixed.
Figure 5: The distribution functions $P(R)$ of average opinion time series
under different initial phases $\varphi$. Parameters are $A=0.16$,
$\omega=\pi/3$, and $T=1.0$.
Fig. 5 shows the distribution functions $P(R)$ of the average-opinion time
series for different initial phases $\varphi$. For $\varphi=0$, the average
opinion vibrates among four symmetric nonzero values. When $\varphi$ increases
to $\pi/2$, clearly, the average opinion comes into a $3$-value oscillation.
Additionally, note that the distribution functions are almost the same for
$\varphi=0$ and $\varphi=\pi$ (or $\varphi=\pi/2$ and $\varphi=3\pi/2$).
Again, one can conjecture that $P(R)$ exhibits $\pi$-periodic behavior. We
also observe that the system’s average-opinion time series has only two types
of distribution functions over the different values of the initial phase
$\varphi$.
Figure 6: The distribution functions $P(R)$ of average opinion time series
under different interior thermo-noises $T$. The parameters used in the
simulation are $A=0.16$, $\omega=\pi/3$, and $\varphi=0$.
Another important parameter of the system is the interior thermo-noise $T$.
We explore its effects with (and without) an external field. It is found that
there is no remarkable influence on the system without an external field.
By contrast, in the presence of an external field, $P(R)$ shows an oscillation
similar to that in Fig. 4(a) (see Fig. 6). Note, however, that their
influences are opposite: in Fig. 6, with increasing $T$ the form of $P(R)$
transforms gradually from four peaks to two peaks, and finally merges into a
single peak. At the same time, for larger $T$ the average opinion $r$ spreads
from several separate regions over the whole, broader scale.
By comparing Fig. 4(a) with Fig. 6, it is clear that the amplitude $A$ and the
interior thermo-noise $T$ have opposite effects on the system. They look like
a couple of contradictory parameters, even though both lead to the splitting
of the distribution of the average opinion $P(R)$ and to nonzero averages $R$.
Similar behaviors exist in Ising ferromagnetic systems. In the Ising model,
the order-disorder transition is a second-order transition, and a finite
system has a non-zero magnetization $\pm|M_{sp}|$. There is a nonzero
probability that the system moves from near $+|M_{sp}|$ to near $-|M_{sp}|$,
and vice versa external-1 . In our model under the influence of an external
field, we also observe phase-transition phenomena caused by $T$ (or by $A$),
similar to the Ising paramagnetic-antiferromagnetic transition.
As discussed above, the energy potential increases with increasing $T$, and
the system’s entropy becomes larger (more disordered). But the external field
tends to restrict the disordering effects in the system and confines the
disordered states to several separate regions.
## V Conclusion
In the present work we have introduced the Blume-Emery-Griffiths model of
opinion formation with three states. Considering the characteristics of real
social systems, we construct a social network to link the agents. In this BEG
model, each person’s opinion is influenced not only by the specific local
information from his neighbors but also by the average opinion of the whole
network.
Moreover, we focus on the behavior of the BEG system under external
perturbation. The simulation results show that the system is sensitive to the
external field. As discussed in Sec. IV, the parameters of the external
periodic perturbation, such as the amplitude $A$, the initial phase $\varphi$,
and the frequency $\omega$, have obvious impacts on the opinion system.
Besides, the effect of the amplitude $A$ or the interior thermo-noise $T$ is
similar to the Ising paramagnetic-antiferromagnetic transition, and the
influences of $A$ and $T$ on the system are opposite.
## References
* (1) C. Borghesi, and S. Galam, Phys. Rev. E 73 (2006) 066118.
* (2) G. Deffuant, D. Neau, and F. Amblard, Adv. Complex Syst. 3 (2000) 87.
* (3) S. Galam, J. Stat. Phys. 61, (1990) 943; S. Galam, Physica A 238 (1997) 66.
* (4) R. Hegselmann and U. Krause, J. Artif. Societies Social Simulation 5, (3) (2002) paper 2 (jasss.soc.surrey.ac.uk).; U. Krause, Soziale Dynamiken mit vielen interakteuren. Eine Problemskizze, in: U. Krause, M. Stockler, eds.), Modellierung und Simulation von Dynamiken mit vielen interagierenden Akteuren, Bremen University, January 1997, pp. 37–51.
* (5) K. Sznajd-Weron and J. Sznajd, Int. J. Mod. Phys. C 11 (2000) 1157.
* (6) P. Erdos and A. Renyi, Publ. Math. 6 (1959) 290.
* (7) P. Erdos and A. Renyi, Publ. Math. Inst. Hung. Acad. Sci. 5 (1960) 17.
* (8) P. Erdos and A. Renyi, Bull. Inst. Int. Stat. 38 (1961) 343.
* (9) D. J. Watts and S. H. Strogatz, Nature 393 (1998) 440.
* (10) M. E. J. Newman and D. J. Watts, Phys. Lett. A 263 (1999) 341.
* (11) A. -L. Barabasi and R. Albert, Science 286 (1999) 509.
* (12) M. Blume, V. J. Emery, and R. B. Griffiths, Phys. Rev. A 4 (1971) 1071.
* (13) R. David, C. Dominguez, and E. Korutcheva, Phys. Rev. E 62 (2000) 2620.
* (14) D. Bollé, I. Pérez Castillo, and G. M. Shim, Phys. Rev. E 67 (2003) 036113.
* (15) M. E. J. Newman, Phys. Rev. Lett. 89 (2002) 208701.
* (16) M. E. J. Newman and J. Park, Phys. Rev. E 68 (2003) 036122.
* (17) L. A. N. Amaral, A. Scala, M. Barthélémy, and H. E. Stanley, Proc. Natl. Acad. Sci. USA 97 (2000) 11149.
* (18) M. Boguña, R. Pastor-Satorras, A. Diaz-Guilera, and A. Arenas, Phys. Rev. E 70 (2004) 056122.
* (19) R. Toivonen, J.-P. Onnela, J. Saramäki, J. Hyvönen, and K. Kaski, Physica A 371, (2006) 851.
* (20) A. Grönlund, and P. Holme, Phys. Rev. E 70 (2004) 036108.
* (21) R. Y. You and Z. Chen, Chinese J. Comput. Phys. 21 (2004) 341.
* (22) M. Bartolozzi, D. B. Leinweber, and A. W. Thomas, Phys. Rev. E 72 (2005) 046113.
* (23) K. Binder and D. W. Heermann, Monte Carlo simulation in statistical physics: an introduction, Springer-Verlag, Berlin, 2002.
# On the action of the Steenrod algebra on the modular invariants of special
linear group
Nguyễn Sum Department of Mathematics, Quy Nhơn University, 170 An Dương
Vương, Quy Nhơn, Bình Định, Viet Nam<EMAIL_ADDRESS>
###### Abstract.
We compute the action of the Steenrod algebra on the generators of the algebra
of invariants of the special linear group ${SL_{n}=SL(n,\mathbb{Z}/p)}$ in the
polynomial algebra, with $p$ an odd prime number.
###### Key words and phrases:
Invariant theory, Dickson-Mùi invariants, Steenrod-Milnor operations
###### 2010 Mathematics Subject Classification:
Primary 55S10; Secondary 55S05
## 1\. Introduction
For an odd prime $p$, let $SL_{n}$ denote the special linear subgroup of
$GL(n,\mathbb{Z}/p)$, which acts naturally on the cohomology algebra
$H^{*}(B(\mathbb{Z}/p)^{n})$. Here and in what follows, the cohomology is
always taken with coefficients in the prime field $\mathbb{Z}/p$.
According to [3], $H^{*}(B(\mathbb{Z}/p)^{n})=E(x_{1},\ldots,x_{n})\otimes
P(y_{1},\ldots,y_{n})$ with $\dim x_{i}=1$, $y_{i}=\beta x_{i}$, where $\beta$
is the Bockstein homomorphism, $E(.,\ldots,.)$ and $P(.,\ldots,.)$ are the
exterior and polynomial algebras over $\mathbb{Z}/p$ generated by the
variables indicated. Let $(e_{k+1},\ldots,e_{n})$, $k\geqslant 0$, be a sequence
of non-negative integers. Following Mùi [2], we define
$[k;e_{k+1},\ldots,e_{n}]=[k;e_{k+1},\ldots,e_{n}](x_{1},\ldots,x_{n},y_{1},\ldots,y_{n})$
by
$[k;e_{k+1},\ldots,e_{n}]=\frac{1}{k!}\begin{vmatrix}x_{1}&\cdots&x_{n}\\\
\vdots&\cdots&\vdots\\\ x_{1}&\cdots&x_{n}\\\
y_{1}^{p^{e_{k+1}}}&\cdots&y_{n}^{p^{e_{k+1}}}\\\ \vdots&\cdots&\vdots\\\
y_{1}^{p^{e_{n}}}&\cdots&y_{n}^{p^{e_{n}}}\end{vmatrix}.$
The precise meaning of the right hand side is given in [2]. For $k=0$, we
write
$[0;e_{1},\ldots,e_{n}]=[e_{1},\ldots,e_{n}]=\det\left(y_{i}^{p^{e_{j}}}\right).$
We set
$\displaystyle L_{n,s}$ $\displaystyle=[0,\ldots,\hat{s},\ldots,n],\
0\leqslant s\leqslant n,$ $\displaystyle L_{n}$
$\displaystyle=L_{n,n}=[0,\ldots,n-1].$
Each $[k;e_{k+1},\ldots,e_{n}]$ is an invariant of $SL_{n}$ and
$[e_{1},\ldots,e_{n}]$ is divisible by $L_{n}$. Then the Dickson invariants
$Q_{n,s},\ 0\leqslant s\leqslant n,$ and the Mùi invariants
$M_{n,s_{1},\ldots,s_{k}}$, $0\leq s_{1}<\ldots<s_{k}\leq n-1$, are defined by
$\displaystyle Q_{n,s}$ $\displaystyle=L_{n,s}/L_{n},$ $\displaystyle
M_{n,s_{1},\ldots,s_{k}}$
$\displaystyle=[k;0,\ldots,\hat{s}_{1},\ldots,\hat{s}_{k},\ldots,n-1].$
Note that $Q_{n,n}=1$, $Q_{n,0}=L_{n}^{p-1}$,
$M_{n,0,\ldots,n-1}=[n;\emptyset]=x_{1}\ldots x_{n}$.
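These definitions can be checked by direct computation in small cases. The following sketch (Python with SymPy) treats $n=2$, $p=3$, verifying that $L_{2,0}=[1,2]$ is divisible by $L_{2}$ and that $Q_{2,0}=L_{2}^{p-1}$; the helper name is ours.

```python
from sympy import Matrix, Poly, symbols

p = 3
y1, y2 = symbols('y1 y2')

def bracket(e1, e2):
    # [e1, e2] = det(y_i^{p^{e_j}}) for n = 2
    return Matrix([[y1**(p**e1), y1**(p**e2)],
                   [y2**(p**e1), y2**(p**e2)]]).det()

L2 = Poly(bracket(0, 1), y1, y2, modulus=p)    # L_2 = [0, 1]
L20 = Poly(bracket(1, 2), y1, y2, modulus=p)   # L_{2,0} = [1, 2]

Q20, rem = L20.div(L2)                         # Q_{2,0} = L_{2,0} / L_2
assert rem.is_zero                             # divisibility by L_2
assert Q20 == L2 ** (p - 1)                    # Q_{2,0} = L_2^{p-1}
```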
Mùi proved in [2] that $H^{*}(B(\mathbb{Z}/p)^{n})^{SL_{n}}$ is the free
module over the Dickson algebra $P(L_{n},Q_{n,1},\ldots,Q_{n,n-1})$ generated
by 1 and the $M_{n,s_{1},\ldots,s_{k}}$ with $0\leq s_{1}<\ldots<s_{k}\leq n-1$.
The Steenrod algebra $\mathcal{A}(p)$ acts on $H^{*}(B(\mathbb{Z}/p)^{n})$ by
well-known rules. Since this action commutes with the action of $SL_{n}$, it
induces an action of $\mathcal{A}(p)$ on
$H^{*}(B(\mathbb{Z}/p)^{n})^{SL_{n}}$.
Let $\tau_{s}$ and $\xi_{i}$ be the Milnor elements of dimensions $2p^{s}-1$
and $2p^{i}-2$, respectively, in the dual algebra $\mathcal{A}(p)^{*}$ of
$\mathcal{A}(p)$. Milnor showed in [5] that
$\mathcal{A}(p)^{*}=E(\tau_{0},\tau_{1},\ldots)\otimes
P(\xi_{1},\xi_{2},\ldots).$
So, $\mathcal{A}(p)^{*}$ has a basis consisting of all monomials
$\tau_{S}\xi^{R}=\tau_{s_{1}}\ldots\tau_{s_{t}}\xi_{1}^{r_{1}}\ldots\xi_{m}^{r_{m}}$
with $S=(s_{1},\ldots,s_{t})$, $0\leqslant s_{1}<\ldots<s_{t}$,
$R=(r_{1},\ldots,r_{m})$. Let $St^{S,R}\in\mathcal{A}(p)$ denote the dual of
$\tau_{S}\xi^{R}$ with respect to this basis of $\mathcal{A}(p)^{*}$. Then
$\mathcal{A}(p)$ has a new basis consisting of all operations $St^{S,R}$. In
particular, for $S=\emptyset$, $R=(k)$, $St^{S,R}$ is nothing but the Steenrod
operation $P^{k}$.
The action of $P^{k}$ on Dickson and Mùi invariants was explicitly computed by
Hưng and Minh [4]. The action of $St^{S,R}$ on the invariant
$[n;\emptyset]=x_{1}\ldots x_{n}$ was computed by Mùi [3].
In this paper, we compute the action of $St^{S,R}$ on
$[k;e_{k+1},\ldots,e_{n}]$ and prove a nice relation between the invariants
$[k;e_{k+1},\ldots,e_{n}+s]$, $0\leqslant s\leqslant n$, and the Dickson
invariants. Using these results, we explicitly compute the action of $P^{k}$
on the Mùi invariants $M_{n,s_{1},\ldots,s_{k}}$, which was first computed by
Hưng and Minh [4] by another method.
To state the main results, we introduce some notations. Let $J=(J_{0},J_{1},$
$\ldots,J_{m})$ with $J_{s}\subset\\{{k+1},\ldots,n\\}$, $0\leqslant
s\leqslant m$, and $\coprod_{s=0}^{m}J_{s}=\\{{k+1},\ldots,n\\}$ (disjoint
union). We define the sequence $R_{J}=(r_{J_{1}},\ldots,r_{J_{m}})$ and the
function $\Phi_{J}:\\{{k+1},\ldots,n\\}\to\\{{1},\ldots,m\\}$ by setting
$\displaystyle r_{J_{s}}$ $\displaystyle=\sum_{j\in J_{s}}p^{e_{j}},\
0\leqslant s\leqslant m,$ $\displaystyle\Phi_{J}(i)$ $\displaystyle=s\ \text{
if }\ i\in J_{s},\ k+1\leqslant i\leqslant n.$
The main result of this paper is
###### Theorem 1.1.
Suppose that $e_{i}\neq e_{j}$ for $i\neq j$, $S=(s_{1},\ldots,s_{t})$,
$s_{1}<\ldots<s_{t}<m$. Under the above notation we have
$St^{S,R}[k;e_{k+1},\ldots,e_{n}]\\\
=\begin{cases}(-1)^{t(k-t)}[k-t,s_{1},\ldots,s_{t},e_{k+1}+\Phi_{J}(k+1),\ldots,e_{n}+\Phi_{J}(n)],\\\
\hskip 156.49014ptR=R_{J},\ \text{ for some }J,\\\ 0,\hskip 142.26378pt\text{
otherwise. }\end{cases}$
We have also the following relation from which we can explicitly compute
$St^{S,R}[k;e_{k+1},\ldots,e_{n}]$ in terms of Dickson and Mùi invariants.
###### Proposition 1.2.
For $0\leqslant k\leqslant n$,
$[k;e_{k+1},\ldots,e_{n-1},e_{n}+n]=\sum_{s=0}^{n-1}(-1)^{n+s-1}[k;e_{k+1},\ldots,e_{n-1},e_{n}+s]Q_{n,s}^{p^{e_{n}}}.$
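For $n=2$ and $k=0$ the proposition reads $[e_{1},e_{2}+2]=-[e_{1},e_{2}]Q_{2,0}^{p^{e_{2}}}+[e_{1},e_{2}+1]Q_{2,1}^{p^{e_{2}}}$, which can be checked by continuing the SymPy sketch of Section 1 (same $p=3$ and bracket helper; the choice $e_{1}=0$, $e_{2}=1$ is only an example).

```python
# Q_{2,1} = L_{2,1} / L_2, with L_{2,1} = [0, 2].
L21 = Poly(bracket(0, 2), y1, y2, modulus=p)
Q21, rem = L21.div(L2)
assert rem.is_zero

e2 = 1  # check [0, e2+2] = -[0, e2] Q_{2,0}^{p^{e2}} + [0, e2+1] Q_{2,1}^{p^{e2}}
lhs = Poly(bracket(0, e2 + 2), y1, y2, modulus=p)
rhs = (-Poly(bracket(0, e2), y1, y2, modulus=p) * Q20 ** (p ** e2)
       + Poly(bracket(0, e2 + 1), y1, y2, modulus=p) * Q21 ** (p ** e2))
assert lhs == rhs
```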
Using Theorem 1.1 and Proposition 1.2 we explicitly compute the action of
$St^{S,R}$ on the Mùi invariant $M_{n,s_{1},\ldots,s_{k}}$ when $S,\ R$ are
special. In particular, we prove
###### Theorem 1.3 (Hưng and Minh [4]).
For $s_{0}=-1<s_{1}<\ldots<s_{k}<s_{k+1}=n$,
$\displaystyle P^{t}$ $\displaystyle M_{n,s_{1},\ldots,s_{k}}=$
$\displaystyle\begin{cases}M_{n,t_{1},\ldots,t_{k}},&t=\underset{i=1}{\overset{k}{\sum}}\frac{p^{s_{i}}-p^{t_{i}}}{p-1},\text{
with }s_{i-1}<t_{i}\leqslant s_{i},\\\
\underset{i=1}{\overset{k+1}{\sum}}(-1)^{k+1-i}M_{n,t_{1},\ldots,\hat{t}_{i}\ldots,t_{k+1}}Q_{n,t_{i}},&t=\underset{i=1}{\overset{k+1}{\sum}}\frac{p^{s_{i}}-p^{t_{i}}}{p-1},\text{
with }s_{i-1}<t_{i}\leqslant s_{i},\\\ &1\leqslant i\leqslant k+1,\
t_{k+1}<s_{k+1}=n,\\\ 0,&\text{otherwise. }\end{cases}$
## Acknowledgment
I would like to thank Professor Huỳnh Mùi for his generous help and inspiring
guidance.
## 2\. Proof of Theorem 1.1
First we recall Mùi’s results on the homomorphism $d_{m}^{*}P_{m}$ and the
operations $St^{S,R}$.
Let $\mathcal{A}_{p^{m}}$ be the alternating group on $p^{m}$ letters. Suppose
that $X$ is a topological space, $W\mathcal{A}_{p^{m}}$ is a contractible
$\mathcal{A}_{p^{m}}$-free space. Then we have the Steenrod power map
$P_{m}:H^{q}(X)\longrightarrow
H^{p^{m}q}\big{(}W\mathcal{A}_{p^{m}}\underset{\mathcal{A}_{p^{m}}}{\times}X^{p^{m}}\big{)},$
which sends $u$ to $1\otimes u^{p^{m}}$ at the cochain level (see [6; Chap.
VII]).
The inclusion $(\mathbb{Z}/p)^{m}\subset\mathcal{A}_{p^{m}}$ together with the
diagonal map $X\to X^{p^{m}}$ and the Künneth formula induces the homomorphism
$d_{m}:H^{*}\big{(}W\mathcal{A}_{p^{m}}\underset{\mathcal{A}_{p^{m}}}{\times}X^{p^{m}}\big{)}\longrightarrow
H^{*}(B(\mathbb{Z}/p)^{m})\otimes H^{*}(X).$
Set $\tilde{M}_{m,s}=M_{m,s}L_{m}^{h-1},\ 0\leqslant s<m,\
\tilde{L}_{m}=L_{m}^{h},\ h=(p-1)/2$. We have
###### Theorem 2.1 (Mùi [3; 1.3]).
Let $u\in H^{q}(X),\ \mu(q)=(-1)^{hq(q-1)/2}(h!)^{q}$. Then
$d_{m}^{*}P_{m}(u)=\mu(q)^{m}\sum_{S,R}(-1)^{r(S,R)}\tilde{M}_{m,s_{1}}\ldots\tilde{M}_{m,s_{t}}\tilde{L}_{m}^{r_{0}}Q_{m,1}^{r_{1}}\ldots
Q_{m,m-1}^{r_{m-1}}\otimes St^{S,R}u.$
Here the summation runs over all $(S,R)$ with $S=(s_{1},\ldots,s_{t})$,
$0\leqslant s_{1}<\ldots<s_{t}<m$, $R=(r_{1},\ldots,r_{m})$,
$r_{0}=q-t-2(r_{1}+\ldots+r_{m})\geqslant 0$,
$r(S,R)=t+s_{1}+\ldots+s_{t}+r_{1}+2r_{2}+\ldots+mr_{m}$.
###### Proposition 2.2 (Mùi [2, 3]).
i) $d_{m}^{*}P_{m}$ is a natural homomorphism preserving cup product up to a
sign. Precisely,
$d_{m}^{*}P_{m}(uv)=(-1)^{mhqr}d_{m}^{*}P_{m}ud_{m}^{*}P_{m}v,$
with $q=\dim u,\ r=\dim v$.
ii)
$d_{m}^{*}P_{m}y_{i}=\underset{s=0}{\overset{m}{\sum}}(-1)^{m+s}Q_{m,s}\otimes
y_{i}^{p^{s}}$.
iii) $d_{m}^{*}P_{m}(x_{1}\ldots x_{n})=$
$\mu(n)^{m}\underset{0\leqslant
s_{1}<\ldots<s_{t}<m}{\sum}(-1)^{t(n-t)+r(S,0)}\tilde{M}_{m,s_{1}}\ldots\tilde{M}_{m,s_{t}}\tilde{L}_{m}^{n-t}\otimes[n-t,s_{1},\ldots,s_{t}].$
Here $x_{i}$ and $y_{i}$ are defined as in the introduction.
###### Lemma 2.3.
If $e_{i}\neq e_{j}$ for $i\neq j$, then
$d_{m}^{*}P_{m}[e_{1},\ldots,e_{n}]\\\
=\sum_{J=(J_{0},\ldots,J_{m})}(-1)^{mn+r(\emptyset,R_{J})}\tilde{L}_{m}^{2r_{J_{0}}}Q_{m,1}^{r_{J_{1}}}\ldots
Q_{m,m-1}^{r_{J_{m-1}}}\\\
\otimes[e_{1}+\Phi_{J}(1),\ldots,e_{n}+\Phi_{J}(n)],$
where $R_{J}$ and $\Phi_{J}$ are defined as in Theorem 1.1.
###### Proof.
Let $\Sigma_{n}$ be the symmetric group on $n$ letters. Then
$[e_{1},\ldots,e_{n}]=\sum_{\sigma\in\Sigma_{n}}\text{sign}\
\\!\sigma\prod_{i=1}^{n}y_{i}^{p^{e_{\sigma(i)}}}.$
From Proposition 2.2, we have
$\displaystyle
d_{m}^{*}P_{m}\Big{(}\prod_{i=1}^{n}y_{i}^{p^{e_{\sigma(i)}}}\Big{)}$
$\displaystyle=\prod_{i=1}^{n}\big{(}d_{m}^{*}P_{m}y_{i}\big{)}^{p^{e_{\sigma(i)}}}$
$\displaystyle=\prod_{i=1}^{n}\Big{(}\underset{s=0}{\overset{m}{\sum}}(-1)^{m+s}Q_{m,s}^{p^{e_{\sigma(i)}}}\otimes
y_{i}^{p^{e_{\sigma(i)}+s}}\Big{)}.$
Expanding this product and using the definitions of $\Phi_{J},R_{J}$ and the
assumption of the lemma, we get
$d_{m}^{*}P_{m}\Big{(}\prod_{i=1}^{n}y_{i}^{p^{e_{\sigma(i)}}}\Big{)}=\sum_{J}(-1)^{mn+r(\emptyset,R_{J})}Q_{m,0}^{r_{J_{0}}}\ldots
Q_{m,m-1}^{r_{J_{m-1}}}\otimes\prod_{i=1}^{n}y_{i}^{p^{e_{\sigma(i)}+\Phi_{J}(\sigma(i))}}.$
Hence, from the above equalities we obtain
$\displaystyle d_{m}^{*}P_{m}$ $\displaystyle[e_{1},\ldots,e_{n}]$
$\displaystyle=\sum_{J}(-1)^{mn+r(\emptyset,R_{J})}Q_{m,0}^{r_{J_{0}}}\ldots
Q_{m,m-1}^{r_{J_{m-1}}}\otimes\sum_{\sigma\in\Sigma_{n}}\text{sign}\
\\!\sigma\prod_{i=1}^{n}y_{i}^{p^{e_{\sigma(i)}+\Phi_{J}(\sigma(i))}}$
$\displaystyle=\sum_{J}(-1)^{mn+r(\emptyset,R_{J})}Q_{m,0}^{r_{J_{0}}}\ldots
Q_{m,m-1}^{r_{J_{m-1}}}\otimes[e_{1}+\Phi_{J}(1),\ldots,e_{n}+\Phi_{J}(n)].$
Since $Q_{m,0}=\tilde{L}_{m}^{2}$, the lemma is proved. ∎
###### 2.4. Proof of Theorem 1.1.
Let $I$ be a subset of $\\{1,\ldots,n\\}$ and let $I^{\prime}$ be its
complement in $\\{1,\ldots,n\\}$. We write $I=(i_{1},\ldots,i_{k})$ and
$I^{\prime}=(i_{k+1},\ldots,i_{n})$ with $i_{1}<\ldots<i_{k}$ and
$i_{k+1}<\ldots<i_{n}$. We set $x_{I}=x_{i_{1}}\ldots x_{i_{k}}$,
$[e_{k+1},\ldots,e_{n}]_{I}=[e_{k+1},\ldots,e_{n}](y_{i_{k+1}},\ldots,y_{i_{n}})$
and $\sigma_{I}=\begin{pmatrix}1&\ldots&n\\\
i_{1}&\ldots&i_{n}\end{pmatrix}\in\Sigma_{n}$. In [2; I.4.2], Mùi showed that
$[k;e_{k+1},\ldots,e_{n-1},e_{n}]=\sum_{I}\text{sign}\
\\!\sigma_{I}x_{I}[e_{k+1},\ldots,e_{n}]_{I}.$
From Proposition 2.2 and Lemma 2.3 we have
$d_{m}^{*}P_{m}(x_{I})=\mu(k)^{m}\underset{0\leqslant
s_{1}<\ldots<s_{t}<m}{\sum}(-1)^{t(k-t)+r(S,0)}\tilde{M}_{m,s_{1}}\ldots\tilde{M}_{m,s_{t}}\tilde{L}_{m}^{k-t}\\\
\otimes[k-t,s_{1},\ldots,s_{t}]_{I},$
where
$[k-t,s_{1},\ldots,s_{t}]_{I}=[k-t,s_{1},\ldots,s_{t}](x_{i_{1}},\ldots,x_{i_{k}},y_{i_{1}},\ldots,y_{i_{k}})$,
$d_{m}^{*}P_{m}[e_{k+1},\ldots,e_{n}]_{I}\\\
=\sum_{J=(J_{0},\ldots,J_{m})}(-1)^{m(n-k)+r(\emptyset,R_{J})}\tilde{L}_{m}^{2r_{J_{0}}}Q_{m,1}^{r_{J_{1}}}\ldots
Q_{m,m-1}^{r_{J_{m-1}}}\otimes\\\
[e_{k+1}+\Phi_{J}(k+1),\ldots,e_{n}+\Phi_{J}(n)]_{I}.$
Set $q=\dim[k;e_{k+1},\ldots,e_{n}]=k+2(p^{e_{k+1}}+\ldots+p^{e_{n}}).$ An
easy computation shows that $\mu(q)=(-1)^{n-k}\mu(k)$ and
$r(S,0)+r(\emptyset,R)=r(S,R)$. Hence from Proposition 2.2 and the above
equalities we get
$d_{m}^{*}P_{m}[e_{k+1},\ldots,e_{n}]\\\
=\mu(q)^{m}\sum_{S,J}(-1)^{t(t-k)+r(S,R_{J})}\tilde{M}_{m,s_{1}}\ldots\tilde{M}_{m,s_{t}}\tilde{L}_{m}^{k-t+2r_{J_{0}}}Q_{m,1}^{r_{J_{1}}}\ldots
Q_{m,m-1}^{r_{J_{m-1}}}\otimes\\\ \sum_{I}\text{sign}\
\\!\sigma_{I}[k-t,s_{1},\ldots,s_{t}]_{I}[e_{k+1}+\Phi_{J}(k+1),\ldots,e_{n}+\Phi_{J}(n)]_{I}.$
Then, using the Laplace development we obtain
$d_{m}^{*}P_{m}[e_{k+1},\ldots,e_{n}]\\\
=\mu(q)^{m}\sum_{S,J}(-1)^{t(t-k)+r(S,R_{J})}\tilde{M}_{m,s_{1}}\ldots\tilde{M}_{m,s_{t}}\tilde{L}_{m}^{k-t+2r_{J_{0}}}Q_{m,1}^{r_{J_{1}}}\ldots
Q_{m,m-1}^{r_{J_{m-1}}}\otimes\\\
[k-t,s_{1},\ldots,s_{t},e_{k+1}+\Phi_{J}(k+1),\ldots,e_{n}+\Phi_{J}(n)].$
Theorem 1.1 now follows from this equality and Theorem 2.1. ∎
## 3\. Proof of Proposition 1.2
First we prove the stated relation for $k=0$,
$\displaystyle[e_{1},\ldots,e_{n-1},e_{n}+n]=\sum_{s=0}^{n-1}(-1)^{n+s-1}[e_{1},\ldots,e_{n-1},e_{n}+s]Q_{n,s}^{p^{e_{n}}}.$
(3.1)
We will prove (3.1) and the following relation together by induction on $n$,
$\displaystyle[e_{1},\ldots,e_{n-1},e_{n}+n-1]=\sum_{s=0}^{n-2}(-1)^{n+s}[e_{1},\ldots,e_{n-1},e_{n}+s]Q_{n-1,s}^{p^{e_{n}}}$
$\displaystyle\hskip 170.71652pt+[e_{1},\ldots,e_{n-1}]V_{n}^{p^{e_{n}}}.$
(3.2)
Here, $V_{n}=L_{n}/L_{n-1}$.
We denote (3.1) and (3.2) when $n=m$ by 3.1$(m)$ and 3.2$(m)$, respectively.
When $n=2$ the proof is straightforward. Suppose that $n>2$ and that
3.1$(n-1)$ and 3.2$(n-1)$ are true.
By Laplace development and 3.2$(n-1)$ we have
$\displaystyle[e_{1},\ldots,e_{n-1},e_{n}+n-1]$
$\displaystyle=\sum_{t=1}^{n-1}(-1)^{n+t}[e_{1},\ldots,\hat{e}_{t},\ldots,e_{n-1},e_{n}+n-1]y_{n}^{p^{e_{t}}}+[e_{1},\ldots,e_{n-1}]y_{n}^{p^{e_{n}+n-1}}$
$\displaystyle=\sum_{t=1}^{n-1}(-1)^{n+t}\Big{(}\sum_{s=0}^{n-2}(-1)^{n+s}[e_{1},\ldots,\hat{e}_{t},\ldots,e_{n-1},e_{n}+s]Q_{n-1,s}^{p^{e_{n}}}\Big{)}y_{n}^{p^{e_{t}}}$
$\displaystyle\hskip 199.16928pt+[e_{1},\ldots,e_{n-1}]y_{n}^{p^{e_{n}+n-1}}$
$\displaystyle=\sum_{s=0}^{n-2}(-1)^{n+s}\Big{(}\sum_{t=1}^{n-1}(-1)^{n+t}[e_{1},\ldots,\hat{e}_{t},\ldots,e_{n-1},e_{n}+s]y_{n}^{p^{e_{t}}}\Big{)}Q_{n-1,s}^{p^{e_{n}}}$
$\displaystyle\hskip 199.16928pt+[e_{1},\ldots,e_{n-1}]y_{n}^{p^{e_{n}+n-1}}$
$\displaystyle=\sum_{s=0}^{n-2}(-1)^{n+s}[e_{1},\ldots,e_{n-1},e_{n}+s]Q_{n-1,s}^{p^{e_{n}}}$
$\displaystyle\hskip
113.81102pt+[e_{1},\ldots,e_{n-1}]\sum_{s=0}^{n-1}(-1)^{n+s-1}Q_{n-1,s}^{p^{e_{n}}}y_{n}^{p^{e_{n}+s}}.$
Since $V_{n}=\sum_{s=0}^{n-1}(-1)^{n+s-1}Q_{n-1,s}y_{n}^{p^{s}}$ (see [1],
[2]), 3.2$(n)$ is proved.
Now we prove 3.1$(n)$. From 3.2$(n)$ and the relation
$Q_{n,s}=Q_{n-1,s-1}^{p}+Q_{n-1,s}V_{n}^{p-1}$ (see [1], [2]) we obtain
$\displaystyle[e_{1},$ $\displaystyle\ldots,e_{n-1},e_{n}+n]$
$\displaystyle=\sum_{s=1}^{n-1}(-1)^{n+s-1}[e_{1},\ldots,e_{n-1},e_{n}+s]Q_{n-1,s-1}^{p^{e_{n}+1}}$
$\displaystyle\hskip 85.35826pt+[e_{1},\ldots,e_{n-1}]V_{n}^{p^{e_{n}+1}}$
$\displaystyle=\sum_{s=1}^{n-1}(-1)^{n+s-1}[e_{1},\ldots,e_{n-1},e_{n}+s]Q_{n,s}^{p^{e_{n}}}$
$\displaystyle\hskip
85.35826pt-[e_{1},\ldots,e_{n-1},e_{n}+n-1]V_{n}^{(p-1)p^{e_{n}}}$
$\displaystyle+\Big{(}\sum_{s=1}^{n-2}(-1)^{n+s}[e_{1},\ldots,e_{n-1},e_{n}+s]Q_{n-1,s}^{p^{e_{n}}}$
$\displaystyle\hskip
85.35826pt+[e_{1},\ldots,e_{n-1}]V_{n}^{p^{e_{n}}}\Big{)}V_{n}^{(p-1)p^{e_{n}}}.$
Combining this equality and 3.2$(n)$ we get
$\displaystyle[e_{1},e_{2},\ldots,e_{n-1},e_{n}+n]$
$\displaystyle=\sum_{s=1}^{n-1}(-1)^{n+s-1}[e_{1},\ldots,e_{n-1},e_{n}+s]Q_{n,s}^{p^{e_{n}}}$
$\displaystyle-(-1)^{n}[e_{1},\ldots,e_{n-1},e_{n}]Q_{n-1,0}^{p^{e_{n}}}V_{n}^{(p-1)p^{e_{n}}}.$
Since $Q_{n,0}=Q_{n-1,0}V_{n}^{p-1}$, the proof of 3.1$(n)$ is completed.
For $0<k<n$, Proposition 1.2 follows from (3.1) and [2; I.4.7] which asserts
that
$\displaystyle[k;e_{k+1},\ldots,e_{n}]=$ $\displaystyle\
(-1)^{k(k-1)/2}\sum_{0\leq
s_{1}<\ldots<s_{k}}(-1)^{s_{1}+\ldots+s_{k}}M_{n,s_{1},\ldots,s_{k}}[s_{1},\ldots,s_{k},e_{k+1},\ldots,e_{n}]/L_{n}.$
The proposition is completely proved.
## 4\. Some applications
In this section, using Theorem 1.1 and Proposition 1.2, we prove Theorem 1.3
and explicitly compute the action of $St^{S,R}$ on Mùi invariant
$M_{n,s_{1},\ldots,s_{k}}$ when $S,R$ are special. First we prove Theorem 1.3.
###### 4.1. Proof of Theorem 1.3.
Recall that $P^{t}=St^{\emptyset,(t)}$. From Theorem 1.1 we have
$\displaystyle P^{t}$ $\displaystyle M_{n,s_{1},\ldots,s_{k}}$
$\displaystyle=\begin{cases}[k;0,\ldots,\hat{t}_{1},\ldots,\hat{t}_{k+1},\ldots,n],&t=\underset{i=1}{\overset{k+1}{\sum}}\frac{p^{s_{i}}-p^{t_{i}}}{p-1},\text{
with }\\\ &s_{i-1}<t_{i}\leqslant s_{i},1\leqslant i\leqslant k+1,\\\
0,&\text{otherwise. }\end{cases}$
If $t_{k+1}=s_{k+1}=n$, then
$[k;0,\ldots,\hat{t}_{1},\ldots,\hat{t}_{k+1},\ldots,n]=M_{n,t_{1},\ldots,t_{k}}$.
Suppose $t_{k+1}<n$. By Proposition 1.2 we have
$\displaystyle[k;0,\ldots,\hat{t}_{1},$
$\displaystyle\ldots,\hat{t}_{k+1},\ldots,n]$
$\displaystyle=\sum_{s=0}^{n-1}(-1)^{n+s-1}[k;0,\ldots,\hat{t}_{1},\ldots,\hat{t}_{k+1},\ldots,n-1,s]Q_{n,s}$
$\displaystyle=\sum_{i=1}^{k+1}(-1)^{k+1-i}M_{n,t_{1},\ldots,\hat{t}_{i},\ldots,t_{k+1}}Q_{n,t_{i}}.$
Hence Theorem 1.3 follows. ∎
###### Notation 4.2.
Denote by $S^{\prime}:s_{k+1}<\dots<s_{n-1}$ the ordered complement of a
sequence $S:1\leqslant s_{1}<\ldots<s_{k}<n$ in $\\{1,\ldots,n-1\\}$. Set
$\Delta_{i}=(0,\ldots,1,\ldots,0)$ with 1 at the $i$-th place $(1\leqslant
i\leqslant n)$, $\Delta_{0}=(0,\ldots,0)$ and $R=(r_{1},\ldots,r_{n})$. Here,
the length of $\Delta_{i}$ is $n$.
The following was proved in Mùi [3; 5.3] for $R=\Delta_{0}$.
###### Proposition 4.3.
Set $s_{0}=0$. Under the above notations, we have
$St^{S^{\prime},R}M_{n,1,\ldots,n-1}=\begin{cases}(-1)^{(k-1)(n-1-k)+s_{t}-t}M_{n,s_{0},\ldots,\hat{s}_{t},\ldots,s_{k}},&R=\Delta_{s_{t}},\\\
\underset{t=0}{\overset{k}{\sum}}(-1)^{k(n-k)-t}M_{n,s_{0},\ldots,\hat{s}_{t},\ldots,s_{k}}Q_{n,s_{t}},&R=\Delta_{n},\\\
0,&\text{otherwise. }\end{cases}$
###### Proof.
Note that $M_{n,1,\ldots,n-1}=[n-1;0]$. From Theorem 1.1 we obtain
$St^{S^{\prime},R}M_{n,1,\ldots,n-1}=\begin{cases}(-1)^{k(n-1-k)}[k;1,\ldots,\hat{s}_{1},\ldots,\hat{s}_{k},\ldots,n-1,i],\\\
\hskip 28.45274ptR=\Delta_{i},\text{ with }i=s_{t},\ 0\leqslant t\leqslant
k,\text{ or }i=n,\\\ 0,\hskip 19.91684pt\text{otherwise. }\end{cases}$
It is easy to see that
$[k;1,\ldots,\hat{s}_{1},\ldots,\hat{s}_{k},\ldots,n-1,s_{t}]=(-1)^{n-1-k+s_{t}-t}M_{n,s_{0},\ldots,\hat{s}_{t},\ldots,s_{k}}.$
According to Proposition 1.2 we have
$\displaystyle[k;1,\ldots,$
$\displaystyle\hat{s}_{1},\ldots,\hat{s}_{k},\ldots,n-1,n]$
$\displaystyle=\sum_{s=0}^{n-1}(-1)^{n+s-1}[k;1,\ldots,\hat{s}_{1},\ldots,\hat{s}_{k},\ldots,n-1,s]Q_{n,s}$
$\displaystyle=\sum_{t=0}^{k}(-1)^{k-t}M_{n,s_{0},\ldots,\hat{s}_{t},\ldots,s_{k}}Q_{n,s_{t}}.$
From this the proposition follows. ∎
By the same argument as given in the proof of Theorem 1.3 and Proposition 4.3
we obtain the following results.
###### Proposition 4.4.
Let $\Delta_{i}$ be as in 4.2 and $s_{0}=0$. Then
$St^{\emptyset,\Delta_{i}}M_{n,s_{1},\ldots,s_{k}}=\begin{cases}(-1)^{s_{t}-t}M_{n,s_{0},\ldots,\hat{s}_{t},\ldots,s_{k}},&s_{1}>0,\
i=s_{t},\\\
\underset{t=0}{\overset{k}{\sum}}(-1)^{n-t-1}M_{n,s_{0},\ldots,\hat{s}_{t},\ldots,s_{k}}Q_{n,s_{t}},&s_{1}>0,\
i=n,\\\ 0,&\text{otherwise.}\end{cases}$
The following proposition was proved by Hưng and Minh [4] for $s=0$.
###### Proposition 4.5.
For $0\leqslant s\leqslant n$,
$St^{(s),(0)}M_{n,s_{1},\ldots,s_{k}}=\begin{cases}(-1)^{k+s_{t}-t}M_{n,s_{0},\ldots,\hat{s}_{t},\ldots,s_{k}},&s=s_{t},\\\
\underset{t=1}{\overset{k}{\sum}}(-1)^{n+k+t+1}M_{n,s_{1},\ldots,\hat{s}_{t},\ldots,s_{k}}Q_{n,s_{t}},&s=n,\\\
0,&\text{otherwise.}\end{cases}$
## References
* [1] L. E. Dickson, A fundamental system of invariants of the general modular linear group with a solution of the form problem, Trans. Amer. Math. Soc. 12 (1911), 75-98 MR1500882
* [2] H. Mùi, Modular invariant theory and the cohomology algebras of symmetric groups, J. Fac. Sci. Univ. Tokyo Sec. IA Math. 22 (1975), 319-369 MR0422451
* [3] H. Mùi, Cohomology operations derived from modular invariants, Math. Z. 193 (1986), 151-163 MR852916
* [4] N.H.V. Hưng and P.A. Minh, The action of the mod $p$ Steenrod operations on the modular invariants of linear groups, Vietnam J. Math. 23 (1995), 39-56 MR1367491
* [5] J. Milnor, Steenrod algebra and its dual, Ann. of Math. 67 (1958), 150-171 MR0099653
* [6] N.E. Steenrod and D.B.A. Epstein, Cohomology operations, Ann. of Math. No. 50, Princeton University Press, 1962 MR0145525
# Fake News Detection using Stance Classification: A Survey
Anders E. Lillie<EMAIL_ADDRESS>and Emil R. Middelboe<EMAIL_ADDRESS>
(December 11, 2018)
###### Abstract
This paper surveys and presents recent academic work carried out within the
field of stance classification and fake news detection. Echo chambers and the
model organism problem are examples that pose challenges to acquiring
high-quality data, due to opinions being polarised in microblogs. Nevertheless
it is shown that several machine learning approaches achieve promising results
in classifying stance. Some use crowd stance for fake news detection, such as
the approach in [Dungs et al., 2018] using Hidden Markov Models. Furthermore,
feature engineering is of significant importance in several approaches, as
shown in [Aker et al., 2017]. This paper additionally includes a proposal for
a system implementation based on the presented survey.
## 1 Introduction
Fake news detection currently relies on knowing the attitude that people
communicating on social media are expressing towards an idea. Figuring this
out is called stance classification, which is a Natural Language Processing
(NLP) task that seeks to classify the stance taken towards some claim. This
paper reviews different ideas and approaches towards accomplishing this goal.
NLP is a research area concerned with processing human language using language
models and computational approaches like machine learning (ML). With the
progress of ML, new tools and techniques open up various ways of designing
stance classification algorithms. It is interesting to investigate this
progress and gain insight into current state-of-the-art approaches.
The work presented in this paper is carried out in the ”Thesis Preparation”
course at the IT-University of Copenhagen on the third semester of the MSc
Software Development program. As such it is a project preparing for the thesis
in Spring, 2019. The following is the tentative research question for the
thesis project.
### 1.1 Research question
Stance classification and fake news detection are currently mostly concerned
with the English language. The thesis project will attempt to answer the
following questions: how do we build an automatic stance classification system
for Danish? Further, how do we apply this system to verify or refute rumours
and possibly detect fake news?
### 1.2 Overview
The objective of this paper will thus be to study the approaches used for
stance classification and fake news detection in the English language and what
methods might be applicable to build a system for the Danish language. In
particular section 2 will provide context and definition for the term stance
classification. Section 3 will discuss definitions of fake news detection,
refer to recent work and discuss a number of social and psychological aspects
in the area. Section 4 will cover data gathering, feature extraction and data
annotation, as well as give context for the structure of microblogs. Section 5
covers a number of different approaches taken to classify stance and detect
fake news. Section 6 will present proposals for the choice of approach, data
gathering and technology for the thesis project, in addition to a high-level
thesis plan. Finally section 7 will summarise the findings of this research
paper.
## 2 Stance classification
Literature on stance classification and stance detection systems is rather
new, as most of the papers are published within the last 10 years. One of the
first studies in the area is from [Qazvinian et al., 2011], in which they
gather data from Twitter containing more than 10,000 tweets over 5 different
topics. They propose a system for identifying misinformation in microblogs
using different Bayes classifiers, extracting “content-based”,
“network-based”, and “Twitter specific memes” features. Different approaches
and objectives have since been pursued to tackle the computational task of
classifying stance given some data based on a number of claims.
Conversations in microblogs, such as Twitter, are typically used in
classifying the stance for each reply to the source post, which expresses some
claim. Many systems use the Support, Denying, Querying, and Commenting (SDQC)
labels for classifying these posts[Zubiaga et al., 2016]. Before stance
classification is further investigated, we discuss applications of stance
classification as well as related subjects.
### 2.1 Applications
Stance classification is an area with closely related subjects, including
veracity classification/detection and fake news detection. The reason for this
is that stance classification can be used in the task of veracity
classification, as well as fake news detection[Dungs et al., 2018, Shu et al.,
2017a]. In this paper the term stance classification refers to the task of
determining the opinion behind some text towards a specific target. As such,
stance detection is the task of using the classification system to
automatically discover stance, and this term is used interchangeably with
stance classification. The same goes for veracity classification which, on the
other hand, is the task of resolving some claim by analysing crowd
reactions[Derczynski et al., 2017].
The task of stance classification often comes in two variants: open and
target-specific[Aker et al., 2017]. Open stance classification is applied in
contexts, where no target/topic is known in advance, which makes it suitable
for rumour resolution. Since the attitudes (stances) from a crowd towards some
claim can be indicative of its truthfulness, it is as such applicable in
veracity detection[Dungs et al., 2018]. In target-specific stance
classification, on the other hand, cues about a target that is known in
advance are provided in the training data. This can make classification of
stance from unseen data, but with the same target, easier[Mohammad et al.,
2016].
Furthermore the above described variants of stance classification can be
either supervised or unsupervised. In the former case classification has prior
knowledge based on a ground truth, i.e. data is annotated, and in the latter
case classification must be inferred from the data, since there is no prior
knowledge (https://towardsdatascience.com/supervised-vs-unsupervised-learning-14f68e32ea8d, visited 03-12-2018).
In the next section we introduce fake news detection and explore how stance
classification is used for rumour resolution.
## 3 Fake news detection
One definition of fake news is that “fake news is news articles that are
intentionally and verifiably false”[Shu et al., 2017a]. The key features of
this statement are (1) authenticity: fake news includes false information that
can be verified as such, and (2) intent: fake news is created with the
dishonest intention of misleading consumers. A related area is that of rumour
classification, in
which the veracity of circulating information is yet to be verified at the
time of spreading[Shu et al., 2017a]. Thus the distinction is that fake news
is intentionally misleading and is something which can be proven to be fake.
The problem to solve for detecting rumours and fake news is however much the
same. In the context of Twitter for example, given a source tweet containing a
claim and a number of responses, the task is to determine whether the claim is
true or false.
PHEME is a project dealing with the fake news detection problem described
above, focusing on the veracity of data in social media and on the
web[Derczynski and Bontcheva, 2014]. In particular, four kinds of false claims
are sought to be identified in real time: rumours, disinformation,
misinformation, and speculation. Of these four categories, disinformation most
precisely matches the definition of fake news given above, i.e. information
that is spread deliberately to deceive, in contrast to misinformation, which
is unintentional. Since the start of PHEME in 2014, several studies and papers
have been published dealing with the task mentioned here, including [Kochkina
et al., 2017, Derczynski et al., 2017, Zubiaga et al., 2016].
The task of identifying false claims is also undertaken in the Fake News
Challenge[Pomerleau and Rao, 2017]. The goal in this challenge is to explore
how ML and NLP can be used to combat the “fake news problem”. Specifically,
the task is broken down into stages, with the first stage being stance
detection: classifying whether a body text agrees with, disagrees with,
discusses, or is unrelated to a headline. Note that this is quite different
from the analysis of microblog data, where the posts are in a sense dynamic
due to their temporal nature. However, related to the task of the Fake News
Challenge is the work
of [Augenstein et al., 2016], in which they build a classification system to
interpret tweet stance towards previously unseen targets and where the target
is not always mentioned in the text. Specifically they build a model to
classify tweets regarding Donald Trump, where the training and development
data is based on the targets Climate Change is a Real Concern, Feminist
Movement, Atheism, Legalization of Abortion, and Hillary Clinton.
### 3.1 Social and psychological aspects
Since fake news revolve around people it is interesting to investigate which
social and psychological factors that have relevance and implications for fake
news detection.
Some concepts that may have effects for the data used in fake news detection
are confirmation bias and the echo chamber effect[Shu et al., 2017a].
Confirmation bias describes consumers who prefer to receive information that
confirms their existing views, while the echo chamber effect describes users
on social media that tend to form groups containing like-minded people with
polarised opinions. These phenomena are discussed in [Quattrociocchi et al.,
2016], which carries out research on a large Facebook dataset. The research
shows that users tend to polarise their interactions with users and pages of
the same kind. Furthermore it is shown that the degree of polarisation
correlates with the degree of sentiment extremity in the users’ comments.
Another concept describing sharing of information between users is filter
bubbles and is covered in [Bechmann and Nielbo, 2018]. Filter bubbles describe
isolated users receiving news and information which does not overlap with
information other users get. As such filter bubbles are much alike echo
chambers, however [Bechmann and Nielbo, 2018] has a focus on filter bubbles in
relation to the Facebook news feed. The paper concludes that, depending on the
approach, respectively 10.0% and 27.8% of the users in the data set were in a
filter bubble. Furthermore it is noted that there is no clear connection
between age, education, living location or gender and being in a filter
bubble. However, the users in filter bubbles had fewer friends, group likes
and page likes than users who were not.
While [Bechmann and Nielbo, 2018] and [Quattrociocchi et al., 2016] both
examine the spread and isolation of information, it is important to note a key
difference between them. [Bechmann and Nielbo, 2018] covers the spread of news
content specifically on the Facebook news feed in relation to the Edge Rank
algorithm (http://edgerank.net/, visited 09-12-2018), while
[Quattrociocchi et al., 2016] examine the spread of information with regard to
shared posts, page likes and so forth.
The above findings show that it is important to keep these social and
psychological aspects in mind when considering the data used from social
media platforms. Otherwise polarised or skewed data could have implications
for the results and the later usefulness of the research in other contexts.
This leads to the next section, where data and the factors which influence its
quality are discussed.
## 4 Data
Gathering data for stance classification is a task in itself, as different
factors, such as bias and class distribution, can have significant
consequences for the resulting system. Social and psychological aspects in
this regard are discussed above in section 3.1. Furthermore, classifiers
perform better on datasets with balanced class labels after annotation has
been performed. Otherwise one might end up with misleading/imprecise
classification systems: in [Kochkina et al., 2017] they build the
best-performing system for SemEval 2017 Task 8, subtask A, but due to
unbalanced data, the model is primarily able to classify “commenting”
instances, with only few correct predictions of “denying” and “supporting”
instances, which are the more interesting classes.
### 4.1 Data gathering
This section will provide an overview of approaches to gather relevant data
for the stance classification task.
In [Castillo et al., 2011] a system is built to gather data from Twitter
and filter newsworthy topics. First they monitor Twitter posts over a period
of 2 months using a monitoring system (“Twitter Monitor”, currently
unavailable: http://www.twittermonitor.net/), which detects bursts in the
frequency of sets of keywords found in messages. Then they query the system
with specific keywords and collect tweets that match them during the burst
peaks. They gather Twitter data in this way on over 2500 topics, and filter
newsworthy ones from pure conversations with the MTurk API
(https://www.mturk.com/). The paper also describes how the labels obtained
from MTurk are used to train a J48 decision tree classifier to filter the
topics automatically.
Similarly, a dataset is generated from Twitter using regular expression
queries in [Qazvinian et al., 2011]. They utilise the Twitter API by searching
for data with queries that each represent a popular rumour deemed either
“false” or “partly true” by About.com (http://urbanlegends.about.com). Two
annotators then manually go over all the collected tweets and annotate whether
they are about a set of particular rumours.
More recent datasets include those of the SemEval tasks, such as SemEval 2016
Task 6[Mohammad et al., 2016] and SemEval 2017 Task 8[Derczynski et al.,
2017]. Alternative datasets are discussed in [Shu et al., 2017a], including
BuzzFeedNews, LIAR, BS Detector, and CREDBANK. They point out, however, that
these datasets have limitations that make them challenging to use for fake
news detection. As a result they are currently in the process of developing
their own dataset, which includes news content and social context features[Shu
et al., 2018], the feature categories they find important for the fake news
detection task.
### 4.2 Feature extraction
Once data has been gathered, one must extract the features relevant for the task at hand. The subject of feature engineering could comprise a whole paper in itself; as such, this section will not try to compare features, but will provide an overview of the most common features used for stance classification and fake news detection[Castillo et al., 2011, Shu et al., 2017a, Qazvinian et al., 2011, Aker et al., 2017, Kochkina et al., 2017, Enayet and El-Beltagy, 2017]. Table 1 is a compact list of groups of similar features with accompanying short descriptions. Additionally, a popular approach is to include word embeddings computed with the word2vec algorithm[Google, 2013], representing words as dense vectors, as done in [Kochkina et al., 2017].
Feature | Description
---|---
Lexical | Count of words and characters, ratio of capital letters, names, as well as presence of period, question mark, exclamation mark, and special words (e.g. negation words)
Attachments | URLs, images, and/or hashtag content
Syntax | Sentence-level features, e.g. n-grams, BOW, POS tags
User | No. of posts written, user creation date, no. of followers, demographics
Post | Source or reply, relation to other posts, sentiment (positive/negative polarity), temporal information
Table 1: An overview of the most common features used in stance classification
and fake news detection
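To make these feature groups concrete, the following minimal Python sketch computes a handful of the lexical features from Table 1. The function and feature names are our own illustration and not taken from any of the cited systems.

```python
import re

def lexical_features(text: str) -> dict:
    """Toy lexical features in the spirit of Table 1 (names are illustrative)."""
    words = text.split()
    n_chars = len(text)
    return {
        "num_words": len(words),
        "num_chars": n_chars,
        "capital_ratio": sum(c.isupper() for c in text) / max(n_chars, 1),
        "has_period": "." in text,
        "has_question_mark": "?" in text,
        "has_exclamation_mark": "!" in text,
        "has_negation": bool(re.search(r"\b(?:not|no|never)\b|n't\b", text.lower())),
        "has_url": bool(re.search(r"https?://\S+", text)),
    }

print(lexical_features("This is NOT true! See https://example.com"))
```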
With the progress of ML, tools and techniques open up various ways of tackling the task of stance classification. Several studies, however, show that the most crucial part of stance classification is to extract and use the optimal features[Aker et al., 2017, Dungs et al., 2018]. In early work, it was explored how features could be categorised into four classes: message-, topic-, user-, and propagation-based [Castillo et al., 2011]. Although they are Twitter-specific, they are claimed to be generic. The message-based features deal with characteristics of messages, such as tweet length. The user-based features, on the other hand, deal with characteristics of the user who posts the message, such as registration age. Topic-based features are then aggregations computed from the message- and user-based ones, such as the fraction of tweets containing URLs. Finally, the propagation-based features consider the conversation tree, e.g. the depth of the tree.
Another study shows that more or less abandoning the idea of having many features can provide significant results [Dungs et al., 2018]. Their contribution shows how stance and tweet times alone achieve state-of-the-art results in the task of veracity detection, as opposed to approaches using content- and user-based features such as those introduced above. Along these lines, [Aker et al., 2017] show how, by adding just six “problem-specific” tweet confidence features to existing well-performing features, they achieve better results than previous systems on the same data. They demonstrate this using a decision tree stance classifier, which is arguably simpler in its approach than the competing systems’.
### 4.3 Data structure in microblogs
This paper investigates stance classification over social media data, in particular from microblog platforms, as their structure makes them applicable for this task[Tolmie et al., 2018]. As an example of a microblog conversation, figure 1 from [Kochkina et al., 2017] illustrates a Twitter conversation, where a source post makes a claim and nested replies respond to it either directly or indirectly. Note that the tweets are also annotated, which is discussed in the next section.
Figure 1: A conversation thread with three branches. Source: [Kochkina et al.,
2017]
In [Procter et al., 2013] they analyse how rumours propagate on Twitter, which we hypothesize also applies to similar microblogs such as Reddit666https://www.reddit.com/. In short, rumour propagation comprises the following events, which we have reformulated to be general for microblogs:
1. A rumour starts with someone posting about the occurrence of an alleged incident.
2. The rumour spreads as the post is shared, and some form of evidence may be added in the process.
3. Others begin to challenge its credibility.
4. A consensus begins to emerge.
This can be compared to figure 1, where user 0 starts a rumour and several other users reply, some challenging its credibility by either querying the rumour (user 2) or denying it (user 4). Had the example been bigger, we might also have seen other people actually re-posting the initial post, some supporting it with URLs to the incident, and after some time a general consensus could possibly be inferred from the messages.
### 4.4 Annotation
When data is gathered and features are extracted, the question is then which kind of labels one should use for annotation. One popular annotation scheme is SDQC, which labels a text as supporting, denying, querying or commenting with regard to some rumour. This is discussed in section 4.4.1, followed by a comparison to topic classification in section 4.4.2, which labels a text as belonging to some predefined category.
Manually annotating data does however come with some challenges. It is time-consuming to have experts and individuals annotate the data manually, and the annotations could be influenced by the individuals’ personal bias. Different annotators might have different views of which labels are appropriate for some microblog post. One example is [Stranisci et al., 2016], where 8 annotators manually annotate over 8000 tweets. Each tweet is annotated twice by different annotators, and there are disagreements on more than 2000 of the tweets. To mitigate the disagreements from personal bias, a crowd-sourcing platform is utilised to give another set of annotations777https://en.wikipedia.org/wiki/Figure_Eight_Inc. Visited 09-12-2018.
Not only is it important which labels are used, but also what data is being annotated. Twitter is a popular platform for gathering data, as it facilitates an easy way to gather large amounts of text data surrounding controversial debates or events. Using public datasets in research should help enable others to verify and improve on prior research.
While Twitter is a great platform for gathering data, it is not the only source of data out there, and this must be kept in mind. If data from Twitter is used primarily, models and systems might become optimised for text written in the context of that particular social media platform and be less useful elsewhere. This is further discussed in section 4.5.
#### 4.4.1 Labels and SDQC
The idea of using the SDQC labels stems from an article which experimentally analyses tweets sent during the August 2011 riots in England with a “computationally assisted methodology”[Procter et al., 2013]. They develop a code frame for annotating rumour tweets with 6 labels: claim with and without evidence, counterclaim with and without evidence, appeal for more information, and comment. This framework is extended and used in a more recent study, which develops a methodology for collecting, identifying and annotating rumour data from Twitter[Zubiaga et al., 2016]. They assume two types of tweets, namely the source tweet that initiates the rumour and the response tweets that respond to it (see also figure 1). They categorise their labels in three main dimensions, which express a mode of interaction: support/response type, certainty, and evidentiality.
Support/response type depends on the tweet type: a source tweet can be labelled as supporting, denying or under-specified in regard to the content of the statement. If it is a response tweet, it can be labelled as agreed, disagreed, appeal for more information, or comment. These labels correspond to the codes used in the formerly mentioned paper, [Procter et al., 2013]. In addition to that work, however, [Zubiaga et al., 2016] also consider response types for nested responses, i.e. tweets not directly responding to the source tweet.
The certainty dimension measures the degree of confidence expressed by the author of a tweet when posting a statement in the context of a rumour. Its values include: certain, somewhat certain, and uncertain. Finally, evidentiality determines the type of evidence, if any, provided directly in relation to the rumour being discussed. This dimension includes seven labels, of which attachments and quotation are examples.
The methodology described above is the general approach for the articles
investigated in this paper as most of them work with data following the format
described in section 4.3. An important take-away is the observation that
nested posts/replies play a big role for the propagation of rumours.
#### 4.4.2 A related annotation scheme: Topic classification
SDQC seems to be a fair annotation scheme, as the labels divide the classes into very general opinion categories, supposedly making it very suitable for stance classification. For comparison, we can look at a different approach and investigate another annotation scheme. One such example is topic classification.
Topic classification is somewhat similar to stance classification, but differs in its objective, and thus its annotation scheme. Where the latter deals with classifying opinions (stance) in text, the former deals with classifying specific topics from the content of text. This approach is used in [Giménez et al., 2017] to analyse tweets regarding the Spanish election in 2015. They introduce five categories for topic labelling: (1) political issues, (2) policy issues, (3) personal issues, (4) campaign issues, and (5) other issues. Compared to SDQC, where the labels are rather general and can be used in any stance classification task, this annotation scheme is rather context-specific. They conclude that the task was complicated, in particular when the topics were similar. One can indeed imagine that the tweet data would contain text covering more than one of the topics, making it difficult to annotate it with only one category. With SDQC, you would typically not see a person both support and deny some claim. Thus stance classification is more forgiving than topic classification when it comes to the annotation scheme.
### 4.5 Twitter conversations as a social phenomenon
The methodology behind the SDQC annotation scheme is analysed in [Tolmie et al., 2018], where the sociological aspects of face-to-face conversations are compared to those of microblogging, and in particular Twitter. They conclude that microblogging cannot be treated as a face-to-face conversation due to various factors, including the asynchronous nature of the technology and limits on messages. They investigate microblogging as a turn-taking system, in which one person initiates a message (and potentially a rumour), to which users take turns responding. One interesting observation in this regard is that the flow of face-to-face conversations allows for naturally “self-selecting” next speakers, whereas there is no turn order in microblogging because of the temporal gaps. They find that rumours unfold across multiple turns and that one needs to examine the organisational characteristics of how specific phenomena unfold in social interaction in order to understand how they work as social phenomena. This means that focusing on one tweet in isolation is very limiting in regards to the information that can be extracted in a social context. The annotation schema is then based on the following observations in relation to Twitter, where 4 and 5 are specifically related to the production of rumours:
1. Tweets are sequentially ordered
2. Exchanges involve topic management
3. Important accountability mechanisms are in play
4. Agreement and disagreement
5. How tweets are rendered trustworthy through production of evidence
More specifically, 2 and 3 relate to the task of labelling source tweets as either aligning with or refuting a news event; 1, 2, 3, and 4 relate to the task of labelling whether replies agree or disagree with the source tweet; and finally, certainty and evidentiality relate to 5.
#### 4.5.1 Big data on social media
Related to the subject of Twitter conversations in a social context, [Tufekci, 2014] is a research paper on big data on social media, in which methodological and conceptual challenges for this field are studied. The validity and representativeness of social media big data are in focus. Issues in this regard are introduced, some of which are of particular interest in the context of stance classification and fake news detection.
One is the “model organism problem”, in which a few specific social media platforms are frequently used to generate data with no consideration of potential bias. It is argued that online platforms such as Twitter raise important questions of representation and visibility because of differences in behaviour depending on demography and social groups. The point is that we might miss out on important data by making use of the same platforms over and over again.
Another interesting issue is that big data analysis typically relies on a single social media platform, whereas it is rarely the case that such information is confined to one source. It is argued that such analysis must take into account that there may be effects which are not visible because relevant information is missing. Thus a request is made for more research on more than one platform in order to understand broader patterns of connectivity.
Finally, a point on human interaction on social media platforms argues that human self-awareness needs to be taken into account in big data analysis, as humans behave differently when they know they are being observed. Along these lines, it is argued that one should take into account that people often are aware of the mechanisms involved in social media communication, and as such can exploit them for their own benefit. This is also related to the concept of confirmation bias, which is discussed in section 3.1.
To summarise, even if a social media platform such as Twitter provides easily available data on news events, one should consider the actual data content. It is important to investigate whether the data is representative, whether other platforms can contribute, and who the communicating users are and how they behave.
## 5 Classification approaches
Once data has been gathered and annotated and features have been extracted, one must decide which approach to use for the actual classification system. This section provides an overview of different approaches, as well as their results (section 5.10), both for stance classification on its own and as applied to fake news detection.
### 5.1 Recurrent Neural Network and Long-Short Term Memory
Recurrent neural network (RNN) systems allow representing arbitrarily sized structured inputs in a fixed-size vector, while paying attention to the structural properties of the input. This makes them quite appropriate for the type of data used in stance classification. [Kochkina et al., 2017] implement such a system for SemEval-2017 Task 8 subtask A, in which they implement a Long Short-Term Memory (LSTM) version, feeding whole conversation branches in as input, as opposed to the typical case of using words alone as input. They achieve very good results, even coming out as the best-performing system for the task.
[Augenstein et al., 2016], as mentioned earlier, utilise the LSTM model differently. They use the LSTM to encode target text and source text, using a “Bidirectional conditional LSTM” approach. That is, input is fed into the network in both directions, and one layer depends on another layer’s state, thus making it conditional.
[Zarrella and Marsh, 2016] also implement an RNN, but take an interesting approach by pre-training their model on existing related Twitter data, utilising the most frequent hashtag as a label. The lack of domain-labelled data was a challenge in their project, and the pre-training of their model was an attempt to tackle this; it yielded good results in comparison to a randomly initialised model.
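To illustrate the sequence-based idea behind such branch-level models, the following PyTorch sketch classifies each tweet in a conversation branch into one of the four SDQC classes. The dimensions, names and architecture details are simplifying assumptions on our part and do not reproduce the exact system of [Kochkina et al., 2017].

```python
import torch
import torch.nn as nn

class BranchLSTM(nn.Module):
    """Minimal branch-level stance classifier: a branch is a sequence of
    per-tweet feature vectors (e.g. averaged word2vec embeddings)."""
    def __init__(self, input_dim=300, hidden_dim=128, num_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)  # SDQC logits

    def forward(self, branch):            # branch: (batch, seq_len, input_dim)
        outputs, _ = self.lstm(branch)    # one hidden state per tweet
        return self.classifier(outputs)   # one SDQC prediction per tweet

model = BranchLSTM()
dummy_branch = torch.randn(1, 5, 300)     # one branch of 5 tweets
logits = model(dummy_branch)              # shape: (1, 5, 4)
```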
### 5.2 Support Vector Machine
Another SemEval-2017 Task 8 paper, which deals with both subtask A and B, is [Enayet and El-Beltagy, 2017]. They focus on the latter (veracity prediction), in which they scored best. They use a linear Support Vector Machine (SVM) approach with BOW features and other manually selected features. The SVM maps data to points in space, and then assigns classes to the data depending on the positions of the output, as opposed to the probabilistic classification approach typically used in neural networks.
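A minimal sketch of such a pipeline, assuming scikit-learn and toy data; the actual feature set of [Enayet and El-Beltagy, 2017] is richer than plain BOW:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["the claim is confirmed by police", "this is fake, no source at all"]
labels = ["true", "false"]  # toy veracity labels

# BOW features feeding a linear SVM
model = make_pipeline(CountVectorizer(), LinearSVC())
model.fit(texts, labels)
print(model.predict(["police confirmed the claim"]))
```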
### 5.3 Convolutional Neural Network
Another approach based on neural networks is that of convolutional neural networks (CNNs), which work particularly well with spatial data structures.
[Chen et al., 2017] implement a CNN model with leave-one-out (LOO) testing to classify with SDQC. They use varying window sizes, while also training 5 models independently and using majority voting to obtain results. A similar approach is implemented in [Kim, 2014], which deals with seven different NLP tasks, experimenting with different CNN architectures. It is shown that a CNN with “static” (not trained during backpropagation) word embeddings and one with “non-static” (trained during backpropagation) word embeddings perform well, whereas a combination of the two, denoted “CNN-multichannel”, performs best overall. As such it is concluded that unsupervised pre-training of word vectors is an important factor for NLP deep learning, and that a one-layered CNN can perform remarkably well.
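The following sketch shows a one-layered CNN over word embeddings with several window sizes, in the spirit of [Kim, 2014]; all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """One-layered CNN with multiple window sizes and max-over-time pooling."""
    def __init__(self, vocab_size=10000, emb_dim=128, num_classes=4,
                 window_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, num_filters, k) for k in window_sizes])
        self.classifier = nn.Linear(num_filters * len(window_sizes), num_classes)

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        x = self.embedding(tokens).transpose(1, 2)   # (batch, emb_dim, seq_len)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))

logits = TextCNN()(torch.randint(0, 10000, (2, 30)))  # two 30-token sequences
```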
### 5.4 Crowd stance
One approach to fake news detection is to analyse the replies to the source post of a news item. Applying stance classification to microblog conversation data enables analysis of the general stance/opinion of the “crowd”[Pomerleau and Rao, 2017, Derczynski et al., 2017]. If, for example, there is a lot of negative response to a post, it might show that the crowd is sceptical about the claim/rumour in the source, and vice versa.
### 5.5 Hidden Markov Model
One example of a crowd stance implementation is the use of a Hidden Markov Model (HMM) in [Dungs et al., 2018], which uses stance and tweets’ times alone for automatic rumour veracity classification, achieving state-of-the-art results. HMMs are well known for temporal pattern recognition, which they utilise as follows: regard the individual stances over a rumour’s lifetime as an ordered sequence of observations, and then compare sequence occurrence probabilities under the true and false models. Their results are obtained using gold stance labels, but they also test with automatically generated stance labels[Aker et al., 2017], observing only a marginal decrease in performance. This shows that their veracity classification system has viable practical applications.
Furthermore, they apply their system in a setting for early detection of rumours; that is, they limit the number of tweets to the first 5 and 10 tweets, respectively. Surprisingly, even with only 5 tweets, their model still outperforms the baselines, which use all of the tweets.
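A minimal sketch of the underlying idea, assuming the hmmlearn library (and a recent release providing CategoricalHMM): fit one HMM per veracity class on discrete stance sequences and classify a new rumour by comparing sequence log-likelihoods. The toy data and the two-state choice are our assumptions, not the setup of [Dungs et al., 2018].

```python
import numpy as np
from hmmlearn.hmm import CategoricalHMM  # assumes a recent hmmlearn release

# Toy stance sequences (0=support, 1=deny, 2=query, 3=comment), one per rumour
true_seqs = [np.array([[0], [0], [3], [0]]), np.array([[0], [2], [0]])]
false_seqs = [np.array([[1], [2], [1], [3]]), np.array([[1], [1], [3]])]

def fit_hmm(seqs):
    X = np.concatenate(seqs)             # stacked observations
    lengths = [len(s) for s in seqs]     # sequence boundaries
    return CategoricalHMM(n_components=2, random_state=0).fit(X, lengths)

hmm_true, hmm_false = fit_hmm(true_seqs), fit_hmm(false_seqs)

# Classify a new rumour by comparing likelihoods under the two models
new_seq = np.array([[0], [0], [2]])
print("true" if hmm_true.score(new_seq) > hmm_false.score(new_seq) else "false")
```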
### 5.6 Tree classification
In [Aker et al., 2017] one of the approaches used is a J48 decision tree classifier, which is built over a number of features from earlier work, extended with some “problem-specific” features. The approach reaches state-of-the-art performance for stance classification and shows that a simple classifier can work really well. A lot of work went into defining the features for a tweet, showing that the features used to describe a tweet are key to obtaining good results.
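J48 is Weka's implementation of C4.5; a similar (though CART-based) decision tree can be sketched with scikit-learn, with purely illustrative toy feature vectors:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy per-tweet feature vectors: [num_words, capital_ratio, has_url, num_replies]
X = [[12, 0.05, 1, 4], [3, 0.40, 0, 0], [25, 0.02, 1, 9], [5, 0.30, 0, 1]]
y = ["support", "deny", "support", "deny"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[10, 0.10, 1, 2]]))
```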
#### 5.6.1 Ensemble with CNN
Another interesting approach is that of the winning team of the Fake News Challenge[Pomerleau and Rao, 2017], which uses an ensemble of decision trees and a CNN[Baird et al., 2017]. Specifically, their model is based on a 50/50 weighted average between gradient-boosted decision trees and a deep CNN. Neither of the models has impressive accuracy on its own, but the 50/50 weighting in the final classification step improves their results.
### 5.7 Multi-Layered Perceptron
The second-best scoring team in the Fake News Challenge[Pomerleau and Rao, 2017] utilises a Multi-Layered Perceptron (MLP) approach with several ReLU layers [Hanselowski et al., 2017]. The final system ensembles 5 instances of the MLP model and decides the output label by majority voting between the instances. The team scoring third-best in the Fake News Challenge also implements an MLP, employing lexical and similarity features with one hidden ReLU layer[Riedel et al., 2017]. As noted in the paper, the results are quite disappointing, since the model is inaccurate at predicting the most interesting classes, “agree” and “disagree”.
### 5.8 Matrix factorization
In [Shu et al., 2017b] an approach that exploits a tri-relationship between publishers, news contents and social engagements is used for fake news detection. The approach (denoted “TriFN”) utilises non-negative matrix factorisation to generate latent feature vectors for users and news. A comprehensive mathematical model describing the problem as a minimisation problem is described and formally optimised. The TriFN framework outperforms a number of baselines and yields positive results.
### 5.9 Bayes classification
A more mathematical approach than those described so far is classification with Bayes classifiers, as implemented in [Qazvinian et al., 2011], a pioneering paper in the area of stance classification and fake news detection. The approach is based on learning a linear function of different Bayes classifiers, used as high-level features, to predict whether a user believes a specific rumour or not. Each of the classifiers calculates the likelihood ratio for a given tweet under either a positive model or a negative model with respect to the given feature (i.e. Bayes classifier).
### 5.10 Performance overview
In appendix A, a brief overview of the method, dataset and results for each paper presented in this section is shown in table 2. Note that this is not meant to be a direct comparison of the approaches taken in the papers, as some of the papers use different metrics, degrees of classification and datasets, and a comparison would as such not give much value.
The results for the Fake News Challenge systems use a custom metric, making them difficult to compare with other systems. However, [Hanselowski et al., 2018] have reproduced the results and report their $F_{1}$ scores, which are used here as well. Interestingly, the teams’ rankings change when comparing $F_{1}$ scores. Note also that the lower part of table 2 (divided by double lines) reports results for binary classification for the fake news detection task, whereas the upper part is for stance classification.
## 6 Proposal of system implementation approach
In this section we propose a concrete system implementation for the thesis project as a result of the research in this paper. The objective is to build a stance classification system for Danish over social media data, and to apply it to rumour resolution and possibly to fake news detection. As such we need to gather data, annotate it, build a classification system and deploy it888By deploying it we mean making it publicly available to use “out-of-the-box” on GitHub. Furthermore, we include a tentative thesis plan as a feasibility check and guideline for the project.
### 6.1 Data gathering and annotation
For the data gathering and annotation phase we propose to use the social media platform Reddit, in particular the official Danish subreddit at https://www.reddit.com/r/Denmark/. Reddit has an open API allowing its data to be used for non-commercial purposes999https://www.reddit.com/wiki/api Visited 04-12-2018. Clearly, Twitter would be an obvious candidate, as is evident from the research presented in this paper, but we have spent some time exploring Danish news on this platform, and it is not really present. Facebook would be another good candidate, but its data is not publicly available.
For annotation we propose to use the SDQC approach, that is, four labels indicating: supporting, denying, querying, and commenting.
Alternatively, if we do not succeed in finding a proper microblog platform, we
can use the same approach as the task in the Fake News Challenge[Pomerleau and
Rao, 2017], i.e. performing stance classification based on a headline and a
body text for a news event.
### 6.2 Classification and detection system
We propose to build a stance classifier model with a decision tree approach, which is covered in section 5.6, as it has proved to be a simple yet effective approach for stance classification.
Further, the approach described in section 5.5 is quite interesting, achieving very good results by implementing an HMM for veracity detection using few and simple features, namely stance and tweets’ times. We propose to use the same approach for fake news detection based on historical events.
In the case where we do not have data in the form of a microblog, but as news articles (see section 6.1 above), we propose to use an ensemble approach like the one introduced in section 5.6.1, combining tree classification with deep learning methods.
### 6.3 Technology
We also propose a framework and programming environment to work with for the project. Python is a popular programming language for data analysis, including ML and NLP, because of its plethora of libraries. An HMM could be implemented using hmmlearn101010https://github.com/hmmlearn/hmmlearn, a neural network could be implemented using PyTorch111111https://pytorch.org/, and a decision tree with the scikit-learn library121212https://scikit-learn.org/. Python also features a rich library for data pre-processing, NLTK131313https://www.nltk.org/, which among other things does automatic word tokenization. Apart from the useful libraries, Python is a high-level language allowing for a code-first approach, i.e. fast prototyping.
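As a small illustration of this stack, the sketch below tokenizes a Danish sentence with NLTK; it assumes the relevant tokenizer models have been downloaded.

```python
import nltk

nltk.download("punkt", quiet=True)  # tokenizer models (newer NLTK may use "punkt_tab")
tokens = nltk.word_tokenize("Rygtet spredte sig hurtigt på Reddit!", language="danish")
print(tokens)  # e.g. ['Rygtet', 'spredte', 'sig', 'hurtigt', 'på', 'Reddit', '!']
```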
### 6.4 Thesis plan
The thesis project is carried out February through June 2019, with the hand-in deadline on June 3rd at 14.00. This gives us 17 weeks to implement the system and write the thesis. A high-level thesis plan is sketched in table 3, appendix B.
The plan comprises work items for the implementation of the system and thesis sections, divided into the weeks in which we intend to carry them out. The first month the focus will be on data gathering and annotation. Simultaneously, and in the following month, we will build a prototype for an early evaluation of the data. The third month we will tune the system, run experiments and change the parameters accordingly, allowing us to test and evaluate the system. Then we will deploy it, making it publicly available.
The plan also contains a week-by-week overview of the sections we will focus on in the thesis paper. First, the general structure of the thesis as well as data gathering will be in focus. In the following weeks, the annotation, baseline and dataset will be covered. After this, the choice of technology, the data analysis and the prototype system will be discussed. Then the parameter space, optimal parameters and results will be reported after running experiments. Finally, the results will be analysed and discussed before concluding the thesis.
## 7 Conclusion
The objective of this research paper has been to survey stance classification and fake news detection. The task of classifying the opinion of a crowd towards a rumour has been explored through many approaches within the last 10 years, resulting in very useful findings. One particularly interesting use of stance classification is to assess whether some news event is true or false. One challenge in this regard is the social and psychological aspects at work in microblogs, where polarised opinions take effect because of filter bubbles and the echo chamber effect. Another challenge for analysing data from microblogs, such as Twitter, is the model organism problem: the prevalent issue of representation and visibility when continuously using the same platforms in stance classification.
We have further investigated the process of data gathering and annotation, and how imbalanced data can have a significant impact on the results obtained in stance classification. In particular, feature engineering seems to be of great importance: choosing representative information to extract that will perform well on the test data at hand while still being general-purpose oriented.
Different methods for stance classification and fake news detection have been explored, but because of differing data and metrics it has been difficult to directly compare their results. However, one particular approach is very relevant and interesting for the thesis project: the use of an HMM for analysing rumours in microblog data, which achieves very promising results. Furthermore, the use of a decision tree model for stance classification appears to be a good choice. These are also the approaches we propose to use in the thesis project, where we intend to gather a dataset in the Danish language, annotate it, build the classifier/detection system and deploy it. In conclusion, the findings in this research paper will most likely prove very useful as background knowledge for the coming thesis project.
## References
* [Aker et al., 2017] Aker, A., Derczynski, L., and Bontcheva, K. (2017). Simple Open Stance Classification for Rumour Analysis. In Proceedings of Recent Advances in Natural Language Processing, pages 31–39, Varna, Bulgaria.
* [Augenstein et al., 2016] Augenstein, I., Rocktäschel, T., Vlachos, A., and Bontcheva, K. (2016). Stance Detection with Bidirectional Conditional Encoding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 876–885, Austin, Texas.
* [Baird et al., 2017] Baird, S., Sibley, D., and Pan, Y. (2017). Talos Targets Disinformation with Fake News Challenge Victory. https://blog.talosintelligence.com/2017/06/talos-fake-news-challenge.html \- Visited 04-12-2018.
* [Bechmann and Nielbo, 2018] Bechmann, A. and Nielbo, K. L. (2018). Are We Exposed to the Same “News” in the News Feed? pages 900–1002.
* [Castillo et al., 2011] Castillo, C., Medoza, M., and Poblete, B. (2011). Information Credibility on Twitter. In WWW 2011 – Session: Information Credibility, pages 675–684, Hyderabad, India.
* [Chen et al., 2017] Chen, Y.-C., Liu, Z.-Y., and Kao, H.-Y. (2017). IKM at SemEval-2017 Task 8: Convolutional Neural Networks for Stance Detection and Rumor Verification. In Proceedings of the 11th International Workshop on Semantic Evaluations (SemEval-2017), pages 465–469, Vancouver, Canada.
* [Derczynski and Bontcheva, 2014] Derczynski, L. and Bontcheva, K. (2014). PHEME: Veracity in Digital Social Networks.
* [Derczynski et al., 2017] Derczynski, L., Bontcheva, K., Liakata, M., Procter, R., Hoi, G. W. S., and Zubiaga, A. (2017). SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours. In Proceedings of the 11th International Workshop on Semantic Evaluations (SemEval-2017), pages 69–76, Vancouver, Canada.
* [Dungs et al., 2018] Dungs, S., Aker, A., Fuhr, N., and Bontcheva, K. (2018). Can Rumour Stance Alone Predict Veracity? In Proceedings of the 27th International Conference on Computational Linguistics, page 3360–3370, Santa Fe, New Mexico, USA.
* [Enayet and El-Beltagy, 2017] Enayet, O. and El-Beltagy, S. R. (2017). NileTMRG at SemEval-2017 Task 8: Determining Rumour and Veracity Support for Rumours on Twitter. In Proceedings of the 11th International Workshop on Semantic Evaluations (SemEval-2017), pages 470–474, Vancouver, Canada.
* [Giménez et al., 2017] Giménez, M., Baviera, T., Llorca, G., Gámir, J., Calvo, D., Rosso, P., and Rangel, F. (2017). Overview of the 1st Classification of Spanish Election Tweets Task at IberEval 2017. In Second Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2017).
* [Google, 2013] Google (2013). word2vec. https://code.google.com/archive/p/word2vec/ \- Visited 26-11-2018.
* [Hanselowski et al., 2017] Hanselowski, A., PVS, A., Schiller, B., and Caspelherr, F. (2017). System developed by Team Athene in FNC-1. https://github.com/hanselowski/athene_system/blob/master/system_description_athene.pdf \- visited 04-12-2018.
* [Hanselowski et al., 2018] Hanselowski, A., PVS, A., Schiller, B., Caspelherr, F., Chaudhuri, D., Meyer, C. M., and Gurevych, I. (2018). A Retrospective Analysis of the Fake News Challenge Stance Detection Task. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1859–1874, Santa Fe, New Mexico, USA.
* [Kim, 2014] Kim, Y. (2014). Convolutional Neural Networks for Sentence Classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar.
* [Kochkina et al., 2017] Kochkina, E., Liakata, M., and Augenstein, I. (2017). Turing at SemEval-2017 Task 8: Sequential Approach to Rumour Stance Classification with Branch-LSTM. In Proceedings of the 11th International Workshop on Semantic Evaluations (SemEval-2017), pages 475–480, Vancouver, Canada.
* [Mohammad et al., 2016] Mohammad, S. M., Kiritchenko, S., Sobhani, P., Zhu, X., and Cherry, C. (2016). SemEval-2016 Task 6: Detecting Stance in Tweets. In Proceedings of SemEval-2016, pages 31–41, San Diego, California.
* [Pomerleau and Rao, 2017] Pomerleau, D. and Rao, D. (2017). Fake news challenge. http://www.fakenewschallenge.org \- Visited 26-11-2018 and dataset from https://github.com/FakeNewsChallenge/fnc-1 \- Visited 10-12-2018.
* [Procter et al., 2013] Procter, R., Vis, F., and Voss, A. (2013). Reading the riots on Twitter: methodological innovation for the analysis of big data. International Journal of Social Research Methodology, 16(3):197–214.
* [Qazvinian et al., 2011] Qazvinian, V., Rosengren, E., Radev, D. R., and Mei, Q. (2011). Rumor has it: Identifying Misinformation in Microblogs. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1589–1599, Edinburgh, Scotland.
* [Quattrociocchi et al., 2016] Quattrociocchi, W., Scala, A., and Sunstein, C. R. (2016). Echo Chambers on Facebook.
* [Riedel et al., 2017] Riedel, B., Augenstein, I., Spithourakis, G. P., and Riedel, S. (2017). A simple but tough-to-beat baseline for the Fake News Challenge stance detection task. CoRR, abs/1707.03264.
* [Shu et al., 2018] Shu, K., Mahudeswaran, D., Wang, S., Lee, D., and Liu, H. (2018). FakeNewsNet: A Data Repository with News Content, Social Context and Dynamic Information for Studying Fake News on Social Media. arXiv:1809.01286.
* [Shu et al., 2017a] Shu, K., Sliva, A., Wang, S., Tang, J., and Liu, H. (2017a). Fake news detection on social media: A data mining perspective. ACM SIGKDD Explorations Newsletter, 19(1):22–36.
* [Shu et al., 2017b] Shu, K., Wang, S., and Liu, H. (2017b). Exploiting Tri-Relationship for Fake News Detection. arXiv preprint arXiv:1712.07709.
* [Stranisci et al., 2016] Stranisci, M., Bosco, C., Farías, D. I. H., and Patti, V. (2016). Annotating Sentiment and Irony in the Online Italian Political Debate on #labuonascuola.
* [Tolmie et al., 2018] Tolmie, P., Procter, R., Rouncefield, M., Liakata, M., and Zubiaga, A. (2018). Microblog Analysis as a Program of Work. In ACM Transactions on Social Computing, New York, NY, USA.
* [Tufekci, 2014] Tufekci, Z. (2014). Big Questions for Social Media Big Data: Representativeness, Validity and Other Methodological Pitfalls. In Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media, page 505–514. www.aaai.org.
* [Zarrella and Marsh, 2016] Zarrella, G. and Marsh, A. (2016). Transfer Learning for Stance Detection. In Proceedings of SemEval-2016, pages 458–463, San Diego, California.
* [Zubiaga et al., 2016] Zubiaga, A., Liakata, M., Procter, R., Hoi, G. W. S., and Tolmie, P. (2016). Analysing How People Orient to and Spread Rumours in Social Media by Looking at Conversational Threads. PLoS ONE. 11(3).
## Appendix A Results overview
Approach | Acc | $F_{1}$ | Dataset
---|---|---|---
Transfer Learning RNN | - | 67.8 | [Mohammad et al., 2016]
Bidirectional LSTM | - | 58.3 | [Mohammad et al., 2016]
Branch LSTM | - | 43.4 | [Derczynski et al., 2017]
CNN | - | 53.6 | [Derczynski et al., 2017]
SVM | 53.0 | - | [Derczynski et al., 2017]
J48 | 79.02 | - | [Derczynski et al., 2017]
MLP | - | 60.4 | [Pomerleau and Rao, 2017]
MLP | - | 58.3 | [Pomerleau and Rao, 2017]
CNN and Tree ensemble | - | 58.2 | [Pomerleau and Rao, 2017]
Bayes | 94.1 | 92.5 | [Qazvinian et al., 2011]
Hidden Markov Models | - | 80.4 | [Zubiaga et al., 2016]
TriFN | - | 87.0 | [Shu et al., 2017b, BuzzFeed]
TriFN | - | 88.0 | [Shu et al., 2017b, Politifact]
Table 2: Overview of performance results for the different approaches for stance classification (top) and fake news detection (bottom)
## Appendix B Thesis plan
Week | Work item | Thesis | Milestones
---|---|---|---
6 | Data gathering | General structure and data gathering |
7 | Data annotation and prototype | Annotation |
8 | Data annotation and prototype | Baseline |
9 | Data annotation and prototype | Dataset |
10 | Prototype testing | Technology | Working prototype and gathered dataset
11 | Data evaluation | Data analysis and statistics |
12 | Finalize prototype | Describe prototype system |
13 | Finalize prototype | Describe prototype system |
14 | Tune system parameters | Parameter space | Intermediate results and finished prototype
15 | Tune system parameters | Optimal parameters |
16 | Test | Experiment results |
17 | Test | Draft for final version |
18 | Evaluation of results | Result and error analysis | Draft for thesis and results gathered
19 | Evaluation of results | Result and error analysis |
20 | System revision | Discussion |
21 | Conclude | Conclude, abstract |
22 | Deploy system | Proof read |
Table 3: Thesis plan
# Single-Image based unsupervised joint segmentation and denoising
Nadja Gruber (corresponding author: <EMAIL_ADDRESS>), Department of Mathematics, University of Innsbruck, Austria; VASCage-Research Centre on Vascular Ageing and Stroke, Innsbruck, Austria
Johannes Schwab, MRC Laboratory of Molecular Biology, Cambridge, UK
Noémie Debroux, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, France
Nicolas Papadakis, Institut de Mathématiques de Bordeaux, Bordeaux, France
Markus Haltmeier, Department of Mathematics, University of Innsbruck, Austria
###### Abstract
In this work, we develop an unsupervised method for the joint segmentation and denoising of a single image. To this end, we combine the advantages of a variational segmentation method with the power of a self-supervised, single-image based deep learning approach. One major strength of our method lies in the fact that, in contrast to data-driven methods, where huge amounts of labeled samples are necessary, our model can segment an image into multiple meaningful regions without any training database. Further, we introduce a novel energy functional in which denoising and segmentation are coupled in a way that both tasks benefit from each other. The limitations of existing single-image based variational segmentation methods, which are not capable of dealing with high noise or generic texture, are tackled by this specific combination with self-supervised image denoising. We propose a unified optimisation strategy and show that, especially for very noisy images common in microscopy, our proposed joint approach outperforms its sequential counterpart as well as alternative methods focused purely on denoising or segmentation. A further comparison is conducted with a supervised deep learning approach designed for the same application, highlighting the good performance of our approach.
## 1 Introduction
Image denoising and segmentation are fundamental problems in image processing [37, 5, 23]. In many biomedical applications, such as fluorescence microscopy or transmission electron cryomicroscopy, one is interested in the segmentation of objects. However, training data for this task is typically scarce and hard to obtain due to the intrinsic complexity and high noise of such images, as well as the long time required by experts to label them. Therefore, there is a need for unsupervised methods tackling the two imaging tasks in a unified way. In this work, we propose such a framework and apply it to a subset of a popular, publicly available dataset of microscopy images.
The objective of segmentation is to divide a given image into different, meaningful regions, while denoising describes the task of removing noise from a corrupted image. The main difficulty in noise removal is to flatten the unwanted, high-frequency corruption while preserving essential features such as edges. At first glance, denoising and segmentation are two different applications. Nevertheless, the two tasks are closely related, as very similar models can be used to solve both problems [8]. As we demonstrate in this work, denoising and segmentation can benefit a lot from each other. By identifying edges, segmentation guides the denoising process to preserve sharp structures while smoothing the unwanted high-frequency residuals. Conversely, by removing unnecessary and misleading information from images, denoising improves segmentation accuracy.
There exist at least two main kinds of approaches to tackle the two tasks individually. The first class of methods involves the minimisation of an energy functional within graph or variational frameworks. The second type of approach, which recently became popular, uses deep learning techniques, especially convolutional neural networks [29]. In the following, we give a short overview of the most important related variational and deep learning based methods.
### 1.1 Variational Methods
Standard imaging methods for segmentation and denoising are based on an energy
functional that captures the desired characteristics of the output image. The
energy functional typically consists of a data fitting term, and a
regularisation term that encourages properties of the output image, such as
smoothness or sparsity. The energy functional is then minimised using
optimisation techniques such as gradient descent or proximal splitting
algorithms.
##### Denoising
One of the best known variational models for image denoising is the Rudin-Osher-Fatemi (ROF) [36] model. It is closely related to the region-based Mumford-Shah [31] functional, which realises a piecewise smooth approximation of an input image. The ROF model and its extensions reduce noise by penalising the total variation of the image. Such methods thus promote piecewise constant images, with undesirable staircase effects in homogeneous regions, and they are unable to recover image details and patterns with higher variation. In the case of severe input noise, they provide poor denoising results, as image contours are confused with noise [16].
On the other hand, since the resulting image is piecewise constant, it can be used for segmentation by choosing regions of the same value, or by thresholding the image. More details about the link between ROF-based denoising models and segmentation can, for example, be found in [8].
##### Segmentation
In their seminal paper [11], Chan and Vese proposed to solve the Mumford-Shah problem with a level-set reformulation. Let us denote by $\Omega$ a bounded subset of $\mathbb{R}^{2}$ with Lipschitz boundary, on which the given image $f:\Omega\rightarrow[0,1]$ is defined, and by $u\colon\Omega\rightarrow\{0,1\}$ the desired binary mask, separating $f$ into two different areas corresponding to the two mean intensity values $c_{1}$ and $c_{2}$, belonging to the foreground and background region, respectively. In 2D, this framework involves a surface $\phi$ whose zero level represents the contour of interest, and the mask is obtained as $u(x)=H\left(\phi(x)\right),$ where $H(\cdot)$ is the Heaviside function. The proposed energy for binary segmentation is given by
$\displaystyle\mathcal{E}(\phi,c_{1},c_{2})=\int_{\Omega}\lvert\nabla H(\phi(x))\rvert\,dx+\lambda\int_{\Omega}\lvert f(x)-c_{1}\rvert^{2}H(\phi(x))\,dx+\lambda\int_{\Omega}\lvert f(x)-c_{2}\rvert^{2}\big(1-H(\phi(x))\big)\,dx,$ (1)
where $\lambda>0$ is a regularization parameter to tune. Slight modifications of this method have already been used in microscopy [34, 40], as it is well adapted to cell segmentation, where labeled data are scarce. However, the Chan-Vese model computes the intensity averages by using constant intensity information across each region, and is thus a global region-based model. Therefore, it does not deal well with intensity inhomogeneities and the presence of high noise. To mitigate this problem, many local region-based extensions of the piecewise constant active contour model have been proposed [27, 45], but these methods remain sensitive to the considered hand-crafted features and the initial contour. In another line of work, pre-filtering tools are considered to better prepare the image for segmentation [9, 28, 43] in a sequential pipeline: denoise before segmenting. In [7], a three-stage approach is proposed, consisting of smoothing, lifting, and segmentation using thresholding.
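To make energy (1) concrete, the following NumPy sketch evaluates a discretised version of it for a given binary mask $u$ standing in for $H(\phi)$; the finite-difference approximation of the perimeter term is our simplification.

```python
import numpy as np

def chan_vese_energy(f, u, lam=1.0):
    """Discretised version of energy (1); u is a binary mask playing the
    role of H(phi), and |grad H(phi)| is approximated by finite differences."""
    c1 = f[u == 1].mean() if (u == 1).any() else 0.0   # mean inside the mask
    c2 = f[u == 0].mean() if (u == 0).any() else 0.0   # mean outside the mask
    du = np.gradient(u.astype(float))                  # gradients along each axis
    perimeter = np.sqrt(du[0] ** 2 + du[1] ** 2).sum()
    fidelity = lam * (((f - c1) ** 2 * u).sum() + ((f - c2) ** 2 * (1 - u)).sum())
    return perimeter + fidelity

f = np.zeros((64, 64)); f[16:48, 16:48] = 1.0  # toy image with a bright square
u = (f > 0.5).astype(int)                      # a matching mask yields low energy
print(chan_vese_energy(f, u))
```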
In our work, we tackle the aforementioned issues by introducing a generalized Chan-Vese segmentation functional including a robust data-fidelity term that is jointly learned with self-supervised deep denoising techniques.
### 1.2 Deep Learning Methods
We now review some of the most relevant works using neural networks for the
denoising and segmentation of images.
##### Denoising
While variational image denoising techniques focus on explicitly modeling data noise, modern deep learning approaches directly learn how to map the noisy image to its clean counterpart. In the literature, several types of deep learning based denoising methods can be found. In particular, supervised approaches require pairs of noisy images and corresponding clean ground truth data (see [21, 44, 41]). However, such pairs of noisy and clean data are rare in reality and often artificial, which makes these methods impractical for many applications.
To overcome the requirement of clean images, internal statistical methods (i.e. methods where image patches from the same image are used for the noise reduction) have been introduced [46]. In [39], Ulyanov et al. exploit the fact that the internal structure of CNNs inherently resonates with the distribution of natural images, and utilise this observation for image restoration without the need for additional training data. For each single image to restore, this method trains a CNN to reconstruct the considered image; the idea is that early stopping of the training allows recovering a regularised, denoised image. A different strategy is proposed in the Noise2Noise [25] method, where noisy image pairs are mapped to one another. The drawback of this type of method is that it still relies on the availability of such pairs. In practice, even the acquisition of two noisy realisations of the same image content is often difficult [5]. To this end, self-supervised training methods operating on one single noisy image, such as Noise2Void [24], Noise2Self [3], and more recently Noise2Fast [26], are promising alternatives. This self-supervision is accomplished by excluding/masking the center (blind spot) of the receptive field of the network. In this type of training, it is assumed that the noise is pixelwise independent and that the true intensity of a pixel can be predicted from the local image context, with the exception of the blind spots mentioned previously [24]. In this work, we utilise Noise2Fast [26] due to its favourable combination of computational speed and performance. The method itself is explained in more detail in Sections 2 and 4.
##### Segmentation
Among existing deep learning based approaches addressing image segmentation, the U-Net [35], first introduced for microscopy cell segmentation, is one of the most successful network architectures. Next to it, we mention Mask R-CNN [20], a two-stage object detection and segmentation framework extending the popular Faster R-CNN architecture [17]. DeepLab [13] is a family of methods that use atrous convolution (also known as dilated convolution) to capture multi-scale context information. It has been shown to achieve state-of-the-art results on several segmentation benchmarks. Still, even the best existing methods offer plenty of scope for improvements, motivating further research in this field [21, 23, 7]. Their high performance comes at a price: a common trait of the mentioned approaches is their requirement for tremendous amounts of labeled ground truth training data, the creation of which is time-consuming and prone to subjective errors.
As already mentioned, for many applications such as microscopy, the available image data is very noisy and ground truth training data are scarce. It is thus of great interest to tackle both the segmentation and the denoising in a unified manner. In the following, we review variational methods as well as deep learning based approaches tackling segmentation and denoising jointly.
### 1.3 Joint denoising and segmentation methods
In [6], Cai et al. design a model tackling the segmentation of images with a high level of noise or blurriness. To this end, they propose a variational approach coupling an extension of the piecewise constant Mumford-Shah model with an image restoration model, making it more robust in the processing of the given corrupted image $f$. In [14], the authors propose a variational approach for joint reconstruction and segmentation. They derive a model consisting of a total variation regularised reconstruction from undersampled data and a Chan-Vese based segmentation, and show the improvement of joint reconstruction and segmentation over the sequential approach. In another work [33], Ramlau et al. illustrate that the Mumford–Shah level-set method can enhance the quality of reconstructed images and improve the accuracy of segmentation results.
In the context of microscopy data, purely deep learning based approaches dealing with both segmentation and denoising are [32] and [5]. In [32], Prakash et al. demonstrate on various microscopy datasets that the use of self-supervised denoising priors improves the segmentation results, especially when only a few ground truth segmentation masks are available for training. In a similar work [5], the authors propose DenoiSeg, consisting of a U-Net for image segmentation and the self-supervised denoising scheme Noise2Void [24], which are combined and trained with a common loss. The authors demonstrate that the global optimisation outperforms the sequential counterpart, where the image is first denoised and then segmented. This method requires labeled data, and to reach high denoising performance, a large amount of noisy data is also required. Moreover, the loss function is just the sum of the segmentation and denoising losses; there is no coupling between the two tasks in the objective to optimise. Segmentation can therefore benefit from the noise reduction, but not the reverse.
To overcome these limitations, we propose a single-image method with a new joint loss that allows full interaction between segmentation and noise reduction.
### 1.4 Contributions
In this work, we propose a new model for joint image denoising and segmentation that combines advantages of variational models and deep learning. In contrast to the aforementioned state-of-the-art deep learning based methods, which require a large cohort of labeled and clean training images, we obtain comparable results using only one single image. While supervised deep learning based methods are trained with hand-made annotations, which are prone to subjective labelling errors, the proposed combination of a variational segmentation model with a self-supervised denoising CNN does not require any labeled data or a representative dataset, eliminating the need for pre-training.
We combine the denoising and segmentation tasks in such a way that both of them benefit from each other. This is a main difference from existing deep joint approaches such as [5], where the denoising task solely aims at improving segmentation. More specifically, we design two dedicated denoising networks for the foreground and background regions to improve the overall image denoising performance, and use the difference between the denoising performances in the two regions to find the segmentation mask.
Our method can be seen as a flexible generalization of existing Chan-Vese models. Standard Chan-Vese models, as well as joint variational methods for segmentation and denoising [6, 14], rely on the piecewise constant assumption of hand-crafted features. Thus, these methods struggle with intensity inhomogeneities, textures and high noise levels. Further, methods of that kind strongly depend on the initialisation due to the non-convexity of the functional. In this work we propose to learn the structural patterns of different regions in the image without any prior information, by designing a novel energy functional where feature information is captured in a natural way by a denoising network.
The paper is organised as follows. We start with toy examples that illustrate the region-specific denoisers, as the motivation of our method, in Section 1.5. We then formulate the two problems that we aim to solve and review the necessary ingredients that make up our algorithm in Section 2. Our proposed new algorithm is described and analysed in Section 3, and the numerical implementation is presented in Section 4. Section 5 shows the application of the method to microscopy data. Further, we apply our proposed model to natural images and demonstrate that, with manual guidance by the user roughly indicating the two regions, our method can successfully be applied to more complex problems. The paper ends with a conclusion and an outlook on possible future work.
### 1.5 Motivation
In the following, we give an intuition of how the denoising and segmentation tasks we aim to solve are coupled in a way that both of them have a positive influence on each other. We present some toy examples showing how segmentation can successfully guide the denoising process. In a first toy example, we generate an image of size $256\times 256$ that consists of stripe patterns aligned differently in different areas; see Figure 1. This image is further corrupted by manually adding Gaussian noise with a noise level of 50. Here, we use two linear neural networks (two “experts” respectively dedicated to the foreground and the background), each consisting of a single convolutional layer with one filter of size $15\times 15$, which are trained using a slight modification of the Noise2Fast training strategy [26] described in Section 4. More precisely, we restrict the training of the networks to the two regions of the image, respectively, by masking the loss function and restricting the training to boxes of size $30\times 30$, which are depicted in Figure 1. We find that the learned filters are adapted to the structure of the signal in the corresponding region. As a result, the error patterns have higher values in the region where the denoiser has not been trained. This provides the basis for exploiting region-specific denoising for segmentation. The experimental details for this toy example are provided in Section 5.
Figure 1: Visualisation of the idea behind the proposed joint denoising and
segmentation model. Here, we trained two networks consisting of one single
filter using the Noise2Fast [25] strategy and restricted the training to the
two boxes marked in the noisy image $f$. From the two right binary images in
the bottom row, we observe that the two denoising experts perform much better
in the region they have been trained on. The difference images (noisy image
minus Denoised by Expert 1 (resp. 2) can then be used in the segmentation
process, by exploiting the fact that regions with a small denoising error for
the first (resp. second) expert can be assigned as foreground (resp.
background).

Figure 2: Given noisy RGB input image (corrupted with Gaussian
noise, noise level = 0.75), denoised image using Noise2Fast on the whole
image, region-specific experts, and ground truth image. We clearly observe
sharper edges and better recovered color information in the “two experts” example.
The positive effect of segmentation on the denoising process is even more
evident in the natural image shown in Figure 2. We used the network
architecture proposed by the authors in [26], resulting in a non-linear neural
network. First, the denoising network was trained and subsequently applied on
the whole image. Figure 2 also shows the result obtained with two
separately trained neural networks. This strategy yields a better visual
result, which is further confirmed by PSNR values of 19.69 and 19.12,
respectively. The noisy image has been generated by scaling the given clean
RGB input image to $[0,1]$, and adding randomly distributed noise scaled with
the maximum pixel value, ensuring that the noise is proportional to the image
intensity. Here, we used a manually generated mask of the zebra and background
region, and during training, computed the MSE restricted to the two different
regions, respectively.
In the next section, we will fix the notation, formalize the problem, and
describe the main ingredients that are used for our proposed method.
## 2 Problem Description
We now present in detail the background that is relevant for our algorithm.
First we fix our notation, then describe our proposed energy functional for
our unified denoising and segmentation framework.
In the following, we denote by $\Omega\subset\mathbb{R}^{2}$ a bounded set
with Lipschitz boundary, and by $\mathbb{F}$ a space of functions
$f\colon\Omega\rightarrow\mathbb{R}^{d}$, with $d=1$ for grayscale images, and
$d=3$ in the RGB case. We consider a given (noisy) image $f\in\mathbb{F}$,
which we want to jointly denoise, and split up into $C$ different regions.
###### Problem 1 (Image Denoising).
The goal of image denoising is to recover a clean image $g$ from a noisy
observation $f$ which follows an image degradation model $f=g+n$, where $n$ is
the signal degrading noise which we want to remove.
Note that although other degradation types are possible, we assume an additive
model here and specifically we will consider noise with an expected value of
zero.
###### Problem 2 (Image Segmentation).
Image segmentation refers to the process of automatically dividing an image
into meaningful regions. Based on specific pre-defined characteristics of a
given image $f\in\mathbb{F},$ one is interested in splitting the image domain
into two (in the case of binary segmentation) regions $\Sigma,$ and
$\Omega\setminus\Sigma$. In the case of multiclass segmentation, the objective
is to build a partition $\Omega=\bigcup_{i=1}^{C}\Sigma_{i}$ of the image
domain into $C$ disjoint regions (classes), where each of the regions
$\Sigma_{1},\dots,\Sigma_{C-1}$ represents a specific structure of objects in
$f$ and
$\Omega\setminus(\Sigma_{1}\uplus\Sigma_{2}\uplus\dots\uplus\Sigma_{C-1})$
represents the background.
In this work, we address these two problems simultaneously by designing an
energy functional in a way that both tasks benefit from each other. Next, we
discuss the two main components from the literature that form the basis of our
approach.
### 2.1 Convex Chan-Vese Formulation
In [12], Chan et al. propose to relax the binary Chan-Vese segmentation problem
(1) and let the desired solution $u(x)$ take values in $[0,1]$. The resulting
convex energy is
$\displaystyle\min_{0\leq u\leq 1}\int_{\Omega}\lvert\nabla
u\rvert+\lambda\int_{\Omega}\left((c_{1}-f(x))^{2}-(c_{2}-f(x))^{2}\right)u(x)dx.$
(2)
The authors showed that, for any fixed constants $c_{1},c_{2}\in\mathbb{R},$ a
global minimiser for the non-convex problem can be found by carrying out the
minimisation in (2), and setting $\Sigma=\\{x:u(x)>\tau\\}$ for a.e.
$\tau\in[0,1]$.
Though the model is convex, it still suffers from difficulties in segmenting
images where the piecewise constant assumption is not a relevant prior for the
different regions in the image, or if the image is corrupted by severe noise.
These issues are the main problems addressed in the current paper.
### 2.2 Self-supervised single-image based denoising
For a given noisy image $f=g+n$, self-supervised denoising methods are based
on some variant of the self supervised loss
$\displaystyle\mathcal{L}_{f}(\theta)=\int_{\Omega}(\Phi_{\theta}(f)(x)-f(x))^{2}\,dx.$
(3)
Clearly, such a strategy cannot work without restricting the class of
functions $\Phi_{\theta}$, since $\Phi_{\theta}=\mathrm{Id}$ would be a minimiser and
does not yield a denoised image. One strategy to overcome this problem is the
method introduced in [39], where a generative model $\Phi_{\theta}$ is trained by
minimising (3). In this framework, the convolutional structure and early
stopping prevent $\Phi_{\theta}$ from learning the fine image features (noise), so
that a denoised image is obtained. Another strategy is linear filtering with a
restriction on the filter [3, 24]. For example, a filter which is zero at its
central position, and which therefore does not take into account the information of
this pixel but only the surrounding area, can be used to denoise an image by
minimising (3). Another type of method minimises a slightly different
functional. Motivated by the previous example, the authors of [26] introduce
$N$ random binary masks $\mathcal{H}_{k}$ that delete information in the
image. Training is then done using the loss function
$\displaystyle\mathcal{L}_{f}(\theta)=\frac{1}{N}\sum_{k=1}^{N}\int_{\Omega}(\Phi_{\theta}(\mathcal{H}_{k}\cdot
f)(x)-f(x))^{2}\cdot(1-\mathcal{H}_{k}(x))\,dx.$ (4)
This training strategy also prevents the network from learning the identity
operator. Although not directly minimising (3), we use a variant of this
method named Noise2Fast [26]. This variant uses regular masks
$\mathcal{H}_{1},\mathcal{H}_{2},\mathcal{H}_{3},\mathcal{H}_{4}$, which
consist of horizontal and vertical stripes on the even and odd image indices.
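To make the mask construction concrete, the following NumPy snippet builds these four regular masks on a small grid; it is a sketch of the construction described above, with the grid size chosen arbitrarily.

```python
import numpy as np

H, W = 8, 8
rows = np.broadcast_to(np.arange(H)[:, None], (H, W))
cols = np.broadcast_to(np.arange(W)[None, :], (H, W))
H1 = (rows % 2 == 0).astype(float)  # horizontal stripes on even rows
H2 = (rows % 2 == 1).astype(float)  # horizontal stripes on odd rows
H3 = (cols % 2 == 0).astype(float)  # vertical stripes on even columns
H4 = (cols % 2 == 1).astype(float)  # vertical stripes on odd columns
# In the loss (4), the network sees H_k * f and is penalised only on the
# deleted pixels, weighted by (1 - H_k).
```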
## 3 Proposed Joint Denoising and Segmentation
We now introduce our joint model, inspired by the observations described in
Section 1.5. For binary segmentation, we propose to train two denoising
neural networks, each focusing on performing well in one of the regions to be
segmented (cf. Figure 1 and 2). We denote these “experts” by
$\Phi_{\theta^{F}}$ for the foreground, and $\Phi_{\theta^{B}}$ for the
background. These experts are neural networks with parameters $\theta^{F}$ and
$\theta^{B}$, that are trained with a modified denoising strategy. Let us
mention that the model is presented in the case of two regions, but the
extension to multi-class is straightforward, following for instance the
framework in [19, 1, 30].
In Section 3.1, we present the proposed joint energy function designed for the
combined denoising and segmentation process. This energy generalizes the
convex Chan-Vese functional (2) with a data-fidelity term defined from the
self-supervised denoising method of Section 2.2. The optimisation scheme is performed in an
alternating way, as presented in Section 3.2. We finally provide theoretical
convergence results for our algorithm in Section 3.3.
### 3.1 Joint energy functional
In the following, we denote by $BV(\Omega)$ the space of all integrable
functions $u:\Omega\rightarrow\mathbb{R}$ with bounded total variation
$|u|_{\text{TV}}$, and consider the admissible set
$\displaystyle\mathbb{A}\coloneqq\\{u\in BV(\Omega)\mid 0\leq u\leq 1\\}.$
Further, let $i_{\mathbb{A}}:BV(\Omega)\rightarrow[0,\infty]$ denote the
associated indicator function, which is 0 inside $\mathbb{A}$, and $\infty$
elsewhere. The parameters of the two denoising experts, $\Phi_{\theta^{F}}$
and $\Phi_{\theta^{B}}$ are denoted by
$\boldsymbol{\theta}=(\theta^{F},\theta^{B})\in\mathbb{R}^{L\times L},$ and
are respectively dedicated to the foreground and the background. These two
experts are neural networks trained using the strategy proposed in [26]. We
consider the joint model
$\begin{split}\mathcal{E}_{f,\lambda}(u,\boldsymbol{\theta})=i_{\mathbb{A}}(u)+\lambda\lvert
u\rvert_{\text{TV}}&+\int_{\Omega}\left(f(x)-\Phi_{\theta^{F}}(f)(x)\right)^{2}u(x)dx\\\
&+\int_{\Omega}\left(f(x)-\Phi_{\theta^{B}}(f)(x)\right)^{2}(1-u(x))dx\,.\end{split}$
(5)
Note that for fixed network parameters $\boldsymbol{\theta}$, the proposed
energy is convex in $u$. Moreover, we can threshold the result and still have
a global optimum (see Theorem 3.2). Further, we point out that in the case where
the Noise2Fast training strategy is used, the energy functional for the
denoising step is not exactly the functional (5).
Figure 3 illustrates the idea behind the proposed segmentation model. For
grayscale images, one can initialise the algorithm by thresholding image
values. In more complex cases, a user can be asked to provide representative
boxes for the background and foreground regions. Then, alternately, the
denoising experts are trained on subsets of the two different segmented
regions and the segmentations are updated. In practice, the data-fidelity term
in (5) is updated given the denoising performance of the two experts
$\Phi_{\theta^{F}}$ and $\Phi_{\theta^{B}}$. For fixed network parameters
$\boldsymbol{\theta}$, the energy (5) is minimised. Repeating this procedure
until a convergence criterion is met, we obtain the segmentation mask $u$, as
well as the denoised image $g\approx
u\odot\Phi_{\theta^{F}}(f)+(1-u)\odot\Phi_{\theta^{B}}(f)$.
Figure 3: The first image shows the given grayscale input image $f$, and user
defined boxes representing rough foreground and background regions. The third
image highlights pixels where the foreground expert denoiser performs better
than the background one, while the last image is the segmentation result
obtained by minimising the proposed energy (5).
###### Example 3.
Here, we give examples of neural networks that act as denoisers and relate them to
existing approaches.
* •
Constant Background: In the case where the background is assumed constant, one
could simply set $\Phi_{\theta^{B}}(f)=\theta^{B}\mathbbm{1}$, which
corresponds to estimating a scalar value $\theta^{B}$, namely the mean value of
the given image inside the corresponding region, as in the original Chan and
Vese model.
* •
Linear filter: In this case, the network is linear with respect to the network
parameters $\theta^{B}$, more precisely,
$\Phi_{\theta^{B}}(f)=\omega_{\theta^{B}}\ast f$, leading to a bi-convex
energy functional (5). In our toy example in Figure 1, we have applied such a
linear network consisting of one single filter of kernel size $15\times 15$.
* •
Filtering of the data fidelity term: When one of the regions is assumed to be
constant and high noise levels are present, mean filtering improves the
results. The data fidelity terms of energy (5) can then be replaced by
$\int_{\Omega}\left[K_{\sigma}\ast\left(f-\Phi_{\theta^{F}}(f)\right)\right]^{2}u$
and
$\int_{\Omega}\left[K_{\sigma}\ast\left(f-\Phi_{\theta^{B}}(f)\right)\right]^{2}(1-u)$,
respectively, where $K_{\sigma}$ is a mean filter with kernel size $\sigma$ (see
the sketch after this list). A
similar approach was taken in [27], where a more robust version of the
Chan-Vese model [11] has been proposed by introducing a Gaussian convolution in
the data fitting terms, in order to make the method robust to non-homogeneous
regions.
* •
Generic CNN: Any typical state-of-the-art denoising neural network (Deep image
prior [39], Noise2Void [24]) can be used in our framework. Note that in this
case the bi-convexity of energy (5) is not ensured anymore.
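As announced in the third item of the list, the filtered data-fidelity weights can be computed with a mean filter. The sketch below uses scipy.ndimage.uniform_filter as the mean filter $K_{\sigma}$; the expert outputs are passed in as precomputed arrays (placeholders here).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def filtered_fidelity(f, out_fg, out_bg, sigma=3):
    # Smooth the residuals with a mean filter of size sigma before squaring;
    # the results enter energy (5) as the integrands against u and (1 - u).
    w_fg = uniform_filter(f - out_fg, size=sigma) ** 2
    w_bg = uniform_filter(f - out_bg, size=sigma) ** 2
    return w_fg, w_bg
```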
In the next paragraph, we discuss in more detail the joint alternating
optimisation procedure we propose to minimise energy (5).
### 3.2 Joint optimisation
We propose to iteratively optimise problem (5) with an alternating procedure
[15]. In case the denoising step does not exactly minimise energy (5), we
actually alternate between minimising two slightly different functionals. For
the sake of readability, this is not indicated in the notation. We start with
the initialisation of the segmentation mask $u.$ This is either achieved by
thresholding for grayscale images, or, as shown in Figure 3, by manually choosing
boxes representing the different regions to segment in the image. Then, based
on the initial guess, the denoising experts $\Phi_{\theta^{F}}$ and
$\Phi_{\theta^{B}}$ are trained on the given initial masks. To this end, we
use the ADAM optimiser [22] until convergence. As a next step, for fixed
network parameters $\boldsymbol{\theta}$, we update the segmentation mask $u$.
For fixed $\boldsymbol{\theta},$ the energy functional (5) is convex, and all
the necessary assumptions for the application of the primal dual algorithm
[10] are fulfilled. A more detailed description on the considered discrete
schemes is provided in Section 4 (see Algorithm 2). These alternate steps are
repeated as long as the decrease of energy (5) is greater than $p=15$ percent of the previous decrease,
which we empirically found to give a good compromise between computation speed
and quality of the results.
The overall joint optimisation scheme is presented in Algorithm 1. A sketch of
the alternating procedure is provided in Figure 4.
Algorithm 1 Alternating optimisation scheme.
Initialise $u^{0}\leftarrow\boldsymbol{1}_{\\{f>\epsilon\\}}$ and
$\boldsymbol{\theta}^{0}=\boldsymbol{\theta}_{0}$
while
$\mathcal{E}_{f,\lambda}(u^{k-1},\boldsymbol{\theta}^{k-1})-\mathcal{E}_{f,\lambda}(u^{k},\boldsymbol{\theta}^{k})\geq
p\cdot\left(\mathcal{E}_{f,\lambda}(u^{k-2},\boldsymbol{\theta}^{k-2})-\mathcal{E}_{f,\lambda}(u^{k-1},\boldsymbol{\theta}^{k-1})\right)$
do
$\boldsymbol{\theta}^{k+1}\leftarrow\operatorname*{argmin}_{\boldsymbol{\theta}}\mathcal{E}_{f,\lambda}(u^{k},\boldsymbol{\theta})$
{with a few ADAM iterations for $\theta^{F}$ and a Chan and Vese update for the
background if $\Phi_{\theta^{B}}(f)=\theta^{B}\mathbbm{1}$}
$u^{k+1}\leftarrow\operatorname*{argmin}_{u}\mathcal{E}_{f,\lambda}(u,\boldsymbol{\theta}^{k+1})$
{with Algorithm 2}
end while
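A schematic Python rendering of Algorithm 1 is given below; `energy`, `train_experts` (the inner ADAM loop) and `segment` (Algorithm 2) are placeholder callables, and the stopping rule implements the relative-decrease criterion with $p=0.15$ described above.

```python
def alternate(f, u0, theta0, energy, train_experts, segment, p=0.15, max_iter=20):
    # Sketch of Algorithm 1: alternate denoiser training and segmentation until
    # the energy decrease falls below p times the previous decrease.
    u, theta = u0, theta0
    energy_prev, drop_prev = energy(u, theta), None
    for _ in range(max_iter):
        theta = train_experts(f, u, theta)  # theta^{k+1} from u^k (ADAM steps)
        u = segment(f, theta, u)            # u^{k+1} from theta^{k+1} (Algorithm 2)
        energy_cur = energy(u, theta)
        drop = energy_prev - energy_cur
        if drop_prev is not None and drop < p * drop_prev:
            break
        energy_prev, drop_prev = energy_cur, drop
    return u, theta
```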
In the following paragraph, we will discuss the convergence property of
Algorithm 1.
Figure 4: Alternating optimisation scheme. As a first step, regions are
provided for the training of the two denoising experts using the Noise2Fast strategy.
These regions can be obtained by thresholding image values or by manually
choosing boxes. The differences between the given noisy image $f$ and network
outputs $\Phi_{\theta^{F}}(f)$ and $\Phi_{\theta^{B}}(f)$, are used in the
subsequent segmentation step, minimising
$\mathcal{E}_{f,\lambda}(\cdot,\boldsymbol{\theta})$ with Algorithm 2.
### 3.3 Theoretical Results
In this section, we discuss some theoretical results of the proposed energy
functional and the presented alternating algorithm. Note that these results
hold if the denoiser is trained by minimising (3).
###### Remark 4 (Monotonicity of alternating minimisation).
The proposed energy functional (5) is continuous and bounded from below.
By construction of the alternating minimisation, for each $k\geq 0$, the following relations hold
$\displaystyle\mathcal{E}_{f,\lambda}(u^{(k)},\boldsymbol{\theta}^{(k+1)})$
$\displaystyle\leq\mathcal{E}_{f,\lambda}(u^{(k-1)},\boldsymbol{\theta}^{(k)})$
$\displaystyle\mathcal{E}_{f,\lambda}(u^{(k+1)},\boldsymbol{\theta}^{(k)})$
$\displaystyle\leq\mathcal{E}_{f,\lambda}(u^{(k)},\boldsymbol{\theta}^{(k-1)}).$
Hence, the generated sequence
$\\{\mathcal{E}_{f,\lambda}(u^{(k)},\boldsymbol{\theta}^{(k)})\\}_{k\in\mathbb{N}}$
converges monotonically.
###### Theorem 3.1 (Convergence of Algorithm 1).
Assume that the level set
$S^{0}=\\{(u,\boldsymbol{\theta}):\mathcal{E}_{f,\lambda}(u,\boldsymbol{\theta})\leq\mathcal{E}_{f,\lambda}(u^{0},\boldsymbol{\theta}^{0})\\}$
of $\mathcal{E}_{f,\lambda}$ defined in (5) is compact and that
$\mathcal{E}_{f,\lambda}$ is continuous on $S^{0}$. Then, the sequence
$\\{(u^{k},\boldsymbol{\theta}^{k})\\}$ generated by Algorithm 1 is defined and
bounded. Moreover, every cluster point of
$\\{(u^{k},\boldsymbol{\theta}^{k})\\}$ is a stationary point of
$\mathcal{E}_{f,\lambda}$.
###### Proof.
This is a direct application of Theorem 4.1 in [38], using that (i) we only
alternate between two variables $u$ and $\boldsymbol{\theta}$, (ii) the
coupling between $u$ and $\boldsymbol{\theta}$ in $\mathcal{E}_{f,\lambda}$ is
smooth. ∎
###### Remark 5.
The energy (5), which is convex for fixed network parameters
$\boldsymbol{\theta}=(\theta^{F},\theta^{B})$ is a relaxation of the fully
non-convex problem
$\displaystyle\mathcal{E}(\Sigma,\boldsymbol{\theta})=\lambda\,\text{Per}(\Sigma,\Omega)+\int_{\Sigma}(f-\Phi_{\theta^{F}}(f))^{2}dx+\int_{\Omega\setminus\Sigma}(f-\Phi_{\theta^{B}}(f))^{2}dx,$
(6)
where $\Sigma\subset\mathbb{R}^{2}$, and $\Omega\setminus\Sigma$ are the two
regions of the given image $f(x),$ and Per$(\Sigma,\Omega)$ is the perimeter
of the interface separating these two regions.
###### Theorem 3.2 (Thresholding).
For any fixed $\boldsymbol{\theta}$, a global minimiser for the non-convex
problem
$\min_{\Sigma}\mathcal{E}(\Sigma,\boldsymbol{\theta})$ in
(6) can be found by carrying out the minimisation
$\min_{u}\mathcal{E}_{f,\lambda}(u,\boldsymbol{\theta})$, and then setting
$\Sigma=\\{x:u(x)\geq\tau\\}$ for a.e. $\tau\in[0,1]$.
###### Proof.
The proof is similar to the one in [12](Theorem 2). The only difference is in
the data fidelity term, where instead of the fixed constants $c_{1}$, and
$c_{2}$, we look at fixed network outputs $\Phi_{\theta^{F}}(f)$, and
$\Phi_{\theta^{B}}(f)$. As the problem is one-homogeneous in $u$, thanks to
the co-area formula, we show that
$\mathcal{E}_{f,\lambda}(u,\boldsymbol{\theta})=\int_{0}^{1}\mathcal{E}_{f,\lambda}(\mathbbm{1}_{u>\tau},\boldsymbol{\theta})\,d\tau$,
so we can thus conclude that if $u$ is a minimiser of the energy (5) for fixed
$\boldsymbol{\theta}$, then for a.e. $\tau\in[0,1]$ the set $\Sigma(\tau)$ has
to be a minimiser of (6). ∎
## 4 Numerical Implementation
In the following, we describe the numerical implementation of the proposed
method.
### 4.1 Segmentation Step
We can rewrite our segmentation sub-problem in the form
$\displaystyle\min_{u\in\mathbb{X}}\mathcal{F}(K(u))+\mathcal{G}(u),$ (7)
where $K(u)\coloneqq\nabla u$, $\mathcal{F}(v)\coloneqq\|v\|_{2,1}$ and
$\mathcal{G}(u)\coloneqq
i_{\mathbb{A}}(u)+\int_{\Omega}(f-\Phi_{\theta^{F}}(f))^{2}u+\int_{\Omega}(f-\Phi_{\theta^{B}}(f))^{2}(1-u)$.
It holds that $K:\mathbb{X}\rightarrow\mathbb{Y}$ is a linear mapping between
Hilbert spaces $\mathbb{X},\mathbb{Y}$ and
$\mathcal{F}:\mathbb{Y}\rightarrow[0,\infty]$ and
$\mathcal{G}:\mathbb{X}\rightarrow[0,\infty]$ are convex and lower semi-
continuous functionals, i.e. all the necessary assumptions for the application
of the primal dual algorithm framework proposed in [10] are fulfilled.
#### 4.1.1 Discretisation
In the following, we fix the notation which we use throughout this section. We
work with discrete images in $\mathbb{H}\coloneqq\mathbb{R}^{N_{1}\times
N_{2}}$, denoting a finite dimensional Hilbert space equipped with an inner
product $\langle u,v\rangle=\sum_{i}u[i]v[i]$ for $u,v\in\mathbb{H}$ with
$i=(i_{1},i_{2})\in\\{1,\dots,N_{1}\\}\times\\{1,\dots,N_{2}\\}.$ The discrete
gradient
$\nabla=(\nabla_{1},\nabla_{2}):\mathbb{H}\rightarrow\mathbb{H}\times\mathbb{H}$
is defined by forward differences with Neumann boundary conditions,
$\displaystyle(\nabla_{1}u)[i]$
$\displaystyle\coloneqq\begin{cases}(u[i_{1}+1,i_{2}]-u[i_{1},i_{2}])/h&\text{if
}i_{1}<N_{1}\\\ 0&\text{if }i_{1}=N_{1}\end{cases}$
$\displaystyle(\nabla_{2}u)[i]$
$\displaystyle\coloneqq\begin{cases}(u[i_{1},i_{2}+1]-u[i_{1},i_{2}])/h&\text{if
}i_{2}<N_{2}\\\ 0&\text{if }i_{2}=N_{2}\,.\end{cases}$
Its adjoint is given by
$\nabla^{*}(v_{1},v_{2})=\nabla^{*}_{1}v_{1}+\nabla_{2}^{*}v_{2}=:-\operatorname{div}(v_{1},v_{2})$
where $\operatorname{div}\colon\mathbb{H}\times\mathbb{H}\to\mathbb{H}$ is the
discrete divergence operator and for
$(v_{1},v_{2})\in\mathbb{H}\times\mathbb{H}$ we have
$\displaystyle(\nabla^{*}_{1}v_{1})[i]$
$\displaystyle=\begin{cases}-(v_{1}[i_{1},i_{2}]-v_{1}[i_{1}-1,i_{2}])/h&\text{if
}1<i_{1}<N_{1}\\\ -v_{1}[1,i_{2}]/h&\text{if }i_{1}=1\\\
\phantom{-}v_{1}[N_{1}-1,i_{2}]/h&\text{if }i_{1}=N_{1}\end{cases}$
$\displaystyle(\nabla^{*}_{2}v_{2})[i]$
$\displaystyle=\begin{cases}-(v_{2}[i_{1},i_{2}]-v_{2}[i_{1},i_{2}-1])/h&\text{if
}1<i_{2}<N_{2}\\\ -v_{2}[i_{1},1]/h&\text{if }i_{2}=1\\\
\phantom{-}v_{2}[i_{1},N_{2}-1]/h&\text{if }i_{2}=N_{2}\,.\end{cases}$
The discrete, isotropic TV semi-norm of an image $u\in\mathbb{H}$ is defined
as
$\lVert\nabla
u\rVert_{2,1}\coloneqq\sum_{i}\sqrt{(\nabla_{1}u[i])^{2}+(\nabla_{2}u[i])^{2}}\,.$
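These discrete operators translate directly into NumPy. The sketch below implements the forward differences with Neumann boundary conditions, their adjoint, and a randomised check of the adjoint identity $\langle\nabla u,v\rangle=\langle u,\nabla^{*}v\rangle$; array shapes are illustrative.

```python
import numpy as np

def grad(u, h=1.0):
    # Forward differences with Neumann boundary conditions, as defined above.
    g1 = np.zeros_like(u); g2 = np.zeros_like(u)
    g1[:-1, :] = (u[1:, :] - u[:-1, :]) / h
    g2[:, :-1] = (u[:, 1:] - u[:, :-1]) / h
    return g1, g2

def grad_adj(v1, v2, h=1.0):
    # Adjoint of grad (negative divergence), matching the boundary cases above.
    a = np.zeros_like(v1); b = np.zeros_like(v2)
    a[0, :] = -v1[0, :] / h
    a[1:-1, :] = -(v1[1:-1, :] - v1[:-2, :]) / h
    a[-1, :] = v1[-2, :] / h
    b[:, 0] = -v2[:, 0] / h
    b[:, 1:-1] = -(v2[:, 1:-1] - v2[:, :-2]) / h
    b[:, -1] = v2[:, -2] / h
    return a + b

# Quick adjoint check: <grad u, v> should equal <u, grad_adj v>.
u = np.random.rand(5, 7); v1, v2 = np.random.rand(5, 7), np.random.rand(5, 7)
g1, g2 = grad(u)
assert np.isclose((g1 * v1 + g2 * v2).sum(), (u * grad_adj(v1, v2)).sum())
```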
The discrete versions of the admissible set and the corresponding indicator
function, are $\mathbb{A}=\\{u\in\mathbb{H}|0\leq u\leq 1\\}$, and
$i_{\mathbb{A}}.$ The discretisation of the data fidelity term of energy (5)
is written as $\sum_{i}{D}(u[i],\boldsymbol{\theta})$, where
$\displaystyle{D}(u,\boldsymbol{\theta})$ $\displaystyle\coloneqq
d(u,\theta^{F})+d(1-u,\theta^{B})$ (8) $\displaystyle d(u,\theta^{F})$
$\displaystyle\coloneqq u\cdot\left(\Phi_{\theta^{F}}(f)-f\right)^{2}$
$\displaystyle d(1-u,\theta^{B})$
$\displaystyle\coloneqq(1-u)\cdot\left(\Phi_{\theta^{B}}(f)-f\right)^{2}.$
Using these notations, the discrete version of energy (5) reads
$\displaystyle\mathcal{E}_{f,\lambda}^{CV}(u,\boldsymbol{\theta})=i_{\mathbb{A}}(u)+\lambda\lVert\nabla
u\rVert_{2,1}+\sum_{i}{D}(u[i],\boldsymbol{\theta})\,.$ (9)
The optimisation problem (9) is in general non-convex and challenging to
solve. We will use alternating minimisation, where we employ
for the update step of the segmentation mask $u$ the Chambolle-Pock algorithm
[10], while for updating the network parameters
$\boldsymbol{\theta}=(\theta^{F},\theta^{B})$, we apply ADAM optimisation
[22].
#### 4.1.2 Segmentation algorithm
We here detail the minimisation of the functional (5) with respect to $u$ for
fixed $\boldsymbol{\theta}$, which corresponds to solving problem (7) with
$\displaystyle\mathbb{X}=\mathbb{H},\qquad\mathbb{Y}=\mathbb{H}^{2},\qquad\mathcal{F}(v)=\lambda\lVert v\rVert_{2,1},\qquad K=\nabla,\qquad\mathcal{G}(u)=i_{\mathbb{A}}(u)+\sum_{i}{D}(u[i],\boldsymbol{\theta}).$
As the operator $K$ is linear, and the functionals $\mathcal{F}$ and
$\mathcal{G}$ are convex and lower semi-continuous, all requirements for the
application of the primal dual algorithm proposed in [10] are fulfilled.
To implement this algorithm, it is required to compute the Fenchel conjugate
$\mathcal{F}^{*}$ of $\mathcal{F}$, as well as the proximal mappings of
$\mathcal{F}^{*}$ and $\mathcal{G}$. We start with the derivation of the
Fenchel conjugate of $\mathcal{F}$. For $\lVert\cdot\rVert_{2,1}$, the conjugate
is the indicator function of the unit ball of the dual norm,
i.e. $\lVert\cdot\rVert^{\ast}_{2,1}=i_{2,\infty}$. Hence we have
$\mathcal{F}^{*}(v)=i_{2,\infty}(v/\lambda)$, the indicator function of
$\\{v|\lVert\boldsymbol{v}\rVert_{2,\infty}\leq 1\\}\subset(\mathbb{H})^{2}$.
As a next step, we compute the proximal operators of $\mathcal{F}^{\ast}$ and
$\mathcal{G}$. Recall that the proximal operator of the indicator function
$i_{C}$ of some set $C$ is given by the orthogonal projection on $C$. The
projection $P_{2,\infty}\colon\mathbb{H}^{2}\rightarrow\mathbb{H}^{2}$ onto the
unit ball in the $(2,\infty)$-norm is thus obtained by
$\displaystyle(P_{2,\infty}(\boldsymbol{v}))_{k}[i]=\frac{v_{k}[i]}{\max\\{1,(v_{1}[i]^{2}+v_{2}[i]^{2})^{1/2}\\}},\quad k=1,2.$
Thus, the proximal operator of $\mathcal{F}^{\ast}$ results in
$\displaystyle\operatorname{prox}_{\mathcal{F}^{\ast}}(\boldsymbol{v})=P_{2,\infty,\lambda}(\boldsymbol{v})\coloneqq
P_{2,\infty}(\boldsymbol{v}/\lambda).$
Further, by introducing
$\tilde{f}=\left(f-\Phi_{\theta^{F}}(f)\right)^{2}-\left(f-\Phi_{\theta^{B}}(f)\right)^{2}$,
one can finally show that
$\operatorname{prox}_{\tau\mathcal{G}}(u_{0}[i])=P_{\mathbb{A}}\left(u_{0}[i]-\tau\tilde{f}[i]\right).$
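In NumPy, the two proximal maps read as follows. We realise the dual prox as the pixelwise projection onto the $(2,\infty)$-ball of radius $\lambda$, which is the standard prox of $\mathcal{F}^{\ast}$ for $\mathcal{F}=\lambda\lVert\cdot\rVert_{2,1}$; the scaling convention relative to the $P_{2,\infty,\lambda}$ notation above is our assumption.

```python
import numpy as np

def prox_F_star(v1, v2, lam):
    # Pixelwise projection onto the (2, inf)-ball of radius lam.
    scale = np.maximum(1.0, np.sqrt(v1 ** 2 + v2 ** 2) / lam)
    return v1 / scale, v2 / scale

def prox_G(u, f_tilde, tau):
    # Gradient step on the linear data term, then projection onto
    # A = {0 <= u <= 1}; np.clip is exactly the orthogonal projection P_A.
    return np.clip(u - tau * f_tilde, 0.0, 1.0)
```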
The overall primal dual Algorithm 2 is summarised below.
Algorithm 2 Segmentation algorithm based on the minimisation of the energy
functional (5) with respect to $u$ for a fixed $\theta$.
Input: noisy input image $f\in\mathbb{H}$
initialisation: $v^{0}\in\mathbb{H}$, $u^{0},\bar{u}^{0}\in\mathbb{H}$
while $\lVert u^{n+1}-u^{n}\rVert>\epsilon$ do
$v^{n+1}\leftarrow
P_{2,\infty,\lambda}(v^{n}+\sigma\boldsymbol{\nabla}\bar{u}^{n})$
$u^{n+1}\leftarrow
P_{\mathbb{A}}(u^{n}-\tau\boldsymbol{\nabla}^{\intercal}v^{n+1}-\tau\tilde{f})$
$\bar{u}^{n+1}\leftarrow u^{n+1}+\eta(u^{n+1}-u^{n})$.
end while
return $u^{n+1}$
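Combining the pieces above, Algorithm 2 can be sketched as follows; `grad`, `grad_adj`, `prox_F_star` and `prox_G` are the routines from the previous sketches, and the step sizes $\tau=0.25$, $\sigma=0.5$ are illustrative choices satisfying $\tau\sigma\lVert\nabla\rVert^{2}\leq 1$ for $h=1$.

```python
import numpy as np

def segment(f_tilde, lam, tau=0.25, sigma=0.5, eta=1.0, eps=1e-4, max_iter=500):
    # Primal-dual iteration of Algorithm 2 for fixed network parameters.
    u = np.zeros_like(f_tilde)
    u_bar = u.copy()
    v1 = np.zeros_like(f_tilde)
    v2 = np.zeros_like(f_tilde)
    for _ in range(max_iter):
        g1, g2 = grad(u_bar)
        v1, v2 = prox_F_star(v1 + sigma * g1, v2 + sigma * g2, lam)
        u_new = prox_G(u - tau * grad_adj(v1, v2), f_tilde, tau)
        converged = np.linalg.norm(u_new - u) <= eps
        u_bar = u_new + eta * (u_new - u)   # over-relaxation step
        u = u_new
        if converged:
            break
    return u
```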
### 4.2 Acceleration with a mask prior
In our experiments, we observed that one of the denoisers (the one that is
trained on the more complex region) tends to improve on the other region as well.
Once this progress has started, it is quite difficult to stop the segmentation
mask from expanding and converging to an undesired minimum, namely the constant
segmentation result. Inspired by the work in [2, 42], we now propose to
overcome the problem of finding an appropriate stopping criterion by adding a
fidelity term, ensuring that the updated segmentation mask $u^{k}$ does not
deviate too far from its initial guess. Assume that we have a reference mask
$u_{R}^{0}$, then, in the continuous setting, we consider the successive
problems:
$\displaystyle\mathcal{E}^{k}_{f,\lambda}(u,\theta)\coloneqq
i_{\mathbb{A}}(u)$ $\displaystyle+\lambda\lvert
u\rvert_{\text{TV}}+\int_{\Omega}\left(f-\Phi_{\theta^{F}}(f)\right)^{2}u$
(10)
$\displaystyle+\int_{\Omega}\left(f-\Phi_{\theta^{B}}(f)\right)^{2}(1-u)+\frac{\mu}{2}||u-u_{R}^{k}||^{2}\,.$
We can therefore optimise problem (10) iteratively with the alternating
procedure presented in Algorithm 3. Note that in this case, as the global
energy is changed at each iteration, we no longer have a convergence guarantee
for the alternating procedure.
Algorithm 3 Alternating optimisation scheme with acceleration.
Initialise $u^{0}\leftarrow f$ and $\theta^{0}=\theta_{0}$ and $u_{R}^{0}$
for $k=1,\dots,N$ do
$\boldsymbol{\theta}^{k+1}\leftarrow\operatorname*{argmin}_{\boldsymbol{\theta}}\mathcal{E}^{k}_{f,\lambda}(u^{k},\boldsymbol{\theta})$
{with a few ADAM iterations for $\theta^{F}$ and a Chan and Vese update for the
background if $\Phi_{\theta^{B}}(f)=\theta^{B}\mathbbm{1}$}
$u^{k+1}\leftarrow\operatorname*{argmin}_{u}\mathcal{E}^{k}_{f,\lambda}(u,\boldsymbol{\theta}^{k+1})$
{with Algorithm 4}
$u_{R}^{k+1}=u^{k+1}$ (update reference mask)
end for
To solve the segmentation problem, we reformulate the optimisation of problem
(10) for fixed $\boldsymbol{\theta}$ as
$\min_{u}\mathcal{F}(K(u))+\mathcal{G}^{k}(u)$, with
$\mathcal{G}^{k}(u)=i_{\mathbb{A}}(u)+\frac{\mu}{2}||u-u_{R}^{k}||^{2}+\int_{\Omega}\left(f(x)-\Phi_{\theta^{F}}(f)(x)\right)^{2}u(x)dx+\int_{\Omega}\left(f(x)-\Phi_{\theta^{B}}(f)(x)\right)^{2}(1-u(x))dx.$
Recalling that
$\tilde{f}=\left(f(x)-\Phi_{\theta^{F}}(f)(x)\right)^{2}-\left(f(x)-\Phi_{\theta^{B}}(f)(x)\right)^{2}$,
we can show that:
$\operatorname{prox}_{\tau\mathcal{G}^{k}}(u^{0}[i])=P_{\mathbb{A}}\left(\frac{u^{0}[i]+\tau\mu
u_{R}^{k}[i]-\tau\tilde{f}[i]}{1+\tau\mu}\right).$
Observing that $\mathcal{G}^{k}$ is $\mu$-strongly convex in $u$, we consider
the accelerated primal dual algorithm of [10] to solve problem (10).
Algorithm 4 Segmentation algorithm based on the minimisation of the energy
functional (10) with respect to $u$ for a fixed $\theta$.
Input: noisy input image $f\in\mathbb{H}$
Parameters: $\lambda,\sigma,\tau,\theta$
Initialisation: $v^{0}\in\mathbb{H}$, $u^{0},\bar{u}^{0}\in\mathbb{H}$
while $\lVert u^{n+1}-u^{n}\rVert>\epsilon$ do
$v^{n+1}\leftarrow
P_{2,\infty,\lambda}(v^{n}+\sigma\boldsymbol{\nabla}\bar{u}^{n})$
$u^{n+1}\leftarrow
P_{\mathbb{A}}\left((u^{n}-\tau\boldsymbol{\nabla}^{\intercal}v^{n+1}+\tau\mu
u_{R}^{k}-\tau\tilde{f})/(1+\tau\mu)\right)$
$\eta=\frac{1}{1+2\mu\tau},\tau=\tau\eta,\sigma=\frac{\sigma}{\eta}$
$\bar{u}^{n+1}\leftarrow u^{n+1}+\eta(u^{n+1}-u^{n})$.
end while
return $u^{n+1}$
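The step-size update of Algorithm 4 is transcribed below; note that the classical accelerated scheme of Chambolle and Pock [10] uses $\eta=1/\sqrt{1+2\mu\tau}$, whereas we follow the update as stated in the algorithm.

```python
def update_steps(tau, sigma, mu):
    # Acceleration step as written in Algorithm 4; the original scheme in [10]
    # uses eta = 1 / (1 + 2 * mu * tau) ** 0.5 instead.
    eta = 1.0 / (1.0 + 2.0 * mu * tau)
    return eta, tau * eta, sigma / eta
```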
As we have discussed the numerical implementation of the segmentation step, we
now present the discrete setting and implementation of the denoising step.
### 4.3 Denoising step using Noise2Fast strategy
We here detail the denoising of a discretized 2D image
$f\in\mathbb{R}^{m\times n}$ composed of a clean signal
$g\in\mathbb{R}^{m\times n}$ and noise $n\in\mathbb{R}^{m\times n}$, i.e.
$\displaystyle f=g+n.$
For completeness, we introduce $u_{B}^{k},$ which for $k=0$ corresponds to the
initialisation of the background region. These masks can either be obtained by
thresholding the image, or can be given in form of user-provided boxes. For
the next update steps, i.e. $k=1,\dots,N$ it holds that $u_{B}^{k}=1-u^{k}.$
Using these notations, for fixed $\boldsymbol{u}^{k}=(u^{k},u_{B}^{k})$, in
the $k$-th denoising step of our alternating procedure, the energy functional
(5), reduces to
$\displaystyle\min_{\boldsymbol{\theta}}\sum_{i}{D}(\boldsymbol{u}^{k}[i],\boldsymbol{\theta})=\min_{\boldsymbol{\theta}}\sum_{i}\left(\Phi_{\theta^{F}}(f)[i]-f[i]\right)^{2}\cdot
u^{k}[i]+\left(\Phi_{\theta^{B}}(f)[i]-f[i]\right)^{2}\cdot
u_{B}^{k}[i],$ (11)
where $\Phi_{\theta^{F}}$ and $\Phi_{\theta^{B}}$ are (deep) experts
respectively dedicated to the denoising of the foreground and background.
We build our denoisers on top of the Noise2Fast method introduced by
Lequyer et al. in [26]. In this paper, the authors propose a fast single-image
blind denoiser, using a special downsampling strategy. More precisely, their
method consists of splitting a given image into smaller parts by using a
checkerboard downsampling strategy. From a single image, four images are thus
generated, by removing one half of all pixels, and shifting the remaining
pixels to fill in the gaps left behind. Then, a network is trained to learn
the mappings between the resulting downsampled image pairs. Due to the
internal redundancy in the form of recurrent patches present in images, and the
high degree of self-similarity, the neural network will also be able to
denoise the whole image instead of the downsampled ones [4, 46, 18]. For a
more detailed description of the Noise2Fast training strategy, as well as the
network architecture, we refer the reader to [26].
In our approach, we use a different loss function than the one described in the
work of Lequyer et al [26]. Instead of considering the whole image domain for
training, we restrict the optimisation process for the foreground
$\Phi_{\theta^{F}}$ (resp. background $\Phi_{\theta^{B}}$) expert to the
current segmentation masks $u^{k}$ (resp. $1-u^{k}$) obtained by Algorithm 2.
In a first step, as in [26] the downsampled training images are generated in
the following way
$\displaystyle
f_{\text{even}}(i,j)=f\left(i,2j+(i\,\text{mod}\,2)\right)\in\mathbb{R}^{m\times\frac{n}{2}}$
$\displaystyle
f_{\text{odd}}(i,j)=f\left(i,2j+1-(i\,\text{mod}\,2)\right)\in\mathbb{R}^{m\times\frac{n}{2}}$
$\displaystyle
f^{\prime}_{\text{even}}(i,j)=f\left(2i+(j\,\text{mod}\,2),j\right)\in\mathbb{R}^{\frac{m}{2}\times
n}$ $\displaystyle
f^{\prime}_{\text{odd}}(i,j)=f\left(2i+1-(j\,\text{mod}\,2),j\right)\in\mathbb{R}^{\frac{m}{2}\times
n},$
and we repeat this downsampling procedure for the segmentation masks $u^{k}$
and $u_{B}^{k}$, for $k=0,\dots,N$ as well. We denote as
$\displaystyle\mathcal{J}^{k}=\\{(f_{\text{even}},f_{\text{odd}},u_{\text{odd}}^{k},u_{B\text{,odd}}^{k}),(f_{\text{odd}},f_{\text{even}},u_{\text{even}}^{k},u_{B\text{,even}}^{k}),$
$\displaystyle(f^{\prime}_{\text{even}},f^{\prime}_{\text{odd}},u_{\text{odd}}^{k^{\prime}},u_{B\text{,odd}}^{k^{\prime}}),(f^{\prime}_{\text{odd}},f^{\prime}_{\text{even}},u_{\text{even}}^{k^{\prime}},u_{B\text{,even}}^{k^{\prime}})\\}$
the set of training data for $k=0,\dots,N$, with $N$ being the number of
iterations of the alternating minimisation.
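A NumPy sketch of the checkerboard downsampling is given below (assuming even image dimensions); the same routine is applied to the masks, and the vertical pair $f^{\prime}_{\text{even}},f^{\prime}_{\text{odd}}$ is obtained by running it on the transposed image.

```python
import numpy as np

def checkerboard_pair(f):
    # Split f into two half-width images holding the two colours of the
    # checkerboard in each row, matching the index shifts written above.
    m, n = f.shape
    i = np.arange(m)[:, None]
    j = np.arange(n // 2)[None, :]
    f_even = f[i, 2 * j + (i % 2)]       # shape (m, n // 2)
    f_odd = f[i, 2 * j + 1 - (i % 2)]    # complementary checkerboard pixels
    return f_even, f_odd
```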
We then train the two denoising networks, $\Phi_{\theta^{F}}$ and
$\Phi_{\theta^{B}}$, restricted to the given regions, $u^{k}$, and
$u^{k}_{B}$, i.e. for
$(\tilde{f},\tilde{g},\tilde{u},\tilde{u}_{B})\in\mathcal{J}^{k}$ we minimise
$\displaystyle\mathcal{L}_{\boldsymbol{u}}^{k}(\boldsymbol{\theta})=\sum_{i}\left(\Phi_{\theta^{F}}(\tilde{f})[i]-\tilde{g}[i]\right)^{2}\cdot\tilde{u}[i]+\left(\Phi_{\theta^{B}}(\tilde{f})[i]-\tilde{g}[i]\right)^{2}\cdot\tilde{u}_{B}[i].$
(12)
Thus the self-supervised denoisers learn to reconstruct the even (resp. odd) lines
and columns $\tilde{f}$ of the image from the odd (resp. even) ones
$\tilde{g}$. As mentioned above, owing to the self-similarity redundancy,
minimising (12), i.e. $\mathcal{L}_{\boldsymbol{u}}^{k}$, also approximately solves problem (11).
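A PyTorch sketch of the masked loss (12) is shown below; `expert_fg` and `expert_bg` stand for the two networks $\Phi_{\theta^{F}}$ and $\Phi_{\theta^{B}}$, and all tensors are assumed to share the same spatial shape.

```python
import torch

def masked_loss(expert_fg, expert_bg, f_tilde, g_tilde, u_tilde, u_b_tilde):
    # Each expert is penalised only inside its own (downsampled) mask, cf. (12).
    res_fg = (expert_fg(f_tilde) - g_tilde) ** 2 * u_tilde
    res_bg = (expert_bg(f_tilde) - g_tilde) ** 2 * u_b_tilde
    return (res_fg + res_bg).sum()
```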
In the next section, we demonstrate possible applications of three
different variants of the proposed joint denoising and segmentation method.
## 5 Experiments and Results
The code which was used to obtain the results presented in this work is
provided on GitHub (https://github.com/Nadja1611/Single-Image-based-
unsupervised-joint-segmentation-and-denoising.git). As a first application, we
test our method on the microscopy cell nuclei dataset from the DSB2018
dataset (https://www.kaggle.com/c/data-science-bowl-2018) stemming from the
Kaggle 2018 Data Science Bowl challenge. The data consists of a diverse
collection of cell nuclei imaged by various fluorescence microscopes. The
patches are of size $128\times 128$, and come with manually generated
segmentation ground truths. More precisely, we use the noise-free data and
manually add Gaussian noise with three different noise levels, namely 10, 30,
and 50. In our experiments, we considered the same subset of images as the
one used in [5], where the authors demonstrated that the segmentation of noisy
data can be improved by addressing denoising and segmentation in a cooperative
(but not fully joint) manner.
In the following experiments, for the evaluation of the segmentation
performance we use the Dice metric, and for capturing the denoising
performance in the experiments, we choose the peak signal-to-noise ratio (PSNR)
and the structural similarity index (SSIM).
We stop our alternating Algorithm 1 as soon as the decrease of energy (5) is
less than 15 percent of the previous decrease rate. We tried out a few
different strategies, and this one turned out to be the most promising. We
indeed observed that a criterion based on the change in the energy decay is
robust to different scales of the regularisation parameter $\lambda$, and it
also adapts to different types of images.
We compare the segmentation performance of our joint denoising and
segmentation approach with the convex Chan-Vese model from [12] applied either
on the noisy images directly, or on the previously denoised data within a
sequential approach. For both the proposed joint approach and the sequential
one, we use the same denoised image as a starting point for a fair comparison.
Further, we test our method against the partially joint denoising and
segmentation framework in [5].
### Segmentation with the constant background assumption
We start with the evaluation of our method on a subset of the DSB2018 cell
nuclei data which were manually corrupted (noise levels 10, 30 and 50). To
this end, we train a foreground denoiser, $\Phi_{\theta^{F}}$, and we assume
the background to be constant, i.e. $\Phi_{\theta^{B}}=\theta^{B}\mathbbm{1}.$
For this particular type of images, this assumption is useful, while for
images with more structural patterns, this may not be a reasonable choice, and
two denoising experts might be necessary.
To apply our joint approach, we first denoise the given image using the
Noise2Fast strategy as described in Section 4, and use the
thresholded denoised image (with the threshold $\epsilon$ set to $0.5$) as
initialisation. For noise level 10, we applied the segmentation Algorithm 1
with the constant background assumption, while a Noise2Fast expert was
considered for higher noise levels. We recall that the overall process for
solving the joint segmentation and denoising is presented in Algorithm 1.
Depending on the type of image, between two and six iterations of the
alternating process are required to meet the convergence criterion.
For each method, we conducted the experiments with ten different values of the
regularisation parameter $\lambda$ evenly distributed in the interval $[0,1]$,
and then selected for each image the result with the highest Dice value.
As a further comparison, we applied the convex Chan-Vese model from [12]
directly on the noisy images. The obtained results are depicted in Figures 5
to 7, while the segmentation evaluation metrics are summarised in Table 1. We
observe that for all three noise levels, the sequential and Chan-Vese method
from [12] struggle with intensity inhomogeneities of the cells. These examples
highlight the strength of the proposed unified approach, which is capable of
segmenting cells with intensities close to the mean value of the background.
Notice that the proposed approach does not perform well on the last example
due to the presence of intensity inhomogeneities, ascribed to a spatially
varying field (the bias field) in the upper left corner of the image. Please
note that in this case, evaluating the denoising performance might not be
appropriate, as we are assuming a constant background and not applying
denoising to the background.
In Table 1, the results obtained by the supervised joint denoising and
segmentation method DenoiSeg [5] are summarised. Here, we ran the provided
code using 10 annotated training images. More precisely, we used the DSB2018 dataset
with noise level zero and added Gaussian noise in the same way as before to
all of the 4320 images, among which 3750 were used for training, 670 for
validation and the same 50 as we used for our experiments for testing. It has
to be mentioned that for validation, all 570 annotated validation images are
used in [5], resulting in a total of 580 annotated images
during the training process. As displayed in Table 1, this method performs
best. To have a fairer comparison in terms of training data, we decided to
adapt their method by using 10 images in total (DenoiSeg (10 images) in Table
1), 7 for training and 3 for validation. In this setting, all available data
are still used for the training of the denoiser, whereas for the segmentation
network, the ground truth masks for all but the ten training images are zeroed
out. With this smaller level of supervision, our approach outperforms the
method of [5].
Figure 5: Visual comparison of the segmentation results of data with noise
level 10. From left to right, this figure shows: the noisy input, the results
obtained with the proposed joint approach, the sequential approach, the Chan-
Vese baseline and the ground truth segmentation masks. For all compared
methods, the $\lambda$ maximising the Dice score has been selected.

Figure 6: Visual comparison of the segmentation results of data with noise level 30.
For higher noise levels, it is required to filter the data fidelity
terms. This avoids having to consider higher values of the regularisation
parameter $\lambda$, which may lead to an over-segmentation of the background
and an overall decrease of the segmentation performance. For noise levels 30
and 50, as mentioned in Section 3.1, we therefore minimise
$\begin{split}\mathcal{E}_{f,\lambda}(u,\boldsymbol{\theta})=i_{\mathbb{A}}(u)+\lambda\lvert
u\rvert_{\text{TV}}&+\int_{\Omega}\left[K_{\sigma}\ast\left(f-\Phi_{\theta^{F}}(f)\right)\right]^{2}u(x)dx\\\
&+\int_{\Omega}\left[K_{\sigma}\ast\left(f-\Phi_{\theta^{B}}(f)\right)\right]^{2}(1-u(x))dx\,\end{split}$
with $K_{\sigma}$ being a mean filter with $\sigma=3$.
The next paragraph shows experimental results which were obtained applying our
idea of training denoising experts for both regions.
Figure 7: Visual comparison of the segmentation results of data with noise level 50.

noise level | $n10$ | $n30$ | $n50$
---|---|---|---
baseline | 0.820 | 0.773 | 0.582
sequential | 0.799 | 0.777 | 0.735
proposed | 0.851 | 0.825 | 0.786
DenoiSeg [5] | 0.864 | 0.848 | 0.818
DenoiSeg (10 images) | 0.843 | 0.820 | 0.750
Table 1: Dice values obtained on 50 images of the DSB2018 dataset for the
compared methods, and three different noise levels. Here, baseline is the
convex Chan-Vese [12] method directly applied to the noisy data, while for the
sequential method, we first denoise the image using Noise2Fast [26]. Our
unsupervised method almost reaches the performance of the fully supervised
approach [5].
### Segmentation using two denoisers
In the toy example in Figure 1 from Section 1.5, we trained two denoising
experts (in this case we used a linear network consisting of one filter of
size $15\times 15$) initialised by the yellow and purple boxes of size
$30\times 30$. We iterated between the denoising and segmentation three times,
until the energy decrease was less than 10 percent. For segmentation, we set
the regularisation parameter $\lambda$ to 0.02. After the first segmentation
step, the loss functions of the denoisers were restricted to $u$ and $1-u$
respectively.
Figure 8 is a typical example showing the strength of the proposed algorithm
compared to intensity-based approaches. In this experiment, we preprocessed
the given image of size 256$\times$256 in such a way that both regions have the
same mean value, and added Gaussian noise as described before, with a noise
level of 10. As a consequence, the classical Chan-Vese algorithm totally fails
on this example. This model can nevertheless perform well with an adapted
hand-crafted transformation of the image to segment. As illustrated in the
last two images of Figure 8, when fed with the map of the normalized image gradient
instead of the original image intensities, the Chan-Vese model is able to
segment the two parts of the image.
On the other hand, our approach is able to automatically learn a relevant
transformation of the image data and provides an excellent segmentation without any
such preprocessing trick. The reason is again that the weights learnt by the
two denoising experts strongly depend on the true underlying signal, which, in
contrast to the mean intensity, differs between the two regions.
denoising experts were initialised by boxes of size 50$\times$50 centered in
the regions. We used a regularisation parameter $\lambda$ of 0.06, and set the
learning rate to 0.001. Using the same stopping criterion as in the cell
example, these results were obtained after $3$ iterations of the alternating
procedure involving denoising and segmentation steps.
Figure 8: Segmentation of a noisy Brodatz image consisting of two different
textures. The first three images show the noisy input $f$, the minimiser of
energy (5), and the result obtained by directly applying the active contour
algorithm [11]. The fourth image shows the normalized gradient of $f$, and the
last one is the result obtained when applying the classical Chan-Vese
algorithm on the normalized gradient map.
In Figure 9, we display the clean image considered in the experiment of Figure
8, as well as different denoised images with their corresponding quantitative
metrics. More precisely, the second image in the figure is obtained by
applying the Noise2Fast strategy to the whole image, while the third image is
the result of the proposed joint optimisation procedure, where the image is
composed using the segmentation mask $u$ and the denoised images from the two
denoising experts. Especially in the left region, we can observe a better
denoising performance of the proposed method, which is made more evident by
examining the PSNR (20.36 vs 19.815) and SSIM (0.753 vs 0.696) values.
Figure 9: Comparison of denoising performance with different Noise2Fast
strategies. On the middle image, Noise2Fast is applied to the whole image. On
the right image, we present the final denoised image obtained from the two
separate denoisers learned with the proposed framework.
### Segmentation with a reference mask using Algorithm 3
In Figure 10, we show another example of image segmentation for three
different noise levels using Algorithm 4. The main difficulty of this image
lies in the intensities which are shared by the object to be segmented and the
background. Therefore, we chose a representative box for initialising the
squirrel, which includes both, dark and bright areas, in order to enable the
foreground denoising expert to better generalize on the foreground region
consisting of dark and bright areas. Naturally, as the squirrel and background
do not differ much in terms of their structural properties, the foreground
denoiser, $\Phi_{\theta^{F}}$ also performs well on the background, causing
the segmentation mask $u$ to grow. In order to control this behaviour, we
applied our second strategy that includes a recursive reference mask as
described in Algorithm 3, thus preventing the segmentation mask obtained at
iteration $k+1$ from deviating too much from the previous one at iteration
$k$. More precisely, the parameters we used for noise level 10 were
$\mu=0.0001,\lambda=0.005$, for noise level 30 we set $\mu=0.005$,
$\lambda=0.005$, while for noise level 50 we used $\mu=0.00015$, $\lambda=0.005$.
Figure 10: Segmentation results obtained on the noisy images showing a
squirrel corrupted with three different noise levels. The first column shows
the clean input image and the initialisation for foreground and background regions,
while in the second column the noisy versions of the given image are depicted.
The remaining ones present the segmentation results obtained using the
proposed strategy with the segmentation Algorithm 4, the segmentation masks
using the Chan-Vese algorithm provided by skimage with checkerboard
initialisation and box initialisation, respectively. The last row shows the
denoised images which are obtained by composing the obtained segmentation mask
and expert denoiser outputs.
In the following, we discuss some possible extensions and current limitations
of the proposed joint denoising and segmentation approach.
## 6 Extensions and limitations
First, our proposed unified framework can be extended to the (multichannel)
multiclass segmentation case, as we discuss in the following paragraph.
### 6.1 Vector-valued multi-class model
In order to segment a noise-corrupted vector-valued image represented as
$\boldsymbol{f}=(f_{1},\dots,f_{L})$ into $C$ different regions, we can
consider $C$ dedicated neural networks acting as denoising experts for each
region. In this case, the objective is to estimate $C$ segmentation masks
$\\{u_{i}\\}_{i=1}^{C}$ satisfying the simplex constraint, i.e.
$\sum_{i=1}^{C}u_{i}=1$, as well as the set of network parameters
$\boldsymbol{\theta}^{\text{MC}}=(\theta^{\text{MC}}_{1},\dots,\theta^{\text{MC}}_{C})$.
With these notations, the energy (5) can be extended to segment noise-
corrupted, vector-valued images $\boldsymbol{f}$ as
$\displaystyle\mathcal{E}_{f,\lambda}(\boldsymbol{u},\boldsymbol{\theta})\coloneqq
i_{\mathbb{A}}(\boldsymbol{u})$
$\displaystyle+\lambda\lvert\boldsymbol{u}\rvert_{\text{TV}}+\sum_{i=1}^{C}\sum_{j=1}^{L}\int_{\Omega}\left(f_{j}-\Phi_{\theta_{i}^{\text{MC}}}(f_{j})\right)^{2}u_{i}\,.$
(13)
As before, it may not be necessary to train $C$ different denoising networks,
as some regions may be assumed to be constant and in this case the “expert”
for region $i$ can be replaced by the mean value of the image inside region
$i$.
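To sketch the multiclass variant, the snippet below assembles the data term of (13) and enforces the simplex constraint by Euclidean projection; the sort-based projection is a generic standard routine, not taken from the paper, and the array shapes are illustrative.

```python
import numpy as np

def multiclass_data_terms(f, experts):
    # f: channels stacked as (L, H, W); experts: list of C callables.
    # Returns the per-class residual maps of (13), summed over channels.
    return np.stack([
        sum((f[j] - experts[i](f[j])) ** 2 for j in range(f.shape[0]))
        for i in range(len(experts))
    ])

def project_simplex(u):
    # Euclidean projection of u (shape (C, H, W)) onto the probability simplex
    # along the class axis: one standard way to realise sum_i u_i = 1, u_i >= 0.
    C = u.shape[0]
    s = np.sort(u, axis=0)[::-1]                 # sorted descending per pixel
    css = np.cumsum(s, axis=0) - 1.0
    k = np.arange(1, C + 1).reshape(C, 1, 1)
    rho = (s - css / k > 0).sum(axis=0)          # number of active classes
    theta = np.take_along_axis(css, rho[None] - 1, axis=0) / rho
    return np.maximum(u - theta, 0.0)
```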
### 6.2 Limitations
A limitation of the current work lies in the training strategy in the case
where two denoisers are applied. In our experiments, we observed that once the
denoising experts have been trained on the initial boxes and the subsequent
segmentation step has been realised, it may occur that one of the two classes
in the image contains a significant part of the other class. As a result,
during the next denoising step, one of the networks is trained on parts of both
regions present in the image. Under the influence of the total variation
regularisation, $u$ may then converge to an undesired constant mask. With the
recursive integration of a reference mask, we already proposed in Section 4.2
a strategy to overcome this drawback. One interesting alternative would be to
include an additional constraint enforcing the denoisers to perform better in
their initial regions than in the other initial ones.
Next, in some of our experiments we have observed that the Noise2Fast denoiser
is not well suited for the segmentation of certain images such as the zebra
image in Figure 3. The reason for that is that the filters of the learned
experts operate locally, and are not good in capturing global information of
the regions. As a consequence, in the case of the zebra, in regions where the
stripes are thicker, and the intensity values are closer to the one in the
background, the background expert outperforms the one of the foreground,
resulting in an undesired result as obtained by the piecewise constant Chan-
Vese model. To overcome this limitation, we modified the checkerboard strategy
of the Noise2Fast method, and instead of squeezing the image to half of its
width/height, we divided its size by a factor of four. In addition to include
different denoisers, such as for instance the deep image prior [39], an
interesting perspective would be to define new data fitting terms focusing on
structural similarities within the different classes.
## 7 Conclusion
In this work, we have proposed a novel energy functional for the joint
denoising and segmentation of images. Our framework combines the advantages of
well-established variational models with modern self-supervised deep learning
strategies. A major strength of the method lies in the fact that it can handle
single images without the need for ground truth segmentation masks or noisy-
clean training pairs. Further, the energy functional is designed in such a
way that both tasks benefit from each other, which has also been confirmed by
our experiments.
## References
* [1] Egil Bae and Ekaterina Merkurjev. Convex variational methods on graphs for multiclass segmentation of high-dimensional data and point clouds. Journal of Mathematical Imaging and Vision, 58:468–493, 2017.
* [2] Antonio Baeza, Vicent Caselles, Pau Gargallo, and Nicolas Papadakis. A narrow band method for the convex formulation of discrete multilabel problems. Multiscale Modeling & Simulation, 8(5):2048–2078, 2010.
* [3] Joshua Batson and Loic Royer. Noise2Self: Blind denoising by self-supervision. In International Conference on Machine Learning, pages 524–533. PMLR, 2019.
* [4] Antoni Buades, Bartomeu Coll, and Jean-Michel Morel. Self-similarity-based image denoising. Communications of the ACM, 54(5):109–117, 2011.
* [5] Tim-Oliver Buchholz, Mangal Prakash, Deborah Schmidt, Alexander Krull, and Florian Jug. DenoiSeg: joint denoising and segmentation. In Computer Vision–ECCV 2020 Workshops: Glasgow, UK, August 23–28, 2020, Proceedings, Part I, pages 324–337. Springer, 2021.
* [6] Xiaohao Cai. Variational image segmentation model coupled with image restoration achievements. Pattern Recognition, 48(6):2029–2042, 2015.
* [7] Xiaohao Cai, Raymond Chan, Mila Nikolova, and Tieyong Zeng. A three-stage approach for segmenting degraded color images: Smoothing, lifting and thresholding (slat). Journal of Scientific Computing, 72:1313–1332, 2017.
* [8] Xiaohao Cai, Raymond Chan, Carola-Bibiane Schonlieb, Gabriele Steidl, and Tieyong Zeng. Linkage between piecewise constant Mumford–Shah model and Rudin–Osher–Fatemi model and its virtue in image segmentation. SIAM Journal on Scientific Computing, 41(6):B1310–B1340, 2019.
* [9] Xiaohao Cai, Raymond Chan, and Tieyong Zeng. A two-stage image segmentation method using a convex variant of the Mumford–Shah model and thresholding. SIAM Journal on Imaging Sciences, 6(1):368–390, 2013.
* [10] Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of mathematical imaging and vision, 40:120–145, 2011.
* [11] Tony Chan and Luminita Vese. An active contour model without edges. In Scale-Space Theories in Computer Vision: Second International Conference, Scale-Space’99 Corfu, Greece, September 26–27, 1999 Proceedings 2, pages 141–151. Springer, 1999.
* [12] Tony F Chan, Selim Esedoglu, and Mila Nikolova. Algorithms for finding global minimizers of image segmentation and denoising models. SIAM journal on applied mathematics, 66(5):1632–1648, 2006.
* [13] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence, 40(4):834–848, 2017.
* [14] Veronica Corona, Martin Benning, Matthias J Ehrhardt, Lynn F Gladden, Richard Mair, Andi Reci, Andrew J Sederman, Stefanie Reichelt, and Carola-Bibiane Schönlieb. Enhancing joint reconstruction and segmentation with non-convex bregman iteration. Inverse Problems, 35(5):055001, 2019.
* [15] Imre Csiszár. Information geometry and alternating minimization procedures. Statistics and Decisions, Dedewicz, 1:205–237, 1984.
* [16] Linwei Fan, Fan Zhang, Hui Fan, and Caiming Zhang. Brief review of image denoising techniques. Visual Computing for Industry, Biomedicine, and Art, 2:1–12, 2019.
# Spectral Form Factor of a Quantum Spin Glass

Michael Winer, Joint Quantum Institute, Department of Physics, University of Maryland, College Park, Maryland 20742, USA

Richard Barney, Joint Quantum Institute, Department of Physics, University of Maryland, College Park, Maryland 20742, USA; Condensed Matter Theory Center, Department of Physics, University of Maryland, College Park, Maryland 20742, USA

Christopher L. Baldwin, Joint Quantum Institute, Department of Physics, University of Maryland, College Park, Maryland 20742, USA

Victor Galitski, Joint Quantum Institute, Department of Physics, University of Maryland, College Park, Maryland 20742, USA

Brian Swingle, Department of Physics, Brandeis University, Waltham, Massachusetts 02453, USA
###### Abstract
It is widely expected that systems which fully thermalize are chaotic in the
sense of exhibiting random-matrix statistics of their energy level spacings,
whereas integrable systems exhibit Poissonian statistics. In this paper, we
investigate a third class: spin glasses. These systems are partially chaotic
but do not achieve full thermalization due to large free energy barriers. We
examine the level spacing statistics of a canonical infinite-range quantum
spin glass, the quantum $p$-spherical model, using an analytic path integral
approach. We find statistics consistent with a direct sum of independent
random matrices, and show that the number of such matrices is equal to the
number of distinct metastable configurations—the exponential of the spin glass
“complexity” as obtained from the quantum Thouless-Anderson-Palmer equations.
We also consider the statistical properties of the complexity itself and
identify a set of contributions to the path integral which suggest a
Poissonian distribution for the number of metastable configurations. Our
results show that level spacing statistics can probe ergodicity breaking
in quantum spin glasses and provide a way to generalize the notion of spin
glass complexity beyond models with a semi-classical limit.
###### Contents
1. 1 Introduction
1. 1.1 Review of the spectral form factor
2. 1.2 Review of mean-field spin glasses
3. 1.3 Summary of results and implications
2. 2 Real-time dynamics of the quantum $p$-spherical model
1. 2.1 The model
2. 2.2 Schwinger-Keldysh path integral
3. 2.3 TAP equations on the Schwinger-Keldysh contour
3. 3 The semiclassical ramp in the ergodic phase
1. 3.1 Effective action
2. 3.2 Connected solutions
3. 3.3 Contribution of connected solutions
4. 3.4 Evaluation of the SFF
4. 4 The semiclassical ramp in the non-ergodic phase
1. 4.1 Effective action
2. 4.2 Connected solutions
3. 4.3 Contribution of connected solutions
4. 4.4 Evaluation of the SFF
5. 5 Higher moments of the evolution operator
1. 5.1 Effective action
2. 5.2 Connected solutions
3. 5.3 Contribution of connected solutions
4. 5.4 Evaluation of the SFF
6. A Derivation of Schwinger-Keldysh TAP equations
7. B Energy of a TAP state
8. C Accounting for filter functions
## 1 Introduction
An isolated quantum many-body system which reaches an effective thermal
equilibrium state starting from an out-of-equilibrium initial state is often
called “quantum chaotic.” As commonly used, quantum chaos is a loose term
referring to a family of phenomena that typically co-occur, including the
ability of the system to serve as its own heat bath [1, 2, 3], hydrodynamic
behavior of conserved quantities [4, 5, 6, 7, 8], and random-matrix-like
energy eigenvalues [9, 10, 11, 12]. Given this variety, it is crucial to
understand the relationships between different manifestations of quantum chaos
[13, 14].
These relationships are complicated and interesting in large part because the
systems in question have structure, such as locality and symmetry. For
example, if the Hamiltonian has spatial locality, energy conservation implies
the existence of slow hydrodynamic modes and an associated long time scale,
the Thouless time, such that random-matrix behavior is only present for energy
levels closer than the inverse Thouless time [15, 16]. Similarly, if the
Hamiltonian possesses a symmetry, then it can be organized into blocks
labelled by irreducible representations of the symmetry. One finds random-
matrix statistics within each individual block, but full ergodicity is broken
because matrix elements between different blocks are forbidden [17, 18, 19,
20, 21].
It is natural to ask whether there are other ways in which ergodicity can be
lost, and if so, what the resulting spectral statistics of the Hamiltonians
are. In particular, we will better understand the relations between different
measures of quantum chaos by understanding how they are lost and what replaces
them.
Quantum spin glasses provide one well-established context to explore these
questions, since they exhibit a rich phenomenology associated with the
inability to fully thermalize [22, 23, 24, 25, 26, 27, 28]. In this paper, we
determine the spectral statistics of an analytically tractable spin glass
model, the quantum $p$-spherical model. We find that up to times polynomial in
the system size, the Hamiltonian can effectively be described as approximately
block-diagonal. Each block behaves as a random matrix independent of the
others, and the number of blocks depends on the energy per particle. At high
energies, there is only one block and the system is ergodic. Below a critical
energy density, the Hamiltonian breaks into exponentially many blocks — the
average number of blocks jumps discontinuously from the high energy regime and
then decreases as the energy density decreases further. We establish these
results via a path integral computation of the spectral form factor (SFF),
which measures correlations between pairs of energy levels [29, 30, 31, 32,
33].
In the remainder of the introduction, we give some physical context by
reviewing the spectral form factor and mean-field spin glasses. We then
summarize our results in more detail and discuss implications. In Sec. 2, we
review the $p$-spherical model in detail. In Sec. 3, we calculate the SFF of
this model in the high-temperature ergodic regime, and in Sec. 4, we do so in
the non-ergodic regime. Finally, in Sec. 5, we investigate higher-moment
analogues of the SFF.
Figure 1: (Top left) Fully chaotic systems have energy levels that are
statistically similar to a Gaussian random matrix, indicated by the orange
block. (Top right) By contrast, quantum spin glasses in the non-ergodic phase
have spectral statistics that resemble a collection of many nearly-decoupled
random matrices. (Bottom) Spectral statistics can be diagnosed via the spectral
form factor, denoted $\textrm{SFF}(T)$, which consists of a path integral over
a pair of real-time contours as indicated by the red lines. The universal part
of $\textrm{SFF}(T)$, which is proportional to $T$, is enhanced by the number
of effectively uncoupled sectors (other non-universal contributions are not
indicated here).
### 1.1 Review of the spectral form factor
To study the spectral correlations of a Hamiltonian $H$, a standard tool is
the spectral form factor (SFF) [34, 35], defined as
$\textrm{SFF}(T)\equiv\big{|}\textrm{Tr}e^{-iHT}\big{|}^{2}.$ (1)
In situations where the spectrum is unbounded, or when one wishes to
concentrate on a portion of the spectrum, the trace in Eq. (1) is regulated by
a filter function $f(H)$:
$\textrm{SFF}(T,f)\equiv\big{|}\textrm{Tr}f(H)e^{-iHT}\big{|}^{2}.$ (2)
One common choice is $f(H)=e^{-\beta H}$ [29, 36], and another is
$f(H)=e^{-c(H-E_{0})^{2}}$. The latter allows one to study level statistics
near a specified energy $E_{0}$.
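For concreteness, here is a minimal numerical sketch (ours, not from the original text) of the filtered SFF of Eq. (2) for a single given spectrum, using the Gaussian filter $f(H)=e^{-c(H-E_{0})^{2}}$:

```python
import numpy as np

def filtered_sff(eigs, times, E0, c):
    """SFF(T, f) of Eq. (2) with the Gaussian filter f(H) = exp(-c (H - E0)^2).

    eigs: eigenvalues of one realization of H; times: array of T values.
    """
    f = np.exp(-c * (eigs - E0) ** 2)                 # filter weight per level
    tr = (f * np.exp(-1j * np.outer(times, eigs))).sum(axis=1)
    return np.abs(tr) ** 2                            # |Tr f(H) e^{-iHT}|^2
```

Averaging this quantity over an ensemble of Hamiltonians gives the disorder-averaged SFF discussed next.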
For a single Hamiltonian, the SFF is an erratic function of time [35]. Thus
one usually considers an ensemble of Hamiltonians and defines the SFF as the
average of Eq. (2) over the ensemble. Throughout this paper, we use the
notation $\mathbb{E}[\,\cdot\,]$ to denote the ensemble average.
The SFF is closely related to the correlation function of the density of
states. Formally, the (filtered) density of states is given by
$\rho(E,f)\equiv\sum_{n}f(E_{n})\delta(E-E_{n})=\textrm{Tr}f(H)\delta(E-H),$
(3)
where $n$ labels the eigenstate of $H$ with eigenvalue $E_{n}$, and its
correlation function is
$C(E,\omega,f)\equiv\mathbb{E}\left[\rho\left(E+\frac{\omega}{2},f\right)\rho\left(E-\frac{\omega}{2},f\right)\right].$
(4)
We have that
$\displaystyle\textrm{SFF}(T,f)$
$\displaystyle=\mathbb{E}\Big{[}\textrm{Tr}f(H)e^{-iHT}\textrm{Tr}f(H)e^{iHT}\Big{]}$
(5) $\displaystyle=\int dEd\omega\,e^{-i\omega
T}\mathbb{E}\left[\textrm{Tr}f(H)\delta\left(E+\frac{\omega}{2}-H\right)\textrm{Tr}f(H)\delta\left(E-\frac{\omega}{2}-H\right)\right]$
$\displaystyle=\int d\omega\,e^{-i\omega T}\int dE\,C(E,\omega,f).$
The SFF is simply the Fourier transform of the correlation function with
respect to $\omega$, integrated over $E$ (although the filter function allows
one to concentrate on an arbitrary subset of the spectrum).
Figure 2: The disorder-averaged SFF for the Gaussian unitary ensemble (GUE) of
matrix dimension $N=50$, computed numerically by averaging over ten thousand
realizations. The three distinct regimes — dip, ramp, plateau — are indicated.
It is conceptually useful to split the SFF into two contributions:
$\textrm{SFF}(T,f)=\big{|}\mathbb{E}\textrm{Tr}f(H)e^{-iHT}\big{|}^{2}+\bigg{(}\mathbb{E}\left[\big{|}\textrm{Tr}f(H)e^{-iHT}\big{|}^{2}\right]-\big{|}\mathbb{E}\textrm{Tr}f(H)e^{-iHT}\big{|}^{2}\bigg{)}.$
(6)
The first term, the disconnected piece of the SFF, comes solely from the
average density of states. It is the second term, the connected piece, that
contains information on the correlation between energy levels. The assertion
of “random matrix universality” [9, 37] can be phrased as the statement that
an ensemble of quantum chaotic Hamiltonians will generically have the same
connected SFF as the canonical Gaussian ensembles of random matrix theory [11,
38]. This conjectured universal behavior is illustrated in Fig. 2, which plots
the disorder-averaged SFF of the Gaussian unitary ensemble (one of the
aforementioned canonical ensembles). Note the three distinct regimes:
* •
The “dip”, occurring at short times, comes from the disconnected piece of the
SFF (and thus its precise shape is non-universal). It reflects a loss of
constructive interference — the different terms of $\textrm{Tr}e^{-iHT}$
acquire different phase factors as $T$ increases.
* •
The “ramp”, occurring at intermediate times, is arguably the most interesting
regime. In the canonical matrix ensembles, it is a consequence of the
result[11]
$\mathbb{E}\left[\rho\left(E+\frac{\omega}{2}\right)\rho\left(E-\frac{\omega}{2}\right)\right]-\mathbb{E}\left[\rho\left(E+\frac{\omega}{2}\right)\right]\mathbb{E}\left[\rho\left(E-\frac{\omega}{2}\right)\right]\sim-\frac{1}{\mathfrak{b}\pi^{2}\omega^{2}},$
(7)
where $\mathfrak{b}=1$, $2$, $4$ in the orthogonal, unitary, and symplectic
ensembles respectively [11]. The right-hand side being negative is a
reflection of the well-known level repulsion in quantum chaotic systems [39].
Taking the Fourier transform with respect to $\omega$ gives a term
proportional to $T$ for the connected SFF. Such a linear-in-$T$ ramp is often
taken as a defining signature of quantum chaos.
* •
The “plateau”, occurring at late times, results from the discreteness of the
spectrum. At times much larger than the inverse level spacing, one expects
that all off-diagonal terms in the double-trace of the SFF sum to effectively
zero, meaning that
$\textrm{SFF}(T,f)=\sum_{mn}e^{-i(E_{m}-E_{n})T}f(E_{m})f(E_{n})\sim\sum_{n}f(E_{n})^{2}.$
(8)
As the plateau regime is both challenging to access analytically and not
particularly informative, we shall not consider it further in this work.
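The three regimes are easy to reproduce numerically. The sketch below is ours, with parameters matching the caption of Fig. 2 (GUE, matrix dimension $N=50$, ten thousand realizations); at late times the average settles at the plateau value $\sim N$, in line with Eq. (8) for $f=1$.

```python
import numpy as np

def gue(n, rng):
    """Sample an n x n matrix from the Gaussian unitary ensemble."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

rng = np.random.default_rng(0)
N, n_samples = 50, 10000
times = np.logspace(-1, 2, 200)

avg = np.zeros_like(times)
for _ in range(n_samples):
    eigs = np.linalg.eigvalsh(gue(N, rng))
    # SFF(T) = |sum_n exp(-i E_n T)|^2 for this realization
    avg += np.abs(np.exp(-1j * np.outer(times, eigs)).sum(axis=1)) ** 2
avg /= n_samples
# Plotting avg vs. times on log-log axes shows the dip, the linear ramp,
# and the late-time plateau at approximately N.
```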
The bulk of our analysis in this paper is devoted to calculation of the ramp
in a well-known quantum spin glass model, the $p$-spherical model (discussed
below). The results can be understood via the elementary observation that when
a Hamiltonian is block diagonal,
$H=\begin{pmatrix}H_{1}&0&0&\\\ 0&H_{2}&0&\cdots\\\ 0&0&H_{3}&\\\
&\vdots&&\ddots\end{pmatrix},$ (9)
then $\textrm{Tr}e^{-iHT}=\sum_{k}\textrm{Tr}e^{-iH_{k}T}$. If the different
blocks are independent, then the variance of $\textrm{Tr}e^{-iHT}$ is the sum
of the variance of each $\textrm{Tr}e^{-iH_{k}T}$, i.e., the SFF is the sum of
the SFF for each block. In particular, the coefficient of the universal
linear-in-$T$ ramp is multiplied by the number of independent blocks. Systems
with only approximately block-diagonal Hamiltonians, for which there are small
matrix elements between blocks, have this enhancement of the ramp up to the
transition timescale between blocks. For a more detailed analysis, see Ref.
[19].
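As a sanity check of this enhancement, here is a small numerical sketch (our own, not part of the original analysis): it builds a Hamiltonian from $k$ independent GUE blocks, subtracts the disconnected piece as in Eq. (6), and confirms that the connected ramp is roughly $k$ times that of a single block.

```python
import numpy as np

rng = np.random.default_rng(1)

def gue_spectrum(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.linalg.eigvalsh((a + a.conj().T) / 2)

def connected_sff(block_sizes, times, samples=2000):
    """Connected SFF of a block-diagonal H with independent GUE blocks, Eq. (6)."""
    second = np.zeros_like(times)
    first = np.zeros(len(times), dtype=complex)
    for _ in range(samples):
        # Tr e^{-iHT} is the sum of the traces over the individual blocks
        tr = sum(np.exp(-1j * np.outer(times, gue_spectrum(n))).sum(axis=1)
                 for n in block_sizes)
        second += np.abs(tr) ** 2
        first += tr
    return second / samples - np.abs(first / samples) ** 2

times = np.linspace(3.0, 8.0, 20)        # intermediate times: ramp regime
ramp_1 = connected_sff([60], times)
ramp_4 = connected_sff([60, 60, 60, 60], times)
print((ramp_4 / ramp_1).mean())          # should come out roughly 4
```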
### 1.2 Review of mean-field spin glasses
Figure 3: (Left) Cartoon of the energy landscape in a 1RSB spin glass. The
y-axis is energy per spin, $E/N$, where $E$ is energy and $N$ is the number of
spins. Different points on the x-axis represent (very roughly, since the
actual configuration space is $N$-dimensional) different spin configurations
$\sigma$. The dashed line indicates the energy density $\epsilon_{d}$ below
which the system is non-ergodic. (Right) Sketch of the dynamical phase diagram
for a quantum 1RSB spin glass. The x-axis represents parameters controlling
the strength of quantum fluctuations, and the y-axis is energy density. Note
that many other types of phase transitions are also present, in particular
equilibrium transitions, but are not indicated here. See, e.g., Refs. [26, 27]
for more information.
Broadly speaking, spin glasses are systems in which the magnetic moments
$\sigma_{i}$ are frozen but disordered at low temperatures. However, this
definition (much like that of “quantum chaos”) encompasses a wide variety of
phenomena which are in many ways quite distinct, as is made clear by the
literature on the subject [22, 23, 24, 25, 26, 27, 28]. In the present paper,
we focus on what are known as “one-step replica symmetry breaking” (1RSB) spin
glass phases [27]. We are specifically interested in quantum spin glasses, but
we first review the corresponding classical case, for which configurations are
labelled by a list $\sigma\equiv\\{\sigma_{1},\cdots,\sigma_{N}\\}$ and the
Hamiltonian is simply a function of $\sigma$.
While the technical definition of 1RSB is somewhat involved, the qualitative
physics is straightforward to understand and captured by the sketch in Fig. 3.
The energy landscape, i.e., energy as a function of spin configuration, has
many deep wells and steep barriers. In particular, the number of wells is
$e^{O(N)}$ and the heights of the energy barriers separating wells are $O(N)$,
where $N$ is the number of spins. As a result, below a certain energy density
$\epsilon_{d}$, the system is extremely non-ergodic: it remains trapped within
an exponentially small fraction of the thermodynamically relevant
configuration space until exponentially long timescales. While the 1RSB
phenomenon was originally studied in the context of stochastic classical
dynamics [40, 41, 42, 43], it has recently been shown to imply exponentially
long tunneling timescales for isolated quantum dynamics as well [44, 45, 46,
47].
TAP states (named after Thouless, Anderson, and Palmer [48]) provide a more
quantitative description of such “deep wells”. Arguably the most general
definition (see Ref. [25] for others) is in terms of the Legendre transform of
the free energy with respect to local fields:
$F\big{(}\\{m_{i}\\}\big{)}=-\frac{1}{\beta}\log{\textrm{Tr}e^{-\beta
H+\beta\sum_{i}h_{i}\sigma_{i}}}+\sum_{i}h_{i}m_{i},$ (10)
where $H$ is the Hamiltonian of interest and the fields $\\{h_{i}\\}$ are
chosen so that $\langle\sigma_{i}\rangle=m_{i}$ (where
$\langle\,\cdot\,\rangle$ indicates a thermal average). TAP states are simply
the local minima of $F(\\{m_{i}\\})$. Physically, each corresponds to a
different “well” of the energy landscape, including thermal fluctuations
around the lowest point (thus TAP states do generically depend on
temperature). The partition function can be decomposed as a sum over TAP
states:
$Z\equiv\sum_{\sigma}e^{-\beta
H(\sigma)}=\sum_{\alpha}\left[\sum_{\sigma}\delta_{\sigma\in\alpha}e^{-\beta
H(\sigma)}\right]\equiv\sum_{\alpha}Z_{\alpha},$ (11)
where $\alpha$ denotes a TAP state and $\delta_{\sigma\in\alpha}$ restricts
the trace to only those states belonging to TAP state $\alpha$. Note that in
this discussion, $\sigma$ can refer to any set of degrees of freedom: Ising
spins, vector spins, continuous coordinates, etc. In all cases, Eqs. (10) and
(11) can be interpreted accordingly.
Quantum generalizations of spin glasses are usually obtained by adding non-
commuting terms to the Hamiltonian. For example, with an Ising Hamiltonian,
one often interprets $\sigma_{i}$ as the Pauli spin-$z$ operator
$\sigma_{i}^{z}$ and includes an additional transverse field
$\Gamma\sum_{i}\sigma_{i}^{x}$ [49, 50, 51, 52]. On the other hand, with
systems having continuous degrees of freedom (including the one which we study
in this paper), one can interpret $\sigma_{i}$ as a position coordinate and
include the “kinetic energy” $\sum_{i}\pi_{i}^{2}/2\mu$, where $\pi_{i}$ is
the momentum operator conjugate to $\sigma_{i}$ [53, 54]. Generically, the
resulting system has a frozen spin glass phase at low energy and small quantum
fluctuations (the latter being controlled by $\Gamma$ and $\mu^{-1}$
respectively in the examples above), and has a paramagnetic phase at either
high energy or large quantum fluctuations. A sketch of the typical phase
diagram is shown in Fig. 3, with these two phases indicated by “non-ergodic”
and “ergodic”.
It has recently been noted that quantum 1RSB spin glasses can exhibit
eigenstate phase transitions which are distinct from the above [55, 56, 57].
Qualitatively speaking, on the low energy/fluctuation side of the eigenstate
phase boundary, each eigenstate of the Hamiltonian is localized on a single
TAP state. This implies that under the system’s internal dynamics alone (i.e.,
as given by the Schrödinger equation), the system cannot tunnel between TAP
states on any timescale, even times exponential in the number of spins. On the
other side of the phase boundary, each eigenstate is delocalized over many TAP
states in accordance with random matrix behavior. As discussed in Ref. [46],
while this implies that the system does tunnel between TAP states, the
timescale for tunneling is necessarily exponential in system size, analogous
to the activation times under open-system dynamics. Only when there exists a
single TAP state can one identify the phase as genuinely thermalizing. As a
result, one finds phase diagrams like that sketched in Fig. 3, with “non-
ergodic”/“ergodic” indicating whether multiple TAP states exist and
“localized”/“delocalized” referring to the eigenstate properties.
### 1.3 Summary of results and implications
In this paper, we calculate the SFF for a particular ensemble of quantum spin
glasses, the quantum $p$-spherical model (PSM) [58, 53, 54]. We find that in
the ergodic phase, the connected part of the SFF agrees with the expectation
from random matrix theory (Eq. (62) below), while in the non-ergodic phase, it
is enhanced by a factor which is precisely the number of TAP states (Eq.
(109)). Given the discussion in Secs. 1.1 and 1.2, this makes precise and
validates the idea that each metastable state (i.e., TAP state) corresponds to
a block of the Hamiltonian that is quantum chaotic on its own but is nearly
decoupled from all others, thus making the system as a whole non-ergodic [56].
This is the main result of the present work.
Since we only calculate the SFF up to times polynomial in system size, our
results are consistent with but do not test the distinction between localized
and delocalized phases shown in Fig. 3, which is only relevant beyond the
exponentially long timescale corresponding to tunneling between TAP states. We
leave it for future work to incorporate such instanton effects into the path
integral, expecting that they will reduce the SFF to the random matrix result
precisely in the non-ergodic delocalized phase (and even then only beyond the
exponential tunneling timescale).
In addition to the SFF, we consider higher moments of the evolution operator
and identify a set of saddle points (Eq. (134)) suggesting that: i) the number
of TAP states at a given energy is Poisson-distributed, and ii) the numbers of
TAP states at different energies are independent. However, since we have not
evaluated the perturbative corrections around each saddle point, which would
generically dominate over any subleading saddle points, we cannot claim to
have an accurate calculation. It is another direction for future work to study
the distribution of TAP states more systematically.
Our results can be further understood by comparing to Refs. [18] and [19] on
one hand and Refs. [59] and [60] on the other. The first set of papers argues
that for a system which separates into weakly coupled sectors, the SFF
enhancement is the sum of return probabilities over all configurations. If the
time evolution can be considered as an effective Markov process with transfer
rates between sectors given by some matrix $M$, then the SFF enhancement
factor is $\textrm{Tr}e^{MT}$. The second set of papers argues that for a
classical spin glass undergoing Markovian stochastic dynamics with generator
$M$, the number of TAP states can be calculated — and perhaps even defined —
as $\textrm{Tr}e^{MT}$. In this sense, the present paper can be considered as
a “missing link” that extends the results of Refs. [59] and [60] to quantum
systems.
The fact that SFF enhancement is related to return probabilities suggests that
the spectral statistics of spin glasses may contain information on aging
dynamics as well. Another open question is whether the equilibrium replica-
symmetry-breaking transition has any consequences for spectral statistics.
These, as well as those already mentioned, are all promising directions for
future work.
## 2 Real-time dynamics of the quantum $p$-spherical model
### 2.1 The model
The classical $p$-spherical model (PSM) [61] is a disordered spin model with
all-to-all $p$-body interactions. It is defined by the classical Hamiltonian
$H_{\textrm{cl}}\equiv\sum_{(i_{1}\cdots i_{p})}J_{i_{1}\cdots
i_{p}}\sigma_{i_{1}}\cdots\sigma_{i_{p}},$ (12)
where the couplings $J_{i_{1}\dots i_{p}}$ are independent Gaussian random
variables with mean zero and variance
$\mathbb{E}{J_{i_{1}\cdots i_{p}}}^{2}=\frac{J^{2}(p-1)!}{C_{i_{1}\cdots
i_{p}}N^{p-1}}.$ (13)
Here and throughout, $\mathbb{E}$ indicates an average over couplings. The
notation $(i_{1}\cdots i_{p})$ denotes sets of $p$ indices such that $1\leq
i_{1}\leq\cdots\leq i_{p}\leq N$. The sum in Eq. (12) is over all such sets.
Our treatment differs from the standard convention by including a parameter
$J$ for the overall strength of the disorder. To recover the standard
expressions, simply set $J^{2}=p/2$. We also include the combinatorial factor
$C_{i_{1}\cdots i_{p}}=\prod_{1\leq i\leq N}n_{i}!$, where $n_{i}$ is the
number of indices set equal to $i$. This term is almost always one, but its
inclusion avoids $1/N$ corrections in the action.
The $\sigma_{i}$ are real, continuous spin variables subject to the spherical
constraint
$\sum_{i=1}^{N}\sigma_{i}^{2}=N,$ (14)
which ensures that the system has an extensive free energy. It is apparent
that this is a mean-field model without any spatial structure. This allows for
infinite free energy barriers around metastable states in the thermodynamic
limit, making the model ideal for examining the impact of metastability on the
spectral statistics of spin glasses.
In this work, we follow Refs. [58, 53, 54] in generalizing Eq. (12) to a
quantum Hamiltonian $H$. We treat the $\sigma_{i}$ as commuting position
operators, and define conjugate momentum operators $\pi_{i}$ which satisfy the
commutation relations
$[\sigma_{i},\pi_{j}]=i\delta_{ij}.$ (15)
The quantum PSM simply includes a kinetic energy term in the Hamiltonian:
$H=\sum_{i=1}^{N}\frac{\pi_{i}^{2}}{2\mu}+\sum_{(i_{1}\cdots
i_{p})}J_{i_{1}\cdots i_{p}}\sigma_{i_{1}}\cdots\sigma_{i_{p}}.$ (16)
The mass $\mu$ is an additional parameter controlling the strength of quantum
fluctuations. To incorporate the spherical constraint, we take the Hilbert
space to be the subspace in which $\sum_{i}\sigma_{i}^{2}$ has eigenvalue $N$.
The quantum PSM may be interpreted as a soft-spin version of the Ising
$p$-spin model in an external transverse field — itself the subject of much
study [62, 51, 63, 64, 65] — where $\mu^{-1}$ is analogous to the transverse
field. Alternatively, if we think of
$\sigma\equiv\\{\sigma_{1},\cdots,\sigma_{N}\\}$ as a position vector in
$N$-dimensional space, the quantum PSM has a natural interpretation as a
particle of mass $\mu$ moving on a hypersphere of radius $\sqrt{N}$. This
particle experiences the Gaussian random potential
$V(\sigma)=\sum_{(i_{1}\cdots i_{p})}J_{i_{1}\cdots
i_{p}}\sigma_{i_{1}}\cdots\sigma_{i_{p}},$ (17)
whose correlation function is
$\mathbb{E}V(\sigma)V(\sigma^{\prime})=\sum_{(i_{1}\cdots i_{p})}\frac{J^{2}(p-1)!}{C_{i_{1}\cdots i_{p}}N^{p-1}}\sigma_{i_{1}}\sigma^{\prime}_{i_{1}}\cdots\sigma_{i_{p}}\sigma^{\prime}_{i_{p}}=\frac{J^{2}}{pN^{p-1}}\big{(}\sigma\cdot\sigma^{\prime}\big{)}^{p}.$
(18)
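As an illustration (our own check, for a small system), the coupling statistics of Eq. (13), including the combinatorial factor $C_{i_{1}\cdots i_{p}}$, reproduce the correlator of Eq. (18) exactly:

```python
import numpy as np
from itertools import combinations_with_replacement
from collections import Counter
from math import factorial, prod

rng = np.random.default_rng(2)
N, p, J = 6, 3, 1.0

# All sorted index sets (i1 <= ... <= ip) and their factors C = prod_i n_i!
idx_sets = list(combinations_with_replacement(range(N), p))
C = np.array([prod(factorial(n) for n in Counter(s).values()) for s in idx_sets])
var = J**2 * factorial(p - 1) / (C * N ** (p - 1))        # Eq. (13)

# Two configurations on the sphere of radius sqrt(N), Eq. (14)
s1 = rng.normal(size=N); s1 *= np.sqrt(N) / np.linalg.norm(s1)
s2 = rng.normal(size=N); s2 *= np.sqrt(N) / np.linalg.norm(s2)

# E[V(s1) V(s2)] = sum over index sets of Var(J_set) * monomial(s1) * monomial(s2)
m1 = np.array([prod(s1[i] for i in s) for s in idx_sets])
m2 = np.array([prod(s2[i] for i in s) for s in idx_sets])
lhs = np.sum(var * m1 * m2)
rhs = J**2 / (p * N ** (p - 1)) * (s1 @ s2) ** p          # Eq. (18)
assert np.isclose(lhs, rhs)
```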
Note that there is a very important difference between $p=2$ and $p>2$: the
former is a Gaussian model, essentially (but for the spherical constraint) a
system of linearly coupled harmonic oscillators. It therefore has
qualitatively different behavior than the $p>2$ models, which are genuinely
interacting and serve as reasonable toy models for rugged energy landscapes.
In this work, we exclusively consider $p>2$.
### 2.2 Schwinger-Keldysh path integral
Figure 4: Summary of the contours, order parameters, and (at least at high
temperature) equations of motion considered in this work. The left column
gives the quantities appropriate to the Schwinger-Keldysh path integral, and
the right column to the spectral form factor (SFF) path integral.
(Top row) Contours for the respective path integrals. Each of the different
branches is labelled, and directions are indicated by arrowheads. Points
connected by dashed lines are identified, making the contours periodic.
(Middle row) Relationship between order parameters of the theory and
observable quantities. $H$ and $Z$ are the $p$-spin Hamiltonian and partition
function respectively. $\mathcal{T}$ and $\widetilde{\mathcal{T}}$ denote time
ordering and anti-ordering.
(Bottom row) Equations of motion. These take the same form for both path
integrals, differing only in the contour $\mathcal{C}$ being used.
Just as other all-to-all models have a saddle-point/mean-field description at
large $N$, so too does the PSM. We start with the disorder-averaged (i.e.,
“annealed”) path integral on the Schwinger-Keldysh contour at inverse
temperature $\beta$, illustrated in the left column of Fig. 4. While it is in
general incorrect (often grossly) to disorder-average the path integral
itself, it is known that the annealed approximation is accurate in the PSM as
long as $\beta$ is less than a critical value $\beta_{s}$ [26]. We shall
assume that this is true throughout. The annealed path integral is
$\displaystyle\mathbb{E}Z_{\textrm{SK}}$
$\displaystyle=\int\mathcal{D}\sigma^{N}\exp\left[\int_{\mathcal{C}}dt\sum_{i}\left(\frac{i\mu}{2}\big{(}\partial_{t}\sigma_{i}(t)\big{)}^{2}-\frac{iz(t)}{2}\big{(}\sigma_{i}(t)^{2}-1\big{)}\right)\right]$
(19) $\displaystyle\qquad\qquad\quad\cdot\int
dP(J)\exp\left[-i\int_{\mathcal{C}}dt\sum_{(i_{1}\cdots i_{p})}J_{i_{1}\cdots
i_{p}}\sigma_{i_{1}}(t)\cdots\sigma_{i_{p}}(t)\right],$
where
$dP(J)\propto\prod_{(i_{1}\cdots i_{p})}dJ_{i_{1}\cdots
i_{p}}\exp\left[-\frac{N^{p-1}C_{i_{1}\cdots i_{p}}J_{i_{1}\cdots
i_{p}}^{2}}{2(p-1)!J^{2}}\right].$ (20)
For brevity, we use $\mathcal{C}$ to denote the entire contour. Thus
$\int_{\mathcal{C}}dt$ indicates a contour integral within the complex-$t$
plane. The Lagrange multiplier $z(t)$ is included to enforce the spherical
constraint. It can be interpreted as a time-dependent harmonic potential whose
value is chosen such that $\sum_{i}\sigma_{i}(t)^{2}=N$ at all times. Thus the
measure $\mathcal{D}\sigma^{N}$ is simply the product measure over each
$\sigma_{i}$ independently. From here, the same manipulations used to get
Schwinger-Dyson equations for the SYK model will give us equations of motion
for the PSM.
One can immediately perform the Gaussian integrals over the couplings to
obtain
$\mathbb{E}Z_{\textrm{SK}}=\int\mathcal{D}\sigma^{N}e^{-NS^{\prime}},$ (21)
where
$\displaystyle NS^{\prime}$
$\displaystyle\equiv\int_{\mathcal{C}}dt\sum_{i}\left(-\frac{i\mu}{2}\big{(}\partial_{t}\sigma_{i}(t)\big{)}^{2}+\frac{iz(t)}{2}\big{(}\sigma_{i}(t)^{2}-1\big{)}\right)$
(22) $\displaystyle\qquad\qquad+\frac{J^{2}(p-1)!}{2C_{i_{1}\cdots
i_{p}}N^{p-1}}\sum_{(i_{1}\cdots
i_{p})}\int_{\mathcal{C}}dtdt^{\prime}\sigma_{i_{1}}(t)\sigma_{i_{1}}(t^{\prime})\cdots\sigma_{i_{p}}(t)\sigma_{i_{p}}(t^{\prime})$
$\displaystyle=\int_{\mathcal{C}}dt\sum_{i}\left(-\frac{i\mu}{2}\big{(}\partial_{t}\sigma_{i}(t)\big{)}^{2}+\frac{iz(t)}{2}\big{(}\sigma_{i}(t)^{2}-1\big{)}\right)+\frac{NJ^{2}}{2p}\int_{\mathcal{C}}dtdt^{\prime}\left(\frac{1}{N}\sum_{i}\sigma_{i}(t)\sigma_{i}(t^{\prime})\right)^{p}.$
Next introduce a “fat unity”,
$\displaystyle 1=$
$\displaystyle\int\mathcal{D}\mathcal{G}\prod_{tt^{\prime}}\delta\Big{(}N\mathcal{G}(t,t^{\prime})-\sum_{i}\sigma_{i}(t)\sigma_{i}(t^{\prime})\Big{)}$
(23) $\displaystyle=$
$\displaystyle\int\mathcal{D}\mathcal{G}\mathcal{D}\mathcal{F}\exp\left[\frac{N}{2}\int_{\mathcal{C}}dtdt^{\prime}\mathcal{F}(t,t^{\prime})\left(\mathcal{G}(t,t^{\prime})-\frac{1}{N}\sum_{i}\sigma_{i}(t)\sigma_{i}(t^{\prime})\right)\right].$
The integral over the self-energy $\mathcal{F}(t,t^{\prime})$ runs along the
imaginary axis, making the second line simply the identity $\int
dpe^{ipx}=2\pi\delta(x)$ (we absorb factors of $2\pi$ into the measure
$\mathcal{D}\mathcal{F}$). However, when we ultimately evaluate the path
integral by saddle point, we shall find that the saddle point value of
$\mathcal{F}(t,t^{\prime})$ is real. Inserting Eq. (23) into the path integral
gives
$\mathbb{E}Z_{\textrm{SK}}=\int\mathcal{D}\mathcal{G}\mathcal{D}\mathcal{F}\int\mathcal{D}\sigma^{N}e^{-NS^{\prime\prime}},$
(24)
where
$\displaystyle NS^{\prime\prime}$
$\displaystyle\equiv-\frac{iN}{2}\int_{\mathcal{C}}dtz(t)+\frac{N}{2}\int_{\mathcal{C}}dtdt^{\prime}\left(\frac{J^{2}}{p}\mathcal{G}(t,t^{\prime})^{p}-\mathcal{F}(t,t^{\prime})\mathcal{G}(t,t^{\prime})\right)$
(25)
$\displaystyle\qquad\qquad+\frac{1}{2}\sum_{i}\left[\int_{\mathcal{C}}dt\left(-i\mu\big{(}\partial_{t}\sigma_{i}(t)\big{)}^{2}+iz(t)\sigma_{i}(t)^{2}\right)+\int_{\mathcal{C}}dtdt^{\prime}\sigma_{i}(t)\mathcal{F}(t,t^{\prime})\sigma_{i}(t^{\prime})\right].$
We can now perform the integral over $\sigma_{i}$, resulting in
$\mathbb{E}Z_{\textrm{SK}}=\int\mathcal{D}\mathcal{G}\mathcal{D}\mathcal{F}e^{-NS_{\textrm{eff}}},$
(26)
where
$S_{\textrm{eff}}\equiv-\frac{i}{2}\int_{\mathcal{C}}dtz(t)+\frac{1}{2}\int_{\mathcal{C}}dtdt^{\prime}\left(\frac{J^{2}}{p}\mathcal{G}(t,t^{\prime})^{p}-\mathcal{F}(t,t^{\prime})\mathcal{G}(t,t^{\prime})\right)+\frac{1}{2}\log{\textrm{Det}}\Big{[}i(\mu\partial_{t}^{2}+z)+\mathcal{F}\Big{]}.$
(27)
At large $N$, the remaining path integral can be evaluated within the saddle
point approximation. The locations of the saddle points are determined by
setting to zero the functional derivatives of Eq. (27):
$\begin{gathered}i\big{(}\mu\partial_{t}^{2}+z(t)\big{)}\mathcal{G}(t,t^{\prime})+\int_{\mathcal{C}}dt^{\prime\prime}\mathcal{F}(t,t^{\prime\prime})\mathcal{G}(t^{\prime\prime},t^{\prime})=\delta(t-t^{\prime}),\\\
\mathcal{F}(t,t^{\prime})=J^{2}\mathcal{G}(t,t^{\prime})^{p-1},\qquad\mathcal{G}(t,t)=1.\end{gathered}$
(28)
Keep in mind that the time arguments in Eq. (28) are complex and range over
the entire Schwinger-Keldysh contour. In particular, although it is hidden in
this compact notation, the infinitesimals $dt$ acquire different phases
depending on the branch of the contour: $dt$ is a positive real infinitesimal
on the upper (“forward”) real-time branch, a negative real infinitesimal on
the lower (“backward”) real-time branch, and a negative imaginary
infinitesimal on the thermal branch.
$\mathcal{G}(t,t^{\prime})$ is the order parameter of this theory. As is clear
from the manner by which it was introduced (top line of Eq. (23)), expectation
values of $\mathcal{G}(t,t^{\prime})$ within the path integral are equivalent
to expectation values of $N^{-1}\sum_{i}\sigma_{i}(t)\sigma_{i}(t^{\prime})$.
The latter are simply time-ordered correlation functions. We shall focus on
the real-time correlation functions, for which it is more transparent to
explicitly indicate the branches by $\alpha\in\\{u,l\\}$ and have $t$ be
simply a real variable. Formally, we have that
$\displaystyle\big{<}\mathcal{G}_{uu}(t,t^{\prime})\big{>}$
$\displaystyle=\mathbb{E}\Big{[}Z_{\textrm{SK}}^{-1}\textrm{Tr}e^{-\beta
H}\mathcal{T}\sigma_{i}(t)\sigma_{i}(t^{\prime})\Big{]},$
$\displaystyle\qquad\big{<}\mathcal{G}_{ul}(t,t^{\prime})\big{>}$
$\displaystyle=\mathbb{E}\Big{[}Z_{\textrm{SK}}^{-1}\textrm{Tr}e^{-\beta
H}\sigma_{i}(t^{\prime})\sigma_{i}(t)\Big{]},$ (29)
$\displaystyle\big{<}\mathcal{G}_{lu}(t,t^{\prime})\big{>}$
$\displaystyle=\mathbb{E}\Big{[}Z_{\textrm{SK}}^{-1}\textrm{Tr}e^{-\beta
H}\sigma_{i}(t)\sigma_{i}(t^{\prime})\Big{]},$
$\displaystyle\qquad\big{<}\mathcal{G}_{ll}(t,t^{\prime})\big{>}$
$\displaystyle=\mathbb{E}\Big{[}Z_{\textrm{SK}}^{-1}\textrm{Tr}e^{-\beta
H}\widetilde{\mathcal{T}}\sigma_{i}(t)\sigma_{i}(t^{\prime})\Big{]},$
where $\mathcal{T}$ denotes time ordering and $\widetilde{\mathcal{T}}$
denotes time anti-ordering. Note that we can omit the sum over $i$ because the
different spins (upon disorder-averaging) have equivalent behavior.
A number of formal properties of
$\mathcal{G}_{\alpha\alpha^{\prime}}(t,t^{\prime})$ are evident from Eq. (29).
For one thing, $\mathcal{G}_{\alpha\alpha^{\prime}}(t,t^{\prime})$ clearly
depends only on the time difference $t-t^{\prime}$, and we shall often write
$\mathcal{G}_{\alpha\alpha^{\prime}}(t)$ with $t^{\prime}=0$. Since the four
components differ only in time ordering, we see that for any function $f(x)$,
$f\Big{(}\mathcal{G}_{uu}(t)\Big{)}+f\Big{(}\mathcal{G}_{ll}(t)\Big{)}=f\Big{(}\mathcal{G}_{ul}(t)\Big{)}+f\Big{(}\mathcal{G}_{lu}(t)\Big{)}.$
(30)
We can further express all four components in terms of a single complex-valued
function (equivalently two real-valued functions). For example, write
$\mathcal{G}_{lu}(t)$ in terms of its real and imaginary parts as
$\mathcal{G}^{R}(t)+i\mathcal{G}^{I}(t)$. Since
$\mathcal{G}_{lu}(t)^{*}=\mathcal{G}_{lu}(-t)$, $\mathcal{G}^{R}(t)$ is even
and $\mathcal{G}^{I}(t)$ is odd. One can easily confirm that
$\displaystyle\mathcal{G}_{uu}(t)$
$\displaystyle=\mathcal{G}^{R}(t)+i\textrm{sgn}[t]\mathcal{G}^{I}(t),$
$\displaystyle\qquad\mathcal{G}_{ul}(t)$
$\displaystyle=\mathcal{G}^{R}(t)-i\mathcal{G}^{I}(t),$ (31)
$\displaystyle\mathcal{G}_{lu}(t)$
$\displaystyle=\mathcal{G}^{R}(t)+i\mathcal{G}^{I}(t),$
$\displaystyle\qquad\mathcal{G}_{ll}(t)$
$\displaystyle=\mathcal{G}^{R}(t)-i\textrm{sgn}[t]\mathcal{G}^{I}(t).$
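The consistency of Eqs. (30) and (31) is easy to check: for $t>0$ the pair $\{\mathcal{G}_{uu},\mathcal{G}_{ll}\}$ coincides with $\{\mathcal{G}_{lu},\mathcal{G}_{ul}\}$, and likewise (with the roles exchanged) for $t<0$, so both sides of Eq. (30) sum $f$ over the same two values. A minimal numerical sketch (ours, with an arbitrary test function):

```python
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: np.exp(0.3 * x) + x**2     # arbitrary test function

for t in rng.uniform(-5, 5, size=10):
    GR, GI = rng.normal(), rng.normal()  # arbitrary real/imaginary parts at this t
    sgn = np.sign(t)
    G_uu = GR + 1j * sgn * GI            # Eq. (31)
    G_ul = GR - 1j * GI
    G_lu = GR + 1j * GI
    G_ll = GR - 1j * sgn * GI
    assert np.isclose(f(G_uu) + f(G_ll), f(G_ul) + f(G_lu))   # Eq. (30)
```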
One of the most important features of
$\mathcal{G}_{\alpha\alpha^{\prime}}(t,t^{\prime})$ is the limiting behavior
at large $|t-t^{\prime}|$, as a function of the inverse temperature $\beta$.
Numerical solution of Eq. (28) demonstrates that there is a critical value
$\beta_{d}$ (which is less than $\beta_{s}$):
* •
For $\beta<\beta_{d}$,
$\lim_{|t-t^{\prime}|\rightarrow\infty}\mathcal{G}_{\alpha\alpha^{\prime}}(t,t^{\prime})=0$.
We call this the “ergodic” phase ($\mathbb{E}\langle\sigma_{i}(t)\rangle=0$ by
symmetry regardless of temperature, and so in this phase
$\mathbb{E}\langle\sigma_{i}(t)\sigma_{i}(t^{\prime})\rangle\rightarrow\mathbb{E}\langle\sigma_{i}(t)\rangle\mathbb{E}\langle\sigma_{i}(t^{\prime})\rangle$).
* •
For $\beta_{d}<\beta<\beta_{s}$,
$\lim_{|t-t^{\prime}|\rightarrow\infty}\mathcal{G}_{\alpha\alpha^{\prime}}(t,t^{\prime})=q_{\textrm{EA}}>0$.
We call this the “non-ergodic” phase. The quantity $q_{\textrm{EA}}$ is
referred to as the “Edwards-Anderson” order parameter.
* •
For $\beta_{s}<\beta$, our initial annealed approximation is no longer valid.
The replica trick is required to obtain accurate results [23, 24], but (at
least for finite-time dynamical properties) the behavior is qualitatively
similar to that of the non-ergodic phase.
### 2.3 TAP equations on the Schwinger-Keldysh contour
The dynamical calculation described above only hints at the complexity of the
non-ergodic phase. A more complete picture emerges from a generalization in
the spirit of the TAP equations. Our treatment follows that of Ref. [66],
which derived TAP equations on the thermal circle for the quantum PSM. While
the extension to real-time dynamics is straightforward, we are not aware of
any explicit calculation in the literature. Thus we present a detailed
derivation of the following equations in App. A.
As discussed in Sec. 1.2, the TAP free energy (or Gibbs potential) is the
Legendre transform of the free energy with respect to local fields. It is
therefore a function of the magnetization $m_{i}$ of each spin. For the free
energy of quantum systems, the magnetization should also have an imaginary
time index $m_{i}(\tau)$. The imaginary-time correlation function
$\mathcal{G}(\tau,\tau^{\prime})$ becomes an additional order parameter.
We define the TAP action on the Schwinger-Keldysh contour analogously. It is a
function of the magnetizations $m_{i}(t)$ and the correlation function
$\mathcal{G}(t,t^{\prime})$, with $t$ again being complex-valued and ranging
over the entire contour. Specifically,
$\displaystyle iNS_{\textrm{TAP}}[m,\mathcal{G}]$
$\displaystyle\equiv\log{\int\mathcal{D}\sigma^{N}\exp\left[i\sum_{i}S_{i}^{0}-i\int_{\mathcal{C}}dt\sum_{(i_{1}\cdots
i_{p})}J_{i_{1}\cdots i_{p}}\sigma_{i_{1}}(t)\cdots\sigma_{i_{p}}(t)\right]}$
(32)
$\displaystyle\qquad\qquad+\frac{iN}{2}\int_{\mathcal{C}}dtz(t)-i\int_{\mathcal{C}}dt\sum_{i}h_{i}(t)m_{i}(t)+\frac{iN}{2}\int_{\mathcal{C}}dtdt^{\prime}\Lambda(t,t^{\prime})\mathcal{G}(t,t^{\prime}),$
where $\mathcal{C}$ denotes the Schwinger-Keldysh contour and
$S_{i}^{0}\equiv\int_{\mathcal{C}}dt\left(\frac{\mu}{2}\big{(}\partial_{t}\sigma_{i}(t)\big{)}^{2}-\frac{z(t)}{2}\sigma_{i}(t)^{2}+h_{i}(t)\sigma_{i}(t)\right)-\frac{1}{2}\int_{\mathcal{C}}dtdt^{\prime}\Lambda(t,t^{\prime})\sigma_{i}(t)\sigma_{i}(t^{\prime}).$
(33)
The fields $h_{i}(t)$ and $\Lambda(t,t^{\prime})$ are not independent
parameters. They are instead chosen so that
$\langle\sigma_{i}(t)\rangle=m_{i}(t)$ and
$N^{-1}\sum_{i}\langle\sigma_{i}(t)\sigma_{i}(t^{\prime})\rangle=\mathcal{G}(t,t^{\prime})$,
just as $z(t)$ is again chosen to enforce
$N^{-1}\sum_{i}\langle\sigma_{i}(t)^{2}\rangle=1$, where the expectation value
is with respect to the action in Eq. (32).
Due to the Legendre-transform structure of $S_{\textrm{TAP}}$, we have that
$N\frac{\partial S_{\textrm{TAP}}}{\partial
m_{i}(t)}=-h_{i}(t),\qquad\frac{\partial
S_{\textrm{TAP}}}{\partial\mathcal{G}(t,t^{\prime})}=\frac{1}{2}\Lambda(t,t^{\prime}).$
(34)
The TAP equations are those for $m_{i}(t)$ and $\mathcal{G}(t,t^{\prime})$
which one gets by setting the right-hand sides of Eq. (34) to zero. The
solutions are therefore the values of magnetization and correlation function
which the system can consistently possess “on its own,” without any external
fields. In this sense, each solution corresponds to a distinct metastable
state. There is no reason why there cannot be many self-consistent solutions,
and indeed, spin glass models such as the PSM do have many at sufficiently low
temperature.
We calculate the TAP equations in App. A. They are simplified by the fact that
we can take $m_{i}(t)=m_{i}$ (time-independent) and $z(t)=z$. We also define $q_{\textrm{EA}}\equiv
N^{-1}\sum_{i}m_{i}^{2}$. The equations come out to be (together with
$\mathcal{G}(t,t)=1$)
$i\big{(}\mu\partial_{t}^{2}+z\big{)}\Big{(}\mathcal{G}(t,t^{\prime})-q_{\textrm{EA}}\Big{)}+J^{2}\int_{\mathcal{C}}dt^{\prime\prime}\Big{(}\mathcal{G}(t,t^{\prime\prime})^{p-1}-q_{\textrm{EA}}^{p-1}\Big{)}\Big{(}\mathcal{G}(t^{\prime\prime},t^{\prime})-q_{\textrm{EA}}\Big{)}=\delta(t-t^{\prime}),$
(35)
$J^{2}\int_{\mathcal{C}}dt^{\prime}\Big{(}\mathcal{G}(t,t^{\prime})^{p-1}-(p-1)q_{\textrm{EA}}^{p-2}\mathcal{G}(t,t^{\prime})+(p-2)q_{\textrm{EA}}^{p-1}\Big{)}m_{i}=-izm_{i}-i\sum_{(i_{1}\cdots
i_{p})}J_{i_{1}\cdots i_{p}}\frac{\partial(m_{i_{1}}\cdots
m_{i_{p}})}{\partial m_{i}}.$ (36)
Note that Eq. (36) is $N$ equations, one for each spin $i$, and that it holds
equally for any value of $t$ due to time translation invariance. Defining
$\mathcal{F}(t,t^{\prime})\equiv J^{2}\mathcal{G}(t,t^{\prime})^{p-1}$, Eq.
(35) is quite similar to Eq. (28). The only difference is that Eq. (35) uses
$\Delta\mathcal{G}(t,t^{\prime})\equiv\mathcal{G}(t,t^{\prime})-q_{\textrm{EA}}$
and
$\Delta\mathcal{F}(t,t^{\prime})\equiv\mathcal{F}(t,t^{\prime})-J^{2}q_{\textrm{EA}}^{p-1}$,
which decay to zero at large $|t-t^{\prime}|$, rather than
$\mathcal{G}(t,t^{\prime})$ and $\mathcal{F}(t,t^{\prime})$ themselves.
Despite the more involved derivation, $\mathcal{G}(t,t^{\prime})$ remains a
contour-ordered expectation value. Thus, returning to the notation in which
$\alpha\in\\{u,l\\}$ labels branches and $t$ is real,
$\mathcal{G}_{\alpha\alpha^{\prime}}(t-t^{\prime})$ possesses the same formal
properties as discussed in the previous subsection (Eqs. (30) and (31)). Of
particular importance will be the Fourier transform of
$\Delta\mathcal{G}_{\alpha\alpha^{\prime}}(t)$ at zero frequency, denoted
$\Delta\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}(0)$, as well as its
(matrix) inverse,
$\Delta\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}^{-1}(0)$. Also define
$L\equiv\int_{-\infty}^{\infty}dt\Delta\mathcal{G}^{R}(t)$ and
$\Lambda\equiv\int_{0}^{\infty}dt\Delta\mathcal{G}^{I}(t)$. Then from Eq.
(31), we see that
$\begin{pmatrix}\Delta\widetilde{\mathcal{G}}_{uu}(0)&\Delta\widetilde{\mathcal{G}}_{ul}(0)\\\
\Delta\widetilde{\mathcal{G}}_{lu}(0)&\Delta\widetilde{\mathcal{G}}_{ll}(0)\end{pmatrix}=\begin{pmatrix}L+2i\Lambda&L\\\
L&L-2i\Lambda\end{pmatrix},$ (37)
$\begin{pmatrix}\Delta\widetilde{\mathcal{G}}_{uu}(0)&\Delta\widetilde{\mathcal{G}}_{ul}(0)\\\
\Delta\widetilde{\mathcal{G}}_{lu}(0)&\Delta\widetilde{\mathcal{G}}_{ll}(0)\end{pmatrix}^{-1}=\frac{1}{4\Lambda^{2}}\begin{pmatrix}L-2i\Lambda&-L\\\
-L&L+2i\Lambda\end{pmatrix}.$ (38)
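Since the determinant of the matrix in Eq. (37) is $(L+2i\Lambda)(L-2i\Lambda)-L^{2}=4\Lambda^{2}$, Eq. (38) follows from the standard $2\times 2$ inverse. A quick symbolic check (ours):

```python
import sympy as sp

L, Lam = sp.symbols('L Lambda', real=True)
M = sp.Matrix([[L + 2*sp.I*Lam, L],
               [L, L - 2*sp.I*Lam]])                       # Eq. (37)
Minv = sp.Rational(1, 4) / Lam**2 * sp.Matrix([[L - 2*sp.I*Lam, -L],
                                               [-L, L + 2*sp.I*Lam]])  # Eq. (38)
assert sp.simplify(M * Minv - sp.eye(2)) == sp.zeros(2, 2)
assert sp.simplify(M.det()) == 4 * Lam**2                  # determinant check
```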
The multiplicity of solutions to the TAP equations comes from Eq. (36). By use
of Eqs. (35), (37), and (38), it can be written (associating $u$ with 0 and
$l$ with 1)
$\left[(-1)^{\alpha}\sum_{\alpha^{\prime}}\Delta\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}^{-1}(0)-(p-1)J^{2}q_{\textrm{EA}}^{p-2}\sum_{\alpha^{\prime}}(-1)^{\alpha^{\prime}}\Delta\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}(0)\right]m_{i}=\left[\frac{1}{2i\Lambda}-(p-1)J^{2}q_{\textrm{EA}}^{p-2}\,2i\Lambda\right]m_{i}=-i\sum_{(i_{1}\cdots i_{p})}J_{i_{1}\cdots i_{p}}\frac{\partial(m_{i_{1}}\cdots m_{i_{p}})}{\partial m_{i}}.$ (39)
Eq. (39) is identical to that which appears and has been well-studied for the
classical PSM [61, 41, 67, 26]. Thus we simply quote the following results. In
addition to the inverse temperature $\beta$, solutions to Eq. (39) are
parametrized by the quantity
$\mathcal{E}\equiv\frac{1}{NJq_{\textrm{EA}}^{p/2}}\sum_{(i_{1}\cdots
i_{p})}J_{i_{1}\cdots i_{p}}m_{i_{1}}\cdots m_{i_{p}},$ (40)
which can be interpreted as a “normalized” potential energy density: each
magnetization has a value which is (very roughly) comparable to
$q_{\textrm{EA}}^{1/2}$, and thus the natural scale for the interaction energy
is $Jq_{\textrm{EA}}^{p/2}$. The value of $q_{\textrm{EA}}$ for a given
$\mathcal{E}<0$ is given by the largest solution to
$-2Jq_{\textrm{EA}}^{p/2-1}\Lambda=\frac{p}{2(p-1)}\left(-\mathcal{E}-\sqrt{\mathcal{E}^{2}-\mathcal{E}_{\textrm{th}}^{2}}\right),\qquad\mathcal{E}_{\textrm{th}}\equiv-\frac{2\sqrt{p-1}}{p},$
(41)
where $\Lambda$ depends on $q_{\textrm{EA}}$ through Eq. (35). One can show
that solutions to Eq. (41) exist only for $\beta>\beta_{d}$, with $\beta_{d}$
the same as defined in Sec. 2.2. Furthermore, Eq. (41) only makes sense if
$\mathcal{E}\leq\mathcal{E}_{\textrm{th}}$. In that case, the number of
solutions $\mathcal{N}(\beta,\mathcal{E})$ to Eq. (39) — in addition to the
trivial solution $m_{i}=0$ — is exponential in system size:
$N^{-1}\log{\mathcal{N}(\beta,\mathcal{E})}\sim\Sigma(\mathcal{E})$, with
$\Sigma(\mathcal{E})=\frac{1}{2}\left(1+2\log{\frac{p}{2}}\right)-\frac{p\mathcal{E}^{2}}{2}+\frac{p^{2}}{8(p-1)}\Big{(}\mathcal{E}+\sqrt{\mathcal{E}^{2}-\mathcal{E}_{\textrm{th}}^{2}}\Big{)}^{2}+\log{\Big{(}-\mathcal{E}+\sqrt{\mathcal{E}^{2}-\mathcal{E}_{\textrm{th}}^{2}}\Big{)}}.$
(42)
(Footnote: As written, Eq. (42) is a bit sloppy. $\mathcal{N}(\mathcal{E})$ is given by Eq. (42) when the latter is non-negative and $\beta$ is such that solutions to Eq. (41) exist. In all other cases, $\mathcal{N}(\mathcal{E})=0$.)
The exponent $\Sigma(\mathcal{E})$ is referred to as the “complexity” in the
spin glass literature.
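For orientation, a small sketch (ours) evaluating the complexity of Eq. (42) together with the threshold energy $\mathcal{E}_{\textrm{th}}$ of Eq. (41), here for $p=3$; $\Sigma$ is positive just below $\mathcal{E}_{\textrm{th}}$ and turns negative at sufficiently low $\mathcal{E}$, where TAP states cease to exist:

```python
import numpy as np

def complexity(E, p):
    """Sigma(E) of Eq. (42), defined for E <= E_th (so the square root is real)."""
    E_th = -2 * np.sqrt(p - 1) / p            # threshold energy, Eq. (41)
    root = np.sqrt(E**2 - E_th**2)
    return (0.5 * (1 + 2 * np.log(p / 2)) - p * E**2 / 2
            + p**2 / (8 * (p - 1)) * (E + root)**2
            + np.log(-E + root))

p = 3
E_th = -2 * np.sqrt(p - 1) / p
Es = np.linspace(-1.2, E_th, 400)
Sigma = complexity(Es, p)
# The number of TAP states is ~ exp(N * Sigma) wherever Sigma >= 0.
print(f"E_th = {E_th:.4f}, max Sigma = {Sigma.max():.4f} at E = {Es[Sigma.argmax()]:.4f}")
```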
The connection between this TAP approach and the conventional Schwinger-
Keldysh path integral lies in the fact that: i) the inverse temperature
$\beta_{d}$ at which TAP states with non-zero magnetization appear is
identical to that at which the autocorrelation function acquires a non-zero
late-time limit; ii) the overlap determined by Eq. (41) is identical to the
late-time value of the autocorrelation function. This strongly suggests the
following picture:
* •
For $\beta<\beta_{d}$ (the “ergodic” phase), there exists a single equilibrium
state with zero magnetization, and the correlation function decays to zero on
a finite timescale.
* •
For $\beta_{d}<\beta$ (the “non-ergodic” phase), there exist exponentially
many metastable states having non-zero magnetization. The number of states is
given by the exponential of the complexity $\Sigma(\mathcal{E})$. Dynamically,
in the $N\rightarrow\infty$ limit, a system prepared in one metastable state
will remain in that state for all time. At finite $N$, it is only on a
timescale exponential in $N$ that the system can transition between states.
Much more can be said about these phases (in particular how the replica-
symmetry-breaking transition at $\beta_{s}$ appears within the TAP approach),
and a large body of literature is devoted to this topic. We refer in
particular to Ref. [26] as an excellent starting point.
In the present work, we determine the spectral statistics of the PSM in both
the ergodic and non-ergodic phase. Those of the former can be computed very
much along the lines of Ref. [29], which we do in Sec. 3. Those of the latter,
however, require novel calculations which we present in Sec. 4.
Unsurprisingly, the properties of TAP states shall play an essential role.
## 3 The semiclassical ramp in the ergodic phase
To reiterate, we are evaluating
$\textrm{SFF}(T,f)\equiv\mathbb{E}\big{|}\textrm{Tr}f(H)e^{-iHT}\big{|}^{2}=\mathbb{E}\Big{[}\textrm{Tr}f(H)e^{-iHT}\textrm{Tr}f(H)e^{iHT}\Big{]},$
(43)
where $H$ is the PSM Hamiltonian (Eq. (16)) and $f$ is a filter function as
discussed in Sec. 1.1. Here we consider the ergodic phase, for which the
results are analogous to those of SYK [29]. We then consider the non-ergodic
phase in Sec. 4.
### 3.1 Effective action
The calculation begins by retracing the steps described in Sec. 2.2, only on a
modified contour. We still have upper and lower branches indicated by
$\alpha\in\\{u,l\\}$ (with $u=0$ and $l=1$), but now each is separately
periodic. Furthermore, we no longer have a thermal branch. See the right
column of Fig. 4, as compared to the left column. While some care is required
to account for the filter functions (as discussed in Appendix C), we
ultimately arrive at an expression analogous to Eq. (27):
$\textrm{SFF}(T,f)=\int\mathcal{D}G\mathcal{D}F\,f\big{(}\epsilon_{u}[G]\big{)}f\big{(}\epsilon_{l}[G]\big{)}e^{-NS_{\textrm{eff}}[G,F]},$
(44)
$S_{\textrm{eff}}[G,F]=-\frac{i}{2}\int_{0}^{T}dt\sum_{\alpha}(-1)^{\alpha}z_{\alpha}(t)+\frac{1}{2}\int_{0}^{T}dtdt^{\prime}\sum_{\alpha\alpha^{\prime}}(-1)^{\alpha+\alpha^{\prime}}\left(\frac{J^{2}}{p}G_{\alpha\alpha^{\prime}}(t,t^{\prime})^{p}-F_{\alpha\alpha^{\prime}}(t,t^{\prime})G_{\alpha\alpha^{\prime}}(t,t^{\prime})\right)+\frac{1}{2}\log{\textrm{Det}}\Big{[}i(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}}\big{(}\mu\partial_{t}^{2}+z_{\alpha}\big{)}+(-1)^{\alpha+\alpha^{\prime}}F_{\alpha\alpha^{\prime}}\Big{]},$
(45)
where the “energy density” $\epsilon_{\alpha}[G]$ is defined as ($0^{+}$
denotes a positive infinitesimal)
$\epsilon_{\alpha}[G]\equiv-\frac{\mu}{2}\partial_{t}^{2}G_{\alpha\alpha}(0^{+},0)-\frac{iJ^{2}}{p}\int_{0}^{T}dt\sum_{\alpha^{\prime}}(-1)^{\alpha^{\prime}}G_{\alpha\alpha^{\prime}}(t,0)^{p}.$
(46)
See App. C for details. The saddle point of $S_{\textrm{eff}}$ is given by the
equations (compare to Eq. (28))
$\begin{gathered}i\big{(}\mu\partial_{t}^{2}+z_{\alpha}(t)\big{)}G_{\alpha\alpha^{\prime}}(t,t^{\prime})+\int_{0}^{T}dt^{\prime\prime}\sum_{\alpha^{\prime\prime}}(-1)^{\alpha^{\prime\prime}}F_{\alpha\alpha^{\prime\prime}}(t,t^{\prime\prime})G_{\alpha^{\prime\prime}\alpha^{\prime}}(t^{\prime\prime},t^{\prime})=(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}}\delta(t-t^{\prime}),\\\
F_{\alpha\alpha^{\prime}}(t,t^{\prime})=J^{2}G_{\alpha\alpha^{\prime}}(t,t^{\prime})^{p-1},\qquad
G_{\alpha\alpha}(t,t)=1.\end{gathered}$ (47)
Denoting averages with respect to the path integral of Eq. (44) by
$\langle\,\cdot\,\rangle$, the expectation value of $G$ is related to the
original degrees of freedom as follows (we omit the filter functions here for
brevity):
$\displaystyle\big{<}G_{uu}(t,t^{\prime})\big{>}$
$\displaystyle=\mathbb{E}\Big{[}\textrm{Tr}e^{-iHT}\mathcal{T}\sigma_{i}(t)\sigma_{i}(t^{\prime})\textrm{Tr}e^{iHT}\Big{]},$
$\displaystyle\qquad\big{<}G_{ul}(t,t^{\prime})\big{>}$
$\displaystyle=\mathbb{E}\Big{[}\textrm{Tr}e^{-iHT}\sigma_{i}(t)\textrm{Tr}e^{iHT}\sigma_{i}(t^{\prime})\Big{]},$
(48) $\displaystyle\big{<}G_{lu}(t,t^{\prime})\big{>}$
$\displaystyle=\mathbb{E}\Big{[}\textrm{Tr}e^{-iHT}\sigma_{i}(t^{\prime})\textrm{Tr}e^{iHT}\sigma_{i}(t)\Big{]},$
$\displaystyle\qquad\big{<}G_{ll}(t,t^{\prime})\big{>}$
$\displaystyle=\mathbb{E}\Big{[}\textrm{Tr}e^{-iHT}\textrm{Tr}e^{iHT}\widetilde{\mathcal{T}}\sigma_{i}(t)\sigma_{i}(t^{\prime})\Big{]},$
where $\mathcal{T}$ denotes time ordering and $\widetilde{\mathcal{T}}$
denotes time anti-ordering. One immediately sees from Eq. (48) that:
1. i)
all components of $\langle G_{\alpha\alpha^{\prime}}(t,t^{\prime})\rangle$ are
time-translation invariant and have period $T$;
2. ii)
$\langle G_{uu}(t,t^{\prime})\rangle$ and $\langle
G_{ll}(t,t^{\prime})\rangle$ are even functions of $t-t^{\prime}$;
3. iii)
$\langle G_{ul}(t,t^{\prime})\rangle$ and $\langle
G_{lu}(t,t^{\prime})\rangle$ are in fact independent of both time arguments;
4. iv)
$\langle G_{uu}(t,t^{\prime})\rangle^{*}=\langle G_{ll}(t,t^{\prime})\rangle$;
5. v)
$\langle G_{ul}(t,t^{\prime})\rangle^{*}=\langle G_{lu}(t,t^{\prime})\rangle$.
Solutions to Eq. (47) do not necessarily share all these properties, since
some of the symmetries may be spontaneously broken.
However, one simple solution that obeys all of the above is to take
$G_{ul}(t,t^{\prime})=G_{lu}(t,t^{\prime})=0$. The resulting action is
precisely what one would get from averaging each factor of
$\textrm{Tr}e^{-iHT}$ separately, i.e., this solution gives the disconnected
contribution to the SFF:
$\mathbb{E}\Big{[}\textrm{Tr}f(H)e^{-iHT}\textrm{Tr}f(H)e^{iHT}\Big{]}=\mathbb{E}\Big{[}\textrm{Tr}f(H)e^{-iHT}\Big{]}\mathbb{E}\Big{[}\textrm{Tr}f(H)e^{iHT}\Big{]}+\cdots,$
(49)
where $\cdots$ denotes the contribution to the path integral from non-zero
$G_{ul}$ and/or $G_{lu}$. Eq. (49) holds equally well in the non-ergodic
phase, and thus the remainder of this paper will be concerned with determining
those additional contributions.
### 3.2 Connected solutions
Following Ref. [29], we construct approximate solutions to Eq. (47) which
become accurate at large $T$. We first present the solutions and justify them
afterwards. Take $\mathcal{G}_{\alpha\alpha^{\prime}}(t,t^{\prime})$ to be the
Schwinger-Keldysh correlation function at inverse temperature
$\beta_{\textrm{aux}}$, exactly as given in Sec. 2.2 (Eq. (29) in particular).
Again define $\mathcal{F}_{\alpha\alpha^{\prime}}(t,t^{\prime})\equiv
J^{2}\mathcal{G}_{\alpha\alpha^{\prime}}(t,t^{\prime})^{p-1}$. A solution to
the SFF saddle point equations (up to terms which vanish at large $T$) is
$G_{\alpha\alpha^{\prime}}(t,t^{\prime})=\sum_{n=-\infty}^{\infty}\mathcal{G}_{\alpha\alpha^{\prime}}(t-t^{\prime}+\delta_{\alpha\neq\alpha^{\prime}}\Delta+nT),$
(50)
$F_{\alpha\alpha^{\prime}}(t,t^{\prime})=\sum_{n=-\infty}^{\infty}\mathcal{F}_{\alpha\alpha^{\prime}}(t-t^{\prime}+\delta_{\alpha\neq\alpha^{\prime}}\Delta+nT).$
(51)
Here $\Delta$ can be any real number between 0 and $T$. Thus Eqs. (50) and
(51) constitute a two-parameter family of solutions, the parameters being
$\beta_{\textrm{aux}}$ and $\Delta$. Every such solution contributes to the
SFF.
As for the Lagrange multipliers $z_{\alpha}(t)$, they are independent of $t$
due to time translation invariance. We further have that $z_{u}=z_{l}\equiv
z$: both equal the value of the chemical potential needed to satisfy the
equilibrium spherical constraint, i.e.,
$N^{-1}\sum_{i}\textrm{Tr}Z_{\textrm{SK}}^{-1}e^{-\beta_{\textrm{aux}}H}\sigma_{i}^{2}=1$
(time translation invariance then implies that
$\mathcal{G}_{\alpha\alpha}(t,t)=1$ for all times and both branches).
To justify Eqs. (50) and (51), it is essential that
$\mathcal{G}_{\alpha\alpha^{\prime}}(t-t^{\prime})$ decay exponentially to
zero as $|t-t^{\prime}|\rightarrow\infty$. Thus these solutions only apply in
the ergodic phase. With this in mind, the following comments together
establish their validity:
* •
The sum over $n$ ensures that $G_{\alpha\alpha^{\prime}}(t-t^{\prime})$ has
period $T$, even though $\mathcal{G}_{\alpha\alpha^{\prime}}(t-t^{\prime})$
does not.
* •
Since $\mathcal{G}_{\alpha\alpha}(t-t^{\prime})$ decays exponentially,
$G_{\alpha\alpha}(0)\sim 1$ up to terms which are exponentially small in $T$.
* •
The equation
$F_{\alpha\alpha^{\prime}}(t,t^{\prime})=J^{2}G_{\alpha\alpha^{\prime}}(t,t^{\prime})^{p-1}$
is satisfied up to exponentially small terms because, when raising Eq. (50) to
the $(p-1)$’th power, all cross terms are exponentially small (as is the sum
over them). In other words,
$\left(\sum_{n}\mathcal{G}_{\alpha\alpha^{\prime}}(t-t^{\prime}+nT+\delta_{\alpha\neq\alpha^{\prime}}\Delta)\right)^{p-1}\sim\sum_{n}\mathcal{G}_{\alpha\alpha^{\prime}}(t-t^{\prime}+nT+\delta_{\alpha\neq\alpha^{\prime}}\Delta)^{p-1}.$
(52)
* •
$\mathcal{G}_{\alpha\alpha^{\prime}}(t,t^{\prime})$ obeys Eq. (28), written
explicitly in terms of components as
$\displaystyle
i\big{(}\mu\partial_{t}^{2}+z\big{)}\mathcal{G}_{\alpha\alpha^{\prime}}(t-t^{\prime})+\int_{0}^{\infty}dt^{\prime\prime}\sum_{\alpha^{\prime\prime}}(-1)^{\alpha^{\prime\prime}}\mathcal{F}_{\alpha\alpha^{\prime\prime}}(t-t^{\prime\prime})\mathcal{G}_{\alpha^{\prime\prime}\alpha^{\prime}}(t^{\prime\prime}-t^{\prime})$
(53)
$\displaystyle\qquad\qquad\qquad\qquad\;\;\;-i\int_{0}^{\beta_{\textrm{aux}}}d\tau^{\prime\prime}\mathcal{F}_{\alpha
v}(t+i\tau^{\prime\prime})\mathcal{G}_{v\alpha^{\prime}}(-i\tau^{\prime\prime}-t^{\prime})=(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}}\delta(t-t^{\prime}),$
where $v$ denotes the thermal branch of the contour. For $t,t^{\prime}\gg 1$
(which still allows $t-t^{\prime}$ to take any value), $\mathcal{G}_{\alpha
v}(t+i\tau)$ is exponentially small for all $\tau$ and the last term on the
left-hand side can be neglected. We can also take the lower limit of the
$t^{\prime\prime}$ integral to $-\infty$. Thus when checking whether Eq. (50)
satisfies Eq. (47), we have that
$\displaystyle
i\big{(}\mu\partial_{t}^{2}+z\big{)}G_{\alpha\alpha^{\prime}}(t-t^{\prime})+\int_{0}^{T}dt^{\prime\prime}\sum_{\alpha^{\prime\prime}}(-1)^{\alpha^{\prime\prime}}F_{\alpha\alpha^{\prime\prime}}(t-t^{\prime\prime})G_{\alpha^{\prime\prime}\alpha^{\prime}}(t^{\prime\prime}-t^{\prime})$
(54) $\displaystyle\;\;\sim
i\big{(}\mu\partial_{t}^{2}+z\big{)}\mathcal{G}_{\alpha\alpha^{\prime}}(t-t^{\prime})+\int_{-\infty}^{\infty}dt^{\prime\prime}\sum_{\alpha^{\prime\prime}}(-1)^{\alpha^{\prime\prime}}\mathcal{F}_{\alpha\alpha^{\prime\prime}}(t-t^{\prime\prime})\mathcal{G}_{\alpha^{\prime\prime}\alpha^{\prime}}(t^{\prime\prime}-t^{\prime})$
$\displaystyle\;\;\sim(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}}\delta(t-t^{\prime}),$
again making use of the fact that
$\mathcal{G}_{\alpha\alpha^{\prime}}(t-t^{\prime})$ is exponentially small
when $|t-t^{\prime}|$ is large. The equation is indeed satisfied.
* •
Finally, the off-diagonal components $G_{ul}(t,t^{\prime})$ and
$G_{lu}(t,t^{\prime})$ contain the parameter $\Delta$ because they break the
separate time translation symmetries in $t$ and $t^{\prime}$ (see property iii
above). Thus if any choice of $\Delta$ solves Eq. (47), so do all choices of
$\Delta\in[0,T)$.
As noted above, we have thus identified a two-parameter family of solutions to
the SFF saddle point equations. In what follows it will be more convenient to
parametrize the solutions by the equilibrium energy density
$\epsilon(\beta_{\textrm{aux}})$ corresponding to inverse temperature
$\beta_{\textrm{aux}}$. We can express $\epsilon(\beta)$ in terms of
$\mathcal{G}$ (and thus $G$) by inserting a factor of $H$ into the Schwinger-
Keldysh contour. Since $H$ clearly commutes with the evolution operator
$e^{-\beta H}e^{iHt}e^{-iHt}$, it can be inserted at any point, in particular
at a late time for which (again because
$\mathcal{G}_{\alpha\alpha^{\prime}}(t-t^{\prime})$ decays exponentially) the
thermal branch can be neglected. By following the same steps as in Appendix C,
we find that $\epsilon(\beta)$ is given precisely by Eq. (46), evaluated on
either branch:
$\displaystyle\epsilon=$
$\displaystyle-\frac{\mu}{2}\partial_{t}^{2}\mathcal{G}_{uu}(0^{+})-\frac{iJ^{2}}{p}\int_{-\infty}^{\infty}dt\Big{(}\mathcal{G}_{uu}(t)^{p}-\mathcal{G}_{ul}(t)^{p}\Big{)}$
(55) $\displaystyle=$
$\displaystyle-\frac{\mu}{2}\partial_{t}^{2}\mathcal{G}_{ll}(0^{+})+\frac{iJ^{2}}{p}\int_{-\infty}^{\infty}dt\Big{(}\mathcal{G}_{ll}(t)^{p}-\mathcal{G}_{lu}(t)^{p}\Big{)}.$
### 3.3 Contribution of connected solutions
Having demonstrated that Eqs. (50) and (51) solve the SFF saddle point
equations, it remains to calculate the action (Eq. (45)) evaluated at the
solutions. First note that, since each solution obeys Eq. (47), we can rewrite
the action as
$S_{\textrm{eff}}=-\frac{J^{2}(p-1)T}{2p}\int_{0}^{T}dt\sum_{\alpha\alpha^{\prime}}(-1)^{\alpha+\alpha^{\prime}}G_{\alpha\alpha^{\prime}}(t)^{p}-\frac{1}{2}\sum_{\omega}\log{\textrm{Det}}\widetilde{G}_{\alpha\alpha^{\prime}}(\omega),$
(56)
where $\omega\in 2\pi\mathbb{Z}/T$ and
$\widetilde{G}_{\alpha\alpha^{\prime}}(\omega)\equiv\int_{0}^{T}dte^{i\omega
t}G_{\alpha\alpha^{\prime}}(t)$. Note that the Lagrange multiplier terms have
dropped out since $z_{u}=z_{l}$. Furthermore, since $\int
dtG_{\alpha\alpha^{\prime}}(t)^{p}\sim\int
dt\mathcal{G}_{\alpha\alpha^{\prime}}(t)^{p}$, the general relation in Eq.
(30) implies that the first term of Eq. (56) in fact vanishes.
For the second term, note that by Eq. (50),
$\widetilde{G}_{\alpha\alpha^{\prime}}(\omega)=e^{-i\delta_{\alpha\neq\alpha^{\prime}}\omega\Delta}\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}(\omega),\qquad\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}(\omega)\equiv\int_{-\infty}^{\infty}dte^{i\omega
t}\mathcal{G}_{\alpha\alpha^{\prime}}(t).$ (57)
The exponential decay of $\mathcal{G}_{\alpha\alpha^{\prime}}(t)$ implies that
$\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}(\omega)$ (and thus
$\widetilde{G}_{\alpha\alpha^{\prime}}(\omega)$) is an infinitely
differentiable function of $\omega$. Strictly speaking, since the path
integral is regularized by a timestep $\Delta t\rightarrow 0$,
$\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}(\omega)$ is furthermore
periodic with period $2\pi/\Delta t$. The same is true of
$\log{\textrm{Det}}\widetilde{G}_{\alpha\alpha^{\prime}}(\omega)$. Thus the
Euler-Maclaurin formula [68] gives
$\sum_{n=-T/2\Delta t}^{T/2\Delta t}\log{\textrm{Det}}\widetilde{G}_{\alpha\alpha^{\prime}}\left(\frac{2\pi n}{T}\right)\sim\frac{T}{2\pi}\int_{-\pi/\Delta t}^{\pi/\Delta t}d\omega\log{\textrm{Det}}\widetilde{G}_{\alpha\alpha^{\prime}}(\omega)\rightarrow\frac{T}{2\pi}\int_{-\infty}^{\infty}d\omega\log{\textrm{Det}}\widetilde{G}_{\alpha\alpha^{\prime}}(\omega),$
(58)
up to terms which vanish faster than any polynomial in $T^{-1}$. Thus
$S_{\textrm{eff}}$ is proportional to $T$, and we only need to evaluate the
proportionality constant.
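The sum-to-integral step in Eq. (58) is easy to check directly. In the sketch below, $\log{\textrm{Det}}\widetilde{G}(\omega)$ is replaced by a Gaussian stand-in (an illustrative assumption; the only features that matter are smoothness and rapid decay):

```python
import math

# Gaussian stand-in for log Det G~(omega); smooth and rapidly decaying, like
# the true integrand (the specific form is an illustrative assumption).
def h(w):
    return math.exp(-w * w)

exact_integral = math.sqrt(math.pi)   # integral of h over the whole real line

for T in [2.0, 4.0, 6.0, 8.0, 12.0]:
    freq_sum = sum(h(2 * math.pi * n / T) for n in range(-200, 201))
    err = freq_sum - (T / (2 * math.pi)) * exact_integral
    print(f"T = {T:4.1f}   sum - (T/2pi) * integral = {err: .3e}")

# the discrepancy decays like e^{-T^2/4}, faster than any power of 1/T,
# in line with the Euler-Maclaurin estimate of Eq. (58)
```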
Rather than calculate the integral directly, we follow Ref. [29] and evaluate
the derivative $dS_{\textrm{eff}}/dT$ starting from Eq. (45). It is convenient
to rescale time as $t\rightarrow Tt$, so that $T$ becomes simply another
parameter:
$\displaystyle S_{\textrm{eff}}$
$\displaystyle=\frac{T^{2}}{2}\int_{0}^{1}dtdt^{\prime}\sum_{\alpha\alpha^{\prime}}(-1)^{\alpha+\alpha^{\prime}}\left(\frac{J^{2}}{p}G_{\alpha\alpha^{\prime}}(t,t^{\prime})^{p}-F_{\alpha\alpha^{\prime}}(t,t^{\prime})G_{\alpha\alpha^{\prime}}(t,t^{\prime})\right)$
(59)
$\displaystyle\qquad\qquad+\frac{1}{2}\log{\textrm{Det}}\Big{[}i(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}}\big{(}\mu
T^{-2}\partial_{t}^{2}+z_{\alpha}\big{)}+(-1)^{\alpha+\alpha^{\prime}}F_{\alpha\alpha^{\prime}}\Big{]}.$
Note that, since $S_{\textrm{eff}}$ is evaluated at a solution of the saddle
point equations, we only need to differentiate the explicit factors of $T$:
$\displaystyle\frac{dS_{\textrm{eff}}}{dT}=$
$\displaystyle\;T\int_{0}^{1}dtdt^{\prime}\sum_{\alpha\alpha^{\prime}}(-1)^{\alpha+\alpha^{\prime}}\left(\frac{J^{2}}{p}G_{\alpha\alpha^{\prime}}(t,t^{\prime})^{p}-F_{\alpha\alpha^{\prime}}(t,t^{\prime})G_{\alpha\alpha^{\prime}}(t,t^{\prime})\right)$
(60)
$\displaystyle\qquad\qquad-\frac{i\mu}{T^{3}}\int_{0}^{1}dt\sum_{\alpha}(-1)^{\alpha}\partial_{t}^{2}\Big{[}i(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}}\big{(}\mu
T^{-2}\partial_{t}^{2}+z_{\alpha}(t)\big{)}+(-1)^{\alpha+\alpha^{\prime}}F_{\alpha\alpha^{\prime}}\Big{]}^{-1}\bigg{|}_{\alpha=\alpha^{\prime},t=t^{\prime+}}.$
Returning to unscaled time and using Eq. (47), we have that
$\frac{dS_{\textrm{eff}}}{dT}=-\frac{(p-1)J^{2}}{p}\int_{0}^{T}dt\sum_{\alpha\alpha^{\prime}}(-1)^{\alpha+\alpha^{\prime}}G_{\alpha\alpha^{\prime}}(t)^{p}-\frac{i\mu}{T}\sum_{\alpha}(-1)^{\alpha}\partial_{t}^{2}G_{\alpha\alpha}(0^{+})=0,$
(61)
again using Eqs. (30) and (55). Thus the proportionality constant is in fact
zero, i.e., $S_{\textrm{eff}}=0$.
### 3.4 Evaluation of the SFF
To finally compute the SFF, we simply need to sum over all connected
solutions, i.e., integrate over $\epsilon_{\textrm{aux}}$ and $\Delta$.
However, there are additional discrete symmetries which give further
solutions: i) we can time-reverse the off-diagonal components, i.e., take
$G_{ul}(t)=\mathcal{G}_{ul}(-t)$ and $G_{lu}(t)=\mathcal{G}_{lu}(-t)$; ii) if
$p$ is even, we can take $G_{ul}(t)=-\mathcal{G}_{ul}(t)$ and
$G_{lu}(t)=-\mathcal{G}_{lu}(t)$. These must be summed over as well, giving an
additional factor of $2(1+\delta_{p\textrm{ even}})$, where $\delta_{p\textrm{
even}}$ is the indicator function on $p$ being even (1 if true, 0 if false).
Thus our final expression is
$\displaystyle\mathbb{E}\Big{[}\textrm{Tr}f(H)e^{-iHT}\textrm{Tr}f(H)e^{iHT}\Big{]}$
$\displaystyle\sim\left|\mathbb{E}\textrm{Tr}f(H)e^{-iHT}\right|^{2}+\int\frac{d\epsilon_{\textrm{aux}}}{2\pi}f(\epsilon_{\textrm{aux}})^{2}\int_{0}^{T}d\Delta\,2\big{(}1+\delta_{p\textrm{
even}}\big{)}e^{0}$ (62)
$\displaystyle=\left|\mathbb{E}\textrm{Tr}f(H)e^{-iHT}\right|^{2}+2\big{(}1+\delta_{p\textrm{
even}}\big{)}T\int\frac{d\epsilon_{\textrm{aux}}}{2\pi}f(\epsilon_{\textrm{aux}})^{2}.$
The measure $1/2\pi$ can be derived using hydrodynamic methods [19, 29], but
its precise value is not essential for our purposes. The key feature is simply
that the linear-in-$T$ ramp has emerged.
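For concreteness, here is a minimal sketch of the connected term in Eq. (62) for a hypothetical Gaussian filter function (all parameter values are assumptions for illustration; the center $\epsilon_{0}$ drops out of $\int f^{2}$):

```python
import math

# Connected piece of Eq. (62) for an assumed Gaussian filter
# f(eps) = exp(-(eps - eps0)^2 / 4 w^2), so int f(eps)^2 d(eps) = w sqrt(2 pi).
eps0, w, p = -1.6, 0.05, 3
f2_integral = w * math.sqrt(2 * math.pi)
prefactor = 2 * (1 + (1 if p % 2 == 0 else 0))   # 2 (1 + delta_{p even})

for T in [10, 100, 1000]:
    connected = prefactor * T * f2_integral / (2 * math.pi)
    print(f"T = {T:5d}   connected SFF = {connected:.4f}")

# doubling T doubles the connected piece: the linear ramp
```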
However, keep in mind that Eq. (62) is only valid if the filter function is
such that all contributing values of $\epsilon_{\textrm{aux}}$ lie in the
ergodic phase. In the following section we modify this analysis to hold in the
non-ergodic phase as well. We shall see that it is necessary to incorporate
the structure of multiple TAP states.
## 4 The semiclassical ramp in the non-ergodic phase
As we have stressed repeatedly, the results of Sec. 3 rely heavily on having
an equilibrium correlation function which decays to zero at late times. Thus a
new approach is needed to calculate the SFF in the non-ergodic phase, where
$\mathcal{G}_{\alpha\alpha^{\prime}}(t-t^{\prime})\rightarrow
q_{\textrm{EA}}\neq 0$ as $|t-t^{\prime}|\rightarrow\infty$. More
specifically, we can no longer neglect the integral over the thermal branch in
Eq. (28), and $\mathcal{G}_{\alpha\alpha^{\prime}}(t-t^{\prime})$ no longer
solves the SFF equations of motion (Eq. (47)).
However, in the TAP equations of motion, Eq. (35), we can neglect the thermal
branch since $\mathcal{G}(t)-q_{\textrm{EA}}$ does decay to zero exponentially
quickly. This suggests that a viable strategy is to construct solutions for
the SFF using the TAP correlation function. Since TAP states are parametrized
by the quantity $\mathcal{E}$ in Eq. (40), it will be necessary to first
modify the SFF path integral so as to involve $\mathcal{E}$. We associate the
magnetizations and overlap from the TAP approach with time-averaged functions
of the spin configuration, namely
$m_{i}[\sigma]\equiv\frac{1}{T}\int_{0}^{T}dt\sigma_{iu}(t),\qquad
q[\sigma]\equiv\frac{1}{T^{2}}\int_{0}^{T}dtdt^{\prime}\frac{1}{N}\sum_{i}\sigma_{iu}(t)\sigma_{iu}(t^{\prime}).$
(63)
The choice to use only the upper contour in defining $m_{i}[\sigma]$ and
$q[\sigma]$ will become convenient in Sec. 5, but for now one could equally
well use any other combination of branches, say the average of $\sigma_{i}(t)$
over the lower branch or over both branches symmetrically. With these
definitions, we introduce $\mathcal{E}$ via Eq. (40).
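Note that the double time integral in Eq. (63) factorizes, so $q[\sigma]=N^{-1}\sum_{i}m_{i}[\sigma]^{2}$ identically. A short sketch with discretized, randomly generated stand-in trajectories (an assumption; nothing below depends on the actual dynamics) makes the definitions concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized trajectories sigma_{iu}(t) on the upper contour; random
# stand-in data, purely for illustration.
N, n_t, T = 8, 500, 50.0
dt = T / n_t
sigma_u = rng.normal(size=(N, n_t))

# Eq. (63), discretized
m = sigma_u.sum(axis=1) * dt / T                                  # m_i[sigma]
q = np.einsum('it,is->', sigma_u, sigma_u) * dt**2 / (T**2 * N)   # q[sigma]

# the double time integral factorizes, so q[sigma] = (1/N) sum_i m_i^2
assert np.isclose(q, np.mean(m**2))
print("q[sigma] =", q)
```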
### 4.1 Effective action
To begin, insert an additional fat unity into the path integral:
$\displaystyle 1$ $\displaystyle=\int
d\mathcal{E}_{\textrm{aux}}\delta\left[N\mathcal{E}_{\textrm{aux}}-\frac{1}{Jq[\sigma]^{p/2}}\sum_{(i_{1}\cdots
i_{p})}J_{i_{1}\cdots
i_{p}}\left(\frac{1}{T}\int_{0}^{T}dt\sigma_{i_{1}u}(t)\right)\cdots\left(\frac{1}{T}\int_{0}^{T}dt\sigma_{i_{p}u}(t)\right)\right].$
(64)
With this addition, the full path integral is
$\displaystyle\textrm{SFF}=\int\mathcal{D}P(J)\mathcal{D}\sigma^{N}d\mathcal{E}_{\textrm{aux}}d\lambda\exp\left[\frac{i}{2}\sum_{i}\int_{0}^{T}dt\sum_{\alpha}(-1)^{\alpha}\Big{[}\mu\big{(}\partial_{t}\sigma_{i\alpha}(t)\big{)}^{2}-z_{\alpha}(t)\big{(}\sigma_{i\alpha}(t)^{2}-1\big{)}\Big{]}\right]$
(65) $\displaystyle\qquad\cdot\exp\left[-i\sum_{(i_{1}\cdots
i_{p})}J_{i_{1}\cdots
i_{p}}\int_{0}^{T}dt\sum_{\alpha}(-1)^{\alpha}\sigma_{i_{1}\alpha}(t)\cdots\sigma_{i_{p}\alpha}(t)\right]$
$\displaystyle\qquad\qquad\cdot\exp\left[iN\lambda\mathcal{E}_{\textrm{aux}}-\frac{i\lambda}{Jq[\sigma]^{p/2}}\sum_{(i_{1}\cdots
i_{p})}J_{i_{1}\cdots
i_{p}}\left(\frac{1}{T}\int_{0}^{T}dt\sigma_{i_{1}u}(t)\right)\cdots\left(\frac{1}{T}\int_{0}^{T}dt\sigma_{i_{p}u}(t)\right)\right].$
Proceeding as usual — averaging over disorder, introducing
$G_{\alpha\alpha^{\prime}}(t,t^{\prime})$ and
$F_{\alpha\alpha^{\prime}}(t,t^{\prime})$ as before, integrating out spins —
we arrive at
$\textrm{SFF}(T,f)=\int
d\mathcal{E}_{\textrm{aux}}d\lambda\mathcal{D}G\mathcal{D}F\,f\big{(}\epsilon_{u}[\lambda,G]\big{)}f\big{(}\epsilon_{l}[\lambda,G]\big{)}e^{-NS_{\textrm{eff}}[\mathcal{E}_{\textrm{aux}},\lambda,G,F]},$
(66) $\displaystyle S_{\textrm{eff}}[\mathcal{E}_{\textrm{aux}},\lambda,G,F]$
$\displaystyle=-i\lambda\mathcal{E}_{\textrm{aux}}-\frac{i}{2}\int_{0}^{T}dt\sum_{\alpha}(-1)^{\alpha}z_{\alpha}(t)$
(67)
$\displaystyle\qquad+\frac{\lambda^{2}}{2p}+\frac{J\lambda}{pq[G]^{p/2}}\int_{0}^{T}dt\sum_{\alpha}(-1)^{\alpha}\left(\frac{1}{T}\int_{0}^{T}dt^{\prime}G_{\alpha
u}(t,t^{\prime})\right)^{p}$
$\displaystyle\qquad\qquad+\frac{1}{2}\int_{0}^{T}dtdt^{\prime}\sum_{\alpha\alpha^{\prime}}(-1)^{\alpha+\alpha^{\prime}}\left(\frac{J^{2}}{p}G_{\alpha\alpha^{\prime}}(t,t^{\prime})^{p}-F_{\alpha\alpha^{\prime}}(t,t^{\prime})G_{\alpha\alpha^{\prime}}(t,t^{\prime})\right)$
$\displaystyle\qquad\qquad\qquad+\frac{1}{2}\log{\textrm{Det}}\Big{[}i(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}}\big{(}\mu\partial_{t}^{2}+z_{\alpha}\big{)}+(-1)^{\alpha+\alpha^{\prime}}F_{\alpha\alpha^{\prime}}\Big{]},$
where we are denoting $q[G]\equiv
T^{-2}\int_{0}^{T}dtdt^{\prime}G_{uu}(t,t^{\prime})$. The argument of the
filter function is modified as well; it is now
$\epsilon_{\alpha}[\lambda,G]\equiv-\frac{\mu}{2}\partial_{t}^{2}G_{\alpha\alpha}(0^{+},0)-\frac{iJ^{2}}{p}\int_{0}^{T}dt\sum_{\alpha^{\prime}}(-1)^{\alpha^{\prime}}G_{\alpha\alpha^{\prime}}(t,0)^{p}-\frac{iJ\lambda}{pq[G]^{p/2}}\left(\frac{1}{T}\int_{0}^{T}dtG_{\alpha
u}(t,0)\right)^{p}.$ (68)
Note that $\mathcal{E}_{\textrm{aux}}$ enters linearly into the action. Thus
if we were to integrate over $\mathcal{E}_{\textrm{aux}}$ at this point, we
would obtain a $\delta$-function forcing $\lambda=0$. The action would then
reduce to the ergodic-phase expression, Eq. (45). While reassuring, this would
not have accomplished anything, so we instead treat
$\mathcal{E}_{\textrm{aux}}$ as a fixed parameter for now. We obtain saddle
point equations by differentiating Eq. (67) only with respect to $\lambda$,
$z$, $G$, and $F$.
The saddle point equations, assuming time translation invariance from the
outset, are
$i\big{(}\mu\partial_{t}^{2}+z_{\alpha}\big{)}G_{\alpha\alpha^{\prime}}(t-t^{\prime})+\int_{0}^{T}dt^{\prime\prime}\sum_{\alpha^{\prime\prime}}(-1)^{\alpha^{\prime\prime}}F_{\alpha\alpha^{\prime\prime}}(t-t^{\prime\prime})G_{\alpha^{\prime\prime}\alpha^{\prime}}(t^{\prime\prime}-t^{\prime})=(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}}\delta(t-t^{\prime}),$
(69) $\displaystyle
F_{\alpha\alpha^{\prime}}(t)=J^{2}G_{\alpha\alpha^{\prime}}(t)^{p-1}$
$\displaystyle+\frac{J\lambda}{Tq[G]^{p/2}}\big{(}\delta_{\alpha
u}+\delta_{\alpha^{\prime}u}\big{)}\left(\frac{\widetilde{G}_{\alpha\alpha^{\prime}}(0)}{T}\right)^{p-1}$
(70) $\displaystyle-\frac{J\lambda}{Tq[G]^{p/2+1}}\delta_{\alpha
u}\delta_{\alpha^{\prime}u}\sum_{\alpha^{\prime\prime}}(-1)^{\alpha^{\prime\prime}}\left(\frac{\widetilde{G}_{\alpha^{\prime\prime}u}(0)}{T}\right)^{p},$
$\mathcal{E}_{\textrm{aux}}=-\frac{iJT}{pq[G]^{p/2}}\sum_{\alpha}(-1)^{\alpha}\left(\frac{\widetilde{G}_{\alpha
u}(0)}{T}\right)^{p}-\frac{i\lambda}{p},$ (71)
as well as the usual requirement $G_{\alpha\alpha}(0)=1$. Here
$\widetilde{G}_{\alpha\alpha^{\prime}}(\omega)$ is the Fourier transform of
$G_{\alpha\alpha^{\prime}}(t)$.
### 4.2 Connected solutions
With $\mathcal{E}_{\textrm{aux}}$ fixed, let
$\mathcal{G}_{\alpha\alpha^{\prime}}(t)$ be the solution to the TAP equation
of motion (Eq. (35)) corresponding to inverse temperature
$\beta_{\textrm{aux}}$. Denote the Edwards-Anderson order parameter at
$\mathcal{E}_{\textrm{aux}}$ and $\beta_{\textrm{aux}}$ by $q_{\textrm{EA}}$.
Also recall the various auxiliary quantities we defined in Sec. 2.3: the self-
energy $\mathcal{F}_{\alpha\alpha^{\prime}}(t)\equiv
J^{2}\mathcal{G}_{\alpha\alpha^{\prime}}(t)^{p-1}$, the deviations
$\Delta\mathcal{G}_{\alpha\alpha^{\prime}}(t)\equiv\mathcal{G}_{\alpha\alpha^{\prime}}(t)-q_{\textrm{EA}}$
and
$\Delta\mathcal{F}_{\alpha\alpha^{\prime}}(t)\equiv\mathcal{F}_{\alpha\alpha^{\prime}}(t)-J^{2}q_{\textrm{EA}}^{p-1}$,
and the quantity $\Lambda\equiv\int_{0}^{\infty}dt\Delta\mathcal{G}^{I}(t)$.
We have that $\mathcal{G}_{\alpha\alpha^{\prime}}(t)$ and
$\mathcal{F}_{\alpha\alpha^{\prime}}(t)$ obey Eq. (35), which by taking $t$
and $t^{\prime}$ to be far from the thermal branch can be written
$i\big{(}\mu\partial_{t}^{2}+z\big{)}\Delta\mathcal{G}_{\alpha\alpha^{\prime}}(t-t^{\prime})+\int_{-\infty}^{\infty}dt^{\prime\prime}\sum_{\alpha^{\prime\prime}}(-1)^{\alpha^{\prime\prime}}\Delta\mathcal{F}_{\alpha\alpha^{\prime\prime}}(t-t^{\prime\prime})\Delta\mathcal{G}_{\alpha^{\prime\prime}\alpha^{\prime}}(t^{\prime\prime}-t^{\prime})=(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}}\delta(t-t^{\prime}).$
(72)
We also have, as a result of Eq. (39), the relationship
$\mathcal{E}_{\textrm{aux}}=\frac{2(p-1)Jq_{\textrm{EA}}^{p/2-1}\Lambda}{p}+\frac{1}{2pJq_{\textrm{EA}}^{p/2-1}\Lambda}.$
(73)
Finally, recall the expression for the complexity $\Sigma(\mathcal{E})$, the
logarithm of the number of solutions to the TAP magnetization equations at
$\mathcal{E}$:
$\Sigma(\mathcal{E})=\frac{1}{2}\left(1+2\log{\frac{p}{2}}\right)-\frac{p\mathcal{E}^{2}}{2}+\frac{p^{2}}{8(p-1)}\left(\mathcal{E}+\sqrt{\mathcal{E}^{2}-\mathcal{E}_{\textrm{th}}^{2}}\right)^{2}+\log{\left(-\mathcal{E}+\sqrt{\mathcal{E}^{2}-\mathcal{E}_{\textrm{th}}^{2}}\right)},$
(74)
where $\mathcal{E}_{\textrm{th}}^{2}=4(p-1)/p^{2}$. Using Eq. (73), we can
express $\Sigma(\mathcal{E}_{\textrm{aux}})$ in terms of $\Lambda$ and
$q_{\textrm{EA}}$:
$\Sigma(\mathcal{E}_{\textrm{aux}})=-\frac{p-2}{2p}-\frac{1}{8pJ^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2}}+\frac{2(p-1)J^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2}}{p}-\frac{1}{2}\log{4J^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2}}.$
(75)
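As a consistency check on this algebra, the following sketch verifies numerically that Eq. (74), with $\mathcal{E}$ traded for $\Lambda$ and $q_{\textrm{EA}}$ via Eq. (73), reproduces Eq. (75). The parametrization by $x\equiv Jq_{\textrm{EA}}^{p/2-1}\Lambda$ and the branch choice $x<0$ (so that $\mathcal{E}_{\textrm{aux}}\leq-\mathcal{E}_{\textrm{th}}$) are assumptions made for the check.

```python
import math

# Parametrize by x = J q_EA^{p/2-1} Lambda.  Taking x < 0 (an assumed branch
# choice) gives E_aux <= -E_th via Eq. (73), and on this branch
# sqrt(E_aux^2 - E_th^2) = 2(p-1)x/p - 1/(2px).
for p in [3, 4]:
    for x in [-0.4, -0.5, -0.8, -1.3]:
        E = 2 * (p - 1) * x / p + 1 / (2 * p * x)        # Eq. (73)
        root = 2 * (p - 1) * x / p - 1 / (2 * p * x)
        assert abs(root**2 - (E**2 - 4 * (p - 1) / p**2)) < 1e-12
        sigma_74 = (0.5 * (1 + 2 * math.log(p / 2))
                    - p * E**2 / 2
                    + p**2 / (8 * (p - 1)) * (E + root)**2
                    + math.log(-E + root))               # Eq. (74)
        sigma_75 = (-(p - 2) / (2 * p)
                    - 1 / (8 * p * x**2)
                    + 2 * (p - 1) * x**2 / p
                    - 0.5 * math.log(4 * x**2))          # Eq. (75)
        assert abs(sigma_74 - sigma_75) < 1e-10

print("Eqs. (74) and (75) agree on the sampled branch")
```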
Our solution to the SFF saddle point equations, Eqs. (69) through (71), is
best written in the frequency domain (tildes denote Fourier transforms):
$\widetilde{G}_{\alpha\alpha^{\prime}}(\omega)=Tq_{\textrm{EA}}\delta_{\omega
0}+\Delta\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}(\omega)+\frac{\widetilde{g}_{\alpha\alpha^{\prime}}(\omega)}{T},$
(76)
$\widetilde{F}_{\alpha\alpha^{\prime}}(\omega)=\left(TJ^{2}q_{\textrm{EA}}^{p-1}+ipJq_{\textrm{EA}}^{p/2-1}\big{(}\mathcal{E}_{\textrm{aux}}-2Jq_{\textrm{EA}}^{p/2-1}\Lambda\big{)}\big{(}\delta_{\alpha
u}+\delta_{\alpha^{\prime}u}\big{)}\right)\delta_{\omega
0}+\Delta\widetilde{\mathcal{F}}_{\alpha\alpha^{\prime}}(\omega)+\frac{\widetilde{f}_{\alpha\alpha^{\prime}}(\omega)}{T},$
(77)
$\lambda=ip\big{(}\mathcal{E}_{\textrm{aux}}-2Jq_{\textrm{EA}}^{p/2-1}\Lambda\big{)}+\frac{\delta}{T}.$
(78)
We again take $z_{\alpha}$ to be the equilibrium value corresponding to
$\beta_{\textrm{aux}}$. The precise form of the correction terms
$\widetilde{g}_{\alpha\alpha^{\prime}}(\omega)$,
$\widetilde{f}_{\alpha\alpha^{\prime}}(\omega)$, and $\delta$ is largely
unimportant — the essential feature is simply that they are $O(1)$ and the
corrections are thus $O(T^{-1})$. Note that in the time domain, this solution
amounts to
$G_{\alpha\alpha^{\prime}}(t)=q_{\textrm{EA}}+\sum_{n=-\infty}^{\infty}\Delta\mathcal{G}_{\alpha\alpha^{\prime}}(t+nT)+\frac{g_{\alpha\alpha^{\prime}}(t)}{T},$
(79)
$F_{\alpha\alpha^{\prime}}(t)=J^{2}q_{\textrm{EA}}^{p-1}+\sum_{n=-\infty}^{\infty}\Delta\mathcal{F}_{\alpha\alpha^{\prime}}(t+nT)+\frac{ipJq_{\textrm{EA}}^{p/2-1}\big{(}\mathcal{E}_{\textrm{aux}}-2Jq_{\textrm{EA}}^{p/2-1}\Lambda\big{)}}{T}\big{(}\delta_{\alpha
u}+\delta_{\alpha^{\prime}u}\big{)}+\frac{f_{\alpha\alpha^{\prime}}(t)}{T}.$
(80)
The sums are convergent because $\Delta\mathcal{G}_{\alpha\alpha^{\prime}}(t)$
and $\Delta\mathcal{F}_{\alpha\alpha^{\prime}}(t)$ decay rapidly to zero as
$|t|\rightarrow\infty$.
Although we have omitted it for notational simplicity, we can add a term
$\delta_{\alpha\neq\alpha^{\prime}}\Delta$ to the time arguments of
$\Delta\mathcal{G}_{\alpha\alpha^{\prime}}(t+nT)$ and
$\Delta\mathcal{F}_{\alpha\alpha^{\prime}}(t+nT)$ for any $\Delta\in[0,T)$,
exactly as in Sec. 3. Due to the separate time translation symmetry on each
branch of the SFF contour, all such solutions are equally valid and contribute
the same action. Thus we shall demonstrate the validity of Eqs. (76) through
(78) and evaluate the action only for $\Delta=0$, but then integrate over all
$\Delta\in[0,T)$ in the final expression for the SFF.
Let us first confirm that our solution satisfies the saddle point equation for
$\lambda$, Eq. (71). Referring to Eq. (37), we have that
$\Delta\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}(0)=L+(-1)^{\alpha}2i\Lambda\delta_{\alpha\alpha^{\prime}}$.
Thus
$q[G]=q_{\textrm{EA}}+\frac{L+2i\Lambda}{T}+O(T^{-2}),$ (81)
and Eq. (71) becomes
$\mathcal{E}_{\textrm{aux}}=2Jq_{\textrm{EA}}^{p/2-1}\Lambda-\frac{i\lambda}{p}+O(T^{-1}).$
(82)
Solving for $\lambda$ indeed gives Eq. (78). The $O(T^{-1})$ terms determine
$\delta$ as a function of the other quantities.
Now turn to Eq. (70). In the frequency domain, the right-hand side evaluates
to (note that since $\widetilde{g}_{\alpha\alpha^{\prime}}(\omega)=O(1)$ with
respect to $T$, $g_{\alpha\alpha^{\prime}}(t)$ decays to zero as
$|t|\rightarrow\infty$, at least to leading order)
$\displaystyle\int_{0}^{T}dte^{i\omega
t}J^{2}\left(q_{\textrm{EA}}+\sum_{n=-\infty}^{\infty}\Delta\mathcal{G}_{\alpha\alpha^{\prime}}(t+nT)\right)^{p-1}+ipJq_{\textrm{EA}}^{p/2-1}\big{(}\mathcal{E}_{\textrm{aux}}-2Jq_{\textrm{EA}}^{p/2-1}\Lambda\big{)}\big{(}\delta_{\alpha
u}+\delta_{\alpha^{\prime}u}\big{)}\delta_{\omega 0}+O(T^{-1})$ (83)
$\displaystyle\qquad\qquad\stackrel{{\scriptstyle\textrm{set}}}{{=}}\left(TJ^{2}q_{\textrm{EA}}^{p-1}+ipJq_{\textrm{EA}}^{p/2-1}\big{(}\mathcal{E}_{\textrm{aux}}-2Jq_{\textrm{EA}}^{p/2-1}\Lambda\big{)}\big{(}\delta_{\alpha
u}+\delta_{\alpha^{\prime}u}\big{)}\right)\delta_{\omega
0}+\Delta\widetilde{\mathcal{F}}_{\alpha\alpha^{\prime}}(\omega)+\frac{\widetilde{f}_{\alpha\alpha^{\prime}}(\omega)}{T}.$
Along the lines of Eq. (52), we have that
$\displaystyle
J^{2}\left(q_{\textrm{EA}}+\sum_{n=-\infty}^{\infty}\Delta\mathcal{G}_{\alpha\alpha^{\prime}}(t+nT)\right)^{p-1}$
$\displaystyle=J^{2}q_{\textrm{EA}}^{p-1}+J^{2}\sum_{r=1}^{p-1}\binom{p-1}{r}q_{\textrm{EA}}^{p-1-r}\left(\sum_{n=-\infty}^{\infty}\Delta\mathcal{G}_{\alpha\alpha^{\prime}}(t+nT)\right)^{r}$
(84) $\displaystyle\sim
J^{2}q_{\textrm{EA}}^{p-1}+J^{2}\sum_{r=1}^{p-1}\binom{p-1}{r}q_{\textrm{EA}}^{p-1-r}\sum_{n=-\infty}^{\infty}\Delta\mathcal{G}_{\alpha\alpha^{\prime}}(t+nT)^{r}$
$\displaystyle=J^{2}q_{\textrm{EA}}^{p-1}+\sum_{n=-\infty}^{\infty}\Delta\mathcal{F}_{\alpha\alpha^{\prime}}(t+nT).$
Thus, up to $O(1)$, both sides of Eq. (83) agree.
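The cross-term estimate underlying Eq. (84) can be checked numerically in the same spirit as Eq. (52). In the sketch below, $\Delta\mathcal{G}$ is an exponentially decaying stand-in and $J=1$ (both illustrative assumptions):

```python
import math

q_EA, tau, T, p, N_images = 0.6, 1.0, 20.0, 4, 40

def dG(t):
    # exponentially decaying stand-in for Delta G(t); illustrative assumption
    return 0.3 * math.exp(-abs(t) / tau)

for t in [0.4, 3.0, 9.5]:
    images = sum(dG(t + n * T) for n in range(-N_images, N_images + 1))
    # left-hand side of Eq. (84) with J = 1
    lhs = (q_EA + images) ** (p - 1)
    # right-hand side: plateau value plus the image sum of
    # Delta F(t) = (q_EA + Delta G(t))^{p-1} - q_EA^{p-1}
    rhs = q_EA ** (p - 1) + sum(
        (q_EA + dG(t + n * T)) ** (p - 1) - q_EA ** (p - 1)
        for n in range(-N_images, N_images + 1))
    assert abs(lhs - rhs) < math.exp(-T / (2 * tau))

print("cross terms between different images are exponentially small")
```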
Finally, we confirm that Eq. (69) is satisfied. At non-zero frequencies, we
have
$i\big{(}-\mu\omega^{2}+z\big{)}\Delta\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}(\omega)+\sum_{\alpha^{\prime\prime}}(-1)^{\alpha^{\prime\prime}}\Delta\widetilde{\mathcal{F}}_{\alpha\alpha^{\prime\prime}}(\omega)\Delta\widetilde{\mathcal{G}}_{\alpha^{\prime\prime}\alpha^{\prime}}(\omega)+O(T^{-1})\stackrel{{\scriptstyle\textrm{set}}}{{=}}(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}},$
(85)
which agrees at $O(1)$ due to Eq. (72). At zero frequency, we instead have
$\displaystyle
iz\left(Tq_{\textrm{EA}}+\Delta\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}(0)+\frac{\widetilde{g}_{\alpha\alpha^{\prime}}(0)}{T}\right)$
(86)
$\displaystyle\quad+\sum_{\alpha^{\prime\prime}}(-1)^{\alpha^{\prime\prime}}\left(TJ^{2}q_{\textrm{EA}}^{p-1}+ipJq_{\textrm{EA}}^{p/2-1}\big{(}\mathcal{E}_{\textrm{aux}}-2Jq_{\textrm{EA}}^{p/2-1}\Lambda\big{)}\big{(}\delta_{\alpha
u}+\delta_{\alpha^{\prime\prime}u}\big{)}+\Delta\widetilde{\mathcal{F}}_{\alpha\alpha^{\prime\prime}}(0)+\frac{\widetilde{f}_{\alpha\alpha^{\prime\prime}}(0)}{T}\right)$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\cdot\left(Tq_{\textrm{EA}}+\Delta\widetilde{\mathcal{G}}_{\alpha^{\prime\prime}\alpha^{\prime}}(0)+\frac{\widetilde{g}_{\alpha^{\prime\prime}\alpha^{\prime}}(0)}{T}\right)\stackrel{{\scriptstyle\textrm{set}}}{{=}}(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}}.$
The $O(T)$ terms come out to be
$T\left(izq_{\textrm{EA}}+\sum_{\alpha^{\prime\prime}}(-1)^{\alpha^{\prime\prime}}\Big{(}J^{2}q_{\textrm{EA}}^{p-1}\Delta\widetilde{\mathcal{G}}_{\alpha^{\prime\prime}\alpha^{\prime}}(0)+q_{\textrm{EA}}\Delta\widetilde{\mathcal{F}}_{\alpha\alpha^{\prime\prime}}(0)\Big{)}+ipJq_{\textrm{EA}}^{p/2}\big{(}\mathcal{E}_{\textrm{aux}}-2Jq_{\textrm{EA}}^{p/2-1}\Lambda\big{)}\right)\stackrel{{\scriptstyle\textrm{set}}}{{=}}0.$
(87)
Yet from the TAP magnetization equations, Eq. (36), it follows that
$\sum_{\alpha^{\prime\prime}}(-1)^{\alpha^{\prime\prime}}\Big{(}q_{\textrm{EA}}\Delta\widetilde{\mathcal{F}}_{\alpha\alpha^{\prime\prime}}(0)-(p-1)J^{2}q_{\textrm{EA}}^{p-1}\Delta\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime\prime}}(0)\Big{)}=-izq_{\textrm{EA}}-ipJq_{\textrm{EA}}^{p/2}\mathcal{E}_{\textrm{aux}}.$
(88)
Thus Eq. (87) evaluates to
$-ipJq_{\textrm{EA}}^{p/2}\mathcal{E}_{\textrm{aux}}+pJ^{2}q_{\textrm{EA}}^{p-1}\sum_{\alpha^{\prime\prime}}(-1)^{\alpha^{\prime\prime}}\Delta\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime\prime}}(0)+ipJq_{\textrm{EA}}^{p/2}\mathcal{E}_{\textrm{aux}}-2ipJ^{2}q_{\textrm{EA}}^{p-1}\Lambda=0,$
(89)
using Eq. (37). The $O(1)$ terms of Eq. (86) determine
$\widetilde{g}_{\alpha\alpha^{\prime}}(0)$ and
$\widetilde{f}_{\alpha\alpha^{\prime}}(0)$. We have therefore confirmed that
all saddle point equations are solved by Eqs. (76) through (78).
### 4.3 Contribution of connected solutions
It remains only to evaluate the action, Eq. (67), at the above solution. The
action can be written as
$\displaystyle S_{\textrm{eff}}$
$\displaystyle=-i\lambda\mathcal{E}_{\textrm{aux}}+\frac{\lambda^{2}}{2p}+\frac{JT\lambda}{pq[G]^{p/2}}\sum_{\alpha}(-1)^{\alpha}\left(\frac{\widetilde{G}_{\alpha
u}(0)}{T}\right)^{p}$ (90)
$\displaystyle\qquad+\frac{T}{2}\sum_{\alpha\alpha^{\prime}}(-1)^{\alpha+\alpha^{\prime}}\int_{0}^{T}dt\left(\frac{J^{2}}{p}G_{\alpha\alpha^{\prime}}(t)^{p}-F_{\alpha\alpha^{\prime}}(t)G_{\alpha\alpha^{\prime}}(t)\right)$
$\displaystyle\qquad\qquad+\frac{1}{2}\sum_{\omega}\log{\textrm{Det}}\Big{[}i(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}}\big{(}-\mu\omega^{2}+z\big{)}+(-1)^{\alpha+\alpha^{\prime}}\widetilde{F}_{\alpha\alpha^{\prime}}(\omega)\Big{]}.$
Interestingly, we can determine $S_{\textrm{eff}}$ up to a single additive
constant simply by noting that
$dS_{\textrm{eff}}/d\mathcal{E}_{\textrm{aux}}=-i\lambda$ (recall that
$S_{\textrm{eff}}$ is stationary with respect to variations in all quantities
other than $\mathcal{E}_{\textrm{aux}}$). With $\lambda$ given by Eq. (78) and
$q_{\textrm{EA}}^{p/2-1}\Lambda$ given by Eq. (41), we can carry out the
integral to obtain that
$S_{\textrm{eff}}[\mathcal{E}_{\textrm{aux}}]=\frac{p\mathcal{E}_{\textrm{aux}}^{2}}{2}-\frac{p^{2}}{8(p-1)}\Big{(}\mathcal{E}_{\textrm{aux}}+\sqrt{\mathcal{E}_{\textrm{aux}}^{2}-\mathcal{E}_{\textrm{th}}^{2}}\Big{)}^{2}-\log{\Big{(}-\mathcal{E}_{\textrm{aux}}+\sqrt{\mathcal{E}_{\textrm{aux}}^{2}-\mathcal{E}_{\textrm{th}}^{2}}\Big{)}}+C,$
(91)
for some unknown constant $C$. Comparing to Eq. (42), this strongly suggests
that $S_{\textrm{eff}}=-\Sigma(\mathcal{E}_{\textrm{aux}})$. Of course, we
still need to determine the remaining constant, and so we now turn to a more
elaborate calculation.
Rather than substitute Eqs. (76) and (77) into Eq. (90), we instead use the
simpler functions
$\widetilde{G}^{\prime}_{\alpha\alpha^{\prime}}(\omega)=Tq_{\textrm{EA}}\delta_{\omega
0}+\Delta\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}(\omega),$ (92)
$\widetilde{F}^{\prime}_{\alpha\alpha^{\prime}}(\omega)=\left(TJ^{2}q_{\textrm{EA}}^{p-1}+ipJq_{\textrm{EA}}^{p/2-1}\big{(}\mathcal{E}_{\textrm{aux}}-2Jq_{\textrm{EA}}^{p/2-1}\Lambda\big{)}\big{(}\delta_{\alpha
u}+\delta_{\alpha^{\prime}u}\big{)}\right)\delta_{\omega
0}+\Delta\widetilde{\mathcal{F}}_{\alpha\alpha^{\prime}}(\omega)+\frac{\widetilde{f}_{\alpha\alpha^{\prime}}(0)}{T}\delta_{\omega
0},$ (93)
and show that the error incurred in doing so vanishes at large $T$.
Let us first demonstrate that the error is negligible. At any non-zero
frequency, we have that
$\widetilde{G}_{\alpha\alpha^{\prime}}(\omega)=\widetilde{G}^{\prime}_{\alpha\alpha^{\prime}}(\omega)+\frac{\widetilde{g}_{\alpha\alpha^{\prime}}(\omega)}{T},\qquad\widetilde{F}_{\alpha\alpha^{\prime}}(\omega)=\widetilde{F}^{\prime}_{\alpha\alpha^{\prime}}(\omega)+\frac{\widetilde{f}_{\alpha\alpha^{\prime}}(\omega)}{T}.$
(94)
The partial derivatives of $S_{\textrm{eff}}$ at nonzero $\omega$ are
$\frac{\partial
S_{\textrm{eff}}}{\partial\widetilde{G}_{\alpha\alpha^{\prime}}(\omega)}=\frac{1}{2}(-1)^{\alpha+\alpha^{\prime}}\int_{0}^{T}dte^{-i\omega
t}\Big{(}J^{2}G_{\alpha\alpha^{\prime}}(t)^{p-1}-F_{\alpha\alpha^{\prime}}(t)\Big{)},$
(95) $\frac{\partial
S_{\textrm{eff}}}{\partial\widetilde{F}_{\alpha\alpha^{\prime}}(\omega)}=\frac{1}{2}(-1)^{\alpha+\alpha^{\prime}}\bigg{(}\Big{[}i(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}}\big{(}-\mu\omega^{2}+z\big{)}+(-1)^{\alpha+\alpha^{\prime}}\widetilde{F}_{\alpha^{\prime}\alpha}(\omega)\Big{]}_{\alpha\alpha^{\prime}}^{-1}-\int_{0}^{T}dte^{-i\omega
t}G_{\alpha\alpha^{\prime}}(t)\bigg{)},$ (96)
which vanish when evaluated at
$\widetilde{G}^{\prime}_{\alpha\alpha^{\prime}}(\omega)=\Delta\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}(\omega)$
and
$\widetilde{F}^{\prime}_{\alpha\alpha^{\prime}}(\omega)=\Delta\widetilde{\mathcal{F}}_{\alpha\alpha^{\prime}}(\omega)$.
Thus the $O(T^{-1})$ difference between
$\widetilde{G}_{\alpha\alpha^{\prime}}(\omega)$ and
$\widetilde{G}^{\prime}_{\alpha\alpha^{\prime}}(\omega)$, as with
$\widetilde{F}_{\alpha\alpha^{\prime}}(\omega)$ and
$\widetilde{F}^{\prime}_{\alpha\alpha^{\prime}}(\omega)$, translates only to
an $O(T^{-2})$ difference in the action. Even after summing over all
$\omega\neq 0$, the total error is only $O(T^{-1})$. (Since the $G(t)^{p}$
term is not diagonal in the frequency domain, this argument requires a bit
more care. One can easily show that
$\partial^{2}S_{\textrm{eff}}/\partial\widetilde{G}(\omega)\partial\widetilde{G}(\omega^{\prime})$
is $O(T^{-1})$ for $\omega\neq\pm\omega^{\prime}$ and $O(1)$ for
$\omega=\pm\omega^{\prime}$. Summing over all frequencies, the former case
gives a total contribution $O(T^{-3})O(T^{2})=O(T^{-1})$ and the latter gives
$O(T^{-2})O(T)=O(T^{-1})$, so the total error is indeed $O(T^{-1})$.)
Neglecting non-zero frequencies,
$\widetilde{F}^{\prime}_{\alpha\alpha^{\prime}}(\omega)$ is identical to
$\widetilde{F}_{\alpha\alpha^{\prime}}(\omega)$ and
$\widetilde{G}^{\prime}_{\alpha\alpha^{\prime}}(\omega)$ differs only by
$\widetilde{g}_{\alpha\alpha^{\prime}}(0)\delta_{\omega 0}/T$. In the time
domain, the latter corresponds to
$G_{\alpha\alpha^{\prime}}(t)=G^{\prime}_{\alpha\alpha^{\prime}}(t)+\frac{\widetilde{g}_{\alpha\alpha^{\prime}}(0)}{T^{2}}=q_{\textrm{EA}}+\sum_{n=-\infty}^{\infty}\Delta\mathcal{G}_{\alpha\alpha^{\prime}}(t+nT)+\frac{\widetilde{g}_{\alpha\alpha^{\prime}}(0)}{T^{2}}.$
(97)
Yet
$\frac{\partial S_{\textrm{eff}}}{\partial
G_{\alpha\alpha^{\prime}}(t)}=\frac{T}{2}(-1)^{\alpha+\alpha^{\prime}}\Big{(}J^{2}G_{\alpha\alpha^{\prime}}(t)^{p-1}-F_{\alpha\alpha^{\prime}}(t)\Big{)}+O(1).$
(98)
When evaluated at $G^{\prime}_{\alpha\alpha^{\prime}}(t)$ and
$F^{\prime}_{\alpha\alpha^{\prime}}(t)$, the $O(T)$ contribution vanishes (see
Eq. (84)). Thus $\partial S_{\textrm{eff}}/\partial
G_{\alpha\alpha^{\prime}}(t)$ is $O(1)$, and an $O(T^{-2})$ change to
$G_{\alpha\alpha^{\prime}}(t)$ leads only to an $O(T^{-1})$ change in the
action even after integrating over $t$.
Since all errors are $O(T^{-1})$, we can safely evaluate $S_{\textrm{eff}}$ at
Eqs. (92) and (93) rather than the full solution (we still use Eq. (78) for
$\lambda$). The first line of Eq. (90) can be computed straightforwardly. It
comes out to be
$\frac{p\big{(}\mathcal{E}_{\textrm{aux}}-2Jq_{\textrm{EA}}^{p/2-1}\Lambda\big{)}^{2}}{2}=\frac{1}{8pJ^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2}}-\frac{1}{p}+\frac{2J^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2}}{p},$
(99)
where we used Eq. (73) to obtain the right-hand side.
Next consider the bottom line. Since
$\widetilde{F}^{\prime}_{\alpha\alpha^{\prime}}(\omega)=\Delta\widetilde{\mathcal{F}}_{\alpha\alpha^{\prime}}(\omega)$
for $\omega\neq 0$, while
$\widetilde{F}^{\prime}_{\alpha\alpha^{\prime}}(0)=\widetilde{F}_{\alpha\alpha^{\prime}}(0)$,
we can write the determinant term as
$\displaystyle\frac{1}{2}\sum_{\omega}\log{\textrm{Det}}\Big{[}i(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}}\big{(}-\mu\omega^{2}+z\big{)}+(-1)^{\alpha+\alpha^{\prime}}\Delta\widetilde{\mathcal{F}}_{\alpha\alpha^{\prime}}(\omega)\Big{]}$
(100)
$\displaystyle\qquad\qquad+\frac{1}{2}\log{\textrm{Det}}\Big{[}iz(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}}+(-1)^{\alpha+\alpha^{\prime}}\widetilde{F}_{\alpha\alpha^{\prime}}(0)\Big{]}$
$\displaystyle\qquad\qquad\qquad\qquad-\frac{1}{2}\log{\textrm{Det}}\Big{[}iz(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}}+(-1)^{\alpha+\alpha^{\prime}}\Delta\widetilde{\mathcal{F}}_{\alpha\alpha^{\prime}}(0)\Big{]}.$
The top line vanishes by exactly the same reasoning as in Sec. 3.3: it is
proportional to $T$ by the Euler-Maclaurin formula, and then must be zero
since the derivative with respect to $T$ vanishes. Given Eq. (37), the bottom
line is simply
$\frac{1}{2}\log{\textrm{Det}}\Delta\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}(0)=\frac{1}{2}\log{4\Lambda^{2}}.$
(101)
For the middle line we take an indirect approach. We have that
$iz(-1)^{\alpha}\delta_{\alpha\alpha^{\prime}}+(-1)^{\alpha+\alpha^{\prime}}\widetilde{F}_{\alpha\alpha^{\prime}}(0)$
is the matrix inverse to $\widetilde{G}_{\alpha\alpha^{\prime}}(0)$ (using the
full solution for the latter, Eq. (76)). Written out,
$\begin{pmatrix}iz+\widetilde{F}_{uu}(0)&-\widetilde{F}_{ul}(0)\\\
-\widetilde{F}_{lu}(0)&-iz+\widetilde{F}_{ll}(0)\end{pmatrix}=\begin{pmatrix}\widetilde{G}_{uu}(0)&\widetilde{G}_{ul}(0)\\\
\widetilde{G}_{lu}(0)&\widetilde{G}_{ll}(0)\end{pmatrix}^{-1}=\frac{1}{\textrm{Det}\widetilde{G}(0)}\begin{pmatrix}\widetilde{G}_{ll}(0)&-\widetilde{G}_{ul}(0)\\\
-\widetilde{G}_{lu}(0)&\widetilde{G}_{uu}(0)\end{pmatrix}.$ (102)
Rather than this $(u,l)$ basis, express Eq. (102) in the $(u+l,u-l)$ basis
(called “classical”/“quantum” in the Keldysh literature), denoted $(+,-)$:
$\begin{pmatrix}\widetilde{F}_{--}(0)&iz+\widetilde{F}_{-+}(0)\\\
iz+\widetilde{F}_{+-}(0)&\widetilde{F}_{++}(0)\end{pmatrix}=\frac{1}{\textrm{Det}\widetilde{G}(0)}\begin{pmatrix}\widetilde{G}_{--}(0)&-\widetilde{G}_{+-}(0)\\\
-\widetilde{G}_{-+}(0)&\widetilde{G}_{++}(0)\end{pmatrix}.$ (103)
We can read off that
$\textrm{Det}\widetilde{G}(0)^{-1}=\widetilde{F}_{++}(0)/\widetilde{G}_{++}(0)$.
Note that we only need $\widetilde{G}_{++}(0)$ and $\widetilde{F}_{++}(0)$ to
$O(T)$ in order to calculate the determinant to $O(1)$. Thus the middle line
of Eq. (100) evaluates to $(\log{J^{2}q_{\textrm{EA}}^{p-2}})/2$, and the
total contribution of the determinant term is
$\frac{1}{2}\log{4J^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2}}.$ (104)
Lastly consider the middle line of Eq. (90). Since
$G^{\prime}_{\alpha\alpha^{\prime}}(t)=\mathcal{G}_{\alpha\alpha^{\prime}}(t)$
(up to exponentially small corrections),
$\sum_{\alpha\alpha^{\prime}}(-1)^{\alpha+\alpha^{\prime}}G^{\prime}_{\alpha\alpha^{\prime}}(t)^{p}=0$
by virtue of Eq. (30). We are left with
$\displaystyle-\frac{T}{2}\sum_{\alpha\alpha^{\prime}}(-1)^{\alpha+\alpha^{\prime}}\int_{0}^{T}dtF^{\prime}_{\alpha\alpha^{\prime}}(t)G^{\prime}_{\alpha\alpha^{\prime}}(t)$
(105)
$\displaystyle\qquad\sim-\frac{T}{2}\sum_{\alpha\alpha^{\prime}}(-1)^{\alpha+\alpha^{\prime}}\int_{0}^{T}dt\mathcal{F}_{\alpha\alpha^{\prime}}(t)\mathcal{G}_{\alpha\alpha^{\prime}}(t)$
$\displaystyle\qquad\qquad-\frac{1}{2}\sum_{\alpha\alpha^{\prime}}(-1)^{\alpha+\alpha^{\prime}}\left(ipJq_{\textrm{EA}}^{p/2-1}\big{(}\mathcal{E}_{\textrm{aux}}-2Jq_{\textrm{EA}}^{p/2-1}\Lambda\big{)}\big{(}\delta_{\alpha
u}+\delta_{\alpha^{\prime}u}\big{)}+\frac{\widetilde{f}_{\alpha\alpha^{\prime}}(0)}{T}\right)\widetilde{G}^{\prime}_{\alpha\alpha^{\prime}}(0).$
The first term is again proportional to
$\sum_{\alpha\alpha^{\prime}}(-1)^{\alpha+\alpha^{\prime}}G^{\prime}_{\alpha\alpha^{\prime}}(t)^{p}=0$.
The second term would appear to be more problematic, since
$\widetilde{f}_{\alpha\alpha^{\prime}}(0)$ (for which we have not given an
explicit expression) contributes at $O(1)$ due to
$\widetilde{G}^{\prime}_{\alpha\alpha^{\prime}}(0)$ being $O(T)$. However, we
only need the component $\widetilde{f}_{--}(0)/T=\widetilde{F}_{--}(0)$, and
from Eq. (103) we see that
$\widetilde{F}_{--}(0)=\frac{1}{\textrm{Det}\widetilde{G}(0)}\widetilde{G}_{--}(0)=\frac{1}{\textrm{Det}\widetilde{G}(0)}\frac{\textrm{Det}\widetilde{G}(0)+\widetilde{G}_{+-}(0)\widetilde{G}_{-+}(0)}{\widetilde{G}_{++}(0)}=\frac{1-4J^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2}}{2Tq_{\textrm{EA}}}+O\left(\frac{1}{T^{2}}\right).$
(106)
Eq. (105) evaluates to
$2pJq_{\textrm{EA}}^{p/2-1}\Lambda\big{(}\mathcal{E}_{\textrm{aux}}-2Jq_{\textrm{EA}}^{p/2-1}\Lambda\big{)}-\frac{1}{2}+2J^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2}=\frac{1}{2}-2J^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2},$
(107)
again using Eq. (73).
We finally have the large-$T$ limit of the action, given by the sum of Eqs.
(99), (104), and (107):
$S_{\textrm{eff}}[\mathcal{E}_{\textrm{aux}}]=\frac{p-2}{2p}+\frac{1}{8pJ^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2}}-\frac{2(p-1)J^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2}}{p}+\frac{1}{2}\log{4J^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2}}.$
(108)
Comparing to the complexity $\Sigma(\mathcal{E})$ given in Eq. (75), we see
that $S_{\textrm{eff}}$ is precisely $-\Sigma(\mathcal{E}_{\textrm{aux}})$.
### 4.4 Evaluation of the SFF
We have shown that, at a given $\mathcal{E}_{\textrm{aux}}$ and for each value
of inverse temperature $\beta_{\textrm{aux}}$, there is a solution to the SFF
saddle point equations with
$S_{\textrm{eff}}=-\Sigma(\mathcal{E}_{\textrm{aux}})$. The full (connected)
SFF is obtained by integrating over all $\mathcal{E}_{\textrm{aux}}$ and
$\beta_{\textrm{aux}}$, as well as the symmetry-broken order parameter
$\Delta$ (which contributes an overall factor of $T$) and an additional factor
$2(1+\delta_{p\textrm{ even}})$ from the discrete symmetries. As in Sec. 3, it
is more convenient to integrate over the energy density
$\epsilon(\mathcal{E}_{\textrm{aux}},\beta_{\textrm{aux}})$. We show in App. B
that $\epsilon(\mathcal{E},\beta)$ comes out to be precisely the argument of
the filter function, Eq. (68), when evaluated at the saddle point solution.
Our final result is as follows (the factor $\sqrt{pN/2\pi}$ comes from the
integral over fluctuations in $\lambda$, whose variance is $p/N$ (see Eq.
(90)), combined with the prefactor $N/2\pi$ of the original fat unity
introducing $\mathcal{E}_{\textrm{aux}}$):
$\textrm{SFF}(T,f)=\big{|}\mathbb{E}\textrm{Tr}f(H)e^{-iHT}\big{|}^{2}+2\big{(}1+\delta_{p\textrm{
even}}\big{)}T\sqrt{\frac{pN}{2\pi}}\int
d\mathcal{E}_{\textrm{aux}}e^{N\Sigma(\mathcal{E}_{\textrm{aux}})}\int_{\epsilon_{-}(\mathcal{E}_{\textrm{aux}})}^{\epsilon_{+}(\mathcal{E}_{\textrm{aux}})}\frac{d\epsilon_{\textrm{aux}}}{2\pi}f(\epsilon_{\textrm{aux}})^{2},$
(109)
where the inner integral runs only over the range
$[\epsilon_{-}(\mathcal{E}_{\textrm{aux}}),\epsilon_{+}(\mathcal{E}_{\textrm{aux}})]$
in which solutions to the TAP equations exist. Furthermore, one can easily
generalize Eq. (109) by making the filter function $\mathcal{E}$-dependent,
i.e., $f(\mathcal{E},\epsilon_{\textrm{aux}})$. The resulting quantity is the
SFF for the projection of the system into certain TAP states.
Compare Eq. (109) for the non-ergodic phase to Eq. (62) for the ergodic phase,
and recall the discussion of block-diagonal Hamiltonians in Sec. 1.1. Our
result demonstrates that each metastable (i.e., TAP) state can be thought of
as its own quantum chaotic subspace, one which is independent of the others. This
is the central result of our paper. While the qualitative idea has been
proposed in previous work [56], the present analysis both makes it precise and
proves it.
## 5 Higher moments of the evolution operator
In this final section we consider higher moments of $\textrm{Tr}e^{-iHT}$,
i.e., the quantities
$\textrm{SFF}^{(n)}(T,f)\equiv\mathbb{E}\Big{[}\Big{(}\textrm{Tr}f(H)e^{-iHT}\Big{)}^{n}\Big{(}\textrm{Tr}f(H)e^{iHT}\Big{)}^{n}\Big{]}.$
(110)
The saddle points of these higher moments exhibit an interesting structure
that will shed further light on the distribution of TAP states, although care
must be taken in interpreting the results. We first present the calculation
and discuss afterwards.
### 5.1 Effective action
The effective action governing the $n$’th moment is derived in exactly the
same manner as in Sec. 4. The only major difference is that now spins have a
“replica” index $a\in\\{1,\cdots,n\\}$ in addition to a contour index
$\alpha\in\\{u,l\\}$. We also include a separate fat unity defining
$\mathcal{E}_{\textrm{aux},a}$ for each replica. The result is (compare to
Eqs. (66) and (67))
$\textrm{SFF}^{(n)}(T,f)=\int
d\mathcal{E}_{\textrm{aux}}d\lambda\mathcal{D}G\mathcal{D}F\prod_{a=1}^{n}f\big{(}\epsilon_{au}[\lambda,G]\big{)}f\big{(}\epsilon_{al}[\lambda,G]\big{)}e^{-NS_{\textrm{eff}}[\mathcal{E}_{\textrm{aux}},\lambda,G,F]},$
(111) $\displaystyle S_{\textrm{eff}}[\mathcal{E}_{\textrm{aux}},\lambda,G,F]$
$\displaystyle=-i\sum_{a}\lambda_{a}\mathcal{E}_{\textrm{aux},a}-\frac{i}{2}\int_{0}^{T}dt\sum_{a\alpha}(-1)^{\alpha}z_{a\alpha}(t)$
(112)
$\displaystyle\quad+\sum_{aa^{\prime}}\frac{\lambda_{a}\lambda_{a^{\prime}}}{2pq[G_{aa}]^{p/2}q[G_{a^{\prime}a^{\prime}}]^{p/2}}\left(\frac{1}{T^{2}}\int_{0}^{T}dtdt^{\prime}G_{au,a^{\prime}u}(t,t^{\prime})\right)^{p}$
$\displaystyle\quad\quad+\sum_{a^{\prime}}\frac{J\lambda_{a^{\prime}}}{pq[G_{a^{\prime}a^{\prime}}]^{p/2}}\int_{0}^{T}dt\sum_{a\alpha}(-1)^{\alpha}\left(\frac{1}{T}\int_{0}^{T}dt^{\prime}G_{a\alpha,a^{\prime}u}(t,t^{\prime})\right)^{p}$
$\displaystyle\quad\quad\quad+\frac{1}{2}\int_{0}^{T}dtdt^{\prime}\sum_{aa^{\prime}}\sum_{\alpha\alpha^{\prime}}(-1)^{\alpha+\alpha^{\prime}}\left(\frac{J^{2}}{p}G_{a\alpha,a^{\prime}\alpha^{\prime}}(t,t^{\prime})^{p}-F_{a\alpha,a^{\prime}\alpha^{\prime}}(t,t^{\prime})G_{a\alpha,a^{\prime}\alpha^{\prime}}(t,t^{\prime})\right)$
$\displaystyle\quad\quad\quad\quad+\frac{1}{2}\log{\textrm{Det}}\Big{[}i(-1)^{\alpha}\delta_{aa^{\prime}}\delta_{\alpha\alpha^{\prime}}\big{(}\mu\partial_{t}^{2}+z_{a\alpha}\big{)}+(-1)^{\alpha+\alpha^{\prime}}F_{a\alpha,a^{\prime}\alpha^{\prime}}\Big{]},$
with energy densities
$\displaystyle\epsilon_{a\alpha}[\lambda,G]$
$\displaystyle=-\frac{\mu}{2}\partial_{t}^{2}G_{a\alpha,a\alpha}(0^{+},0)$
(113)
$\displaystyle\qquad\qquad-\frac{iJ^{2}}{p}\int_{0}^{T}dt\sum_{a^{\prime}\alpha^{\prime}}(-1)^{\alpha^{\prime}}G_{a\alpha,a^{\prime}\alpha^{\prime}}(t,0)^{p}-i\sum_{a^{\prime}}\frac{J\lambda_{a^{\prime}}}{pq[G_{a^{\prime}a^{\prime}}]^{p/2}}\left(\frac{1}{T}\int_{0}^{T}dtG_{a\alpha,a^{\prime}u}(t,0)\right)^{p}.$
The saddle point equations are therefore
$i\big{(}\mu\partial_{t}^{2}+z_{a}\big{)}G_{a\alpha,a^{\prime}\alpha^{\prime}}(t-t^{\prime})+\int_{0}^{T}dt^{\prime\prime}\sum_{a^{\prime\prime}\alpha^{\prime\prime}}(-1)^{\alpha^{\prime\prime}}F_{a\alpha,a^{\prime\prime}\alpha^{\prime\prime}}(t-t^{\prime\prime})G_{a^{\prime\prime}\alpha^{\prime\prime},a^{\prime}\alpha^{\prime}}(t^{\prime\prime}-t^{\prime})=(-1)^{\alpha}\delta_{aa^{\prime}}\delta_{\alpha\alpha^{\prime}}\delta(t-t^{\prime}),$
(114)
$F_{a\alpha,a^{\prime}\alpha^{\prime}}(t)=J^{2}G_{a\alpha,a^{\prime}\alpha^{\prime}}(t)^{p-1}+\frac{J}{T}\left(\frac{\lambda_{a}}{q[G_{aa}]^{p/2}}\delta_{\alpha
u}+\frac{\lambda_{a^{\prime}}}{q[G_{a^{\prime}a^{\prime}}]^{p/2}}\delta_{\alpha^{\prime}u}\right)\left(\frac{\widetilde{G}_{a\alpha,a^{\prime}\alpha^{\prime}}(0)}{T}\right)^{p-1}+O(T^{-2}),$
(115)
$\mathcal{E}_{\textrm{aux},a}=-\frac{iJT}{pq[G_{aa}]^{p/2}}\sum_{a^{\prime}\alpha^{\prime}}(-1)^{\alpha^{\prime}}\left(\frac{\widetilde{G}_{au,a^{\prime}\alpha^{\prime}}(0)}{T}\right)^{p}-\frac{i}{pq[G_{aa}]^{p/2}}\sum_{a^{\prime}}\frac{\lambda_{a^{\prime}}}{q[G_{a^{\prime}a^{\prime}}]^{p/2}}\left(\frac{\widetilde{G}_{au,a^{\prime}u}(0)}{T}\right)^{p}.$
(116)
Note that Eqs. (114) through (116) have the following permutation symmetry
with respect to replica indices. Suppose that $G$, $F$, and $\lambda$
constitute a valid solution. For any permutation $\pi$ of the set
$\\{1,\cdots,n\\}$, define $\pi_{\alpha}(a)$ to be the permuted element
$\pi(a)$ if $\alpha=l$ but simply the original element $a$ if $\alpha=u$. Then
the quantities $\overline{G}$, $\overline{F}$, and $\overline{\lambda}$
defined by
$\overline{G}_{a\alpha,a^{\prime}\alpha^{\prime}}(t,t^{\prime})\equiv
G_{\pi_{\alpha}(a)\alpha,\pi_{\alpha^{\prime}}(a^{\prime})\alpha^{\prime}}(t,t^{\prime}),\qquad\overline{F}_{a\alpha,a^{\prime}\alpha^{\prime}}(t,t^{\prime})\equiv
F_{\pi_{\alpha}(a)\alpha,\pi_{\alpha^{\prime}}(a^{\prime})\alpha^{\prime}}(t,t^{\prime}),\qquad\overline{\lambda}_{a}=\lambda_{a},$
(117)
constitute an equally valid solution. This symmetry has a nice graphical
interpretation in terms of pairings between upper and lower contours,
illustrated in Fig. 5: however contour $au$ is correlated with $a^{\prime}l$
in a given solution, there is an alternate solution in which $au$ has the same
correlation with $\pi(a^{\prime})l$.
One trivial solution to the saddle point equations is to use the solution from
Sec. 4 for $a=a^{\prime}$ while setting all cross-replica elements to zero.
The action then decomposes into a sum of single-replica actions, which we
evaluated in Sec. 4. In other words, this contribution to the $n$’th moment is
simply $\textrm{SFF}(T,f)^{n}$. However, by the permutation symmetry described
above, we actually have $n!$ such contributions:
$\textrm{SFF}^{(n)}(T,f)=n!\cdot\textrm{SFF}(T,f)^{n}+\cdots,$ (118)
where the ellipses denote additional solutions.
### 5.2 Connected solutions
Figure 5: Graphical representation of the various saddle point solutions for
the $n=2$ moment. The four contours — $1u$, $1l$, $2u$, $2l$ — are shown at
the top. Below are the four varieties of solutions: each upper contour must be
paired with a lower contour, but one is free to choose which replicas are
paired, and there is further freedom in which TAP state each pair lies within
(blue and orange lines indicate two different TAP states).
In general, for arbitrary values of $\mathcal{E}_{\textrm{aux},a}$, we have
been unable to find any further saddle points. However, when some replicas
have equal values of $\mathcal{E}_{\textrm{aux}}$, we can construct additional
solutions. Pick any set of inverse temperatures $\beta_{a}$ (not necessarily
equal), and suppose that the replicas $\\{1,\cdots,n\\}$ partition into groups
$A\equiv\\{a_{1},\cdots,a_{|A|}\\}$, such that $\mathcal{E}_{\textrm{aux},a}$
equals a common value $\mathcal{E}_{\textrm{aux},A}$ for all $a\in A$. We
again take $G_{a\alpha,a\alpha^{\prime}}(t-t^{\prime})$ to be the solution
from Sec. 4. For $a$ and $a^{\prime}$ in different groups, we still set
$G_{a\alpha,a^{\prime}\alpha^{\prime}}=0$. For $a$ and $a^{\prime}$ in the
same group $A$, however, we now set
$G_{a\alpha,a^{\prime}\alpha^{\prime}}(t-t^{\prime})=\big{(}q_{\textrm{EA},a}q_{\textrm{EA},a^{\prime}}\big{)}^{1/2},$
(119)
where $q_{\textrm{EA},a}$ is the Edwards-Anderson order parameter
corresponding to $\mathcal{E}_{\textrm{aux},A}$ and $\beta_{a}$. This
corresponds to the replicas lying within the same TAP state (see Fig. 5). We
can write this compactly as
$G_{a\alpha,a^{\prime}\alpha^{\prime}}(t)=\big{(}q_{\textrm{EA},a}q_{\textrm{EA},a^{\prime}}\big{)}^{1/2}+\delta_{aa^{\prime}}\Big{(}\Delta\mathcal{G}_{a,\alpha\alpha^{\prime}}(t)+O(T^{-1})\Big{)}.$
(120)
Inserting into Eq. (116), we have that $\lambda_{a}$ must obey
$\sum_{a^{\prime}\in
A}\lambda_{a^{\prime}}=ip\big{(}\mathcal{E}_{\textrm{aux},A}-2Jq_{\textrm{EA},a}^{p/2-1}\Lambda_{a}\big{)}+O(T^{-1}).$
(121)
Note that, by virtue of Eq. (41), $Jq_{\textrm{EA},a}^{p/2-1}\Lambda_{a}$ is a
function solely of $\mathcal{E}_{\textrm{aux},A}$. Thus Eq. (121) is
consistent among all $a\in A$. The self-energy is then given by
$F_{a\alpha,a^{\prime}\alpha^{\prime}}(t)=J^{2}\big{(}q_{\textrm{EA},a}q_{\textrm{EA},a^{\prime}}\big{)}^{\frac{p-1}{2}}+\delta_{aa^{\prime}}\Delta\mathcal{F}_{a,\alpha\alpha^{\prime}}(t)+\frac{J}{T}\left(\sqrt{\frac{q_{\textrm{EA},a^{\prime}}^{p-1}}{q_{\textrm{EA},a}}}\lambda_{a}\delta_{\alpha
u}+\sqrt{\frac{q_{\textrm{EA},a}^{p-1}}{q_{\textrm{EA},a^{\prime}}}}\lambda_{a^{\prime}}\delta_{\alpha^{\prime}u}\right)+O(T^{-1}).$
(122)
It remains only to check that Eq. (114) can be satisfied. It is automatically
solved at non-zero frequencies, since then $\widetilde{G}(\omega)$ and
$\widetilde{F}(\omega)$ reduce to
$\delta_{aa^{\prime}}\Delta\widetilde{\mathcal{G}}(\omega)$ and
$\delta_{aa^{\prime}}\Delta\widetilde{\mathcal{F}}(\omega)$ respectively. At
zero frequency we confirm that the equation is solved to $O(T)$ (the $O(1)$
terms only determine subleading corrections). Following the same steps as in
Sec. 4.2, the left-hand side of Eq. (114) simplifies to
$JT\sqrt{q_{\textrm{EA},a}^{p-1}q_{\textrm{EA},a^{\prime}}}\left(\sum_{a^{\prime\prime}\in
A}\lambda_{a^{\prime\prime}}+2i(p-1)Jq_{\textrm{EA},a}^{p/2-1}\Lambda_{a}+2iJq_{\textrm{EA},a^{\prime}}^{p/2-1}\Lambda_{a^{\prime}}-ip\mathcal{E}_{\textrm{aux},A}\right)=0,$
(123)
as desired.
Note that in this solution, only the sum $\sum_{a}\lambda_{a}$ is determined —
all orthogonal components of the vector $\lambda$ are free to take any values.
This does not imply that there are multiple such solutions, however. Returning
to the effective action in Eq. (112), the fact that the saddle point equations
determine only $G$, $F$, and $\sum_{a}\lambda_{a}$ means that, if we first
integrate over them, the resulting $\lambda$-dependent action is of the form
$S_{\textrm{eff}}[\mathcal{E}_{\textrm{aux}},\lambda]=S\left[\mathcal{E}_{\textrm{aux}},\sum_{a}\lambda_{a}\right]-i\sum_{aa^{\prime}}\sum_{b=2}^{|A|}\lambda_{a}u_{ab}u_{a^{\prime}b}\mathcal{E}_{\textrm{aux},a^{\prime}},$
(124)
for some function $S$ of the single quantity $\sum_{a}\lambda_{a}$ (as well as
all $\mathcal{E}_{\textrm{aux}}$) and for any choice of orthonormal basis
vectors $u_{ab}$ orthogonal to the all-1 vector. When we integrate over
$\sum_{a}\lambda_{a}u_{ab}$, we thus get a $\delta$-function forcing
$\sum_{a^{\prime}}u_{a^{\prime}b}\mathcal{E}_{\textrm{aux},a^{\prime}}=0$.
Together, the $\delta$-functions force all $\mathcal{E}_{\textrm{aux},a}$ to
equal a common value $\mathcal{E}_{\textrm{aux},A}$. Not only is this
consistent with our original assumption, it shows that our construction cannot
work for any other values of $\mathcal{E}_{\textrm{aux},a}$.
### 5.3 Contribution of connected solutions
To evaluate the action, note first of all that since the numbers $\beta_{a}$
define a continuous family of solutions, and since the action is by definition
stationary at these solutions, all choices of $\beta_{a}$ must give the same
value of the action. We thus take all $\beta_{a}$ to equal a common value
$\beta$ for simplicity. The action evaluated at this solution still decomposes
into a sum over groups, but now the contribution of a single group $A$ is
$\displaystyle S_{\textrm{eff}}[\mathcal{E}_{\textrm{aux}}]$
$\displaystyle=-i\mathcal{E}_{\textrm{aux}}\sum_{a\in
A}\lambda_{a}+\frac{1}{2p}\left(\sum_{a\in
A}\lambda_{a}\right)^{2}+2iJq_{\textrm{EA}}^{p/2-1}\Lambda\sum_{a\in
A}\lambda_{a}$ (125)
$\displaystyle\qquad+\frac{T}{2}\sum_{aa^{\prime}}\int_{0}^{T}dt\sum_{\alpha\alpha^{\prime}}(-1)^{\alpha+\alpha^{\prime}}\left(\frac{J^{2}}{p}G_{a\alpha,a\alpha^{\prime}}(t)^{p}-F_{a\alpha,a\alpha^{\prime}}(t)G_{a\alpha,a\alpha^{\prime}}(t)\right)$
$\displaystyle\qquad\qquad+\frac{1}{2}\log{\textrm{Det}}\Big{[}i(-1)^{\alpha}\delta_{aa^{\prime}}\delta_{\alpha\alpha^{\prime}}\big{(}\mu\partial_{t}^{2}+z\big{)}+(-1)^{\alpha+\alpha^{\prime}}F_{a\alpha,a^{\prime}\alpha^{\prime}}\Big{]}.$
Note that now $\mathcal{E}_{\textrm{aux}}$, $q_{\textrm{EA}}$, and $\Lambda$
are all independent of the replica $a$ (within a given group $A$). We are also
free to set all $\lambda_{a}=\lambda$, meaning that our saddle point solution
simplifies to (in frequency space)
$\widetilde{G}_{a\alpha,a^{\prime}\alpha^{\prime}}(\omega)=Tq_{\textrm{EA}}\delta_{\omega
0}+\delta_{aa^{\prime}}\Big{(}\Delta\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}(\omega)+O(T^{-1})\Big{)},$
(126)
$\widetilde{F}_{a\alpha,a^{\prime}\alpha^{\prime}}(\omega)=\left(TJ^{2}q_{\textrm{EA}}^{p-1}+\frac{ipJq_{\textrm{EA}}^{p/2-1}\big{(}\mathcal{E}_{\textrm{aux}}-2Jq_{\textrm{EA}}^{p/2-1}\Lambda\big{)}}{|A|}\big{(}\delta_{\alpha
u}+\delta_{\alpha^{\prime}u}\big{)}\right)\delta_{\omega
0}+\delta_{aa^{\prime}}\Delta\widetilde{\mathcal{F}}_{\alpha\alpha^{\prime}}(\omega)+O(T^{-1}),$
(127)
$\lambda=\frac{ip}{|A|}\big{(}\mathcal{E}_{\textrm{aux}}-2Jq_{\textrm{EA}}^{p/2-1}\Lambda\big{)}+O(T^{-1}).$
(128)
Eq. (125) can be evaluated following the same procedure as in Sec. 4.3.
Directly substituting Eqs. (126) through (128) gives
$S_{\textrm{eff}}[\mathcal{E}_{\textrm{aux}}]=\frac{p\mathcal{E}_{\textrm{aux}}^{2}}{2}-2pJ^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2}-\frac{Tq_{\textrm{EA}}}{2}\sum_{aa^{\prime}}\sum_{\alpha\alpha^{\prime}}(-1)^{\alpha+\alpha^{\prime}}\widetilde{F}_{a\alpha,a^{\prime}\alpha^{\prime}}(0)-\frac{1}{2}\sum_{\omega}\log{\textrm{Det}}\widetilde{G}_{a\alpha,a^{\prime}\alpha^{\prime}}(\omega),$
(129)
and we again must determine certain components of $\widetilde{F}(0)$ and
$\textrm{Det}\widetilde{G}(0)$. As before, it is expedient to use the $(+,-)$
basis with respect to contour indices. We also switch to the Fourier basis
with respect to replica indices: from Eq. (126),
$\displaystyle\widetilde{G}_{b\alpha,b^{\prime}\alpha^{\prime}}(\omega)$
$\displaystyle\equiv\frac{1}{|A|}\sum_{aa^{\prime}=1}^{|A|}e^{2\pi
i(ab-a^{\prime}b^{\prime})/|A|}\widetilde{G}_{a\alpha,a^{\prime}\alpha^{\prime}}(\omega)$
(130)
$\displaystyle=T|A|q_{\textrm{EA}}\delta_{b0}\delta_{b^{\prime}0}\delta_{\omega
0}+\delta_{bb^{\prime}}\Big{(}\Delta\widetilde{\mathcal{G}}_{\alpha\alpha^{\prime}}(\omega)+O(T^{-1})\Big{)}.$
Thus $\textrm{Det}\widetilde{G}(\omega)$ factors with respect to $b$, and
furthermore, $\sum_{\omega}\log{\textrm{Det}\widetilde{G}_{b}(\omega)}\sim 0$
for all $b\neq 0$ as in Secs. 3.3 and 4.3. For $b=0$, the determinant is
calculated by comparing to the $b=0$ block of
$iz(-1)^{\alpha}+(-1)^{\alpha+\alpha^{\prime}}\widetilde{F}(0)$, written in
the $(+,-)$ basis (compare to Eq. (103)):
$\begin{pmatrix}\widetilde{F}_{0-,0-}(0)&iz+\widetilde{F}_{0-,0+}(0)\\\
iz+\widetilde{F}_{0+,0-}(0)&\widetilde{F}_{0+,0+}(0)\end{pmatrix}=\frac{1}{\textrm{Det}\widetilde{G}_{0}(0)}\begin{pmatrix}\widetilde{G}_{0-,0-}(0)&-\widetilde{G}_{0+,0-}(0)\\\
-\widetilde{G}_{0-,0+}(0)&\widetilde{G}_{0+,0+}(0)\end{pmatrix}.$ (131)
We see that
$\textrm{Det}\widetilde{G}_{0}(0)=\widetilde{G}_{0+,0+}(0)/\widetilde{F}_{0+,0+}(0)\sim
1/J^{2}q_{\textrm{EA}}^{p-2}$, and $\widetilde{F}_{0-,0-}(0)$ (which is in
fact the only element of $\widetilde{F}(0)$ needed in Eq. (129)) is given by
$\widetilde{G}_{0-,0-}(0)/\textrm{Det}\widetilde{G}_{0}(0)\sim(1-4J^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2})/2T|A|q_{\textrm{EA}}$.
The action evaluates to
$S_{\textrm{eff}}[\mathcal{E}_{\textrm{aux}}]=\frac{p-2}{2p}+\frac{1}{8pJ^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2}}-\frac{2(p-1)J^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2}}{p}+\frac{1}{2}\log{4J^{2}q_{\textrm{EA}}^{p-2}\Lambda^{2}},$
(132)
which is again precisely $-\Sigma(\mathcal{E}_{\textrm{aux}})$.
### 5.4 Evaluation of the SFF
In the above calculation, note that we get a single contribution of complexity
for the entire group $A$. However, there is still a factor
$(2T)^{|A|}(1+\delta_{p\textrm{ even}})^{2|A|-1}$ due to the separate time
translation, time reversal, and reflection symmetries of each replica (the
exponent is $2|A|-1$ rather than $2|A|$ because reflecting all spin
configurations does not change the values of any overlaps). Finally, the sum
over all connected solutions amounts to a sum over the possible ways of
partitioning $n$ elements, in addition to the $n!$ ways of pairing upper and
lower contours. Using $P\equiv\\{A_{1},\cdots,A_{|P|}\\}$ to denote a
partition, we have that
$\displaystyle\textrm{SFF}^{(n)}(T,f)=n!\sum_{P}$ $\displaystyle\prod_{A\in
P}2^{|A|}\big{(}1+\delta_{p\textrm{
even}}\big{)}^{2|A|-1}T^{|A|}\sqrt{\frac{pN}{2\pi}}\int
d\mathcal{E}_{\textrm{aux},A}e^{N\Sigma(\mathcal{E}_{\textrm{aux},A})}$ (133)
$\displaystyle\qquad\qquad\cdot\prod_{a\in
A}\int_{\epsilon_{-}(\mathcal{E}_{\textrm{aux},A})}^{\epsilon_{+}(\mathcal{E}_{\textrm{aux},A})}\frac{\textrm{d}\epsilon_{\textrm{aux},a}}{2\pi}f(\epsilon_{\textrm{aux},a})^{2}.$
In particular, suppose the filter function is chosen so as to have a small
width $\Delta\mathcal{E}\ll 1/N$ around a certain value $\mathcal{E}$ (as in
Sec. 4.4, the above calculation can easily be modified to allow for
$\mathcal{E}$-dependent filter functions). Then the $n$’th moment simplifies
to
$\textrm{SFF}^{(n)}(T,f)=n!\sum_{P}\left(\big{(}1+\delta_{p\textrm{
even}}\big{)}^{-1}\sqrt{\frac{pN}{2\pi}}e^{N\Sigma(\mathcal{E})}\Delta\mathcal{E}\right)^{|P|}\left(2\big{(}1+\delta_{p\textrm{
even}}\big{)}^{2}T\int_{\epsilon_{-}(\mathcal{E})}^{\epsilon_{+}(\mathcal{E})}\frac{\textrm{d}\epsilon_{\textrm{aux}}}{2\pi}f(\epsilon_{\textrm{aux}})^{2}\right)^{n}.$
(134)
Eq. (134) has a nice interpretation as the $n$’th moment of a sum of a
Poisson-distributed number of Gaussians. To be precise, suppose we have an
infinite sequence of i.i.d. complex Gaussians, $\\{Z_{i}\\}_{i=1}^{\infty}$,
each with $\mathbb{E}Z_{i}=0$ and $\mathbb{E}Z_{i}Z_{i}^{*}=\sigma^{2}$.
Consider the sum $S\equiv\sum_{i=1}^{M}Z_{i}$, where $M$ is itself a Poisson-
distributed random variable with mean $\mu$. The $n$’th moment of $SS^{*}$,
averaging over both Gaussians and $M$, can be written
$\mathbb{E}\big{[}S^{n}S^{*n}\big{]}=\sum_{m=0}^{\infty}p_{\mu}(m)\mathbb{E}\left[\left(\sum_{i=1}^{m}Z_{i}\right)^{n}\left(\sum_{i=1}^{m}Z_{i}^{*}\right)^{n}\right]=n!\sum_{m=0}^{\infty}p_{\mu}(m)\big{(}m\sigma^{2}\big{)}^{n}$
(135)
where $p_{\mu}(m)$ denotes the Poisson distribution of mean $\mu$, and Wick’s
theorem is used for the latter equality. It is known that the $n$’th moment of
a Poisson distribution is $\sum_{P}\mu^{|P|}$, where the sum is again over all
partitions of $n$ elements. Thus
$\mathbb{E}\big{[}S^{n}S^{*n}\big{]}=n!\sum_{P}\mu^{|P|}\sigma^{2n}.$ (136)
If we associate $\sigma^{2}$ with the SFF of a single TAP state at
$\mathcal{E}$,
$\sigma^{2}=2\big{(}1+\delta_{p\textrm{
even}}\big{)}^{2}T\int_{\epsilon_{-}(\mathcal{E})}^{\epsilon_{+}(\mathcal{E})}\frac{\textrm{d}\epsilon_{\textrm{aux}}}{2\pi}f(\epsilon_{\textrm{aux}})^{2},$
(137)
and associate $\mu$ with the number of TAP states,
$\mu=\big{(}1+\delta_{p\textrm{
even}}\big{)}^{-1}\sqrt{\frac{pN}{2\pi}}e^{N\Sigma(\mathcal{E})}\Delta\mathcal{E},$
(138)
then Eqs. (134) and (136) are identical.
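This correspondence is straightforward to check numerically. The following sketch is ours, not part of the original analysis (all names are illustrative): it draws $M\sim\mathrm{Poisson}(\mu)$, uses the fact that conditionally on $M$ the sum $S$ is $\mathcal{CN}(0,M\sigma^{2})$, and compares Monte Carlo estimates of $\mathbb{E}[S^{n}S^{*n}]$ against the prediction $n!\,\sigma^{2n}\sum_{P}\mu^{|P|}$ of Eq. (136), with $\sum_{P}\mu^{|P|}$ computed via Stirling numbers of the second kind.

```python
import math
import numpy as np

def poisson_moment(n, mu):
    """E[M^n] for M ~ Poisson(mu): the Touchard polynomial sum_k S(n,k) mu^k,
    with S(n,k) the Stirling numbers of the second kind. This equals
    sum_P mu^{|P|} over all partitions P of n elements."""
    S = np.zeros((n + 1, n + 1))
    S[0, 0] = 1.0
    for i in range(1, n + 1):
        for k in range(1, i + 1):
            S[i, k] = k * S[i - 1, k] + S[i - 1, k - 1]
    return sum(S[n, k] * mu**k for k in range(n + 1))

rng = np.random.default_rng(0)
mu, sigma2, trials = 3.0, 0.7, 500_000

M = rng.poisson(mu, trials)
# conditionally on M, S = sum of M i.i.d. CN(0, sigma2) variables is CN(0, M*sigma2)
S = (rng.normal(size=trials) + 1j * rng.normal(size=trials)) * np.sqrt(M * sigma2 / 2)

for n in (1, 2, 3):
    mc = np.mean(np.abs(S) ** (2 * n))
    exact = math.factorial(n) * sigma2**n * poisson_moment(n, mu)
    print(n, mc, exact)  # the two columns agree up to Monte Carlo error
```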
It is quite tempting to interpret this as saying that the number of TAP states
at $\mathcal{E}$ is Poisson-distributed with mean given by Eq. (138), and that
each TAP state has a Gaussian-distributed value of $\textrm{Tr}e^{-iHT}$ with
variance (i.e., SFF) given by Eq. (137). We are not aware of any results in
the literature which would contradict such a claim. However, keep in mind that
$\textrm{SFF}^{(n)}$ has perturbative corrections around each saddle point
which are suppressed by powers of $N$, whereas every connected partition in
Eq. (134) is suppressed exponentially relative to the fully disconnected one,
whose contribution is given by Eq. (118). (At sufficiently low energies, where
the function $\Sigma(\mathcal{E})$ is negative, the situation is reversed and
the fully connected partition dominates. The issue remains, however, that we
do not calculate perturbative corrections around the dominant saddle point.
Furthermore, the relevance of these moment calculations to individual
realizations of the PSM is much more suspect when $\Sigma(\mathcal{E})<0$.)
Thus we cannot claim to have rigorously computed the $n$’th moment to any
level of accuracy beyond the disconnected piece.
Nonetheless, the structure of saddle points which we have identified is highly
suggestive and warrants further investigation.
Finally, let us briefly comment on the case $p=2$, which, being effectively a
Gaussian model, exhibits very different behavior from the $p>2$ models
considered here. One might wonder where our calculations break down for
$p=2$. While we leave a systematic investigation for future work, one evident
and important difference lies in the stability of the saddle-point solutions:
$\partial^{2}S_{\textrm{eff}}/\partial G^{2}$ vanishes at $G=0$ for all
$p>2$, whereas it is non-zero for $p=2$. Since our higher-moment results rely
in particular on having disconnected clusters of replicas, this difference
may well have significant consequences.
## Acknowledgements
This work was supported by: the U.S. Department of Energy, Office of Science,
Basic Energy Sciences under award number DE-SC0001911 (V.G.); the Joint
Quantum Institute (M.W.); the Air Force Office of Scientific Research under
award numbers FA9550-17-1-0180 (M.W.) and FA9550-19-1-0360 (B.S.); the U.S.
Department of Energy, Office of Science, Office of Advanced Scientific
Computing Research, Accelerated Research for Quantum Computing program “FAR-
QC” (R.B.); the DoE ASCR Quantum Testbed Pathfinder program under award number
DE-SC0019040 (C.L.B.); the DoE ASCR Accelerated Research in Quantum Computing
program under award number DE-SC0020312 (C.L.B.); the DoE QSA, AFOSR, AFOSR
MURI, NSF PFCQC program, NSF QLCI under award number OMA-2120757 (C.L.B.); DoE
award number DE-SC0019449 (C.L.B.), ARO MURI, and DARPA SAVaNT ADVENT
(C.L.B.). This material is based upon work supported by the National Science
Foundation Graduate Research Fellowship Program under Grant No. DGE 1840340
(R.B.), and by the National Science Foundation NRC postdoctoral fellowship
program (C.L.B.).
# On the Ergodic Mutual Information of Keyhole MIMO Channels With Finite-
Alphabet Inputs
Chongjun Ouyang, Ali Bereyhi, Saba Asaad, Ralf R. Müller, Julian Cheng, and
Hongwen Yang
C. Ouyang and H. Yang are with the School of Information and Communication
Engineering, Beijing University of Posts and Telecommunications, Beijing,
100876, China (e-mail: {DragonAim,yanghong}@bupt.edu.cn). A. Bereyhi,
S. Asaad, and R. R. Müller are with the Institute for Digital Communications,
Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
(e-mail: {ali.bereyhi,saba.asaad,ralf.r.mueller}@fau.de). J. Cheng is with
the School of Engineering, The University of British Columbia, Kelowna, BC
V1V 1V7, Canada (e-mail: julian.cheng@ubc.ca).
###### Abstract
This letter studies the ergodic mutual information (EMI) of keyhole multiple-
input multiple-output channels having finite-alphabet input signals. The EMI
is first investigated for single-stream transmission considering both cases
with and without the channel state information at the transmitter. Then, the
derived results are extended to the scenario of multi-stream transmission.
Asymptotic analyses are performed in the regime of high signal-to-noise ratio
(SNR). The high-SNR EMI is shown to converge to a constant with its rate of
convergence determined by the diversity order. On this basis, the influence of
the keyhole effect on the EMI is discussed. The analytical results are
validated by numerical simulations.
###### Index Terms:
Ergodic mutual information, finite-alphabet inputs, keyhole channel, multiple-
input multiple-output.
## I Introduction
Multiple-input multiple-output (MIMO) systems are known to boost the spectral
efficiency (SE) of wireless channels in comparison to conventional single-
antenna systems. Yet, practical MIMO systems may suffer severe degradation of
the SE due to channel degeneration. One such phenomenon is the keyhole
effect, which may arise in a hallway or tunnel where the electromagnetic
waves all propagate through the same small aperture, as shown in Figure 1;
see [1, 6, 4, 2, 3, 5] and the references therein. This effect is observed in
various applications, for instance, in vehicle-to-vehicle communications in
dense urban environments [7]. The existence of this effect was initially
predicted in theory [1] and later validated by empirical measurements [3]. In
contrast to traditional MIMO channels, keyhole channels model rank-deficient
propagation: there may be rich scattering around the transceivers, yet other
propagation effects, such as diffraction, leave the channel matrix with low
rank.
Theoretically, the keyhole effect can remove the spatial multiplexing gain of
MIMO channels [1]. It hence models the worst-case propagation environment for
MIMO systems from the SE perspective. In general, the system SE is
proportional to the achievable input-output mutual information (MI) of the
channel [8]. Consequently, analyzing the MI of keyhole MIMO channels can
benchmark the worst-case SE of multiple-antenna systems. Motivated by this,
several studies analyzed the MI of keyhole MIMO channels for Gaussian
distributed input signals [2, 3, 4, 5, 7, 6]. Particularly, the MI achieved by
Gaussian inputs was analyzed in single-user ergodic case [2, 3, 4, 5], multi-
user ergodic case [6], and the single-user outage case [7]. Yet, practical
transmit signals are often taken from finite constellation alphabets, e.g.,
quadrature amplitude modulation (QAM). These finite constellations yield
reduced MI, especially in the high signal-to-noise ratio (SNR) regime [8, 14,
9, 16]. Despite its importance, analysis of the ergodic MI (EMI) for keyhole
MIMO channels with finite input constellations has been left open.
This letter studies the EMI of keyhole MIMO channels with finite-alphabet
inputs under Nakagami-$m$ fading. The main contributions of this work are as
follows: 1) We derive novel expressions of the EMI under single-stream
transmission (SST) by considering perfect CSI at the receiver and both cases
with and without CSI at the transmitter (CSIT); 2) We extend the SST scenario
to multi-stream transmission (MST) and study the EMI under three typical
precoding schemes; 3) We characterize the EMI in the high-SNR region and
determine the diversity order of the system, which enables us to assess the
influence of the keyhole effect. (We note that, even for MIMO channels
without keyholes, there has been very limited work on characterizing the
high-SNR asymptotic behaviour of the EMI achieved by finite-alphabet inputs;
this can be done using the approach proposed in this work and will be
considered in the future.) Compared with our previous work [9], which focused
on approximating the EMI in single-antenna systems and neglected high-SNR
analyses, this letter gains more insight into the influence of
finite-alphabet inputs on the EMI in keyhole MIMO channels.
Figure 1: Illustration of a keyhole MIMO channel
## II System Model
Consider the point-to-point keyhole MIMO channel illustrated in Figure 1,
where an $N_{\rm{t}}$-antenna transmitter (Tx) sends wireless signals to an
$N_{\rm{r}}$-antenna receiver (Rx). The received signal is given by
$\displaystyle{\mathbf{y}}=\sqrt{\bar{\gamma}}{\mathbf{H}}{\mathbf{s}}+{\mathbf{n}},$
(1)
where ${\mathbf{H}}\in{\mathbbmss{C}}^{N_{\rm{r}}\times N_{\rm{t}}}$
represents the channel matrix with $N_{\rm{t}}>1$ and $N_{\rm{r}}>1$,
${\mathbf{s}}\in{\mathbbmss{C}}^{N_{\rm{t}}\times 1}$ denotes the transmit
signal satisfying
${\mathbbmss{E}}\left\\{{\mathbf{s}}^{\mathsf{H}}{\mathbf{s}}\right\\}=1$,
$\bar{\gamma}$ denotes the transmit SNR, and
${\mathbf{n}}\sim{\mathcal{CN}}\left({\mathbf{0}},{\mathbf{I}}_{N_{\rm{r}}}\right)$
is additive white Gaussian noise (AWGN).
Considering the spatial structure of keyhole MIMO channels, we have
${\mathbf{H}}={\mathbf{h}}_{\rm{r}}{\mathbf{h}}_{\rm{t}}^{\mathsf{H}}$ for
${\mathbf{h}}_{\rm{r}}\in{\mathbbmss{C}}^{N_{\rm{r}}\times 1}$ and
${\mathbf{h}}_{\rm{t}}\in{\mathbbmss{C}}^{N_{\rm{t}}\times 1}$, where
$\displaystyle{\mathbf{h}}_{\rm{r}}=\left[\sqrt{\alpha_{1}}{\rm{e}}^{{\rm{j}}\phi_{1}},\ldots,\sqrt{\alpha_{N_{\rm{r}}}}{\rm{e}}^{{\rm{j}}\phi_{N_{\rm{r}}}}\right]^{\mathsf{T}}\in{\mathbbmss{C}}^{N_{\rm{r}}\times
1},$ (2)
$\displaystyle{\mathbf{h}}_{\rm{t}}=\left[\sqrt{\beta_{1}}{\rm{e}}^{{\rm{j}}\psi_{1}},\ldots,\sqrt{\beta_{N_{\rm{t}}}}{\rm{e}}^{{\rm{j}}\psi_{N_{\rm{t}}}}\right]^{\mathsf{T}}\in{\mathbbmss{C}}^{N_{\rm{t}}\times
1},$ (3)
denote the keyhole-to-Rx and keyhole-to-Tx channel vectors, respectively,
which are statistically independent of each other [1]. (It is worth
mentioning that the keyhole channel is also influenced by the size of the
keyhole: intuitively, the keyhole effect is more pronounced when the
keyhole’s physical size is comparable to or smaller than the wavelength [3].
A quantitative characterization of the influence of the keyhole’s size on the
channel remains open.) We assume that all entries in
the vector ${\mathbf{h}}_{\rm{r}}$ are independent and identically distributed
(i.i.d.), i.e., the phases $\phi_{a}$ for $a\in\\{1,\ldots,N_{\rm{r}}\\}$ are
uniformly distributed on $\left[0,2\pi\right)$ and the magnitudes
$\sqrt{\alpha_{a}}$ follow the Nakagami-$m$ distribution with the probability
density function (PDF) of $\alpha_{a}$ given by
$f\left(x;m_{\rm{r}},m_{\rm{r}}\right)$. Here,
$\displaystyle
f\left(x;c,d\right)\triangleq\frac{1}{\Gamma\left(c\right)}x^{c-1}{\rm{e}}^{-dx}d^{c},x\geq
0,$ (4)
where
$\Gamma\left(x\right)\triangleq\int_{0}^{\infty}t^{x-1}{\rm{e}}^{-t}{\rm{d}}t$
is the gamma function [10], and $m_{\rm{r}}\geq\frac{1}{2}$ indicates the
fading severity. Likewise, we assume that the keyhole-to-Tx channel undergoes
i.i.d. Nakagami-$m$ fading; thus, the PDF of the magnitudes $\beta_{b}$ for
$b\in\\{1,\ldots,N_{\rm{t}}\\}$ is given by
$f\left(x;m_{\rm{t}},m_{\rm{t}}\right)$ for some fading severity
$m_{\rm{t}}\geq\frac{1}{2}$ and the phases $\psi_{b}$ are uniformly
distributed on $\left[0,2\pi\right)$. It is worth noting that the
Nakagami-$m$ model generalizes the statistical model used in [3, 7, 6] and
has been shown to fit empirical data better.
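Before proceeding, it may help to fix a concrete sampling recipe for this model. The sketch below is our own illustration (the letter prescribes no implementation): since $\alpha_{a}$ has density $f(x;m,m)$, it is a $\mathrm{Gamma}(m,1/m)$ variate, so a channel draw is obtained by attaching uniform phases and forming the rank-one matrix ${\mathbf{H}}={\mathbf{h}}_{\rm{r}}{\mathbf{h}}_{\rm{t}}^{\mathsf{H}}$.

```python
import numpy as np

def nakagami_vector(n, m, rng):
    """Entries sqrt(alpha) * exp(j*phi): alpha ~ Gamma(shape=m, scale=1/m),
    i.e. the density f(x; m, m) of Eq. (4); phi ~ Uniform[0, 2*pi)."""
    alpha = rng.gamma(shape=m, scale=1.0 / m, size=n)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return np.sqrt(alpha) * np.exp(1j * phi)

def keyhole_channel(n_r, n_t, m_r, m_t, rng):
    """Rank-one keyhole channel H = h_r h_t^H."""
    h_r = nakagami_vector(n_r, m_r, rng)
    h_t = nakagami_vector(n_t, m_t, rng)
    return np.outer(h_r, h_t.conj()), h_r, h_t

rng = np.random.default_rng(1)
H, h_r, h_t = keyhole_channel(n_r=2, n_t=2, m_r=3, m_t=2, rng=rng)
print(np.linalg.matrix_rank(H))  # always 1: the keyhole removes spatial multiplexing
```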
## III Single-Stream Transmission
We start the analysis by considering the SST. The transmitted signal is given
by ${\mathbf{s}}={\mathbf{w}}x$, where
${\mathbf{w}}\in{\mathbbmss{C}}^{N_{\rm{t}}\times 1}$ denotes the precoding
vector satisfying $\left\|{\mathbf{w}}\right\|^{2}=1$ and
$x\in{\mathbbmss{C}}$ is the transmitted symbol. We assume that $x$ satisfies
the power constraint ${\mathbbmss{E}}\\{\left|x\right|^{2}\\}=1$ and is taken
from a finite constellation alphabet $\mathcal{X}$ consisting of $M$ points,
i.e., ${\mathcal{X}}=\left\\{\mathsf{x}_{g}\right\\}_{g=1}^{M}$. The $g$th
symbol in $\mathcal{X}$, i.e., $\mathsf{x}_{g}$, is transmitted with
probability $p_{g}$, $0<p_{g}<1$, and the vector of probabilities
${\mathbf{p}}_{\mathcal{X}}\triangleq[p_{1},\cdots,p_{M}]\in{\mathbbmss{C}}^{1\times
M}$ is called the input distribution with $\sum_{g=1}^{M}p_{g}=1$.
The derivation of EMI for the SST (SST-EMI) in a fading keyhole MIMO channel
is best understood by specifying the MI of a scalar Gaussian channel with
finite-alphabet inputs. To this end, consider the scalar AWGN channel
$Y=\sqrt{\gamma}X+Z,$ (5)
where $Z\sim{\mathcal{CN}}\left(0,1\right)$ is AWGN, $X$ is the channel input
taken from the alphabet $\mathcal{X}$ subject to the input distribution
${\mathbf{p}}_{\mathcal{X}}$, and $\gamma$ is the SNR. For this channel, the
MI is given by [8]
$\begin{split}I_{M}^{\mathcal{X}}\left(\gamma\right)&=H_{{\mathbf{p}}_{\mathcal{X}}}-\frac{1}{\pi}\sum\nolimits_{g=1}^{M}\int_{\mathbbmss{C}}p_{g}{\rm
e}^{-\left|u-\sqrt{\gamma}{\mathsf{x}}_{g}\right|^{2}}\\\
&\times\log_{2}{\left(\sum\nolimits_{{g^{\prime}}=1}^{M}\frac{p_{g^{\prime}}}{p_{g}}{\rm
e}^{\left|u-\sqrt{\gamma}{\mathsf{x}}_{g}\right|^{2}-\left|u-\sqrt{\gamma}{\mathsf{x}}_{g^{\prime}}\right|^{2}}\right)}{\rm
d}u,\end{split}$ (6)
where $H_{{\mathbf{p}}_{\mathcal{X}}}$ is the entropy of the input
distribution ${\mathbf{p}}_{\mathcal{X}}$ in bits. By a straightforward
extension of this result to a single-input vectorized channel, it can be
shown that the SST-EMI achieved by maximum-ratio combining at the receiver is
given by
$\displaystyle{\mathcal{I}}_{M}^{\mathcal{X}}={\mathbbmss{E}}\\{I_{M}^{\mathcal{X}}({\bar{\gamma}}\left\|{\mathbf{h}}_{\rm{r}}\right\|^{2}\left|{\mathbf{h}}_{\rm{t}}^{\mathsf{H}}{\mathbf{w}}\right|^{2})\\}.$
(7)
It is worth noting that the EMI is a function of the precoding vector
${\mathbf{w}}$. In the sequel, we will analyze the SST-EMI based on the
availability of CSIT.
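Evaluating (6) and (7) numerically is straightforward. Writing $u=\sqrt{\gamma}\mathsf{x}_{g}+z$ with $z\sim{\mathcal{CN}}(0,1)$ turns the integral in (6) into an expectation over the noise; a minimal Monte Carlo sketch of our own, assuming uniform inputs, is:

```python
import numpy as np

def mi_finite_alphabet(gamma, const, n_mc=100_000, rng=None):
    """Monte Carlo estimate of Eq. (6) for uniform inputs:
    I = log2(M) - (1/M) sum_g E_z[ log2 sum_g' exp(|z|^2 - |z + sqrt(gamma)(x_g - x_g')|^2) ],
    with z ~ CN(0, 1)."""
    rng = rng or np.random.default_rng()
    M = len(const)
    z = (rng.normal(size=n_mc) + 1j * rng.normal(size=n_mc)) / np.sqrt(2)
    diff = np.sqrt(gamma) * (const[:, None] - const[None, :])        # (M, M) pairwise terms
    expo = np.abs(z) ** 2 - np.abs(z[None, None, :] + diff[:, :, None]) ** 2
    mx = expo.max(axis=1, keepdims=True)                             # stable log-sum-exp over g'
    log2sum = (mx[:, 0, :] + np.log(np.exp(expo - mx).sum(axis=1))) / np.log(2.0)
    return np.log2(M) - log2sum.mean()

# unit-energy 4-QAM; the estimate approaches log2(4) = 2 bits at high SNR
qam4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
print(mi_finite_alphabet(10.0, qam4))
```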
### III-A SST Without CSIT
With no CSIT, the transmitter applies uniform beamforming, i.e.,
${\mathbf{w}}=\frac{1}{\sqrt{N_{\rm{t}}}}{\mathbf{1}}$, where
${\mathbf{1}}\triangleq\left[1,\cdots,1\right]^{\mathsf{T}}$. In this case, we
have
${\mathcal{I}}_{M}^{\mathcal{X}}={\mathbbmss{E}}\left\\{I_{M}^{\mathcal{X}}\left(S_{1}{\bar{\gamma}}/{N_{\rm{t}}}\right)\right\\}$,
where
$S_{1}=\left\|{\mathbf{h}}_{\rm{r}}\right\|^{2}\left|{\mathbf{h}}_{\rm{t}}^{\mathsf{H}}{\mathbf{1}}\right|^{2}$.
To characterize the EMI, we follow three major steps which are illustrated in
the sequel.
#### III-A1 Channel Statistics
At the first step, we derive the PDF of $S_{1}$. The statistical independence
of ${\mathbf{h}}_{\rm{t}}$ and ${\mathbf{h}}_{\rm{r}}$ concludes that
$A=\left\|{\mathbf{h}}_{\rm{r}}\right\|^{2}$ and
$B=\left|{\mathbf{h}}_{\rm{t}}^{\mathsf{H}}{\mathbf{1}}\right|^{2}$ are
mutually independent. It follows that the PDF of the product $S_{1}=AB$ can be
calculated as
$f_{S_{1}}\left(x\right)=\int_{0}^{\infty}f_{B}\left(\frac{x}{y}\right)f_{A}\left(y\right)\frac{1}{y}{\rm{d}}y$,
where $f_{A}(\cdot)$ and $f_{B}(\cdot)$ denote the PDFs of $A$ and $B$,
respectively. Yet, due to the intractability of
$\left|{\mathbf{h}}_{\rm{t}}^{\mathsf{H}}{\mathbf{1}}\right|^{2}$, a closed-
form expression for its PDF is only available when $m_{\rm{t}}$ is an integer
[11]. Accordingly, we let $m_{\rm{t}}$ be an integer in order to facilitate
the subsequent analyses. The following two lemmas are then employed to
characterize $S_{1}$.
###### Lemma 1.
Define an operator ${\mathcal{F}}\left\langle{\cdot}\right\rangle$ as
${\mathcal{F}}\left\langle{Q}\right\rangle\triangleq\sum_{i_{1}=0}^{m_{\rm{t}}-1}\cdots\sum_{i_{N_{\rm{t}}}=0}^{m_{\rm{t}}-1}\sum_{h=0}^{S_{N_{\rm{t}}}}\frac{\left(-S_{N_{\rm{t}}}\right)_{h}S_{N_{\rm{t}}}!Y_{N_{\rm{t}}}{Q}}{X_{N_{\rm{t}}}\left(h!\right)^{2}U_{N_{\rm{t}}}^{S_{N_{\rm{t}}}}},$
(8)
where
$X_{N_{\rm{t}}}=\prod_{k=1}^{N_{\rm{t}}}\left(\frac{\left(i_{k}!\right)^{2}}{\left(1-m_{\rm{t}}\right)_{i_{k}}}\right)$,
$S_{N_{\rm{t}}}=\sum_{k=1}^{N_{\rm{t}}}i_{k}$,
$Y_{N_{\rm{t}}}=\prod_{k=1}^{N_{\rm{t}}}\left(\frac{1}{4m_{\rm{t}}}\right)^{i_{k}}$,
$U_{N_{\rm{t}}}=\sum_{k=1}^{N_{\rm{t}}}\frac{1}{4m_{\rm{t}}}$, and
$\left(z\right)_{n}\triangleq\frac{\Gamma\left(z+n\right)}{\Gamma\left(z\right)}$
is the Pochhammer symbol [10, Eq. (5.2.5)] with
$\left(-z\right)_{n}=\left(-1\right)^{n}\left(z-n+1\right)_{n}$. Then, the PDF
of $B=\left|{\mathbf{h}}_{\rm{t}}^{\mathsf{H}}{\mathbf{1}}\right|^{2}$ can be
written as
$f_{B}\left(x\right)={\mathcal{F}}\left\langle{{\rm{e}}^{-\frac{x}{4U_{N_{\rm{t}}}}}x^{h}\left(4U_{N_{\rm{t}}}\right)^{-h-1}}\right\rangle$.
###### Proof:
Please refer to [11] for more details. ∎
###### Lemma 2.
The PDF of $S_{1}$ is given by
$\begin{split}f_{S_{1}}\left(x\right)&={\mathcal{F}}\left\langle\frac{2}{\Gamma\left(N_{\rm{r}}m_{\rm{r}}\right)}\left({m_{\rm{r}}x}/{\left(4U_{N_{\rm{t}}}\right)}\right)^{\frac{N_{\rm{r}}m_{\rm{r}}+h+1}{2}}\right.\\\
&\times\left.x^{-1}K_{N_{\rm{r}}m_{\rm{r}}-h-1}\left(2\sqrt{{m_{\rm{r}}x}/{\left(4U_{N_{\rm{t}}}\right)}}\right)\right\rangle,\end{split}$
(9)
where $K_{\nu}\left(\cdot\right)$ is the $\nu$th order modified Bessel
function of the second kind [10, Eq. (10.31.1)].
###### Proof:
Since $\left\\{\sqrt{\alpha_{a}}\right\\}_{a=1}^{N_{\rm{r}}}$ are $N_{\rm{r}}$
i.i.d. Nakagami-$m$ variables, the PDF of
${A}=\sum_{a=1}^{N_{\rm{r}}}\alpha_{a}$ can be written as
$f_{A}\left(x\right)=f\left(x;N_{\rm{r}}m_{\rm{r}},m_{\rm{r}}\right)$. Aided
by the integral identity in [10, Eq. (10.32.10)], we conclude the desired PDF
in (9). ∎
#### III-A2 Explicit Analysis
In the second step, we invoke Lemma 2 to derive an approximation for the SST-
EMI.
###### Theorem 1.
For SST-EMI achieved without CSIT, the following approximation becomes exact
as the complexity-vs-accuracy tradeoff parameter $V$ approaches infinity:
$\displaystyle{\mathcal{I}}_{M}^{\mathcal{X}}\approx{\mathcal{F}}\left\langle\sum_{k=1}^{V}\sum_{l=1}^{V}\frac{w_{k}w_{l}I_{M}^{\mathcal{X}}\left(\frac{4U_{N_{\rm{t}}}\bar{\gamma}t_{k}t_{l}}{m_{\rm{r}}N_{\rm{t}}}\right)}{\Gamma\left(N_{\rm{r}}m_{\rm{r}}\right)t_{l}^{-h}t_{k}^{1-N_{\rm{r}}m_{\rm{r}}}}\right\rangle,$
(10)
where $\left\\{w_{i}\right\\}$ and $\left\\{t_{i}\right\\}$ denote the weights
and abscissas of Gauss–Laguerre quadrature, and $h$ is the summation index
inside ${\mathcal{F}}\left\langle\cdot\right\rangle$.
###### Proof:
The EMI can be calculated as
$\displaystyle{\mathcal{I}}_{M}^{\mathcal{X}}=\int_{0}^{\infty}\left(\int_{0}^{\infty}f_{B}\left(\frac{x}{y}\right)\frac{f_{A}\left(y\right)}{y}{\rm{d}}y\right)I_{M}^{\mathcal{X}}\left(\frac{x\bar{\gamma}}{N_{\rm{t}}}\right){\rm
d}x.$ (11)
We use the Gauss–Laguerre quadrature method [10, Eq. (3.5.27)] to calculate
the two integrals in (11) successively. This leads to the approximate
expression shown in (10). ∎
Note that given a target approximation precision, quantifying the relationship
between the required value of $V$ and other system parameters, such as $M$ and
$\bar{\gamma}$, is challenging. By numerical simulation, we find that setting
$V=200$ generally achieves an approximation precision of $10^{-14}$.
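To illustrate the quadrature step behind (10) and (11), the sketch below (ours, not from the letter) uses NumPy’s Gauss–Laguerre nodes to evaluate a Gamma-weighted expectation of the form $\int_{0}^{\infty}\frac{y^{a-1}{\rm{e}}^{-y}}{\Gamma(a)}g(y){\rm{d}}y\approx\sum_{k}\frac{w_{k}t_{k}^{a-1}}{\Gamma(a)}g(t_{k})$, which is exactly how the inner integral over $f_{A}$ collapses:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss
from math import gamma

def gamma_expectation(g, a, V=100):
    """E[g(Y)] for Y ~ Gamma(shape=a, rate=1), via V-point Gauss-Laguerre:
    int_0^inf e^{-y} [y^{a-1} g(y) / Gamma(a)] dy ~ sum_k w_k t_k^{a-1} g(t_k) / Gamma(a)."""
    t, w = laggauss(V)
    return np.sum(w * t ** (a - 1) * g(t)) / gamma(a)

# example: E[log2(1 + Y)] for Y ~ Gamma(6, 1), checked against a dense grid
a = 6.0
quad = gamma_expectation(lambda y: np.log2(1.0 + y), a)
y = np.linspace(1e-9, 200.0, 2_000_000)
ref = np.trapz(y ** (a - 1) * np.exp(-y) * np.log2(1.0 + y), y) / gamma(a)
print(quad, ref)  # agree to many digits
```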
#### III-A3 Asymptotic Analysis
In the last step, we investigate the asymptotic behaviour of the EMI. It is
worth noting that the MIMO keyhole channel does not always harden under the
asymptotic condition when
$N_{\rm{t}}~{}{\text{or}}~{}N_{\rm{r}}\rightarrow\infty$ [6]. This makes it
challenging to gain further insights into the EMI by setting
$N_{\rm{t}}~{}{\text{or}}~{}N_{\rm{r}}\rightarrow\infty$. As a compromise, we
focus on the asymptotic limit in which the SNR approaches infinity, i.e.,
$\bar{\gamma}\rightarrow\infty$. The result is
given in Theorem 2.
###### Theorem 2.
Let $N_{\rm{r}}m_{\rm{r}}\neq h+1$ for
$h\in\left\\{0,\cdots,N_{\rm{t}}(m_{\rm{t}}-1)\right\\}$. When
$\bar{\gamma}\rightarrow\infty$, the EMI achieved without CSIT can be
characterized as
${\mathcal{I}}_{M}^{\mathcal{X}}\simeq{H_{{\mathbf{p}}_{\mathcal{X}}}}-\left({\mathcal{G}}_{\rm{a}}{\bar{\gamma}}\right)^{-{\mathcal{G}}_{\rm{d}}}$,
where ${\mathcal{G}}_{\rm{d}}=1$ and
$\displaystyle{\mathcal{G}}_{\rm{a}}^{-1}\\!=\\!\sum\limits_{i_{1}=0}^{m_{\rm{t}}-1}\\!\\!\cdots\\!\\!\sum\limits_{i_{N_{\rm{t}}}=0}^{m_{\rm{t}}-1}\\!\frac{U_{N_{\rm{t}}}^{-S_{N_{\rm{t}}}}S_{N_{\rm{t}}}!Y_{N_{\rm{t}}}\hat{\mathcal{M}}\left(2\right)m_{\rm{t}}m_{\rm{r}}\log_{2}{\rm{e}}}{\left(N_{\rm{r}}m_{\rm{r}}-1\right)\prod_{k=1}^{N_{\rm{t}}}\left(\frac{\left(i_{k}!\right)^{2}}{\left(1-m_{\rm{t}}\right)_{i_{k}}}\right)}.$
(12)
Here,
$\hat{\mathcal{M}}\left({x}\right)\triangleq{\mathcal{M}}\left[{\mathrm{mmse}}_{M}^{\mathcal{X}}\left(t\right);{x}\right]$,
where ${\mathrm{mmse}}_{M}^{\mathcal{X}}\left(t\right)$ denotes the minimum
mean square error (MMSE) in estimating $X$ in (5) from $Y$. Moreover,
${\mathcal{M}}\left[p\left(t\right);z\right]\triangleq\int_{0}^{\infty}t^{z-1}p\left(t\right){\rm
d}t$ denotes the Mellin transform of $p\left(t\right)$ [12].
###### Proof:
The proof is given in Appendix A. ∎
###### Remark 1.
The results in Theorem 2 suggest that the EMI achieved by finite-alphabet
input signals converges to ${H_{{\mathbf{p}}_{\mathcal{X}}}}$ as the SNR
increases and its rate of convergence (ROC) is determined by the diversity
order ${\mathcal{G}}_{\rm{d}}$ and the array gain ${\mathcal{G}}_{\rm{a}}$.
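To make the Mellin-transform quantity $\hat{\mathcal{M}}(\cdot)$ concrete, the sketch below is our own illustration; BPSK is chosen because its conditional mean has the simple form $\tanh(2\sqrt{t}\,\mathrm{Re}\,Y)$, which gives ${\rm{mmse}}(t)=1-{\mathbbmss{E}}[\tanh(2t+\sqrt{2t}\,U)]$ with $U\sim{\mathcal{N}}(0,1)$. It evaluates the MMSE on a grid and integrates $t\,{\rm{mmse}}(t)$ to approximate $\hat{\mathcal{M}}(2)$ in (12):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

x_h, w_h = hermgauss(120)  # nodes/weights for integrals against exp(-x^2)

def mmse_bpsk(t):
    """MMSE of X in Y = sqrt(t) X + Z with X uniform on {+1, -1}, Z ~ CN(0, 1):
    mmse(t) = 1 - E[tanh(2t + sqrt(2t) U)], U ~ N(0, 1)."""
    t = np.atleast_1d(t)[:, None]
    # Gauss-Hermite: E[f(U)] ~ (1/sqrt(pi)) * sum_i w_i f(sqrt(2) x_i)
    ev = (w_h * np.tanh(2.0 * t + np.sqrt(2.0 * t) * np.sqrt(2.0) * x_h)).sum(axis=1)
    return 1.0 - ev / np.sqrt(np.pi)

# Mellin transform M[mmse; z] at z = 2, i.e. M-hat(2) for BPSK:
t = np.linspace(1e-6, 80.0, 40_000)  # mmse decays like exp(-t/2); the tail is negligible
print(np.trapz(t * mmse_bpsk(t), t))  # finite and positive, as Lemmas 4 and 5 guarantee
```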
### III-B SST With CSIT
With CSIT, we can apply maximal ratio transmission (MRT) at the transmitter,
i.e.,
${\mathbf{w}}=\frac{1}{\left\|{\mathbf{h}}_{\rm{t}}\right\|}{\mathbf{h}}_{\rm{t}}$.
Hence, the EMI is given by
${\mathcal{I}}_{M}^{\mathcal{X}}={\mathbbmss{E}}\left\\{I_{M}^{\mathcal{X}}\left({\bar{\gamma}}S_{2}\right)\right\\}$,
where
$S_{2}=\left\|{\mathbf{h}}_{\rm{r}}\right\|^{2}\left\|{\mathbf{h}}_{\rm{t}}\right\|^{2}$.
Similar to the previous case, we characterize the SST-EMI in three steps.
#### III-B1 Channel Statistics
Using similar steps as those outlined in the proof of Lemma 2, we arrive at
the following lemma.
###### Lemma 3.
The PDF of $S_{2}$ is given by
$\displaystyle f_{S_{2}}\left(x\right)\\!=\\!\frac{2\\!\left(m_{\rm t}m_{\rm
r}\right)^{\frac{N_{\rm t}m_{\rm t}+N_{\rm r}m_{\rm r}}{2}}\\!K_{N_{\rm
r}m_{\rm r}-N_{\rm t}m_{\rm t}}\left(2\sqrt{m_{\rm t}m_{\rm
r}x}\right)}{\Gamma\left(N_{\rm t}m_{\rm t}\right)\Gamma\left(N_{\rm r}m_{\rm
r}\right)x^{1-\frac{N_{\rm t}m_{\rm t}+N_{\rm r}m_{\rm r}}{2}}}.$
###### Proof:
The proof is similar to the one given for Lemma 2. We hence omit it. ∎
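The Bessel-type density in Lemma 3 is easy to validate by simulation. In the sketch below (ours), $\left\|{\mathbf{h}}_{\rm{r}}\right\|^{2}$ and $\left\|{\mathbf{h}}_{\rm{t}}\right\|^{2}$ are drawn as Gamma variates and a histogram of their product is compared against $f_{S_{2}}$:

```python
import numpy as np
from scipy.special import gamma, kv

def f_s2(x, n_r, m_r, n_t, m_t):
    """PDF of S2 = ||h_r||^2 * ||h_t||^2 from Lemma 3."""
    a, b = n_t * m_t, n_r * m_r          # shape parameters of the two Gamma factors
    c = m_t * m_r
    return (2.0 * c ** ((a + b) / 2) * x ** ((a + b) / 2 - 1)
            * kv(b - a, 2.0 * np.sqrt(c * x)) / (gamma(a) * gamma(b)))

rng = np.random.default_rng(2)
n_r, m_r, n_t, m_t = 2, 3, 2, 2
A = rng.gamma(n_r * m_r, 1.0 / m_r, 1_000_000)  # ||h_r||^2: sum of N_r i.i.d. Gamma(m_r, 1/m_r)
B = rng.gamma(n_t * m_t, 1.0 / m_t, 1_000_000)  # ||h_t||^2
hist, edges = np.histogram(A * B, bins=200, range=(0.0, 40.0), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
sel = mid < 6.0
print(np.max(np.abs(hist[sel] - f_s2(mid[sel], n_r, m_r, n_t, m_t))))  # small
```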
#### III-B2 Explicit Analysis
The EMI with CSIT is given by
${\mathcal{I}}_{M}^{\mathcal{X}}=\int_{0}^{\infty}f_{S_{2}}\left(x\right)I_{M}^{\mathcal{X}}\left({\bar{\gamma}}x\right){\rm{d}}x$.
By following the same steps as those taken in the proof of Theorem 1, we
conclude the following approximation for ${\mathcal{I}}_{M}^{\mathcal{X}}$:
$\displaystyle{\mathcal{I}}_{M}^{\mathcal{X}}\approx\sum_{k=1}^{V}\sum_{l=1}^{V}\frac{w_{k}w_{l}t_{k}^{N_{\rm{r}}m_{\rm{r}}-1}t_{l}^{N_{\rm{t}}m_{\rm{t}}-1}}{\Gamma\left(N_{\rm{r}}m_{\rm{r}}\right)\Gamma\left(N_{\rm{t}}m_{\rm{t}}\right)}I_{M}^{\mathcal{X}}\left(\frac{\bar{\gamma}t_{k}t_{l}}{m_{\rm{r}}m_{\rm{t}}}\right).$
(13)
Similar to (10), the approximation in (13) becomes exact as the
complexity-vs-accuracy tradeoff parameter $V$ tends to infinity.
#### III-B3 Asymptotic Analysis
The limiting EMI with CSIT for asymptotically high SNRs is characterized as
follows.
###### Theorem 3.
Let $N_{\rm r}m_{\rm r}\neq N_{\rm t}m_{\rm t}$. When
$\bar{\gamma}\rightarrow\infty$, the asymptotic EMI with CSIT satisfies
${\mathcal{I}}_{M}^{\mathcal{X}}\simeq{H_{{\mathbf{p}}_{\mathcal{X}}}}-\left({\mathcal{G}}_{\rm{a}}{\bar{\gamma}}\right)^{-{\mathcal{G}}_{\rm{d}}}$,
where
${\mathcal{G}}_{\rm{d}}=\min\left\\{N_{\rm{t}}m_{\rm{t}},N_{\rm{r}}m_{\rm{r}}\right\\}$
and
$\displaystyle{\mathcal{G}}_{\rm{a}}=\frac{1}{m_{\rm{r}}m_{\rm{t}}}\left(\frac{\Gamma\left(N_{\rm{t}}m_{\rm{t}}\right)\Gamma\left(N_{\rm{r}}m_{\rm{r}}\right){\mathcal{G}}_{\rm{d}}\ln{2}}{\Gamma\left(\left|N_{\rm{t}}m_{\rm{t}}-N_{\rm{r}}m_{\rm{r}}\right|\right)\hat{\mathcal{M}}\left({\mathcal{G}}_{\rm{d}}+1\right)}\right)^{1/{{\mathcal{G}}_{\rm{d}}}}.$
###### Proof:
The proof is given by directly applying the method detailed in Appendix A. We
hence skip the details. ∎
###### Remark 2.
The above result suggests that the diversity order in this case is a function
of $\left\\{N_{\rm{t}},N_{\rm{r}},m_{\rm{t}},m_{\rm{r}}\right\\}$. By
increasing the number of antennas, this diversity order can be made larger
than the one derived for the case without CSIT.
### III-C Discussions on Keyhole Rank-Deficiency
Consider a special case, in which the amplitudes of the channel coefficients
follow the Rayleigh distribution, namely $m_{\rm{t}}=m_{\rm{r}}=1$. The MIMO
channel matrix in this case has full rank if there are no keyholes [1].
Using the method presented in Appendix A, we can characterize the high-SNR
SST-EMI in the keyhole and full-rank MIMO channels, respectively.
Particularly, in the keyhole MIMO channel, the high-SNR SST-EMI achieved with
and without CSIT can be written as
${\mathcal{I}}_{M,{\rm{c}},{\rm{r}}}^{\mathcal{X}}\simeq{H_{{\mathbf{p}}_{\mathcal{X}}}}-{\mathcal{O}}\left({\bar{\gamma}}^{-\min\left\\{N_{\rm{t}},N_{\rm{r}}\right\\}}\right)$
and
${\mathcal{I}}_{M,{\rm{n}},{\rm{r}}}^{\mathcal{X}}\simeq{H_{{\mathbf{p}}_{\mathcal{X}}}}-{\mathcal{O}}\left({\bar{\gamma}}^{-1}\right)$,
respectively, where the notation $f(x)={\mathcal{O}}\left(g(x)\right)$ means
that $\limsup_{x\rightarrow\infty}\frac{\left|f(x)\right|}{g(x)}<\infty$.
Moreover, in full-rank MIMO channels, the high-SNR SST-EMI
achieved with and without CSIT can be expressed as
${\mathcal{I}}_{M,{\rm{c}},{\rm{nk}}}^{\mathcal{X}}\simeq{H_{{\mathbf{p}}_{\mathcal{X}}}}-{\mathcal{O}}\left({\bar{\gamma}}^{-N_{\rm{r}}N_{\rm{t}}}\right)$
and
${\mathcal{I}}_{M,{\rm{n}},{\rm{nk}}}^{\mathcal{X}}\simeq{H_{{\mathbf{p}}_{\mathcal{X}}}}-{\mathcal{O}}\left({\bar{\gamma}}^{-N_{\rm{r}}}\right)$,
respectively.
###### Remark 3.
Comparing ${\mathcal{I}}_{M,{\rm{c}},{\rm{r}}}^{\mathcal{X}}$ (or
${\mathcal{I}}_{M,{\rm{n}},{\rm{r}}}^{\mathcal{X}}$) with
${\mathcal{I}}_{M,{\rm{c}},{\rm{nk}}}^{\mathcal{X}}$ (or
${\mathcal{I}}_{M,{\rm{n}},{\rm{nk}}}^{\mathcal{X}}$), we conclude that the
keyhole effect can reduce the diversity order of the SST-EMI.
We can extend the above results to a more generic case, where the rank of the
channel matrix is smaller than $\min\left\\{N_{\rm{r}},N_{\rm{t}}\right\\}$,
i.e., the channel matrix is rank-deficient. One example of such rank-deficient
channels is a multi-keyhole MIMO channel whose number of keyholes is smaller
than $\min\left\\{N_{\rm{r}},N_{\rm{t}}\right\\}$. Particularly, for fixed
$N_{\rm{r}}$ and $N_{\rm{t}}$, the SST-EMI achieved by a finite-alphabet input
in a rank-deficient channel yields a lower diversity order than the one
achieved in a full-rank channel. This is similar to MIMO channels with
Gaussian inputs under SST. Due to page limitations, further discussion is
omitted here and left as a potential direction for future work.
## IV Extension to Multi-Stream Transmission
For MST, the received signal vector is given by
$\displaystyle{\mathbf{y}}=\sqrt{\bar{\gamma}}{\mathbf{H}}{\mathbf{P}}{\mathbf{x}}+{\mathbf{n}},$
(14)
where ${\mathbf{P}}\in{\mathbbmss{C}}^{N_{\rm{t}}\times N}$ denotes the
precoding matrix satisfying
${\mathsf{tr}}\left\\{{\mathbf{P}}{\mathbf{P}}^{\mathsf{H}}\right\\}=1$ with
$N$ being the number of data streams, and
${\mathbf{x}}\in{\mathbbmss{C}}^{N\times 1}$ is the data vector with i.i.d.
elements drawn from the $M$-ary constellation $\mathcal{X}$. Hence, the input
signal $\mathbf{x}$ is taken from a multi-dimensional constellation
${\mathcal{Y}}$ consisting of $M^{N}$ points, i.e.,
$\mathbf{x}\in{\mathcal{Y}}=\left\\{{\bm{\mathsf{x}}}_{g}\in{\mathbbmss{C}}^{N\times
1}\right\\}_{g=1}^{M^{N}}$, with
$\mathbbmss{E}\left\\{{\mathbf{x}}{\mathbf{x}}^{\mathsf{H}}\right\\}={\mathbf{I}}_{N}$.
Assume ${\bm{\mathsf{x}}}_{g}$ is sent with probability $q_{g}$, $0<q_{g}<1$,
and the input distribution is given by
${\mathbf{q}}_{{\mathcal{Y}}}\triangleq[q_{1},\cdots,q_{M^{N}}]\in{\mathbbmss{C}}^{1\times
M^{N}}$ with $\sum_{g=1}^{M^{N}}q_{g}=1$. The MI in this case can be written
as
$\texttt{I}\left({\bar{\gamma}};{\mathbf{H}}{\mathbf{P}}\right)=H_{{\mathbf{q}}_{{\mathcal{Y}}}}-N_{\rm{r}}\log_{2}{\rm{e}}-\sum\nolimits_{g=1}^{M^{N}}q_{g}f_{g}\left({\bar{\gamma}};{\mathbf{H}}{\mathbf{P}}\right)$,
where
$f_{g}\left({\bar{\gamma}};{\mathbf{H}}{\mathbf{P}}\right)\triangleq{\mathbbmss{E}}_{\mathbf{n}}\left\\{\log_{2}\left({\sum_{g^{\prime}=1}^{M^{N}}\frac{q_{g^{\prime}}}{q_{g}}{\rm{e}}^{-\left\|{\mathbf{n}}+\sqrt{\bar{\gamma}}{\mathbf{H}}{\mathbf{P}}{\mathbf{b}}_{g,g^{\prime}}\right\|^{2}}}\right)\right\\}$
with
${\mathbf{b}}_{g,g^{\prime}}={\bm{\mathsf{x}}}_{g}-{\bm{\mathsf{x}}}_{g^{\prime}}=\left[b_{g,g^{\prime},1},\cdots,b_{g,g^{\prime},N}\right]^{\mathsf{H}}\in{\mathbbmss{C}}^{N\times
1}$ [8]. Note that although the authors in [5] derived a closed-form
expression for the MST-EMI achieved by Gaussian inputs, it is challenging to
extend the results in [5] to systems with finite-alphabet inputs. We hence
focus on the high-SNR limit when analyzing the MST-EMI achieved by
finite-alphabet inputs.
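For reference, the multi-dimensional constellation $\mathcal{Y}$ and the difference vectors ${\mathbf{b}}_{g,g^{\prime}}$ entering the MI expression can be enumerated directly. A small sketch of our own for $N$ streams of unit-energy 4-QAM:

```python
import numpy as np
from itertools import product

def product_constellation(const, n_streams):
    """All M^N points of the product constellation Y = X^N, as rows."""
    return np.array(list(product(const, repeat=n_streams)))

def difference_vectors(Y):
    """All b_{g,g'} = x_g - x_{g'} with g != g'."""
    d = Y[:, None, :] - Y[None, :, :]
    mask = ~np.eye(len(Y), dtype=bool)
    return d[mask]

qam4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
Y = product_constellation(qam4, n_streams=2)  # 16 points for M = 4, N = 2
b = difference_vectors(Y)
print(Y.shape, b.shape)  # (16, 2) (240, 2)
```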
### IV-A MST Without CSIT
For the case without CSIT, the precoding matrix can be set to
${\mathbf{P}}=1/\sqrt{N_{\rm{t}}}{\mathbf{I}}_{N_{\rm{t}}}\triangleq{\mathbf{P}}_{\rm{no}}$
and the number of data streams is $N=N_{\rm{t}}$. The corresponding
high-SNR MST-EMI is characterized in the following theorem.
###### Theorem 4.
Let $\bar{\gamma}\rightarrow\infty$. Then, the MST-EMI without CSIT can be
characterized as ${\mathscr{I}}_{M}^{\mathcal{Y}}\simeq
H_{{\mathbf{q}}_{{\mathcal{Y}}}}-{\mathcal{O}}\left({\bar{\gamma}}^{-1}\right)$.
###### Proof:
The proof is given in Appendix B. ∎
###### Remark 4.
The results in Theorem 4 suggest that the diversity order of the MST-EMI
without CSIT is given by ${\mathcal{G}}_{\rm{d}}=1$, which is the same as that
of the SST-EMI without CSIT.
### IV-B MST With CSIT
In this case, we consider two main precoding techniques.
#### IV-B1 MRT Precoding
By MRT precoding, we have
$\displaystyle{\mathbf{P}}=\left[{\mathsf{tr}}\left({\mathbf{H}}^{\mathsf{H}}{\mathbf{H}}\right)\right]^{-1/2}{\mathbf{H}}^{\mathsf{H}}\triangleq{\mathbf{P}}_{\rm{mrt}}\in{\mathbbmss{C}}^{N_{\rm{t}}\times
N_{\rm{r}}}$ (15)
and $N=N_{\rm{r}}$, which yields
${\mathbf{y}}=\sqrt{\bar{\gamma}}{\mathbf{G}}{\mathbf{x}}+{\mathbf{n}}$ with
${\mathbf{G}}=\left\|{\mathbf{h}}_{\rm{t}}\right\|\frac{{\mathbf{h}}_{\rm{r}}}{\left\|{\mathbf{h}}_{\rm{r}}\right\|}{\mathbf{h}}_{\rm{r}}^{\mathsf{H}}\in{\mathbbmss{C}}^{N_{\rm{r}}\times
N_{\rm{r}}}$. The high-SNR EMI in this case is characterized as follows.
###### Theorem 5.
Let $\bar{\gamma}\rightarrow\infty$. Then, the MST-EMI achieved by the MRT
precoding satisfies ${\mathscr{I}}_{M}^{\mathcal{Y}}\simeq
H_{{\mathbf{q}}_{{\mathcal{Y}}}}-{\mathcal{O}}\left({\bar{\gamma}}^{-1}\right)$.
###### Proof:
Similar to the proof of Theorem 4. ∎
It is worth noting that the diversity order achieved by the MRT precoding is
${\mathcal{G}}_{\rm{d}}=1$, which is the same as that achieved without CSIT.
To address this issue, we proceed to the max-$d_{\min}$ precoding scheme which
enhances the diversity order.
#### IV-B2 Max-$d_{\min}$ Precoding
Optimizing $\texttt{I}\left({\bar{\gamma}};{\mathbf{H}}{\mathbf{P}}\right)$ at
high-SNRs is equivalent to maximizing the minimum distance [14]
$\displaystyle d_{\min}\triangleq\min\nolimits_{g\neq
g^{\prime}}\left\|{\mathbf{H}}{\mathbf{P}}{\mathbf{b}}_{g,g^{\prime}}\right\|=\left\|{\mathbf{h}}_{\rm{r}}\right\|\min\nolimits_{g\neq
g^{\prime}}\left|{\mathbf{h}}_{\rm{t}}^{\mathsf{H}}{\mathbf{P}}{\mathbf{b}}_{g,g^{\prime}}\right|.$
(16)
The resulting max-$d_{\min}$ precoder is given by
$\displaystyle\mathbf{P}_{\star}=\operatorname*{argmax}\nolimits_{{\mathbf{P}}\in{\mathbbmss{C}}^{N_{\rm{t}}\times
N},{\mathsf{tr}}\\{{\mathbf{P}}{\mathbf{P}}^{\mathsf{H}}\\}=1}d_{\min}.$ (17)
Yet, finding a closed-form solution to $\mathbf{P}_{\star}$ is a challenging
task, which makes the subsequent analyses intractable. As a compromise, we
propose a heuristic precoding design by exploiting the structure of
$d_{\min}$. Specifically, by observing (16), we design the heuristic
max-$d_{\min}$ precoder as a rank-one matrix that satisfies
${\mathbf{P}}_{\rm{mm}}={\left\|{\mathbf{h}}_{\rm{t}}\right\|}^{-1}{\mathbf{h}}_{\rm{t}}{\mathbf{d}}_{\star}^{\mathsf{H}}\in{\mathbbmss{C}}^{N_{\rm{t}}\times
N}$ with
$\displaystyle{\mathbf{d}}_{\star}={\operatorname*{argmax}}_{{\mathbf{x}}\in{{\mathbbmss{C}}^{N\times
1}},\left\|{\mathbf{x}}\right\|=1}\min\nolimits_{g\neq
g^{\prime}}\left|{\mathbf{x}}^{\mathsf{H}}{\mathbf{b}}_{g,g^{\prime}}\right|.$
(18)
Note that ${\mathbf{d}}_{\star}$ can be obtained via an off-line exhaustive
search, since it does not depend on $\mathbf{H}$. The corresponding high-SNR
MST-EMI
is characterized in the following theorem.
###### Theorem 6.
Let $\bar{\gamma}\rightarrow\infty$. Then, the MST-EMI achieved by the
heuristic max-$d_{\min}$ precoder can be characterized as
${\mathscr{I}}_{M}^{\mathcal{Y}}\simeq
H_{{\mathbf{q}}_{{\mathcal{Y}}}}-{\mathcal{O}}\left({\bar{\gamma}}^{-{\mathcal{G}}_{\rm{d}}}\right)$
with
${\mathcal{G}}_{\rm{d}}=\min\left\\{N_{\rm{t}}m_{\rm{t}},N_{\rm{r}}m_{\rm{r}}\right\\}$.
###### Proof:
Similar to the proof of Theorem 4. ∎
###### Remark 5.
In contrast to ${\mathbf{P}}_{{\rm{no}}}$ and ${\mathbf{P}}_{{\rm{mrt}}}$, the
diversity order achieved by ${\mathbf{P}}_{{\rm{mm}}}$ can be improved by
increasing $N_{\rm{r}}$ and $N_{\rm{t}}$, which highlights the superiority of
the max-$d_{\min}$ precoder.
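Since (18) admits no closed form, the off-line search can be as simple as sampling unit vectors and keeping the best candidate. The sketch below is our own heuristic (it reuses the difference vectors `b` from the constellation snippet above); a finer grid or a local refinement could replace the random sampling:

```python
import numpy as np

def d_star_search(b, n_trials=50_000, rng=None):
    """Randomized approximation of Eq. (18): maximize
    min_{g != g'} |x^H b_{g,g'}| over unit vectors x in C^N."""
    rng = rng or np.random.default_rng(3)
    n = b.shape[1]
    best_x, best_val = None, -1.0
    for _ in range(n_trials):
        x = rng.normal(size=n) + 1j * rng.normal(size=n)
        x /= np.linalg.norm(x)
        val = np.min(np.abs(b @ x.conj()))  # |x^H b| for every pair (g, g')
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

d_star, d_min_val = d_star_search(b)  # b from the constellation snippet above
print(d_min_val)
```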
### IV-C Discussions on Keyhole Effect
Consider the Rayleigh fading model. Since it is challenging to obtain a
closed-form $\mathbf{P}$ that maximizes the MI when CSIT is available, we
only consider the case without CSIT. Using the approach taken in deriving
Theorem 4, we find that the high-SNR MST-EMI without CSIT in full-rank and
keyhole MIMO channels can be written as
${\mathscr{I}}_{M,{\rm{nk}}}^{\mathcal{Y}}\simeq
H_{{\mathbf{q}}_{{\mathcal{Y}}}}-{\mathcal{O}}\left({\bar{\gamma}}^{-N_{\rm{r}}}\right)$
and ${\mathscr{I}}_{M,{\rm{k}}}^{\mathcal{Y}}\simeq
H_{{\mathbf{q}}_{{\mathcal{Y}}}}-{\mathcal{O}}\left({\bar{\gamma}}^{-1}\right)$,
respectively.
###### Remark 6.
Comparing ${\mathscr{I}}_{M,{\rm{nk}}}^{\mathcal{Y}}$ with
${\mathscr{I}}_{M,{\rm{k}}}^{\mathcal{Y}}$, we find that the keyhole effect
can reduce the diversity order of the MST-EMI. Similar conclusions hold for
other rank-deficient MIMO channels with finite-alphabet and Gaussian inputs
under MST. We skip further details for the sake of brevity.
## V Numerical Results
We now validate our analyses through numerical simulations. Here, we set
$N_{\rm{t}}=N_{\rm{r}}=2$, $m_{\rm{t}}=2$, $m_{\rm{r}}=3$, $p_{g}=\frac{1}{M}$
for $g\in\\{1,\cdots,M\\}$, and $q_{g}=\frac{1}{M^{N}}$ for
$g\in\\{1,\cdots,M^{N}\\}$. As a result, we have
$H_{{\mathbf{p}}_{\mathcal{X}}}=\log_{2}{M}$ and
$H_{{\mathbf{q}}_{\mathcal{Y}}}=N\log_{2}{M}$. The simulation results are
gathered via $10^{6}$ channel realizations.
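For readers who wish to reproduce the curves, a minimal end-to-end sketch of our own (reusing `keyhole_channel` and `mi_finite_alphabet` from the earlier snippets, and far fewer realizations than the $10^{6}$ used here) estimates the SST-EMI without CSIT as follows:

```python
import numpy as np

def sst_emi_no_csit(gamma_bar, const, n_channels=2_000, rng=None):
    """Ergodic MI of SST without CSIT: average I(S1 * gamma_bar / N_t) over
    channel draws, with S1 = ||h_r||^2 |h_t^H 1|^2 (uniform beamforming)."""
    rng = rng or np.random.default_rng(4)
    n_r, n_t, m_r, m_t = 2, 2, 3, 2
    acc = 0.0
    for _ in range(n_channels):
        _, h_r, h_t = keyhole_channel(n_r, n_t, m_r, m_t, rng)
        s1 = np.sum(np.abs(h_r) ** 2) * np.abs(h_t.sum()) ** 2
        acc += mi_finite_alphabet(gamma_bar * s1 / n_t, const, n_mc=2_000, rng=rng)
    return acc / n_channels

qam4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
for snr_db in (0, 10, 20):
    print(snr_db, sst_emi_no_csit(10 ** (snr_db / 10), qam4))  # approaches 2 bits
```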
#### SST-EMI
Figure 2(a) shows the SST-EMI achieved by $M$-QAM signals for
$M\in\\{4,16,64,256\\}$ against the SNR, where the analytical EMI (denoted by
solid lines) is calculated by (10) or (13) by setting $V=200$. As Figure 2(a)
shows, the analytical results closely track the simulations (denoted by
symbols). This verifies the accuracy of (10) and (13). For comparison, we also
plot the EMI achieved by Gaussian signaling in Figure 2(a). As shown, the EMI
of Gaussian inputs grows unboundedly as $\bar{\gamma}$ increases, whereas the
EMI of finite-alphabet inputs converges to the entropy of the input, in the
large limit of $\bar{\gamma}$. Moreover, we observe that the EMI with CSIT is
higher than that without CSIT (denoted by NCSIT). By Remark 1, the rate of the
EMI (${{{\mathcal{I}}}}_{M}^{\mathcal{X}}$) converging to
$H_{{\mathbf{p}}_{\mathcal{X}}}$ equals the rate of
${\mathcal{I}}_{M}^{\rm{con}}=H_{{\mathbf{p}}_{\mathcal{X}}}-{{{\mathcal{I}}}}_{M}^{\mathcal{X}}$
converging to zero. To show this ROC, we plot ${\mathcal{I}}_{M}^{\rm{con}}$
versus $\bar{\gamma}$ in Figure 2(b). As shown, the derived asymptotic results
almost perfectly match the numerical results in the high-SNR regime. This
means that the diversity order derived above is tight. It is
further seen that the EMI with CSIT yields a faster ROC (or a higher diversity
order) than that without CSIT. This agrees with the conclusion in Remark 2.
Figure 2: EMI of single-stream transmission. (a) Explicit results; (b) asymptotic results.
Figure 3: EMI of multi-stream transmission with $N=2$. (a) Explicit results; (b) asymptotic results.
#### MST-EMI
We now turn to the MST-EMI. Figure 3(a) compares the MST-EMI and SST-EMI achieved
by 4-QAM and Gaussian signals. In both cases of with and without CSIT, the
MST-EMI is higher than the SST-EMI and the Gaussian input achieves a higher
EMI than finite-alphabet inputs. We further observe that the max-$d_{\min}$
precoding yields virtually the same EMI as the MRT precoder in the low-SNR
regime but outperforms it in the high-SNR regime. To show the ROC
of the EMI, we plot
${\mathcal{I}}_{M}^{\rm{con}}=H_{{\mathbf{p}}_{\mathcal{X}}}-{{{\mathcal{I}}}}_{M}^{\mathcal{X}}$
(for SST) and
${\mathcal{I}}_{M}^{\rm{con}}=H_{{\mathbf{q}}_{\mathcal{Y}}}-{\mathscr{I}}_{M}^{\mathcal{Y}}$
(for MST) versus $\bar{\gamma}$ in Figure 3(b). The curves for
${\bar{\gamma}}^{{-{\mathcal{G}}_{\rm{d}}}}$ are further provided to
demonstrate the achievable diversity order. In the high-SNR regime, the curves
for ${\mathcal{I}}_{M}^{\rm{con}}$ are parallel to
${\bar{\gamma}}^{{-{\mathcal{G}}_{\rm{d}}}}$. This indicates that the derived
achievable diversity order is tight. Moreover, as Figure 3(b) shows, the
max-$d_{\min}$ precoder yields a faster ROC (or a higher diversity order) than
the MRT precoder, which is consistent with the conclusion in Remark 5.
## VI Conclusion
For keyhole MIMO channels with finite-alphabet inputs, irrespective of the
number of streams, theoretical analyses indicate that the ROC of the EMI is
determined by the array gain and the diversity order. It is further found that
the keyhole effect can reduce the diversity order of the EMI achieved by
finite-alphabet inputs.
## Appendix A Proof of Theorem 2
To facilitate the derivation, we rewrite the EMI as
$\displaystyle{\mathcal{I}}_{M}^{\mathcal{X}}\\!=\\!\left.I_{M}^{\mathcal{X}}\left(t\right)F_{S_{1}}\left(\frac{N_{\rm{t}}t}{\bar{\gamma}}\right)\right|_{0}^{\infty}\\!-\\!\int_{0}^{\infty}\\!\\!F_{S_{1}}\left(\frac{N_{\rm{t}}t}{\bar{\gamma}}\right){\rm
d}I_{M}^{\mathcal{X}}\left(t\right)$ (19)
with $F_{S_{1}}\left(\cdot\right)$ denoting the cumulative distribution
function of $S_{1}$. According to [8], we rewrite
${\mathcal{I}}_{M}^{\mathcal{X}}$ as
$\displaystyle{\mathcal{I}}_{M}^{\mathcal{X}}\\!=\\!H_{{\mathbf{p}}_{\mathcal{X}}}\\!-\\!\int_{0}^{\infty}\int_{0}^{{N_{\rm{t}}t}/{\bar{\gamma}}}f_{S_{1}}\left(x\right){\rm{d}}x\frac{{\rm{mmse}}_{M}^{\mathcal{X}}\left(t\right)}{\ln{2}}{\rm
d}t,$ (20)
where ${\rm{mmse}}_{M}^{\mathcal{X}}\left(\gamma\right)=\frac{{\rm
d}I_{M}^{\mathcal{X}}\left(\gamma\right)}{{\rm d}\gamma}\ln{2}$ is the MMSE in
estimating $X$ in (5) by observing $Y$ [8]. When
$\bar{\gamma}\rightarrow\infty$, we have $\frac{1}{\bar{\gamma}}\rightarrow
0$, which, together with the facts that
$K_{\nu}\left(z\right)=K_{-\nu}\left(z\right)$ [10, Eq. (10.27.3)] and
$\lim_{z\rightarrow
0}K_{\nu}\left(z\right)=\frac{1}{2}\Gamma\left(\nu\right)\left(\frac{1}{2}z\right)^{-\nu}$
($\nu>0$) [10, Eq. (10.30.2)], yields
$\lim_{\bar{\gamma}\rightarrow\infty}{\mathcal{I}}_{M}^{\mathcal{X}}=\dot{\mathcal{I}}_{M}^{\mathcal{X}}$,
where
$\displaystyle\dot{\mathcal{I}}_{M}^{\mathcal{X}}\triangleq
H_{{\mathbf{p}}_{\mathcal{X}}}\\!-\\!{\mathcal{F}}\\!\left\langle\\!{\frac{\Gamma\left(\left|N_{\rm{r}}m_{\rm{r}}-h-1\right|\right)\hat{\mathcal{M}}\left(\bar{h}+1\right)}{\Gamma\left(N_{\rm{r}}m_{\rm{r}}\right)\bar{h}\left({4U_{N_{\rm{t}}}\bar{\gamma}}/({N_{\rm{t}}m_{\rm{r}})}\right)^{\bar{h}}\ln{2}}\\!}\right\rangle$
(21)
and $\bar{h}\triangleq\min\left\\{N_{\rm{r}}m_{\rm{r}},h+1\right\\}$. Then, we
introduce the following two lemmas for further discussion.
###### Lemma 4.
Given the constellation
${\mathcal{X}}=\left\\{\mathsf{x}_{g}\right\\}_{g=1}^{M}$, the MMSE function
satisfies
$\lim_{\gamma\rightarrow\infty}{\rm{mmse}}_{M}^{\mathcal{X}}\left(\gamma\right)={\mathcal{O}}(\gamma^{-\frac{1}{2}}{\rm
e}^{-\frac{\gamma}{8}d_{{\mathcal{X}},{\min}}^{2}})$, where
$d_{\mathcal{X},\min}\triangleq\min_{g\neq
g^{\prime}}\left|{\mathsf{x}_{g}}-{\mathsf{x}_{g^{\prime}}}\right|$ [16].
###### Lemma 5.
If $p\left(t\right)$ is ${\mathcal{O}}\left(t^{a}\right)$ as $t\rightarrow
0^{+}$ and ${\mathcal{O}}\left(t^{b}\right)$ as $t\rightarrow+\infty$, then
$\left|{\mathcal{M}}\left[p\left(t\right);z\right]\right|<\infty$ when
$-a<z<-b$ [12].
Particularly, $\lim_{t\rightarrow
0^{+}}{\rm{mmse}}_{M}^{\mathcal{X}}\left(t\right)=1$ [8], which together with
Lemma 4, suggests that ${\rm{mmse}}_{M}^{\mathcal{X}}\left(t\right)$ is
${\mathcal{O}}\left(1\right)$ as $t\rightarrow 0^{+}$ and
${\mathcal{O}}\left(t^{-\infty}\right)$ as $t\rightarrow\infty$. Using this
fact and Lemma 5, we find that $|\hat{\mathcal{M}}\left(x\right)|<\infty$
holds for $0<x<\infty$, which in combination with the fact that
${\rm{mmse}}_{M}^{\mathcal{X}}\left(x\right)>0$ ($x>0$) [8], suggests that
$\hat{\mathcal{M}}\left(x\right)\in\left(0,\infty\right)$ holds for
$0<x<\infty$. It follows from
$\bar{h}=\min\left\\{N_{\rm{r}}m_{\rm{r}},h+1\right\\}>0$ that
$\hat{\mathcal{M}}\left(\bar{h}+1\right)\in\left(0,\infty\right)$. As
previously assumed, $N_{\rm{r}}m_{\rm{r}}\neq h+1$,
$m_{\rm{r}}\geq\frac{1}{2}$, and $N_{\rm{r}}>1$, which yields
$N_{\rm{r}}m_{\rm{r}}>1$. We then neglect the higher order terms in (21) to
derive the asymptotic EMI as
${\mathcal{I}}_{M}^{\mathcal{X}}\simeq{H_{{\mathbf{p}}_{\mathcal{X}}}}-{\mathcal{G}}_{\rm{a}}^{-1}{\bar{\gamma}}^{-1}$,
where ${\mathcal{G}}_{\rm{a}}^{-1}$ is shown in (12).
## Appendix B Proof of Theorem 4
###### Proof:
The MI satisfies [14]
$\displaystyle\texttt{I}\left({\bar{\gamma}};{\mathbf{H}}{\mathbf{P}}\right)=L\log_{2}{M}-\frac{1}{\ln{2}}\int_{{\bar{\gamma}}}^{\infty}{\text{mmse}}_{M}^{\mathcal{X}}\left(x;{\mathbf{H}}{\mathbf{P}}\right){\rm{d}}x,$
(22)
where
${\text{mmse}}_{M}^{\mathcal{X}}\left(\bar{\gamma};{\mathbf{H}}{\mathbf{P}}\right)$
denotes the MMSE in estimating $\mathbf{x}$ in (14) by observing
$\mathbf{y}$, and $L$ denotes the number of data streams (here
$L=N=N_{\rm{t}}$ since ${\mathbf{P}}={\mathbf{P}}_{\rm{no}}$).
Moreover, for any MIMO channels, the MMSE is bounded by [14]
$\displaystyle\underline{\text{mmse}}_{M}^{\mathcal{X}}\left(\bar{\gamma};{\mathbf{H}}{\mathbf{P}}\right)\\!\leq\\!{\text{mmse}}_{M}^{\mathcal{X}}\left(\bar{\gamma};{\mathbf{H}}{\mathbf{P}}\right)\\!\leq\\!\overline{\text{mmse}}_{M}^{\mathcal{X}}\left(\bar{\gamma};{\mathbf{H}}{\mathbf{P}}\right).$
(23)
Defining $f_{l}\left(x\right)\triangleq
1-\frac{1}{\sqrt{\pi}}\int_{-\infty}^{+\infty}\tanh\left(\sqrt{x}a\right){\emph{e}}^{-{\left(a-\frac{\sqrt{x}}{2}\right)^{2}}}{\rm{d}}a$,
$f_{u}\left(x\right)\triangleq Q\left(\sqrt{\frac{{x}}{2}}\right)$ with
$Q\left(x\right)\triangleq\frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}{\emph{e}}^{-u^{2}/2}{\rm{d}}u$
being the Q-function, and
$d_{i,k}\triangleq\left\|{\mathbf{H}}{\mathbf{P}}{\mathbf{b}}_{i,k}\right\|^{2}$,
we have [14, Appendix III]
$\displaystyle\underline{\text{mmse}}_{M}^{\mathcal{X}}\left(\bar{\gamma};{\mathbf{H}}{\mathbf{P}}\right)=\sum\nolimits_{i,k=1,k\neq
i}^{M^{L}}\frac{d_{i,k}}{4M^{L}}\frac{f_{l}\left(\bar{\gamma}d_{i,k}\right)}{M^{L}-1},$
(24)
$\displaystyle\overline{\text{mmse}}_{M}^{\mathcal{X}}\left(\bar{\gamma};{\mathbf{H}}{\mathbf{P}}\right)=\sum\nolimits_{i,k=1,k\neq
i}^{M^{L}}\frac{d_{i,k}}{M^{L}}f_{u}\left(\bar{\gamma}d_{i,k}\right).$ (25)
Therefore, the EMI is upper bounded by
$\displaystyle{\mathscr{I}}_{M}^{\mathcal{X}}\leq
L\log_{2}{M}-\frac{1}{\ln{2}}\int_{{\bar{\gamma}}}^{\infty}\underline{\text{mmse}}_{M}^{\mathcal{X}}\left(x;{\mathbf{H}}{\mathbf{P}}\right){\rm{d}}x\triangleq\overline{\mathscr{I}}_{M}^{\mathcal{X}}.$
(26)
After some manipulations, we can get
$\displaystyle\overline{\mathscr{I}}_{M}^{\mathcal{X}}=L\log_{2}{M}-\sum\nolimits_{i,k=1,k\neq
i}^{M^{L}}\\!\frac{\underline{\mathscr{I}}_{M,i,k}^{\mathcal{X}}\log_{2}{\emph{e}}}{4\left(M^{L}-1\right)M^{L}},$
(27)
where
$\underline{\mathscr{I}}_{M,i,k}^{\mathcal{X}}\triangleq\int_{0}^{\infty}\int_{{\bar{\gamma}}}^{\infty}{y}f_{l}\left(xy\right)f_{i,k}\left(y\right){\rm{d}}x{\rm{d}}y$
with $f_{i,k}\left(y\right)$ denoting the PDF of $d_{i,k}$. It follows that
$\displaystyle\underline{\mathscr{I}}_{M,i,k}^{\mathcal{X}}=\int_{0}^{\infty}\frac{1}{\bar{\gamma}}f_{i,k}\left(\frac{y}{\bar{\gamma}}\right)\int_{y}^{\infty}f_{l}\left(x\right){\rm{d}}x{\rm{d}}y.$
(28)
When ${\mathbf{P}}=1/\sqrt{N_{\text{t}}}{\mathbf{I}}_{N_{\text{t}}}$, we have
$d_{i,k}=\left\|{\mathbf{h}}_{\text{r}}\right\|^{2}\left|1/\sqrt{N_{\text{t}}}{\mathbf{h}}_{\text{t}}^{\mathsf{H}}{\mathbf{b}}_{i,k}\right|^{2}$,
whose PDF presents the same form as (9) by setting
$U_{N_{\text{t}}}=\sum_{a=1}^{N_{\text{t}}}\frac{\left|b_{i,k,a}\right|}{4m_{\text{t}}{N_{\text{t}}}}$
and
$Y_{N_{\text{t}}}=\prod_{a=1}^{N_{\text{t}}}\left(\frac{\left|b_{i,k,a}\right|}{4m_{\text{t}}{N_{\text{t}}}}\right)^{i_{a}}$.
Following steps similar to those outlined in Appendix A, we find that, as
$\bar{\gamma}\rightarrow\infty$,
$\displaystyle\underline{\mathscr{I}}_{M,i,k}^{\mathcal{X}}\\!\simeq\\!\sum_{i_{1}=0}^{m_{\text{t}}-1}\\!\\!\cdots\\!\\!\sum_{i_{N_{\text{t}}}=0}^{m_{\text{t}}-1}\\!\frac{S_{N_{\text{t}}}!Y_{N_{\text{t}}}m_{\text{r}}{\mathcal{M}}_{l}\left(1\right){\bar{\gamma}}^{-1}}{4\left(N_{\text{r}}m_{\text{r}}-1\right)X_{N_{\text{t}}}U_{N_{\text{t}}}^{S_{N_{\text{t}}}+1}}$
(29)
with
${\mathcal{M}}_{l}\left(t\right)\\!\triangleq\\!{\mathcal{M}}\\!\left[\int_{y}^{\infty}\\!f_{l}\left(x\right){\rm{d}}x;{t}\right]$.
Define
$\underline{f}\left(y\right)=\int_{y}^{\infty}f_{l}\left(x\right){\rm{d}}x$,
and we have $\lim_{x\rightarrow 0^{+}}f_{l}\left(x\right)=0$ and
$\lim_{x\rightarrow\infty}f_{l}\left(x\right)={\mathcal{O}}\left({\emph{e}}^{-\frac{x}{4}}x^{-\frac{1}{2}}\right)$
[15, Theorem 3, Appendix B], indicating that $f_{l}\left(x\right)$ is
${\mathcal{O}}\left(x^{a}\right)$ ($a\geq 0$) as $x\rightarrow 0^{+}$ and
${\mathcal{O}}\left(x^{-\infty}\right)$ as $x\rightarrow\infty$. It follows
from this fact and Lemma 5 that $\lim_{y\rightarrow
0^{+}}\underline{f}\left(y\right)=\int_{0}^{\infty}f_{l}\left(x\right){\rm{d}}x={\mathcal{O}}\left(y^{0}\right)\in\left(0,\infty\right)$.
Moreover, based on L'Hôpital's rule and [15, Appendix B], we can show that
$\underline{f}\left(y\right)={\mathcal{O}}\left({\emph{e}}^{-\frac{y}{4}}y^{-\frac{1}{2}}\right)$ as $y\rightarrow\infty$.
By repeatedly applying Lemma 5, we find that
${\mathcal{M}}_{l}\left(1\right)\in\left(0,\infty\right)$ and thus
$\underline{\mathscr{I}}_{M,i,k}^{\mathcal{X}}={\mathcal{O}}\left({\bar{\gamma}}^{-1}\right)$.
It follows that
$\displaystyle\lim_{\bar{\gamma}\rightarrow\infty}{\mathscr{I}}_{M}^{\mathcal{X}}\leq\lim_{\bar{\gamma}\rightarrow\infty}\overline{\mathscr{I}}_{M}^{\mathcal{X}}=N_{\text{t}}\log_{2}{M}-{\mathcal{O}}\left({\bar{\gamma}}^{-1}\right).$
(30)
We now turn to the EMI’s lower bound, given by
$\displaystyle{\mathscr{I}}_{M}^{\mathcal{X}}\geq
L\log_{2}{M}-\sum\nolimits_{i,k=1,k\neq
i}^{M^{L}}\\!\frac{\overline{\mathscr{I}}_{M,i,k}^{\mathcal{X}}}{M^{L}\ln{2}}\triangleq\underline{\mathscr{I}}_{M}^{\mathcal{X}}$
(31)
with
$\overline{\mathscr{I}}_{M,i,k}^{\mathcal{X}}\triangleq\int_{0}^{\infty}\int_{{\bar{\gamma}}}^{\infty}{y}f_{u}\left(xy\right)f_{i,k}\left(y\right){\rm{d}}x{\rm{d}}y$.
We find that when $\bar{\gamma}\rightarrow\infty$,
$\displaystyle\overline{\mathscr{I}}_{M,i,k}^{\mathcal{X}}\\!\simeq\\!\sum_{i_{1}=0}^{m_{\text{t}}-1}\\!\\!\cdots\\!\\!\sum_{i_{N_{\text{t}}}=0}^{m_{\text{t}}-1}\\!\frac{S_{N_{\text{t}}}!Y_{N_{\text{t}}}m_{\text{r}}{\mathcal{M}}_{u}\left(1\right)U_{N_{\text{t}}}^{-S_{N_{\text{t}}}-1}{\bar{\gamma}}^{-1}}{4\left(N_{\text{r}}m_{\text{r}}-1\right)\prod_{k=1}^{N_{\text{t}}}\left(\frac{\left(i_{k}!\right)^{2}}{\left(1-m_{\text{t}}\right)_{i_{k}}}\right)},$
(32)
where
${\mathcal{M}}_{u}\left(t\right)\triangleq{\mathcal{M}}\left[\int_{y}^{\infty}f_{u}\left(x\right){\rm{d}}x;{t}\right]$.
Then, following steps similar to those used to prove
${\mathcal{M}}_{l}\left(1\right)\in\left(0,\infty\right)$, we can show that
${\mathcal{M}}_{u}\left(1\right)\in\left(0,\infty\right)$, and thus
$\displaystyle\lim_{\bar{\gamma}\rightarrow\infty}{\mathscr{I}}_{M}^{\mathcal{X}}\geq\lim_{\bar{\gamma}\rightarrow\infty}\underline{\mathscr{I}}_{M}^{\mathcal{X}}=N_{\text{t}}\log_{2}{M}-{\mathcal{O}}\left({\bar{\gamma}}^{-1}\right),$
(33)
which, together with (30), yields
$\lim_{\bar{\gamma}\rightarrow\infty}{\mathscr{I}}_{M}^{\mathcal{X}}=N_{\text{t}}\log_{2}{M}-{\mathcal{O}}\left({\bar{\gamma}}^{-1}\right)$.
∎
## References
* [1] D. Chizhik _et al._ , “Keyholes, correlations, and capacities of multielement transmit and receive antennas,” _IEEE Trans. Wireless Commun._ , vol. 1, no. 2, pp. 361–368, Apr. 2002.
* [2] A. Maaref _et al._ , “Impact of spatial fading correlation and keyhole on the capacity of MIMO systems with transmitter and receiver CSI,” _IEEE Trans. Wireless Commun._ , vol. 7, no. 8, pp. 3218–3229, Aug. 2008.
* [3] P. Almers _et al._ , “Keyhole effect in MIMO wireless channels: Measurements and theory,” _IEEE Trans. Wireless Commun._ , vol. 5, no. 12, pp. 3596–3604, Dec. 2006.
* [4] A. Müller _et al._ , “Ergodic capacity and information outage probability of MIMO Nakagami-$m$ keyhole channels with general branch parameters,” in _Proc. IEEE WCNC_ , Mar. 2007, pp. 2184–2189.
* [5] G. Akemann _et al._ , “Products of rectangular random matrices: Singular values and progressive scattering,” _Phys. Rev. E_ , _Stat. Nonlinear Soft Matter Phys._ , vol. 88, no. 5, Nov. 2013.
* [6] H. Q. Ngo and E. G. Larsson, “No downlink pilots are needed in TDD massive MIMO,” _IEEE Trans. Wireless Commun._ , vol. 16, no. 5, pp. 2921–2935, May 2017.
* [7] H. Zhang _et al._ , “Performance analysis of MIMO-HARQ assisted V2V communications with keyhole effect,” _IEEE Trans. Commun._ , vol. 70, no. 5, pp. 3034–3046, May 2022.
* [8] R. W. Heath, Jr., and A. Lozano, _Foundations of MIMO Communication_ , Cambridge, U.K.: Cambridge Univ. Press, 2018.
* [9] C. Ouyang _et al._ , “Approximating ergodic mutual information for mixture gamma fading channels with discrete inputs,” _IEEE Commun. Lett._ , vol. 24, no. 4, pp. 734–738, Apr. 2020.
* [10] R. B. Paris, _NIST Handbook of Mathematical Functions_ , Cambridge, U.K.: Cambridge Univ. Press, 2010.
* [11] G. K. Karagiannidis, “A closed-form solution for the distribution of the sum of Nakagami-$m$ random phase vectors,” _IEEE Commun. Lett._ , vol. 10, no. 12, pp. 828–830, Dec. 2006.
* [12] P. Flajolet _et al._ , “Mellin transforms and asymptotics: Harmonic sums,” _Theoretical Computer Science_ , vol. 144, no. 1–2, pp. 3–58, 1995.
* [13] G. Levin and S. Loyka, “From multi-keyholes to measure of correlation and power imbalance in MIMO channels: Outage capacity analysis,” _IEEE Trans. Inf. Theory_ , vol. 57, no. 6, pp. 3515–3529, Jun. 2011.
* [14] F. Pérez-Cruz, M. R. D. Rodrigues, and S. Verdú, “MIMO Gaussian channels with arbitrary inputs: Optimal precoding and power allocation,” _IEEE Trans. Inf. Theory_ , vol. 56, no. 3, pp. 1070–1084, Mar. 2010.
* [15] A. Lozano, A. M. Tulino, and S. Verdú, “Optimum power allocation for parallel Gaussian channels with arbitrary input distributions,” _IEEE Trans. Inf. Theory_ , vol. 52, no. 7, pp. 3033–3051, Jul. 2006.
* [16] A. Alvarado _et al._ , “High-SNR asymptotics of mutual information for discrete constellations with applications to BICM,” _IEEE Trans. Inf. Theory_ , vol. 60, no. 2, pp. 1061–1076, Feb. 2014.
# Safe Control Design for Unknown Nonlinear Systems with Koopman-based Fixed-Time Identification
Mitchell Black and Dimitra Panagou
Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI 48109, USA (e-mail: mblackjr@umich.edu).
Department of Robotics and Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI 48109, USA (e-mail: <EMAIL_ADDRESS>).
###### Abstract
We consider the problem of safe control design for a class of nonlinear,
control-affine systems subject to an unknown, additive, nonlinear disturbance.
Leveraging recent advancements in the application of Koopman operator theory
to the field of system identification and control, we introduce a novel fixed-
time identification scheme for the infinitesimal generator of the infinite-
dimensional, but notably linear, Koopman dynamical system analogous to the
nonlinear system of interest. That is, we derive a parameter adaptation law
that allows us to recover the unknown, residual nonlinear dynamics in the
system within a finite-time independent of an initial estimate. We then use
properties of fixed-time stability to derive an error bound on the residual
vector field estimation error as an explicit function of time, which allows us
to synthesize a provably safe controller using control barrier function based
methods. We conduct a quadrotor-inspired case study in support of our proposed
method, in which we show that safe trajectory tracking is achieved despite
unknown, nonlinear dynamics.
###### keywords:
Control of constrained systems; identification for control; robust adaptive
control; fixed-time stability; nonlinear system identification.
## 1 Introduction
Recent advances in computing power and memory storage have ushered in an era
of estimation, identification, and control for autonomous systems dominated by
data-driven methods. For example, compared to the 74 kilobytes of memory
available on the United States National Aeronautics and Space Administration’s
(NASA) first lunar module computer, the gigabytes of memory used in many of
today’s data-driven approaches to dynamical system identification (e.g. deep
neural networks) have allowed engineers to create significantly more
expressive models. Though regression methods are widely used for linear system
identification, the field of identification for nonlinear systems is vast.
Popular approaches in recent years include classes of neural networks (NNs),
including deep NNs (e.g. Zancato and Chiuso (2021)) and recurrent NNs for
time-varying systems (Gonzalez and Yu (2018)), Gaussian processes (Frigola and
Rasmussen (2013)), and more recently the application of Koopman operator
theory (e.g. Mauroy and Goncalves (2020); Brunton et al. (2016); Klus et al.
(2020), among others), which introduces an infinite-dimensional but notably
linear representation of a nonlinear system on which traditional linear
identification approaches may be used.
Under Koopman theory there exists a linear Koopman dynamical system that
captures the dynamics of the original nonlinear system over an infinite-
dimensional space of scalar functions known as observables. Beginning with
Mauroy and Goncalves (2020), recent work has focused on using data-driven
approaches to approximate a finite-dimensional matrix representation of the
Koopman operator, which acts as a state-transition operator for the Koopman
dynamical system. In particular, extended dynamic mode decomposition (EDMD),
first introduced in Williams et al. (2015) has emerged as a popular tool for
carrying out such an approximation. The end result in many cases is a batch
estimate of either the Koopman matrix (i.e. in Bruder et al. (2021); Haseli
and Cortés (2021)) or its infinitesimal generator (Klus et al. (2020); Drmač
et al. (2021)) obtained by solving a least-squares regression problem.
Potential shortcomings of this class of approaches include slower response
times than e.g. recursive methods, and a lack of formal guarantees on the
approximation error bound, which may be particularly detrimental when used in
control design. In contrast, it has been shown by Black et al. (2022a) that
fixed-time stability in the context of recursive parameter identification
admits such a bound on the identification error as an explicit function of
time.
Finite- and fixed-time stability (FTS and FxTS) are stronger notions of
stability for equilibria of a dynamical system, each of which guarantees
convergence of the system trajectories to the origin within a finite time.
They have been used in the analysis of linear parameter identification schemes
by Ríos et al. (2017); Ortega et al. (2022), and synthesized for the purpose
of safe control design in Black et al. (2022a); Wang et al. (2022). The
benefit to recursive parameter identification in fixed-time, i.e. in a finite-
time independent of the initial condition, is the knowledge of an error bound
on the identification error as an explicit function of time. When synthesized
with a safe control law, this class of identification schemes yields less
conservative control solutions, as highlighted in Black et al. (2022a).
Control barrier functions (CBFs) have proven to be a useful tool for safe
control synthesis. As a model-based approach, however, it is critical that an
accurate system model be available in order to preserve forward invariance of
the set of safe states. Though robust CBF controllers can protect against
bounded disturbances to the system dynamics (e.g. Jankovic (2018); Black et
al. (2020)), the cost is conservatism. Various other approaches to safe
control have sought to adapt to the unknown residual dynamics (e.g. Taylor and
Ames (2020); Lopez et al. (2021)), or to learn their effects via data-driven
Koopman-based policies both online (Folkestad et al. (2020)) and offline
(Zinage and Bakolas (2022)). None of these methods, however, provide
guarantees on learning convergence time.
In this paper, we address this open problem by introducing a Koopman-based
identification scheme for safe control design that guarantees convergence
within a fixed-time for a class of nonlinear, control-affine systems subject
to an additive, nonlinear perturbation. We use knowledge of the bound on
convergence time to quantify the identification error as an explicit function
of time, the magnitude of which is leveraged to design a provably safe CBF-
based controller. We demonstrate the advantages of our proposed approach on a
trajectory tracking problem, and highlight that the identification and control
laws succeed in preserving safety of the system even in the presence of
measurement noise.
The rest of the paper is organized as follows. In Section 2 we introduce the
preliminaries and define the problem under consideration. Section 3 contains
our main result on fixed-time nonlinear system identification, which we use in
Section 4 to design a safe controller. We demonstrate the approach on a
numerical case study in Section 5, and conclude in Section 6 with directions
for future work.
## 2 Preliminaries and Problem Statement
In this paper, we use the following notation. $\mathbb{R}$ denotes the set of
real numbers. The ones matrix of size $n\times m$ is denoted
$\boldsymbol{1}_{n\times m}$. We use $\|\cdot\|$ to denote the Euclidean norm
and $\|\cdot\|_{\infty}$ to denote the sup norm. We denote the minimum and
maximum eigenvalue of a matrix $\boldsymbol{M}$ as
$\lambda_{min}(\boldsymbol{M})$ and $\lambda_{max}(\boldsymbol{M})$, and refer
to its $r$th singular value as $\sigma_{r}(\boldsymbol{M})$, to its nullspace as
$\mathcal{N}(\boldsymbol{M})$, and to its $i$th column as
$\mathrm{col}_{i}(\boldsymbol{M})$. The gradient operator is $\nabla$, and the
Lie derivative of a function $V:\mathbb{R}^{n}\rightarrow\mathbb{R}$ along a
vector field $f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ at a point
$x\in\mathbb{R}^{n}$ is denoted as $L_{f}V(x)\triangleq\frac{\partial
V}{\partial x}f(x)$.
Consider the following class of nonlinear, control-affine systems
$\dot{\boldsymbol{x}}=f(\boldsymbol{x}(t))+g(\boldsymbol{x}(t))\boldsymbol{u}(t)+d(\boldsymbol{x}(t)),\quad\boldsymbol{x}(0)=\boldsymbol{x}_{0},$
(1)
where $\boldsymbol{x}\in\mathcal{X}\subset\operatorname{\mathbb{R}}^{n}$ and
$\boldsymbol{u}\in\operatorname{\mathbb{R}}^{m}$ denote the state and control
input vectors, the drift vector field
$f:\operatorname{\mathbb{R}}^{n}\rightarrow\operatorname{\mathbb{R}}^{n}$ and
control matrix field
$g:\operatorname{\mathbb{R}}^{n}\rightarrow\operatorname{\mathbb{R}}^{n}\times\operatorname{\mathbb{R}}^{m}$
are known and continuous, and
$d:\operatorname{\mathbb{R}}^{n}\rightarrow\operatorname{\mathbb{R}}^{n}$ is
an unknown disturbance known to be continuous and to obey
$\|d(\boldsymbol{x})\|_{\infty}\leq D<\infty$ for all
$\boldsymbol{x}\in\mathcal{X}$. Consider also the following set of safe
states,
$S=\\{\boldsymbol{x}\in\mathcal{X}\;|\;h(\boldsymbol{x})\geq 0\\},$ (2)
for a continuously differentiable function
$h:\operatorname{\mathbb{R}}^{n}\rightarrow\operatorname{\mathbb{R}}$, where
the boundary and interior of $S$ are $\partial
S=\\{\boldsymbol{x}\in\operatorname{\mathbb{R}}^{n}\;|\;h(\boldsymbol{x})=0\\}$
and
$\textrm{int}(S)=\\{\boldsymbol{x}\in\operatorname{\mathbb{R}}^{n}\;|\;h(\boldsymbol{x})>0\\}$
respectively. The trajectories of (1) are said to be safe if the set $S$ is
forward-invariant, i.e. if $\boldsymbol{x}_{0}\in
S\implies\boldsymbol{x}(t)\in S,\forall t\geq 0$. The following lemma, known
as Nagumo’s Theorem, provides necessary and sufficient conditions for
rendering $S$ forward-invariant.
###### Lemma 1
(Blanchini (1999)) Suppose that $\boldsymbol{u}(t)$ is continuous such that
the closed-loop trajectories of (1) are uniquely determined in forward-time.
The set $S$ is forward-invariant if and only if
$\dot{h}=\frac{\partial
h(\boldsymbol{x})}{\partial\boldsymbol{x}}\dot{\boldsymbol{x}}\geq
0,\;\forall\boldsymbol{x}\in\partial S.$ (3)
In recent years, control barrier functions have emerged as a viable approach
for control design satisfying (3).
###### Definition 1
(Ames et al. (2017)) Given a set
$S\subseteq\mathcal{X}\subset\operatorname{\mathbb{R}}^{n}$ defined by (2) for
a continuously differentiable function
$h:\operatorname{\mathbb{R}}^{n}\rightarrow\operatorname{\mathbb{R}}$, the
function $h$ is a control barrier function (CBF) defined on the set
$\mathcal{X}$ if there exists a Lipschitz continuous class
$\mathcal{K}_{\infty}$ function
$\alpha:\operatorname{\mathbb{R}}\rightarrow\operatorname{\mathbb{R}}$ such
that
$\sup_{\boldsymbol{u}\in\operatorname{\mathbb{R}}^{m}}\dot{h}(\boldsymbol{x},\boldsymbol{u})\geq-\alpha(h(\boldsymbol{x})),$
(4)
for all $\boldsymbol{x}\in\mathcal{X}$.
We refer to (4) as the CBF condition, and observe that it constitutes
sufficiency for the satisfaction of (3). As such, any continuous control law
$\boldsymbol{u}(t)$ that 1) admits unique closed-loop trajectories of (1) in
forward-time and 2) satisfies (4) renders the trajectories of (1) safe.
Consider now that for the system (1) the CBF condition is
$\sup_{\boldsymbol{u}\in\operatorname{\mathbb{R}}^{m}}\left[L_{f}h(\boldsymbol{x})+L_{g}h(\boldsymbol{x})\boldsymbol{u}+L_{d}h(\boldsymbol{x})\right]\geq-\alpha(h(\boldsymbol{x})),$
where, without identification of $d(\boldsymbol{x})$, the precise value of
$L_{d}h(\boldsymbol{x})$ is unknown. It is known, however, that
$-b_{d}\leq L_{d}h(\boldsymbol{x})\leq b_{d},$
where $b_{d}=D\left|\frac{\partial
h(\boldsymbol{x})}{\partial\boldsymbol{x}}\right|\mathbf{1}_{n\times 1}$.
Under such circumstances, a robust-CBF may be used for safe control design.
###### Definition 2
(Jankovic (2018)) Given a set
$S\subseteq\mathcal{X}\subset\operatorname{\mathbb{R}}^{n}$ defined by (2) for
a continuously differentiable function
$h:\operatorname{\mathbb{R}}^{n}\rightarrow\operatorname{\mathbb{R}}$, the
function $h$ is a robust control barrier function (r-CBF) for the system (1)
defined on the set $\mathcal{X}$ if there exists a Lipschitz continuous class
$\mathcal{K}_{\infty}$ function
$\alpha:\operatorname{\mathbb{R}}\rightarrow\operatorname{\mathbb{R}}$ such
that
$\sup_{\boldsymbol{u}\in\operatorname{\mathbb{R}}^{m}}\left[L_{f}h(\boldsymbol{x})+L_{g}h(\boldsymbol{x})\boldsymbol{u}-b_{d}\right]\geq-\alpha(h(\boldsymbol{x})),$
(5)
for all $\boldsymbol{x}\in\mathcal{X}$.
Designing a controller to protect against the worst possible disturbance in
perpetuity, however, may lead to poor performance, especially if $D$ is large.
Recent work (e.g. Lopez et al. (2021); Black et al. (2022a)) has shown that
this may be mitigated by using an estimate of the unknown disturbance
$\hat{d}(\boldsymbol{x})$. Thus, we define the vector field estimation error
$\tilde{d}(\boldsymbol{x})$ as
$\tilde{d}(\boldsymbol{x})\coloneqq
d(\boldsymbol{x})-\hat{d}(\boldsymbol{x}).$
In Black et al. (2022a), it was shown under mild assumptions that if the
uncertain vector field is parameter-affine, i.e. if
$d(\boldsymbol{x})=\Delta(\boldsymbol{x})\boldsymbol{\theta}^{*},$
for some known, continuous, bounded regressor matrix
$\Delta:\mathcal{X}\rightarrow\operatorname{\mathbb{R}}^{n\times p}$ and
unknown, static, polytopic parameters
$\boldsymbol{\theta}^{*}\in\Theta\subset\operatorname{\mathbb{R}}^{p}$, then
the vector field estimation error may be driven to zero within a fixed time
using parameter adaptation, i.e.
$\|\Delta(\boldsymbol{x}(t))(\boldsymbol{\theta}^{*}-\hat{\boldsymbol{\theta}}(t))\|\rightarrow
0$ as $t\rightarrow T<\infty$, independent of $\hat{\boldsymbol{\theta}}(0)$.
We now review the notion of fixed-time stability.
### 2.1 Fixed-Time Parameter Identification
Consider a nonlinear, autonomous system of the form
$\dot{\boldsymbol{x}}=F(\boldsymbol{x}),\quad\boldsymbol{x}(0)=\boldsymbol{x}_{0},$
(6)
where
$F:\operatorname{\mathbb{R}}^{n}\rightarrow\operatorname{\mathbb{R}}^{n}$ is
continuous such that (6) admits a unique solution for all
$\boldsymbol{x}_{0}\in\operatorname{\mathbb{R}}^{n}$, the value of which at
time $t$ is denoted $\boldsymbol{\varphi}_{t}(\boldsymbol{x}_{0})$, and where
$F(0)=0$.
###### Definition 3
(Polyakov (2012)) The origin of (6) is fixed-time stable (FxTS) if it is
stable in the sense of Lyapunov and any solution
$\boldsymbol{\varphi}_{t}(\boldsymbol{x}_{0})$ of (6) reaches the origin
within a finite time $T$ independent of $\boldsymbol{x}_{0}$, i.e. $\exists
T<\infty$ such that $\boldsymbol{\varphi}_{t}(\boldsymbol{x}_{0})=0$ for all
$t\geq T$, $\forall\boldsymbol{x}_{0}\in\operatorname{\mathbb{R}}^{n}$.
In what follows, we review a fixed-time stable parameter adaptation law from
the literature.
###### Theorem 1
(Black et al. (2022a)) Consider a perturbed dynamical system of the form (1).
Suppose that the following hold:
1. i)
the unknown, additive dynamics are parameter-affine, i.e.
$d(\boldsymbol{x})=\Delta(\boldsymbol{x})\boldsymbol{\theta}^{*}$,
2. ii)
there exist a known matrix
$\boldsymbol{M}(t)\in\operatorname{\mathbb{R}}^{n\times p}$ and vector
$\boldsymbol{v}(t)\in\operatorname{\mathbb{R}}^{n}$ such that
$\boldsymbol{M}(t)(\boldsymbol{\theta}^{*}-\hat{\boldsymbol{\theta}}(t))=\boldsymbol{v}(t)$,
3. iii)
the nullspace of $\Delta(\boldsymbol{x}(t))$ is constant for all $t\leq T$,
i.e.
$\mathcal{N}(\Delta(\boldsymbol{x}(t)))=\mathcal{N}(\Delta(\boldsymbol{x}(0)))$,
$\forall t\leq T$,
where
$T=\frac{\mu\pi}{2k_{V}^{2}\sqrt{ab}},$ (7)
with $a,b>0$, $\mu>2$, and
$k_{V}=\sigma_{r}(\boldsymbol{M})\sqrt{2\lambda_{max}(\boldsymbol{\Gamma})},$
(8)
where $\boldsymbol{\Gamma}\in\mathbb{R}^{p\times p}$ is a constant, positive-
definite, gain matrix and $\sigma_{r}(\boldsymbol{M})>0$ denotes the smallest
nonzero singular value of $\boldsymbol{M}$ over the time interval. Then, under
the ensuing parameter adaptation law,
$\dot{\hat{\boldsymbol{\theta}}}=\boldsymbol{\Gamma}\boldsymbol{M}^{T}\boldsymbol{v}\left(a\|\boldsymbol{v}\|^{\frac{2}{\mu}}+\frac{b}{\|\boldsymbol{v}\|^{\frac{2}{\mu}}}\right),$
(9)
the estimated disturbance $\hat{d}(\boldsymbol{x}(t))$ converges to the true
disturbance $d(\boldsymbol{x}(t))$ within fixed-time $T$, i.e.
$\Delta(\boldsymbol{x}(t))\hat{\boldsymbol{\theta}}(t)\rightarrow\Delta(\boldsymbol{x}(t))\boldsymbol{\theta}^{*}$
as $t\rightarrow T$, and
$\Delta(\boldsymbol{x}(t))\hat{\boldsymbol{\theta}}(t)=\Delta(\boldsymbol{x}(t))\boldsymbol{\theta}^{*}$
for all $t\geq T$, independent of $\hat{\boldsymbol{\theta}}(0)$.
See (Black et al., 2022a, Proof of Theorem 3).
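For intuition, the adaptation law (9) is simple to implement recursively. The sketch below is a minimal illustration of ours (not the authors' implementation), assuming an explicit-Euler discretization with step $\mathrm{d}t$ and measurable $\boldsymbol{M}$ and $\boldsymbol{v}$; all function and variable names are hypothetical:

```python
import numpy as np

def adaptation_step(theta_hat, M, v, Gamma, a=1.0, b=1.0, mu=4.0, dt=1e-3):
    # One explicit-Euler step of the FxTS law (9):
    #   theta_hat_dot = Gamma M^T v (a ||v||^{2/mu} + b / ||v||^{2/mu}),  mu > 2
    nv = np.linalg.norm(v)
    if nv < 1e-12:  # estimate already consistent with the measurement
        return theta_hat
    gain = a * nv ** (2.0 / mu) + b / nv ** (2.0 / mu)
    return theta_hat + dt * (Gamma @ M.T @ v) * gain
```

Note the reciprocal term $b/\|\boldsymbol{v}\|^{2/\mu}$, which grows as the error shrinks; this is what accelerates convergence near the origin and yields a settling time independent of $\hat{\boldsymbol{\theta}}(0)$.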
Theorem 1 provides a framework for adapting parameter estimates
$\hat{\boldsymbol{\theta}}$ such that an unknown disturbance of the form
$d(\boldsymbol{x})=\Delta(\boldsymbol{x})\boldsymbol{\theta}^{*}$ is learned
within fixed-time. In reality, however, it is far more common for the unknown
vector field $d(\boldsymbol{x})$ to be nonlinear, which to this point has
precluded the use of (9) as a learning or adaptation strategy. By utilizing
Koopman operator theory, however, we can transform the problem of identifying
the nonlinear function $d$ into a linear, albeit infinite-dimensional,
identification problem, which with appropriate modifications permits the use
of the above adaptation framework.
### 2.2 Koopman Operator based Identification
Koopman theory dictates that a nonlinear system of the form (6) has an
analogous and notably linear representation in an infinite-dimensional Hilbert
space $\mathcal{Q}$ consisting of continuous, real-valued functions
$q:\mathcal{X}\rightarrow\operatorname{\mathbb{R}}$ referred to as
observables. The continuous-time Koopman dynamical system analogous to (6) is
then described by
$\dot{q}=\mathcal{L}q,\quad q\in\mathcal{Q},$ (10)
where $\mathcal{L}$ denotes the infinitesimal generator of the linear
semigroup of Koopman operators
$\mathcal{U}^{t}:\mathcal{Q}\rightarrow\mathcal{Q}$, i.e.
$\mathcal{L}q=\lim_{t\rightarrow 0}\frac{\mathcal{U}^{t}q-q}{t}=F\cdot\nabla
q.$
For tractability, however, many works (e.g. Bruder et al. (2021); Drmač et al.
(2021), among others) derive matrix representations
$\boldsymbol{U}\in\operatorname{\mathbb{R}}^{N\times N}$ and
$\boldsymbol{L}\in\operatorname{\mathbb{R}}^{N\times N}$ of the respective
finite-rank operators
$\mathcal{U}_{N}^{t}=\Pi_{N}\mathcal{U}^{t}|_{\mathcal{Q}_{N}}$ and
$\mathcal{L}_{N}=\Pi_{N}\mathcal{L}|_{\mathcal{Q}_{N}}$, where
$\Pi_{N}:\mathcal{Q}\rightarrow\mathcal{Q}_{N}$ is a projection operator onto
the subspace $\mathcal{Q}_{N}\subset\mathcal{Q}$ (spanned by $N>n$ linearly
independent basis functions
$\\{\psi_{i}:\mathcal{X}\rightarrow\operatorname{\mathbb{R}}\\}_{i=1}^{N}$)
and $\mathcal{O}|_{\mathcal{Q}_{N}}$ denotes the restriction of the operator
$\mathcal{O}$ to $\mathcal{Q}_{N}$. We refer the reader to Mauroy et al.
(2020) for additional details, and instead highlight that in practice
$\boldsymbol{U}$ and $\boldsymbol{L}$ are taken to be the respective solutions
to
$\displaystyle\boldsymbol{\psi}^{T}(\boldsymbol{x})\boldsymbol{U}=(\boldsymbol{\psi}(\boldsymbol{\varphi}_{t}(\boldsymbol{x})))^{T},$
(11)
$\displaystyle\boldsymbol{L}^{T}\boldsymbol{\psi}(\boldsymbol{x})=\frac{\partial\boldsymbol{\psi}(\boldsymbol{x})}{\partial\boldsymbol{x}}F(\boldsymbol{x}),$
(12)
where
$\boldsymbol{\psi}(\boldsymbol{x})=[\psi_{1}(\boldsymbol{x})\ldots\psi_{N}(\boldsymbol{x})]^{T}\in\operatorname{\mathbb{R}}^{N}$
and
$\frac{\partial\boldsymbol{\psi}(\boldsymbol{x})}{\partial\boldsymbol{x}}\in\operatorname{\mathbb{R}}^{N\times
n}$.
If $\boldsymbol{L}$ can be identified directly (as in e.g. Klus et al.
(2020)), the vector field $F$ may be reconstructed by solving (12) for
$F(\boldsymbol{x})$. When this is not possible, identification of
$\boldsymbol{U}$ may be used to reconstruct $F$ after computing
$\boldsymbol{L}$ via
$\boldsymbol{L}=\frac{1}{T_{s}}\log\boldsymbol{U},$ (13)
in the case of sampled data, where $\log$ denotes the principal matrix
logarithm and $T_{s}>0$ is the sampling interval. We observe that both (11)
and (12) describe linear systems of equations of the form
$\boldsymbol{a}^{T}\boldsymbol{X}=\boldsymbol{b}$, and thus $\boldsymbol{X}$
(in this case $\boldsymbol{U}$ or $\boldsymbol{L}$) can be identified using
linear identification techniques such as the parameter identification law (9).
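As an illustration of the recovery step (13), the following sketch of ours uses SciPy's principal matrix logarithm to reconstruct a generator matrix from a synthetically generated Koopman matrix (the example matrices are hypothetical):

```python
import numpy as np
from scipy.linalg import expm, logm

def generator_from_koopman(U, Ts):
    # Principal matrix logarithm recovers the generator, cf. (13): L = (1/Ts) log U
    return logm(U) / Ts

# Sanity check on a synthetic pair U = expm(Ts * L)
L_true = np.array([[0.0, 1.0], [-2.0, -3.0]])
Ts = 0.01
U = expm(Ts * L_true)
print(np.allclose(generator_from_koopman(U, Ts), L_true))  # True
```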
### 2.3 Problem Statement
Now, reconsider the unknown, control-affine, nonlinear system (1). Suppose
that an estimate of its Koopman generator matrix $\hat{\boldsymbol{L}}$ is
available, and let the estimated unknown vector field
$\hat{d}(\boldsymbol{x})$ then via (12) be the solution to
$\hat{\boldsymbol{L}}^{T}\boldsymbol{\psi}(\boldsymbol{x})=\frac{\partial\boldsymbol{\psi}(\boldsymbol{x})}{\partial\boldsymbol{x}}\big{(}f(\boldsymbol{x})+g(\boldsymbol{x})\boldsymbol{u}+\hat{d}(\boldsymbol{x})\big{)}.$
We assume that
$\frac{\partial\boldsymbol{\psi}(\boldsymbol{x})}{\partial\boldsymbol{x}}$ is
full column rank, which may be satisfied by design (e.g. sinusoidal basis
functions), and thus have that $\hat{d}(\boldsymbol{x})\rightarrow
d(\boldsymbol{x})$ as $\hat{\boldsymbol{L}}\rightarrow\boldsymbol{L}$ (which
can also be satisfied if $\hat{\boldsymbol{U}}\rightarrow\boldsymbol{U}$).
Define the vectorized Koopman matrix and generator ($\boldsymbol{\mu}^{*}$ and
$\boldsymbol{\lambda}^{*}$), and their estimates ($\hat{\boldsymbol{\mu}}$ and
$\hat{\boldsymbol{\lambda}}$), as
$\boldsymbol{\mu}^{*}\coloneqq[\mathrm{col}_{1}^{T}(\boldsymbol{U})\ldots\mathrm{col}_{N}^{T}(\boldsymbol{U})]^{T},$ (14)
$\boldsymbol{\lambda}^{*}\coloneqq[\mathrm{col}_{1}^{T}(\boldsymbol{L})\ldots\mathrm{col}_{N}^{T}(\boldsymbol{L})]^{T},$ (15)
$\hat{\boldsymbol{\mu}}\coloneqq[\mathrm{col}_{1}^{T}(\hat{\boldsymbol{U}})\ldots\mathrm{col}_{N}^{T}(\hat{\boldsymbol{U}})]^{T},$ (16)
$\hat{\boldsymbol{\lambda}}\coloneqq[\mathrm{col}_{1}^{T}(\hat{\boldsymbol{L}})\ldots\mathrm{col}_{N}^{T}(\hat{\boldsymbol{L}})]^{T},$ (17)
and observe that for the system (1) the relations (11) and (12) are equivalent
to
$\boldsymbol{\Psi}(\boldsymbol{x})\boldsymbol{\mu}^{*}=(\boldsymbol{\psi}(\boldsymbol{\varphi}_{t}(\boldsymbol{x})))^{T},$
(18)
and
$\boldsymbol{\Psi}(\boldsymbol{x})\boldsymbol{\lambda}^{*}=\frac{\partial\boldsymbol{\psi}(\boldsymbol{x})}{\partial\boldsymbol{x}}\big{(}f(\boldsymbol{x})+g(\boldsymbol{x})\boldsymbol{u}+d(\boldsymbol{x})\big{)},$
(19)
respectively, where
$\boldsymbol{\Psi}(\boldsymbol{x})\coloneqq\begin{bmatrix}\boldsymbol{\psi}^{T}(\boldsymbol{x})&0&\ldots&0\\\
0&\boldsymbol{\psi}^{T}(\boldsymbol{x})&\ldots&0\\\ \vdots&&\ddots&\vdots\\\
0&\ldots&0&\boldsymbol{\psi}^{T}(\boldsymbol{x})\end{bmatrix}\in\operatorname{\mathbb{R}}^{N\times
N^{2}}.$ (20)
Let the Koopman matrix and Koopman generator estimation errors respectively be
denoted
$\tilde{\boldsymbol{\mu}}=\boldsymbol{\mu}^{*}-\hat{\boldsymbol{\mu}},\qquad\tilde{\boldsymbol{\lambda}}=\boldsymbol{\lambda}^{*}-\hat{\boldsymbol{\lambda}},$
and observe that
$\boldsymbol{\Psi}(\boldsymbol{x})\hat{\boldsymbol{\lambda}}=\boldsymbol{\Psi}(\boldsymbol{x})\boldsymbol{\lambda}^{*}$
for all
$\tilde{\boldsymbol{\lambda}}\in\mathcal{N}(\boldsymbol{\Psi}(\boldsymbol{x}))$.
We are now ready to formally define the problem under consideration.
###### Problem 1
Consider a dynamical system of the form (1). Design adaptation and control
laws,
$\dot{\hat{\boldsymbol{\lambda}}}=\eta(\boldsymbol{x},\boldsymbol{u},\hat{\boldsymbol{\lambda}})$
and $\boldsymbol{u}=\kappa(\boldsymbol{x},\hat{\boldsymbol{\lambda}})$
respectively, such that
1. 1.
the Koopman generator error vector, $\tilde{\boldsymbol{\lambda}}$, is
rendered fixed-time stable to the nullspace of
$\boldsymbol{\Psi}(\boldsymbol{x})$, i.e.
$\tilde{\boldsymbol{\lambda}}(t)\rightarrow\mathcal{N}(\boldsymbol{\Psi}(\boldsymbol{x}))$
as $t\rightarrow T$ and
$\tilde{\boldsymbol{\lambda}}(t)\in\mathcal{N}(\boldsymbol{\Psi}(\boldsymbol{x}))$
for all $t\geq T$, independent of $\hat{\boldsymbol{\lambda}}(0)$, and
2. 2.
the system trajectories remain safe for all time, i.e. $\boldsymbol{x}(t)\in
S$, $\forall t\geq 0$.
In the ensuing section, we introduce our approach to solving the first element
of Problem 1.
## 3 Nonlinear Estimation in Fixed-Time
In this section, we introduce our proposed adaptation law
$\dot{\hat{\boldsymbol{\lambda}}}=\eta(\boldsymbol{x},\boldsymbol{u},\hat{\boldsymbol{\lambda}})$
for the fixed-time identification of the Koopman generator vector
$\boldsymbol{\lambda}$, which allows us to identify the unknown vector field
$d(\boldsymbol{x})$ in (1) within a fixed-time. Before introducing one of our
main results, we require the following assumptions.
###### Assumption 1
The projection of the infinite-dimensional Koopman operator $\mathcal{U}^{t}$
onto the finite-rank subspace $\mathcal{Q}_{N}$ exactly describes the
evolution of observables $q\in\mathcal{Q}$, i.e.
$\mathcal{U}_{N}^{t}q=(\Pi_{N}\mathcal{U}^{t})q$, for all $q\in\mathcal{Q}$.
###### Assumption 2
There exist scalars $s>0$, $T>0$ such that
$\sigma_{N}(\boldsymbol{\Psi}(\boldsymbol{x}(t)))\geq s$ for all $0\leq t\leq
T$, where $\boldsymbol{\Psi}(\boldsymbol{x}(t))$ is given by (20).
The satisfaction of Assumption 1 depends on the choice of $N$ (and thus on the
basis functions $\boldsymbol{\psi}$), and while generally this is an open
problem recent work has studied the existence of Koopman invariant subspaces
(see e.g. Brunton et al. (2016)), i.e. subspaces
$\mathcal{Q}_{N}\subset\mathcal{Q}$ over which Assumption 1 holds. For our
numerical study in Section 5, we find that bases $\boldsymbol{\psi}$
constructed using monomials or sinusoids work well. The satisfaction of
Assumption 2 evidently depends on the choice of basis functions $\psi_{i}$.
Note, however, that $\boldsymbol{\Psi}(\boldsymbol{x}(t))$ is guaranteed to be
full row-rank (which implies that
$\sigma_{N}(\boldsymbol{\Psi}(\boldsymbol{x}(t)))>0$) provided that $\exists
i\in[N]$ such that $\psi_{i}(\boldsymbol{x}(t))\neq 0$. This can be guaranteed
with an appropriate choice of bases, e.g. $\psi_{1}(\boldsymbol{x}(t))=1$.
###### Theorem 2
Suppose that Assumptions 1 and 2 hold, where
$T=\frac{w\pi}{4s\lambda_{max}(\boldsymbol{\Gamma})\sqrt{ab}},$ (21)
with $a,b>0$, $w>2$, and
$\boldsymbol{\Gamma}\in\operatorname{\mathbb{R}}^{N^{2}\times N^{2}}$ a
constant, positive-definite gain matrix. Then, under the ensuing adaptation
law
$\displaystyle\small\dot{\hat{\boldsymbol{\lambda}}}=\boldsymbol{\Gamma}\boldsymbol{\Psi}^{T}(\boldsymbol{x})\boldsymbol{\nu}(\boldsymbol{x},\hat{\boldsymbol{\lambda}})\left(a\|\boldsymbol{\nu}(\boldsymbol{x},\hat{\boldsymbol{\lambda}})\|^{2/w}+\frac{b}{\|\boldsymbol{\nu}(\boldsymbol{x},\hat{\boldsymbol{\lambda}})\|^{2/w}}\right),$
(22)
the Koopman generator error vector $\tilde{\boldsymbol{\lambda}}$ is rendered
FxTS to the nullspace of $\boldsymbol{\Psi}(\boldsymbol{x})$, i.e.
$\tilde{\boldsymbol{\lambda}}(t)\rightarrow\mathcal{N}(\boldsymbol{\Psi}(\boldsymbol{x}(t)))$
as $t\rightarrow T$ and
$\tilde{\boldsymbol{\lambda}}(t)\in\mathcal{N}(\boldsymbol{\Psi}(\boldsymbol{x}))$
for all $t\geq T$, independent of $\hat{\boldsymbol{\lambda}}(0)$, where
$\boldsymbol{\nu}(\boldsymbol{x},\hat{\boldsymbol{\lambda}})=\frac{\partial\boldsymbol{\psi}(\boldsymbol{x})}{\partial\boldsymbol{x}}\dot{\boldsymbol{x}}-\boldsymbol{\Psi}(\boldsymbol{x})\hat{\boldsymbol{\lambda}}.$
(23)
We first show that there exists a time-invariant Koopman generator vector
$\boldsymbol{\lambda}(t)=\boldsymbol{\lambda}^{*}$, $\forall t\geq 0$, and
then prove that under (22) the associated Koopman generator error vector
$\tilde{\boldsymbol{\lambda}}$ is rendered FxTS to
$\mathcal{N}(\boldsymbol{\Psi}(\boldsymbol{x}))$.
First, under Assumption 1 it follows that there exists a finite-rank operator
$\mathcal{L}_{N}:\mathcal{Q}_{N}\rightarrow\mathcal{Q}_{N}$ such that the
nonlinear dynamics of (1) may be represented by the following linear system in
the space of observables:
$\dot{q}=\mathcal{L}_{N}q,\quad q\in\mathcal{Q}.$
Then, there exists a finite-dimensional matrix representation
$\boldsymbol{L}\in\operatorname{\mathbb{R}}^{N\times N}$ in a basis
$\\{\psi_{i}:\mathcal{X}\rightarrow\operatorname{\mathbb{R}}\\}_{i=1}^{N}$
corresponding to the operator $\mathcal{L}_{N}$ such that the relation (12)
holds over the trajectories of (1). Thus, the Koopman generator matrix
$\boldsymbol{L}$ admits the (time-invariant) Koopman generator vector
$\boldsymbol{\lambda}^{*}$ defined by (15).
Next, observe that (19) over the trajectories of (1) may be modified to obtain
$\boldsymbol{\Psi}(\boldsymbol{x})\boldsymbol{\lambda}^{*}-\boldsymbol{\Psi}(\boldsymbol{x})\hat{\boldsymbol{\lambda}}=\frac{\partial\boldsymbol{\psi}(\boldsymbol{x})}{\partial\boldsymbol{x}}\dot{\boldsymbol{x}}-\boldsymbol{\Psi}(\boldsymbol{x})\hat{\boldsymbol{\lambda}},$
$\boldsymbol{\Psi}(\boldsymbol{x})\tilde{\boldsymbol{\lambda}}=\boldsymbol{\nu}(\boldsymbol{x},\hat{\boldsymbol{\lambda}}),$
where $\boldsymbol{\nu}(\boldsymbol{x},\hat{\boldsymbol{\lambda}})$ is given
by (23). Thus, we have that the premises of Theorem 1 are satisfied with
$\boldsymbol{M}=\boldsymbol{\Psi}$ and $\boldsymbol{v}=\boldsymbol{\nu}$ and
the adaptation law (22) takes the form of (9). Then, with Assumption 2 it
follows directly from Theorem 1 that $\tilde{\boldsymbol{\lambda}}$ is
rendered FxTS to $\mathcal{N}(\boldsymbol{\Psi}(\boldsymbol{x}))$ with
settling time given by (21).
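In implementation, (22)-(23) amount to a recursive update driven by the measured state derivative. A minimal sketch (ours, assuming an explicit-Euler discretization and access to $\dot{\boldsymbol{x}}$; all names are hypothetical) is:

```python
import numpy as np

def nu(dpsi_dx, xdot, Psi_x, lam_hat):
    # Measurable error signal (23): nu = (dpsi/dx) xdot - Psi(x) lambda_hat
    return dpsi_dx @ xdot - Psi_x @ lam_hat

def lambda_step(lam_hat, dpsi_dx, xdot, Psi_x, Gamma, a=1.0, b=1.0, w=4.0, dt=1e-3):
    # One explicit-Euler step of the Koopman-generator adaptation law (22), w > 2
    v = nu(dpsi_dx, xdot, Psi_x, lam_hat)
    n = np.linalg.norm(v)
    if n < 1e-12:
        return lam_hat
    gain = a * n ** (2.0 / w) + b / n ** (2.0 / w)
    return lam_hat + dt * (Gamma @ Psi_x.T @ v) * gain
```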
In what follows, we show how the parameter adaptation law (22) results in
learning the exact disturbance $d(\boldsymbol{x})$ to the system dynamics (1)
within fixed-time.
###### Corollary 1
Consider the system (1). Suppose that the premises of Theorem 2 hold, and that
the estimated Koopman vector $\hat{\boldsymbol{\lambda}}$ is adapted according
to (22). If the estimated disturbance $\hat{d}(\boldsymbol{x})$ is taken to be
$\hat{d}(\boldsymbol{x}(t))=\frac{\partial\boldsymbol{\psi}(\boldsymbol{x}(t))}{\partial\boldsymbol{x}}^{\dagger}\boldsymbol{\Psi}(\boldsymbol{x}(t))\hat{\boldsymbol{\lambda}}(t)-a(\boldsymbol{x}(t),\boldsymbol{u}(t)),$
(24)
where
$a(\boldsymbol{x}(t),\boldsymbol{u}(t))=f(\boldsymbol{x}(t))+g(\boldsymbol{x}(t))\boldsymbol{u}(t)$,
then, the vector field estimation error $\tilde{d}(\boldsymbol{x}(t))$ is
rendered FxTS to the origin and the estimated disturbance
$\hat{d}(\boldsymbol{x}(t))$ converges to the true disturbance
$d(\boldsymbol{x}(t))$ within a fixed-time $T$ given by (21), i.e.
$\tilde{d}(\boldsymbol{x}(t))\rightarrow 0$ and
$\hat{d}(\boldsymbol{x}(t))\rightarrow d(\boldsymbol{x}(t))$ as $t\rightarrow
T$ independent of $\hat{d}(\boldsymbol{x}(0))$.
We first observe from (19) that the disturbance $d(\boldsymbol{x}(t))$ is the
solution to
$\displaystyle\small\frac{\partial\boldsymbol{\psi}(\boldsymbol{x}(t))}{\partial\boldsymbol{x}}d(\boldsymbol{x}(t))=\boldsymbol{\Psi}(\boldsymbol{x}(t))\boldsymbol{\lambda}^{*}-\frac{\partial\boldsymbol{\psi}(\boldsymbol{x}(t))}{\partial\boldsymbol{x}}a(\boldsymbol{x}(t),\boldsymbol{u}(t)).$
(25)
Next, it follows from Theorem 2 that under (22)
$\hat{\boldsymbol{\lambda}}(t)\rightarrow\boldsymbol{\lambda}^{*}$ as
$t\rightarrow T$. Then, we have that
$\boldsymbol{\Psi}(\boldsymbol{x}(t))\hat{\boldsymbol{\lambda}}(t)\rightarrow\boldsymbol{\Psi}(\boldsymbol{x}(t))\boldsymbol{\lambda}^{*}$
and thus that
$\frac{\partial\boldsymbol{\psi}(\boldsymbol{x}(t))}{\partial\boldsymbol{x}}\hat{d}(\boldsymbol{x}(t))\rightarrow\frac{\partial\boldsymbol{\psi}(\boldsymbol{x}(t))}{\partial\boldsymbol{x}}d(\boldsymbol{x}(t))$
as $t\rightarrow T$ when $\hat{d}(\boldsymbol{x}(t))$ is taken to be the
solution to (25). Finally, with
$\frac{\partial\boldsymbol{\psi}(\boldsymbol{x}(t))}{\partial\boldsymbol{x}}$
full column rank we use its pseudoinverse
$\frac{\partial\boldsymbol{\psi}(\boldsymbol{x}(t))}{\partial\boldsymbol{x}}^{\dagger}$
to recover (24) and thus have that $\hat{d}(\boldsymbol{x}(t))\rightarrow
d(\boldsymbol{x}(t))$ as $t\rightarrow T$.
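Computationally, the reconstruction (24) is a single least-squares solve. The following sketch (ours) implements it with the Moore–Penrose pseudoinverse, under the standing assumption that $\frac{\partial\boldsymbol{\psi}}{\partial\boldsymbol{x}}$ has full column rank:

```python
import numpy as np

def estimate_disturbance(dpsi_dx, Psi_x, lam_hat, f_x, g_x, u):
    # Reconstruction (24): d_hat = (dpsi/dx)^+ Psi(x) lambda_hat - (f(x) + g(x) u);
    # the pseudoinverse acts as a left inverse since dpsi/dx is full column rank
    return np.linalg.pinv(dpsi_dx) @ (Psi_x @ lam_hat) - (f_x + g_x @ u)
```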
For the purpose of control design it is important to know how the estimation
error signals behave during the transient period $t\leq T$ before the unknown
vector field $d(\boldsymbol{x})$ has been learned. In contrast to least-
squares and related regression based approaches to learning the Koopman matrix
$\boldsymbol{U}$ and/or generator matrix $\boldsymbol{L}$, our FxTS parameter
adaptation law allows us to derive explicit estimation error bounds as a
function of time. In fact, prior work (see Black et al. (2022a)) has shown
that the magnitude of this error bound is a monotonically decreasing function
of time. In the following result, we introduce a modification to the prior
work in order to derive a bound on the magnitude of the vector field
estimation error $\tilde{d}(\boldsymbol{x}(t))$ as an explicit function of
time.
###### Corollary 2
Suppose that the premises of Corollary 1 hold. If, in addition, the initial
estimated Koopman generator vector is set to zero, i.e.
$\hat{\boldsymbol{\lambda}}(0)=\mathbf{0}_{N^{2}\times 1}$, and
$\boldsymbol{\Gamma}$ in (22) is constant, positive-definite, and also
diagonal, then $\forall t\in[0,T]$, where $T$ is given by (21), the following
expression constitutes a monotonically decreasing upper bound on
$\|\tilde{d}(\boldsymbol{x}(t))\|_{\infty}$:
$\|\tilde{d}(\boldsymbol{x}(t))\|_{\infty}\leq\Lambda\sigma_{max}(\boldsymbol{W}(t))\tan^{\frac{w}{2}}(A(t))\coloneqq\delta(t),$
(26)
where
$\Lambda=\sqrt{2\lambda_{max}(\boldsymbol{\Gamma})}\left(\frac{a}{b}\right)^{w/4},$
(27)
and
$\boldsymbol{W}(t)=\frac{\partial\boldsymbol{\psi}(\boldsymbol{x}(t))}{\partial\boldsymbol{x}}^{\dagger}\boldsymbol{\Psi}(\boldsymbol{x}(t)),$ (28)
$A(t)=\max\left\\{\Xi-\frac{\sqrt{ab}}{w}t,0\right\\},$ (29)
$\Xi=\tan^{-1}\left(\sqrt{\frac{b}{a}}\left(\frac{1}{2}\boldsymbol{l}^{T}\boldsymbol{\Gamma}^{-1}\boldsymbol{l}\right)^{\frac{1}{w}}\right),$ (30)
where
$\boldsymbol{l}=\frac{2D}{\sigma_{min}(\boldsymbol{W}(0))}\cdot\boldsymbol{1}_{N^{2}\times
1}$, and $\|\tilde{d}(\boldsymbol{x}(t))\|_{\infty}=0$, $\forall t>T$.
See Appendix A.
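Since every quantity in (26)-(30) is computable from $\boldsymbol{\Gamma}$, $D$, and the basis Jacobian, the bound $\delta(t)$ can be evaluated online. A sketch of this evaluation (ours; it assumes $\sigma_{min}(\boldsymbol{W}(0))>0$, and all names are hypothetical) is:

```python
import numpy as np

def delta_bound(t, W_t, W_0, Gamma, D, a=1.0, b=1.0, w=4.0):
    # Transient error bound (26)-(30) on ||d_tilde(x(t))||_inf, valid for t in [0, T]
    Lam = np.sqrt(2.0 * np.linalg.eigvalsh(Gamma)[-1]) * (a / b) ** (w / 4.0)  # (27)
    sigma_max_t = np.linalg.svd(W_t, compute_uv=False)[0]
    sigma_min_0 = np.linalg.svd(W_0, compute_uv=False)[-1]  # assumed > 0
    l = (2.0 * D / sigma_min_0) * np.ones(Gamma.shape[0])
    Xi = np.arctan(np.sqrt(b / a) * (0.5 * l @ np.linalg.solve(Gamma, l)) ** (1.0 / w))  # (30)
    A = max(Xi - np.sqrt(a * b) / w * t, 0.0)  # (29)
    return Lam * sigma_max_t * np.tan(A) ** (w / 2.0)  # (26)
```

Because $A(t)$ decreases linearly until it clamps at zero, $\delta(t)$ is monotonically decreasing and vanishes exactly at $t=T$.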
Knowledge of the upper bound on the disturbance estimation error bound (26)
permits the use of robust, adaptive model-based control techniques. In
particular, we will show in the next section how to synthesize a CBF-based
controller that guarantees safety both before and after the transient phase
$t\leq T$ during which the unknown disturbance $d(\boldsymbol{x})$ is learned,
and in doing so address the second element of Problem 1.
## 4 Robust-Adaptive Control Design
In this section, we describe two approaches to synthesizing the Koopman-based
parameter adaptation law with a CBF-based control law for safe control under
model uncertainty.
### 4.1 Robust-CBF Approach
In the first approach, we demonstrate how to apply robust-CBF principles to
the design of a safe controller
$\boldsymbol{u}=\kappa(\boldsymbol{x},\hat{\boldsymbol{\lambda}})$ when using
the Koopman-based adaptation scheme (22).
###### Theorem 3
Consider a system of the form (1), a safe set $S$ defined by (2) for a
continuously differentiable function
$h:\mathcal{X}\rightarrow\operatorname{\mathbb{R}}$, and suppose that the
premises of Corollary 2 hold. Then, any control input $\boldsymbol{u}$
satisfying
$\sup_{\boldsymbol{u}\in\operatorname{\mathbb{R}}^{m}}\left[L_{f}h(\boldsymbol{x})+L_{g}h(\boldsymbol{x})\boldsymbol{u}+L_{\hat{d}}h(\boldsymbol{x})-b_{d}(t)\right]\geq-\alpha(h(\boldsymbol{x}))$
(31)
renders the trajectories of (1) safe, where
$b_{d}(t)=\left|\frac{\partial
h}{\partial\boldsymbol{x}}\right|\delta(t)\cdot\mathbf{1}_{n\times 1},$ (32)
and $\delta(t)$ is given by (26).
Observe that over the trajectories of (1)
$\dot{h}=L_{f}h(\boldsymbol{x})+L_{g}h(\boldsymbol{x})\boldsymbol{u}+L_{d}h(\boldsymbol{x})$
$\quad=L_{f}h(\boldsymbol{x})+L_{g}h(\boldsymbol{x})\boldsymbol{u}+\frac{\partial h}{\partial\boldsymbol{x}}\hat{d}(\boldsymbol{x})+\frac{\partial h}{\partial\boldsymbol{x}}\tilde{d}(\boldsymbol{x})$
$\quad\geq L_{f}h(\boldsymbol{x})+L_{g}h(\boldsymbol{x})\boldsymbol{u}+\frac{\partial h}{\partial\boldsymbol{x}}\hat{d}(\boldsymbol{x})-\left|\frac{\partial h}{\partial\boldsymbol{x}}\right|\delta(t)\cdot\mathbf{1}_{n\times 1}.$
By Corollary 2 it follows that
$\|\tilde{d}(\boldsymbol{x}(t))\|_{\infty}\leq\delta(t)$ for all $t\geq 0$.
Therefore, $\dot{h}\geq-\alpha(h(\boldsymbol{x}))$ whenever (31) holds, and
thus $S$ is rendered forward-invariant by any control input satisfying (31).
It is worth noting that as the estimated disturbance $\hat{d}(\boldsymbol{x})$
converges to the true disturbance $d(\boldsymbol{x})$ the robustness term
$b_{d}(t)$ will go to zero. So while initially the condition (31) may demand
large control inputs to guarantee safety in the face of a the unknown
disturbance, as $t\rightarrow T$ the term $b_{d}(t)\rightarrow 0$ and the
standard CBF condition is recovered.
### 4.2 Robust-Adaptive CBF Approach
In this approach, we define the following robust-adaptive safe set
$S_{r}=\\{\boldsymbol{x}\in\mathcal{X}:h_{r}(\boldsymbol{x},t)\geq 0\\}$ (33)
for the continuously differentiable function
$h_{r}(\boldsymbol{x},t)=h(\boldsymbol{x})-\frac{1}{2}\boldsymbol{\delta}^{T}(t)\boldsymbol{\Omega}^{-1}\boldsymbol{\delta}(t),$
for $\boldsymbol{\delta}(t)=\delta(t)\cdot\mathbf{1}_{n\times 1}$ with
$\delta(t)$ given by (26), and a constant, positive-definite matrix
$\boldsymbol{\Omega}\in\operatorname{\mathbb{R}}^{n\times n}$. We note that
the set $S_{r}$ defined by (33) is a subset of the safe set $S$ defined by
(2), i.e. $S_{r}\subseteq S$. We now introduce a robust-adaptive CBF condition
that renders the trajectories of (1) safe.
###### Theorem 4
Consider a system of the form (1), a set $S_{r}$ defined by (33) for a
continuously differentiable function
$h_{r}:\mathcal{X}\rightarrow\operatorname{\mathbb{R}}$, and suppose that the
premises of Corollary 2 hold. Then, any control input $\boldsymbol{u}$
satisfying
$\sup_{\boldsymbol{u}\in\operatorname{\mathbb{R}}^{m}}\left[L_{f}h_{r}(\boldsymbol{x})+L_{g}h_{r}(\boldsymbol{x})\boldsymbol{u}-r\big{(}t,\hat{d}(\boldsymbol{x}(t))\big{)}\right]\geq-\alpha(h_{r}(\boldsymbol{x}))$
(34)
renders the trajectories of (1) safe, where
$r\big{(}t,\hat{d}(\boldsymbol{x}(t))\big{)}=\mathrm{Tr}(\boldsymbol{\Omega}^{-1})\delta(t)\dot{\delta}(t)+b_{d}(t),$
where $\delta(t)$ is given by (26), $b_{d}(t)$ is given by (32), and
$\dot{\delta}(t)=\Lambda\dot{\sigma}_{max}(\boldsymbol{W}(t))\tan^{\frac{w}{2}}(A(t))-\frac{1}{2}\Lambda\sigma_{max}(\boldsymbol{W}(t))\sqrt{ab}\tan^{\frac{w}{2}-1}(A(t))\mathrm{sec}^{2}(A(t)).$ (35)
Follows directly from (Black et al., 2022a, Theorem 5) by replacing
$\tilde{\boldsymbol{\theta}}$ with $\tilde{d}(\boldsymbol{x})$.
###### Remark 1
We note that the robust-adaptive CBF condition (34) requires the time-
derivative of the maximum singular value of the matrix $\boldsymbol{W}(t)$
given by (28), i.e. $\dot{\sigma}_{max}(\boldsymbol{W}(t))$. While this may
not be available in closed-form, it may be approximated in practice using
finite-difference methods.
Since both the robust (31) and robust-adaptive (34) CBF conditions ensure
safety of the trajectories of (1), either condition may be included as an
affine constraint in the now-popular quadratic program based control law (e.g.
Ames et al. (2017); Black et al. (2020)). We now introduce one such iteration
of the QP controller,
$\displaystyle\boldsymbol{u}^{*}=\operatorname*{arg\,min}_{\boldsymbol{u}\in\operatorname{\mathbb{R}}^{m}}$
$\displaystyle\frac{1}{2}\|\boldsymbol{u}-\boldsymbol{u}^{0}\|^{2}$ (36a) s.t.
$\displaystyle\forall s\in[1..c]$
$\displaystyle\text{either (31) or (34)},$
(36b)
the objective (36a) of which seeks to find a minimally deviating solution
$\boldsymbol{u}^{*}$ from a nominal, potentially unsafe input
$\boldsymbol{u}^{0}$ subject to the specified CBF constraint (36b).
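For concreteness, the QP (36) with the robust constraint (31) and a single CBF can be posed in a few lines with an off-the-shelf convex solver. The sketch below uses CVXPY, which is our choice rather than the authors'; the numeric values in the example call are hypothetical:

```python
import cvxpy as cp
import numpy as np

def robust_cbf_qp(u_nom, Lfh, Lgh, Ld_hat_h, b_d, alpha_h):
    # CBF-QP (36) with one robust constraint (31):
    #   min 0.5 ||u - u0||^2  s.t.  Lfh + Lgh u + L_{d_hat}h - b_d(t) >= -alpha(h)
    u = cp.Variable(u_nom.shape[0])
    constraint = Lfh + Lgh @ u + Ld_hat_h - b_d >= -alpha_h
    prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(u - u_nom)), [constraint])
    prob.solve()
    return u.value

# Illustrative call with scalar CBF data (all values hypothetical)
u_star = robust_cbf_qp(np.array([1.0, 0.5]), Lfh=0.2,
                       Lgh=np.array([0.4, -0.1]), Ld_hat_h=0.05,
                       b_d=0.3, alpha_h=1.0)
print(u_star)
```

Multiple CBFs (e.g. one per obstacle, $s\in[1..c]$) are handled by stacking one such inequality per barrier in the constraint list.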
In the following section, we demonstrate the efficacy of our jointly proposed
adaptation (22) and control (36) laws on a quadrotor tracking problem.
## 5 Numerical Case Study
Let $\mathcal{F}$ be an inertial frame with a point $s_{0}$ denoting its
origin. Consider a quadrotor seeking to track a Gerono lemnisicate (i.e.
figure-eight) trajectory amidst circular obstacles in the 2D plane. Quadrotor
dynamics are known to be differentially-flat, thus as shown to be feasible in
Zhou and Schwager (2014) we take the model to be the following 2D double-
integrator subject to an unknown, wind disturbance:
$\begin{bmatrix}\dot{x}\\\ \dot{y}\\\ \dot{v}_{x}\\\
\dot{v}_{y}\end{bmatrix}=\begin{bmatrix}v_{x}\\\ v_{y}\\\ a_{x}\\\
a_{y}\end{bmatrix}+\begin{bmatrix}0\\\ 0\\\ d_{x}(\boldsymbol{z})\\\
d_{y}(\boldsymbol{z})\end{bmatrix},$ (37)
where $x$ and $y$ denote the position coordinates (in m), $v_{x}$ and $v_{y}$
are the velocities (in m/s), and $a_{x}$ and $a_{y}$ are the accelerations (in
m/s$^2$). The full state and control input vectors are
$\boldsymbol{z}=[x\;y\;v_{x}\;v_{y}]^{T}\in\operatorname{\mathbb{R}}^{4}$ and
$\boldsymbol{u}=[a_{x}\;a_{y}]^{T}\in\operatorname{\mathbb{R}}^{2}$
respectively, and
$d_{x}:\operatorname{\mathbb{R}}^{4}\rightarrow\operatorname{\mathbb{R}}$ and
$d_{y}:\operatorname{\mathbb{R}}^{4}\rightarrow\operatorname{\mathbb{R}}$ are
unknown wind-gust accelerations satisfying the requirements of $d$ in (1).
Specifically, we used the wind-gust model from Davoudi et al. (2020) to obtain
spatially varying wind velocities $w_{i}(\boldsymbol{z})$ and set
$d_{i}(\boldsymbol{z})=C_{d}(w_{i}(\boldsymbol{z})-v_{i})$ for
$i\in\\{x,y\\}$, where $C_{d}$ is a drag coefficient, such that
$\|d_{x}(\boldsymbol{z})\|_{\infty},\|d_{y}(\boldsymbol{z})\|_{\infty}\leq
D=10$.
We consider the presence of two circular obstacles, each of which occludes the
desired quadrotor path. As such, the safe set is defined as
$S=\\{\boldsymbol{z}\in\operatorname{\mathbb{R}}^{4}:h_{1}(\boldsymbol{z})\geq
0\\}\cap\\{\boldsymbol{z}\in\operatorname{\mathbb{R}}^{4}:h_{2}(\boldsymbol{z})\geq
0\\},$
where $h_{i}(\boldsymbol{z})=(x-c_{x,i})^{2}+(y-c_{y,i})^{2}-R^{2}$ for
$i\in\\{1,2\\}$, $(c_{x,i},c_{y,i})$ denotes the center of the $i$th obstacle,
and $R$ is its radius. Since $h_{1},h_{2}$ are relative-degree two with
respect to (37), we use future-focused CBFs for a form of safe, predictive
control (see Black et al. (2022b) for details).
We use forms of the CBF-QP control law (36) (all simulation code and data are
available online at https://github.com/6lackmitchell/nonlinear-fxt-adaptation-control),
corresponding to both the robust (31) and robust-adaptive (34)
CBF conditions, and compare the performance against a naive (i.e. assuming
exact identification, $\hat{d}=d$) CBF controller equipped with the data-
driven Koopman-based identification schemes proposed in Bruder et al. (2021)
and Klus et al. (2020) respectively. For the robust and robust-adaptive
simulations we inject additive Gaussian measurement noise into both
$\boldsymbol{x}$ and $\dot{\boldsymbol{x}}$ in order to stress-test the
algorithm under non-ideal conditions. We use the nominal control law
introduced for quadrotors in Schoellig et al. (2012) and adapted for our
dynamics, where the reference trajectory is the Gerono lemniscate defined by
$x^{*}(t)=4\sin(0.2\pi t),\qquad y^{*}(t)=4\sin(0.2\pi t)\cos(0.2\pi t),$
which specifies that one figure-eight pattern be completed every 10s. Our
circular obstacles are centered on $(-2.5,0)$ and $(2,-1)$ respectively, each
with a radius of $R=1.5$m. For all controllers, we used linear class
$\mathcal{K}_{\infty}$ functions $\alpha(h)=h$. For our Koopman basis
functions, we used sinusoids of the form $\psi_{i}=\sqrt{2}\cos(n\pi z)$,
$\psi_{i+1}=\sqrt{2}\sin(n\pi z)$, for $n\in\\{1,2\\}$ and
$z\in\\{x,y,v_{x},v_{y}\\}$.
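A sketch of this basis and its Jacobian (ours; we also include a constant element $\psi_{1}=1$, which keeps $\boldsymbol{\Psi}(\boldsymbol{x})$ full row-rank as noted after Assumption 2, and the function name is hypothetical) is:

```python
import numpy as np

def basis(z, modes=(1, 2)):
    # Sinusoidal basis from the case study, plus a constant element psi_1 = 1.
    # Returns psi(z) in R^N and the Jacobian dpsi/dz in R^{N x 4}.
    psi = [1.0]
    jac = [np.zeros(len(z))]
    for n in modes:
        for j, zj in enumerate(z):
            e = np.zeros(len(z)); e[j] = 1.0
            psi.append(np.sqrt(2) * np.cos(n * np.pi * zj))
            jac.append(-np.sqrt(2) * n * np.pi * np.sin(n * np.pi * zj) * e)
            psi.append(np.sqrt(2) * np.sin(n * np.pi * zj))
            jac.append(np.sqrt(2) * n * np.pi * np.cos(n * np.pi * zj) * e)
    return np.array(psi), np.array(jac)

psi_z, dpsi_dz = basis(np.array([0.1, -0.2, 0.0, 0.3]))
print(psi_z.shape, dpsi_dz.shape)  # (17,) (17, 4)
```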
The resulting paths taken by the simulated CBF-controlled vehicles (Koopman-
based naive, robust, and robust-adaptive), as well as the path taken for the
nominally controlled vehicle without disturbance estimation are displayed in
Figure 1. Here, only the robust and robust-adaptive CBF controllers that use
our fixed-time identification approach preserve safety (as seen in Figure 2).
As the data-driven Koopman matrix (Bruder et al. (2021)) and generator (Klus
et al. (2020)) approaches are non-recursive and unable to quantify the
identification error, they are neither sufficiently responsive nor accurate
enough to guarantee safety in this example. Figure 3 highlights that our
disturbance estimates indeed converge to the true values within the fixed-time
$T=0.12$ sec, computed using (21), and the control inputs are shown in Figure
4. We further note that even when measurement noise is injected into the
system, the adaptation-based approach succeeds in both reconstructing the
unknown disturbance to within a small error and preserving safety. We leave
quantification of this measurement error and any error associated with
representing the infinite-dimensional Koopman operator in a finite-dimensional
subspace to future work.
Figure 1: XY paths under the various CBF-QP control laws in the double-integrator example. Only the controllers using the proposed Koopman-based fixed-time identification scheme succeed in preserving safety.
Figure 2: Evolutions of $h_{1}$ and $h_{2}$ for the various controllers considered in the double-integrator example.
Figure 3: The estimates $\hat{d}_{x}$, $\hat{d}_{y}$ of the unknown wind gusts ($d_{x}$ and $d_{y}$). In our scheme, the estimates converge to the true values within the fixed-time $T$ without noise, and converge to a close approximation in the presence of measurement noise.
Figure 4: Control inputs for the double-integrator example.
## 6 Conclusion
We introduced a safe control synthesis using Koopman-based fixed-time system
identification. We showed that under mild assumptions we can learn the
unknown, additive, nonlinear vector field perturbing the system dynamics
within a fixed-time independent of the initial estimate. The a priori
knowledge of this identification guarantee allows us to derive robust and
robust-adaptive control barrier function conditions suitable for use in a
standard class of quadratic program based controllers.
We recognize that there are practical limitations to our method, including the
need to measure the state derivative and to be able to exactly represent the
linear, infinite-dimensional Koopman dynamical system with a finite-rank
operator. Though we demonstrated some robustness to measurement noise in our
simulated study, in the future we will seek to relax these assumptions by
analyzing the use of observers and filters for state and state-derivative
approximation and by seeking to quantify the residual error associated with
projecting the infinite-dimensional Koopman operator onto a finite-dimensional
subspace.
The authors would like to acknowledge the support of the National Science
Foundation (NSF) through grants 1931982 and 1942907.
## References
* Ames et al. (2017) Ames, A.D., Xu, X., Grizzle, J.W., and Tabuada, P. (2017). Control barrier function based quadratic programs for safety critical systems. _IEEE Trans. on Automatic Control_ , 62(8), 3861–3876.
* Black et al. (2022a) Black, M., Arabi, E., and Panagou, D. (2022a). Fixed-time parameter adaptation for safe control synthesis. _arXiv preprint arXiv:2204.10453_.
* Black et al. (2020) Black, M., Garg, K., and Panagou, D. (2020). A quadratic program based control synthesis under spatiotemporal constraints and non-vanishing disturbances. In _2020 59th IEEE Conference on Decision and Control (CDC)_ , 2726–2731.
* Black et al. (2022b) Black, M., Jankovic, M., Sharma, A., and Panagou, D. (2022b). Future-focused control barrier functions for autonomous vehicle control. _arXiv preprint arXiv:2204.00127_.
* Blanchini (1999) Blanchini, F. (1999). Set invariance in control. _Automatica_ , 35(11), 1747–1767.
* Bruder et al. (2021) Bruder, D., Fu, X., Gillespie, R.B., Remy, C.D., and Vasudevan, R. (2021). Data-driven control of soft robots using koopman operator theory. _IEEE Transactions on Robotics_ , 37(3), 948–961. 10.1109/TRO.2020.3038693.
* Brunton et al. (2016) Brunton, S.L., Brunton, B.W., Proctor, J.L., and Kutz, J.N. (2016). Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control. _PloS one_ , 11(2), e0150171.
* Davoudi et al. (2020) Davoudi, B., Taheri, E., Duraisamy, K., Jayaraman, B., and Kolmanovsky, I. (2020). Quad-rotor flight simulation in realistic atmospheric conditions. _AIAA Journal_ , 58(5), 1992–2004.
* Drmač et al. (2021) Drmač, Z., Mezić, I., and Mohr, R. (2021). Identification of nonlinear systems using the infinitesimal generator of the koopman semigroup—a numerical implementation of the mauroy–goncalves method. _Mathematics_ , 9(17), 2075. 10.3390/math9172075.
* Folkestad et al. (2020) Folkestad, C., Chen, Y., Ames, A.D., and Burdick, J.W. (2020). Data-driven safety-critical control: Synthesizing control barrier functions with koopman operators. _IEEE Control Systems Letters_ , 5(6), 2012–2017.
* Frigola and Rasmussen (2013) Frigola, R. and Rasmussen, C.E. (2013). Integrated pre-processing for bayesian nonlinear system identification with gaussian processes. In _52nd IEEE Conference on Decision and Control_ , 5371–5376. IEEE.
* Gonzalez and Yu (2018) Gonzalez, J. and Yu, W. (2018). Non-linear system modeling using lstm neural networks. _IFAC-PapersOnLine_ , 51(13), 485–489.
* Haseli and Cortés (2021) Haseli, M. and Cortés, J. (2021). Data-driven approximation of koopman-invariant subspaces with tunable accuracy. In _2021 American Control Conference (ACC)_ , 470–475. 10.23919/ACC50511.2021.9483259.
* Jankovic (2018) Jankovic, M. (2018). Robust control barrier functions for constrained stabilization of nonlinear systems. _Automatica_ , 96, 359–367.
* Klus et al. (2020) Klus, S., Nüske, F., Peitz, S., Niemann, J.H., Clementi, C., and Schütte, C. (2020). Data-driven approximation of the koopman generator: Model reduction, system identification, and control. _Physica D: Nonlinear Phenomena_ , 406, 132416. https://doi.org/10.1016/j.physd.2020.132416.
* Lopez et al. (2021) Lopez, B.T., Slotine, J.J.E., and How, J.P. (2021). Robust adaptive control barrier functions: An adaptive and data-driven approach to safety. _IEEE Control Systems Letters_ , 5(3), 1031–1036. 10.1109/LCSYS.2020.3005923.
* Mauroy and Goncalves (2020) Mauroy, A. and Goncalves, J. (2020). Koopman-based lifting techniques for nonlinear systems identification. _IEEE Transactions on Automatic Control_ , 65(6), 2550–2565. 10.1109/TAC.2019.2941433.
* Mauroy et al. (2020) Mauroy, A., Susuki, Y., and Mezić, I. (2020). _Koopman operator in systems and control_. Springer.
* Ortega et al. (2022) Ortega, R., Bobtsov, A., and Nikolaev, N. (2022). Parameter identification with finite-convergence time alertness preservation. _IEEE Control Systems Letters_ , 6, 205–210. 10.1109/LCSYS.2021.3057012.
* Polyakov (2012) Polyakov, A. (2012). Nonlinear feedback design for fixed-time stabilization of linear control systems. _IEEE Transactions on Automatic Control_ , 57(8), 2106.
* Ríos et al. (2017) Ríos, H., Efimov, D., Moreno, J.A., Perruquetti, W., and Rueda-Escobedo, J.G. (2017). Time-varying parameter identification algorithms: Finite and fixed-time convergence. _IEEE Transactions on Automatic Control_ , 62(7), 3671–3678.
* Schoellig et al. (2012) Schoellig, A.P., Wiltsche, C., and D’Andrea, R. (2012). Feed-forward parameter identification for precise periodic quadrocopter motions. In _2012 American Control Conference (ACC)_ , 4313–4318. 10.1109/ACC.2012.6315248.
* Taylor and Ames (2020) Taylor, A.J. and Ames, A.D. (2020). Adaptive safety with control barrier functions. In _2020 American Control Conference (ACC)_ , 1399–1405. 10.23919/ACC45564.2020.9147463.
* Wang et al. (2022) Wang, S., Lyu, B., Wen, S., Shi, K., Zhu, S., and Huang, T. (2022). Robust adaptive safety-critical control for unknown systems with finite-time elementwise parameter estimation. _IEEE Transactions on Systems, Man, and Cybernetics: Systems_ , 1–11. 10.1109/TSMC.2022.3203176.
* Williams et al. (2015) Williams, M.O., Kevrekidis, I.G., and Rowley, C.W. (2015). A data–driven approximation of the koopman operator: Extending dynamic mode decomposition. _Journal of Nonlinear Science_ , 25(6), 1307–1346.
* Zancato and Chiuso (2021) Zancato, L. and Chiuso, A. (2021). A novel deep neural network architecture for non-linear system identification. _IFAC-PapersOnLine_ , 54(7), 186–191.
* Zhou and Schwager (2014) Zhou, D. and Schwager, M. (2014). Vector field following for quadrotors using differential flatness. In _2014 IEEE International Conference on Robotics and Automation (ICRA)_ , 6567–6572. 10.1109/ICRA.2014.6907828.
* Zinage and Bakolas (2022) Zinage, V. and Bakolas, E. (2022). Neural koopman control barrier functions for safety-critical control of unknown nonlinear systems. _arXiv preprint arXiv:2209.07685_.
## Appendix A Proof of Corollary 1
Using (24) and (28) we can express the disturbance vector field error as
$\displaystyle\tilde{d}(\boldsymbol{x}(t))$
$\displaystyle=d(\boldsymbol{x}(t))-\hat{d}(\boldsymbol{x}(t)),$
$\displaystyle=\boldsymbol{W}(t)\tilde{\boldsymbol{\lambda}}(t),$
and thus can observe that
$\|\tilde{d}(\boldsymbol{x}(t))\|_{\infty}=\|\boldsymbol{W}(t)\tilde{\boldsymbol{\lambda}}(t)\|_{\infty}\leq\sigma_{max}(\boldsymbol{W}(t))\|\tilde{\boldsymbol{\lambda}}(t)\|_{\infty}$.
Then, using (Black et al., 2022a, Corollary 1) we obtain that
$\|\tilde{\boldsymbol{\lambda}}(t)\|_{\infty}\leq\Lambda\tan^{\frac{w}{2}}(A(t))$
for all $t\leq T$, where $\Lambda$, $A(t)$, and $T$ are given by (27), (29),
(21) respectively, and $\|\tilde{\boldsymbol{\lambda}}(t)\|_{\infty}=0$ for
all $t>T$.
Then, to obtain the $\Xi$ term in (30), observe that with
$\hat{\boldsymbol{\lambda}}(0)=\mathbf{0}_{N^{2}\times 1}$ and the assumption
that $\|d(\boldsymbol{x})\|_{\infty}\leq D$,
$\forall\boldsymbol{x}\in\mathcal{X}$, it follows that at $t=0$
$\sigma_{min}(\boldsymbol{W})\|\tilde{\boldsymbol{\lambda}}\|_{\infty}\leq\|\boldsymbol{W}\tilde{\boldsymbol{\lambda}}\|_{\infty}=\|\tilde{d}(\boldsymbol{x})\|_{\infty}\leq
2D,$
from which we obtain that
$\|\tilde{\boldsymbol{\lambda}}(0)\|_{\infty}\leq\frac{2D}{\sigma_{min}\left(\boldsymbol{W}(0)\right)}.$
Thus we obtain
$\boldsymbol{l}=\frac{2D}{\sigma_{min}\left(\boldsymbol{W}(0)\right)}\cdot\mathbf{1}_{N^{2}\times
1}$, and this completes the proof.
# A Mutual Information Perspective on
Federated Contrastive Learning
Christos Louizos, Matthias Reisser, Denis Korzhenkov
Qualcomm AI Research
<EMAIL_ADDRESS>
Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc. and/or
its subsidiaries.
###### Abstract
We investigate contrastive learning in the federated setting through the lens
of SimCLR and multi-view mutual information maximization. In doing so, we
uncover a connection between contrastive representation learning and user
verification; by adding a user verification loss to each client’s local SimCLR
loss we recover a lower bound to the global multi-view mutual information. To
accommodate the case where some labelled data are available at the clients, we
extend our SimCLR variant to the federated semi-supervised setting. We see
that a supervised SimCLR objective can be obtained with two
changes: a) the contrastive loss is computed between datapoints that share the
same label and b) we require an additional auxiliary head that predicts the
correct labels from either of the two views. Along with the proposed SimCLR
extensions, we also study how different sources of non-i.i.d.-ness can impact
the performance of federated unsupervised learning through global mutual
information maximization; we find that a global objective is beneficial for
some sources of non-i.i.d.-ness but can be detrimental for others. We
empirically evaluate our proposed extensions in various tasks to validate our
claims and furthermore demonstrate that our proposed modifications generalize
to other pretraining methods.
## 1 Introduction
For many machine-learning applications “at the edge”, data is observed without
labels. Consider for example pictures on smartphones, medical data
measurements on smart watches or video-feeds from vehicles. Leveraging the
information in those data streams traditionally requires labelling - _e.g._
asking users to confirm the identity of contacts in photo libraries, uploading
road recordings to a central labelling entity - or the data might remain
unused. Fundamentally, labelling data from the edge either happens at the edge
or one accepts the communication overhead, privacy costs and infrastructure
effort to transfer the data to a central entity and label it there. Labelling
at the edge on the other hand either requires enough hardware resources to run
a more powerful teacher model or it requires costly end-user engagement with
inherent label noise and potential lack of expertise for labelling. Ideally,
we can leverage unlabelled data directly at the edge by applying unsupervised
learning, without the need for labels nor needing to transfer data to a
central location.
In this work, we consider the case of federated unsupervised and semi-
supervised learning through the lens of contrastive learning and multi-view
mutual information (MI) maximization. The main challenges in this context are
twofold: estimating the MI can be difficult because it often requires
intractable marginal distributions (Poole et al., 2019). Additionally, the
federated environment introduces extra complications, as the global MI
objective does not readily decompose into a sum of local (client-wise) loss
functions, thereby making it difficult to employ FedAvg (McMahan et al.,
2017), the go-to algorithm in federated learning.
To combat these challenges, we introduce specific lower bounds to the global
MI that decompose appropriately into local objectives, allowing for
straightforward federated optimization. In doing so, we arrive at a principled
extension of SimCLR (Chen et al., 2020) to the federated (semi-) unsupervised
setting, while uncovering interesting properties. While each user can run
vanilla SimCLR locally, to establish a lower bound for the global MI, it is
necessary to add a "user-verification" (UV) loss (Hosseini et al., 2021) for
each view. When also dealing with labelled data, the local SimCLR loss on each
client needs to contrast datapoints in the batch that belong to the _same_
class, thus acting as a form of hard-negative mining. Additionally, besides
the UV loss, a label loss is also required for each view. Along with the
proposed extensions, we also consider how different sources of non-i.i.d.-ness
can impact the performance of federated unsupervised learning through _global_
MI maximization. We show that such an objective is beneficial for specific
sources of non-i.i.d.-ness but it can be detrimental for others. Finally,
while our theoretical analysis and model design was based on SimCLR, we
demonstrate that they are generally applicable to other pretraining methods as
well, such as spectral contrastive learning (HaoChen et al., 2021) and SimSiam
(Chen & He, 2021).
## 2 Federated multi-view mutual information maximization
Mutual information (MI) has been a paramount tool for unsupervised
representation learning; SimCLR (Chen et al., 2020), one of the most popular
self-supervised learning methods, can be cast as learning an encoder model
that maximizes the MI between two views of the same image (Wu et al., 2020).
Applying SimCLR to the federated setting however is not straightforward,
primarily because the global dataset is not accessible during optimization. In
FL, each client only has a subset of the available dataset, and this subset is
not necessarily representative of the global dataset due to differences in the
data-generative process between clients. Various methods have been proposed to
mitigate this effect via global dictionaries of representations (Zhang et al.,
2020) or feature alignment regularizers (Wang et al., 2022). In this work, we
adopt a different view and extend SimCLR to the federated setting through the
lens of global multi-view MI maximization.
### 2.1 Federated SimCLR
Assume that we have access to an encoder
$p_{\theta}({\mathbf{z}}|{\mathbf{x}})$ with parameters $\theta$. We would
like to train this encoder, such that we maximize the MI between the
representations of two views of the input ${\mathbf{x}}\in\mathbb{R}^{D_{x}}$,
namely, ${\mathbf{z}}_{1},{\mathbf{z}}_{2}\in\mathbb{R}^{D_{z}}$, in the
federated setting. Let $s\in\mathbb{N}$ denote the client ID and $p(s)$ a
distribution over clients.
In federated learning (FL), the non-i.i.d.-ness can manifest in various ways:
a) label skew, where each client $s$ has a different distribution over labels
$p(y|s)$ but the same $p({\mathbf{x}}|y)$, the most common type of non-i.i.d.-ness
assumed in the FL literature, b) covariate shift, where each client has a
different distribution over features for a specific class
$p({\mathbf{x}}|y,s)$, _e.g._ due to different mobile sensors, but the same
$p(y)$ and c) joint shift, where both, the distribution of ${\mathbf{x}},y$
vary as a function of $s$. This affects the assumed data-generating process of
SimCLR representations accordingly, which we illustrate in Figure 1.
Figure 1: Graphical model of the assumed generative process under the various
sources of non-i.i.d.-ness (label skew, covariate shift and joint shift), with
nodes $s$, $y$, ${\mathbf{x}}$, ${\mathbf{z}}_{1}$ and ${\mathbf{z}}_{2}$.
Let $\mathrm{I}(x;y)$ denote the MI between $x,y$ and $\mathrm{I}(x;y|z)$ be
the MI between $x,y$ conditioned on a third variable $z$. Based on the
aforementioned generative process and assuming that all labels are unknown, we
start the derivation of federated SimCLR from the chain rule of MI:
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};s,{\mathbf{z}}_{2})$
$\displaystyle=\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2})+\mathrm{I}_{\theta}({\mathbf{z}}_{1};s|{\mathbf{z}}_{2})=\mathrm{I}_{\theta}({\mathbf{z}}_{1};s)+\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2}|s)$
(1)
$\displaystyle\underbrace{\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2})}_{\text{Global
multi-view MI}}$
$\displaystyle=\underbrace{\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2}|s)}_{\text{Local
multi-view
MI}}+\underbrace{\mathrm{I}_{\theta}({\mathbf{z}}_{1};s)}_{\text{Client ID
MI}}-\underbrace{\mathrm{I}_{\theta}({\mathbf{z}}_{1};s|{\mathbf{z}}_{2})}_{\text{Excess
client ID MI}}.$ (2)
We see that the multi-view MI in the federated setting decomposes into three
terms; we want to maximize the average, over the clients, local MI between the
representations of the two views ${\mathbf{z}}_{1}$, ${\mathbf{z}}_{2}$, along
with the MI between the representation ${\mathbf{z}}_{1}$ and the client ID
$s$ while simultaneously minimizing the additional information
${\mathbf{z}}_{1}$ carries about $s$ conditioned on ${\mathbf{z}}_{2}$. Such
MI decompositions have also been considered in Sordoni et al. (2021) for
improving MI estimation in a different context. Unfortunately, in our case
these terms require access to potentially intractable or hard to obtain
distributions, so we will resort to easy to compute and evaluate variational
bounds.
For the first term, _i.e._ , the client conditional MI between the two views,
we provide proposition 1 which uses the standard InfoNCE bound (Poole et al.,
2019), leading to an objective that decomposes into a sum of local terms, one
for each client, thus allowing for federated optimization with FedAvg.
###### Proposition 1.
Let $s\in\mathbb{N}$ denote the user ID, ${\mathbf{x}}\in\mathbb{R}^{D_{x}}$
the input and ${\mathbf{z}}_{1},{\mathbf{z}}_{2}\in\mathbb{R}^{D_{z}}$ the
latent representations of the two views of ${\mathbf{x}}$ given by the encoder
with parameters $\theta$. Given a critic function
$f:\mathbb{R}^{D_{z}}\times\mathbb{R}^{D_{z}}\rightarrow\mathbb{R}$, we have
that
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2}|s)$
$\displaystyle\geq\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1},{\mathbf{z}}_{2}|s)_{1:K}}\left[\frac{1}{K}\sum_{k=1}^{K}\log\frac{\exp(f({\mathbf{z}}_{1k},{\mathbf{z}}_{2k}))}{\frac{1}{K}\sum_{j=1}^{K}\exp(f({\mathbf{z}}_{1j},{\mathbf{z}}_{2k}))}\right]$
(3)
All of the proofs can be found in the appendix. This corresponds to a
straightforward application of SimCLR to the federated setting where each
client performs SimCLR training locally, _i.e._ , clients contrast against
their local dataset instead of the global dataset. We will refer to this
objective as _Local SimCLR_.
In order to optimize the global MI instead of the local MI, we need to address
the two remaining terms of equation 2. The first term,
$\mathrm{I}_{\theta}({\mathbf{z}}_{1};s)$, requires information from the
entire federation, _i.e._ , $p_{\theta}({\mathbf{z}}_{1})$, which is
intractable. However, with lemma 2.1 we show that by introducing a “client
classification” task, we can form a simple and tractable lower bound to this
term.
###### Lemma 2.1.
Let $s\in\mathbb{N}$ denote the client ID, ${\mathbf{x}}\in\mathbb{R}^{D_{x}}$
the input and ${\mathbf{z}}_{1}\in\mathbb{R}^{D_{z}}$ the latent
representation of a view of ${\mathbf{x}}$ given by the encoder with
parameters $\theta$. Let $\phi$ denote the parameters of a client classifier
$r_{\phi}(s|{\mathbf{z}}_{1})$ that predicts the client ID from this specific
representation and let $\mathrm{H}(s)$ be the entropy of the client
distribution $p(s)$. We have that
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};s)\geq\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1}|s)}\left[\log
r_{\phi}(s|{\mathbf{z}}_{1})\right]+\mathrm{H}(s)$ (4)
With this bound we avoid the need for the intractable marginal
$p_{\theta}({\mathbf{z}}_{1})$ and highlight an interesting connection between
self-supervised learning in FL and user-verification models (Yu et al., 2020;
Hosseini et al., 2021). For the last term of equation 2, we need an upper
bound to maintain an overall lower bound to
$\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2})$. Upper bounds to the
MI can be problematic as they require explicit densities (Poole et al., 2019).
Fortunately, in our specific case, we show in lemma 2.2 that with an
additional client classification task for the second view, we obtain a simple
and tractable upper bound.
###### Lemma 2.2.
Let $s\in\mathbb{N}$ denote the user ID, ${\mathbf{x}}\in\mathbb{R}^{D_{x}}$
the input and ${\mathbf{z}}_{1},{\mathbf{z}}_{2}\in\mathbb{R}^{D_{z}}$ the
latent representations of the views of ${\mathbf{x}}$ given by the encoder
with parameters $\theta$. Let $\phi$ denote the parameters of a client
classifier $r_{\phi}(s|{\mathbf{z}}_{2})$ that predicts the client ID from the
representations. We have that
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};s|{\mathbf{z}}_{2})\leq-\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2}|s)}\left[\log
r_{\phi}(s|{\mathbf{z}}_{2})\right]$ (5)
By combining our results, we arrive at the following lower bound for the
global MI that decomposes into a sum of local objectives involving the
parameters $\theta,\phi$. We dub it _Federated SimCLR_.
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2})$
$\displaystyle\geq\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1},{\mathbf{z}}_{2}|s)_{1:K}}\Bigg{[}\frac{1}{K}\sum_{k=1}^{K}\log\frac{\exp(f({\mathbf{z}}_{1k},{\mathbf{z}}_{2k}))}{\frac{1}{K}\sum_{j=1}^{K}\exp(f({\mathbf{z}}_{1j},{\mathbf{z}}_{2k}))}$
$\displaystyle\qquad\qquad+\log r_{\phi}(s|{\mathbf{z}}_{1k})+\log
r_{\phi}(s|{\mathbf{z}}_{2k})\Bigg{]}+\mathrm{H}(s).$ (6)
In this way, Federated SimCLR allows for a straightforward optimization of
$\theta,\phi$ with standard FL optimization methods, such as Reddi et al.
(2020), and inherits their convergence guarantees. Furthermore, it is
intuitive; each client performs SimCLR locally, while simultaneously training
a shared classifier that predicts their user ID from both views. The
additional computational overhead of this classifier is relatively minor
compared to the encoder itself, making it appropriate for resource-constrained
devices.
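As a sketch of how equation 6 turns into a per-client loss (reusing `local_simclr_loss` from the earlier snippet; `uv_head` is a hypothetical classifier mapping representations to client-ID logits, and the constant $\mathrm{H}(s)$ is dropped):

```python
def federated_simclr_loss(z1, z2, uv_head, client_id: int, tau: float = 0.5):
    """Client-side surrogate of the bound in eq. (6)."""
    uv_targets = torch.full((z1.size(0),), client_id,
                            dtype=torch.long, device=z1.device)
    loss = local_simclr_loss(z1, z2, tau)                   # I(z1; z2 | s) bound
    loss = loss + F.cross_entropy(uv_head(z1), uv_targets)  # I(z1; s) bound
    loss = loss + F.cross_entropy(uv_head(z2), uv_targets)  # excess client ID MI bound
    return loss
```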
#### Optimizing the user-verification loss
For the client ID loss we use a single linear layer followed by softmax with
three important modifications, as the _local_ optimization of the client ID
loss is prone to bad optima due to having “labels” from only “a single class”
(that of the client optimizing it) (Yu et al., 2020); a) the linear layer does
not have a bias, as that would make the local optimization of the UV loss
trivial and would not meaningfully affect the encoder, b) both the inputs to
the linear layer as well as the linear layer weights are constrained to have
unit norm and, c) each client locally optimizes only their associated vector
weight in the linear classifier while all of the others are kept fixed. In
this way each client needs to find their “own cluster center” to optimize the
UV loss locally. These centers need to be sufficiently far from the cluster
centers of the other clients that a client receives from the server and keeps
fixed throughout local optimization.
#### Effects of non-i.i.d.-ness on the performance on downstream tasks
Given access to both the global and local MI objectives, we now want to
understand how the type of non-i.i.d.-ness determines whether a specific
objective is the better choice. To answer this question, we first show at
proposition 2 that in the case of label skew, the client classification
objective is a lower bound to the MI between the representations
${\mathbf{z}}_{1},{\mathbf{z}}_{2}$ and the unavailable label $y$.
###### Proposition 2.
Consider the label skew data-generating process for federated SimCLR from
Figure 1 with $s\in\mathbb{N}$ denoting the user ID with $\mathrm{H}(s)$ being
the entropy of $p(s)$, ${\mathbf{x}}\in\mathbb{R}^{D_{x}}$ the input,
${\mathbf{z}}_{1},{\mathbf{z}}_{2}\in\mathbb{R}^{D_{z}}$ the latent
representations of the two views of ${\mathbf{x}}$ given by the encoder with
parameters $\theta$. Let $y$ be the label and let
$r_{\phi}(s|{\mathbf{z}}_{i})$ be a model with parameters $\phi$ that predicts
the user ID from the latent representation ${\mathbf{z}}_{i}$. In this case,
we have that
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};y)+\mathrm{I}_{\theta}({\mathbf{z}}_{2};y)\geq\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1},{\mathbf{z}}_{2}|s)}\left[\log
r_{\phi}(s|{\mathbf{z}}_{1})+\log
r_{\phi}(s|{\mathbf{z}}_{2})\right]+2\mathrm{H}(s).$ (7)
Therefore, when the source of non-i.i.d.-ness is heavily dependent on the
actual downstream task, the additional client classification objective
stemming from the global MI bound is beneficial as it is a good proxy for the
thing we care about. In the case of covariate shift, we know that the source
of non-i.i.d.-ness is independent of the label, _i.e._ , $\mathrm{I}(y;s)=0$,
so the additional client classification term can actually become detrimental;
the representation will encode information irrelevant for the downstream task
and, depending on the capacity of the network and underlying trade-offs, can
lead to worse task performance. In this case, optimizing the local MI is
expected to work better, as the client specific information (_i.e._ , the
irrelevant information) is not encouraged in the representations.
Figure 2: Overview of the SimCLR architectures considered. Local SimCLR
(left): each client optimizes a contrastive loss on their own data, thus the
federation implicitly optimizes a lower bound to
$\mathrm{I}({\mathbf{z}}_{1};{\mathbf{z}}_{2}|s)$. Federated SimCLR (center):
along with the contrastive loss on their own data, each client also optimizes
a client classifier, thus the federation implicitly optimizes a lower bound to
$\mathrm{I}({\mathbf{z}}_{1};{\mathbf{z}}_{2})$. Supervised federated SimCLR
(right): a label-dependent variant of federated SimCLR that encourages
clustering according to the label while also optimizing a lower bound to
$\mathrm{I}({\mathbf{z}}_{1};{\mathbf{z}}_{2})$.
### 2.2 Federated Semi-Supervised SimCLR
In practice, labeled data for a specific task are sometimes available. These
could for example constitute a curated dataset at the server or a small
labelled subset of data on each client. In this case, it will generally be
beneficial for the downstream task if the objective takes these labels into
account. To this end, we can use the following label-dependent expression for
the client conditional MI
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2}|s)$
$\displaystyle=\mathrm{I}_{\theta}({\mathbf{z}}_{1};y|s)+\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2}|y,s)-\mathrm{I}_{\theta}({\mathbf{z}}_{1};y|s,{\mathbf{z}}_{2}).$
(8)
Therefore, once we obtain a label-specific lower bound for this quantity, it
will be straightforward to translate it to a label-specific lower bound for
the global MI by adding back the user-verification losses for the two views.
For the following we will assume that we have an underlying classification
task, hence a label $y\in\mathbb{N}$.
For the MI between the two views ${\mathbf{z}}_{1},{\mathbf{z}}_{2}$
conditioned on the label $y$ and client $s$, we can make use of proposition 1
by treating $s,y$ as the conditioning set. In this case, we again use the
InfoNCE loss, with the exception that we now contrast between datapoints that
also belong to the same class,
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2}|y,s)$
$\displaystyle\geq\mathbb{E}_{p(s,y)p_{\theta}({\mathbf{z}}_{1},{\mathbf{z}}_{2}|y,s)_{1:K}}\left[\frac{1}{K}\sum_{k=1}^{K}\log\frac{\exp(f({\mathbf{z}}_{1k},{\mathbf{z}}_{2k}))}{\frac{1}{K}\sum_{j=1}^{K}\exp(f({\mathbf{z}}_{1j},{\mathbf{z}}_{2k}))}\right].$
(9)
For the other two terms that involve the label $y$ we can proceed in a similar
manner to the client ID $s$. For the MI between ${\mathbf{z}}_{1}$ and $y$
conditioned on $s$, as $y$ is also discrete, we can make use of lemma 2.1 by
treating $y$ as $s$. Therefore, we introduce a classifier
$r_{\phi}(y|{\mathbf{z}}_{1})$ and obtain the following lower bound
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};y|s)\geq\mathbb{E}_{p(s)p_{\theta}(y,{\mathbf{z}}_{1}|s)}\left[\log
r_{\phi}(y|{\mathbf{z}}_{1})\right]+\mathrm{H}(y|s),$ (10)
where $\mathrm{H}(y|s)$ denotes the entropy of the label marginal at the
client, $p(y|s)$. For the MI between ${\mathbf{z}}_{1}$ and $y$ conditioned on
${\mathbf{z}}_{2}$ and $s$ we make use of lemma 2.2 and get the following
upper bound
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};y|{\mathbf{z}}_{2},s)$
$\displaystyle\leq-\mathbb{E}_{p(s,y)p_{\theta}({\mathbf{z}}_{2}|y,s)}\left[\log
r_{\phi}(y|{\mathbf{z}}_{2})\right].$ (11)
Putting everything together, we arrive at the following label-dependent lower
bound for local SimCLR
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2}|s)$
$\displaystyle\geq\mathbb{E}_{p(s,y)p_{\theta}({\mathbf{z}}_{1},{\mathbf{z}}_{2}|y,s)_{1:K}}\Bigg{[}\frac{1}{K}\sum_{k=1}^{K}\log\frac{\exp(f({\mathbf{z}}_{1k},{\mathbf{z}}_{2k}))}{\frac{1}{K}\sum_{j=1}^{K}\exp(f({\mathbf{z}}_{1j},{\mathbf{z}}_{2k}))}$
$\displaystyle\qquad+\log r_{\phi}(y|{\mathbf{z}}_{1k})+\log
r_{\phi}(y|{\mathbf{z}}_{2k})+\mathrm{H}(y|s)\Bigg{]},$ (12)
which decomposes into intuitive terms; we are performing InfoNCE between the
views of the datapoints that belong to the same class and client, while
simultaneously trying to predict the class from the representations of both
views. To transition from a label-dependent bound for the local SimCLR to a
label-dependent bound of the federated SimCLR, it suffices to add the client
classifiers
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2})$
$\displaystyle\geq\mathbb{E}_{p(s,y)p_{\theta}({\mathbf{z}}_{1},{\mathbf{z}}_{2}|y,s)_{1:K}}\Bigg{[}\frac{1}{K}\sum_{k=1}^{K}\log\frac{\exp(f({\mathbf{z}}_{1k},{\mathbf{z}}_{2k}))}{\frac{1}{K}\sum_{j=1}^{K}\exp(f({\mathbf{z}}_{1j},{\mathbf{z}}_{2k}))}+\log
r_{\phi}(s|{\mathbf{z}}_{1k})$ $\displaystyle\qquad+\log
r_{\phi}(s|{\mathbf{z}}_{2k})+\log r_{\phi}(y|{\mathbf{z}}_{1k})+\log
r_{\phi}(y|{\mathbf{z}}_{2k})+\mathrm{H}(y|s)\Bigg{]}+\mathrm{H}(s).$ (13)
Figure 2 visualizes all of the SimCLR architectures considered in this work.
#### The case of unlabelled data
The primary motivation of the previous discussion is to tackle the semi-
supervised case, _i.e._ , the case when some clients do not have access to all
labels. A simple way to handle the unlabelled data is to fall back to the
bound of proposition 1 for the conditional MI when we do not have access to
labels. In this way, each client can do a form of “more difficult” contrastive
learning for their labelled data, where they contrast against datapoints which
are more semantically similar (_i.e._ , they share the same class), while
simultaneously trying to predict the correct class, whereas for their
unlabelled data they perform standard contrastive learning.
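A hypothetical sketch of the resulting client update, reusing `local_simclr_loss` from before: labelled examples are contrasted only within their class and incur a label loss for both views, as in equation 12, while unlabelled examples fall back to plain InfoNCE. For the federated variant of equation 13, the UV losses of the earlier sketch would be added on top.

```python
def semi_supervised_loss(z1, z2, y, label_head, tau: float = 0.5):
    """y: class indices, with -1 marking unlabelled examples."""
    labelled = y >= 0
    loss = torch.zeros((), device=z1.device)
    if (~labelled).any():  # plain InfoNCE on the unlabelled part of the batch
        loss = loss + local_simclr_loss(z1[~labelled], z2[~labelled], tau)
    for c in y[labelled].unique():  # "harder" contrast within each class
        idx = y == c
        loss = loss + local_simclr_loss(z1[idx], z2[idx], tau)
    if labelled.any():  # label losses for both views, as in eq. (12)
        loss = loss + F.cross_entropy(label_head(z1[labelled]), y[labelled])
        loss = loss + F.cross_entropy(label_head(z2[labelled]), y[labelled])
    return loss
```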
#### Label-dependent vs label-independent bound
Even though both our label-dependent and label-independent bounds are lower
bounds of the MI between the representations of the two views, the former
should be preferred if labels are available. This is because the label
independent one can be satisfied without necessarily clustering the
representations semantically, whereas the label dependent one directly
encourages clustering according to the label through the additional
classification losses, so it is expected to perform better for downstream
tasks.
## 3 Related work
Unsupervised learning in the federated context has gained significant
attention in recent years. On the contrastive learning side, Zhang et al.
(2020) introduces FedCA, a SimCLR variant for federated setting. The main idea
is that the representations between the clients can become misaligned due to
the non-i.i.d. nature of FL. The authors then introduce a global dictionary of
representations which is shared between all participants and is used to align
the representation spaces. One of the main drawbacks of this method is that it
requires the transmission of data representations of clients, which leads to
reduced privacy. Compared to a global dictionary module, our federated SimCLR
aligns the representations of the clients through the additional UV loss
component, requiring the communication of just some additional model
parameters and not raw representations. Dong & Voiculescu (2021) introduces
FedMoCo, an extension of MoCo (He et al., 2020) to the federated setting.
Similar to FedCA, FedMoCo shares additional client metadata, _i.e._ , moments
of the local feature distributions, from the clients to the server, thus
leading to reduced privacy. Li et al. (2023a) also extends MoCo to the
federated setting however, instead of using a FedAvg type of protocol, the
authors employ a split learning (Poirot et al., 2019) protocol, which leads to
reduced compute requirements at the edge but also requires communicating raw
representations of the local data to the server. Finally, the closest to our
work is the work of Wang et al. (2022) where the authors also explore the
effects of non-i.i.d.-ness when training a model with SimCLR in the federated
setting. The authors further propose an extension that uses multiple models
and encourages feature alignment with an additional loss function. In contrast
to FeatARC (the method of Wang et al. (2022)), where the feature alignment
loss is added ad hoc to SimCLR, we can
see that from our MI perspective on SimCLR, a feature alignment loss naturally
manifests via an additional user-verification loss to SimCLR when optimizing a
lower bound to the global MI.
On the non-contrastive learning side, Makhija et al. (2022) introduces Hetero-
SSFL, an extension of BYOL (Grill et al., 2020) and SimSiam (Chen & He, 2021)
to the federated setting where each client can have their own encoder model
but, in order to align the local models, an additional public dataset is
required. Zhuang et al. (2022) introduces FedEMA, where a hyperparameter of
BYOL is adapted in a way that takes into account the divergence of the local
and global models. In contrast to these methods which require several tricks
for improved performance, _i.e._ , moving average updates, custom type of
aggregations and stop gradient operations, our federated SimCLR method works
by just optimizing a straightforward loss function with the de facto standard,
FedAvg. On a different note, Lu et al. (2022) proposes to train a model with
pseudo-labels for the unlabelled data and then recover the model for the
desired labels via a post-processing step. Finally Lubana et al. (2022)
proposes an unsupervised learning framework through simultaneous local and
global clustering, which requires communicating client data representations,
_i.e._ , the cluster centroids, to the server.
On the federated semi-supervised learning side, most works rely on generating
pseudo-labels for the unlabelled examples. Jeong et al. (2020) proposes
FedMatch, an adaptation of FixMatch (Sohn et al., 2020) to the federated
setting by adding one more consistency loss that encourages the models learned
on each client to output similar predictions for the local data. The authors
also propose a pseudo-labelling strategy that takes into account the agreement
of client models and a parameter decomposition strategy that allocates
separate parameters to be optimized on unlabelled and labelled data. In
contrast, our semi-supervised objectives are simpler, do not rely on pseudo-
labels (which introduce additional hyper-parameters for filtering low-
confidence predictions) and do not require communicating client specific
models among the federation. Liang et al. (2022) proposes a student-teacher
type scheme for training on unlabelled data, where consistency regularization
is applied. The teacher model is an exponential moving average of the student
and a novel aggregation mechanism is introduced. Our proposed methods for
semi-supervised learning could potentially also benefit from better
aggregation mechanisms, but we leave such an exploration for future work.
Finally, Kim et al. (2022) introduces ProtoFSSL, which incorporates knowledge
from other clients in the local training via sharing prototypes between the
clients. While such prototypes do improve performance, they also reveal more
information about the local data of each client, thus reducing privacy. In
contrast, our federated semi-supervised framework does not rely on sharing
prototypes between the clients.
## 4 Experiments
Our experimental evaluation consists of unsupervised and semi-supervised
experiments, where for the latter each client has labels for $10\%$ of their
data. To quantify the quality of the learned representations, we adapt the
classical evaluation pipeline of training a linear probe (LP) to be in line
with common assumptions of self-supervised learning. In the unsupervised case,
we report the LP accuracy on the union of clients’ labelled version of their
data, as this corresponds to the traditional non-federated evaluation
pipeline. For the semi-supervised case, we train a LP on top of the
representations of the clients’ labelled training data (which is a subset of
the full training set) and then report its test accuracy. At every evaluation
for plotting of learning curves, we initialize the LP from the final
parameters of the previous evaluation. Furthermore, as we mention in section
2.1, the nature of non-i.i.d. data in FL can manifest in various ways: label
skew, covariate shift and joint shift, _i.e._ , a combination of the two. We
therefore evaluate, besides label skew (the predominant type of
non-i.i.d.-ness assumed in the FL literature), covariate shift by creating a
rotated version of CIFAR10 and CIFAR100 as well as a joint shift case where
both sources of non-i.i.d.-ness are present. For CIFAR 10 we consider 100
clients whereas for CIFAR100 we consider 500 clients. For the encoder we use a
ResNet18 architecture adapted for the CIFAR datasets where, following Hsieh et
al. (2020), we replace batch normalization (Ioffe & Szegedy, 2015) with group
normalization (Wu & He, 2018).
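For concreteness, a minimal sketch of this LP protocol (illustrative, full-batch training for brevity; continuing the imports of the earlier snippets): freeze the encoder, fit a linear classifier on the labelled representations and report its test accuracy.

```python
@torch.no_grad()
def extract(encoder, loader, device):
    feats, labels = [], []
    for x, y in loader:
        feats.append(encoder(x.to(device)).cpu())
        labels.append(y)
    return torch.cat(feats), torch.cat(labels)

def linear_probe_accuracy(encoder, train_loader, test_loader, device, steps: int = 200):
    encoder.eval()
    xtr, ytr = extract(encoder, train_loader, device)
    xte, yte = extract(encoder, test_loader, device)
    probe = nn.Linear(xtr.size(1), int(ytr.max()) + 1)
    opt = torch.optim.Adam(probe.parameters())
    for _ in range(steps):  # full-batch logistic regression on frozen features
        opt.zero_grad()
        F.cross_entropy(probe(xtr), ytr).backward()
        opt.step()
    return (probe(xte).argmax(dim=-1) == yte).float().mean().item()
```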
In order to demonstrate the general usefulness of our theoretical results and
model design stemming from our MI perspective, we include two more methods in
our evaluation besides SimCLR. The first one is spectral contrastive learning
(HaoChen et al., 2021) (dubbed as Spectral CL) as another instance of
constrastive learning and the other is SimSiam (Chen & He, 2021), a non-
contrastive method. For both of these methods, we consider both a “local”
variant where each of the losses is optimized locally and Reddi et al. (2020)
is applied to the parameters as well as, based on the intuition from our
federated SimCLR, a “global” variant where the same UV loss component of
federated SimCLR is added to the baselines. As we show in proposition 2, such
an auxiliary task is beneficial in the case of label skew in general.
Furthermore, we also extend these baselines to the semi-supervised setting.
Based on the insights from our label-dependent MI bounds for SimCLR, we
consider label-dependent variants of SimSiam and Spectral CL where, when
labels are available, the unsupervised losses are evaluated between elements
that share the same class and a classification loss for the two views is added
to the overall loss function.
#### Unsupervised setting
The results in the unsupervised setting can be seen in Table 1. In the case of
label skew, adding our user-verification loss to each of the local losses
leads to (sometimes dramatic) improvements in all cases. This is to be
expected, as in this case the mutual information between the labels and the
client ID, $\mathrm{I}(y;s)$, is quite high, so the UV loss acts as a good
proxy for the downstream task. For SimCLR we observe a $\sim 6\%$ improvement
on CIFAR 10/100 and on Spectral CL we observe $\sim 11\%$ and $\sim 8\%$
respectively. SimSiam-type methods generally underperformed compared to
SimCLR and Spectral CL, and we believe this is due to representation collapse,
especially given that in our setting we employ group normalization instead of
batch-normalization. On covariate shift, we now see that the situation is
flipped; as in this case $\mathrm{I}(y;s)=0$, local SimCLR / Spectral CL are
doing better compared to their global counterparts that include the UV loss.
Both local SimCLR and Spectral CL perform better by $\sim 1-2\%$ and $\sim
2-4\%$ on CIFAR 10 and CIFAR 100 respectively, with local SimCLR providing the
better overall performance. Finally, on the joint shift case, the label skew
is strong enough to allow for improvements with the additional UV loss
components in most cases; for SimCLR there is an improvement of $\sim 4-5\%$
and for Spectral CL there is a $\sim 8\%$ improvement for CIFAR 10 but a drop
of $\sim 8\%$ for CIFAR 100. We attribute the latter to the overall
instability of Spectral CL in our CIFAR 100 experiments, as reflected in the
large standard error.
Table 1: Test set performance ($\%$) on the unsupervised setting along with
standard error over $5$ seeds. Clients’ data is assumed to be fully annotated
for LP fine-tuning in the unsupervised case.
Method | CIFAR 10, Label skew | CIFAR 10, Covariate shift | CIFAR 10, Joint shift | CIFAR 100, Label skew | CIFAR 100, Covariate shift | CIFAR 100, Joint shift
---|---|---|---|---|---|---
Local SimCLR | $79.4_{\pm 0.2}$ | $\mathbf{74.3_{\pm 0.3}}$ | $71.0_{\pm 0.4}$ | $42.2_{\pm 0.2}$ | $\mathbf{41.2_{\pm 0.2}}$ | $38.1_{\pm 0.3}$
Federated SimCLR | $\mathbf{85.0_{\pm 0.2}}$ | $73.8_{\pm 0.2}$ | $\mathbf{74.8_{\pm 0.5}}$ | $\mathbf{48.5_{\pm 0.1}}$ | $39.5_{\pm 0.2}$ | $\mathbf{43.1_{\pm 0.2}}$
Spectral CL | $76.5_{\pm 1.1}$ | $\mathbf{73.5_{\pm 0.4}}$ | $68.2_{\pm 0.6}$ | $33.3_{\pm 6.0}$ | $\mathbf{33.6_{\pm 2.3}}$ | $\mathbf{29.6_{\pm 6.2}}$
Spectral CL + UV | $\mathbf{87.8_{\pm 0.3}}$ | $71.7_{\pm 0.5}$ | $\mathbf{76.6_{\pm 0.6}}$ | $\mathbf{41.0_{\pm 6.4}}$ | $29.3_{\pm 4.8}$ | $21.5_{\pm 6.2}$
SimSiam | $\mathbf{40.0_{\pm 0.5}}$ | $\mathbf{39.9_{\pm 0.3}}$ | $\mathbf{39.6_{\pm 0.3}}$ | $16.9_{\pm 0.3}$ | $16.6_{\pm 0.4}$ | $16.9_{\pm 0.4}$
SimSiam + UV | $35.4_{\pm 0.4}$ | $35.4_{\pm 0.2}$ | $34.5_{\pm 0.3}$ | $16.5_{\pm 0.2}$ | $16.5_{\pm 0.3}$ | $16.3_{\pm 0.5}$
Supervised | $89.6_{\pm 0.1}$ | $78.3_{\pm 0.4}$ | $76.3_{\pm 1.1}$ | $59.2_{\pm 0.2}$ | $47.9_{\pm 0.2}$ | $43.9_{\pm 0.3}$
Overall, we observe that the results are consistent with our expectations;
when the source of non-i.i.d.-ness in the federated setting is strongly
correlated with the downstream task, optimizing a “global” objective, such as
$\mathrm{I}({\mathbf{z}}_{1};{\mathbf{z}}_{2})$, is beneficial, as the
additional UV term serves as a good proxy for the downstream task. This
intuition also generalizes to one of our baselines as, _e.g._ , even Spectral
CL benefits from the addition of the UV loss in such settings. In the absence
of such correlation, the simple local SimCLR / Spectral CL variants are doing
better since they do not encode information in the representations that is
irrelevant for the downstream task.
#### Semi-supervised setting
Our semi-supervised results with $10\%$ labelled data in Table 2 show
interesting observations. Overall, we improve performance with semi-supervised
training relative to purely supervised training on the labelled subset of the
data. On CIFAR 10, we notice that our semi-supervised models with the UV loss
do better than the local variants on all sources of non-i.i.d.-ness, even in
the case of covariate shift. Despite the limited quantity of labels available,
we believe that the encoders possessed sufficient capacity to both retain and
separate the label-specific and label-independent (_e.g._ , rotation)
information. Consequently, the downstream LP could accurately use the label-
specific portion of the representations for its predictions. SimSiam does much
better in this setting, as the supervised objective prevented representation
collapse, achieving the best performance on label skew when we add the UV
loss, whereas Federated SimCLR does best on the joint shift.
Table 2: Test set performance ($\%$) on the semi-supervised setting with
$10\%$ labelled data on each client along with standard error over $5$ seeds.
We use the corresponding labelled subset for the LP.
Method | CIFAR 10, Label skew | CIFAR 10, Covariate shift | CIFAR 10, Joint shift | CIFAR 100, Label skew | CIFAR 100, Covariate shift | CIFAR 100, Joint shift
---|---|---|---|---|---|---
Local SimCLR | $74.5_{\pm 0.3}$ | $\mathbf{49.1_{\pm 1.3}}$ | $45.8_{\pm 1.4}$ | $30.3_{\pm 0.2}$ | $15.1_{\pm 0.4}$ | $13.1_{\pm 0.3}$
Federated SimCLR | $\mathbf{78.0_{\pm 0.2}}$ | $\mathbf{50.3_{\pm 1.1}}$ | $\mathbf{49.9_{\pm 1.4}}$ | $\mathbf{34.5_{\pm 0.3}}$ | $14.8_{\pm 0.3}$ | $\mathbf{14.6_{\pm 0.3}}$
Spectral CL | $74.2_{\pm 0.3}$ | $48.0_{\pm 0.7}$ | $45.4_{\pm 1.5}$ | $30.1_{\pm 0.2}$ | $14.1_{\pm 0.4}$ | $12.3_{\pm 0.3}$
Spectral CL + UV | $\mathbf{79.6_{\pm 0.3}}$ | $\mathbf{49.7_{\pm 1.0}}$ | $\mathbf{49.8_{\pm 1.1}}$ | $\mathbf{34.0_{\pm 0.2}}$ | $13.7_{\pm 0.3}$ | $\mathbf{13.6_{\pm 0.4}}$
SimSiam | $75.3_{\pm 0.4}$ | $46.8_{\pm 0.7}$ | $40.5_{\pm 0.9}$ | $30.7_{\pm 0.2}$ | $13.4_{\pm 0.3}$ | $12.8_{\pm 0.3}$
SimSiam + UV | $\mathbf{80.4_{\pm 0.2}}$ | $\mathbf{50.0_{\pm 1.2}}$ | $\mathbf{44.3_{\pm 1.0}}$ | $\mathbf{34.3_{\pm 0.1}}$ | $13.6_{\pm 0.3}$ | $\mathbf{14.0_{\pm 0.4}}$
Supervised | $75.1_{\pm 0.2}$ | $48.1_{\pm 0.9}$ | $42.7_{\pm 1.7}$ | $29.6_{\pm 0.3}$ | $12.6_{\pm 0.2}$ | $12.2_{\pm 0.1}$
### 4.1 Ablation studies
In this section we perform additional experiments in order to investigate the
behaviour of local and federated SimCLR under different settings. We adopt our
CIFAR 10 setting with 100 clients and strong ($\alpha=0.1$) joint shift,
unless mentioned otherwise.
Figure 3: CIFAR 10 ablation studies. (a) Performance of local and federated
SimCLR as a function of the non-i.i.d.-ness strength $\alpha$ for covariate
shift and label skew. (b) Performance of local and federated SimCLR for
different amount of local epochs $E$ in the case of strong ($\alpha=0.1$)
covariate shift and label skew. (c) Performance of local and federated SimCLR
in the semi-supervised setting as a function of the amount of available
labelled data.
#### Amount of non-i.i.d.-ness
For the first set of experiments we investigate how the amount of
non-i.i.d.-ness affects the local and federated SimCLR performance with $E=1$.
We adopt the joint shift setting and perform experiments with different
strengths for each source of non-i.i.d.-ness. The results can be seen in
Figure 3(a) where we have an interesting observation; federated SimCLR does
_better_ the _higher_ the amount of label skew non-i.i.d.-ness is, in fact
even surpassing the performance of local SimCLR on i.i.d. data. This can be
explained by our proposition 2. As the amount of label skew increases, the
client ID carries more information about $y$, thus
$\mathrm{I}_{\theta}({\mathbf{z}}_{1};y|s)$ becomes lower and the lower bound
tighter. On the flipside, when there is strong covariate shift and not enough
label-skew, we observe that local SimCLR has consistently better performance.
#### Amount of local updates
The auxiliary UV objective in federated SimCLR can be problematic for a large
number of local updates, as there is only a single available class at each
client. Therefore, federated SimCLR requires relatively frequent
synchronization. We show in Figure 3(b) how the number of local epochs affects
local and federated SimCLR when keeping a fixed computation budget; more local
epochs imply less communication rounds and vice versa. We can see that
federated SimCLR achieves the best performance of the two with $1$ local step,
however, its performance drops with more local updates and eventually becomes
worse or comparable to local SimCLR.
#### Amount of labelled data for the semi-supervised setting
Finally, we also measure the impact of the amount of available labelled data
in the semi-supervised setting for local and federated SimCLR. We measure this
by keeping a fixed and labelled holdout set which we use to train a LP on top
of the representations given by the two algorithms. We also train a fully
supervised (_i.e._ , on $100\%$ labelled training data) baseline with the same
augmentations as the SimCLR variants. We can see in Figure 3(c) that the test
accuracy of the LP improves with more labelled data for both algorithms, as
expected. Federated SimCLR demonstrates improved performance compared to local
SimCLR on all cases considered, with the biggest advantages seen when the
amount of available labelled data during training is low. Furthermore,
federated SimCLR reaches performance comparable to the fully supervised
baseline with $\geq 50\%$ labelled training data.
## 5 Discussion
In this work we analyze contrastive learning and SimCLR in the federated
setting. By adopting a multi-view MI view, we arrive at several interesting
observations and extensions. We show that a naive application of local SimCLR
training at each client, coupled with parameter averaging at the server,
corresponds to maximizing a lower bound to the client conditional MI between
the two views. We then identify that, in order to close the gap to the global
MI, an auxiliary user-verification task is necessary. Finally, through the same MI
lens, we extend both local and federated SimCLR to the semi-supervised setting
in order to handle the case of partially available data. Despite the fact that
these modifications were developed through the MI view for SimCLR, we show
that they are generally useful for pretraining in the federated setting,
yielding improvements for both spectral contrastive learning and SimSiam.
As non-i.i.d. data are an inherent challenge in FL, we further discuss how it
affects contrastive learning, both theoretically and empirically. In the case
of label skew, the most predominant type of non-i.i.d.-ness in the FL
literature, we show that maximizing the global MI through federated SimCLR is
appropriate, as the auxiliary user classification task is a good proxy for the
unavailable label. On the flipside, in the case of covariate shift, local
SimCLR leads to better models due to not being forced to encode irrelevant,
for the downstream task, information in the representations.
For future work, we will explore improved variants of the UV loss that can
tolerate more local optimization, as well as better bounds for the MI in the
federated setting.
## References
* Chen et al. (2020) Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In _International conference on machine learning_ , pp. 1597–1607. PMLR, 2020.
* Chen & He (2021) Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pp. 15750–15758, 2021.
* Dong & Voiculescu (2021) Nanqing Dong and Irina Voiculescu. Federated contrastive learning for decentralized unlabeled medical images. In _Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part III 24_ , pp. 378–387. Springer, 2021.
* Grill et al. (2020) Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. _Advances in neural information processing systems_ , 33:21271–21284, 2020.
* HaoChen et al. (2021) Jeff Z HaoChen, Colin Wei, Adrien Gaidon, and Tengyu Ma. Provable guarantees for self-supervised deep learning with spectral contrastive loss. _Advances in Neural Information Processing Systems_ , 34:5000–5011, 2021.
* Hassani et al. (2021) Ali Hassani, Steven Walton, Nikhil Shah, Abulikemu Abuduweili, Jiachen Li, and Humphrey Shi. Escaping the big data paradigm with compact transformers. _arXiv preprint arXiv:2104.05704_ , 2021.
* He et al. (2020) Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pp. 9729–9738, 2020.
* Hosseini et al. (2021) Hossein Hosseini, Hyunsin Park, Sungrack Yun, Christos Louizos, Joseph Soriaga, and Max Welling. Federated learning of user verification models without sharing embeddings. In _International Conference on Machine Learning_ , pp. 4328–4336. PMLR, 2021.
* Hsieh et al. (2020) Kevin Hsieh, Amar Phanishayee, Onur Mutlu, and Phillip Gibbons. The non-iid data quagmire of decentralized machine learning. In _International Conference on Machine Learning_ , pp. 4387–4398. PMLR, 2020.
* Hsu et al. (2019) Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. Measuring the effects of non-identical data distribution for federated visual classification. _arXiv preprint arXiv:1909.06335_ , 2019.
* Ioffe & Szegedy (2015) Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In _International conference on machine learning_ , pp. 448–456. PMLR, 2015.
* Jeong et al. (2020) Wonyong Jeong, Jaehong Yoon, Eunho Yang, and Sung Ju Hwang. Federated semi-supervised learning with inter-client consistency & disjoint learning. _arXiv preprint arXiv:2006.12097_ , 2020.
* Kim et al. (2022) Woojung Kim, Keondo Park, Kihyuk Sohn, Raphael Shu, and Hyung-Sin Kim. Federated semi-supervised learning with prototypical networks. _arXiv preprint arXiv:2205.13921_ , 2022.
* Kingma & Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014.
* Li et al. (2023a) Jingtao Li, Lingjuan Lyu, Daisuke Iso, Chaitali Chakrabarti, and Michael Spranger. MocoSFL: enabling cross-client collaborative self-supervised learning. In _The Eleventh International Conference on Learning Representations_ , 2023a. URL https://openreview.net/forum?id=2QGJXyMNoPz.
* Li et al. (2023b) Ming Li, Qingli Li, and Yan Wang. Class balanced adaptive pseudo labeling for federated semi-supervised learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 16292–16301, 2023b.
* Liang et al. (2022) Xiaoxiao Liang, Yiqun Lin, Huazhu Fu, Lei Zhu, and Xiaomeng Li. Rscfed: Random sampling consensus federated semi-supervised learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 10154–10163, 2022.
* Lin et al. (2021) Haowen Lin, Jian Lou, Li Xiong, and Cyrus Shahabi. Semifed: Semi-supervised federated learning with consistency and pseudo-labeling. _arXiv preprint arXiv:2108.09412_ , 2021.
* Lu et al. (2022) Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, and Masashi Sugiyama. Federated learning from only unlabeled data with class-conditional-sharing clients. _arXiv preprint arXiv:2204.03304_ , 2022.
* Lubana et al. (2022) Ekdeep Singh Lubana, Chi Ian Tang, Fahim Kawsar, Robert P Dick, and Akhil Mathur. Orchestra: Unsupervised federated learning via globally consistent clustering. _arXiv preprint arXiv:2205.11506_ , 2022.
* Makhija et al. (2022) Disha Makhija, Nhat Ho, and Joydeep Ghosh. Federated self-supervised learning for heterogeneous clients. _arXiv preprint arXiv:2205.12493_ , 2022.
* McMahan et al. (2017) Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In _Artificial intelligence and statistics_ , pp. 1273–1282. PMLR, 2017.
* Oord et al. (2018) Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. _arXiv preprint arXiv:1807.03748_ , 2018.
* Poirot et al. (2019) Maarten G Poirot, Praneeth Vepakomma, Ken Chang, Jayashree Kalpathy-Cramer, Rajiv Gupta, and Ramesh Raskar. Split learning for collaborative deep learning in healthcare. _arXiv preprint arXiv:1912.12115_ , 2019.
* Poole et al. (2019) Ben Poole, Sherjil Ozair, Aaron Van Den Oord, Alex Alemi, and George Tucker. On variational bounds of mutual information. In _International Conference on Machine Learning_ , pp. 5171–5180. PMLR, 2019.
* Reddi et al. (2020) Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečnỳ, Sanjiv Kumar, and H Brendan McMahan. Adaptive federated optimization. _arXiv preprint arXiv:2003.00295_ , 2020.
* Sohn et al. (2020) Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. _Advances in neural information processing systems_ , 33:596–608, 2020.
* Sordoni et al. (2021) Alessandro Sordoni, Nouha Dziri, Hannes Schulz, Geoff Gordon, Philip Bachman, and Remi Tachet Des Combes. Decomposed mutual information estimation for contrastive representation learning. In _International Conference on Machine Learning_ , pp. 9859–9869. PMLR, 2021.
* Wang et al. (2022) Lirui Wang, Kaiqing Zhang, Yunzhu Li, Yonglong Tian, and Russ Tedrake. Does learning from decentralized non-iid unlabeled data benefit from self supervision? In _The Eleventh International Conference on Learning Representations_ , 2022.
* Wu et al. (2020) Mike Wu, Chengxu Zhuang, Milan Mosse, Daniel Yamins, and Noah Goodman. On mutual information in contrastive learning for visual representations. _arXiv preprint arXiv:2005.13149_ , 2020.
* Wu & He (2018) Yuxin Wu and Kaiming He. Group normalization. In _Proceedings of the European conference on computer vision (ECCV)_ , pp. 3–19, 2018.
* Yu et al. (2020) Felix Yu, Ankit Singh Rawat, Aditya Menon, and Sanjiv Kumar. Federated learning with only positive labels. In _International Conference on Machine Learning_ , pp. 10946–10956. PMLR, 2020.
* Zhang et al. (2020) Fengda Zhang, Kun Kuang, Zhaoyang You, Tao Shen, Jun Xiao, Yin Zhang, Chao Wu, Yueting Zhuang, and Xiaolin Li. Federated unsupervised representation learning. _arXiv preprint arXiv:2010.08982_ , 2020.
* Zhuang et al. (2022) Weiming Zhuang, Yonggang Wen, and Shuai Zhang. Divergence-aware federated self-supervised learning. _arXiv preprint arXiv:2204.04385_ , 2022.
## Appendix A Experimental setup
#### Data partitioning and non-i.i.d.-ness
For the label-skew setting, we use the Dirichlet splits for CIFAR 10, 100
discussed at Reddi et al. (2020) with $\alpha=0.1$ in both cases. Notice that
we adopt the convention of Hsu et al. (2019) where $\alpha$ is multiplied by
the prior probability of the label in the dataset, so, for example, in the
case of CIFAR 10 the final concentration parameter is $0.01$.
For the covariate shift setting we consider the case of rotation
non-i.i.d.-ness. More specifically, we first perform an i.i.d., with respect
to the labels, split of the data into 100 and 500 clients for CIFAR 10 and
CIFAR 100 respectively. Afterwards, we bin the $[0,2\pi]$ range into $10$
rotation bins and then assign to each client bins according to a Dirichlet
distribution with $\alpha=0.1$. In this case, each client receives one or two
bins of rotations. After the bin assignment, each client rotates each image of
their local dataset once, with the angle sampled i.i.d. from the bins selected
for that client. For the evaluation we consider
non-rotated images.
For the joint shift setting we mix the two cases above by first performing a
non-i.i.d., Dirichlet split, _i.e._ , $\alpha=0.1$, according to the labels
and then apply the non-i.i.d. rotation strategy described above.
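The sketch below illustrates one common way to realize such splits; it is our simplified reading of the setup, not the actual partitioning script (in particular, it uses a symmetric Dirichlet per class, whereas, as noted above, we follow the Hsu et al. (2019) convention of scaling $\alpha$ by the label prior).

```python
import numpy as np

def dirichlet_label_split(labels: np.ndarray, num_clients: int, alpha: float,
                          rng: np.random.Generator):
    """Assign dataset indices to clients with Dirichlet label skew."""
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(num_clients))  # share of class c per client
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, chunk in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(chunk.tolist())
    return client_indices

def rotation_bin_weights(num_clients: int, num_bins: int, alpha: float,
                         rng: np.random.Generator):
    """Per-client Dirichlet weights over rotation bins of [0, 2*pi)."""
    return [rng.dirichlet(alpha * np.ones(num_bins)) for _ in range(num_clients)]
```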
#### Architecture details
For all methods we use the same encoder model, a ResNet18 architecture adapted
for CIFAR 10/100 by replacing the kernel of the first convolutional layer with
a $3\times 64\times 3\times 3$ kernel and removing the max-pooling and last
fully connected layer. Furthermore, to better accommodate for the non-i.i.d.
issues in the federated learning scenario (Hsieh et al., 2020) we replace
batch normalization (Ioffe & Szegedy, 2015) with group normalization (Wu & He,
2018). For the client ID projector, we use a simple MLP on top of the encoder
output with a single ReLU hidden layer of $2048$ units and $128$ output units.
For the auxiliary classifier in the case of semi-supervised learning we use a
simple linear layer on top of the encoder output.
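A sketch of these encoder modifications on top of torchvision's ResNet18 is shown below; the group count for group normalization ($32$) is our assumption, as it is not specified above.

```python
# CIFAR-adapted ResNet18 encoder with group normalization.
# The number of groups (32) is an assumption on our part.
import torch.nn as nn
from torchvision.models import resnet18

def make_encoder():
    # Swap every BatchNorm2d for GroupNorm via the norm_layer hook.
    model = resnet18(norm_layer=lambda ch: nn.GroupNorm(32, ch))
    # 3 x 64 x 3 x 3 first convolution instead of the default 7x7, stride 2.
    model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
    model.maxpool = nn.Identity()  # remove the max-pooling layer
    model.fc = nn.Identity()       # expose the 512-d representation
    return model
```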
For our SimCLR and spectral contrastive learning variants, the
representations of the encoder are passed through an MLP projector with a
single hidden layer of $2048$ units and $128$ dimensional outputs. The
contrastive loss between the two views is measured at the output of the
projector.
For our SimSiam baseline we measure the cosine similarity objective on the
output of a projector that follows the SimCLR design with the exception that
we also add a group normalization layer before the hidden layer, as SimSiam was unstable without it (especially in the unsupervised experiments). For the
predictor we use another single hidden layer MLP with $2048$ ReLU units and
group normalization.
For the data augmentations, in order to create the two views, we follow the standard recipe of random cropping into $32\times 32$ images, followed by a random horizontal flip, a color distortion applied with probability $0.8$ (brightness, contrast, and saturation factors of $0.4$ and a hue factor of $0.1$), and finally an RGB-to-grayscale transformation applied with probability $0.2$.
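In torchvision, this two-view pipeline might look as follows; we assume SimCLR-style random resized cropping for the cropping step.

```python
# Sketch of the augmentation recipe described above.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(32),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply(
        [transforms.ColorJitter(brightness=0.4, contrast=0.4,
                                saturation=0.4, hue=0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

# The two views of an image x are augment(x) and augment(x).
```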
#### Optimization details
For local optimization we use standard stochastic gradient descent with a
learning rate of $0.1$ for both CIFAR 10 and CIFAR 100 for, unless mentioned
otherwise, a single local epoch and a batch size of $128$. After the local
optimization on a specific round has been completed, each client communicates
to the server the delta between the finetuned parameters and the model
communicated from the server to the clients. The server averages these deltas,
interprets them as “gradients”, and uses them in conjunction with the Adam optimizer (Kingma & Ba, 2014) in order to update the global model. This strategy was originally proposed in Reddi et al. (2020). For the server-side Adam we use the default hyperparameters.
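A minimal PyTorch sketch of this server update is given below (see also Algorithm 1 in Appendix C); the helper name is ours, and `opt` is a standard `torch.optim.Adam` over the global parameters.

```python
# Average client deltas, treat them as a gradient, and take one Adam step.
import torch

@torch.no_grad()
def server_update(global_params, client_params_list, opt):
    for p in global_params:
        p.grad = torch.zeros_like(p)
    for client_params in client_params_list:
        for p, p_c in zip(global_params, client_params):
            # Delta between the server model and the finetuned client model.
            p.grad += (p - p_c) / len(client_params_list)
    opt.step()  # opt = torch.optim.Adam(global_params) with defaults
```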
## Appendix B Additional experiments
In this section we consider more baselines for both our unsupervised and semi-
supervised setups in the federated setting.
### B.1 Unsupervised setting
#### Additional baseline
We consider one more baseline for self-supervised learning in the federated
setting, FeatARC (Wang et al., 2022), specifically the “Align Only” variant.
We omit the clustering approach as it makes additional assumptions compared to
our unsupervised learning setup. The authors report results with a loss coefficient of $\lambda=1.0$, which led to loss divergence in our case, so we report $\lambda=0.5$, which was stable except for the covariate shift setting.
We see in table 3 that adding FeatARC alignment regularization does not result
in improved accuracy, contrary to what the FeatARC paper results would lead us
to expect. We hypothesise that this is due to the differences in our setup.
Whereas FeatARC considers a cross-silo setting with a large number of local
update steps, our setting focuses on the cross-device setting with one local
epoch per client communication round. We leave a further analysis of FeatARC's applicability to this cross-device setting to future work.
Table 3: Test set performance on the unsupervised setting of CIFAR 10. Clients’ data is assumed to be fully annotated for LP fine-tuning in the unsupervised case. Method | Label skew | Covariate shift | Joint shift
---|---|---|---
Local SimCLR | $79.4_{\pm 0.2}$ | $\mathbf{74.3_{\pm 0.3}}$ | $71.0_{\pm 0.4}$
Local SimCLR + FeatARC | $70.4_{\pm 0.2}$ | $34.4_{\pm-}$ | $57.6_{\pm 2.7}$
Federated SimCLR | $\mathbf{85.0_{\pm 0.2}}$ | $73.8_{\pm 0.2}$ | $\mathbf{74.8_{\pm 0.5}}$
Spectral CL | $76.5_{\pm 1.1}$ | $\mathbf{73.5_{\pm 0.4}}$ | $68.2_{\pm 0.6}$
Spectral CL + UV | $\mathbf{87.8_{\pm 0.3}}$ | $71.7_{\pm 0.5}$ | $\mathbf{76.6_{\pm 0.6}}$
SimSiam | $\mathbf{40.0_{\pm 0.5}}$ | $\mathbf{39.9_{\pm 0.3}}$ | $\mathbf{39.6_{\pm 0.3}}$
SimSiam + UV | $35.4_{\pm 0.4}$ | $35.4_{\pm 0.2}$ | $34.5_{\pm 0.3}$
Supervised | $89.6_{\pm 0.1}$ | $78.3_{\pm 0.4}$ | $76.3_{\pm 1.1}$
#### TinyImagenet dataset
To demonstrate the scalability of our theoretical results and model design stemming from our MI perspective, we also consider the more challenging task of self-supervised pretraining on TinyImagenet. It consists of 100k training examples and 10k test examples, each belonging to one of 200 classes. We apply our federated CIFAR 10 setting to this dataset as well, i.e., we partition the training dataset into 100 clients with either the covariate shift or joint shift non-i.i.d. strategies. We sample 10 clients per round in order to optimize the models and each client performs one local epoch of updates. The encoder model we use is a Compact Convolutional Transformer (Hassani et al., 2021) in the “CCT-4/3×2” variant, i.e., with 4 transformer encoder layers and a 2-layer convolutional feature extractor with a $3\times 3$ kernel size. The results with the different methods can be seen in table 4.
Table 4: Test set performance ($\%$) on the unsupervised setting of TinyImagenet with 100 clients after 50k rounds. Clients’ data is assumed to be fully annotated for LP fine-tuning in the unsupervised case. Method | Label skew | Covariate shift | Joint shift
---|---|---|---
Local SimCLR | $33.3$ | $\mathbf{30.3}$ | $29.6$
Federated SimCLR | $\mathbf{38.0}$ | $30.0$ | $\mathbf{31.6}$
Spectral CL | $34.0$ | $28.4$ | $27.9$
Spectral CL + UV | $\mathbf{39.7}$ | $\mathbf{29.5}$ | $\mathbf{32.4}$
SimSiam | $\mathbf{10.6}$ | $\mathbf{4.7}$ | $0.5$
SimSiam + UV | $0.5$ | $0.5$ | $0.5$
Supervised | $44$ | $36.6$ | $33.0$
Overall, we see that the results are consistent with our intuitions and analysis in the case of contrastive methods; the biggest gains from the additional UV loss appear in the case of label skew and joint shift. SimSiam generally underperformed in this setting, which is also consistent with our observations for unsupervised learning on CIFAR 10/100, probably due to representation collapse, given that in our setting we use group normalization instead of batch normalization.
### B.2 Semi-supervised setting
#### Additional pseudo-labelling baselines
We provide more results on our partially labeled (with $10\%$ labeled data on each client) semi-supervised setting by also considering baselines that perform pseudo-labelling as a means for semi-supervised learning. The two methods we consider are SemiFed (Lin et al., 2021) and CBAFed (Li et al., 2023b). For both of these methods we make the following modifications to bring them in line with our semi-supervised setup.
For SemiFed we do not make use of an ensemble of client models to impute the missing labels, but rather assign a pseudo-label to each datapoint based on the server model received at each client. In this way, our proposed methods and SemiFed have similar communication costs and privacy, since directly exchanging models trained on local data between clients would reduce the overall privacy. For CBAFed, we do not use the residual weight connection, in order to have a consistent optimization strategy for all our methods, but we do use the class-balanced adaptive threshold strategy. We follow the setup described in Appendix F.5 of Li et al. (2023b) to train a model with partially labeled clients.
From what we can see in table 5 and table 6, our conclusion about the usefulness of the UV loss (cf. proposition 2) applies to this setting as well. While SemiFed underperforms when trained without the UV loss, it manages to improve upon the fully supervised baseline and to be comparable to the other methods when we add it back. On CIFAR 10, adding the UV loss yields a significant $16.7\%$ improvement in the case of label skew; on CIFAR 100, while the improvement is a more modest $6\%$, SemiFed with the UV loss outperforms all other methods. CBAFed performs worse than the self-supervised methods, although it also benefits from adding the UV loss in all the conducted experiments.
Table 5: Test set performance ($\%$) on the semi-supervised setting of CIFAR 10 with $10\%$ labelled data on each client, along with standard error over $5$ seeds for all experiments except CBAFed, which has one seed only. We use the corresponding labelled subset for the LP. Method | Label skew | Covariate shift | Joint shift
---|---|---|---
Local SimCLR | $74.5_{\pm 0.3}$ | $\mathbf{49.1_{\pm 1.3}}$ | $45.8_{\pm 1.4}$
Federated SimCLR | $\mathbf{78.0_{\pm 0.2}}$ | $\mathbf{50.3_{\pm 1.1}}$ | $\mathbf{49.9_{\pm 1.4}}$
Spectral CL | $74.2_{\pm 0.3}$ | $48.0_{\pm 0.7}$ | $45.4_{\pm 1.5}$
Spectral CL + UV | $\mathbf{79.6_{\pm 0.3}}$ | $\mathbf{49.7_{\pm 1.0}}$ | $\mathbf{49.8_{\pm 1.1}}$
SimSiam | $75.3_{\pm 0.4}$ | $46.8_{\pm 0.7}$ | $40.5_{\pm 0.9}$
SimSiam + UV | $\mathbf{80.4_{\pm 0.2}}$ | $\mathbf{50.0_{\pm 1.2}}$ | $\mathbf{44.3_{\pm 1.0}}$
SemiFed | $60.0_{\pm 4.5}$ | $18.6_{\pm 1.8}$ | $37.2_{\pm 0.9}$
SemiFed + UV | $\mathbf{76.7_{\pm 1.2}}$ | $\mathbf{24.0_{\pm 2.2}}$ | $\mathbf{45.1_{\pm 2.0}}$
CBAFed | $66.3$ | $45.9$ | $34.8$
CBAFed + UV | $\mathbf{74.1}$ | $\mathbf{48.2}$ | $\mathbf{36.2}$
Supervised | $75.1_{\pm 0.2}$ | $48.1_{\pm 0.9}$ | $42.7_{\pm 1.7}$
Table 6: Test set performance ($\%$) on the semi-supervised setting of CIFAR 100 with $10\%$ labelled data on each client along with standard error over $5$ seeds. We use the corresponding labelled subset for the LP. Method | Label Skew | Covariate shift | Joint shift
---|---|---|---
Local SimCLR | $30.3_{\pm 0.2}$ | $15.1_{\pm 0.4}$ | $13.1_{\pm 0.3}$
Federated SimCLR | $\mathbf{34.5_{\pm 0.3}}$ | $14.8_{\pm 0.3}$ | $\mathbf{14.6_{\pm 0.3}}$
Spectral CL | $30.1_{\pm 0.2}$ | $14.1_{\pm 0.4}$ | $12.3_{\pm 0.3}$
Spectral CL + UV | $\mathbf{34.0_{\pm 0.2}}$ | $13.7_{\pm 0.3}$ | $\mathbf{13.6_{\pm 0.4}}$
SimSiam | $30.7_{\pm 0.2}$ | $13.4_{\pm 0.3}$ | $12.8_{\pm 0.3}$
SimSiam + UV | $\mathbf{34.3_{\pm 0.1}}$ | $13.6_{\pm 0.3}$ | $\mathbf{14.0_{\pm 0.4}}$
SemiFed | $29.7_{\pm 0.5}$ | $13.3_{\pm 0.2}$ | $12.3_{\pm 0.2}$
SemiFed + UV | $\mathbf{35.7_{\pm 0.2}}$ | $13.4_{\pm 0.6}$ | $\mathbf{13.1_{\pm 0.2}}$
Supervised | $29.6_{\pm 0.3}$ | $12.6_{\pm 0.2}$ | $12.2_{\pm 0.1}$
#### TinyImagenet dataset
To demonstrate the scalability of our semi-supervised model design stemming from our MI perspective, we also consider the more challenging TinyImagenet task in the case of label-skew non-i.i.d.-ness with Dirichlet splitting and an $\alpha=0.1$ multiplied by the prior probability of each class. The setup is similar to our semi-supervised federated CIFAR 10 setting, with 100 clients and $10\%$ labelled data per client. We sample 10 clients per round in order to optimize the models and each client performs one local epoch of updates. We use the same CCT architecture as in the unsupervised TinyImagenet experiment. The results with the different methods can be seen in table 7.
Table 7: Test set performance ($\%$) on the semi-supervised setting of TinyImagenet with 100 clients after 50k rounds. We use the corresponding labelled subset for the linear probe. Method | Label skew | Covariate shift | Joint shift
---|---|---|---
Local SimCLR | $18.5$ | $8.1$ | $6.7$
Federated SimCLR | $\mathbf{19.5}$ | $\mathbf{8.4}$ | $\mathbf{7.4}$
Spectral CL | $17.8$ | $\mathbf{8.3}$ | $6.9$
Spectral CL + UV | $\mathbf{18.9}$ | $8.1$ | $\mathbf{7.5}$
SimSiam | $0.5$ | $8.1$ | $\mathbf{6.9}$
SimSiam + UV | $\mathbf{20.0}$ | $\mathbf{8.5}$ | $\mathbf{6.9}$
Supervised | $17.9$ | $8.4$ | $7.7$
We observe similar patterns to our unsupervised TinyImagenet setting, with the biggest gains for the contrastive methods from the UV loss appearing when some label skew is present. SimSiam did experience representation collapse in the case of label skew; however, adding the UV loss successfully mitigated the collapse and significantly improved performance.
## Appendix C Algorithms
Algorithm 1 The server side algorithm for our federated SimCLR / Spectral CL /
SimSiam with optional user-verification and semi-supervision.
Initialize $\theta$ and $\phi$ with $\theta_{1},\phi_{1}$
for round $t$ in $1,\dots,T$ do
Sample $\mathcal{S}$ clients from the population
Initialize $\nabla_{\theta}^{t}=\mathbf{0},\nabla_{\phi}^{t}=\mathbf{0}$
for $s$ in $\mathcal{S}$ do
$\theta_{s},\phi_{s}\leftarrow$ Client($s,\theta_{t},\phi_{t}$)
$\nabla_{\theta}^{t}+=\frac{\theta_{t}-\theta_{s}}{|\mathcal{S}|}$
$\nabla_{\phi}^{t}+=\frac{\phi_{t}-\phi_{s}}{|\mathcal{S}|}$
end for
$\theta_{t+1},\phi_{t+1}\leftarrow$
Adam($\nabla_{\theta}^{t},\nabla_{\phi}^{t}$)
end for
Algorithm 2 The client side algorithm for our federated SimCLR / Spectral CL /
SimSiam with optional user-verification and semi-supervision. $\mathcal{L}_{ul}$
corresponds to the unsupervised loss component of SimCLR / Spectral CL /
SimSiam. $\beta$ is a coefficient that determines the weight of the UV loss,
with a default value of $1$.
Get $\theta,\phi$ from the server
$\theta_{s},\phi_{s}\leftarrow\theta,\phi$
for epoch $e$ in $1,\dots,E$ do
for batch $b\in B$ do
$\triangleright$ Unlabelled and labelled datapoints of the batch $b$
$x_{ul},(x_{l},y_{l})\leftarrow b$
$\triangleright$ Get the two views through augmentations
$[x^{1}_{ul},x^{1}_{l}],[x^{2}_{ul},x^{2}_{l}]=\textsc{Aug}([x_{ul},x_{l}]),\textsc{Aug}([x_{ul},x_{l}])$
$\triangleright$ Representations of the two views from the encoder $f$ with parameters $\theta_{s}$
$[z^{1}_{ul},z^{1}_{l}],[z^{2}_{ul},z^{2}_{l}]\leftarrow f([x^{1}_{ul},x^{1}_{l}];\theta_{s}),f([x^{2}_{ul},x^{2}_{l}];\theta_{s})$
$\triangleright$ Unsupervised loss with, depending on $\beta$, an additional UV loss
$\mathcal{L}_{s}=\mathcal{L}_{ul}(z^{1}_{ul},z^{2}_{ul};\phi_{s})+\beta\mathcal{L}_{uv}(s,z^{1}_{ul},z^{2}_{ul};\phi_{s})$
$\triangleright$ Supervised loss on the labelled data
for label $i\in\\{0,\dots,|Y|-1\\}$ do
$\triangleright$ Unsupervised loss between datapoints of the same class
$\mathcal{L}_{s}+=\mathcal{L}_{ul}(z^{1}_{l}[y_{l}==i],z^{2}_{l}[y_{l}==i];\phi_{s})$
end for
$\triangleright$ Standard supervised loss
$\mathcal{L}_{s}+=\mathcal{L}_{y}(y_{l},z^{1}_{l},z^{2}_{l};\phi_{s})$
$\triangleright$ Local gradient updates on the loss
$\theta_{s},\phi_{s}\leftarrow$ SGD($\nabla_{\theta_{s},\phi_{s}}\mathcal{L}_{s}$)
end for
end for
return $\theta_{s},\phi_{s}$
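To make the loss terms of Algorithm 2 concrete, a minimal PyTorch sketch for the SimCLR variant is given below; the function names and the temperature value are illustrative rather than taken from our implementation.

```python
# Unsupervised (InfoNCE, cf. eq. 14) and user-verification (cf. Lemma 2.1)
# loss components for a batch of paired views z1, z2 of shape (K, D).
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    # Temperature-scaled cosine critic; positives sit on the diagonal.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                   # K x K scores
    labels = torch.arange(z1.shape[0], device=z1.device)
    return F.cross_entropy(logits, labels)

def uv_loss(s, z1, z2, classifier):
    # Cross-entropy of the client-ID classifier on both views.
    target = torch.full((z1.shape[0],), s, dtype=torch.long, device=z1.device)
    return (F.cross_entropy(classifier(z1), target)
            + F.cross_entropy(classifier(z2), target))
```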
## Appendix D Missing proofs
###### Proposition 1.
_Let $s\in\mathbb{N}$ denote the user ID, ${\mathbf{x}}\in\mathbb{R}^{D_{x}}$
the input and ${\mathbf{z}}_{1},{\mathbf{z}}_{2}\in\mathbb{R}^{D_{z}}$ the
latent representations of the two views of ${\mathbf{x}}$ given by the encoder
with parameters $\theta$. Given a critic function
$f:\mathbb{R}^{D_{z}}\times\mathbb{R}^{D_{z}}\rightarrow\mathbb{R}$, we have
that _
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2}|s)$
$\displaystyle\geq\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1},{\mathbf{z}}_{2}|s)_{1:K}}\left[\frac{1}{K}\sum_{k=1}^{K}\log\frac{\exp(f({\mathbf{z}}_{1k},{\mathbf{z}}_{2k}))}{\frac{1}{K}\sum_{j=1}^{K}\exp(f({\mathbf{z}}_{1j},{\mathbf{z}}_{2k}))}\right].$
(14)
###### Proof.
The proof follows Poole et al. (2019). We can show that
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2}|s)$
$\displaystyle=\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1,1},{\mathbf{z}}_{2}|s)p_{\theta}({\mathbf{z}}_{1,2:K}|s)}\left[\log\frac{p_{\theta}({\mathbf{z}}_{1,1}|{\mathbf{z}}_{2},s)p_{\theta}({\mathbf{z}}_{1,2:K}|s)}{p_{\theta}({\mathbf{z}}_{1,2:K}|s)p_{\theta}({\mathbf{z}}_{1,1}|s)}\right]$
(15)
$\displaystyle=\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1,1:K},{\mathbf{z}}_{2}|s)}\left[\log\frac{p_{\theta}({\mathbf{z}}_{1,1:K}|{\mathbf{z}}_{2},s)}{p_{\theta}({\mathbf{z}}_{1,1:K}|s)}\right]$
(16)
$\displaystyle=\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1,1:K},{\mathbf{z}}_{2}|s)}\left[\log\frac{p_{\theta}({\mathbf{z}}_{1,1:K}|{\mathbf{z}}_{2},s)q({\mathbf{z}}_{1,1:K}|{\mathbf{z}}_{2},s)}{q({\mathbf{z}}_{1,1:K}|{\mathbf{z}}_{2},s)p_{\theta}({\mathbf{z}}_{1,1:K}|s)}\right]$
(17)
$\displaystyle=\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1,1:K},{\mathbf{z}}_{2}|s)}\left[\log\frac{q({\mathbf{z}}_{1,1:K}|{\mathbf{z}}_{2},s)}{p_{\theta}({\mathbf{z}}_{1,1:K}|s)}\right]$
$\displaystyle\qquad+\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2}|s)p_{\theta}({\mathbf{z}}_{1,1:K}|{\mathbf{z}}_{2},s)}\left[\log\frac{p_{\theta}({\mathbf{z}}_{1,1:K}|{\mathbf{z}}_{2},s)}{q({\mathbf{z}}_{1,1:K}|{\mathbf{z}}_{2},s)}\right]$
(18)
$\displaystyle=\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1,1:K},{\mathbf{z}}_{2}|s)}\left[\log\frac{q({\mathbf{z}}_{1,1:K}|{\mathbf{z}}_{2},s)}{p_{\theta}({\mathbf{z}}_{1,1:K}|s)}\right]$
$\displaystyle\qquad+\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2}|s)}\left[D_{\mathrm{KL}}(p_{\theta}({\mathbf{z}}_{1,1:K}|{\mathbf{z}}_{2},s)||q({\mathbf{z}}_{1,1:K}|{\mathbf{z}}_{2},s))\right]$
(19)
$\displaystyle\geq\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1,1:K},{\mathbf{z}}_{2}|s)}\left[\log\frac{q({\mathbf{z}}_{1,1:K}|{\mathbf{z}}_{2},s)}{p_{\theta}({\mathbf{z}}_{1,1:K}|s)}\right],$
(20)
and then by parametrizing $q({\mathbf{z}}_{1,1:K}|{\mathbf{z}}_{2},s)$ in
terms of a critic function $f$,
$\displaystyle q({\mathbf{z}}_{1,1:K}|{\mathbf{z}}_{2},s)$
$\displaystyle=\frac{p_{\theta}({\mathbf{z}}_{1,1:K}|s)\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K}))}{\mathbb{E}_{p_{\theta}({\mathbf{z}}_{1,1:K}|s)}[\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K}))]},$
(21)
we have that
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2}|s)$
$\displaystyle\geq\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1,1:K},{\mathbf{z}}_{2}|s)}\left[\log\frac{\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K}))}{\mathbb{E}_{p_{\theta}({\mathbf{z}}_{1,1:K}|s)}\left[\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K}))\right]}\right].$
(22)
Since the denominator depends on the aggregate score
$\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K}))$ over
$p_{\theta}({\mathbf{z}}_{1,1:K}|s)$, which is similarly intractable, we can
introduce one more lower bound that will allow us to work with minibatches of data (Poole et al., 2019). Due to the positivity of the exponent, we have that
for any $a>0$
$\displaystyle\log\mathbb{E}_{p_{\theta}({\mathbf{z}}_{1,1:K}|s)}\left[\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K}))\right]$
$\displaystyle\leq\frac{\mathbb{E}_{p_{\theta}({\mathbf{z}}_{1,1:K}|s)}\left[\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K}))\right]}{a}+\log
a-1.$ (23)
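This is simply the tangent-line bound for the concave logarithm, $\log x\leq\frac{x}{a}+\log a-1$ for all $x,a>0$ with equality at $x=a$, applied to $x=\mathbb{E}_{p_{\theta}({\mathbf{z}}_{1,1:K}|s)}\left[\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K}))\right]$; the choice $a=\exp(1)$ below makes the additive constant $\log a-1$ vanish.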
Using this bound with $a=\exp(1)$, we have that
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2}|s)$
$\displaystyle\geq\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1,1:K},{\mathbf{z}}_{2}|s)}\left[\log\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K}))\right]$
$\displaystyle\qquad-\exp(-1)\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2}|s)p_{\theta}({\mathbf{z}}_{1,1:K}|s)}\left[\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K}))\right].$
(24)
We can now set $f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K})$ as in Poole et al. (2019)
$\displaystyle f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K})\rightarrow
1+f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1})-\log
a({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K}).$ (25)
In this way, we end up with
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2}|s)$
$\displaystyle\geq
1+\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K}|s)}\left[\log\frac{\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1}))}{a({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K})}\right]$
$\displaystyle\qquad-\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2}|s)p_{\theta}({\mathbf{z}}_{1,1:K}|s)}\left[\frac{\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1}))}{a({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K})}\right].$
(26)
We can now average the bound over $K$ replicates and reindex
${\mathbf{z}}_{1}$ as
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2}|s)$
$\displaystyle\geq
1+\frac{1}{K}\sum_{k=1}^{K}\Bigg{(}\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K}|s)}\left[\log\frac{\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1}))}{a({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K})}\right]$
$\displaystyle\qquad-\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2}|s)p_{\theta}({\mathbf{z}}_{1,1:K}|s)}\left[\frac{\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1}))}{a({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K})}\right]\Bigg{)}$
(27)
$\displaystyle=1+\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K}|s)}\left[\log\frac{\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1}))}{a({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K})}\right]$
$\displaystyle\qquad-\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2}|s)p_{\theta}({\mathbf{z}}_{1,1:K}|s)}\left[\frac{\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,1}))}{a({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K})}\right]$
(28)
$\displaystyle=1+\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K}|s)}\left[\frac{1}{K}\sum_{k=1}^{K}\log\frac{\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,k}))}{a({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K})}\right]$
$\displaystyle\qquad-\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2}|s)p_{\theta}({\mathbf{z}}_{1,1:K}|s)}\left[\frac{\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,k}))}{a({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K})}\right]$
(29)
and for the specific choice of
$a({\mathbf{z}}_{2},{\mathbf{z}}_{1,1:K})=\frac{1}{K}\sum_{k=1}^{K}\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,k}))$,
we have that terms cancel, i.e.,
$\displaystyle\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2}|s)p_{\theta}({\mathbf{z}}_{1,1:K}|s)}\left[\frac{\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,k}))}{\frac{1}{K}\sum_{k=1}^{K}\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,k}))}\right]$
$\displaystyle=\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2}|s)p_{\theta}({\mathbf{z}}_{1,1:K}|s)}\left[\frac{\frac{1}{K}\sum_{k=1}^{K}\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,k}))}{\frac{1}{K}\sum_{k=1}^{K}\exp(f({\mathbf{z}}_{2},{\mathbf{z}}_{1,k}))}\right]=1.$
(30)
In this way, we end up with the well-known InfoNCE loss of Oord et al. (2018), where now we contrast between datapoints that share the same class:
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};{\mathbf{z}}_{2}|s)$
$\displaystyle\geq\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1},{\mathbf{z}}_{2}|s)_{1:K}}\left[\frac{1}{K}\sum_{k=1}^{K}\log\frac{\exp(f({\mathbf{z}}_{1k},{\mathbf{z}}_{2k}))}{\frac{1}{K}\sum_{j=1}^{K}\exp(f({\mathbf{z}}_{1j},{\mathbf{z}}_{2k}))}\right].$
(31)
∎
###### Lemma 2.1.
_Let $s\in\mathbb{N}$ denote the client ID,
${\mathbf{x}}\in\mathbb{R}^{D_{x}}$ the input and
${\mathbf{z}}_{1}\in\mathbb{R}^{D_{z}}$ the latent representation of a view of
${\mathbf{x}}$ given by the encoder with parameters $\theta$. Let $\phi$
denote the parameters of a client classifier $r_{\phi}(s|{\mathbf{z}}_{1})$
that predicts the client ID from this specific representation and let
$\mathrm{H}(s)$ be the entropy of the client distribution $p(s)$. We have that
_
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};s)\geq\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1}|s)}\left[\log
r_{\phi}(s|{\mathbf{z}}_{1})\right]+\mathrm{H}(s)$ (32)
###### Proof.
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};s)$
$\displaystyle=\mathbb{E}_{p_{\theta}(s,{\mathbf{z}}_{1})}\left[\log\frac{p_{\theta}(s,{\mathbf{z}}_{1})}{p(s)p_{\theta}({\mathbf{z}}_{1})}\right]=\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1}|s)}\left[\log\frac{p_{\theta}(s|{\mathbf{z}}_{1})}{p(s)}\right]$
(33)
$\displaystyle=\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1}|s)}\left[\log\frac{r_{\phi}(s|{\mathbf{z}}_{1})}{p(s)}\right]+\mathbb{E}_{p(s)}\left[D_{\mathrm{KL}}(p_{\theta}(s|{\mathbf{z}}_{1})||r_{\phi}(s|{\mathbf{z}}_{1}))\right]$
(34)
$\displaystyle\geq\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1}|s)}\left[\log\frac{r_{\phi}(s|{\mathbf{z}}_{1})}{p(s)}\right]=\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1}|s)}\left[\log
r_{\phi}(s|{\mathbf{z}}_{1})\right]+\mathrm{H}(s).$ (35)
∎
###### Lemma 2.2.
_Let $s\in\mathbb{N}$ denote the user ID, ${\mathbf{x}}\in\mathbb{R}^{D_{x}}$
the input and ${\mathbf{z}}_{1},{\mathbf{z}}_{2}\in\mathbb{R}^{D_{z}}$ the
latent representations of the views of ${\mathbf{x}}$ given by the encoder
with parameters $\theta$. Let $\phi$ denote the parameters of a client
classifier $r_{\phi}(s|{\mathbf{z}}_{2})$ that predicts the client ID from the
representations. We have that _
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};s|{\mathbf{z}}_{2})\leq-\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2}|s)}\left[\log
r_{\phi}(s|{\mathbf{z}}_{2})\right]$ (36)
###### Proof.
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};s|{\mathbf{z}}_{2})$
$\displaystyle=\mathrm{H}_{\theta}(s|{\mathbf{z}}_{2})-\mathrm{H}_{\theta}(s|{\mathbf{z}}_{2},{\mathbf{z}}_{1})$
(37)
$\displaystyle\leq\mathrm{H}_{\theta}(s|{\mathbf{z}}_{2})=\mathrm{H}(s)-\mathrm{I}_{\theta}({\mathbf{z}}_{2};s)\leq-\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2}|s)}\left[\log
r_{\phi}(s|{\mathbf{z}}_{2})\right]$ (38)
where $\mathrm{H}(s)$ is the entropy of $p(s)$,
$\mathrm{H}_{\theta}(s|{\mathbf{z}}_{2})$,
$\mathrm{H}_{\theta}(s|{\mathbf{z}}_{2},{\mathbf{z}}_{1})$ are the conditional
entropies of $s$ given ${\mathbf{z}}_{2}$ and
${\mathbf{z}}_{2},{\mathbf{z}}_{1}$ and the last inequality is due to the
lower bound of lemma 2.1. We also used the fact that the entropy of a discrete
distribution is non-negative. ∎
###### Proposition 2.
_Consider the label skew data-generating process for federated SimCLR from
Figure 1 with $s\in\mathbb{N}$ denoting the user ID with $\mathrm{H}(s)$ being
the entropy of $p(s)$, ${\mathbf{x}}\in\mathbb{R}^{D_{x}}$ the input,
${\mathbf{z}}_{1},{\mathbf{z}}_{2}\in\mathbb{R}^{D_{z}}$ the latent
representations of the two views of ${\mathbf{x}}$ given by the encoder with
parameters $\theta$. Let $y$ be the label and let
$r_{\phi}(s|{\mathbf{z}}_{i})$ be a model with parameters $\phi$ that predicts
the user ID from the latent representation ${\mathbf{z}}_{i}$. In this case,
we have that _
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};y)+\mathrm{I}_{\theta}({\mathbf{z}}_{2};y)\geq\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1},{\mathbf{z}}_{2}|s)}\left[\log
r_{\phi}(s|{\mathbf{z}}_{1})+\log
r_{\phi}(s|{\mathbf{z}}_{2})\right]+2\mathrm{H}(s).$ (39)
###### Proof.
The claim is a consequence of the data processing inequality. We start by
noting that
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};y)+\mathrm{I}_{\theta}({\mathbf{z}}_{1};s|y)=\mathrm{I}_{\theta}({\mathbf{z}}_{1};y,s)=\mathrm{I}_{\theta}({\mathbf{z}}_{1};s)+\mathrm{I}_{\theta}({\mathbf{z}}_{1};y|s)$
(40)
and since in this graphical model we have that
$s\perp\\!\\!\\!\perp{\mathbf{z}}_{1}|y$, so
$\mathrm{I}_{\theta}(s;{\mathbf{z}}_{1}|y)=0$, we end up with
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{1};y)$
$\displaystyle=\mathrm{I}_{\theta}({\mathbf{z}}_{1};s)+\mathrm{I}_{\theta}({\mathbf{z}}_{1};y|s)$
(41)
$\displaystyle\geq\mathrm{I}_{\theta}({\mathbf{z}}_{1};s)\geq\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{1}|s)}\left[\log
r_{\phi}(s|{\mathbf{z}}_{1})\right]+\mathrm{H}(s),$ (42)
where we use the positivity of mutual information and our lemma 2.1. In a
similar manner we can also show that
$\displaystyle\mathrm{I}_{\theta}({\mathbf{z}}_{2};y)\geq\mathbb{E}_{p(s)p_{\theta}({\mathbf{z}}_{2}|s)}\left[\log
r_{\phi}(s|{\mathbf{z}}_{2})\right]+\mathrm{H}(s).$ (43)
By adding up eq. 42 and eq. 43 we arrive at the claim. ∎
# Physical-space estimates for axisymmetric waves on extremal Kerr spacetime
Elena <EMAIL_ADDRESS>, Jingbo <EMAIL_ADDRESS>
###### Abstract
We study axisymmetric solutions to the wave equation $\square_{g}\psi=0$ on
extremal Kerr backgrounds and obtain integrated local energy decay (or
Morawetz estimates) through an analysis exclusively in physical-space.
Boundedness of the energy and Morawetz estimates for axisymmetric waves in
extremal Kerr were first obtained by Aretakis [11] through the construction of
frequency-localized currents used in particular to express the trapping
degeneracy. Here we extend to extremal Kerr a method introduced by Stogin [60]
in the sub-extremal case, simplifying Aretakis’ derivation of Morawetz
estimates through purely classical currents.
## 1 Introduction
The Cauchy problem for the wave equation
$\displaystyle\square_{g}\psi=0,$ (1)
where $g$ is given by a black hole solution to the Einstein equation, is a
topic that has been extensively studied in the past two decades. One of the
most important black hole solutions is the vacuum Kerr family, a 2-parameter
family of solutions $(\mathcal{N}_{M,a},g_{M,a})$ with $|a|\leq M$,
representing a stationary and rotating black hole. Boundedness and decay properties for solutions to the wave equation on Kerr have been obtained in numerous works; see Section 1.1 for an overview of previous results. Many of these works rely on the derivation of integrated local energy decay estimates, or Morawetz estimates, through an analysis in physical- or frequency-space, or through the use of pseudo-differential operators.
We consider here the case of axially symmetric solutions to the wave equation
on extremal Kerr backgrounds, corresponding to $|a|=M$. Even though
instability properties hold for solutions to the wave equation on extremal
black holes as shown by Aretakis [9][10][12], boundedness of the energy as
well as integrated local energy and pointwise decay estimates have been
obtained by Aretakis [11] for axially symmetric waves in extremal Kerr. In
[11], Aretakis adapted the method used by Dafermos-Rodnianski in [26] relying
on the separability of the wave equation to construct frequency-localized
currents. For general solutions to the wave equation, it is in fact not
possible to obtain positive definite spacetime estimates through classical
energy currents, or in physical-space, as shown by Alinhac [1]. This is
related to the complicated structure of trapped null geodesics for $|a|\neq
0$, whose (Boyer-Lindquist) constant $r$-value covers an open range of values.
On the other hand, for axially symmetric solutions the trapping degeneracy
collapses to a hypersurface in physical-space and, as observed by Aretakis in
the introduction of [11], “the obstruction uncovered by Alinhac [1] does not
apply to the axisymmetric case and thus one could in principle expect to
derive integrated decay for the full range $|a|\leq M$ using purely classical
currents; this remains however an open problem.”
In this paper, we address this problem by deriving integrated local energy
estimates for axially symmetric solutions to the wave equation on extremal
Kerr exclusively through a physical-space analysis. Here by physical-space
estimates we refer to an analysis of the wave equation which does not require
a mode or frequency decomposition and involves only differential operators.
Recall that in extremal Kerr the event horizon lies at $r=M$ and the effective
photon sphere lies at $r_{trap}=(1+\sqrt{2})M$. The degenerate energy for
solutions to (1) is given by
$\displaystyle E^{(T)}[\psi](0)$ $\displaystyle=$
$\displaystyle\int_{\Sigma_{0}}|\partial_{t}\psi|^{2}+\left(1-\frac{M}{r}\right)^{2}|\partial_{r}\psi|^{2}+|\nabla\mkern-13.0mu/\,\psi|^{2},$
where
$|\nabla\mkern-13.0mu/\,\psi|^{2}=\frac{1}{r^{2}}|\nabla\mkern-13.0mu/\,_{\mathbb{S}^{2}}\psi|^{2}$
with $|\nabla\mkern-13.0mu/\,_{\mathbb{S}^{2}}\psi|^{2}$ the norm of the
gradient of $\psi$ on the unit sphere with respect to the standard metric, and
in what follows $(t,r,\theta,\phi)$ denote the Boyer-Lindquist coordinates. We
prove the following.
###### Theorem 1.1.
Let $\psi$ be a sufficiently regular axisymmetric solution to the wave
equation in extremal Kerr spacetime with initial data on a spacelike
hypersurface $\Sigma_{0}$ which decays sufficiently fast. Then the following
Morawetz estimate on the exterior region:
$\displaystyle\int_{\mathcal{M}}\frac{1}{r^{3}}\left(1-\frac{M}{r}\right)^{2}|\partial_{r}\psi|^{2}+\frac{1}{r}\left(1-\frac{r_{trap}}{r}\right)^{2}\left(\frac{1}{r^{2}}(\partial_{t}\psi)^{2}+|\nabla\mkern-13.0mu/\,\psi|^{2}\right)+\frac{1}{r^{4}}\left(1-\dfrac{M}{r}\right)^{2}|\psi|^{2}\leq
CE^{(T)}[\psi](0),$ (2)
where $C$ depends only on $M$, can be obtained exclusively through a physical-space analysis.
As a corollary of our main result we recover the following bound that appeared
as the crucial Proposition 12.5.1 in [11], which summarized the results
involving frequency decomposition, reflecting the fact that the
microlocalization in [11] was only needed in a spatially compact region
located away from the horizon.
###### Corollary 1.2.
Let $\psi$ be a sufficiently regular axisymmetric solution to the wave
equation in the extremal Kerr spacetime with initial data on $\Sigma_{0}$
which decays sufficiently fast and let $R_{e}>r_{e}>M$. Then the following
Morawetz estimate:
$\displaystyle\int_{r_{e}\leq r\leq
R_{e}}\Big{(}|\partial_{r*}\psi|^{2}+|\psi|^{2}+\big{(}r-r_{trap}\big{)}^{2}\big{(}|\nabla\mkern-13.0mu/\,\psi|^{2}+|\partial_{t}\psi|^{2}\big{)}\Big{)}\leq
CE^{(T)}[\psi](0),$ (3)
where $r^{*}=\int\frac{r^{2}+a^{2}}{\Delta}\,dr$ and $C$ depends only on $M$, $r_{e}$ and $R_{e}$, can be obtained exclusively through a physical-space analysis.
The proof of Proposition 12.5.1 in [11] relied on the separation of variables for the solution of (1) and on a frequency-space analysis built on a series of involved microlocal currents tailored to the different regions of validity
in frequency space. On the other hand, we obtain Theorem 1.1, and consequently
Corollary 1.2, through the definition of one physical-space current, resulting
in a considerable simplification of the construction.
In [11], Aretakis combined the above result as stated in Corollary 1.2 with
positive-definite currents near null infinity and near the horizon and with a
uniform boundedness statement of energy, both of which were obtained in [11]
in physical-space, to deduce a complete integrated local energy decay.
Finally, by applying Dafermos-Rodnianski’s $r^{p}$-method [24], Aretakis [11]
improved the decay towards null infinity of the integrated local energy decay
and used the improved decay to obtain pointwise decay for the solution. Since
these proofs were obtained in [11] exclusively through a physical-space analysis, we will not rederive them here, with the exception of the boundedness
of the degenerate energy in Section 3.4, and refer to [11] for details. In
particular, by combining Theorem 1.1 with the above mentioned steps obtained
by Aretakis in [11], one can recover the full results of pointwise and power-
law energy decay for axially symmetric waves in extremal Kerr exclusively in
physical-space.
### 1.1 Previous works
We recall here the main results and techniques used in the analysis of the
wave equation (1) and related stability problems in black hole solutions.
Stability results for the wave equation on Schwarzschild spacetime,
corresponding to the case of $a=0$, were first obtained by Kay-Wald [46], who derived a statement of energy boundedness. In the following decades, this
statement has been refined to include local energy decay estimates, also known
as Morawetz estimates [56], which give control over a positive-definite
spacetime norm through the use of a current associated to the radial
vectorfield, as in Blue-Soffer [14][15], Blue-Sterbenz [16], Dafermos-
Rodnianski [25], Marzuola-Metcalfe-Tataru-Tohaneanu [53]. The estimates in
this case are obtained as a modified version of the classical Morawetz
integral energy decay estimate through the use of a vectorfield of the form
$\mathcal{F}(r)\partial_{r}$, with $\mathcal{F}$ vanishing at $r=3M$, which is
the location of orbital null geodesics in Schwarzschild called the photon
sphere. Also, in [25] Dafermos-Rodnianski introduced a vectorfield estimate
which captures the so-called redshift effect, allowing for pointwise estimates
along the event horizon.
In the case of Kerr spacetime with $|a|\neq 0$, the orbital null geodesics are
not confined to a hypersurface in physical-space, but cover an open region in
(Boyer-Lindquist) $r$-value which depends on the energy and angular momentum
of the geodesics. Moreover, the stationary Killing vectorfield $\partial_{t}$
fails to be timelike in an ergoregion, and therefore its associated conserved
energy is not positive definite, in a phenomenon called superradiance. The
analysis of the wave equation is complicated by the presence of the ergoregion
and the dependence of the trapping region on the frequency of the solution, as
the high frequency obstruction to decay given by the trapping region
cannot be described by the classical vectorfield method as shown by Alinhac
[1]. For this reason, the derivation of a Morawetz estimate in this case
requires a more refined analysis involving both the vectorfield method and
mode decompositions or pseudo-differential operators.
The mode decomposition refers to the analysis of mode solutions of the
separated form
$\psi(r,t,\theta,\phi)=e^{-i\omega
t}e^{im\phi}R(r)S(\theta),\qquad\omega\in\mathbb{R},\qquad m\in\mathbb{Z}$ (4)
which is related to the Fourier transform of the solution with respect to the
symmetries of the spacetime, and corresponds to its frequency decomposition.
The presence of an additional hidden symmetry of the spacetime, known as the
Carter tensor [17], allows one to reduce the study of the wave equation to the
respective radial and angular ODEs for the functions $R(r)$ and $S(\theta)$.
Such frequency-analysis has been developed by Dafermos-Rodnianski [26] and
Dafermos-Rodnianski-Shlapentokh-Rothman [30] in sub-extremal Kerr, where
frequency-dependent multiplier currents for the separated solutions are
carefully constructed using the structure of trapping (which stays localized
in frequency-space) and the fact that superradiant frequencies are not
trapped. This allows for the construction of a frequency-space analogue of the
current $\mathcal{F}(r)\partial_{r}$ which vanishes at a different $r_{trap}$
for each set of trapped frequencies [27][28][29]. Remarkably, the frequency-
space analysis in [30] is the only one among the techniques mentioned here
which holds in the full sub-extremal range $|a|<M$. Observe that to justify
the separation of general solutions into (4) through a Fourier transform, one
needs to require square integrability in time, which can be proved to hold
through a continuity argument in $a$.
The use of pseudo-differential operators appeared in the work of Tataru-
Tohaneanu [61], where they made use of a pseudo-differential modification of
the vectorfield $\mathcal{F}(r)\partial_{r}$ which was differential in
$\partial_{t}$, and with a kernel supported in a small neighborhood of $r=3M$.
The pseudo-differential operator is constructed perturbatively from the
choices of vectorfield and functions in Schwarzschild given in [53], and
therefore yields local energy decay estimates for slowly rotating Kerr
spacetime only.
Despite Alinhac’s obstruction [1], Andersson-Blue [4] obtained integrated
local energy estimates for the equation in slowly rotating Kerr spacetime
exclusively in physical space by generalizing the vectorfield method.
Andersson-Blue’s method makes use of the Carter hidden symmetry in Kerr as a
physical-space commutator to the wave equation. This allows one to obtain a local energy decay identity at the level of three derivatives of the solution which degenerates near $r=3M$, as trapped null geodesics lie within $O(|a|)$ of the
photon sphere $r=3M$ for slowly rotating Kerr black holes. Such physical-space
estimates have the usual advantages of the classical vectorfield method, such
as being robust with respect to perturbations of the metric, see [39][38].
The geometry of the extremal Kerr spacetime satisfying $|a|=M$ (or extremal
Reissner-Nordström with $|Q|=M$) exhibits several qualitative differences from
the sub-extremal case, most notably the degeneration of the redshift effect at the horizon due to the vanishing surface gravity. In extremal Kerr, for generic solutions to the wave equation, certain higher-order derivatives asymptotically blow up along the event horizon as a consequence of
conservation laws discovered by Aretakis [9][10][12], in what is now known as
the Aretakis instability. This generic blow up is unrelated to superradiance
and holds even for axially symmetric solutions. See also [5][6][7].
Axially symmetric solutions to the wave equation, both in the sub-extremal and
the extremal case, present two major simplifications: superradiance is
effectively absent and the trapping region collapses to a physical-space
hypersurface. The conserved energy associated to $\partial_{t}$ is positive-
definite for axially symmetric solutions even though the energy is degenerate along the event horizon (see Section 3.4). (There is, however, a way to capture in a quantitative way the degenerate redshift close to the event horizon in extremal Kerr, as shown by Aretakis [11].) Also, in axial
symmetry the orbital null geodesics all asymptote towards a unique
hypersurface $\\{r=r_{trap}\\}$ in physical-space, where $r_{trap}$ is defined
as the unique root in the exterior region of the polynomial
$\mathcal{T}:=r^{3}-3Mr^{2}+a^{2}r+Ma^{2}.$ (5)
In this case, the construction of the current $\mathcal{F}(r)\partial_{r}$
simplifies (see [26]) and can in principle be performed in physical-space. In
[60], Stogin constructed a current in physical space which yields positivity
of the local integrated energy estimates in the full sub-extremal range
$|a|<M$. Notice that to obtain positivity of the zero-th order term, Stogin
uses the non-degenerate redshift effect which is absent in the extremal case.
In the case of axially symmetric solutions in extremal Kerr, Aretakis [11]
proved integrated local energy decay, energy and pointwise uniform boundedness
of solutions and power-law energy and pointwise decay of solutions, all of
them up to and including the event horizon. The derivation of the integrated
local energy decay in [11] uses an adaptation of the frequency-analysis of
Dafermos-Rodnianski [26], which require novel microlocal currents allowing to
decouple the Morawetz estimates from the (degenerate) redshift. As in [26], to
justify the Fourier transform in time, cut-off in time are needed which create
error terms that have to be controlled by auxiliary microlocal currents in
addition to novel classical vectorfields, resulting in intricate constructions
to obtain positivity of the spacetime energy.
Finally, we remark here that the advances developed for the study of the wave
equation have been used in the analysis of the Einstein equation in various
settings, see [18][51][43] for perturbations of Minkowski spacetime, see
[20][44][45][47][22][13] for perturbations of Schwarzschild spacetime, see
[33][34][35][36] for perturbations of sub-extremal Reissner-Nordström and the
recent [8] for perturbations of extremal Reissner-Nordström, see
[63][58][52][21][2][62][3][40][48][49][59][50][57][39] for perturbations of
Kerr, see [19][37][38] for perturbations of Kerr-Newman. In the case of
positive cosmological constant, see [23][42][41][54][55].
### 1.2 Overview of the result
We give here an overview of the proof of Theorem 1.1. We apply the vectorfield
method to the current associated to a vectorfield $X$, a scalar function $w$
and a one-form $J$:
$\displaystyle\mathcal{P}_{\mu}^{(X,w,J)}[\psi]$ $\displaystyle=$
$\displaystyle\mathcal{Q}[\psi]_{\mu\nu}X^{\nu}+\frac{1}{2}w\psi{\partial}_{\mu}\psi-\frac{1}{4}({\partial}_{\mu}w)|\psi|^{2}+\frac{1}{4}J_{\mu}|\psi|^{2},$
where $\mathcal{Q}[\psi]_{\mu\nu}$ is the energy-momentum tensor associated to
a solution to the wave equation
$\displaystyle\mathcal{Q}[\psi]_{\mu\nu}$ $\displaystyle=$
$\displaystyle{\partial}_{\mu}\psi{\partial}_{\nu}\psi-\frac{1}{2}{\bf
g}_{\mu\nu}{\partial}_{\lambda}\psi{\partial}^{\lambda}\psi.$
In order to derive Morawetz estimates, we use the vectorfield
$X=\mathcal{F}(r)\partial_{r}$, scalar function $w$ and one-form $J$ given by
$\displaystyle\mathcal{F}=zu,\qquad w=z\partial_{r}u,\qquad J=v\partial_{r},$
where $z(r)$, $u(r)$, and $v(r)$ are well-chosen functions of $r$, so that the
divergence of the current can be written as
$\displaystyle|q|^{2}{\bf D}^{\mu}\mathcal{P}_{\mu}^{(X,w,J)}[\psi]$
$\displaystyle=$
$\displaystyle\mathcal{A}|{\partial}_{r}\psi|^{2}+\mathcal{U}^{{\alpha}{\beta}}({\partial}_{\alpha}\psi)({\partial}_{\beta}\psi)+\mathcal{V}|\psi|^{2}+\frac{1}{4}|q|^{2}\,\mathrm{div}(J|\psi|^{2}),$
(6)
where $|q|^{2}=r^{2}+a^{2}\cos^{2}\theta$ and $\partial_{\alpha}$,
$\partial_{\beta}$ indicate $\partial_{t}$, $\partial_{\theta}$,
$\partial_{\phi}$, see Lemma 3.2 for the expression of the coefficients. The
axial symmetry of the solution crucially allows to simplify further the
principal term
$\mathcal{U}^{{\alpha}{\beta}}({\partial}_{\alpha}\psi)({\partial}_{\beta}\psi)$,
which for $z=\frac{\Delta}{(r^{2}+a^{2})^{2}}$ is given by
$\displaystyle\mathcal{U}^{{\alpha}{\beta}}({\partial}_{\alpha}\psi)({\partial}_{\beta}\psi)$
$\displaystyle=$
$\displaystyle\frac{u{\mathcal{T}}}{(r^{2}+a^{2})^{3}}\,|q|^{2}|\nabla\psi|^{2},$
where $|\nabla\psi|^{2}$ is defined by (12).
For the choice of functions $z$, $u$, $w$, we adapt a construction introduced
by Stogin [60] in sub-extremal Kerr for $|a|<M$ (also subsequently used and
adapted in [47][33][36]). In [60] the function $u$ is defined in terms of $w$
using the relation $w=z\partial_{r}u$ and imposed to vanish at $r_{trap}$ the
root of polynomial ${\mathcal{T}}$, while $z$ and $w$ are given respectively
by the geodesic potential and a differentiable function defined piecewise. In
the sub-extremal case, such construction implies the non-negativity of the
first three coefficients on the right hand side of (6), but still presents
remaining issues. In particular, the vectorfield $X=zu\partial_{r}$ blows up
logarithmically towards the horizon and the coefficient of the zero-th order
term $\mathcal{V}$ vanishes in an interval of $r$ outside the event horizon.
Such issues are solved in [60] by relying on the use of the redshift
vectorfield in sub-extremal Kerr: the vectorfield $X$ and function $w$ are
modified close to the event horizon to obtain a vectorfield which is regular
up to the event horizon (see also [53]) but such modification introduces a
negative contribution in the zero-th order term close to the event horizon.
The redshift vectorfield as in [25] then fixes the degeneracy of the $|\partial_{r}\psi|^{2}$ term at the event horizon, and the resulting control is used in an integrated local Hardy estimate to obtain positivity of the zero-th order term.
Here in extremal Kerr, we set
$\displaystyle z=\frac{(r-M)^{2}}{(r^{2}+M^{2})^{2}},$
and give an explicit differentiable function $w$ defined piecewise, see
(36). We show that in this case both the vectorfield $X$ and the function $w$
are regular up to the horizon, so the first issue appearing in the sub-
extremal case is not present here. Nevertheless, we still have the vanishing
of the zero-th order term in an interval of $r$, for which a non-degenerate redshift estimate cannot be used, as it is absent in extremality. We rely instead on a global pointwise Hardy inequality in $r\geq r_{e}>M$ which degenerates as $r_{e}\to M$, capturing the degeneracy of the redshift. The Hardy inequality is based on the use of the one-form $J=v\partial_{r}$ for an explicit function $v$, see (39), solution to an ODE, which is used to obtain positivity of the zero-th order term $\mathcal{V}$, fixing the second issue. Finally, we also add control of the time derivative at trapping by using the Lagrangian of the wave equation, which proves Theorem 1.1. The above construction gives a simple
alternative proof of Aretakis’ result in [11] in physical-space, bypassing the
frequency decomposition and addressing the open problem raised by Aretakis
[11].
This paper is organized as follows: in Section 2 we recall the main properties
of the extremal Kerr spacetime and in Section 3 we review preliminary
computations on the wave equation and the vectorfield method. Finally, in
Section 4 we extend Stogin’s method [60] to derive the Morawetz estimates for
axisymmetric waves in extremal Kerr through a physical-space analysis.
Acknowledgements. The first author acknowledges the support of NSF Grant No.
DMS-2128386 and of a grant of the Simons Foundation (825870, EG).
## 2 Extremal Kerr spacetime
We recall here the main properties of the extremal Kerr spacetime which are
relevant to this paper. In Section 2.1 we introduce the metric in Boyer-
Lindquist and Eddington-Finkelstein coordinates and define the differential
structure of the manifold. In Section 2.2 we define the Killing vectorfields
of the metric and in Section 2.3 we recall the properties of trapped null
geodesics on extremal Kerr. For a more detailed presentation of properties of
extremal black holes see [9][10][11][12].
### 2.1 The manifold and the metric
The Kerr metric in Boyer-Lindquist coordinates $(t,r,\theta,\phi)$ takes the
form
$\displaystyle\begin{split}{\bf
g}_{M,a}&=-\frac{\Delta-a^{2}\sin^{2}\theta}{|q|^{2}}dt^{2}-\frac{2a\sin^{2}\theta}{|q|^{2}}\left((r^{2}+a^{2})-\Delta\right)dtd\phi+\frac{|q|^{2}}{\Delta}dr^{2}+|q|^{2}d\theta^{2}\\\
&+\frac{\sin^{2}\theta}{|q|^{2}}\left((r^{2}+a^{2})^{2}-\Delta
a^{2}\sin^{2}\theta\right)d\phi^{2},\end{split}$ (7)
where
$\displaystyle\Delta$ $\displaystyle=$ $\displaystyle
r^{2}-2Mr+a^{2}=(r-r_{+})(r-r_{-}),\qquad|q|^{2}=r^{2}+a^{2}\cos^{2}\theta.$
and $r_{\pm}=M\pm\sqrt{M^{2}-a^{2}}$.
The Kerr metric represents a stationary and rotating black hole of mass $M$ and
angular momentum $Ma$. For $|a|<M$ the metric describes the sub-extremal Kerr
spacetime, for $|a|=M$ the extremal Kerr and for $|a|>M$ the spacetime
contains a naked singularity. If $a=0$ we obtain the Schwarzschild solution.
If $|a|\leq M$, to remove the coordinate singularity at $\Delta=0$ describing
the black hole event horizon, one can define the functions
$\displaystyle r^{*}=\int\frac{r^{2}+a^{2}}{\Delta}\,dr,\qquad\phi^{*}=\phi+\int\frac{a}{\Delta}\,dr,\qquad v=t+r^{*}$
and obtain the Kerr metric in the ingoing Eddington-Finkelstein coordinates
$(v,r,\theta,\phi^{*})$
$\begin{split}{\bf
g}_{M,a}&=-\frac{\Delta-a^{2}\sin^{2}\theta}{|q|^{2}}dv^{2}+2dvdr-\frac{2a\sin^{2}\theta\left((r^{2}+a^{2})-\Delta\right)}{|q|^{2}}dvd\phi^{*}\\\
&-2a\sin^{2}\theta
drd\phi^{*}+|q|^{2}d\theta^{2}+\frac{\sin^{2}\theta}{|q|^{2}}\left((r^{2}+a^{2})^{2}-\Delta
a^{2}\sin^{2}\theta\right)(d\phi^{*})^{2},\end{split}$ (8)
which is regular at the horizon.
From the form of the Kerr metric in Boyer-Lindquist coordinates given by (7),
one can deduce [4] that its conformal inverse $|q|^{2}{\bf g}_{M,a}^{-1}$ can
be written as
$\displaystyle|q|^{2}{\bf g}_{M,a}^{{\alpha}{\beta}}$ $\displaystyle=$
$\displaystyle\Delta\partial_{r}^{\alpha}\partial_{r}^{\beta}+\frac{1}{\Delta}\mathcal{R}^{{\alpha}{\beta}}$
(9)
where
$\displaystyle\mathcal{R}^{{\alpha}{\beta}}$ $\displaystyle=$
$\displaystyle-(r^{2}+a^{2})^{2}\partial_{t}^{\alpha}\partial_{t}^{\beta}-2a(r^{2}+a^{2})\partial_{t}^{({\alpha}}\partial_{\phi}^{{\beta})}-a^{2}\partial_{\phi}^{\alpha}\partial_{\phi}^{\beta}+\Delta
O^{{\alpha}{\beta}},$ (10) $\displaystyle O^{{\alpha}{\beta}}$
$\displaystyle=$
$\displaystyle\partial_{\theta}^{\alpha}\partial_{\theta}^{\beta}+\frac{1}{\sin^{2}\theta}\partial_{\phi}^{\alpha}\partial_{\phi}^{\beta}+2a\partial_{t}^{({\alpha}}\partial_{\phi}^{{\beta})}+a^{2}\sin^{2}\theta\partial_{t}^{\alpha}\partial_{t}^{\beta},$
(11)
where $O^{{\alpha}{\beta}}$ is related to the hidden Carter symmetry of the
Kerr spacetime. We denote
$\displaystyle
O^{{\alpha}{\beta}}({\partial}_{\alpha}\psi)({\partial}_{\beta}\psi)=|\partial_{\theta}\psi|^{2}+\big{|}\frac{1}{\sin\theta}\partial_{\phi}\psi+a\sin\theta\partial_{t}\psi\big{|}^{2}=:|q|^{2}|\nabla\psi|^{2}.$
(12)
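Note that for axisymmetric solutions, i.e. when $\partial_{\phi}\psi=0$, the quantity (12) reduces to
$\displaystyle|q|^{2}|\nabla\psi|^{2}=|\partial_{\theta}\psi|^{2}+a^{2}\sin^{2}\theta\,|\partial_{t}\psi|^{2},$
which is the form relevant to the axisymmetric waves considered in this paper.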
We now describe the differential structure of the metric. Given standard
spherical coordinates $(\theta,\phi^{*})$ on the sphere $\mathbb{S}^{2}$ and
$(v,r)$ a global coordinate system on $\mathbb{R}^{2}$, the ambient manifold is
defined to be
$\mathcal{N}=\\{(v,r,\theta,\phi^{*})\in\mathbb{R}\times\mathbb{R}\times\mathbb{S}^{2}\setminus\\{\mathbb{R}\times\\{0\\}\times
S_{eq}\\}\\}$, where $S_{eq}=\mathbb{S}^{2}\cap\\{\theta=\frac{\pi}{2}\\}$
denotes the equator of the sphere.
In the case of extremal Kerr spacetimes, we have
$\displaystyle\Delta=(r-M)^{2}$ (13)
and the roots of $\Delta=0$ degenerate to $r_{+}=r_{-}=M$. The event horizon
is defined by $\mathcal{H}^{+}=\mathcal{N}\cap\\{r=M\\}$, the black hole
region corresponds to $\mathcal{N}\cap\\{r<M\\}$ and the exterior region
(covered by the Boyer-Lindquist coordinates) is given by
$\mathcal{D}=\mathcal{N}\cap\\{r>M\\}$.
### 2.2 The Killing vectorfields
The coordinate vectorfields $T=\partial_{v}$ and $Z=\partial_{\phi^{*}}$
coincide with the coordinate vectorfields $\partial_{t}$ and $\partial_{\phi}$
in Boyer-Lindquist coordinates, which are manifestly Killing for the metric
(7). The stationary Killing vectorfield $T=\partial_{t}$ is asymptotically
timelike as $r\to\infty$, and spacelike close to the horizon, in the
ergoregion $\\{\Delta-a^{2}\sin^{2}\theta<0\\}$.
The vectorfield
$\widehat{T}:={\partial}_{t}+\frac{a}{r^{2}+a^{2}}{\partial}_{\phi}$
satisfies, see for example Proposition 3.2.2 of [39],
$\displaystyle{\bf g}_{M,a}(\widehat{T},\widehat{T})$ $\displaystyle=$
$\displaystyle-\frac{\Delta|q|^{2}}{(r^{2}+a^{2})^{2}},$ (14)
and is therefore timelike in the exterior region $\mathcal{D}$ and null on the
horizon $\mathcal{H}^{+}$. In particular, its restriction to the event
horizon, also called the Hawking vectorfield
$\displaystyle\widehat{T}_{\mathcal{H}}:=\partial_{t}+\omega_{\mathcal{H}}\partial_{\phi},\qquad\text{with}\quad\omega_{\mathcal{H}}=\frac{a}{r_{+}^{2}+a^{2}},$
is a Killing vectorfield which is null and normal to the horizon.
In the extremal case, the angular velocity $\omega_{\mathcal{H}}$ of the
horizon is given by $\omega_{\mathcal{H}}=\frac{1}{2M}$ and we have
$\nabla_{\widehat{T}_{\mathcal{H}}}\widehat{T}_{\mathcal{H}}=\kappa\widehat{T}_{\mathcal{H}}=0$
along the horizon, where $\kappa=\frac{r_{+}-r_{-}}{2(r_{+}^{2}+a^{2})}$ is
the surface gravity, which is positive in the sub-extremal range and vanishes
in the extremal case.
### 2.3 Trapped null geodesics
In Kerr spacetime there exist orbital null geodesics, i.e. geodesics for which
the radial coordinate $r$ remains constant. Because of the integrability of
the geodesic flow due to the presence of the Carter tensor [17], we can give
the following characterization of trapped null geodesics in Kerr spacetime.
###### Lemma 2.1 (Lemma 3.8.3 in [39]).
Let $\gamma(\lambda)$ be a null geodesic in Kerr spacetime whose constants of motion
$\displaystyle{\bf e}:=-{\bf
g}(\dot{\gamma},\partial_{t}),\qquad{\bf\ell_{z}}:=-{\bf
g}(\dot{\gamma},\partial_{\phi})$
denote its energy and azimuthal angular momentum respectively. Then $\gamma$
is an orbital null geodesic if it satisfies
$\displaystyle{\mathcal{T}}_{{\bf
e},{\bf\ell_{z}}}:=\big{(}r^{3}-3Mr^{2}+a^{2}r+Ma^{2}\big{)}{\bf
e}-(r-M)a{\bf\ell_{z}}=0.$ (15)
The orbital null geodesics obtained above are trapped, i.e. they neither cross the event horizon nor terminate at null infinity. From (15) we can see that for
$a=0$ trapped null geodesics all concentrate at $\\{r=3M\\}$, which is the
photon sphere of Schwarzschild spacetime. On the other hand, for $|a|\neq 0$
there are null geodesics with constant $r$ for an open range of $r$.
Nevertheless, if ${\bf\ell_{z}}=0$, i.e. for trapped null geodesics orthogonal
to the axial Killing vectorfield, the trapped region defined by (15) reduces
to a hypersurface defined by
$\displaystyle{\mathcal{T}}:=r^{3}-3Mr^{2}+a^{2}r+Ma^{2}=0.$ (16)
Observe that the polynomial ${\mathcal{T}}$ has a unique simple root in the exterior of the black hole region, which we denote by $r_{trap}$. Trapped
null geodesics constitute an obstruction to decay for the high frequency limit
of solutions to the wave equation. For axisymmetric waves, the trapping
obstruction simplifies as it concentrates on the hypersurface
${\mathcal{T}}=0$ in physical-space, becoming an effective photon sphere.
In extremal Kerr, for $|a|=M$, the trapping polynomial becomes
$\displaystyle{\mathcal{T}}=r^{3}-3Mr^{2}+M^{2}r+M^{3}=(r-M)(r^{2}-2Mr-M^{2})$
whose root in the exterior region is $r_{trap}=(1+\sqrt{2})M$.
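Indeed, applying the quadratic formula to the factor $r^{2}-2Mr-M^{2}$ gives
$\displaystyle r=\frac{2M\pm\sqrt{4M^{2}+4M^{2}}}{2}=M(1\pm\sqrt{2}),$
and only $r=M(1+\sqrt{2})$ lies in the exterior region $r>M$, the remaining root $r=M$ of ${\mathcal{T}}$ being located on the horizon.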
## 3 Preliminaries
We recall here some preliminaries concerning the wave equation. In Section 3.1
we introduce the wave equation operator and the foliation in extremal Kerr and
in Section 3.2 we recall the main notations of the vectorfield method. In
Section 3.3 we collect preliminary computations for the derivation of the
Morawetz estimates obtained in Section 4.
### 3.1 The wave equation
The wave operator for a scalar function $\psi$ on a Lorentzian manifold is
given by
$\displaystyle\square_{\bf g}\psi=\frac{1}{\sqrt{-\det{\bf
g}}}\partial_{\alpha}((\sqrt{-\det{\bf g}}){\bf
g}^{{\alpha}{\beta}}\partial_{\beta}\psi).$
From the expression for the inverse metric (9), we deduce that the wave
operator for the Kerr metric in Boyer-Lindquist coordinates
$(t,r,\theta,\phi)$ is given by
$\begin{split}|q|^{2}\square_{{\bf
g}_{M,a}}&={\partial}_{r}(\Delta{\partial}_{r})+\frac{1}{\Delta}\Big{(}-(r^{2}+a^{2})^{2}{\partial}^{2}_{t}-2a(r^{2}+a^{2}){\partial}_{t}{\partial}_{\phi}-a^{2}{\partial}_{\phi}^{2}\Big{)}\\\
&+\frac{1}{\sin\theta}{\partial}_{\theta}(\sin\theta{\partial}_{\theta})+\frac{1}{\sin^{2}\theta}{\partial}^{2}_{\phi}+2a\partial_{t}\partial_{\phi}+a^{2}\sin^{2}\theta{\partial}^{2}_{t}.\end{split}$
(17)
In ingoing Eddington-Finkelstein coordinates $(v,r,\theta,\phi^{*})$, the wave
operator is given by
$\displaystyle\begin{split}|q|^{2}\square_{{\bf
g}_{M,a}}&={\partial}_{r}(\Delta{\partial}_{r})+2(r^{2}+a^{2}){\partial}_{v}{\partial}_{r}+2a{\partial}_{r}{\partial}_{\phi^{*}}+2r{\partial}_{v}\\\
&+\frac{1}{\sin\theta}{\partial}_{\theta}(\sin\theta{\partial}_{\theta})+\frac{1}{\sin^{2}\theta}{\partial}^{2}_{\phi^{*}}+2a{\partial}_{v}{\partial}_{\phi^{*}}+a^{2}\sin^{2}\theta{\partial}_{v}^{2}.\end{split}$
(18)
Let $\Sigma_{0}$ be a closed connected axisymmetric spacelike hypersurface in
$({\mathcal{D}}\cup\mathcal{H}^{+})$ which crosses the event horizon
$\mathcal{H}^{+}$ and terminates at null infinity. We define the region
$\mathcal{M}=J^{+}(\Sigma_{0})\cap({\mathcal{D}}\cup\mathcal{H}^{+})$, and
consider the foliation $\Sigma_{\tau}=\phi_{\tau}^{T}(\Sigma_{0})$, where
$\phi_{\tau}^{T}$ is the flow of $T$. Since $T$ is Killing, the hypersurfaces
$\Sigma_{\tau}$ are all isometric to $\Sigma_{0}$. We denote by
$n_{\Sigma_{\tau}}$ the future directed unit vector field normal to
$\Sigma_{\tau}$. By convention, along the event horizon $\mathcal{H}^{+}$ we
choose $n_{\mathcal{H}^{+}}=\widehat{T}_{\mathcal{H}}$. We define the regions
$\mathcal{M}(0,\tau)=\cup_{0\leq\tilde{\tau}\leq\tau}\Sigma_{\tilde{\tau}}$,
$\mathcal{H}^{+}(0,\tau)=\mathcal{H}^{+}\cap\mathcal{M}(0,\tau)$ and
$\mathcal{I}^{+}(0,\tau)=\mathcal{I}^{+}\cap\mathcal{M}(0,\tau)$.
In what follows we consider axisymmetric solutions to the wave equation in
extremal Kerr, i.e.
$\displaystyle\square_{{\bf g}}\psi=0,\qquad\partial_{\phi}\psi=0,$
where ${\bf g}$ denotes the metric of the extremal Kerr spacetime. We consider
the Cauchy problem for the wave equation in $\mathcal{M}$ with axisymmetric
initial data prescribed on $\Sigma_{0}$,
$\displaystyle\psi|_{\Sigma_{0}}=\psi_{0}\in H^{k}_{loc}(\Sigma_{0}),\qquad
n_{\Sigma_{0}}\psi|_{\Sigma_{0}}=\psi_{1}\in H^{k-1}_{loc}(\Sigma_{0}),$
for $k\geq 2$ and assuming that $\lim_{x\to\mathscr{I}^{+}}r\psi^{2}(x)=0$.
Standard results imply well-posedness for the above Cauchy problem.
### 3.2 The vectorfield method
The vectorfield method is based on applying the divergence theorem in a causal
domain, such as $\mathcal{M}(0,\tau)$, to certain energy currents
constructed from the energy-momentum tensor. The energy-momentum tensor
associated to the wave equation $\square_{\bf g}\psi=0$ is given by
$\displaystyle\mathcal{Q}[\psi]_{\mu\nu}$ $\displaystyle=$
$\displaystyle{\partial}_{\mu}\psi{\partial}_{\nu}\psi-\frac{1}{2}{\bf
g}_{\mu\nu}{\partial}_{\lambda}\psi{\partial}^{\lambda}\psi.$ (19)
If $\square_{\bf g}\psi=0$, the energy-momentum tensor $\mathcal{Q}[\psi]_{\mu\nu}$
is divergence-free.
Let $X$ be a vectorfield, $w$ be a scalar function and $J$ a one-form. The
current associated to $(X,w,J)$ is defined as
$\displaystyle\mathcal{P}_{\mu}^{(X,w,J)}[\psi]$ $\displaystyle=$
$\displaystyle\mathcal{Q}[\psi]_{\mu\nu}X^{\nu}+\frac{1}{2}w\psi{\partial}_{\mu}\psi-\frac{1}{4}({\partial}_{\mu}w)|\psi|^{2}+\frac{1}{4}J_{\mu}|\psi|^{2}.$
(20)
The energy associated to $(X,w,J)$ on the hypersurface $\Sigma_{\tau}$ is
$\displaystyle E^{(X,w,J)}[\psi](\tau)$ $\displaystyle=$
$\displaystyle\int_{\Sigma_{\tau}}\mathcal{P}^{(X,w,J)}_{\mu}[\psi]n_{\Sigma_{\tau}}^{\mu},$
where $n_{\Sigma_{\tau}}$ denotes the future directed timelike unit normal to
$\Sigma_{\tau}$.
A standard computation gives the following divergence of $\mathcal{P}$ for a
solution to the wave equation $\square_{\bf g}\psi=0$, see for example [47, 39],
$\displaystyle{\bf D}^{\mu}\mathcal{P}_{\mu}^{(X,w,J)}[\psi]=\frac{1}{2}\mathcal{Q}[\psi]\cdot\,^{(X)}\pi-\frac{1}{4}\square_{\bf g}w|\psi|^{2}+\frac{1}{2}w({\partial}_{\lambda}\psi{\partial}^{\lambda}\psi)+\frac{1}{4}\mathrm{div}\,(J|\psi|^{2}),$
(21)
where $\,{}^{(X)}\pi_{\mu\nu}={\bf D}_{(\mu}X_{\nu)}$ is the deformation
tensor of the vectorfield $X$. Recall that if $X$ is a Killing vectorfield,
then $\,{}^{(X)}\pi=0$.
By applying the divergence theorem to $\mathcal{P}_{\mu}^{(X,w,J)}$ on
$\mathcal{M}(0,\tau)$, one obtains the associated energy identity:
$\displaystyle
E[\psi](\tau)+\int_{\mathcal{H}^{+}(0,\tau)}\mathcal{P}_{\mu}[\psi]n_{\mathcal{H}^{+}}^{\mu}+\int_{\mathcal{I}^{+}(0,\tau)}\mathcal{P}_{\mu}[\psi]n_{\mathcal{I}^{+}}^{\mu}+\int_{\mathcal{M}(0,\tau)}{\bf
D}^{\mu}\mathcal{P}_{\mu}[\psi]=E[\psi](0),$ (22)
where we suppressed the superscript $(X,w,J)$ and the induced volume forms are
to be understood. By convention, along the event horizon we choose
$n_{\mathcal{H}^{+}}=T+\frac{a}{M^{2}+a^{2}}Z$.
### 3.3 Preliminary computations for the Morawetz estimates
In deriving Morawetz estimates for the wave equation we make use of the
vectorfield $X=\mathcal{F}(r){\partial}_{r}$, for a well chosen function
$\mathcal{F}$. We collect here some relevant computations (see also
[4, 60, 39]) which will be used in the next section.
###### Lemma 3.1.
For $X=\mathcal{F}(r){\partial}_{r}$, we have
$\,{}^{(X)}\pi^{{\alpha}{\beta}}=|q|^{-2}\Big{(}2\Delta^{3/2}{\partial}_{r}\big{(}\frac{\mathcal{F}}{\Delta^{1/2}}\big{)}{\partial}_{r}^{\alpha}{\partial}_{r}^{\beta}-\mathcal{F}{\partial}_{r}\big{(}\frac{1}{\Delta}\mathcal{R}^{{\alpha}{\beta}}\big{)}\Big{)}+|q|^{-2}X\big{(}|q|^{2}\big{)}{\bf
g}^{{\alpha}{\beta}},$ (23)
and therefore
$\begin{split}|q|^{2}\mathcal{Q}[\psi]\cdot\,^{(X)}\pi&=2\Delta^{3/2}{\partial}_{r}\big{(}\frac{\mathcal{F}}{\Delta^{1/2}}\big{)}|{\partial}_{r}\psi|^{2}-\mathcal{F}{\partial}_{r}\big{(}\frac{1}{\Delta}\mathcal{R}^{{\alpha}{\beta}}\big{)}{\partial}_{\alpha}\psi{\partial}_{\beta}\psi\\\
&+\Big{(}X\big{(}|q|^{2}\big{)}-|q|^{2}(\mathrm{div}\,X)\Big{)}{\partial}_{\lambda}\psi{\partial}^{\lambda}\psi.\end{split}$
(24)
###### Proof.
Using the expression for the inverse metric (9), we compute
$\displaystyle\mathcal{L}_{X}(|q|^{2}{\bf g}^{{\alpha}{\beta}})$
$\displaystyle=$
$\displaystyle\mathcal{L}_{X}\big{(}\Delta{\partial}_{r}^{\alpha}{\partial}_{r}^{\beta}\big{)}+\mathcal{L}_{X}\big{(}\frac{1}{\Delta}\mathcal{R}^{{\alpha}{\beta}}\big{)}=X(\Delta){\partial}_{r}^{\alpha}{\partial}_{r}^{\beta}+\Delta[X,{\partial}_{r}]^{\alpha}{\partial}_{r}^{\beta}+\Delta{\partial}_{r}^{\alpha}[X,{\partial}_{r}]^{\beta}+\mathcal{L}_{X}\big{(}\frac{1}{\Delta}\mathcal{R}^{{\alpha}{\beta}}\big{)}.$
For $X=\mathcal{F}\partial_{r}$, we obtain
$\displaystyle\mathcal{L}_{X}(|q|^{2}{\bf g}^{{\alpha}{\beta}})$
$\displaystyle=$
$\displaystyle\mathcal{F}({\partial}_{r}\Delta){\partial}_{r}^{\alpha}{\partial}_{r}^{\beta}+\Delta[\mathcal{F}{\partial}_{r},{\partial}_{r}]^{\alpha}{\partial}_{r}^{\beta}+\Delta{\partial}_{r}^{\alpha}[\mathcal{F}{\partial}_{r},{\partial}_{r}]^{\beta}+\mathcal{F}\mathcal{L}_{{\partial}_{r}}\big{(}\frac{1}{\Delta}\mathcal{R}^{{\alpha}{\beta}}\big{)}$
$\displaystyle=$
$\displaystyle\mathcal{F}({\partial}_{r}\Delta){\partial}_{r}^{\alpha}{\partial}_{r}^{\beta}-2\Delta({\partial}_{r}\mathcal{F}){\partial}_{r}^{\alpha}{\partial}_{r}^{\beta}+\mathcal{F}\partial_{r}\big{(}\frac{1}{\Delta}\mathcal{R}^{{\alpha}{\beta}}\big{)}$
$\displaystyle=$
$\displaystyle-2\Delta^{3/2}{\partial}_{r}\big{(}\frac{\mathcal{F}}{\Delta^{1/2}}\big{)}{\partial}_{r}^{\alpha}{\partial}_{r}^{\beta}+\mathcal{F}\partial_{r}\big{(}\frac{1}{\Delta}\mathcal{R}^{{\alpha}{\beta}}\big{)}.$
By writing
$\,{}^{(X)}\pi^{{\alpha}{\beta}}$ $\displaystyle=$
$\displaystyle-\mathcal{L}_{X}\big{(}|q|^{-2}|q|^{2}{\bf
g}^{{\alpha}{\beta}}\big{)}=-|q|^{-2}\mathcal{L}_{X}\big{(}|q|^{2}{\bf
g}^{{\alpha}{\beta}}\big{)}-|q|^{2}\mathcal{L}_{X}\big{(}|q|^{-2}\big{)}{\bf
g}^{{\alpha}{\beta}}$
we obtain the stated expression (23) for the deformation tensor.
Finally we write
$\displaystyle\mathcal{Q}[\psi]\cdot\,^{(X)}\pi$ $\displaystyle=$
$\,{}^{(X)}\pi^{{\alpha}{\beta}}{\partial}_{\alpha}\psi{\partial}_{\beta}\psi-(\mathrm{div}\,X){\partial}_{\lambda}\psi{\partial}^{\lambda}\psi,$
since ${\bf g}_{\mu\nu}\,^{(X)}\pi^{\mu\nu}={\bf g}_{\mu\nu}{\bf D}^{(\mu}X^{\nu)}=2\,\mathrm{div}\,X$; this gives (24).
∎
###### Lemma 3.2.
Let $z(r)$, $u(r)$, $v(r)$ be functions of $r$. Then for
$\displaystyle X=\mathcal{F}\partial_{r},\qquad\quad\mathcal{F}=zu,\qquad\quad
w=z{\partial}_{r}u,\qquad\quad J=v\partial_{r}$ (25)
the divergence of $\mathcal{P}_{\mu}^{(X,w,J)}[\psi]$ satisfies
$\displaystyle|q|^{2}{\bf D}^{\mu}\mathcal{P}_{\mu}^{(X,w,J)}[\psi]$
$\displaystyle=$
$\displaystyle\mathcal{A}|{\partial}_{r}\psi|^{2}+\mathcal{U}^{{\alpha}{\beta}}({\partial}_{\alpha}\psi)({\partial}_{\beta}\psi)+\mathcal{V}|\psi|^{2}+\frac{1}{4}|q|^{2}\mathrm{div}\,(J|\psi|^{2}),$
(26)
where
$\displaystyle\mathcal{A}$ $\displaystyle=$ $\displaystyle
z^{1/2}\Delta^{3/2}\partial_{r}\left(\frac{z^{1/2}u}{\Delta^{1/2}}\right),$
(27) $\displaystyle\mathcal{U}^{{\alpha}{\beta}}$ $\displaystyle=$
$\displaystyle-\frac{1}{2}u{\partial}_{r}\left(\frac{z}{\Delta}\mathcal{R}^{{\alpha}{\beta}}\right),$
(28) $\displaystyle\mathcal{V}$ $\displaystyle=$
$\displaystyle-\frac{1}{4}{\partial}_{r}\big{(}\Delta{\partial}_{r}w\big{)}=-\frac{1}{4}{\partial}_{r}\big{(}\Delta{\partial}_{r}\big{(}z{\partial}_{r}u\big{)}\big{)},$
(29) $\displaystyle\frac{1}{4}|q|^{2}\mathrm{div}\,(J|\psi|^{2})$
$\displaystyle=$
$\displaystyle\frac{1}{4}|q|^{2}\Big{(}2v\psi\cdot\nabla_{r}\psi+\big{(}{\partial}_{r}v+\frac{2r}{|q|^{2}}v\big{)}|\psi|^{2}\Big{)}.$
(30)
###### Proof.
Using (21) and (24) we compute for $J=0$
$\displaystyle|q|^{2}{\bf
D}^{\mu}\mathcal{P}_{\mu}^{(\mathcal{F}\partial_{r},w,J=0)}[\psi]$
$\displaystyle=$
$\displaystyle\Delta^{3/2}{\partial}_{r}\big{(}\frac{\mathcal{F}}{\Delta^{1/2}}\big{)}|{\partial}_{r}\psi|^{2}-\frac{1}{2}\mathcal{F}{\partial}_{r}\big{(}\frac{1}{\Delta}\mathcal{R}^{{\alpha}{\beta}}\big{)}{\partial}_{\alpha}\psi{\partial}_{\beta}\psi-\frac{1}{4}|q|^{2}\square_{\bf
g}w|\psi|^{2}$
$\displaystyle+\frac{1}{2}\Big{(}X\big{(}|q|^{2}\big{)}-|q|^{2}(\mathrm{div}\,X)+|q|^{2}w\Big{)}{\partial}_{\lambda}\psi{\partial}^{\lambda}\psi.$
By defining an intermediate function $w_{int}$ as
$\displaystyle\frac{1}{2}\Big{(}X\big{(}|q|^{2}\big{)}-|q|^{2}\,\mathrm{div}\,X+|q|^{2}w\Big{)}=\frac{1}{2}|q|^{2}\Big{(}|q|^{-2}X\big{(}|q|^{2}\big{)}-\mathrm{div}\,X+w\Big{)}=:-\frac{1}{2}|q|^{2}w_{int},$
and using (9) to write
$\displaystyle|q|^{2}{\partial}_{\lambda}\psi{\partial}^{\lambda}\psi$
$\displaystyle=$ $\displaystyle|q|^{2}{\bf g}^{\lambda\nu}{\partial}_{\lambda}\psi{\partial}_{\nu}\psi=\Delta|{\partial}_{r}\psi|^{2}+\frac{1}{\Delta}\mathcal{R}^{{\alpha}{\beta}}{\partial}_{\alpha}\psi{\partial}_{\beta}\psi,$
we obtain
$\displaystyle|q|^{2}{\bf
D}^{\mu}\mathcal{P}_{\mu}^{(\mathcal{F}\partial_{r},w,J=0)}[\psi]$
$\displaystyle=$
$\displaystyle\mathcal{A}|{\partial}_{r}\psi|^{2}+\mathcal{U}^{{\alpha}{\beta}}({\partial}_{\alpha}\psi)({\partial}_{\beta}\psi)+\mathcal{V}|\psi|^{2},$
where
$\displaystyle\mathcal{A}$ $\displaystyle=$
$\displaystyle\Delta^{3/2}{\partial}_{r}\big{(}\frac{\mathcal{F}}{\Delta^{1/2}}\big{)}-\frac{1}{2}w_{int}\Delta$
$\displaystyle\mathcal{U}^{{\alpha}{\beta}}$ $\displaystyle=$
$\displaystyle-\frac{1}{2}\mathcal{F}{\partial}_{r}\left(\frac{1}{\Delta}\mathcal{R}^{{\alpha}{\beta}}\right)-\frac{1}{2}w_{int}\frac{1}{\Delta}\mathcal{R}^{{\alpha}{\beta}}$
$\displaystyle\mathcal{V}$ $\displaystyle=$
$\displaystyle-\frac{1}{4}|q|^{2}\square_{\bf g}w.$
Now the above can be written as
$\displaystyle\mathcal{U}^{{\alpha}{\beta}}$ $\displaystyle=$
$\displaystyle-\frac{1}{2}\mathcal{F}z^{-1}\partial_{r}\left(\frac{z}{\Delta}\mathcal{R}^{{\alpha}{\beta}}\right)+\frac{1}{2}\left(\mathcal{F}z^{-1}\partial_{r}z-w_{int}\right)\frac{\mathcal{R}^{{\alpha}{\beta}}}{\Delta}.$
Setting $\mathcal{F}=zu$ for a function $u$, and choosing
$w_{int}=\mathcal{F}z^{-1}\partial_{r}z=u{\partial}_{r}z$, we deduce the
stated expression for $\mathcal{U}^{{\alpha}{\beta}}$ in (28). With such
choices for $\mathcal{F}$ and $w_{int}$, we compute
$\displaystyle w$ $\displaystyle=$ $\displaystyle|q|^{2}{\bf
D}_{\alpha}\big{(}|q|^{-2}\mathcal{F}{\partial}_{r}^{\alpha}\big{)}-w_{int}=|q|^{2}{\partial}_{r}\big{(}|q|^{-2}\mathcal{F}\big{)}+\mathcal{F}({\bf
D}_{\alpha}{\partial}_{r}^{\alpha})-u{\partial}_{r}z$ $\displaystyle=$
$\displaystyle|q|^{2}{\partial}_{r}\big{(}|q|^{-2}zu\big{)}+zu|q|^{-2}{\partial}_{r}\big{(}|q|^{2}\big{)}-u{\partial}_{r}z={\partial}_{r}\big{(}zu\big{)}-u{\partial}_{r}z=z{\partial}_{r}u,$
where we used that ${\bf
D}_{\alpha}{\partial}_{r}^{\alpha}=\frac{1}{\sqrt{|{\bf
g}|}}{\partial}_{r}\big{(}\sqrt{|{\bf
g}|}\big{)}=\frac{1}{|q|^{2}}{\partial}_{r}\big{(}|q|^{2}\big{)}$. We also
compute
$\displaystyle\mathcal{A}$ $\displaystyle=$
$\displaystyle\partial_{r}\left(\frac{\mathcal{F}}{\Delta^{1/2}}\right)\Delta^{3/2}-\frac{1}{2}\Delta
w_{int}=\partial_{r}\left(\frac{zu}{\Delta^{1/2}}\right)\Delta^{3/2}-\frac{1}{2}\Delta(\partial_{r}z)u$
$\displaystyle=$
$\displaystyle\frac{1}{2}\partial_{r}z\frac{u}{\Delta^{1/2}}\Delta^{3/2}+z^{1/2}\partial_{r}\left(\frac{z^{1/2}u}{\Delta^{1/2}}\right)\Delta^{3/2}-\frac{1}{2}\Delta(\partial_{r}z)u=z^{1/2}\Delta^{3/2}\partial_{r}\left(\frac{z^{1/2}u}{\Delta^{1/2}}\right),$
and
$\displaystyle|q|^{2}\square_{\bf
g}w={\partial}_{r}\big{(}\Delta{\partial}_{r}w\big{)}={\partial}_{r}\big{(}\Delta{\partial}_{r}\big{(}z{\partial}_{r}u\big{)}\big{)},$
as stated. Finally, for $J=v\partial_{r}$ we compute
$\displaystyle{\bf D}^{\mu}(|\psi|^{2}J_{\mu})$ $\displaystyle=$
$\displaystyle 2v\psi\cdot\nabla_{r}\psi+|\psi|^{2}\,\mathrm{div}\,J.$
Using that
$\mathrm{div}\,J=\frac{1}{|q|^{2}}{\partial}_{r}\big{(}|q|^{2}v\big{)}={\partial}_{r}v+\frac{2r}{|q|^{2}}v$,
we obtain the stated identity.
∎
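The identity $w=z{\partial}_{r}u$ derived in the proof holds for arbitrary radial functions $z,u$ and an arbitrary weight $|q|^{2}$. A minimal sympy sketch (illustrative only; `q2` stands for $|q|^{2}$):

```python
import sympy as sp

r = sp.Symbol('r')
z = sp.Function('z')(r)
u = sp.Function('u')(r)
q2 = sp.Function('q2')(r)  # stands for |q|^2

# w = |q|^2 d/dr(|q|^{-2} F) + F |q|^{-2} d/dr(|q|^2) - u z', with F = z u
F = z*u
w = q2*sp.diff(F/q2, r) + F*sp.diff(q2, r)/q2 - u*sp.diff(z, r)
assert sp.simplify(w - z*sp.diff(u, r)) == 0  # w = z u'
```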
### 3.4 Boundedness of the energy
We show here how to obtain boundedness of the energy associated to $T$ for
axially symmetric solutions to the wave equation in extremal Kerr. Such a
statement already appeared in [11] and, in axial symmetry, can be proved
independently of the Morawetz estimates.
Even though the Killing vectorfield $T$ fails to be everywhere timelike and as
a consequence the energy $E^{(T)}[\psi]$ associated to it fails to be non-
negative definite, superradiance is effectively absent for axially symmetric
solutions. In fact, let $n$ be a vector orthogonal to $Z$. Then for an axially
symmetric $\psi$ we have
$\displaystyle
E^{(Z)}[\psi](\tau)=\int_{\Sigma_{\tau}}\mathcal{Q}[\psi]_{\mu\nu}Z^{\nu}n^{\mu}_{\Sigma_{\tau}}=\int_{\Sigma_{\tau}}Z(\psi)n_{\Sigma_{\tau}}(\psi)-\frac{1}{2}{\bf
g}(Z,n_{\Sigma_{\tau}}){\partial}_{\lambda}\psi{\partial}^{\lambda}\psi=0.$
On the other hand, the Hawking vectorfield $\widehat{T}$ is causal everywhere
in the exterior; using that
$\mathcal{Q}[\psi]_{\mu\nu}V_{1}^{\mu}V_{2}^{\nu}$ is non-negative if $V_{1}$,
$V_{2}$ are causal, this implies
$\displaystyle E^{(T)}[\psi](\tau)=E^{(\widehat{T})}[\psi](\tau)\geq 0,$
and similarly
$\int_{\mathcal{H}^{+}(0,\tau)}\mathcal{P}^{(T)}_{\mu}[\psi]n_{\mathcal{H}^{+}}^{\mu},\int_{\mathcal{I}^{+}(0,\tau)}\mathcal{P}^{(T)}_{\mu}[\psi]n_{\mathcal{I}^{+}}^{\mu}\geq
0$.
Working in the $(v,r,\theta,\varphi^{*})$ coordinates, if
$n_{\Sigma_{\tau}}=n^{v}T+n^{r}Y+n^{\varphi}Z$, then for axially symmetric
solutions,
$\displaystyle E^{(T)}[\psi](\tau)$ $\displaystyle=$
$\displaystyle\int_{\Sigma_{\tau}}\mathcal{Q}[\psi]_{\mu\nu}T^{\nu}n^{\mu}_{\Sigma_{\tau}}=\int_{\Sigma_{\tau}}T(\psi)n_{\Sigma_{\tau}}(\psi)-\frac{1}{2}{\bf
g}(T,n_{\Sigma_{\tau}}){\partial}_{\lambda}\psi{\partial}^{\lambda}\psi$
$\displaystyle=$
$\displaystyle\int_{\Sigma_{\tau}}n^{v}|T(\psi)|^{2}+n^{r}T(\psi)Y(\psi)-\frac{1}{2}\big{(}n^{v}{\bf
g}(T,T)+n^{r}{\bf g}(T,Y)+n^{\varphi}{\bf
g}(T,Z)\big{)}{\partial}_{\lambda}\psi{\partial}^{\lambda}\psi,$
where from (8) we deduce that
${\partial}_{\lambda}\psi{\partial}^{\lambda}\psi=\frac{1}{|q|^{2}}\big{(}a^{2}\sin^{2}\theta|T\psi|^{2}+\Delta|Y\psi|^{2}+2(r^{2}+a^{2})T(\psi)Y(\psi)\big{)}+|\nabla\mkern-13.0mu/\,\psi|^{2}$.
Since the only contribution to $|Y\psi|^{2}$ comes from the term in
${\partial}_{\lambda}\psi{\partial}^{\lambda}\psi$, whose coefficient vanishes
at the horizon, positivity of the energy requires the coefficient of
$T(\psi)Y(\psi)$ to vanish at the horizon as well. We therefore obtain
$\displaystyle E^{(T)}[\psi](\tau)$ $\displaystyle\sim$
$\displaystyle\int_{\Sigma_{\tau}}|T\psi|^{2}+\left(1-\frac{M}{r}\right)^{2}|Y\psi|^{2}+|\nabla\mkern-13.0mu/\,\psi|^{2}.$
From the energy identity (22) applied to $X=T$ (with $w=0$, $J=0$), since the
bulk term $\mathcal{E}^{(T,0)}[\psi]$ vanishes for the Killing vectorfield $T$, we then obtain
$\displaystyle
E^{(T)}[\psi](\tau)+\int_{\mathcal{H}^{+}(0,\tau)}\mathcal{P}^{(T)}_{\mu}[\psi]n_{\mathcal{H}^{+}}^{\mu}+\int_{\mathcal{I}^{+}(0,\tau)}\mathcal{P}^{(T)}_{\mu}[\psi]n_{\mathcal{I}^{+}}^{\mu}\leq
CE^{(T)}[\psi](0).$ (31)
As a consequence of the vanishing of the surface gravity of the Hawking
vectorfield at the horizon, the redshift that takes place there degenerates in
the extremal case. In particular, as shown in [11], there is no time invariant
timelike vectorfield $N$ such that $\mathcal{E}^{(N,0)}[\psi]$ is non-negative
on the horizon. However, one can still quantitatively capture the degenerate
redshift close to the horizon by using a current first introduced in [9], and
obtain a non-degenerate energy boundedness statement.
## 4 Morawetz estimates
We provide here the proof of our main result. In Section 4.1 we recall the
method introduced by Stogin [60] to construct the relevant functions in the
estimates and extend it to the extremal case; in Section 4.2 we complete
the construction with a new adapted global pointwise Hardy inequality and
additional trapped control on the time derivative of the solution.
### 4.1 Stogin’s construction
Recall Lemma 3.2, according to which for functions $z,u,v$ chosen as in (25),
the divergence of $\mathcal{P}_{\mu}^{(X,w,J)}[\psi]$ is given by
$\displaystyle|q|^{2}{\bf D}^{\mu}\mathcal{P}_{\mu}^{(X,w,J)}[\psi]$
$\displaystyle=$
$\displaystyle\mathcal{A}|{\partial}_{r}\psi|^{2}+\mathcal{U}^{{\alpha}{\beta}}({\partial}_{\alpha}\psi)({\partial}_{\beta}\psi)+\mathcal{V}|\psi|^{2}+\frac{1}{4}|q|^{2}\mathrm{div}\,(J|\psi|^{2}),$
where $\mathcal{A}$, $\mathcal{U}$ and $\mathcal{V}$ are given as in (27),
(28), (29).
Following a standard choice in the derivation of Morawetz estimates (see
[27, 29, 60, 4, 39]), we choose the function $z$ so that the coefficient of
$|\partial_{t}\psi|^{2}$ vanishes and the coefficient of $|\nabla\psi|^{2}$
degenerates at trapping. From (28) and (10), we have for axially symmetric
solutions
$\displaystyle\mathcal{U}^{{\alpha}{\beta}}({\partial}_{\alpha}\psi)({\partial}_{\beta}\psi)$
$\displaystyle=$
$\displaystyle-\frac{1}{2}u{\partial}_{r}\left(\frac{z}{\Delta}\mathcal{R}^{{\alpha}{\beta}}\right)({\partial}_{\alpha}\psi)({\partial}_{\beta}\psi)$
$\displaystyle=$
$\displaystyle\frac{1}{2}u{\partial}_{r}\left(\frac{z}{\Delta}(r^{2}+M^{2})^{2}\right)|\partial_{t}\psi|^{2}-\frac{1}{2}u({\partial}_{r}z)\,O^{{\alpha}{\beta}}({\partial}_{\alpha}\psi)({\partial}_{\beta}\psi).$
So we set
$\displaystyle z=\frac{(r-M)^{2}}{(r^{2}+M^{2})^{2}},$
and obtain
$\displaystyle\mathcal{U}^{{\alpha}{\beta}}({\partial}_{\alpha}\psi)({\partial}_{\beta}\psi)$
$\displaystyle=$
$\displaystyle\frac{u{\mathcal{T}}}{(r^{2}+M^{2})^{3}}\,O^{{\alpha}{\beta}}({\partial}_{\alpha}\psi)({\partial}_{\beta}\psi)$
$\displaystyle=$
$\displaystyle\frac{u(r-M)(r^{2}-2Mr-M^{2})}{(r^{2}+M^{2})^{3}}\,|q|^{2}|\nabla\psi|^{2}.$
Observe that from (12) we have
$\displaystyle
O^{{\alpha}{\beta}}({\partial}_{\alpha}\psi)({\partial}_{\beta}\psi)=|q|^{2}|\nabla\psi|^{2}=(\partial_{\theta}\psi)^{2}+M^{2}\sin^{2}\theta(\partial_{t}\psi)^{2}.$
Using (27) and (29) we deduce for such choice of $z$,
$\displaystyle\mathcal{A}=\frac{(r-M)^{4}}{(r^{2}+M^{2})}\partial_{r}\big{(}\frac{u}{r^{2}+M^{2}}\big{)},\qquad\mathcal{V}=-\frac{1}{4}{\partial}_{r}\Big{(}(r-M)^{2}{\partial}_{r}w\Big{)}$
(32)
with
$\displaystyle w=\frac{(r-M)^{2}}{(r^{2}+M^{2})^{2}}{\partial}_{r}u.$ (33)
The main goal here is to choose the functions $u$, $w$ and $v$ so that the
divergence of $\mathcal{P}_{\mu}^{(X,w,J)}[\psi]$ is positive definite. For
the choice of functions $u$ and $w$ we make use of a construction due to
Stogin in the sub-extremal Kerr spacetime, see Lemma 5.2.6 in [60], also used
in [47, 36, 38]. In what follows, we adapt Stogin’s construction to the case
of extremal Kerr. Stogin’s construction alone does not yield a positive
definite term for $|\psi|^{2}$ in the entire exterior region, and in the
sub-extremal case this deficiency is fixed using the redshift estimate and a
local integrated Hardy inequality. In the extremal case, because the redshift
estimate degenerates, we need a new adapted Hardy inequality that we derive in
Section 4.2.
In Stogin’s construction [60], the relation between $u$ and $w$ in (33) is
used to define $u$ in terms of $w$, treating $w$ as the free variable. In
order for the coefficient of $|\nabla\psi|^{2}$ to be non-negative, the
function $u$ has to change sign at $r=r_{trap}$; following Stogin [60], we
therefore impose:
$\displaystyle u(r)=\int_{r_{trap}}^{r}\frac{(s^{2}+M^{2})^{2}}{(s-M)^{2}}w(s)ds.$ (34)
Further imposing the positivity of the function $w$, we obtain that $u$ is
increasing everywhere and changes sign at $r_{trap}$, which implies that
$\mathcal{U}^{{\alpha}{\beta}}({\partial}_{\alpha}\psi)({\partial}_{\beta}\psi)$
is non-negative.
Following Stogin, we now choose the function $w$ in order to have positivity
of $\mathcal{A}$, i.e. positivity of
$\partial_{r}\big{(}\frac{u}{r^{2}+M^{2}}\big{)}$. By defining
$\widetilde{\mathcal{A}}:=\frac{(r^{2}+M^{2})^{2}}{2r}\partial_{r}\left(\frac{u}{r^{2}+M^{2}}\right)$,
a straightforward computation shows that
$\displaystyle\partial_{r}\widetilde{\mathcal{A}}=(r^{2}+M^{2})\partial_{r}\left(\frac{w(r^{2}+M^{2})^{2}}{2r(r-M)^{2}}\right).$
(35)
Let $r_{*}:=(2+\sqrt{3})M$ be the point where the function
$\frac{2r(r-M)^{2}}{(r^{2}+M^{2})^{2}}$ attains its maximum; we define $w$ as
the positive $C^{1}$ function
$\displaystyle w=\begin{cases}\frac{1}{4M}\ \qquad&r\leq r_{*}\\\
\frac{2r(r-M)^{2}}{(r^{2}+M^{2})^{2}}\ \qquad&r>r_{*}.\end{cases}$ (36)
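The choice of $r_{*}$ and the $C^{1}$ matching in (36) can be checked symbolically; a minimal sympy sketch (illustrative only):

```python
import sympy as sp

r = sp.Symbol('r')
M = sp.Symbol('M', positive=True)
f = 2*r*(r - M)**2/(r**2 + M**2)**2

# critical points of f: r = ±M and r = (2 ± sqrt(3))M; only (2 + sqrt(3))M > M
print(sp.solve(sp.diff(f, r), r))

rstar = (2 + sp.sqrt(3))*M
assert sp.simplify(f.subs(r, rstar) - 1/(4*M)) == 0    # values match: w continuous
assert sp.simplify(sp.diff(f, r).subs(r, rstar)) == 0  # slopes match: w is C^1
```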
Since $r_{*}$ also attains the minimum of the function
$\frac{(r^{2}+M^{2})^{2}}{2r(r-M)^{2}}$, the above construction implies that
the function $\frac{w(r^{2}+M^{2})^{2}}{2r(r-M)^{2}}$ is constant for $r\geq
r_{*}$ and decreasing for $r\leq r_{*}$. From (35), $\widetilde{\mathcal{A}}$
has the same behavior. We now show that the constant value attained by
$\widetilde{\mathcal{A}}$ for $r\geq r_{*}$ is positive. We have
$\displaystyle\widetilde{\mathcal{A}}(r_{*})$ $\displaystyle=$
$\displaystyle\frac{(r_{*}^{2}+M^{2})^{2}}{2r_{*}}\partial_{r}\left(\frac{u}{r^{2}+M^{2}}\right)\Big{|}_{r=r_{*}}=\frac{(r_{*}^{2}+M^{2})}{2r_{*}}\partial_{r}u\Big{|}_{r=r_{*}}-u(r_{*})$
$\displaystyle=$
$\displaystyle\frac{(r_{*}^{2}+M^{2})^{3}}{2r_{*}(r_{*}-M)^{2}}w(r_{*})-w(r_{*})\int_{r_{trap}}^{r_{*}}\frac{(r^{2}+M^{2})^{2}}{(r-M)^{2}}dr$
Observe that since the function $\frac{(r^{2}+M^{2})^{2}}{(r-M)^{2}}$ is
increasing between $r_{trap}$ and $r_{*}$, we can bound the above by
$\displaystyle\widetilde{\mathcal{A}}(r_{*})$ $\displaystyle>$
$\displaystyle\frac{(r_{*}^{2}+M^{2})^{3}}{2r_{*}(r_{*}-M)^{2}}w(r_{*})-\frac{(r_{*}^{2}+M^{2})^{2}}{(r_{*}-M)^{2}}w(r_{*})(r_{*}-r_{trap})$
$\displaystyle=$
$\displaystyle\frac{(r_{*}^{2}+M^{2})^{2}}{2r_{*}(r_{*}-M)^{2}}w(r_{*})\Big{(}(r_{*}^{2}+M^{2})-2r_{*}(r_{*}-r_{trap})\Big{)}$
$\displaystyle=$
$\displaystyle\frac{(r_{*}^{2}+M^{2})^{2}}{4(r_{*}-M)^{2}}\big{(}1+\sqrt{2}-\sqrt{3}\big{)}=2r_{*}\big{(}1+\sqrt{2}-\sqrt{3}\big{)}M=c_{0}M^{2},$
where $c_{0}>0$ is a positive constant, explicitly given by
$c_{0}=2(2+\sqrt{3})\big{(}1+\sqrt{2}-\sqrt{3}\big{)}$, and where we used that
$r_{*}^{2}+M^{2}=4r_{*}M$, $(r_{*}-M)^{2}=2Mr_{*}$ and
$r_{*}-r_{trap}=(1+\sqrt{3}-\sqrt{2})M$.
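The algebraic identities for $r_{*}$ used in the last step, and the positivity of $c_{0}$, can be verified directly; a minimal sympy sketch (illustrative only):

```python
import sympy as sp

M = sp.Symbol('M', positive=True)
rstar = (2 + sp.sqrt(3))*M
rtrap = (1 + sp.sqrt(2))*M

assert sp.simplify(rstar**2 + M**2 - 4*rstar*M) == 0
assert sp.simplify((rstar - M)**2 - 2*M*rstar) == 0
assert sp.simplify(rstar - rtrap - (1 + sp.sqrt(3) - sp.sqrt(2))*M) == 0

c0 = 2*(2 + sp.sqrt(3))*(1 + sp.sqrt(2) - sp.sqrt(3))
print(c0, float(c0))  # positive, approximately 5.09
```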
Since
$\mathcal{A}=\frac{2r(r-M)^{4}}{(r^{2}+M^{2})^{3}}\widetilde{\mathcal{A}}$,
the above implies that $\mathcal{A}$ is non-negative, and more precisely:
$\displaystyle\mathcal{A}(r)\geq\frac{2r(r-M)^{4}}{(r^{2}+M^{2})^{3}}\widetilde{\mathcal{A}}(r_{*})\geq\frac{2c_{0}M^{2}r(r-M)^{4}}{(r^{2}+M^{2})^{3}}.$
(37)
Finally, we are left to analyze the positivity of $\mathcal{V}$. With the
choice of $w$ in (36) we compute explicitly
$\displaystyle\partial_{r}\Big{(}(r-M)^{2}\partial_{r}w\Big{)}=$
$\displaystyle\begin{cases}0\ &r\leq r_{*}\\\
-\frac{12M(r-M)^{2}(r^{4}-6M^{2}r^{2}+M^{4})}{(r^{2}+M^{2})^{4}}\
&r>r_{*}.\end{cases}$
Observe that the polynomial $r^{4}-6M^{2}r^{2}+M^{4}$ is positive for
$r>(1+\sqrt{2})M$, and since $r_{*}>(1+\sqrt{2})M$ the coefficient
$\mathcal{V}$ is non-negative everywhere in the exterior region; a symbolic
check of the factorization behind this claim is sketched below.
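A minimal sympy sketch (illustrative only) of this positivity claim:

```python
import sympy as sp

r = sp.Symbol('r')
M = sp.Symbol('M', positive=True)

quartic = r**4 - 6*M**2*r**2 + M**4
factored = (r**2 - (3 + 2*sp.sqrt(2))*M**2)*(r**2 - (3 - 2*sp.sqrt(2))*M**2)
assert sp.expand(quartic - factored) == 0

# (1 + sqrt(2))^2 = 3 + 2 sqrt(2), so the largest root is r = (1 + sqrt(2))M
assert sp.expand((1 + sp.sqrt(2))**2 - (3 + 2*sp.sqrt(2))) == 0
```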
Integrating the relation $u^{\prime}=\frac{(r^{2}+M^{2})^{2}}{(r-M)^{2}}w$
from (34) and using (36) we deduce a closed form for $u$:
$\displaystyle
u=\begin{cases}-\frac{M^{3}}{r-M}+\frac{5Mr}{4}+\frac{r^{2}}{4}+\frac{r^{3}}{12M}+2M^{2}\log(r-M)+C_{1},\
&r\leq r_{*}\\\ r^{2}+C_{2}\ &r\geq r_{*},\end{cases}$
where $C_{1}$, $C_{2}$ are suitable constants, such that $u(r_{trap})=0$ and
$u$ is continuous at $r_{*}$.
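The closed form can be checked by differentiating each branch and comparing with $u^{\prime}=\frac{(r^{2}+M^{2})^{2}}{(r-M)^{2}}w$; a minimal sympy sketch (illustrative only):

```python
import sympy as sp

r = sp.Symbol('r')
M = sp.Symbol('M', positive=True)

uprime_over_w = (r**2 + M**2)**2/(r - M)**2  # u' = this times w

# branch r <= r_*: w = 1/(4M)
u_in = -M**3/(r - M) + 5*M*r/4 + r**2/4 + r**3/(12*M) + 2*M**2*sp.log(r - M)
assert sp.simplify(sp.diff(u_in, r) - uprime_over_w*sp.Rational(1, 4)/M) == 0

# branch r >= r_*: w = 2r(r-M)^2/(r^2+M^2)^2, so u' = 2r
u_out = r**2
w_out = 2*r*(r - M)**2/(r**2 + M**2)**2
assert sp.simplify(sp.diff(u_out, r) - uprime_over_w*w_out) == 0
```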
For $r$ close to $r_{+}=M$, we have
$u\approx-\frac{M^{3}}{r-M}+2M^{2}\log(r-M)+O(1)$, from which we deduce
$\displaystyle\frac{(r-M)^{4}}{(r^{2}+M^{2})}\partial_{r}\Big{(}\frac{u}{r^{2}+M^{2}}\Big{)}\approx$
$\displaystyle\frac{(r-M)^{4}}{(r^{2}+M^{2})}\left(\frac{M}{2}\frac{1}{(r-M)^{2}}+O(\frac{1}{r-M})\right)$
$\displaystyle\approx$ $\displaystyle\frac{1}{4M}(r-M)^{2}+O(r-M)^{3},$
which gives a degeneracy of multiplicity $2$ for the coefficient of
$|\partial_{r}\psi|^{2}$ at the horizon. We can deduce that for $J=0$
$\displaystyle\begin{split}|q|^{2}{\bf
D}^{\mu}\mathcal{P}_{\mu}^{(X,w,J=0)}[\psi]&\gtrsim\dfrac{1}{r}\left(1-\dfrac{M}{r}\right)^{2}|{\partial}_{r}\psi|^{2}+\dfrac{1}{r}\left(1-\dfrac{r_{trap}}{r}\right)^{2}|q|^{2}|\nabla\psi|^{2}+\frac{1}{r^{2}}1_{r>r_{*}}|\psi|^{2}.\end{split}$
(38)
Observe that, in view of the behavior of the function $u$ close to the
horizon, the vectorfield $X$ and the function $w$ are both regular up to the
horizon, in contrast with the sub-extremal case, where the vectorfield $X$
diverges logarithmically approaching the horizon [60]. On the other hand, the main
issue with the estimate (38) is the vanishing of the coefficient of the zero-th
order term for $r\leq r_{*}$. We now fix this issue with a Hardy inequality
adapted to the extremal case.
### 4.2 The global Hardy inequality and trapped control of the time
derivative
Here we make use of the one-form $J$ to obtain positivity of the zero-th order
term in the entire exterior region. From (37) and since for $r>M$ the function
$\frac{r(r-M)^{4}}{(r^{2}+M^{2})^{3}}$ achieves its maximum at
$(3+2\sqrt{2})M>r_{*}=(2+\sqrt{3})M$, we define for any $r_{e}\in(M,r_{*})$
the following minimum $c_{1}:=\min\limits_{r\in[r_{e},r_{*}]}\mathcal{A}(r)$.
Observe that because of the bound (37), we have that $c_{1}>0$, with
$c_{1}\downarrow 0$ as $r_{e}\to M$. Then for $r\in[r_{e},r_{*}]$ we can use
the bound $\mathcal{A}\geq c_{1}$ and (30) to obtain
$\displaystyle\mathcal{A}|{\partial}_{r}\psi|^{2}+\frac{1}{4}|q|^{2}\mathrm{div}\,(J|\psi|^{2})$
$\displaystyle=$
$\displaystyle\mathcal{A}|\partial_{r}\psi|^{2}+\frac{1}{4}|q|^{2}\left(2v\psi\partial_{r}\psi+\left(\partial_{r}v+\frac{2r}{|q|^{2}}v\right)|\psi|^{2}\right)$
$\displaystyle=$
$\displaystyle\mathcal{A}\left(\partial_{r}\psi+\frac{|q|^{2}}{4\mathcal{A}}v\psi\right)^{2}-\frac{|q|^{4}v^{2}}{16\mathcal{A}}|\psi|^{2}+\frac{1}{4}|q|^{2}\left(\partial_{r}v+\frac{2r}{|q|^{2}}v\right)|\psi|^{2}$
$\displaystyle\geq$
$\displaystyle\frac{1}{4}|q|^{2}\left(\partial_{r}v+\frac{2r}{|q|^{2}}v-\frac{|q|^{2}v^{2}}{4\mathcal{A}}\right)|\psi|^{2}$
$\displaystyle\geq$
$\displaystyle\frac{1}{4}|q|^{2}\left(\partial_{r}v+\frac{2r}{|q|^{2}}v-\frac{|q|^{2}v^{2}}{4c_{1}}\right)|\psi|^{2}$
$\displaystyle=$
$\displaystyle\frac{c_{1}}{4}|q|^{2}\left(\left(\frac{v}{c_{1}}\right)^{\prime}+\frac{2r}{|q|^{2}}\left(\frac{v}{c_{1}}\right)-\frac{|q|^{2}}{4}\left(\frac{v}{c_{1}}\right)^{2}\right)|\psi|^{2}.$
We want to find a function $y(r)$ such that for $r\in[r_{e},r_{*}]$
$\displaystyle y^{\prime}+\frac{2r}{|q|^{2}}y-\frac{|q|^{2}}{4}y^{2}>0.$
Using that $r^{2}\leq|q|^{2}\leq r^{2}+M^{2}$ we observe that for a negative
function $y(r)<0$ we can bound
$\displaystyle y^{\prime}+\frac{2r}{|q|^{2}}y-\frac{|q|^{2}}{4}y^{2}\geq
y^{\prime}+\frac{2}{r}y-\frac{r^{2}+M^{2}}{4}y^{2}.$
In particular we will look for a negative function in $r\in[r_{e},r_{*}]$
satisfying $y^{\prime}+\frac{2}{r}y-\frac{r^{2}+M^{2}}{4}y^{2}>0$. A
straightforward computation shows that $y_{0}(r)=-\frac{4}{r(r+M)(r-M)}$ is a
negative solution in $r\in[r_{e},r_{*}]$ to the ODE
$y_{0}^{\prime}+\frac{2}{r}y_{0}-\frac{r^{2}+M^{2}}{4}y_{0}^{2}=0$. Let
$y(r)=\lambda y_{0}(r)$ be a rescaling of $y_{0}$ for any constant
$0<\lambda<1$, then
$\displaystyle y^{\prime}+\frac{2}{r}y-\frac{r^{2}+M^{2}}{4}y^{2}$
$\displaystyle=$
$\displaystyle\lambda\big{(}y_{0}^{\prime}+\frac{2}{r}y_{0}-\frac{r^{2}+M^{2}}{4}y_{0}^{2}\big{)}+\lambda(1-\lambda)\frac{r^{2}+M^{2}}{4}y_{0}^{2}$
$\displaystyle=$
$\displaystyle\lambda(1-\lambda)\frac{r^{2}+M^{2}}{4}y_{0}^{2}>0.$
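Both the Riccati equation for $y_{0}$ and the rescaling identity can be verified symbolically; a minimal sympy sketch (illustrative only):

```python
import sympy as sp

r, lam = sp.symbols('r lambda')
M = sp.Symbol('M', positive=True)

y0 = -4/(r*(r + M)*(r - M))
ric = lambda y: sp.diff(y, r) + 2*y/r - (r**2 + M**2)*y**2/4

assert sp.simplify(ric(y0)) == 0  # y0 solves the Riccati ODE
assert sp.simplify(ric(lam*y0) - lam*(1 - lam)*(r**2 + M**2)*y0**2/4) == 0
```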
In particular, for $y(r)=\frac{1}{2}y_{0}(r)=-\frac{2}{r(r+M)(r-M)}<0$, we
have in $r\in[r_{e},r_{*}]$
$\displaystyle y^{\prime}+\frac{2r}{|q|^{2}}y-\frac{|q|^{2}}{4}y^{2}$
$\displaystyle\geq$
$\displaystyle\frac{1}{2}(1-\frac{1}{2})\frac{r^{2}+M^{2}}{4}y_{0}^{2}=\frac{r^{2}+M^{2}}{r^{2}(r+M)^{2}(r-M)^{2}},$
and therefore for
$\displaystyle v(r)=c_{1}y(r)=-\frac{2c_{1}}{r(r+M)(r-M)},$ (39)
we have
$\displaystyle\mathcal{A}|{\partial}_{r}\psi|^{2}+\frac{1}{4}|q|^{2}\mathrm{div}\,(J|\psi|^{2})$
$\displaystyle\geq$
$\displaystyle\frac{c_{1}}{4}\frac{r^{2}+M^{2}}{(r+M)^{2}(r-M)^{2}}|\psi|^{2}.$
To conclude, combining the above Hardy inequality with the bound (38) we can
improve it to
$\displaystyle|q|^{2}{\bf D}^{\mu}\mathcal{P}_{\mu}^{(X,w,J)}[\psi]$
$\displaystyle\gtrsim$
$\displaystyle\frac{1}{r}\left(1-\frac{M}{r}\right)^{2}|{\partial}_{r}\psi|^{2}+\frac{1}{r}\left(1-\dfrac{r_{trap}}{r}\right)^{2}|q|^{2}|\nabla\psi|^{2}+\frac{1}{r^{2}}1_{r>r_{e}}|\psi|^{2},$
(40)
for $r_{e}>M$.
The only term missing from the above right-hand side to give the
integral appearing in (3) of Theorem 1.1 is the trapped control on the time
derivative. For a function $w_{T}$ of $r$, we have from (21)
$\displaystyle|q|^{2}{\bf D}^{\mu}\mathcal{P}_{\mu}^{(X=0,w_{T},J=0)}[\psi]$
$\displaystyle=$ $\displaystyle-\frac{1}{4}|q|^{2}\square_{\bf
g}w_{T}|\psi|^{2}+\frac{1}{2}w_{T}|q|^{2}({\partial}_{\lambda}\psi{\partial}^{\lambda}\psi)$
$\displaystyle=$ $\displaystyle-\frac{1}{4}|q|^{2}\square_{\bf
g}w_{T}|\psi|^{2}+\frac{1}{2}w_{T}\big{(}\Delta|{\partial}_{r}\psi|^{2}+\frac{1}{\Delta}\mathcal{R}^{{\alpha}{\beta}}{\partial}_{\alpha}\psi{\partial}_{\beta}\psi\big{)}$
$\displaystyle=$
$\displaystyle-\frac{1}{2}w_{T}\frac{(r^{2}+M^{2})^{2}}{(r-M)^{2}}|\partial_{t}\psi|^{2}+\frac{1}{2}w_{T}(r-M)^{2}|{\partial}_{r}\psi|^{2}+\frac{1}{2}w_{T}O^{{\alpha}{\beta}}{\partial}_{\alpha}\psi{\partial}_{\beta}\psi$
$\displaystyle-\frac{1}{4}|q|^{2}\square_{\bf g}w_{T}|\psi|^{2}.$
We choose $w_{T}$ to be given by
$\displaystyle w_{T}=-\frac{(r-M)^{2}(r-r_{trap})^{2}}{r^{7}},$
and we have
$\displaystyle\begin{split}|q|^{2}{\bf
D}^{\mu}\mathcal{P}_{\mu}^{(X=0,w_{T},J=0)}[\psi]&=\frac{1}{2}\frac{(r-r_{trap})^{2}(r^{2}+M^{2})^{2}}{r^{7}}|\partial_{t}\psi|^{2}-\frac{1}{2}\frac{(r-M)^{4}(r-r_{trap})^{2}}{r^{7}}|{\partial}_{r}\psi|^{2}\\\
&-\frac{1}{2}\frac{(r-M)^{2}(r-r_{trap})^{2}}{r^{7}}O^{{\alpha}{\beta}}{\partial}_{\alpha}\psi{\partial}_{\beta}\psi-\frac{1}{4}|q|^{2}\square_{\bf
g}w_{T}|\psi|^{2}.\end{split}$ (41)
We explicitly compute
$\displaystyle-\frac{1}{4}|q|^{2}\square_{\bf g}w_{T}$ $\displaystyle=$
$\displaystyle-\frac{1}{4}{\partial}_{r}((r-M)^{2}\partial_{r}w_{T})$
$\displaystyle=$
$\displaystyle\frac{(r-M)^{2}}{2r^{9}}\Big{[}3r^{4}-3\left(9+4\sqrt{2}\right)Mr^{3}+\left(93+68\sqrt{2}\right)M^{2}r^{2}$
$\displaystyle-7\left(21+16\sqrt{2}\right)M^{3}r+(56\sqrt{2}+84)M^{4}\Big{]}$
$\displaystyle=$
$\displaystyle\frac{3(r-M)^{2}}{2r^{9}}\big{(}r-x_{1}M\big{)}\big{(}r-x_{2}M\big{)}\big{(}r-x_{3}M\big{)}\big{(}r-x_{4}M\big{)},$
where $1<x_{1}<x_{2}<x_{3}<x_{4}$ are the four real roots of
$\displaystyle
3x^{4}-3\left(9+4\sqrt{2}\right)x^{3}+\left(93+68\sqrt{2}\right)x^{2}-7\left(21+16\sqrt{2}\right)x+56\sqrt{2}+84=0.$
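The explicit derivative computation and the factorization into the quartic can be verified symbolically; a minimal sympy sketch (illustrative only; the printed numerical roots are the $x_{i}$):

```python
import sympy as sp

r = sp.Symbol('r', positive=True)
M = sp.Symbol('M', positive=True)
rtrap = (1 + sp.sqrt(2))*M

wT = -(r - M)**2*(r - rtrap)**2/r**7
lhs = -sp.Rational(1, 4)*sp.diff((r - M)**2*sp.diff(wT, r), r)

quartic = (3*r**4 - 3*(9 + 4*sp.sqrt(2))*M*r**3 + (93 + 68*sp.sqrt(2))*M**2*r**2
           - 7*(21 + 16*sp.sqrt(2))*M**3*r + (56*sp.sqrt(2) + 84)*M**4)
rhs = (r - M)**2/(2*r**9)*quartic

assert sp.simplify(lhs - rhs) == 0
print(sp.Poly(quartic.subs(M, 1), r).nroots())  # four real roots, all > 1
```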
Even though $-\frac{1}{4}|q|^{2}\square_{\bf g}w_{T}$ can be negative for
$r\in[x_{1}M,x_{4}M]$, it must have a finite negative lower bound there. In
particular, by choosing $r_{e}\in(M,\min(x_{1}M,r_{*}))$, there exists a
sufficiently small $\delta_{T}>0$ such that
$\displaystyle\frac{1}{r^{2}}1_{r>r_{e}}-\frac{1}{4}\delta_{T}|q|^{2}\square_{\bf
g}w_{T}\geq\frac{1}{r^{2}}\left(1-\dfrac{M}{r}\right)^{2}.$
Finally combining (40) and (41) we deduce
$\displaystyle{\bf D}^{\mu}\mathcal{P}_{\mu}^{(X,w+\delta_{T}w_{T},J)}[\psi]$
$\displaystyle\gtrsim$
$\displaystyle\frac{1}{r^{3}}\left(1-\frac{M}{r}\right)^{2}|\partial_{r}\psi|^{2}+\frac{1}{r}\left(1-\frac{r_{trap}}{r}\right)^{2}\Big{(}\frac{1}{r^{2}}(\partial_{t}\psi)^{2}+|\nabla\psi|^{2}\Big{)}$
$\displaystyle+\frac{1}{r^{4}}\left(1-\dfrac{M}{r}\right)^{2}|\psi|^{2}.$
Observe that the degeneracy at the horizon for the $\partial_{r}$ derivative
is consistent with the conservation laws along the event horizon [12], which
imply non-decay of the transversal derivative along the event horizon.
We are finally set to apply the divergence theorem to the current
$\mathcal{P}_{\mu}^{(X,w+\delta_{T}w_{T},J)}[\psi]$. Observe that, using the
simple Hardy estimate
$\displaystyle\int_{0}^{\infty}|\psi|^{2}dx\lesssim\int_{0}^{\infty}x^{2}|\partial_{x}\psi|^{2}dx$
with $x=r-r_{+}$ (see also [4]) to bound the zero-th order term, one can
estimate the boundary terms
$\int_{\Sigma_{\tau}}\mathcal{P}_{\mu}^{(X,w+\delta_{T}w_{T},J)}[\psi]n^{\mu}_{\Sigma_{\tau}}$
by a large constant times a positive definite energy current, such as
$\displaystyle E^{(T)}[\psi](\tau)$ $\displaystyle=$
$\displaystyle\int_{\Sigma_{\tau}}|T\psi|^{2}+\big{(}1-\frac{M}{r}\big{)}^{2}|\partial_{r}\psi|^{2}+|\nabla\mkern-13.0mu/\,\psi|^{2}.$
We therefore deduce
$\displaystyle\int_{0}^{\tau}\int\frac{1}{r^{3}}\left(1-\frac{M}{r}\right)^{2}|\partial_{r}\psi|^{2}+\frac{1}{r}\left(1-\frac{r_{trap}}{r}\right)^{2}\big{(}\frac{1}{r^{2}}(\partial_{t}\psi)^{2}+|\nabla\mkern-13.0mu/\,\psi|^{2}\big{)}+\frac{1}{r^{4}}\left(1-\dfrac{M}{r}\right)^{2}|\psi|^{2}$
$\displaystyle\leq$ $\displaystyle
C\Big{(}E^{(T)}[\psi](\tau)+\int_{\mathcal{H}^{+}(0,\tau)}\mathcal{P}^{(T)}_{\mu}[\psi]n_{\mathcal{H}^{+}}^{\mu}+E^{(T)}[\psi](0)\Big{)}.$
Using the energy boundedness statement (31), we conclude the
proof of Theorem 1.1.
## References
* [1] Alinhac S., Energy multipliers for perturbations of the Schwarzschild metric. Commun. Math. Phys. 288, 199-224 (2009)
* [2] Andersson L., Bäckdahl T., Blue P., Ma S. Stability for linearized gravity on the Kerr spacetime. arXiv:1903.03859 (2019)
* [3] Andersson L., Bäckdahl T., Blue P., Ma S. Nonlinear radiation gauge for near Kerr spacetimes, arXiv:2108.03148.
* [4] Andersson L., Blue P. Hidden symmetries and decay for the wave equation on the Kerr spacetime. Ann. of Math. (2) 182, no.3, 787-853 (2015)
* [5] Angelopoulos Y., Aretakis S., Gajic D. A vector field approach to almost-sharp decay for the wave equation on spherically symmetric, stationary spacetimes. Annals of PDE, 4:15 (2018)
* [6] Angelopoulos Y., Aretakis S., Gajic D. Late-time asymptotics wave equation on spherically symmetric, stationary spacetimes. Adv. in Math. 323, 529–621 (2018)
* [7] Angelopoulos Y., Aretakis S., Gajic D. Late-time asymptotics for the wave equation on extremal Reissner–Nordström backgrounds. Adv. in Math. 375, 107363 (2020).
* [8] Apetroaie M., Instability of gravitational and electromagnetic perturbations of extremal Reissner-Nordström spacetime. arXiv:2211.09182 (2022)
* [9] Aretakis S. Stability and instability of extreme Reissner-Nordström black hole spacetimes for linear scalar perturbations I. Communications in mathematical physics 307 (1), 17-63 (2011)
* [10] Aretakis S. Stability and instability of extreme Reissner-Nordström black hole spacetimes for linear scalar perturbations II. Annales Henri Poincaré 12 (8), 1491-1538 (2011)
* [11] Aretakis S. Decay of axisymmetric solutions of the wave equation on extreme Kerr backgrounds. J. Funct. Anal. 263, 2770 (2012)
* [12] Aretakis S. Horizon instability of extremal black holes. Advances in Theoretical and Mathematical Physics 19. 507-530 (2015)
* [13] Benomio G., A new gauge for gravitational perturbations of Kerr spacetimes II: The linear stability of Schwarzschild revisited. arXiv:2211.00616. (2022)
* [14] Blue P., Soffer A., Semilinear wave equations on the Schwarzschild manifold. I. Local decay estimates. Adv. Differ. Equ. 8(5), 595–614 (2003)
* [15] Blue P., Soffer A., Errata for “Global existence and scattering for the nonlinear Schrödinger equation on Schwarzschild manifolds” , “Semilinear wave equations on the Schwarzschild manifold I: Local Decay Estimates”, and “ The wave equation on the Schwarzschild metric II: Local Decay for the spin 2 Regge Wheeler equation” , gr-qc/0608073, 6 pages.
* [16] Blue P., Sterbenz J., Uniform decay of local energy and the semi-linear wave equation on Schwarzschild space, Comm. Math. Phys. 268 (2006), 481–504.
* [17] Carter B., Global structure of the Kerr family of gravitational fields, Physical Review. 174 (5): 1559–1571 (1968)
* [18] Christodoulou D., Klainerman S., The global nonlinear stability of the Minkowski space, Princeton University Press (1993).
* [19] Civin D., Stability of charged rotating black holes for linear scalar perturbations, Ph.D. thesis, University of Cambridge. Available at http://www.repository.cam.ac.uk/handle/1810/247 (2014)
* [20] Dafermos M., Holzegel G., Rodnianski I. The linear stability of the Schwarzschild solution to gravitational perturbations. Acta Mathematica, 222: 1-214 (2019)
* [21] Dafermos M., Holzegel G., Rodnianski I. Boundedness and decay for the Teukolsky equation on Kerr spacetimes I: the case $|a|\ll M$. Ann. PDE, 5, 2 (2019)
* [22] Dafermos M., Holzegel G., Rodnianski I., Taylor M., The non-linear stability of the Schwarzschild family of black holes, arXiv:2104.0822.
* [23] Dafermos M., Rodnianski I. The wave equation on Schwarzschild-de Sitter spacetimes. arXiv:0709.2766. (2007)
* [24] Dafermos M., Rodnianski I. A new physical-space approach to decay for the wave equation with applications to black hole spacetimes. XVIth International Congress on Mathematical Physics, 421-433, (2009)
* [25] Dafermos M., Rodnianski I. The red-shift effect and radiation decay on black hole spacetimes. Comm. Pure Appl. Math, 62:859-919 (2009)
* [26] Dafermos M., Rodnianski I. Decay for solutions of the wave equation on Kerr exterior spacetimes I-II: The cases $|a|\ll M$ or axisymmetry. arXiv:1010.5132 (2010)
* [27] Dafermos M., Rodnianski I., The black hole stability problem for linear scalar perturbations, Proceedings of the Twelfth Marcel Grossmann Meeting on General Relativity, T. Damour et al (ed.) (2011).
* [28] Dafermos M., Rodnianski I., A proof of the uniform boundedness of solutions to the wave equation on slowly rotating Kerr backgrounds, Invent. Math. 185(3), 467–559 (2011)
* [29] Dafermos M., Rodnianski I. Lectures on black holes and linear waves. Evolution equations, Clay Mathematics Proceedings, Vol. 17, pages 97-205 Amer. Math. Soc. (2013)
* [30] Dafermos M., Rodnianski I., Shlapentokh-Rothman Y. Decay for solutions of the wave equation on Kerr exterior spacetimes III: The full subextremal case $|a|<M$. Annals of Math. 183, no. 3, 787-913 (2016)
* [31] Fang A., Nonlinear stability of the slowly-rotating Kerr-de Sitter family. arXiv:2112.07183 (2021)
* [32] Fang A., Linearized stability of the slowly-rotating Kerr-de Sitter family. arXiv:2207.07902 (2022)
* [33] Giorgi E. Boundedness and decay for the Teukolsky system of spin $\pm 2$ on Reissner-Nordström spacetime: the case $|Q|\ll M$. Ann. Henri Poincaré, 21, 2485 - 2580 (2020)
* [34] Giorgi E. Boundedness and decay for the Teukolsky system of spin $\pm 1$ on Reissner–Nordström spacetime: the $\ell=1$ spherical mode. Class. Quantum Grav. 36, 205001 (2019)
* [35] Giorgi E. The linear stability of Reissner-Nordström spacetime for small charge. Annals of PDE, 6, 8 (2020)
* [36] Giorgi E. The linear stability of Reissner-Nordström spacetime: the full sub-extremal range $|Q|<M$. Commun. Math. Phys. 380, 1313–1360 (2020)
* [37] Giorgi E., Electromagnetic-gravitational perturbations of Kerr-Newman spacetime: the Teukolsky and Regge-Wheeler equations J. Hyperbolic Differ. Equ., Vol. 19, No. 01, pp. 1-139 (2022)
* [38] Giorgi E., The Carter tensor and the physical-space analysis in perturbations of Kerr-Newman spacetime preprint arXiv:2105.14379, to appear in J. Differential Geom. (2022)
* [39] Giorgi E., Klainerman S., Szeftel J., Wave equation estimates and the nonlinear stability of slowly rotating Kerr black holes. arXiv:2205.14808 (2022), 917 pages
* [40] Häfner D., Hintz P., Vasy A. Linear stability of slowly rotating Kerr black holes. Invent. Math., 223 (2021), 1227–1406.
* [41] Hintz P. Non-linear stability of the Kerr-Newman-de Sitter family of charged black holes. Annals of PDE, 4(1):11 (2018)
* [42] Hintz P., Vasy A. The global non-linear stability of the Kerr-de Sitter family of black holes. Acta Math., 220:1-206 (2018)
* [43] Hintz P., Vasy A., Stability of Minkowski space and polyhomogeneity of the metric, Ann. PDE 6, 2 (2020).
* [44] Hung P.-K. , Keller J. , Wang M.-T. Linear Stability of Schwarzschild Spacetime: The Cauchy Problem of Metric Coefficients. J. Differential Geom. 116 (3) 481 - 541 (2020)
* [45] Johnson T. The linear stability of the Schwarzschild solution to gravitational perturbations in the generalised wave gauge. Ann. PDE 5, 13 (2019)
* [46] Kay, B.S., Wald, R.M., Linear stability of Schwarzschild under perturbations which are nonvanishing on the bifurcation two sphere. Class. Quantum Grav. 4, 893–898 (1987)
* [47] Klainerman S., Szeftel J. Global Non-Linear Stability of Schwarzschild Spacetime under Polarized Perturbations. Annals of Math Studies, 210. Princeton University Press, Princeton NJ, 2020, xviii+856 pp.
* [48] Klainerman S., Szeftel J., Construction of GCM spheres in perturbations of Kerr, arXiv:1911.00697, to appear in Annals of PDE.
* [49] Klainerman S., Szeftel J., Effective results in uniformization and intrinsic GCM spheres in perturbations of Kerr, arXiv:1912.12195. To appear in Annals of PDE.
* [50] Klainerman S., Szeftel J., Kerr stability for small angular momentum, arXiv:2104.11857.
* [51] Lindblad H., Rodnianski, I. Global existence in the Einstein Vacuum equations in wave co-ordinates. Comm. Math. Phys., 256 (2005), 43–110.
* [52] Ma S. Uniform energy bound and Morawetz estimate for extreme component of spin fields in the exterior of a slowly rotating Kerr black hole II: linearized gravity. Commun. Math. Phys. 377, 2489–2551 (2020)
* [53] J. Marzuola, J. Metcalfe, D. Tataru and M. Tohaneanu, Strichartz estimates on Schwarzschild black hole backgrounds, Comm. Math. Phys. 293 (2010), 37–83.
* [54] Mavrogiannis G. Morawetz estimates without relative degeneration and exponential decay on Schwarzschild de Sitter spacetimes. arXiv:2111.09494 (2021)
* [55] Mavrogiannis G. Quasilinear wave equations on Schwarzschild de Sitter. arXiv:2111.09495 (2021)
* [56] C. Morawetz, Decay of solutions of the exterior initial boundary value problem for the wave equation. Comm. Pure and App. Math. 14 (1961), 561–568.
* [57] Shen D., Construction of GCM hypersurfaces in perturbations of Kerr, arXiv:2205.12336.
* [58] Shlapentokh-Rothman Y., Quantitative Mode Stability for the Wave Equation on the Kerr Spacetime, Ann. Henri Poincaré. 16 (2015), 289–345.
* [59] Shlapentokh-Rothman Y., Teixeira da Costa R. Boundedness and decay for the Teukolsky equation on Kerr in the full subextremal range $|a|<M$: frequency space analysis, arXiv preprint arXiv:2007.07211.
* [60] Stogin J., Nonlinear wave dynamics in black hole spacetimes, Ph.D. thesis, Princeton University. Available at http://arks.princeton.edu/ark:/88435/dsp01p5547t983 (2017)
* [61] Tataru D., Tohaneanu M. A local energy estimate on Kerr black hole background. IMRN no.2, 248-292 (2011)
* [62] Teixeira da Costa R. Mode stability for the Teukolsky equation on extremal and subextremal Kerr spacetimes, Commun. Math. Phys., 378(1), 705-781 (2020)
* [63] Whiting B. F. Mode stability of the Kerr black hole. J. Math. Phys., 30 (6):1301-1305 (1989)
# Beautiful mixing and CP violation at LHCb
Philippe d’Argent on behalf of the LHCb collaboration CERN, Geneva,
Switzerland
###### Abstract
Precision measurements of beauty hadron decays are sensitive probes of the
Standard Model and a promising way to look for new physics phenomena far
beyond the energy scale accessible for direct production searches. This
article reviews recent measurements of mixing and CP violation in beauty
decays performed at the LHCb experiment that have been presented at the
$55^{th}$ Rencontres de Moriond QCD conference.
## 1 The Standard Model and beyond
In the framework of the Standard Model of particle physics, the charge-parity
(CP) symmetry between quarks and antiquarks is broken by a single complex
phase in the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix. The
unitarity of this matrix leads to the condition
$V_{ud}^{\phantom{*}}V^{*}_{ub}+V_{cd}^{\phantom{*}}V^{*}_{cb}+V_{td}^{\phantom{*}}V^{*}_{tb}=0$,
where $V_{ij}$ are the complex elements of the CKM matrix. This equation can
be visualised as a triangle in the complex plane with angles $\alpha$, $\beta$
and $\gamma$. A key consistency test of the Standard Model is to verify the
unitarity conditions by over-constraining the CKM matrix with various
independent measurements sensitive to distinct combinations of matrix
elements. While the magnitudes of the CKM matrix elements can be determined
from the decay rates of respective flavour-changing transitions, measurements
of CP asymmetries generally permit determining the CKM phases. Here, the angle
$\gamma\equiv\rm{arg}[-(V_{ud}^{\phantom{*}}V_{ub}^{*})/(V_{cd}^{\phantom{*}}V_{cb}^{*})]$
has particularly interesting features as it is the only CKM angle that can be
measured in tree-level decays. In such decays, the interpretation of physical
observables (rates and CP asymmetries) in terms of the underlying CKM
parameters is subject to negligible theoretical uncertainties. Hence, a
precision measurement of $\gamma$ provides a Standard Model benchmark, to be
compared with indirect determinations from other CKM matrix observables which
are more susceptible to new physics phenomena beyond the Standard Model.
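As a quick illustration of how $\gamma$ arises from the matrix elements, the following Python sketch evaluates the defining expression at leading order in the Wolfenstein parametrization. The input values are assumed, round numbers of a plausible magnitude, not measured results:

```python
import cmath
import math

# assumed illustrative Wolfenstein-type inputs (not fit results)
lam, A = 0.225, 0.82
rhobar, etabar = 0.16, 0.35

V_ud = 1 - lam**2/2
V_ub = A*lam**3*(rhobar - 1j*etabar)
V_cd = -lam
V_cb = A*lam**2

gamma = cmath.phase(-(V_ud*V_ub.conjugate())/(V_cd*V_cb.conjugate()))
print(math.degrees(gamma))  # ~65 degrees for these inputs
```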
## 2 Direct CP violation in beauty decays
The most stringent constraints on the CKM angle $\gamma$ come from
measurements of direct CP violation in $B^{\mp}\to DK^{\mp}$ decays, where $D$
represents an admixture of the $D^{0}$ and $\bar{D}^{0}$ flavour states. While
the $B^{-}\to D^{0}K^{-}$ decay proceeds via a $b\to c\bar{u}s$ quark-level
transition, a $b\to u\bar{c}s$ transition leads to the
$B^{-}\to\bar{D}^{0}K^{-}$ decay. Provided that the charm meson decays into a
final state, $f$, which is accessible for both flavour states, phase
information can be determined from the interference between these two decay
paths. The relative phase between the corresponding decay amplitudes has both
CP-violating ($\gamma$) and CP-conserving ($\delta_{B}^{DK}$) contributions. A
measurement of the decay rate asymmetry between $B^{+}$ and $B^{-}$ mesons
thus gives sensitivity to $\gamma$. The sensitivity is driven by the size of
$r_{B}^{DK}$, the ratio of the magnitudes of the $B^{-}\to\bar{D}^{0}K^{-}$
and $B^{-}\to D^{0}K^{-}$ amplitudes. Similar interference effects also occur
in $B^{\mp}\to D\pi^{\mp}$ decays, albeit with a significantly reduced
sensitivity to the phases due to additional Cabibbo-suppression
($r_{B}^{DK}\approx 20\,r_{B}^{D\pi}$). Two recent measurements of direct CP
violation in $B^{\mp}\to Dh^{\mp}$ ($h\in\\{K,\pi\\}$) decays study two-body
($D\to h^{\pm}h^{\mp}$) and self-conjugate three-body ($D\to K^{0}_{\rm
s}h^{\pm}h^{\mp}$) charm decays, respectively. Both analyses use data
accumulated with the LHCb detector over the period from 2011 to 2018 in $pp$
collisions at energies of $\sqrt{s}=7,8$ and $13$ TeV, corresponding to a
total integrated luminosity of approximately $9\,\rm{fb}^{-1}$.
The first analysis [1] considers the CP-eigenstates $D\to\pi^{\pm}\pi^{\mp}$
and $D\to K^{\pm}K^{\mp}$ as well as $D\to K^{+}\pi^{-}$, where the $D^{0}\to
K^{+}\pi^{-}$ and $\bar{D}^{0}\to K^{+}\pi^{-}$ decays are related by the
amplitude magnitude ratio $r_{D}^{K\pi}$ and the strong-phase difference
$\delta_{D}^{K\pi}$. For the latter case, the similar magnitudes of
$r_{B}^{DK}$ and $r_{D}^{K\pi}$ lead to significant interference between the
two decay paths (favoured B decay followed by suppressed D decay, and
suppressed B decay followed by favoured D decay). As is evident from the
invariant-mass distributions shown in Fig. 2, this results in a huge asymmetry
between the $B^{-}$ and $B^{+}$ decay rates. Moreover, the analysis includes
partially reconstructed $B^{\mp}\to D^{*}h^{\mp}$ decays, in which the vector
$D^{*}$ meson decays to either the $D\pi^{0}$ or $D\gamma$ final state. In
total 28 observables (CP asymmetries and decay rate ratios) are measured. The
combined information allows deriving tight constraints on the underlying
physics parameters $r_{B}^{X},\delta_{B}^{X},r_{D}^{f},\delta_{D}^{f}$ and
$\gamma$
($X\in\\{DK,D\pi,D^{*}K,D^{*}\pi\\},f\in\\{K^{\pm}\pi^{\mp},K^{+}K^{-},\pi^{+}\pi^{-}\\}$)
as displayed in Fig. 2 for the ($r_{B}^{DK},\gamma$) plane.
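To illustrate the interference mechanism behind these observables, the following Python sketch evaluates the standard ADS-type rate asymmetry for the suppressed mode, where the rates go as $R(B^{\mp})\propto r_{B}^{2}+r_{D}^{2}+2r_{B}r_{D}\cos(\delta_{B}+\delta_{D}\mp\gamma)$; all parameter values below are assumed illustrative numbers, not the measured ones:

```python
import math

r_B, d_B = 0.10, math.radians(130)  # assumed B amplitude ratio and strong phase
r_D, d_D = 0.06, math.radians(190)  # assumed D amplitude ratio and strong phase
gamma = math.radians(67)

def rate(sign):
    # sign = -1 for B-, +1 for B+
    return r_B**2 + r_D**2 + 2*r_B*r_D*math.cos(d_B + d_D + sign*gamma)

A_cp = (rate(-1) - rate(+1))/(rate(-1) + rate(+1))
print(A_cp)  # an O(40%) asymmetry for comparable r_B and r_D
```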
Similarly, the analysis of $D\to K^{0}_{\rm s}h^{\pm}h^{\mp}$ decays [2]
investigates differences in the phase-space distributions of $B^{+}$ and
$B^{-}$ meson decays. To interpret the result in terms of the physical
observables, knowledge of the strong-phase variation over the Dalitz plot
of the $D$ decay is required. A model-unbiased approach is employed that uses
direct measurements of the strong-phase difference between $D^{0}$ and
$\bar{D}^{0}$ decays, averaged over regions of the phase space. These strong-
phase differences have been measured by the CLEO and the BESIII collaborations
using quantum correlated pairs of $D$ mesons produced in decays of
$\psi(3770)$. The Dalitz-plot binning scheme is optimized with the help of an
amplitude model. With this procedure, the CKM angle $\gamma$ is determined to
be $\gamma=(68.7^{+5.2}_{-5.1})^{\circ}$, the most precise single measurement
to date. The results are in good agreement with the $D\to h^{\pm}h^{\mp}$
analysis and are crucial to resolve the remaining ambiguities in the parameter
space, see Fig. 2.
Figure 1: Invariant-mass distribution of $B^{-}\to[K^{+}\pi^{-}]_{D}K^{-}$
(left) and $B^{+}\to[K^{-}\pi^{+}]_{D}K^{+}$ (right) candidates with the fit
projections overlaid. The signal components (red peaks) show a huge asymmetry.
Partially reconstructed decays are visible at low invariant mass.
Figure 2: Confidence region in the ($r_{B}^{DK},\gamma$) plane for both
$B^{\mp}\to D^{(*)}h^{\mp}$ analyses.
The family of $B\to K\pi$ decays receives significant contributions from
loop-level transitions, providing a powerful probe for new physics phenomena.
Measurements of direct CP violation in these channels have revealed
significant deviations from the expected isospin symmetry, an anomaly known as
the $K\pi$ puzzle. The reconstruction of the $B^{+}\to K^{+}\pi^{0}$ decay is
particularly challenging at a hadron collider as no B-meson decay vertex can
be reconstructed from a single charged track. Charged kaons that are
inconsistent with originating from the primary collision point but consistent
with the reconstructed trajectory of the b-meson candidate are selected from a
data sample corresponding to a luminosity of $5.4\rm{fb}^{-1}$. The CP
asymmetry between $B^{-}$ and $B^{+}$ decay rates is found to be [3]
$A_{CP}(B^{+}\to K^{+}\pi^{0})=0.025\pm 0.015\pm 0.006\pm 0.003$, where the
uncertainties are statistical, systematic and due to external inputs. This
result is consistent with the world average and exceeds its precision. It
confirms and sharpens the observed anomalous difference between
the direct CP asymmetries of the $B^{+}\to K^{+}\pi^{0}$ and $B^{0}\to
K^{+}\pi^{-}$ decays.
## 3 Mixing-induced CP violation in beauty decays
Neutral $B_{s}^{0}$ mesons can oscillate into their antiparticle counterparts
via quantum loop processes opening additional mechanisms for CP symmetry
breaking. The frequency of this oscillation, $\Delta m_{s}$, is an important
parameter of the Standard Model and provides powerful constraints in global CKM
fits. The mixing from $B_{s}^{0}$ to $\bar{B}_{s}^{0}$ occurs about three
million million times per second, making it a major experimental challenge to
resolve it. Due to the excellent decay vertex resolution and track momentum
resolution, the LHCb detector is ideally suited for this task. Two recent
measurements of $\Delta m_{s}$ use flavour specific $B_{s}^{0}\to
D_{s}^{-}\pi^{+}\pi^{-}\pi^{+}$ ($9\,\rm{fb}^{-1}$) [4] and $B_{s}^{0}\to
D_{s}^{-}\pi^{+}$ ($6\,\rm{fb}^{-1}$) [5] decays, respectively. To determine
if a neutral meson oscillated into its antiparticle, knowledge of the
initially created flavour eigenstate is required. This is achieved by using a
combination of several flavour-tagging algorithms that exploit different
features of the b-hadron production process. Figure 3 shows the oscillation
pattern of signal decays having the same flavour at the production and decay,
and those, for which the flavour has changed. Both measurements of the
oscillation frequency are significantly more precise than the current world-
average value. Their combination with previous LHCb measurements yields
$\Delta m_{s}=17.7656\pm 0.0057\,\rm{ps}^{-1}$, a crucial legacy measurement
of the original LHCb detector.
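As a quick arithmetic illustration of the quoted frequency (rounded values, for orientation only):

```python
import math

dms = 17.7656e12  # s^-1, the measured oscillation frequency Delta m_s
period = 2*math.pi/dms

print(period)           # ~3.5e-13 s, i.e. one oscillation every ~0.35 ps
print(dms/(2*math.pi))  # ~2.8e12, roughly "three million million" per second
```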
Figure 3: Decay-time distribution of flavour-tagged $B_{s}^{0}\to
D_{s}^{-}\pi^{+}\pi^{-}\pi^{+}$ signal decays (left) and mixing asymmetry for
$B_{s}^{0}\to D_{s}^{\mp}K^{\pm}\pi^{\pm}\pi^{\mp}$ signal decays folded into
one oscillation period (right). The fit projections are overlaid (lines).
Interference between the amplitudes of a $B_{s}^{0}$ meson decaying into a CP
eigenstate through $b\to c\bar{c}s$ either directly or after oscillation to a
$\bar{B}_{s}^{0}$ meson gives rise to the CP-violating phase
$\phi_{s}\approx-2\beta_{s}$ with
$\beta_{s}\equiv\rm{arg}[-(V_{ts}^{\phantom{*}}V_{tb}^{*})/(V_{cs}^{\phantom{*}}V_{cb}^{*})]$.
The precise measurement of this phase is of high interest because of its
potential sensitivity to new particles altering the mixing amplitudes. A time-
dependent angular analysis of $B_{s}^{0}\to J/\psi\phi$ decays with $J/\psi\to
e^{+}e^{-}$ and $\phi\to K^{+}K^{-}$ determines the mixing phase to be
$\phi_{s}=0.00\pm 0.28\pm 0.05\,\rm{rad}$ ($3\,\rm{fb}^{-1}$) [6]. This is the
first measurement of $\phi_{s}$ with an electron pair in the final state. The
result shows no evidence of CP violation and is consistent with previous
measurements and the Standard Model prediction. It also constitutes an
important cross-check for the results with muons in the final state with
independent systematic uncertainties.
Complementary to the $\gamma$ measurements in charged b-hadron decays, mixing-
induced CP violation in $B_{s}^{0}\to D_{s}^{\mp}K^{\pm}\pi^{\pm}\pi^{\mp}$
decays provides sensitivity to the weak phase $\gamma-2\beta_{s}$. This is
studied for the first time using $9\,\rm{fb}^{-1}$ of $pp$ collision data
recorded by the LHCb detector [4]. Due to the multi-body final state, the
hadronic parameters vary across the five-dimensional phase space of the decay.
A time-dependent amplitude analysis is performed to disentangle the various
intermediate-state components contributing via $b\to c$ or $b\to u$ quark-
level transitions. The prominent contributions are found to be the cascade
decays $B_{s}^{0}\to D_{s}^{\mp}K_{1}(1270)^{\pm}$ and $B_{s}^{0}\to
D_{s}^{\mp}K_{1}(1400)^{\pm}$ with $K_{1}(1270)^{\pm}\to
K^{*}(892)^{0}\pi^{\pm},K^{\pm}\rho(770)^{0},K^{*}_{0}(1430)^{0}\pi^{\pm}$ as
well as $K_{1}(1400)^{\pm}\to K^{*}(892)^{0}\pi^{\pm}$. Figure 3 shows the mixing
asymmetry for final state $f=D_{s}^{-}K^{+}\pi^{+}\pi^{-}$, defined as
$A_{\text{mix}}^{f}(t)=(N_{f}(t)-\bar{N}_{f}(t))/(N_{f}(t)+\bar{N}_{f}(t))$,
where $N_{f}(t)$ ($\bar{N}_{f}(t)$) denote the number of initially produced
$B_{s}^{0}$ ($\bar{B}_{s}^{0}$) mesons. The equivalent mixing asymmetry for
the CP-conjugate process, $A_{\text{mix}}^{\bar{f}}(t)$, shows a phase shift
related to the weak phase $\gamma-2\beta_{s}$, signifying time-dependent CP
violation. The CKM angle $\gamma$ is determined to be $\gamma=(44\pm
12)^{\circ}$ by taking the mixing phase $\beta_{s}$ as external input. An
alternative model-independent measurement, integrating over the phase space of
the decay, gives a consistent result, $\gamma=(44^{+20}_{-13})^{\circ}$, with
reduced statistical precision but free of model uncertainties related to the
amplitude parameterization.
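As an illustration of the folded mixing asymmetry described above, here is a minimal sketch of how such an asymmetry is formed from tagged decay times; the function name and inputs are hypothetical, not the collaboration's analysis code:

```python
import numpy as np

def folded_mixing_asymmetry(t_unmixed, t_mixed, dm_s, bins=20):
    """Histogram-based mixing asymmetry A(t) = (N_u - N_m)/(N_u + N_m),
    with decay times folded into one oscillation period 2*pi/dm_s
    (cf. Fig. 3, right). t_unmixed/t_mixed: arrays of decay times in ps."""
    period = 2 * np.pi / dm_s                      # ~0.354 ps for dm_s = 17.77 / ps
    edges = np.linspace(0.0, period, bins + 1)
    n_u, _ = np.histogram(np.mod(t_unmixed, period), bins=edges)
    n_m, _ = np.histogram(np.mod(t_mixed, period), bins=edges)
    return (n_u - n_m) / np.maximum(n_u + n_m, 1)  # guard empty bins
```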
The CP asymmetries of charmless $B^{0}_{(s)}$ decays to two-body charged final
states provide access to the CKM angles $\alpha$ and $\gamma$ and the $B^{0}$
and $B_{s}^{0}$ mixing phases. In contrast to the tree-level measurements from
$B^{\mp}\to Dh^{\mp}$ and $B_{s}^{0}\to D_{s}^{\mp}K^{\pm}\pi^{\pm}\pi^{\mp}$
decays, the sensitivity to the CKM angles originates from the interference of
the $b\to u$ tree-level with the $b\to d$ or $b\to s$ loop-level transitions.
Figure 4 shows the decay-time distribution of flavour-tagged $B_{s}^{0}\to
K^{+}K^{-}$ signal decays using a data sample corresponding to an integrated
luminosity of $1.9\,\rm{fb}^{-1}$. The CP observables describing the decay-time distribution
are measured with world-best precision [7]. Combined with previous LHCb
results, the first observation of time-dependent CP violation in the
$B_{s}^{0}$ system is reported. This is an important milestone for flavour
physics.
Figure 4: Decay-time distribution of flavour-tagged $B_{s}^{0}\to K^{+}K^{-}$
signal decays.
Figure 5: Profile-likelihood scan of 1-CL ($p$-value) for the LHCb $\gamma$
combination.
## 4 Outlook
The LHCb collaboration continues to push the frontier of heavy flavour
physics. New measurements of the $B_{s}^{0}-\bar{B}_{s}^{0}$ mixing frequency
have reached unprecedented precision. While time-dependent CP violation in the
$B_{s}^{0}$ system has now been observed for the first time, the breaking of
CP symmetry has still not been observed in the baryon sector. With the first
amplitude analysis of any b-baryon decay mode allowing for CP-violation
effects, the LHCb collaboration is also pioneering in this field. Within the
current precision, no significant CP asymmetries have been observed for the
amplitude components contributing to $\Xi_{b}^{-}\to pK^{-}K^{-}$ decays [8].
Thanks to the combination of many decay modes and advanced analysis
techniques, the LHCb collaboration achieved an impressive overall precision on
the CKM angle $\gamma$ as shown in Fig. 5. Including the new results presented
here, the LHCb average [9] yields $\gamma=(67\pm 4)^{\circ}$. This is in
excellent agreement with global CKM fits. With the upcoming Run 3 data-taking
period, the combination of LHCb results will enter the high precision region
where discrepancies between direct measurement and indirect CKM prediction may
be observed. An ultimate precision at the sub-degree level will be achievable
in the high luminosity LHC era. It remains thrilling to see whether new
phenomena beyond the established theory can be uncovered. The anomaly observed
in $B\to K\pi$ decays, strengthened by recent LHCb results, might already
point to internal inconsistencies of the Standard Model.
## References
* [1] LHCb collaboration, R. Aaij et al., JHEP 04, 081 (2021).
* [2] LHCb collaboration, R. Aaij et al., JHEP 02, 169 (2021).
* [3] LHCb collaboration, R. Aaij et al., Phys. Rev. Lett. 126, 091802 (2021).
* [4] LHCb collaboration, R. Aaij et al., JHEP 03, 137 (2021).
* [5] LHCb collaboration, R. Aaij et al., arXiv:2104.04421.
* [6] LHCb collaboration, R. Aaij et al., LHCb-PAPER-2020-042.
* [7] LHCb collaboration, R. Aaij et al., JHEP 03, 075 (2021).
* [8] LHCb collaboration, R. Aaij et al., arXiv:2104.15074.
* [9] LHCb collaboration, R. Aaij et al., LHCb-CONF-2020-003.
# Accuracy criterion for mean field approximations of Markov processes on hypergraphs
Supported by the ÚNKP-21-1 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund. Partially supported by the ERC Synergy under Grant No. 810115 - DYNASNET.
Dániel Keliger
Department of Stochastics, Institute of Mathematics,
Budapest University of Technology and Economics
Műegyetem rkp. 3., H-1111 Budapest, Hungary;
Alfréd Rényi Institute of Mathematics, Budapest, Hungary
e-mail<EMAIL_ADDRESS>
Illés Horváth
MTA-BME Information Systems Research Group
e-mail<EMAIL_ADDRESS>
###### Abstract
We provide error bounds for the $N$-intertwined mean-field approximation
(NIMFA) for local density-dependent Markov population processes with a well-
distributed underlying network structure, showing that NIMFA is accurate when a
typical vertex has many neighbors. The result justifies some of the most
common approximations used in epidemiology, statistical physics and opinion
dynamics literature under certain conditions. We allow interactions between
more than 2 individuals, with an underlying hypergraph structure accordingly.
## 1 Introduction
The analysis of stochastic population processes is an important topic in
several disciplines, such as epidemiology, biology, economics or computer
systems [5, 2, 12, 6, 27]. Such processes consist of a large number of
interacting individuals (agents) that execute random actions based on the
behavior of other individuals.
A widely-used framework is Markov population processes, where each individual
is in a local state from a fixed, finite state space, and can change their
state in a Markovian manner. For such models, the state space increases
exponentially with the population size, making an exact analysis infeasible
even for moderate population sizes and raising the question of good
approximations as the next best thing.
The classical result of Kurtz [16, 17] is based on two main assumptions: that
each individual can observe the entire population, and that the Markovian
transition rates of each individual depend on the observation in a density-
dependent manner. The conclusion is that, as the number of individuals
diverges, the evolution of the stochastic system converges to a deterministic
mean-field limit. This limit is straightforward to compute numerically, and
can serve as a good approximation of the stochastic system when the number of
individuals is large. The mean-field limit of Kurtz is referred to as the
_homogeneous mean-field approximation_ in the present paper.
While the density-dependent Markov setting is flexible and covers many
potential applications, the assumption that each individual can observe the
entire population is very restrictive. In many population processes arising
from real-life examples, individuals do not have full information about the
entire population; instead, each individual can observe only a subset of the
population. This information structure can be described by a network topology,
where each individual has interactions only with its neighbors according to
that topology.
The $N$-intertwined mean field approximation (NIMFA) [19] is a quenched mean-
field approximation, where differential equations are considered for each
individual based on their expected evolution. NIMFA is a deterministic process
different from the homogeneous mean-field approximation that incorporates the
network structure naturally, making it a potentially more accurate
approximation. On the flip side, the computational complexity is considerably
increased compared to the homogeneous mean-field approximation; nevertheless,
it remains tractable for population sizes large enough to make it relevant for
practical applications. Unfortunately, unlike for homogeneous systems, the
justification for using NIMFA is poorly understood, mostly relying on
numerical evidence [18, 28] along with a few theoretical results [29, 30, 31,
24].
In the present paper, we focus on a specific class of Markov processes dubbed
_local density-dependent Markov population processes_ , which preserves the
density-dependent assumption of Kurtz, but allows an underlying network
structure that dictates the environments observed by each individual. This
setting covers many of the frequently used stochastic models, such as the SIS
process in epidemiology [7, 13, 14, 3], Glauber dynamics in statistical
physics [10, 22], or the voter model and majority vote in opinion dynamics
[21, 23]. We incorporate interactions between more than 2 vertices into the
model with an underlying hypergraph structure accordingly, reflecting some
recent developments in the theory of higher-order interactions.
We provide general error bounds for NIMFA that are strong on well-distributed
networks. Furthermore, under additional homogeneity assumptions, such as
annealed or activity driven networks [11, 26], we show these error bounds to be
small, with the added benefit of further reducing the number of equations to
other well-known approximations, like the _heterogeneous mean field
approximation_ [25]. Finally, we elaborate on the argument given by K.
Devriendt and P. Van Mieghem [9] and show that Szemerédi’s regularity lemma
[32] can be applied to reduce the number of equations (depending on a given
$\varepsilon$ error).
The rest of the paper is structured as follows. Section 2 introduces basic
notation and setup for density-dependent Markov population processes along
with examples of models used in the literature to illustrate these concepts
and their applicability. Section 3 states the main results and also relates
them to the recent work of Sridhar and Kar [30, 31] and Parasnis et al. [24].
Section 4 discusses further reductions of NIMFA to simpler approximations
used throughout the literature. Section 5 contains a summary of this paper
along with the limitations of these results and possible directions for
further research.
Finally, proofs are contained in Section 6.
## 2 Setup
### 2.1 The underlying hypergraph
Let $G$ be a finite hypergraph on $N$ vertices. The vertex set is labeled
$[N]=\\{1,\dots,N\\}$. The hypergraph is not necessarily uniform; edges may
contain up to $M+1$ vertices. The edges are ordered, with the first vertex
being special, and we will usually use the notation $(i,j_{1},\dots,j_{m})$
for an edge where $1\leq m\leq M$ and $i,j_{1},\dots,j_{m}\in[N]$. The idea
behind the distinction of the first vertex in an edge is that
$w^{(m)}_{i,j_{1},\dots,j_{m}}$ will describe the strength of connections
where $j_{1},\dots,j_{m}$ have a joint effect on vertex $i$ (see Figure 1).
Figure 1: Edge (hyperedge) with weight $w^{(m)}_{i,j_{1},\dots,j_{m}}$.
The $M=1$ case corresponds to (directed) graphs.
We allow so-called _secondary loops_ (abbreviated as s. loop), which are
$(i,\underline{j})$ edges with non-distinct vertices among
$j_{1},\dots,j_{m}\in[N]$. Note that traditional loops for the $m=1$ case are
excluded from this definition.
We use the notation $[N]^{m}$ to denote the set of $m$-tuples, and
$\underline{j}$ abbreviates $(j_{1},\dots,j_{m})$.
For unweighted hypergraphs, adjacency indicators
$a^{(m)}_{i,j_{1},\dots,j_{m}}$ (where $1\leq m\leq M$ and
$i,j_{1},\dots,j_{m}\in[N]$)
$\displaystyle a_{i,\underline{j}}^{(m)}=\begin{cases}1&\text{ if $i,j_{1},\dots,j_{m}$ are on the same hyperedge,}\\ 0&\text{ otherwise,}\end{cases}$
describe the connections between the vertices.
In-degrees for $1\leq m\leq M$ are defined as
$\displaystyle d^{(m)}(i):=$
$\displaystyle\frac{1}{m!}\sum_{\underline{j}\in[N]^{m}}a_{i,\underline{j}}^{(m)},$
(1)
(where $m!$ is included to cancel the re-orderings of $\underline{j}$), and
the average in-degree for each $1\leq m\leq M$ is
$\displaystyle\bar{d}^{(m)}:=$
$\displaystyle\frac{1}{N}\sum_{i=1}^{N}d^{(m)}(i).$
In the literature, some normalization is usually assumed. In the present
paper, we introduce normalized weights $w^{(m)}_{i,j_{1},\dots,j_{m}}$ and
corresponding normalized in-degrees
$\delta^{(m)}(i):=\sum_{\underline{j}\in[N]^{m}}w_{i,\underline{j}}^{(m)}.$
representing the total weight of $m$-interactions affecting vertex $i\in[N]$.
In the $M=1$ case (classical graphs) we tend to omit the upper index $(m)$ and
write $w_{i,\underline{j}}^{(m)}$ simply as $w_{ij},$ and we also utilize the
matrix notation $W=(w_{ij})_{i,j\in[N]}.$
We have two Conventions for the normalization.
Convention 1:
$\displaystyle
w_{i,\underline{j}}^{(m)}=\frac{a_{i,\underline{j}}^{(m)}}{m!\bar{d}^{(m)}},\qquad\delta^{(m)}(i)=\frac{d^{(m)}(i)}{\bar{d}^{(m)}}.$
(2)
Convention 2:
$\displaystyle
w_{i,\underline{j}}^{(m)}=\frac{a_{i,\underline{j}}^{(m)}}{m!d^{(m)}(i)},\qquad\delta^{(m)}(i)=1.$
(3)
(The same $m!$ from (1) is now included in the conventions.)
For either convention, whenever the denominator would be 0, the numerator will
also be 0, and $w_{i,\underline{j}}^{(m)}$ is simply set to 0 as well.
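For concreteness, here is a minimal NumPy sketch of the two conventions in the graph case ($M=1$); the function and its interface are illustrative only, not part of the paper:

```python
import numpy as np

def normalized_weights(a, convention=1):
    """Turn a binary adjacency matrix a (graph case, M = 1) into weights
    w_ij following (2) or (3). Rows with zero in-degree stay all-zero,
    mirroring the 0/0 := 0 rule above. Sketch, not library code."""
    d = a.sum(axis=1).astype(float)            # in-degrees d(i)
    w = np.zeros_like(a, dtype=float)
    if convention == 1:
        dbar = d.mean()                        # average in-degree
        if dbar > 0:
            w = a / dbar                       # delta(i) = d(i)/dbar
    else:
        nz = d > 0
        w[nz] = a[nz] / d[nz, None]            # delta(i) = 1
    return w
```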
We set
$\displaystyle w_{\max}=\max_{i,\underline{j},m}w_{i,\underline{j}}^{(m)}.$
We are going to set regularity assumptions for the weights and degrees:
$\displaystyle\delta^{(m)}(i)\leq$ $\displaystyle\,\delta_{\max},$ (4)
$\displaystyle\sum_{\begin{subarray}{c}\underline{j}\in[N]^{m}\\\
\underline{j}\textrm{ s. loop}\end{subarray}}w_{i,\underline{j}}^{(m)}\leq$
$\displaystyle\,R\sqrt{w_{\max}}.$ (5)
For Convention 2, (4) always holds. For Convention 1, we need
$d^{(m)}(i)\leq\delta_{\max}\bar{d}^{(m)}$ (upper regularity of the
hypergraph).
(5) always holds for $M=1$. It also obviously holds if there are no secondary
loops. In other cases, it is an actual restriction on the total weight of
secondary loops.
Symmetry is in general not assumed, that is, the hypergraph may be directed.
For some results concerning classical graphs ($M=1$) with Convention 2, an
extra assumption is needed for _out_-degrees as well:
$\displaystyle\delta^{\textrm{out}}(j):=\sum_{i\in[N]}w_{ij}\leq\delta_{\textrm{max}}^{\textrm{out}}$
(6)
Assumption (4) and (6) can be understood as a weaker version of double
stochasticity of $W$ assumed in [30, 31].
### 2.2 Local density dependent Markov population process
We define a Markov process on the hypergraph. Each vertex is in a state from a
finite state space $\mathcal{S}$. $\xi_{i,s}(t)$ denotes the indicator that
vertex $i$ is in state $s$ at time $t$; the corresponding vector notation is
$\xi_{i}(t)=\left(\xi_{i,s}(t)\right)_{s\in\mathcal{S}}.$
We also introduce the notation
$\xi^{(m)}_{\underline{i},\underline{s}}(t)=\prod_{k=1}^{m}\xi_{i_{k},s_{k}}(t),$
where $\underline{i}=(i_{1},\dots,i_{m})$ is an edge and
$\underline{s}=(s_{1},\dots,s_{m})$ is a collection of states
($s_{k}\in\mathcal{S},k=1,\dots,m)$.
$\xi^{(m)}_{\underline{i},\underline{s}}(t)$ describes the indicator of
vertices $i_{1},\dots,i_{m}$ being in states $s_{1},\dots,s_{m}$ at time $t$,
respectively.
We define the _$m$ -neighborhood_ of vertex $i$ corresponding to
$\underline{s}=(s_{1},\dots,s_{m})$ as
$\displaystyle\phi^{(m)}_{i,\underline{s}}(t)=\sum_{\underline{j}\in[N]^{m}}w^{(m)}_{i,\underline{j}}\xi^{(m)}_{\underline{j},\underline{s}}(t).$
(7)
Some explanation is in order. Let $\underline{s}=(s_{1},\dots,s_{m})$ be fixed
for now. According to (7), we consider all edges that include $i$ and $m$
other vertices; for each such edge, we check whether the $m$ other vertices
are exactly according to the configuration of states described by
$\underline{s}$; if yes, their contribution to
$\phi^{(m)}_{i,\underline{s}}(t)$ is $w^{(m)}_{i,\underline{j}}$, otherwise
their contribution is 0.
The $m$-neighborhoods of $i$ consist of $\phi^{(m)}_{i,\underline{s}}(t)$ for
all possible configurations of states $\underline{s}$. The corresponding
vector notation is
$\displaystyle\phi^{(m)}_{i}(t)=\left(\phi^{(m)}_{i,\underline{s}}(t)\right)_{\underline{s}\in\mathcal{S}^{m}},$
(8)
and we may even write
$\displaystyle\phi_{i}(t)=\left(\phi^{(m)}_{i}(t)\right)_{m=1}^{M}$ (9)
for the entire neighborhood of $i$.
In (7), the normalized weights $w^{(m)}_{i,\underline{j}}$ are used; in case
$w^{(m)}_{i,\underline{j}}=0$ for some $\underline{j}$, the corresponding
interaction is simply not present.
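As a concrete reading of (7), the $m=1$ and $m=2$ neighborhoods can be computed as tensor contractions of the weights with one-hot state indicators; a sketch with illustrative names:

```python
import numpy as np

def neighborhood_m1(w, xi):
    """phi^{(1)}_{i,s} = sum_j w[i,j] * xi[j,s]; xi is an (N, |S|)
    one-hot matrix of current states, the result is (N, |S|)."""
    return w @ xi

def neighborhood_m2(w2, xi):
    """phi^{(2)}_{i,(s1,s2)} = sum_{j1,j2} w2[i,j1,j2]*xi[j1,s1]*xi[j2,s2];
    w2 is the (N, N, N) weight tensor, the result is (N, |S|, |S|)."""
    return np.einsum('ijk,ja,kb->iab', w2, xi, xi)
```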
Each vertex may transition to another state in continuous time. The transition
rates of a vertex may depend on all of its $m$-neighborhoods for $1\leq m\leq
M$; accordingly, the transition rate from $s^{\prime}$ to $s$ is described by
the function
$q_{ss^{\prime}}:\otimes_{m=1}^{M}\mathbb{R}^{\mathcal{S}^{m}}\to\mathbb{R}$
for each $s^{\prime}\neq s\in\mathcal{S}$.
We assume $q_{ss^{\prime}}$ is locally Lipschitz, and we also require
$q_{ss^{\prime}}(\phi^{(1)},\dots,\phi^{(M)})\geq 0$ for non-negative inputs.
For “diagonal” rates,
$q_{ss}:=-\sum_{s^{\prime}\neq s}q_{s^{\prime}s}$
corresponds to the total outgoing rate from state $s$.
The corresponding transition matrix is
$Q=\left(q_{ss^{\prime}}\right)_{s,s^{\prime}\in\mathcal{S}}.$ We emphasize
that in this convention $q_{ss^{\prime}}$ refers to an $s\leftarrow
s^{\prime}$ transition and not an $s\to s^{\prime}$ one. This ordering allows
us to use column vectors and matrix multiplication from the left.
The dynamics of $\left(\xi_{i}(t)\right)_{i=1}^{N}$ is a continuous-time
Markov chain with state-space $\mathcal{S}^{N}$ where each vertex performs
transitions according to the transition rates $q_{s^{\prime}s}$, independently
from the others. After a transition, vertices update their neighborhood
vectors $\phi_{i}(t)$. We call such dynamics _local density-dependent Markov
processes_.
We define the process $\left(\xi_{i,s}\right)_{i,s}$ formally via Poisson
representation:
$\displaystyle\begin{split}{\xi}_{i,s}(t)=&\xi_{i,s}(0)+\sum_{\begin{subarray}{c}s^{\prime}\in\mathcal{S}\\\
s^{\prime}\neq
s\end{subarray}}\mathcal{N}_{i,ss^{\prime}}\left(\mathcal{H}_{i,ss^{\prime}}(t)\right)-\mathcal{N}_{i,s^{\prime}s}\left(\mathcal{H}_{i,s^{\prime}s}(t)\right),\\\
\mathcal{H}_{i,ss^{\prime}}(t)=&\left\\{(\tau,x)\in\mathbb{R}^{2}\left.\right|0\leq\tau\leq
t,\ 0\leq x\leq
q_{ss^{\prime}}\left(\phi_{i}(\tau)\right){\xi}_{i,s^{\prime}}(\tau)\right\\},\end{split}$
(10)
where for each choice of $1\leq i\leq N$ and $s\neq s^{\prime}\in\mathcal{S}$,
$\left(\mathcal{N}_{i,ss^{\prime}}(x,y):x,y\geq 0\right)$ is a 2-dimensional
Poisson-process with density 1, and the processes are independent for
different $(i,s,s^{\prime})$ triples.
(10) is a cumulative formula counting all transitions of the vertex $i$ to and
from state $s$ up to time $t$; $s\leftarrow s^{\prime}$ transitions are
generated using the Poisson points in the 2-dimensional domain
$\mathcal{H}_{i,ss^{\prime}}(t)$ which has area
$\int_{0}^{t}q_{ss^{\prime}}\left(\phi_{i}(\tau)\right){\xi}_{i,s^{\prime}}(\tau)\mathrm{d}\tau$,
ensuring the proper transition rate for $s\leftarrow s^{\prime}$ jumps at time
$\tau$. The second term of the sum corresponds to $s^{\prime}\leftarrow s$
transitions in a similar manner.
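The acceptance region $\mathcal{H}_{i,ss^{\prime}}(t)$ is precisely the classical thinning construction for inhomogeneous Poisson processes. A minimal sketch of sampling jump times this way, assuming a known upper bound `rate_max` on the intensity (illustrative code, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def thinned_jump_times(rate, rate_max, t_max):
    """Jump times of an inhomogeneous Poisson process with intensity
    rate(t) <= rate_max on [0, t_max]: candidate points arrive at rate
    rate_max, and a candidate (tau, x) is accepted iff x <= rate(tau),
    exactly as the 2-dimensional Poisson points falling into H in (10)."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)   # next candidate point
        if t > t_max:
            return np.array(times)
        if rng.uniform(0.0, rate_max) <= rate(t):
            times.append(t)
```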
### 2.3 N-intertwined mean field approximation
Although the state occupation probabilities of the population process can be
described by the Chapman–Kolmogorov equations, the number of equations is
$\left|\mathcal{S}\right|^{N}$, making it infeasible for numeric or analytic
investigations even for moderate-sized populations. To address this issue,
several approximation schemes have been introduced in the literature with
varying complexity.
This section discusses the quenched mean field approximation [19], also called
the N-intertwined mean field approximation (NIMFA). NIMFA preserves all
information regarding the graph structure and only neglects dynamical
correlation between vertices. The goal is to derive state occupation
probabilities for each vertex separately, resulting in a total of
$\left|\mathcal{S}\right|N$ equations.
A possible intuition for NIMFA is as follows.
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\mathbb{E}\left(\xi_{i}(t)\right)=\mathbb{E}\left[Q\left(\phi_{i}(t)\right)\xi_{i}(t)\right]$
(11)
can be derived from (10). To close (11), we apply the approximation
$\phi_{i}(t)\approx\mathbb{E}\left(\phi_{i}(t)\right)$, which is reasonable
when $N$ is large and there is low correlation between vertices:
$\displaystyle\mathbb{E}\left[Q\left(\phi_{i}(t)\right)\xi_{i}(t)\right]\approx\mathbb{E}\left[Q\left(\mathbb{E}\left(\phi_{i}(t)\right)\right)\xi_{i}(t)\right]=Q\left(\mathbb{E}\left(\phi_{i}(t)\right)\right)\mathbb{E}\left(\xi_{i}(t)\right).$
Accordingly, the NIMFA approximation
$z_{i}(t)=(z_{i,s}(t))_{s\in\mathcal{S}},1\leq i\leq N$ is the solution of the
system
$\displaystyle\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}z_{i}(t)=&Q\left(\zeta_{i}(t)\right)z_{i}(t),\\\
\zeta_{i}(t)=&\left(\zeta_{i}^{(m)}(t)\right)_{m=1}^{M},\\\
\zeta_{i}^{(m)}(t)=&\left(\zeta_{i,\underline{s}}^{(m)}(t)\right)_{\underline{s}\in\mathcal{S}^{m}}=\left(\sum_{\underline{j}\in[N]^{m}}w_{i,\underline{j}}^{(m)}z_{\underline{j},\underline{s}}^{(m)}(t)\right)_{\underline{s}\in\mathcal{S}^{m}},\end{split}$
(12)
where $z_{i}(t)$ corresponds to $\xi_{i}(t)$ and $\zeta_{i}(t)$ corresponds to
$\phi_{i}(t)$, and then the approximation used is
$\mathbb{P}\left(\xi_{i,s}(t)=1\right)=\mathbb{E}\left(\xi_{i,s}(t)\right)\approx
z_{i,s}(t).$
The following theorem ensures the existence and uniqueness of the solution of
(12).
###### Theorem 1.
Let $\Delta^{\mathcal{S}}$ denote the set of probability vectors from
$\mathbb{R}^{\mathcal{S}}.$ For any initial condition
$z_{i}(0)\in\Delta^{\mathcal{S}}$ for all $i$, the ODE system (12) has a unique
global solution such that $z_{i}(t)\in\Delta^{\mathcal{S}}$ for all $i$ and $t>0$ as
well.
### 2.4 Examples
In this section we give some examples for models covered by the formalism of
Section 2.2.
#### The simplicial SIS model
We will use the simplicial SIS model, also referred to as the contact process,
as a running example.
In the $M=1$ case (graphs) the setup is the following: Each vertex can be in
one of two states: susceptible ($S$) and infected ($I$), hence the state space
is $\mathcal{S}=\\{S,I\\}.$ Infected vertices become susceptible at a constant
rate $\gamma\geq 0$, while susceptible vertices contract the illness at a rate
proportional to the number of their infected neighbours.
The number of infected neighbours of vertex $i\in[N]$ at time $t$ equals
$\sum_{j=1}^{N}a_{ij}\xi_{j,I}(t)$
as $a_{ij}\xi_{j,I}(t)$ is the indicator that vertex $j$ is connected to vertex $i$
and infected at time $t$. After normalizing with $\bar{d}$ or
$d(i)$, depending on our choice of Convention 1 or 2, one gets
$\sum_{j=1}^{N}w_{ij}\xi_{j,I}(t)=\phi_{i,I}(t).$
Therefore, the transition rates take the form $q_{SI}(\phi_{i}(t))=\gamma,\
q_{IS}(\phi_{i}(t))=\beta\phi_{i,I}(t)$, where $\beta\geq 0$ is a suitable
constant factor. In matrix form, with the states ordered as $(S,I)$:
$\displaystyle Q(\phi_{i}(t))=\left[{\begin{array}[]{cc}-\beta\phi_{i,I}(t)&\gamma\\ \beta\phi_{i,I}(t)&-\gamma\\ \end{array}}\right]$
For the SIS process NIMFA takes the form:
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}z_{i,I}(t)=-\gamma
z_{i,I}(t)+\beta(1-z_{i,I}(t))\sum_{j=1}^{N}w_{ij}z_{j,I}(t).$
Here we used $z_{i,S}(t)=1-z_{i,I}(t)$, which is also why it is enough to
write the $I$ component only.
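The SIS NIMFA system above is straightforward to integrate numerically; a forward-Euler sketch (illustrative only, a proper ODE solver would be preferable in practice):

```python
import numpy as np

def nimfa_sis(w, z0, beta, gamma, t_max, dt=1e-3):
    """Integrate dz_i/dt = -gamma*z_i + beta*(1 - z_i)*(W z)_i by forward
    Euler. w: (N, N) normalized weight matrix, z0: initial infection
    probabilities in [0, 1]. Returns z(t_max). Minimal sketch."""
    z = np.asarray(z0, dtype=float).copy()
    for _ in range(int(t_max / dt)):
        z += dt * (-gamma * z + beta * (1.0 - z) * (w @ z))
        np.clip(z, 0.0, 1.0, out=z)  # guard against Euler overshoot
    return z
```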
The extension of the SIS model to hypergraphs is called the simplicial SIS
model. The curing rate stays $\gamma$; however, the infection dynamics is
modified. A susceptible vertex can be infected via any $(m+1)$-edge if all
other $m$ vertices are infected. The weighted sum of such $(m+1)$-edges
is
$\sum_{\underline{j}\in[N]^{m}}w_{i,\underline{j}}^{(m)}\xi_{\underline{j},(I,\dots,I)}^{(m)}(t)=\phi_{i,(I,\dots,I)}^{(m)}(t).$
The infection rate is the sum over all $1\leq m\leq M$ with appropriate
$\beta_{1},\dots,\beta_{M}\geq 0$ factors:
$q_{IS}(\phi_{i}(t))=\sum_{m=1}^{M}\beta_{m}\phi_{i,(I,\dots,I)}^{(m)}(t).$
For the simplicial SIS model NIMFA takes the form
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}z_{i,I}(t)=-\gamma z_{i,I}(t)+\left(1-z_{i,I}(t)\right)\sum_{m=1}^{M}\beta_{m}\sum_{\underline{j}\in[N]^{m}}w_{i,\underline{j}}^{(m)}z_{\underline{j},(I,\dots,I)}^{(m)}(t).$
#### Glauber dynamics
Glauber dynamics is a stochastic process whose stationary distribution
coincides with the distribution given by a spin system, such as the Ising
model [10].
There are two possible states: $\mathcal{S}=\\{+,-\\}.$ Instead of the
indicators
$\xi_{i,+}(t),\,\xi_{i,-}(t)$
it is customary to use the sign variables
$\sigma_{i}(t):=\xi_{i,+}(t)-\xi_{i,-}(t)=2\xi_{i,+}(t)-1.$
In physical systems it is natural to assume $w_{ij}$ is symmetric and
$w_{ii}=0$.
The dynamics is the following:
* •
At each time step, choose a vertex $i$ uniformly.
* •
With probability $p_{i}(\sigma)=\frac{e^{\beta S_{i}(\sigma)}}{e^{\beta
S_{i}(\sigma)}+1}$, vertex $i$ switches to state + (else -), where
$S_{i}(\sigma)=\sum_{j=1}^{N}w_{ij}\sigma_{j}.$
Note that $S_{i}(\sigma)$ arises from the reduction of the energy
$\displaystyle H(\sigma):=-\frac{1}{2}\sum_{i<j}w_{ij}\sigma_{i}\sigma_{j}$
when vertex $i$ is flipped from $-$ to $+$. The stationary distribution is
then given by the Gibbs measure
$\displaystyle P(\sigma):=$ $\displaystyle\frac{1}{Z}e^{-\beta H(\sigma)},$
$\displaystyle Z:=$ $\displaystyle\sum_{\sigma}e^{-\beta H(\sigma)}.$
We modify the above dynamics. First, note that, in accordance with (7),
$\displaystyle
S_{i}(\sigma(t))=\sum_{j=1}^{N}w_{ij}\left(\xi_{j,+}(t)-\xi_{j,-}(t)\right)=\phi_{i,+}(t)-\phi_{i,-}(t).$
With a slight abuse of notation, we denote
$\displaystyle
S\left(\phi_{i}(t)\right):=\alpha\phi_{i,+}(t)-\gamma\phi_{i,-}(t),$
allowing the dynamics to have a preferred state.
Furthermore, we turn to the continuous time version instead with transition
rates given by
$\displaystyle q_{+-}(\phi)=$ $\displaystyle e^{\beta S(\phi)},$
$\displaystyle q_{-+}(\phi)=$ $\displaystyle 1.$
Since there are only two states, it is enough to consider the probabilities of
occupying state +. For this, NIMFA gives the following system of ODEs:
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}z_{i,+}(t)=$
$\displaystyle(1-z_{i,+}(t))e^{\beta S(\zeta_{i}(t))}-z_{i,+}(t).$ (13)
The equilibrium state is given by the fixed point problem
$\displaystyle z_{i,+}=\frac{e^{\beta S(\zeta_{i})}}{e^{\beta
S(\zeta_{i})}+1}.$ (14)
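The fixed point (14) can be searched for by naive iteration on a graph; a sketch (names and defaults are illustrative; damping or a Newton step may be needed for convergence):

```python
import numpy as np

def glauber_fixed_point(w, beta, alpha=1.0, gam=1.0, iters=500):
    """Iterate z_{i,+} <- sigmoid(beta * S(zeta_i)) with
    S(zeta_i) = alpha*zeta_{i,+} - gam*zeta_{i,-},
    zeta_{i,+} = (W z)_i and zeta_{i,-} = (W (1-z))_i. Sketch of (14)."""
    z = np.full(w.shape[0], 0.5)
    for _ in range(iters):
        s = alpha * (w @ z) - gam * (w @ (1.0 - z))
        z = 1.0 / (1.0 + np.exp(-beta * s))
    return z
```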
Assume $\alpha=\gamma=1$ (so that $S(\phi_{i})=\phi_{i,+}-\phi_{i,-}$, as in the
original setting) and that the underlying weighted graph is regular: $\forall
i\ \delta(i)=\sum_{j}w_{ij}=1$. Then (14) reduces to
$\displaystyle\sigma=\tanh\left(\frac{1}{2}\beta\sigma\right),$
$\displaystyle\forall i\ 2z_{i,+}-1=\sigma$
giving back the classical mean field approximation of the Ising model on the
lattice. This is not surprising, as both NIMFA and the classical mean field
approach are based on the assumption of independence of vertices.
Based on [22], we can generalize the model for hypergraphs via extending
$S(\phi)$ to
$\displaystyle
S(\phi_{i}(t)):=\sum_{m=0}^{M}\alpha_{m}\phi_{i,(+,\dots,+)}^{(m)}(t)-\gamma_{m}\phi_{i,(-,\dots,-)}^{(m)}(t)$
allowing the system to lose even more energy when $3$ or more neighbors have
the same configuration on a hyper-edge.
#### The voter model
The voter model is a conceptually simple stochastic process modeling opinion
dynamics [21]. In the simplest case, there are two possible states:
$\mathcal{S}=\\{0,1\\}.$
The dynamics can be described as follows: at each time step, we choose a
vertex uniformly. Said vertex chooses a neighbor, also uniformly, and copies
its state. Similarly to the Glauber dynamics, we will study the continuous
time version instead.
For vertex $i$, the ratio of neighbors sharing belief $s\in\\{0,1\\}$ is
$\displaystyle\frac{1}{d(i)}\sum_{j=1}^{N}a_{ij}\xi_{j,s}(t)=\phi_{i,s}(t)$
with the choice of Convention 2. Hence, the transition rates take the form
$\displaystyle q_{01}(\phi_{i}(t))=$ $\displaystyle\lambda\phi_{i,0}(t),$
$\displaystyle q_{10}(\phi_{i}(t))=$
$\displaystyle\lambda\phi_{i,1}(t)=\lambda\left(1-\phi_{i,0}(t)\right).$
Using $z_{i,1}(t)=1-z_{i,0}(t)$, NIMFA can be written as
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}z_{i,0}(t)=-\lambda(1-\zeta_{i,0}(t))z_{i,0}(t)+\lambda\zeta_{i,0}(t)\left(1-z_{i,0}(t)\right).$
#### A modified majority rule model
Another popular model of opinion dynamics is the majority rule [21]. In this
setting a group of $m+1$ individuals is chosen, who then update their states
simultaneously to the majority opinion. Ties are usually broken either by a
random choice or by setting a preferred opinion, say opinion $1$, to win in
this case. For the sake of simplicity, we apply the latter approach.
Due to the continuous time setting we use, we modify the majority rule such
that only one individual updates its opinion during a transition based on the
state of the other vertices (not including its own opinion for the sake of
simplicity).
As it is stated in [21], the hypergraph setting is more suitable for majority
rule. We assume communities have a bounded size $M+1$, while each individual
can be a part of many, possibly overlapping communities.
$a_{i,j_{1},\dots,j_{m}}^{(m)}$ is the indicator of vertices
$i,j_{1},\dots,j_{m}\in[N]$ being in a community. We assume symmetry in the
indices and set $a_{i,j_{1},\dots,j_{m}}^{(m)}=0$ if there are duplicates. We
use a slightly modified version of Convention 1:
$\displaystyle
w_{i,\underline{j}}^{(m)}=\frac{\alpha_{m}a_{i,\underline{j}}^{(m)}}{m!\bar{d}^{(m)}},$
where $\alpha_{m}$ measures how much importance vertices put on communities of
size $m+1$. $w_{\textrm{max}}=\max_{m}\frac{\alpha_{m}}{m!\bar{d}^{(m)}}$ can
be small either due to vertices being part of many communities of size $m+1$
on average or because they put less importance on said communities.
Introduce the notation $|\underline{s}|=\sum_{l=1}^{m}s_{l}.$ Vertex $i$ in community
$i,j_{1},\dots,j_{m}$ changes its opinion to the majority of
$j_{1},\dots,j_{m}$ at rate $w_{i,\underline{j}}^{(m)}$. Therefore,
$\displaystyle q_{01}(\phi_{i}(t))=$
$\displaystyle\sum_{m=0}^{M}\sum_{\underline{j}\in[N]^{m}}w_{i,\underline{j}}^{(m)}{\mathds{1}}_{\left\\{0\
\textit{is the majority for $j_{1},\dots,j_{m}$}\right\\}}$ $\displaystyle=$
$\displaystyle\sum_{m=0}^{M}\sum_{\underline{j}\in[N]^{m}}w_{i,\underline{j}}^{(m)}\sum_{|\underline{s}|<\frac{m}{2}}\prod_{l=1}^{m}\xi_{j_{l},s_{l}}(t)=\sum_{m=0}^{M}\sum_{|\underline{s}|<\frac{m}{2}}\phi_{i,\underline{s}}^{(m)}(t),$
$\displaystyle q_{10}(\phi_{i}(t))=$
$\displaystyle\sum_{m=0}^{M}\sum_{|\underline{s}|\geq\frac{m}{2}}\phi_{i,\underline{s}}^{(m)}(t).$
The NIMFA ODEs are
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}z_{i,0}(t)=$
$\displaystyle(1-z_{i,0}(t))\sum_{m=0}^{M}\sum_{|\underline{s}|<\frac{m}{2}}\zeta_{i,\underline{s}}^{(m)}(t)-z_{i,0}(t)\sum_{m=0}^{M}\sum_{|\underline{s}|\geq\frac{m}{2}}\zeta_{i,\underline{s}}^{(m)}(t).$
## 3 Error bounds for NIMFA
In this section we are presenting our main results which bound the error
arising from neglecting the dynamical correlation between vertices.
Recall that (11) was closed by assuming
$\phi_{i}(t)\approx\mathbb{E}\left(\phi_{i}(t)\right)$. We introduce an
auxiliary process where the empirical neighborhood $\phi_{i}(t)$ is replaced
by the approximate $\zeta_{i}(t)$ from (12):
$\displaystyle\begin{split}\hat{\xi}_{i,s}(t)=&\xi_{i,s}(0)+\sum_{\begin{subarray}{c}s^{\prime}\in\mathcal{S}\\\
s^{\prime}\neq
s\end{subarray}}\mathcal{N}_{i,ss^{\prime}}\left(\mathcal{K}_{i,ss^{\prime}}(t)\right)-\mathcal{N}_{i,s^{\prime}s}\left(\mathcal{K}_{i,s^{\prime}s}(t)\right),\\\
\mathcal{K}_{i,ss^{\prime}}(t)=&\left\\{(\tau,x)\in\mathbb{R}^{2}\left.\right|0\leq\tau\leq
t,\ 0\leq x\leq
q_{ss^{\prime}}\left(\zeta_{i}(\tau)\right)\hat{\xi}_{i,s^{\prime}}(\tau)\right\\}.\end{split}$
(15)
The process $\hat{\xi}_{i,s}(t)$ is an indicator process just like
$\xi_{i,s}(t)$, so it takes 0 or 1 values, and
$\sum_{s\in\mathcal{S}}\hat{\xi}_{i,s}(t)=1$ for any $i\in[N]$ and $t\geq 0$.
However, assuming independent initial conditions, $\hat{\xi}_{i}(t)$ remain
independent. Applying total expectation to (15) shows
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\mathbb{E}\left(\hat{\xi}_{i}(t)\right)=Q\left(\zeta_{i}(t)\right)\mathbb{E}\left(\hat{\xi}_{i}(t)\right),$
which, along with (12), implies that if
$\mathbb{E}\left(\hat{\xi}_{i}(0)\right)=z_{i}(0)$, then
$\hat{\xi}_{i}(t)-z_{i}(t)$ is a martingale and
$\displaystyle\mathbb{E}\left(\hat{\xi}_{i}(t)\right)=z_{i}(t)\quad\forall
t\geq 0.$ (16)
Using the same background Poisson processes $\mathcal{N}_{i,ss^{\prime}}$
provides a coupling between $\xi$ and $\hat{\xi}$ that will be useful later
on.
We aim to give an upper bound for $|\hat{\xi}(t)-\xi(t)|$, as well as for
$|\hat{\xi}(t)-z(t)|$. We start with $|\hat{\xi}(t)-\xi(t)|$ by introducing
the error terms
$\displaystyle D_{i}^{(0)}(t)=$ $\displaystyle\sup_{0\leq\tau\leq
t}\mathbb{E}\left(\sum_{s\in\mathcal{S}}\left|\xi_{i,s}(\tau)-\hat{\xi}_{i,s}(\tau)\right|\right),$
$\displaystyle\tilde{D}_{i}^{(0)}(t)=$
$\displaystyle\mathbb{E}\left(\sup_{0\leq\tau\leq
t}\sum_{s\in\mathcal{S}}\left|\xi_{i,s}(\tau)-\hat{\xi}_{i,s}(\tau)\right|\right).$
The only difference between the two is the order in which we take the
supremum in time and the expectation. $\tilde{D}_{i}^{(0)}(t)$ is the stricter one, as
$\displaystyle D_{i}^{(0)}(t)\leq\tilde{D}_{i}^{(0)}(t).$
Observe that
$\sum_{s\in\mathcal{S}}\left|\xi_{i,s}(\tau)-\hat{\xi}_{i,s}(\tau)\right|$
only has two possible values: $0$ if $\xi_{i}(t)=\hat{\xi}_{i}(t)$, and $2$
otherwise (as there will be two $s\in\mathcal{S}$ indices where
$\xi_{i,s}(t),\hat{\xi}_{i,s}(t)$ differ). This implies
$\displaystyle\sup_{0\leq\tau\leq t}\mathbb{P}\left(\xi_{i}(\tau)\neq\hat{\xi}_{i}(\tau)\right)=\frac{1}{2}D_{i}^{(0)}(t),$
$\displaystyle\mathbb{P}\left(\exists\ 0\leq\tau\leq t:\ \xi_{i}(\tau)\neq\hat{\xi}_{i}(\tau)\right)=\frac{1}{2}\tilde{D}_{i}^{(0)}(t).$
We also introduce error terms describing the environments arising from
$\xi_{i}(t)$ and $\hat{\xi}_{i}(t)$:
$\displaystyle D_{i}^{(m)}(t)=$ $\displaystyle\sup_{0\leq\tau\leq
t}\mathbb{E}\left[\sum_{\underline{s}\in\mathcal{S}^{m}}\left|\phi_{i,\underline{s}}^{(m)}(\tau)-\zeta_{i,\underline{s}}^{(m)}(\tau)\right|\right]\quad(1\leq
m\leq M),$ $\displaystyle\tilde{D}_{i}^{(m)}(t)=$
$\displaystyle\,\mathbb{E}\left[\sup_{0\leq\tau\leq
t}\sum_{\underline{s}\in\mathcal{S}^{m}}\left|\phi_{i,\underline{s}}^{(m)}(\tau)-\zeta_{i,\underline{s}}^{(m)}(\tau)\right|\right]\quad(1\leq
m\leq M).$
Since the neighborhoods $\phi_{i}(t)$ and $\zeta_{i}(t)$ are constructed from
the indicators $\xi_{i}(t)$ and $\hat{\xi}_{i}(t)$, it is reasonable to expect
$\phi_{i}(t)$ and $\zeta_{i}(t)$ to be close to each other, as long as
$\xi_{i}(t)$ and $\hat{\xi}_{i}(t)$ are also close. To avoid circular
reasoning, we handle these two types of errors together. This motivates the
introduction of
$\displaystyle D_{\max}^{(m)}(t)=$
$\displaystyle\max_{i\in[N]}D_{i}^{(m)}(t),$ $\displaystyle D_{\max}(t)=$
$\displaystyle\sum_{m=0}^{M}D_{\max}^{(m)}(t),$
$\displaystyle\tilde{D}_{i}(t)=$
$\displaystyle\sum_{m=0}^{M}\tilde{D}_{i}^{(m)}(t).$
The vector notation $\tilde{D}(t)=\left(\tilde{D}_{i}(t)\right)_{i\in[N]}$ will
also be utilized.
Now we can state the main results of the paper. The idea behind the
statements is that when the vertex weights are generally small (the network is
well-distributed), vertices have low correlation with each other, hence
NIMFA is accurate.
###### Theorem 2.
(Main)
Assume the initial conditions $\xi_{i}(0)$ are independent and (16) is
satisfied. Then for every $t\geq 0$ there is a constant
$C=C\left(t,\delta_{\max},R\right)$ such that
$\displaystyle\max_{i}\sup_{0\leq\tau\leq
t}\mathbb{P}\left(\xi_{i}(\tau)\neq\hat{\xi}_{i}(\tau)\right)\leq\frac{1}{2}D_{\max}(t)\leq$
$\displaystyle C\sqrt{w_{\max}}.$ (17)
Furthermore, if we additionally assume $M=1$ (ordinary directed graphs),
then there exist constants
$C_{1}=C_{1}(\delta_{\max}),C_{2}=C_{2}(\delta_{\max})$ such that for all
$t\geq 0$
$\displaystyle\begin{split}\left\|\tilde{D}(t)\right\|\leq&C_{1}\exp\left(C_{2}\left\|W+I\right\|t\right)\|\mu\|,\\\
\mu=&\left(\sqrt{\sum_{j=1}^{N}w_{ij}^{2}}\right)_{i\in[N]},\end{split}$ (18)
where the norm $\|\cdot\|$ is arbitrary, $W=\left(w_{ij}\right)_{i,j=1}^{N}$
and $I$ is the identity matrix.
###### Remark 1.
The reason why we have different results for $M>1$ and $M=1$ is technical in
nature. The main observation is that in the $M=1$ case
$\hat{\xi}_{i,s}(t)-z_{i,s}(t)$ is a martingale, making it possible to take
$\sup_{0\leq\tau\leq t}$ inside the expectation via Doob’s inequality. It is
no longer the case for $M>1$ where
$\hat{\xi}_{\underline{i},\underline{s}}^{(m)}(t)-z_{\underline{i},\underline{s}}^{(m)}(t)$
is typically not a martingale itself.
(17) is a local result in the sense that it provides a uniform bound, ensuring
that $\hat{\xi}_{i,s}(t)$ and $\xi_{i,s}(t)$ are close for all vertices $i$
simultaneously. For example, in the SIS process it allows us to approximate
infection probabilities for concrete individuals, not just global or
mesoscopic ratios.
(18) will be elaborated on in Theorem 3.
In general, we cannot expect a similar local result for $\hat{\xi}_{i,s}(t)$
and $z_{i,s}(t)$ since $\hat{\xi}_{i,s}(t)$ is an indicator while $z_{i,s}(t)$
is a continuous variable. However, if we average out $\hat{\xi}_{i,s}(t)$ over
a macroscopic set of vertices, a similar result will hold.
In (18) the use of $\ell^{2}$ or $\ell^{\infty}$ is advised. Observe
$\displaystyle\|W\|_{\infty}=\max_{i}\sum_{j}w_{ij}\leq\delta_{\textrm{max}}$
$\displaystyle\|W\|_{2}\leq\sqrt{\|W\|_{1}\|W\|_{\infty}}=\sqrt{\left(\max_{j}\sum_{i}w_{ij}\right)\left(\max_{i}\sum_{j}w_{ij}\right)}\leq\sqrt{\delta_{\textrm{max}}^{\textrm{out}}\delta_{\textrm{max}}},$
(19)
making $\exp\left(C_{2}\left\|W+I\right\|t\right)$ bounded in (18). Note that
(19) is the only step where Assumption (6) regarding
$\delta_{\textrm{max}}^{\textrm{out}}$ is used.
As for $\|\mu\|$:
$\displaystyle\|\mu\|_{\infty}=$ $\displaystyle\max_{1\leq i\leq
N}\sqrt{\sum_{j=1}^{N}w_{ij}^{2}}\leq\max_{1\leq i\leq
N}\sqrt{w_{\textrm{max}}\sum_{j=1}^{N}w_{ij}}\leq\sqrt{w_{\textrm{max}}\delta_{\textrm{max}}},$
$\displaystyle\|\mu\|_{2}=$
$\displaystyle\sqrt{\sum_{i=1}^{N}\sum_{j=1}^{N}w_{ij}^{2}}.$
Convention 1 works well with the $O\left(\sqrt{w_{\textrm{max}}}\right)$ error
bound, as $w_{\textrm{max}}=\frac{1}{\bar{d}}$ holds in that case, suggesting
that vertices are close to independent when they have many neighbors on
average. Similarly to (17), it also gives a uniform error bound, making it
possible to approximate the probabilities at the individual level. For
Convention 2, on the other hand, $w_{\textrm{max}}=\frac{1}{d_{\min}}$ is
sensitive to even one vertex with a low degree. If we are not attached to
bounds uniform in $i$, we can provide a more robust bound for the error of a
typical vertex; thus, it is possible to describe global or mesoscopic
population statistics.
Let $\iota\sim U\left([N]\right)$ be the index of a randomly chosen vertex.
$\displaystyle\mathbb{P}\left(\exists\ \tau\in[0,t]:\
\xi_{\iota}(\tau)\neq\hat{\xi}_{\iota}(\tau)\right)=$
$\displaystyle\frac{1}{N}\sum_{i=1}^{N}\mathbb{P}\left(\exists\ \tau\in[0,t]:\
\xi_{i}(\tau)\neq\hat{\xi}_{i}(\tau)\right)\leq$
$\displaystyle\frac{1}{2N}\sum_{i=1}^{N}\tilde{D}_{i}(t)\leq\sqrt{\frac{1}{4N}\sum_{i=1}^{N}\tilde{D}_{i}^{2}(t)}=O\left(\sqrt{\frac{1}{N}\|\mu\|_{2}^{2}}\right).$
Observe
$\displaystyle\frac{1}{N}\|\mu\|_{2}^{2}=\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N}w_{ij}^{2}$
(20)
is the squared and normalized Frobenius norm of the matrix $W.$ We mention
that such bounds were used in [31] under stricter assumptions regarding $W$.
Note that for Convention 2
$\displaystyle\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N}w_{ij}^{2}=\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{a_{ij}}{d^{2}(i)}=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{d(i)},$
(21)
meaning the error is small when vertices typically have large degrees.
These observations along with Theorem 2 give the following result:
###### Theorem 3.
For $M=1$ (directed, weighted graphs), there exist constants
$C_{1}=C_{1}(t,\delta_{\textrm{max}})$ and
$C_{2}=C_{2}(t,\delta_{\textrm{max}},\delta_{\textrm{max}}^{\textrm{out}})$
such that
$\displaystyle\max_{i}\mathbb{P}\left(\exists\ \tau\in[0,t]:\
\xi_{i}(\tau)\neq\hat{\xi}_{i}(\tau)\right)\leq$ $\displaystyle
C_{1}\sqrt{w_{\textrm{max}}},$ (22)
$\displaystyle\frac{1}{N}\sum_{i=1}^{N}\mathbb{P}\left(\exists\ \tau\in[0,t]:\
\xi_{i}(\tau)\neq\hat{\xi}_{i}(\tau)\right)\leq$ $\displaystyle
C_{2}\sqrt{\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N}w_{ij}^{2}}.$ (23)
So far, we have only accounted for the error between $\xi_{i}(t)$ and
$\hat{\xi}_{i}(t)$; however, what we are actually interested in is the
expectation $\mathbb{E}\left(\hat{\xi}_{i}(t)\right)=z_{i}(t)$, the solution
of the ODE system given by NIMFA. Thankfully,
$\left(\hat{\xi}_{i}(t)\right)_{i\in[N]}$ are independent; hence, their
averages must concentrate around the mean:
###### Theorem 4.
Assume (16) holds with independent initial conditions. Then for any $t\geq 0$
and any $1\leq K\leq N$,
$\displaystyle\mathbb{E}\left[\sup_{0\leq\tau\leq
t}\sum_{s\in\mathcal{S}}\left|\frac{1}{K}\sum_{i=1}^{K}\left(\hat{\xi}_{i,s}(\tau)-z_{i,s}(\tau)\right)\right|\right]\leq\frac{2|\mathcal{S}|}{\sqrt{K}}.$
(24)
The most natural application of Theorem 4 is for $K=N$, but it is formulated
in a way so that it can be applied to any convenient subset of vertices (the
fact that the first $K$ vertices are considered has no significance as the
vertices can be reordered arbitrarily).
Together, Theorems 2, 3 and 4 give an error bound for the NIMFA approximation.
###### Theorem 5.
Assume (16) holds with independent initial conditions. Then for any $t\geq 0$,
there exists a constant $C=C(t,\delta_{\max},R)$ such that
$\displaystyle\sup_{0\leq\tau\leq
t}\mathbb{E}\left(\sum_{s\in\mathcal{S}}\left|\frac{1}{N}\sum_{i=1}^{N}\left(\xi_{i,s}(\tau)-z_{i,s}(\tau)\right)\right|\right)\leq
C\left(\sqrt{w_{\max}}+\frac{1}{\sqrt{N}}\right).$ (25)
Furthermore, if we additionally assume $M=1$, there exist constants
$C_{1}=C_{1}(t,\delta_{\max}),C_{2}=C_{2}(t,\delta_{\max},\delta_{\textrm{max}}^{\textrm{out}})$
such that
$\displaystyle\mathbb{E}\left[\sup_{0\leq\tau\leq
t}\left(\sum_{s\in\mathcal{S}}\left|\frac{1}{N}\sum_{i=1}^{N}\left(\xi_{i,s}(t)-z_{i,s}(t)\right)\right|\right)\right]\leq$
$\displaystyle C_{1}\left(\sqrt{w_{\textrm{max}}}+\frac{1}{\sqrt{N}}\right)$
(26)
and
$\displaystyle\mathbb{E}\left[\sup_{0\leq\tau\leq
t}\left(\sum_{s\in\mathcal{S}}\left|\frac{1}{N}\sum_{i=1}^{N}\left(\xi_{i,s}(t)-z_{i,s}(t)\right)\right|\right)\right]\leq$
$\displaystyle
C_{2}\left(\sqrt{\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N}w_{ij}^{2}}+\frac{1}{\sqrt{N}}\right)$
(27)
where $\mu$ is the same as for Theorem 2.
Figure 2: The ratio of infected based on the average of $1000$ simulations
(triangles) compared to the estimate of NIMFA (solid line) on an $N=1000$
vertex modified cycle graph with the closest $10$ (left) and $100$ (right)
neighbors connected ($\beta=2,\gamma=1$). As we increase the degrees, NIMFA
performs better.
Figure 3: The ratio of infected based on the average of $10$ simulations
(triangles) compared to the estimate of NIMFA (solid line) on an $N=5000$
vertex modified cycle graph with the closest $10$ (left) and $100$ (right)
neighbors connected ($\beta=2,\gamma=1$). As we increase the degrees, NIMFA
performs better.
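The comparisons in Figures 2-3 can be reproduced in miniature; the sketch below (illustrative, with a reduced problem size for speed) pits an exact Gillespie simulation of the SIS process against the Euler-integrated NIMFA estimate on a modified cycle graph:

```python
import numpy as np

rng = np.random.default_rng(1)

def knn_cycle(n, k):
    """Modified cycle graph: vertex i is connected to its k closest
    neighbors on each side (as in Figures 2-3)."""
    a = np.zeros((n, n))
    for d in range(1, k + 1):
        a += np.eye(n, k=d) + np.eye(n, k=-d)
        a += np.eye(n, k=n - d) + np.eye(n, k=d - n)  # wrap-around
    return a

def gillespie_sis(w, x0, beta, gamma, t_max):
    """One exact SIS trajectory; returns the indicator vector at t_max."""
    x, t, n = x0.astype(float).copy(), 0.0, len(x0)
    while True:
        rates = np.concatenate([beta * (1 - x) * (w @ x), gamma * x])
        total = rates.sum()
        if total == 0:
            return x
        t += rng.exponential(1.0 / total)
        if t > t_max:
            return x
        j = rng.choice(2 * n, p=rates / total)
        x[j % n] = 1.0 if j < n else 0.0     # infection / recovery event

n, k, beta, gamma, t_max = 200, 10, 2.0, 1.0, 5.0
w = knn_cycle(n, k)
w /= w.sum(axis=1).mean()                    # Convention 1: divide by dbar
x0 = (rng.random(n) < 0.1).astype(float)     # 10% initially infected

z, dt = x0.copy(), 1e-3                      # NIMFA by forward Euler
for _ in range(int(t_max / dt)):
    z += dt * (-gamma * z + beta * (1 - z) * (w @ z))

sim = np.mean([gillespie_sis(w, x0, beta, gamma, t_max).mean() for _ in range(10)])
print(f"simulation: {sim:.3f}   NIMFA: {z.mean():.3f}")
```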
### Related works
In this section we compare our results to the recent independent work of
Sridhar and Kar [30, 31] and Parasnis et al. [24].
In [30] the authors describe how the state densities of certain related
stochastic processes on weighted graphs with a doubly stochastic, symmetric
matrix $W$ can be approximated by a set of $O(N)$ ODEs analogous to NIMFA,
given that the normalized Frobenius norm
$\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N}w_{ij}^{2}$ is small and $N$ is large.
Given that the conclusions of Theorem 4.2 in [30] and Theorem 5 in the present
paper are very similar in nature, it makes sense to compare the general setup,
the conditions, the conclusions and the techniques of the two works directly.
Setup. Strictly speaking, the stochastic processes discussed in the present
paper and in [30, 31] are different. In our work, time is continuous, while
[30] and [31] start from discrete time steps and then speed up time. This is a
minor difference though, and with appropriate time scaling, the models in [30,
31] and the present paper define essentially the same object.
Conditions. In the present paper, we require only that the normalized degrees
are bounded. This is more general than the doubly stochastic $W$ assumption of
[30, 31]. Specifically, our result also justifies Example 4.2 in [31].
Via (27), qualitatively the same type of error term is retained in terms of
the normalized Frobenius norm, but [30, 31] provide an error probability
bound that is exponential in $N$. In the present paper, we do not focus on
this kind of large deviation bound in $N$.
[30, 31] derive bounds for the global average. On the other hand, our results
show more localized, uniform bounds in terms of vertices. This is made
possible by the use of the auxiliary Markov processes $\hat{\xi}_{i}(t)$,
allowing accurate predictions about individual vertices too, not just global
averages.
Our framework also allows higher order interactions, while [30, 31] is
restricted to order 2 interactions (graphs).
In [24] the authors study the SIR process in age-structured populations on
time-varying networks. They show that when $N$ and the rewiring rate are high,
the prevalence of the age groups can be described via an ODE system analogous
to the metapopulation NIMFA model (34) in Section 4.2. Note that [24] applies
to cases with fast, but finite rewiring rates as well, while our result only
considers the idealized case of infinite rewiring rates.
## 4 Further reductions to NIMFA
This section relates NIMFA to other approaches from the literature. Although
NIMFA is a major reduction of the exact Kolmogorov-equations, requiring only
$O(N)$ ODEs to be solved, it can still be computationally prohibitive when the
number of vertices is too large. Furthermore, NIMFA requires knowing both the
full network structure and precise initial conditions for all vertices. We
look at further reductions to (12) when additional structure is known for the
network or initial conditions; several of these actually lead to other well-
known models from the literature.
### 4.1 Homogeneous mean field approximation
The homogeneous mean field approximation (HMFA) assumes that the vertices are
_well mixed_ , meaning, every vertex interacts with every other with equal
weights. Formally, this can be this can be described by a complete hypergraph
(with all loops and secondary loops):
$\displaystyle w_{i,\underline{j}}^{(m)}=\frac{1}{N^{m}}.$
This definition may be generalized to include cases when
$w_{i,\underline{j}}^{(m)}=0$ for certain $m$ indices, e.g. $(M+1)$-uniform
hypergraphs. For ease of notation, instead of modifying the definition of
$w_{i,\underline{j}}^{(m)}$, it is also possible to choose the rate functions
$q_{ss^{\prime}}(\phi)$ so that they do not depend on the appropriate
$\phi^{(m)}$ coordinates, making the choice of $w_{i,\underline{j}}^{(m)}$
irrelevant.
It is easy to see that for such networks, $w_{\max}=\frac{1}{N}$ and
$\delta_{\max}=1$. What remains to show is that (5) holds with some bounded
$R$.
$\displaystyle\begin{split}&\sum_{\begin{subarray}{c}\underline{j}\in[N]^{m}\\\
\underline{j}\textit{ is s.\
loop}\end{subarray}}w_{i,\underline{j}}^{(m)}=\frac{1}{N^{m}}\left|\left\\{\left.\underline{j}\in[N]^{m}\right|\underline{j}\textrm{
s.\ loop}\right\\}\right|=\\\
&1-\frac{1}{N^{m}}\left|\left\\{\left.\underline{j}\in[N]^{m}\right|\underline{j}\textrm{
not s. loop}\right\\}\right|=1-\prod_{l=0}^{m-1}\left(1-\frac{l}{N}\right)=\\\
&O\left(\frac{1}{N}\right)\ll\frac{1}{\sqrt{N}}=\sqrt{w_{\max}},\end{split}$
(28)
hence, $R$ can be chosen arbitrarily small for large enough $N$.
Our goal now is to derive a small system of equations for
$\displaystyle u(t):=\frac{1}{N}\sum_{i=1}^{N}z_{i}(t).$
Our strategy is based on the observation that the neighbourhood vectors
$\zeta_{i}(t)$ are the same for all vertices.
$\displaystyle\zeta_{i,\underline{s}}^{(m)}(t)=$
$\displaystyle\frac{1}{N^{m}}\sum_{\underline{j}\in[N]^{m}}\prod_{l=1}^{m}z_{j_{l},s_{l}}(t)=\prod_{l=1}^{m}\left(\frac{1}{N}\sum_{j_{l}=1}^{N}z_{j_{l},s_{l}}(t)\right)=$
$\displaystyle\prod_{l=1}^{m}u_{s_{l}}(t)=:u_{\underline{s}}^{(m)}(t)$
This results in the ODE system:
$\displaystyle\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}u(t)=&Q\left(U(t)\right)u(t),\\\
U(t)=&\left(u^{(m)}(t)\right)_{m=1}^{M},\\\
u^{(m)}(t)=&\left(u_{\underline{s}}^{(m)}(t)\right)_{\underline{s}\in\mathcal{S}^{m}}=\left(\prod_{l=1}^{m}u_{s_{l}}(t)\right)_{\underline{s}\in\mathcal{S}^{m}}.\end{split}$
(29)
For example, the simplicial SIS model (29) takes the form
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}u_{I}(t)=-\gamma u_{I}(t)+\left(1-u_{I}(t)\right)\sum_{m=1}^{M}\beta_{m}u_{I}^{m}(t),$
which was used in [13].
In this setting, Theorem 5 shows the ratio of vertices in state
$s\in\mathcal{S}$ can be approximated by $u_{s}(t)$ with
$O\left(\frac{1}{\sqrt{N}}\right)$ error. The well-known results of Kurtz [16,
17] correspond to the $M=1$ case.
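The scalar HMFA system is the cheapest member of this hierarchy; a minimal sketch of integrating its $I$ component for the simplicial SIS model (names illustrative, not from the paper):

```python
def hmfa_simplicial_sis(u0, betas, gamma, t_max, dt=1e-3):
    """Forward-Euler for du_I/dt = -gamma*u_I + (1-u_I)*sum_m beta_m*u_I^m,
    with betas = [beta_1, ..., beta_M]. A sketch of the I component of (29)."""
    u = float(u0)
    for _ in range(int(t_max / dt)):
        drift = -gamma * u + (1.0 - u) * sum(
            b * u**m for m, b in enumerate(betas, start=1))
        u += dt * drift
    return u

# e.g. hmfa_simplicial_sis(0.1, betas=[1.5, 0.8], gamma=1.0, t_max=10.0)
```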
#### Regular hypergraphs
Although (29) is feasible for both analytical and numerical investigations
(due to its finite size), the assumption that the network structure is well-
mixed is quite restrictive. However, as we will see, the well-mixed condition
can be relaxed given uniform initial conditions.
We call a weighted hypergraph _regular_ if
$\displaystyle\forall\ 1\leq i\leq N,\ 1\leq m\leq M\ \ \delta^{(m)}(i)=1.$
(30)
Note that the value $1$ is arbitrary and any other constant value would work
with minor modifications to the rate functions $q_{ss^{\prime}}$.
We note that (30) always holds for Convention 2 hypergraphs. For Convention 1,
it holds when $d^{(m)}(i)=\bar{d}^{(m)}\,\,\forall 1\leq i\leq N,\ 1\leq m\leq
M$ (that is, the hypergraph is regular in the usual sense).
###### Proposition 1.
Assume (30) and
$z_{i}(0)=u(0)\quad\forall\ 1\leq i\leq N$
for some $u(0)\in\Delta^{\mathcal{S}}.$ Then the solution of (12) takes the
form
$\ z_{i}(t)=u(t)\quad\forall\ 1\leq i\leq N$
where $u(t)$ satisfies (29).
We mention that statements similar to Proposition 1 have appeared in the
literature before in certain special cases [15, Proposition 3.18]. Combining
Proposition 1 with Theorem 2 ensures the accuracy of the homogeneous mean
field approximation on regular graphs with large degrees and homogeneous
initial conditions disregarding any further network structure.
###### Proof.
(Proposition 1)
Let $u(t)$ be the solution of (29). Set $z_{i}(t)=u(t).$ We have to show that
$z_{i}(t)$ satisfies (12). The initial conditions are satisfied according to
the assumption, and for the derivatives,
$\displaystyle u_{\underline{s}}^{(m)}(t)=$ $\displaystyle
u_{\underline{s}}^{(m)}(t)\delta^{(m)}(i)=u_{\underline{s}}^{(m)}(t)\sum_{\underline{j}\in[N]^{m}}w_{i,\underline{j}}^{(m)}=\sum_{\underline{j}\in[N]^{m}}w_{i,\underline{j}}^{(m)}z_{\underline{j},\underline{s}}^{(m)}(t)=\zeta_{i,\underline{s}}^{(m)}(t),$
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}z_{i}(t)=$
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}u(t)=Q\left(U(t)\right)u(t)=Q\left(\zeta_{i}(t)\right)z_{i}(t).$
∎
### 4.2 Metapopulation models
As we saw in Section 4.1, a way to reduce the number of equations is by
grouping vertices together and representing them by a single averaged-out
term. In practice, this approach will only work if the vertices grouped
together are sufficiently homogeneous, which is typically not the case for the
entire population. To mitigate this issue, we may introduce _communities_ ,
inside which we assume homogeneity, then derive the dynamics between
communities. This “higher resolution” may increase accuracy, at the cost of a
larger ODE system.
In practice, the communities can be chosen by demographic and geographic
criteria such as age and location. Alternatively, it is also possible to
group vertices according to degree, or a third option is the use of community
detection algorithms [1].
We present the general setup for metapopulation models first for graphs in
Section 4.2.1, then for hypergraphs in Section 4.2.2.
For the SIS process on graphs, similar results have been derived in [4].
#### 4.2.1 Metapopulation models on graphs
First, assume $M=1$. Divide the vertices into a partition $V_{1},\dots,V_{K}$
with sizes $\left|V_{k}\right|=N_{k}$ such that vertices inside a group are
similar in some sense. The average weight between groups $V_{k}$ and $V_{l}$ is
$\displaystyle\tilde{w}_{kl}=\frac{\sum_{i\in V_{k}}\sum_{j\in
V_{l}}w_{ij}}{N_{k}N_{l}}.$ (31)
(In the idealized case of metapopulations, $w_{ij}$ would have the same value
$\tilde{w}_{kl}$ for each $i\in V_{k},j\in V_{l}$ pair.)
Next we derive the dynamics for the averages
$\displaystyle\bar{z}_{k}(t):=\frac{1}{N_{k}}\sum_{i\in V_{k}}z_{i}(t).$ (32)
$\zeta_{i}(t)$ has the same value $\bar{\zeta}_{k}(t)$ for all $i\in V_{k}$:
$\displaystyle\bar{\zeta}_{k}(t)=\zeta_{i}(t)=\sum_{j=1}^{N}w_{ij}z_{j}(t)=\sum_{l=1}^{K}\underbrace{N_{l}\tilde{w}_{kl}}_{\bar{w}_{kl}}\frac{1}{N_{l}}\sum_{j\in
V_{l}}z_{j}(t)=\sum_{l=1}^{K}\bar{w}_{kl}\bar{z}_{l}(t).$ (33)
Therefore, we can derive an ODE system for (32)
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\bar{z}_{k}(t)=Q\left(\bar{\zeta}_{k}(t)\right)\bar{z}_{k}(t)$
(34)
which is equivalent to (12) on the graph $\overline{\mathcal{G}}$ with vertex
set $\\{1,\dots,K\\}$ and weights $\left(\bar{w}_{kl}\right)_{k,l=1}^{K}.$
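Building the coarse-grained weights from a concrete $W$ and a partition is a direct transcription of (31) and (33); a sketch (illustrative code, assumes non-empty groups):

```python
import numpy as np

def coarse_grain(w, labels, K):
    """Return the K x K matrix wbar with wbar[k,l] = N_l * wtilde[k,l],
    where wtilde[k,l] is the average weight (31) between groups k and l;
    labels[i] in {0,...,K-1} is the group of vertex i."""
    sizes = np.bincount(labels, minlength=K).astype(float)
    wbar = np.zeros((K, K))
    for k in range(K):
        for l in range(K):
            wtilde = w[np.ix_(labels == k, labels == l)].sum() / (sizes[k] * sizes[l])
            wbar[k, l] = sizes[l] * wtilde
    return wbar
```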
#### 4.2.2 Metapopulation models on hypergraphs
For the general metapopulation setting, we assume that for each $m=1,\dots,M$,
the population is partitioned into _local groups_
$V_{1}^{(m)},\dots,V_{K^{(m)}}^{(m)}$. The _type_ of a vertex will be denoted
by $k=\left(k^{(1)},\dots,k^{(M)}\right)$, which means that for each
$m=1,\dots,M$, the given vertex is in the local group $V_{k^{(m)}}^{(m)}$.
Vertices can be partitioned according to their type into
$\prod_{m=1}^{M}K^{(m)}$ _global groups_.
We aim to define a hypergraph on the types, with weights consistent with the
average of weights within each group. That said, with the above setup, this is
easier to do using local groups for each $m=1,\dots,M$.
For a given $m$, $k^{(m)}$ and
$\underline{l}^{(m)}=\left(l_{1}^{(m)},\dots,l_{m}^{(m)}\right)$, the _total
local $m$-weight between $k^{(m)}$ and $\underline{l}^{(m)}$_ is defined as
$\displaystyle W_{k^{(m)},\underline{l}^{(m)}}^{(m)}:=\sum_{i\in
V_{k^{(m)}}^{(m)}}\sum_{j_{1}\in V_{l_{1}^{(m)}}^{(m)}}\dots\sum_{j_{m}\in
V_{l_{m}^{(m)}}^{(m)}}w_{i,\underline{j}}^{(m)}.$ (35)
Then, using the notation
$N_{\underline{l}^{(m)}}:=\prod_{r=1}^{m}N_{l_{r}^{(m)}}^{(m)},$
we define the weight of the edge containing the local groups
$k^{(m)},\underline{l}^{(m)}$ as
$\displaystyle\tilde{w}_{k^{(m)},\underline{l}^{(m)}}^{(m)}:=\frac{W_{k^{(m)},\underline{l}^{(m)}}^{(m)}}{N_{k^{(m)}}N_{\underline{l}^{(m)}}}.$
(36)
Let $k(i)=\left(k^{(1)}(i),\dots,k^{(M)}(i)\right)$ denote the type of $i$.
For easier notation, we will often use $\iota\sim U\left([N]\right)$, which is
a random vertex independent from everything else. Then we define the average
of $z_{i}(t)$ over type $k$ as
$\displaystyle\bar{z}_{k}(t):=\mathbb{E}\left(\left.z_{\iota}(t)\right|k(\iota)=k\right)=\frac{1}{N_{k}}\sum_{i\in
V_{k}}z_{i}(t).$ (37)
In this case as well, $\zeta_{i}(t)$ has the same value for all $i\in V_{k}$;
this common value will be denoted by $\bar{\zeta}_{k}(t).$ Let
$\iota_{1},\dots,\iota_{m}$ denote i.i.d. copies of $\iota.$ Then
$\displaystyle\begin{split}\bar{\zeta}_{k}^{(m)}(t)=&\zeta_{i}^{(m)}(t)=\sum_{\underline{j}\in[N]^{m}}w_{i,\underline{j}}^{(m)}z_{\underline{j}}^{(m)}(t)=\sum_{\underline{l}^{(m)}}\tilde{w}_{k,\underline{l}^{(m)}}^{(m)}\sum_{j_{1}\in V_{l_{1}^{(m)}}^{(m)}}\dots\sum_{j_{m}\in V_{l_{m}^{(m)}}^{(m)}}z_{\underline{j}}^{(m)}(t)\\ =&\sum_{\underline{l}^{(m)}}\underbrace{N_{\underline{l}^{(m)}}\tilde{w}_{k,\underline{l}^{(m)}}^{(m)}}_{:=\bar{w}_{k^{(m)},\underline{l}^{(m)}}^{(m)}}\mathbb{E}\left(\left.\prod_{r=1}^{m}z_{\iota_{r}}(t)\right|k^{(m)}(\iota_{1})=l_{1}^{(m)},\dots,k^{(m)}(\iota_{m})=l_{m}^{(m)}\right)\\ =&\sum_{\underline{l}^{(m)}}\bar{w}_{k^{(m)},\underline{l}^{(m)}}^{(m)}\prod_{r=1}^{m}\mathbb{E}\left(\left.z_{\iota}(t)\right|k^{(m)}(\iota)=l_{r}^{(m)}\right).\end{split}$
(38)
This means that the ODE system for (37) is formally the same as (34) (with the
appropriate definition of $\bar{z}_{k}(t)$ and $\bar{\zeta}_{k}(t)$).
Note that $\bar{\zeta}_{k}(t)$ can also be expressed via $\bar{z}_{k}(t)$ as
$\displaystyle\mathbb{E}\left(\left.z_{\iota}(t)\right|k^{(m)}(\iota)=l_{r}^{(m)}\right)=$
$\displaystyle\mathbb{E}\left(\left.\mathbb{E}\left(\left.z_{\iota}(t)\right|k(\iota)\right)\right|k^{(m)}(\iota)=l_{r}^{(m)}\right)=$
$\displaystyle\mathbb{E}\left(\left.\bar{z}_{k(\iota)}(t)\right|k^{(m)}(\iota)=l_{r}^{(m)}\right),$
making (34) a closed system.
In the special case when the hypergraph is $(M+1)$-uniform, we can set
$K^{(m)}=1$ for all $m<M$ virtually making the local group $k^{(M)}$ and the
global group $k$ the same (apart from some $1$’s in the first $M-1$
components). In this case, $Q$ only depends on $\bar{\zeta}^{(M)}(t)$, which
can be expressed as
$\displaystyle\bar{\zeta}_{k^{(M)}}^{(M)}(t)=\sum_{\underline{l}^{(M)}}\bar{w}_{k^{(M)},\underline{l}^{(M)}}^{(M)}\prod_{r=1}^{M}\bar{z}_{l_{r}^{(M)}}(t).$
### 4.3 Annealed networks
So far, we have focused only on the dynamics of the Markov process, neglecting
the dynamics of the network itself. When there is a separation of time scales
between the Markov process and the changes to the network, two kinds of
idealization are typically used:
* •
_quenched networks_ : the speed at which the network changes is much slower
than the Markov process. In this case, the network is assumed constant in
time.
* •
_annealed networks_ : the speed at which the network changes is much faster
than the Markov process. In this case, we consider the network changes
averaged out for the interactions.
Annealed networks can be modeled by replacing connections
$a_{i,\underline{j}}^{(m)}$ in (2) and (3) with the average $\langle
a\rangle_{i,\underline{j}}^{(m)}$.
In this section, we present a setup for annealed networks generated via the
configuration model [20]. Similar calculations can be made for other models
that include, e.g., degree correlations, such as equation (93) in [8].
Once again, we start with the graph case.
In the configuration model the degrees $d(1),\dots,d(N)$ are given beforehand,
and vertex $i$ receives $d(i)$ half-edges (_stubs_) initially. Then in each
round, we choose two stubs at random to connect and form an edge, repeating
this procedure until all stubs are paired.
Loops and multiple edges are possible, but their effect will be neglected. The
expected connection between vertices $i$ and $j$ is
$\displaystyle\langle a\rangle_{ij}=\frac{d(i)d(j)}{\bar{d}N}.$
The expected degree of each vertex $i$ indeed matches the prescribed $d(i)$, since
$\displaystyle\sum_{j=1}^{N}\langle
a\rangle_{ij}=\frac{d(i)}{\bar{d}}\frac{1}{N}\sum_{j=1}^{N}d(j)=d(i).$
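A quick Monte Carlo experiment can be used to check this annealed approximation; the degree sequence below is an illustrative assumption, and the empirical pair frequencies match $d(i)d(j)/(\bar{d}N)$ only up to $O(1/N)$ corrections.

```python
# Hedged Monte Carlo check of <a>_ij ~ d(i) d(j) / (dbar N) under stub matching.
import numpy as np
rng = np.random.default_rng(2)

d = np.array([1, 2, 2, 3, 4])                 # prescribed degrees (sum is even)
N, dbar = len(d), d.mean()
counts = np.zeros((N, N))
reps = 20_000
for _ in range(reps):
    stubs = np.repeat(np.arange(N), d)        # d(i) stubs for vertex i
    rng.shuffle(stubs)
    for u, v in stubs.reshape(-1, 2):         # pair consecutive stubs into edges
        counts[u, v] += 1
        counts[v, u] += 1

print(counts[0, 3] / reps)                    # empirical expected a_{03}
print(d[0] * d[3] / (dbar * N))               # annealed formula, O(1/N) apart
```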
$\langle a\rangle_{ij}$ depends only on the degrees of $i$ and $j$, so it can
be interpreted as a metapopulation model where vertices are grouped according
to their degree. (Note that here we also use the index $k=0$ for isolated
vertices if any.) The corresponding weights are
$\displaystyle\tilde{w}_{kl}=\frac{kl}{\bar{d}^{2}N},$
for Convention 1, and
$\displaystyle\tilde{w}_{kl}=\frac{l}{\bar{d}N},$
for Convention 2.
Let $q_{k}:=\frac{kN_{k}}{\bar{d}N}$ denote the size biased degree
distribution and introduce
$\displaystyle\Theta(t):=\sum_{l=0}^{d_{\max}}q_{l}\bar{z}_{l}(t).$ (39)
Using (33), $\bar{\zeta}_{k}(t)$ can be written as
$\displaystyle\bar{\zeta}_{k}(t)=\frac{k}{\bar{d}}\Theta(t),$
for Convention 1, and
$\displaystyle\bar{\zeta}_{k}(t)=\Theta(t),$
for Convention 2.
For example, the $I$ component of the SIS process assuming Convention 1 is
$\displaystyle\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}\bar{z}_{k,I}(t)&=-\gamma\bar{z}_{k,I}(t)+\frac{\beta}{\bar{d}}k\left(1-\bar{z}_{k,I}(t)\right)\Theta_{I}(t),\\ \Theta_{I}(t)&=\sum_{l=0}^{d_{\max}}q_{l}\bar{z}_{l,I}(t),\end{split}$
(40)
which is the Inhomogeneous Mean Field Approximation (IMFA) studied by Pastor-Satorras and Vespignani [25].
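A minimal sketch of integrating the IMFA system (40) is given below; the degree distribution and the rates $\beta,\gamma$ are illustrative assumptions.

```python
# Sketch of the IMFA / heterogeneous mean field system (39)-(40).
import numpy as np
from scipy.integrate import solve_ivp

gamma, beta = 1.0, 0.4
k = np.arange(0, 21)                           # degree classes k = 0, ..., d_max
Nk = np.exp(-k / 5.0); Nk /= Nk.sum()          # fractions N_k / N (illustrative)
dbar = k @ Nk
q = k * Nk / dbar                              # size-biased degree distribution

def imfa(t, z):                                # z[k] = zbar_{k,I}(t)
    Theta = q @ z                              # (39)
    return -gamma * z + (beta / dbar) * k * (1 - z) * Theta   # (40)

sol = solve_ivp(imfa, (0.0, 50.0), np.full(k.size, 0.01))
print(sol.y[:, -1])                            # endemic level per degree class
```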
For Convention 1, to apply the results of the present paper, we need to assume
upper regularity, i.e. that $\delta_{\max}=\frac{d_{\max}}{\bar{d}}$ is bounded.
In many applications, the degree distribution converges to a fixed
distribution, making $\bar{d}$ bounded; in such a setting, we accordingly
require $d_{\max}$ to be bounded as well.
Assuming upper regularity,
$\displaystyle w_{\max}=\frac{d_{\max}^{2}}{\bar{d}^{2}N}=\frac{1}{N}\delta_{\max}^{2},$
and thus Theorem 5 actually provides an $O\left(\frac{1}{\sqrt{N}}\right)$ error bound.
As for Convention 2, $\delta_{\max}=1$ holds as usual, and
$\displaystyle w_{\max}=\frac{1}{N}\frac{d_{\max}}{\bar{d}}.$
Unfortunately, one cannot relax the bound on $d_{\textrm{max}}$ by using (23)
instead of (22), as it requires bounds on the out-degrees:
$\displaystyle\delta^{\textrm{out}}(j)=\sum_{i=1}^{N}w_{ij}=\sum_{k=0}^{d_{\textrm{max}}}N_{k}\frac{d(j)}{\bar{d}N}=\frac{d(j)}{\bar{d}}\leq\frac{d_{\textrm{max}}}{\bar{d}}\leq\delta_{\textrm{max}}^{\textrm{out}}.$
Now we turn to the hypergraph case $M>1$. We generalize the notion of the
configuration model in the following manner: For a fixed $m$, the $m$-degrees
are given as $d^{(m)}(1),\dots,d^{(m)}(N)$, and each vertex $i$ receives
$d^{(m)}(i)$ many $m$-stubs. In each round, we choose $m+1$ $m$-stubs at random to
form an $m$-edge, then repeat this procedure until all of the stubs have been
paired. This procedure is performed for each $1\leq m\leq M$ independently.
For distinct $i,j_{1},\dots,j_{m}$, the probability of connecting them in a
given round is
$\displaystyle\frac{d^{(m)}(i)\prod_{r=1}^{m}d^{(m)}(j_{r})}{{\bar{d}^{(m)}N\choose
m+1}}\approx\frac{(m+1)!d^{(m)}(i)\prod_{r=1}^{m}d^{(m)}(j_{r})}{\left(\bar{d}^{(m)}N\right)^{m+1}}.$
Since there are $\frac{\bar{d}^{(m)}N}{m+1}$ rounds in total, we set
$\displaystyle\langle
a\rangle_{i,\underline{j}}^{(m)}:=\frac{m!d^{(m)}(i)\prod_{r=1}^{m}d^{(m)}(j_{r})}{\left(\bar{d}^{(m)}N\right)^{m}}.$
For the hypergraph case, we only examine Convention 1, for which
$\displaystyle\tilde{w}_{k^{(m)},\underline{l}^{(m)}}^{(m)}=\frac{k^{(m)}}{\bar{d}^{(m)}}\frac{\prod_{r=1}^{m}l_{r}^{(m)}}{\left(\bar{d}^{(m)}N\right)^{m}}.$
Once again, the resulting hypergraph can be interpreted as a metapopulation
model, where the local groups are given according to the $m$-degrees of the
vertices.
Clearly $\delta^{(m)}(i)=\frac{d^{(m)}(i)}{\bar{d}^{(m)}},$ so we make an
upper regularity assumption in this case as well, from which
$w_{\max}=O\left(\frac{1}{N}\right)$ follows.
For hypergraphs, we also need to check condition (5). Since
$\tilde{w}_{k^{(m)},\underline{l}^{(m)}}^{(m)}\leq\frac{\delta_{\max}^{m+1}}{N^{m}},$
(28) implies
$\displaystyle\sum_{\begin{subarray}{c}\underline{j}\in[N]^{m}\\\
\underline{j}\textit{ is s.\ loop}\end{subarray}}w_{i,\underline{j}}^{(m)}\leq
C\sum_{\begin{subarray}{c}\underline{j}\in[N]^{m}\\\ \underline{j}\textit{ is
s.\
loop}\end{subarray}}\frac{1}{N^{m}}=O\left(\frac{1}{N}\right)\ll\sqrt{w_{\max}},$
(41)
hence arbitrarily small $R$ can be used for large enough $N$.
The next step is to calculate $\bar{\zeta}_{k}(t)$ based on (34). Define
$q_{k^{(m)}}^{(m)}:=\frac{k^{(m)}N_{k^{(m)}}}{\bar{d}^{(m)}N},$
the size-biased distribution of the $m$-degrees. Also define
$\displaystyle\Theta^{(m)}(t):=\sum_{l=1}^{d_{\max}^{(m)}}q_{l}^{(m)}\mathbb{E}\left(\left.z_{\iota}(t)\right|d^{(m)}(\iota)=l\right),$
(42)
once again using the notation $\iota\sim U\left([N]\right)$.
Using (38),
$\displaystyle\bar{\zeta}_{k}^{(m)}(t)$
$\displaystyle=\sum_{\underline{l}^{(m)}}\bar{w}_{k^{(m)},\underline{l}^{(m)}}^{(m)}\prod_{r=1}^{m}\mathbb{E}\left(\left.z_{\iota}(t)\right|d^{(m)}(\iota)=l_{r}^{(m)}\right)$
$\displaystyle=\frac{k^{(m)}}{\bar{d}^{(m)}}\sum_{\underline{l}^{(m)}}\prod_{r=1}^{m}q_{l_{r}}^{(m)}\mathbb{E}\left(\left.z_{\iota}(t)\right|d^{(m)}(\iota)=l_{r}^{(m)}\right)$
$\displaystyle=\frac{k^{(m)}}{\bar{d}^{(m)}}\prod_{r=1}^{m}\sum_{l_{r}=1}^{d_{\max}^{(m)}}q_{l_{r}}^{(m)}\mathbb{E}\left(\left.z_{\iota}(t)\right|d^{(m)}(\iota)=l_{r}^{(m)}\right)$
$\displaystyle=\frac{k^{(m)}}{\bar{d}^{(m)}}\left(\Theta^{(m)}(t)\right)^{m}.$
Accordingly, e.g. the dynamics for the simplicial SIS model can be written as
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\bar{z}_{k,I}(t)=-\gamma\bar{z}_{k,I}(t)+(1-\bar{z}_{k,I}(t))\sum_{m=1}^{M}\frac{\beta^{(m)}k^{(m)}}{\bar{d}^{(m)}}\left(\Theta_{I}^{(m)}(t)\right)^{m}.$
(43)
(43) was studied in [14] for the $(M+1)$-uniform case, where
$\mathbb{E}\left(\left.z_{\iota}(t)\right|d^{(M)}(\iota)=l\right)$ simplifies
to $\bar{z}_{k}(t)$ as the global class $k$ and the local class $k^{(M)}$
coincide.
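As a concrete illustration, the following sketch integrates (43) for $M=2$ under the additional simplifying assumption that the $1$- and $2$-degrees of a vertex are independent; all degree classes and rates are illustrative assumptions.

```python
# Hedged sketch of the simplicial SIS system (42)-(43) with M = 2.
import numpy as np
from scipy.integrate import solve_ivp

gamma = 1.0
beta = {1: 0.3, 2: 0.8}                                    # beta^(m)
d1, p1 = np.array([2., 4., 8.]), np.array([0.5, 0.3, 0.2]) # 1-degree classes
d2, p2 = np.array([1., 3.]), np.array([0.6, 0.4])          # 2-degree classes
db1, db2 = d1 @ p1, d2 @ p2                                # mean m-degrees

def rhs(t, zf):
    z = zf.reshape(d1.size, d2.size)          # z[k1, k2] = zbar_{(k1,k2),I}
    Th1 = (d1 * p1 / db1) @ (z @ p2)          # Theta^(1)(t), cf. (42)
    Th2 = (d2 * p2 / db2) @ (p1 @ z)          # Theta^(2)(t)
    drive = beta[1] * np.outer(d1, np.ones_like(d2)) / db1 * Th1 \
          + beta[2] * np.outer(np.ones_like(d1), d2) / db2 * Th2 ** 2
    return (-gamma * z + (1 - z) * drive).ravel()

sol = solve_ivp(rhs, (0.0, 60.0), np.full(d1.size * d2.size, 0.05))
print(sol.y[:, -1].reshape(d1.size, d2.size)) # stationary infection per type
```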
### 4.4 Activity-driven networks
Activity-driven networks were introduced in [26].
Let $a_{1},\dots,a_{K}$ be positive numbers called _activities_ and let $a(i)$
denote the activity of vertex $i$. Instead of a fixed graph structure, each
vertex chooses a random vertex uniformly at rate $\beta a(i)$, and if they form
an SI pair, the susceptible node becomes infected. Recoveries happen
independently at rate $\gamma.$
The above model corresponds to an SIS process on the weighted graph
$\displaystyle w_{ij}=\frac{a(i)+a(j)}{N}$
since to form the $(i,j)$ pair, either $i$ or $j$ needs to activate, and each
vertex is chosen with probability $\frac{1}{N}.$ The graph is a metapopulation
model, with groups corresponding to the activity values.
We generalize this concept to allow higher order interactions.
$a_{1}^{(m)},\dots,a_{K^{(m)}}^{(m)}$ are the possible $m$-activities and we
assume that vertex $i$ chooses $m$ other vertices at random with rate
$a^{(m)}(i).$ This results in a hypergraph with weights
$\displaystyle
w_{i,\underline{j}}^{(m)}=\frac{1}{N^{m}}\left(a_{i}^{(m)}+\sum_{r=1}^{m}a_{j_{r}}^{(m)}\right).$
Assume the activity rates are bounded from above by some $a_{\max}<\infty.$
Also, introduce
$\bar{a}^{(m)}:=\frac{1}{N}\sum_{i=1}^{N}a^{(m)}(i).$
Then
$\displaystyle\delta^{(m)}(i)=a_{i}^{(m)}+\frac{1}{N^{m}}\sum_{\underline{j}\in[N]^{m}}\sum_{r=1}^{m}a_{j_{r}}^{(m)}=a_{i}^{(m)}+m\bar{a}^{(m)}\leq(M+1)a_{\max},$
so (4) is satisfied.
Moreover, $w_{\max}\asymp\frac{1}{N}$, and (41) is applicable here as well,
satisfying (5); hence Theorem 2 applies.
$\bar{\zeta}_{k}(t)$ can also be expressed with the help of (38).
###### Proposition 2.
Let $\iota\sim U([N])$ be a random index and $p^{(m)}_{k^{(m)}}$ be the ratio of
vertices in the local group $k^{(m)}.$ Also, define
$\displaystyle\psi^{(m)}(t):=\sum_{l=1}^{K^{(m)}}a_{l}^{(m)}p_{l}^{(m)}\mathbb{E}\left(\left.z_{\iota}(t)\right|a^{(m)}(\iota)=l^{(m)}\right).$
Then the neighborhood vectors have the form
$\displaystyle\bar{\zeta}^{(m)}_{k}(t)=\left(a_{k^{m}}^{(m)}\mathbb{E}\left(z_{\iota}(t)\right)+\psi^{(m)}(t)\right)\mathbb{E}^{m-1}\left(z_{\iota}(t)\right).$
The proof of Proposition 2 is given in Section 6.
For activity-driven networks, the simplicial SIS model takes the form
$\displaystyle\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}\bar{z}_{k,I}(t)=&-\gamma\bar{z}_{k,I}(t)+\left(1-\bar{z}_{k,I}(t)\right)\cdot\\\
&\sum_{m=1}^{M}\beta_{m}\mathbb{E}^{m-1}\left(z_{\iota,I}(t)\right)\left(a_{k^{m}}^{(m)}\mathbb{E}\left(z_{\iota,I}(t)\right)+\psi_{I}^{(m)}(t)\right).\end{split}$
(44)
[33] proves that (44) describes the large graph limit correctly when $M=1$.
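A minimal sketch of (44) for $M=1$ is given below; the activity classes and rates are illustrative assumptions.

```python
# Sketch of the activity-driven SIS limit (44) for M = 1, cf. Proposition 2.
import numpy as np
from scipy.integrate import solve_ivp

gamma, beta1 = 1.0, 0.9
a = np.array([0.2, 1.0, 5.0])                 # activity classes a_k
p = np.array([0.7, 0.2, 0.1])                 # fraction of vertices per class

def rhs(t, z):                                # z[k] = zbar_{k,I}(t)
    Ez = p @ z                                # E(z_{iota,I}(t))
    psi = (a * p) @ z                         # activity-biased average psi_I(t)
    return -gamma * z + (1 - z) * beta1 * (a * Ez + psi)

sol = solve_ivp(rhs, (0.0, 40.0), np.full(a.size, 0.01))
print(sol.y[:, -1])                           # higher-activity classes end up more infected
```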
### 4.5 Dense graphs and Szemerédi’s regularity lemma
We call a hypergraph _dense_ if there is some $0<p_{0}\leq 1$ such that
$\displaystyle\bar{d}^{(m)}\geq p_{0}N^{m}\quad\forall\ 1\leq m\leq M.$ (45)
For Convention 1 weights,
$\displaystyle\frac{1}{M!\,N}\leq w_{\max}\leq\frac{1}{p_{0}N},\qquad\delta_{\max}\leq\frac{1}{p_{0}}$
hold, and (41) directly follows, satisfying the conditions of Theorem 2.
We focus on the graph case $M=1$. We assume that the rate functions
$q_{ss^{\prime}}$ are _affine_ , that is, they have the form
$\displaystyle q_{ss^{\prime}}\left(\phi\right)=q_{ss^{\prime}}^{(0)}+\sum_{r\in\mathcal{S}}q_{ss^{\prime},r}^{(1)}\phi_{r},$
(46)
where
$q_{ss^{\prime}}^{(0)},\left(q_{ss^{\prime},r}^{(1)}\right)_{r\in\mathcal{S}}$
are nonnegative constants. Many epidemiological models have this form,
including the SIS process.
As pointed out in the preliminary work [9], Szemerédi’s regularity lemma [32]
provides a method to approximate (12) with a finite system up to arbitrary
precision (for large enough $N$).
Roughly speaking, Szemerédi’s regularity lemma states that any large enough
dense graph can be partitioned into finitely many “boxes” (called an
$\varepsilon$-regular partition) which have the same size (except one
remainder box), and besides a few exceptional pairs the edge count between two
boxes behaves as if coming from a randomly mixed graph, with error at most
$\varepsilon$.
We denote an $\varepsilon$-regular partition by $V_{0},V_{1},\dots,V_{K}$,
where $V_{0}$ is the exceptional set.
$\displaystyle e(A,B):=\sum_{i\in A}\sum_{j\in B}a_{ij}$
refers to the number of edges between the vertex sets $A,B$, with the
convention that edges in $A\cap B$ are counted twice.
We define the graph $\overline{\mathcal{G}}$ on vertices
$\left(V_{1},\dots,V_{K}\right)$. ($V_{0}$ is neglected.)
The adjacency matrix is replaced by the edge density between $A,B\subseteq[N]$
defined as
$\displaystyle\rho(A,B):=\frac{e\left(A,B\right)}{\left|A\right|\cdot\left|B\right|}.$
(47)
It is easy to see that $0\leq\rho\left(A,B\right)\leq 1.$
The adjacency matrix counterpart for $\overline{\mathcal{G}}$ is simply the
edge density between the $V_{1},\dots,V_{K}$ sets. We further define
$\displaystyle p:=$ $\displaystyle\frac{\bar{d}}{N},$ (48)
$\displaystyle\kappa:=$
$\displaystyle\frac{\left|V_{1}\right|}{N}=\dots=\frac{\left|V_{K}\right|}{N}$
(49)
where $p$ is the global edge density of $\mathcal{G}$ and $\kappa$ is the
portion of vertices one box contains. The average degree in
$\overline{\mathcal{G}}$ is $Kp\approx\frac{p}{\kappa},$ motivating the
definition of the weights
$\displaystyle\bar{w}_{kl}:=\frac{\kappa}{p}\rho\left(V_{k},V_{l}\right).$
(50)
The corresponding solution of (12) on the graph $\overline{\mathcal{G}}$ with
weights (50) is denoted by $\left(v_{k}(t)\right)_{k=1}^{K}$ with initial
condition
$\displaystyle v_{k}(0)=\frac{1}{|V_{k}|}\sum_{i\in V_{k}}z_{i}(0).$ (51)
Finally, we define
$\displaystyle\bar{v}(t):=\sum_{k=1}^{K}\frac{\left|V_{k}\right|}{N}v_{k}(t)$
(52)
and the average global density vector
$\displaystyle\bar{z}(t):=\frac{1}{N}\sum_{i=1}^{N}z_{i}(t).$ (53)
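The following sketch computes the reduced weights (50) from a given partition of a dense random graph; the partition here is an arbitrary equal split, not an actual $\varepsilon$-regular one (which Theorem 6 requires), so it only illustrates the bookkeeping in (47)–(50).

```python
# Illustrative computation of the reduced weights (50) from a partition.
import numpy as np
rng = np.random.default_rng(3)

N, K = 300, 3
A = np.triu((rng.uniform(size=(N, N)) < 0.4).astype(float), 1)
A = A + A.T                                   # dense simple graph, density ~0.4
parts = np.array_split(np.arange(N), K)       # V_1, ..., V_K (V_0 empty here)

p = A.sum() / N ** 2                          # global edge density, p = dbar / N
kappa = len(parts[0]) / N                     # fraction of vertices per box, (49)
rho = np.array([[A[np.ix_(Vk, Vl)].sum() / (len(Vk) * len(Vl))
                 for Vl in parts] for Vk in parts])   # edge densities (47)
w_bar = kappa / p * rho                       # reduced weights (50)
print(w_bar)                                  # entries ~ 1/K for this mixed graph
```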
###### Theorem 6.
For all $T>0$, $\varepsilon>0$, $p_{0}>0$ there exists $K_{\max}\in\mathbb{Z}^{+}$
such that for any simple graph $\mathcal{G}$ with density parameter $p_{0}$ and
$N\geq K_{\max}$, there exists a partition $V_{0},V_{1},\dots,V_{K}$
with $K\leq K_{\max}$ such that
* •
$\left|V_{1}\right|=\dots=\left|V_{K}\right|,$
* •
$\left|V_{0}\right|\leq\varepsilon N$,
* •
$\sup_{0\leq t\leq T}\left\|\bar{z}(t)-\bar{v}(t)\right\|_{1}\leq\varepsilon.$
The proof is provided in Section 6.
Szemerédi’s regularity lemma also guarantees that such a partition can be
found in polynomial time [1].
We note that $K_{\max}$ may increase rapidly as $\varepsilon\to 0^{+}$,
limiting the applicability of the approach. That said, for networks with extra
community structure, this approach may still be useful.
## 5 Discussion
In this paper we examined the accuracy of the so-called N-Intertwined Mean
Field Approximation (NIMFA) on hypergraphs. The idea of NIMFA is to assume
that vertices are independent from each other, and then derive the dynamics of
the occupation probabilities of each vertex. This leaves us with an ODE system
of size $O(N)$ instead of the exponentially large system given by the exact
Kolmogorov equations.
Our findings show that when the incoming weights are well distributed – for
example, vertices typically have large degrees – then NIMFA gives an accurate
approximation. Under additional assumptions we showed how the number of ODEs
can be further reduced to recover well-known approximation methods from the
literature, such as the heterogeneous mean field approximation. Finally, we
showed how Szemerédi’s regularity lemma can be used to reduce the number of
equations to constant order (depending only on the desired error) for large
enough dense graphs.
These results have their limitations. The error bounds work poorly for truly
sparse graphs (with bounded average degrees). Analyzing such systems probably
requires qualitatively different approaches.
The upper regularity condition can be restrictive for certain applications. We
conjecture that the results could be greatly generalized in this direction for
degree distributions with fast decaying tails.
For the reduction on dense graphs we applied the strong version of Szemerédi’s
lemma. The weak version, however, has more desirable algorithmic properties
and a smaller bound on the number of “boxes” one needs for a given
$\varepsilon$. Extending the theorem in this direction might be beneficial for
large, inhomogeneous, dense systems.
Finally, NIMFA has the disadvantage of requiring full knowledge of the network
which is usually not possible in practice. Using metapopulation networks
instead mitigates this problem, and also greatly reduces the number of
equations required. This method, however, relies on the assumption that the
metapopulation dynamics is close enough to the original one. Further research
is needed to understand how well coarse graining performs in terms of
preserving the network dynamics.
## 6 Proofs
### 6.1 General proofs
We state and prove a technical lemma first which will be used throughout other
proofs.
###### Lemma 1.
Let $a_{1},\dots,a_{n}$ and $b_{1},\dots,b_{n}$ be two sets of numbers such that
$0\leq\left|a_{i}\right|,\left|b_{i}\right|\leq 1$. Then
$\displaystyle\left|\prod_{i=1}^{n}a_{i}-\prod_{i=1}^{n}b_{i}\right|\leq\sum_{i=1}^{n}\left|a_{i}-b_{i}\right|.$
###### Proof.
(Lemma 1)
The proof is by induction on $n$. The statement is trivial for $n=1$. For
$n>1$,
$\displaystyle\left|\prod_{i=1}^{n}a_{i}-\prod_{i=1}^{n}b_{i}\right|=$
$\displaystyle\left|a_{n}\prod_{i=1}^{n-1}a_{i}-b_{n}\prod_{i=1}^{n-1}b_{i}\right|=$
$\displaystyle\left|\left(a_{n}-b_{n}\right)\prod_{i=1}^{n-1}a_{i}+b_{n}\left(\prod_{i=1}^{n-1}a_{i}-\prod_{i=1}^{n-1}b_{i}\right)\right|\leq$
$\displaystyle\left|a_{n}-b_{n}\right|\prod_{i=1}^{n-1}\left|a_{i}\right|+\left|b_{n}\right|\cdot\left|\prod_{i=1}^{n-1}a_{i}-\prod_{i=1}^{n-1}b_{i}\right|\leq$
$\displaystyle\left|a_{n}-b_{n}\right|+\left|\prod_{i=1}^{n-1}a_{i}-\prod_{i=1}^{n-1}b_{i}\right|\leq\left|a_{n}-b_{n}\right|+\sum_{i=1}^{n-1}\left|a_{i}-b_{i}\right|=$
$\displaystyle\sum_{i=1}^{n}\left|a_{i}-b_{i}\right|.$
∎
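A quick randomized check of Lemma 1 (purely as a sanity test; the sample sizes below are arbitrary):

```python
# Randomized check of Lemma 1: |prod a_i - prod b_i| <= sum |a_i - b_i|.
import numpy as np
rng = np.random.default_rng(1)
for _ in range(10_000):
    n = rng.integers(1, 8)
    a = rng.uniform(-1, 1, n)
    b = rng.uniform(-1, 1, n)
    assert abs(a.prod() - b.prod()) <= np.abs(a - b).sum() + 1e-12
print("Lemma 1 inequality holds on all samples")
```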
Next we show that (12) exhibits a unique global solution.
###### Proof.
(Theorem 1)
The right hand side of (12) is locally Lipschitz, so there is a unique local
solution.
Instead of $q_{ss^{\prime}}$, we use the modified rate functions
$\displaystyle\hat{q}_{ss^{\prime}}(\phi):=$
$\displaystyle\left|q_{ss^{\prime}}(\phi)\right|$ (54)
$\displaystyle\hat{q}_{ss}(\phi)=$ $\displaystyle-\sum_{s^{\prime}\neq
s}\hat{q}_{s^{\prime}s}(\phi)$
which are nonnegative for any input; note that
$\left.\hat{q}_{ss^{\prime}}(\phi)\right|_{\phi\geq
0}=\left.q_{ss^{\prime}}(\phi)\right|_{\phi\geq 0}.$
The modified version of (12) is
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\hat{z}_{i}(t)=\hat{Q}\left(\hat{\zeta}_{i}(t)\right)\hat{z}_{i}(t)$
where
$\hat{Q}(\phi)=\left(\hat{q}_{ss^{\prime}}(\phi)\right)_{s,s^{\prime}\in\mathcal{S}}.$
The local solution uniquely exists in this case as well, and it either extends
to a global solution or blows up in finite time.
Assume that the local solution blows up at time $t_{0}$. Then
$\hat{\zeta}_{i}(t)$ is well-defined for any $t<t_{0}.$
We construct an auxiliary time-inhomogeneous Markov process on $[0,t_{0})$.
The state space is $\mathcal{S}$ and the transition rates at time $t$ are
given by the matrix $\hat{Q}\left(\hat{\zeta}_{i}(t)\right)$. $p_{s}(t)$
denotes the probability of being in state $s\in\mathcal{S}.$ The Kolmogorov
equations have the form
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}p(t)=\hat{Q}\left(\hat{\zeta}_{i}(t)\right)p(t).$
Since $\hat{Q}\left(\hat{\zeta}_{i}(t)\right)$ is continuous for $t<t_{0}$,
$\max_{0\leq\tau\leq
t}\left\|\hat{Q}\left(\hat{\zeta}_{i}(\tau)\right)\right\|$
exists and is finite.
Based on Grönwall’s inequality,
$\displaystyle\hat{z}_{i}(t)-p(t)=\hat{z}_{i}(0)-p(0)+\int_{0}^{t}\hat{Q}\left(\hat{\zeta}_{i}(\tau)\right)\left[\hat{z}_{i}(\tau)-p(\tau)\right]\mathrm{d}\tau,$
$\displaystyle\left\|\hat{z}_{i}(t)-p(t)\right\|\leq\left\|\hat{z}_{i}(0)-p(0)\right\|+\sup_{0\leq\tau\leq t}\left\|\hat{Q}\left(\hat{\zeta}_{i}(\tau)\right)\right\|\int_{0}^{t}\left\|\hat{z}_{i}(\tau)-p(\tau)\right\|\mathrm{d}\tau,$
$\displaystyle\sup_{0\leq\tau\leq t}\left\|\hat{z}_{i}(\tau)-p(\tau)\right\|\leq\left\|\hat{z}_{i}(0)-p(0)\right\|\exp\left(\sup_{0\leq\tau\leq t}\left\|\hat{Q}\left(\hat{\zeta}_{i}(\tau)\right)\right\|\cdot t\right).$
Choosing $p(0)=\hat{z}_{i}(0)$ shows that $\hat{z}_{i}(t)=p(t)$ for any $0\leq
t<t_{0}$ as well.
But $p(t)$ is a probability vector, that is,
$\hat{z}_{i}(t)\in\Delta^{\mathcal{S}}$, which contradicts $\hat{z}_{i}(t)$
blowing up as $t\to t_{0}$, so the solution must be global.
Since the solution is on the simplex $\Delta^{S}$, we have
$\hat{q}_{ss^{\prime}}\left(\hat{\zeta}_{i}(t)\right)=q_{ss^{\prime}}\left(\hat{\zeta}_{i}(t)\right)$
(that is, the absolute values in (54) are not necessary). Therefore
$\hat{z}_{i}(t)$ is a solution for the original equation (12) as well. Since
the solution for (12) is unique, $\hat{z}_{i}(t)=z_{i}(t).$ This makes
$z_{i}(t)$ a global solution with values on the simplex $\Delta^{S}.$ ∎
### 6.2 Proof of Theorem 2
The strategy of the proof is to derive an inequality for $D_{\max}(t)$ and
$\tilde{D}_{i}(t)$ such that Grönwall’s inequality could be applied.
In the first step, we are showing an inequality for the error of the
indicators.
###### Lemma 2.
There exists $\tilde{C}_{1}=\tilde{C}_{1}(\delta_{\max})$ such that
$\displaystyle D_{\max}^{(0)}(t)\leq$
$\displaystyle\tilde{C}_{1}\int_{0}^{t}D_{\max}(\tau)\mathrm{d}\tau,$
$\displaystyle\tilde{D}_{i}^{(0)}(t)\leq$
$\displaystyle\tilde{C}_{1}\int_{0}^{t}\tilde{D}_{i}(\tau)\mathrm{d}\tau.$
###### Proof.
(Lemma 2) $\oplus$ denotes symmetric difference.
$\displaystyle\left|\xi_{i,s}(\tau)-\hat{\xi}_{i,s}(\tau)\right|\leq$
$\displaystyle\sum_{\begin{subarray}{c}s^{\prime}\in\mathcal{S}\\ s^{\prime}\neq s\end{subarray}}\left|\mathcal{N}_{i,ss^{\prime}}\left(\mathcal{H}_{i,ss^{\prime}}(\tau)\right)-\mathcal{N}_{i,ss^{\prime}}\left(\mathcal{K}_{i,ss^{\prime}}(\tau)\right)\right|+\left|\mathcal{N}_{i,s^{\prime}s}\left(\mathcal{H}_{i,s^{\prime}s}(\tau)\right)-\mathcal{N}_{i,s^{\prime}s}\left(\mathcal{K}_{i,s^{\prime}s}(\tau)\right)\right|\leq$
$\displaystyle\sum_{\begin{subarray}{c}s^{\prime}\in\mathcal{S}\\ s^{\prime}\neq s\end{subarray}}\mathcal{N}_{i,ss^{\prime}}\left(\mathcal{H}_{i,ss^{\prime}}(\tau)\oplus\mathcal{K}_{i,ss^{\prime}}(\tau)\right)+\mathcal{N}_{i,s^{\prime}s}\left(\mathcal{H}_{i,s^{\prime}s}(\tau)\oplus\mathcal{K}_{i,s^{\prime}s}(\tau)\right)\leq$
$\displaystyle\sum_{\begin{subarray}{c}s^{\prime}\in\mathcal{S}\\ s^{\prime}\neq s\end{subarray}}\mathcal{N}_{i,ss^{\prime}}\left(\mathcal{H}_{i,ss^{\prime}}(t)\oplus\mathcal{K}_{i,ss^{\prime}}(t)\right)+\mathcal{N}_{i,s^{\prime}s}\left(\mathcal{H}_{i,s^{\prime}s}(t)\oplus\mathcal{K}_{i,s^{\prime}s}(t)\right)$
In the last step we used the fact that
$\mathcal{H}_{i,ss^{\prime}}(\tau)\oplus\mathcal{K}_{i,ss^{\prime}}(\tau)$ is
an increasing set in $\tau$.
Since the right hand side does not depend on $\tau$, it makes no difference
whether we take $\sup_{0\leq\tau\leq t}$ inside or outside of the expectation.
$\displaystyle D_{i}^{(0)}(t)\leq\tilde{D}_{i}^{(0)}(t)\leq$
$\displaystyle\sum_{s\in\mathcal{S}}\sum_{\begin{subarray}{c}s^{\prime}\in\mathcal{S}\\\
s^{\prime}\neq
s\end{subarray}}\mathbb{E}\left[\mathcal{N}_{i,ss^{\prime}}\left(\mathcal{H}_{i,ss^{\prime}}(t)\oplus\mathcal{K}_{i,ss^{\prime}}(t)\right)+\mathcal{N}_{i,s^{\prime}s}\left(\mathcal{H}_{i,s^{\prime}s}(t)\oplus\mathcal{K}_{i,s^{\prime}s}(t)\right)\right]$
The summations with respect to $s$ and $s^{\prime}$ only contribute a constant
factor $\left|\mathcal{S}\right|^{2}$, which will be absorbed into the constant. Also, the same
bound applies for
$\mathbb{E}\left[\mathcal{N}_{i,ss^{\prime}}\left(\mathcal{H}_{i,ss^{\prime}}(t)\oplus\mathcal{K}_{i,ss^{\prime}}(t)\right)\right]$
and
$\mathbb{E}\left[\mathcal{N}_{i,s^{\prime}s}\left(\mathcal{H}_{i,s^{\prime}s}(t)\oplus\mathcal{K}_{i,s^{\prime}s}(t)\right)\right]$,
so it is enough to keep track of only the first one, with a factor of $2$.
The rate functions are Lipschitz-continuous on a compact domain due to
assumption (4), so they are bounded; their maximum is denoted by $q_{\max}$.
$\displaystyle\mathbb{E}\left[\mathcal{N}_{i,ss^{\prime}}\left(\mathcal{H}_{i,ss^{\prime}}(t)\oplus\mathcal{K}_{i,ss^{\prime}}(t)\right)\right]=$
$\displaystyle\mathbb{E}\left[\int_{0}^{t}\left|q_{ss^{\prime}}\left(\phi_{i}(\tau)\right)\xi_{i,s^{\prime}}(\tau)-q_{ss^{\prime}}\left(\tilde{\phi}_{i}(\tau)\right)\hat{\xi}_{i,s^{\prime}}(\tau)\right|\mathrm{d}\tau\right]\leq$
$\displaystyle\mathbb{E}\left[\int_{0}^{t}q_{\max}\left|\xi_{i,s^{\prime}}(\tau)-\hat{\xi}_{i,s^{\prime}}(\tau)\right|+L_{q}\sum_{m=1}^{M}\sum_{\underline{r}\in\mathcal{S}^{m}}\left|\phi_{i,\underline{r}}^{(m)}(\tau)-\tilde{\phi}_{i,\underline{r}}^{(m)}(\tau)\right|\mathrm{d}\tau\right]\leq$
$\displaystyle\left(q_{\max}+L_{q}\right)\int_{0}^{t}\sum_{m=0}^{M}D_{i}^{(m)}(\tau)\mathrm{d}\tau\leq\left(q_{\max}+L_{q}\right)\int_{0}^{t}\sum_{m=0}^{M}\tilde{D}_{i}^{(m)}(\tau)\mathrm{d}\tau$
Setting
$\tilde{C}_{1}:=2\left(q_{\max}+L_{q}\right)\left|\mathcal{S}\right|^{2}$
yields
$\displaystyle
D_{i}^{(0)}(t)\leq\tilde{D}_{i}^{(0)}(t)\leq\tilde{C}_{1}\int_{0}^{t}\sum_{m=0}^{M}D_{i}^{(m)}(\tau)\mathrm{d}\tau\leq\tilde{C}_{1}\int_{0}^{t}\underbrace{\sum_{m=0}^{M}\tilde{D}_{i}^{(m)}(\tau)}_{=\tilde{D}_{i}(\tau)}\mathrm{d}\tau.$
∎
The second half of the proof of Theorem 2 involves estimating the difference
between the neighbors $\phi_{i}(t)$ and $\zeta_{i}(t)$ via the differences of
the indicators.
$\zeta_{i}(t)$ does not contain the indicators $\hat{\xi}_{i}(t)$ directly,
only their expectation $z_{i}(t)$. To bridge this gap, we introduce
“intermediate neighborhoods”
$\displaystyle\hat{\phi}_{i,\underline{s}}^{(m)}(t)=$
$\displaystyle\sum_{\underline{j}\in[N]^{m}}w_{i,\underline{j}}^{(m)}\hat{\xi}_{\underline{j},\underline{s}}^{(m)}(t).$
Note that under (16) and independent initial conditions,
$\mathbb{E}\left(\hat{\xi}_{\underline{i},\underline{s}}^{(m)}(t)\right)=\mathbb{E}\left(\prod_{l=1}^{m}\hat{\xi}_{i_{l},s_{l}}(t)\right)=\prod_{l=1}^{m}\mathbb{E}\left(\hat{\xi}_{i_{l},s_{l}}(t)\right)=\prod_{l=1}^{m}z_{i_{l},s_{l}}(t)=z_{\underline{i},\underline{s}}^{(m)}(t)$
for indices $\underline{i}$ that are not secondary loops. Assumption (5) was made to
ensure secondary loops have low total weight.
$\displaystyle\begin{split}&\left|\mathbb{E}\left(\hat{\phi}_{i,\underline{s}}^{(m)}(t)\right)-\zeta_{i,\underline{s}}^{(m)}(t)\right|=\left|\sum_{\underline{j}\in[N]^{m}}w_{i,\underline{j}}^{(m)}\left[\mathbb{E}\left(\hat{\xi}_{\underline{j},\underline{s}}^{(m)}(t)\right)-z_{\underline{j},\underline{s}}^{(m)}(t)\right]\right|=\\\
&\left|\sum_{\begin{subarray}{c}\underline{j}\in[N]^{m}\\\
\underline{j}\textrm{ s.
loop}\end{subarray}}w_{i,\underline{j}}^{(m)}\left[\mathbb{E}\left(\hat{\xi}_{\underline{j},\underline{s}}^{(m)}(t)\right)-z_{\underline{j},\underline{s}}^{(m)}(t)\right]\right|\leq\sum_{\begin{subarray}{c}\underline{j}\in[N]^{m}\\\
\underline{j}\textrm{ s. loop}\end{subarray}}w_{i,\underline{j}}^{(m)}\leq
R\sqrt{w_{\max}}.\end{split}$ (55)
The next lemma shows that $\hat{\phi}_{i}(t)$ and $\zeta_{i}(t)$ are close.
###### Lemma 3.
Assume (16) holds with independent initial conditions. Then there is a
$\tilde{C}_{2}=\tilde{C}_{2}\left(\delta_{\max},R\right)$ such that for any
$1\leq m\leq M,\ i\in[N]$
$\displaystyle\sup_{0\leq
t}\mathbb{E}\left[\sum_{\underline{s}\in\mathcal{S}^{m}}\left|\hat{\phi}_{i,\underline{s}}^{(m)}(t)-\zeta_{i,\underline{s}}^{(m)}(t)\right|\right]\leq\tilde{C}_{2}\sqrt{w_{\max}}.$
(56)
If we further assume $M=1$, there exists a $\tilde{C}_{3}$ such that for all
$t\geq 0$,
$\displaystyle\mathbb{E}\left[\sup_{0\leq\tau\leq t}\sum_{s\in\mathcal{S}}\left|\hat{\phi}_{i,s}(\tau)-\zeta_{i,s}(\tau)\right|\right]\leq\tilde{C}_{3}\underbrace{\sqrt{\sum_{j=1}^{N}w_{ij}^{2}}}_{=\mu_{i}}.$
(57)
###### Proof.
(Lemma 3)
We start by applying (55).
$\displaystyle\sup_{0\leq
t}\mathbb{E}\left[\sum_{\underline{s}\in\mathcal{S}^{m}}\left|\hat{\phi}_{i,\underline{s}}^{(m)}(t)-\zeta_{i,\underline{s}}^{(m)}(t)\right|\right]\leq$
$\displaystyle R\left|\mathcal{S}\right|^{M}\sqrt{w_{\max}}+\sup_{0\leq
t}\mathbb{E}\left[\sum_{\underline{s}\in\mathcal{S}^{m}}\left|\hat{\phi}_{i,\underline{s}}^{(m)}(t)-\mathbb{E}\left(\hat{\phi}_{i,\underline{s}}^{(m)}(t)\right)\right|\right].$
The first term is of the desired form; we examine the second term.
$\displaystyle\mathbb{E}\left[\sum_{\underline{s}\in\mathcal{S}^{m}}\left|\hat{\phi}_{i,\underline{s}}^{(m)}(t)-\mathbb{E}\left(\hat{\phi}_{i,\underline{s}}^{(m)}(t)\right)\right|\right]=\sum_{\underline{s}\in\mathcal{S}^{m}}\mathbb{E}\left(\left|\hat{\phi}_{i,\underline{s}}^{(m)}(t)-\mathbb{E}\left(\hat{\phi}_{i,\underline{s}}^{(m)}(t)\right)\right|\right)\leq$
$\displaystyle\sum_{\underline{s}\in\mathcal{S}^{m}}\sqrt{\mathbb{D}^{2}\left(\hat{\phi}_{i,\underline{s}}^{(m)}(t)\right)}=\sum_{\underline{s}\in\mathcal{S}^{m}}\sqrt{\sum_{\underline{j}\in[N]^{m}}\left(w_{i,\underline{j}}^{(m)}\right)^{2}\mathbb{D}^{2}\left(\hat{\xi}_{\underline{j},\underline{s}}^{(m)}(t)\right)}\leq$
$\displaystyle\left|\mathcal{S}\right|^{M}\sqrt{\sum_{\underline{j}\in[N]^{m}}\left(w_{i,\underline{j}}^{(m)}\right)^{2}}\leq\left|\mathcal{S}\right|^{M}\sqrt{\delta_{\max}w_{\max}}.$
The bound is uniform in $t$, so it can be upgraded to $\sup_{0\leq t}$ for
free, and (56) holds with
$\tilde{C}_{2}=\left(R+\sqrt{\delta_{\max}}\right)\left|\mathcal{S}\right|^{M}.$
Next we turn to (57).
$\hat{\xi}_{i,s}(t)-z_{i,s}(t)$ is a martingale, so
$\hat{\phi}_{i,s}(t)-\zeta_{i,s}(t)=\sum_{j=1}^{N}w_{ij}\left[\hat{\xi}_{j,s}(t)-z_{j,s}(t)\right]$
is also a martingale, and Doob’s martingale inequality yields
$\displaystyle\mathbb{E}\left[\sup_{0\leq\tau\leq
t}\sum_{s\in\mathcal{S}}\left|\hat{\phi}_{i,s}(\tau)-\zeta_{i,s}(\tau)\right|\right]\leq\sum_{s\in\mathcal{S}}\mathbb{E}\left[\sup_{0\leq\tau\leq
t}\left|\hat{\phi}_{i,s}(\tau)-\zeta_{i,s}(\tau)\right|\right]\leq$
$\displaystyle\sum_{s\in\mathcal{S}}\sqrt{\mathbb{E}\left[\sup_{0\leq\tau\leq
t}\left|\hat{\phi}_{i,s}(\tau)-\zeta_{i,s}(\tau)\right|^{2}\right]}\leq
2\sum_{s\in\mathcal{S}}\sqrt{\mathbb{E}\left(\left|\hat{\phi}_{i,s}(t)-\zeta_{i,s}(t)\right|^{2}\right)}=$
$\displaystyle
2\sum_{s\in\mathcal{S}}\sqrt{\mathbb{D}^{2}\left(\hat{\phi}_{i,s}(t)\right)}=2\sum_{s\in\mathcal{S}}\sqrt{\sum_{j=1}^{N}w_{ij}^{2}\mathbb{D}^{2}\left(\hat{\xi}_{j,s}(t)\right)}\leq\underbrace{2\left|\mathcal{S}\right|}_{=:\tilde{C}_{3}}\sqrt{\sum_{j=1}^{N}w_{ij}^{2}}.$
∎
Next we show an upper bound for the differences of neighborhood vectors, which
are captured by the values $D^{(m)}_{\max}(t)$.
###### Lemma 4.
Assume (16) and independent initial conditions. Then there exists a constant
$\tilde{C}_{4}=\tilde{C}_{4}\left(\delta_{\max}\right)$ such that for any
$t\geq 0$ and $1\leq m\leq M$,
$\displaystyle D^{(m)}_{\max}(t)\leq\tilde{C}_{2}\sqrt{w_{\max}}+\tilde{C}_{4}D_{\max}^{(0)}(t),$
where $\tilde{C}_{2}$ comes from Lemma 3.
If we further assume $M=1$ then
$\displaystyle\tilde{D}^{(1)}(t)\leq\tilde{C}_{3}\mu+W\tilde{D}^{(0)}(t),$
where $\tilde{C}_{3}$ comes from Lemma 3.
###### Proof.
(Lemma 4)
Using Lemma 3, we have
$\displaystyle D_{i}^{(m)}(t)=\sup_{0\leq\tau\leq
t}\mathbb{E}\left[\sum_{\underline{s}\in\mathcal{S}^{m}}\left|\phi_{i,\underline{s}}^{(m)}(\tau)-\zeta_{i,\underline{s}}^{(m)}(\tau)\right|\right]\leq$
$\displaystyle\tilde{C}_{2}\sqrt{w_{\max}}+\sup_{0\leq\tau\leq
t}\mathbb{E}\left[\sum_{\underline{s}\in\mathcal{S}^{m}}\left|\phi_{i,\underline{s}}^{(m)}(\tau)-\hat{\phi}_{i,\underline{s}}^{(m)}(\tau)\right|\right]\leq$
$\displaystyle\tilde{C}_{2}\sqrt{w_{\max}}+\sum_{\underline{j}\in[N]^{m}}w_{i,\underline{j}}^{(m)}\left(\sup_{0\leq\tau\leq
t}\mathbb{E}\left[\sum_{\underline{s}\in\mathcal{S}^{m}}\left|\xi_{\underline{j},\underline{s}}^{(m)}(\tau)-\hat{\xi}_{\underline{j},\underline{s}}^{(m)}(\tau)\right|\right]\right).$
Lemma 1 provides
$\displaystyle\left|\xi_{\underline{j},\underline{s}}^{(m)}(\tau)-\hat{\xi}_{\underline{j},\underline{s}}^{(m)}(\tau)\right|\leq\sum_{l=1}^{m}\left|\xi_{j_{l},s_{l}}(\tau)-\hat{\xi}_{j_{l},s_{l}}(\tau)\right|$
$\displaystyle\sup_{0\leq\tau\leq
t}\mathbb{E}\left[\sum_{\underline{s}\in\mathcal{S}^{m}}\left|\xi_{\underline{j},\underline{s}}^{(m)}(\tau)-\hat{\xi}_{\underline{j},\underline{s}}^{(m)}(\tau)\right|\right]\leq\sup_{0\leq\tau\leq
t}\mathbb{E}\left[\sum_{\underline{s}\in\mathcal{S}^{m}}\sum_{l=1}^{m}\left|\xi_{j_{l},s_{l}}(\tau)-\hat{\xi}_{j_{l},s_{l}}(\tau)\right|\right]\leq$
$\displaystyle\left|\mathcal{S}\right|^{M}\sum_{l=1}^{m}\sup_{0\leq\tau\leq t}\mathbb{E}\left[\sum_{r\in\mathcal{S}}\left|\xi_{j_{l},r}(\tau)-\hat{\xi}_{j_{l},r}(\tau)\right|\right]\leq\left|\mathcal{S}\right|^{M}\sum_{l=1}^{m}D_{j_{l}}^{(0)}(t)\leq M\left|\mathcal{S}\right|^{M}D_{\max}^{(0)}(t).$
Putting the inequalities together yields
$\displaystyle D_{i}^{(m)}(t)\leq$
$\displaystyle\tilde{C}_{2}\sqrt{w_{\max}}+M\left|\mathcal{S}\right|^{M}D_{\max}^{(0)}(t)\underbrace{\sum_{\underline{j}\in[N]^{m}}w_{i,\underline{j}}^{(m)}}_{=\delta^{(m)}(i)}$
$\displaystyle D_{\max}^{(m)}(t)\leq$
$\displaystyle\tilde{C}_{2}\sqrt{w_{\max}}+\underbrace{M\left|\mathcal{S}\right|^{M}\delta_{\max}}_{=:\tilde{C}_{4}}D_{\max}^{(0)}(t).$
For the second part of Lemma 4, we once again use Lemma 3.
$\displaystyle\tilde{D}_{i}^{(1)}(t)=\mathbb{E}\left[\sup_{0\leq\tau\leq
t}\sum_{s\in\mathcal{S}}\left|\phi_{i,s}(\tau)-\zeta_{i,s}(\tau)\right|\right]\leq$
$\displaystyle\tilde{C}_{3}\mu_{i}+\mathbb{E}\left[\sup_{0\leq\tau\leq
t}\sum_{s\in\mathcal{S}}\left|\phi_{i,s}(\tau)-\hat{\phi}_{i,s}(\tau)\right|\right]\leq$
$\displaystyle\tilde{C}_{3}\mu_{i}+\sum_{j=1}^{N}w_{ij}\left(\mathbb{E}\left[\sup_{0\leq\tau\leq
t}\sum_{s\in\mathcal{S}}\left|\xi_{j,s}(\tau)-\hat{\xi}_{j,s}(\tau)\right|\right]\right)=\tilde{C}_{3}\mu_{i}+\sum_{j=1}^{N}w_{ij}\tilde{D}_{j}^{(0)}(t),$
so
$\displaystyle\tilde{D}^{(1)}(t)\leq\tilde{C}_{3}\mu+W\tilde{D}^{(0)}(t).$
∎
With all the preparations done, we finally turn to proving Theorem 2.
###### Proof.
(Theorem 2)
Using Lemma 2 and 4 and Grönwall’s inequality yields
$\displaystyle D_{\max}(t)=D_{\max}^{(0)}(t)+\sum_{m=1}^{M}D_{\max}^{(m)}(t)\leq$
$\displaystyle
M\tilde{C}_{2}\sqrt{w_{\max}}+\left(M\tilde{C}_{4}+1\right)D_{\max}^{(0)}(t)\leq$
$\displaystyle
M\tilde{C}_{2}\sqrt{w_{\max}}+\left(M\tilde{C}_{4}+1\right)\int_{0}^{t}D_{\max}(\tau)\mathrm{d}\tau,$
so
$\displaystyle
D_{\max}(t)\leq\underbrace{M\tilde{C}_{2}e^{\left(M\tilde{C}_{4}+1\right)t}}_{=:C}\sqrt{w_{\max}}.$
Proving the second part is similar.
$\displaystyle\tilde{D}(t)=$
$\displaystyle\tilde{D}^{(0)}(t)+\sum_{m=1}^{M}\tilde{D}^{(m)}(t)\leq\underbrace{M\tilde{C}_{3}}_{=:C_{1}}\mu+M\left(W+I\right)\tilde{D}^{(0)}(t)\leq$
$\displaystyle
C_{1}\mu+\underbrace{\tilde{C_{1}}M}_{=:C_{2}}\int_{0}^{t}\left(W+I\right)\tilde{D}(\tau)\mathrm{d}\tau\Rightarrow$
$\displaystyle\left\|\tilde{D}(t)\right\|\leq$ $\displaystyle
C_{1}\left\|\mu\right\|+C_{2}\left\|W+I\right\|\int_{0}^{t}\left\|\tilde{D}(\tau)\right\|\mathrm{d}\tau,$
so
$\displaystyle\left\|\tilde{D}(t)\right\|\leq$ $\displaystyle
C_{1}e^{C_{2}\left\|W+I\right\|t}\left\|\mu\right\|.$
∎
### 6.3 Proof of Theorems 4 and 5
###### Proof.
(Theorem 4) For a fixed $t$ and $s$, we apply Doob’s inequality for the
martingale $\frac{1}{K}\sum_{i=1}^{K}(\hat{\xi}_{i,s}(t)-z_{i,s}(t))$ and use
independence to get
$\displaystyle\mathbb{E}\left(\sup_{0\leq\tau\leq
t}\left|\frac{1}{K}\sum_{i=1}^{K}\left(z_{i,s}(\tau)-\hat{\xi}_{i,s}(\tau)\right)\right|\right)\leq
2\mathbb{D}\left(\frac{1}{K}\sum_{i=1}^{K}\left(z_{i,s}(t)-\hat{\xi}_{i,s}(t)\right)\right)=$
$\displaystyle
2\left(\frac{1}{K^{2}}\sum_{i=1}^{K}\underbrace{\mathbb{D}^{2}\left(z_{i,s}(t)-\hat{\xi}_{i,s}(t)\right)}_{\leq
1}\right)^{1/2}\leq\frac{2}{\sqrt{K}},$ (58)
and (24) follows by inserting $\sum_{s\in\mathcal{S}}$ on the left hand side
at the cost of an $|\mathcal{S}|$ factor on the right hand side. The bound is
uniform in $t$, so we can upgrade to $\sup_{0\leq t}$. ∎
###### Proof.
(Theorem 5) For (25), we consider $0\leq\tau\leq t$ and use both Theorems 2
and 4:
$\displaystyle\mathbb{E}\left(\sum_{s\in\mathcal{S}}\left|\frac{1}{N}\sum_{i=1}^{N}\left(\xi_{i,s}(\tau)-z_{i,s}(\tau)\right)\right|\right)\leq$
$\displaystyle\quad\sum_{s\in\mathcal{S}}\mathbb{E}\left(\left|\frac{1}{N}\sum_{i=1}^{N}\left(\xi_{i,s}(\tau)-\hat{\xi}_{i,s}(\tau)\right)\right|+\left|\frac{1}{N}\sum_{i=1}^{N}\left(\hat{\xi}_{i,s}(\tau)-z_{i,s}(\tau)\right)\right|\right)\leq$
$\displaystyle\quad\frac{1}{N}\sum_{i=1}^{N}\underbrace{\sum_{s\in\mathcal{S}}\mathbb{E}\left|\left(\xi_{i,s}(\tau)-\hat{\xi}_{i,s}(\tau)\right)\right|}_{\leq
D_{\max}(t)}+\sum_{s\in\mathcal{S}}\underbrace{\mathbb{E}\left|\frac{1}{N}\sum_{i=1}^{N}\left(\hat{\xi}_{i,s}(\tau)-z_{i,s}(\tau)\right)\right|}_{\leq
2/\sqrt{N}}\leq$ $\displaystyle\quad
D_{\max}(t)+\frac{2|\mathcal{S}|}{\sqrt{N}}\leq
C\left(\sqrt{w_{\max}}+\frac{1}{\sqrt{N}}\right).$
The derivation of (26) is analogous to (25) with the exception of keeping the
$\sup_{0\leq\tau\leq t}$ inside the expectation and using (18) instead of
(17).
For (27), we just note that
$\displaystyle\mathbb{E}\left[\sup_{0\leq\tau\leq
t}\left(\sum_{s\in\mathcal{S}}\left|\frac{1}{N}\sum_{i=1}^{N}\left(\xi_{i,s}(\tau)-\hat{\xi}_{i,s}(\tau)\right)\right|\right)\right]$
$\displaystyle\leq\frac{1}{N}\|\tilde{D}(t)\|_{1}$
$\displaystyle\leq\frac{1}{\sqrt{N}}\|\tilde{D}(t)\|_{2}=O\left(\sqrt{\frac{1}{N}\|\mu\|_{2}^{2}}\right),$
and the rest of the argument is essentially identical to the previous one.
∎
### 6.4 Proof of Proposition 2
Let $p_{k^{(m)}}^{(m)}:=\frac{N_{k^{(m)}}}{N}$ denote the ratio of vertices in
the local group $k^{(m)}.$
$\displaystyle\bar{\zeta}_{k}^{(m)}(t)$
$\displaystyle=\sum_{\underline{l}^{(m)}}\bar{w}_{k^{(m)},\underline{l}^{(m)}}^{(m)}\prod_{r=1}^{m}\mathbb{E}\left(\left.z_{\iota}(t)\right|k^{(m)}(\iota)=l_{r}^{(m)}\right)$
$\displaystyle=\sum_{\underline{l}^{(m)}}\left(\prod_{r=1}^{m}p_{l_{r}}^{(m)}\right)\left(a_{k^{m}}^{(m)}+\sum_{r=1}^{m}a_{l_{r}^{(m)}}^{(m)}\right)\prod_{r=1}^{m}\mathbb{E}\left(\left.z_{\iota}(t)\right|k^{(m)}(\iota)=l_{r}^{(m)}\right)$
$\displaystyle=\sum_{\underline{l}^{(m)}}\left(a_{k^{m}}^{(m)}+\sum_{r=1}^{m}a_{l_{r}^{(m)}}^{(m)}\right)\prod_{r=1}^{m}p_{l_{r}}^{(m)}\mathbb{E}\left(\left.z_{\iota}(t)\right|k^{(m)}(\iota)=l_{r}^{(m)}\right)$
(59)
Observe
$\displaystyle\sum_{l_{r}=1}^{K^{(m)}}p_{l_{r}}^{(m)}\mathbb{E}\left(\left.z_{\iota}(t)\right|a^{(m)}(\iota)=l_{r}^{(m)}\right)=$
$\displaystyle\mathbb{E}\left(\mathbb{E}\left(\left.z_{\iota}(t)\right|a^{(m)}(\iota)=l_{r}^{(m)}\right)\right)=\mathbb{E}\left(z_{\iota}(t)\right).$
Also introduce
$\displaystyle\psi^{(m)}(t):=\sum_{l=1}^{K^{(m)}}a_{l}^{(m)}p_{l}^{(m)}\mathbb{E}\left(\left.z_{\iota}(t)\right|a^{(m)}(\iota)=l^{(m)}\right)$
which is reminiscent of an activity-biased average.
We expand (59) based on the terms
$a_{k^{m}}^{(m)}+\sum_{r=1}^{m}a_{l_{r}^{(m)}}^{(m)}$. For the $a_{k^{m}}^{(m)}$ term,
$\displaystyle
a_{k^{m}}^{(m)}\sum_{\underline{l}^{(m)}}\prod_{r=1}^{m}p_{l_{r}}^{(m)}\mathbb{E}\left(\left.z_{\iota}(t)\right|k^{(m)}(\iota)=l_{r}^{(m)}\right)=$
$\displaystyle a_{k^{m}}^{(m)}\left(\sum_{l=1}^{K^{(m)}}p_{l}^{(m)}\mathbb{E}\left(\left.z_{\iota}(t)\right|k^{(m)}(\iota)=l^{(m)}\right)\right)^{m}=$
$\displaystyle a_{k^{m}}^{(m)}\mathbb{E}^{m}\left(z_{\iota}(t)\right).$
For the $a_{l_{r}^{\prime}}^{(m)}$ terms we have
$\displaystyle\sum_{\underline{l}^{(m)}}a_{l_{r^{\prime}}^{(m)}}\prod_{r=1}^{m}p_{l_{r}}^{(m)}\mathbb{E}\left(\left.z_{\iota}(t)\right|k^{(m)}(\iota)=l_{r}^{(m)}\right)=$
$\displaystyle\underbrace{\sum_{l_{r^{\prime}}=1}^{K^{(m)}}a_{l_{r^{\prime}}^{(m)}}p_{l_{r^{\prime}}}^{(m)}\mathbb{E}\left(\left.z_{\iota}(t)\right|k^{(m)}(\iota)=l_{r^{\prime}}^{(m)}\right)}_{\psi^{(m)}(t)}\sum_{\begin{subarray}{c}l_{r}^{(m)}=1\\\
r\neq r^{\prime}\end{subarray}}^{K^{(m)}}\prod_{\begin{subarray}{c}r=1\\\
r\neq
r^{\prime}\end{subarray}}^{m}p_{l_{r}}^{(m)}\mathbb{E}\left(\left.z_{\iota}(t)\right|k^{(m)}(\iota)=l_{r}^{(m)}\right)=$
$\displaystyle\psi^{(m)}(t)\left(\sum_{l=1}^{K^{(m)}}p_{l}^{(m)}\mathbb{E}\left(\left.z_{\iota}(t)\right|k^{(m)}(\iota)=l^{(m)}\right)\right)^{m-1}=\psi^{(m)}(t)\mathbb{E}^{m-1}\left(z_{\iota}(t)\right).$
Therefore, (59) reduces to
$\displaystyle\bar{\zeta}^{(m)}_{k}(t)$
$\displaystyle=a_{k^{m}}^{(m)}\mathbb{E}^{m}\left(z_{\iota}(t)\right)+\psi^{(m)}(t)\mathbb{E}^{m-1}\left(z_{\iota}(t)\right)$
$\displaystyle=\left(a_{k^{m}}^{(m)}\mathbb{E}\left(z_{\iota}(t)\right)+\psi^{(m)}(t)\right)\mathbb{E}^{m-1}\left(z_{\iota}(t)\right).$
### 6.5 Proof of Theorem 6
Recall (47). We call the sets $X,Y\subset[N]$ $\varepsilon$-regular if for all
$A\subseteq X,\ B\subseteq Y$ such that
$\left|A\right|>\varepsilon\left|X\right|,\
\left|B\right|>\varepsilon\left|Y\right|$ one has
$\displaystyle\left|\rho\left(A,B\right)-\rho\left(X,Y\right)\right|<\varepsilon.$
We use Szemerédi’s regularity lemma.
###### Lemma.
(Szemerédi’s regularity lemma)
For every $\varepsilon>0,\ K_{\min}\in\mathbb{Z}^{+}$ there is a $K_{\max}$
such that if $N\geq K_{\max}$ there is a partition $V_{0},V_{1},\dots,V_{K}$
such that
$\displaystyle\left|V_{0}\right|<\varepsilon N,$
$\displaystyle\left|V_{1}\right|=\dots=\left|V_{K}\right|,$ $\displaystyle
K_{\min}\leq K\leq K_{\max}$
and there are at most $\varepsilon{K\choose 2}$ pairs of
$\left(V_{k},V_{l}\right),\ 1\leq k<l\leq K$ such that they are not
$\varepsilon$-regular.
Fix an $\varepsilon^{\prime}>0$ and a $K_{\min}$ such that
$\displaystyle K_{\min}>\frac{1}{\varepsilon^{\prime}}.$
This choice ensures that there are enough boxes so that most of the edges run
between boxes rather than within them. This is a fairly common approach in the
context of Szemerédi’s regularity lemma [32].
Using Szemerédi’s regularity lemma for $\varepsilon^{\prime}$, we obtain a
partition denoted by $V_{0},V_{1},\dots,V_{K}.$
For $p$ and $\kappa$, as defined in (48) and (49), we have the following
inequalities:
$\displaystyle
p=\frac{\bar{d}}{N}\geq\frac{p_{0}(N-1)}{N}\geq\frac{p_{0}}{2}>0$
$\displaystyle
1=\sum_{k=0}^{K}\frac{\left|V_{k}\right|}{N}\geq\sum_{k=1}^{K}\frac{\left|V_{k}\right|}{N}=K\kappa\Longrightarrow\kappa\leq\frac{1}{K}\leq\frac{1}{K_{\min}}<\varepsilon^{\prime}$
where we used $N\geq 2$.
Introduce the notations
$\displaystyle\bar{z}_{k}(t):=$
$\displaystyle\frac{1}{\left|V_{k}\right|}\sum_{i\in V_{k}}z_{i}(t),$
$\displaystyle\psi(t):=$
$\displaystyle\sum_{k=1}^{K}\frac{\left|V_{k}\right|}{N}\left\|\bar{z}_{k}(t)-v_{k}(t)\right\|_{1}=\kappa\sum_{k=1}^{K}\left\|\bar{z}_{k}(t)-v_{k}(t)\right\|_{1}.$
If $V_{0}=\emptyset$, we use the convention $\bar{z}_{0}(t)\equiv 0.$
From (53) and (52), we have
$\displaystyle\bar{z}(t)=\frac{1}{N}\sum_{i=1}^{N}z_{i}(t)=\sum_{k=0}^{K}\frac{\left|V_{k}\right|}{N}\frac{1}{\left|V_{k}\right|}\sum_{i\in
V_{k}}z_{i}(t)=\sum_{k=0}^{K}\frac{\left|V_{k}\right|}{N}\bar{z}_{k}(t)$
$\displaystyle\left\|\bar{z}(t)-\bar{v}(t)\right\|_{1}=\left\|\frac{\left|V_{0}\right|}{N}\bar{z}_{0}(t)+\sum_{k=1}^{K}\frac{\left|V_{k}\right|}{N}\left[\bar{z}_{k}(t)-v_{k}(t)\right]\right\|_{1}\leq$
$\displaystyle\frac{\left|V_{0}\right|}{N}\left\|\bar{z}_{0}(t)\right\|_{1}+\sum_{k=1}^{K}\frac{\left|V_{k}\right|}{N}\left\|\bar{z}_{k}(t)-v_{k}(t)\right\|_{1}\leq\varepsilon^{\prime}+\psi(t)$
where in the last step we used $\left|V_{0}\right|<\varepsilon^{\prime}N$ and
$\displaystyle\left\|\bar{z}_{0}(t)\right\|_{1}\leq\frac{1}{\left|V_{0}\right|}\sum_{i\in
V_{0}}\underbrace{\left\|z_{i}(t)\right\|_{1}}_{=1}=1.$
Going forward, it is enough to examine $\psi(t).$
Next we calculate the derivative of $\bar{z}_{k}(t).$ As $M=1,$ (29) takes the
form
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}z_{i,s}(t)=$
$\displaystyle\sum_{s^{\prime}\in\mathcal{S}}q_{ss^{\prime}}\left(\zeta_{i}(t)\right)z_{i,s^{\prime}}(t)=$
$\displaystyle\sum_{s^{\prime}\in\mathcal{S}}q_{ss^{\prime}}^{(0)}z_{i,s^{\prime}}(t)+\sum_{s^{\prime}\in\mathcal{S}}\sum_{r\in\mathcal{S}}q_{ss^{\prime},r}^{(1)}\zeta_{i,r}(t)z_{i,s^{\prime}}(t)=$
$\displaystyle\sum_{s^{\prime}\in\mathcal{S}}q_{ss^{\prime}}^{(0)}z_{i,s^{\prime}}(t)+\sum_{s^{\prime}\in\mathcal{S}}\sum_{r\in\mathcal{S}}q_{ss^{\prime},r}^{(1)}\left[\sum_{j=1}^{N}\underbrace{\frac{a_{ij}}{\bar{d}}}_{w_{ij}}z_{i,s^{\prime}}(t)z_{j,r}(t)\right]$
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\bar{z}_{k,s}(t)=$
$\displaystyle\sum_{s^{\prime}\in\mathcal{S}}q_{ss^{\prime}}^{(0)}\bar{z}_{k,s^{\prime}}(t)+\sum_{s^{\prime}\in\mathcal{S}}\sum_{r\in\mathcal{S}}q_{ss^{\prime},r}^{(1)}\left[\frac{1}{\left|V_{k}\right|}\sum_{i\in
V_{k}}\sum_{j=1}^{N}\frac{a_{ij}}{\bar{d}}z_{i,s^{\prime}}(t)z_{j,r}(t)\right]$
Similarly,
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}v_{k,s}(t)=$
$\displaystyle\underbrace{\sum_{s^{\prime}\in\mathcal{S}}q_{ss^{\prime}}^{(0)}v_{k,s^{\prime}}(t)+\sum_{s^{\prime}\in\mathcal{S}}\sum_{r\in\mathcal{S}}q_{ss^{\prime},r}^{(1)}\sum_{l=1}^{K}{\bar{w}_{kl}}v_{k,s^{\prime}}(t)v_{l,r}(t)}_{=:f_{k,s}\left(V(t)\right)}$
where $V(t):=\left(v_{k,s}(t)\right)_{k\in[K],\ s\in\mathcal{S}}$ and
$\overline{Z}(t)=\left(\bar{z}_{k,s}(t)\right)_{k\in[K],\ s\in\mathcal{S}}$
analogously.
Next we show a Lipschitz-type inequality for
$f_{k}=\left(f_{k,s}\right)_{s\in\mathcal{S}}.$
$\displaystyle\left|\bar{z}_{k,s^{\prime}}(t)\sum_{l=1}^{K}\bar{w}_{kl}\bar{z}_{l,r}(t)-v_{k,s^{\prime}}(t)\sum_{l=1}^{K}\bar{w}_{kl}v_{l,r}(t)\right|\leq$
$\displaystyle\left|\bar{z}_{k,s^{\prime}}(t)-v_{k,s^{\prime}}(t)\right|\underbrace{\sum_{l=1}^{K}\bar{w}_{kl}\bar{z}_{l,r}(t)}_{\leq\sum_{l=1}^{K}\bar{w}_{kl}\leq\frac{2}{p_{0}K}K=\frac{2}{p_{0}}}+\underbrace{v_{k,s^{\prime}}(t)}_{\leq 1}\sum_{l=1}^{K}\underbrace{\bar{w}_{kl}}_{\leq\frac{2\kappa}{p_{0}}}\left|\bar{z}_{l,r}(t)-v_{l,r}(t)\right|\leq$
$\displaystyle\frac{2}{p_{0}}\left(\left|\bar{z}_{k,s^{\prime}}(t)-v_{k,s^{\prime}}(t)\right|+\kappa\sum_{l=1}^{K}\left|\bar{z}_{l,r}(t)-v_{l,r}(t)\right|\right),$
so
$\displaystyle\left|f_{k,s}\left(\bar{Z}(t)\right)-f_{k,s}\left(V(t)\right)\right|\leq
q_{\max}\sum_{s^{\prime}\in\mathcal{S}}\left|\bar{z}_{k,s^{\prime}}(t)-v_{k,s^{\prime}}(t)\right|+$
$\displaystyle\frac{2q_{\max}}{p_{0}}\sum_{s^{\prime}\in\mathcal{S}}\sum_{r\in\mathcal{S}}\left(\left|\bar{z}_{k,s^{\prime}}(t)-v_{k,s^{\prime}}(t)\right|+\kappa\sum_{l=1}^{K}\left|\bar{z}_{l,r}(t)-v_{l,r}(t)\right|\right)=$
$\displaystyle
q_{\max}\left(1+\frac{2\left|\mathcal{S}\right|}{p_{0}}\right)\left\|\bar{z}_{k}(t)-v_{k}(t)\right\|_{1}+\frac{2q_{\max}\left|\mathcal{S}\right|}{p_{0}}\psi(t).$
Summation over $s\in\mathcal{S}$ only contributes an extra
$\left|\mathcal{S}\right|$ factor, so there exists a constant $L_{f}$ such that
$\displaystyle\left\|f_{k}\left(\overline{Z}(t)\right)-f_{k}\left(V(t)\right)\right\|_{1}\leq
L_{f}\left(\left\|\bar{z}_{k}(t)-v_{k}(t)\right\|_{1}+\psi(t)\right).$ (60)
Next we look to replace the right hand side of
$\frac{\mathrm{d}}{\mathrm{d}t}\bar{z}_{k,s}(t)$ with
$f_{k,s}\left(\overline{Z}(t)\right).$ The corresponding error term is
$\displaystyle
g_{k,s}(t):=\sum_{s^{\prime}\in\mathcal{S}}\sum_{r\in\mathcal{S}}q_{ss^{\prime},r}^{(1)}\left[\frac{1}{\left|V_{k}\right|}\sum_{i\in
V_{k}}\sum_{j=1}^{N}\frac{a_{ij}}{\bar{d}}z_{i,s^{\prime}}(t)z_{j,r}(t)-\sum_{l=1}^{K}{\bar{w}_{kl}}\bar{z}_{k,s^{\prime}}(t)\bar{z}_{l,r}(t)\right],$
(61)
and from
$\frac{\mathrm{d}}{\mathrm{d}t}\bar{z}_{k}(t)=g_{k}(t)+f_{k}\left(\overline{Z}(t)\right)$,
we have
$\displaystyle\bar{z}_{k}(t)=$
$\displaystyle\bar{z}_{k}(0)+\int_{0}^{t}g_{k}(\tau)\mathrm{d}\tau+\int_{0}^{t}f_{k}\left(\overline{Z}(\tau)\right)\mathrm{d}\tau.$
Using $\bar{z}_{k}(0)=v_{k}(0)$, $\psi(t)$ can be bounded from above by
$\displaystyle\psi(t)=$
$\displaystyle\kappa\sum_{k=1}^{K}\left\|\bar{z}_{k}(t)-v_{k}(t)\right\|_{1}\leq$
$\displaystyle t\cdot\sup_{0\leq\tau\leq
t}\kappa\sum_{k=1}^{K}\left\|g_{k}(\tau)\right\|_{1}+\int_{0}^{t}\kappa\sum_{k=1}^{K}\left\|f_{k}\left(\overline{Z}(\tau)\right)-f_{k}\left(V(\tau)\right)\right\|_{1}\mathrm{d}\tau\leq$
$\displaystyle t\cdot\sup_{0\leq\tau\leq
t}\kappa\sum_{k=1}^{K}\left\|g_{k}(\tau)\right\|_{1}+L_{f}\int_{0}^{t}\kappa\sum_{k=1}^{K}\left(\left\|\bar{z_{k}}(\tau)-v_{k}(\tau)\right\|_{1}+\psi(\tau)\right)\mathrm{d}\tau\leq$
$\displaystyle t\cdot\sup_{0\leq\tau\leq
t}\kappa\sum_{k=1}^{K}\left\|g_{k}(\tau)\right\|_{1}+2L_{f}\int_{0}^{t}\psi(\tau)\mathrm{d}\tau,$
so from Grönwall’s inequality,
$\displaystyle\sup_{0\leq t\leq T}\psi(t)\leq$
$\displaystyle\left(T\cdot\sup_{0\leq t\leq
T}\kappa\sum_{k=1}^{K}\left\|g_{k}(t)\right\|_{1}\right)e^{2L_{f}T}.$
Therefore it is enough to show that $\sup_{0\leq t\leq
T}\kappa\sum_{k=1}^{K}\left\|g_{k}(t)\right\|_{1}=O\left(\varepsilon^{\prime}\right)$,
and with an appropriate choice of $\varepsilon=C\varepsilon^{\prime}$ we can
conclude
$\displaystyle\sup_{0\leq t\leq
T}\left\|\bar{z}(t)-\bar{v}(t)\right\|_{1}\leq\varepsilon.$
$\displaystyle\kappa\sum_{k=1}^{K}\left\|g_{k}(t)\right\|_{1}=$
$\displaystyle\kappa\sum_{s\in\mathcal{S}}\sum_{k=1}^{K}\left|\sum_{s^{\prime},r\in\mathcal{S}}q_{ss^{\prime},r}^{(1)}\left[\frac{1}{\left|V_{k}\right|}\sum_{i\in
V_{k}}\sum_{j=1}^{N}\frac{a_{ij}}{\bar{d}}z_{i,s^{\prime}}(t)z_{j,r}(t)-\sum_{l=1}^{K}{\bar{w}_{kl}}\bar{z}_{k,s^{\prime}}(t)\bar{z}_{l,r}(t)\right]\right|\leq$
$\displaystyle\kappa
q_{\max}\sum_{s,s^{\prime},r\in\mathcal{S}}\sum_{k=1}^{K}\sum_{l=0}^{K}\left|\frac{1}{|V_{k}|}\sum_{i\in
V_{k}}\sum_{j\in
V_{l}}\frac{a_{ij}}{\bar{d}}z_{i,s^{\prime}}(t)z_{j,r}(t)-\bar{w}_{kl}\bar{z}_{k,s^{\prime}}(t)\bar{z}_{l,r}(t)\right|$
$\sum_{s,s^{\prime},r\in\mathcal{S}}(\dots)$ only contributes a factor of
$\left|\mathcal{S}\right|^{3}$ which we can include in the constant factor
along with $q_{\max}.$ The remaining terms are
$\displaystyle\kappa\sum_{k=1}^{K}\sum_{l=0}^{K}\left|\frac{1}{|V_{k}|}\sum_{i\in
V_{k}}\sum_{j\in
V_{l}}\frac{a_{ij}}{\bar{d}}z_{i,s^{\prime}}(t)z_{j,r}(t)-\bar{w}_{kl}\bar{z}_{k,s^{\prime}}(t)\bar{z}_{l,r}(t)\right|.$
(62)
In the next step we shall get rid of the diagonal $(k,l)$ terms and also the
terms with $l=0$. We have
$\displaystyle\frac{1}{|V_{k}|}\sum_{i\in V_{k}}\sum_{j\in V_{l}}\frac{a_{ij}}{\bar{d}}z_{i,s^{\prime}}(t)z_{j,r}(t)\leq\frac{1}{\left|V_{k}\right|\bar{d}}\sum_{i\in V_{k}}\sum_{j\in V_{l}}1=\frac{\left|V_{l}\right|}{\bar{d}}\leq\frac{\varepsilon^{\prime}}{p}\leq\frac{2\varepsilon^{\prime}}{p_{0}},$
$\displaystyle\bar{w}_{kl}\bar{z}_{k,s^{\prime}}(t)\bar{z}_{l,r}(t)\leq\frac{\kappa}{p}\leq\frac{2\varepsilon^{\prime}}{p_{0}},$
so each term in the sum of (62) is $O\left(\varepsilon^{\prime}\right).$ There
are $O(K)$ pairs which are either diagonal or $l=0$, so their overall
contribution to the sum is $O\left(\kappa
K\varepsilon^{\prime}\right)=O\left(\varepsilon^{\prime}\right),$ hence we can
neglect them and what we are left with is
$\displaystyle\kappa\
\sum_{(k,l)\in\mathcal{I}}\left|\frac{1}{|V_{k}|}\sum_{i\in V_{k}}\sum_{j\in
V_{l}}\frac{a_{ij}}{\bar{d}}z_{i,s^{\prime}}(t)z_{j,r}(t)-\bar{w}_{kl}\bar{z}_{k,s^{\prime}}(t)\bar{z}_{l,r}(t)\right|.$
(63)
where $\mathcal{I}=\\{(k,l)|k,l\in[K],k\neq l\\}$.
In order to have an upper bound for (63) we want to use the properties of the
$\varepsilon^{\prime}$-regular partition. However, Szemerédi’s regularity
lemma uses subsets of $[N]$, or in other words, $0-1$ valued indicators of
vertices compared to $z_{i,s}(t)$ which may take any value from $[0,1].$
To account for this problem, we introduce $N$ independent homogeneous Markov
processes taking values from $\mathcal{S}$. Each process makes Markov
transitions according to the transition rate matrix
$Q\left(\zeta_{i}(t)\right)$ and its initial distribution is given by
$\left(z_{i,s}(0)\right)_{s\in\mathcal{S}}.$ Let $\eta_{i,s}(t)$ be the
indicator that the $i$’th such process is at state $s$ at time $t$. We also
use the notations
$\displaystyle\eta_{i}(t)=$
$\displaystyle\left(\eta_{i,s}(t)\right)_{s\in\mathcal{S}},$
$\displaystyle\bar{\eta}_{k}(t):=$
$\displaystyle\frac{1}{\left|V_{k}\right|}\sum_{i\in V_{k}}\eta_{i}(t).$
It is easy to see that $\mathbb{E}\left(\eta_{i}(t)\right)=z_{i}(t).$ Also,
since $i\in V_{k}$ and $j\in V_{l}$, $i$ and $j$ are different for $k\neq l$,
hence the corresponding processes are independent, so
$\displaystyle z_{i,s^{\prime}}(t)z_{j,r}(t)=$
$\displaystyle\mathbb{E}\left(\eta_{i,s^{\prime}}(t)\eta_{j,r}(t)\right),$
$\displaystyle\bar{z}_{k,s^{\prime}}(t)\bar{z}_{l,r}(t)=$
$\displaystyle\mathbb{E}\left(\bar{\eta}_{k,s^{\prime}}(t)\bar{\eta}_{l,r}(t)\right).$
Therefore, (63) can be bounded from above by
$\displaystyle\mathbb{E}\left[\kappa\
\sum_{(k,l)\in\mathcal{I}}\left|\frac{1}{|V_{k}|}\sum_{i\in V_{k}}\sum_{j\in
V_{l}}\frac{a_{ij}}{\bar{d}}\eta_{i,s^{\prime}}(t)\eta_{j,r}(t)-\bar{w}_{kl}\bar{\eta}_{k,s^{\prime}}(t)\bar{\eta}_{l,r}(t)\right|\right].$
(64)
The upper bound we aim to obtain does not depend on the artificial randomness
just introduced, hence the expectation is ignored.
We make some algebraic manipulation to end up with edge densities needed for
Szemerédi’s regularity lemma. We use the notation
$\displaystyle V_{k,s}(t):=\left\\{\left.i\in
V_{k}\right|\eta_{i,s}(t)=1\right\\}.$
Then
$\displaystyle\frac{1}{|V_{k}|}\sum_{i\in V_{k}}\sum_{j\in
V_{l}}\frac{a_{ij}}{\bar{d}}\eta_{i,s^{\prime}}(t)\eta_{j,r}(t)=\frac{1}{\left|V_{k}\right|\bar{d}}e\left(V_{k,s^{\prime}}(t),V_{l,r}(t)\right)=$
$\displaystyle\frac{\left|V_{l}\right|}{\bar{d}}\rho\left(V_{k,s^{\prime}}(t),V_{l,r}(t)\right)\frac{\left|V_{k,s^{\prime}}(t)\right|}{\left|V_{k}\right|}\frac{\left|V_{l,r}(t)\right|}{\left|V_{l}\right|}=\frac{\kappa}{p}\rho\left(V_{k,s^{\prime}}(t),V_{l,r}(t)\right)\bar{\eta}_{k,s^{\prime}}(t)\bar{\eta}_{l,r}(t).$
By recalling (50), the inside of (64) can be rewritten as
$\displaystyle\frac{\kappa^{2}}{p}\sum_{(k,l)\in\mathcal{I}}\left|\rho\left(V_{k,s^{\prime}}(t),V_{l,r}(t)\right)-\rho\left(V_{k},V_{l}\right)\right|\bar{\eta}_{k,s^{\prime}}(t)\bar{\eta}_{l,r}(t).$
(65)
Note that the summands of (65) are $O(1)$.
Applying Szemerédi’s lemma to (65) is relatively straightforward from now on. We
still have to deal with non-$\varepsilon^{\prime}$-regular $k,l$ pairs, and
pairs where either
$\left|V_{k,s^{\prime}}(t)\right|\leq\varepsilon^{\prime}\left|V_{k}\right|$
or $\left|V_{l,r}(t)\right|\leq\varepsilon^{\prime}\left|V_{l}\right|$. The
former set of pairs are denoted by $\mathcal{I}_{1}$ and the latter by
$\mathcal{I}_{2}$, and
$\mathcal{I}_{3}:=\mathcal{I}\setminus\left(\mathcal{I}_{1}\cup\mathcal{I}_{2}\right)$
denotes the non-problematic pairs.
Then from $\left|\mathcal{I}_{1}\right|\leq\varepsilon^{\prime}{K\choose
2}\leq\varepsilon^{\prime}K^{2}$ we have
$\displaystyle\frac{\kappa^{2}}{p}\sum_{(k,l)\in\mathcal{I}_{1}}\left|\rho\left(V_{k,s^{\prime}}(t),V_{l,r}(t)\right)-\rho\left(V_{k},V_{l}\right)\right|\bar{\eta}_{k,s^{\prime}}(t)\bar{\eta}_{l,r}(t)=O\left(\varepsilon^{\prime}\kappa^{2}K^{2}\right)=O\left(\varepsilon^{\prime}\right).$
$(k,l)\in\mathcal{I}_{2}$ is equivalent to
$\bar{\eta}_{k,s^{\prime}}(t)\leq\varepsilon^{\prime}$ or
$\bar{\eta}_{l,r}(t)\leq\varepsilon^{\prime}$, yielding
$\displaystyle\frac{\kappa^{2}}{p}\sum_{(k,l)\in\mathcal{I}_{2}}\left|\rho\left(V_{k,s^{\prime}}(t),V_{l,r}(t)\right)-\rho\left(V_{k},V_{l}\right)\right|\bar{\eta}_{k,s^{\prime}}(t)\bar{\eta}_{l,r}(t)\leq$
$\displaystyle\frac{\varepsilon^{\prime}\kappa^{2}}{p}\sum_{(k,l)\in\mathcal{I}_{2}}1=O\left(\varepsilon^{\prime}\kappa^{2}K^{2}\right)=O\left(\varepsilon^{\prime}\right).$
Finally, $(k,l)\in\mathcal{I}_{3}$ gives
$\displaystyle\left|\rho\left(V_{k,s^{\prime}}(t),V_{l,r}(t)\right)-\rho\left(V_{k},V_{l}\right)\right|<\varepsilon^{\prime}\Rightarrow$
$\displaystyle\frac{\kappa^{2}}{p}\sum_{(k,l)\in\mathcal{I}_{3}}\left|\rho\left(V_{k,s^{\prime}}(t),V_{l,r}(t)\right)-\rho\left(V_{k},V_{l}\right)\right|\bar{\eta}_{k,s^{\prime}}(t)\bar{\eta}_{l,r}(t)\leq$
$\displaystyle\frac{\varepsilon^{\prime}\kappa^{2}}{p}\sum_{(k,l)\in\mathcal{I}_{3}}1=O\left(\varepsilon^{\prime}\kappa^{2}K^{2}\right)=O\left(\varepsilon^{\prime}\right).$
This ensures that $\sup_{0\leq t\leq
T}\kappa\sum_{k=1}^{K}\left\|g_{k}(t)\right\|_{1}=O\left(\varepsilon^{\prime}\right)$
indeed holds, concluding the proof of Theorem 6.
∎
## References
* [1] N. Alon, R. A. Duke, H. Lefmann, V. Rödl, and R. Yuster. The algorithmic aspects of the regularity lemma. Proceedings, 33rd Annual Symposium on Foundations of Computer Science, pages 473–481, 1992.
* [2] R. Bakhshi, L. Cloth, W. Fokkink, and B. R. Haverkort. Mean-field framework for performance evaluation of push-pull gossip protocols. Performance Evaluation, 68(2):157–179, Feb. 2011.
* [3] Á. Bodó, G. Y. Katona, and P. L. Simon. SIS epidemic propagation on hypergraphs. Bulletin of Mathematical Biology, 78:713–735, 2016.
* [4] S. Bonaccorsi, S. Ottaviano, D. Mugnolo, and F. De Pellegrini. Epidemic outbreaks in networks with equitable or almost-equitable partitions. SIAM Journal on Applied Mathematics, 75:2421–2443, 11 2015.
* [5] D. Bruneo, M. Scarpa, A. Bobbio, D. Cerotti, and M. Gribaudo. Markovian agent modeling swarm intelligence algorithms in wireless sensor networks. Performance Evaluation, 2011.
* [6] G. Caravagna. Formal Modeling and Simulation of Biological Systems with Delays. Ph. D. thesis, Università di Pisa, 2011.
* [7] P. Cisneros-Velarde and F. Bullo. Multi-group SIS epidemics with simplicial and higher-order interactions, 2020. https://arxiv.org/abs/2005.11404.
* [8] G. F. de Arruda, F. A. Rodrigues, and Y. Moreno. Fundamentals of spreading processes in single and multilayer complex networks. Physics Reports, 756:1–59, 2018.
* [9] K. Devriendt and P. Van Mieghem. Unified mean-field framework for susceptible-infected-susceptible epidemics on networks, based on graph partitioning and the isoperimetric inequality. Physical Review E, 96:052314, Nov 2017.
* [10] R. J. Glauber. Time‐dependent statistics of the Ising model. Journal of Mathematical Physics, 4(2):294–307, 1963.
* [11] B. Guerra and J. Gómez-Gardeñes. Annealed and mean-field formulations of disease dynamics on static and adaptive networks. Physical Review E, Statistical, nonlinear, and soft matter physics, 82:035101, 09 2010.
* [12] R. A. Hayden, I. Horváth, and M. Telek. Mean field for performance models with generally-distributed timed transitions. In G. Norman and W. Sanders, editors, Quantitative Evaluation of Systems, volume 8657 of Lecture Notes in Computer Science, pages 90–105. Springer International Publishing, 2014.
* [13] I. Iacopini, G. Petri, A. Barrat, and V. Latora. Simplicial models of social contagion. Nature Communications, 10:2485, 06 2019.
* [14] B. Jhun, M. Jo, and B. Kahng. Simplicial SIS model in scale-free uniform hypergraph. arXiv, 2019. https://arxiv.org/abs/1910.00375.
* [15] I. Kiss, J. Miller, and P. Simon. Mathematics of Epidemics on Networks, volume 46. Springer, 01 2017.
* [16] T. Kurtz. Solutions of ordinary differential equations as limits of pure jump Markov processes. Journal of Applied Probability, 7:49–58, 04 1970.
* [17] T. G. Kurtz. Strong approximation theorems for density dependent Markov chains. Stochastic Processes and their Applications, 6(3):223–240, 1978.
* [18] C. Li, R. van de Bovenkamp, and P. Van Mieghem. Susceptible-infected-susceptible model: A comparison of $n$-intertwined and heterogeneous mean-field approximations. Physical Review E, 86:026116, Aug 2012.
* [19] P. Mieghem. The N-intertwined SIS epidemic network model. Computing, 93:147–169, 12 2011.
* [20] M. Molloy and B. Reed. A critical point for random graphs with a given degree sequence. Random Structures & Algorithms, 6(2‐3):161–180, 1995.
* [21] J. Noonan and R. Lambiotte. Dynamics of majority rule on hypergraphs. Physical Review E, 104:024316, Aug 2021.
* [22] M. Ostilli and F. Mukhamedov. Continuous- and discrete-time Glauber dynamics. First- and second-order phase transitions in mean-field Potts models. EPL (Europhysics Letters), 101(6):60008, 2013.
* [23] N. Papanikolaou, G. Vaccario, E. Hormann, R. Lambiotte, and F. Schweitzer. Consensus from group interactions: An adaptive voter model on hypergraphs. Physical Review E, 105:054307, May 2022.
* [24] R. Parasnis, R. Kato, A. Sakhale, M. Franceschetti, and B. Touri. Usefulness of the age-structured SIR dynamics in modelling COVID-19, 2022. https://arxiv.org/abs/2203.05111.
* [25] R. Pastor-Satorras and A. Vespignani. Epidemic spreading in scale-free networks. Physical Review Letters, 86:3200–3203, Apr 2001.
* [26] N. Perra, B. Gonçalves, R. Pastor-Satorras, and A. Vespignani. Activity driven modeling of time varying networks. Scientific reports, 2, 03 2012.
* [27] R. Schlicht and G. Winkler. A delay stochastic process with applications in molecular biology. Journal of Mathematical Biology, 57(5):613–48, Nov. 2008.
* [28] D. H. Silva, S. C. Ferreira, W. Cota, R. Pastor-Satorras, and C. Castellano. Spectral properties and the accuracy of mean-field approaches for epidemics on correlated power-law networks. Physical Review Research, 1:033024, Oct 2019.
* [29] P. L. Simon and I. Z. Kiss. On bounding exact models of epidemic spread on networks, 2017. https://arxiv.org/abs/1704.01726.
* [30] A. Sridhar and S. Kar. Mean-field approximation for stochastic population processes in networks under imperfect information, 01 2021. https://arxiv.org/abs/2101.09644.
* [31] A. Sridhar and S. Kar. On the accuracy of deterministic models for viral spread on networks. ArXiv, 2021. https://arxiv.org/abs/2104.04913.
* [32] E. Szemeredi. Regular partitions of graphs. Proceedings of the C.N.R.S. International Colloquium, 260:11, 04 1975.
* [33] L. Zino, A. Rizzo, and M. Porfiri. An analytical framework for the study of epidemic models on activity driven networks. Journal of Complex Networks, 5(6):924–952, 11 2017.
# ScaleVLAD: Improving Multimodal Sentiment Analysis via
Multi-Scale Fusion of Locally Descriptors
Huaishao Luo1, Lei Ji2, Yanyong Huang3, Bin Wang4, Shenggong Ji5, Tianrui Li1
1Southwest Jiaotong University, Chengdu, China
<EMAIL_ADDRESS><EMAIL_ADDRESS>
2Microsoft Research Asia, Beijing, China
3Southwestern University of Finance and Economics, Chengdu, China
4Ocean University of China, Qingdao, China
5Tencent, Shenzhen, China
<EMAIL_ADDRESS><EMAIL_ADDRESS>
<EMAIL_ADDRESS><EMAIL_ADDRESS>
This work was done during the first author’s internship in MSR Asia.
###### Abstract
Fusion techniques are a key research topic in multimodal sentiment analysis. Recent
attention-based fusion demonstrates advances over simple operation-based fusion.
However, these fusion works adopt single-scale, i.e., token-level or
utterance-level, unimodal representations. Such single-scale fusion is
suboptimal because different modalities should be aligned at different
granularities. This paper proposes a fusion model named ScaleVLAD to gather
multi-Scale representation from text, video, and audio with shared Vectors of
Locally Aggregated Descriptors to improve unaligned multimodal sentiment
analysis. These shared vectors can be regarded as shared topics to align
different modalities. In addition, we propose a self-supervised shifted
clustering loss to keep the fused feature differentiation among samples. The
backbones are three Transformer encoders corresponding to three modalities,
and the aggregated features generated from the fusion module are fed to a
Transformer plus a fully connected layer to produce the task predictions. Experiments on
three popular sentiment analysis benchmarks, IEMOCAP, MOSI, and MOSEI,
demonstrate significant gains over baselines.
## 1 Introduction
Multimodal Sentiment Analysis (MSA) has been a hot research direction with the
increasing number of user-generated videos available on online platforms such
as YouTube and Facebook in recent years Poria et al. (2020); Tsai et al.
(2019); Zadeh et al. (2017).
Figure 1: Illustration of alignment between text, video, and audio. (a) single-scale alignment. (b) our multi-scale alignment.
Its main objective is to identify sentiment and emotion with multimodal
signals such as textual, visual, and acoustic information. Compared with
unimodal sentiment analysis, multimodal fusion can provide more comprehensive
information and capture more emotional characteristics, which leads to robust
and salient improvements Yang, Xu, and Gao (2020); Yu et al. (2021). For
example, judging the sentiment of _this movie is sick_ is a non-trivial task
due to the language ambiguity of this sentence alone; given
the acoustic and visual modalities, e.g., a loud voice and a smile, this
sentence will certainly be predicted as positive Zadeh et al. (2017); Wang,
Wan, and Wan (2020).
There are two main components in multimodal sentiment analysis: unimodal
representation and information fusion. For the unimodal representation, there
are some off-the-shelf methods. These methods are elaborate and specialized
for each modality or can be improved with pretraining on extra pure datasets,
e.g., MFCC for audio and BERT encoding for text Devlin et al. (2019). Thus,
multimodal information fusion is the key factor affecting performance Poria et al.
(2020); Zhang et al. (2020). Most works focus on investigating
effective multimodal fusion. These fusion methods include, but are not limited
to, simple operation-based Poria et al. (2016),
attention-based Zadeh et al. (2018c); Gu et al. (2018); Akhtar et al. (2019);
Han et al. (2021); Rahman et al. (2020), tensor-based Zadeh et al. (2017),
translation-based Pham et al. (2019); Wang, Wan, and Wan (2020); Mai, Hu, and
Xing (2020), GANs-based Peng and Qi (2019), graph-based Yang et al. (2021),
and routing-based methods Tsai et al. (2020). The fusion target is to learn a
modality-invariant embedding space, then use the modality-invariant feature or
integrate the modality-invariant with modality-specific features to finish the
final prediction.
However, most of the fusion methods either adopt the token-level or the
utterance-level unimodal representation. Such a single-scale fusion is
suboptimal because different modalities need to align with different
granularities. Consider, for example, the phrase ‘really really good’ shown in Figure 1. The
single-scale alignment of the three tokens cannot capture the intense
emotion. Instead, they should be regarded as an entirety, as shown in Figure 1(b).
Besides, the visual and acoustic features do not have apparent semantic
boundaries due to variable sampling rates, leading to inherent data non-
alignment for each modality Tsai et al. (2019). Although the attention-based
methods can make each token in one modality cover long-range contexts in other
modalities, they still perform single-scale alignment and cannot capture _many
tokens-to-many tokens_ relationships.
To this end, we propose a multi-scale fusion method called ScaleVLAD to gather
multi-Scale representation from text, video, and audio with shared Vectors of
Locally Aggregated Descriptors to address the _unaligned_ multimodal sentiment
analysis. Instead of detecting the boundary of different semantic scales in
each modality, ScaleVLAD utilizes learnable shared latent semantic vectors to
select and aggregate the modality features automatically. These latent
semantic vectors, regarded as different semantic topics, are shared across
different modalities and scales. Thus they can reduce the semantic gap between
modalities and align various scale features naturally. In our implementation,
we use three Transformer-based modules Vaswani et al. (2017) to extract
unimodal representation from text, video, and audio, respectively. Then, the
unimodal feature sequences are fed to the ScaleVLAD module with different
scales of shifted windows. The aggregated features from the ScaleVLAD module
are used to predict the final output via a Transformer and a fully connected
layer. Figure 2 shows the main structure of the proposed ScaleVLAD. Besides,
to keep the differentiation of the fused feature among samples and leverage
label information effectively, we propose a self-supervised shifted clustering
loss to train the model jointly. This loss will pull clusters of samples
belonging to the same category (or close score) together in embedding space.
The contribution of this paper can be summarized as follows:
1) We propose a multi-scale fusion method ScaleVLAD to address the unaligned
multimodal sentiment analysis. It is a flexible approach to fuse unimodal
representation with a multi-scale perspective.
2) We propose a self-supervised shifted clustering loss to keep the fused
feature differentiation among samples and leverage label information
effectively.
3) We report new records on three benchmark datasets, including IEMOCAP Busso
et al. (2008), CMU-MOSI Zadeh et al. (2016), and CMU-MOSEI Zadeh et al.
(2018b). Extensive experiments validate the effectiveness of ScaleVLAD.
## 2 Related Works
### 2.1 Multimodal Sentiment Analysis
In recent years, multimodal sentiment analysis has become a popular research
topic with the increase of user-generated multimedia data on online
communities, blogs, and multimedia platforms. It mainly focuses on integrating
multiple heterogeneous resources, such as textual, visual, and acoustic
signals to comprehend varied human emotions Morency, Mihalcea, and Doshi
(2011); Poria et al. (2020). Previous researchers mainly focus on unimodal
representation learning and multimodal fusion. For the unimodal
representation, Hazarika, Zimmermann, and Poria (2020) attempted to factorize
modality features in joint spaces and presented modality-invariant and
modality-specific representations across different modalities. Yu et al.
(2021) designed a unimodal label generation strategy based on the self-
supervised approach to acquire information-rich unimodal representations by
learning one multimodal task and three unimodal subtasks. Wang et al. (2019)
constructed a recurrent attended variation embedding network to model the
fine-grained structure of nonverbal sub-word sequences and dynamically shift
word representations based on nonverbal cues.
For the multimodal fusion, the previous methods can be divided into simple
operation-based Poria et al. (2016), attention-based Zadeh et al. (2018c); Gu
et al. (2018); Akhtar et al. (2019); Han et al. (2021); Rahman et al. (2020),
tensor-based Zadeh et al. (2017); Verma et al. (2019, 2020), translation-based
Pham et al. (2019); Wang, Wan, and Wan (2020); Mai, Hu, and Xing (2020), GANs-
based Peng and Qi (2019), graph-based Yang et al. (2021); Mai et al. (2020),
and routing-based methods Tsai et al. (2020), etc. Some works assumed the
given multimodal sequences are aligned with each word’s boundary Pham et al.
(2019); Gu et al. (2018); Dumpala et al. (2019); Rahman et al. (2020).
However, some modalities, e.g., video and audio, exhibit inherent data non-
alignment due to variable sampling rates. Thus, modeling unaligned multimodal
sequences is more flexible and practical. Tsai et al. (2019); Yang, Xu, and
Gao (2020); Siriwardhana et al. (2020) used multiple cross-modal Transformers
to model unaligned multimodal language sequences. Yang et al. (2021) proposed
a parameter-efficient and interpretable graph-based neural model by
integrating an efficient trimodal-temporal graph fusion operation and dynamic
pruning technique.
Figure 2: The main structure of our ScaleVLAD, which comprises four
components: three unimodal encoders and a fusion module. The model
is trained with a task-related loss and an extra clustering loss.
This paper aims at unaligned multimodal sentiment analysis. Unlike previous
studies adopting the token-level or the utterance-level unimodal
representation, we propose a multi-scale fusion method to align different
granularity information from multiple modalities.
### 2.2 VLAD, Vector of Locally Aggregated Descriptors
The Vector of Locally Aggregated Descriptors (VLAD) Jégou et al. (2010);
Arandjelovic and Zisserman (2013) has achieved great impacts in aggregating
discriminative features for various scenarios, including video retrieval and
video classification. NetVLAD Arandjelovic et al. (2016) extending from the
VLAD is an end-to-end differentiable layer that could be readily plugged into
many existing neural models. This paper borrows the idea of VLAD and NetVLAD
to align different modalities, e.g., text, video, and audio, instead of using
it as a discriminative feature learner. Wang, Zhu, and Yang (2021) has a
similar motivation that leverages NetVLAD to reduce the gap of locally learned
features from texts and videos. However, their objective is for text-video
local similarity matching, and we have a different target. Besides, we
introduce multi-scale features for enhanced fusion performance. Hausler et al.
(2021) also presents a multi-scale fusion by deriving patch-level features
from NetVLAD residuals. However, it is designed for place recognition and only
on the visual modality alone. We focus on unaligned multimodal sentiment analysis,
which involves text, video, and audio modalities.
## 3 Framework
Given a set of multimodal signals including text $\mathcal{T}$, video clips
$\mathcal{V}$, and audios $\mathcal{A}$, the target is to predict their
sentiment. Specifically, these signals can be regarded as a set of triplets
$(T_{i},V_{i},A_{i})$, where $T_{i}\in\mathcal{T}$, $V_{i}\in\mathcal{V}$ and
$A_{i}\in\mathcal{A}$. The $T_{i}$, $V_{i}$, and $A_{i}$ contain a sequence of
tokens, respectively, such that
$T_{i}=\big{\\{}t_{i}^{j}|j\in[1,|T_{i}|]\big{\\}}$,
$V_{i}=\big{\\{}\boldsymbol{v}_{i}^{j}|j\in[1,|V_{i}|]\big{\\}}$, and
$A_{i}=\big{\\{}\boldsymbol{a}_{i}^{j}|j\in[1,|A_{i}|]\big{\\}}$, where
$t_{i}^{j}$ is word token, $\boldsymbol{v}_{i}^{j}$ is visual feature, and
$\boldsymbol{a}_{i}^{j}$ denotes acoustic feature. We regard the visual
features and acoustic features as tokens for a consistent description with the
word tokens. Multimodal sentiment analysis aims to learn a function
$f(T_{i},V_{i},A_{i})$ to get the sentiment score or emotion category. The
function learning can be regarded as either a regression or a classification
task.
Figure 2 demonstrates our framework. We focus on the multi-scale fusion module
and a training loss, S3C loss, in this paper. Besides, three unimodal
encoders, a text encoder, a video encoder, and an audio encoder, are also
introduced in detail in this section.
### 3.1 Modality Representation Learning
The unimodal representation is the foundation of this model and affects
the performance of the subsequent fusion module. We use Transformer Vaswani
et al. (2017) with different layers to encode original text $T_{i}$, raw video
feature sequence $V_{i}\in\mathbb{R}^{|V_{i}|\times\hat{d}_{v}}$, and raw
audio feature sequence $A_{i}\in\mathbb{R}^{|A_{i}|\times\hat{d}_{a}}$, where
$\hat{d}_{v}$ and $\hat{d}_{a}$ are the dimensions of the raw feature. The raw
video feature and raw audio feature are extracted with pretrained toolkits
following previous works Zadeh et al. (2017); Yu et al. (2021). For the text
encoder, we use the pretrained 12-layers BERT Devlin et al. (2019) and
12-layers T5 Raffel et al. (2020) to extract text representation
$\mathcal{F}_{T_{i}}\in\mathbb{R}^{|T_{i}|\times d_{t}}$ since the tremendous
success of the pre-trained language model on many downstream NLP tasks, where
$d_{t}=768$ is the dimension of the text representation.
$\displaystyle\mathcal{F}_{T_{i}}=\texttt{Transformer}_{\texttt{T}}(T_{i}),$
(1)
where $\texttt{Transformer}_{\texttt{T}}$ means the Transformer-based text
encoder, e.g., BERT and T5 in our implementation.
Similarly, the video feature sequence
$\mathcal{F}_{V_{i}}\in\mathbb{R}^{|V_{i}|\times d_{v}}$ and audio feature
sequence $\mathcal{F}_{A_{i}}\in\mathbb{R}^{|A_{i}|\times d_{a}}$ can be
calculated with $V_{i}$ and $A_{i}$ respectively as follows,
$\displaystyle\mathcal{F}_{V_{i}}=\texttt{Transformer}_{\texttt{V}}(V_{i}),$
(2)
$\displaystyle\mathcal{F}_{A_{i}}=\texttt{Transformer}_{\texttt{A}}(A_{i}),$
(3)
where $\texttt{Transformer}_{\texttt{V}}$ and
$\texttt{Transformer}_{\texttt{A}}$ are Transformer-based video encoder and
Transformer-based audio encoder, respectively, both of them are randomly
initialized. $d_{v}$ and $d_{a}$ are the dimension of the video feature and
audio feature, respectively.
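To make Eqs. (1)–(3) concrete, the following PyTorch-style sketch instantiates the video and audio encoders; the head count, the projection of raw features before encoding, and all variable names are illustrative assumptions (in the paper, the text encoder is a pretrained BERT/T5 rather than a randomly initialized Transformer):

```python
import torch
import torch.nn as nn

def make_encoder(d_model: int, n_layers: int) -> nn.TransformerEncoder:
    # Generic Transformer encoder; the head count of 8 is an illustrative choice.
    layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=n_layers)

# Raw feature dims stated in Sec. 4.1: 35 (Facet, video) and 74 (COVAREP, audio).
video_proj = nn.Linear(35, 128)            # lift raw visual tokens to model width
audio_proj = nn.Linear(74, 128)            # lift raw acoustic tokens to model width
video_encoder = make_encoder(128, 4)       # Transformer_V, randomly initialized
audio_encoder = make_encoder(128, 4)       # Transformer_A, randomly initialized

V = torch.randn(2, 50, 35)                 # (batch, |V_i|, raw video dim)
A = torch.randn(2, 120, 74)                # (batch, |A_i|, raw audio dim)
F_V = video_encoder(video_proj(V))         # Eq. (2): (2, 50, 128)
F_A = audio_encoder(audio_proj(A))         # Eq. (3): (2, 120, 128)
```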
### 3.2 ScaleVLAD Module
After generating the unimodality representation, the framework comes to the
fusion module. We propose a multi-scale fusion method to cover different
granularities of unimodality representation in this paper. Different full
connection layers are used for the generated $\mathcal{F}_{T_{i}}$,
$\mathcal{F}_{V_{i}}$, and $\mathcal{F}_{A_{i}}$ to map the hidden size to a
common size $d_{s}$ before the following modules if their current hidden sizes
are not equal to this value. When considering the fusion of the three
unimodality features, especially with different granularities, a core problem
is aligning different semantic units. However, the semantic unit of each
unimodality has no clear alignment boundary and can not be fused directly. A
feasible approach is to assume some shared semantic vectors among these
unimodality features and align them to these shared anchors. Such shared
vectors can be regarded as shared topics and can also be shared across
different unimodality scales.
Motivated by this idea and inspired by VLAD and NetVLAD, we propose a
ScaleVLAD module to fuse different unimodality representations. The different
scale information of unimodality is generated by mean pooling with different
kernel size (the stride size is the same as the kernel size) in our
implementation. Specifically, for $m$-scale unimodality representation
$\mathcal{F}_{M_{i}},M\in\\{T,V,A\\}$, the scaled features can be denoted as
$\mathcal{F}_{M_{i}}^{(m)}=\\{\boldsymbol{f}_{j}^{(m)}|j\in[1,|\mathcal{F}_{M_{i}}^{(m)}|]\\}$,
where $\boldsymbol{f}_{j}^{(m)}$ is generated via mean pooling with kernel
size $m$. The $\mathcal{F}_{M_{i}}^{(m)}$ is equal to $\mathcal{F}_{M_{i}}$
when $m=1$. Assuming there are $K$ shared semantic vectors
$\\{\boldsymbol{c}_{k}|k\in[1,K]\\}$ with $d_{s}$ dimension. The similarity
between the $m$-scale feature $\boldsymbol{f}_{j}^{(m)}$ and the shared
vectors can be calculated by dot-product operation following Arandjelovic et
al. (2016),
$\displaystyle
w_{ij}^{(m)}=\frac{\exp(\boldsymbol{f}_{i}^{(m)}\boldsymbol{c}_{j}^{\top}+b_{j})}{\sum_{k=1}^{K}\exp(\boldsymbol{f}_{i}^{(m)}\boldsymbol{c}_{k}^{\top}+b_{k})},$
(4)
where $b_{j}$ and $b_{k}$ are learnable biases, the shared semantic vectors
are jointly learned with the whole model. Then the aggregated feature on each
vector can be generated as follows,
$\displaystyle\hat{\boldsymbol{r}}_{j}^{(m)}=$
$\displaystyle\sum\nolimits_{i=1}^{|\mathcal{F}_{M_{i}}^{(m)}|}w_{ij}^{(m)}(\boldsymbol{f}_{i}^{(m)}-\hat{\boldsymbol{c}}_{j}),$
(5) $\displaystyle\boldsymbol{r}_{j}^{(m)}=$
$\displaystyle\hat{\boldsymbol{r}}_{j}^{(m)}/{\lVert\hat{\boldsymbol{r}}_{j}^{(m)}\rVert_{2}},$
(6)
where $\hat{\boldsymbol{c}}_{j}$ has the same size as $\boldsymbol{c}_{j}$,
and using two groups of similar vectors increases the adaptation capability as
described in Arandjelovic et al. (2016). The output $\boldsymbol{r}_{j}^{(m)}$
can be regarded as the aligned feature for unimodality with $m$-scale. Thus,
the aggregated feature corresponding to $\mathcal{F}_{M_{i}}$ can be generated
as follows,
$\displaystyle\hat{\boldsymbol{u}}=\texttt{stack}([\boldsymbol{r}_{1}^{(m)},\boldsymbol{r}_{2}^{(m)},\cdots,\boldsymbol{r}_{K}^{(m)}]),$
(7)
$\displaystyle\boldsymbol{u}_{M_{i}}^{(m)}=\texttt{LN}(\texttt{GELU}(\hat{\boldsymbol{u}}\mathbf{W}\\!_{M}+\boldsymbol{b}_{M})),$
(8)
where stack is a stack operation and
$\hat{\boldsymbol{u}}\in\mathbb{R}^{Kd_{s}}$,
$\mathbf{W}\\!_{M}\in\mathbb{R}^{Kd_{s}\times d_{s}}$ and
$\boldsymbol{b}_{M}\in\mathbb{R}^{d_{s}}$ ($M\in\\{T,V,A\\}$) are learnable
weights and biases, GELU and LN are the GELU activation function Hendrycks and
Gimpel (2016) and Layer Normalization operation Ba, Kiros, and Hinton (2016),
respectively.
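The following is a minimal PyTorch-style sketch of Eqs. (4)–(8) for one modality at one scale $m$; the use of `avg_pool1d` for the scale pooling, the tensor layout, and all names are our assumptions rather than the authors’ code:

```python
import torch
import torch.nn.functional as F

def scale_vlad(feats, centers, centers_hat, bias, W, b, m):
    # feats: (L, d_s) unimodal token sequence; centers, centers_hat: (K, d_s);
    # bias: (K,); W: (K*d_s, d_s); b: (d_s,). Returns u of Eq. (8) at scale m.
    # Mean pooling with kernel size = stride = m yields the m-scale tokens f^(m).
    f_m = F.avg_pool1d(feats.t().unsqueeze(0), kernel_size=m, stride=m).squeeze(0).t()
    # Eq. (4): soft assignment of each m-scale token to the K shared vectors.
    w = torch.softmax(f_m @ centers.t() + bias, dim=-1)                  # (L//m, K)
    # Eq. (5): weighted residuals against the second group of vectors c_hat.
    r_hat = w.t() @ f_m - w.sum(dim=0, keepdim=True).t() * centers_hat   # (K, d_s)
    r = F.normalize(r_hat, p=2, dim=-1)                                  # Eq. (6)
    u_hat = r.reshape(-1)                                                # Eq. (7)
    return F.layer_norm(F.gelu(u_hat @ W + b), b.shape)                  # Eq. (8)

K, d_s, L = 10, 128, 30
u = scale_vlad(torch.randn(L, d_s), torch.randn(K, d_s), torch.randn(K, d_s),
               torch.zeros(K), torch.randn(K * d_s, d_s), torch.zeros(d_s), m=2)
```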
The fusion and prediction are conducted on the multi-scale aggregated features
$\boldsymbol{u}_{M_{i}}^{(m)}$. We stack all the representation with different
scales $m_{1},m_{2},\dots$ together to get representation matrix,
$R_{i}=[\boldsymbol{u}_{T_{i}}^{(m_{1})},\boldsymbol{u}_{V_{i}}^{(m_{1})},\boldsymbol{u}_{A_{i}}^{(m_{1})},\boldsymbol{u}_{T_{i}}^{(m_{2})},\dots,\bar{\boldsymbol{f}}_{T_{i}},\bar{\boldsymbol{f}}_{V_{i}},\bar{\boldsymbol{f}}_{A_{i}}]\in\mathbb{R}^{(3\cdot|m|+3)\times
d_{s}}$, where $|m|$ means the number of scales,
$\bar{\boldsymbol{f}}_{M_{i}}$ $(M\in\\{T,V,A\\})$ is the mean pooling result
on $\mathcal{F}_{M_{i}}$. After obtaining $R_{i}$, a randomly initialized
Transformer encoder $\texttt{Transformer}_{\texttt{F}}$ is utilized to let the
learned multi-scale representations interact:
$\displaystyle\hat{R}_{i}=\texttt{Transformer}_{\texttt{F}}(R_{i}).$ (9)
Finally, the score or probability can be calculated as,
$\displaystyle\hat{\boldsymbol{r}}=\texttt{max-pooling}(\hat{R}_{i}),$ (10)
$\displaystyle\boldsymbol{o}_{i}=\hat{\boldsymbol{r}}\mathbf{W}\\!_{r}+\boldsymbol{b}_{r},$
(11)
where $\hat{\boldsymbol{r}}\in\mathbb{R}^{d_{s}}$ is the max pooling result of
$\hat{R}_{i}$, $\mathbf{W}_{r}\in\mathbb{R}^{d_{s}\times c}$ and
$\boldsymbol{b}_{r}\in\mathbb{R}^{c}$ are learnable weights and biases, $c$ is
the number of categories for classification task or 1 for regression task.
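Eqs. (9)–(11) can then be sketched as follows; the head count and names are again illustrative (the paper states that the fusion Transformer has 2 layers and $d_{s}=128$):

```python
import torch
import torch.nn as nn

d_s, c = 128, 1                              # c = 1 for regression (e.g., CMU-MOSI)
layer = nn.TransformerEncoderLayer(d_model=d_s, nhead=8, batch_first=True)
fusion = nn.TransformerEncoder(layer, num_layers=2)   # Transformer_F of Eq. (9)
head = nn.Linear(d_s, c)                              # W_r, b_r of Eq. (11)

# R_i stacks the 3*|m| aggregated vectors u plus the 3 mean-pooled features.
R_i = torch.randn(1, 3 * 4 + 3, d_s)         # |m| = 4 scales, e.g., {1, 2, 3, 10}
R_hat = fusion(R_i)                          # Eq. (9)
r_hat = R_hat.max(dim=1).values              # Eq. (10): max-pooling over tokens
o_i = head(r_hat)                            # Eq. (11): score, or logits if c > 1
```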
### 3.3 S3C Loss, Self-supervised Shifted Clustering Loss
Beyond proposing the ScaleVLAD module to capture and align different
granularities of unimodality representation, we proposed an extra self-
supervised shifted clustering loss (S3C Loss) to keep the differentiation of
the fused feature among samples and to leverage label information effectively.
For the fusion feature $\hat{\boldsymbol{r}}$ of each sample from Eq. (10), we
first perform $k$-means to obtain $C$ clusters (we use Faiss,
https://github.com/facebookresearch/faiss, for clustering in our
implementation). We refer to the $i$-th cluster center as
$\boldsymbol{z}_{i}\in\mathbb{R}^{d_{s}}$ and refer to all cluster centers as
a matrix $Z\in\mathbb{R}^{C\times d_{s}}$. The clustering operation is
calculated on all representations of training samples at each epoch beginning.
For the same sample in the running epoch, we assign its cluster center index
$i$ as a classified label. The S3C Loss can be obtained as follows,
$\displaystyle\boldsymbol{p}=\texttt{softmax}(Z\hat{\boldsymbol{r}}),$ (12)
$\displaystyle\mathcal{L}_{s3c}=-\frac{1}{N}\sum_{i=1}^{N}\left(\mathbb{I}(i)(\log(\boldsymbol{p}))^{\top}\right),$
(13)
where $\mathbb{I}(i)$ means the one-hot vector with length $C$ and its $i$-th
value is 1, $N$ is the number of training samples.
This loss is self-supervised, but the cluster centers are not stable at the
beginning of the training stage. We therefore set a start epoch $s_{s3c}$ from
which to train with $\mathcal{L}_{s3c}$, instead of optimizing it from the
beginning of training. Such a setting makes the features used for clustering
semantically related to the ground-truth labels. To keep the cluster centers stable, we adopt a shifted
update with a momentum parameter $\alpha$ as $Z^{(t)}=\alpha
Z^{(t-1)}+(1-\alpha)Z$ and use $Z^{(t)}$ to replace $Z$ at each iteration. The
$\alpha$ is set as a constant of 0.99 in our experiments. The clustering loss
makes the fusion features differentiate in the embedding space.
To improve robustness against the unknown ground-truth cluster
number of the fusion space, we perform multiple clusterings, e.g., with $C_{1}$
clusters and $C_{2}$ clusters. Thus, the $\mathcal{L}_{s3c}$ will be replaced
by
$\mathcal{L}_{s3c}=\mathcal{L}_{s3c}^{(C_{1})}+\mathcal{L}_{s3c}^{(C_{2})}+\dots$,
where $\mathcal{L}_{s3c}^{(C_{i})}$ means $\mathcal{L}_{s3c}$ with $C_{i}$
clusters.
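A compact sketch of the S3C machinery (Eqs. (12)–(13) with the momentum update of the centers) is given below; we substitute scikit-learn’s KMeans for the Faiss clustering used by the authors, and the class interface and names are purely our assumptions:

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

class S3C:
    def __init__(self, n_clusters: int, alpha: float = 0.99):
        self.n_clusters, self.alpha = n_clusters, alpha
        self.Z = None        # (C, d_s) momentum-tracked cluster centers
        self.labels = None   # cluster index per training sample, frozen per epoch

    def recluster(self, all_feats: torch.Tensor):
        # Run on the fused features of all training samples at each epoch start.
        km = KMeans(n_clusters=self.n_clusters, n_init=10).fit(all_feats.numpy())
        Z_new = torch.from_numpy(km.cluster_centers_).float()
        # Shifted update: Z^(t) = alpha * Z^(t-1) + (1 - alpha) * Z, alpha = 0.99.
        self.Z = Z_new if self.Z is None else self.alpha * self.Z + (1 - self.alpha) * Z_new
        self.labels = torch.from_numpy(km.labels_).long()

    def loss(self, r_hat: torch.Tensor, sample_idx: torch.Tensor) -> torch.Tensor:
        # Eqs. (12)-(13): softmax over center similarities, cross-entropy against
        # the cluster index assigned at the epoch start.
        logits = r_hat @ self.Z.t()                     # (batch, C)
        return F.cross_entropy(logits, self.labels[sample_idx])
```

For the multiple-clustering variant, one such object per cluster count (e.g., $C\in\{10,15\}$) can be instantiated and the individual losses summed.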
### 3.4 Training Objectives
The overall objective of the model is to minimize:
$\displaystyle\mathcal{L}=\mathcal{L}_{task}+\mathcal{L}_{s3c},$ (14)
where $\mathcal{L}_{s3c}$ is the S3C Loss, and $\mathcal{L}_{task}$ is the
task loss. The task loss has different formulations for the classification
task and regression task. For the classification task, we use cross-entropy
error with $\boldsymbol{o}_{i}$ in Eq. (11) as
$\mathcal{L}_{task}=-\frac{1}{N}\sum_{i=1}^{N}(\mathbb{I}(y_{i})(\log(\boldsymbol{o}_{i}))^{\top})$,
where $\mathbb{I}(y_{i})$ means the one-hot vector of $y_{i}$. For the
regression task, we use the mean squared error (MSE) as the training objective,
$\mathcal{L}_{task}=\frac{1}{N}\sum_{i=1}^{N}(\lVert
y_{i}-\boldsymbol{o}_{i}\rVert_{2}^{2})$. $y_{i}$ is the category for
classification or the score for regression, and $N$ is the number of training
samples.
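The overall objective of Eq. (14) then reduces to a few lines; the function signature is a hypothetical convenience of ours:

```python
import torch
import torch.nn.functional as F

def total_loss(o, y, s3c_terms, task="regression"):
    # Eq. (14): L = L_task + L_s3c, with L_s3c possibly summed over cluster counts.
    if task == "classification":
        task_loss = F.cross_entropy(o, y)         # cross-entropy on Eq. (11) logits
    else:
        task_loss = F.mse_loss(o.squeeze(-1), y)  # MSE on the regression score
    return task_loss + sum(s3c_terms, torch.tensor(0.0))
```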
## 4 Experiments
We conduct experiments to evaluate the effectiveness of the proposed
framework. The datasets, experimental settings, and results are introduced in
this section.
### 4.1 Datasets
We evaluate our framework on three benchmark datasets, IEMOCAP Busso et al.
(2008), CMU-MOSI Zadeh et al. (2016), and CMU-MOSEI Zadeh et al. (2018b).
These datasets provide unaligned language, visual, and acoustic signals for
multimodal sentiment analysis.
IEMOCAP IEMOCAP Busso et al. (2008) consists of 10,000 videos for human
emotion analysis. We follow Wang et al. (2019) and select four emotions
(happy, sad, angry, and neutral) for emotion recognition. The task of this
dataset is a multilabel task (e.g., a person can be sad and angry
simultaneously). The metric used on this dataset is the binary classification
accuracy (Acc) and the F1 score of the predictions.
CMU-MOSI Multimodal Opinion Sentiment and Emotion Intensity Zadeh et al.
(2016) is sentence-level sentiment analysis and emotion recognition in online
videos. CMU-MOSI contains 2,199 opinion video clips, each annotated with real-
valued sentiment intensity annotations in the range [-3, +3]. We evaluate the
model performances using various metrics following prior works: binary
accuracy (BA), F1 score, mean absolute error (MAE) of the score, and the
correlation of the prediction with humans (Corr).
CMU-MOSEI The CMU-MOSEI dataset Zadeh et al. (2018b) improves over MOSI with a
higher number of utterances, greater variety in samples, speakers, and topics.
The dataset contains 23,453 annotated video segments (utterances), from 5,000
videos, 1,000 distinct speakers and 250 different topics. The metrics are the
same as the CMU-MOSI.
Pretrained | IEMOCAP | CMU-MOSI | CMU-MOSEI
---|---|---|---
$\overline{\text{Acc}}\uparrow$ | $\overline{\text{F1}}\uparrow$ | BA$\uparrow$ | F1$\uparrow$ | BA$\uparrow$ | F1$\uparrow$
BERT-Base | 82.9 | 82.6 | 85.0/86.9 | 84.9/86.9 | 82.9/86.1 | 83.3/86.1
T5-Base | 82.6 | 82.4 | 87.2/89.3 | 87.3/89.3 | 84.5/86.4 | 84.7/86.3
Table 1: Text Encoder. T5-Base performs better than BERT-Base overall.
$\overline{\text{Acc}}$ and $\overline{\text{F1}}$ of IEMOCAP are
the average values of Acc and F1, respectively.
Scale | IEMOCAP | CMU-MOSI | CMU-MOSEI
---|---|---|---
$\overline{\text{Acc}}\uparrow$ | $\overline{\text{F1}}\uparrow$ | BA$\uparrow$ | F1$\uparrow$ | BA$\uparrow$ | F1$\uparrow$
1 | 82.2 | 82.1 | 86.4/88.8 | 86.2/88.8 | 83.3/85.9 | 83.3/85.9
1,2 | 82.5 | 82.2 | 86.3/88.9 | 86.3/88.8 | 83.2/86.2 | 83.6/86.2
1,3 | 81.9 | 81.8 | 86.4/88.9 | 86.4/88.9 | 83.5/86.3 | 83.6/86.2
1,2,3 | 82.0 | 81.9 | 86.7/89.0 | 86.6/89.0 | 84.3/86.4 | 84.0/86.2
1,2,10 | 82.6 | 82.4 | 86.7/89.0 | 86.8/89.1 | 84.0/86.2 | 84.2/86.3
1,2,3,10 | 82.1 | 81.9 | 87.2/89.3 | 87.3/89.3 | 84.5/86.4 | 84.7/86.3
Table 2: Multi-scale Fusion. Fusing features at different scales improves the
performance.
Cluster | IEMOCAP | CMU-MOSI | CMU-MOSEI
---|---|---|---
$\overline{\text{Acc}}\uparrow$ | $\overline{\text{F1}}\uparrow$ | BA$\uparrow$ | F1$\uparrow$ | BA$\uparrow$ | F1$\uparrow$
10 | 82.1 | 82.1 | 86.3/88.9 | 86.2/88.8 | 83.8/86.4 | 84.1/86.3
15 | 82.3 | 82.1 | 86.4/88.7 | 86.4/88.7 | 83.6/86.5 | 83.9/86.4
20 | 82.5 | 82.2 | 86.3/88.6 | 86.2/88.6 | 84.1/86.5 | 84.4/86.5
10,15 | 82.6 | 82.4 | 87.2/89.3 | 87.3/89.3 | 84.5/86.3 | 84.7/86.2
15,20 | 82.3 | 82.1 | 85.4/87.9 | 85.3/87.9 | 84.5/86.4 | 84.7/86.3
10,15,20 | 82.0 | 82.0 | 85.8/88.2 | 85.8/88.2 | 83.3/86.5 | 83.8/86.5
Table 3: Cluster NO. in S3C Loss. The cluster number has an important impact on
the performance.
Following previous works Tsai et al. (2019); Rahman et al. (2020) and the
CMU-MultimodalSDK (https://github.com/A2Zadeh/CMU-MultimodalSDK), the video
feature is extracted via Facet (iMotions, Facial expression analysis, 2017)
and the acoustic feature is extracted using COVAREP Degottex et al. (2014).
The video feature mainly contains 35 facial action units, e.g., facial muscle
movement. The acoustic feature mainly includes Mel-frequency cepstral
coefficients (MFCCs), pitch tracking and voiced/unvoiced segmenting features,
glottal source parameters, peak slope parameters, and maxima dispersion
quotients. The video feature dimension $\hat{d}_{v}$ is 35 for IEMOCAP and
CMU-MOSEI, and 47 for CMU-MOSI. The acoustic feature dimension $\hat{d}_{a}$
is 74 for all three benchmarks. We refer to this version of the feature as
_Facet & COVAREP_.
For IEMOCAP, we also compare the video feature extracted by
OpenFace (https://github.com/TadasBaltrusaitis/OpenFace) and the acoustic
feature extracted by librosa (https://github.com/librosa/librosa) to
investigate the influence of the unimodality representation. Compared with
CMU-MOSI and CMU-MOSEI, each frame of IEMOCAP has two people in the scenario
simultaneously, making the judgment difficult. We separate the two people
according to the layout of the frame and extract their features separately. The
video feature dimension $\hat{d}_{v}$ is 709 and the acoustic feature
dimension $\hat{d}_{a}$ is 33. We refer to this version of the feature as
_OpenFace & Librosa_.
Modality | CMU-MOSI
---|---
BA$\uparrow$ | F1$\uparrow$
T | 86.4/88.6 | 86.4/88.6
V | 53.1/54.1 | 52.9/54.0
A | 54.7/55.0 | 54.1/54.4
T,V | 86.6/88.9 | 86.5/88.9
T,A | 87.0/89.3 | 87.0/89.3
V,A | 54.9/55.4 | 54.9/55.6
T,V,A | 87.2/89.3 | 87.3/89.3
Table 4: Multi-modality Fusion. Combining different unimodalities can improve
model performance. T, V, and A mean text, video, and audio modality,
respectively. For the BA and F1 of CMU-MOSI and CMU-MOSEI, we report two
values: the left side of “/” is calculated following Zadeh et al. (2018c), and
the right side is following Tsai et al. (2019).
Features | IEMOCAP
---|---
$\overline{\text{Acc}}\uparrow$ | $\overline{\text{F1}}\uparrow$
_Facet & COVAREP_ | 82.6 | 82.4
_OpenFace & Librosa_ | 85.1 | 85.0
Table 5: Nonverbal Feature. Stronger nonverbal features can improve
performance.
Methods | Happy | Sad | Angry | Neutral | Average
---|---|---|---|---|---
Acc$\uparrow$ | F1$\uparrow$ | Acc$\uparrow$ | F1$\uparrow$ | Acc$\uparrow$ | F1$\uparrow$ | Acc$\uparrow$ | F1$\uparrow$ | $\overline{\text{Acc}}\uparrow$ | $\overline{\text{F1}}\uparrow$
CTC + EF-LSTM Tsai et al. (2019) | 76.2 | 75.7 | 70.2 | 70.5 | 72.7 | 67.1 | 58.1 | 57.4 | 69.3 | 67.7
LF-LSTM Tsai et al. (2019) | 72.5 | 71.8 | 72.9 | 70.4 | 68.6 | 67.9 | 59.6 | 56.2 | 68.4 | 66.6
CTC + RAVEN Wang et al. (2019) | 77.0 | 76.8 | 67.6 | 65.6 | 65.0 | 64.1 | 62.0 | 59.5 | 67.9 | 66.5
CTC + MCTN Pham et al. (2019) | 80.5 | 77.5 | 72.0 | 71.7 | 64.9 | 65.6 | 49.4 | 49.3 | 66.7 | 66.0
MulT Tsai et al. (2019) | 84.8 | 81.9 | 77.7 | 74.1 | 73.9 | 70.2 | 62.5 | 59.7 | 74.7 | 71.5
PMR Lv et al. (2021) | 86.4 | 83.3 | 78.5 | 75.3 | 75.0 | 71.3 | 63.7 | 60.9 | 75.9 | 72.7
MTAG Yang et al. (2021) | - | 86.0 | - | 79.9 | - | 76.7 | - | 64.1 | - | 76.7
ScaleVLAD | 86.7 | 85.9 | 84.8 | 84.6 | 86.8 | 86.9 | 72.1 | 72.1 | 82.6 | 82.4
\- w/o multi-scale | 86.6 | 85.7 | 84.1 | 84.2 | 86.7 | 86.9 | 71.5 | 71.3 | 82.2 | 82.0
\- w/o S3C loss | 85.1 | 84.9 | 84.3 | 84.4 | 88.5 | 88.3 | 69.4 | 68.5 | 81.8 | 81.5
Table 6: Sentiment prediction on IEMOCAP (unaligned) dataset.
$\overline{\text{Acc}}$ and $\overline{\text{F1}}$ are the average values. CTC
Graves et al. (2006) denotes connectionist temporal classification. The
results of CTC + EF-LSTM, LF-LSTM, CTC + RAVEN and CTC + MCTN are from Tsai et
al. (2019).
### 4.2 Experimental Details
We initialize the text encoder with the T5-Base encoder Raffel et al. (2020) in this
paper due to its advantages after training with an extensive corpus. We also
conduct an ablation study to compare with BERT Base uncased version Devlin et
al. (2019). The rest of the parameters, e.g., Video Transformer, Audio
Transformer, and Fusion module, are initialized randomly. The fusion dimension
$d_{s}$ is set to 128. We train the model with the Adam optimizer Kingma and
Ba (2015) with a linear schedule. The warmup rate is set to 0.1 based on the
total epoch 50. The learning rate is set from {1e-3, 1e-4, 5e-5, 1e-5}. The
Video Transformer and Audio Transformer are set from {4, 6} layers with {128,
768} hidden size. The fusion Transformer in Eq. (9) is set to 2 layers. The
multi-scale parameter $m$ and the number of shared semantic vectors $K$ in Eq.
(4) is set from {1, 2, 3, 10} and {8, 10}, respectively. The cluster $C$ is
set from {10, 15, 20}. Note that these candidate choices are not exhaustive and
cannot be explored with a full grid search, so we set them through empirical
testing on the validation set. The start epoch $s_{s3c}$ for the loss $\mathcal{L}_{s3c}$ is
set to 5, the same as the warmup epochs. All hyper-parameters are set
according to the performance from the validation set. The batch size is 64
across three datasets. All experiments are carried out on 4 NVIDIA Tesla V100
GPUs.
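For reference, the search space just described can be collected into a plain configuration sketch (the key names are ours; the values are those stated above, where lists denote candidate sets):

```python
# Hyper-parameters reported in Sec. 4.2; lists are candidate sets, not final picks.
CONFIG = {
    "fusion_dim_d_s": 128,
    "learning_rate": [1e-3, 1e-4, 5e-5, 1e-5],   # Adam, linear schedule
    "warmup_rate": 0.1,
    "epochs": 50,
    "av_transformer_layers": [4, 6],
    "av_transformer_hidden": [128, 768],
    "fusion_transformer_layers": 2,
    "scales_m": [1, 2, 3, 10],
    "shared_vectors_K": [8, 10],
    "s3c_clusters_C": [10, 15, 20],
    "s3c_start_epoch": 5,
    "batch_size": 64,
}
```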
Methods | BA$\uparrow$ | F1$\uparrow$ | MAE$\downarrow$ | Corr$\uparrow$
---|---|---|---|---
MV-LSTM Rajagopalan et al. (2016) | 73.9/- | 74.0/- | 1.019 | 0.601
TFN Zadeh et al. (2017) | 73.9/- | 73.4/- | 1.040 | 0.633
MARN Zadeh et al. (2018c) | 77.1/- | 77.0/- | 0.968 | 0.625
MFN Zadeh et al. (2018a) | 77.4/- | 77.3/- | 0.965 | 0.632
RMFN Liang et al. (2018) | 78.4/- | 78.0/- | 0.922 | 0.681
RAVEN Wang et al. (2019) | 78.0/- | -/- | 0.915 | 0.691
MulT Tsai et al. (2019) | -/81.1 | -/81.0 | 0.889 | 0.686
ICCN Sun et al. (2020) | -/83.1 | -/83.0 | 0.862 | 0.714
PMR Lv et al. (2021) | -/82.4 | -/82.1 | - | -
FMT Zadeh et al. (2019) | 81.5/83.5 | 81.4/83.5 | 0.837 | 0.744
UniVL Luo et al. (2020) | 83.2/84.6 | 83.3/84.6 | 0.781 | 0.767
MISA (Hazarika et al. 2020) | 81.8/83.4 | 81.7/83.6 | 0.783 | 0.761
MAG-BERT Rahman et al. (2020) | 84.2/86.1 | 84.1/86.0 | 0.712 | 0.796
MAG-XLNet Rahman et al. (2020) | 85.7/87.9 | 85.6/87.9 | 0.675 | 0.821
Self-MM Yu et al. (2021) | 84.0/86.0 | 84.4/86.0 | 0.713 | 0.798
MTAG Yang et al. (2021) | -/82.3 | -/82.1 | 0.866 | 0.722
ScaleVLAD | 87.2/89.3 | 87.3/89.3 | 0.684 | 0.819
\- w/o multi-scale | 86.3/88.6 | 86.2/88.6 | 0.713 | 0.807
\- w/o S3C loss | 86.0/88.0 | 85.9/88.0 | 0.727 | 0.810
Human | 85.7/- | 87.5/- | 0.710 | 0.820
Table 7: Sentiment prediction on CMU-MOSI dataset. For BA and F1, we report
two values: the left side of “/” is calculated following Zadeh et al. (2018c),
and the right side is following Tsai et al. (2019).
### 4.3 Ablation Studies
We conduct comprehensive ablation studies on text encoder, key hyper-
parameters settings, and features in this section.
Text Encoder. In Table 1, we compare the BERT-Base with the T5-Base. The
T5-Base wins on CMU-MOSI and CMU-MOSEI. Besides, it has comparable results on
IEMOCAP. Thus, we use T5-Base as our text encoder in our work. We suppose that
a larger pretrained model, e.g., T5-Large, can achieve better performance but
needs more computational resources.
Multi-scale Fusion. In Table 2, we ablate the scale setting of the ScaleVLAD
module. The table lists a subset of the combinations, and we find {1,2,10} and
{1,2,3,10} can achieve better results than others. It proves that fusing
different granularities of representation can achieve better performance.
Cluster NO. in S3C Loss. In Table 3, we ablate the cluster setting of S3C
loss. The table lists a subset of the combinations, and we find {10,15} and {15,20}
can achieve better results than others. This indicates that an appropriate choice of
cluster numbers keeps the features well clustered and thus improves the results.
Multi-modality Fusion. The results in Table 4 prove that multimodal fusion can
provide more comprehensive information and capture more emotional
characteristics than a single modality.
Nonverbal Feature. In Table 5, different nonverbal features are compared on
IEMOCAP. The results show that more sophisticated features obtain better results.
Further, we suppose that end-to-end training from raw signals, instead of
features extracted by off-the-shelf tools, can bring further improvements, as in
video retrieval Luo et al. (2021).
Methods | BA$\uparrow$ | F1$\uparrow$ | MAE$\downarrow$ | Corr$\uparrow$
---|---|---|---|---
MV-LSTM Rajagopalan et al. (2016) | 76.4/- | 76.4/- | - | -
MFN Zadeh et al. (2018a) | 76.0/- | 76.0/- | - | -
RAVEN Wang et al. (2019) | 79.1/- | 79.5/- | 0.614 | 0.662
PMR Lv et al. (2021) | -/83.1 | -/82.8 | - | -
MAG-BERT Rahman et al. (2020) | -/84.7 | -/84.5 | - | -
MAG-XLNet Rahman et al. (2020) | -/85.6 | -/85.7 | - | -
TFN Zadeh et al. (2017) | -/82.5 | -/82.1 | 0.593 | 0.700
MulT Tsai et al. (2019) | -/81.6 | -/81.6 | 0.591 | 0.694
ICCN Sun et al. (2020) | -/84.2 | -/84.2 | 0.565 | 0.713
MISA (Hazarika et al. 2020) | 83.6/85.5 | 83.8/85.3 | 0.555 | 0.756
Self-MM Yu et al. (2021) | 82.8/85.2 | 82.5/85.3 | 0.530 | 0.765
ScaleVLAD | 84.5/86.4 | 84.7/86.3 | 0.527 | 0.781
\- w/o multi-scale | 83.1/85.8 | 83.3/85.7 | 0.541 | 0.779
\- w/o S3C loss | 82.7/86.1 | 83.1/86.1 | 0.548 | 0.773
Table 8: Sentiment prediction on CMU-MOSEI dataset. For BA and F1, the values
on the both sides of “/” have the same calculations as Table 7.
### 4.4 Comparison to State-of-the-art
We compare ScaleVLAD with state-of-the-art methods on IEMOCAP, CMU-MOSI, and
CMU-MOSEI, and the results are shown in Table 6, Table 7, and Table 8,
respectively. In summary, 1) the proposed ScaleVLAD outperforms all baselines
in all datasets; 2) The ablation on multi-scale fusion and S3C loss proves
their effectiveness in all metrics and datasets. Our BERT-based results shown
in Table 1 can also have advantages over the BERT feature-based models, e.g.,
UniVL Luo et al. (2020), MAG-BERT Rahman et al. (2020), and Self-MM Yu et al.
(2021) in Table 7. The T5-based feature improves the performance on IEMOCAP
by a significant margin, as shown in Table 6, which proves the strong capability
of the pretrained model after training with an extensive corpus in a self-
supervised manner.
### 4.5 Qualitative Analysis
Figure 3: Visualization of the ScaleVLAD w/o and w/ S3C loss in the training
set of MOSI using t-SNE projections van der Maaten and Hinton (2008).
Figure 3 displays the visualization of the fusion features from Eq. (10)
when training with and without the S3C loss. For a clear observation, we regard the
continuous labels as six groups, each of width 1, e.g., $[-3.0,-2.0)$.
Figure 3(b) illustrates a tight clustering and clearer boundary, e.g., the
samples in blue color, when using S3C loss. It proves the effectiveness of the
S3C loss on representation learning. Figure 4 shows the similarity calculated
by Eq. (4). The alignment patterns of text, video, and audio with different
scales are different and are dynamically learned by the model. In this case,
the ‘really really loved’ in the yellow box can be regarded as an entirety that
aligns with the latent shared semantic vectors. Besides, the video and audio
with red boxes, which have bigger scales, i.e., 3 and 10, show consistently
shared vectors (NO. 2 and 6) with the text. Through the shared vectors, the
model can align and fuse the video and audio representation despite their
fuzzy semantic boundaries. We suppose the improvement of ScaleVLAD
benefits from such multi-scale alignment.
Figure 4: Visualization of the similarity weights from ScaleVLAD module (Eq.
(4)). The tokens are processed by T5 tokenization. The y-axis means ten shared
semantic vectors. The x-axis denotes three blocks: text, video, and audio.
Each block has four scales: {1, 2, 3, 10}.
## 5 Conclusion
We proposed a multi-scale fusion method ScaleVLAD and a self-supervised
shifted clustering loss to address unaligned multimodal sentiment analysis in
this paper. The proposed method considers different granularities of
representation through aligning different modalities to trainable latent
semantic vectors. Thus, it can mitigate the fuzzy semantic boundaries of each
modality. The proposed self-supervised shifted clustering loss keeps the
differentiation of the fusion features via momentum-updated
cluster centers. The extensive experiments on three publicly available
datasets demonstrate the effectiveness of the proposed model.
## References
* Akhtar et al. (2019) Akhtar, M. S.; Chauhan, D. S.; Ghosal, D.; Poria, S.; Ekbal, A.; and Bhattacharyya, P. 2019. Multi-task Learning for Multi-modal Emotion Recognition and Sentiment Analysis. In _NAACL-HLT_ , 370–379.
* Arandjelovic et al. (2016) Arandjelovic, R.; Gronát, P.; Torii, A.; Pajdla, T.; and Sivic, J. 2016. NetVLAD: CNN Architecture for Weakly Supervised Place Recognition. In _CVPR_ , 5297–5307.
* Arandjelovic and Zisserman (2013) Arandjelovic, R.; and Zisserman, A. 2013. All About VLAD. In _IEEE Conference on Computer Vision and Pattern Recognition_ , 1578–1585.
* Ba, Kiros, and Hinton (2016) Ba, J. L.; Kiros, J. R.; and Hinton, G. E. 2016. Layer normalization. _arXiv preprint arXiv:1607.06450_.
* Busso et al. (2008) Busso, C.; Bulut, M.; Lee, C.-c.; Kazemzadeh, A.; Mower, E.; Kim, S.; Chang, J. N.; Lee, S.; and Narayanan, S. S. 2008. IEMOCAP: interactive emotional dyadic motion capture database. _Language Resources and Evaluation_.
* Degottex et al. (2014) Degottex, G.; Kane, J.; Drugman, T.; Raitio, T.; and Scherer, S. 2014. COVAREP—A collaborative voice analysis repository for speech technologies. In _ICASSP_ , 960–964.
* Devlin et al. (2019) Devlin, J.; Chang, M.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _NAACL-HLT_ , 4171–4186.
* Dumpala et al. (2019) Dumpala, S. H.; Sheikh, I.; Chakraborty, R.; and Kopparapu, S. K. 2019. Audio-visual fusion for sentiment classification using cross-modal autoencoder. In _NIPS_ , 1–4.
* Graves et al. (2006) Graves, A.; Fernández, S.; Gomez, F. J.; and Schmidhuber, J. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In _ICML_ , volume 148, 369–376.
* Gu et al. (2018) Gu, Y.; Yang, K.; Fu, S.; Chen, S.; Li, X.; and Marsic, I. 2018. Multimodal Affective Analysis Using Hierarchical Attention Strategy with Word-Level Alignment. In _ACL_ , 2225–2235.
* Han et al. (2021) Han, W.; Chen, H.; Gelbukh, A.; Zadeh, A.; Morency, L.-p.; and Poria, S. 2021. Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis. _ICMI_.
* Hausler et al. (2021) Hausler, S.; Garg, S.; Xu, M.; Milford, M.; and Fischer, T. 2021. Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition. In _CVPR_ , 14141–14152.
* Hazarika, Zimmermann, and Poria (2020) Hazarika, D.; Zimmermann, R.; and Poria, S. 2020. MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis. In _ACM MM_ , 1122–1131.
* Hendrycks and Gimpel (2016) Hendrycks, D.; and Gimpel, K. 2016. Gaussian error linear units (gelus). _arXiv preprint arXiv:1606.08415_.
* Jégou et al. (2010) Jégou, H.; Douze, M.; Schmid, C.; and Pérez, P. 2010. Aggregating local descriptors into a compact image representation. In _CVPR_ , 3304–3311.
* Kingma and Ba (2015) Kingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization. _ICLR_.
* Liang et al. (2018) Liang, P. P.; Liu, Z.; Zadeh, A.; and Morency, L. 2018. Multimodal Language Analysis with Recurrent Multistage Fusion. In _EMNLP_ , 150–161.
* Luo et al. (2020) Luo, H.; Ji, L.; Shi, B.; Huang, H.; Duan, N.; Li, T.; Li, J.; Bharti, T.; and Zhou, M. 2020. UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation. _arXiv preprint arXiv:2002.06353_.
* Luo et al. (2021) Luo, H.; Ji, L.; Zhong, M.; Chen, Y.; Lei, W.; Duan, N.; and Li, T. 2021. CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval. _arXiv preprint arXiv:2104.08860_.
* Lv et al. (2021) Lv, F.; Chen, X.; Huang, Y.; Duan, L.; and Lin, G. 2021. Progressive Modality Reinforcement for Human Multimodal Emotion Recognition From Unaligned Multimodal Sequences. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2554–2562.
* Mai, Hu, and Xing (2020) Mai, S.; Hu, H.; and Xing, S. 2020. Modality to modality translation: An adversarial representation learning and graph fusion network for multimodal fusion. In _AAAI_ , volume 34, 164–172.
* Mai et al. (2020) Mai, S.; Xing, S.; He, J.; Zeng, Y.; and Hu, H. 2020. Analyzing unaligned multimodal sequence via graph convolution and graph pooling fusion. _arXiv preprint arXiv:2011.13572_.
* Morency, Mihalcea, and Doshi (2011) Morency, L.; Mihalcea, R.; and Doshi, P. 2011. Towards multimodal sentiment analysis: harvesting opinions from the web. In _ICMI_ , 169–176.
* Peng and Qi (2019) Peng, Y.; and Qi, J. 2019. CM-GANs: Cross-modal generative adversarial networks for common representation learning. _TOMM_ , 15(1): 1–24.
* Pham et al. (2019) Pham, H.; Liang, P. P.; Manzini, T.; Morency, L.; and Póczos, B. 2019. Found in Translation: Learning Robust Joint Representations by Cyclic Translations between Modalities. In _AAAI_ , 6892–6899.
* Poria et al. (2016) Poria, S.; Chaturvedi, I.; Cambria, E.; and Hussain, A. 2016. Convolutional MKL Based Multimodal Emotion Recognition and Sentiment Analysis. In _ICDM_ , 439–448.
* Poria et al. (2020) Poria, S.; Hazarika, D.; Majumder, N.; and Mihalcea, R. 2020. Beneath the tip of the iceberg: Current challenges and new directions in sentiment analysis research. _IEEE Transactions on Affective Computing_.
* Raffel et al. (2020) Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; and Liu, P. J. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. _Journal of Machine Learning Research_ , 21(140): 1–67.
* Rahman et al. (2020) Rahman, W.; Hasan, M. K.; Lee, S.; Zadeh, A. B.; Mao, C.; Morency, L.; and Hoque, M. E. 2020. Integrating Multimodal Information in Large Pretrained Transformers. In _ACL_ , 2359–2369.
* Rajagopalan et al. (2016) Rajagopalan, S. S.; Morency, L.; Baltrusaitis, T.; and Goecke, R. 2016. Extending Long Short-Term Memory for Multi-View Structured Learning. In _ECCV_ , 338–353.
* Siriwardhana et al. (2020) Siriwardhana, S.; Reis, A.; Weerasekera, R.; and Nanayakkara, S. 2020. Jointly Fine-Tuning ”BERT-Like” Self Supervised Models to Improve Multimodal Speech Emotion Recognition. In _Interspeech_ , 3755–3759.
* Sun et al. (2020) Sun, Z.; Sarma, P.; Sethares, W.; and Liang, Y. 2020. Learning relationships between text, audio, and video via deep canonical correlation for multimodal language analysis. In _AAAI_ , volume 34, 8992–8999.
* Tsai et al. (2019) Tsai, Y. H.; Bai, S.; Liang, P. P.; Kolter, J. Z.; Morency, L.; and Salakhutdinov, R. 2019. Multimodal Transformer for Unaligned Multimodal Language Sequences. In _ACL_ , 6558–6569.
* Tsai et al. (2020) Tsai, Y. H.; Ma, M.; Yang, M.; Salakhutdinov, R.; and Morency, L. 2020. Multimodal Routing: Improving Local and Global Interpretability of Multimodal Language Analysis. In _EMNLP_ , 1823–1833.
* van der Maaten and Hinton (2008) van der Maaten, L.; and Hinton, G. 2008. Visualizing Data using t-SNE. _Journal of Machine Learning Research_ , 9(86): 2579–2605.
* Vaswani et al. (2017) Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In _NeurIPS_ , 5998–6008.
* Verma et al. (2019) Verma, S.; Wang, C.; Zhu, L.; and Liu, W. 2019. DeepCU: Integrating both Common and Unique Latent Information for Multimodal Sentiment Analysis. In _IJCAI_ , 3627–3634.
* Verma et al. (2020) Verma, S.; Wang, J.; Ge, Z.; Shen, R.; Jin, F.; Wang, Y.; Chen, F.; and Liu, W. 2020. Deep-HOSeq: Deep Higher Order Sequence Fusion for Multimodal Sentiment Analysis. In _ICDM_ , 561–570.
* Wang, Zhu, and Yang (2021) Wang, X.; Zhu, L.; and Yang, Y. 2021. T2VLAD: Global-Local Sequence Alignment for Text-Video Retrieval. In _CVPR_ , 5079–5088.
* Wang et al. (2019) Wang, Y.; Shen, Y.; Liu, Z.; Liang, P. P.; Zadeh, A.; and Morency, L. 2019. Words Can Shift: Dynamically Adjusting Word Representations Using Nonverbal Behaviors. In _AAAI_ , 7216–7223.
* Wang, Wan, and Wan (2020) Wang, Z.; Wan, Z.; and Wan, X. 2020. TransModality: An End2End Fusion Method with Transformer for Multimodal Sentiment Analysis. In _WWW_ , 2514–2520.
* Yang et al. (2021) Yang, J.; Wang, Y.; Yi, R.; Zhu, Y.; Rehman, A.; Zadeh, A.; Poria, S.; and Morency, L. 2021. MTAG: Modal-Temporal Attention Graph for Unaligned Human Multimodal Language Sequences. In _NAACL-HLT_ , 1009–1021.
* Yang, Xu, and Gao (2020) Yang, K.; Xu, H.; and Gao, K. 2020. CM-BERT: Cross-Modal BERT for Text-Audio Sentiment Analysis. In _Proceedings of the 28th ACM International Conference on Multimedia_ , 521–528.
* Yu et al. (2021) Yu, W.; Xu, H.; Yuan, Z.; and Wu, J. 2021. Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis. In _AAAI_ , 10790–10797.
* Zadeh et al. (2017) Zadeh, A.; Chen, M.; Poria, S.; Cambria, E.; and Morency, L. 2017. Tensor Fusion Network for Multimodal Sentiment Analysis. In _EMNLP_ , 1103–1114.
* Zadeh et al. (2018a) Zadeh, A.; Liang, P. P.; Mazumder, N.; Poria, S.; Cambria, E.; and Morency, L.-P. 2018a. Memory Fusion Network for Multi-view Sequential Learning. _AAAI_.
* Zadeh et al. (2018b) Zadeh, A.; Liang, P. P.; Poria, S.; Cambria, E.; and Morency, L. 2018b. Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph. In _ACL_ , 2236–2246.
* Zadeh et al. (2018c) Zadeh, A.; Liang, P. P.; Poria, S.; Vij, P.; Cambria, E.; and Morency, L.-P. 2018c. Multi-attention recurrent network for human communication comprehension. In _AAAI_.
* Zadeh et al. (2019) Zadeh, A.; Mao, C.; Shi, K.; Zhang, Y.; Liang, P. P.; Poria, S.; and Morency, L. 2019. Factorized Multimodal Transformer for Multimodal Sequential Learning. _arXiv preprint arXiv:1911.09826_.
* Zadeh et al. (2016) Zadeh, A.; Zellers, R.; Pincus, E.; and Morency, L.-P. 2016. Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages. _IEEE Intelligent Systems_ , 31(6): 82–88.
* Zhang et al. (2020) Zhang, C.; Yang, Z.; He, X.; and Deng, L. 2020. Multimodal Intelligence: Representation Learning, Information Fusion, and Applications. _IEEE J. Sel. Top. Signal Process._ , 14(3): 478–493.
# Distilling Missing Modality Knowledge from Ultrasound for Endometriosis
Diagnosis with Magnetic Resonance Images
###### Abstract
Endometriosis is a common chronic gynecological disorder that has many
characteristics, including the pouch of Douglas (POD) obliteration, which can
be diagnosed using Transvaginal gynecological ultrasound (TVUS) scans and
magnetic resonance imaging (MRI). TVUS and MRI are complementary non-invasive
endometriosis diagnosis imaging techniques, but patients are usually not
scanned using both modalities and, it is generally more challenging to detect
POD obliteration from MRI than TVUS. To mitigate this classification
imbalance, we propose in this paper a knowledge distillation training
algorithm to improve the POD obliteration detection from MRI by leveraging the
detection results from unpaired TVUS data. More specifically, our algorithm
pre-trains a teacher model to detect POD obliteration from TVUS data, and it
also pre-trains a student model with 3D masked auto-encoder using a large
amount of unlabelled pelvic 3D MRI volumes. Next, we distill the knowledge
from the teacher TVUS POD obliteration detector to train the student MRI model
by minimizing a regression loss that approximates the output of the student to
the teacher using unpaired TVUS and MRI data. Experimental results on our
endometriosis dataset containing TVUS and MRI data demonstrate the
effectiveness of our method to improve the POD detection accuracy from MRI.
Index Terms— Endometriosis, Knowledge Distillation, Masked Auto-Encoder,
Pouch of Douglas Obliteration
## 1 Introduction
Endometriosis is a gynecological disorder associated with the growth of
endometrial glands and stroma outside the uterine cavity [1, 2]. The clinical
manifestations of endometriosis include infertility and endometriosis-related
pain [3]. As a common chronic gynecological disease, it affects approximately
1.5 million women worldwide [4]. There is currently no known way to prevent or
cure endometriosis, but early diagnosis, intervention and management may slow
or stop the natural disease progression. However, the diagnosis of
endometriosis can take about 7 years on average after the appearance of
initial symptoms [5]. Laparoscopy used to be the diagnostic gold standard [6],
but with the improvement in the quality and availability of imaging modalities
for endometriosis diagnosis, there has been evidence suggesting that accurate
endometriosis diagnosis can be achieved with the analysis of TVUS sequences
and MRI volumes [7, 8].
Many of the symptomatic endometriosis cases can be associated with the pouch
of Douglas (POD) obliteration, which can be diagnosed from complementary TVUS
and MRI data, as shown in Fig. 1. However, in clinical practice, it is
difficult to access clinicians who can diagnose endometriosis with one of
these modalities, not to mention those who are proficient in both modalities.
For TVUS, POD obliteration can be accurately detected manually [9] and
automatically [10] via the ultrasound ‘sliding sign’ [11], which is classified
as positive (i.e. normal) or negative (i.e. abnormal), where a negative
sliding sign is recorded when the anterior rectum or bowel cannot glide
freely over the posterior upper uterine fundus. For MRI, POD can be
classified as obliterated or normal, where the POD obliteration can be
characterized by endometrial plaques and dense adhesions between uterosacral
ligaments, uterine serosa, ovaries, rectum and vagina on T2-weighted and
T1-weighted images [12]. However, differently from TVUS, the manual POD
obliteration detection from MRI can only reach 61.4-71.9% accuracy [13]. There
has been some effort to propose methods that can automatically classify deep
pelvic endometriosis from MRI¹, but we are not aware of methods that can
detect POD obliteration from MRI.

¹Deep infiltrating endometriosis (DIE) can lead to a partial or complete
obliteration of the pouch of Douglas (POD) [14].
Fig. 1: Examples of POD obliteration on MRI and sliding sign on TVUS. (a) and
(b) represent negative and positive POD obliteration sign on sagittal plane
MRI, respectively. (c) and (d) represent positive and negative sliding sign on
TVUS, respectively.
Leveraging the TVUS POD obliteration detection to improve the automated
detection accuracy from MRI using an unpaired training set containing scans
from both modalities is the main goal of this paper. We achieve this goal by
proposing a new knowledge distillation algorithm based on two stages: 1) pre-
training a teacher model to detect POD obliteration from TVUS data, and pre-
training a student model with 3D masked auto-encoder (MAE) using a large
amount of unlabelled pelvic 3D MRI volumes; and 2) knowledge distillation from
the teacher TVUS detector to train the student MRI model by minimizing a
regression loss that approximates the output of the student to the teacher
using unpaired TVUS and MRI data. The testing is realised using only MRI data.
The main innovations of this work are:
* •
To the best of our knowledge, this is the first POD obliteration detection
method that distills knowledge from TVUS to MRI using unpaired data, with the
objective of improving the accuracy of diagnosing endometriosis from MRI;
* •
It is also the first machine learning method that can automatically detect POD
obliteration from MRI data with the goal of diagnosing endometriosis.
Experimental results on a private endometriosis training set containing
unpaired TVUS and MRI data show the effectiveness of our method to increase
the POD detection accuracy from testing MRI volumes.
## 2 Related Work
The automated detection of endometriosis from medical imaging has received
some attention lately. Using ultrasound images, Guerriero et al. [15] compared
the ability of six machine learning algorithms and neural networks for the
diagnosis of endometriosis in the rectosigmoid, where the neural network
achieved the highest classification accuracy of 0.73. Maicas et al. [10]
constructed a deep learning model based on a temporal residual network to
classify POD obliteration from TVUS videos, achieving an AUC of 96.5% and an
accuracy of 88.8%. The methods above use TVUS data alone, but recently, Yang
et al. [16] built a bi-model method with one model for MRI and another for
TVUS, but they do not explore the relationship between MRI and TVUS, like we
propose in this paper. Furthermore, as mentioned above, it is unlikely that
patients will have access to both modalities in clinical practice, which
justifies the need for single-modality approaches that have highly accurate
endometriosis detection.
Knowledge distillation is a general framework to extract the knowledge learned
from a teacher model to a student model by soft-label supervision. The
original purpose of knowledge distillation is to compress deep learning
models, so they can run on resource-constrained devices [17], but in this
paper we focus on the transfer of knowledge from a teacher network trained on
source modalities to a student network that is trained on different target
modalities [18]. In medical image analysis, data from different modalities can
provide rich and complementary information about diseases, so multimodal
knowledge distillation is suitable for scenarios where data or labels for some
modalities are missing during training or testing. Inspired by knowledge
distillation, Dou et al. [19] tackled the task of unpaired multi-modal image
segmentation. Guan et al. [20] leverage additional supervision distilled from
clinical data to improve MRI-based Alzheimer’s disease prediction models.
Cheng et al. [21] utilise both a pixel-level and an image-level distillation
scheme to transfer knowledge from a multimodal-MRI teacher network to a
unimodal segmentation student network. However, most unpaired multi-modal
studies above focus on MRI and CT scans, which is arguably easier than
focusing on MRI and TVUS, which is the case considered in this paper.
## 3 Method
An overview of our proposed TVUS to MRI knowledge distillation model is shown
in Figure 2. It consists of two models: a teacher model pre-trained with a
TVUS dataset, and a student model pre-trained on an MRI dataset and then
trained on an unpaired dataset of TVUS and MRI data by distilling the
knowledge in the representation learned by the teacher model into the student
model.
Formally, let
$\mathcal{D}_{M}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{N}$ denote the
endometriosis MRI dataset, with $N$ T2 SPC images
$\mathbf{x}\in\mathcal{X}\subset\mathbb{R}^{H\times W\times D}$ and
corresponding POD obliteration one-hot labels $\mathbf{y}\in\{0,1\}^{2}$,
where $H$, $W$ and $D$ are the height, width and depth of the MRI, respectively.
For the TVUS dataset, let
$\mathcal{D}_{U}^{s}=\{(\mathbf{x}_{i}^{s},\mathbf{y}_{i}^{s})\}_{i=1}^{M}$
be the video clips dataset, where
$\mathbf{x}^{s}\in\mathcal{X}^{s}\subset\mathbb{R}^{H\times W\times T}$ ($H$,
$W$ and $T$ are height, width and number of frames), and
$\mathbf{y}^{s}\in\{0,1\}^{2}$ denotes the POD obliteration one-hot label.
For the self-supervised pre-training of the MRI POD obliteration detector, we
have $\mathcal{D}_{P}=\{\mathbf{x}_{i}^{p}\}_{i=1}^{K}$, which contains $K$
unlabeled MRI volumes, where the number of unlabeled images is much larger
than the number of labeled images (i.e., $K\gg N$ and $K\gg M$). The teacher model
$f_{\theta_{U}}:\mathcal{X}^{s}\to\Delta$ (with $\Delta\subset[0,1]^{2}$ being
the probability simplex) is trained with dataset $\mathcal{D}^{s}_{U}$, while the
student model $f_{\theta_{M}}:\mathcal{X}\to\Delta$ is pre-trained with
dataset $\mathcal{D}_{P}$ and fine-tuned using $\mathcal{D}_{M}$. The final
knowledge distillation model is initialised with the pre-trained student model
$f_{\theta_{M}}(.)$, which is then trained from $\mathcal{D}_{M}$ and
$\mathcal{D}^{s}_{U}$. The testing to classify POD obliteration uses the
trained $f_{\theta_{M}}(.)$ on the MRI testing images from $\mathcal{D}_{M}$.
Fig. 2: Proposed POD obliteration detector trained by distilling knowledge to
MRI from unpaired TVUS. (a) MRI pre-training with 3D masked auto-encoder, (b)
TVUS pre-training with ResNet(2+1)D, (c) MRI Knowledge Distillation from the
frozen teacher model pretrained on TVUS.
### 3.1 Pre-training
For pre-training the MRI encoder, we explore the self-supervised masked auto-
encoder (MAE) method [22] using the large dataset of un-annotated T2 MRI
images $\mathcal{D}_{P}$. 3D-MAE has an asymmetric encoder-decoder
architecture based on 3D Vision Transformer (3D ViT) [23], as shown in Figure
2. During pre-training, each volume is cropped into $8\times 8\times 8$
patches, then 50% of the patches are randomly masked. The encoder, denoted by
$f_{\theta_{E}}:\mathcal{X}\to\mathcal{E}$, takes visible patches embedded by
a linear projection (of dimension 768) with additional positional embeddings,
and processes via the 3D ViT with 12 Transformer blocks whose output is then
fed into the decoder, denoted by $f_{\theta_{D}}:\mathcal{E}\to\mathcal{X}$,
together with the masked volume tokens, to reconstruct the original volume at
the voxel level. After pre-training, the decoder is discarded and the pre-
trained encoder is applied to extract MRI features for the downstream
classification task. During MRI pre-training, we minimize the mean squared
error (MSE) on the masked patches between the reconstructed and original
volumes in the voxel space, defined as follows:
$\displaystyle\ell_{MAE}(\mathcal{D}_{P};\theta_{E},\theta_{D})=\frac{1}{L\times
K}\sum_{i=1}^{K}\sum_{l=1}^{L}\|\mathbf{x}_{i}(l)-\hat{\mathbf{x}}_{i}(l)\|_{2}^{2},$
(1)
where $K$ is the size of the unlabelled MRI dataset, $L$ indicates the number
of masked patches, $\mathbf{x}_{i}(l)$ and $\hat{\mathbf{x}}_{i}(l)$ represent
the voxel values of the $l^{th}$ masked patch in the original and
reconstructed volumes, respectively, with the reconstructed volume obtained
from $\hat{\mathbf{x}}_{i}=f_{\theta_{D}}(f_{\theta_{E}}(\mathbf{x}_{i}))$.
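To make the objective concrete, here is a minimal PyTorch-style sketch of the masked reconstruction loss in Eq. (1); the `encoder` and `decoder` arguments are hypothetical stand-ins for the 3D ViT modules described above, and the patching/masking logic is deliberately simplified, so this should be read as an illustration under those assumptions rather than the authors' implementation.

```python
# Sketch of the 3D-MAE masked reconstruction objective in Eq. (1).
# `encoder` and `decoder` are placeholder callables for the 3D ViT modules.
import torch

def mae_loss(volume, encoder, decoder, patch=8, mask_ratio=0.5):
    # volume: (B, 1, H, W, D) MRI tensor, with H/W/D divisible by `patch`.
    B = volume.shape[0]
    patches = volume.unfold(2, patch, patch).unfold(3, patch, patch) \
                    .unfold(4, patch, patch)              # (B,1,h,w,d,p,p,p)
    patches = patches.reshape(B, -1, patch ** 3)          # (B, L_all, p^3)
    L_all = patches.shape[1]
    # Randomly mask 50% of the patches, as described in Sec. 3.1.
    n_mask = int(L_all * mask_ratio)
    idx = torch.randperm(L_all)
    masked, visible = idx[:n_mask], idx[n_mask:]
    latent = encoder(patches[:, visible])                 # visible tokens only
    recon = decoder(latent, masked)                       # predict masked tokens
    # MSE over the masked patches only, matching Eq. (1).
    return ((recon - patches[:, masked]) ** 2).mean()
```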
After this self-supervised pre-training, we take the encoder
$f_{\theta_{E}}(.)$, add a linear layer to change the size from 768 (output
size of the MRI encoder) to 512 (output size of the TVUS encoder), and add a
final classification layer with a 2-dimensional output activated by softmax to
form the student network, denoted by $f_{\theta_{M}}(.)$. We fine-tune all
transformer blocks in the encoder and fully connected layer to classify POD
obliteration by minimizing the cross-entropy (CE) loss:
$\displaystyle\ell_{PTM}(\mathcal{D}_{M};\theta_{M})=-\sum_{(\mathbf{x}_{i},\mathbf{y}_{i})\in\mathcal{D}_{M}}\ell_{CE}(\mathbf{y}_{i},f_{\theta_{M}}(\mathbf{x}_{i})).$
(2)
For the TVUS pre-training, we adopted the ResNet (2+1)D model proposed in
[10]. This model contains 18 modules of R(2+1) convolutional layers, with each
convolution being followed by batch normalization. During this TVUS pre-
training, we also minimize the CE loss, as follows:
$\displaystyle\ell_{PTU}(\mathcal{D}^{s}_{U};\theta_{U})=-\sum_{(\mathbf{x}^{s}_{i},\mathbf{y}^{s}_{i})\in\mathcal{D}^{s}_{U}}\ell_{CE}(\mathbf{y}^{s}_{i},f_{\theta_{U}}(\mathbf{x}^{s}_{i})).$
(3)
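As an illustration, a teacher of this form can be assembled from torchvision's off-the-shelf ResNet(2+1)D-18; the Kinetics-400 weights and the 2-way head follow the description in the text, but this snippet is a sketch, not the authors' exact training code.

```python
# Sketch of the TVUS teacher: torchvision's ResNet(2+1)D-18, pre-trained on
# Kinetics-400, with a 2-way head for the sliding-sign classes.
import torch.nn as nn
from torchvision.models.video import r2plus1d_18, R2Plus1D_18_Weights

teacher = r2plus1d_18(weights=R2Plus1D_18_Weights.KINETICS400_V1)
teacher.fc = nn.Linear(teacher.fc.in_features, 2)  # positive vs. negative POD
# Fine-tune with cross-entropy on TVUS clips (Eq. (3)); clips follow
# torchvision's video convention of shape (B, 3, T, H, W).
```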
### 3.2 Knowledge Distillation
In the knowledge distillation (KD) stage, we consider the pre-trained TVUS
model $f_{\theta_{U}}(.)$ as the teacher model, and the pre-trained MRI model
$f_{\theta_{M}}(.)$ as the student model. Given that the pre-trained TVUS
model tends to produce superior classification accuracy than the pre-trained
MRI model, we aim to fine-tune the MRI model to match the predictions produced
by the TVUS model. The goal is to use this knowledge distillation procedure to
improve the classification accuracy of the MRI model, which uses only the MRI
volume during testing. However, recall that we do not have matched TVUS and
MRI data for this knowledge distillation procedure, so we match the data based
only on their classification labels, i.e., an MRI sample with positive POD
obliteration is matched with a random TVUS sample with positive POD
obliteration, and similarly for the negative POD obliteration. Then, the KD
training minimises the following loss
$\displaystyle\ell_{KD}(\mathcal{D}_{M},\mathcal{D}_{U}^{s};\theta_{M})=\sum_{\begin{subarray}{c}(\mathbf{x}_{i},\mathbf{y}_{i})\in\mathcal{D}_{M}\\ (\mathbf{x}^{s}_{j},\mathbf{y}^{s}_{j})\in\mathcal{D}_{U}^{s}\\ \mathbf{y}_{i}=\mathbf{y}^{s}_{j}\end{subarray}}\|f_{\theta_{U}}(\mathbf{x}^{s}_{j})-f_{\theta_{M}}(\mathbf{x}_{i})\|_{1}.$
(4)
The loss in (4) is added to the $\ell_{PTM}(.)$ loss from (2) to form the
final loss that encourages the model to pull the TVUS and the MRI outputs
closer to distill the TVUS classification information to the student network,
as follows
$\displaystyle\ell(\mathcal{D}_{M},\mathcal{D}_{U}^{s};\theta_{M})=$
$\displaystyle\alpha^{epoch}\times\ell_{KD}(\mathcal{D}_{M},\mathcal{D}_{U}^{s};\theta_{M})+$
(5)
$\displaystyle(1-\alpha^{epoch})\times\ell_{PTM}(\mathcal{D}_{M};\theta_{M}),$
which is used to estimate the optimal value of $\theta_{M}$, where
$\alpha^{epoch}$ is a parameter to dynamically balance the contributions of
the two loss terms during training.
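A minimal sketch of one training step combining Eqs. (4) and (5) is shown below, assuming hypothetical `teacher` and `student` models and a label-indexed pool of TVUS clips for the random matching described above; names and shapes are illustrative.

```python
# Sketch of the label-matched distillation step in Eqs. (4)-(5).
import random
import torch
import torch.nn.functional as F

def kd_step(mri_x, mri_y, tvus_by_label, teacher, student, alpha, epoch):
    """mri_x: (B, 1, H, W, D) volumes; mri_y: (B,) class indices (0/1);
    tvus_by_label: dict mapping label -> list of TVUS clip tensors."""
    # Match each MRI volume with a random TVUS clip of the same POD label.
    tvus_x = torch.stack([random.choice(tvus_by_label[int(y)]) for y in mri_y])
    with torch.no_grad():                      # the teacher stays frozen
        t_prob = teacher(tvus_x).softmax(dim=1)
    logits = student(mri_x)
    s_prob = logits.softmax(dim=1)
    l_kd = (t_prob - s_prob).abs().sum(dim=1).mean()  # L1 term of Eq. (4)
    l_ce = F.cross_entropy(logits, mri_y)             # fine-tuning loss, Eq. (2)
    w = alpha ** epoch                         # KD weight decays over epochs
    return w * l_kd + (1.0 - w) * l_ce         # combined loss, Eq. (5)
```

With $\alpha<1$, the $\alpha^{epoch}$ schedule gradually shifts the weight from the distillation term toward the cross-entropy term as training progresses.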
## 4 Experiments
### 4.1 Dataset
Our private dataset contains: an MRI endometriosis dataset, a TVUS
endometriosis dataset, and a female pelvic MRI dataset for self-supervised
pre-training.
The MRI endometriosis dataset contains 88 T2 SPACE MRI scans from women aged
18-45, including 19 POD obliteration cases. These examinations were performed
in several clinical sites in Australia. The scans contain the whole pelvic
area, but we focus on the region around the uterus that can display the POD
obliteration. We performed 3D contrast limited adaptive histogram equalization
(CLAHE) to improve the local contrast and enhance the definitions of edges in
each region of a sequence. For the experiments, we use stratified random
sampling to split the dataset into 5 folds, each with 17-18 subjects, with 4
folds used for training and 1 for testing, which are rotated to produce a
cross-validated result.
The TVUS endometriosis dataset has 749 video clips of the ‘sliding sign’,
including 103 negative (POD obliterated) cases. We follow the data preparation
and pre-processing protocol as well as model parameter settings proposed in
[10] to pre-train the TVUS model. For the knowledge distillation phase, we
divide the dataset into negative and positive groups, then use stratified
random sampling to split each group into 5 folds and use 4 folds in each group
as the training set.
The female pelvic MRI dataset contains 8,984 volumes. In the context of
endometriosis research, we accept all scans where the patient is aged between
18 and 45, the physical examination site is the pelvis, and the sequence
description contains ‘T2’. It is worth noting that most of these volumes were
scanned in different settings, so they may contain signs of other diseases and
the scanned area may or may not overlap with the diagnostic area of
endometriosis. We re-sampled all scans with SimpleITK to an output spacing of
$1\times 1\times 1$ mm, and filtered out volumes whose number of slices in any
dimension is less than 65.
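For illustration, a minimal SimpleITK sketch of this re-sampling and filtering step is given below; file handling and the CLAHE step are omitted, and the exact parameters used here may differ from the authors' pipeline.

```python
# Sketch: re-sample a volume to 1x1x1 mm spacing and drop short volumes.
import SimpleITK as sitk

def resample_to_1mm(path, min_slices=65):
    """Return the re-sampled volume, or None if any output dimension has
    fewer than `min_slices` slices (such volumes are filtered out)."""
    img = sitk.ReadImage(path)
    # New size preserves the physical extent at 1 mm isotropic spacing.
    new_size = [int(round(sz * sp))
                for sz, sp in zip(img.GetSize(), img.GetSpacing())]
    out = sitk.Resample(img, new_size, sitk.Transform(), sitk.sitkLinear,
                        img.GetOrigin(), (1.0, 1.0, 1.0), img.GetDirection(),
                        0, img.GetPixelID())
    return out if min(out.GetSize()) >= min_slices else None
```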
### 4.2 Training and Evaluation
In the pre-training phase, the 3D MAE network is trained for 200 epochs on the
female pelvic MRI dataset using a batch size of 3 with AdamW optimizer and a
learning rate of 1e-3. Then we fine-tuned the pre-trained checkpoint saved
from epoch 195 for 25 epochs on the endometriosis MRI dataset with a “5-fold
cross validation” strategy. The ResNet(2+1)D network was pre-trained on the
Kinetics-400 dataset then fine-tuned for 30 epochs on the TVUS endometriosis
dataset using a batch size of 30 with Adam optimizer and a learning rate of
1e-5. For knowledge distillation, we loaded the weights of the $10^{th}$
fine-tuned checkpoint from each fold as the feature extractor of the student
MRI model, together with the pre-trained TVUS feature extractor for the
teacher. The knowledge distillation
network is trained for 10 epochs on the endometriosis MRI dataset using a
batch size of 7 with AdamW optimizer and a learning rate of 1e-3 with the same
“5-fold cross validation” strategy. We set the hyper-parameter $\alpha=0.85$
based on the cross-validation results. We evaluate our method by computing
Area Under the Receiver Operating Characteristic Curve (ROC AUC) for each of
the five folds and report their mean and standard deviation.
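A minimal sketch of this fold-wise evaluation, assuming the per-fold prediction scores have already been collected:

```python
# Sketch: ROC AUC on each test fold, then mean and std across the 5 folds.
import numpy as np
from sklearn.metrics import roc_auc_score

def cross_validated_auc(fold_labels, fold_scores):
    """fold_labels/fold_scores: per-fold lists of binary ground-truth labels
    and predicted POD-obliteration probabilities."""
    aucs = [roc_auc_score(y, s) for y, s in zip(fold_labels, fold_scores)]
    return float(np.mean(aucs)), float(np.std(aucs))
```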
### 4.3 Results
The classification results of our proposed method are shown in Table 1. The
ResNet(2+1)D model shows an outstanding classification AUC of 96.9% on TVUS
data. The small number of training samples limits the generalisation of the 3D
ViT, which classifies POD obliteration from MRI volumes with an AUC of 65.0%,
but the MAE pre-training (PT) partially mitigates the issue, improving the AUC
to 87.2%. With knowledge distillation (KD), we observe that training a 3D ViT
from scratch on such a small dataset is still challenging, with an AUC of
66.7%. Also, the KD performance of the 3D ViT with MAE pre-training (PT)
reaches an AUC of 77.2%, which is worse than without KD (AUC of 87.2%); this
may be due to the excessive domain shift between the pre-training dataset and
the TVUS dataset. By fine-tuning (FT) the model from MAE pre-training, the
model improves from an AUC of 87.2% (without KD and FT) to an AUC of 90.6%
(with KD and FT).
Table 1: POD obliteration classification results.
Method | Training Modality | Testing Modality | AUC (mean$\pm$stddev)
---|---|---|---
ResNet(2+1)D | TVUS | TVUS | 0.969$\pm$0.012
3D ViT | MRI | MRI | 0.650$\pm$0.102
3D ViT: MAE PT | MRI | MRI | 0.872$\pm$0.094
3D ViT: KD | MRI, TVUS | MRI | 0.667$\pm$0.107
3D ViT: MAE PT + KD | MRI, TVUS | MRI | 0.772$\pm$0.087
3D ViT: MAE PT + KD + FT | MRI, TVUS | MRI | 0.906$\pm$0.099
## 5 Conclusion
In this paper, we proposed a two-stage algorithm to distill the knowledge from
a TVUS classifier to an MRI classifier, thereby improving the POD obliteration
classification accuracy of the MRI classifier. Through the MAE pre-training,
knowledge distillation and fine-tuning, we are able to significantly reduce
the distance between the two domains and accomplish a promising knowledge
distillation from TVUS to MRI. The efficacy and superiority of our proposed
approach are demonstrated by experimental results on our endometriosis
datasets. In the future, we will introduce a missing modality deep learning
approach and expand our proposed method to perform weakly-supervised lesion
segmentation, thereby improving the interpretability of the model, so it can
be widely applied in future clinical trials.
## 6 Compliance with ethical standards
This study was performed in line with the principles of the Declaration of
Helsinki. Approval was granted by Human Research Ethics Committee (HREC) of
University of Adelaide (Date 01-03-2020 / No. H-2020-051) and the Southern
Adelaide Clinical Human Research Ethics Committee (SAC HREC) (Date
01-11-2021/No. 111.20).
## 7 Acknowledgments
This work received funding from the Australian Government through the Medical
Research Futures Fund: Primary Health Care Research Data Infrastructure Grant
2020 and from Endometriosis Australia.
## References
* [1] Antonio Simone Lagana et al., “Evaluation of m1 and m2 macrophages in ovarian endometriomas from women affected by endometriosis at different stages of the disease,” Gynecological Endocrinology, vol. 36, no. 5, pp. 441–444, 2020.
* [2] KM Moss, J Doust, H Homer, IJ Rowlands, R Hockey, and GD Mishra, “Delayed diagnosis of endometriosis disadvantages women in art: a retrospective population linked data study,” Human Reproduction, vol. 36, no. 12, pp. 3074–3082, 2021.
* [3] Paolo Vercellini et al., “Endometriosis: pathogenesis and treatment,” Nature Reviews Endocrinology, vol. 10, no. 5, pp. 261–275, 2014.
* [4] Krina T. Zondervan et al., “Endometriosis,” New England Journal of Medicine, vol. 382, no. 13, pp. 1244–1256, 2020, PMID: 32212520.
* [5] Australian Institute of Health and Welfare, “Endometriosis in australia: prevalence and hospitalisations,” 2019, [Online; accessed 18-02-2022].
* [6] Christian M Becker et al., “Eshre guideline: endometriosis,” Human reproduction open, vol. 2022, no. 2, pp. hoac009, 2022.
* [7] Alison Deslandes et al., “Current status of transvaginal ultrasound accuracy in the diagnosis of deep infiltrating endometriosis before surgery: a systematic review of the literature,” Journal of Ultrasound in Medicine, vol. 39, no. 8, pp. 1477–1490, 2020.
* [8] T Indrielle-Kelly et al., “Diagnostic accuracy of ultrasound and mri in the mapping of deep pelvic endometriosis using the international deep endometriosis analysis (idea) consensus,” BioMed research international, vol. 2020, 2020.
* [9] Le Chi Chiu et al., “Predicting pouch of douglas obliteration using ultrasound and laparoscopic video sets: an interobserver and diagnostic accuracy study,” Journal of Ultrasound in Medicine, vol. 38, no. 12, pp. 3155–3161, 2019.
* [10] Gabriel Maicas et al., “Deep learning to diagnose pouch of douglas obliteration with ultrasound sliding sign,” Reproduction and Fertility, vol. 2, no. 4, pp. 236–243, 2021.
* [11] Stefano Guerriero et al., “Systematic approach to sonographic evaluation of the pelvis in women with suspected endometriosis, including terms, definitions and measurements: a consensus opinion from the international deep endometriosis analysis (idea) group,” Ultrasound in Obstetrics & Gynecology, vol. 48, no. 3, pp. 318–332, 2016.
* [12] Anitha L Thalluri et al., “Mri findings in deep infiltrating endometriosis: a pictorial essay,” Journal of Medical Imaging and Radiation Oncology, vol. 61, no. 6, pp. 767–773, 2017.
* [13] Milliam L Kataoka et al., “Posterior cul-de-sac obliteration associated with endometriosis: Mr imaging evaluation,” Radiology, vol. 234, no. 3, pp. 815–823, 2005.
* [14] Kristina Arion et al., “Prediction of pouch of douglas obliteration: point-of-care ultrasound versus pelvic examination,” Journal of Minimally Invasive Gynecology, vol. 26, no. 5, pp. 928–934, 2019.
* [15] Stefano Guerriero et al., “Artificial intelligence (ai) in the detection of rectosigmoid deep endometriosis,” European Journal of Obstetrics & Gynecology and Reproductive Biology, vol. 261, pp. 29–33, 2021.
* [16] Minmin Yang et al., “Diagnostic efficacy of ultrasound combined with magnetic resonance imaging in diagnosis of deep pelvic endometriosis under deep learning,” The Journal of Supercomputing, vol. 77, no. 7, pp. 7598–7619, 2021.
* [17] Geoffrey Hinton et al., “Distilling the knowledge in a neural network,” arXiv preprint arXiv:1503.02531, vol. 2, no. 7, 2015.
* [18] Nikolaos Passalis et al., “Learning deep representations with probabilistic knowledge transfer,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 268–284.
* [19] Qi Dou et al., “Unpaired multi-modal segmentation via knowledge distillation,” IEEE transactions on medical imaging, vol. 39, no. 7, pp. 2415–2425, 2020.
* [20] Hao Guan et al., “Mri-based alzheimer’s disease prediction via distilling the knowledge in multi-modal data,” NeuroImage, vol. 244, pp. 118586, 2021.
* [21] Cheng Chen et al., “Learning with privileged multimodal knowledge for unimodal segmentation,” IEEE Transactions on Medical Imaging, vol. 41, no. 3, pp. 621–632, 2021.
* [22] Kaiming He et al., “Masked autoencoders are scalable vision learners,” arXiv preprint arXiv:2111.06377, 2021.
* [23] Alexey Dosovitskiy et al., “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv preprint arXiv:2010.11929, 2020.
# MiddleNet: A Unified, High-Performance NFV and Middlebox Framework with eBPF
and DPDK
Shixiong Qi, Ziteng Zeng, Leslie Monis, and K. K. Ramakrishnan,
Dept. of Computer Science and Engineering, University of California, Riverside
###### Abstract
Traditional network resident functions (e.g., firewalls, network address
translation) and middleboxes (caches, load balancers) have moved from purpose-
built appliances to software-based components. However, L2/L3 network
functions (NFs) are being implemented on Network Function Virtualization (NFV)
platforms that extensively exploit kernel-bypass technology. They often use
DPDK for zero-copy delivery and high performance. On the other hand, L4/L7
middleboxes, which have a greater emphasis on functionality, take advantage of
a full-fledged kernel-based system.
L2/L3 NFs and L4/L7 middleboxes continue to be handled by distinct platforms
on different nodes. This paper proposes MiddleNet, a unified
network-resident function framework that supports L2/L3 NFs and L4/L7
middleboxes. MiddleNet supports function chains that are essential in both NFV
and middlebox environments. MiddleNet uses the Data Plane Development Kit
(DPDK) library for zero-copy packet delivery without interrupt-based
processing, to enable the ‘bump-in-the-wire’ L2/L3 processing performance
required of NFV. To support L4/L7 middlebox functionality, MiddleNet utilizes
a consolidated, kernel-based protocol stack for processing, avoiding a
dedicated protocol stack for each function. MiddleNet fully exploits the
event-driven capabilities of the extended Berkeley Packet Filter (eBPF) and
seamlessly integrates it with shared memory for high-performance communication
in L4/L7 middlebox function chains. The overheads for MiddleNet in L4/L7 are
strictly load-proportional, without needing the dedicated CPU cores of DPDK-
based approaches. MiddleNet supports flow-dependent packet processing by
leveraging Single Root I/O Virtualization (SR-IOV) to dynamically select the
packet processing needed (Layers 2 - 7). Our experimental results show that
MiddleNet achieves high performance in such a unified environment.¹

¹This paper is an extended version of our previously published IEEE NetSoft
2022 [1] paper and IEEE TNSM paper [2]. In this extended version, we
additionally perform overhead auditing of our shared memory-based design
(§III) to clearly show the reason why shared memory communication can
fundamentally improve the data plane performance for a chain of L2/L3 NFs or
L4/L7 middleboxes (details in Appendix-B).
###### Index Terms:
Middleboxes, NFV, DPDK, eBPF, service function chains.
## I Introduction
Networks have increasingly become software-based, using virtualization to
exploit common off-the-shelf (COTS) hardware to provide a wide array of
network-resident functions, thus avoiding having to deploy functions in
purpose-built hardware appliances. This has broadened the networking
capabilities provided by both the network and cloud platforms, offloading the
burden from end-hosts that may have limited power and compute capability
(e.g., cell phones or IoT devices). With software-based network-resident
functions, network services can be more agile. They can be deployed more
dynamically on end-systems that house multiple services.
But there continues to be a dichotomy in how various network-resident services
are supported on software-based platforms. Layer 2 and Layer 3 (L2/L3)
functions that seek to be transparent and act as a bump-in-the-wire are
currently being supported with Network Function Virtualization (NFV)
technologies. These focus on performance and are built with network functions
(NFs) running in userspace supported by kernel-bypass technology such as Data
Plane Development Kit (DPDK [3]). Primarily providing switching
(demultiplexing and forwarding), they typically do not provide a full network
protocol stack, and are exemplified by approaches such as OpenNetVM [4] and
OpenvSwitch (OVS) [5].
On the other hand, middleboxes operating at Layer 4 through Layer 7 (L4/L7)
require the full network protocol stack’s processing (e.g., for application
layer functionality such as HTTP proxies), in addition to more complex
stateful functionality in userspace, including storage and other I/O
operations (e.g., caching). Thus, flexibility and functionality are prominent
concerns, with performance being a second (albeit important) consideration. A
robust and proven kernel-based protocol stack is often desirable [6], as
specialized userspace protocol stack implementations often do not support all
possible corner cases.
These distinct requirements for NFV and middlebox designs typically result in
the need for different systems. However, networks require both types of
functionality to be supported concurrently for different flows, and in many
cases, even for the same flow. This calls for supporting them in a unified
framework so that they can be deployed on COTS end-systems dynamically and
flexibly.
Both NFV and middleboxes often have to build complex packet processing
pipelines using function chaining. This helps ease development through the use
of microservices, which can be independently scaled as needed to improve
resource utilization. But the excessive overhead (e.g., interrupts, data
copies, context switches, protocol processing, serialization/deserialization)
incurred within the data plane of current service function chains can be a
deterrent. Even worse, the data plane overhead in current function chaining
solutions increases with the function chain size, which significantly reduces
their data transfer performance (see §II-C).
Using shared memory communication can help us achieve a more streamlined,
efficient data plane design. Shared memory communication supports zero-copy
packet delivery between network-resident functions, by having a shareable
backend buffer to store packet data, avoiding unnecessary data plane overheads
within a function chain.
Another dichotomy is in how the key building block for shared memory
communication is designed. This relates to how packets are moved between the
NIC and the shared memory buffer, and how packet descriptors are passed
between functions in a function chain. The first option is to exploit the
event-driven networking subsystem provided by the extended Berkeley Packet
Filter (eBPF [7]). eBPF offers extensive toolkits (e.g., AF_XDP [8], SKMSG
[9]) in support of zero-copy packet delivery. Importantly, eBPF incurs
negligible overhead in the absence of events (such as packet arrivals to a
given function or even to the platform), making it an excellent fit for
supporting a rich set of diverse, efficient network-resident functions. An
eBPF program does have size restrictions and must run to completion, requiring
careful design [10]. A second alternative approach is to build the shared
memory communication framework around polling-based DPDK, as has been used in
many high-performance virtualized software-based networking environments,
e.g., OpenNetVM [4]. They provide zero-copy delivery into the userspace. Using
poll-mode drivers (PMD) [11] and RTE RING [12], they avoid the deleterious
effects of interrupt-based processing of network I/O (e.g., receive-livelocks)
under overload [13], making it possible to support complex function chaining
at line rate. Nevertheless, dedicated polling continuously consumes
significant CPU resources, and thus is not load-proportional. While this may
be reasonable in an NFV-only dedicated system, it is challenging for systems
that host many services, including middlebox functions.
In this work, we develop MiddleNet, a unified, high-performance NFV and
middlebox framework. We take a somewhat unconventional approach by examining
an event-driven eBPF design, and separately a polling-based DPDK design for
supporting NFV and middlebox function chains with shared memory, and
evaluating each design approach. We then arrive at the design of MiddleNet as
the most suitable framework for a unified platform supporting both NFV and
middlebox functionality. MiddleNet uses Single Root I/O Virtualization (SR-IOV
[14]) to enable their co-existence.
MiddleNet makes the following contributions:
(1) We qualitatively discuss the usability of different data plane models for
supporting NFV and middlebox capabilities. We carefully audit their data plane
overheads and quantitatively assess the performance of each approach. We also
look at how current data plane models support function chaining (§II).
(2) We then design the shared memory communication of MiddleNet for both the
NFV and middlebox functionality (§III). We (qualitatively and quantitatively)
examine the suitability of eBPF and DPDK in supporting different aspects of
shared memory communication, including NIC-shared memory packet exchange and
zero-copy I/O (i.e., packet descriptor delivery) within the function chain
(§IV and §V). This helps us understand the strengths and limitations of each
option (DPDK’s PMD, polling/interrupt-based AF_XDP in eBPF, DPDK’s RTE RING,
eBPF’s SKMSG), and the root causes. MiddleNet chooses to leverage the
strengths of polling-based DPDK for L2/L3 NFV, and takes advantage of event-
driven eBPF for L4/L7 middleboxes, to strike the balance between performance
and resource efficiency.
(3) For achieving a unified NFV/middlebox framework, we evaluate different
alternatives: a hardware-based approach (via SR-IOV [14]) and a software-based
approach (via virtual device interfaces, e.g., virtio/vhost [15]). We assess
the performance with SR-IOV and recommend its use for the unified design
because of its minimum data plane overhead (§VI).
(4) MiddleNet supports function-chain-level isolation to address security
concerns with shared memory communication. We create a private memory pool for
each function chain to prevent unauthorized access from untrusted functions
outside the chain. MiddleNet further enhances traffic isolation by applying
packet descriptor filtering between functions (§VII).
## II Background and Motivation
We examine a number of virtualization frameworks and the networking support
that can be provided for supporting network resident functions. We audit the
data plane overheads for these different combinations of virtualization
frameworks and networking approaches, and discuss their applicability for
achieving a high-performance, lightweight, and unified NFV/middlebox
framework.
### II-A Basic elements in supporting network resident functions
Figure 1: Distinct data plane models for NFV and Middlebox, with different
vSwitch options, virtual device interfaces, and virtualization frameworks: (a)
kernel-based vSwitch + virtio-user/vhost-net & TUN/TAP + VM; (b) kernel-based
vSwitch + virtio-user/vhost-net & TUN/TAP + container; (c) kernel-based
vSwitch + virtio-net/vhost-net & TUN/TAP + VM; (d) kernel-based vSwitch +
veth + container; (e) userspace vSwitch + virtio-user/vhost-user + VM; (f)
userspace vSwitch + virtio-user/vhost-user + container; (g) userspace vSwitch
+ virtio-net/vhost-user + VM; (h) userspace vSwitch + virtio-user/vhost-net &
TUN/TAP + veth + container. We assess (f) as the best solution for L2/L3 NFs
and (d) as the best solution for L4/L7 middleboxes (§II-C).
We identify four key elements for building NFV and middlebox environments,
including virtualization frameworks, the virtual switch (vSwitch), the
protocol stack, and the virtual device interface. Virtualization helps to
multiplex compute resources, and can greatly improve resource efficiency, and
reduce costs, while also providing isolation for building L2/L3 NFs and L4/L7
middleboxes. A vSwitch is typically used to provide L2 forwarding/L3 routing.
The network protocol stack, often implemented in the OS kernel, provides
protocol layer processing (e.g., TCP/IP). It is necessary for L4/L7
middleboxes, but is less important for L2/L3 NFs. Virtual device interfaces
are used to connect the virtualized function and its protocol stack (for L4/L7
middleboxes only) to the vSwitch, thus building a complete NF and middlebox
environment. There are several alternatives for each of these elements, which
we describe below.
Virtualization frameworks: Widely-adopted virtualization frameworks include
virtual machines (VMs) and containers. VMs often depend on hardware-level
virtualization supported by the Virtual Machine Monitor (VMM) or the
hypervisor in the host that multiplexes the physical resources across multiple
VMs. Each VM has its own OS layer (i.e., guest OS). Unlike a VM, a container
is built utilizing OS-level virtualization. Containers share a host’s OS to
access the underlying physical resources, instead of depending on the
hypervisor. The host’s OS utilizes Linux namespaces and cgroups to provide
isolation between containers and restrict their access to system resources.
Sharing the host’s OS makes containers more lightweight. They can be
provisioned more quickly compared to VMs [16].
Virtual switch (vSwitch): vSwitches can be broadly classified into kernel-
based approaches (e.g., in-kernel Open vSwitch and Linux bridge) and userspace
approaches that bypass the kernel (e.g., OVS-DPDK [17], and OVS-AF_XDP [18]).
The kernel-based vSwitch runs within the host’s OS kernel, using an in-kernel
NIC driver to exchange packets with the physical NIC. The userspace vSwitch
runs in the userspace of the host, using a userspace NIC driver to exchange
packets with the physical NIC.
The userspace vSwitch relies on kernel-bypass to exchange packets with the
NIC. We consider two distinct, but widely adopted, kernel-bypass
architectures: DPDK [3] and AF_XDP [8]. They both support zero-copy packet I/O
between the NIC and userspace. However, they are fundamentally different in
the way they are driven to execute. DPDK’s kernel-bypass depends only on
polling while the kernel-bypass in AF_XDP can be either event-driven (i.e.,
triggered by each arriving packet) or polling. DPDK implements a Poll Mode
Driver (PMD), polling the NIC for received packets and packet transmission
completions. This facilitates high-performance packet I/O between the NIC and
the userspace functions. However, this leads to high CPU usage even if there
is no incoming packet. An additional, specialized kernel driver (e.g., UIO
driver or VFIO driver) is required to block interrupt signals from the NIC,
which helps the userspace PMD to work properly through active polling.
However, this requires the NIC to be dedicated to DPDK. The exclusivity of
DPDK leads to compatibility problems between DPDK and the kernel stack; e.g.,
the kernel stack now cannot access the NIC once DPDK has bound its kernel
driver to the NIC. One solution is to use Single Root I/O Virtualization (SR-
IOV [14]) to create multiple virtual Ethernet interfaces (these are called
Virtual Functions, or VFs), and to dedicate DPDK’s kernel driver to one of the
VFs without disturbing the kernel stack (see §VI).
AF_XDP [8] is another kernel-bypass alternative to DPDK. The event-driven
mode of AF_XDP makes it strictly load-proportional: event-driven AF_XDP
executes only when a new packet arrives, and thus consumes no CPU cycles when
there are no packets. This fundamentally makes event-driven AF_XDP more resource-efficient
under light load compared to DPDK. The polling mode AF_XDP acts in a similar
manner as DPDK. However, the polling mode of AF_XDP still introduces interrupt
overhead due to the execution of the XDP program at the NIC driver, which
results in lower performance compared to DPDK. We evaluate both polling-based
and event-driven AF_XDP in §IV-D. In addition, AF_XDP (either polling or
event-driven mode) does not require a specialized kernel driver to enable
kernel-bypass, and thus it can work seamlessly with the kernel stack to
support protocol processing for an L4/L7 middlebox. DPDK on the other hand
requires SR-IOV support, in addition, to share the physical NIC with the
kernel stack. Compared to a purely kernel-based solution (i.e., using the
kernel stack for both L2/L3 NFs and L4/L7 middleboxes), AF_XDP achieves
comparatively higher performance with zero-copy packet I/O between the NIC and
userspace functions.
Network protocol stack: The protocol stack can be kernel-based or could be in
userspace, using kernel-bypass for passing packets. The kernel-based network
protocol stack (e.g., Linux kernel protocol stack) provides a full-function,
robust, and proven solution for protocol processing, often with better
usability than userspace protocol stack solutions such as Microboxes [19] and
mTCP [20], which provide limited support (e.g., only TCP), thus limiting their
usage. We primarily focus on the kernel-based protocol stack in this work.
Virtual device interfaces: Typical virtual device interfaces include TUN/TAP,
veth pairs, and virtio/vhost devices. TUN/TAP operates as a data pipe (TUN for
sending over L3 Tunnels, TAP for receiving L2 frames) that connects the kernel
stack with userspace applications. TUN/TAP can work with virtio/vhost virtual
device interfaces to connect VMs or containers to the kernel-based vSwitch
(Fig. 1 (a) - (c)). The virtio/vhost interfaces execute as virtual NICs
(vNICs) for VMs and containers. The virtio interface is in the VM/container,
while the vhost interface is in the host as the backend of the virtio device.
It is important to note that each has a userspace variant (virtio-user, vhost-
user) as well as a kernel-based variant (virtio-net, vhost-net). The virtio
variants and vhost variants can be freely combined, e.g., virtio-user can work
with vhost-net (Fig. 1 (a), (b)); virtio-net can work with vhost-user (Fig. 1
(g)), etc. because they all follow the vhost protocol [15], having a
consistent messaging APIs to work with different variants. Veth pairs are
often used in container networking [21], working as data pipes between the
container’s network namespace and the host’s network namespace. Unlike
virtio/vhost, the veth pair works only in the kernel. It does not have a
userspace variant, so it does not work directly with the userspace vSwitch
(see Fig. 1 (h)).
### II-B Usability analysis of data plane models
Fig. 1 shows different variants for data plane connectivity for L2/L3 NFs and
L4/L7 middleboxes by combining different options for virtualization, vSwitch,
and virtual device interfaces. L2/L3 NFs do not require protocol layer
processing, since they only offer an L2/L3 switch’s forwarding capability, as
in a vSwitch. L4/L7 middleboxes additionally require protocol stack
processing. We first qualitatively evaluate the usability of different data
plane models for L2/L3 NFs and L4/L7 middleboxes in Fig. 1, depending on
whether the data plane model has a protocol stack or not.
The data plane models in Fig. 1 (a), (b), (e), (f) do not involve protocol
layer processing and are suitable for L2/L3 NFs. The data plane models in Fig.
1 (c), (d), (g), (h) are all equipped with the kernel protocol stack and are
suitable for L4/L7 middleboxes. Although the data plane models for an L4/L7
middlebox (Fig. 1 (c), (d), (g), (h)) can also be used for an L2/L3 NF, the
protocol processing adds unnecessary overhead, as it is not required.
In addition, we can extend the L2/L3 NF data plane models to support L4/L7
middleboxes by adding a userspace protocol stack; however, this approach is
not favored by us for two reasons: (1) we want to use a full-function kernel
protocol stack, and (2) having a separate userspace protocol stack in each
middlebox function again adds to the memory footprint.
The use of the virtio-user interface helps an L2/L3 NF data plane to bypass
protocol layer processing, acting as the vNIC driver in a VM/container’s
userspace, directly interacting with the userspace function. Depending on the
vSwitch being used, the virtio-user device cooperates with different backend
vhost devices to create a direct data pipe between the userspace function and
the vSwitch (either kernel-based or in userspace) to exchange raw packets: the
vhost-net device is used to connect with the kernel-based vSwitch through the
TUN/TAP (Fig. 1 (a), (b)); the vhost-user device is used to connect with the
userspace vSwitch (Fig. 1 (e), (f)).
When using containers to virtualize L4/L7 middleboxes (Fig. 1 (d), (h)), the
key element to enable the network protocol stack is the veth pair. The
container-side veth connects to the protocol stack in the container’s network
namespace (implemented in the host’s kernel), for necessary protocol
processing.² Each veth pair is assigned
a unique IP address, which is used for L2/L3 forwarding across different
containers’ network namespaces. Applications within a container namespace share
the same IP address and are differentiated by L4 port numbers. The host-side
veth connects to the host’s network namespace, so it can seamlessly work with the
kernel-based vSwitch (d). However, if we have to work with a userspace vSwitch
(h), the packet needs to be injected from userspace into the container’s
network namespace for protocol processing. To achieve this goal, the userspace
vSwitch is connected to the kernel via the virtio-user/vhost-net and TUN/TAP
device interfaces. The TUN/TAP interface is configured with a point-to-point
link to the veth pair, which helps avoid duplicate L2/L3 processing in the host’s
network namespace.

²Note: there is no L2/L3 processing in the container’s network namespace,
because the container shares the same kernel as the host. As the L2/L3
processing is performed by the kernel-based vSwitch in the host’s network
namespace, packets enter the protocol stack directly after being passed to
the container’s network namespace; thus, no duplicate L2/L3 processing is
performed inside the container.
When using VMs to virtualize L4/L7 middlebox functions, the virtio-net device
interface is used to utilize the protocol stack in VM’s kernel. The virtio-net
device operates as the in-kernel vNIC driver, interacting with the userspace
function through VM’s kernel stack. Just like the virtio-user device
interface, the virtio-net interface can work with either a kernel-based
vSwitch (Fig. 1 (c)) or a userspace vSwitch (Fig. 1 (g)) by cooperating with
specific backend vhost device interface.
### II-C Auditing Overheads of data plane models
The data plane models in Fig. 1, with their selection of elements (i.e.,
vSwitch, virtualization framework, virtual device interfaces) in constructing
the data plane, may result in different data plane performance. Through a
careful auditing of the overhead, we seek to identify the optimal data plane
model for L2/L3 NFs and L4/L7 middleboxes. For this, we focus on the data
plane overhead with a function chain.
For both L2/L3 NFs and L4/L7 middleboxes, function chains are mediated by the
vSwitch to route packets between functions to be processed in the order they
are configured in the chain. Additional protocol processing is required for
the L4/L7 middlebox case. We only show the auditing results when using DPDK as
the kernel-bypass architecture for the userspace vSwitch in this auditing.
Figure 2: A generalized data pipeline for an NFV/Middlebox chain. Note: we
only show the client-to-server datapath; protocol processing is only available
for L4/L7 middlebox.
We use the abstract function chain setup of two functions (Fig. 2) to
represent the data pipeline for all cases. We assume functions in the same
chain are placed on the same node so that there is no cross-node data
transfer. The client sends packets to the backend server through an
intermediate node (node-2 in Fig. 2) that implements the function chain. (①) A
packet first arrives at the physical NIC and is then passed to the vSwitch.
(②) The vSwitch routes the packet to the first function in the chain (Fn-1).
(③) After the first function completes processing the packet, the packet is
sent back to the vSwitch. (④) The vSwitch routes the packet to the next
function in the chain (Fn-2). (⑤) The second function processes the packet and
returns it to the vSwitch. (⑥) The vSwitch then routes the packet out through
the NIC to the backend server.
TABLE I: Overhead auditing of L2/L3 NF data plane models. Steps ① and ⑥ are
outside the chain (NIC-vSwitch); steps ② to ⑤ are within the chain
(Fn-vSwitch-Fn).

Overhead | vSwitch | Model | ① | ⑥ | ② | ③ | ④ | ⑤ | Total
---|---|---|---|---|---|---|---|---|---
# of copies | kernel-based | (a) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of copies | kernel-based | (b) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of copies | userspace | (e) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of copies | userspace | (f) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of interrupts | kernel-based | (a) | 1 | 0 | 1 | 1 | 1 | 1 | 5
# of interrupts | kernel-based | (b) | 1 | 0 | 1 | 1 | 1 | 1 | 5
# of interrupts | userspace | (e) | 0 | 0 | 0 | 0 | 0 | 0 | 0
# of interrupts | userspace | (f) | 0 | 0 | 0 | 0 | 0 | 0 | 0
# of context switches | kernel-based | (a) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of context switches | kernel-based | (b) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of context switches | userspace | (e) | 0 | 0 | 0 | 0 | 0 | 0 | 0
# of context switches | userspace | (f) | 0 | 0 | 0 | 0 | 0 | 0 | 0

(a) kernel-based vSwitch + virtio-user/vhost-net & TUN/TAP + VM;
(b) kernel-based vSwitch + virtio-user/vhost-net & TUN/TAP + container;
(e) userspace vSwitch + virtio-user/vhost-user + VM;
(f) userspace vSwitch + virtio-user/vhost-user + container.
Note: Context switches may happen when two userspace processes (e.g., the NF
and the vSwitch) are placed on the same CPU core. However, in the NFV
scenario, the NFs and the vSwitch are typically each given a dedicated CPU
core, owing to the need for high performance; we assume dedicated CPU cores in
this overhead auditing. virtio-user uses DPDK’s PMD to send/receive packets,
so there are no interrupts involved.
Table I shows the overhead auditing for the L2/L3 scenarios (Fig. 1 (a), (b),
(e), (f)). Table II shows the overhead auditing for the L4/L7 scenarios (Fig.
1 (c), (d), (g), (h)). We do not include the switching/routing overhead (i.e.,
cycles spent on forwarding/routing table lookup), as it is a necessary
operation to exchange packets between functions (either L2/L3 or L4/L7) and
cannot be avoided. We have several key takeaways below drawn from our auditing
of the packet flow.
Takeaway#1: Using the userspace vSwitch in conjunction with virtio-user/vhost-
user ((e) and (f)) saves a significant amount of overhead, and is preferred
for L2/L3 NFs.
The userspace vSwitch does not show a significant overhead difference compared
to the kernel-based vSwitch when moving the packet between the vSwitch and the
NIC (① and ⑥, see “Outside the chain” column in Table I). Compared to the
userspace vSwitch (using DPDK for kernel-bypass), the kernel-based vSwitch
incurs one additional interrupt when receiving packets from the NIC.
The advantage of the userspace vSwitch is the ability to work with userspace
virtual device interfaces, i.e., virtio-user/vhost-user. Working in
conjunction with virtio-user/vhost-user, the userspace vSwitch does not incur
an interrupt or context switch when passing packets within the function chain
(② to ⑤). On the other hand, the kernel-based vSwitch has to exchange the
packet with the function in userspace through virtio-user/vhost-net & TUN/TAP
((a) and (b)), which incurs an interrupt and a context switch each time the
packet crosses the kernel-userspace boundary (② to ⑤), a less desirable
option. However, none of them avoid the data copies incurred when transmitting
the packet within the chain (details below in Takeaway#3).
TABLE II: Overhead auditing of L4/L7 middlebox data plane models. Steps ① and
⑥ are outside the chain (NIC-vSwitch); steps ② to ⑤ are within the chain
(Fn-vSwitch-Fn).

Overhead | vSwitch | Model | ① | ⑥ | ② | ③ | ④ | ⑤ | Total
---|---|---|---|---|---|---|---|---|---
# of copies | kernel-based | (c) | 0 | 0 | 2 | 2 | 2 | 2 | 8
# of copies | kernel-based | (d) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of copies | userspace | (g) | 0 | 0 | 2 | 2 | 2 | 2 | 8
# of copies | userspace | (h) | 0 | 0 | 2 | 2 | 2 | 2 | 8
# of interrupts | kernel-based | (c) | 1 | 0 | 2 | 2 | 2 | 2 | 9
# of interrupts | kernel-based | (d) | 1 | 0 | 2 | 2 | 2 | 2 | 9
# of interrupts | userspace | (g) | 0 | 0 | 2 | 2 | 2 | 2 | 8
# of interrupts | userspace | (h) | 0 | 0 | 3 | 3 | 3 | 3 | 12
# of context switches | kernel-based | (c) | 0 | 0 | 2 | 2 | 2 | 2 | 8
# of context switches | kernel-based | (d) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of context switches | userspace | (g) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of context switches | userspace | (h) | 0 | 0 | 2 | 2 | 2 | 2 | 8
# of protocol processing tasks | kernel-based | (c) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of protocol processing tasks | kernel-based | (d) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of protocol processing tasks | userspace | (g) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of protocol processing tasks | userspace | (h) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of serialization/deserialization (L7) | kernel-based | (c) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of serialization/deserialization (L7) | kernel-based | (d) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of serialization/deserialization (L7) | userspace | (g) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of serialization/deserialization (L7) | userspace | (h) | 0 | 0 | 1 | 1 | 1 | 1 | 4
# of L2/L3 processing tasks | kernel-based | (c) | 0 | 1 | 2 | 1 | 2 | 1 | 7
# of L2/L3 processing tasks | kernel-based | (d) | 0 | 1 | 1 | 0 | 1 | 0 | 3
# of L2/L3 processing tasks | userspace | (g) | 0 | 1 | 2 | 1 | 2 | 1 | 7
# of L2/L3 processing tasks | userspace | (h) | 0 | 1 | 1 | 0 | 1 | 0 | 3

(c) kernel-based vSwitch + virtio-net/vhost-net & TUN/TAP + VM;
(d) kernel-based vSwitch + veth + container;
(g) userspace vSwitch + virtio-net/vhost-user + VM;
(h) userspace vSwitch + virtio-user/vhost-net & TUN/TAP + veth + container.
Takeaway#2: Using the kernel-based vSwitch in conjunction with veth and
container (d) incurs the least overhead for L4/L7 middleboxes.
Just as with the L2/L3 NF use case, the use of different vSwitches in L4/L7
middlebox case to exchange packets between the NIC and middlebox (① and ⑥)
does not have a significant difference. However, as L4/L7 middleboxes require
kernel protocol processing, the kernel-based vSwitch has an advantage, as it
can work seamlessly with the protocol stack in the host’s kernel. Since
containers share the host’s kernel, it is ideal to follow the data plane model
(d) and connect the kernel-based vSwitch with the container via the veth pair.
As shown in Table II, each time the packet is exchanged between the
middlebox and the vSwitch (② to ⑤), (d) saves 1 data copy and 1 context
switch compared to (c), which also adopts the kernel-based vSwitch; (c) uses
virtio-net/vhost-net & TUN/TAP to connect the VM and the host’s kernel,
incurring 1 data copy and 1 context switch each time.
The use of a userspace vSwitch along with the virtio-user/vhost-net interface
(h) is also less preferable than (d). (h) with the userspace vSwitch differs
from (d) (which uses the kernel-based vSwitch) because packets have to be
looped back from the vSwitch in userspace to the kernel for protocol
processing. This incurs one more data copy, interrupt, and context switch
compared to (d), as seen in Table II, resulting in poorer performance.
Using the userspace vSwitch and the vhost-user interface to work with a VM (g)
is slightly better, as both the userspace vSwitch and the vhost-user interface
work in the userspace, thus eliminating one context switch compared to using
the virtio-net/vhost-net & TUN/TAP in (c). However, (g) still incurs an
additional data copy because of the kernel-userspace boundary crossing within
the VM. Moreover, as the packet has to traverse the entire VM’s kernel stack
in (c) and (g), there is unnecessary, duplicate L2/L3 processing involved in
the VM’s kernel in addition to the L2/L3 processing performed by the vSwitch
in the host. This duplicate processing is avoided in (d) with the use of
containers, which reuses the OS kernel from the host and avoids duplicate
processing.
Takeaway#3: Service function chains are heavyweight for both L2/L3 NFs and
L4/L7 middleboxes.
As shown in Table I and II, the major source of data plane overhead comes
within the function chain (② to ⑤). Even with the best combination we
identified for L2/L3 NFs (f) and L4/L7 middleboxes (d), there are excessive
data copies within a service function chain with existing solutions. With the
best L2/L3 solution (f), one data copy is incurred each time a packet is
passed from the vSwitch to the NF (②, ④), and vice versa (③, ⑤). This also
holds true for the best L4/L7 solution (d). The situation is worse for the
L4/L7 case, as there are many additional overheads, including interrupts,
context switches, protocol processing tasks, and serialization/deserialization
tasks, that are incurred for the communication within the chain (② to ⑤).
Discussion: Containers share the host’s kernel protocol stack, resulting in a
smaller memory footprint than having a dedicated kernel stack in each VM. This
becomes important with scale, as the number of NFs/middleboxes grows. The
smaller footprint contributes to faster startup of containerized functions
[16]. Containers also avoid duplicate L2/L3 processing for L4/L7 middleboxes
(see Takeaway#2). For L2/L3 NFs, there is no significant difference in the
data plane cost between VMs and containers (compare (e) and (f) in Table I).
While we choose to work with containers, the design of MiddleNet is also
generally applicable to a VM-based environment.
Data plane models (f) “userspace vSwitch + virtio-user/vhost-user +
container” and (d) “kernel-based vSwitch + veth + container” are the best
solutions for L2/L3 NFs and L4/L7 middleboxes, respectively, as they introduce
the least overhead and are the most lightweight among the alternatives.
However, even these optimal data plane models are too heavyweight for
constructing function chains of L2/L3 NFs and L4/L7 middleboxes. In fact,
the overhead in the current service function chain design grows with the size
of the function chain, which can result in significant performance
loss. Unnecessary packet processing overhead is introduced in the data
transfer between vSwitch and functions, as well as expensive protocol
processing (for L4/L7 only). All these factors make it difficult for us to
achieve a high-performance NFV/middlebox framework.
## III Shared memory communication in MiddleNet
Figure 3: A generalized shared memory communication data pipeline for a
function chain in MiddleNet. Note: we only show the client-to-server datapath.
Shared memory communication can alleviate the data movement overheads of the
data plane within a function chain by keeping the data in a userspace memory
pool to be shared by different functions in the chain. Fig. 3 shows a
generalized data pipeline using shared memory communication in MiddleNet. It
shows a chain with two functions (either L2/L3 NFs or L4/L7 middlebox
functions), both on the same host. Steps ① and ⑥ move the packets between the
NIC and shared memory, while ② to ⑤ pass packet descriptors between functions
to achieve zero-copy packet delivery within the function chain. An
intermediate component (running in userspace) is used to provide
forwarding/routing support within the function chain, which is similar to the
vSwitch in Fig. 1. We call this intermediate component the “NF manager” in the
L2/L3 scenario, or “message broker” in the L4/L7 scenario. The NF
manager/message broker is responsible for moving packets between the NIC and
the shared memory in steps ① and ⑥.
Three key elements enable shared memory communication for a function chain:
(1) NIC-shared memory packet exchange. An incoming packet is moved into the
userspace shared memory prior to processing by the function chain (either
L2/L3 NF chain or L4/L7 middlebox chain); (2) Zero-copy I/O within the
function chain. Instead of moving the data from one function to another,
shared memory communication achieves zero-copy I/O within the function chain,
by passing a pointer, which is the packet descriptor, to the data in shared
memory. This substantially reduces overhead; (3) Shared memory support. A
memory pool is initialized and mapped to each function in the chain before it
can be accessed. There are multiple alternatives, with significant
differences, for the “NIC-shared memory packet exchange” and “zero-copy I/O
within the function chain” operations, which we now describe qualitatively.
#### III-1 NIC-shared memory packet exchange
There are two distinct options: one approach bypasses the kernel, the other is
a kernel-based approach. The kernel-bypass approach DMA’s the packet to shared
memory without involving the kernel stack. Exploiting kernel-bypass avoids
heavyweight kernel processing and is better suited for building L2/L3 NFs as a
‘bump-in-the-wire’. As discussed in §II-A, the kernel-bypass approach can be
further classified into a polling-based kernel-bypass (i.e., with DPDK’s PMD)
and event-driven kernel-bypass (i.e., using AF_XDP). The NF manager (Fig. 3)
works with these kernel-bypass alternatives to move packets between the NIC
and shared memory (details in §IV-B and §IV-C).
The kernel-based approach, on the other hand, uses the kernel stack to pass
packets between the NIC and the message broker in the userspace. The message
broker exchanges packets with the kernel stack via the Linux socket interface.
It then moves packets to shared memory for zero-copy processing within the
function chain. This inevitably introduces overheads (e.g., copy, context
switch, etc) when a packet crosses the kernel-userspace boundary. It also
incurs the overhead of kernel protocol layer processing, which is only useful
for L4/L7 middleboxes. The kernel-based approach is ideal for L4/L7
middleboxes, as it provides necessary processing using a full-function kernel
protocol stack.
#### III-2 Zero-copy I/O for function chaining
Zero-copy I/O for function chaining can also be broadly implemented using
either: (1) polling-based zero-copy I/O, e.g., DPDK’s RTE RING [12]; or (2)
event-driven zero-copy I/O, e.g., eBPF’s SKMSG [9]. It is important to
understand the difference between these two options and their impact on
performance.
eBPF’s SKMSG is a socket-related eBPF program type, “BPF_PROG_TYPE_SK_MSG”
[9]. SKMSG is attached to the socket of the function during its creation. It
processes packets sent/received on the attached socket to/from the kernel. The
execution of SKMSG is triggered by the arrival of a packet, which is strictly
event-driven and is thus load-proportional. Working in conjunction with the
eBPF socket map (BPF_MAP_TYPE_SOCKMAP [22]), which provides necessary routing
information, SKMSG can deliver packet descriptors between functions. The other
option, DPDK’s RTE RING, is implemented as a circular FIFO queue used for
buffering packet descriptors. Each function is assigned a dedicated Receive
(RX) and Transmit (TX) ring pair to pass packet descriptors using polling.
(Note: polling the RTE RING does not require the simultaneous use of DPDK’s
PMD; it can simply be implemented as a while loop.) A function polls its own
RX ring (using rte_ring_dequeue()) to receive packet descriptors and enqueues
packet descriptors to its TX ring (using rte_ring_enqueue()) for transmission,
as sketched below. On the other side, a centralized routing component polls
the TX ring of each function and moves queued packet descriptors to the RX
ring of the destination function, based on its internal routing table.
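To make the polling-based option concrete, the following is a minimal sketch
of such an NF loop using DPDK's ring API; the ring handles and the
process_packet() helper are hypothetical placeholders for what the NF manager
would hand to the NF.

```c
#include <rte_ring.h>
#include <rte_mbuf.h>

/* Hypothetical NF-specific packet processing logic. */
extern void process_packet(struct rte_mbuf *pkt);

/* Minimal sketch of an NF's polling loop over its dedicated RX/TX
 * ring pair (assumed to be set up by the NF manager). */
static void nf_main_loop(struct rte_ring *rx_ring, struct rte_ring *tx_ring)
{
    void *desc;

    for (;;) {
        /* Poll the RX ring; rte_ring_dequeue() returns 0 on success. */
        if (rte_ring_dequeue(rx_ring, &desc) < 0)
            continue;                        /* ring empty: keep polling */

        /* The descriptor points to the packet in shared memory. */
        process_packet((struct rte_mbuf *)desc);

        /* Hand the descriptor back via the TX ring; the centralized
         * routing component polls this ring and routes it onward. */
        while (rte_ring_enqueue(tx_ring, desc) < 0)
            ;                                /* TX ring full: retry */
    }
}
```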
#### III-3 Shared memory support
MiddleNet uses DPDK’s multi-process support [23] to construct shared memory
between functions within a service chain. We utilize a shared memory manager
(running as a DPDK primary process, which has the privileges to initialize
memory pools in huge pages) to manage shared memory pools. During the
initialization stage of MiddleNet, the shared memory manager creates a private
memory pool, with a unique “shared data file prefix” specified to isolate it
from other shared memory pools on the same node. The “shared data file prefix”
is used by DPDK’s EAL to create hugepage files (i.e., actual file system
objects for DPDK’s memory pools) in the Linux file system. A DPDK process is
allowed to access a hugepage file only if the same file prefix was specified
during its startup. Additional details are in Appendix A, including shared
memory support for VM-based functions. We leverage this feature to build a
security domain for MiddleNet that enhances the security of using shared
memory for communication between NFs (see §VII).
Each of the key elements described above is independent of the others; e.g.,
using DPDK’s multi-process support does not require DPDK’s PMD. Thus, using
DPDK’s multi-process support to manage memory sharing between different
functions incurs no polling overhead.
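As a minimal sketch of this setup (the pool name, file prefix, and sizes below
are illustrative, not MiddleNet's actual values): the shared memory manager
runs as the DPDK primary process and creates the pool, while each function
runs as a secondary process started with the same --file-prefix and looks the
pool up by name.

```c
#include <rte_eal.h>
#include <rte_mempool.h>
#include <rte_mbuf.h>
#include <rte_lcore.h>

/* Sketch: the shared memory manager as the DPDK primary process.
 * Launched as: ./manager --proc-type=primary --file-prefix=middlenet */
int manager_init(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)  /* parses --proc-type/--file-prefix */
        return -1;

    /* Create a packet buffer pool in huge pages; only processes started
     * with the same file prefix can map and look it up. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create(
        "MIDDLENET_POOL", 8192, 256, 0,
        RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    return pool ? 0 : -1;
}

/* Sketch: a function as a DPDK secondary process.
 * Launched as: ./nf --proc-type=secondary --file-prefix=middlenet */
int nf_init(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return -1;

    /* Map the pool created by the primary; this fails if the file
     * prefix does not match the one used at pool creation. */
    struct rte_mempool *pool = rte_mempool_lookup("MIDDLENET_POOL");
    return pool ? 0 : -1;
}
```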
Overhead Auditing & Discussion: We perform overhead auditing of the function
chain using shared memory communication. We consider two distinct approaches
for both the L2/L3 NFs and L4/L7 middleboxes use cases: the polling-based
approach (using DPDK’s PMD and RTE RING), and the event-driven approach (using
eBPF’s AF_XDP and SKMSG).
To conserve space, we have summarized the main takeaways here. A detailed
discussion can be found in Appendix B. The overhead auditing clearly shows the
advantage of using shared memory communication, to reduce the overhead in
almost every dimension (e.g., data copy, interrupt, context switch, etc).
Thus, we factor it into our NFV/middlebox framework, MiddleNet. It is clear
that L2/L3 MiddleNet should consider kernel-bypass NIC-shared memory packet
exchange to facilitate high performance. L4/L7 MiddleNet adopts kernel-based
NIC-shared memory packet exchange to provide the needed protocol processing.
We assess the trade-off between a polling-based solution and an event-driven
solution by implementing both alternatives and evaluating their performance,
to help us decide which to use for MiddleNet.
## IV Design of MiddleNet: L2/L3 NFV
We discuss the eBPF-based and DPDK-based alternatives for L2/L3 NFV support,
given the performance requirement of operating at line rate and being capable
of supporting service function chains. Since they operate at L2/L3, there is
less emphasis on having a full-function protocol stack.
### IV-A Overview
NIC-userspace kernel-bypass: MiddleNet takes full advantage of zero-copy
packet delivery and kernel-bypass to move packets between the NIC and the
userspace shared memory, so as to minimize overheads, reduce resource
consumption, and achieve full line-rate L2/L3 packet processing (§III-1). We
consider two kernel-bypass alternatives: polling-based DPDK’s PMD and event-
driven AF_XDP (§II-A).
Zero-copy I/O for function chaining: We evaluate two alternatives for L2/L3
MiddleNet, the polling-based approach and the event-driven approach. The
polling-based alternative adopts DPDK’s PMD for NIC-to-userspace delivery
using kernel-bypass and DPDK’s RTE RING for function chaining. The event-
driven alternative adopts AF_XDP for NIC-to-userspace kernel-bypass and SKMSG
for function chains. This helps us evaluate the trade-off between performance
and resource efficiency when using a polling-based design or an event-driven
design to achieve a ‘bump-in-the-wire’ L2/L3 NFV environment. Both of them use
DPDK’s multi-process support to manage the shared memory of L2/L3 MiddleNet
(§III-3). We implement these two alternatives based on OpenNetVM’s design [4],
which is similar in principle to the design described in Fig. 3, §III.
### IV-B The DPDK-based L2/L3 NFV design
The DPDK-based approach can be ‘expensive’ in requiring dedicated CPU cores
for polling. In addition to the NF manager, which dedicates one CPU core to
the PMD, each NF in the L2/L3 function chain uses up one CPU core to poll its
RTE RING. This can be wasteful if incoming traffic is low. Somewhat more
complex NFV support, such as NFVnice [24], can be used to mitigate these
overheads by sharing a CPU core across multiple NFs.
Figure 4: Packet processing flow for DPDK-based L2/L3 NFV: RX and TX
Fig. 4 depicts the packet flow of DPDK-based L2/L3 NFs. In the RX path, PMD
provides a packet descriptor for the NIC (①) to deliver the packet into the
shared memory via DMA (②). The NF manager examines the packet, and moves the
packet descriptor into the RX ring of the target NF (③), based on the routing
table. The target NF obtains the packet descriptor by polling its RX ring and
uses it to access the packet in shared memory (④). After the NF’s packet
processing is complete (⑤), the NF writes the descriptor to its TX ring (⑥).
On the other side, the NF manager continuously polls the NF’s TX ring and sets
up the packet transmission based on the descriptor in the ring (⑦). The PMD
then completes the processing once the packet is transmitted, cleaning up the
transmit descriptor (⑧). The PMD polls the NIC’s RX and TX descriptor rings
for reception and transmission, and the NFs use polling to receive and
transmit packet descriptors. A sketch of the NF manager’s RX path is shown
below.
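The following is a minimal sketch of the RX side of this flow (steps ① to ③);
lookup_target_nf() and the per-NF ring array are hypothetical stand-ins for
the NF manager's routing table and ring bookkeeping.

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

#define BURST_SIZE 32  /* illustrative burst size */

/* Hypothetical routing-table lookup returning the target NF's index. */
extern int lookup_target_nf(struct rte_mbuf *pkt);

/* Sketch of the NF manager's RX loop (steps ① to ③): the PMD polls the
 * NIC and each received descriptor is pushed to the target NF's RX
 * ring; packets were already DMAed into shared memory by the NIC. */
static void manager_rx_loop(uint16_t port_id, struct rte_ring **nf_rx_ring)
{
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        uint16_t n = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);

        for (uint16_t i = 0; i < n; i++) {
            int nf = lookup_target_nf(bufs[i]);
            if (rte_ring_enqueue(nf_rx_ring[nf], bufs[i]) < 0)
                rte_pktmbuf_free(bufs[i]);  /* drop if the NF ring is full */
        }
    }
}
```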
Service function chains: The NF manager utilizes destination information in
the packet descriptor to support routing within an NF chain for the DPDK-based
approach. The routing table in the NF manager is used to resolve the
destination NF’s ID, thus avoiding the need for each NF to maintain a private
routing table. After the NF manager gets a packet descriptor from the TX ring
of an NF, it parses the packet descriptor to determine the destination NF. It
then pushes the packet descriptor to the RX ring of the next NF to transfer ownership
of the shared memory frame (as pointed to by the descriptor). Ownership for
write is based on the NF currently owning a descriptor to that frame in shared
memory, thus ensuring a single writer and obviating the need for locks. Using
the NF manager for ‘centralized’ routing mitigates contention when multiple
NFs may forward to a downstream NF.
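A sketch of this chain-routing loop, under the assumption that the destination
NF ID can be read from the descriptor via a hypothetical descriptor_dest_nf()
accessor:

```c
#include <rte_ring.h>

/* Hypothetical accessor for the destination NF ID in the descriptor. */
extern int descriptor_dest_nf(void *desc);

/* Sketch of chain routing in the NF manager: poll each NF's TX ring
 * and move descriptors to the next NF's RX ring; enqueueing transfers
 * ownership of the shared memory frame to the destination NF. */
static void manager_chain_loop(int num_nfs, struct rte_ring **nf_tx_ring,
                               struct rte_ring **nf_rx_ring)
{
    void *desc;

    for (;;) {
        for (int i = 0; i < num_nfs; i++) {
            if (rte_ring_dequeue(nf_tx_ring[i], &desc) < 0)
                continue;  /* this NF has nothing to send */

            int dst = descriptor_dest_nf(desc);
            while (rte_ring_enqueue(nf_rx_ring[dst], desc) < 0)
                ;          /* destination RX ring full: retry */
        }
    }
}
```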
### IV-C The eBPF-based L2/L3 NFV design
The NF manager in the eBPF-based L2/L3 MiddleNet opens a dedicated AF_XDP
socket (i.e., XSK [8]) that serves as an interface to interact with the kernel
to handle RX and TX for AF_XDP-based packet delivery. Each XSK is assigned a
set of RX and TX rings to pass packet descriptors containing pointers to
packets in shared memory. All XSKs share a set of ‘Completion’ and ‘Fill’
rings, owned by the kernel and used to transfer ownership of the shared memory
frame between the kernel and userspace NFs. AF_XDP depends on interrupts
triggered by the event-driven execution of the XDP program attached to the NIC
driver (Fig. 5). This interrupt notifies the packet processing component in
userspace. However, these interrupts have to be managed with care to avoid
poor overload behavior when subjected to high packet rates [13].
Fig. 5 depicts the zero-copy packet flow based on AF_XDP. An XDP program works
in the kernel space with the NIC driver to handle packet reception (and
transmission). The NIC is provided a descriptor (①) pointing to an empty frame
in shared memory. Upon reception, the packet is DMAed into shared memory (②),
and a receive interrupt triggers an XDP_REDIRECT, which moves the packet
descriptor to the RX ring of the NF manager (③) before notifying it. In the
interrupt service routine, the kernel notifies the NF manager about updates in
its RX ring, which the NF manager then accesses via its XSK (④). The interrupt
service routine is completed once the NF manager fetches the packet descriptor
from the RX ring. The NF manager invokes the corresponding NF (⑤) and waits
for NFs to complete processing.
Figure 5: Packet processing flow for eBPF-based L2/L3 NFV: RX and TX Figure 6:
Function chaining in MiddleNet: eBPF-based approach
After the NF completes packet processing, the NF manager is invoked to
transmit the packet out of the node (❶). The descriptor is populated in the TX
ring (❷). The system call by the NF manager (typically sendmsg()) notifies the
kernel about the TX event (❸). The kernel then transmits the packet based on
the descriptor given in the TX ring (❹). If the packet is successfully
transmitted, the kernel pushes the descriptor back to the ‘Completion’ ring
(❺) to inform the NF manager that the frame can now be reused for the
subsequent transmission. The NF manager fetches the packet descriptor from the
‘Completion’ ring (❻) and moves it to the ‘Fill’ ring for incoming packets
(❼).
We implement the NF manager with three threads to manage the different rings
without locks. We use one thread to handle the read of the RX ring (④) and
another one to handle the transmit to the TX ring (❷). We use a third thread
to coordinate between the ‘Completion’ ring and the ‘Fill’ ring. This thread
watches for the kernel to move packet descriptors into the ‘Completion’ ring
(❻) upon transmitting completions. The third thread then moves the packet
descriptor from the ‘Completion’ ring to the ‘Fill’ ring (❼).
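The third thread's recycling logic can be sketched with libbpf's xsk ring API
as follows (the batch size is illustrative, and error handling is elided):

```c
#include <stdint.h>
#include <bpf/xsk.h>   /* libbpf's AF_XDP helper API (v0.6.0) */

#define BATCH 64       /* illustrative batch size */

/* Sketch of the recycling thread (steps ❻ and ❼): move frame addresses
 * of transmitted packets from the kernel-owned 'Completion' ring back
 * to the 'Fill' ring so the frames can be reused for reception. */
static void recycle_thread(struct xsk_ring_cons *comp,
                           struct xsk_ring_prod *fill)
{
    uint32_t idx_comp, idx_fill;

    for (;;) {
        /* ❻ Fetch descriptors of frames whose transmission completed. */
        unsigned int n = xsk_ring_cons__peek(comp, BATCH, &idx_comp);
        if (n == 0)
            continue;

        /* ❼ Reserve Fill-ring slots and hand the frame addresses back
         * to the kernel for incoming packets (a full Fill ring would
         * need a retry in a real implementation). */
        if (xsk_ring_prod__reserve(fill, n, &idx_fill) == n) {
            for (unsigned int i = 0; i < n; i++)
                *xsk_ring_prod__fill_addr(fill, idx_fill + i) =
                    *xsk_ring_cons__comp_addr(comp, idx_comp + i);
            xsk_ring_prod__submit(fill, n);
        }
        xsk_ring_cons__release(comp, n);
    }
}
```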
Figure 7: Comparison between different L2/L3 alternatives: (a) Maximum loss
free rate (MLFR) under different packet sizes, (b) CPU usage under MLFR under
different packet sizes, (c) end-to-end latency under MLFR under different
packet sizes. Note: D-MN refers to D-MiddleNet; E-MN-i refers to E-MiddleNet
with interrupt-driven AF_XDP socket; E-MN-p refers to E-MiddleNet with
polling-based AF_XDP socket; OVS-A-i refers to OVS-AF_XDP with interrupt-
driven AF_XDP socket; OVS-A-p refers to OVS-AF_XDP with polling-based AF_XDP
socket.
Service function chains: The eBPF-based L2/L3 approach uses SKMSG to support
NF chains. To support flexible routing between functions, we utilize eBPF’s
socket map. The in-kernel socket map maintains a map between the ID of the
target NF and the socket interface information. As shown in Fig. 6, the NF
creates a packet descriptor to be sent (①). The SKMSG performs a lookup in the
socket map to determine the destination socket (②). It then redirects the
packet descriptor to the next NF (③). That NF uses the descriptor to access
data in shared memory (④) and passes the packet descriptor to the next NF
through SKMSG after processing.
### IV-D Performance evaluation
Experiment setup: We compare the performance of DPDK (i.e., polling-based,
hereafter referred to as D-MiddleNet) and eBPF (i.e., event-driven, hereafter
referred to as E-MiddleNet) approaches to support L2/L3 NFVs with a ‘packet-
centric’ evaluation by comparing the Maximum Loss Free Rate (MLFR), the end-
to-end latency, and CPU utilization at this MLFR for different packet sizes.
We use the data plane model (f) in §II-A as the primary baseline to compare
with. For this, we choose two implementations of Open vSwitch as the kernel-
bypass vSwitch in (f): OVS-DPDK [17] and OVS-AF_XDP [18]. We set up our
experiments on NSF Cloudlab [25] with three nodes: the 1st node is configured
with a _Pktgen_ [26] load generator for L2/L3 NFV use case; the 2nd node is
configured with two MiddleNet alternatives (D-MiddleNet, E-MiddleNet) and the
two OVS alternatives (OVS-DPDK, OVS-AF_XDP). The 3rd node is configured to
return the packets directly back to the 1st node, to measure latency. Each
node has a 40-core CPU, 192GB memory, and a 10Gbps NIC. We use Ubuntu 20.04
with kernel version 5.15. We use DPDK v21.11 [3] and _libbpf_ [27] v0.6.0 for
eBPF-related experiments.
To achieve the best possible performance for OVS-DPDK and OVS-AF_XDP
baselines, we enable the “Multiple Poll-Mode Driver Threads” [28] feature in
OVS. Each PMD thread runs on a dedicated CPU core and continually polls the
physical NIC or the vhost-user (Fig. 1 (f)) to process incoming packets. OVS-
AF_XDP uses polling to retrieve packets from the NIC by default. For this
polling-based OVS-AF_XDP option (OVS-AF_XDP-p, Fig. 1 (f)), and OVS-DPDK, we
create three PMD threads to achieve the highest performance. We additionally
configure the AF_XDP socket in OVS-AF_XDP to run in the interrupt mode (i.e.,
OVS-AF_XDP-i) [29]. (To enable the interrupt mode for AF_XDP, a user needs to
specify the device type of the physical NIC as “afxdp-nonpmd” when attaching
it to OVS.) This helps to move packets between the NIC and userspace OVS in an
event-driven manner. But, to achieve the optimal packet exchange performance
between OVS-AF_XDP-i and NFs, we use polling to avoid interrupt overheads for
packet exchanges between OVS and the NFs. Only a data copy overhead is
incurred between OVS and the NFs when polling is used on both sides. For this,
we create two PMD threads to move packets to and from the NFs (via
vhost-user). For NFs in both the OVS-DPDK and OVS-AF_XDP setups, each
virtio-user interface is assigned a dedicated CPU core to poll packets from
OVS. We also
configure the AF_XDP socket in E-MiddleNet to operate in polling mode
(E-MiddleNet-p) and compare with the interrupt-based AF_XDP socket
(E-MiddleNet-i).
We set up two NFs in a chain on the 2nd node: an L3 routing function followed
by an L2 forwarding function. For the L3 routing function, MiddleNet updates
the IP address of received packets, and the L2 forwarding function of a
subsequent NF in the chain updates the MAC address of received packets and
forwards them to the 3rd node. We collect the average value measured across 5
repetitions. Each run is for 60 seconds.
Discussion: Fig. 7 shows the MLFR for different alternatives. D-MiddleNet
achieves almost the line rate for different packet sizes. The exception is for
packet sizes of 64Bytes, achieving 12.6M packets/sec (84% of line rate)
because of our limit on the number of CPU cores for the NF manager and the
PMD. Even with the limited CPU cores, D-MiddleNet outperforms both
E-MiddleNet-i and E-MiddleNet-p. For a packet size of 64Bytes, E-MiddleNet-i
is limited to a forwarding rate of 3.2 Mpps (only 25% of D-MiddleNet) while
E-MiddleNet-p is limited to a forwarding rate of 6.3 Mpps (50% of
D-MiddleNet). Moreover, if the NFs performed more complex processing or if the
load were higher (e.g., with bidirectional traffic), we would observe
receive-livelock [13]. The performance of E-MiddleNet-i is limited by its
overheads, including a large number of interrupts and context switches (see
Table IV). As we observe in Fig. 7, E-MiddleNet-i’s NF manager and the NFs
themselves spend most of their CPU time in the kernel (53% for the NF manager,
67% for the NFs) handling interrupts generated by the AF_XDP socket or SKMSG,
thus leaving fewer resources to perform the NF packet forwarding tasks.
E-MiddleNet-p reduces interrupts by operating the AF_XDP socket in polling
mode, which helps it achieve better throughput compared to E-MiddleNet-i. But,
the performance of E-MiddleNet-p is still worse than D-MiddleNet, as the
execution of the XDP program in the NIC driver is triggered by interrupts, in
addition to the SKMSG overhead, all of which negatively impact the packet
forwarding performance. Although devoting more resources to E-MiddleNet’s NF
manager and the NFs may alleviate this overload, it only postpones the problem
when the traffic load continues to increase. Moreover, using more resources to
mitigate overload defeats the original intention of using eBPF-based event-
driven processing since the goal of using it is for resource efficiency.
Focusing on the end-to-end packet latency, D-MiddleNet achieves a 2.6$\times$
improvement compared to E-MiddleNet-i, and is 1.8$\times$ better compared to
E-MiddleNet-p (Fig. 7).
Note that as the packet size increases, the CPU usage of both E-MiddleNet-i
and E-MiddleNet-p is even lower compared to the other options. For example, at
a packet size of 1024Bytes, the CPU usage of E-MiddleNet-i and E-MiddleNet-p
are 63% and 58% of D-MiddleNet, respectively. Since E-MiddleNet-i and
E-MiddleNet-p use event-driven shared memory communication, their overhead is
strictly proportional to the packet rate; as the packet size increases, the
packet rate decreases (bounded by the line rate of the NIC used in this
experiment), and the overhead diminishes. Thus the CPU overhead reduces for
larger packet sizes for E-MiddleNet-i and E-MiddleNet-p,
which makes the event-driven design attractive for larger packet sizes for
L2/L3 NFs. However, the event-driven approach still suffers from poor
performance and relatively high CPU usage in handling L2/L3 traffic with
smaller packet sizes. On the other hand, D-MiddleNet maintains good
performance across a range of packet sizes. Further, D-MiddleNet can utilize
the scheduling principles in NFVnice [24] to reduce the CPU consumption by
multiplexing a CPU core across multiple NFs.
Both D-MiddleNet and E-MiddleNet outperform OVS-DPDK and OVS-AF_XDP in terms
of MLFR and latency. Looking at the CPU usage of OVS-DPDK, even though it
dedicates ample CPU resources (3 CPU cores for the OVS switch, one CPU core
per NF) to achieve its best performance, its forwarding rate is worse than
E-MiddleNet’s. This shows the negative
impact of excessive data copies within the chain (§II-C). Even though
E-MiddleNet also incurs interrupts and context switches (Table V) in the data
pipeline, as shown in Fig. 3, its exploitation of shared memory communication
fundamentally improves the data plane performance of function chains, as
discussed in Appendix B. OVS-AF_XDP on the other hand performs poorly. Running
OVS-AF_XDP in polling mode (OVS-AF_XDP-p) improves throughput and reduces
latency compared to running OVS-AF_XDP in interrupt mode. This is because OVS-
AF_XDP-i suffers the overhead of interrupts and context switches for moving
packets between the NIC and userspace, just like E-MiddleNet-i. But the
improvement of OVS-AF_XDP-p is limited, particularly because of the data copy
overhead within the chain.
D-MiddleNet does, however, constantly consume considerable CPU resources (one
CPU core per NF, 2 CPU cores for the NF manager). While this is a concern, its
superior
performance makes it more attractive for L2/L3 NFs, since they have to act
like a ‘bump-in-the-wire’. E-MiddleNet is less attractive because of its poor
overload behavior.
## V Design of MiddleNet: L4/L7 Middlebox
We discuss the corresponding eBPF-based and DPDK-based designs to support
L4/L7 middleboxes. Since an L4/L7 middlebox relies heavily on protocol
processing, we discuss optimizations, leveraging the kernel protocol stack
processing, focusing on resource efficiency.
Figure 8: Packet processing flow for eBPF-based L4/L7 middleboxes
### V-A Overview
Protocol processing support: Unlike L2/L3 NFs, L4/L7 middleboxes require
packets to pass through the kernel for protocol layer processing. L4/L7
MiddleNet uses a message broker (Fig. 3) to leverage the protocol processing
in the kernel stack. Incoming packets processed by the kernel network protocol
stack are delivered through a socket to a message broker in userspace. This
comes at a cost (see Appendix B), but MiddleNet benefits significantly from a
fully functional in-kernel protocol stack for L4/L7 middleboxes.
Zero-copy I/O for function chaining & shared memory support: We follow a
methodology similar to that in §IV to evaluate the most suitable zero-copy
I/O capability for function chains in L4/L7 MiddleNet. For the eBPF-based
L4/L7 middlebox design, packets are forwarded between MFs using eBPF’s SKMSG
capability. For DPDK-based L4/L7 middlebox functionality, the message broker
delivers descriptor entries to the ring of the target MF, with the payload in
shared memory, after protocol processing by the message broker.
### V-B The eBPF-based L4/L7 middlebox design
Fig. 8 depicts the packet flow for the eBPF-based L4/L7 MiddleNet. For inbound
traffic, after the payload is moved into shared memory by the message broker
(①), a packet descriptor is sent to the target MF via SKMSG (②). The MF then
uses the descriptor to access the data in shared memory (③). For outbound
traffic, once the MF has finished processing the packet (④), it uses SKMSG to
inform the message broker (⑤), which then fetches the packet in shared memory
(⑥) and transmits it on the network via the kernel protocol stack.
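A sketch of this inbound path (steps ① and ②) from the broker's side; the
descriptor layout, SHM_BUF_SIZE, and alloc_shm_buf() are hypothetical:

```c
#include <stdint.h>
#include <sys/socket.h>

#define SHM_BUF_SIZE 2048  /* illustrative frame size */

struct pkt_desc {          /* hypothetical descriptor layout */
    uint32_t dst_mf;       /* destination MF ID, read by the SK_MSG program */
    uint64_t shm_addr;     /* payload address in shared memory */
    uint32_t len;
};

extern char *alloc_shm_buf(void);  /* hypothetical shared memory allocator */

/* Sketch of the broker's inbound path: receive the payload from the
 * kernel stack (①), place it in shared memory, and send only a small
 * descriptor over the SKMSG-attached socket (②). */
void broker_inbound(int conn_sock, int skmsg_sock, uint32_t dst_mf)
{
    char *buf = alloc_shm_buf();
    if (!buf)
        return;

    ssize_t len = recv(conn_sock, buf, SHM_BUF_SIZE, 0);  /* step ① */
    if (len <= 0)
        return;

    /* Step ②: the attached SK_MSG program redirects this descriptor to
     * the target MF's socket via the sockmap. */
    struct pkt_desc d = {
        .dst_mf   = dst_mf,
        .shm_addr = (uint64_t)(uintptr_t)buf,
        .len      = (uint32_t)len,
    };
    send(skmsg_sock, &d, sizeof(d), 0);
}
```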
Function chain support: The eBPF-based L4/L7 MiddleNet utilizes the eBPF’s
SKMSG and socket map for delivering packet descriptors within the function
chain (similar to what we described for L2/L3 NFV with eBPF), as shown in Fig.
6. Although the eBPF-based L4/L7 approach still executes in a purely
interrupt-driven manner, since the kernel protocol stack is involved, it often
uses a flow-controlled transport protocol. This potentially avoids overloading
the receiver, and therefore, receive-livelocks are less of a concern.
Interrupt-based processing does not use up a CPU like polling, so it is more
resource-efficient and benefits the L4/L7 use case. We further mitigate the
impact of interrupts with batching.
Figure 9: Packet processing flow for DPDK-based L4/L7 middleboxes
Adaptive batching of SKMSG Processing: Since bursty traffic can cause a large
number of SKMSG transfers, we consider an adaptive batching mechanism to
reduce the overhead of frequent SKMSG transfers. For each interrupt generated
by SKMSG, instead of reading only one packet descriptor present in the socket
buffer, we read multiple (up to a limit) packet descriptors available in the
socket buffer. Thus, we can reduce the total number of interrupts, even for
frequent SKMSG transfers, and mitigate overload behavior.
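A minimal sketch of this batched receive logic (BATCH_LIMIT, the descriptor
struct, and handle_descriptor() are illustrative):

```c
#include <sys/socket.h>
#include <stdint.h>

#define BATCH_LIMIT 32  /* illustrative per-wakeup batch limit */

struct pkt_desc {       /* hypothetical descriptor layout */
    uint64_t shm_addr;
    uint32_t len;
};

extern void handle_descriptor(const struct pkt_desc *d);  /* hypothetical */

/* Sketch of adaptive batching on the receive side: on each wakeup,
 * drain up to BATCH_LIMIT descriptors from the socket buffer instead
 * of reading just one, amortizing the interrupt and context switch. */
void on_socket_readable(int sockfd)
{
    struct pkt_desc d;
    int n = 0;

    /* MSG_DONTWAIT makes recv() return immediately (EAGAIN) once the
     * socket buffer is drained, ending the batch. */
    while (n < BATCH_LIMIT &&
           recv(sockfd, &d, sizeof(d), MSG_DONTWAIT) == (ssize_t)sizeof(d)) {
        handle_descriptor(&d);
        n++;
    }
}
```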
### V-C The DPDK-based L4/L7 middlebox design
To leverage the kernel protocol stack, we restructure the NF manager of the
L2/L3 use case (Fig. 4) into a message broker in the DPDK-based L4/L7
MiddleNet. The message broker writes the received payload to shared memory
(①), then, consulting the routing table, pushes the packet descriptor to the
RX ring of the target MF (②). The MF keeps polling its RX ring for arriving
packets. The MF uses the received packet descriptor to access the packet in
shared memory and processes it (③). Once the processing is complete (④), the
MF pushes the packet descriptor to its TX ring. On the other side, the message
broker polls the TX ring of MFs for the packet descriptor (⑤), then accesses
the shared memory and sends the packet out through the kernel protocol stack
(⑥).
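The broker's egress side (steps ⑤ and ⑥) can be sketched as follows, assuming
hypothetical desc_payload()/desc_len()/free_shm_desc() accessors for the
descriptor and shared memory pool:

```c
#include <stddef.h>
#include <sys/socket.h>
#include <rte_ring.h>

/* Hypothetical accessors for the descriptor and shared memory pool. */
extern void  *desc_payload(void *desc);
extern size_t desc_len(void *desc);
extern void   free_shm_desc(void *desc);

/* Sketch of the message broker's egress loop (steps ⑤ and ⑥): poll the
 * MF's TX ring and push the payload out via the kernel stack. */
static void broker_tx_loop(struct rte_ring *mf_tx_ring, int out_sock)
{
    void *desc;

    for (;;) {
        if (rte_ring_dequeue(mf_tx_ring, &desc) < 0)
            continue;  /* nothing to send yet */

        /* Access the payload in shared memory via the descriptor and
         * let the kernel perform protocol processing on transmit. */
        send(out_sock, desc_payload(desc), desc_len(desc), 0);
        free_shm_desc(desc);  /* return the frame to the pool */
    }
}
```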
Function chain support: The function chain support in the DPDK-based L4/L7
MiddleNet is the same as in the DPDK-based L2/L3 NFV use case (§IV-B). Here,
the message broker performs the (same) tasks to transfer packet descriptors
between MFs.
Figure 10: RPS (a), latency (b) and CPU usage (c) comparison between different
L4/L7 middlebox approaches. Note: The CPU usage of the data plane model (d)
exceeds 10 CPU cores at concurrency level 32 and consumes 30 CPU cores at
concurrency level 512.
Figure 11: RPS (a), latency (b) and total CPU usage (c) comparison with
increasing number of CPU-intensive MFs in the chain.
### V-D Performance Evaluation of L4/L7 middleboxes
Experiment Setup: We now study the performance differences between the eBPF-
based L4/L7 MiddleNet (Fig. 8, hereafter referred to as E-MiddleNet) and the
DPDK-based L4/L7 MiddleNet implementation (Fig. 9, hereafter referred to as
D-MiddleNet). As a third alternative, we use an NGINX proxy to study the
impact of MiddleNet’s loosely-coupled function chain design (which supports a
microservices paradigm). The NGINX proxy acts as a non-virtualized proxy that
performs functions via internal function calls, avoiding context switches and
interrupts and achieving good data plane performance with a static, monolithic
function implementation. We also use the data plane model
in Fig. 1 (d) (hereafter referred to as K-vSwitch), as an additional
alternative to compare with. We choose the Linux bridge as the implementation
of the kernel-based vSwitch in Fig. 1 (d). While the in-kernel OVS bridge
could be another option, the Linux bridge offers all the functionality of a
vSwitch for our evaluation purposes and is natively supported in Linux. In
addition, the performance difference between Linux bridge and the in-kernel
OVS bridge is not considered to be significant [30, 31]. It has also been
noted that the in-kernel OVS bridge has difficulty being maintained as a
separate project alongside the Linux kernel [18]. We reuse most of the
testbed setup described in §IV-D.
We consider a typical HTTP workload (Apache Benchmark [32]) and examine
application-level metrics, including request rate, response latency, and CPU
usage, where the middlebox acts as a reverse proxy for web servers. The 1st
node is configured to generate HTTP workloads. The 2nd node is configured with
the MiddleNet system. On the 3rd node, we configure two NGINX [33] instances
as web servers. We enable adaptive batching for E-MiddleNet to minimize the
overhead incurred by frequent SKMSG interrupts within the chain at high
concurrency. We use a chain with two MFs. The first is a reverse proxy
function that performs round-robin load balancing between the two web server
backends on the 3rd node. The second function is a URL rewrite function that
helps perform redirection for static websites.
We also compare the scalability of D-MiddleNet and E-MiddleNet, when the
number of MFs in a linear chain increases. To evaluate the impact of CPU-
intensive tasks on the network performance of MF chains, we let MFs perform
prime number generation (based on the sieve-of-Atkin algorithm [34]) when a
request is received. Each MF is assigned one dedicated CPU core to perform
tasks, including RX/TX of requests and the prime number generation. We set the
concurrency level (i.e., the number of clients sending HTTP requests
concurrently) of Apache Benchmark to 512 to generate sufficient load.
Evaluation: Fig. 10 compares the RPS, response latency, and CPU usage of the
different alternatives. K-vSwitch has the lowest performance and highest CPU
usage compared to the others. At a concurrency level of 512, the RPS of
K-vSwitch is only $\sim$42% of the others, while its latency is
$\sim$2.3$\times$ higher. The CPU usage of K-vSwitch is even higher than
D-MiddleNet for concurrency levels greater than 16. This demonstrates the
heavyweight nature of the service function chain as discussed in §II-C and
demonstrates the benefit of having a zero-copy function chain (Appendix B) of
the MiddleNet alternatives.
The use of SKMSG in E-MiddleNet leads to slightly worse latency and throughput
than D-MiddleNet. When the concurrency is between 1 and 32, there is a
throughput difference between D-MiddleNet and E-MiddleNet, ranging from
1.09$\times$ to 1.3$\times$. At the lowest concurrency level of 1, E-MiddleNet
consumes 37% of the CPU, which is a 10$\times$ reduction compared to
D-MiddleNet (404%, i.e., 4 CPU cores). Since D-MiddleNet uses polling to
deliver packet descriptors, it continuously consumes CPU resources even when
the traffic load is low, resulting in wasted CPU resources. Although
D-MiddleNet achieves 1.3$\times$ better RPS and latency than E-MiddleNet at a
concurrency of 1, E-MiddleNet’s resource efficiency more than makes up for its
lower throughput (peak throughput is, in any case, unlikely to be the goal at
a concurrency of 1) compared to D-MiddleNet’s constant usage of CPU. Thus, it
is more desirable to use the lightweight E-MiddleNet approach for these light
loads.
When the concurrency level increases and the load is higher, the adaptive
batching of the E-MiddleNet approach amortizes the interrupt and context
switch overheads. The performance gap between E-MiddleNet and the others
reduces to be within 1.05$\times$ for concurrency levels higher than 64. With
adaptive batching, SKMSG can pass a set of packet descriptors, incurring only
one context switch and interrupt, saving substantial CPU cycles, reducing
latency, and improving throughput.
Compared to a monolithic NGINX as a middlebox, the E-MiddleNet approach
exhibits slightly worse throughput and latency performance (1.04$\times$ less
RPS due to 1.04$\times$ higher response delay) because of the overhead of
function chaining, SKMSG, and virtualization. NGINX’s internal function calls
have slightly lower overhead (25% less on average) than SKMSG, which has
additional context switches and interrupts. However, running a set of
middleboxes as microservices improves flexibility and resiliency, allowing us
to scale better, according to traffic load, especially with heterogeneous
functions. Moreover, it allows functions to be shared between different
middlebox chains to improve resource utilization. With orchestration engines,
e.g., Kubernetes, intelligent scaling and placement policies can be applied
with MiddleNet to improve resource efficiency further while still maintaining
performance very close to a monolithic middlebox design.
Fig. 11 evaluates the scalability of D-MiddleNet and E-MiddleNet with CPU-
intensive MFs. Both D-MiddleNet and E-MiddleNet show good scalability as the
number of MFs increases. Surprisingly, E-MiddleNet performs even better than
D-MiddleNet with CPU-intensive MFs, with a 10% improvement in RPS and a 10%
reduction in latency. This is because the CPU-intensive prime number
generation can quickly saturate the assigned CPU core and contend for CPU with
the polling-based RX tasks of D-MiddleNet’s MF. But for E-MiddleNet,
the RX of requests is triggered by interrupts, which is strictly load-
proportional and avoids CPU contention. Since the prime number generation is
performed within E-MiddleNet’s MFs, it is able to fully utilize the assigned
CPU core, improving its performance. To improve D-MiddleNet’s performance,
more CPU resources need to be assigned to the MFs, meaning that we are using
resources inefficiently. In addition, for the combined CPU usage of the
message broker and MFs, D-MiddleNet always needs one more CPU core than
E-MiddleNet (Fig. 11). The extra CPU usage of D-MiddleNet is due to the RX
polling in the message broker to receive requests from the MF. Since prime
number generation is time-consuming, it results in a lower request rate. This
means that the CPU devoted to handling RX of requests is used inefficiently.
This reiterates the fact that D-MiddleNet uses resources inefficiently for
this case, when dealing with CPU-intensive functions.
Throughout these experiments, E-MiddleNet has significant resource savings at
different concurrency levels compared to D-MiddleNet, while having comparable
throughput. Further, E-MiddleNet can even achieve better performance than
D-MiddleNet when executing CPU-intensive functions, while using resources more
frugally. It also achieves close to the same performance as a highly
optimized, monolithic application like NGINX. The event-driven capability of
eBPF, in conjunction with SKMSG to support shared memory processing, makes for
a highly desirable way of building L4/L7 middlebox functionality in software.
## VI A Unified Design based on SR-IOV
Based on the understanding from studying the alternative approaches and their
performance characteristics, we now develop the overall architecture of
MiddleNet that supports the co-existence of network resident NFV and middlebox
capabilities in a unified framework running on a single system.
Figure 12: The overall architecture of MiddleNet: A Combination of DPDK and
eBPF via SR-IOV.
SR-IOV [14] allows multiple Virtual Functions (VFs) on a shared NIC, as
depicted in Fig. 12. A VF acts as a distinct logical interface on the PCIe bus
that offers direct access to the physical NIC resources, which are shared
across multiple VFs, while still achieving close to the single physical NIC’s
performance. By dividing the hardware resources available on the physical NIC
into multiple VFs, we can dedicate a VF each to L2/L3 MiddleNet and L4/L7
MiddleNet without either one taking up the entire physical NIC. The aggregate
NIC performance will still be at the line rate. MiddleNet uses the Flow
Bifurcation mechanism [35] for splitting traffic within the physical NIC in a
flow or state-dependent manner. Since each VF is associated with different IP
and MAC addresses, MiddleNet dynamically selects the packet processing layer
(based on the VF it is attached to) from L2 to L7, providing a rich set of
network-resident capabilities.
### VI-A Flow and State-dependent packet processing using SR-IOV
MiddleNet attaches flow rules to the packet classifier in the physical NIC to
support flow (and possibly state) dependent packet processing. Once a packet
is received, the packet classifier parses and processes it based on its IP
5-tuple (i.e., source/destination IPs, source/destination ports, protocol),
which helps differentiate between packet flows.
(1) For a packet that needs to be handled by L2/L3 NFs, the classifier hands
it to the VF bound to DPDK. The VF DMA’s the raw packet to the shared memory
in userspace. On the other side, the NF manager obtains the packet descriptor
via the PMD and processes the packet in shared memory.
(2) For a packet that needs to be handled by L4/L7 middlebox functions (MFs),
the packet classifier hands the packet to the kernel TCP/IP stack through the
corresponding VF. Since L4/L7 MFs require transport layer processing,
MiddleNet utilizes the full-featured kernel protocol stack.
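As an illustration of such a classifier rule, the following sketch uses DPDK's
rte_flow API to steer one TCP flow to a given VF. Whether the VF action is
supported depends on the NIC, and the match fields, port, and VF id here are
illustrative:

```c
#include <rte_flow.h>

/* Sketch: install an rte_flow rule that steers one TCP flow (matched
 * here only by destination port, for brevity) to the VF bound to the
 * kernel stack for L4/L7 processing. */
int steer_flow_to_kernel_vf(uint16_t port_id, uint32_t vf_id,
                            rte_be16_t dst_port)
{
    struct rte_flow_attr attr = { .ingress = 1 };

    struct rte_flow_item_tcp tcp_spec = { .hdr.dst_port = dst_port };
    struct rte_flow_item_tcp tcp_mask = { .hdr.dst_port = 0xffff };

    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_TCP,
          .spec = &tcp_spec, .mask = &tcp_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    struct rte_flow_action_vf vf = { .id = vf_id };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_VF, .conf = &vf },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    struct rte_flow_error err;
    return rte_flow_create(port_id, &attr, pattern, actions, &err)
               ? 0 : -1;
}
```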
Because SR-IOV allows multiplexing of physical NIC resources, the split
between the DPDK path and Linux kernel protocol stack path can be easily
handled. L2/L3 NFs and L4/L7 MFs can co-exist on the same node in MiddleNet.
Using SR-IOV in a simple design, however, would result in these two frameworks
co-existing as two distinct and separate functions providing services for
distinct flows. There are two options for bridging the L2/L3 MiddleNet and
L4/L7 MiddleNet: (1) A hardware-based approach that utilizes the NIC switch
feature offered by SR-IOV [36] to connect different VFs within the NIC. (An
SR-IOV-enabled NIC must include the internal hardware bridge to support
forwarding and packet classification between VFs on the same NIC.) (2) A
software-based approach that uses virtio-user/vhost-net & TUN/TAP device
interfaces to connect L2/L3 MiddleNet to the kernel stack (see Fig. 1 (b)),
which is then connected to L4/L7 MiddleNet. (DPDK’s Kernel NIC Interface (KNI
[37]) is another software-based approach that provides equivalent
functionality to virtio-user/vhost-net & TUN/TAP. However, KNI lacks several
important features compared to virtio-user/vhost-net & TUN/TAP, such as
multi-queue support, checksum offloading, etc., which makes its performance
worse than that of virtio-user/vhost-net & TUN/TAP [38].)
TABLE III: Overhead auditing of unified designs

| | NIC switch in SR-IOV | virtio-user/vhost-net & TUN/TAP |
|---|---|---|
| # of interrupts | 2 | 2 |
| # of copies | 1 | 2 |
| # of context switches | 1 | 2 |
Table III compares the overhead generated by the different alternatives. We
only audit the datapath overhead between the NF manager in L2/L3 and the
message broker in L4/L7, as they are the entry points of L2/L3 and L4/L7
MiddleNet. The
hardware-based approach seamlessly works with the kernel-bypass in L2/L3
MiddleNet and moves the packet from the L2/L3 MiddleNet to the NIC via DMA.
The NIC switch forwards the packet to the VF attached to the kernel stack
without incurring any CPU overhead. All the overhead in the hardware-based
approach is caused by passing the packet from the kernel stack to the message
broker; however, it is still less than that of the software-based approach.
The software-based approach inevitably introduces extra overhead and may
compromise the performance gain achieved by L2/L3 kernel bypass. Based on the
overhead auditing, we decide to use the NIC switch to pass packets between the
L2/L3 NFs and the kernel protocol stack of the L4/L7 layer, enabling both
L2/L3 NFs and L4/L7 MFs to operate on the same flow.
### VI-B Performance evaluation of unified design
We investigate the performance of a unified L2/L3 NFV and L4/L7 middlebox and
examine the interaction between the two, using SR-IOV to split the traffic. To
mitigate interference between the load generators for L2/L3 (Pktgen [26]) and
L4/L7 (Apache Benchmark [32]), we deploy Pktgen on the 1st node and Apache
Benchmark on the 3rd node. We configure two NGINX servers on the 3rd node as
the L4/L7 traffic sink. We configure two VFs on the 2nd node with SR-IOV and
bind L2/L3 MiddleNet (DPDK) and L4/L7 MiddleNet (eBPF) to separate VFs. We use
the same NFs (L3 routing and L2 forwarding) and MFs (reverse proxy and URL
rewrite) on the 2nd node as described in §IV-D and §V-D. We modify the NFs and
MFs to perform hairpin routing: L2/L3 NFs return traffic to the 1st node, and
L4/L7 MFs return traffic to the 3rd node. Thus, we eliminate the interference
that occurs between the two traffic generators. For L2/L3 traffic, we keep the
sending rate at the MLFR. For L4/L7 traffic, we use a concurrency of 256 with
the Apache Benchmark.
Figure 13: (a) Aggregate throughput for various packet sizes. For L2/L3 NFV,
we use Maximum loss free rate (MLFR) (b) Time series of throughput for L2/L3
NFV and total (left Y-axis) and L4/L7 middlebox (right Y-axis).
We study whether there is interference by checking the aggregate throughput as
well as the throughput for the L2/L3 traffic processed by the NFV side and the
L4/L7 traffic processed by the middlebox side, as shown in Fig. 13. The
aggregate throughput of
L2/L3 NFs and L4/L7 MFs remains close to 10Gbps, with negligible performance
loss across various packet sizes. We also study the impact of adding L4/L7
flows when L2/L3 traffic (128Bytes packets) goes through MiddleNet at line
rate (10 Gbps link). As shown in Fig. 13, at the 25th second, the Apache
Benchmark starts to generate L4/L7 traffic (0.22Gbps), and the throughput of
L2/L3 NFs correspondingly drops to 9.78Gbps. Thus, our unified design in
MiddleNet for the co-existence of DPDK-based L2/L3 NFs and eBPF-based MFs
provides both flexibility and performance.
## VII Isolation and Security Domains in MiddleNet
The use of shared memory raises concerns as it may weaken the
isolation/security boundary between the functions that share the same memory
region. Our trust model assumes that only functions in MiddleNet trust each
other. Functions in MiddleNet (NFs or MFs), which run as DPDK secondary
processes, share the same private memory pool by using the same “shared data
file prefix” (specified by the shared memory manager (§IV-A)) during their
startup. We ‘admission control’ functions by validating the creation of a
MiddleNet function that is authenticated and uses the correct file prefix. We
additionally apply inter-function packet descriptor filtering to prevent
unauthorized access to the data in shared memory, through the virtual address
in the packet descriptor. In accordance with the way packet descriptors are
passed, these are different for L2/L3 (with DPDK’s RTE ring) MiddleNet versus
L4/L7 (with eBPF’s SKMSG) MiddleNet.
Descriptor filtering for L2/L3 NFs: We leverage the NF manager in L2/L3
MiddleNet to perform packet descriptor filtering. Once the NF manager polls a
new packet descriptor from an NF’s TX ring, it queries its internal filtering
map and checks whether the packet descriptor is authorized to be sent to the
target NF based on matched rules. Unauthorized packet descriptors are dropped
by the NF manager.
Descriptor filtering in L4/L7: Since the L4/L7 MiddleNet uses SKMSG to pass
packet descriptors between functions (§V-B), it is natural to exploit eBPF’s
extensibility to filter packet descriptors. We add an additional eBPF map to
the SKMSG program to store filtering rules. Each time a packet descriptor
arrives, the SKMSG program parses the destination of the packet descriptor and
uses it as the key to look up the filtering rule. The packet descriptor is
passed to the destination if allowed; otherwise, the descriptor is recognized
as unauthorized and discarded.
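A sketch of this filtering step, extending the earlier SK_MSG sketch with a
hypothetical filter map (the map layout and the 'authorized' encoding are our
assumptions):

```c
// Sketch: descriptor filtering added to the SK_MSG program.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 64);
    __type(key, __u32);    /* destination function ID */
    __type(value, __u8);   /* 1 = authorized */
} filter_map SEC(".maps");

struct {
    __uint(type, BPF_MAP_TYPE_SOCKMAP);
    __uint(max_entries, 64);
    __uint(key_size, sizeof(__u32));
    __uint(value_size, sizeof(__u64));
} nf_sock_map SEC(".maps");

SEC("sk_msg")
int filter_and_redirect(struct sk_msg_md *msg)
{
    __u32 *dst = msg->data;
    if ((void *)(dst + 1) > msg->data_end)
        return SK_DROP;

    /* Drop unauthorized descriptors before they reach the target. */
    __u8 *allowed = bpf_map_lookup_elem(&filter_map, dst);
    if (!allowed || *allowed == 0)
        return SK_DROP;

    return bpf_msg_redirect_map(msg, &nf_sock_map, *dst, BPF_F_INGRESS);
}

char _license[] SEC("license") = "GPL";
```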
## VIII Related Work
NFV platforms use different implementation approaches and primarily operate at
L2/L3. OpenNetVM [4], based on DPDK, uses the microservice paradigm with a
flexible composition of functions and uses shared memory to achieve full line-
rate performance. However, OpenNetVM lacks full-fledged protocol stack
support, focusing on supporting L2/L3 NFs. Compared to OpenNetVM, MiddleNet
supports processing across the entire protocol stack, including application
support. Other NFV platforms take different approaches. Both ClickOS [39] and
NetMap [40] use traditional kernel style processing and mapping of kernel-user
space memory, using interrupts for notifications. The interrupt-based
notification schemes of ClickOS and NetMap can be vulnerable to poor overload
behavior because of receive-livelocks [13]. In contrast, the L2/L3 processing
in MiddleNet uses polling, thus avoiding receive-livelocks. E2 [41] integrates
all the NFs as one monolith to help improve performance but gives up some
flexibility to build complex NF chains through the composition of
independently developed NFs. NFV designs have increasingly adopted the
microservice paradigm for flexible composition of functions while still
striving to achieve full line-rate performance. Supporting this, MiddleNet’s
disaggregated design offers the flexibility to build complex L2/L3 NF chains.
Network-resident middleboxes’ functionality depends on having full kernel
protocol processing, typically terminating a transport layer connection and
requiring a full-fledged protocol stack. Efforts have been made to pursue a
high-performance middlebox framework with protocol processing support [42, 19,
6]. However, each of these proposals has its difficulties. mOS [42] focuses on
developing a monolithic middlebox, lacking the flexibility of a disaggregated
design like MiddleNet. Microboxes [19] leverages DPDK and OpenNetVM’s shared
memory design to improve packet processing performance and achieve flexible
middlebox chaining. However, it does not provide a full-fledged protocol stack
(it only supports TCP). The CPU consumption of DPDK-based designs is a further
deterrent in the L4/L7 use case, especially when the chain’s complexity
increases. Establishing communication channels for a chain of middleboxes
using the kernel network stack incurs considerable overhead: every transfer
between distinct middleboxes typically involves a full protocol stack
traversal, two data copies, context switches, multiple interrupts, and one
serialization and one deserialization operation. MiddleNet is designed to
reduce these overheads by leveraging shared memory processing, while adopting
eBPF-based event-driven processing to minimize CPU consumption. StackMap [6]
also leverages the feature-rich kernel protocol
stack to perform protocol processing while bypassing the kernel to improve
packet I/O performance. However, it is more focused on end-system support than
middlebox function chaining. StackMap’s capability may be complementary to the
design of MiddleNet.
There has not been a significant effort to design a unified environment where
L2/L3 NFV and L4/L7 middlebox environments co-exist. MiddleNet is designed to
address this issue.
eBPF-based NFV/Middlebox: [43, 44, 45] explore the use of eBPF to implement
NFV/Middlebox functions. These eBPF-based functions reside in the kernel,
running as a set of eBPF programs attached at various eBPF hooks, e.g.,
eXpress Data Path (XDP), and Traffic Control (TC). This avoids expensive
context switches, as packet processing always remains within the kernel. In
addition, the packet payload is retained in kernel buffers; only the packet
metadata (represented as an “xdp_md” data structure when using the XDP hook,
and as an “sk_buff” data structure when using the TC hook), which contains the
packet descriptor, is passed between different eBPF-based functions, thus
achieving zero-copy packet delivery in the kernel. Compared to MiddleNet,
[43, 44, 45] focus on keeping all processing within the kernel. In contrast,
L2/L3 MiddleNet relies on DPDK, and MiddleNet uses SR-IOV to achieve a unified
design. [43, 44, 45] can seamlessly work with
the kernel protocol stack for protocol processing. However, the eBPF-based
functions in [43, 44, 45] are triggered using kernel interrupts, thus
potentially suffering from poor overload behavior [13]. Thus, their approach
can perform poorly compared to L2/L3 MiddleNet, which leverages DPDK to
achieve line-rate performance. Additionally, the eBPF-based functions can only
be used to support L2/L3/L4 use cases within the kernel. Since L7 middleboxes
not only require protocol processing but also have application code that
typically runs in userspace, approaches as in [43, 44, 45] result in expensive
packet
transfers between the kernel performing packet processing and the L7 userspace
application. The shared memory design in L4/L7 MiddleNet avoids this overhead,
thus achieving better data plane performance for a unified L4/L7 environment.
## IX Conclusion
We presented MiddleNet, a unified environment supporting L2/L3 NFV
functionality and L4/L7 middleboxes. In MiddleNet, we chose the high-
performance packet processing of DPDK for L2/L3 NFs and the resource
efficiency of eBPF for L4/L7 middlebox functions. MiddleNet leverages shared
memory processing for both use cases to support high-performance function
chains. Experimental results demonstrated the performance benefits of using
DPDK for L2/L3 NFV. MiddleNet can achieve full line rate for almost all packet
sizes given adequate CPU resources provided to MiddleNet’s NF manager. Its
throughput outperforms an eBPF-based design that depends on interrupts by
4$\times$ for small packets and has a 2$\times$ reduction in latency. For the
L4/L7 use case, the performance of our eBPF-based design in MiddleNet is close
to the DPDK-based approach, getting to within 1.05$\times$ at higher loads
(large concurrency levels). In addition, the eBPF-based approach has
significant resource savings, with an average of 3.2$\times$ reduction in CPU
usage compared to a DPDK-based L4/L7 design. Using SR-IOV on the NIC,
MiddleNet creates a unified environment with negligible impact on performance,
running the DPDK-based L2/L3 NF chains and eBPF-based L4/L7 middlebox chains
on the same node. This can bring substantial deployment flexibility.
## Acknowledgments
We thank US National Science Foundation for their generous support through
grants CRI-1823270 and CSR-1763929.
## Appendix A Details of DPDK’s shared memory support
After the DPDK primary process (i.e., shared memory manager) initializes the
memory pools, it writes the memory pool information (e.g., base virtual
address, the allocated huge pages) into a configuration file through DPDK’s
EAL (Environment Abstraction Layer [46]). The DPDK secondary processes (i.e.,
functions, L2/L3 NF manager, L4/L7 message broker) read the configuration file
during startup and use DPDK’s EAL to map the same memory regions allocated by
the DPDK primary process. This ensures all the DPDK secondary processes share
the same memory pools, thereby facilitating shared memory communication
between functions.
When VMs are used, they rely on the emulated PCI to access physical memory in
the host. This requires multiple address translations (i.e., Guest Virtual
Address to Guest Physical Address and then to Host Virtual Address). This adds
a burden while sharing memory across different VMs, since they have different
virtual address mappings to the host. It requires the hypervisor (as it knows
the virtual address mappings of different VMs) to remap the base virtual
address in the packet descriptor, which adds additional processing latency. In
contrast, a container shares the same virtual memory address, which means that
its virtual address can be interpreted by other containers without an
additional translation. This facilitates memory sharing between different
functions implemented in containers and makes it straightforward to build
shared memory for function chains using existing tools such as DPDK’s multi-
process support.
## Appendix B Overhead auditing of function chains using shared memory
To quantitatively understand the benefit of shared memory communication and
the difference between alternatives, we now perform an auditing of the
overheads for the function chain in Fig. 3.
TABLE IV: Overhead auditing of L2/L3 NF chain using shared memory
communication
| Data pipeline No. | | ① | ⑥ | ② | ③ | ④ | ⑤ | Total |
|---|---|---|---|---|---|---|---|---|
| # of copies | ($\alpha$) polling | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| | ($\beta$) event-driven | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| # of interrupts | ($\alpha$) polling | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| | ($\beta$) event-driven | 2 | 1 | 1 | 1 | 1 | 1 | 7 |
| # of context switches | ($\alpha$) polling | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| | ($\beta$) event-driven | 1 | 1 | 1 | 1 | 1 | 1 | 6 |

(① and ⑥ are the NIC-shared memory steps; ② to ⑤ are within the chain.)
($\alpha$) polling-based kernel-bypass (using DPDK’s PMD) + polling-based
zero-copy I/O for function chaining (using DPDK’s RTE RING);
($\beta$) event-driven kernel-bypass (using eBPF’s AF_XDP) + event-driven
zero-copy I/O for function chaining (using eBPF’s SKMSG).
(1) L2/L3 NF use case: For the L2/L3 NF use case, we study two alternatives:
first is ($\alpha$) NIC-shared memory packet exchange with polling-based
kernel-bypass (using DPDK’s PMD) + polling-based zero-copy I/O for function
chaining (using DPDK’s RTE RING); second is ($\beta$) NIC-shared memory packet
exchange with event-driven kernel-bypass (using eBPF’s AF_XDP) + event-driven
zero-copy I/O for function chaining (using eBPF’s SKMSG). We skip the
kernel-based NIC-shared memory packet exchange in this auditing, as it is
clearly unsuitable for L2/L3 NFs.
Table IV shows the overhead auditing of the L2/L3 NF scenario for both
($\alpha$) and ($\beta$). Compared to the optimal L2/L3 data plane model (f)
discussed
in §II-C, the polling-based shared memory communication approach ($\alpha$)
avoids any data copy, interrupt, and context switch, throughout the entire
data pipeline (from ① to ⑥ of Fig. 3). The event-driven alternative ($\beta$)
eliminates all the data copies as well. However, the use of AF_XDP and SKMSG
introduces additional interrupts and context switches. In particular, every
packet transfer within the chain incurs one interrupt and context switch,
which is a non-negligible overhead, especially if the chain grows in scale.
TABLE V: Overhead auditing of L4/L7 middlebox chain using shared memory
communication
| Data pipeline No. | | ① | ⑥ | ② | ③ | ④ | ⑤ | Total |
|---|---|---|---|---|---|---|---|---|
| # of copies | ($\gamma$) polling | 2 | 2 | 0 | 0 | 0 | 0 | 4 |
| | ($\delta$) event-driven | 2 | 2 | 0 | 0 | 0 | 0 | 4 |
| # of interrupts | ($\gamma$) polling | 2 | 1 | 0 | 0 | 0 | 0 | 3 |
| | ($\delta$) event-driven | 2 | 1 | 1 | 1 | 1 | 1 | 7 |
| # of context switches | ($\gamma$) polling | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
| | ($\delta$) event-driven | 1 | 1 | 1 | 1 | 1 | 1 | 6 |
| # of protocol processing tasks | ($\gamma$) polling | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
| | ($\delta$) event-driven | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
| # of serialization or deserialization (L7) | ($\gamma$) polling | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
| | ($\delta$) event-driven | 1 | 1 | 0 | 0 | 0 | 0 | 2 |

(① and ⑥ are the NIC-shared memory steps; ② to ⑤ are within the chain.)
($\gamma$) kernel-based NIC-shared memory packet exchange + polling-based
zero-copy I/O for function chaining (using DPDK’s RTE RING);
($\delta$) kernel-based NIC-shared memory packet exchange + event-driven zero-
copy I/O for function chaining (using eBPF’s SKMSG).
(2) L4/L7 middlebox use case: For the L4/L7 middlebox use case, we study two
alternatives: ($\gamma$) kernel-based NIC-shared memory packet exchange +
polling-based zero-copy I/O for function chaining (using DPDK’s RTE RING);
($\delta$) kernel-based NIC-shared memory packet exchange + event-driven zero-
copy I/O for function chaining (using eBPF’s SKMSG). We skip the kernel-bypass
NIC-shared memory packet exchange in this auditing, as L4/L7 middleboxes
depend on the kernel stack for protocol processing.
Table V shows the overhead auditing of the L4/L7 middlebox options ($\gamma$)
and ($\delta$). Compared to the optimal L4/L7 data plane model (d) in §II-C, the
polling-based ($\gamma$) and event-driven ($\delta$) shared memory
communication approaches avoid any data copy within the function chain (② to ⑤
in Fig. 3), because of the zero-copy I/O. However, moving a packet from the
NIC to shared memory (① in Table V) incurs two data copies, and vice versa (⑥
in Table V). One data copy comes from the packet exchange between the NIC and
the message broker (Fig. 3), where the kernel stack needs to copy the packet
from the kernel to the message broker in userspace, after protocol processing.
The message broker then moves the packet into shared memory, which introduces
the second copy. With a middlebox chain of two functions, shared memory
communication, ($\gamma$) or ($\delta$), shows no significant benefit over the
optimal L4/L7 data plane model (d) because of the data copies incurred when
moving packets between the NIC and shared memory: all three options introduce
four data copies throughout the entire data pipeline (from ① to ⑥ in Fig. 3
and Fig. 2). Shared memory communication for the L4/L7 middlebox scenario
shows its advantage of saving data copies (due to the zero-copy I/O) over the
L4/L7 data plane model (d) only when the chain grows, since the data copy
overhead in (d) increases with the chain length, whereas in ($\gamma$) and
($\delta$) it remains fixed at the NIC boundary.
Another essential asset of shared memory communication is that it completely
eliminates protocol processing, serialization, and deserialization overheads
within the chain. These tasks are performed before the packet is moved to
shared memory by the message broker, and vice versa (① and ⑥ in Table V). No
matter the size of the chain, the total # of protocol processing tasks or
serialization/deserialization tasks incurred when using shared memory
communication is always two. On the other hand, these overheads in the data
plane model (d) increase as the chain scales, indicating poor scalability.
The event-driven approach ($\delta$), which uses SKMSG to implement the zero-
copy I/O, incurs one interrupt and one context switch for each transmission
within the function chain (② to ⑤ in Fig. 3). This inevitably has a higher
latency compared to using DPDK’s RTE RING. With DPDK’s RTE RING, different
functions exchange packet descriptors entirely in userspace and avoid
expensive context switches. For the I/O latency going from one function to the
next, eBPF’s SKMSG needs $\sim$20 microseconds to send each packet descriptor.
On the other hand, DPDK’s RTE RING only needs $\sim$0.5 microseconds. This
penalty, caused by SKMSG’s kernel interrupts and context switching overheads,
makes the low-latency RTE RING ideal for building high-performance function
chains serving latency-sensitive workloads. However, DPDK’s RTE RING comes at
the cost of constant polling and thus constant resource consumption. From a
resource efficiency standpoint, SKMSG’s event-driven nature makes it more
efficient, because it does not consume CPU cycles when there is no traffic.
This is similar to AF_XDP, as they both belong to the eBPF system of Linux.
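To make the polling-based descriptor exchange concrete, the following minimal C sketch (the ring handle and helper names are our own illustrative choices, not code from the systems audited here) shows how one function could pass a packet descriptor to the next through a DPDK RTE RING entirely in userspace:

```c
#include <rte_ring.h>
#include <rte_mbuf.h>

/* Producer: hand off a packet descriptor (an rte_mbuf pointer) to the next
 * function in the chain. Only the pointer crosses the ring; the packet
 * payload stays in shared memory, so this path involves no copy, no
 * interrupt, and no context switch. */
static inline int pass_to_next_nf(struct rte_ring *chain_ring,
                                  struct rte_mbuf *pkt)
{
    return rte_ring_enqueue(chain_ring, pkt); /* 0 on success */
}

/* Consumer: the next function busy-polls the ring. This is the trade-off
 * discussed above: minimal latency, but CPU cycles burn even when the
 * ring is empty. */
static inline struct rte_mbuf *poll_from_prev_nf(struct rte_ring *chain_ring)
{
    void *obj = NULL;

    while (rte_ring_dequeue(chain_ring, &obj) != 0)
        ; /* spin until a descriptor arrives */
    return obj;
}
```

Because only a pointer crosses the ring, the per-transfer cost is essentially the enqueue/dequeue itself, consistent with the sub-microsecond figure above.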
The latency of SKMSG is less of a concern if there are other dominant
latencies masking it. This is often true for L4/L7 middleboxes, where
application-level latency and kernel protocol processing latency dominate the
total request delay. Further optimization of SKMSG, e.g., routing packet
descriptors directly between functions without mediation by the message broker
(details in §V-B), can considerably reduce the number of interrupts and
context switches that SKMSG generates.
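As a rough illustration of that optimization, the sketch below (the map layout, key choice, and program name are hypothetical, not taken from the systems evaluated here) shows the core of an SKMSG program that redirects a message descriptor to the socket of the next function through a sockmap; the payload itself is not copied:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical sockmap holding one socket per function in the chain. */
struct {
    __uint(type, BPF_MAP_TYPE_SOCKMAP);
    __uint(max_entries, 16);
    __type(key, __u32);
    __type(value, __u64);
} nf_sock_map SEC(".maps");

SEC("sk_msg")
int redirect_to_next_nf(struct sk_msg_md *msg)
{
    __u32 next = 1; /* hypothetical: index of the next function's socket */

    /* Redirect the message descriptor to the next socket; the payload is
     * not copied, but the kernel round trip still costs one interrupt and
     * one context switch per transfer, as measured above. */
    return bpf_msg_redirect_map(msg, &nf_sock_map, next, 0);
}

char _license[] SEC("license") = "GPL";
```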
reflection symmetries using $(\times,\times,\times,\phantom{\times})$ is
most effective.
It solves one instance more than the remaining settings and reduces the
running time, in comparison to not detecting and handling reflection
symmetries, by 4.7 % on all instances and 13.5 %
on the solvable instances.
The setting $(\times,\times,\times,\times)$, which disables lexicographic
reduction and instead applies (<ref>) when it is applicable,
still improves on the setting in which no symmetries are handled.
But in comparison to the other symmetry handling approaches, it is the least
effective. Since the motivation of this setting was to compare the effect
of (<ref>) with lexicographic reduction, we also separately
considered the 25 instances for which (<ref>) is applicable
to see whether it has a positive effect there.
None of these instances could be solved within the time limit, though.
§.§ Results for Benchmarking Instances
Besides the structured instances discussed in the previous section, we also
conducted experiments on general benchmarking instances.
The test sets that we considered are all instances from MIPLIB2017 [17] and
MINLPLIB [37], as well as the submitted instances of the SAT 2002 Competition [46].
To evaluate the impact of handling reflection symmetries, we removed all
instances from these test sets for which no reflection symmetries could be
detected. We refer to the corresponding test sets as miplib2017, minlplib, and
sat2002, respectively.
In contrast to the structured instances, we cannot evaluate whether our
framework reliably detects reflection symmetries for benchmarking instances.
Our expectation was that reflection symmetries are rare for linear problems
(miplib2017) and arise frequently for nonlinear problems (minlplib) and SAT
problems (sat2002).
Indeed, as Table <ref> shows, for 282 of the 819 instances
from sat2002, we could detect reflection symmetries, whereas we could find
only 60 instances from miplib2017 admitting reflection symmetries.
Among the 486 instances from minlplib, however, our framework could only
detect 6 instances admitting reflection symmetries.
This came as a surprise to us, since MINLPLIB also contains instances corresponding to
geometric packing problems (instances whose names start with “kall_”).
Inspecting these instances revealed two explanations for not detecting
the reflection symmetries.
On the one hand, these instances already contain symmetry handling
constraints. On the other hand, in contrast to Example <ref>, the box in
which the objects need to be placed is not fixed.
Instead, one is looking for a box of minimal dimensions that can fit all
objects. This is modeled asymmetrically by fixing the lower coordinate value and
introducing a variable to model the upper coordinate value of each dimension.
That is, although the real-world problem admits reflection symmetries, the
corresponding MINLP model is asymmetric.
In the following, we will therefore focus on the miplib2017 and sat2002
instances containing reflection symmetries, since the minlplib test set is
too small to draw reliable conclusions.
The running times are summarized in Table <ref>.
Note that the table reports only on 59 instances although
Table <ref> shows that there are 60 instances with
reflection symmetries.
To ensure a fair comparison of the different methods, however, we removed
the instance “tokyometro” since all but one setting reached the memory limit.

Discussion of MIPLIB2017
For miplib2017, we observe that the
$(\times,\times,\times,\phantom{\times})$ setting performs best w.r.t. the
number of solved instances.
It can solve 17 instances, while just handling permutation symmetries can
only solve 14 instances, and handling no symmetries at all solves 15
instances. Regarding the running time, however,
$(\times,\times,\times,\phantom{\times})$ and the settings only handling
permutation symmetries perform equally and are 4.7 % slower on all instances
(17.1 % on the solvable instances) than not handling symmetries.
It thus seems that handling reflection symmetries can help to
solve more instances; on average, however, it slows down the solving
process. As such, it is not a surprise that the mean running time of
$(\times,\times,\times,\times)$ is better than that of
$(\times,\times,\times,\phantom{\times})$.
To understand why not handling symmetries performs better than handling
symmetries, we compared the results for the 17 solvable instances for the
setting in which no symmetries are handled
and $(\times,\times,\times,\phantom{\times})$.
The following three observations could be made:
* some instances are rather easy, such that an improvement in running
time is negligible;
* for the two instances that cannot be solved when not handling
symmetries, $(\times, \times, \times, \phantom{\times})$ also needed
about 5900 and 6400 seconds, respectively; that is, also when handling
symmetries, these instances remain hard;
* the dual bound after presolving is (almost) optimal, i.e.,
it is sufficient to find an optimal solution.
While the power of symmetry handling lies in pruning symmetric
subproblems, which allows the dual bound to be improved more quickly, it
seems to hinder finding feasible or optimal solutions.
We conclude that, although handling symmetries on benchmarking instances
has a positive effect in general [40], the characteristics
of instances from MIPLIB2017 that admit reflection symmetries make symmetry
handling less suited to enhance branch-and-bound for these instances.
The second question that arises is why the
setting $(\times,\times,\times,\phantom{\times})$ has the same mean running
time as $(\times,\times,\phantom{\times},\phantom{\times})$ although it
solves three more instances.
Inspecting the symmetries that are found by the two different settings, we
observed that the number of generators varies a lot between only detecting
permutation symmetries and also reflection symmetries.
For example, although the detected symmetry group for one such instance
is larger when detecting reflection symmetries
($\approx 10^{91.5}$ group elements in comparison to $\approx 10^{90.9}$
for permutation symmetries), the number of generators we get from
bliss is 35 for reflection symmetries and 64 for permutation symmetries.
When handling symmetries via lexicographic reduction, we thus lose a lot of
potential reductions when computing reflection symmetries.
Moreover, for another instance, we obtain the same number
of generators corresponding to permutation symmetries; when handling
reflection symmetries, however, we detect fewer column/row symmetries.
That is, we miss the potential of specialized algorithms for column/row
symmetries. For the three instances that are additionally solved when
handling reflection symmetries, we either find more generators (one instance)
or we detect more row/column symmetries (two instances).
The explanation for the same mean running time thus indeed seems to be the
variability in the generators returned by bliss.
Discussion of SAT2002
On the sat2002 test set, the most effective setting solves 169 instances,
and thus almost all solvable instances, within the time limit and improves
upon only handling permutation symmetries by 17.8 %.
Taking Table <ref> into account, this behavior is not
surprising as at most 125 of the 292 reflection symmetric sat2002 instances
contain permutation symmetries.
That is, if reflection symmetries are not handled, a lot of instances
become asymmetric.
Comparison of running times and number of detected row/column symmetries for solvable sat2002 instances containing permutation symmetries ($\times$ = setting enabled, – = disabled).

all instances (76):

| sym. | row+col | refl. | simp. | # solved | time | # row/column symmetries |
|---|---|---|---|---|---|---|
| – | – | – | – | 72 | 63.10 | 1.00 |
| $\times$ | – | – | – | 74 | 39.67 | 14.87 |
| $\times$ | $\times$ | – | – | 74 | 39.80 | 14.87 |
| $\times$ | $\times$ | $\times$ | – | 74 | 50.57 | 7.01 |
| $\times$ | $\times$ | $\times$ | $\times$ | 74 | 53.28 | 7.01 |

feasible instances (27):

| sym. | row+col | refl. | simp. | # solved | time | # row/column symmetries |
|---|---|---|---|---|---|---|
| – | – | – | – | 27 | 24.06 | 1.00 |
| $\times$ | – | – | – | 25 | 27.90 | 4.59 |
| $\times$ | $\times$ | – | – | 25 | 27.93 | 4.59 |
| $\times$ | $\times$ | $\times$ | – | 27 | 26.88 | 0.44 |
| $\times$ | $\times$ | $\times$ | $\times$ | 27 | 26.93 | 0.44 |

infeasible instances (49):

| sym. | row+col | refl. | simp. | # solved | time | # row/column symmetries |
|---|---|---|---|---|---|---|
| – | – | – | – | 45 | 106.55 | 1.00 |
| $\times$ | – | – | – | 49 | 48.09 | 20.53 |
| $\times$ | $\times$ | – | – | 49 | 48.31 | 20.53 |
| $\times$ | $\times$ | $\times$ | – | 47 | 71.38 | 10.63 |
| $\times$ | $\times$ | $\times$ | $\times$ | 47 | 77.27 | 10.63 |
To allow for a fair comparison between the different symmetry handling
settings, we therefore also considered the subset of all solvable sat2002
instances that contain proper permutation symmetries.
This results in 76 instances and the corresponding results are summarized
in Table <ref>.
On these instances, we observe that handling reflection symmetries on top
of permutation symmetries decreases the performance by 27.5 %,
and this effect is even more pronounced on the infeasible instances, for
which the running time increases by 48.4 %.
A possible explanation for this unexpected behavior is again the variance
in the generators of the symmetry groups reported by bliss.
While the mean number of row/column symmetries detected per instance
is about 20.5 when only detecting permutation symmetries, it
drops to 10.6 when detecting reflection symmetries.
That is, when detecting reflection symmetries, the potential of handling
row/column symmetries by dedicated techniques cannot be exploited.
§.§ Conclusion and Outlook
In the introduction, we have formulated four main
goals (<ref>)–(<ref>), all of which could be achieved in this article.
Our abstract framework of symmetry detection graphs turned out to be a
flexible mechanism for detecting reflection symmetries in
MINLP and beyond, cf. Goal (<ref>).
Our open-source implementation could be used to detect reflection
symmetries in many applications, cf. Goal (<ref>), and the numerical
experiments showed that handling reflection symmetries can be crucial to
accelerate branch-and-bound for specific applications, cf. Goal (<ref>).
Although we devised methods for handling reflection symmetries, cf. Goal (<ref>), we noted that the performance improvement due to
handling symmetries heavily depends on the structure of the detected symmetry
group. Handling reflection symmetries thus might slow
down the solving process if it prevents our heuristics from detecting
row/column symmetries.
This latter observation opens directions for future research.
As we noted in our experiments, the generators of symmetry groups returned
by symmetry detection tools such as bliss heavily depend on the
structure of the symmetry detection graphs.
Thus, based on the returned generators, our heuristics can fail to detect
row and column symmetries.
To circumvent this issue, it might be promising to develop alternative
approaches for detecting row and column symmetries that depend less on the
structure of generators.
Ideally, one would use an exact mechanism to detect row/column symmetries,
but detecting such symmetries is as hard as the graph isomorphism
problem [7].
A possible future direction could thus be to exploit the specific structure
of the symmetry detection graphs to solve the graph isomorphism problem.
Moreover, for MIPLIB2017, we noted that some problems benefit from not
handling symmetries, because symmetry handling can hinder heuristics to
find feasible solutions.
A naive strategy for feasibility problems is thus to completely disable
symmetry handling.
For infeasible instances, however, this arguably slows down the solving
process, since a larger search space needs to be explored until
infeasibility is detected.
Instead, it could be interesting to investigate means to benefit from
handling symmetries in branch-and-bound, while removing the symmetry-based
restrictions in heuristics.
The author thanks Marc E. Pfetsch for very valuable discussions on the
choice of the data structure for encoding symmetry detection graphs in SCIP,
as well as a thorough code review.
This publication is part of the project
“Local Symmetries for Global Success” with project number
OCENW.M.21.299 which is financed by the Dutch Research Council (NWO).
[1]
Anders, M., Schweitzer, P.: Parallel computation of combinatorial symmetries.
In: 29th Annual European Symposium on Algorithms, ESA 2021,
September 6-8, 2021, Lisbon, Portugal (Virtual Conference), LIPIcs,
vol. 204, pp. 6:1–6:18. Schloss Dagstuhl - Leibniz-Zentrum für
Informatik (2021).
[2]
Anders, M., Schweitzer, P., Stieß, J.: Engineering a preprocessor for
symmetry detection.
CoRR abs/2302.06351 (2023).
[3]
Babai, L., Luks, E.M.: Canonical labeling of graphs.
In: Proceedings of the fifteenth annual ACM symposium on Theory of
computing - STOC '83. ACM Press (1983).
[4]
Belotti, P., Kirches, C., Leyffer, S., Linderoth, J., Luedtke, J., Mahajan, A.:
Mixed-integer nonlinear optimization.
Acta Numerica 22, 1–131 (2013).
[5]
Bendotti, P., Fouilhoux, P., Rottner, C.: Orbitopal fixing for the full
(sub-)orbitope and application to the unit commitment problem.
Mathematical Programming 186, 337–372 (2021).
[6]
Berthold, T.: Measuring the impact of primal heuristics.
Operations Research Letters 41(6), 611–614 (2013).
[7]
Berthold, T., Pfetsch, M.E.: Detecting orbitopal symmetries.
In: B. Fleischmann, K.H. Borgwardt, R. Klein, A. Tuma (eds.)
Operations Research Proceedings 2008, pp. 433–438. Springer Berlin
Heidelberg, Berlin, Heidelberg (2009)
[8]
Bessiere, C., Hebrard, E., Hnich, B., Walsh, T.: The complexity of global
constraints. In: Proceedings of the 19th National Conference on AI (2004)
[9]
Bödi, R., Herr, K., Joswig, M.: Algorithms for highly symmetric linear
and integer programs.
Mathematical Programming 137(1), 65–90 (2013).
[10]
Bolusani, S., Besançon, M., Bestuzheva, K., Chmiela, A., Dionísio, J.,
Donkiewicz, T., van Doornmalen, J., Eifler, L., Ghannam, M., Gleixner, A.,
Graczyk, C., Halbig, K., Hedtke, I., Hoen, A., Hojny, C., van der Hulst, R.,
Kamp, D., Koch, T., Kofler, K., Lentz, J., Manns, J., Mexi, G., Mühmer, E.,
Pfetsch, M.E., Schlösser, F., Serrano, F., Shinano, Y., Turner, M.,
Vigerske, S., Weninger, D., Xu, L.: The SCIP Optimization Suite 9.0
[11]
Cohen, J.S.: Computer algebra and symbolic computation: mathematical methods.
AK Peters, Natick, Massachusetts (2003)
[12]
Costa, A., Hansen, P., Liberti, L.: On the impact of symmetry-breaking
constraints on spatial branch-and-bound for circle packing in a square.
Discrete Applied Mathematics 161(1), 96–106 (2013).
[13]
van Doornmalen, J., Hojny, C.: A unified framework for symmetry handling
[14]
van Doornmalen, J., Hojny, C.: Efficient propagation techniques for handling
cyclic symmetries in binary programs.
INFORMS Journal on Computing 0(0), null (2024).
[15]
Flener, P., Frisch, A.M., Hnich, B., Kiziltan, Z., Miguel, I., Pearson, J.,
Walsh, T.: Breaking row and column symmetries in matrix models.
In: P. Van Hentenryck (ed.) Principles and Practice of Constraint
Programming - CP 2002, pp. 462–477. Springer Berlin Heidelberg, Berlin,
Heidelberg (2002)
[16]
Friedman, E.J.: Fundamental domains for integer programs with symmetries.
In: A. Dress, Y. Xu, B. Zhu (eds.) Combinatorial Optimization and
Applications, Lecture Notes in Computer Science, vol. 4616, pp.
146–153. Springer Berlin Heidelberg (2007).
[17]
Gleixner, A., Hendel, G., Gamrath, G., Achterberg, T., Bastubbe, M., Berthold,
T., Christophel, P.M., Jarck, K., Koch, T., Linderoth, J., Lübbecke, M.,
Mittelmann, H.D., Ozyurt, D., Ralphs, T.K., Salvagnin, D., Shinano, Y.:
MIPLIB 2017: Data-Driven Compilation of the 6th Mixed-Integer Programming
Library. Mathematical Programming Computation pp. 443–490 (2021).
[18]
Hojny, C.: Supplementary material for the article “Detecting and handling
reflection symmetries in mixed-integer (nonlinear) programming”.
[19]
Hojny, C.: Packing, partitioning, and covering symresacks.
Discrete Applied Mathematics 283, 689–717 (2020).
[20]
Hojny, C.: Polynomial size IP formulations of knapsack may require
exponentially large coefficients.
Operations Research Letters 48(5), 612–618 (2020).
[21]
Hojny, C., Pfetsch, M.E.: Polytopes associated with symmetry handling.
Mathematical Programming 175, 197–240 (2019).
[22]
Junttila, T., Kaski, P.: Conflict propagation and component recursion for
canonical labeling.
In: A. Marchetti-Spaccamela, M. Segal (eds.) Theory and Practice of
Algorithms in (Computer) Systems – First International ICST Conference,
TAPAS 2011, Rome, Italy, April 18–20, 2011. Proceedings, Lecture
Notes in Computer Science, vol. 6595, pp. 151–162. Springer (2011).
[23]
Junttila, T., Kaski, P.: Conflict propagation and component recursion for
canonical labeling.
In: A. Marchetti-Spaccamela, M. Segal (eds.) Theory and Practice of
Algorithms in (Computer) Systems – First International ICST Conference,
TAPAS 2011, Rome, Italy, April 18–20, 2011. Proceedings, Lecture
Notes in Computer Science, vol. 6595, pp. 151–162. Springer (2011).
[24]
Kaibel, V., Peinhardt, M., Pfetsch, M.E.: Orbitopal fixing.
Discrete Optimization 8(4), 595–610 (2011).
[25]
Kaibel, V., Pfetsch, M.E.: Packing and partitioning orbitopes.
Mathematical Programming 114(1), 1–36 (2008).
[26]
Khajavirad, A.: Packing circles in a square: a theoretical comparison of
various convexification techniques (2017).
[27]
Korte, B., Vygen, J.: Combinatorial Optimization: Theory and Algorithms, 6 edn.
Springer, Heidelberg (2018)
[28]
Liberti, L.: Automatic generation of symmetry-breaking constraints.
In: Combinatorial optimization and applications, Lecture Notes
in Computer Science, vol. 5165, pp. 328–338. Springer, Berlin (2008).
[29]
Liberti, L.: Symmetry in mathematical programming.
In: J. Lee, S. Leyffer (eds.) Mixed Integer Nonlinear Programming,
IMA Series, vol. 154, pp. 263–283. Springer New York (2011).
[30]
Liberti, L.: Reformulations in mathematical programming: automatic symmetry
detection and exploitation.
Mathematical Programming 131(1-2), 273–304 (2012).
[31]
Liberti, L., Ostrowski, J.: Stabilizer-based symmetry breaking constraints for
mathematical programs.
Journal of Global Optimization 60, 183–194 (2014)
[32]
Linderoth, J., Núñez Ares, J., Ostrowski, J., Rossi, F., Smriglio, S.:
Orbital conflict: Cutting planes for symmetric integer programs.
INFORMS Journal on Optimization 3(2), 139–153 (2021).
[33]
Margot, F.: Pruning by isomorphism in branch-and-cut.
Mathematical Programming 94(1), 71–90 (2002).
[34]
Margot, F.: Exploiting orbits in symmetric ILP.
Mathematical Programming 98(1–3), 3–21 (2003).
[35]
Margot, F.: Symmetry in integer linear programming.
In: M. Jünger, T.M. Liebling, D. Naddef, G.L. Nemhauser, W.R.
Pulleyblank, G. Reinelt, G. Rinaldi, L.A. Wolsey (eds.) 50 Years of Integer
Programming, pp. 647–686. Springer (2010)
[36]
McKay, B.D., Piperno, A.: Practical graph isomorphism, II.
Journal of Symbolic Computation 60, 94–112 (2014).
[37]
MINLPLib: A library of mixed-integer and continuous nonlinear programming instances.
[38]
Ostrowski, J.: Symmetry in integer programming.
PhD dissertation, Lehigh University (2009)
[39]
Ostrowski, J., Linderoth, J., Rossi, F., Smriglio, S.: Orbital branching.
Mathematical Programming 126(1), 147–178 (2011).
[40]
Pfetsch, M.E., Rehn, T.: A computational comparison of symmetry handling
methods for mixed integer programs.
Mathematical Programming Computation 11(1), 37–93 (2019).
[41]
Puget, J.F.: Automatic Detection of Variable and Value Symmetries.
Springer Berlin Heidelberg, Berlin, Heidelberg (2005).
[42]
Saff, E., Kuijlaars, A.: Distributing many points on a sphere.
The Mathematical Intelligencer 19, 5–11 (1997)
[43]
Sakallah, K.A.: Handbook of Satisfiability, Editors: Armin Biere, Marijn Heule,
Hans van Maaren, and Toby Walsh, chap. Symmetry and Satisfiability.
IOS Press (2021)
[44]
Salvagnin, D.: A dominance procedure for integer programming.
Master’s thesis, University of Padova, Padova, Italy (2005)
[45]
Salvagnin, D.: Symmetry breaking inequalities from the Schreier-Sims table.
In: W.J. van Hoeve (ed.) Integration of Constraint Programming,
Artificial Intelligence, and Operations Research, pp. 521–529. Springer
International Publishing (2018).
[46]
Sat 2002 competition: problem instances.
<https://www.cs.ubc.ca/~hoos/SATLIB/Benchmarks/SAT/New/Competition-02/sat-2002-beta.tgz>
[47]
Szabó, P.G., Markót, M.C., Csendes, T.: Global Optimization in Geometry
— Circle Packing into the Square, pp. 233–265.
Springer US, Boston, MA (2005).
[48]
Wächter, A., Biegler, L.T.: On the implementation of an interior-point filter
line-search algorithm for large-scale nonlinear programming.
Mathematical Programming 106, 25–57 (2006).
[49]
Zhu, W.: Unsolvability of some optimization problems.
Applied Mathematics and Computation 174(2), 921–926 (2006).
§ OVERVIEW OF IMPORTANT FUNCTIONS TO APPLY OUR SYMMETRY DETECTION FRAMEWORK
This appendix provides an overview of the most important functions needed
to extend an SDG within a symmetry detection callback.
Since our implementation of SDGs allows for four different types of nodes,
we have different functions for adding these nodes:
SCIPaddSymgraphOpnode() adds an operator node to an SDG;
SCIPaddSymgraphValnode() adds a numerical value node to an SDG;
SCIPaddSymgraphConsnode() adds a constraint node to an SDG.
Recall that we do not allow adding variable nodes to an SDG, because the framework ensures that every SDG contains all necessary variable nodes.
Instead, the indices of variable nodes can be accessed via the functions
SCIPgetSymgraphVarnodeidx() returns the index of the node
corresponding to a given variable;
SCIPgetSymgraphNegatedVarnodeidx() returns the index of the node
corresponding to a negated/reflected variable.
To add edges to a graph, the function
SCIPaddSymgraphEdge(), which adds an edge between two existing nodes of an SDG,
can be used.
To simplify the usage of SDGs, we also provide two functions that add gadgets
for certain variable structures to an SDG:
SCIPextendPermsymDetectionGraphLinear() adds a gadget for a linear
expression $a^{\top}x + b$
to an SDG;
SCIPaddSymgraphVarAggregation() adds a gadget for aggregated variables
to an SDG.
The second function has been introduced, since we require that no
aggregated or fixed variables are present in an SDG.
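To illustrate how these functions fit together, the following sketch outlines how a symmetry detection callback might encode a small product gadget. The function names are those listed above, but the parameter lists are abbreviated assumptions on our part, so treat this as a schematic of the call sequence rather than exact SCIP API usage; the SCIP headers document the real signatures.

```c
/* Schematic sketch: parameter lists are assumptions, not SCIP's exact API. */
static SCIP_RETCODE extendSDGForProduct(SCIP* scip, SYM_GRAPH* graph,
   SCIP_CONS* cons, SCIP_VAR* x, SCIP_VAR* y)
{
   int consnode;
   int prodnode;
   int xnode;
   int ynode;

   /* anchor the gadget at a constraint node */
   SCIP_CALL( SCIPaddSymgraphConsnode(scip, graph, cons, &consnode) );

   /* operator node representing the product; the operator identifier
    * (here 0) is a placeholder */
   SCIP_CALL( SCIPaddSymgraphOpnode(scip, graph, 0, &prodnode) );

   /* variable nodes already exist in every SDG; we only look up indices */
   xnode = SCIPgetSymgraphVarnodeidx(scip, graph, x);
   /* reflected counterpart of y, e.g., for a term involving -y */
   ynode = SCIPgetSymgraphNegatedVarnodeidx(scip, graph, y);

   /* wire the gadget together with unweighted edges */
   SCIP_CALL( SCIPaddSymgraphEdge(scip, graph, consnode, prodnode, FALSE, 0.0) );
   SCIP_CALL( SCIPaddSymgraphEdge(scip, graph, prodnode, xnode, FALSE, 0.0) );
   SCIP_CALL( SCIPaddSymgraphEdge(scip, graph, prodnode, ynode, FALSE, 0.0) );

   return SCIP_OKAY;
}
```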
§ DETAILED NUMERICAL RESULTS
In this appendix, we provide detailed numerical results for the tested
problem classes.
Tables <ref>–<ref> report on
the running times and primal-dual integrals for each instance of the 2- and
3-dimensional packing, kissing number, and energy problems that we
discussed in Section <ref>.
The number of items corresponds to the number of balls, spheres, and points
in these respective problems, whereas the settings refer to the settings
sym0–sym6 and the automatic setting as described in
Section <ref>.
Running times and primal-dual integrals for packing test set and dimension 2.
# items sym0 sym1 sym2 sym3 sym4 sym5 sym6 auto.
running time in seconds:
3 0.12 0.12 0.11 0.05 0.09 0.07 0.09 0.09
4 2.84 1.94 0.67 0.30 0.44 0.38 0.29 0.28
5 0.79 0.59 0.38 0.17 0.30 0.31 0.22 0.21
6 43.09 25.56 7.41 0.61 0.50 0.40 0.70 0.68
7 7200.00 7200.00 6606.63 22.81 201.83 16.16 11.01 16.23
8 7200.00 7200.00 4352.59 10.14 70.32 22.82 6.68 12.14
9 7200.00 7200.00 7200.00 644.78 7200.00 249.99 365.62 103.95
10 7200.00 7200.00 7200.00 267.77 7200.00 173.67 153.78 61.42
11 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
12 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
13 7200.00 7200.00 7200.00 7200.00 7200.00 358.37 351.27 301.84
14 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
primal-dual integral:
3 5 5 6 2 3 2 3 4
4 35 28 16 11 9 9 8 8
5 29 23 19 10 14 14 12 11
6 802 527 151 22 12 9 26 24
7 120899 109585 31757 210 941 109 133 123
8 145839 118007 52138 285 1489 227 90 120
9 234515 269194 152641 6449 67037 1741 2826 667
10 266727 224203 229408 5454 73878 2567 1027 754
11 355432 344742 334154 154772 102571 66800 74434 50360
12 361376 353135 351864 199890 128913 20578 77572 68018
13 368337 354376 353221 190939 98344 7071 8769 6855
14 408953 408172 405794 277031 200833 46888 128374 105141
Running times and primal-dual integrals for packing test set and dimension 3.
# items sym0 sym1 sym2 sym3 sym4 sym5 sym6 auto.
running time in seconds:
3 2.09 0.93 0.35 0.29 0.22 0.25 0.27 0.41
4 1.49 0.85 0.29 0.29 0.31 0.31 0.29 0.28
5 7200.00 7200.00 5921.24 380.15 626.00 414.19 67.69 73.69
6 6663.82 1961.23 341.28 10.73 34.86 13.46 8.40 5.87
7 7200.00 7200.00 7200.00 631.20 2443.29 645.28 287.70 395.11
8 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
9 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
10 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
11 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
12 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
13 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
14 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
primal-dual integral:
3 17 11 9 6 5 6 7 9
4 19 12 5 11 10 10 9 8
5 52376 22897 5366 437 566 421 86 88
6 48670 15778 2794 210 474 128 126 94
7 154538 164221 110088 6448 10830 3605 2901 2755
8 259241 231341 176139 133512 126275 120202 106350 106084
9 281270 271892 224430 154967 92668 66342 79984 69743
10 319452 304708 285611 206888 154403 158242 138025 127942
11 342112 334613 314555 251442 144333 128962 117638 115709
12 361785 345513 334208 267405 171725 163406 166023 145259
13 362833 355450 341441 277927 143289 173997 127984 115555
14 373245 388226 343286 299399 227500 207009 208462 153976
Running times and primal-dual integrals for kissing test set and dimension 2.
# items sym0 sym1 sym2 sym3 sym4 sym5 sym6 auto.
running time in seconds:
3 0.02 0.03 0.08 0.01 0.06 0.02 0.04 0.04
4 0.07 0.18 0.16 0.14 0.10 0.15 0.16 0.17
5 0.06 0.35 0.28 0.17 0.39 0.29 0.17 0.17
6 0.16 0.48 0.47 0.23 0.31 0.61 0.61 0.26
7 7200.00 7200.00 1136.23 2.36 22.42 3.62 1.75 2.72
8 7200.00 7200.00 7200.00 5.89 340.06 18.33 6.34 6.11
9 7200.00 7200.00 7200.00 11.85 451.55 9.87 7.22 4.50
10 7200.00 7200.00 7200.00 51.76 7200.00 86.73 54.52 17.89
11 7200.00 7200.00 7200.00 175.53 7200.00 355.42 126.20 29.58
12 7200.00 7200.00 7200.00 448.92 7200.00 1408.36 1578.97 380.60
13 7200.00 7200.00 7200.00 1928.51 7200.00 3137.21 2976.38 116.21
14 7200.00 7200.00 7200.00 5501.44 7200.00 7200.00 7200.00 849.15
primal-dual integral:
3 2 3 7 1 6 2 2 3
4 5 14 16 8 9 11 9 10
5 6 30 24 9 31 29 14 14
6 9 42 45 20 25 47 33 17
7 38404 21079 5254 99 209 159 54 74
8 245267 213439 101851 264 1873 651 149 201
9 313339 306493 286624 342 1978 193 159 205
10 392407 414595 404572 1598 94434 1503 1095 406
11 491427 491463 484365 4463 104650 5516 1994 682
12 527099 527103 524806 9572 239255 13014 18173 5704
13 555106 555107 555094 45741 152492 34012 29414 1369
14 577444 578765 577441 130808 177820 328974 121905 10812
Running times and primal-dual integrals for kissing test set and dimension 3.
# items sym0 sym1 sym2 sym3 sym4 sym5 sym6 auto.
running time in seconds:
3 0.02 0.02 0.03 0.02 0.05 0.02 0.01 0.02
4 0.02 0.03 0.04 0.03 0.05 0.03 0.02 0.02
5 0.04 0.04 0.04 0.01 0.07 0.02 0.02 0.02
6 0.04 0.03 0.03 0.03 0.07 0.02 0.03 0.02
7 0.05 0.05 0.06 0.04 0.08 0.04 0.03 0.04
8 0.04 0.04 0.07 0.03 0.06 0.03 0.05 0.08
9 0.07 0.08 0.07 0.04 0.08 1.37 0.07 0.03
10 0.09 0.09 0.08 0.05 0.13 0.07 0.73 0.05
11 0.10 0.13 0.54 0.07 0.09 1.97 0.12 0.92
12 0.10 1.09 0.14 0.06 0.15 2.08 0.20 0.07
13 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
14 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
primal-dual integral:
3 2 2 3 2 5 2 1 2
4 2 3 4 3 5 3 2 2
5 4 4 4 1 7 2 2 2
6 4 3 3 3 7 2 3 2
7 5 4 6 4 8 4 3 4
8 4 4 7 3 6 3 5 8
9 7 8 7 4 8 137 7 3
10 9 9 8 5 13 7 18 5
11 10 13 54 7 9 197 9 33
12 10 22 14 6 15 193 11 7
13 61672 61411 61412 61568 61612 61583 62150 62493
14 92105 92101 92102 93511 92106 92116 93969 92138
Running times and primal-dual integrals for energy test set and dimension 2.
# items sym0 sym1 sym2 sym3 sym4 sym5 sym6 auto.
running time in seconds:
3 2.17 0.96 0.37 0.14 0.31 0.30 0.20 0.14
4 28.92 14.75 4.69 0.90 3.64 1.87 1.36 1.35
5 637.64 330.00 80.48 3.96 6.50 3.74 2.06 2.00
6 7200.00 7200.00 7200.00 20.58 169.95 36.29 19.36 19.48
7 7200.00 7200.00 7200.00 131.63 1669.89 171.30 95.21 80.70
8 7200.00 7200.00 7200.00 733.38 7200.00 2358.56 7200.00 7200.00
9 7200.00 7200.00 7200.00 7200.00 7200.00 3555.87 3074.32 1258.01
10 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
11 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
12 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
13 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
14 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
primal-dual integral:
3 6 4 3 3 2 3 5 3
4 59 30 22 5 14 12 11 11
5 1048 733 175 38 25 21 17 13
6 27577 16940 15275 105 339 95 67 60
7 103265 91918 71028 558 3115 460 300 221
8 165382 159741 138685 3004 61183 5533 8552 4083
9 209730 204553 183465 13291 78012 9411 7622 3704
10 248606 244166 225948 56374 138046 64303 60746 38110
11 268921 265619 246456 101186 161925 98961 98287 76904
12 284270 286673 276693 153360 201235 146922 145253 126222
13 303207 301486 290631 178830 210680 163230 162411 123375
14 313677 311820 308313 214843 243370 206209 200767 165437
Running times and primal-dual integrals for energy test set and dimension 3.
# items sym0 sym1 sym2 sym3 sym4 sym5 sym6 auto.
running time in seconds:
3 7200.00 1904.11 179.29 73.14 358.43 357.69 24.35 772.85
4 7200.00 7200.00 7200.00 1889.64 7200.00 7200.00 1448.27 1602.78
5 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
6 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
7 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
8 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
9 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
10 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
11 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
12 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
13 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
14 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00 7200.00
primal-dual integral:
3 1528 447 44 23 80 80 11 186
4 14737 6781 2051 491 2658 1855 385 408
5 77151 59093 36746 12087 16408 12026 5531 5184
6 121289 104487 84450 41473 61835 46654 31565 31476
7 157158 143920 126967 86664 100511 88191 72522 71813
8 178762 168522 156057 120629 133847 120473 109227 108159
9 195888 186304 172755 146157 134636 128682 116954 115462
10 208562 199569 187568 164420 158820 151279 141834 140935
11 219253 213879 201490 185224 174839 168161 159821 159343
12 225733 223219 210340 192819 188728 182468 175968 174244
13 233884 229828 221233 205331 197518 190162 185491 184832
14 240182 238349 227048 216176 209512 202497 198844 198483
Affiliations:

1. Department of Mathematics, University of Utah, Salt Lake City, UT, USA
2. Department of Mathematics, University of North Carolina, Chapel Hill, NC, USA
3. Advanced Medical Imaging Lab, University of North Carolina Medical Center, Chapel Hill, NC, USA
4. University of North Carolina School of Medicine, Chapel Hill, NC, USA
5. Division of Cardiology, Department of Medicine, University of North Carolina, Chapel Hill, NC, USA
6. Department of Biomedical Engineering, University of California Irvine, Irvine, CA, USA
7. Departments of Mathematics, Applied Physical Sciences, and Biomedical Engineering, University of North Carolina, Chapel Hill, NC, USA
8. Carolina Center for Interdisciplinary Applied Mathematics, University of North Carolina, Chapel Hill, NC, USA
9. Computational Medicine Program, University of North Carolina, Chapel Hill, NC, USA
10. McAllister Heart Institute, University of North Carolina, Chapel Hill, NC, USA
11. Departments of Mathematics and Biomedical Engineering, University of Utah, Salt Lake City, UT, USA

*Corresponding author:<EMAIL_ADDRESS>
# A Model of Fluid-Structure and Biochemical Interactions for Applications to
Subclinical Leaflet Thrombosis
Aaron Barrett, Jordan A. Brown, Margaret Anne Smith, Andrew Woodward, John P. Vavalle, Arash Kheradvar, Boyce E. Griffith, Aaron L. Fogelson
###### Abstract
Subclinical leaflet thrombosis (SLT) is a potentially serious complication of
aortic valve replacement with a bioprosthetic valve in which blood clots form
on the replacement valve. SLT is associated with increased risk of transient
ischemic attacks and strokes and can progress to clinical leaflet thrombosis.
SLT following aortic valve replacement also may be related to subsequent
structural valve deterioration, which can impair the durability of the valve
replacement. Because of the difficulty in clinical imaging of SLT, models are
needed to determine the mechanisms of SLT and eventually to predict which
patients will develop SLT. To this end, we develop methods to simulate leaflet
thrombosis that combine fluid-structure interaction and a simplified
thrombosis model that allows for deposition along the moving leaflets.
Additionally, this model can be adapted to model deposition or absorption
along other moving boundaries. We present convergence results and quantify the
model’s ability to realize changes in valve opening and pressures. These new
approaches are an important advancement in our tools for modeling thrombosis
in which they incorporate both adhesion to the surface of the moving leaflets
and feedback to the fluid-structure interaction.
## 1 Introduction
Subclinical leaflet thrombosis is a potentially serious complication of
bioprosthetic aortic valve replacement and may occur following either surgical
or transcatheter aortic valve replacement. Although bioprosthetic heart valves
(BHVs) are remarkably less thrombogenic than mechanical heart valves (MHVs),
clinical valve thrombosis can occur as a life-threatening complication. Recent
studies [38, 39, 51] have suggested that the rate of subclinical leaflet
thrombosis (SLT) is as high as 13–38% [47]. SLT is associated with increased
risk of transient ischemic attacks and strokes, acute myocardial infarction,
and accelerated valve deterioration [45]. Further, if left untreated, SLT can
progress to clinical valve thrombosis. While a cardiac computed tomography
(CT) scan can detect SLT, predicting which patients will develop SLT is
currently not possible. Accordingly, there is a need for computational
tools to model the fluid-structure and biochemical interactions that
predispose a particular patient to develop SLT.
Prior work to model leaflet thrombosis has focused on computational fluid
dynamics (CFD) simulations of blood flow through the valve. Plitman Mayo et
al. [44] performed CFD experiments of deployed transcatheter aortic valve
replacements (TAVRs) to determine areas of stagnated blood flow, suggesting
possible sites of thrombosis formation. Vahidkhah et al. [58] compared blood
residence times behind the coronary and non-coronary leaflets after a TAVR
procedure and determined similar residence times for all the leaflets. Kivi et
al. [29] performed two dimensional fluid-structure interaction (FSI)
simulations with leaflets of varying stiffness. A common finding in CFD and
FSI simulations is the presence of stagnant regions in the aortic sinus, in
which blood clots are thought to form. Hatoum et al. [23] combined a CFD model
of flow through patient specific geometry post-TAVR with a reduced order model
that predicted thrombus growth based on the wall shear stress and percent
stasis volume measurements. While they were able to determine a correlation
between circulation and amount of thrombosis, they concluded that finer flow
metrics or FSI analysis are needed to fully predict thrombosis.
Mathematical and computational models of thrombosis have also been developed,
but methods suitable for modeling thrombosis on dynamic flexible structures,
which is critical for describing leaflet thrombosis, are lacking. Fogelson et
al. [12, 36] developed a model of intravascular platelet deposition and
determined the sensitivity of thrombus formation due to various chemical and
platelet factors. Du and Fogelson [8] developed a multiphase model of platelet
aggregation in which the thrombus is modeled as a viscoelastic fluid. This
model can be seen as an extension of models by Fogelson and Guy [11] that were
created to study thrombus formation in a moving fluid. Models describing
flowing platelets and platelet deposition onto a stationary vessel wall have
been developed using a variety of multiscale modeling and computational
approaches [7, 59, 63, 55]. These models describe both fluid-phase transport
of platelets and the influence of platelet deposits on the hemodynamics
through and near the deposits. In these models, the platelets deposit over
stationary surfaces. However, to our knowledge, no thrombosis model has yet
been developed that allows for thrombus growth on a surface whose motion is
determined by solving an FSI problem, e.g., a heart valve leaflet.
There are several models that couple the advection and diffusion of chemical
species and their sources from immersed boundaries [50, 49, 25]. Typically,
these models use sources that are then spread from the immersed boundary to
the surrounding fluid using the regularized delta function. Restricting
species from diffusing across the interface remains a challenge. While many
different methods have been proposed to restrict diffusion and enforce Robin
boundary conditions across a moving interface [56, 27, 60, 54, 24], there are
far fewer that have tested the method in the context of an immersed boundary
model. Chen and Lai [6] used a diffuse domain approach to model absorption of
surfactants on a surface. Their approach is based on the methods introduced by
Li et al. [35], who demonstrated that this method enforces the boundary
condition with first-order accuracy. In the methods used herein, we enforce the
boundary condition without smoothing the interface, leading to second-order
accuracy up to and including the boundary [5].
The present study introduces new numerical methods to simulate the deposition
of material onto thin moving leaflets. The leaflet and fluid motion are
determined through an FSI calculation and the material deposition feeds back
onto the FSI calculation by modifying the leaflet’s mechanical properties.
While we refer to the fluid as blood and the deposited material as platelets,
the current paper deals only with a prototype of a situation that would arise
in modeling leaflet thrombosis. In a complete model of leaflet thrombosis, the
deposited material would consist of platelets, fibrin, and potentially other
inflammatory cells. The model would include mechanisms for activating
platelets through contact with the leaflet surface, exposure to high shear
stress, or encounter of soluble platelet agonists [14, 8, 9]. It would also
include treatment of coagulation biochemistry [37, 36, 34, 31] coupled with
fibrin polymerization [15, 13]. The current work is a major step towards
simulating the dynamics of such a model.
## 2 Continuous Equations
We consider an FSI model of thrombus formation on the aortic valve leaflets.
The valve geometry is created via slicing a three-dimensional reconstruction
of a typical trileaflet aortic valve, as will be discussed further in section
3.1. In this simplified model, fluid phase platelets can bind to the leaflet
surface while the surface-bound platelets stiffen the leaflets and can also
dissociate back into the fluid.
### 2.1 Fluid-Structure Interaction
The fluid-structure system is modeled using the immersed finite element/finite
difference method [21]. In this approach, a fixed computational domain
$\Omega$ is partitioned into a time-dependent fluid subdomain
$\Omega_{t}^{\text{f}}$ and a time-dependent solid subdomain
$\Omega_{t}^{\text{s}}$, so that
$\Omega=\Omega_{t}^{\text{f}}\cup\Omega_{t}^{\text{s}}$. The fluid domain is
further subdivided into the lumen $\Omega_{t}^{\text{f}^{-}}$ (i.e., the space
occupied by the blood, in which platelets are free to advect and diffuse) and
the space outside the aortic root $\Omega_{t}^{\text{f}^{+}}$, with
$\Omega_{t}^{\text{f}}=\Omega_{t}^{\text{f}^{-}}\cup\Omega_{t}^{\text{f}^{+}}$;
see Figure 1. We denote Eulerian physical coordinates with $\mathbf{x}$. The
solid domain is tracked using Lagrangian material coordinates $\mathbf{X}$,
and the mapping between the reference and current coordinates is
$\bm{\chi}\mathopen{}\left(\mathbf{X},t\right)\mathclose{}$. The motion of the
fluid-structure system is described by
$\displaystyle\rho\mathopen{}\left(\frac{\partial\mathbf{u}\mathopen{}\left(\mathbf{x},t\right)\mathclose{}}{\partial
t}+\mathbf{u}\mathopen{}\left(\mathbf{x},t\right)\mathclose{}\cdot\nabla\mathbf{u}\mathopen{}\left(\mathbf{x},t\right)\mathclose{}\right)\mathclose{}$
$\displaystyle=-\nabla
p\mathopen{}\left(\mathbf{x},t\right)\mathclose{}+\mu\nabla^{2}\mathbf{u}\mathopen{}\left(\mathbf{x},t\right)\mathclose{}+\mathbf{f}\mathopen{}\left(\mathbf{x},t\right)\mathclose{},$
(1a)
$\displaystyle\nabla\cdot\mathbf{u}\mathopen{}\left(\mathbf{x},t\right)\mathclose{}$
$\displaystyle=0,$ (1b)
$\displaystyle\mathbf{f}\mathopen{}\left(\mathbf{x},t\right)\mathclose{}$
$\displaystyle=\int_{\Omega_{0}^{\text{s}}}\mathbf{F}\mathopen{}\left(\mathbf{X},t\right)\mathclose{}\,\delta\mathopen{}\left(\mathbf{x}-\bm{\chi}\mathopen{}\left(\mathbf{X},t\right)\mathclose{}\right)\mathclose{}\,d\mathbf{X},$
(1c) $\displaystyle\frac{\partial\bm{\chi}}{\partial
t}\mathopen{}\left(\mathbf{X},t\right)\mathclose{}$
$\displaystyle=\int_{\Omega}\mathbf{u}\mathopen{}\left(\mathbf{x},t\right)\mathclose{}\,\delta\mathopen{}\left(\mathbf{x}-\bm{\chi}\mathopen{}\left(\mathbf{X},t\right)\mathclose{}\right)\mathclose{}\,d\mathbf{x}=\mathbf{u}\mathopen{}\left(\bm{\chi}\mathopen{}\left(\mathbf{X},t\right)\mathclose{},t\right)\mathclose{},$
(1d)
in which $\mathbf{u}\mathopen{}\left(\mathbf{x},t\right)\mathclose{}$ and
$p\mathopen{}\left(\mathbf{x},t\right)\mathclose{}$ are the Eulerian velocity
and pressure, respectively,
$\mathbf{f}\mathopen{}\left(\mathbf{x},t\right)\mathclose{}$ is the Eulerian
force density, $\mathbf{F}\mathopen{}\left(\mathbf{X},t\right)\mathclose{}$ is
the Lagrangian force density, which is determined in a manner specified below,
and $\delta\mathopen{}\left(\mathbf{x}\right)\mathclose{}$ is the Dirac delta
function. The fluid density $\rho$ and viscosity $\mu$ are assumed to be
constant. Equations 1a and 1b are the well known Navier-Stokes equations and
hold across the entire computational domain $\Omega$. Equations 1c and 1d
couple the Lagrangian and Eulerian variables. The integral in equation 1c is
over the reference configuration of the solid subdomain while that in equation
1d is over the entire computational domain.
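To make equation 1c concrete, here is a minimal two-dimensional sketch of spreading a single Lagrangian point force onto a uniform grid, assuming Peskin's standard four-point regularized delta function, a node-centered grid, and no periodic wrap; these discretization choices are ours for illustration and need not match the implementation used in this work.

```c
#include <math.h>

/* Peskin's four-point regularized delta kernel (one-dimensional factor). */
static double ib4(double r)
{
    double a = fabs(r);
    if (a < 1.0)
        return 0.125 * (3.0 - 2.0*a + sqrt(1.0 + 4.0*a - 4.0*a*a));
    if (a < 2.0)
        return 0.125 * (5.0 - 2.0*a - sqrt(-7.0 + 12.0*a - 4.0*a*a));
    return 0.0;
}

/* Spread one Lagrangian point force F = (Fx, Fy) at position (X, Y) onto a
 * uniform grid with spacing h (a discretization of eq. 1c). dA is the
 * quadrature weight associated with the Lagrangian point. */
static void spread_force(double X, double Y, double Fx, double Fy,
                         int nx, int ny, double h, double dA,
                         double fx[ny][nx], double fy[ny][nx])
{
    int i0 = (int)floor(X / h) - 1; /* first of 4 grid nodes in x */
    int j0 = (int)floor(Y / h) - 1; /* first of 4 grid nodes in y */

    for (int j = j0; j < j0 + 4; ++j) {
        for (int i = i0; i < i0 + 4; ++i) {
            if (i < 0 || i >= nx || j < 0 || j >= ny)
                continue; /* sketch only: no periodic wrap */
            /* tensor-product delta, scaled by 1/h^2 in two dimensions */
            double w = ib4((X - i*h) / h) * ib4((Y - j*h) / h) / (h*h);
            fx[j][i] += Fx * w * dA;
            fy[j][i] += Fy * w * dA;
        }
    }
}
```

The interpolation step of equation 1d uses the same kernel weights, but sums grid velocities onto each Lagrangian point instead of scattering forces.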
For the current study, the aortic walls are treated as approximately rigid
while the valve leaflets are elastic and deformable. For rigid structures, we
use a penalty formulation that is intended to tether the structure in place
using the force
$\mathbf{F}\mathopen{}\left(\mathbf{X},t\right)\mathclose{}=\kappa\mathopen{}\left(\mathbf{X}-\bm{\chi}\mathopen{}\left(\mathbf{X},t\right)\mathclose{}\right)\mathclose{},$
(2)
in which $\kappa$ is the stiffness parameter [32]. In practice, we choose
$\kappa$ to be the largest stable value permitted by the numerical scheme so
that the structure’s motion is minimized.
Figure 1: The domain is decomposed into a solid subdomain
$\Omega_{t}^{\text{s}}$ (denoted in the black and tan curves) and a fluid
subdomain $\Omega_{t}^{\text{f}}$, which itself is partitioned into interior
subregion $\Omega_{t}^{\text{f}^{-}}$, which corresponds to the lumen, and an
exterior subregion $\Omega_{t}^{\text{f}^{+}}$, which corresponds to the space
outside the vessel. The lumen boundary
$\Gamma_{t}^{\text{f}^{-}}=\partial\Omega_{t}^{\text{f}^{-}}$ is composed of
regions where platelet binding can occur (shown in blue) and no penetration
conditions for the platelets (shown in red). The inlet to the domain is the
left ventricle outflow tract (LVOT) and the outlet is the ascending aorta.
Boundary conditions at the inlet are determined using a time-dependent
elastance based model of the heart, including the left ventricle (LV), mitral
valve (MV), and the left atrium (LA). The outlet boundary conditions are
determined using a three element Windkessel model [33, 22].
For elastic structures, which in this study are the valve leaflets, the response is given by the first Piola-Kirchhoff stress $\mathbb{P}$, which is determined from a strain-energy function $\Psi\left(\mathbb{F}\right)$ via $\mathbb{P}=\frac{\partial\Psi}{\partial\mathbb{F}}$, in which $\mathbb{F}=\frac{\partial\bm{\chi}}{\partial\mathbf{X}}$ is the deformation gradient tensor. Following the immersed finite element/difference approach of Vadala-Roth et al. [57], we split the strain-energy functional into deviatoric and dilational parts, $\Psi\left(\mathbb{F}\right)=W\left(\bar{\mathbb{F}}\right)+U\left(J\right),$ in which $J=\det\mathbb{F}$ is the Jacobian of the deformation gradient and $\bar{\mathbb{F}}=J^{-1/3}\mathbb{F}$. In what follows, we choose the dilational part of the energy to be
$U(J)=\frac{\kappa_{\text{stab}}}{2}\left(\log J\right)^{2},$ (3)
in which $\kappa_{\text{stab}}$ is the numerical bulk modulus. The Lagrangian
force density is then computed by requiring
$\int_{\Omega_{0}^{\text{s}}}\mathbf{F}(\mathbf{X},t)\cdot\mathbf{V}(\mathbf{X})\,d\mathbf{X}=-\int_{\Omega_{0}^{\text{s}}}\mathbb{P}(\mathbf{X},t):\nabla_{\mathbf{X}}\mathbf{V}(\mathbf{X})\,d\mathbf{X},$ (4)
for all smooth test functions
$\mathbf{V}\mathopen{}\left(\mathbf{X}\right)\mathclose{}$.
The leaflets are modeled as a hyperelastic material that follows an
exponential neo-Hookean model [18, 41]. The deviatoric strain energy
functional for this model is given by
$W(\bar{\mathbb{F}})=C_{10}\left(e^{C_{01}\left(\bar{I}_{1}-3\right)}-1\right),$ (5)
in which $C_{10}$ and $C_{01}$ are material constants and $\bar{I}_{1}$ is the
first deviatoric strain invariant of the modified right Cauchy-Green tensor,
$\bar{I}_{1}=\text{tr}\mathopen{}\left(\bar{\mathbb{C}}\right)\mathclose{}$,
which is defined in terms of the modified deformation tensor,
$\bar{\mathbb{C}}=\bar{\mathbb{F}}^{\text{T}}\bar{\mathbb{F}}$. The material
parameter $C_{10}$ is set to be a function of the bound platelet
concentration, as described in section 2.3.
### 2.2 Boundary Conditions
We use reduced order models to determine pressure-flow relationships in the
ascending aorta and left ventricular outflow tract (LVOT); see Figure 1. These
reduced order models are connected to the FSI model through boundary
conditions imposed on the fluid [33, 22]. We use a coupling scheme in which
the net flow rate through each of the boundary surfaces serves as an input to
the corresponding reduced order model. In turn, each reduced order model
determines a pressure that is prescribed on the corresponding boundary
surface. The net flow rate through the LVOT boundary is the integral of the
vertical component of the velocity over the portion of the bottom boundary of
the computational domain between the vessel walls. The net flow rate for the
aortic boundary is defined in a similar way. On the remainder of the
computational domain’s boundary, we use zero velocity boundary conditions.
We use a three-element Windkessel model [53] for the downstream reduced order model, which describes the aortic outflow,
$C\frac{dP_{\text{Wk}}}{dt}=Q_{\text{Ao}}-\frac{P_{\text{Wk}}}{R_{\text{p}}},$ (6)
$P_{\text{Ao}}=P_{\text{Wk}}+Q_{\text{Ao}}R_{\text{c}},$ (7)
in which $C$ is the compliance, $R_{\text{c}}$ is the characteristic
resistance, $R_{\text{p}}$ is the peripheral resistance, $P_{\text{Wk}}$ is
the Windkessel pressure, $Q_{\text{Ao}}$ is the computed volumetric flow rate
at the outlet of the ascending aorta, and $P_{\text{Ao}}$ is the pressure at the outlet of the ascending aorta, which is then prescribed as a boundary condition for the fluid.
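To make the coupling concrete, the following sketch advances the Windkessel state of equations 6 and 7 by one time step. The explicit Euler update is an assumption made here for illustration and is not necessarily the integrator used in the solver:

```python
def windkessel_step(P_wk, Q_ao, dt, C, R_p, R_c):
    """One explicit Euler step of the three-element Windkessel model.

    Equation 6: C dP_wk/dt = Q_ao - P_wk / R_p
    Equation 7: P_ao = P_wk + Q_ao * R_c
    """
    P_wk = P_wk + dt * (Q_ao - P_wk / R_p) / C  # update stored pressure
    P_ao = P_wk + Q_ao * R_c                    # outlet boundary pressure
    return P_wk, P_ao
```

At each step, the computed flow rate $Q_{\text{Ao}}$ is supplied by the fluid solver, and the returned $P_{\text{Ao}}$ is applied as the outlet pressure boundary condition.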
For the upstream model, which describes the inflow from the heart, we employ a time-dependent elastance-based left heart model [53],
$\frac{d\left(C_{\text{LA}}P_{\text{LA}}\right)}{dt}=Q_{\text{vein}}-Q_{\text{MV}},$ (8)
$\frac{d\left(C_{\text{LV}}P_{\text{LV}}\right)}{dt}=Q_{\text{MV}}-Q_{\text{LVOT}},$ (9)
$P_{\text{LVOT}}=P_{\text{LV}}-Q_{\text{LVOT}}R_{\text{LVOT}},$ (10)
$Q_{\text{MV}}=\begin{cases}0,&\text{if }P_{\text{LA}}\leq P_{\text{LV}},\\ \frac{P_{\text{LA}}-P_{\text{LV}}}{R_{\text{MV}}},&\text{if }P_{\text{LA}}>P_{\text{LV}},\end{cases}$ (13)
in which $C_{\text{LA}}$ and $C_{\text{LV}}$ are the time-dependent
compliances of the left atrium and left ventricle, respectively.
$R_{\text{LVOT}}$ and $R_{\text{MV}}$ are the resistances of the LVOT and
mitral valve, the latter of which is modeled as a diode. $P_{\text{LA}}$,
$P_{\text{LV}}$, and $P_{\text{LVOT}}$ are the left atrial, left ventricular,
and the LVOT pressures, and $Q_{\text{vein}}$, $Q_{\text{MV}}$, and
$Q_{\text{LVOT}}$ are the volumetric flow rates of the pulmonary veins, mitral
valve, and LVOT. In this model, $Q_{\text{vein}}$ is prescribed as a constant
inflow rate into the left atrium. $Q_{\text{LVOT}}$ is the computed flow rate
at the inlet of the computational domain. $P_{\text{LVOT}}$ is then prescribed
as a boundary condition for the momentum equation 1a. We determine the time-
dependent compliances $C(t)$ from specified elastance functions $E(t)$ via
$C(t)=1/E(t)$. We use the “two-Hill” elastance waveform given by Mynard et al. [43],
$E(t)=\left(E_{\text{max}}-E_{\text{min}}\right)\alpha(t)+E_{\text{min}},$ (14)
$\alpha(t)=\frac{k\,\frac{g_{1}}{1+g_{1}}\,\frac{1}{1+g_{2}}}{\max\left(\frac{g_{1}}{1+g_{1}},\frac{1}{1+g_{2}}\right)},$ (15)
$g_{i}=\left(\frac{t}{\tau_{i}}\right)^{m_{i}}.$ (16)
We use the elastance parameters in $E\mathopen{}\left(t\right)\mathclose{}$
for the left atrium from Mynard et al. [43]. The remaining parameters are fit
to experimental measurements of human aortic pressures $P_{\text{Ao}}$ and
aortic flow rates $Q_{\text{Ao}}$ from Murgo et al. [42] by taking the
experimental measurements of $Q_{\text{Ao}}$ as input to the Windkessel model,
and comparing the resulting values of $P_{\text{Ao}}$ to its experimental
values. We calculate the best-fit parameters to data from Murgo et al. [42]
for a “Type A” beat for the upstream model. The downstream model is fit using
the corresponding downstream data from Murgo et al. [42]. The fits were
created using MATLAB’s `fmincon`, a nonlinear optimization tool.
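For reference, the two-Hill waveform of equations 14-16 can be evaluated directly; the following is a minimal sketch with our own function and argument names:

```python
import numpy as np

def elastance(t, E_max, E_min, k, tau1, m1, tau2, m2):
    """Evaluate the 'two-Hill' elastance E(t) of equations 14-16 at a
    time t measured from the onset of contraction."""
    g1 = (t / tau1) ** m1
    g2 = (t / tau2) ** m2
    h1 = g1 / (1.0 + g1)   # rising Hill factor
    h2 = 1.0 / (1.0 + g2)  # decaying Hill factor
    alpha = k * h1 * h2 / np.maximum(h1, h2)
    return (E_max - E_min) * alpha + E_min
```

The time-dependent compliances in the heart model then follow as $C(t)=1/E(t)$.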
### 2.3 Mass Deposition Model
We couple the FSI model to a mass deposition model that includes a fluid-phase concentration $c_{\text{f}}\left(\mathbf{x},t\right)$ measured per unit volume and a surface-bound concentration field $C_{\text{b}}\left(\mathbf{X},t\right)$ measured per unit reference area. Although this model does not include the cellular and biochemical interactions describing thrombosis, it does include fields that we view as platelet populations, and we interpret the conversion of fluid-phase platelets in $c_{\text{f}}$ to surface-bound platelets in $C_{\text{b}}$ as platelet adhesion. The fluid-phase species diffuses and
advects with the local fluid velocity in the interior fluid domain
$\Omega_{t}^{\text{f}^{-}}$ and can be converted into the surface-bound
species along the boundary
$\Gamma_{t}\subset\Gamma_{t}^{\text{f}^{-}}=\partial\Omega_{t}^{\text{f}^{-}}$.
In the results in section 4, $\Gamma_{t}$ is the downstream side of one or
both of the leaflets. The surface-bound species moves with the structure and
can dissociate to become the fluid-phase species. The model equations are
$\frac{\partial c_{\text{f}}(\mathbf{x},t)}{\partial t}+\mathbf{u}(\mathbf{x},t)\cdot\nabla c_{\text{f}}(\mathbf{x},t)=D\nabla^{2}c_{\text{f}}(\mathbf{x},t),\quad\mathbf{x}\in\Omega_{t}^{\text{f}^{-}},$ (17a)
$\frac{\partial c_{\text{f}}(\mathbf{x},t)}{\partial\mathbf{n}}=0,\quad\mathbf{x}\in\Gamma_{t}^{\text{f}^{-}}\setminus\Gamma_{t},$ (17b)
$-D\frac{\partial c_{\text{f}}(\mathbf{x},t)}{\partial\mathbf{n}}=k_{\text{on}}\left(C_{\text{b}}^{\text{max}}-C_{\text{b}}(\bm{\chi}(\mathbf{X},t),t)\right)J_{\text{s}}\,c_{\text{f}}(\mathbf{x},t)-k_{\text{off}}C_{\text{b}}(\bm{\chi}(\mathbf{X},t),t)J_{\text{s}},\quad\mathbf{x}\in\Gamma_{t},$ (17c)
$\frac{\partial C_{\text{b}}(\mathbf{X},t)}{\partial t}=k_{\text{on}}\left(C_{\text{b}}^{\text{max}}-C_{\text{b}}(\mathbf{X},t)\right)c_{\text{f}}(\bm{\chi}(\mathbf{X},t),t)-k_{\text{off}}C_{\text{b}}(\mathbf{X},t),\quad\mathbf{X}\in\Gamma_{0},$ (17d)
in which $D$ is the diffusion coefficient, $k_{\text{on}}$ and
$k_{\text{off}}$ are the reaction rates for adhesion and dissociation,
respectively, $C_{\text{b}}^{\text{max}}$ is the carrying capacity of
$C_{\text{b}}$ per unit undeformed area along the boundary $\Gamma_{0}$, and
$J_{\text{s}}=\frac{dA}{da}$ is the surface Jacobian, which is the ratio of reference to current areas. The first term on the right-hand side of equation
17c gives the rate of binding of fluid-phase platelets with concentration
$c_{\text{f}}$ to the valve leaflet where the available binding sites have
surface density
$\mathopen{}\left(C_{\text{b}}^{\text{max}}-C_{\text{b}}\mathopen{}\left(\bm{\chi}\mathopen{}\left(\mathbf{X},t\right)\mathclose{},t\right)\mathclose{}\right)\mathclose{}J_{\text{s}}$
with respect to the current leaflet configuration. The second term gives the rate at which bound platelets with surface density $C_{\text{b}}\left(\bm{\chi}\left(\mathbf{X},t\right),t\right)J_{\text{s}}$ detach from the leaflet.
To model the effect of thrombosis over the valve leaflets, we set the
stiffness coefficient of the leaflets $C_{10}$ to be a function of the surface
concentration $C_{\text{b}}\mathopen{}\left(\mathbf{X},t\right)\mathclose{}$.
Because $C_{\text{b}}\mathopen{}\left(\mathbf{X},t\right)\mathclose{}$ is
defined only on the surface of the leaflet, we use a harmonic interpolation
procedure to extend the surface concentration into the interior of the
leaflet, where the Lagrangian forces are calculated. Specifically, we solve
Laplace’s equation
$\nabla^{2}C_{\text{b}}^{\text{in}}(\mathbf{X},t)=0,\quad\mathbf{X}\in\Omega^{\text{leaf}}_{0},$ (18)
$C_{\text{b}}^{\text{in}}(\mathbf{X},t)=\begin{cases}C_{\text{b}}(\mathbf{X},t),&\text{if }\mathbf{X}\in\Gamma_{0},\\ 0,&\text{otherwise},\end{cases}$ (21)
in which $\Omega^{\text{leaf}}_{0}$ is the leaflet domain in the initial
configuration. Having found
$C_{\text{b}}^{\text{in}}\mathopen{}\left(\mathbf{X},t\right)\mathclose{}$, we
then set the stiffness of the leaflet to be
$C_{10}(\mathbf{X},t)=C_{10}^{\text{base}}\left(\frac{\beta+1}{2}-\frac{\beta-1}{2}\cos\left(\frac{\pi C_{\text{b}}^{\text{in}}(\mathbf{X},t)}{C_{\text{b}}^{\text{max}}}\right)\right),$ (22)
in which $C_{10}^{\text{base}}$ is the stiffness with no accumulation and
$\beta C_{10}^{\text{base}}$ is the maximum stiffness.
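Equation 22 transcribes directly into code; a sketch:

```python
import numpy as np

def leaflet_stiffness(C_b_in, C_b_max, C10_base, beta):
    """Cosine blend of equation 22: returns C10_base when C_b_in = 0 and
    beta * C10_base when C_b_in = C_b_max, varying smoothly in between."""
    theta = np.pi * C_b_in / C_b_max
    return C10_base * ((beta + 1.0) / 2.0
                       - (beta - 1.0) / 2.0 * np.cos(theta))
```

Because the derivative of the blend vanishes at both endpoints, the stiffness responds gradually to the first deposited material and saturates smoothly near the carrying capacity.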
The parameters of the mass deposition model are chosen so that the reactions
occur on a similar time scale as the fluid-structure interactions. These
values are several orders of magnitude larger than those used in similar thrombosis models described previously [34, 11]. Use of physiologically
relevant reaction rates would require performing simulations over thousands of
computational cycles, which is currently not feasible. We are actively working
on a temporal multiscale method to meet this challenge and allow use of
realistic reaction rates.
## 3 Computational Models and Numerical Methods
The model is implemented in IBAMR, which provides implementations of the
immersed boundary method and several of its extensions along with support for
adaptive mesh refinement [19]. IBAMR utilizes libMesh for the finite element
representation of the structural deformations [28] and PETSc for linear
solvers [2, 3, 4]. Support for structured adaptive mesh refinement is provided
by SAMRAI [26]. While the model can be naturally extended to three spatial
dimensions, we describe the numerical implementation and results in two
spatial dimensions.
### 3.1 Imaged Model and Mesh Generation
Our two-dimensional aortic root geometry is informed by a three-dimensional
patient-specific aortic root model based on pre-procedural computed tomography
(CT) image data of a female patient preparing for aortic valve replacement at
UNC Medical Center. The images used in this study were obtained under a
protocol approved by the UNC Institutional Review Board (study number
18-0202). The CT scan was performed using a Siemens SOMATOM Definition CT
Scanner with an image resolution of $512\times 512\times 226$ and a voxel size
of
0.441 mm × 0.441 mm × 0.6 mm.
The patient images are segmented by a semi-automated method in ITK-SNAP [62],
which implements an active contour model that minimizes an energy functional
of voxel intensities [61]. The aortic root measures 28 mm in diameter and 7.68 cm in length, and the thickness of the aortic wall is 1.0 mm. The inflow boundary of the model
is truncated at the LVOT, and the outflow boundary of the model is truncated
downstream of the aortic valve before the first arterial bifurcation.
Artificial circular extensions are added at both boundaries using SOLIDWORKS
(Dassault Systèmes SOLIDWORKS Corporation, Waltham, MA, USA) to simplify the
application of boundary conditions to the computational model. The radius of
the vessel at both truncations is 21 mm. Idealized aortic valve replacement leaflets with a thickness of 0.7 mm are created based on the measurements from
Sahasakul et al. [48] and trimmed to fit within the reconstructed aortic root
in SOLIDWORKS. To derive our two-dimensional aortic root geometry from the
three-dimensional model, we extract a slice through the diameter of the aorta
using Coreform Cubit (Computational Simulation Software, LLC, American Fork,
UT, USA), which is a software application based on CUBIT from Sandia National Laboratories. We then use Cubit to smooth the angles in both the aortic root and
leaflet surfaces and to generate structural meshes consisting of triangular
elements.
### 3.2 Fluid-Structure Interaction
The fluid equations (1a) and (1b) are solved using a second-order Cartesian
staggered-grid finite difference method. The nonlinear term is approximated
using a piecewise parabolic method [46]. The resulting saddle point system is
solved using GMRES with a projection method as a preconditioner [20].
The solid subdomain $\Omega_{t}^{\text{s}}$ is discretized using
$\mathcal{C}^{0}$ finite elements. A triangulation $\mathcal{T}_{h}$ of the
structure is constructed. The size of each element in the triangulation is
chosen so that there is approximately one node per Cartesian grid cell. On
$\mathcal{T}_{h}$, we define Lagrangian basis functions
$\left\\{\phi_{l}\mathopen{}\left(\mathbf{X}\right)\mathclose{}\right\\}_{l=1}^{m}$,
in which $m$ is the total number of nodes in the triangulation. We approximate
the structural deformation and force using the basis functions via
$\bm{\chi}(\mathbf{X},t)=\sum_{l=1}^{m}\bm{\chi}_{l}(t)\,\phi_{l}(\mathbf{X}),$ (23)
$\mathbf{F}(\mathbf{X},t)=\sum_{l=1}^{m}\mathbf{F}_{l}(t)\,\phi_{l}(\mathbf{X}).$ (24)
Coupling between the fluid and structure is mediated using regularized delta
functions in equations 1c and 1d. Recently, Lee and Griffith [32] suggested
using delta functions with smaller support for structures in shear driven
regimes. Therefore, in this work, we use the three-point $B$-spline kernel for
the flexible valve leaflets, and a two-point piecewise linear kernel for the
nearly rigid walls of the aortic root.
### 3.3 Mass Deposition Model
The fluid-phase concentration field is approximated using a hybrid semi-Lagrangian cut-cell method [5]. For brevity, we omit the details and only highlight the changes to the discretization. To summarize, we split equation
17 into an advection step,
$\frac{\partial c_{\text{f}}(\mathbf{x},t)}{\partial t}+\mathbf{u}(\mathbf{x},t)\cdot\nabla c_{\text{f}}(\mathbf{x},t)=0,\quad\mathbf{x}\in\Omega_{t}^{\text{f}^{-}},$ (25)
and a diffusion step,
$\frac{\partial c_{\text{f}}(\mathbf{x},t)}{\partial t}=D\nabla^{2}c_{\text{f}}(\mathbf{x},t),\quad\mathbf{x}\in\Omega_{t}^{\text{f}^{-}},$ (26)
along with the boundary conditions equations 17b and 17c and the surface
concentration equation 17d. During the diffusion step, the domain
$\Omega_{t}^{\text{f}^{-}}$ is assumed to be fixed. The advective step is
treated with a semi-Lagrangian method using polyharmonic splines to
reconstruct the function $c_{\text{f}}$. The diffusion step is treated with a
cut-cell finite volume method. The surface concentration
$C_{\text{b}}\mathopen{}\left(\mathbf{X},t\right)\mathclose{}$ is solved for
by extrapolating the fluid-phase field
$c_{\text{f}}\mathopen{}\left(\mathbf{x},t\right)\mathclose{}$ to the boundary
and approximating the ODE in equation 17d.
#### 3.3.1 Diffusion
To approximate the diffusion step in equation 26, we employ a cut-cell finite
volume method in which the domain $\Omega_{t}^{f^{-}}$ is considered fixed for
the duration of this step. Integrating equation 26 over a grid cell
$\mathbf{c}_{i,j}$ that is entirely or partially interior to
$\Omega_{t}^{f^{-}}$ and dividing by that cell’s volume, we obtain
$\frac{1}{\left|\mathbf{c}_{i,j}\cap\Omega_{t}^{\text{f}^{-}}\right|}\int_{\mathbf{c}_{i,j}\cap\Omega_{t}^{\text{f}^{-}}}\frac{\partial c_{\text{f}}(\mathbf{x},t)}{\partial t}\,d\mathbf{x}=\frac{1}{\left|\mathbf{c}_{i,j}\cap\Omega_{t}^{\text{f}^{-}}\right|}\int_{\mathbf{c}_{i,j}\cap\Omega_{t}^{\text{f}^{-}}}D\nabla^{2}c_{\text{f}}(\mathbf{x},t)\,d\mathbf{x}.$ (27)
We define $C_{\text{f},i,j}$ as the cell average of
$c_{\text{f}}\mathopen{}\left(\mathbf{x},t\right)\mathclose{}$ in the cell
$\mathbf{c}_{i,j}\cap\Omega_{t}^{\text{f}^{-}}$. Replacing the left-hand side of equation 27 by the time derivative of this cell average and employing the divergence theorem on the right-hand side, we obtain
$\frac{dC_{\text{f},i,j}}{dt}=\frac{1}{\left|\mathbf{c}_{i,j}\cap\Omega_{t}^{\text{f}^{-}}\right|}\int_{\partial\left(\mathbf{c}_{i,j}\cap\Omega_{t}^{\text{f}^{-}}\right)}D\frac{\partial c_{\text{f}}(\mathbf{x},t)}{\partial\mathbf{n}}\,dA.$ (28)
The integral in equation 28 consists of two parts, an integral over the
boundary $\Gamma_{t}^{\text{f}^{-}}$ that is interior to cell
$\mathbf{c}_{i,j}$ and an integral over the portion of the boundary of the
cell $\mathbf{c}_{i,j}$ that is interior to $\Omega_{t}^{\text{f}^{-}}$. The
first part is an integral over the physical boundary and, using the boundary conditions provided in equations 17b and 17c, can be computed using techniques described in the next section. The second integral is discretized
using second order finite differences. This discretization requires the
computation of the cut cell volume
$\left|\mathbf{c}_{i,j}\cap\Omega_{t}^{f^{-}}\right|$, which is described in
section 3.3.4.
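The structure of the resulting semi-discrete update for a single cut cell is sketched below in two dimensions. The aperture and volume bookkeeping is deliberately simplified, and all names are ours:

```python
def cut_cell_rhs(C, i, j, D, dx, vol, apertures, bdry_flux):
    """Evaluate the right-hand side of equation 28 for cut cell (i, j).

    C         : 2D array of cell-averaged concentrations
    vol       : wetted volume of this cell (the cut cell volume)
    apertures : open fractions (ap_e, ap_w, ap_n, ap_s) of the four faces
    bdry_flux : integral of D dc/dn over the embedded boundary piece,
                evaluated from the boundary data of equations 17b and 17c
    """
    ap_e, ap_w, ap_n, ap_s = apertures
    # second-order differences approximate dc/dn on each open face piece
    flux = (ap_e * D * (C[i + 1, j] - C[i, j]) / dx
            + ap_w * D * (C[i - 1, j] - C[i, j]) / dx
            + ap_n * D * (C[i, j + 1] - C[i, j]) / dx
            + ap_s * D * (C[i, j - 1] - C[i, j]) / dx)
    return (flux + bdry_flux) / vol
```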
#### 3.3.2 Surface Reactions
Along part of the surface $\Gamma_{t}^{\text{f}^{-}}$, we allow for binding of
the fluid-phase species to the boundary and for unbinding of the surface-bound
species into the fluid, as described by equations 17c and 17d. We extract a
boundary mesh represented by $C^{0}$ elements from the volumetric leaflet mesh
as described in section 3.2. We maintain a representation of both the surface
concentration $C_{\text{b}}\mathopen{}\left(\mathbf{X},t\right)\mathclose{}$
per unit reference area and the fluid concentration
$c_{\text{f}}\mathopen{}\left(\mathbf{X},t\right)\mathclose{}$ per unit volume
restricted to the boundary. These values are represented using Lagrangian
basis functions
$\left\\{\psi_{l}\mathopen{}\left(\mathbf{X}\right)\mathclose{}\right\\}_{l=1}^{n_{\text{bd}}}$
in which $n_{\text{bd}}$ is the number of nodes of the boundary mesh. We note
these are the same basis functions used for the structural deformation, but
restricted to the surface. The concentrations along the boundary are
accordingly
$C_{\text{b}}(\mathbf{X},t)=\sum_{l=1}^{n_{\text{bd}}}C_{\text{b}}^{l}(t)\,\psi_{l}(\mathbf{X}),$ (29)
$c_{\text{f}}(\mathbf{X},t)=\sum_{l=1}^{n_{\text{bd}}}c_{\text{f}}^{l}(t)\,\psi_{l}(\mathbf{X}).$ (30)
The values $c_{\text{f}}^{l}$ are found by using a radial basis function
interpolant as described in section 3.3.3 to extrapolate the values of
$c_{\text{f}}\mathopen{}\left(\mathbf{x},t\right)\mathclose{}$ to the surface
nodes. The nodal values $C_{\text{b}}^{l}$ are found by solving the ODE in
equation 17d using a two stage Runge Kutta method.
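A sketch of one such nodal update is given below. We use Heun's method as a representative two-stage Runge-Kutta scheme; the particular variant used in the solver is an assumption on our part:

```python
def binding_rhs(C_b, c_f, k_on, k_off, C_b_max):
    """Right-hand side of equation 17d at a single boundary node."""
    return k_on * (C_b_max - C_b) * c_f - k_off * C_b

def heun_step(C_b, c_f, dt, k_on, k_off, C_b_max):
    """Two-stage Runge-Kutta (Heun) update of a nodal value C_b^l, with
    c_f the fluid-phase concentration extrapolated to that node."""
    f1 = binding_rhs(C_b, c_f, k_on, k_off, C_b_max)
    f2 = binding_rhs(C_b + dt * f1, c_f, k_on, k_off, C_b_max)
    return C_b + 0.5 * dt * (f1 + f2)
```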
This finite element representation allows for easy evaluation of the flux defined in equation 17c from the boundary to the fluid. To evaluate this flux, we require the value of the surface Jacobian $J_{\text{s}}=\frac{dA}{da}$, which relates areas in the reference configuration to areas in the current configuration. Because we are
using a $C^{0}$ representation of the surface, the Jacobian is discontinuous
at nodes. To obtain a continuous representation, we project $J_{\text{s}}$
onto the finite element basis [30]. In practice, this amounts to computing the
Jacobian at quadrature points along the surface.
#### 3.3.3 Reconstructions
Both the semi-Lagrangian step and the surface reactions involve reconstructing
$c_{\text{f}}\mathopen{}\left(\mathbf{x},t\right)\mathclose{}$ at various
points $\hat{\mathbf{x}}$ in the computational domain. The details of the
reconstruction procedure depend on where the reconstruction is being performed
within this domain. Away from the boundary, we use the four closest grid
points to $\hat{\mathbf{x}}$ to form a bilinear interpolant. If there are too
few points to form the bilinear interpolant (e.g., near cut-cells), we use a
radial basis function (RBF) interpolant [10, 52]. The RBF interpolant is
constructed via a polyharmonic spline
$q(\mathbf{x})=\sum_{j=1}^{k}\lambda_{j}\lVert\mathbf{x}-\mathbf{x}_{j}\rVert^{m}+\sum_{j=1}^{s}\beta_{j}p_{j}(\mathbf{x}),$ (31)
in which $m$ is an odd integer and
$p_{j}\mathopen{}\left(\mathbf{x}\right)\mathclose{}$ form a set of $s$
polynomial basis functions. The total number of points in the stencil $k$ is
chosen so that $k=2m+1$. The points $\mathbf{x}_{j}$ are the $k$ closest
points to the location $\hat{\mathbf{x}}$. We find the coefficients
$\lambda_{j}$ and $\beta_{j}$ by requiring
$q(\mathbf{x}_{j})=f_{j}\quad\text{for }j=1,\ldots,k,$ (32a)
$\sum_{j=1}^{k}\lambda_{j}\,p_{i}(\mathbf{x}_{j})=0\quad\text{for }i=1,\ldots,s.$ (32b)
Equation 32a gives the interpolation conditions, and equation 32b gives the orthogonality conditions that ensure a unique interpolant. This results in a linear system for the coefficients, which is solved using a QR algorithm. In our computations, we set the integer $m=3$ and use up to quadratic polynomials.
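The interpolation and orthogonality conditions combine into a small dense saddle-point system for $(\lambda,\beta)$. The sketch below assembles and solves that system; it uses a dense direct solve where the paper uses a QR algorithm, and the names are ours:

```python
import numpy as np

def phs_interpolate(x_hat, pts, vals, m=3):
    """Polyharmonic spline interpolant of equation 31, augmented with
    polynomials up to quadratics, evaluated at the point x_hat.

    pts  : (k, 2) array of stencil points x_j (k = 2m + 1 nearest points)
    vals : (k,) array of data values f_j
    """
    def poly(p):  # 1, x, y, x^2, xy, y^2  (s = 6 basis functions)
        x, y = p
        return np.array([1.0, x, y, x * x, x * y, y * y])

    k, s = len(pts), 6
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = r ** m                               # RBF block ||x_i - x_j||^m
    P = np.array([poly(p) for p in pts])     # polynomial block, (k, s)
    M = np.block([[A, P], [P.T, np.zeros((s, s))]])
    rhs = np.concatenate([vals, np.zeros(s)])  # equations 32a and 32b
    coef = np.linalg.solve(M, rhs)
    lam, beta = coef[:k], coef[k:]
    rr = np.linalg.norm(pts - np.asarray(x_hat), axis=-1)
    return lam @ (rr ** m) + beta @ poly(np.asarray(x_hat))
```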
#### 3.3.4 Cut Cell Geometries
In the cut-cell finite volume discretization of equation 26, we require the
computation of the geometries of cells cut by the boundary $\Gamma_{t}^{\text{f}^{-}}$. We denote the nodes of a Cartesian grid cell by $\mathbf{x}_{i+\frac{1}{2},j+\frac{1}{2}}$. To find cut cell volumes, we first
calculate the signed distance function to the surface at each node
$\mathbf{x}_{i+\frac{1}{2},j+\frac{1}{2}}$. To do this, we first find
intersections of the $C^{0}$ representation of the immersed structure with the
background Eulerian grid, and for each element of the immersed structure, we
calculate outward-facing normals. We note that this requires a consistent traversal (e.g., counter-clockwise) of the structure to ensure consistently oriented normals. Then, for each cell node
$\mathbf{x}_{i+\frac{1}{2},j+\frac{1}{2}}$ of the background grid, we find the
projection of the point onto each element and compute its distance from
$\mathbf{x}_{i+\frac{1}{2},j+\frac{1}{2}}$. If multiple minimal distance
projections exist, we use the angle weighted average of the projections [1].
The sign of the distance is computed using the previously computed structure
normal. Once we have the signed distances at each cell node, we can compute
partial cell volumes. Following Min and Gibou [40], we compute cell volumes by
decomposing the cell into simplices, for which analytic formulas for the
volume exist.
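To illustrate the simplex decomposition, the sketch below computes the wetted area of one square grid cell from the signed distances at its corners, assuming the level set varies linearly over each simplex. This is a simplification of the analytic formulas of Min and Gibou [40], and the names are ours:

```python
import numpy as np

def tri_area_neg(phi, area):
    """Area of the phi <= 0 portion of a triangle with vertex values phi,
    assuming phi is linear over the triangle."""
    p = np.sort(np.asarray(phi, dtype=float))
    if p[2] <= 0.0:
        return area   # entirely inside the domain
    if p[0] >= 0.0:
        return 0.0    # entirely outside
    if p[1] <= 0.0:   # two vertices inside: subtract the positive corner
        return area * (1.0 - (p[2] / (p[2] - p[0])) * (p[2] / (p[2] - p[1])))
    # one vertex inside: similar-triangle corner area
    return area * (p[0] / (p[0] - p[1])) * (p[0] / (p[0] - p[2]))

def cell_wetted_area(phi_corners, h):
    """Wetted area of a square cell of side h, given signed distances at
    its four corners listed in counter-clockwise order."""
    a, b, c, d = phi_corners
    half = 0.5 * h * h
    return tri_area_neg([a, b, c], half) + tri_area_neg([a, c, d], half)
```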
### 3.4 Time Stepping
In summary, the steps to advance the solution from time $t^{n}$ to time
$t^{n+1}$ are:
1. Compute the cut cell geometries.
2. Perform a half step of the diffusion solve, evolving both the fluid-phase and the surface-bound concentration fields.
3. Solve the Navier-Stokes equations and update the position of the immersed structure.
4. Update the cut cell geometries using the new position of the immersed structure.
5. Perform a full step of the semi-Lagrangian method, using the velocities from the Navier-Stokes solve.
6. Perform a half step of the diffusion solve, evolving the fluid-phase and surface-bound concentrations.
Our use of an explicit time stepping scheme for several of these steps requires the time step size to resolve the fastest time scale. In this case, the
fastest time scale is that of the leaflet elasticity. We determine an
empirical scaling relationship between the time step size and the stiffness of
the leaflet that maintains numerical stability under increasing leaflet
stiffness. Specifically, we choose the time step such that
$\Delta t=\frac{C_{\text{ts}}}{\sqrt{\max\left(C_{10}\right)}},$ (33)
in which $C_{\text{ts}}$ is chosen to be as large as possible.
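In code, the time step selection of equation 33 amounts to a single line; a sketch:

```python
import numpy as np

def stable_time_step(C10_field, C_ts):
    """Time step of equation 33: dt shrinks like 1/sqrt(max C10)."""
    return C_ts / np.sqrt(np.max(C10_field))
```

Because $C_{10}$ grows with the bound concentration through equation 22, the admissible step size decreases over the course of a simulation with accumulation.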
## 4 Results
Table 1 provides the values of all relevant physical and numerical parameters.
At $t=0$, the initial fluid phase concentration $c_{\text{f}}$ is set to be 1
throughout the domain $\Omega_{t}^{\text{f}^{-}}$. During the first two
cycles, the binding and unbinding coefficients, $k_{\text{on}}$ and $k_{\text{off}}$, are set to zero; afterward, they are reset to their non-zero
values. We emphasize that the binding and unbinding coefficients and the
diffusion coefficient are artificially increased by several orders of
magnitude compared to other clotting models [34, 11] to ensure that sufficient
binding can occur within the duration of the simulation.
Table 1: Values of the parameters used in the simulation.

| Structure parameters | | Deposition and fluid parameters | |
|---|---|---|---|
| $\kappa$ | 20.17 GPa/cm² | $k_{\text{on}}$ | 0.03321 cm³/(s·platelet) |
| $\kappa_{\text{stab}}$ | 58.4 MPa | $k_{\text{off}}$ | 0.01 1/s |
| $C_{01}$ | 3.25 | $D$ | 0.1 cm²/s |
| $C_{10}^{\text{base}}$ | 2.264 MPa | $C_{\text{b}}^{\text{max}}$ | $1.41\times 10^{7}$ platelet/cm² |
| $\beta$ | varies between 1 and 600 | $c_{\text{f}}^{\text{max}}$ | $1.5\times 10^{5}$ platelet/cm³ |
| | | $\rho$ | 1 g/cm³ |
| | | $\mu$ | 0.035 g/(cm·s) |

| Boundary model parameters | | | |
|---|---|---|---|
| $R_{\text{p}}$ | 0.9046 mmHg·s/mL | $\tau_{1,\text{LA}}$ | 0.09789 s |
| $C$ | 1.950 mL/mmHg | $\tau_{1,\text{LV}}$ | 0.0887 s |
| $R_{\text{c}}$ | 0.042 mmHg·s/mL | $m_{1,\text{LA}}$ | 1.32 |
| $Q_{\text{vein}}$ | 6.2 L/min | $m_{1,\text{LV}}$ | 2.404 |
| $R_{\text{LVOT}}$ | 0.015 mmHg·s/mL | $\tau_{2,\text{LA}}$ | 0.1602 s |
| $R_{\text{MV}}$ | 0.005 mmHg·s/mL | $\tau_{2,\text{LV}}$ | 0.4461 s |
| $E_{\text{max},\text{LA}}$ | 0.17 mmHg/mL | $m_{2,\text{LA}}$ | 13.1 |
| $E_{\text{min},\text{LA}}$ | 0.08 mmHg/mL | $m_{2,\text{LV}}$ | 20.952 |
| $E_{\text{min},\text{LV}}$ | 0.0265 mmHg/mL | $E_{\text{max},\text{LV}}$ | 0.16 mmHg/mL |
### 4.1 Convergence Study
The flow regime during peak systole is turbulent, with the largest Reynolds
number being approximately 5000. Because of the chaotic nature of the
simulation, convergence of the numerical method is not well defined. Small
changes in the simulation (e.g., grid spacing and time step size) can lead to large changes in the flow velocities. Further, the fluid-phase and surface concentrations, and hence the stiffness of the leaflets, are directly affected by the turbulent flow. Therefore, to assess the accuracy of the simulation, we
compare the average fluid velocity near peak systole across grid sizes. We
modify the model to use a parabolic velocity profile that corresponds to three-quarters systole. We generate meshes at three different resolutions with
maximum element edge lengths of 0.32, 0.24, and
0.18 mm, which correspond to 2, 3, and 4 elements
across the width of the leaflets, respectively. The background Cartesian grid
is refined such that there is approximately one structural mesh node per grid
cell. The modified model is then run without accumulation. Figure 2 shows the
average fluid velocity from time $t=0.5$ to a final time $T=3.5$. We observe
consistent values across all grid resolutions tested. While convergence is not
clear, we expect the average flow velocity to show convergence in the limit as
$T\rightarrow\infty$. In the full model, we do observe grid independence of
the surface concentration field. Figure 3 shows the total bound concentration
$\int_{\Gamma_{0}}C_{\text{b}}\mathopen{}\left(\mathbf{X},t\right)\mathclose{}d\mathbf{X}$
for all three grid resolutions. For the results presented below, we use the
coarse mesh, consisting of two elements across the leaflet.
Figure 2: To assess convergence, we perform simulations at approximately
three-quarters systole, and we compute the average velocity from time $0.5$ to
time $T=3.5$. Panel (a) shows the average fluid velocity magnitude across the
time interval. Panels (b)-(e) show slices of the average fluid velocity
magnitude for three different grid resolutions. The coarsest grid is shown in
light blue, the medium grid is shown in blue, and the finest grid is shown in
black. We observe consistent values across all grid resolutions tested.

Figure 3: The total accumulation
$\int_{\Gamma_{0}}C_{\text{b}}\mathopen{}\left(\mathbf{X},t\right)\mathclose{}d\mathbf{X}$
over time for three different mesh sizes. Although we do not expect pointwise
convergence of the fluid-phase concentration and velocity fields for the
turbulent flow regime considered in this study, we do observe grid
independence of the bound concentration field.
### 4.2 Leaflet Deposition
Figure 4 shows fluid-phase concentrations $c_{\text{f}}$ and velocity
magnitude snapshots from the last cycle of a simulation with deposition only
on the right leaflet. At the times of these plots, the right leaflet is
substantially stiffer than the left, and the predominant flow through the
valve is shifted towards the left leaflet. Figure 5 shows fluid-phase
concentrations $c_{\text{f}}$ and velocity magnitude snapshots with deposition
on both leaflets. Here, both leaflets become stiff and open less over time. We
observe higher velocity magnitudes when deposition occurs on both leaflets as
opposed to a single leaflet.
Figure 4: For a simulation in which deposition happens only on the right
leaflet, snapshots during (a) middle of diastole, (b) peak systole, (c) end systole, and (d) middle of diastole in the last cycle, showing (top) the fluid-
phase concentration $c_{\text{f}}$ and the surface concentration
$C_{\text{b}}J_{\text{s}}$ and (bottom) the fluid velocity magnitude. This
simulation uses $\beta=600$. Notice that the right leaflet is considerably
stiffer than the left one and that the fluid concentration $c_{\text{f}}$ is
depleted in and downstream of the aortic sinus.
Figure 5: For a simulation in which deposition happens on both leaflets,
snapshots during (a) middle of diastole, (b) peak systole, (c) end systole,
and (d) middle of diastole in the last cycle showing (top) the fluid-phase
concentration $c_{\text{f}}$ and the surface concentration
$C_{\text{b}}J_{\text{s}}$ and (bottom) the fluid velocity magnitude. This
simulation uses $\beta=600$. Notice that both leaflets stiffen in this
simulation and that the peak velocity magnitudes are larger than those in
Figure 4.
To assess the opening of the valve, we project the leaflets onto the valve
ring, as shown in Figure 6. The opening area is then normalized by the area of
the fully open valve. Figure 6 shows the normalized open valve area over each
cycle for deposition on both leaflets, as we increase the maximum stiffness
factor $\beta$. For lower maximal stiffnesses, we observe similar normalized
open valve areas compared to a simulation with no deposition. For larger
maximum stiffness, we observe a smaller normalized open valve area as more
deposition occurs. Figure 6 compares the normalized open valve area for
deposition on both leaflets versus on only the right leaflet for the same
maximum stiffness. When deposition occurs on only the right leaflet, the
normalized open valve area still decreases compared to no accumulation, but
does not exhibit as dramatic a reduction as when deposition is allowed on both leaflets. The left leaflet, which has a constant stiffness over time,
compensates and opens more as the right leaflet stiffens.
Figure 7 shows the maximum and minimum accumulations
$C_{\text{b}}J_{\text{s}}$ across the leaflet. Because the diffusion
coefficient is large compared to the reaction rates, there are always sufficient fluid-phase platelets available to bind to the leaflets; accordingly, we observe a consistent rate of binding to the surface and an increasing surface concentration as the simulation progresses. By the end of the simulation,
bound platelets occupy approximately 23% of the carrying capacity. The minimum
concentration periodically jumps while the maximum concentration is
monotonically increasing. The periodic jumps are due to the physical locations of the minimum and maximum, which affect the number of platelets per unit current area. The minimum occurs near the position where the leaflet attaches to the
aortic wall. This location sees the largest changes in area as the valve
opens. The maximum concentration is found on the tips of the leaflet, which
move through regions of high fluid-phase concentration. The tips of the leaflets deform less than the rest of the leaflet upon opening and closing of the valve, leading to a steadily increasing surface-bound concentration field. While the fluid-phase concentration $c_{\text{f}}$ is not completely depleted, we do observe reductions in the concentration relative to its initial value.
Figure 8 shows the velocity magnitude during the final cycle near peak
systole. We observe a vortex in the sinus region that grows in strength as we
increase the maximum stiffness. This vortex is not present when accumulation
occurs exclusively on the right leaflet, but is present when there is no
accumulation.
Figure 6: The computed normalized open valve area over time. Panel (a) depicts
the computation of the normalized open valve area. The open area of the valve
is projected onto the valve ring, and then normalized by the area of the fully
opened valve. Panel (b) depicts the normalized open valve area over time for
accumulation on both leaflets as $\beta$ increases during the entire
simulation (top) and during only the last cycle (bottom). Notice that as the
leaflets get stiffer, the normalized open valve area decreases. Panel (c)
depicts the normalized open valve area over time for accumulation on the right
leaflet or both leaflets during all the cycles (top) and during only the last
cycle (bottom). Notice that if accumulation occurs on both leaflets, the
normalized open valve area decreases more than if accumulation occurs on a
single leaflet.
Figure 7: (a) The surface concentration $C_{\text{b}}J_{\text{s}}$ along the
leaflets at the end of the simulation with $\beta=600$. (b) The minimum and
maximum surface concentration $C_{\text{b}}J_{\text{s}}$ on the leaflets.
Panel (c) highlights the accumulation over the last three cycles for
accumulation on both leaflets for three values of $\beta$. In panel (d), the
accumulation is shown for accumulation only on the right leaflet versus both
leaflets with $\beta=600$. The vertical lines denote the beginning of systole.
There is a consistent accumulation of material on the leaflets. The jumps in
the minimum concentration are due to the change in $J_{\text{s}}$, which
changes most where the leaflets attach to the aortic wall.
Figure 8: The magnitude of the velocity field at peak systole during the last
cycle for (a) no accumulation, (b) $\beta=100$ with accumulation on both
leaflets, (c) $\beta=300$ with accumulation on both leaflets, (d) $\beta=600$
with accumulation on both leaflets, and (e) $\beta=600$ with accumulation on
only the right leaflet. Notice the presence of a vortex in the right sinus
that is absent if accumulation occurs only on the right leaflet. The choice of
colorbar is intended to highlight vortex formation in the sinus region. We
observe peak flow velocities through the valve of 170 cm/s for $\beta=1$ and 275 cm/s for $\beta=600$.
### 4.3 Pressures and Flow Rates
Here we quantify the valve’s resistance to the flow at different maximum
stiffnesses. We measure the pressures at locations just upstream and
downstream of the valve. We use a Gaussian filter to smooth the curves in both
space and time, yielding the results shown in Figures 9 and 10. We observe
marginal increases in the pressures upstream of the valve of about 1-4 mmHg; however, there are sharp decreases of about 5-15 mmHg in the aortic pressures downstream of the leaflets as we increase the stiffness. A similar trend is observed with
deposition on only the right leaflet, although the differences are not as
pronounced.
Figure 9: The pressures just upstream and downstream of the valve. Panel (a)
compares the pressures for no accumulation and accumulation on both leaflets
with $\beta=600$. Panel (b) compares pressures for accumulation on only the
right leaflet and both leaflets, with $\beta=600$. While the upstream pressure
increases mildly, the downstream pressures decrease by between $5$ and
10 mmHg.
Figure 10: The pressures during the last three cycles for accumulation on both leaflets with $\beta=1$, 100, 300, and 600. Note that $\beta=1$ corresponds to a baseline in which the leaflet stiffness is constant. The aortic-side pressures are shown in panel (a), and the left-ventricle-side pressures are shown in panel (b). While the upstream pressure increases by less than 5 mmHg, the downstream pressures decrease by between 5 and 15 mmHg.

Table 2: Pressures just upstream of the leaflets (LVOT) and downstream of the leaflets (aorta) during peak systole. While we observe increases of 1-2 mmHg in the pressure upstream of the valve, there are greater decreases of 10-15 mmHg in the pressure downstream of the valve.

| | Base | Both leaflets, $\beta=100$ | Both leaflets, $\beta=300$ | Both leaflets, $\beta=600$ | Right leaflet, $\beta=600$ |
|---|---|---|---|---|---|
| LVOT | 117 mmHg | 117 mmHg | 118 mmHg | 119 mmHg | 117 mmHg |
| Aorta | 111 mmHg | 105 mmHg | 101 mmHg | 94 mmHg | 104 mmHg |
We additionally compute the effective orifice area (EOA). The EOA $A_{\text{AV}}$ is computed using conservation of mass (equation 1b) by assuming the relation $V_{\text{LVOT}}A_{\text{LVOT}}=V_{\text{AV}}A_{\text{AV}},$ in which $V_{\text{LVOT}}$ is the time-averaged velocity through the left ventricle outflow tract during the portion of each cycle when the valve is open, $A_{\text{LVOT}}$ is the area of the left ventricle outflow tract, and $V_{\text{AV}}$ is the time-averaged velocity through the aortic valve. $V_{\text{LVOT}}$ is computed from the boundary condition model described in section 2.2. To compute $V_{\text{AV}}$, we interpolate the velocity to the midpoint between the two points on each leaflet that are closest during systole. $V_{\text{AV}}$ is then computed as the time average of the component of this interpolated velocity normal to the valve ring.
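The following sketch combines these quantities; the sampling of the open portion of the cycle and the argument names are our own choices:

```python
import numpy as np

def effective_orifice_area(q_lvot, v_valve, t, A_lvot):
    """EOA from conservation of mass: A_AV = V_LVOT * A_LVOT / V_AV.

    q_lvot  : LVOT flow rate samples over the open portion of the cycle
    v_valve : normal velocity interpolated at the valve midpoint
    t       : sample times spanning the open portion of the cycle
    A_lvot  : cross-sectional area of the LVOT
    """
    T = t[-1] - t[0]
    V_lvot = np.trapz(q_lvot, t) / (T * A_lvot)  # mean LVOT velocity
    V_av = np.trapz(v_valve, t) / T              # mean valve velocity
    return V_lvot * A_lvot / V_av
```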
Figure 11 plots the EOA for each cycle. We observe a general decrease in EOA as the total surface concentration $C_{\text{b}}$ increases. This
indicates that the fluid velocity increases to compensate for the stiffening
of the valve. The EOA decreases more when accumulation occurs on both leaflets
when compared to accumulation on only the right leaflet. Because the left
leaflet remains at a constant stiffness, the left leaflet opens more to
compensate for the stiffening of the right leaflet. This causes the jet to
shift towards the left leaflet, as shown in Figure 4.
Figure 11: The effective orifice area (EOA) for each cycle. Panel (a) shows
the EOA with $\beta=1$, 100, 300, and 600 for accumulation on both leaflets.
Note that $\beta=1$ corresponds to a baseline where the leaflet stiffness is
constant. Panel (b) shows the EOA for accumulation on no leaflets (base),
accumulation on the right leaflet (right), and accumulation on both leaflets
(both). We observe a general decrease of EOA over time as the accumulation
increases. There is a greater decrease of EOA if accumulation occurs on both leaflets than if it occurs on only the right leaflet.
## 5 Conclusions
This study presents new numerical methods incorporating both deposition and
fluid-structure interaction to simulate leaflet thrombosis. The simplified
thrombosis model serves as a stepping stone to demonstrate the capabilities of our simulation approach, which includes concentration fields describing fluid-phase and surface-bound platelets. Platelets can deposit onto the
leaflet surface, and bound platelets can dissociate into the fluid. In our
model, the stiffness of the leaflet is a function of the bound platelet
concentration. We have shown that our model is capable of realizing drops in
pressure and decreases in effective orifice area, without fully occluding the
aortic valve. The results also show that the stiffness of the valve can lead
to a variety of flow features in the sinus of Valsalva region. These flow
features affect the amount of material that is locally present to deposit over
the leaflets.
Extensions of this model to three dimensions require an efficient method for
solving the advection-diffusion equation in complex, time-evolving domains.
The method utilized here requires the computation of cut-cell volumes and
intersections, which remain challenging in three spatial dimensions. Recent
approaches to this class of problems include mesh-free RBF-FD methods [52] and
volume penalization methods [54]. The implementation of a more physiological
model of thrombosis remains important future work. A primary roadblock is the
disparate time scales present in thrombosis. While the heart beats on the
order of seconds, blood clots can form in hours to days. The use of
conditionally stable time stepping limits the numerical methods to time steps
that resolve the fastest timescale, which in this model is that of the fluid-
structure interaction. Recent work in multiscale time stepping algorithms [16,
17] could enable extensions of our modeling framework to enable such long-time
simulations. Further, with multiscale time stepping algorithms, this model
could be extended to study the affect of saturation of the bound concentration
field. While platelet deposition is important and the beginning step of
thrombus formation, a significant portion of the clot may be from coagulation
and fibrin mesh formation. However, a complete model of thrombosis will
require a computational model in which the blood clot on the moving valve
leaflets grows into the fluid [11, 34]. The development of such a model that
incorporates FSI is ongoing.
The new approaches described herein should be considered an important stepping stone for thrombosis models in many different contexts. This model is
the first of its kind to incorporate both adhesion of a surface concentration
to the surface of the leaflets and feedback into the fluid-structure
interaction. Further, this model can be adapted to model deposition or
absorption along other moving boundaries, such as for particulate flow in the
lungs or drug absorption in the gut.
## Acknowledgements
We acknowledge funding from the NIH (Awards U01HL143336 and R01HL157631) and
NSF (OAC 1652541 and OAC 1931516).
## References
* [1] J. Bærentzen and Henrik Aanæs “Signed distance computation using the angle weighted pseudonormal” In _IEEE Transactions on Visualization and Computer Graphics_ 11.3, 2005, pp. 243–253 DOI: 10.1109/TVCG.2005.49
* [2] S. Balay et al. “PETSc Users Manual”, Technical Report ANL-95/11 - Revision 3.0.0, 2010, pp. 1–211 URL: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.6845&rep=rep1&type=pdf
* [3] Satish Balay et al. “PETSc Web page”, 2019 URL: https://www.mcs.anl.gov/petsc
* [4] Satish Balay, William D. Gropp, Lois Curfman McInnes and Barry F. Smith “Efficient Management of Parallelism in Object-Oriented Numerical Software Libraries” In _Modern Software Tools for Scientific Computing_ Birkhäuser Press, 1997, pp. 163–202 DOI: 10.1007/978-1-4612-1986-6˙8
* [5] Aaron Barrett, Aaron L. Fogelson and Boyce E. Griffith “A hybrid semi-Lagrangian cut cell method for advection-diffusion problems with Robin boundary conditions in moving domains” In _Journal of Computational Physics_ 449, 2022, pp. 110805 DOI: 10.1016/j.jcp.2021.110805
* [6] Kuan-Yu Chen and Ming-Chih Lai “A conservative scheme for solving coupled surface-bulk convection–diffusion equations with an application to interfacial flows with soluble surfactant” In _Journal of Computational Physics_ 257, 2014, pp. 1–18 DOI: 10.1016/j.jcp.2013.10.003
* [7] Scott L. Diamond, Jeremy Purvis, Manash Chatterjee and Matthew H. Flamm “Systems biology of platelet-vessel wall interactions” In _Frontiers in Physiology_ 4, 2013 DOI: 10.3389/fphys.2013.00229
* [8] Jian Du and Aaron L. Fogelson “A Two-phase mixture model of platelet aggregation” In _Mathematical Medicine and Biology_ 35.2, 2018, pp. 225–256 DOI: 10.1093/imammb/dqx001
* [9] Jian Du et al. “Clot Permeability, Agonist Transport, and Platelet Binding Kinetics in Arterial Thrombosis” In _Biophysical Journal_ 119.10, 2020, pp. 2102–2115 DOI: 10.1016/j.bpj.2020.08.041
* [10] Natasha Flyer, Bengt Fornberg, Victor Bayona and Gregory A. Barnett “On the role of polynomials in RBF-FD approximations: I. Interpolation and accuracy” In _Journal of Computational Physics_ 321, 2016, pp. 21–38 DOI: 10.1016/j.jcp.2016.05.026
* [11] Aaron L. Fogelson and Robert D. Guy “Immersed-boundary-type models of intravascular platelet aggregation” In _Computer Methods in Applied Mechanics and Engineering_ 197.25-28, 2008, pp. 2087–2104 DOI: 10.1016/j.cma.2007.06.030
* [12] Aaron L. Fogelson, Yasmeen H. Hussain and Karin Leiderman “Blood Clot Formation under Flow: The Importance of Factor XI Depends Strongly on Platelet Count” In _Biophysical Journal_ 102.1, 2012, pp. 10–18 DOI: 10.1016/j.bpj.2011.10.048
* [13] Aaron L. Fogelson and James P. Keener “Toward an understanding of fibrin branching structure” In _Physical Review E - Statistical, Nonlinear, and Soft Matter Physics_ 81.5, 2010, pp. 1–9 DOI: 10.1103/PhysRevE.81.051922
* [14] Aaron L. Fogelson and Keith B. Neeves “Fluid mechanics of blood clot formation” In _Annual Review of Fluid Mechanics_ 47.1, 2015, pp. 377–403 DOI: 10.1146/annurev-fluid-010814-014513
* [15] Aaron L. Fogelson, Anna C. Nelson, Cheryl Zapata-Allegro and James P. Keener “Development of Fibrin Branch Structure Before and After Gelation” In _SIAM Journal on Applied Mathematics_ 82.1, 2022, pp. 267–293 DOI: 10.1137/21M1401024
* [16] S. Frei and T. Richter “Efficient Approximation of Flow Problems With Multiple Scales in Time” In _Multiscale Modeling & Simulation_ 18.2, 2020, pp. 942–969 DOI: 10.1137/19M1258396
* [17] S. Frei, T. Richter and T. Wick “Long-term simulation of large deformation, mechano-chemical fluid-structure interactions in ALE and fully Eulerian coordinates” In _Journal of Computational Physics_ 321, 2016, pp. 874–891 DOI: 10.1016/j.jcp.2016.06.015
* [18] T. Gasser, Ray W Ogden and Gerhard A Holzapfel “Hyperelastic modelling of arterial layers with distributed collagen fibre orientations” In _Journal of The Royal Society Interface_ 3.6, 2006, pp. 15–35 DOI: 10.1098/rsif.2005.0073
* [19] Boyce E Griffith “IBAMR: an adaptive and distributed-memory parallel implementation of the immersed boundary method”, 2014 URL: https://github.com/IBAMR/IBAMR
* [20] Boyce E. Griffith “An accurate and efficient method for the incompressible Navier-Stokes equations using the projection method as a preconditioner” In _Journal of Computational Physics_ 228.20, 2009, pp. 7565–7595 DOI: 10.1016/j.jcp.2009.07.001
* [21] Boyce E. Griffith and Xiaoyu Luo “Hybrid finite difference/finite element immersed boundary method” In _International Journal for Numerical Methods in Biomedical Engineering_ 33.12, 2017, pp. e2888 DOI: 10.1002/cnm.2888
* [22] Boyce E. Griffith, Xiaoyu Luo, David M. McQueen and Charles S. Peskin “Simulating the Fluid Dynamics of Natural and Prosthetic Heart Valve Using the Immersed Boundary Method” In _International Journal of Applied Mechanics_ 01.01, 2009, pp. 137–177 DOI: 10.1142/S1758825109000113
* [23] Hoda Hatoum et al. “Predictive Model for Thrombus Formation After Transcatheter Valve Replacement” In _Cardiovascular Engineering and Technology_ 12.6, 2021, pp. 576–588 DOI: 10.1007/s13239-021-00596-x
* [24] Ásdís Helgadóttir, Yen Ting Ng, Chohong Min and Frédéric Gibou “Imposing mixed Dirichlet–Neumann–Robin boundary conditions in a level-set framework” In _Computers & Fluids_ 121, 2015, pp. 68–80 DOI: 10.1016/j.compfluid.2015.08.007
* [25] Matthew M. Hopkins and Lisa J. Fauci “A computational model of the collective fluid dynamics of motile micro-organisms” In _Journal of Fluid Mechanics_ 455, 2002, pp. 149–174 DOI: 10.1017/S0022112001007339
* [26] Richard D. Hornung and Scott R. Kohn “Managing application complexity in the SAMRAI object-oriented framework” In _Concurrency and Computation: Practice and Experience_ 14.5, 2002, pp. 347–368 DOI: 10.1002/cpe.652
* [27] Huaxiong Huang, Kazuyasu Sugiyama and Shu Takagi “An immersed boundary method for restricted diffusion with permeable interfaces” In _Journal of Computational Physics_ 228.15, 2009, pp. 5317–5322 DOI: 10.1016/j.jcp.2009.04.040
* [28] Benjamin S. Kirk, John W. Peterson, Roy H. Stogner and Graham F. Carey “libMesh : a C++ library for parallel adaptive mesh refinement/coarsening simulations” In _Engineering with Computers_ 22.3-4, 2006, pp. 237–254 DOI: 10.1007/s00366-006-0049-3
* [29] Araz R. Kivi et al. “Fluid structure interaction modelling of aortic valve stenosis: Effects of valve calcification on coronary artery flow and aortic root hemodynamics” In _Computer Methods and Programs in Biomedicine_ 196, 2020, pp. 105647 DOI: 10.1016/j.cmpb.2020.105647
* [30] Ebrahim M. Kolahdouz, Amneet Pal Singh Bhalla, Brent A. Craven and Boyce E. Griffith “An immersed interface method for discrete surfaces” In _Journal of Computational Physics_ 400, 2020, pp. 108854 DOI: 10.1016/j.jcp.2019.07.052
* [31] Andrew L. Kuharsky and Aaron L. Fogelson “Surface-mediated control of blood coagulation: The role of binding site densities and platelet deposition” In _Biophysical Journal_ 80.3, 2001, pp. 1050–1074 DOI: 10.1016/S0006-3495(01)76085-7
* [32] Jae H. Lee and Boyce E. Griffith “On the Lagrangian-Eulerian coupling in the immersed finite element/difference method” In _Journal of Computational Physics_ 457, 2022, pp. 111042 DOI: 10.1016/j.jcp.2022.111042
* [33] Jae H. Lee et al. “Bioprosthetic aortic valve diameter and thickness are directly related to leaflet fluttering: Results from a combined experimental and computational modeling study” In _JTCVS Open_ 6, 2021, pp. 60–81 DOI: 10.1016/j.xjon.2020.09.002
* [34] Karin Leiderman and Aaron L. Fogelson “Grow with the flow: A spatial-temporal model of platelet deposition and blood coagulation under flow” In _Mathematical Medicine and Biology_ 28.1, 2011, pp. 47–84 DOI: 10.1093/imammb/dqq005
* [35] X. Li, J. Lowengrub, A. Ratz and A. Voigt “Solving PDEs in ComplexGeometries: A Diffuse Domain Approach” In _Communications in mathematical sciences_ 7.1, 2009, pp. 81–107 URL: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3097555/
* [36] Kathryn G. Link et al. “A local and global sensitivity analysis of a mathematical model of coagulation and platelet deposition under flow” In _PLOS ONE_ 13.7, 2018, pp. e0200917 DOI: 10.1371/journal.pone.0200917
* [37] Kathryn G. Link et al. “A mathematical model of coagulation under flow identifies factor V as a modifier of thrombin generation in hemophilia A” In _Journal of Thrombosis and Haemostasis_ 18.2, 2020, pp. 306–317 DOI: 10.1111/jth.14653
* [38] Raj R. Makkar and Tarun Chakravarty “Transcatheter Aortic Valve Thrombosis” In _JACC: Cardiovascular Interventions_ 10.7, 2017, pp. 698–700 DOI: 10.1016/j.jcin.2017.02.041
* [39] Raj R. Makkar et al. “Possible Subclinical Leaflet Thrombosis in Bioprosthetic Aortic Valves” In _New England Journal of Medicine_ 373.21, 2015, pp. 2015–2024 DOI: 10.1056/NEJMoa1509233
* [40] Chohong Min and Frédéric Gibou “Geometric integration over irregular domains with application to level-set methods” In _Journal of Computational Physics_ 226.2, 2007, pp. 1432–1443 DOI: 10.1016/j.jcp.2007.05.032
* [41] Kyle Murdock, Caitlin Martin and Wei Sun “Characterization of mechanical properties of pericardium tissue using planar biaxial tension and flexural deformation” In _Journal of the Mechanical Behavior of Biomedical Materials_ 77, 2018, pp. 148–156 DOI: 10.1016/j.jmbbm.2017.08.039
* [42] J P Murgo, N Westerhof, J P Giolma and S A Altobelli “Aortic input impedance in normal man: relationship to pressure wave forms.” In _Circulation_ 62.1, 1980, pp. 105–116 DOI: 10.1161/01.CIR.62.1.105
* [43] J.. Mynard, M.. Davidson, D.. Penny and J.. Smolich “A simple, versatile valve model for use in lumped parameter and one-dimensional cardiovascular models” In _International Journal for Numerical Methods in Biomedical Engineering_ 28.6-7, 2012, pp. 626–641 DOI: 10.1002/cnm.1466
* [44] Romina Plitman Mayo et al. “Numerical models for assessing the risk of leaflet thrombosis post-transcatheter aortic valve-in-valve implantation” In _Royal Society Open Science_ 7.12, 2020, pp. 201838 DOI: 10.1098/rsos.201838
* [45] Ravi Ramana et al. “Calcification and Thrombosis as Mediators of Bioprosthetic Valve Deterioration” In _Structural Heart_ 3.2, 2019, pp. 106–109 DOI: 10.1080/24748706.2018.1562265
* [46] William J. Rider, Jeffrey A. Greenough and James R. Kamm “Accurate monotonicity- and extrema-preserving methods through adaptive nonlinear hybridizations” In _Journal of Computational Physics_ 225.2, 2007, pp. 1827–1848 DOI: 10.1016/j.jcp.2007.02.023
* [47] Liesbeth Rosseel, Ole De Backer and Lars Søndergaard “Clinical Valve Thrombosis and Subclinical Leaflet Thrombosis Following Transcatheter Aortic Valve Replacement: Is There a Need for a Patient-Tailored Antithrombotic Therapy?” In _Frontiers in Cardiovascular Medicine_ 6, 2019 DOI: 10.3389/fcvm.2019.00044
* [48] Yongyuth Sahasakul, William D. Edwards, James M. Naessens and A.Jamil Tajik “Age-related changes in aortic and mitral valve thickness: Implications for two-dimensional echocardiography based on an autopsy study of 200 normal human hearts” In _The American Journal of Cardiology_ 62.7, 1988, pp. 424–430 DOI: 10.1016/0002-9149(88)90971-X
* [49] Matea Santiago, Nicholas A. Battista, Laura A. Miller and Shilpa Khatri “Passive concentration dynamics incorporated into the library IB2d, a two-dimensional implementation of the immersed boundary method” Publisher: IOP Publishing In _Bioinspiration &$\mathsemicolon$ Biomimetics_ 17.3, 2022, pp. 036003 DOI: 10.1088/1748-3190/ac4afa
* [50] Matea Santiago, Kevin A. Mitchell and Shilpa Khatri “Numerical method for modeling photosynthesis of algae on pulsing soft corals” In _Physical Review Fluids_ 7.3, 2022, pp. 033102 DOI: 10.1103/PhysRevFluids.7.033102
* [51] Gerhard Schymik et al. “How to Adapt the Implantation Technique for the New SAPIEN 3 Transcatheter Heart Valve Design” In _Journal of Interventional Cardiology_ 28.1, 2015, pp. 82–89 DOI: 10.1111/joic.12165
* [52] Varun Shankar and Aaron L. Fogelson “Hyperviscosity-based stabilization for radial basis function-finite difference (RBF-FD) discretizations of advection–diffusion equations” _eprint: 1806.03798 In _Journal of Computational Physics_ 372, 2018, pp. 616–639 DOI: 10.1016/j.jcp.2018.06.036
* [53] Nikos Stergiopulos, Berend E. Westerhof and Nico Westerhof “Total arterial inertance as the fourth element of the windkessel model” In _American Journal of Physiology-Heart and Circulatory Physiology_ 276.1, 1999, pp. H81–H88 DOI: 10.1152/ajpheart.1999.276.1.H81
* [54] Ramakrishnan Thirumalaisamy, Neelesh A. Patankar and Amneet Pal Singh Bhalla “Handling Neumann and Robin boundary conditions in a fictitious domain volume penalization framework” Publisher: Academic Press _eprint: 2101.02806 In _Journal of Computational Physics_ 448, 2022, pp. 110726 DOI: 10.1016/J.JCP.2021.110726
* [55] A. Tosenberger et al. “Modelling of platelet–fibrin clot formation in flow with a DPD–PDE method” In _Journal of Mathematical Biology_ 72.3, 2016, pp. 649–681 DOI: 10.1007/s00285-015-0891-2
* [56] John D. Towers “A source term method for Poisson problems on irregular domains” In _Journal of Computational Physics_ 361, 2018, pp. 424–441 DOI: 10.1016/j.jcp.2018.01.038
* [57] Ben Vadala-Roth et al. “Stabilization approaches for the hyperelastic immersed boundary method for problems of large-deformation incompressible elasticity” _eprint: 1811.06620 In _Computer Methods in Applied Mechanics and Engineering_ 365, 2020, pp. 112978 DOI: 10.1016/j.cma.2020.112978
* [58] Koohyar Vahidkhah et al. “Blood Stasis on Transcatheter Valve Leaflets and Implications for Valve-in-Valve Leaflet Thrombosis” Publisher: Elsevier In _The Annals of Thoracic Surgery_ 104.3, 2017, pp. 751–759 DOI: 10.1016/J.ATHORACSUR.2017.02.052
* [59] Ziheng Wu, Zhiliang Xu, Oleg Kim and Mark Alber “Three-dimensional multi-scale model of deformable platelets adhesion to vessel wall in blood flow” In _Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences_ 372.2021, 2014, pp. 20130380 DOI: 10.1098/rsta.2013.0380
* [60] Sheng Xu and Z. Wang “An immersed interface method for simulating the interaction of a fluid with moving boundaries” In _Journal of Computational Physics_ 216.2, 2006, pp. 454–493 DOI: 10.1016/j.jcp.2005.12.016
* [61] Paul A Yushkevich et al. “User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability” In _NeuroImage_ 31.3, 2006, pp. 1116–1128 DOI: https://doi.org/10.1016/j.neuroimage.2006.01.015
* [62] Paul A. Yushkevich et al. “User-Guided 3D Active Contour Segmentation of Anatomical Structures: Significantly Improved Efficiency and Reliability” In _Neuroimage_ 31.3, 2006, pp. 1116–1128
* [63] Peng Zhang et al. “Multiscale Particle-Based Modeling of Flowing Platelets in Blood Plasma Using Dissipative Particle Dynamics and Coarse Grained Molecular Dynamics” In _Cellular and Molecular Bioengineering_ 7.4, 2014, pp. 552–574 DOI: 10.1007/s12195-014-0356-5
|
# Task-oriented and Semantics-aware Communication Framework for Avatar-centric
Augmented Reality
Zhe Wang, Yansha Deng, and A. Hamid Aghvami

Z. Wang, Y. Deng, and A. Hamid Aghvami (Emeritus Professor) are with the Department of Engineering, King’s College London, Strand, London WC2R 2LS, U.K. (e-mail: <EMAIL_ADDRESS>; <EMAIL_ADDRESS>; <EMAIL_ADDRESS>). (Corresponding author: Yansha Deng.) YouTube: https://youtu.be/n9YPF979m_0. GitHub: https://github.com/kcl-yansha/PCDataset
###### Abstract
With the emergence of the metaverse and its applications in representing humans and intelligent entities in social and related augmented reality (AR) applications, the current bit-oriented network faces challenges in supporting real-time updates for the vast amount of associated information, which hinders further development. Thus, a critical revolution in the Sixth Generation (6G) networks
is envisioned through the joint exploitation of information context and its
importance to the task, leading to a communication paradigm shift towards
semantic and effectiveness levels. However, current research has not yet proposed an explicit and systematic communication framework for AR applications that incorporates these two levels. To fill this research gap,
this paper presents a task-oriented and semantics-aware communication
framework for augmented reality (TSAR) to enhance communication efficiency and
effectiveness in 6G. Specifically, we first analyse the traditional wireless AR point cloud communication framework and then present our proposed semantic information design along with the end-to-end wireless communication. We then
detail the design blocks of the TSAR framework, covering both semantic and
effectiveness levels. Finally, numerous experiments have been conducted to
demonstrate that, compared to the traditional point cloud communication
framework, our proposed TSAR significantly reduces wireless AR application
transmission latency by 95.6%, while improving communication effectiveness in
geometry and color aspects by up to 82.4% and 20.4%, respectively.
###### Index Terms:
Metaverse, augmented reality, semantic communication, end-to-end
communication.
## I Introduction
The metaverse, as an expansion of the digital universe, has the potential to
significantly influence people’s lives, affecting their entertainment
experiences and social behaviors. Specific applications such as Augmented
Reality (AR), Virtual Reality (VR), and other immersive technologies within
the metaverse have demonstrated remarkable potential in various areas,
including virtual conferences, online education, and real-time interactive
games, capturing the attention of both industry and academia [1]. These applications, also referred to as eXtended Reality (XR), need to process rich and complex data, such as animated avatars, point clouds, and model meshes, to create immersive experiences for clients [2]. However, the extensive transmission of
information and high bandwidth requirements within the XR pose significant
challenges for its wider applications, particularly in avatar-related
applications that necessitate real-time client communication and interaction.
Existing communication networks fail to meet such high bandwidth requirements and thus cannot adequately support XR applications, necessitating the development of 6G technology for further advancement [3, 4]. Specifically, to ensure a good Quality of Experience (QoE) in AR applications, a transmission latency of less than 20 ms is required, which is 20 times lower than the transmission latency tolerated in video communication applications [5]. Because AR applications generate a large volume of sensing data, more packets need to be transmitted in this short time, which consequently increases the demand for bandwidth. The resulting difficulty in meeting transmission latency and bandwidth requirements, particularly for the more complex and comprehensive data in AR services [6], underscores the urgency of further research in communication technology. Such advancements are crucial to facilitate real-time, immersive experiences in AR-based applications.
To address the high bandwidth requirements of wireless communication in AR applications, the concept of semantic communication has been proposed [7]. This approach aims to facilitate communication at the
semantic level by exploring not only the content of traditional text and
speech data but also the information’s freshness. Initial research on semantic
communication for text [8], speech [9], and image data [10] has mainly focused
on identifying the semantic content of traditional data. Other research in
semantic communication on sensor and control data emphasizes the task
requirements for using information freshness, such as Age of Information (AoI)
[11], as a semantic metric to estimate timeliness and evaluate the importance
of the information. It should be noted that these AoI-related semantic communication approaches are unable to adequately capture the inherent importance of specific pieces of information in emerging AR data. This
highlights the need to develop new strategies and techniques that effectively
incorporate specific tasks with semantic communication into AR, considering
not only the timeliness of information but also its relevance and sufficiency
for a given application. In [12], a generic task-oriented and semantics-aware
communication framework is envisioned for robotic applications, taking into
account designs at both the semantic and effectiveness levels for various
tasks with diverse data types. Then, in [13], researchers highlight the
importance of task-oriented and semantics-aware communication in robotic
control (TSRC) by exploiting the context of data, emphasizing its critical
role in successful task execution at both the transmitter and receiver ends.
However, although task-oriented performance has been demonstrated in these robotic settings, a specific and concrete task-oriented, semantics-aware communication framework that improves task performance for avatar-centric AR applications has not yet been proposed.
Current XR-related application research typically requires users to utilize
Head-Mounted Displays (HMD) [14]. These applications generally focus on
avatar-centric services, where the use of avatar animation in replacement of
real human figures can decrease HMD computing requirements, reduce
transmission data, and protect user privacy [15]. This avatar representation
method has been implemented in social media platforms, such as TikTok and
Instagram, where avatar characters are used for augmented reality video effects. Interestingly, using avatars instead of humans has shown no significant differences in the transmission of social behavior and can even encourage users to complete tasks more quickly in gaming situations [16]. For instance,
fitness coaches can employ virtual avatars for AR conferencing to guide
training. Games, like Pokémon Go, use avatars in mixed reality to encourage
gamer interaction [17]. Avatar-based communication has been considered in
[18], where the point cloud of avatars, structures, and models are transmitted
between transmitter and receiver. Task-related effectiveness-level performance metrics, including point-to-point error [19], peak signal-to-noise ratio for the luminance component [20], and mean per joint position error [21], have been considered to assess the telepresence task [22], the point cloud video displaying task [23], and the avatar pose recovery task [24], respectively. Based on these
tasks, recent research has also proposed implementing avatar representations
such as point clouds, skeletons, and $360^{\circ}$ images [25, 22]. Although
these studies emphasize data extraction from a graphical perspective, wireless
AR-related communication applications have not fully addressed the issue of
avatar transmission effectiveness from a wireless communication perspective.
Furthermore, the bandwidth requirements for such applications remain high.
Users continue to experience suboptimal and lagging AR experiences in areas
with moderate signal strength. This suggests that the current AR communication
framework has limitations, particularly in identifying a better method for
avatar representation to enhance communication. Specifically, there is a
demand for approaches that require less bandwidth and improve task performance [26].
Several studies have recently begun to explore the representation of avatars
in wired communication. Different data types have been designed to represent
avatars, which results in diverse avatar reconstruction requirements at the client side and limits the ability to evaluate transmission effectiveness for AR.
For instance, skeleton elements have been proposed as a means to represent
avatars, where motion capture devices are used to record skeletal positions.
The recorded avatar movements are then replayed on wired HMDs, and the differences in skeleton position between transmitter and receiver are measured to evaluate wired AR communication [27]. However, it remains unclear how best to extract semantic information that reflects the importance and context of information for the avatar-centric display task in wireless AR applications. The presence of redundant messaging can
lead to an increase in transmission packets, resulting in decreased efficiency
of wireless communication and ultimately impacting the user’s viewing
experience.
Inspired by the 3D keypoints extraction method presented in [28], we propose a
task-oriented and semantics-aware communication framework in AR (TSAR) for
avatar-centric end-to-end AR communication. In contrast to traditional point
cloud AR communication frameworks that rely solely on point cloud input, our
proposed TSAR extracts and transmits only essential semantic information. To
the best of our knowledge, our contributions can be summarized as follows:
1.
We propose a task-oriented and semantics-aware communication framework in augmented reality (TSAR) for interactive avatar-centric displaying applications, integrating designs at the semantic and effectiveness levels, which include semantic information extraction, task-oriented semantics-aware wireless communication, and avatar pose recovery and rendering.
2.
We apply an Avatar-based Semantic Ranking (AbSR) algorithm to extract features
from the avatar skeleton graph using shared base knowledge and to sort the
importance of different semantic information. Additionally, by utilizing
Channel State Information (CSI) feedback, we demonstrate the effectiveness of
AbSR in improving avatar transmission quality in wireless AR communication.
3.
We have conducted a series of experiments comparing our proposed TSAR
framework with the traditional point cloud communication framework. Our
results indicate that our proposed TSAR framework outperforms the traditional
point cloud communication framework in terms of color quality, geometry
quality, and transmission latency for the avatar-centric displaying task, with
improvements of up to 20.4%, 82.4% and 95.6% respectively.
The rest of the paper is organized as follows: In Section II, we present the system model and problem formulation, covering both the traditional point cloud and the TSAR frameworks. Section III details the design principles at the semantic level. Section IV details the design principles at the effectiveness level. Section V demonstrates the avatar movement and the experimental performance evaluation. Finally, Section VI concludes this paper.
## II System Model and Problem Formulation
In this section, we first describe the existing traditional point cloud
communication framework for AR applications. Then, we present our wireless
communication channel model implemented in both the point cloud communication
framework and the TSAR. We further introduce our proposed TSAR in detail,
which considers not only the bit level but also the semantic and effectiveness levels. Finally, we present the problem formulation and the objective function.
Figure 1: Traditional point cloud communication framework
### II-A Traditional Point Cloud Communication Framework
As shown in Fig. 1, the procedures for traditional point cloud communication
in AR applications typically consist of point cloud collection, downsampling,
upsampling, and rendering.
#### II-A1 Point Cloud Collection
We focus on interactive avatar-centric displaying and gaming AR applications,
which are promising applications in the metaverse [15]. These AR applications
require transmitting avatar animations and other stationary background models
to the client side for display on an HMD within an area of length $L$, height $H$, and width $W$. To guarantee a smooth viewing experience of the AR scenery at the client side, high-resolution point clouds of both the moving avatar and the stationary background models need to be captured and transmitted to the client side. The Unity3D platform currently offers numerous plugins for generating sensor data in real time, such as FM POINTS, a comprehensive point cloud visualization plugin that can transform the whole AR scenery or any 3D model into a real-time point cloud. The information for each point $\overrightharp{v}_{i}$ can be represented as
$\overrightharp{v}_{i}=(\overrightharp{l}_{i},\overrightharp{c}_{i})=(l_{\text{x}},l_{\text{y}},l_{\text{z}},c_{\text{r}},c_{\text{g}},c_{\text{b}}),$
(1)
where $\overrightharp{l}_{i}$ and $\overrightharp{c}_{i}$ represent the three-dimensional location and the RGB color of the point, respectively. The generated point cloud $\mathbf{P}_{\text{pc}}$ of the whole AR scenery consists of thousands of points $\overrightharp{v}_{i}$ and can be represented as
$\mathbf{P}_{\text{pc}}=[\overrightharp{v}_{1},\overrightharp{v}_{2},\cdots,\overrightharp{v}_{{N}_{\text{pc}}}]^{\text{T}},$
(2)
where ${N}_{\text{pc}}$ denotes the total number of points generated for the AR scenery. Typically, each 3D object needs to be represented by over 1,500 thousand points in each frame to achieve a satisfactory viewing experience for clients [29].
#### II-A2 Point Cloud Downsampling and Upsampling
In the traditional point cloud wireless communication framework, transmitting a large number of points can congest the wireless channel, causing intolerable delays and thus hindering AR application development [30]. To minimize transmission delays, current research explores the use of compression algorithms in point cloud transmission [31]. By introducing a downsampling algorithm at the transmitter and an upsampling algorithm at the receiver, the transmission latency can be reduced by transmitting only the compressed point cloud. The farthest point sampling algorithm [32] is utilized as the downsampling method; it selects representative points from the original point cloud while maintaining the overall features of the 3D objects. This reduces the number of points to be transmitted, improving the efficiency of the communication system. The farthest point downsampling process $\mathcal{D}(\cdot)$ can be expressed as
$\mathbf{P}_{\text{dpc}}=[\overrightharp{v}_{1},\overrightharp{v}_{2},\cdots,\overrightharp{v}_{{N}_{\text{d}}}]^{\text{T}}=\mathcal{D}(\mathbf{P}_{\text{pc}}),$
(3)
where ${\mathbf{P}_{\text{dpc}}}$ represents the downsampled point cloud data awaiting transmission, and ${N}_{\text{d}}$ is the total number of points in the downsampled point cloud. The client’s viewing experience can then be enhanced by
employing an upsampling algorithm for high-resolution point cloud recovery.
Due to the instability of the wireless channel, the receiver faces the
challenge of converting a sparse, irregular, and non-uniform point cloud into
a dense, complete, and uniform one. To address this challenging issue [33],
the linear interpolation algorithm [34] is introduced for the point cloud
upsampling process. This algorithm involves estimating the positions of the
missing points based on the positions of their neighbors, effectively
generating a denser point cloud that closely resembles the original point
cloud structure. The point cloud upsampling process, denoted as
$\mathcal{U}(\cdot)$, can be expressed as
${\mathbf{P}_{\text{upc}}}=[\overrightharp{v}_{1},\overrightharp{v}_{2},\cdots,\overrightharp{v}_{{N}_{\text{u}}}]^{\text{T}}=\mathcal{U}({\mathbf{P}_{\text{dpc}}^{{}^{\prime}}}),$
(4)
where ${\mathbf{P}_{\text{upc}}}$ is the reconstructed point cloud after upsampling, ${N}_{\text{u}}$ represents the total number of upsampled points, and ${\mathbf{P}_{\text{dpc}}^{{}^{\prime}}}$ is the received point
cloud data after transmitting ${\mathbf{P}_{\text{dpc}}}$ over wireless
channels. The upsampling process aims to accurately reconstruct the original
point cloud, ensuring that the client-side viewing experience is maintained at
a high quality despite the data compression and transmission through an
unstable wireless channel.
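To make the two operations concrete, the sketch below implements a greedy farthest point sampling pass for Eq. (3) and a crude nearest-neighbour midpoint interpolation standing in for the upsampling of Eq. (4). It only illustrates the idea and is not the exact implementation of [32] or [34].

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, n_samples: int) -> np.ndarray:
    """Greedy FPS over the first three (location) columns, sketching Eq. (3)."""
    n = points.shape[0]
    selected = np.zeros(n_samples, dtype=int)
    min_sq_dist = np.full(n, np.inf)
    selected[0] = 0                                   # arbitrary seed point
    for k in range(1, n_samples):
        delta = points[:, :3] - points[selected[k - 1], :3]
        min_sq_dist = np.minimum(min_sq_dist, np.einsum("ij,ij->i", delta, delta))
        selected[k] = int(np.argmax(min_sq_dist))     # farthest from the chosen set
    return points[selected]

def midpoint_upsample(points: np.ndarray, n_out: int) -> np.ndarray:
    """One-pass stand-in for Eq. (4): insert the midpoint between every point
    and its nearest neighbour (assumes n_out <= 2 * len(points))."""
    d = np.linalg.norm(points[:, None, :3] - points[None, :, :3], axis=-1)
    np.fill_diagonal(d, np.inf)
    nearest = np.argmin(d, axis=1)
    dense = np.vstack([points, (points + points[nearest]) / 2.0])
    return dense[:n_out]
```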
#### II-A3 Point Cloud Rendering
The point cloud rendering process begins once all ${N}_{\text{u}}$ points for the AR scenery have been received and upsampled. This process prepares the point cloud data for the Unity3D platform and facilitates high-resolution rendering. The rendering process needs to create a comprehensive $360^{\circ}$ view of the avatar, along with immersive background scenery, and involves point cloud preparation and processing:
* (1)
Point cloud preparation: Point cloud preparation involves formatting points
from the received point cloud data. Each point contains information such as
three-dimensional location and RGB color value, which determines the point’s
position and visual depiction within the virtual environment.
* (2)
Point cloud processing: The procedure of point cloud processing includes mesh
reconstruction along with positioning. It commences with the transformation of
these discrete points into a compatible mesh format for the Unity3D platform.
Subsequently, the Shader, a uniquely designed program, is employed during the
rendering process to regulate the gradients of illumination, obscurity, and
chromaticity within the virtual environment. The final step of this process
involves implementing the positioning phase to optimize the visualization,
encompassing translation, rotation, and scaling elements. Concurrently, the Level of Detail (LoD) strategy is invoked throughout processing; it dynamically modulates the complexity of a 3D model representation contingent upon its spatial relation to the client, rendering fewer points when clients are distant and more points as they step closer, thereby providing a better viewing experience.
### II-B Wireless Channel Model
The wireless communication model utilizes a Frequency-Division Multiplexing (FDM) scheme over a Rayleigh fading channel with additive white Gaussian noise, dividing the wireless channel into multiple parallel subchannels. Each subchannel experiences a unique frequency-selective response due to the Rayleigh fading environment. This results in varying levels of channel gain across different frequencies, leading to different Signal-to-Noise Ratios (SNRs) for each subchannel. The frequency-selective fading characteristic of such environments affects the subchannels differently, producing a range of channel responses and necessitating adaptive strategies for efficient communication.
The wireless communication process begins with source encoding, transforming the data awaiting transmission into a bitstream. Following this, a standard
channel encoding is implemented to inject redundancy into the data to be
transmitted, safeguarding data integrity and enabling the correction of
potential errors during transmission. Traditional communication coding
methods, such as turbo coding and low-density parity-check coding, can be
utilized in the channel coding process [35]. The encoded bits generated by channel encoding are carried forward as $b_{n}$. Following channel encoding, we implement Binary Phase-Shift Keying (BPSK), a widely used modulation technique that alters the phase of a carrier signal based on the encoded bits $b_{n}$, resulting in modulated signals denoted as $s_{n}$. The BPSK modulation process can be expressed as
$s_{n}=\begin{cases}+1,&\text{if }b_{n}=1,\\\ -1,&\text{if
}b_{n}=0.\end{cases}$ (5)
where $b_{n}$ represents the binary input, and $s_{n}$ represents the
modulated output signal. In BPSK, a phase shift in the carrier signal is used
to convey information. Specifically, the binary input $b_{n}$ in $\{0,1\}$ is mapped to the output signal $s_{n}$ in $\{-1,+1\}$. This modulation
technique results in two distinct phases of the carrier signal. The simplicity
of this representation makes BPSK an efficient and straightforward modulation
technique. Despite its relative bandwidth inefficiency compared to more
complex schemes, BPSK’s resilience against noise and interference secures its
position as a foundational and widely-used technique in digital communication.
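A minimal sketch of this bit-to-symbol mapping and the matching hard-decision demodulation is shown below; the helper names are ours, chosen only for illustration.

```python
import numpy as np

def bpsk_modulate(bits: np.ndarray) -> np.ndarray:
    """Map bits b_n in {0, 1} to symbols s_n in {-1, +1}, per Eq. (5)."""
    return 2.0 * bits - 1.0

def bpsk_demodulate(received: np.ndarray) -> np.ndarray:
    """Hard decision: positive real part decodes to bit 1, otherwise bit 0."""
    return (np.real(received) > 0).astype(int)

bits = np.random.randint(0, 2, size=8)
assert np.array_equal(bpsk_demodulate(bpsk_modulate(bits)), bits)
```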
Finally, we take into account the multi-path channel within the FDM,
represented as $\overrightharp{H}_{\text{c}}$. In the wireless channel, each
modulated bit $s_{n}$ is allocated to a subchannel, denoted as $h_{n}$, and is
then ready for transmission over that subchannel. This approach allows for the simultaneous transmission of multiple modulated bits over different subchannels. The channel gains of the wireless subchannels are represented as
$\overrightharp{H}_{\text{c}}=[h_{1},h_{2},\cdots,h_{N_{\text{c}}}]^{\text{T}},$ (6)
where $N_{\text{c}}$ stands for the total number of subchannels in $\overrightharp{H}_{\text{c}}$, and $h_{n}$ signifies the channel gain of the $n$th
subchannel. Thus the SNR in each subchannel $h_{i}$ can be expressed as:
$\mathrm{SNR}_{i}=\frac{P_{i}\left\|h_{i}\right\|^{2}}{\sigma_{i}^{2}},$ (7)
where $P_{i}$ represents the received signal power,
$\left\|\boldsymbol{h}_{i}\right\|^{2}$ denotes the squared norm of the
channel gain from the $i$-th subchannel to the destination, and
$\sigma_{i}^{2}$ is the noise power.
Considering the characteristics of each subchannel, the average SNR of the communication process within channel $\overrightharp{H}_{\text{c}}$ is
expressed as
$\mathrm{SNR}_{\mathrm{avg}}=\frac{1}{N_{\text{c}}}\sum_{i=1}^{N_{\text{c}}}\mathrm{SNR}_{i},$
(8)
where $\mathrm{SNR}_{\text{avg }}$ is the average SNR across all subchannels,
$N_{\text{c}}$ is the total number of subchannels, and $\mathrm{SNR}_{i}$ is
the SNR for the $i$-th subchannel. Therefore, the received data of the $i$-th
subchannel at the receiver side ${s^{{}^{\prime}}_{i}}$ can be expressed as
${s^{{}^{\prime}}_{i}}=s_{i}\otimes h_{i}+n_{i},$ (9)
where the symbol $\otimes$ refers to circular convolution, an operation correlating the input signal with a finite impulse response, and $n_{i}$ is an additive Gaussian noise sample with power $\sigma_{i}^{2}$. The channel response value $h_{i}$ varies due to frequency-selective fading in
the FDM system. Each subchannel response $h_{i}$ is assumed to be a complex
Gaussian random variable. Subsequently, the received data, denoted as
${s^{{}^{\prime}}_{i}}$, is processed by both a traditional channel decoder
and a source decoder at the receiver to recover the original data.
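The per-subchannel behaviour above can be sketched as follows, under two simplifying assumptions: fading is flat within each subchannel, so the circular convolution of Eq. (9) reduces to a complex multiplication, and the noise is modelled as a complex Gaussian sample of power $\sigma_{i}^{2}$. The value $N_{\text{c}}=64$ is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
N_c, P, noise_var = 64, 1.0, 0.1          # subchannels, signal power, noise power

# Complex Gaussian gains -> Rayleigh-distributed magnitudes for each subchannel.
h = (rng.standard_normal(N_c) + 1j * rng.standard_normal(N_c)) / np.sqrt(2)

snr_per_sub = P * np.abs(h) ** 2 / noise_var          # Eq. (7)
snr_avg = snr_per_sub.mean()                          # Eq. (8)

# One BPSK symbol per subchannel: flat fading plus additive noise, cf. Eq. (9).
s = 2.0 * rng.integers(0, 2, N_c) - 1.0
n = np.sqrt(noise_var / 2) * (rng.standard_normal(N_c) + 1j * rng.standard_normal(N_c))
s_received = s * h + n
s_equalized = s_received / h                          # zero-forcing equalisation
bits_hat = (s_equalized.real > 0).astype(int)         # hard-decision demodulation
```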
### II-C Novel Task-oriented and Semantics-aware Framework
In this section, we provide a detailed description of our proposed TSAR framework, which not only differs from the traditional point cloud communication framework but also incorporates several task-oriented strategies, including an effectiveness-level optimization methodology. The TSAR framework leverages shared base knowledge and utilizes task-oriented context at the semantic level to enable more efficient and effective communication for AR applications. As illustrated in Fig. 2,
TSAR include semantic information extraction, task-oriented semantics-aware
wireless communication, avatar pose recovery and rendering.
Figure 2: Task-oriented and semantics-aware communication framework
#### II-C1 Semantic Information Extraction
Unlike the traditional point cloud communication framework, which relies primarily on raw point cloud data for AR scenery representation and transmission, our proposed TSAR framework takes a more sophisticated approach, extracting rich semantic- and effectiveness-level data from the raw point cloud. The process begins with the downsampled point cloud sensing data, $\mathbf{P}_{\text{dpc}}$, as the input. This point cloud data encapsulates the whole AR scenery, which is broadly divided into two categories: the moving avatar model $\mathcal{A}_{\text{a}}$ and the stationary model $\mathcal{A}_{\text{s}}$. Only the avatar’s moving position is considered essential information that needs to be refreshed at every frame. Thus, the output of this semantic information extraction process is the skeleton information of the moving avatar, $\overrightharp{I}^{\text{tsar}}_{i}$, which can be represented as
$\overrightharp{I}^{\text{tsar}}_{i}=(\overrightharp{l}_{i},\overrightharp{r}_{i})=(l_{\text{x}},l_{\text{y}},l_{\text{z}},r_{\text{x}},r_{\text{y}},r_{\text{z}},r_{\text{w}}),\
i\in[0,{N}_{\text{a}}],$ (10)
where $N_{\text{a}}$ represents the total number of skeletons in the avatar,
$\overrightharp{l}_{i}$ represents the three-dimensional location and
$\overrightharp{r}_{i}$ represents the quaternion rotation of the $i$th
skeleton in the avatar model.
Apart from quaternion rotation, current research also employs Euler angles to represent rotations in AR scenery. In comparison to quaternions, Euler angles offer a simpler and more information-efficient way to represent rotation and to calculate node positions when a fixed root node is available. This approach needs less information to reconstruct the avatar’s pose than quaternions, resulting in fewer data packets and potentially more efficient communication [36]. The transformation from quaternion rotation to Euler angles can be expressed as
${\left[\begin{array}[]{c}{e_{\text{p}}}\\ {e_{\text{r}}}\\ {e_{\text{y}}}\end{array}\right]=\left[\begin{array}[]{c}\arctan\frac{2\left(r_{\text{y}}r_{\text{z}}+r_{\text{w}}r_{\text{x}}\right)}{1-2\left(r_{\text{x}}^{2}+r_{\text{y}}^{2}\right)}\\ \arcsin\left(2\left(r_{\text{w}}r_{\text{y}}-r_{\text{x}}r_{\text{z}}\right)\right)\\ \arctan\frac{2\left(r_{\text{x}}r_{\text{y}}+r_{\text{w}}r_{\text{z}}\right)}{1-2\left(r_{\text{y}}^{2}+r_{\text{z}}^{2}\right)}\end{array}\right]\times\frac{180}{\pi},}$ (11)
where $e_{\text{p}}$, $e_{\text{r}}$, and $e_{\text{y}}$ are the pitch, roll, and yaw Euler angles representing rotations around the three primary axes with an associated root point. The semantic information of
the AR application, denoted as $\mathbf{D}_{\text{tsar}}$, represents all the
skeleton information $\overrightharp{I}^{\text{tsar}}_{i}$ of the avatar model
generated through a semantic information extraction process from the
downsampled point cloud $\mathbf{P}_{\text{dpc}}$, which can be expressed as
$\mathbf{D}_{\text{tsar}}=[{\overrightharp{I}^{\text{tsar}}_{\text{1}},\overrightharp{I}^{\text{tsar}}_{\text{2}},\cdots,\overrightharp{I}^{\text{tsar}}_{{N}_{\text{a}}}}]^{\text{T}}=\mathcal{S}(\mathbf{P}_{\text{dpc}},{\theta}_{\text{s}}),$
(12)
where $\mathcal{S}(\cdot)$ represents the semantic information extraction
process, and ${\theta}_{\text{s}}$ encompasses all the experimental and neural
network parameters. This equation represents the entire semantic information
extraction process, which maps the downsampled point cloud data
$\mathbf{P}_{\text{dpc}}$ to a more meaningful semantic representation
$\mathbf{D}_{\text{tsar}}$ for subsequent transmission over wireless channels.
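A small sketch of the quaternion-to-Euler conversion of Eq. (11) follows. It uses arctan2 in place of arctan for quadrant robustness and clips the arcsin argument against rounding error; both are standard implementation choices rather than details specified by the framework.

```python
import numpy as np

def quaternion_to_euler(r_x: float, r_y: float, r_z: float, r_w: float):
    """Quaternion rotation -> Euler angles in degrees, following Eq. (11)."""
    e_1 = np.arctan2(2 * (r_y * r_z + r_w * r_x), 1 - 2 * (r_x**2 + r_y**2))
    e_2 = np.arcsin(np.clip(2 * (r_w * r_y - r_x * r_z), -1.0, 1.0))
    e_3 = np.arctan2(2 * (r_x * r_y + r_w * r_z), 1 - 2 * (r_y**2 + r_z**2))
    return np.degrees([e_1, e_2, e_3])

print(quaternion_to_euler(0.0, 0.0, 0.0, 1.0))   # identity rotation -> [0, 0, 0]
```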
#### II-C2 Task-oriented Semantics-aware Wireless Communication
Building upon the extracted semantic information, we develop an avatar-based
semantic ranking algorithm to integrate task-oriented semantic information
ranking into end-to-end wireless communication, exploiting the importance of semantic information for the avatar-based AR displaying task. The algorithm
correlates the importance evaluation of semantic information and task
relevance with channel state information feedback, thereby prioritizing more
important semantic information for optimal transmission over more reliable
subchannels. More specifically, each skeleton is represented as a node in the
avatar skeleton graph $\mathcal{G}$, as shown in Fig. 4, and the skeleton
ranking is determined by a calculated weight in the skeleton graph, which
indicates the level of importance in the later avatar pose recovery. The
weights of all semantic information $\mathbf{D}_{\text{tsar}}$ are denoted as
$\overrightharp{W}_{\text{tsar}}$ and can be formulated as
$\overrightharp{W}_{{\text{tsar}}}=[{\omega}_{I_{1}},{\omega}_{I_{2}},\cdots,{\omega}_{I_{N_{\text{a}}}}]^{\text{T}}=\mathcal{W}(\mathbf{D}_{\text{tsar}},\mathcal{G}),$ (13)
where ${\omega}_{I_{i}}$ represents the weight of the semantic information of the $i$th skeleton in the avatar skeleton graph. These node weights essentially represent the importance of the semantic information to the avatar representation, with higher weights indicating greater importance of the skeleton information for avatar pose recovery. By correlating these weights with Channel State Information (CSI) feedback during wireless communication, the effectiveness of avatar transmission in the AR application can be optimized. Specifically, the semantically important information is mapped to and transmitted over more reliable subchannels. Current research on FDM has demonstrated that CSI can be accurately estimated at the transmitter side using suitable algorithms and feedback mechanisms [37]. Consequently, the subchannel gains $h_{n}$ at the receiver side are assumed to be included in the CSI feedback, enabling the transmitter to have accurate knowledge of all subchannel states in the FDM.
According to Eq. (7), a subchannel with a higher SNR has a better subchannel state and thus achieves more reliable transmission of semantic information. Therefore, sorting is employed to establish a
mapping function $\mathcal{M}(\cdot)$ between the semantic information and
various subchannels. This mapping relies on the weights calculated for the
semantic information and the CSI. Higher weights, indicating greater
importance of the semantic information in the avatar pose recovery, are
assigned to more reliable subchannels. The mapping function is expressed as
$\begin{array}[]{r}\mathcal{M}(\overrightharp{W}_{\text{tsar}},\mathcal{G},\overrightharp{H}_{\text{c}})=\\{\overrightharp{I}^{\text{tsar}}_{i},h_{j}\\},i\in[1,N_{\text{a}}],j\in[1,N_{\text{c}}],\end{array}$
(14)
where the map $\{\overrightharp{I}^{\text{tsar}}_{i},h_{j}\}$ refers to transmitting the semantic information $\overrightharp{I}^{\text{tsar}}_{i}$ on subchannel $h_{j}$. Based on this channel mapping, semantic information with higher priority is mapped to subchannels with better channel responses.
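As a sketch, the mapping of Eq. (14) reduces to a sort-and-pair step, assuming $N_{\text{c}}\geq N_{\text{a}}$ so that every skeleton can receive its own subchannel; the function name is illustrative.

```python
import numpy as np

def map_semantics_to_subchannels(weights: np.ndarray, snr_per_sub: np.ndarray):
    """Sketch of M(.) in Eq. (14): the skeleton with the largest importance
    weight is paired with the subchannel with the highest SNR, and so on."""
    info_order = np.argsort(weights)[::-1]          # most important first
    channel_order = np.argsort(snr_per_sub)[::-1]   # most reliable first
    return list(zip(info_order.tolist(),
                    channel_order[: len(info_order)].tolist()))
```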
#### II-C3 Avatar Pose Recovery and Rendering
In contrast to the traditional point cloud wireless communication framework, the TSAR framework approaches avatar pose recovery differently, transmitting the base knowledge at the beginning of the AR application. As illustrated in Fig. 2, the base knowledge $\boldsymbol{B}_{\text{*}}$ encompasses different types of information, including the avatar skeleton graph $\mathcal{G}$, avatar initial position $l_{o}$, avatar model $\mathcal{A}_{\text{a}}$, stationary background model $\mathcal{A}_{\text{s}}$, stationary initial position $l_{s}$, and their respective appearance meshes $\mathcal{M}_{\text{a}}$ and $\mathcal{M}_{\text{s}}$. Whenever a new 3D object appears in the AR scenery, the base knowledge at both transmitter and receiver needs to be updated synchronously.
In this way, the TSAR framework considers the avatar as a whole entity and recovers the avatar’s pose using a limited set of skeleton points instead of treating individual points as the smallest recovery unit. The avatar pose recovery process $\mathcal{R}(\cdot)$ can be expressed as
$\hat{\mathcal{A}}_{a}=\mathcal{R}(\mathbf{D}^{\prime}_{\text{tsar}},\boldsymbol{B}_{\text{tsar}}),$ (15)
where $\boldsymbol{B}_{\text{tsar}}$ represents the base knowledge of TSAR,
and $\hat{\mathcal{A}}_{a}$ denotes the avatar model
${\mathcal{A}_{\text{a}}}$ with appearance $\mathcal{M}_{\text{a}}$ after pose
recovery with semantic information $\mathbf{D}^{{}^{\prime}}_{\text{tsar}}$.
The AR displaying process is then straightforward: the reconstructed avatar $\hat{\mathcal{A}}_{\text{a}}$ and the stationary background model $\mathcal{A}_{\text{s}}$ are presented in the AR scenery. The process of avatar pose
recovery in the TSAR framework is intricately designed and hinges on
associating each piece of skeleton information
$\overrightharp{I}^{\text{tsar}}_{i}$ with the avatar model $\mathcal{A}_{a}$
on the Unity3D platform. In traditional point cloud communication frameworks,
the entire point cloud data must be refreshed for each frame, which can be a
computationally expensive and time-consuming process. In contrast, the TSAR framework only requires updating the skeleton information associated with the avatar’s movements, and it updates the avatar’s pose based on this information.
### II-D Problem Formulation
In summary, the overall framework aims to achieve task-oriented semantics-
aware communication with efficient data transmission for better avatar
representation in wireless AR applications. The primary objective of the
framework is to maximize the client-side AR viewing experience based on the transmitted semantic information, which corresponds to minimizing the weighted error of the received semantic information. The objective function can be represented as
$\mathcal{P}:\min_{\{\theta_{\text{s}},(\overrightharp{I}_{i},h_{j})\}}\lim_{T\rightarrow+\infty}\frac{1}{T}\sum_{t=0}^{T}\sum_{i=0}^{N_{\text{a}}}\left(\overrightharp{I}_{i,t}^{\text{tsar}}-\overrightharp{I}_{i,t}^{\text{tsar}^{\prime}}\right)\cdot\omega_{I_{i}},\quad\text{s.t.}\ \ i\in[1,N_{\text{a}}],\ j\in[1,N_{\text{c}}],$ (16)
where $\overrightharp{I}_{i,t}^{\text{tsar}}$ represents the semantic
information of the $i$th skeleton at time $t$, and
$\overrightharp{I}_{i,t}^{\text{tsar}^{\prime}}$ is the received semantic
information after the wireless channel. The weights $\omega_{I_{i}}$ reflect
the importance of each skeleton node $i$ in representing the avatar graph.
This equation formulates the problem of minimizing the error in avatar
representation during transmission.
Figure 3: Semantic information extraction network

TABLE I: SANet parameters and training setup

Parameter | Value
---|---
_Cell_ |
Semantic network | In (2048,3), out (25,1)
Feature conv | (In feature=2048, out feature=1440)
$1^{\text{st}}$ Conv2d | (In feature=256, out feature=256)
$2^{\text{nd}}$ Conv2d | (In feature=256, out feature=128)
Output layer | (In feature=128, out feature=25)
_Simulation_ |
Learning rate | $10^{-4}$
Optimizer | Adam
Episode | 900
Batch size | 16
Loss Function | MSE
Momentum | SGD
Activation function | ReLU
## III Semantic Level Design
In this section, we will discuss the semantic extraction and recovery blocks,
including semantic information extraction with deep learning, base knowledge
selection, avatar pose recovery, and evaluation metric.
### III-A Semantic Extraction with Deep Learning
Inspired by the KeypointNet proposed in [28], we propose a semantics-aware
network called SANet to extract the skeleton keypoint information of a moving
avatar from the whole point cloud of AR scenery. The extraction is an integral
step towards creating a more interactive and immersive augmented reality
experience. The SANet operates by using downsampled point cloud data
${\mathbf{P}_{\text{dpc}}}$ as input, which represents the 3D coordinates of
both the stationary models and the moving avatar. This data is then processed
by the SANet to extract accurate avatar skeleton information, crucial for
reproducing the avatar’s movements in the virtual environment. The design
objective of the SANet is to minimize the Euclidean distance
($\mathcal{L}_{2}$) between the predicted semantic information, denoted as $\mathcal{S}({\mathbf{P}_{\text{dpc}}})$, and the labeled semantic information of the skeleton locations, represented as $\mathbf{D}^{l}_{\text{tsar}}$. The training objective is captured as
$\theta_{\text{s}}^{*}=\arg\min_{\theta_{\text{s}}}\mathcal{L}_{2}\left(\mathcal{S}({\mathbf{P}_{\text{dpc}}}),\mathbf{D}^{l}_{\text{tsar}}\right),$ (17)
where $\theta_{\text{s}}$ represents all the neural networks and experiment
parameters in the SANet, which is defined in Table I and Fig. 3. Training the
SANet involves optimizing these parameters to minimize the loss, thus
enhancing the accuracy of semantic information extraction.
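As an illustration of optimizing the Eq. (17) objective with the Table I setup (Adam, learning rate $10^{-4}$, batch size 16, MSE loss), the PyTorch sketch below runs one training step on random data. The tiny MLP is purely a placeholder for SANet, whose actual point cloud backbone is discussed next.

```python
import torch
import torch.nn as nn

class ToySANet(nn.Module):
    """Placeholder for SANet: maps a (2048, 3) point cloud to 25 skeleton
    locations, matching the in/out shapes of Table I but not the architecture."""
    def __init__(self, n_points: int = 2048, n_skeletons: int = 25):
        super().__init__()
        self.n_skeletons = n_skeletons
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_points * 3, 256), nn.ReLU(),
            nn.Linear(256, n_skeletons * 3),
        )

    def forward(self, p_dpc: torch.Tensor) -> torch.Tensor:  # (B, 2048, 3)
        return self.net(p_dpc).view(-1, self.n_skeletons, 3)

model = ToySANet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # Table I setup
loss_fn = nn.MSELoss()                                      # L2 objective, Eq. (17)

p_dpc = torch.randn(16, 2048, 3)     # batch of downsampled point clouds
d_label = torch.randn(16, 25, 3)     # labelled skeleton locations (stand-in)
loss = loss_fn(model(p_dpc), d_label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```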
To determine the most suitable backbone for the designed SANet, we train the
SANet with various backbone networks, including ResNet, RS-CNN, PointNet, SpiderCNN, PointConv, and DGCNN [38]. Similar to [28], we use the mean Average
Precision (mAP) as the performance evaluation metric to assess the semantic
information extraction accuracy of the predicted keypoint probabilities in
relation to the ground truth semantic information labels.
### III-B Base Knowledge Selection
To explore the most suitable base knowledge, we propose the basic TSAR framework (TSAR) and the Euler-angle-based TSAR framework (E-TSAR), which consider different shared base knowledge and semantic information definitions. (Semantic information, as presented in Fig. 2, consists of the skeleton information that needs to be transmitted in every frame; conversely, base knowledge encompasses information used primarily in the first frame.)
TSAR: For the basic TSAR framework, semantic information for each skeleton is
defined as the data pertaining to position and quaternion rotation as in Eq.
(10). The shared base knowledge, denoted as $\boldsymbol{B}_{\text{tsar}}$,
comprises the stationary background model, stationary model initial position, moving avatar model, and their corresponding appearance meshes, denoted as
$\boldsymbol{B}_{\text{tsar}}=\{\mathcal{A}_{\text{a}},\mathcal{A}_{\text{s}},\mathcal{M}_{\text{a}},\mathcal{M}_{\text{s}},\overrightharp{l}_{\text{s}}\}.$ (18)
E-TSAR: As an extension of TSAR, the semantic information of each skeleton $I_{i}$ in E-TSAR is defined as the Euler angle rotation obtained from Eq. (11), namely
$\overrightharp{I}^{\text{etsar}}_{i}=(\overrightharp{e}_{i})={(e_{\text{r}},e_{\text{y}},e_{\text{p}})},\
i\in[0,{N}_{\text{a}}],$ (19)
where the shared base knowledge $\boldsymbol{B}_{\text{etsar}}$ encompasses
the avatar skeleton graph, avatar initial position, stationary background
model, stationary model initial position, moving avatar model, and their
appearance meshes, defined as
$\boldsymbol{B}_{\text{etsar}}=\\{\mathcal{M}_{\text{a}},\mathcal{M}_{\text{s}},\mathcal{A}_{\text{a}},\mathcal{A}_{\text{s}},\overrightharp{l}_{\text{a}},\overrightharp{l}_{\text{s}},\mathcal{G}\\}.$
(20)
### III-C Avatar Pose Recovery
The avatar pose recovery involves using the skeleton graph $\mathcal{G}$ in
the base knowledge and the received semantic information to reconstruct the
avatar pose. The entire avatar pose recovery process is shown in Algorithm 1.
Specifically, a recursive algorithm is employed to traverse and assign all
skeleton information to the avatar model $\mathcal{A}_{a}$ with initialized
parameters. However, due to differences in the definitions of the semantic information and the shared base knowledge, the avatar pose recovery process differs between the TSAR and E-TSAR frameworks.
On the one hand, the basic TSAR framework employs a simple avatar pose
recovery method, assigning values to the avatar model based on skeleton point identity, using the received position vector and quaternion rotation. On the other hand, the E-TSAR framework, which only transmits the Euler angle of
each skeleton point as semantic information, requires calculating each
skeleton position with respect to its root point in the skeleton graph before
assigning the skeleton information to the avatar model. The E-TSAR framework
reconstructs the avatar pose by first determining the relationships between
the skeleton points in the avatar skeleton graph $\mathcal{G}$. It then
computes the position of each skeleton point by considering its Euler angle and the position of its root point within $\mathcal{G}$. The relative
distance vector $\Delta\overrightharp{l}_{(i,i-1)}$ between the $i$th skeleton
node and the previous ${(i-1)}$th node can be represented as
$\Delta\overrightharp{l}_{(i,i-1)}=(\Delta\text{x},\Delta\text{y},\Delta\text{z})=\overrightharp{e}_{i}\times\overrightharp{l}_{i-1},$
(21)
where $\overrightharp{e}_{i}$ represents the Euler angle of the $i$th skeleton node, $(\Delta\text{x},\Delta\text{y},\Delta\text{z})$ represents the displacement between the two skeleton nodes along the x, y, and z coordinates, and the actual
position of the $i$th skeleton node will be calculated by combining
$\Delta\overrightharp{l}_{(i,i-1)}$ and $\overrightharp{l}_{i-1}$, which can
be expressed as
$\overrightharp{l}_{i}=\overrightharp{l}_{i-1}+\Delta\overrightharp{l}_{(i,i-1)},$
(22)
where the root node position $\overrightharp{l}_{0}$ is equal to the avatar
initial position $\overrightharp{l}_{\text{a}}$ in the base knowledge, and
$\overrightharp{l}_{i}$ represents the position of the $i$th skeleton node in
the avatar, with its three components representing the x, y, and z coordinates
respectively.
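The chain computation of Eqs. (21)-(22) can be sketched as follows. Reading the "$\times$" of Eq. (21) literally as a 3D cross product is our assumption for illustration; a production implementation would instead apply the rotation to the bone offsets stored in the skeleton graph $\mathcal{G}$.

```python
import numpy as np

def recover_positions(euler_deg: np.ndarray, l_root: np.ndarray) -> np.ndarray:
    """Walk a simple parent chain: Delta_l = e_i x l_{i-1} (Eq. (21)), then
    l_i = l_{i-1} + Delta_l (Eq. (22)); the root is the avatar initial position."""
    n = euler_deg.shape[0]
    positions = np.zeros((n + 1, 3))
    positions[0] = l_root
    for i in range(1, n + 1):
        delta = np.cross(np.radians(euler_deg[i - 1]), positions[i - 1])
        positions[i] = positions[i - 1] + delta
    return positions
```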
Algorithm 1 Avatar Pose Recovery
1: Initialization: Received base knowledge $\boldsymbol{B}_{\text{*}}$,
received data $\mathbf{D}^{{}^{\prime}}_{\text{tsar}}$
2: Get skeleton graph $\mathcal{G}$, avatar initial position $\overrightharp{l}_{a}$, avatar model $\mathcal{A}_{a}$, and avatar appearance mesh $\mathcal{M}_{a}$ from $\boldsymbol{B}_{\text{*}}$
3: Count the skeleton number $N_{\text{a}}=\mathbf{C}_{\text{s}}(\mathcal{G})$
4: Count the received semantic information
$N_{\text{r}}=\mathbf{C}_{\text{r}}(\mathbf{D}^{{}^{\prime}}_{\text{tsar}})$
5: if
$({\mathcal{G}\notin\boldsymbol{B}_{\text{*}}}\And{l_{i}\in\mathbf{D}^{{}^{\prime}}_{\text{tsar}}})$
then
6: for each $i$ in $N_{\text{r}}$ do
7: Attach $\overrightharp{I}^{\text{tsar}}_{i}$ to model $\mathcal{A}_{a}$
(Avatar pose recovery for the TSAR)
8: end for
9: else
10: for each $i$ in $N_{a}$ do
11: Update $\overrightharp{l}_{i}$ according to Eq. (21) and Eq. (22)
12: Attach $\overrightharp{I}^{\text{etsar}}_{i}$ to model $\mathcal{A}_{a}$
(Avatar pose recovery for the E-TSAR)
13: end for
14: end if
15: Generate avatar $\hat{\mathcal{A}_{a}}$ with appearance mesh
$\mathcal{M}_{a}$ and model initial position $l_{a}$ according to Eq. (15).
Output: Avatar $\hat{\mathcal{A}_{a}}$ with reconstructed pose
### III-D Evaluation Metric
The semantic level of our proposed TSAR aims to enhance communication effectiveness to achieve accurate avatar movement in the AR application; specifically, the accuracy of the skeleton information between the transmitter and the receiver. The optimization seeks to minimize the Euclidean distance between the semantic information sent at the transmitter and that received at the receiver. Thus, the Mean Per Joint Position Error (MPJPE) is used to estimate and evaluate the avatar pose error in the geometry aspect between transmitter and receiver, covering the x-axis, y-axis, and z-axis values, which can be expressed as
$\text{MPJPE}=\frac{1}{N_{\mathrm{a}}}\sum_{i=1}^{N_{\mathrm{a}}}\sqrt{{|\overrightharp{l}_{i}-\overrightharp{l}^{{}^{\prime}}_{i}|}^{2}},$ (23)
where $\overrightharp{l}_{i}$ and $\overrightharp{l}^{{}^{\prime}}_{i}$ represent the three-dimensional positions of the $i$th skeleton at the transmitter and the receiver, respectively.
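In code, Eq. (23) reduces to an average of per-joint Euclidean distances, as in this short sketch:

```python
import numpy as np

def mpjpe(l_tx: np.ndarray, l_rx: np.ndarray) -> float:
    """Mean Per Joint Position Error of Eq. (23) for (N_a, 3) position arrays."""
    return float(np.linalg.norm(l_tx - l_rx, axis=1).mean())
```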
## IV Effectiveness Level Design
In this section, we demonstrate the design principles of TSAR optimization at the effectiveness level based on the semantic information defined above. In the following, we present task-oriented semantics-aware
wireless communication and its evaluation metric.
### IV-A Task-oriented Semantics-aware Wireless Communication
To further enhance the effectiveness of avatar communication in AR
applications, we propose an Avatar-based Semantic Ranking (AbSR) algorithm to calculate an importance weight for each piece of extracted semantic information, identifying the information that plays a more advantageous role in avatar representation. More specifically, we calculate the importance of the skeleton nodes in the skeleton graph $\mathcal{G}$ using a ranking method based on the PageRank algorithm proposed by Google [39]. The detailed process of the AbSR algorithm is given in Algorithm 2, and the weight is calculated as
$\mathrm{\omega}_{I_{i}}=\frac{N_{J}}{1-\alpha}+\sum_{j=0}^{N_{J}}\left({|\Delta\overrightharp{l}_{(i,j)}|}\times\omega_{J_{j}}\right),$ (24)
where $\omega_{I_{i}}$ represents the weight of the semantic information
$\overrightharp{I}_{i}$ in the $i$th skeleton node of skeleton graph, and
$|\Delta\overrightharp{l}_{(i,j)}|$ denotes the Euclidean distance between the
$i$th and $j$th skeleton. $J_{j}$ denotes the node index which are connected
to the $i$th node, $\omega_{J_{j}}$ is the weight value of the ${J_{j}}$th
skeleton, $N_{\text{j}}$ represents the total number of nodes $J_{j}$ in the
skeleton graph, and $\alpha$ is a discount factor ranging from $0$ to $1$. As
suggested in [40], we set the discount factor to 0.7 in this paper. A detailed
diagram is shown in Fig. 4, which illustrates that skeletons with more
connections and longer distances from other connected skeletons are more
critical. The underlying rationale is that a node with more connections will
have a greater impact on its connected skeleton nodes if it suffers a bit error in
wireless communication. Furthermore, nodes that are more isolated, indicated
by their greater distance from other skeletons, are likely to have a more
substantial impact on the avatar representation due to their distinctive
appearance contributions, highlighting the importance of these skeletons.
After calculating the node weights of the skeleton graph, a sort algorithm is applied to arrange the skeleton nodes in descending order of rank. Leveraging our proposed AbSR algorithm, we consider effectiveness-level optimization during wireless communication, focusing on preserving the avatar semantics. This extends the semantic-level design in Section III, ensuring that crucial avatar semantic information is prioritized in our task-based wireless communication approach. As shown in Eq. (14), this approach maps higher-weight semantic information onto FDM subchannels with better CSI. This is the so-called Euler angle and channel-based TSAR framework (EC-TSAR), detailed below.
EC-TSAR: Building on E-TSAR, the CSI is used to implement the AbSR algorithm and the channel mapping in Algorithm 2 to improve communication effectiveness in AR applications. More specifically, the channel mapping process assigns more important semantic information a higher priority on the better subchannels for wireless transmission. The semantic information is defined as the position vector and Euler angle rotation of all skeletons in
the moving avatar, as shown in Eq. (19), while the base knowledge encompasses
the avatar skeleton graph, shared background model, moving avatar model, and
their appearance meshes, as shown in Eq. (20).
Figure 4: Skeleton graph formation and ranking

Algorithm 2 Avatar-based Semantic Ranking Algorithm
1: Initialization: Base Knowledge $\boldsymbol{B}_{\text{*}}$, Semantic
information $\mathbf{D}_{\text{tsar}}$
2: Get $\mathcal{G},\mathcal{A}_{a}$ from $\boldsymbol{B}_{\text{*}}$,
3: Get $\Delta\overrightharp{l}_{(i,i-1)}$ from $\mathcal{A}_{a}$
4: Count skeleton number $N_{\text{a}}=\mathbf{C}_{\text{s}}(\mathcal{G})$
5: repeat
6: $k=k+1$
7: for each $i$ in $N_{\text{a}}$ do
8: Update $\omega^{k}_{I_{i}}$ with $\Delta\overrightharp{l}_{(i,i-1)}$ based
on Eq. (24)
9: $\delta=||\omega^{k}_{I_{i}}-\omega^{k-1}_{I_{i}}||$
10: end for
11: until $\delta<\varepsilon$
12: Update $\\{\overrightharp{I}^{\text{tsar}}_{i},h_{j}\\}$ according to Eq.
(14)
Output: Channel mapping $\{\overrightharp{I}^{\text{tsar}}_{i},h_{j}\}$
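A sketch of the weight iteration inside Algorithm 2 is given below. The adjacency and distance matrices are assumed to be precomputed from $\mathcal{G}$, and the normalisation step is our addition to keep the fixed-point iteration of Eq. (24) bounded; it preserves the ranking that the channel mapping needs.

```python
import numpy as np

def absr_weights(adj: np.ndarray, dist: np.ndarray,
                 alpha: float = 0.7, eps: float = 1e-6) -> np.ndarray:
    """Iterate the Eq. (24) update until the change delta falls below eps.
    adj[i, j] = 1 if skeletons i and j are connected in G; dist[i, j] is the
    Euclidean distance |Delta_l(i, j)| between them."""
    n = adj.shape[0]
    w = np.ones(n) / n
    for _ in range(1000):                     # iteration cap for safety
        n_j = adj.sum(axis=1)                 # connected-node count per skeleton
        w_new = n_j / (1 - alpha) + (adj * dist) @ w
        w_new /= w_new.sum()                  # normalisation (our addition)
        if np.abs(w_new - w).sum() < eps:
            break
        w = w_new
    return w_new
```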
(a) Avatar movement range of adjacent frame.
(b) Semantic information extraction accuracy.
Figure 5: Avatar movement distribution and semantic information extraction
accuracy
### IV-B Evaluation Metric
Building upon the semantic-level optimization, the overall goal of the task in the AR application is to recover the avatar for a better client viewing experience. To achieve this, we use point clouds to evaluate the entire virtual scenery, which
includes Point-to-Point (P2Point), Peak Signal-to-Noise Ratio for the
luminance component ($\text{PSNR}_{\text{y}}$), and transmission latency:
P2Point [41]: To evaluate the viewing experience of clients in AR
applications, the P2Point metric is employed to assess the AR scenery from a
$360^{\circ}$ viewing angle, comparing the geometry difference between the point cloud data at the transmitter, $\mathbf{P}_{\text{t}}$, and the point cloud data at the receiver, $\mathbf{P}_{\text{r}}$. The P2Point error calculation can be
expressed as
$\text{P2Point}=\max\left({d}_{\text{rms}}^{\left(\mathbf{P}_{\text{t}},\mathbf{P}_{\text{r}}\right)},{d}_{\text{rms}}^{\left(\mathbf{P}_{\text{r}},\mathbf{P}_{\text{t}}\right)}\right),$
(25)
where ${d}_{\text{rms}}$ is the root mean square error between two point clouds.
$\textbf{PSNR}_{\textbf{y}}$ [42]: The color difference plays a crucial role
in the avatar displaying task of AR applications, as discrepancies in the transmitted colors can significantly impact the user viewing experience. The $\text{PSNR}_{\text{y}}$ is used to evaluate the luminance component of the AR scenery difference between the receiver and the transmitter.
The $\text{PSNR}_{\text{y}}$ is then calculated as
$\text{PSNR}_{\text{y}}=10\log_{10}\left(\frac{255^{2}}{{\frac{1}{N_{\text{t}}}\sum_{\overrightharp{v}_{i}\in\mathbf{P}_{\text{t}}}\left[{y}_{\overrightharp{v}_{i}}-{y}_{\overrightharp{v}^{\mathbf{P}_{\text{r}}}_{\text{near}}}\right]^{2}}}\right),$
(26)
where $\overrightharp{v}_{\text{near}}^{\mathbf{P}_{\text{r}}}$ is the
nearest point to $\overrightharp{v}_{i}$ in the point cloud
$\mathbf{P}_{\text{r}}$, $N_{\text{t}}$ is the total number of points in
$\mathbf{P}_{\text{t}}$, and ${y}_{\overrightharp{v}_{i}}$ is the luminance
component of point $\overrightharp{v}_{i}$.
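A corresponding sketch of Eq. (26), assuming per-point luminance values on the usual 0-255 scale; again the names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def psnr_y(p_t, y_t, p_r, y_r):
    """PSNR of the luminance component between clouds P_t and P_r (Eq. 26)."""
    _, nearest = cKDTree(p_r).query(p_t)   # nearest point in P_r for each v_i
    mse = np.mean((y_t - y_r[nearest]) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```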
Transmission Latency: Transmission latency is a critical metric in AR
applications and plays a crucial role in evaluating client QoE. The
transmission latency of the AR application can be divided into several
components: the semantic information extraction time $T_{\text{s}}$, the
wireless communication time $T_{\text{w}}$, and the avatar pose recovery and
rendering time $T_{\text{r}}$. The sum of these times gives the transmission
delay of the AR application, which can be expressed as
$\text{Transmission Latency}=T_{\text{s}}+T_{\text{w}}+T_{\text{r}},$ (27)
By analyzing and optimizing each component of the transmission latency, we can
demonstrate the efficiency of the proposed framework.
## V Simulation Results
In this section, we evaluate the performance of our proposed TSAR framework
and compare it with the traditional point cloud communication framework as
well as the enhanced frameworks E-TSAR and EC-TSAR, as described in
Sections III and IV. To assess the performance of semantic information
extraction, we utilize several different avatar dance types as specified in
Table I. We also configure the hyperparameters for the SANet and wireless
communication as listed in Table II. This includes the learning rate, batch
size, channel fading type, base knowledge information, and so on. The
experimental platform for this study employs Python 3.9.0 on the Ubuntu 18.04
system with an RTX 3070, PyTorch 2.1, and the Unity platform. The SANet
initially undergoes a learning phase where it is trained until it converges to
an optimal state. Once the training phase is complete, the trained neural
network is implemented across TSAR, E-TSAR, and EC-TSAR. The following
sections present the results of our proposed frameworks. Section V-A offers
insights into the avatar movement distribution and Section V-B first provides
the experiment results on the semantic information extraction accuracy
achieved by the SANet. Following that, we present experimental results
examining various metrics to evaluate the XR application and avatar
transmission. These metrics include the Mean Per Joint Position Error (MPJPE),
the adjacent frame MPJPE, transmission latency, Point-to-Point (P2Point)
error, and Peak Signal-to-Noise Ratio ($\text{PSNR}_{y}$).
### V-A Avatar Skeleton Distribution
TABLE II: Experiment Setup

| _Dance type_ | _Duration_ |
| --- | --- |
| Upper body dance | 2 min 10 s |
| Slight shaking | 50 s |
| Full body dance | 2 min 5 s |

| _Simulation_ | _Value_ |
| --- | --- |
| Data type | Point cloud |
| FPS | 60 |
| Avatar skeleton number | 25 |
| Stationary model skeleton number | 15 |
| Point cloud number | 2,048 |
| Attribute information 1 | Point number |
| Attribute information 2 | Position |
| Attribute information 3 | Rotation (optional) |
| Attribute information 4 | Color (optional) |
| Channel response | Rayleigh fading |
| Modulation | BPSK |

| _Base Information_ | _Symbols_ |
| --- | --- |
| Avatar skeleton graph | $\mathcal{G}$ |
| Avatar initial position | $l_{o}$ |
| Avatar model | $A_{a}$ |
| Stationary background model | $A_{s}$ |
| Stationary initial position | $l_{s}$ |
| Appearance meshes | $M_{a}$, $M_{s}$ |
To obtain a comprehensive understanding of avatar movement in the AR
environment, experiments with several avatar dance types were conducted on the
Unity3D and Mixamo platforms. Mixamo is a robust 3D character creation and
animation tool offering a wide array of diverse and dynamic 3D character
animations suitable for a broad spectrum of movement analysis. Three distinct
dance types from Mixamo were selected for our experiments: an upper-body
dance, a slight shaking dance, and a full-body dance. These dances cover a
wide range of avatar movements, from localized to full-body motions, and each
dance has a specific duration, as detailed in Table II. The transmitter used
for these experiments operates at 60 Frames Per Second (FPS), ensuring a
smooth and continuous display of the avatar's movements at the transmitter.
The moving avatar, with 25 skeletons, is placed on a stationary background
stage model.
Fig. 5 (a) plots the data analysis of the experiments, carried out on the
skeleton difference between adjacent frames across the X, Y, and Z axes under
different SNR scenarios. Green points correspond to adjacent-frame skeleton
position differences under an optimal wireless channel, revealing that the
shifts in position from one frame to the next are typically minimal. The
adjacent difference ranges for the three axes are (0, 0.46), (0, 0.48), and
(0, 0.48) meters, respectively, suggesting that the maximum movement of the
avatar's skeleton usually remains below 0.5 meters per frame in the Unity3D
platform. Furthermore, as the SNR decreases, the adjacent skeleton difference
grows, indicating that the received data can be distorted under highly noisy
conditions and the Rayleigh fading channel. This can result in significant
positional differences between adjacent frames, potentially surpassing the
realistic movement capabilities of the avatar and causing disjointed motion in
the virtual environment.
(a) Adjacent MPJPE of TSAR.
(b) Adjacent MPJPE of E-TSAR.
(c) Adjacent MPJPE of EC-TSAR.
Figure 6: Adjacent MPJPE difference among TSAR, E-TSAR, and EC-TSAR
### V-B Performance Evaluation
#### V-B1 Semantic information Extraction Performance
Figure 5 (b) plots the semantic extraction precision of the SANet, anchored on
a variety of backbone networks over equivalent training epochs. Each network
exhibits commendable proficiency, corroborating the viability of employing
such a deep learning mechanism to extract semantic information from point
cloud data. The degree of accuracy serves as a benchmark for the effectiveness
of semantic extraction, and the accuracies rank as follows: SpiderCNN >
PointConv > RsNet > RsCNN > DGCNN. This ranking underscores the pronounced
superiority of the SpiderCNN-based SANet, which achieves an accuracy
surpassing 96% within the same number of epochs. As outlined in Table II,
SpiderCNN has a unique structural design that performs better in point cloud
feature extraction. This advantage becomes particularly apparent in handling
complex, high-dimensional data such as avatars and 3D model structures, and it
also illuminates the less efficient processing and learning capacities of the
other backbone networks: they likely struggle to adequately extract and learn
from the point cloud structure, which in turn impacts semantic information
extraction accuracy. These findings highlight the importance not just of the
SANet, but also of the backbone choice when performing semantic information
extraction over point cloud data.
#### V-B2 Avatar Transmission Performance
Fig. 6 (a) plots the MPJPE of adjacent frames, alongside the MPJPE error
between the receiver and transmitter, under different wireless channel
conditions for the proposed TSAR. With diminishing SNR, a visible degradation
in AR display fluency appears, with discontinuous avatar movement across
adjacent frames marked by an increase in both the adjacent MPJPE and the
MPJPE. This result reemphasizes the insights drawn from Fig. 5 (a): a
lower-SNR channel introduces noise and blur into the received packets, thereby
increasing the MPJPE. Furthermore, as the SNR decreases below 5 dB, the
adjacent-frame MPJPE amplifies and transcends the general avatar movement
range under optimal wireless channels established in Section V-A. Concerning
the adjacent MPJPE, this alludes to precipitous movements of the avatar's
constituent parts, potentially inducing stutters when substantial positional
discrepancies arise between successive frames. Simultaneously, if the MPJPE
escalates excessively, it can engender distortions in the avatar, with
skeletal elements manifesting in aberrant positions, such as a foot emerging
at the head. Both the discontinuity and the distortion of the avatar in the AR
application can damage the viewing experience on the client side [43].
Fig. 6 (b) plots the MPJPE of adjacent frames, alongside the MPJPE error
between the receiver and transmitter, under different wireless channel
conditions for the proposed E-TSAR. In contrast to the outcomes of the
proposed TSAR shown in Fig. 6 (a), E-TSAR profoundly decreases the MPJPE
between the transmitter and the receiver as the SNR increases, achieving a 40%
decrease in MPJPE in the 0.5 dB SNR scenario. Such observations denote a
smoother and more fluent avatar movement for E-TSAR compared to TSAR, giving
E-TSAR a reduced likelihood of confronting disconcerting avatar distortions.
Additionally, unlike the basic TSAR results, where the MPJPE continues to
increase as the SNR decreases, the E-TSAR MPJPE does not increase after the
SNR drops below 5 dB. This indicates that using the avatar model as base
knowledge in semantic communication helps the avatar maintain its undistorted
appearance in poor wireless channel scenarios. This improvement in avatar
representation can lead to an enhanced user experience and a higher QoE for
clients, underscoring the effectiveness of employing the avatar model as
shared base knowledge in wireless AR implementations.
Figure 7: Mean Per Joint Position Error.
(a) Point to point.
(b) Peak signal-to-noise ratio in the luminance (Y).
Figure 8: Point to point and peak signal-to-noise ratio in the luminance (Y).
Figure 9: Transmission Latency.
Fig. 6 (c) plots the MPJPE of adjacent frames, alongside the MPJPE error
between the receiver and transmitter, under different wireless channel
conditions for the proposed EC-TSAR. With results generally similar to
E-TSAR's shown in Fig. 6 (b), EC-TSAR achieves a significant decrease when the
SNR increases above 5 dB, generating a more fluent video with a lower
adjacent-frame MPJPE. This illustrates that, with the assistance of the AbSR
algorithm and adaptive channel mapping, the more important semantic
information is effectively transmitted through wireless communication,
ultimately aiding avatar recovery on the client side. This highlights the
effectiveness of the AbSR algorithm and adaptive channel mapping in improving
the efficacy of avatar transmission, especially in higher-SNR scenarios.
Besides, similar to E-TSAR, the MPJPE does not continue to increase as the SNR
decreases below 5 dB, which reemphasizes the advantages of employing the
avatar model as shared base knowledge.
Fig. 7 plots the MPJPE performance results, which reveal the differences in
the avatar skeleton's position between the receiver and transmitter. A lower
MPJPE indicates a better avatar pose recovery ability in wireless
communication, and the overall MPJPE results are ranked as TSAR $\textless$
EC-TSAR $\textless$ E-TSAR $\textless$ Point Cloud. Specifically, the TSAR
framework achieves the lowest MPJPE when the SNR increases above 3 dB,
achieving about an 83% decrease compared to the point cloud framework in the
13 dB scenario. In contrast, the EC-TSAR framework achieves a lower MPJPE than
the TSAR framework when the SNR decreases below 3 dB. Besides, the point cloud
framework struggles to generate key points within the 3D scenery once the SNR
decreases below 8 dB. This observation indicates that in the point cloud
communication framework, the avatars are displayed with distorted proportions,
such as an arm longer than the avatar's entire body, which can cause the SANet
to fail to distinguish the skeleton key points accurately. Meanwhile, in
EC-TSAR, the avatar model in the shared base knowledge prevents movements
exceeding the avatar's capabilities, resulting in a better, undistorted AR
avatar displayed on the client side compared with the other frameworks as the
SNR decreases below 3 dB.
Fig. 8 (a) plots the P2Point error, revealing the geometry differences of the
AR scene between the transmitter and receiver. A lower P2Point value indicates
a better viewing experience of the geometry aspect on the client side, and the
overall P2Point values are ranked as EC-TSAR $\textless$ E-TSAR $\textless$
TSAR $\textless$ Point Cloud. As the SNR decreases, the P2Point of all
frameworks increases, indicating that all frameworks are affected by worse
wireless channel conditions. Besides, the EC-TSAR and E-TSAR frameworks both
exhibit a flat P2Point increase as the SNR decreases below 4 dB compared with
TSAR and Point Cloud, indicating that the avatar model transmitted in the base
knowledge prevents display distortion; the avatar only assumes some odd poses
in these two frameworks, while the avatar display in the point cloud framework
and TSAR already shows distortion.
Fig. 8 (b) plots the $\text{PSNR}_{\text{y}}$ results, which reveal the color
differences of the displayed AR scenery between the transmitter and receiver.
A higher $\text{PSNR}_{\text{y}}$ value represents a better viewing experience
on the client side, and the $\text{PSNR}_{\text{y}}$ results are ranked as EC-
TSAR $\textgreater$ E-TSAR $\textgreater$ TSAR $\textgreater$ Point Cloud. All
frameworks show an increase as the SNR increases, indicating that the viewing
experience is affected by the wireless channel conditions. Besides, TSAR,
E-TSAR, and EC-TSAR all achieve a significant increase once the SNR rises
above 7 dB, while the point cloud communication framework shows a relatively
flat increase. This indicates that the avatar model in the shared base
knowledge allows the avatar to be transmitted as a whole model, which helps
transmit the exact color of the avatar model more effectively over wireless
communication, whereas the color values in the traditional point cloud
framework depend entirely on the channel conditions and exhibit distortions
through wireless communication.
Fig. 9 plots the transmission latency of all frameworks as defined in Eq.
(27). A lower latency contributes to a better QoE on the client side; the
latencies are ranked as E-TSAR $\textless$ EC-TSAR $\textless$ TSAR
$\textless$ Point Cloud. Compared to the traditional point cloud communication
framework, TSAR, E-TSAR, and EC-TSAR save a substantial amount of transmission
time due to the significantly fewer packets transmitted. Although these
frameworks introduce an additional semantic information extraction step with
the DL-based semantic information extractor, it only takes about one second
per 100 frames, constituting only a tiny portion of the total transmission
time. Concerning pose recovery and rendering, which are inherently linked to
the data packets, the point cloud framework requires rendering all the
upsampled point cloud data based on 2,048 points. Conversely, TSAR, E-TSAR,
and EC-TSAR merely require 25 skeletal points to update the pose of an already
rendered avatar, thereby significantly reducing time consumption on the client
side. Moreover, although both E-TSAR and EC-TSAR necessitate calculating the
skeletal positions according to Eq. (21) and Eq. (22) before avatar pose
recovery, whereas TSAR can directly update the avatar pose, the limited
calculation time of 25 cycles renders the time consumption of this pose
recovery and rendering process relatively uniform among TSAR, E-TSAR, and
EC-TSAR. This substantial reduction in data transmission volume concurrently
minimizes the bandwidth spent on wireless communication compared with the
traditional point cloud framework.
## VI Conclusion
This paper has presented a novel task-oriented and semantics-aware
communication framework designed to enhance the effectiveness and efficiency
of avatar-based communication in wireless AR applications. By introducing new
semantic information in AR and representing relationships between different
types of semantic information using a graph, our proposed task-oriented and
semantics-aware communication framework extracted and transmitted only
essential semantic information in wireless AR communication, substantially
reducing communication bandwidth requirements. This selective transmission of
important semantic information provided a more effective approach to semantic
information extraction compared to traditional communication frameworks,
ensuring minimal errors and lower bandwidth usage. Furthermore, we have
extracted effectiveness-level features from the complete avatar skeleton graph
using shared base knowledge in end-to-end wireless communication,
distinguishing our framework from, and enhancing, general semantic
communication frameworks. This pioneering work opens avenues for further
advancements in wireless AR communication frameworks. Our future work will
focus on integrating additional semantic features and channel optimization
methods, such as model recognition, interaction, and channel encoding, to
further improve effectiveness and efficiency in avatar-centric wireless AR
applications.
## References
* [1] H. Ning, H. Wang, Y. Lin, W. Wang, S. Dhelim, F. Farha, J. Ding, and M. Daneshmand, “A survey on metaverse: the state-of-the-art, technologies, applications, and challenges,” _arXiv preprint arXiv:2111.09673_ , Nov. 2021.
* [2] Y. Wang, Z. Su, N. Zhang, R. Xing, D. Liu, T. H. Luan, and X. Shen, “A survey on metaverse: Fundamentals, security, and privacy,” _IEEE Commun. Surveys Tuts._ , vol. 25, no. 1, pp. 319–352, Sept. 2022.
* [3] F. Hu, Y. Deng, W. Saad, M. Bennis, and A. Aghvami, “Cellular-connected wireless virtual reality: Requirements, challenges, and solutions,” _IEEE Commun. Mag._ , vol. 58, no. 5, pp. 105–111, 2020.
* [4] F. Hu, Y. Deng, H. Zhou, T. Jung, C.-B. Chae, and A. Aghvami, “A vision of an xr-aided teleoperation system toward 5G/B5G,” _IEEE Commun. Mag._ , vol. 59, no. 1, pp. 34–40, 2021.
* [5] S. Van Damme, M. T. Vega, and F. De Turck, “Human-centric quality management of immersive multimedia applications,” in _Proc. IEEE Conf. Netw. Softwarization (NetSoft)_ , June 2020, pp. 57–64.
* [6] Y. Liao, J. Xie, and A. Geiger, “Kitti-360: A novel dataset and benchmarks for urban scene understanding in 2D and 3D,” _IEEE Trans. Pattern Anal. Mach. Intell._ , vol. 45, no. 3, pp. 3292–3310, 2022.
* [7] M. Kountouris and N. Pappas, “Semantics-empowered communication for networked intelligent systems,” _IEEE Commun. Mag._ , vol. 59, no. 6, pp. 96–102, June 2021.
* [8] L. Yan, Z. Qin, R. Zhang, Y. Li, and G. Y. Li, “Resource allocation for text semantic communications,” _IEEE Wireless Commun. Lett._ , vol. 11, no. 7, pp. 1394–1398, Apr. 2022.
* [9] Z. Weng, Z. Qin, and G. Y. Li, “Semantic communications for speech signals,” in _Proc. IEEE Int. Conf. Commun. (ICC)_. IEEE, June 2021, pp. 1–6.
* [10] P. Jiang, C.-K. Wen, S. Jin, and G. Y. Li, “Wireless semantic communications for video conferencing,” _IEEE J. Sel. Areas Commun._ , vol. 41, no. 1, pp. 230–244, Nov. 2022.
* [11] A. Maatouk, M. Assaad, and A. Ephremides, “The age of incorrect information: An enabler of semantics-empowered communication,” _IEEE Trans. Commun._ , Oct. 2022.
* [12] H. Zhou, X. Liu, Y. Deng, N. Pappas, and A. Nallanathan, “Task-oriented and semantics-aware 6G networks,” _arXiv preprint arXiv:2210.09372_ , Oct. 2022.
* [13] W. Wu, Y. Yang, Y. Deng, and A. H. Aghvami, “Task-oriented semantics-aware communications for robotic waypoint transmission: the value and age of information approach,” _arXiv preprint arXiv:2312.13182_ , 2023.
* [14] H. Du, D. Niyato, C. Miao, J. Kang, and D. I. Kim, “Optimal targeted advertising strategy for secure wireless edge metaverse,” in _Proc. IEEE Global Commun. Conf. (GLOBECOM)_. IEEE, 2022, pp. 4346–4351.
* [15] C. B. Fernandez and P. Hui, “Life, the metaverse and everything: An overview of privacy, ethics, and governance in metaverse,” in _Proc. IEEE Int. Conf. Distrib. Comput. Syst. Workshops (ICDCSW)_. IEEE, July 2022, pp. 272–277.
* [16] L. S. Pauw, D. A. Sauter, G. A. van Kleef, G. M. Lucas, J. Gratch, and A. H. Fischer, “The avatar will see you now: Support from a virtual human provides socio-emotional benefits,” _Comput. Human Behav._ , vol. 136, p. 107368, May 2022.
* [17] J. S. Lemmens and I. A. Weergang, “Caught them all: Gaming disorder, motivations for playing and spending among core Pokémon GO players,” _Entertain. Comput._ , p. 100548, March 2023.
* [18] L. A. da Silva Cruz, E. Dumić, E. Alexiou, J. Prazeres, R. Duarte, M. Pereira, A. Pinheiro, and T. Ebrahimi, “Point cloud quality evaluation: Towards a definition for test conditions,” in _Proc. IEEE Int. Conf. Quality of Multimedia Experience (QoMEX)_. IEEE, June 2019, pp. 1–6.
* [19] Q. Yang, Y. Liu, S. Chen, Y. Xu, and J. Sun, “No-reference point cloud quality assessment via domain adaptation,” in _Proc. IEEE/CVF Conf. Comput. Vision Pattern Recognit._ , 2022, pp. 21 179–21 188.
* [20] D. Lazzarotto, M. Testolina, and T. Ebrahimi, “Influence of spatial rendering on the performance of point cloud objective quality metrics,” in _Proc. 10th European Workshop on Visual Information Processing (EUVIP)_. IEEE, 2022, pp. 1–6.
* [21] D. Pavllo, C. Feichtenhofer, D. Grangier, and M. Auli, “3D human pose estimation in video with temporal convolutions and semi-supervised training,” in _Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , June 2019, pp. 7753–7762.
* [22] K. Yu, G. Gorbachev, U. Eck, F. Pankratz, N. Navab, and D. Roth, “Avatars for teleconsultation: Effects of avatar embodiment techniques on user perception in 3D asymmetric telepresence,” _IEEE Trans. Vis. Comput. Graph._ , vol. 27, no. 11, pp. 4129–4139, 2021.
* [23] Y. Xu, Q. Yang, L. Yang, and J.-N. Hwang, “Epes: Point cloud quality modeling using elastic potential energy similarity,” _IEEE Trans. Broadcasting_ , vol. 68, no. 1, pp. 33–42, 2021.
* [24] J. Liu, N. Akhtar, and A. Mian, “Deep reconstruction of 3D human poses from video,” _IEEE Trans. Artif. Intell._ , pp. 1–1, March 2022.
* [25] S. Aseeri and V. Interrante, “The influence of avatar representation on interpersonal communication in virtual social environments,” _IEEE transactions on visualization and computer graphics_ , vol. 27, no. 5, pp. 2608–2617, 2021.
* [26] M. Haruna, M. Ogino, S. Tagashira, and S. Morita, “Augmented avatar toward both remote communication and manipulation tasks,” in _Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS)_. IEEE, 2023, pp. 7075–7081.
* [27] Y. Wu, Y. Wang, S. Jung, S. Hoermann, and R. W. Lindeman, “Towards an articulated avatar in vr: Improving body and hand tracking using only depth cameras,” _Entertain. Comput._ , vol. 31, p. 100303, 2019.
* [28] Y. You, Y. Lou, C. Li, Z. Cheng, L. Li, L. Ma, C. Lu, and W. Wang, “Keypointnet: A large-scale 3D keypoint dataset aggregated from numerous human annotations,” in _Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR)_ , June 2020, pp. 13 647–13 656.
* [29] Z.-L. Zhang, U. K. Dayalan, E. Ramadan, and T. J. Salo, “Towards a software-defined, fine-grained QoS framework for 5G and beyond networks,” in _Proc. ACM SIGCOMM Workshop Netw.-Appl. Integr. (NAI)_ , Aug. 2021, pp. 7–13.
* [30] Y. Huang, B. Bai, Y. Zhu, X. Qiao, X. Su, and P. Zhang, “Iscom: Interest-aware semantic communication scheme for point cloud video streaming,” _arXiv preprint arXiv:2210.06808_ , Oct. 2022.
* [31] F. Nardo, D. Peressoni, P. Testolina, M. Giordani, and A. Zanella, “Point cloud compression for efficient data broadcasting: A performance comparison,” in _Proc. IEEE Wireless Communications and Networking Conference (WCNC)_. IEEE, March 2022, pp. 2732–2737.
* [32] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, “Pointnet++: Deep hierarchical feature learning on point sets in a metric space,” _Adv. Neural Inf. Process. Syst._ , vol. 30, Dec. 2017.
* [33] A. Akhtar, Z. Li, G. Van der Auwera, L. Li, and J. Chen, “Pu-Dense: Sparse tensor-based point cloud geometry upsampling,” _IEEE Trans. Image Process._ , vol. 31, pp. 4133–4148, July 2022.
* [34] Y. Chen, V. T. Hu, E. Gavves, T. Mensink, P. Mettes, P. Yang, and C. G. Snoek, “Pointmixup: Augmentation for point clouds,” in _Proc. Eur. Conf. Comput. Vis. (ECCV)_. Springer, June 2020, pp. 330–345.
* [35] Z. B. K. Egilmez, L. Xiang, R. G. Maunder, and L. Hanzo, “Development, operation, and performance of 5G polar codes,” _IEEE Commun. Surv. Tutor._ , vol. 22, no. 1, pp. 96–122, 2019.
* [36] L. Quintero, P. Papapetrou, J. E. Muñoz, J. De Mooij, and M. Gaebler, “Excite-o-meter: an open-source unity plugin to analyze heart activity and movement trajectories in custom vr environments,” in _2022 IEEE Conf. Virtual Reality 3D User Interfaces Abstracts Workshops (VRW)_. IEEE, 2022, pp. 46–47.
* [37] S. S. Thoota and C. R. Murthy, “Massive MIMO-OFDM systems with low resolution ADCs: Cramér–Rao bound, sparse channel estimation, and soft symbol decoding,” _IEEE Trans. Signal Process._ , vol. 70, pp. 4835–4850, 2022.
* [38] S. Qiu, S. Anwar, and N. Barnes, “Dense-resolution network for point cloud classification and segmentation,” in _Proc. IEEE/CVF Winter Conf. on Applications of Computer Vision (WACV)_. IEEE, 2021, pp. 3813–3822.
* [39] M. A. Joshi and P. Patel, “Google page rank algorithm and it’s updates,” in _Proc. Int. Conf. Emerg. Trends Sci. Eng. Manage.(ICETSEM)_ , 2018.
* [40] A. K. Srivastava, R. Garg, and P. Mishra, “Discussion on damping factor value in pagerank computation,” _Int. J. Intell. Syst. Appl._ , vol. 9, no. 9, p. 19, Sept. 2017.
* [41] R. Mekuria, Z. Li, C. Tulvan, and P. Chou, “Evaluation criteria for PCC (point cloud compression),” 2016.
* [42] G. Meynet, Y. Nehmé, J. Digne, and G. Lavoué, “Pcqm: A full-reference quality metric for colored 3D point clouds,” in _2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX)_. IEEE, 2020, pp. 1–6.
* [43] M. Fiedler, T. Hossfeld, and P. Tran-Gia, “A generic quantitative relationship between quality of experience and quality of service,” _IEEE Netw._ , vol. 24, no. 2, pp. 36–41, March 2010.
# The Fast Exact Closure for Jeffery’s Equation with Diffusion
Stephen Montgomery-Smith, David Jack (corresponding author; email:
<EMAIL_ADDRESS>phone: 254-710-3347), and Douglas E. Smith
Department of Mathematics, University of Missouri, Columbia MO 65211, U.S.A.;
Department of Mechanical Engineering, Baylor University, Waco TX 76798,
U.S.A.; Department of Mechanical and Aerospace Engineering, University of
Missouri, Columbia MO 65211, U.S.A.
###### Abstract
Jeffery’s equation with diffusion is widely used to predict the motion of
concentrated fiber suspensions in flows with low Reynolds numbers.
Unfortunately, the evaluation of the fiber orientation distribution can
require excessive computation, which is often avoided by solving the related
second order moment tensor equation. This approach requires a ‘closure’ that
approximates the distribution function’s fourth order moment tensor from its
second order moment tensor. This paper presents the _Fast Exact Closure_ (FEC)
which uses conversion tensors to obtain a pair of related ordinary
differential equations; avoiding approximations of the higher order moment
tensors altogether. The FEC is exact in that when there are no fiber
interactions, it exactly solves Jeffery’s equation. Numerical examples for
dense fiber suspensions are provided with both a Folgar-Tucker (1984)
diffusion term and the recent anisotropic rotary diffusion term proposed by
Phelps and Tucker (2009). Computations demonstrate that the FEC exhibits
improved accuracy with computational speeds equivalent to or better than
existing closure approximations.
###### keywords:
B. Directional orientation, B. Rheological properties, D. Injection molding,
Jeffery’s equation with rotary diffusion
## 1 Introduction
Industrial demand has continued to increase for high-strength, low-weight,
rapidly produced parts such as those made of short discontinuous fiber
composites via injection molding processes. For effective design, it is
essential to understand the dependence of the final part performance of short-
fiber injection molded composites on variations in the microstructure due to
processing (see e.g. [1, 2]). The Folgar and Tucker model of
isotropic diffusion [3] for fiber interactions within a suspension has been
used for several decades to compute fiber orientation and has been implemented
to some extent within most related industrial and research computer
simulations. Unfortunately, direct computations of the isotropic diffusion
model are computationally prohibitive, and most implementations employ the
orientation tensor approach of Advani and Tucker [4] where the moments of the
fiber orientation are solved, thus indirectly quantifying the fiber
orientation distribution. The orientation tensor approach requires knowledge
of the next higher-order moment tensor, thus requiring some form of a closure.
The hybrid closure of Advani and Tucker [4] has been used extensively due to
its computational efficiencies, but in implementation it will overpredict the
alignment state in simple shear flow [5]. Cintra and Tucker [6] introduced the
class of the orthotropic closures, which result in significant accuracy
improvements when compared to the hybrid closure, but at an increase in
computational costs.
With recent advances in part repeatability, the limitation of the isotropic
diffusion model has become apparent [7]. Recent anisotropic diffusion models
[8, 9, 10, 11] propose new forms with greater accuracies for modeling fiber
collisions, but these anisotropic diffusion models pose a new set of
computational complications. Of particular concern is that nearly all of the
fitted orthotropic closures are obtained by fitting orientation information
based on direct numerical solutions of the Folgar-Tucker diffusion model. The
exceptions are the orthotropic closures of Wetzel [12] and VerWeyst
[13] which were both constructed on distributions formed through the elliptic
integral form for orientations encompassing the eigenspace [6].
The Exact Closure of Montgomery-Smith et al. [14] presents an alternative to
the classical closure form, and provides an exact solution for pure Jeffery’s
motion (i.e., the dilute regime). The Exact Closure avoids the curve fitting
process required to define fitted closures, by solving a set of related ODEs
of the fiber orientation. In the present paper, we extend the Exact Closure
form to systems of concentrated suspensions that are more relevant to modeling
the processing of short-fiber composites. Furthermore, we introduce the new
_Fast Exact Closure_ (FEC) that defines conversion tensors that lead to a
coupled system of ordinary differential equations that avoid costly closure
computations. The FEC form is derived for fiber collision models for both the
isotropic diffusion model of Folgar and Tucker and the recent anisotropic
diffusion model of Phelps and Tucker [9]. Results presented will demonstrate
the effectiveness of this alternative approach for modeling fiber orientation,
both for accuracy and for computational speed.
## 2 Fiber Motion Basics
Jeffery’s equation [15] has been used to predict the motion of the direction
of axi-symmetric fibers under the influence of a low Reynolds number flow of
a Newtonian fluid, whose velocity field is
${\mathbf{u}}={\mathbf{u}}({\mathbf{x}},t)$. The directions of the fibers are
represented by the fiber orientation distribution
$\psi=\psi({\mathbf{x}},{\mathbf{p}},t)$, where ${\mathbf{p}}$ is an element
of the orientation space, that is, the 2-dimensional sphere
$S=\\{{\mathbf{p}}=(p_{1},p_{2},p_{3}):p_{1}^{2}+p_{2}^{2}+p_{3}^{2}=1\\}$.
Thus given a subset $E$ of $S$, the proportion of fibers whose direction is in
$E$ is given by $\int_{E}\psi({\mathbf{x}},{\mathbf{p}},t)\,d{\mathbf{p}}$,
where $d{\mathbf{p}}$ represents the usual integration over $S$. In
particular, an isotropic distribution is represented by $\psi=1/4\pi$. The
Jeffery’s equation for the fiber orientation distribution is
$\frac{D\psi}{Dt}=-\tfrac{1}{2}{\boldsymbol{\nabla}}_{\mathbf{p}}\cdot((\Omega\cdot{\mathbf{p}}+\lambda\Gamma\cdot{\mathbf{p}}-\lambda\Gamma:{\mathbf{p}}{\mathbf{p}}{\mathbf{p}})\psi)$
(1)
Here $\Omega$ is the vorticity, that is, the anti-symmetric part
${\boldsymbol{\nabla}}{\mathbf{u}}-({\boldsymbol{\nabla}}{\mathbf{u}})^{T}$ of
the Jacobian of the velocity field
${\boldsymbol{\nabla}}{\mathbf{u}}=(\partial u_{i}/\partial x_{j})_{1\leq
i,j\leq 3}$, and $\Gamma$ is the rate of strain tensor, that is, the symmetric
part
${\boldsymbol{\nabla}}{\mathbf{u}}+({\boldsymbol{\nabla}}{\mathbf{u}})^{T}$ of
the Jacobian of the velocity field. Also, $D/Dt=\partial/\partial
t+{\mathbf{u}}\cdot{\boldsymbol{\nabla}}$ represents the material derivative,
and
${\boldsymbol{\nabla}}_{\mathbf{p}}=(I-{\mathbf{p}}{\mathbf{p}})\cdot\left(\tfrac{\partial}{\partial
p_{1}},\tfrac{\partial}{\partial p_{2}},\tfrac{\partial}{\partial
p_{3}}\right)$ is the gradient operator restricted to the sphere.
Equation (1) is modified to incorporate the rotary diffusion expressed by Bird
et al. [16], occasionally referred to as the generalized Fokker-Planck or the
Smoluchowski equation [17], as
$\frac{D\psi}{Dt}=-\tfrac{1}{2}{\boldsymbol{\nabla}}_{\mathbf{p}}\cdot((\Omega\cdot{\mathbf{p}}+\lambda\Gamma\cdot{\mathbf{p}}-\lambda\Gamma:{\mathbf{p}}{\mathbf{p}}{\mathbf{p}})\psi)+\Delta_{\mathbf{p}}(D_{r}\psi),$
(2)
where $D_{r}$ captures the effect of fiber interaction and depends upon the
flow kinetics. Here
$\Delta_{\mathbf{p}}={\boldsymbol{\nabla}}_{\mathbf{p}}\cdot{\boldsymbol{\nabla}}_{\mathbf{p}}$
represents the Beltrami-Laplace operator on the sphere. Folgar and Tucker [3]
selected $D_{r}=C_{I}\dot{\gamma}$ where
$\dot{\gamma}=\left(\frac{1}{2}\Gamma:\Gamma\right)^{1/2}$ and $C_{I}$ is a
constant that depends upon the volume fraction and aspect ratio of the fibers.
Other authors have considered a wider class of diffusion terms. For example,
Koch [10], and Phelps and Tucker [9] considered anisotropic diffusion
$\frac{D\psi}{Dt}=-\tfrac{1}{2}{\boldsymbol{\nabla}}_{\mathbf{p}}\cdot((\Omega\cdot{\mathbf{p}}+\lambda\Gamma\cdot{\mathbf{p}}-\lambda\Gamma:{\mathbf{p}}{\mathbf{p}}{\mathbf{p}})\psi)+{\boldsymbol{\nabla}}_{\mathbf{p}}\cdot(I-{\mathbf{p}}{\mathbf{p}})\cdot
D_{r}\cdot{\boldsymbol{\nabla}}_{\mathbf{p}}\psi$ (3)
where $D_{r}$ is the anisotropic diffusion matrix, calculated as a function of
$\psi$ and ${\boldsymbol{\nabla}}u$ (see, e.g., [9, 10]).
Since these are, in effect, partial differential equations in five
dimensions (3 for space and 2 for the orientation defined on the unit sphere),
numerically calculating solutions can be rather daunting, with solutions
taking days to weeks for simple flows. Hence Hinch and Leal [18] suggested
recasting
the equation in terms of moment tensors. For example, the second and fourth
moment tensors are defined by
$\displaystyle
A=\int_{S}{\mathbf{p}}{\mathbf{p}}\psi\,d{\mathbf{p}},\qquad\qquad\mathbb{A}=\int_{S}{\mathbf{p}}{\mathbf{p}}{\mathbf{p}}{\mathbf{p}}\psi\,d{\mathbf{p}}$
(4)
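As a quick numerical illustration of these definitions, the moment tensors can be estimated by Monte Carlo from directions sampled from $\psi$; the snippet below (our own sketch, not from the paper) checks that an isotropic sample gives $A\approx I/3$ with $\mathop{\text{tr}}A=1$.

```python
import numpy as np

def moment_tensors(p):
    """p is an (N, 3) array of unit vectors sampled from psi."""
    A = np.einsum('ni,nj->ij', p, p) / len(p)                  # second moment
    A4 = np.einsum('ni,nj,nk,nl->ijkl', p, p, p, p) / len(p)   # fourth moment
    return A, A4

# Sanity check: an isotropic sample should give A close to I/3.
rng = np.random.default_rng(0)
p = rng.normal(size=(100_000, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
A, A4 = moment_tensors(p)
assert np.allclose(A, np.eye(3) / 3, atol=1e-2)
assert abs(np.trace(A) - 1.0) < 1e-9   # trace is exactly one by construction
```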
Then Jeffery’s equation (1) for the second order moment tensor can be
expressed as
$\frac{DA}{Dt}=\tfrac{1}{2}(\Omega\cdot A-A\cdot\Omega+\lambda(\Gamma\cdot
A+A\cdot\Gamma)-2\lambda\mathbb{A}:\Gamma)$ (5)
and the equations (2) and (3) with diffusion terms become
$\frac{DA}{Dt}=\tfrac{1}{2}(\Omega\cdot A-A\cdot\Omega+\lambda(\Gamma\cdot
A+A\cdot\Gamma)-2\lambda\mathbb{A}:\Gamma)+\mathcal{D}[A]$ (6)
where $\mathcal{D}[A]$ for isotropic diffusion as expressed in equation (2)
becomes
$\mathcal{D}[A]=D_{r}(2I-6A)$ (7)
and subsequently the anisotropic diffusion of equation (3) (see [9]) is
$\mathcal{D}[A]=2D_{r}-2(\mathop{\text{tr}}D_{r})A-5(A\cdot D_{r}+D_{r}\cdot
A)+10\mathbb{A}:D_{r}$ (8)
The difficulty with equations (5) and (6) is that they explicitly include the
fourth order moment tensor, and implicitly the higher order diffusion models
of equation (8) include moments higher than the second-moment. To circumvent
this problem, various authors (for example, [18, 19, 20, 21, 6, 1, 22, 23,
24]) have proposed _closures_ , that is, formulae to calculate the fourth
order moment tensor $\mathbb{A}$ from the second order moment tensor $A$. The
mapping from $A$ to ${\mathbb{A}}$ is not unique, thus closures are only able
to approximately obtain a higher order moment from the lower order moments.
Most closures are constructed by obtaining best-fit coefficients for a
polynomial, fitting numerical data obtained by directly solving equation (2)
with a finite element method (for example, Bay [25]).
## 3 The Fast Exact Closure
Verleye and Dupret [21] (see also [12, 13, 26, 27, 28]) noted that there is an
exact closure for Jeffery’s equation when the diffusion terms are not present,
_in the particular case that the fiber orientation distribution is at some
time isotropic_. This exact closure is stated explicitly in [14] for the
scenario when the suspension is dilute. For the sake of labeling, the present
closure retains the reference _Exact Closure_ , as it is exact for Jeffery’s
equation without diffusion terms.
The Exact Closure may be computed directly by solving equation (6), where
$\mathbb{A}$ is computed from $A$ using the elliptic integral forms of
equations (38) and (39) as derived in [14]. This approach only gives the exact
answer to equations (2) and (3) when $D_{r}=0$ and when the orientation is
isotropic at some time. Nevertheless it is reasonable to suppose that the
exact closure should give a reasonable approximation in general, even when
$D_{r}\neq 0$ as in Verweyst et al. [1, 13]. Their ORT closure is a polynomial
approximation to the Exact Closure, and as we demonstrate below, gives answers
that are virtually indistinguishable from that of the Exact Closure.
The _Fast Exact Closure_ (FEC) performs the Exact Closure in a computationally
efficient manner. A version of FEC is described in [14], but only when the
diffusion terms are absent. In this section we describe the FEC from an
implementation perspective, and leave the full derivation to the appendix.
The idea behind the FEC is the computation of two rank 4 tensors $\mathbb{C}$
and $\mathbb{D}$, defined in equations (40) and (43), respectively, which we
define as _conversion tensors_. These tensors convert between $DA/Dt$ and
$DB/Dt$ according to the formulae
$\frac{DA}{Dt}=-\mathbb{C}:\frac{DB}{Dt},\qquad\qquad\frac{DB}{Dt}=-\mathbb{D}:\frac{DA}{Dt}$
(9)
as derived in equations (51) - (53). The orientation tensor $A$ retains the
classical meaning described in [4], while the tensor $B$ turns out to be
extremely useful for computations. $B$ is a more abstract quantity describing
the degree of orientation, much like the orientation tensor: when
$B_{ij}=\delta_{ij}$ the orientation is isotropic, whereas when one of the
diagonal terms of $B$ goes to $0$, the orientation is perfectly aligned along
the corresponding coordinate axis. Montgomery-Smith et al. [14] provide a
further discussion of the meaning of the orientation parameter $B$.
What makes everything work is the formula, proven in the appendix by equation
(54), that for any matrix $M$, we have
$\mathbb{C}:(B\cdot M+M^{T}\cdot B)=(\mathop{\text{tr}}M)A+M\cdot A+A\cdot
M^{T}-2\mathbb{A}:M$ (10)
where $\mathbb{A}$ and $A$ satisfy equations (38) and (39).
The FEC present in this paper will be of the form:
$\frac{DA}{Dt}=-\mathbb{C}:F(B)+G(A),\qquad\qquad\frac{DB}{Dt}=F(B)-\mathbb{D}:G(A)$
(11)
where $F(B)$ and $G(A)$ will be given explicitly below. This is a general form
that can be applied to the known diffusion models that fit the form of
equation (2) or (3). The conversion tensors $\mathbb{C}$ and $\mathbb{D}$ are
defined later in this section, and in the appendix we provide a more
mathematical formula for them along with a proof of the above properties. It
is important to note that $\mathbb{C}$ and $\mathbb{D}$ may be computed
directly from $A$ and $B$ rather quickly, involving nothing more than the
diagonalization and inversion of three-by-three symmetric matrices, simple
arithmetic, and, where appropriate, inverse trigonometric or inverse
hyperbolic functions.
The FEC solves the coupled ODEs of (11) simultaneously. If the initial fiber
orientation is isotropic, then $A=\tfrac{1}{3}I$ and $B=I$ at $t=0$. When the
initial fiber orientation is not isotropic, then one can compute the initial
condition for $B$ from $A$ by inverting equation (38), as described in [14].
It can be shown that the matrices $A$ and $B$ remain positive definite,
simultaneously diagonalizable, and satisfy the equations
$\mathop{\text{tr}}A=\det B=1$ for all time.
For example, the FEC for the Jeffery’s equation with isotropic diffusion given
in equation (2) is given by:
$\displaystyle\frac{DA}{Dt}=\tfrac{1}{2}\mathbb{C}:[B\cdot(\Omega+\lambda\Gamma)+(-\Omega+\lambda\Gamma)\cdot
B]+D_{r}(2I-6A)$ (12)
$\displaystyle\frac{DB}{Dt}=-\tfrac{1}{2}(B\cdot(\Omega+\lambda\Gamma)+(-\Omega+\lambda\Gamma)\cdot
B)-D_{r}\mathbb{D}:(2I-6A)$ (13)
and the FEC for Jeffery’s equation with anisotropic diffusion as shown in
equation (3) is given by
$\displaystyle\frac{DA}{Dt}=\tfrac{1}{2}\mathbb{C}:[B\cdot(\Omega+\lambda\Gamma)+(-\Omega+\lambda\Gamma)\cdot
B]+2D_{r}+3(\mathop{\text{tr}}D_{r})A-5\mathbb{C}:(B\cdot D_{r}+D_{r}\cdot B)$
(14)
$\displaystyle\frac{DB}{Dt}=-\tfrac{1}{2}(B\cdot(\Omega+\lambda\Gamma)+(-\Omega+\lambda\Gamma)\cdot
B)-\mathbb{D}:(2D_{r}+3(\mathop{\text{tr}}D_{r})A)+5(B\cdot D_{r}+D_{r}\cdot
B)$ (15)
Using equation (10) it can be seen that equation (12) comes directly from
equations (6) and (7), and equation (13) comes from applying equation (43) to
equation (12). Similarly for the anisotropic diffusion model, this can be
observed for equations (14) and (15).
Notice, for equations (12) and (13) and for equations (14) and (15), that the
fourth-order orientation tensor ${\mathbb{A}}$ does not appear. The equation
of motion for the orientation is now reduced to developing the relationship
between $A$ and $B$ with that of ${\mathbb{C}}$ and ${\mathbb{D}}$. The
conversion tensors ${\mathbb{C}}$ and ${\mathbb{D}}$ are both computed with
respect to the basis of orthonormal eigenvectors of $B$. With respect to this
basis, the matrix $B$ is diagonal with entries $b_{1}$, $b_{2}$ and $b_{3}$,
and $A$ is diagonal with entries $a_{1}$, $a_{2}$ and $a_{3}$ where we
constrain $b_{1}\leq b_{2}\leq b_{3}$ which implies that $a_{1}\geq a_{2}\geq
a_{3}$.
If the eigenvalues $b_{1}$, $b_{2}$ and $b_{3}$ are not close to each other,
then $\mathbb{C}$ is the symmetric tensor calculated using the formulae from
equations (48) and (49) from the appendix
$\begin{array}[]{lll}\mathbb{C}_{1122}=\frac{a_{1}-a_{2}}{2(b_{2}-b_{1})}&&\mathbb{C}_{1111}=\tfrac{1}{2}b_{1}^{-1}-\mathbb{C}_{1122}-\mathbb{C}_{1133}\\\
\mathbb{C}_{1133}=\frac{a_{1}-a_{3}}{2(b_{3}-b_{1})}&&\mathbb{C}_{2222}=\tfrac{1}{2}b_{2}^{-1}-\mathbb{C}_{1122}-\mathbb{C}_{2233}\\\
\mathbb{C}_{2233}=\frac{a_{2}-a_{3}}{2(b_{3}-b_{2})}&&\mathbb{C}_{3333}=\tfrac{1}{2}b_{3}^{-1}-\mathbb{C}_{1133}-\mathbb{C}_{2233}\\\
\mathbb{C}_{ijkk}=0\text{ if $i\neq j\neq k$}\end{array}$ (16)
If two or more of the eigenvalues are close to each other, then these
equations can give rise to large numerical errors, or even ‘divide by zero’
exceptions. So in this situation, we use different formulae to compute
$\mathbb{C}$.
Suppose two of the eigenvalues are close to each other, for example,
$b_{1}=b_{0}+\epsilon$ and $b_{2}=b_{0}-\epsilon$, where $\epsilon$ is small.
Thus $b_{0}=\tfrac{1}{2}(b_{1}+b_{2})$ and
$\epsilon=\tfrac{1}{2}(b_{1}-b_{2})$. Define the quantity $\mathcal{I}_{n}$
from equation (50) and with equations (57) and (58) this quantity can be
expressed as
$\begin{split}\mathcal{I}_{n+1}=\frac{2n-1}{2n(b_{0}-b_{3})}\mathcal{I}_{n}-\frac{\sqrt{b_{3}}}{nb_{0}^{n}(b_{0}-b_{3})}\text{
if $n\geq 1$}\\\
\mathcal{I}_{1}=\frac{2}{\sqrt{b_{0}-b_{3}}}\cos^{-1}\left(\sqrt{\frac{b_{3}}{b_{0}}}\right)\text{
if $b_{0}>b_{3}$}\\\
\mathcal{I}_{1}=\frac{2}{\sqrt{b_{3}-b_{0}}}\cosh^{-1}\left(\sqrt{\frac{b_{3}}{b_{0}}}\right)\text{
if $b_{0}<b_{3}$}\end{split}$ (17)
Then replace the first equation of equation (16) by
$\mathbb{C}_{1122}=\tfrac{1}{4}\mathcal{I}_{3}+\tfrac{3}{8}\mathcal{I}_{5}\epsilon^{2}+O(\epsilon^{4})$
(18)
If all three of the eigenvalues are almost equal, that is $b_{1}=1+c_{1}$,
$b_{2}=1+c_{2}$, $b_{3}=1+c_{3}$ with $|c_{1}|,|c_{2}|,|c_{3}|\leq\epsilon$,
then it can be similarly shown that
$\begin{split}\mathbb{C}_{1122}&=\textstyle\frac{1}{10}-\frac{3}{28}c_{1}-\frac{3}{28}c_{2}-\frac{1}{28}c_{3}+\frac{5}{48}c_{1}^{2}+\frac{1}{8}c_{1}c_{2}+\frac{1}{24}c_{1}c_{3}+\frac{5}{48}c_{2}^{2}+\frac{1}{24}c_{2}c_{3}+\frac{1}{48}c_{3}^{2}\\\
&\phantom{={}}\textstyle-\frac{35}{352}c_{1}^{3}-\frac{45}{352}c_{1}^{2}c_{2}-\frac{15}{352}c_{1}^{2}c_{3}-\frac{45}{352}c_{1}c_{2}^{2}-\frac{9}{176}c_{1}c_{2}c_{3}\\\
&\phantom{={}}\textstyle-\frac{9}{352}c_{1}c_{3}^{2}-\frac{35}{352}c_{2}^{3}-\frac{15}{352}c_{2}^{2}c_{3}-\frac{9}{352}c_{2}c_{3}^{2}-\frac{5}{352}c_{3}^{3}+O(\epsilon^{4})\end{split}$
(19)
with similar formulae for $\mathbb{C}_{1133}$ and $\mathbb{C}_{2233}$. The
remaining entries of $\mathbb{C}$ are computed using the last four equations
from (16).
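As an implementation sketch (ours, in Python), the generic branch of equation (16) can be coded as follows; a production version would switch to the expansions of equations (17)-(19) whenever eigenvalues come within a small tolerance of each other. Since $\mathbb{C}$ is completely symmetric, every index permutation of $(i,i,j,j)$ carries the value $\mathbb{C}_{iijj}$.

```python
import numpy as np
from itertools import permutations

def conversion_tensor_C(a, b):
    """Eq. (16): eigenvalues a (of A) and b (of B) in the eigenbasis of B,
    with b assumed ascending and well separated."""
    C = np.zeros((3, 3, 3, 3))
    pair = {(0, 1): (a[0] - a[1]) / (2.0 * (b[1] - b[0])),
            (0, 2): (a[0] - a[2]) / (2.0 * (b[2] - b[0])),
            (1, 2): (a[1] - a[2]) / (2.0 * (b[2] - b[1]))}
    for (i, j), val in pair.items():
        # Complete symmetry: fill all permutations of (i, i, j, j).
        for idx in set(permutations((i, i, j, j))):
            C[idx] = val
    C[0, 0, 0, 0] = 0.5 / b[0] - pair[(0, 1)] - pair[(0, 2)]
    C[1, 1, 1, 1] = 0.5 / b[1] - pair[(0, 1)] - pair[(1, 2)]
    C[2, 2, 2, 2] = 0.5 / b[2] - pair[(0, 2)] - pair[(1, 2)]
    return C   # all entries C_ijkk with distinct i, j, k remain zero
```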
The rank 4 conversion tensor $\mathbb{D}$ given in equation (9) is defined
through equation (43) with respect to the basis of orthonormal eigenvectors of
$B$, and can be simplified to
$\begin{split}\left[\begin{smallmatrix}\mathbb{D}_{1111}&\mathbb{D}_{1122}&\mathbb{D}_{1133}\\\
\mathbb{D}_{2211}&\mathbb{D}_{2222}&\mathbb{D}_{2233}\\\
\mathbb{D}_{3311}&\mathbb{D}_{3322}&\mathbb{D}_{3333}\end{smallmatrix}\right]\quad&=\quad\left[\begin{smallmatrix}\mathbb{C}_{1111}&\mathbb{C}_{1122}&\mathbb{C}_{1133}\\\
\mathbb{C}_{2211}&\mathbb{C}_{2222}&\mathbb{C}_{2233}\\\
\mathbb{C}_{3311}&\mathbb{C}_{3322}&\mathbb{C}_{3333}\end{smallmatrix}\right]^{-1}\\\
\mathbb{D}_{ijij}=\mathbb{D}_{ijji}=\frac{1}{4\mathbb{C}_{ijij}}\text{ if
$i\neq j$}&\qquad\quad\mathbb{D}_{ijkk}=0\text{ if $i\neq j\neq
k$}\end{split}$ (20)
Note that there is no reason to suppose that $\mathbb{D}$ is completely
symmetric because in general $\mathbb{D}_{ijij}$ will not be the same as
$\mathbb{D}_{iijj}$.
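Continuing the sketch, equation (20) translates directly into code: invert the $3\times 3$ block of $\mathbb{C}_{iijj}$ entries and take reciprocals of the $\mathbb{C}_{ijij}$ entries.

```python
import numpy as np

def conversion_tensor_D(C):
    """Eq. (20): conversion tensor D from C, in the eigenbasis of B."""
    D = np.zeros((3, 3, 3, 3))
    block = np.array([[C[i, i, j, j] for j in range(3)] for i in range(3)])
    inv = np.linalg.inv(block)
    for i in range(3):
        for j in range(3):
            D[i, i, j, j] = inv[i, j]
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        val = 1.0 / (4.0 * C[i, j, i, j])
        D[i, j, i, j] = D[i, j, j, i] = D[j, i, i, j] = D[j, i, j, i] = val
    return D   # entries D_ijkk with distinct i, j, k remain zero
```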
In performing the numerical calculations, it is more efficient when forming
$DA/Dt$ and $DB/Dt$ from equation (11) to calculate the right hand side in the
coordinate system of the orthonormal eigenvectors of $B$, and then convert
back to the standard coordinate system when solving for $A$ and $B$.
For example, suppose $\mathbb{B}$ is any rank four tensor such that
$\mathbb{B}_{ijkk}=0$ if $i\neq j\neq k$, and
$\mathbb{B}_{ijkl}=\mathbb{B}_{jikl}=\mathbb{B}_{klij}$. Suppose also that $N$
is a symmetric matrix. Then $\mathbb{B}:N$ can be calculated by first defining
the matrices $M_{\mathbb{B}}$ and $\tilde{M}_{\mathbb{B}}$ as
$M_{\mathbb{B}}=\left[\begin{smallmatrix}\mathbb{B}_{1111}&\mathbb{B}_{1122}&\mathbb{B}_{1133}\\\
\mathbb{B}_{1122}&\mathbb{B}_{2222}&\mathbb{B}_{2233}\\\
\mathbb{B}_{1133}&\mathbb{B}_{2233}&\mathbb{B}_{3333}\end{smallmatrix}\right],\qquad\tilde{M}_{\mathbb{B}}=\left[\begin{smallmatrix}0&\mathbb{B}_{1212}&\mathbb{B}_{1313}\\\
\mathbb{B}_{1212}&0&\mathbb{B}_{2323}\\\
\mathbb{B}_{1313}&\mathbb{B}_{2323}&0\end{smallmatrix}\right]$ (21)
then decompose
$N=\text{diag}(\mathbf{n})+\tilde{N}$ (22)
where $\mathbf{n}=(N_{11},N_{22},N_{33})$, and $\tilde{N}$ is the matrix of
the off-diagonal elements of $N$. It follows that
$\mathbb{B}:N=\text{diag}(M_{\mathbb{B}}\cdot\mathbf{n})+2\tilde{M}_{\mathbb{B}}\circ\tilde{N}$
(23)
where for any matrices $U$ and $V$ we define the entrywise product (also known
as the Hadamard or Schur product) by $(U\circ V)_{ij}=U_{ij}V_{ij}$.
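In code, the contraction of equation (23) avoids forming the full 81-entry tensor product; a minimal sketch (assuming $N$ symmetric, as in all uses above):

```python
import numpy as np

def contract(Bten, N):
    """Compute Bten : N via Eq. (23) for a tensor with the sparsity pattern
    of the conversion tensors (B_ijkk = 0 for distinct i, j, k)."""
    M = np.array([[Bten[i, i, j, j] for j in range(3)] for i in range(3)])
    Mt = np.zeros((3, 3))
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        Mt[i, j] = Mt[j, i] = Bten[i, j, i, j]
    n = np.diag(N)              # diagonal part of N
    Nt = N - np.diag(n)         # off-diagonal part of N
    return np.diag(M @ n) + 2.0 * Mt * Nt   # entrywise (Hadamard) product
```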
### 3.1 The Reduced Strain Closure
Wang et al. [8] described a method that slows down the rate of alignment of
the fibers, which the paper calls the reduced strain closure model (RSC). The
method is implemented by selecting a number $0<\kappa\leq 1$, which is
identified as the rate of reduction. The authors [8] define the tensor
$\mathbb{M}=\sum_{i=1}^{3}{\mathbf{e}}_{i}{\mathbf{e}}_{i}{\mathbf{e}}_{i}{\mathbf{e}}_{i}$
(24)
where ${\mathbf{e}}_{1}$, ${\mathbf{e}}_{2}$, ${\mathbf{e}}_{3}$ are the
orthonormal eigenvectors for $A$. The RSC replaces equations of the form
$\frac{DA}{Dt}=F(A)$ (25)
by
$\frac{DA}{Dt}=F(A)-(1-\kappa)\mathbb{M}:F(A)$ (26)
It turns out this form is simple to reproduce for the FEC. If equation (25) is
represented by the FEC
$\frac{DA}{Dt}=F(A,B),\qquad\frac{DB}{Dt}=G(A,B)$ (27)
then the effect of equation (26) is precisely modeled by the new FEC
$\frac{DA}{Dt}=F(A,B)-(1-\kappa)\mathbb{M}:F(A,B),\qquad\frac{DB}{Dt}=G(A,B)-(1-\kappa)\mathbb{M}:G(A,B)$
(28)
Finally, from a computational point of view, it should be noticed that if we
are working in the basis of orthonormal eigenvectors of $B$, then for any
symmetric matrix $N$ we have that $\mathbb{M}:N$ is simply the diagonal part
of $N$, that is, $\text{diag}(N_{11},N_{22},N_{33})$.
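In the eigenbasis of $B$ this observation makes the RSC correction a one-line operation; a sketch (ours), assuming the right-hand side $F$ is already expressed in that basis:

```python
import numpy as np

def apply_rsc(F, kappa):
    """Eq. (28) in the eigenbasis of B: F - (1 - kappa) * M : F.

    Since M : F is the diagonal part of F here, the diagonal entries are
    scaled by kappa while the off-diagonal entries are unchanged.
    """
    out = F.copy()
    idx = np.diag_indices(3)
    out[idx] = kappa * F[idx]
    return out
```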
### 3.2 Is the solution to FEC always physical?
By the phrase “the solutions stay physical” we mean that $A$ stays positive
definite with trace one, that is, there exists a fiber orientation
distribution $\psi$ that satisfies equation (4). In fact, if $A$ ever ceases
to become positive definite, then not only is the Exact Closure going to give
the wrong answer, it even ceases to have a meaning in that equation (38) which
is used to define $A$ in terms of $B$ cannot be solved. Thus another way to
state “the solutions stay physical” is that $B$ stays positive definite and
finite, that is, none of the eigenvalues of $B$ become zero, and none of them
become infinite.
###### Theorem 1
The FEC solution to the isotropic diffusion equations (12) and (13) has
global-in-time physical solutions if $\Omega$, $\Gamma$ and $D_{r}$ are
bounded.
###### Theorem 2
The FEC solution to the anisotropic diffusion equations (14) and (15) has
global-in-time physical solutions if $D_{r}$ is positive definite, and
$\Omega$, $\Gamma$, $D(D_{r})/Dt$, $D_{r}$ and $1/\|D_{r}^{-1}\|$ are bounded.
The proofs of both theorems are given in the Appendix, beginning with
equation (65). Unfortunately Theorem 2 will not necessarily apply to the Koch
model [10] nor to the Phelps-Tucker ARD model [9], as there is no guarantee
that $1/\|D_{r}^{-1}\|$ is bounded nor, in the ARD case, that $D_{r}$ is
positive definite, unless extra hypotheses are applied.
### 3.3 Algorithm Summary
The algorithm to solve the FEC closure for the second-order orientation tensor
$A$ and the second-order tensor $B$ can be summarized as follows (a minimal
code sketch of one time step follows the list):
1. Initialize $A$ and $B$, and define $\lambda$ along with any constants needed for the diffusion model ${\mathcal{D}}\left[A\right]$.
2. At time $t_{i}$, rotate the tensors $A$ and $B$ into the principal frame of $B$.
3. When the eigenvalues are distinct, use equation (16) for ${\mathbb{C}}$. When two eigenvalues are repeated, use equation (17) along with equation (18); when all three eigenvalues are repeated, use equation (19).
4. From ${\mathbb{C}}$, compute ${\mathbb{D}}$ using equation (20) in the principal frame of $B$.
5. Compute $DA/Dt$ and $DB/Dt$ using either equations (12) and (13) for isotropic diffusion or equations (14), (15) and (28) for the anisotropic diffusion model, ARD-RSC. For the contractions of symmetric rank four tensors with rank two tensors, use equation (23) to reduce the number of redundant multiplication operations.
6. Rotate $DA/Dt$ and $DB/Dt$ into the flow reference frame, and extrapolate $A\left(t_{i+1}\right)$ and $B\left(t_{i+1}\right)$ from time $t_{i}$ using any standard ODE solver.
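A minimal Python sketch of one evaluation of the right-hand sides for the isotropic-diffusion case (equations (12) and (13)), reusing `conversion_tensor_C`, `conversion_tensor_D`, and `contract` from the sketches above; it implements only the distinct-eigenvalue branch and is not the authors' production code.

```python
import numpy as np

def fec_rates(A, B, Omega, Gamma, lam, Dr):
    """Return (DA/Dt, DB/Dt) from Eqs. (12)-(13) for isotropic diffusion."""
    # Step 2: rotate into the principal frame of B (eigenvalues ascending).
    b, R = np.linalg.eigh(B)
    Ap = R.T @ A @ R
    W = R.T @ Omega @ R
    G = R.T @ Gamma @ R
    a = np.diag(Ap)            # off-diagonals of Ap are dropped (forced to zero)
    # Steps 3-4: conversion tensors (distinct-eigenvalue branch only).
    C = conversion_tensor_C(a, b)
    D = conversion_tensor_D(C)
    # Step 5: right-hand sides; F(B) is symmetric, so contract() applies.
    Bp = np.diag(b)
    FB = -0.5 * (Bp @ (W + lam * G) + (-W + lam * G) @ Bp)
    GA = Dr * (2.0 * np.eye(3) - 6.0 * np.diag(a))
    dA = -contract(C, FB) + GA
    dB = FB - contract(D, GA)
    # Step 6: rotate back to the flow frame for the ODE solver.
    return R @ dA @ R.T, R @ dB @ R.T
```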
There are a number of coding issues we encountered, and we feel it will be
helpful to share them, as this will aid others in their computational
implementations.
* There is a choice to compute the basis of orthonormal eigenvectors from either $A$ or $B$; in theory these should be identical. We compute the basis from $B$, arguing that the quantity $B$ is somehow more ‘fundamental’ and $A$ is ‘derived’ from $B$, which is true in the absence of diffusion.
* We solve a ten-dimensional set of ODEs, five for $A$ and five for $B$, since one component of each of $A$ and $B$ can be obtained, respectively, from the relationships $\mathop{\text{tr}}A=1$ and $\det B=1$.
* When computing $A$ from the orthonormal eigenvector basis of $B$, it is important to force the off-diagonal entries to be zero to limit numerical drifting. In our studies, we found that failing to do this could cause an adaptive ODE solver to completely freeze in select scenarios.
* We set the ODE solver to work with a relative tolerance of $10^{-5}$, and choose to use equations (18) or (19) when the eigenvalues are within $10^{-4}$ of each other. This should cause $\mathbb{C}$ to be computed with an accuracy of about $10^{-8}$ when using equations (16), and nearly machine precision when using equations (18) or (19).
## 4 Numerical Results
Results are presented to demonstrate the accuracy improvements from employing
the FEC closure and, just as importantly, the computational speed advances
over the similarly accurate orthotropic closures. In the present examples, all
flows have an initial isotropic orientation state designated by
$A_{11}=A_{22}=A_{33}=1/3$ and $B_{11}=B_{22}=B_{33}=1$, with all other
components of $A$ and $B$ being zero. The accuracy of the closure does not
depend on the initial orientation state; the isotropic state is chosen for
uniformity. The equations of motion are solved using the FEC
closure for $A$ and $B$ from equations (12) and (13) for isotropic diffusion
or from equations (14), (15) and (28) for the anisotropic rotary diffusion
model with the reduced strain closure ARD-RSC from Phelps and Tucker [9]. For
comparison, the classical equations of motion for the second-order orientation
tensor $A$ requiring a curve-fitted closure for the fourth-order orientation
tensor ${\mathbb{A}}$, are solved using equations (6) and (7) for Folgar-
Tucker diffusion and equations (6), (8) and (25) for the ARD-RSC diffusion
model. Results are compared to solutions obtained using the Spherical Harmonic
approach [29] for solving the full distribution function equations (2) and
(3). It has been demonstrated in [29] that solutions using the Spherical
Harmonic approach are only limited in their accuracy by machine precision and
require considerably less computational effort than solutions using the
control volume approach of Bay [25]. Although the Spherical Harmonic approach
offers a great reduction in computation time and an advancement in accuracy,
it still requires more effort than the orientation tensor approach, and it
does not readily lend itself to a form suitable for coupling with commercial
FEA solvers. We select three commonly employed closures for comparison. The
first is the classical Hybrid closure of Advani and Tucker [4], selected as it
is regularly used in commercial and research codes due to its computational
efficiency and ease of implementation. The second is an orthotropic closure,
whose class of closures has found increasing use due to their considerable
accuracy improvements over the Hybrid closure. In our study we select the ORT
closure presented by VerWeyst and Tucker [1] based on the Wetzel closure [12].
Our third closure is that of the IBOF from Chung and Kwon [22] which is
claimed to be a more computationally efficient orthotropic closure as it uses
the invariants of $A$ as opposed to the eigenvalues of $A$ thus avoiding
costly tensor rotations.
### 4.1 Results: Simple Shear Flow
The first example is that of a pure shearing flow, given by $v_{1}=Gx_{3}$ and
$v_{2}=v_{3}=0$. Pure shearing flow is commonly employed (see e.g., [6, 22,
30]) to demonstrate a particular closure problem due to the oscillatory nature
of alignment inherent to the Jeffery fiber orbits. Two scenarios are
presented, the first of the Folgar-Tucker isotropic diffusion model in
equation (2) where $D_{r}=C_{I}\dot{\gamma}$, and the second scenario for the
ARD-RSC anisotropic diffusion model.
#### 4.1.1 Simple Shear Flow Orientation
In industrial simulations, the Folgar-Tucker isotropic diffusion model
typically has interaction coefficients that range from $C_{I}=10^{-3}$ to
$C_{I}=10^{-2}$. The effective fiber aspect ratio ranges from 5 to 30
($a_{e}\simeq 1.4\times a_{r}$, where $a_{r}$ is the aspect ratio of
cylindrical fibers), which corresponds to a shape correction factor ranging
from $\lambda=0.96$ to $\lambda=0.999$. Two simulation results using isotropic
diffusion are presented in Figures 1(a) and (b): the first for
$C_{I}=10^{-3}$ with $\lambda=0.99$ and the latter for $C_{I}=10^{-2}$ with
$\lambda=0.95$. Results for the IBOF closure are not shown as they are nearly
graphically indistinguishable from the ORT closure results. It is important to
observe that the ORT and the FEC closure yield results that are graphically
indistinguishable and reasonably close to the orientation state predicted from
the numerically exact Spherical Harmonic solution. Conversely, the orientation
results from the Hybrid closure tend to over predict the true orientation
state. It is important to point out the apparent oscillatory nature of the
transient solution for the Spherical Harmonic results when $C_{I}=10^{-3}$
with $\lambda=0.99$, which occurs to a lesser extent for $C_{I}=10^{-2}$.
These oscillations are expected due to the low amount of diffusion present.
It is equally important to notice that the oscillations from the FEC closure,
as well as from the ORT, damp out to the same steady state value. Note also that
the FEC does not oscillate excessively for either of the isotropic flow
conditions presented, which was a problem that plagued the early orthotropic
closures (see e.g., [6] and [31]) and the early neural network closures [32].
There remains room for further accuracy improvements (see e.g., [33] for
several preliminary higher accuracy closures). However, it is speculated based
upon the discussion in Jack and Smith [34] that such improvements will be
slight when solving the second-order moment equations, and higher order moment
simulations, such as those that use sixth-order closures (see e.g., [24]) may
need to be considered for significant accuracy improvements.
The Folgar-Tucker model has been used for decades, but tends to overstate the
rate of alignment during the transient solution (see e.g., [7]). The ARD-RSC
model [9] seeks to address these limitations, but few studies have focused on
this new diffusion model and the dependence of computed results on the choice
of closure. In the ARD-RSC model, the rotary diffusion coefficient of the
Folgar-Tucker isotropic diffusion model ($D_{r}=C_{I}\dot{\gamma}$ where
$\dot{\gamma}=\left(\frac{1}{2}\Gamma:\Gamma\right)^{1/2}$) is replaced by an
anisotropic diffusion coefficient expressed by
$D_{r}=b_{1}\dot{\gamma}I+b_{2}\dot{\gamma}A+b_{3}\dot{\gamma}A^{2}+\tfrac{1}{2}b_{4}\Gamma+\tfrac{1}{4}b_{5}\dot{\gamma}^{-1}\Gamma^{2}$
(29)
where
$(b_{1},b_{2},b_{3},b_{4},b_{5})=(1.924\times 10^{-4},5.839\times
10^{-3},4.0\times 10^{-2},1.168\times 10^{-5},0)$ (30)
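To make the model concrete, the following is a minimal Python sketch of equation (29) with the fitted coefficients of equation (30) (our own illustrative code, not the authors' FORTRAN implementation; it assumes $\dot{\gamma}>0$):

```python
import numpy as np

B_COEFF = (1.924e-4, 5.839e-3, 4.0e-2, 1.168e-5, 0.0)  # (b1, ..., b5), equation (30)

def ard_rotary_diffusion(A, Gamma, b=B_COEFF):
    """Anisotropic rotary diffusion tensor D_r of equation (29).

    A: 3x3 second-order orientation tensor; Gamma: 3x3 rate-of-deformation
    tensor. Assumes the scalar shear rate gdot = (Gamma:Gamma/2)^(1/2) > 0."""
    b1, b2, b3, b4, b5 = b
    gdot = np.sqrt(0.5 * np.sum(Gamma * Gamma))
    return (b1 * gdot * np.eye(3) + b2 * gdot * A + b3 * gdot * (A @ A)
            + 0.5 * b4 * Gamma + 0.25 * b5 * (Gamma @ Gamma) / gdot)
```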
The ARD-RSC model serves as an excellent example of the effectiveness of the
FEC approach for solving the tensor form of orientation, as the ARD-RSC model
yields orientation states that are considerably different from those of the
Folgar-Tucker model. Results from the various closures and the spherical
harmonic results are presented in Figure 2 for the ARD-RSC flow with
$\kappa=1/30$. The value of $\kappa=1/30$ is taken from the results presented
in Phelps and Tucker [9], which was based on their experimental observations.
For a fiber aspect ratio of $\sim 5$, corresponding to $\lambda=0.95$, each of
the investigated closures produces graphically similar results. During the
initial flow stages, the Hybrid tends to over predict alignment, whereas the
ORT and the FEC tend to under predict alignment. As steady state is attained,
the FEC and the ORT yield nearly identical results, both of which over predict
$A_{11}$ in the final orientation state whereas the Hybrid yields a reasonable
representation of the orientation. For a long fiber, corresponding to
$\lambda\rightarrow 1$, the trends are similar to those of the lower aspect
ratio fibers, but in this case the FEC and the ORT better represent the final
orientation state relative to the Hybrid.
#### 4.1.2 Orthotropic Closure Errors
The ORT is a polynomial approximation to the Exact Closure, as demonstrated in
the preceding section, and it is not surprising that the two approaches yield
graphically indistinguishable results for many of the flows investigated. On
closer inspection of the transient solution of the ARD-RSC model for
$\kappa=1/30$ and $\lambda=1$ there is a slight difference. This difference is
shown in Figure 3(a) where a closeup view is provided of the $A_{11}$
component for the flow times of $800\leq Gt\leq 1,200$. These results indicate
how well the fitting was performed in the construction of the ORT. As the ORT
is an approximation of the Exact Closure of Montgomery-Smith et al. [14] for
pure Jeffery’s flow, it is of interest to determine whether the slight
deviation comes from the Jeffery’s component or the diffusion component of
equation (6). To this end, we performed a comparison for the derivative of $A$
computed in two different ways. First, for each point in time $t$, we computed
$A(t)$ and $B(t)$ using the FEC method. Then we computed four quantities:
$\frac{DA^{\text{\tiny FEC, Diff}}}{Dt}$ which contains the terms from the
right hand side of equation (14) that explicitly include $D_{r}$,
$\frac{DA^{\text{\tiny FEC, Jeff}}}{Dt}$ which contains the terms from the
right hand side of equation (14) that do not involve $D_{r}$,
$\frac{DA^{\text{\tiny ORT, Diff}}}{Dt}$ the right hand side of equation (8),
and $\frac{DA^{\text{\tiny ORT, Jeff}}}{Dt}$ the right hand side of equation
(6) when $\mathcal{D}(A)$ is set to zero. In the latter two cases $\mathbb{A}$
is computed using the ORT closure. The error is then defined as
$\displaystyle E_{\text{\tiny
Diffusion}}=\sqrt{\sum_{i=1}^{3}\sum_{j=1}^{3}\left(\frac{DA_{ij}^{\text{\tiny
FEC, Diff}}}{Dt}-\frac{DA_{ij}^{\text{\tiny ORT, Diff}}}{Dt}\right)^{2}}$ (31)
$\displaystyle E_{\text{\tiny
Jeffery}}=\sqrt{\sum_{i=1}^{3}\sum_{j=1}^{3}\left(\frac{DA_{ij}^{\text{\tiny
FEC, Jeff}}}{Dt}-\frac{DA_{ij}^{\text{\tiny ORT, Jeff}}}{Dt}\right)^{2}}$ (32)
Each of the two errors is plotted in Figure 3(b). It is clear from the figure
that although the ORT’s derivative calculation from the diffusion component is
not zero, it is minor in comparison to the error from the Jeffery’s part of
the orientation tensor equation of motion. This error is only a rough
indication of the sources of error, but values of 0.04% at a given moment in
flow time can account for an error as large as 40% for $A$ for flow times on
the order of 1,000. Since the errors from each of the possible sources
probably do not drive the error in the solution in the same direction, the
total error would be expected to be less than the upper bound of 40%; in
reality the error is closer to 0.9% as steady state is approached.
Since the ORT and FEC differ by about 0.9%, this raises the question of which
is more accurate in computing the true exact closure. While the FEC in theory
should exactly compute the exact closure, it is possible that numerical errors
creep into the FEC. To test for this, we performed a consistency check. After
finding the solution $A(t)$ and $B(t)$ using the FEC, we calculated
$E_{\text{\tiny
Exact}}=\sqrt{\sum_{i=1}^{3}\sum_{j=1}^{3}\left(A(B)_{ij}-A_{ij}\right)^{2}}$
(33)
where $A(B)$ was computed using equation (38). This calculation was performed
by diagonalizing $B$, applying the elliptic integrals in equation set (47)
using the software package [35], and then performing the reverse change of
basis. The results for the ARD-RSC model with $\kappa=1/30$ and $\lambda=1.00$
show an error of less than $10^{-8}$ throughout the transient solution, thus
suggesting the implementation as presented in this paper for the FEC is quite
accurate.
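As an illustration, the diagonal entries of $A(B)$ in equation set (47) can be evaluated by direct quadrature, and the identity $\mathop{\text{tr}}A=1/\sqrt{\det B}$ of equation (60) checked numerically (a Python sketch standing in for the FORTRAN/GSL implementation; function names are ours):

```python
import numpy as np
from scipy.integrate import quad

def A_from_B_diagonal(b):
    """Diagonal entries a_i of A(B) for diagonal B = diag(b1, b2, b3),
    by direct quadrature of equation set (47)."""
    a = np.empty(3)
    for i in range(3):
        def integrand(s, i=i):
            root = np.sqrt((b[0] + s) * (b[1] + s) * (b[2] + s))
            return 0.5 / ((b[i] + s) * root)
        a[i], _ = quad(integrand, 0.0, np.inf)
    return a

b = np.array([0.5, 1.0, 2.0])        # det B = 1
a = A_from_B_diagonal(b)
print(a, a.sum())                    # tr A = 1/sqrt(det B) = 1, cf. equation (60)
```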
### 4.2 Results: Orientation Error Summary
To quantify the errors observed in Figures 1(a) and (b) for the isotropic
diffusion models, a series of fourteen flows is studied as outlined in Table
1, where $\lambda=1$ for each of the flows. The solutions obtained using the
classical closure methods and the FEC closure are compared to solutions
obtained from the Spherical Harmonic approach. To quantify the
error, the time average of the Frobenius Norm of the difference between the
true solution $A_{ij}^{\mbox{\tiny Spherical}}(t)$ and the approximate
solution obtained from a closure $A_{ij}^{\mbox{\tiny Closure}}(t)$ is
computed as
$\displaystyle\overline{E}_{\mbox{\tiny
Closure}}=\frac{1}{t_{f}-t_{0}}\int_{t_{0}}^{t_{f}}\sqrt{\sum_{i=1}^{3}\sum_{j=1}^{3}\left|A_{ij}^{\mbox{\tiny
Spherical}}(t)-A_{ij}^{\mbox{\tiny Closure}}(t)\right|^{2}}dt$ (34)
where $t_{0}$ is the initial time where the fiber orientation is isotropic and
$t_{f}$ is the time when the steady state is attained, which in this example
will be defined when the magnitude of the largest derivative of the
eigenvalues of $A$ is less than $G\times 10^{-4}$. This can be expressed as
the smallest moment in time when the following is satisfied
$\left(\max_{i\in\\{1,2,3\\}}|\frac{DA_{(i)}}{Dt}(t)|\right)\leq G\times
10^{-4}$. The quantitative error metric in equation (34) yields a value for
the simple shear flow of Figure 1(b) for the FEC, ORT and Hybrid closures of,
respectively, $4.74\times 10^{-2}$, $4.85\times 10^{-2}$ and $1.75\times
10^{-1}$. As the objective is to compare the relative accuracy improvements
between the FEC closure and the existing closures, we will normalize the error
metric in equation (34) as
$\displaystyle\overline{\varepsilon}_{\mbox{\tiny Closure}}$
$\displaystyle\equiv$ $\displaystyle\frac{\overline{E}_{\mbox{\tiny
Closure}}}{\min\limits_{\mbox{\tiny Closure}}\left(\overline{E}_{\mbox{\tiny
Closure}}\right)}$ (35)
where the closure with the greatest accuracy will have a value of
$\overline{\varepsilon}_{\mbox{\tiny Closure}}=1$, and the remaining closures
will have a value of $\overline{\varepsilon}_{\mbox{\tiny Closure}}$ in excess
of 1. For each of the flows studied, the normalized error of equation (35) is
tabulated in Table 1 for the FEC, ORT, IBOF and the Hybrid closures. In each
of the flows considered, the FEC performs as well as or better than the
orthotropic closures.
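For reference, equations (34) and (35) amount to the following computation (a Python sketch; the array shapes are our assumption):

```python
import numpy as np

def time_averaged_error(t, A_closure, A_spherical):
    """Equation (34): time-averaged Frobenius norm of the difference between a
    closure solution and the Spherical Harmonic reference.

    t: (N,) array of times; A_closure, A_spherical: (N, 3, 3) tensor histories."""
    frob = np.sqrt(np.sum(np.abs(A_spherical - A_closure) ** 2, axis=(1, 2)))
    return np.trapz(frob, t) / (t[-1] - t[0])

# Equation (35): normalize by the most accurate closure, e.g.
#   errors = {name: time_averaged_error(t, A[name], A_sph) for name in closures}
#   eps = {name: e / min(errors.values()) for name, e in errors.items()}
```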
### 4.3 Results: Combined Flow
A classical flow to demonstrate the effectiveness and robustness of a closure
is that of the combined flow presented in Cintra and Tucker [6]. This flow is
often selected as the orientation state crisscrosses the eigenspace of
possible orientations. The combined flow begins with pure shear in the
$x_{1}-x_{2}$ direction for $0\leq Gt<10$ defined by the velocity field
$v_{1}=Gx_{2}$, $v_{2}=v_{3}=0$. The flow then transitions to shearing flow in
the $x_{2}-x_{3}$ plane with stretching in the $x_{3}$ direction during the
time $10\leq Gt<20$ defined by the velocity field $v_{1}=-1/20Gx_{1}$,
$v_{2}=-1/20Gx_{2}+Gx_{3}$ and $v_{3}=1/10Gx_{3}$. The flow then transitions
to a flow with a considerable amount of stretching in the $x_{1}$ direction
with a reduced amount of shearing in the $x_{2}-x_{3}$ plane for $20\leq Gt$
defined by the velocity field $v_{1}=Gx_{1}$, $v_{2}=-1/2Gx_{2}+Gx_{1}$ and
$v_{3}=-1/2\,Gx_{3}$. The times where the flow transitions occur are chosen to prevent
the orientation from attaining steady state, thus any error in the transient
solution will be propagated to the next flow state. As observed in Figure 4
for flow results from the Folgar-Tucker model with $C_{I}=10^{-2}$ and
$\lambda=1$, the ORT and the FEC again yield similar results. This is
significant as it further demonstrates the robustness and the accuracy of the
FEC.
### 4.4 Results: Center-gated Disk Flow
The final flow investigated is that of the center-gated disk, a typical flow
condition in industrial processes [1, 36]. The flow enters the mold through
the pin gate and flows radially outward, where the velocity is a function of
both the gap height $2b$ and the radial distance from the gate $r$. The
velocity gradient for a Newtonian fluid can be represented by [6]
$\displaystyle v_{r}=\frac{3Q}{8\pi
rb}\left(1-\left(\frac{z}{b}\right)^{2}\right),\>\>\>\>v_{\theta}=v_{z}=0$
(36) $\displaystyle\frac{\partial v_{i}}{\partial x_{j}}=\frac{3Q}{8\pi
rb}\left[\begin{smallmatrix}-\frac{1}{r}\left(1-\frac{z^{2}}{b^{2}}\right)&0&-\frac{2}{b}\frac{z}{b}\\\
0&\frac{1}{r}\left(1-\frac{z^{2}}{b^{2}}\right)&0\\\ 0&0&0\\\
\end{smallmatrix}\right]$ (37)
where $z$ is the gap height location between the mold walls, $b$ is half the
gap height thickness, and $Q$ is the flow rate. Orientation results are
presented in Figure 5 for a gap height of $z/b=4/10$ for isotropic diffusion
with $C_{I}=10^{-2}$ and $\lambda=1$. Again, the Hybrid overshoots the actual
orientation state, whereas the ORT and the FEC behave in a graphically
identical fashion. This last result further demonstrates the robustness of the
FEC approach. Similar tests were performed for gap heights of
$z/b=0,1/10,2/10,\ldots,9/10$ and similar conclusions were observed at all gap
heights.
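For completeness, the velocity gradient of equation (37) is straightforward to code (a sketch; the $(r,\theta,z)$ component ordering is assumed):

```python
import numpy as np

def velocity_gradient_disk(r, z, b, Q):
    """Velocity gradient of equation (37), components ordered (r, theta, z)."""
    c = 3.0 * Q / (8.0 * np.pi * r * b)
    L = np.zeros((3, 3))
    L[0, 0] = -c / r * (1.0 - (z / b) ** 2)
    L[1, 1] = c / r * (1.0 - (z / b) ** 2)
    L[0, 2] = -c * 2.0 * z / b ** 2
    return L
```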
### 4.5 Results: Computational Time Enhancement
An additional goal for any new closure is that of reducing the computational
requirements for numerical solutions. Simulations are performed using in-house,
single-threaded code compiled with Intel’s FORTRAN 90 compiler, version 11.1.
Computations are performed on a standard desktop with an Intel i7 processor and
8 GB of RAM. The solution of the ORT has been studied by the investigators for
several years, and a reasonably efficient algorithm has been developed.
Solutions for the IBOF were made using the FORTRAN 90 code discussed in Jack
et al. [30].
Notice from Equations (12) and (13) that the operations
${\mathbb{C}}:\left[\cdots\right]$ and ${\mathbb{D}}:\left[\cdots\right]$ are
independent of coordinate frame. As we explained in equation (23), in the
principal frame there are a considerable number of terms in both
${\mathbb{C}}$ and ${\mathbb{D}}$ that are zero that are known prior to any
calculations, and thus operations involving $0$ can be avoided in the coding.
In addition, computing $DA/Dt$ and $DB/Dt$ in the principal reference frame
and then rotating the resulting $3\times 3$ tensors into the local reference
frame will be more efficient than rotating the $3\times 3\times 3\times 3$
tensors ${\mathbb{C}}$ and ${\mathbb{D}}$ into the local reference frame and
then computing $DA/Dt$ and $DB/Dt$. All computations of the FEC utilize this
characteristic, and thus greatly reduce the computational efforts. In
addition, redundant calculations from Equations (12) and (13) are closely
followed and performed only once. These computations are particularly frequent
in the double contractions of the fourth-order tensors with the second-order
tensors.
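The saving can be seen by comparing the two rotations directly (an illustrative Python sketch; the production codes are FORTRAN 90):

```python
import numpy as np

def rotate_rank2(R, T):
    """Rotate a 3x3 tensor into a new frame: a few 3x3 matrix products."""
    return R @ T @ R.T

def rotate_rank4(R, T4):
    """Rotate a 3x3x3x3 tensor: four index contractions, far more work."""
    return np.einsum('ai,bj,ck,dl,ijkl->abcd', R, R, R, R, T4)
```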
In the first study, computations were performed for the previous closure
operations for the ORT and the Hybrid using algorithms similar to
implementations discussed in the literature. In studies using an adaptive step
size solver, solutions for the IBOF took nearly 10 times that of the ORT,
whereas for the fixed step size the two closures required similar
computational efforts. To avoid biasing the computational comparisons with an
adaptive step size solver, computations were performed using a fixed step-size
fourth-order Runge-Kutta (R-K) solver with a very small step size of $\Delta
Gt=10^{-4}$. Computational times are tabulated in Table 2 for both CPU time
and normalized time. Normalized time is defined relative to the often-employed
Hybrid closure, using the standard Hybrid implementation with the very small
step size. The ORT required nearly 770 seconds, a
factor of 31 times greater than that of the Hybrid. Conversely, the FEC
required only 26 seconds, a slight increase in effort beyond the Hybrid, which
required 25 seconds. This is very striking as the Hybrid closure is often
selected in research and industrial codes due to its computational efficiency,
while recognizing the sacrifice in computational accuracy. This is no longer
the case with the FEC as it has the same accuracy of the orthotropic closures
while providing computational speeds nearly identical to that of the Hybrid
closure.
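For reference, a single step of the fixed-step fourth-order R-K scheme used in the timing study has the standard form below (a generic sketch, not the authors' code):

```python
def rk4_step(f, t, y, h):
    """One step of the classical fixed-step fourth-order Runge-Kutta scheme."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```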
In the process of developing the FEC algorithm, it was observed that many
redundant operations existed in the implementation of the ORT and the Hybrid
closures. For existing implementation of the classical closures, no special
consideration was given to the ${\mathbb{A}}:\Gamma$ term, but since the rank
four tensor ${\mathbb{A}}$ is symmetric, equation (23) can be used to reduce
the number of operations in the double contraction to that of simple rank-two
tensor operations for both the Hybrid closure and the ORT closure
implementations. For the ORT, the computational problem can be further
simplified by constructing the second-order tensor $DA/Dt$ in the principal
frame, and then performing the tensor rotation back into the reference frame.
Thus the costly rotations of the fourth-order tensor ${\mathbb{A}}$ are
avoided. These optimized results for the Hybrid and the ORT are shown in Table
2, and it is clear that the computational times were greatly reduced. The
optimized Hybrid implementation reduced the computational time to 30% of the
original time, whereas the ORT implementation improved by over an order of
magnitude. With these additional computational advances the ORT appears to be
a more viable alternative to the Hybrid, but the FEC still has similar
computational requirements. It is expected that with further studies, the FEC
algorithm could be improved to further reduce its computational times.
## 5 Conclusion
The Fast Exact Closure is a robust, computationally efficient approach to
solve the fiber orientation equations of motion for the orientation tensors.
This unique approach does not require any form of curve fitting based on
orientation data obtained from numerical solutions of the full fiber
orientation distribution. The results presented demonstrate that the FEC is as
accurate and robust as the existing industrially accepted closures, while
enjoying computational speeds equivalent to the industrial form of the hybrid
closure.
## 6 Acknowledgments
The authors gratefully acknowledge support from the N.S.F. via grant C.M.M.I.
0727399, and from Baylor University through their faculty start-up package.
## Appendix: Justification and Proofs
Following [14], the Exact Closure is as follows. Given $A$, compute the symmetric matrix
$B$ by solving
$A=A(B)=\tfrac{1}{2}\int_{0}^{\infty}\frac{(B+sI)^{-1}\,ds}{\sqrt{\text{det}(B+sI)}}$
(38)
It was shown in [14] that $B$ is unique with this property. Then compute
$\mathbb{A}$ using the formula
$\mathbb{A}=\tfrac{3}{4}\int_{0}^{\infty}\frac{s\,\mathcal{S}((B+sI)^{-1}\otimes(B+sI)^{-1})\,ds}{\sqrt{\text{det}(B+sI)}}$
(39)
Here $\mathcal{S}$ represents the symmetrization of a rank 4 tensor, that is,
$\mathcal{S}(\mathbb{B})_{ijkl}$ is the average of $\mathbb{B}_{mnpq}$ over
all permutations $(m,n,p,q)$ of $(i,j,k,l)$.
It can be shown that the following two statements are equivalent:
1. Equation (38) holds for all time.
2. Equation (38) holds at $t=0$, and equation (9) holds for all time, where
$\mathbb{C}=\tfrac{3}{4}\int_{0}^{\infty}\frac{\mathcal{S}((B+sI)^{-1}\otimes(B+sI)^{-1})\,ds}{\sqrt{\text{det}(B+sI)}}$
(40)
Furthermore, it can be shown for every symmetric matrix $M$ that
$\mathop{\text{tr}}(B^{-1}\cdot M)=2\mathop{\text{tr}}(\mathbb{C}:M)$ (41)
and hence it can be seen that $\mathop{\text{tr}}(DA/Dt)=0$ if and only if
$\mathop{\text{tr}}(B^{-1}\cdot(DB/Dt))=0$, that is, $\mathop{\text{tr}}A$
stays constant if and only if $\det B$ stays constant.
Next, we have
The linear map on symmetric matrices $M\mapsto\mathbb{C}:M$ is invertible (42)
that is, there exists a rank 4 tensor $\mathbb{D}$ such that
$\mathbb{C}:\mathbb{D}:M=\mathbb{D}:\mathbb{C}:M=M\text{ for any symmetric
matrix $M$}$ (43)
Indeed if we define the six by six matrix
$\mathcal{C}=\left[\begin{smallmatrix}\mathbb{C}_{1111}&\mathbb{C}_{1122}&\mathbb{C}_{1133}&2\mathbb{C}_{1112}&2\mathbb{C}_{1113}&2\mathbb{C}_{1123}\\\
\mathbb{C}_{2211}&\mathbb{C}_{2222}&\mathbb{C}_{2233}&2\mathbb{C}_{2212}&2\mathbb{C}_{2213}&2\mathbb{C}_{2223}\\\
\mathbb{C}_{3311}&\mathbb{C}_{3322}&\mathbb{C}_{3333}&2\mathbb{C}_{3312}&2\mathbb{C}_{3313}&2\mathbb{C}_{3323}\\\
2\mathbb{C}_{1211}&2\mathbb{C}_{1222}&2\mathbb{C}_{1233}&4\mathbb{C}_{1212}&4\mathbb{C}_{1213}&4\mathbb{C}_{1223}\\\
2\mathbb{C}_{1311}&2\mathbb{C}_{1322}&2\mathbb{C}_{1333}&4\mathbb{C}_{1312}&4\mathbb{C}_{1313}&4\mathbb{C}_{1323}\\\
2\mathbb{C}_{2311}&2\mathbb{C}_{2322}&2\mathbb{C}_{2333}&4\mathbb{C}_{2312}&4\mathbb{C}_{2313}&4\mathbb{C}_{2323}\end{smallmatrix}\right]$
(44)
then $\mathbb{D}$ can be calculated using the formula
$\displaystyle\left[\begin{smallmatrix}\mathbb{D}_{1111}&\mathbb{D}_{1122}&\mathbb{D}_{1133}&\mathbb{D}_{1112}&\mathbb{D}_{1113}&\mathbb{D}_{1123}\\\
\mathbb{D}_{2211}&\mathbb{D}_{2222}&\mathbb{D}_{2233}&\mathbb{D}_{2212}&\mathbb{D}_{2213}&\mathbb{D}_{2223}\\\
\mathbb{D}_{3311}&\mathbb{D}_{3322}&\mathbb{D}_{3333}&\mathbb{D}_{3312}&\mathbb{D}_{3313}&\mathbb{D}_{3323}\\\
\mathbb{D}_{1211}&\mathbb{D}_{1222}&\mathbb{D}_{1233}&\mathbb{D}_{1212}&\mathbb{D}_{1213}&\mathbb{D}_{1223}\\\
\mathbb{D}_{1311}&\mathbb{D}_{1322}&\mathbb{D}_{1333}&\mathbb{D}_{1312}&\mathbb{D}_{1313}&\mathbb{D}_{1323}\\\
\mathbb{D}_{2311}&\mathbb{D}_{2322}&\mathbb{D}_{2333}&\mathbb{D}_{2312}&\mathbb{D}_{2313}&\mathbb{D}_{2323}\end{smallmatrix}\right]=\mathcal{C}^{-1}$
(45) $\displaystyle\mathbb{D}_{ijkl}=\mathbb{D}_{jikl}=\mathbb{D}_{ijlk}$ (46)
In the basis of orthonormal eigenvectors of $B$, since $\mathbb{C}_{ijkk}=0$
whenever $i\neq j\neq k$, this reduces to equation (20).
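In code, equations (44)-(46) amount to packing $\mathbb{C}$ into a six-by-six matrix, inverting it, and unpacking the result with the symmetries of equation (46) (a Python sketch; index conventions follow equation (44)):

```python
import numpy as np

PAIRS = [(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)]  # 11, 22, 33, 12, 13, 23

def rank4_inverse(C4):
    """Compute D of equations (45)-(46) from a fully symmetric rank-4 tensor
    C4 (shape (3,3,3,3)), via the 6x6 matrix of equation (44)."""
    Cmat = np.empty((6, 6))
    for I, (i, j) in enumerate(PAIRS):
        for J, (k, l) in enumerate(PAIRS):
            w = (2.0 if i != j else 1.0) * (2.0 if k != l else 1.0)
            Cmat[I, J] = w * C4[i, j, k, l]
    Dmat = np.linalg.inv(Cmat)                      # equation (45)
    D4 = np.zeros((3, 3, 3, 3))
    for I, (i, j) in enumerate(PAIRS):
        for J, (k, l) in enumerate(PAIRS):
            for a, b in {(i, j), (j, i)}:           # minor symmetries,
                for c, d in {(k, l), (l, k)}:       # equation (46)
                    D4[a, b, c, d] = Dmat[I, J]
    return D4
```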
Next, if $B$ is diagonal, then $A$ is diagonal with entries
$\begin{split}a_{1}=\tfrac{1}{2}\int_{0}^{\infty}\frac{ds}{(b_{1}+s)^{3/2}\sqrt{b_{2}+s}\sqrt{b_{3}+s}}\\\
a_{2}=\tfrac{1}{2}\int_{0}^{\infty}\frac{ds}{\sqrt{b_{1}+s}(b_{2}+s)^{3/2}\sqrt{b_{3}+s}}\\\
a_{3}=\tfrac{1}{2}\int_{0}^{\infty}\frac{ds}{\sqrt{b_{1}+s}\sqrt{b_{2}+s}(b_{3}+s)^{3/2}}\end{split}$
(47)
and
$\begin{split}\mathbb{C}_{1111}=\tfrac{3}{4}\int_{0}^{\infty}\frac{ds}{(b_{1}+s)^{5/2}\sqrt{b_{2}+s}\sqrt{b_{3}+s}}&\qquad\quad\mathbb{C}_{1122}=\tfrac{1}{4}\int_{0}^{\infty}\frac{ds}{(b_{1}+s)^{3/2}(b_{2}+s)^{3/2}\sqrt{b_{3}+s}}\\\
\mathbb{C}_{2222}=\tfrac{3}{4}\int_{0}^{\infty}\frac{ds}{\sqrt{b_{1}+s}(b_{2}+s)^{5/2}\sqrt{b_{3}+s}}&\qquad\quad\mathbb{C}_{2233}=\tfrac{1}{4}\int_{0}^{\infty}\frac{ds}{\sqrt{b_{1}+s}(b_{2}+s)^{3/2}(b_{3}+s)^{3/2}}\\\
\mathbb{C}_{3333}=\tfrac{3}{4}\int_{0}^{\infty}\frac{ds}{\sqrt{b_{1}+s}\sqrt{b_{2}+s}(b_{3}+s)^{5/2}}&\qquad\quad\mathbb{C}_{1133}=\tfrac{1}{4}\int_{0}^{\infty}\frac{ds}{(b_{1}+s)^{3/2}\sqrt{b_{2}+s}(b_{3}+s)^{3/2}}\end{split}$
(48)
all the other coefficients of $\mathbb{C}$ being zero. Furthermore
$\mathbb{C}:I=\tfrac{1}{2}B^{-1}$ (49)
Equations (16) now follow by an easy calculation. Equations (18) and (19) are
obtained by expanding the fourth equation of (48) using Taylor’s series, where
$\mathcal{I}_{n}=\int_{0}^{\infty}\frac{ds}{(b_{0}+s)^{n}\sqrt{b_{3}+s}}$ (50)
The proofs of various details now follow.
Proof of equations (9) and (40): Write $\dot{A}$ and $\dot{B}$ for
$\frac{DA}{Dt}$ and $\frac{DB}{Dt}$ respectively. Use the formulae
$\frac{D}{Dt}B^{-1}=-B^{-1}\cdot\dot{B}\cdot
B^{-1}\quad\text{and}\quad\frac{D}{Dt}\det
B=\mathop{\text{tr}}(B^{-1}\cdot\dot{B})\det B$ (51)
to obtain
$\begin{split}\dot{A}&=-\tfrac{1}{2}\int_{0}^{\infty}\frac{(B+sI)^{-1}\cdot\dot{B}\cdot(B+sI)^{-1}\,ds}{\sqrt{\text{det}(B+sI)}}-\tfrac{1}{4}\int_{0}^{\infty}\frac{[(B+sI)^{-1}:\dot{B}]\,(B+sI)^{-1}\,ds}{\sqrt{\text{det}(B+sI)}}\\\
&=-\mathbb{C}:\dot{B}\end{split}$ (52)
since for any symmetric matrix $K$ we have
$\mathcal{S}(K\otimes K):\dot{B}=\tfrac{2}{3}K\cdot\dot{B}\cdot
K+\tfrac{1}{3}(K:\dot{B})K$ (53)
Proof of equation (10): For any invertible symmetric matrix $K$
$\mathcal{S}(K^{-1}\otimes K^{-1}):(K\cdot M)=\mathcal{S}(K^{-1}\otimes K^{-1}):(M^{T}\cdot
K)=\tfrac{1}{3}((\mathop{\text{tr}}M)K^{-1}+M\cdot K^{-1}+K^{-1}\cdot M^{T})$
(54)
Setting $K=B+sI$, we multiply both sides by $3/(4\sqrt{\det(B+sI)})$, and
integrate with respect to $s$ from zero to infinity, to obtain equation (10).
Proof of equations (17) from (50): To compute $\mathcal{I}_{1}$, use the
formulae
$\displaystyle\frac{d}{ds}\cos^{-1}\left(\sqrt{\frac{b_{3}+s}{b_{0}+s}}\right)=-\frac{\sqrt{b_{0}-b_{3}}}{2(b_{0}+s)\sqrt{b_{3}+s}}$
(55)
$\displaystyle\frac{d}{ds}\cosh^{-1}\left(\sqrt{\frac{b_{3}+s}{b_{0}+s}}\right)=-\frac{\sqrt{b_{3}-b_{0}}}{2(b_{0}+s)\sqrt{b_{3}+s}}$
(56)
Next, integrating by parts, we obtain
$\mathcal{I}_{n}=-\frac{2\sqrt{b_{3}}}{b_{0}^{n}}+2n\int_{0}^{\infty}\frac{\sqrt{b_{3}+s}\,ds}{(b_{0}+s)^{n+1}}$
(57)
and simple algebra gives
$\int_{0}^{\infty}\frac{\sqrt{b_{3}+s}\,ds}{(b_{0}+s)^{n+1}}=\mathcal{I}_{n}+(b_{3}-b_{0})\mathcal{I}_{n+1}$
(58)
Proof of equation (41): For any positive definite matrix $X$, if
$A(X)=\tfrac{1}{2}\int_{0}^{\infty}\frac{(X+sI)^{-1}\,ds}{\sqrt{\text{det}(X+sI)}}$
(59)
then
$\mathop{\text{tr}}(A(X))=\tfrac{1}{2}\int_{0}^{\infty}\frac{\mathop{\text{tr}}((X+sI)^{-1})\,ds}{\sqrt{\text{det}(X+sI)}}=-\int_{0}^{\infty}\frac{d}{ds}\left(\frac{1}{\sqrt{\text{det}(X+sI)}}\right)\,ds=\frac{1}{\sqrt{\det
X}}$ (60)
If $\mathop{\text{tr}}(B^{-1}\cdot M)=\alpha$, then (remembering that $\det
B=1$) we have $\det(B+\epsilon M)=1+\epsilon\alpha+O(\epsilon^{2})$ as
$\epsilon\to 0$. Hence
$1-\tfrac{1}{2}\epsilon\alpha+O(\epsilon^{2})=\mathop{\text{tr}}(A(B+\epsilon
M))=1-\epsilon\mathop{\text{tr}}(\mathbb{C}:M)+O(\epsilon^{2})$. Therefore
$\mathop{\text{tr}}(\mathbb{C}:M)=\tfrac{1}{2}\alpha$.
Proof of equation (42): This follows because
$\text{$M$ is a symmetric non-zero matrix}\Rightarrow M:\mathbb{C}:M>0$ (61)
and hence $M\neq 0\Rightarrow\mathbb{C}:M\neq 0$.
To see this, suppose that $K$ is a positive definite three by three matrix,
and let $k_{1}$, $k_{2}$ and $k_{3}$ be its eigenvalues. Then in the basis of
corresponding orthonormal eigenvectors of $K$, we have that for any non-zero
symmetric $M$
$M:\mathcal{S}(K\otimes
K):M=\tfrac{1}{3}\left(\sum_{i=1}^{3}k_{i}M_{ii}\right)^{2}+\tfrac{2}{3}\sum_{i,j=1}^{3}k_{i}k_{j}M_{ij}^{2}>0$
(62)
Apply this to $K=(B+sI)^{-1}$, multiply by $(\det(B+sI))^{-1/2}$, and then
integrate over $s$ to obtain $M:\mathbb{C}:M>0$.
Proof of equation (49): Without loss of generality $B$ is diagonal. Hence we
need to prove statements such as
$\mathbb{C}_{1111}+\mathbb{C}_{1122}+\mathbb{C}_{1133}=\tfrac{1}{2}b_{1}^{-1}$
when $\mathbb{C}$ satisfies equation (48). But
$\begin{split}\mathbb{C}_{1111}+\mathbb{C}_{1122}+\mathbb{C}_{1133}&=-\tfrac{1}{2}\int_{0}^{\infty}\frac{d}{ds}\left(\frac{1}{(b_{1}+s)^{3/2}\sqrt{b_{2}+s}\sqrt{b_{3}+s}}\right)\,ds\\\
&=\tfrac{1}{2}b_{1}^{-3/2}b_{2}^{-1/2}b_{3}^{-1/2}\end{split}$ (63)
The result follows since $b_{1}b_{2}b_{3}=1$.
Proof of equation (28): From equation (9), we see that the RSC version of
equation (27) is equation (28) and
$\frac{DB}{Dt}=G(A,B)-(1-\kappa)\mathbb{D}:\mathbb{M}:F(A,B)$ (64)
Since $\mathbb{D}:F(A,B)=G(A,B)$, it follows that all we need to show is
$\mathbb{D}:\mathbb{M}:F(A,B)=\mathbb{M}:\mathbb{D}:F(A,B)$. This is easily
seen by working in the basis of orthonormal eigenvectors of $B$, noticing that
then $\mathbb{M}:N$ is simply the diagonal part of $N$, and applying equation
(23).
Proof of Theorem 1: It follows from $\det B=1$ that the only way that the
solutions can become non-physical is if $B$ ‘blows up,’ that is, if one or
more of the eigenvalues of $B$ become infinite in finite time. (Also, [37,
Theorem 1.4] can be used to show that the finiteness of the eigenvalues of $B$
imply the differential equations have a unique solution.)
Substituting $M=I$ into equation (10), we obtain $\mathbb{C}:B=\tfrac{3}{2}A$,
that is,
$\mathbb{D}:A=\tfrac{2}{3}B$ (65)
Take the trace of equation (13) and use equation (65), to obtain
$\frac{D}{Dt}\mathop{\text{tr}}B\leq
c(\|\Omega\|+\|\Gamma\|+D_{r})(\mathop{\text{tr}}B)-2D_{r}(I:\mathbb{D}:I)$
(66)
for some universal constant $c>0$. Here $\|\cdot\|$ denotes the spectral norm
of a matrix, and we have used the inequality $\mathop{\text{tr}}(X\cdot
Y)\leq\|X\|\mathop{\text{tr}}Y$ whenever $Y$ is positive definite. By equation
(61), we have $I:\mathbb{D}:I=M:\mathbb{C}:M\geq 0$, where $M=\mathbb{D}:I$,
and hence
$\frac{D}{Dt}\mathop{\text{tr}}B\leq
c(\|\Omega\|+\|\Gamma\|+D_{r})(\mathop{\text{tr}}B)$ (67)
Now we can apply Gronwall’s inequality [37, Chapter 2.1.1] (in Lagrangian
coordinates) to obtain
$\mathop{\text{tr}}B\leq(\mathop{\text{tr}}B_{0})e^{ctL}$ (68)
where $L$ is an upper bound for $\|\Omega\|+\|\Gamma\|+D_{r}$, and $B_{0}$ is
the value of $B$ at $t=0$. Therefore $\mathop{\text{tr}}B$ remains finite, and
since $B$ is positive definite, no eigenvalue of $B$ blows up to infinity in
finite time.
Proof of Theorem 2: Note that the positive definiteness of $D_{r}$, and the
boundedness of $D_{r}$ and $1/\|D_{r}^{-1}\|$ guarantee that the ratio of
$\mathop{\text{tr}}B$ and $D_{r}:B$ is bounded from above and below. From
equation (15), and using equation (65)
$\frac{D}{Dt}(D_{r}:B)\leq{}c\left(\|\Omega\|+\|\Gamma\|+\|D_{r}\|+\left\|\frac{D(D_{r})}{Dt}\right\|\right)(D_{r}:B)-2(D_{r}:\mathbb{D}:D_{r})$
(69)
The rest of the proof proceeds by a similar argument as above.
## 7 References
* [1] VerWeyst, B.E., C. Tucker, P. Foss, J. O’Gara, Fiber Orientation in 3-D Injection Molded Features: Prediction and Experiment, International Polymer Processing 14 (1999) 409–420.
* [2] Fan, X., N. Phan-Thien, R. Zheng, A Direct Simulation of Fibre Suspensions, Jn. of Non-Newtonian Fluid Mechanics 74 (1998) 113–135.
* [3] Folgar, F.P., C. Tucker, Orientation Behavior of Fibers in Concentrated Suspensions, Jn. of Reinforced Plastics and Composites 3 (1984) 98–119.
* [4] Advani, S.G., C. Tucker, The Use of Tensors to Describe and Predict Fiber Orientation in Short Fiber Composites, Jn. of Rheology 31 (8) (1987) 751–784.
* [5] Jack, D.A., D. Smith, The Effect of Fiber Orientation Closure Approximations on Mechanical Property Predictions, Composites, Part A 38 (2007) 975–982.
* [6] Cintra, J. S., C. Tucker, Orthotropic Closure Approximations for Flow-Induced Fiber Orientation, Jn. of Rheology 39 (6) (1995) 1095–1122.
* [7] Tucker, C.L., J. Wang, J. O’Gara, G. DeBarr, Improved Fiber Orientation Predictions for Injection Molded Composites, in: NSF/DOE/APC Workshop: The Future of Modeling in Composites Molding Processes, Washington, D.C., 2004.
* [8] Wang, J., J. O’Gara, C. Tucker, An Objective Model for Slow Orientation Kinetics in Concentrated Fiber Suspensions: Theory and Rheological Evidence, Journal of Rheology 52 (5) (2008) 1179–1200.
* [9] Phelps, J.H., C. Tucker, An Anisotropic Rotary Diffusion Model for Fiber Orientation in Short- and Long-Fiber Thermoplastics, Journal of Non-Newtonian Fluid Mechanics 156 (2009) 165–176.
* [10] Koch, D.L., A Model for Orientational Diffusion in Fiber Suspensions, Physics of Fluids 7 (8) (1995) 2086–2088.
* [11] Jack, D.A., S. Montgomery-Smith, D. Smith, Anisotropic Diffusion Model for Suspensions of Short-Fiber Composite Processes., in: The XVth International Congress on Rheology, the Society of Rheology 80th Annual Meeting, The Society of Rheology, Monterey, CA, 2008.
* [12] Wetzel, E.D., Modeling Flow-Induced Microstructure of Inhomogeneous Liquid-Liquid Mixtures, Ph.D. thesis, University of Illinois at Urbana Champaign (1999).
* [13] VerWeyst, B.E., Numerical Predictions of Flow Induced Fiber Orientation in Three-Dimensional Geometries, Ph.D. thesis, University of Illinois at Urbana Champaign (1998).
* [14] Montgomery-Smith, S.J., W. He, D.A. Jack, D.E. Smith, Exact Tensor Closures for the Three Dimensional Jeffery’s Equation, under review, Journal of Fluid Mechanics, draft available at http://www.math.missouri.edu/~stephen/preprints/exact-closure.html (2010).
* [15] Jeffery, G.B., The Motion of Ellipsoidal Particles Immersed in a Viscous Fluid, Proceedings of the Royal Society of London A 102 (1923) 161–179.
* [16] Bird, R. B., C. Curtiss, R. C. Armstrong, O. Hassager, Dynamics of Polymeric Liquids, 2nd Edition, Vol. 2: Kinetic Theory, John Wiley & Sons, Inc., New York, NY, 1987.
* [17] Petrie, C.J.S., The Rheology of Fibre Suspensions, Journal of Non-Newtonian Fluid Mechanics 87 (1999) 369–402.
* [18] Hinch, E.J., L. Leal, Time-Dependent Shear Flows of a Suspension of Particles with Weak Brownian Rotations, Journal of Fluid Mechanics 57 (1973) 753–767.
* [19] Altan, M.C., S. Advani, S. Güçeri, R. Pipes, On the Description of the Orientation State for Fiber Suspensions in Homogeneous Flows, Jn. of Rheology 33 (7) (1989) 1129–1155.
* [20] Altan, M.C., S. Subbiah, S. Guceri, R. Pipes, Numerical Prediction of Three-Dimensional Fiber Orientation in Hele-Shaw Flows, Polymer Engineering and Science 30 (14) (1990) 848–859.
* [21] Verleye, V., F. Dupret, Prediction of Fiber Orientation in Complex Injection Molded Parts, in: Developments in Non-Newtonian Flows, 1993, pp. 139–163.
* [22] Chung, D.H., T. Kwon, Invariant-Based Optimal Fitting Closure Approximation for the Numerical Prediction of Flow-Induced Fiber Orientation, Jn. of Rheology 46 (1) (2002) 169–194.
* [23] Han, K.-H., Y.-T. Im, Numerical Simulation of Three-Dimensional Fiber Orientation in Short-Fiber-Reinforced Injection-Molded Parts, Jn. of Materials Processing Technology 124 (2002) 366–371.
* [24] Jack, D.A., D. Smith, An Invariant Based Fitted Closure of the Sixth-order Orientation Tensor for Modeling Short-Fiber Suspensions, Jn. of Rheology 49 (5) (2005) 1091–1116.
* [25] Bay, R.S., Fiber Orientation in Injection Molded Composites: A Comparison of Theory and Experiment, Ph.D. thesis, University of Illinois at Urbana-Champaign (August 1991).
* [26] Dinh, S.M., R. Armstrong, A Rheological Equation of State for Semiconcentrated Fiber Suspensions, Jn. of Rheology 28 (3) (1984) 207–227.
* [27] Lipscomb, G.G. II, M. Denn, D. Hur, D. Boger, Flow of Fiber Suspensions in Complex Geometries, Jn. of Non-Newtonian Fluid Mechanics 26 (1988) 297–325.
* [28] Altan, M.C., L. Tang, Orientation Tensors in Simple Flows of Dilute Suspensions of Non-Brownian Rigid Ellipsoids, Comparison of Analytical and Approximate Solutions, Rheologica Acta 32 (1993) 227–244.
* [29] Montgomery-Smith, S.J., D. Jack, D. Smith, A Systematic Approach to Obtaining Numerical Solutions of Jeffery’s Type Equations using Spherical Harmonics, Composites Part A 41 (2010) 827–835.
* [30] Jack, D.A., B. Schache, D. Smith, Neural Network Based Closure for Modeling Short-Fiber Suspensions, Polymer Composites, accepted for publication.
* [31] Chung, D.H., T. Kwon, Improved Model of Orthotropic Closure Approximation for Flow Induced Fiber Orientation, Polymer Composites 22 (5) (2001) 636–649.
* [32] Qadir, N., D. Jack, Modeling Fibre Orientation in Short Fibre Suspensions Using the Neural Network-Based Orthotropic Closure, Composites, Part A.
* [33] Mullens, M., Developing New Fitted Closure Approximations for Short-Fiber Reinforced Polymer Composites, Master’s thesis, University of Missouri - Columbia (July 2010).
* [34] Jack, D.A., D. Smith, Assessing the Use of Tensor Closure Methods With Orientation Distribution Reconstruction Functions, Jn. of Composite Materials 38 (21) (2004) 1851–1872.
* [35] GNU Scientific Library, http://www.gnu.org/software/gsl.
* [36] Jack D.A., D. Smith, Elastic Properties of Short-Fiber Polymer Composites, Derivation and Demonstration of Analytical Forms for Expectation and Variance from Orientation Tensors, Journal of Composite Materials 42 (3) (2008) 277–308.
* [37] Chicone, C., Ordinary Differential Equations with Applications, 2nd Edition, Springer-Verlag, New York, 2006.
Flow # | $v_{1}$ | $v_{2}$ | $v_{3}$ | $C_{I}$ | $\lambda$ | $\overline{\varepsilon}_{\mbox{\tiny FEC}}$ | $\overline{\varepsilon}_{\mbox{\tiny ORT}}$ | $\overline{\varepsilon}_{\mbox{\tiny IBOF}}$ | $\overline{\varepsilon}_{\mbox{\tiny Hybrid}}$
---|---|---|---|---|---|---|---|---|---
1 | $Gx_{1}$ | $Gx_{2}$ | $-2Gx_{3}$ | $10^{-3}$ | 1 | 1 | 2.06 | 1.28 | 76.2
2 | $2Gx_{1}$ | $-Gx_{2}$ | $-Gx_{3}$ | $10^{-3}$ | 1 | 1.03 | 1.05 | 1 | 25.8
3a | $Gx_{3}$ | $0$ | $0$ | $10^{-3}$ | 0.99 | 1 | 1.02 | 1.02 | 3.69
3b | $Gx_{3}$ | $0$ | $0$ | $10^{-3}$ | 1 | 1 | 1.02 | 1.01 | 3.28
4 | $-Gx_{1}+10Gx_{2}$ | $-Gx_{2}$ | $2Gx_{3}$ | $10^{-3}$ | 1 | 1 | 2.25 | 1.36 | 12.9
5 | $-Gx_{1}+Gx_{2}$ | $-Gx_{2}$ | $2Gx_{3}$ | $10^{-3}$ | 1 | 1.02 | 1 | 1.23 | 22.6
6 | $Gx_{1}+2Gx_{3}$ | $Gx_{2}$ | $-2Gx_{3}$ | $10^{-2}$ | 1 | 1 | 1.01 | 1.08 | 3.57
7 | $Gx_{1}+2.75Gx_{3}$ | $Gx_{2}$ | $-2Gx_{3}$ | $10^{-2}$ | 1 | 1 | 1.02 | 1.05 | 2.98
8 | $Gx_{1}+1.25Gx_{3}$ | $Gx_{2}$ | $-2Gx_{3}$ | $10^{-2}$ | 1 | 1.02 | 1 | 1.12 | 3.85
9 | $-Gx_{1}+10Gx_{3}$ | $Gx_{2}$ | $0$ | $10^{-2}$ | 1 | 1.03 | 1 | 1.03 | 1.65
10 | $-Gx_{1}+Gx_{3}$ | $Gx_{2}$ | $0$ | $10^{-2}$ | 1 | 1.01 | 1 | 1.04 | 2.29
11 | $2Gx_{1}+3Gx_{3}$ | $-Gx_{2}$ | $-Gx_{3}$ | $10^{-2}$ | 1 | 1.04 | 1.04 | 1 | 2.52
12 | $-Gx_{1}+3.75Gx_{2}$ | $Gx_{2}$ | $2Gx_{3}$ | $10^{-2}$ | 1 | 1 | 1.03 | 1.06 | 2.03
13 | $-Gx_{1}+1.5Gx_{2}$ | $-Gx_{2}$ | $2Gx_{3}$ | $10^{-2}$ | 1 | 1.00 | 1 | 1.01 | 2.34
14a | $Gx_{3}$ | $0$ | $0$ | $10^{-2}$ | 0.99 | 1 | 1.00 | 1.03 | 4.14
14b | $Gx_{3}$ | $0$ | $0$ | $10^{-2}$ | 1 | 1 | 1.00 | 1.02 | 3.90
Table 1: Flows used, and the resulting error in computing the second-order orientation tensor $A$.

Closure | CPU Time (s) | Normalized Time
---|---|---
Hybrid - Original | 25 | 1
Hybrid - Optimized | 6.9 | 0.3
ORT - Original | 770 | 31
ORT - Optimized | 21 | 0.8
FEC | 26 | 1.0
Table 2: Normalized Computational Times
Figure 1: Transient Solution for selected components of $A$ for simple shear
flow under isotropic diffusion (a) $C_{I}=10^{-3}$ and $\lambda=0.99$ and (b)
$C_{I}=10^{-2}$ and $\lambda=0.95$.
Figure 2: Transient Solution for selected components of $A$ for simple shear
flow under anisotropic rotary diffusion (ARD-RSC) (a) $\kappa=1/30$ and
$\lambda=0.95$ and (b) $\kappa=1/30$ and $\lambda=1.0$.
Figure 3: Anisotropic rotary diffusion results, simple shear $\kappa=1/30$ and
$\lambda=1.0$ (a) Selected time range for $A_{11}$ (b) Transient error in
derivative computation for the fitted orthotropic closure ORT compared to FEC.
Figure 4: Transient solution for selected components of $A$ for mixed flow
from the Folgar-Tucker model with $C_{I}=10^{-2}$ and $\lambda=1.0$.
Figure 5: Transient solution for selected components of $A$ for center-gated
disk flow from the Folgar-Tucker model with $C_{I}=10^{-2}$ and $\lambda=1.0$
for $z/b=4/10$.
# Topological transport of mobile impurities
D. Pimenov, William I. Fine Theoretical Physics Institute, University of Minnesota, Minneapolis, MN 55455, USA
A. Camacho-Guardian, Department of Physics and Astronomy, Aarhus University, Ny Munkegade, DK-8000 Aarhus C, Denmark; T.C.M. Group, Cavendish Laboratory, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, U.K.
N. Goldman, Université Libre de Bruxelles, CP 231, Campus Plaine, 1050 Brussels, Belgium
P. Massignan, Departament de Física, Universitat Politècnica de Catalunya, Campus Nord, B4-B5, E-08034 Barcelona, Spain
G. M. Bruun, Department of Physics and Astronomy, Aarhus University, Ny Munkegade, DK-8000 Aarhus C, Denmark
M. Goldstein, Raymond and Beverly Sackler School of Physics and Astronomy, Tel Aviv University, Tel Aviv 6997801, Israel
###### Abstract
We study the Hall response of topologically-trivial mobile impurities (Fermi
polarons) interacting weakly with majority fermions forming a Chern-insulator
background. This setting involves a rich interplay between the genuine many-
body character of the polaron problem and the topological nature of the
surrounding cloud. When the majority fermions are accelerated by an external
field, a transverse impurity current can be induced. To quantify this
polaronic Hall effect, we compute the drag transconductivity, employing
controlled diagrammatic perturbation theory in the impurity-fermion
interaction. We show that the impurity Hall drag is not simply proportional to
the Chern number characterizing the topological transport of the insulator on
its own – it also depends continuously on particle-hole breaking terms, to
which the Chern number is insensitive. However, when the insulator is tuned
across a topological phase transition, a sharp jump of the impurity Hall drag
results, for which we derive an analytical expression. We describe how to
experimentally detect the polaronic Hall drag and its characteristic jump,
setting the emphasis on the circular dichroism displayed by the impurity’s
absorption rate.
## I Introduction
As a rule of thumb, interacting many-body systems in more than one dimension
are difficult to analyze, and controllable routes to the inclusion of
interactions are rare. One such approach is to consider a non-interacting
“majority” system, couple it to a small number of quantum impurities, and
study interaction effects on the impurities only. If the majority system is a
conventional metal, the impurities are transformed into so-called Fermi
polarons [1] 111Here, we understand the polaron as a mobile quasiparticle, and
not as a static impurity as recently studied in a topological system in Ref.
[55], which by now are routinely observed in ultracold-gas [3, 4, 5, 6, 7] and
also solid state experiments [8] – for a review, see for instance Refs. [9,
10, 11].
In these systems, the local kinematic properties of the impurities are
modified by the interaction with the medium, while the medium itself is
unmodified if the impurity density is small. The next logical question to ask
is whether global topological characteristics of the medium [12] can influence
the impurity as well: Can a topologically trivial impurity inherit the
topological quantum numbers of the medium? Such an interaction-induced
topology is a fundamentally interesting prospect. Furthermore, this question
is of high relevance to current cold-atom experiments, where a broad family of
topological band structures have been realized [13]. Topological and polaronic
physics are thus well-controlled (and highly active) but largely separate
fields in cold-atom research, and it is thus worthwhile and intriguing to
combine them together. This goal has been approached in a few recent
theoretical works, mainly from two perspectives: Either interaction effects
are strong such that an impurity-majority bound state is formed [14, 15, 16,
17], and the impurity inherits the topological quantum numbers of the
majority, or, alternatively, one can study the problem in weak coupling [18,
*PhysRevB.102.119903], as previously done by some of us, with the majority
forming a Chern insulator. This perturbative approach is well-controlled and
does not require additional regularization.
As a diagnostic tool for the inherited topological properties of the impurity
particles, Ref. [18, 19] numerically computed the impurity
Hall drag for majority particles governed by the Haldane lattice model [20].
It was found that the Hall drag is neither quantized nor simply follows the
majority phase diagram, and even vanishes in the center of the topological
phase; however, it exhibits a sharp jump upon tuning the insulator across its
topological phase transition. In this work, we introduce a generic (continuum)
Dirac model of a Chern insulator. This model follows the same universal
physics as the Haldane model, but allows for an analytical understanding of
the phenomena numerically observed in Ref. [18, 19].
With a diagrammatic approach, we show that the Hall drag can be split into
two drag contributions exerted by majority particles and holes, respectively.
These two contributions counteract each other, and completely cancel at the
particle-hole symmetric point. This is reminiscent of Coulomb drag in two-
layer systems [21, 22, 23], and explains the observed vanishing of the drag in
the center of the majority topological phase. If particle-hole symmetry is
broken, the impurity Hall drag can be non-vanishing even if the majority Chern
insulator is in the trivial phase. To understand the observed jump across the
topological phase transition, one should view the majority system as a
combination of Dirac-like fermions with linear dispersion, and “spectator”
fermions [24] with a quadratic dispersion. At the phase transition, the
spectator fermions change smoothly, but the Dirac fermions feel the gap
closing and exhibit a singular Berry curvature. We show that this singularity
is integrated over in the expression for the impurity Hall drag, which leads
to a jump proportional to the change in Chern number, including the correct
sign. This is the only clear manifestation of topology in weak-coupling
impurity transport. We derive an analytical formula for the jump, and validate
all results numerically for the Haldane lattice model.
To supplement the theoretical results, we present a detailed discussion on how
to detect the Hall drag and jump with various experimental techniques. A
particularly promising approach is to use circular dichroism, that is, measuring
impurity excitation rates upon driving the system with left and right
circularly polarized fields [25, 26, 27, 28]. A systematic method of computing
the excitation rates in an interacting many-body system is presented along the
way.
The remainder of this paper is structured as follows: In Sec. II we present
the continuum Dirac model and the evaluation of the impurity drag. In Sec.
III, we investigate the jump across the topological phase transition. The drag
including its jump at the topological transition is analyzed for the Haldane
model in Sec. IV. The different measurement protocols are detailed in Sec. V,
with special focus on the dichroic measurement. Conclusions and outlook are
presented in Sec. Acknowledgments. Some technical details are relegated to
Appendices.
## II Drag transconductivity in the continuum model
We start by computing the impurity drag in a generic continuum model and
consider the following two-dimensional Bloch Hamiltonian for majority
particles indexed by a pseudospin $\uparrow$:
$\displaystyle
H_{\uparrow}({\boldsymbol{k}})=\sum_{i=0}^{3}\psi_{\uparrow}^{\dagger}({\boldsymbol{k}})h_{i}({\boldsymbol{k}})\sigma_{i}\psi_{\uparrow}({\boldsymbol{k}})\
,$ (1)
$\displaystyle\psi_{\uparrow}({\boldsymbol{k}})=\left(c_{\uparrow,A}({\boldsymbol{k}}),c_{\uparrow,B}({\boldsymbol{k}})\right)^{T}\
,$ $\displaystyle h_{1}({\boldsymbol{k}})=k_{x}\ ,\quad
h_{2}({\boldsymbol{k}})=k_{y}\ ,\quad h_{3}({\boldsymbol{k}})=m+d_{1}k^{2},$
$\displaystyle h_{0}({\boldsymbol{k}})=d_{2}k^{2},\quad k=|{\boldsymbol{k}}|\
,$
with $\sigma_{0}=\mathbbm{1}$ and $\sigma_{i}$ with $i=1,2,3$ being the Pauli
matrices. Throughout this paper we will work in units where $\hbar=c=e=1$; all
quantities are measured in appropriate powers of the (inverse) physical
fermion mass, while momenta are rescaled by the band velocity. Equation (1)
can be seen as a low-energy approximation to a microscopic tight-binding
Hamiltonian with a two-sublattice structure ($A,B$) and broken time-reversal
invariance. The eigenenergies corresponding to (1) read
$\displaystyle\epsilon_{\uparrow;1,2}({\boldsymbol{k}})=h_{0}({\boldsymbol{k}})\mp
h({\boldsymbol{k}}),\quad h({\boldsymbol{k}})=\sqrt{k^{2}+h_{3}(k)^{2}}\ .$
(2)
Without the terms $d_{1},d_{2}$ (which have physical dimensions of (mass)$^{-1}$), Eq.
(1) describes a gapped Dirac cone with mass gap $m$. The term $d_{1}$ serves
as a UV regularizer and makes the dispersion quadratic at higher energies
while preserving particle-hole symmetry,
$\epsilon_{\uparrow,1}({\boldsymbol{k}})=-\epsilon_{\uparrow,2}({\boldsymbol{k}})$.
The symmetry is broken for finite $d_{2}$. We assume $|d_{1}|>|d_{2}|$, thus
the lower (upper) band is filled (empty).
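For concreteness, the Hamiltonian of Eq. (1) and the bands of Eq. (2) can be assembled as follows (a minimal Python sketch with illustrative parameter values):

```python
import numpy as np

SIGMA = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]  # sigma_0, ..., sigma_3

def h_up(kx, ky, m, d1, d2):
    """2x2 Bloch Hamiltonian of Eq. (1)."""
    k2 = kx ** 2 + ky ** 2
    h = (d2 * k2, kx, ky, m + d1 * k2)                # (h_0, h_1, h_2, h_3)
    return sum(hi * si for hi, si in zip(h, SIGMA))

# The eigenvalues reproduce Eq. (2): h_0 -/+ sqrt(k^2 + h_3^2).
bands = np.linalg.eigvalsh(h_up(0.3, 0.4, m=0.2, d1=-1.0, d2=0.5))
```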
For general $d_{2}$, the Hamiltonian (1) is in the Altland-Zirnbauer class A
[29], and gives rise to a quantized Chern number $\mathcal{C}$. As shown
below, it reads
$\displaystyle\mathcal{C}$
$\displaystyle=\frac{1}{2\pi}\int\\!d{\boldsymbol{k}}\frac{1}{2}\frac{(m-d_{1}k^{2})}{(k^{2}+(m+d_{1}k^{2})^{2})^{3/2}}$
(3) $\displaystyle=\frac{1}{2}\left[\text{sign}(m)-\text{sign}(d_{1})\right]\
.$
The integrand of Eq. (3) is nothing but the Berry curvature
$\mathcal{F}_{xy}(k)$. As visualized in Fig. 1, for $m\rightarrow 0$,
$\mathcal{F}_{xy}(k)$ consists of a sharp half-quantized peak for $k\lesssim
m$, arising from the Dirac fermions, on top of a broad background from high-
energy “spectator” fermions [24]. Both types of fermions effectively
contribute a half-integer Chern number, such that the total Chern number is
quantized to an integer.
Figure 1: Berry curvature for $d_{1}=-1$ and $m=\pm 0.1$ (full lines), $m=\pm
0.2$ (dashed lines). The inset shows a zoom-in on small values of
$\mathcal{F}_{xy}(k)$, highlighting the sign-change of the Berry curvature in
the trivial phase.
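The quantization in Eq. (3) is readily checked numerically; after carrying out the angular integral, a radial quadrature remains (a Python sketch):

```python
import numpy as np
from scipy.integrate import quad

def chern_number(m, d1):
    """Integrate the Berry curvature of Eq. (3); the angular integral gives
    2*pi, leaving a radial quadrature with Jacobian k."""
    def radial(k):
        h3 = m + d1 * k ** 2
        return 0.5 * (m - d1 * k ** 2) * k / (k ** 2 + h3 ** 2) ** 1.5
    C, _ = quad(radial, 0.0, np.inf)
    return C

print(chern_number(+0.2, -1.0))  # ~ 1.0: topological phase
print(chern_number(-0.2, -1.0))  # ~ 0.0: trivial phase
```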
As explicit in Eq. (3), $\mathcal{C}$ does not depend on the particle-hole
symmetry breaking parameter $d_{2}$. This is in line with the geometrical
interpretation of $\mathcal{C}$ as a winding number (for the winding number
construction one should view momentum space as compactified,
$\mathbb{R}^{2}\rightarrow S^{2}$; the construction is independent of the term
$h_{0}$, which commutes with the Hamiltonian [31]).
As a preparation for the later calculations, it is useful to recap the
computation of $\mathcal{C}$ explicitly as $\mathcal{C}=-2\pi{\sigma}_{xy}$
[32, 33], with $\sigma_{xy}$ the transconductivity; the conductivity quantum
is $\sigma_{0}=e^{2}/\hbar=1/2\pi$ with the chosen units. In linear response,
$\sigma_{xy}$ is proportional to the retarded current-current correlation
function, which may be obtained by analytical continuation from imaginary
time:
$\displaystyle\sigma_{xy}=\lim_{\omega\rightarrow 0}\frac{1}{-i\omega
A_{0}}\left[-\braket{\hat{J}_{\uparrow}^{x}\hat{J}_{\uparrow}^{y}}(i\Omega)\bigg{|}_{i\Omega\rightarrow\omega+i0^{+}}\right],$
(4)
with $A_{0}$ the system area, and $\hat{J}_{\uparrow}$ the current operators
at vanishing external momentum.
The imaginary time correlator in Eq. (4) can be written as
$\displaystyle-\braket{\hat{J}_{\uparrow}^{x}\hat{J}_{\uparrow}^{y}}(i\Omega)=$
(5) $\displaystyle
A_{0}\int_{k}G_{\uparrow,\alpha}(\omega_{k},{\boldsymbol{k}})G_{\uparrow,\beta}(\Omega+\omega_{k},{\boldsymbol{k}})J^{x}_{\uparrow,\alpha\beta}({\boldsymbol{k}})J^{y}_{\uparrow,\beta\alpha}({\boldsymbol{k}}),$
$\displaystyle\int_{k}\equiv\int\frac{d{\boldsymbol{k}}d\omega_{k}}{(2\pi)^{3}},\quad
G_{\uparrow,\alpha}(\omega_{k},{\boldsymbol{k}})=\frac{1}{i\omega_{k}-\epsilon_{\uparrow,\alpha}({\boldsymbol{k}})}\
,$
where $\alpha,\beta$ refer to band indices and the Einstein summation
convention is implied. $J_{\uparrow,\alpha\beta}^{x/y}$ are current matrix
elements in the diagonal basis (see App. A for details). The standard
diagrammatical representation of Eq. (5) is shown in Fig. 2. The Matsubara
Green function $G_{\uparrow,1}$ describes the propagation of a hole in the
filled lower band, while $G_{\uparrow,2}$ represents a particle in the upper
band. The frequency integral in Eq. (5) only receives contributions when
$\alpha\neq\beta$, and thus one can view creation of virtual particle-hole
pairs as the origin of the conductivity. These quasiparticles are virtual,
since the external field does not provide enough energy ($\Omega\rightarrow
0$) to overcome the band gap.
Figure 2: Diagram representing Eq. (5), with $\alpha=1,\beta=2$.
Evaluation of Eqs. (5) and (4) is straightforward. One finds
$\displaystyle\sigma_{xy}$
$\displaystyle=-i\int\\!\frac{d{\boldsymbol{k}}}{(2\pi)^{2}}\frac{J_{\uparrow,12}^{x}({\boldsymbol{k}})J^{y}_{\uparrow,21}({\boldsymbol{k}})-J_{\uparrow,21}^{x}({\boldsymbol{k}})J^{y}_{\uparrow,12}({\boldsymbol{k}})}{(\epsilon_{\uparrow,1}({\boldsymbol{k}})-\epsilon_{\uparrow,2}({\boldsymbol{k}}))^{2}}$
$\displaystyle=-\frac{1}{2\pi}\mathcal{C}\ .$ (6)
Inserting current matrix elements and dispersions into Eq. (6) produces Eq.
(3). After this noninteracting prelude, we are ready to attack the polaron
problem. We consider a minority particle species indexed by $\downarrow$, with
a trivial quadratic Hamiltonian $H_{\downarrow}({\boldsymbol{p}})$:
$\displaystyle
H_{\downarrow}({\boldsymbol{p}})=\epsilon_{\downarrow}({\boldsymbol{p}})c^{\dagger}_{\downarrow}({\boldsymbol{p}})c_{\downarrow}({\boldsymbol{p}}),\quad\epsilon_{\downarrow}({\boldsymbol{p}})=\frac{p^{2}}{2M}\
.$ (7)
We can view the impurities as governed by a tight-binding Hamiltonian similar
to that of the majority, but with a chemical potential almost at the bottom of the
lower band, around which the dispersion is approximated by an effective mass
$M$. Higher impurity bands can be safely neglected.
The majority and minority particles interact via an onsite-interaction
$H_{\text{int}}$ [18, 19], which does not distinguish
between the sublattices (recall that the sublattices give rise to the two-band
structure):
$\displaystyle
H_{\text{int}}=\frac{g}{A_{0}}\sum_{\ell=A,B}\sum_{{\boldsymbol{k}},{\boldsymbol{p}},{\boldsymbol{q}}}c^{\dagger}_{\uparrow,\ell}({\boldsymbol{k}}+{\boldsymbol{q}})c_{\uparrow,\ell}({\boldsymbol{k}})c^{\dagger}_{\downarrow}({\boldsymbol{p}}-{\boldsymbol{q}})c_{\downarrow}({\boldsymbol{p}})=$
$\displaystyle\frac{g}{A_{0}}\sum_{{\boldsymbol{k}},{\boldsymbol{p}},{\boldsymbol{q}}}c^{\dagger}_{\uparrow,\alpha}({\boldsymbol{k}}+{\boldsymbol{q}})c_{\uparrow,\beta}({\boldsymbol{k}})c^{\dagger}_{\downarrow}({\boldsymbol{p}}-{\boldsymbol{q}})c_{\downarrow}({\boldsymbol{p}})W_{\alpha\beta}({\boldsymbol{k}},{\boldsymbol{q}}),$
$\displaystyle
W_{\alpha\beta}({\boldsymbol{k}},{\boldsymbol{q}})\equiv\left[U_{\uparrow}^{\dagger}({\boldsymbol{k}}+{\boldsymbol{q}})U_{\uparrow}({\boldsymbol{k}})\right]_{\alpha\beta}\
,$ (8)
where we have rotated to the band space in the second line. Now we imagine a
constant and uniform force $\boldsymbol{E}=E\textbf{e}_{y}$ acting on both
majority and minority particles (note that $e=1$ is the effective charge
corresponding to this force and might not be directly related to the electron
charge). Due to the interaction $H_{\text{int}}$, a transverse impurity current
$J_{\downarrow}^{x}$ will be induced; without interaction, there is none due
to time reversal symmetry of the impurities. To quantify this effect, we must
compute the Hall drag transconductivity
$\displaystyle\sigma_{\downarrow\uparrow}\equiv\lim_{\omega\rightarrow
0}\frac{1}{-i\omega
A_{0}}\left[-\braket{\hat{J}_{\downarrow}^{x}\hat{J}_{\uparrow}^{y}}(i\Omega)\bigg{|}_{i\Omega\rightarrow\omega+i0^{+}}\right]\
.$ (9)
This computation will be done to second order in the impurity-majority
coupling $g$, since the first order contribution vanishes [18, 19]; thus,
attractive and repulsive interactions lead to the
same result. We point out that such perturbative expansion is well-controlled
for small $g$, and no resummation is needed, in contrast with the recent
evaluation of longitudinal polaron drag in the metallic case [35].
Figure 3: Leading contributions to the drag transconductivity. Dashed lines
represent impurities, dotted lines interaction matrix elements $W$, see Eq.
(8). The energy-momentum structure of the central part and the colored
elements are explained in the main text.
As in the case of Coulomb drag in two-layer systems [21], the
$\mathcal{O}(g^{2})$ contribution corresponds to the two diagrams shown in
Fig. 3. We evaluate these diagrams to leading order in the small impurity
density $n_{\downarrow}$. The diagrams involve an impurity loop and are
therefore proportional to $n_{\downarrow}$, unlike the single-particle polaron
diagrams which have an impurity “backbone” [36, 37]. It is
convenient to identify the impurity lines that represent filled states
($\hat{=}$ impurity holes). Since these carry vanishing momenta in the small
density limit, impurity lines coupled to the current vertex,
$J_{\downarrow}^{x}({\boldsymbol{q}})=q_{x}/M$, are excluded. Thus, the
central (red) line corresponds to a filled state. We may set its momentum to
zero as done in Fig. 3, and the integration over filled states then simply
produces a factor of $n_{\downarrow}$.
Identification of the red line with a filled state also fixes the (red) index
of the central majority line in order for the $\tilde{\omega}$ integral (see
Fig. 3) to be non-vanishing. Schematically the top diagram in Fig. 3 describes
the scattering of an impurity with a particle, with momentum transfer
${\boldsymbol{q}}$, and the bottom diagram the scattering with a hole, with
momentum transfer $-{\boldsymbol{q}}$. Therefore, the net momentum transfer
and drag vanish in the particle-hole symmetric case [21, 22, 23], as will be
seen explicitly below. The remaining evaluation of the diagrams is
straightforward (see App. B). We obtain
$\displaystyle\sigma_{\downarrow\uparrow}=-2g^{2}n_{\downarrow}\int\frac{d{\boldsymbol{k}}}{(2\pi)^{2}}\frac{d{\boldsymbol{q}}}{(2\pi)^{2}}\ \text{Im}\left\\{J^{y}_{\uparrow,12}({\boldsymbol{k}})W^{22}({\boldsymbol{k}}-{\boldsymbol{q}},{\boldsymbol{q}})W^{21}({\boldsymbol{k}},-{\boldsymbol{q}})\right\\}\frac{q_{x}}{M}\frac{1}{\left({\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}})\right)^{2}}\left(d({\boldsymbol{k}},{\boldsymbol{q}})+c({\boldsymbol{k}},{\boldsymbol{q}})\right)\ ,$ (10)
$\displaystyle d({\boldsymbol{k}},{\boldsymbol{q}})=\frac{2\epsilon_{\uparrow,1}({\boldsymbol{k}})-\epsilon_{\uparrow,2}({\boldsymbol{k}})-\epsilon_{\uparrow,2}({\boldsymbol{k}}-{\boldsymbol{q}})-\epsilon_{\downarrow}({\boldsymbol{q}})}{\left({\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}}-{\boldsymbol{q}})-\epsilon_{\downarrow}({\boldsymbol{q}})\right)^{3}}\ ,\quad c({\boldsymbol{k}},{\boldsymbol{q}})=\frac{2{\epsilon_{\uparrow,2}}({\boldsymbol{k}})-{\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\uparrow,1}}({\boldsymbol{k}}-{\boldsymbol{q}})+\epsilon_{\downarrow}({\boldsymbol{q}})}{({\epsilon_{\uparrow,1}}({\boldsymbol{k}}-{\boldsymbol{q}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}})-\epsilon_{\downarrow}({\boldsymbol{q}}))^{3}}\ .$ (11)
Here, $d$ and $c$ represent the contributions of the “direct” (top in Fig. 3)
and “crossed” (bottom) diagrams, respectively. When flipping $d_{2}\rightarrow-d_{2}$, we have
$\epsilon_{1}\rightarrow-\epsilon_{2}$ and vice versa, thus
${\sigma_{\downarrow\uparrow}}$ is antisymmetric in $d_{2}$. In particular, it
vanishes in the particle-hole symmetric case, $d_{2}=0$. Numerical evaluation
of Eq. (10) as function of $d_{2}$ is shown in Fig. 4(a). Let us point out
that the complete cancellation of $\sigma_{\downarrow\uparrow}$ at $d_{2}=0$
only occurs to second order, $\mathcal{O}(g^{2})$, and is not expected in
higher order, as can be shown explicitly for the Haldane model (see below).
In Fig. 4(b), ${\sigma_{\downarrow\uparrow}}$ is depicted as function of $m$
for non-zero $d_{2}$, tuning the majority system from the trivial phase with
$\mathcal{C}=0$ to a non-trivial one, $\mathcal{C}=1$. While
${\sigma_{\downarrow\uparrow}}$ exhibits a clear jump when the majority
particles undergo a topological phase transition (see next section), it is
neither constant in the non-trivial phase, nor does it vanish in the trivial
phase: For the majority particles, time-reversal symmetry is broken everywhere
in the phase diagram, but for $\mathcal{C}=0$ the transconductivity
contributions of the “Dirac” and “spectator” fermions cancel exactly, as long
as the chemical potential is in the gap and the lower majority band is
completely filled. In the case of the gapless impurity band, such a
cancellation is not guaranteed, and the impurity Hall drag therefore does not
vanish in the trivial phase.
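For concreteness, the following minimal sketch shows how Eq. (10) can be estimated by Monte-Carlo integration. It assumes the two-band continuum Hamiltonian $H_{\uparrow}({\boldsymbol{k}})=k_{x}\sigma_{x}+k_{y}\sigma_{y}+(m+d_{1}k^{2})\sigma_{z}+d_{2}k^{2}\mathbbm{1}$, consistent with the current operator of Eq. (33), and a quadratic impurity dispersion $\epsilon_{\downarrow}({\boldsymbol{q}})=q^{2}/2M$; the momentum cutoff `K` and the sample count are illustrative choices, not values used for Fig. 4.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def H_up(k, m, d1, d2):
    """Assumed two-band continuum Hamiltonian; dH/dk_x reproduces Eq. (33)."""
    k2 = k @ k
    return k[0]*sx + k[1]*sy + (m + d1*k2)*sz + d2*k2*id2

def drag_integrand(k, q, m, d1, d2, M):
    """Integrand of Eq. (10) at one (k, q) point; eigh sorts eigenvalues
    ascending, so index 0 is band 1 (lower) and index 1 is band 2 (upper)."""
    wk, Uk = np.linalg.eigh(H_up(k, m, d1, d2))
    wkq, Ukq = np.linalg.eigh(H_up(k - q, m, d1, d2))
    eps_dn = q @ q / (2*M)                                   # impurity dispersion
    Jy = Uk.conj().T @ (sy + 2*k[1]*(d1*sz + d2*id2)) @ Uk   # band-basis J^y
    W22 = (Uk.conj().T @ Ukq)[1, 1]                          # W^{22}(k-q, q), Eq. (8)
    W21 = (Ukq.conj().T @ Uk)[1, 0]                          # W^{21}(k, -q)
    d = (2*wk[0] - wk[1] - wkq[1] - eps_dn) / (wk[0] - wkq[1] - eps_dn)**3
    c = (2*wk[1] - wk[0] - wkq[0] + eps_dn) / (wkq[0] - wk[1] - eps_dn)**3
    # J^y_{12} W^{22} W^{21} is invariant under the arbitrary band phases
    # returned by eigh, so no gauge fixing is needed
    return np.imag(Jy[0, 1] * W22 * W21) * q[0] / M / (wk[0] - wk[1])**2 * (d + c)

def sigma_drag_mc(m, d1, d2, M, g=1.0, n_dn=1.0, K=6.0, n_samp=200000, seed=1):
    """Monte-Carlo estimate of Eq. (10), sampling k, q uniformly in [-K, K]^2."""
    rng = np.random.default_rng(seed)
    acc = sum(drag_integrand(rng.uniform(-K, K, 2), rng.uniform(-K, K, 2),
                             m, d1, d2, M) for _ in range(n_samp))
    return -2*g**2*n_dn * (2*K)**4 * acc / n_samp / (2*np.pi)**4
```

With parameters as in Fig. 4(a) ($M=1$, $m=0.2$, $d_{1}=-1$), the estimate should change sign under $d_{2}\rightarrow-d_{2}$, reflecting the antisymmetry discussed above.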
Figure 4: Impurity transconductivity $\sigma_{\downarrow\uparrow}$ from
numerical evaluation of Eq. (10). Lines are guides for the eye. (a)
$\sigma_{\downarrow\uparrow}$ as function of $d_{2}$ for $M=1,m=0.2,d_{1}=-1$.
(b) $\sigma_{\downarrow\uparrow}$ as function of $m$ for
$M=1,d_{1}=-1,d_{2}=0.5$.
## III The jump across the phase transition for the continuum model
Another salient feature of Fig. 4(b) is the discontinuous change of the drag
transconductivity which occurs upon crossing the topological phase boundary
$m=0$. This jump can be understood as arising from a singular contribution of
Dirac fermions: When the gap closes, the Dirac part of the majority Berry
curvature ($\propto m$ in Eq. (3)) evolves into a delta-function,
$\text{sign}(m)\delta^{(2)}({\boldsymbol{k}})$ – compare also Fig. 1. In
contrast, the part corresponding to the spectator fermions ($\propto d_{1}$ in
Eq. (3)) is smooth across the transition. In the expression for the impurity
drag (10), a singular Dirac contribution
$\propto\text{sign}(m)\delta^{(2)}({\boldsymbol{k}})$ arises as well. This
singular contribution changes sign across the transition, and so induces the
jump $\Delta{\sigma_{\downarrow\uparrow}}$ in the Hall drag, with a sign
determined by the change in Chern number $\Delta\mathcal{C}$. To extract
$\Delta{\sigma_{\downarrow\uparrow}}$ we can set ${\boldsymbol{k}}=0$ in all
parts of Eq. (10) which are non-singular as ${\boldsymbol{k}}\rightarrow 0$.
As detailed in App. C, in this way we obtain
$\displaystyle\sigma_{\downarrow\uparrow,\text{Dirac}}=g^{2}n_{\downarrow}\int\frac{d{\boldsymbol{k}}}{(2\pi)^{2}}\frac{d{\boldsymbol{q}}}{(2\pi)^{2}}\ \frac{\text{Im}\left\\{J^{y}_{\uparrow,\text{Dirac},12}({\boldsymbol{k}})J^{x}_{\uparrow,\text{Dirac},21}({\boldsymbol{k}})\right\\}}{\left({\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}})\right)^{2}}\frac{q_{x}^{2}}{Mq\sqrt{1+(d_{1}q)^{2}}}\left(\frac{1}{(\epsilon_{\downarrow}({\boldsymbol{q}})+{\epsilon_{\uparrow,2}}({\boldsymbol{q}}))^{2}}-\frac{1}{(\epsilon_{\downarrow}({\boldsymbol{q}})-{\epsilon_{\uparrow,1}}({\boldsymbol{q}}))^{2}}\right)\ ,$ (12)
where $J^{x/y}_{\uparrow,\text{Dirac}}({\boldsymbol{k}})$ represents the
majority current carried by the Dirac (i.e., not the spectator) fermions.
Compared to Eq. (10) the ${\boldsymbol{k}}$ and ${\boldsymbol{q}}$ integrals
in Eq. (12) have factorized. The ${\boldsymbol{k}}$ integral, which simplifies
to an integral over a delta function as $m\rightarrow 0$, is nothing but the
Chern number contribution of the Dirac fermions, cf. Eq. (6). It evaluates to
$(1/8\pi)\,\text{sign}(m)$. Performing the remaining ${\boldsymbol{q}}$ integral,
one finds
$\displaystyle\sigma_{\downarrow\uparrow,\text{Dirac}}=-\frac{g^{2}n_{\downarrow}}{(2\pi)^{4}}\cdot\frac{4\pi^{2}d_{2}M\cdot\text{sign}(m)}{1+4M(|d_{1}|+(d_{1}^{2}-d_{2}^{2})M)}\
.$ (13)
Defining $\Delta{\sigma_{\downarrow\uparrow}}$ as the jump of Hall drag when
going from the trivial to the topological phase, with change in Chern number
$\Delta\mathcal{C}$, Eq. (13) leads to the final result:
$\displaystyle\Delta{\sigma_{\downarrow\uparrow}}=\Delta\mathcal{C}\cdot\frac{g^{2}n_{\downarrow}}{(2\pi)^{4}}\left(-\frac{8\pi^{2}d_{2}M}{1+4M(|d_{1}|+(d_{1}^{2}-d_{2}^{2})M)}\right)\
.$ (14)
As a check, in Fig. 5 this formula is compared with a numerical evaluation of
the jump from Eq. (10) as function of the impurity mass $M$, yielding
excellent agreement. Note that both Hall drag and jump will vanish in the
limits $M\to 0\ \text{or}\ M\to\infty$: In the former limit, the impurity
cannot interact efficiently with the majority particles due to the large
kinetic energy cost, while in the latter the impurity is immobile and cannot
be dragged.
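Eq. (14) is simple enough to tabulate directly; a minimal sketch (defaults chosen to match Fig. 5, $d_{1}=-1$, $d_{2}=0.5$), which also makes the vanishing in both limits explicit:

```python
import numpy as np

def jump_continuum(M, d1=-1.0, d2=0.5, g=1.0, n_dn=1.0, dC=1):
    """Jump of the Hall drag across the transition, Eq. (14)."""
    return dC * g**2 * n_dn / (2*np.pi)**4 * (
        -8*np.pi**2 * d2 * M / (1 + 4*M*(abs(d1) + (d1**2 - d2**2)*M)))

# the jump vanishes for both ultralight and immobile impurities
for M in (1e-4, 0.5, 1.0, 5.0, 1e4):
    print(f"M = {M:8.4g}   jump = {jump_continuum(M):+.3e}")
```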
Figure 5: Jump of the Hall drag $\Delta{\sigma_{\downarrow\uparrow}}$ in the
continuum model as function of $M$, with $d_{1}=-1,d_{2}=0.5$. The dashed line
corresponds to Eq. (14), points are computed numerically by evaluating Eq.
(10) at two points $m=\pm 0.001$ close to the phase boundary. Numerical errors
are of the order of the point size. Inset: The smooth contribution of the
spectator fermions, obtained numerically from Eq. (10).
While the Dirac part of the Hall drag,
$\sigma_{\downarrow\uparrow,\text{Dirac}}$, changes sign at the transition,
there is also a small smooth background contribution from the spectator
fermions, to be denoted $\sigma_{\downarrow\uparrow,\text{spec}}$. This
contribution can be extracted numerically from Eq. (10) as
$\displaystyle\sigma_{\downarrow\uparrow,\text{spec}}=\frac{1}{2}\left[{\sigma_{\downarrow\uparrow}}(m=0^{+})+{\sigma_{\downarrow\uparrow}}(m=0^{-})\right]\
,$ (15)
see the inset to Fig. 5.
We note in passing that the jump of ${\sigma_{\downarrow\uparrow}}$ is
reminiscent of the recently shown [38] change of sign in the Hall coefficient
for a single-particle gapless Dirac cone upon variation of the particle
density.
## IV Drag and jump in the Haldane lattice model
The general behaviour of $\sigma_{\downarrow\uparrow}$ to leading order,
$\mathcal{O}(g^{2})$, is not limited to the continuum model (1), but will hold
in other Chern insulators as well. As another example, we consider a situation
[18, 19] where the majority particles are described by the
Haldane model on the honeycomb lattice [20], with Hamiltonian
$\displaystyle H_{\uparrow}({\boldsymbol{k}})$
$\displaystyle=\sum_{i=0}^{3}\psi_{\uparrow}^{\dagger}({\boldsymbol{k}})\left(h_{i}({\boldsymbol{k}})\sigma_{i}\right)\psi_{\uparrow}({\boldsymbol{k}})\
,$ (16) $\displaystyle\psi_{\uparrow}({\boldsymbol{k}})$
$\displaystyle=\left(c_{\uparrow,A}({\boldsymbol{k}}),c_{\uparrow,B}({\boldsymbol{k}})\right)^{T},\
\quad{\boldsymbol{k}}_{i}={\boldsymbol{k}}\cdot\boldsymbol{u}_{i}\ ,$
$\displaystyle h_{0}({\boldsymbol{k}})$
$\displaystyle=-2t^{\prime}\cos(\phi)\left[\cos({\boldsymbol{k}}_{1}-{\boldsymbol{k}}_{2})+\cos({\boldsymbol{k}}_{1})+\cos({\boldsymbol{k}}_{2})\right]\
,$ $\displaystyle h_{1}({\boldsymbol{k}})$
$\displaystyle=-\left[1+\cos({\boldsymbol{k}}_{1})+\cos({\boldsymbol{k}}_{2})\right]\
,$ $\displaystyle h_{2}({\boldsymbol{k}})$
$\displaystyle=-\left[\sin({\boldsymbol{k}}_{1})+\sin({\boldsymbol{k}}_{2})\right]\ ,$
$\displaystyle h_{3}({\boldsymbol{k}})$
$\displaystyle=\Delta/2+2t^{\prime}\sin(\phi)\left[\sin({\boldsymbol{k}}_{1}-{\boldsymbol{k}}_{2})+\sin({\boldsymbol{k}}_{2})-\sin({\boldsymbol{k}}_{1})\right]\ ,$
where $\ {\boldsymbol{u}_{1}}=(3/2,\sqrt{3}/2)^{T},\
{\boldsymbol{u}_{2}}=(3/2,-\sqrt{3}/2)^{T}$, and the lattice constant and
nearest neighbour hopping amplitude are set to 1. The reciprocal lattice
vectors are given by $\boldsymbol{b}_{1}=(2\pi/3,2\pi/\sqrt{3})^{T},\
\boldsymbol{b}_{2}=(-2\pi/3,2\pi/\sqrt{3})^{T}$. The model is parametrized by
the next-nearest-neighbour hopping $t^{\prime}$, the angle $\phi$ quantifying
the time-reversal symmetry breaking, and the sublattice potential offset
$\Delta$. For given values of $t^{\prime},\phi,\Delta$, the majority chemical
potential is implicitly placed in the gap (its precise value is irrelevant).
The well-known topological phase diagram of the Haldane model is shown in Fig.
6(a).
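A minimal numerical sketch of Eq. (16) is given below, together with a lattice (Fukui-Hatsugai-Suzuki) evaluation of the majority Chern number used for phase diagrams like Fig. 6(a); the grid size and the overall sign convention of $\mathcal{C}$ are assumptions of the sketch.

```python
import numpy as np

def hamiltonian(kx, ky, tp, phi, Delta):
    """Haldane Bloch Hamiltonian, Eq. (16), with k_i = k . u_i."""
    k1 = 1.5*kx + (np.sqrt(3)/2)*ky          # k . u_1
    k2 = 1.5*kx - (np.sqrt(3)/2)*ky          # k . u_2
    h0 = -2*tp*np.cos(phi)*(np.cos(k1 - k2) + np.cos(k1) + np.cos(k2))
    h1 = -(1 + np.cos(k1) + np.cos(k2))
    h2 = -(np.sin(k1) + np.sin(k2))
    h3 = Delta/2 + 2*tp*np.sin(phi)*(np.sin(k1 - k2) + np.sin(k2) - np.sin(k1))
    return np.array([[h0 + h3, h1 - 1j*h2],
                     [h1 + 1j*h2, h0 - h3]])

def chern_number(tp, phi, Delta, N=60):
    """Lower-band Chern number via U(1) link variables on an N x N grid
    spanned by the reciprocal vectors b_1, b_2."""
    b1 = np.array([2*np.pi/3, 2*np.pi/np.sqrt(3)])
    b2 = np.array([-2*np.pi/3, 2*np.pi/np.sqrt(3)])
    u = np.empty((N, N, 2), dtype=complex)
    for i in range(N):
        for j in range(N):
            k = (i/N)*b1 + (j/N)*b2
            _, v = np.linalg.eigh(hamiltonian(k[0], k[1], tp, phi, Delta))
            u[i, j] = v[:, 0]                 # lower-band eigenvector
    F = 0.0
    for i in range(N):
        for j in range(N):
            U1 = np.vdot(u[i, j], u[(i+1) % N, j])
            U2 = np.vdot(u[(i+1) % N, j], u[(i+1) % N, (j+1) % N])
            U3 = np.vdot(u[(i+1) % N, (j+1) % N], u[i, (j+1) % N])
            U4 = np.vdot(u[i, (j+1) % N], u[i, j])
            F += np.angle(U1 * U2 * U3 * U4)  # plaquette Berry flux
    return F / (2*np.pi)

print(chern_number(0.2, np.pi/4, 0.0))   # topological point: |C| = 1
print(chern_number(0.2, np.pi/4, 5.0))   # trivial point (Delta > Delta_c): C = 0
```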
The impurity particles are governed by the tight-binding model for graphene
(i.e., $t^{\prime}=\Delta=0$), with the chemical potential at the bottom of
the lower band [18], by setting $h_{0}({\boldsymbol{k}})=3$. The impurity-
majority interaction, Eq. (8), is straightforwardly modified to account for
the impurity multi-band structure.
The Hall drag ${\sigma_{\downarrow\uparrow}}$ can then be derived in analogy
to the continuum model, see App. D for details; the only minor change is the
appearance of diagonalizing unitary matrices
$U_{\downarrow}({\boldsymbol{q}})$ for the impurity. Numerical evaluation of
${\sigma_{\downarrow\uparrow}}$ is presented in Figs. 6(b)–(d). Now, the
particle-hole symmetric case where $\epsilon_{1}=-\epsilon_{2}$ corresponds to
$\phi=\pm\pi/2$, and ${\sigma_{\downarrow\uparrow}}$ vanishes accordingly
[19]. Furthermore, one can easily demonstrate the symmetry
${\sigma_{\downarrow\uparrow}}(\phi)=-{\sigma_{\downarrow\uparrow}}(\pi-\phi)$,
see App. D below Eq. (41). This symmetry is readily seen in Fig. 6(c), which
shows a cut through the phase diagrams for fixed $\Delta=0$. Combined with the
symmetry
${\sigma_{\downarrow\uparrow}}(\phi)=-{\sigma_{\downarrow\uparrow}}(-\phi)$
inherited from the Haldane model, this gives the Hall drag a periodicity
$\displaystyle{\sigma_{\downarrow\uparrow}}(\phi)={\sigma_{\downarrow\uparrow}}(\phi+\pi)\
,$ (17)
apparent in Fig. 6(b). This remarkable manifestation of particle-hole
antisymmetry is in stark contrast to the pure majority case, where the Chern
number only has the trivial periodicity
$\mathcal{C}(\phi)=\mathcal{C}(\phi+2\pi)$, see Fig. 6(a).
At the special particle-hole symmetric parameter points,
$\phi=\pm\pi/2,\Delta=0$, one can also get insight into the behavior of
${\sigma_{\downarrow\uparrow}}$ to higher order in $g$ (see App. E): employing
a particle-hole transformation which also exchanges band indices of the
majority particles, it can be shown that at these points the Hall drag is
antisymmetric in $g$ to all orders. Thus, while there is no $\mathcal{O}(g)$
contribution and the leading order, $\mathcal{O}(g^{2})$, must vanish, the
Hall drag is generically nonzero at order $\mathcal{O}(g^{3})$.
Figure 6: Impurity Hall drag ${\sigma_{\downarrow\uparrow}}$ in the Haldane
model. (a) Majority phase diagram. $\Delta_{0}=6\sqrt{3}t^{\prime}$ is the
value of $\Delta$ where the phase transition occurs for $\phi=\pi/2$. (b)
${\sigma_{\downarrow\uparrow}}$ from numerical evaluation of Eq. (41) for
$t^{\prime}=0.2$. Cuts through the phase diagram along the dashed lines are
shown in the next panels. (c) ${\sigma_{\downarrow\uparrow}}$ as function of
$\phi$ for $\Delta=0$ and two values of $t^{\prime}$. (d)
${\sigma_{\downarrow\uparrow}}$ as function of $\Delta$ for $\phi=\pi/4$ and
same two values of $t^{\prime}$. The abscissa is rescaled by
$\Delta_{0}(t^{\prime})$.
In the numerics, the jump of ${\sigma_{\downarrow\uparrow}}$ across the
topological phase transition is again prominent, and clearly delineates the
topological phases of the parent Haldane model. Its origin is analogous to the
continuum model – it comes from a sign-changing contribution of Dirac
fermions, which becomes singular upon gap closing. The only significant
difference is that there are now two Dirac cones in the problem, but except at
the special points $\phi=0,\pi$, the gap closes at only one of them. In the
language employed for the continuum model, states near the Dirac cone with
open gap count as spectator fermions. A detailed analysis of the jump leads to
(see App. D)
$\displaystyle\Delta{\sigma_{\downarrow\uparrow}}=\Delta\mathcal{C}\cdot\frac{g^{2}n_{\downarrow}}{(2\pi)^{4}}\cdot
f(t^{\prime},\phi),$ (18)
where $f(t^{\prime},\phi)$ is a numerical function defined in Eq. (43). It
involves the remaining ${\boldsymbol{q}}$ integral, which is difficult to
evaluate analytically in the lattice case. In Fig. 7(a),
$\Delta{\sigma_{\downarrow\uparrow}}$ is depicted as a function of $\phi$. It
is maximal as $\phi\rightarrow 0^{+},\pi^{-}$, where the particle-hole
asymmetry of the dispersion (away from the Dirac points) is largest. Again,
the jump occurs on top of a smooth background contribution from the spectator
fermions, presented in Fig. 7(b). It too is maximal as $\phi\rightarrow
0^{+},\pi^{-}$, approaching $\Delta{\sigma_{\downarrow\uparrow}}/2$: close to
these angles, the spectator contribution is almost fully determined by the
second Dirac cone, which has a very small gap. Accordingly, the values of the
sign-changing drag contribution, $\sigma_{\downarrow\uparrow,\text{Dirac}}$,
and the almost Dirac-like background contribution are the same.
Figure 7: (a) Jump of the Hall drag $\Delta{\sigma_{\downarrow\uparrow}}$ in
the Haldane model as function of $\phi$, with $t^{\prime}=0.2$ and
$\Delta=\Delta_{c}$ tuned to the transition line. The dashed line corresponds
to formula (18), points are computed numerically by evaluating Eq. (41) at two
points close to the phase boundary, with $\Delta=\Delta_{c}\pm 0.001$ (filled
circles). For comparison, a numerical evaluation with $\Delta=\Delta_{c}\pm
0.1$ is also shown (empty circles), which yields qualitative agreement only.
(b) Smooth contribution from spectator fermions, obtained numerically from
(41). Horizontal lines correspond to
$\Delta{\sigma_{\downarrow\uparrow}}(\phi=0^{+},\pi^{-})/2.$
## V Measurement of the Hall drag
We now discuss how to detect ${\sigma_{\downarrow\uparrow}}$ experimentally.
In a solid-state system, the total transverse conductivity
$\sigma_{xy,\text{tot}}$ is an easily accessible quantity, typically obtained
from a resistivity measurement. Since the majority particles form a Chern
insulator, their contribution to $\sigma_{xy,\text{tot}}$ is quantized, and
the Hall drag contribution ${\sigma_{\downarrow\uparrow}}$ can in principle be
read off by subtracting this quantized value from $\sigma_{xy,\text{tot}}$. In
practice, however, it may be necessary to use the specific parameter
dependence of ${\sigma_{\downarrow\uparrow}}$ to separate it from
$\sigma_{xy,\text{tot}}$. ${\sigma_{\downarrow\uparrow}}$ can for example be
obtained as the contribution to $\sigma_{xy,\text{tot}}$ proportional to the
impurity density $n_{\downarrow}$, or by subtracting measurements of
$\sigma_{xy,\text{tot}}$ at two particle-hole inverted points of the phase
diagram.
Chern insulators have also been successfully realized in ultracold gas
systems. Here, an established technique for measuring topological quantum
numbers [39, 40] is the in-situ observation of the center of mass displacement
of the atomic cloud upon the action of an external force. In the present
polaron context, this measurement would have to be performed in a state-
dependent manner to extract the Hall drag. In addition, one could conduct
either a state-dependent time-of-flight measurement [41, 42], or Raman
spectroscopy (as recently implemented for polarons [43]), to infer the in-trap
momentum distribution of the impurity, with a view to evaluating the current
response of the impurity to an applied force.
All these transport experiments would extract the Hall drag from the linear
current response to an external, linearly polarized electric field, which is
the standard point of view. However, recent theoretical works have shown [25,
26, 27, 44, 45] that topological invariants can also be obtained from a
measurement of excitation rates to second order in the amplitudes of
circularly polarized fields, which was verified in the experiment of Ref.
[28]. For the Hall drag ${\sigma_{\downarrow\uparrow}}$, a relation to an
impurity excitation rate can be established as well, as we now show. Measuring
such excitation rates may be a simpler route to detect
${\sigma_{\downarrow\uparrow}}$ experimentally, in both ultracold gas and
solid state systems.
To set the stage, we first rephrase the results of Ref. [25] for the majority
sector (non-interacting Chern insulator). The particles are coupled to
external left or right circular polarized electrical fields:
$\displaystyle\textbf{E}_{\pm}(t)=2E\left(\cos(\omega t),\ \pm\sin(\omega
t)\right)^{T}\ ,$ (19)
with $\omega$ a fixed drive frequency. In the temporal gauge, the time-
dependent light-matter Hamiltonian reads
$\displaystyle
H_{\uparrow,\pm}(t)=\frac{2E}{\omega}\left(\hat{J}^{x}_{\uparrow}\sin(\omega
t)\mp\hat{J}^{y}_{\uparrow}\cos(\omega t)\right)\ .$ (20)
When this perturbation is switched on, particles are excited from the lower to
the upper band. One can define the associated depletion rates of initially
occupied states with momentum ${\boldsymbol{k}}$,
$\Gamma_{\uparrow,\pm}({\boldsymbol{k}},\omega)$, which depend on the
polarization of the driving field (“circular dichroism”). In Ref. [25], these
rates are obtained from Fermi’s Golden Rule. Let
$\Delta\Gamma_{\uparrow}(\omega)$ be the difference in total depletion rates
for a fixed frequency $\omega$, $\Delta\Gamma_{\uparrow}(\omega)\equiv
1/2\sum_{\boldsymbol{k}}(\Gamma_{\uparrow,+}({\boldsymbol{k}},\omega)-\Gamma_{\uparrow,-}({\boldsymbol{k}},\omega))$.
Then the Chern number $\mathcal{C}$ follows the simple relation (note that
we use a different sign convention for $\mathcal{C}$ than Ref. [25]):
$\displaystyle A_{0}E^{2}\mathcal{C}=-\int_{0}^{\infty}\
d\omega\Delta\Gamma_{\uparrow}(\omega)\ .$ (21)
This integration has to be understood as an average of
$\Delta\Gamma_{\uparrow}(\omega)$ over different drive frequencies, obtained
by repeating the experiment many times [28].
Figure 8: On-shell self-energy diagram. Incoming and outgoing fermion lines
represent particles from the lower band, the intermediate line a particle from
the upper band, and the wiggly lines the circularly polarized electrical
fields. The Feynman rules are explained in App. F.
For our purposes here, it is useful to rederive the result (21) from
diagrammatic perturbation theory. This is achieved by relating the depletion
rate to the on-shell retarded self-energy as
$\displaystyle\Gamma_{\pm,\uparrow}({\boldsymbol{k}},\omega)=-2\text{Im}\left[\Sigma_{\pm}(\epsilon_{\uparrow,1}({\boldsymbol{k}}),{\boldsymbol{k}};\omega)\right]\
.$ (22)
In turn, the self-energy to second order in $H_{\uparrow,\pm}$ can be
represented by the Feynman diagram of Fig. 8, plus the diagram with the
$\hat{J}^{x}_{\uparrow},\hat{J}_{\uparrow}^{y}$ vertices interchanged. The
necessary Feynman rules in energy-momentum space are easily derived from
$H_{\uparrow,\pm}$, and are detailed in App. F. There are also processes
involving $(\hat{J}^{x}_{\uparrow})^{2},(\hat{J}^{y}_{\uparrow})^{2}$, but
they cancel in $\Delta\Gamma_{\uparrow}(\omega)$. Working in real frequency
space for convenience, $\Delta\Gamma_{\uparrow}(\omega)$ can then be written
down directly as
$\displaystyle\Delta\Gamma_{\uparrow}(\omega)=-\sum_{\boldsymbol{k}}\text{Im}\left[\Sigma_{+}(\epsilon_{1}({\boldsymbol{k}}),{\boldsymbol{k}};\omega)-\Sigma_{-}(\epsilon_{1}({\boldsymbol{k}}),{\boldsymbol{k}};\omega)\right]=$
$\displaystyle-\sum_{\boldsymbol{k}}\frac{E^{2}}{\omega^{2}}\text{Im}\bigg{[}\left(2iJ_{\uparrow,21}^{x}({\boldsymbol{k}})J^{y}_{\uparrow,12}({\boldsymbol{k}})-2iJ_{\uparrow,21}^{y}({\boldsymbol{k}})J_{\uparrow,12}^{x}({\boldsymbol{k}})\right)$
$\displaystyle\qquad\qquad\qquad\times\frac{1}{\omega+\epsilon_{\uparrow,1}({\boldsymbol{k}})-\epsilon_{\uparrow,2}({\boldsymbol{k}})+i0^{+}}\bigg{]}\
.$ (23)
Integrating over $\omega$, we find:
$\displaystyle\int_{0}^{\infty}d\omega\Delta\Gamma_{\uparrow}(\omega)=$ (24)
$\displaystyle\frac{4\pi E^{2}A_{0}}{(2\pi)^{2}}\int d{\boldsymbol{k}}\int_{0}^{\infty}d\omega\frac{\delta(\omega-(\epsilon_{\uparrow,2}({\boldsymbol{k}})-\epsilon_{\uparrow,1}({\boldsymbol{k}})))}{\omega^{2}}$
$\displaystyle\times\text{Im}\left[J_{\uparrow,12}^{x}({\boldsymbol{k}})J_{\uparrow,21}^{y}({\boldsymbol{k}})\right]\overset{(6)}{=}-A_{0}E^{2}\mathcal{C}\ ,$
in agreement with Eq. (21).
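As a numerical counterpart of Eqs. (21) and (24): the frequency-integrated dichroic response reduces to a Brillouin-zone integral over current matrix elements. The sketch below, reusing `hamiltonian` from the Haldane sketch of Sec. IV with finite-difference currents, returns a quantity that should converge to $-\mathcal{C}$ (in the sign convention of this paper) on a fine grid.

```python
import numpy as np

def current_matrices(kx, ky, tp, phi, Delta, dk=1e-5):
    """Band energies and band-basis currents J^a = U^dag (dH/dk_a) U,
    with the derivatives taken by central finite differences."""
    w, U = np.linalg.eigh(hamiltonian(kx, ky, tp, phi, Delta))
    dHx = (hamiltonian(kx + dk, ky, tp, phi, Delta)
           - hamiltonian(kx - dk, ky, tp, phi, Delta)) / (2*dk)
    dHy = (hamiltonian(kx, ky + dk, tp, phi, Delta)
           - hamiltonian(kx, ky - dk, tp, phi, Delta)) / (2*dk)
    return w, U.conj().T @ dHx @ U, U.conj().T @ dHy @ U

def integrated_dichroism(tp, phi, Delta, N=120):
    """(1/pi) * Int d^2k Im[J^x_{12} J^y_{21}] / (eps_2 - eps_1)^2,
    i.e. the k-integral of Eq. (24); should equal -C."""
    b1 = np.array([2*np.pi/3, 2*np.pi/np.sqrt(3)])
    b2 = np.array([-2*np.pi/3, 2*np.pi/np.sqrt(3)])
    cell = abs(b1[0]*b2[1] - b1[1]*b2[0]) / N**2   # k-space cell area
    total = 0.0
    for i in range(N):
        for j in range(N):
            k = (i/N)*b1 + (j/N)*b2
            w, Jx, Jy = current_matrices(k[0], k[1], tp, phi, Delta)
            total += np.imag(Jx[0, 1] * Jy[1, 0]) / (w[1] - w[0])**2
    return total * cell / np.pi

print(integrated_dichroism(0.2, np.pi/4, 0.0))   # approx. -C
```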
To summarize, we have related the majority Chern number to the differential
depletion rate of filled states from the lower band when the system is
subjected to a circular perturbation. We can now extend this idea to the
impurity case. We consider our previous interacting majority-impurity setup,
with a small number of impurities prepared in the lower band, and couple both
majority and impurity particles to the circular fields. On their own, the
impurities would not experience a differential depletion because of the time
reversal invariance of the impurity Hamiltonian. Only due the interaction with
the majority particles such differential depletion will set in, corresponding
to occupation of higher momentum states. Note that, for strong impurity-
majority interactions, it will rather be polaronic (dressed impurity) states
which are depleted. For weak coupling, however, such band-dressing effects can
be neglected (to order $\mathcal{O}(g^{2})$), and we can think in terms of
bare impurities in lieu of polarons. In technical terms, our Feynman diagrams
will not contain any impurity self-energy insertions.
Let us couple the impurities to the circular fields in the same way as the
majority particles, Eq. (20). We consider the depletion rate of the filled
impurity state with vanishing momentum
$\Gamma_{\downarrow,\pm}({\boldsymbol{{0}}},\omega)\equiv\Gamma_{\downarrow,\pm}(\omega)$,
which is of most interest when the impurity density is small. Since non-
vanishing contributions to $\Delta\Gamma_{\downarrow}(\omega)$ must involve
majority scattering, to order $\mathcal{O}(g^{2})$ there are two classes of
relevant diagrams; representative diagrams are shown in Fig. 9.
Figure 9: Non-vanishing contributions to the impurity depletion rate
$\Gamma_{\downarrow,\pm}({\boldsymbol{{0}}},\omega)$. Panels (a), (b):
Diagrams not related to the drag, which are particle-hole symmetric. Panels
(c), (d): Diagrams related to the drag. These two diagrams differ in the
orientation of the field lines and the band index structure of the majority
particles.
Consider first the two diagrams 9(a), 9(b) in the top row of the figure. These
diagrams describe processes where only the majority particles are excited by
the external fields. Since they do not involve an impurity current, they are
not related to the drag. Two additional diagrams where the direction of the
external field lines is inverted can be drawn as well.
The structural difference between Fig. 9(a) and 9(b) is the orientation of the
majority lines, which maps to an inverted energy-momentum transfer on the
impurity (marked red). Thus, similar to the drag diagrams of Fig. 3, the
diagrams are related by particle-hole symmetry. However, the contributions of
these diagrams add up rather than cancel, since they do not contain an
impurity current operator, $J_{\downarrow}({\boldsymbol{q}})$, which is odd in
${\boldsymbol{q}}$. Therefore, as can be verified by a straightforward
evaluation (cf. App. F, Eq. (52)), the total contribution
$\Delta\Gamma_{\downarrow,\text{ph}}$ of these diagrams obeys
$\Delta\Gamma_{\downarrow,\text{ph}}(\phi)=\Delta\Gamma_{\downarrow,\text{ph}}(\pi-\phi)$
for the Haldane and
$\Delta\Gamma_{\downarrow,\text{ph}}(d_{2})=\Delta\Gamma_{\downarrow,\text{ph}}(-d_{2})$
for the continuum model. As a result, in an experiment these processes can be
projected out by subtracting
$\Delta\Gamma_{\downarrow,\text{ph}}(\phi)-\Delta\Gamma_{\downarrow,\text{ph}}(\pi-\phi)$,
which leaves only the antisymmetric drag contribution. Another way to
separate $\Delta\Gamma_{\downarrow,\text{ph}}$ from the drag is to have a
different coupling constant between external field and impurities, which is
feasible in the ultracold gas setup where the circular perturbation can for
example be implemented by lattice shaking [47, 28]. Since
$\Delta\Gamma_{\downarrow,\text{ph}}$ is independent of the coupling to the
impurities, it can again be eliminated by subtracting measurements obtained
for two different impurity couplings.
Let us implicitly assume one such elimination, and move on to the two
diagrams of Fig. 9(c), 9(d). In essence, they correspond to the drag
transconductivity diagram of Fig. 3 (top), with the central (red) impurity
line cut. The two other diagrams in this class have crossed interaction lines,
akin to the “crossed” diagrams of Fig. 3 (bottom). Evaluation of these four
diagrams is straightforward, see App. F. Summation over the filled impurity
states simply yields:
$\displaystyle\sum_{{\boldsymbol{p}},\text{filled}}\Gamma_{\downarrow,\pm}({\boldsymbol{p}},\omega)\simeq\sum_{{\boldsymbol{p}},\text{filled}}\Gamma_{\downarrow,\pm}(\omega)=A_{0}n_{\downarrow}\Gamma_{\downarrow,\pm}(\omega)\
.$ (25)
For the integrated differential depletion rate, one then finds
$\displaystyle\int_{0}^{\infty}d\omega\Delta\Gamma_{\downarrow,xy}(\omega)=2\pi
A_{0}E^{2}{\sigma_{\downarrow\uparrow}}\ ,$ (26)
as naively expected from Eq. (24). However, the impurity depletion rate also
receives contributions from processes involving the currents
$\hat{J}_{\downarrow}^{y}$, $\hat{J}_{\uparrow}^{x}$. Per the Feynman rules
(cf. App. F), these diagrams come with a relative minus sign, and then yield a
factor of two for the total differential rate, since
$\sigma_{xy,\downarrow\uparrow}=-\sigma_{yx,\downarrow\uparrow}$ for both the
continuum and the Haldane model, as one can check easily. Modulo the
antisymmetrization discussed above, we therefore have
$\displaystyle{\sigma_{\downarrow\uparrow}}=\frac{1}{4\pi
A_{0}E^{2}}\int_{0}^{\infty}d\omega\Delta\Gamma_{\downarrow}(\omega)\ .$ (27)
This result can also be rephrased in terms of excitation instead of depletion
rates. Since the impurities are initially prepared at the bottom of the band,
one can write
$\displaystyle\int_{0}^{\infty}d\omega\Delta\Gamma_{\downarrow}(\omega)=\sum_{{\boldsymbol{q}}>0}\
\int_{0}^{\infty}d\omega\Delta\Gamma_{\downarrow,\text{exc}}({\boldsymbol{q}},\omega)\
,$ (28)
meaning that the impurities are excited into states with higher momentum which
are initially empty. These ${\boldsymbol{q}}$-states correspond to the
intermediate impurity lines in Fig. 9. Via Eq. (27) we can then define a
${\boldsymbol{q}}$-resolved impurity drag as
$\displaystyle{\sigma_{\downarrow\uparrow}}\equiv\sum_{{\boldsymbol{q}}>0}{\sigma_{\downarrow\uparrow}}({\boldsymbol{q}})\
.$ (29)
This provides an alternative view on, say, the topological jump
$\Delta{\sigma_{\downarrow\uparrow}}$. For the Haldane model, it can be
phrased as $\Delta{\sigma_{\downarrow\uparrow}}=\Delta\mathcal{C}\int
d{\boldsymbol{q}}f_{\text{jump}}({\boldsymbol{q}})$, where
$f_{\text{jump}}({\boldsymbol{q}})$ is a known function, see Eqs. (18), (43).
If the excitation rates defined in Eq. (28) can be experimentally detected in
${\boldsymbol{q}}$-resolved fashion (for instance with band mapping techniques
[48, 49, 50]), so can the ${\boldsymbol{q}}$-resolved impurity drag
${\sigma_{\downarrow\uparrow}}({\boldsymbol{q}})$. Measuring
${\sigma_{\downarrow\uparrow}}({\boldsymbol{q}})$ at two points in the phase
diagram close to the topological boundary then gives direct access to
$f_{\text{jump}}({\boldsymbol{q}})$. Conversely, supposing that
$f_{\text{jump}}({\boldsymbol{q}})$ is known for the model realized in the
experiment, each ${\boldsymbol{q}}$-point yields an independent estimate of
the change in Chern number across the phase transition, $\Delta\mathcal{C}$.
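To make the last point concrete, a minimal sketch of the per-${\boldsymbol{q}}$ estimate; all inputs are hypothetical measured arrays, not data from this work:

```python
import numpy as np

def delta_C_estimates(sigma_plus, sigma_minus, f_jump):
    """Independent estimate of Delta C at each sampled q-point, from the
    q-resolved drag measured on the two sides of the transition and a known
    f_jump(q); returns the mean estimate and its standard error."""
    est = (np.asarray(sigma_plus) - np.asarray(sigma_minus)) / np.asarray(f_jump)
    return est.mean(), est.std(ddof=1) / np.sqrt(est.size)
```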
## VI Conclusions
In this work we have studied to which extent a topologically trivial impurity
can be Hall-dragged by majority excitations in a Chern insulator, looking at
two different models in a controlled perturbative setting. Since the impurity
Hall drag is sensitive to the dispersion of the majority particles and holes,
there is no one-to-one correspondence to the Chern number; nevertheless, the
change in Chern number across a topological transition is clearly reflected by
a discontinuous jump in the drag transconductivity
${\sigma_{\downarrow\uparrow}}$. This jump arises from the integrated singular
Berry curvature of the majority fermions. The transconductivity can be
extracted either from transport experiments, or from a measurement of impurity
excitation rates upon driving the system by a circularly polarized field.
A worthwhile goal for future study is the extension to the strong-coupling
limit, in particular the analysis of impurity-majority bound state formation.
These bound states may have rather rich physics: They could inherit the
topological characteristics of the majority particles [14, 15], have opposite
chirality as found for the Haldane model in the two-body limit [51], or even be
topological when the single-particle states are trivial [52, 53, 54].
## Acknowledgments
We thank A. Kamenev for helpful discussions. D.P. acknowledges funding by the
Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under
Germany’s Excellence Strategy – EXC-2111 – 390814868, and is particularly
grateful to the Max-Planck-Institute for the Physics of Complex Systems
Dresden (MPIPKS) for hospitality during the intermediate stage of this
project. N.G. has been supported by the FRS-FNRS (Belgium) and the ERC
Starting Grant TopoCold. P.M. has been supported by the Spanish MINECO
(FIS2017-84114-C2-1- P), and EU FEDER Quantumcat. G.M.B. acknowledges support
from the Independent Research Fund Denmark-Natural Sciences via Grant No.
DFF-8021-00233B, and US Army CCDC Atlantic Basic and Applied Research via
grant W911NF-19-1-0403. M.G. has been supported by the Israel Science
Foundation (Grant No. 227/15) and the US-Israel Binational Science Foundation
(Grant No. 2016224).
## Appendix A Basis rotation
To evaluate the conductivities, it is convenient to work in the diagonal band
basis, introducing a diagonalizing unitary matrix
$U_{\uparrow}({\boldsymbol{k}})$
$\displaystyle
U_{\uparrow}^{\dagger}({\boldsymbol{k}})H_{\uparrow}({\boldsymbol{k}})U_{\uparrow}({\boldsymbol{k}})=\text{diag}(\epsilon_{\uparrow,1}({\boldsymbol{k}}),\epsilon_{\uparrow,2}({\boldsymbol{k}}))$
(30) $\displaystyle
U_{\uparrow}({\boldsymbol{k}})=\begin{pmatrix}U_{\uparrow,A1}({\boldsymbol{k}})&U_{\uparrow,A2}({\boldsymbol{k}})\\\
U_{\uparrow,B1}({\boldsymbol{k}})&U_{\uparrow,B2}({\boldsymbol{k}})\end{pmatrix},$
(31) $\displaystyle
U_{\uparrow,A1}({\boldsymbol{k}})=\frac{h_{3}({\boldsymbol{k}})-h({\boldsymbol{k}})}{\sqrt{2h({\boldsymbol{k}})(h({\boldsymbol{k}})-h_{3}({\boldsymbol{k}}))}}$
$\displaystyle
U_{\uparrow,A2}({\boldsymbol{k}})=\frac{h_{3}({\boldsymbol{k}})+h({\boldsymbol{k}})}{\sqrt{2h({\boldsymbol{k}})(h({\boldsymbol{k}})+h_{3}({\boldsymbol{k}}))}}$
$\displaystyle
U_{\uparrow,B1}({\boldsymbol{k}})=\frac{h_{1}({\boldsymbol{k}})+ih_{2}({\boldsymbol{k}})}{\sqrt{2h({\boldsymbol{k}})(h({\boldsymbol{k}})-h_{3}({\boldsymbol{k}}))}},$
$\displaystyle
U_{\uparrow,B2}({\boldsymbol{k}})=\frac{h_{1}({\boldsymbol{k}})+ih_{2}({\boldsymbol{k}})}{\sqrt{2h({\boldsymbol{k}})(h({\boldsymbol{k}})+h_{3}({\boldsymbol{k}}))}}\
,$
where $A,B$ refer to the sublattice basis and $1,2$ to the diagonal band basis.
The same expressions apply to the Haldane model as well.
In the band basis, the second-quantized current operator is given by
$\displaystyle\hat{J}^{x/y}_{\uparrow}$
$\displaystyle=\sum_{\boldsymbol{k}}c^{\dagger}_{\uparrow,\alpha}({\boldsymbol{k}})J^{x/y}_{\uparrow,\alpha\beta}({\boldsymbol{k}})c_{\uparrow,\beta}({\boldsymbol{k}}),$
(32)
with matrix elements
$\displaystyle J_{\uparrow}^{x}({\boldsymbol{k}})$
$\displaystyle=U_{\uparrow}^{\dagger}({\boldsymbol{k}})J_{\uparrow}^{x,0}({\boldsymbol{k}})U_{\uparrow}({\boldsymbol{k}})$
(33) $\displaystyle J_{\uparrow}^{x,0}({\boldsymbol{k}})$
$\displaystyle=\frac{\partial H_{\uparrow}({\boldsymbol{k}})}{\partial
k_{x}}=\sigma_{x}+2k_{x}(d_{1}\sigma_{z}+d_{2}\mathbbm{1}),$
and likewise for $J_{\uparrow}^{y}({\boldsymbol{k}})$.
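As a sanity check of Eqs. (30)-(33), the closed-form unitary can be implemented directly. The sketch below assumes the continuum-model coefficients $h_{1}=k_{x}$, $h_{2}=k_{y}$, $h_{3}=m+d_{1}k^{2}$ with $h=|{\boldsymbol{h}}|$ (the $d_{2}$ part is proportional to the identity and does not affect $U_{\uparrow}$), and reuses `H_up` from the Monte-Carlo sketch of Sec. II.

```python
import numpy as np

def U_closed_form(k, m, d1):
    """Diagonalizing unitary of Eq. (31); undefined where h1 = h2 = 0."""
    h1, h2 = k
    h3 = m + d1*(k @ k)
    h = np.sqrt(h1**2 + h2**2 + h3**2)
    norm1 = np.sqrt(2*h*(h - h3))   # lower-band normalization
    norm2 = np.sqrt(2*h*(h + h3))   # upper-band normalization
    return np.array([[(h3 - h)/norm1, (h3 + h)/norm2],
                     [(h1 + 1j*h2)/norm1, (h1 + 1j*h2)/norm2]])

k = np.array([0.7, -0.3]); m, d1, d2 = 0.2, -1.0, 0.5
U = U_closed_form(k, m, d1)
# should print diag(eps_1, eps_2), cf. Eq. (30)
print(np.round(U.conj().T @ H_up(k, m, d1, d2) @ U, 10))
```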
## Appendix B Evaluation of the drag diagrams in the continuum model
Let us start by considering the first “direct” diagram in Fig. 3 with majority
band indices $\alpha=1,\beta=2$. Its contribution to the Matsubara correlator
$-\braket{\hat{J}_{\downarrow}^{x}\hat{J}_{\uparrow}^{y}}(i\Omega)$, to be
denoted by $\mathcal{P}_{1}(i\Omega)$, reads
$\displaystyle\mathcal{P}_{1}(i\Omega)=-g^{2}n_{\downarrow}A_{0}\int_{k,q}\int\frac{d\tilde{\omega}}{2\pi}J_{\uparrow,12}^{y}({\boldsymbol{k}})W^{22}({\boldsymbol{k}}-{\boldsymbol{q}},{\boldsymbol{q}})W^{21}({\boldsymbol{k}},-{\boldsymbol{q}})J^{x}_{\downarrow}({\boldsymbol{q}})$
(34)
$\displaystyle\frac{1}{i(\Omega+\omega_{q})-\epsilon_{\downarrow}({\boldsymbol{q}})}\frac{1}{i\omega_{q}-\epsilon_{\downarrow}({\boldsymbol{q}})}\frac{1}{i(\omega_{q}+\tilde{\omega})+0^{+}}\frac{1}{i(\Omega+\omega_{k})-{\epsilon_{\uparrow,1}}({\boldsymbol{k}})}\frac{1}{i\omega_{k}-{\epsilon_{\uparrow,2}}({\boldsymbol{k}})}\frac{1}{i(\omega_{k}+\tilde{\omega})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}}-{\boldsymbol{q}})}\
,$
where $0^{+}$ in the third impurity propagator ensures the correspondence to
filled states. Evaluating the frequency integrals we find:
$\displaystyle\mathcal{P}_{1}(i\Omega)=-g^{2}n_{\downarrow}A_{0}\int\frac{d{\boldsymbol{k}}}{(2\pi)^{2}}\frac{d{\boldsymbol{q}}}{(2\pi)^{2}}J_{\uparrow,12}^{y}({\boldsymbol{k}})W^{22}({\boldsymbol{k}}-{\boldsymbol{q}},{\boldsymbol{q}})W^{21}({\boldsymbol{k}},-{\boldsymbol{q}})J^{x}_{\downarrow}({\boldsymbol{q}})$
(35)
$\displaystyle\frac{1}{{\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}}-{\boldsymbol{q}})-\epsilon_{\downarrow}({\boldsymbol{q}})}\frac{1}{-i\Omega+{\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}}-{\boldsymbol{q}})-\epsilon_{\downarrow}({\boldsymbol{q}})}\frac{1}{-i\Omega+{\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}})}\
.$
Upon analytical continuation, $i\Omega\rightarrow\omega$, only the
$\mathcal{O}(\omega)$ part contributes to the static drag as in the non-
interacting case. With Eq. (9), we get:
$\displaystyle\sigma_{\downarrow\uparrow,1}=ig^{2}n_{\downarrow}\int\frac{d{\boldsymbol{k}}}{(2\pi)^{2}}\frac{d{\boldsymbol{q}}}{(2\pi)^{2}}J^{y}_{\uparrow,12}({\boldsymbol{k}})W^{22}({\boldsymbol{k}}-{\boldsymbol{q}},{\boldsymbol{q}})W^{21}({\boldsymbol{k}},-{\boldsymbol{q}})\frac{q_{x}}{M}\frac{1}{\left({\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}})\right)^{2}}d({\boldsymbol{k}},{\boldsymbol{q}}),$
(36)
with $d({\boldsymbol{k}},{\boldsymbol{q}})$ as defined in Eq. (11). The
remaining three contributions to $\sigma_{\downarrow\uparrow}$ have the
following structure: The direct diagram with majority indices
$\alpha=2,\beta=1$ leads to Eq. (36) with $A\equiv
J^{y}_{\uparrow,12}({\boldsymbol{k}})W^{22}({\boldsymbol{k}}-{\boldsymbol{q}},{\boldsymbol{q}})W^{21}({\boldsymbol{k}},-{\boldsymbol{q}})$
replaced by
$B\equiv-J^{y}_{\uparrow,21}({\boldsymbol{k}})W^{12}({\boldsymbol{k}}-{\boldsymbol{q}},{\boldsymbol{q}})W^{22}({\boldsymbol{k}},-{\boldsymbol{q}})$;
using elementary properties of unitary matrices, one can show that
$B=-\overline{A}$ (with $\overline{A}$ the complex conjugate of $A$), thus
yielding the part $\propto d({\boldsymbol{k}},{\boldsymbol{q}})$ of Eq. (10)
in the main text. The remaining “crossed” diagram of Fig. 3 likewise generates
the part $\propto c({\boldsymbol{k}},{\boldsymbol{q}})$.
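The small-$\Omega$ expansion leading from Eq. (35) to Eq. (36) can be verified symbolically. In the sketch below, $a=\epsilon_{\uparrow,1}({\boldsymbol{k}})$, $b=\epsilon_{\uparrow,2}({\boldsymbol{k}}-{\boldsymbol{q}})+\epsilon_{\downarrow}({\boldsymbol{q}})$ and $c=\epsilon_{\uparrow,2}({\boldsymbol{k}})$; overall prefactors and factors of $i$ are stripped off.

```python
import sympy as sp

iW, a, b, c = sp.symbols('iOmega a b c')
# energy-dependent factors of Eq. (35)
P = 1/(a - b) * 1/(-iW + a - b) * 1/(-iW + a - c)
lin = sp.series(P, iW, 0, 2).removeO().coeff(iW, 1)    # O(Omega) part
# d(k, q) of Eq. (11) combined with the 1/(eps_1 - eps_2)^2 factor of Eq. (10)
d_expr = (2*a - b - c) / ((a - b)**3 * (a - c)**2)
print(sp.simplify(lin - d_expr))                       # -> 0
```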
## Appendix C Jump of the Hall drag in the continuum model
To derive the jump from Eq. (10), we need to project on the part of the
${\boldsymbol{k}}$-integrand corresponding to the Dirac fermions, which
becomes singular at ${\boldsymbol{k}}=0$ as $m\rightarrow 0$. This can be done
by setting ${\boldsymbol{k}}=0$ in all regular parts. The last factor in the
integrand becomes
$\displaystyle
d({\boldsymbol{k}},{\boldsymbol{q}})+c({\boldsymbol{k}},{\boldsymbol{q}})\rightarrow$
(37)
$\displaystyle\left(\frac{1}{(\epsilon_{\downarrow}({\boldsymbol{q}})+{\epsilon_{\uparrow,2}}({\boldsymbol{q}}))^{2}}-\frac{1}{(\epsilon_{\downarrow}({\boldsymbol{q}})-{\epsilon_{\uparrow,1}}({\boldsymbol{q}}))^{2}}\right)\
.$
In the part involving interaction matrices $W$, it is useful to rewrite
$\displaystyle
W^{22}({\boldsymbol{k}}-{\boldsymbol{q}},{\boldsymbol{q}})W^{21}({\boldsymbol{k}},-{\boldsymbol{q}})\overset{\eqref{Hint}}{=}$
(38) $\displaystyle
U^{\dagger}_{\uparrow,2n}({\boldsymbol{k}})U_{\uparrow,n2}({\boldsymbol{k}}-{\boldsymbol{q}})U^{\dagger}_{\uparrow,2m}({\boldsymbol{k}}-{\boldsymbol{q}})U_{\uparrow,m1}({\boldsymbol{k}})\rightarrow$
$\displaystyle
U^{\dagger}_{\uparrow,2n}({\boldsymbol{k}})U_{\uparrow,n2}(-{\boldsymbol{q}})U^{\dagger}_{\uparrow,2m}(-{\boldsymbol{q}})U_{\uparrow,m1}({\boldsymbol{k}})=$
$\displaystyle\left(U^{\dagger}_{\uparrow}({\boldsymbol{k}})V({\boldsymbol{q}})U_{\uparrow}({\boldsymbol{k}})\right)_{21},$
$\displaystyle V({\boldsymbol{q}})_{nm}\equiv
U_{\uparrow,n2}(-{\boldsymbol{q}})U^{\dagger}_{\uparrow,2m}(-{\boldsymbol{q}})\
,$
where $n,m$ are sublattice indices, and in the second step we have only kept
the singular ${\boldsymbol{k}}$ dependence. $V({\boldsymbol{q}})$ is a
hermitian matrix, and so can be expanded as a linear combination of the unit
and Pauli matrices with real coefficients. Then it is easy to show that only
the contribution $\propto\sigma_{x}$ survives the integration in Eq. (10),
while the rest either does not contribute to the required imaginary part or is
antisymmetric in $k_{x}$. Therefore, we can write
$\displaystyle\left(U^{\dagger}_{\uparrow}({\boldsymbol{k}})V({\boldsymbol{q}})U_{\uparrow}({\boldsymbol{k}})\right)_{21}\hat{=}$
(39)
$\displaystyle\left(U^{\dagger}_{\uparrow}({\boldsymbol{k}})\sigma_{x}U_{\uparrow}({\boldsymbol{k}})\right)_{21}\text{Re}\left[V({\boldsymbol{q}})_{12}\right]=$
$\displaystyle\left(U^{\dagger}_{\uparrow}({\boldsymbol{k}})J^{x,0}_{\uparrow,\text{Dirac}}U_{\uparrow}({\boldsymbol{k}})\right)_{21}\frac{-q_{x}}{2q\sqrt{1+(d_{1}q)^{2}}}\
,$
where in the last step we identified the current operator of the Dirac
fermions in the sublattice basis, $\sigma_{x}=J^{x,0}_{\uparrow,\text{Dirac}}$
(cf. Eq. (1)), and wrote out $V({\boldsymbol{q}})$ by inserting matrix
elements of $U_{\uparrow}(-{\boldsymbol{q}})$ from App. A. Inserting Eqs.
(37), (39) into Eq. (10), we can write the sign-changing Dirac part of the
Hall drag as shown in Eq. (12) of the main text.
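The reduction of $d+c$ to Eq. (37) can likewise be checked symbolically. The sketch sets $\epsilon_{\uparrow,1}(0)=\epsilon_{\uparrow,2}(0)=0$ at the gap closing and uses $\epsilon_{\uparrow,\alpha}(-{\boldsymbol{q}})=\epsilon_{\uparrow,\alpha}({\boldsymbol{q}})$, which holds for the continuum dispersions.

```python
import sympy as sp

e1, e2, ed = sp.symbols('eps1_q eps2_q eps_dn')
# d and c of Eq. (11) with eps_{1,2}(k=0) -> 0 at the transition
d0 = (-e2 - ed) / (-e2 - ed)**3
c0 = (-e1 + ed) / (e1 - ed)**3
target = 1/(ed + e2)**2 - 1/(ed - e1)**2   # Eq. (37)
print(sp.simplify(d0 + c0 - target))       # -> 0
```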
## Appendix D Impurity Hall drag and jump in the Haldane model
In the Haldane model, the on-site interaction is defined by (cf. Eq. (8))
$\displaystyle
H_{\text{int}}=\frac{g}{A_{0}}\sum_{\ell=A,B}\sum_{{\boldsymbol{k}},{\boldsymbol{p}},{\boldsymbol{q}}}c^{\dagger}_{\uparrow,\ell}({\boldsymbol{k}}+{\boldsymbol{q}})c_{\uparrow,\ell}({\boldsymbol{k}})c^{\dagger}_{\downarrow,\ell}({\boldsymbol{p}}-{\boldsymbol{q}})c_{\downarrow,\ell}({\boldsymbol{p}})=\frac{g}{A_{0}}\sum_{{\boldsymbol{k}},{\boldsymbol{p}},{\boldsymbol{q}}}c^{\dagger}_{\uparrow,\alpha}({\boldsymbol{k}}+{\boldsymbol{q}})c_{\uparrow,\beta}({\boldsymbol{k}})c^{\dagger}_{\downarrow,1}({\boldsymbol{p}}-{\boldsymbol{q}})c_{\downarrow,1}({\boldsymbol{p}})W_{\alpha\beta}({\boldsymbol{k}},{\boldsymbol{p}},{\boldsymbol{q}})\
,$ $\displaystyle
W_{\alpha\beta}({\boldsymbol{k}},{\boldsymbol{p}},{\boldsymbol{q}})=\sum_{\ell=A,B}\overline{U}_{\uparrow,\ell\alpha}({\boldsymbol{k}}+{\boldsymbol{q}})U_{\uparrow,\ell\beta}({\boldsymbol{k}})\overline{U}_{\downarrow,\ell
1}({\boldsymbol{p}}-{\boldsymbol{q}})U_{\downarrow,\ell 1}({\boldsymbol{p}})\
.$ (40)
In Eq. (40), we have restricted the impurity to the lower band, which is
legitimate for weak interactions.
With this interaction, the derivation of the Hall drag proceeds analogously to
the continuum model, App. B, and results in
$\displaystyle\sigma_{\downarrow\uparrow}=$ (41)
$\displaystyle-2g^{2}n_{\downarrow}\int\frac{d{\boldsymbol{k}}}{(2\pi)^{2}}\frac{d{\boldsymbol{q}}}{(2\pi)^{2}}\
\text{Im}\left\\{J^{y}_{\uparrow,12}({\boldsymbol{k}})W^{22}({\boldsymbol{k}}-{\boldsymbol{q}},{\boldsymbol{q}},{\boldsymbol{q}})W^{21}({\boldsymbol{k}},{\boldsymbol{{0}}},-{\boldsymbol{q}})\right\\}J^{x}_{\downarrow,11}({\boldsymbol{q}})\frac{1}{\left({\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}})\right)^{2}}\left(d({\boldsymbol{k}},{\boldsymbol{q}})+c({\boldsymbol{k}},{\boldsymbol{q}})\right),$
with $J_{\downarrow,11}^{x}({\boldsymbol{q}})$ the impurity current operator
in the band basis (taking into account lower-band contributions only), and
$c,d$ as in Eq. (11), only replacing the single-band energy of the continuum
model, $\epsilon_{\downarrow}({\boldsymbol{q}})$, by the lower band energy
$\epsilon_{\downarrow,1}({\boldsymbol{q}})$.
From Eq. (41) one can readily derive the additional symmetry
${\sigma_{\downarrow\uparrow}}(\phi)=-{\sigma_{\downarrow\uparrow}}(\pi-\phi)$
mentioned in the main text. In the majority Hamiltonian
$H_{\uparrow}({\boldsymbol{k}})$,
$h_{0}({\boldsymbol{k}};\phi)=-h_{0}({\boldsymbol{k}};\pi-\phi)$, while the
other coefficients are invariant under such reflection. As a result, one finds
$c({\boldsymbol{k}},{\boldsymbol{q}};\phi)=-d({\boldsymbol{k}},{\boldsymbol{q}};\pi-\phi)$.
All other elements of Eq. (41) do not change, which shows the property as
claimed.
To evaluate the jump of the Hall drag $\Delta{\sigma_{\downarrow\uparrow}}$ in
the Haldane model in analogy with Sec. III, let us focus on the transition
line, $\Delta_{c}=6\sqrt{3}t^{\prime}\sin(\phi)$, where the gap closes at the
Dirac point ${\boldsymbol{k}}_{A}=(0,4\pi/(3\sqrt{3}))^{T}$. Since
${\sigma_{\downarrow\uparrow}}$ is symmetric in $\Delta$, for a given value of
$\phi$ the value of $\Delta{\sigma_{\downarrow\uparrow}}$ at $-\Delta_{c}$ is
the same. To extract the singular Dirac contribution at
${\boldsymbol{k}}_{A}$, we let
${\boldsymbol{k}}\rightarrow{\boldsymbol{k}}_{A}$ in all regular parts of Eq.
(41). In this limit,
$\displaystyle J_{\uparrow}^{x}({\boldsymbol{k}})\rightarrow U_{\uparrow}^{\dagger}({\boldsymbol{k}})\frac{3}{2}\sigma_{y}U_{\uparrow}({\boldsymbol{k}})\equiv J_{\uparrow,\text{Dirac}}^{x}({\boldsymbol{k}})\ .$ (42)
This current can be extracted from the interaction part of Eq. (41) as in Sec.
III, which allows us to write the Dirac part of the Hall drag as
$\displaystyle\sigma_{\downarrow\uparrow,\text{Dirac}}=\frac{g^{2}n_{\downarrow}}{(2\pi)^{4}}\int\frac{d{\boldsymbol{k}}}{\pi}\
\text{Im}\left\\{J_{\uparrow,\text{Dirac},12}^{y}({\boldsymbol{k}})J_{\uparrow,\text{Dirac},21}^{x}({\boldsymbol{k}})\right\\}\cdot\frac{1}{\left({\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}})\right)^{2}}\cdot
f(t^{\prime},\phi)\ ,$ (43) $\displaystyle
f(t^{\prime},\phi)\equiv-\frac{4\pi}{3}\int
d{\boldsymbol{q}}J_{\downarrow}^{x}({\boldsymbol{q}})\left(\frac{1}{\left({\epsilon_{\uparrow,1}}({\boldsymbol{k}}_{A})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}}_{A}-{\boldsymbol{q}})-{\epsilon_{\downarrow,1}}({\boldsymbol{q}})\right)^{2}}-\frac{1}{\left({\epsilon_{\uparrow,1}}({\boldsymbol{k}}_{A}-{\boldsymbol{q}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}}_{A})-{\epsilon_{\downarrow,1}}({\boldsymbol{q}})\right)^{2}}\right)\cdot$
$\displaystyle\qquad\qquad\qquad\text{Im}\left\\{U_{\uparrow,A2}({\boldsymbol{k}}_{A}-{\boldsymbol{q}})U^{\dagger}_{\downarrow,1A}({\boldsymbol{{0}}})U_{\downarrow,A1}({\boldsymbol{q}})U^{\dagger}_{\uparrow,2B}({\boldsymbol{k}}_{A}-{\boldsymbol{q}})U_{\downarrow,B1}({\boldsymbol{{0}}})U^{\dagger}_{\downarrow,1B}({\boldsymbol{q}})\right\\}\quad$
Again, the ${\boldsymbol{k}}$ and ${\boldsymbol{q}}$ integrals have
factorized, and the ${\boldsymbol{k}}$ integral gives $\pm 1/2$. This yields a
value of the jump as in Eq. (18) of the main text. The remaining
${\boldsymbol{q}}$ integral has to be evaluated numerically.
## Appendix E Antisymmetry of the Hall drag as function of $g$ in the Haldane model with $\phi=\pm\pi/2,\Delta=0$
Here we show that the Hall drag ${\sigma_{\downarrow\uparrow}}$ in the Haldane
model, with parameters $\phi=\pm\pi/2,\Delta=0$, is antisymmetric in the
impurity-majority coupling $g$ to all orders. We work in the diagonal band
frame, and perform a particle-hole transformation which also exchanges the
band indices:
$\displaystyle b_{\tilde{\alpha}}({\boldsymbol{k}})\equiv
c^{\dagger}_{\uparrow,\alpha}(-{\boldsymbol{k}}),\quad{b}^{\dagger}_{\tilde{\alpha}}({\boldsymbol{k}})\equiv{c}_{\uparrow,\alpha}(-{\boldsymbol{k}}),\quad{\tilde{1}}\equiv
2,\quad\tilde{2}\equiv 1\ .$ (44)
Due to particle-hole symmetry for $\phi=\pm\pi/2$, the form of the non-
interacting majority Hamiltonian is invariant under this transformation (up to
a constant):
$\displaystyle H_{\uparrow}$
$\displaystyle=\sum_{\boldsymbol{k}}c^{\dagger}_{\uparrow,\alpha}({\boldsymbol{k}})\epsilon_{\alpha}({\boldsymbol{k}})c_{\uparrow,\alpha}({\boldsymbol{k}})$
(45)
$\displaystyle=\sum_{\boldsymbol{k}}b^{\dagger}_{\tilde{\alpha}}(-{\boldsymbol{k}})\left[-\epsilon_{\alpha}({\boldsymbol{k}})\right]b_{\tilde{\alpha}}(-{\boldsymbol{k}})+\text{const.}$
$\displaystyle=\sum_{\boldsymbol{k}}b^{\dagger}_{\alpha}({\boldsymbol{k}})\epsilon_{\alpha}({\boldsymbol{k}})b_{\alpha}({\boldsymbol{k}})+\text{const.}\
,$
where
$\epsilon_{\alpha}({\boldsymbol{k}})=-\epsilon_{\tilde{\alpha}}(-{\boldsymbol{k}})$
was used. However, the interaction term acquires a minus sign under the
variable transformation (44):
$\displaystyle H_{\text{int}}$
$\displaystyle=\frac{g}{A_{0}}\sum_{{\boldsymbol{k}},{\boldsymbol{p}},{\boldsymbol{q}}}c^{\dagger}_{\uparrow,\alpha}({\boldsymbol{k}}+{\boldsymbol{q}})c_{\uparrow,\beta}({\boldsymbol{k}})c^{\dagger}_{\downarrow,1}({\boldsymbol{p}}-{\boldsymbol{q}})c_{\downarrow,1}({\boldsymbol{p}})W_{\alpha\beta}({\boldsymbol{k}},{\boldsymbol{p}},{\boldsymbol{q}})$
(46)
$\displaystyle=-\frac{g}{A_{0}}\sum_{{\boldsymbol{k}},{\boldsymbol{p}},{\boldsymbol{q}}}b^{\dagger}_{\tilde{\beta}}(-{\boldsymbol{k}}){b}_{\tilde{\alpha}}(-{\boldsymbol{k}}-{\boldsymbol{q}})c^{\dagger}_{\downarrow,1}({\boldsymbol{p}}-{\boldsymbol{q}}){c}_{\downarrow,1}({\boldsymbol{p}})W_{\alpha\beta}({\boldsymbol{k}},{\boldsymbol{p}},{\boldsymbol{q}})+\text{const.}$
$\displaystyle=-\frac{g}{A_{0}}\sum_{{\boldsymbol{k}},{\boldsymbol{p}},{\boldsymbol{q}}}{b}^{\dagger}_{\alpha}({\boldsymbol{k}}+{\boldsymbol{q}}){b}_{\beta}({\boldsymbol{k}})c^{\dagger}_{\downarrow,1}({\boldsymbol{p}}-{\boldsymbol{q}}){c}_{\downarrow,1}({\boldsymbol{p}})W_{\tilde{\beta}\tilde{\alpha}}(-{\boldsymbol{k}}-{\boldsymbol{q}},{\boldsymbol{p}},{\boldsymbol{q}})+\text{const.}$
$\displaystyle=-\frac{g}{A_{0}}\sum_{{\boldsymbol{k}},{\boldsymbol{p}},{\boldsymbol{q}}}b^{\dagger}_{\alpha}({\boldsymbol{k}}+{\boldsymbol{q}})b_{\beta}({\boldsymbol{k}})c^{\dagger}_{\downarrow,1}({\boldsymbol{p}}-{\boldsymbol{q}})c_{\downarrow,1}({\boldsymbol{p}})W_{\alpha\beta}({\boldsymbol{k}},{\boldsymbol{p}},{\boldsymbol{q}})+\text{const.}\
.$
The unimportant additional terms are constant in the majority sector. In the
last step, we used
$W_{\tilde{\beta}\tilde{\alpha}}(-{\boldsymbol{k}}-{\boldsymbol{q}},{\boldsymbol{p}},{\boldsymbol{q}})=W_{\alpha\beta}({\boldsymbol{k}},{\boldsymbol{p}},{\boldsymbol{q}})$.
This can be easily shown by inserting the matrix elements from Eqs. (31),
(16), but requires $h_{3}({\boldsymbol{k}})=-h_{3}(-{\boldsymbol{k}})$, which
is only fulfilled for $\Delta=0$ (and is violated in the continuum model).
Lastly, the required majority current operator transforms as
$\displaystyle\hat{J}_{\uparrow}^{y}=\sum_{\boldsymbol{k}}c^{\dagger}_{\uparrow,\alpha}({\boldsymbol{k}})U^{\dagger}_{\uparrow,\alpha n}({\boldsymbol{k}})\left[J_{y,\uparrow}^{0}({\boldsymbol{k}})\right]_{nm}U_{\uparrow,m\beta}({\boldsymbol{k}})c_{\uparrow,\beta}({\boldsymbol{k}}),\quad\left[J_{y,\uparrow}^{0}({\boldsymbol{k}})\right]_{nm}=\left[\partial_{k_{y}}H_{\uparrow}({\boldsymbol{k}})\right]_{nm}\ ,$ (47)
$\displaystyle\hat{J}_{\uparrow}^{y}=-\sum_{{\boldsymbol{k}}}{b}^{\dagger}_{\tilde{\beta}}(-{\boldsymbol{k}})U^{\dagger}_{\uparrow,\alpha n}({\boldsymbol{k}})\left[J_{y,\uparrow}^{0}({\boldsymbol{k}})\right]_{nm}U_{\uparrow,m\beta}({\boldsymbol{k}})b_{\tilde{\alpha}}(-{\boldsymbol{k}})+\text{const.}$
$\displaystyle\quad\quad\quad\ \ =-\sum_{{\boldsymbol{k}}}{b}^{\dagger}_{\alpha}({\boldsymbol{k}})U^{T}_{\uparrow,\tilde{\alpha}m}(-{\boldsymbol{k}})\left[J_{y,\uparrow}^{0}(-{\boldsymbol{k}})\right]^{T}_{mn}\overline{U}_{\uparrow,n\tilde{\beta}}(-{\boldsymbol{k}})b_{\beta}({\boldsymbol{k}})+\text{const.}\ .$
Again, inserting matrix elements one can show that
$\displaystyle
U^{T}_{\uparrow,\tilde{\alpha}m}(-{\boldsymbol{k}})\left[J_{y,\uparrow}^{0}(-{\boldsymbol{k}})\right]^{T}_{mn}\overline{U}_{\uparrow,n\tilde{\beta}}(-{\boldsymbol{k}})=$
$\displaystyle U_{\uparrow,\alpha
m}^{\dagger}({\boldsymbol{k}})\left[J_{y,\uparrow}^{0}({\boldsymbol{k}})\right]_{mn}U_{\uparrow,n\beta}({\boldsymbol{k}}),$
and the majority current changes sign. In conclusion, for
$\phi=\pm\pi/2,\Delta=0$ this proves the antisymmetry
$\displaystyle{\sigma_{\downarrow\uparrow}}(g)=-{\sigma_{\downarrow\uparrow}}(-g)\
,$ (48)
as claimed in the main text.
## Appendix F ${\sigma_{\downarrow\uparrow}}$ from circular dichroism:
Technical details
The Feynman rules for the perturbation $H_{\uparrow,\pm}(t)$ of Eq. (20) in
the energy-momentum domain are easily derived from Wick’s theorem. They read:
* •
Each current vertex comes with a factor $E/\omega$.
* •
If an incoming (outgoing) electrical field line couples to a $J_{x}$-vertex,
there is an extra factor $-i$ $(i)$ for both $\Gamma_{\pm}(\omega)$.
* •
If an electrical field line (incoming or outgoing) couples to a
$J_{y}$-vertex, this gives a factor $\mp 1$ for $\Gamma_{\pm}(\omega)$.
Application of these rules directly leads to Eq. (23) in the non-interacting
case. For the integrated impurity depletion rate, let us consider for instance
the contribution of the two diagrams of Fig. 9(c), 9(d), to be denoted $D$. It
reads
$\displaystyle
D=-n_{\downarrow}g^{2}E^{2}A_{0}\int_{0}^{\infty}d\omega\int\frac{d{\boldsymbol{k}}}{(2\pi)^{2}}\frac{d{\boldsymbol{q}}}{(2\pi)^{2}}\
\text{Im}\
\bigg{\\{}\int\frac{d\omega_{k}}{2\pi}\int\frac{d\omega_{q}}{2\pi}\left(-2iJ^{y}_{\uparrow,21}({\boldsymbol{k}})J^{x}_{\downarrow}({\boldsymbol{q}})W^{2}+2iJ^{y}_{\uparrow,12}({\boldsymbol{k}})J^{x}_{\downarrow}({\boldsymbol{q}})\overline{W}^{2}\right)\frac{1}{\omega^{2}}$
(49)
$\displaystyle\frac{1}{\omega_{q}-{\epsilon_{\downarrow}}({\boldsymbol{q}})+i0^{+}}\frac{1}{\omega_{q}-\omega-{\epsilon_{\downarrow}}({\boldsymbol{q}})+i0^{+}}\frac{1}{\omega_{k}-{\epsilon_{\uparrow,1}}({\boldsymbol{k}})-i0^{+}}\frac{1}{\omega+\omega_{k}-{\epsilon_{\uparrow,2}}({\boldsymbol{k}})+i0^{+}}\frac{1}{\omega+\omega_{k}-\omega_{q}-{\epsilon_{\uparrow,2}}({\boldsymbol{k}}-{\boldsymbol{q}})+i0^{+}}\bigg{\\}}\
.$
Here $W$ is shorthand for the proper interaction matrices (cf. Eq. (10)). The
third propagator is advanced (it corresponds to a majority hole) and has a
$-i0^{+}$ term in the denominator; the other propagators are retarded.
Performing the $\omega_{k},\omega_{q}$ integrals yields
$\displaystyle D=-n_{\downarrow}g^{2}E^{2}A_{0}\
\int\frac{d{\boldsymbol{k}}}{(2\pi)^{2}}\frac{d{\boldsymbol{q}}}{(2\pi)^{2}}\int_{>0}d\omega\
\text{Im}\
\bigg{\\{}\left(-2iJ^{y}_{\uparrow,21}({\boldsymbol{k}})J^{x}_{\downarrow}({\boldsymbol{q}})W^{2}+2iJ^{y}_{\uparrow,12}({\boldsymbol{k}})J^{x}_{\downarrow}({\boldsymbol{q}})\overline{W}^{2}\right)\frac{1}{\omega^{2}}$
(50)
$\displaystyle\frac{1}{\omega+{\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}}-{\boldsymbol{q}})-{\epsilon_{\downarrow}}({\boldsymbol{q}})+i0^{+}}\frac{1}{{\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\downarrow}}({\boldsymbol{q}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}}-{\boldsymbol{q}})+i0^{+}}\frac{1}{\omega+{\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}})+i0^{+}}\bigg{\\}}\
.$
The expression involving the currents is real, and the imaginary part comes
from the propagators only. They yield a sum of two delta-functions, since the
propagator in the middle is real. Computing the $\omega$-integral, after some
trivial algebra one then finds
$\displaystyle D=2\pi E^{2}A_{0}\cdot\left(-2g^{2}n_{\downarrow}\right)\int\frac{d{\boldsymbol{k}}}{(2\pi)^{2}}\frac{d{\boldsymbol{q}}}{(2\pi)^{2}}\
\text{Im}\left\\{J^{y}_{\uparrow,12}({\boldsymbol{k}})J_{\downarrow}^{x}({\boldsymbol{q}})W^{2}\right\\}\frac{2{\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}}-{\boldsymbol{q}})-{\epsilon_{\downarrow}}({\boldsymbol{q}})}{({\epsilon_{\uparrow,2}}({\boldsymbol{k}})-{\epsilon_{\uparrow,1}}({\boldsymbol{k}}))^{2}({\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}}-{\boldsymbol{q}})-{\epsilon_{\downarrow}}({\boldsymbol{q}}))^{3}}\
,$ (51)
which is precisely $2\pi E^{2}A_{0}$ times the
${\sigma_{\downarrow\uparrow}}$-contribution of the “direct” diagram, cf.
(10). Evaluation of the other non-vanishing drag diagrams (crossed diagram and
diagrams with $J_{\downarrow}^{y},J_{\uparrow}^{x}$ interchanged) proceeds in
the same manner.
Since diagrams where both external field lines couple to the impurity vanish
when forming $\Delta\Gamma_{\downarrow}$, the only remaining non-zero diagrams
are those of Fig. 9(a), 9(b) plus those with inverted directions of the
external field lines. After some straightforward simplifications, one finds a
total contribution
$\displaystyle\frac{n_{\downarrow}g^{2}}{(2\pi)^{4}}4E^{2}A_{0}\
\int_{0}^{\infty}\frac{d\omega}{\omega^{2}}\int
d{\boldsymbol{k}}d{\boldsymbol{q}}\
\text{Im}\left[J_{\uparrow,21}^{x}({\boldsymbol{k}}-{\boldsymbol{q}})J^{y}_{\uparrow,12}({\boldsymbol{k}})W_{11}({\boldsymbol{k}},-{\boldsymbol{q}},-{\boldsymbol{q}})W_{22}({\boldsymbol{k}}-{\boldsymbol{q}},{\boldsymbol{{0}}},{\boldsymbol{q}})\right]$
(52)
$\displaystyle\text{Im}\bigg{\\{}(-1)\frac{1}{-\omega+{\epsilon_{\uparrow,2}}({\boldsymbol{k}}-{\boldsymbol{q}})-{\epsilon_{\uparrow,1}}({\boldsymbol{k}}-{\boldsymbol{q}})-i0^{+}}\frac{1}{\omega+{\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}})+i0^{+}}$
$\displaystyle\cdot\left(\frac{1}{\omega-{\epsilon_{\uparrow,2}}({\boldsymbol{k}})+{\epsilon_{\uparrow,1}}({\boldsymbol{k}}-{\boldsymbol{q}})-{\epsilon_{\downarrow}}({\boldsymbol{q}})+i0^{+}}+\frac{1}{\omega+{\epsilon_{\uparrow,1}}({\boldsymbol{k}})-{\epsilon_{\uparrow,2}}({\boldsymbol{k}}-{\boldsymbol{q}})-{\epsilon_{\downarrow}}({\boldsymbol{q}})+i0^{+}}\right)\bigg{\\}}\
.$
It is readily seen that this expression is invariant under
${\epsilon_{\uparrow,1}}\leftrightarrow-{\epsilon_{\uparrow,2}}$, which
implies the particle-hole symmetry claimed in the main text. We have also
checked this symmetry explicitly for the Haldane model by numerically
implementing Eq. (52).
# The puzzle of bicriticality in the XXZ antiferromagnet
Amnon Aharony <EMAIL_ADDRESS> and Ora Entin-Wohlman <EMAIL_ADDRESS>
School of Physics and Astronomy, Tel Aviv University, Tel Aviv 6997801, Israel
###### Abstract
Renormalization-group theory predicts that the XXZ antiferromagnet in a
magnetic field along the easy Z-axis has asymptotically either a tetracritical
phase-diagram or a triple point in the field-temperature plane. Neither
experiments nor Monte Carlo simulations produce such phase diagrams. Instead,
they find a bicritical phase diagram. Here this discrepancy is resolved: after
generalizing a ubiquitous condition identifying the tetracritical point, we
employ new renormalization-group recursion relations near the isotropic fixed
point, exploiting group-theoretical considerations and using accurate
exponents at three dimensions. These show that the experimental and simulation
results can only be understood if their trajectories flow towards the
fluctuation-driven first order transition (and the associated triple point),
but reach this limit only for prohibitively large system sizes or correlation
lengths. In the crossover region one expects a bicritical phase diagram, as
indeed is observed. A similar scenario may explain puzzling discrepancies
between simulations and renormalization-group predictions for a variety of
other phase diagrams with competing order parameters.
Introduction. Natural systems show behaviors ascribed to fluctuations on many
length scales (e.g., critical phenomena, fully-developed turbulence, quantum
field-theory, the Kondo effect, and polymers described by self-avoiding
walks). These behaviors can be treated by the renormalization group (RG)
theory [1, 2, 3]: gradually eliminating short-range details, during which the
system size $L$ and the correlation length $\xi$ rescale to $L\rightarrow
L(\ell)=L/e^{\ell}$ and $\xi\rightarrow\xi(\ell)=\xi/e^{\ell}$ ($\ell$ is the
number of RG iterations), the parameters characterizing the system can ‘flow’
to a ‘stable’ fixed point (FP), which determines universal power-laws
describing physical quantities. Varying the parameters can lead to an
instability of a FP (with one or more parameters becoming ‘relevant’ and
‘flowing’ away from it, as $e^{\lambda\ell}$, with a positive ‘stability
exponent’ $\lambda$), generating transitions between different universality
classes. Although in most cases the predictions of the RG have been confirmed
experimentally and/or by numerical simulations, some puzzling discrepancies
still await explanations. Here we resolve one such puzzle, involving the phase
transitions between competing ordered phases. As listed e.g. in Refs. 4 and 5,
phase diagrams with competing order parameters arise in a variety of physical
examples. Some of these are mentioned below, after analyzing the phase diagram
of the anisotropic antiferromagnet in a magnetic field.
A uniaxially anisotropic XXZ antiferromagnet has long-range order (staggered
magnetization) along its easy axis, Z. A magnetic field $H_{\parallel}$ along
that axis causes a spin-flop transition into a phase with order in the
transverse plane, plus a small ferromagnetic order along Z. Experiments [6, 7]
and Monte Carlo simulations on three-dimensional lattices [8, 9, 10] typically
find a bicritical phase diagram in the temperature-field $T-H_{\parallel}$
plane [Fig. 1(a)]: a first-order transition line between the two ordered
phases, and two second-order lines between these phases and the disordered
(paramagnetic) phase, all meeting at a bicritical point. Recently, the spin-
flop transition in XXZ antiferromagnets has raised renewed interest [11],
related to possible spintronic applications of the Seebeck effect near that
transition. Simulations in that paper also seem to find a bicritical phase
diagram.
Figure 1: Possible phase-diagrams for the XXZ antiferromagnet in a
longitudinal magnetic field. (a) Bicritical phase diagram. (b) Tetracritical
phase diagram. (c) Diagram with a triple point. Thick lines - first-order
transitions. Thin lines - second-order transitions. The first-order transition
lines between the ordered phases and the disordered paramagnetic phase end at
tricritical points (small empty circles). After Refs. 12, 14. $T_{1}$ and
$T^{\prime}_{1}$ are the transition lines between the ordered phases and the
paramagnetic phase. $T_{2}$ and $T^{\prime}_{2}$ are the second-order lines
which border the mixed phase.
History. The early RG calculations [4] were based on low-order expansions in
$\epsilon=4-d$, where $d$ is the spatial dimensionality. These calculations
found that the (rotationally invariant) isotropic FP is stable at $d=3$,
yielding asymptotically the bicritical phase diagram. These calculations also
found that the isotropic FP becomes unstable as the total number of spin
components $n$ ($=3$ in our case) increases beyond a threshold $n_{c}(d)$, and
estimated that $n_{c}(3)>3$. For $n>n_{c}(d)$ they found a stable biconical
FP. Had the RG trajectories flowed to that FP, the first-order line between the
two ordered phases would be replaced by an intermediate (mixed) phase, bounded
by two second-order lines, and all four second-order lines would have met at a
tetracritical point [Fig. 1(b)] [4, 12, 13]. In addition, if the system
parameters are initially outside the region of attraction of that FP, the
bicritical point turns into a triple point, and the transitions between the
ordered phases and the disordered paramagnetic phase become first-order near
that point, turning second-order only at finite distances from it [Fig. 1(c)]
[14].
However, the $\epsilon-$expansions diverge, and low-order calculations are not
reliable [15]. One way to overcome this divergence is to use resummation
techniques, e.g., by taking into account the singularities of the series’
Borel transforms [16], and extrapolating the results to $\epsilon=1$. These
yielded three stability exponents for the isotropic FP, $\lambda_{0,2,4}$. The
small exponent $\lambda_{4}$ also describes the (in)stability against a cubic
perturbation [17, 13], and it vanishes at $n=n_{c}(d)$. The same resummation
techniques (carried out on sixth-order $\epsilon-$expansions) have been
applied to the latter problem [18]. The results were compared with a
resummation of the sixth-order perturbative (divergent) expansions in the
original field-theory coefficients at $d=3$ [19], with recent bootstrap
calculations [20], with Monte Carlo simulations [21] and with high-temperature
series (for $\lambda_{0}$) [22]. An updated table of these results appears in
Ref. 20. The agreement between all the techniques indicates the accuracy of
the exponents:
$\displaystyle\lambda_{0}\approx-0.78,\ \ \lambda_{2}\approx-0.55,\ \
\lambda_{4}\approx 0.01.$ (1)
Since $\lambda_{4}>0$, the isotropic fixed point is unstable at $d=3$, and
$n_{c}(3)<3$, contradicting previous estimates [4, 13]. Therefore, as
explained below, the bicritical phase diagram should be replaced by the
tetracritical or the triple one, but neither of these agrees with the
experiments or the simulations.
The field theoretical analysis is based on the Ginzburg-Landau-Wilson (GLW)
Hamiltonian density [4],
$\displaystyle{\cal H}({\bf r})=$
$\displaystyle\big{(}|{\boldsymbol{\nabla}}{\bf
S}|^{2}\big{)}/2+U_{2}+U_{4},$ (2) $\displaystyle U_{2}$
$\displaystyle=g\big{[}|S_{\parallel}|^{2}-|{\bf S}|^{2}/3\big{]},$ (3)
$\displaystyle U_{4}$
$\displaystyle=u_{\parallel}|S_{\parallel}|^{4}+u_{\perp}|{\bf
S}_{\perp}|^{4}+2u_{\times}|S_{\parallel}|^{2}|{\bf S}_{\perp}|^{2},$ (4)
with the local three-component ($n=3$) staggered magnetization, ${\bf S}({\bf
r})\equiv\big{(}S_{\parallel}({\bf r}),{\bf S}_{\perp}({\bf r})\big{)}$. For
$g=0$ and $u_{\parallel}=u_{\perp}=u_{\times}=u$, ${\cal H}$ reduces to the
isotropic Wilson-Fisher Hamiltonian [1, 2, 3], which has an (isotropic) FP at
$u=u^{I}$ [23].
Group theory. A priori, at $g=0$, the stability of the isotropic FP against
symmetry-breaking perturbations requires an analysis of 15 terms in the GLW
Hamiltonian, which are quartic in the spin components,
$S_{\alpha}S_{\beta}S_{\gamma}S_{\delta}$. Group-theoretical arguments showed
that these terms split into subsets of $1+5+9$ terms, and all the terms within
a subgroup have the same stability exponent, listed in Eq. (1) [24, 25, 16,
21, 26]. In our case, [$O(3)\Rightarrow O(1)\bigoplus O(2)$], the three
exponents are associated with the following combinations of quartic terms:
$\displaystyle{\cal P}_{4,0}$ $\displaystyle\equiv|{\bf S}|^{4},\ \ \ {\cal
P}_{4,2}\equiv|{\bf S}|^{4}[x-1/3],$ $\displaystyle{\cal P}_{4,4}$
$\displaystyle\equiv|{\bf S}|^{4}\big{[}x(1-x)-(1+x)/7+2/35\big{]},$ (5)
where $x=S^{2}_{\parallel}/|{\bf S}|^{2}$. The largest (negative) exponent
$\lambda_{0}$ corresponds to the stability within the $O(3)-$symmetric case,
${\cal P}_{4,0}$. In our case, the exponent $\lambda_{2}$ corresponds the a
term which splits the $O(3)$ isotropic symmetry group into $O(1)\bigoplus
O(2)$. Similar to $U_{2}$, ${\cal P}_{4,2}$ ‘prefers’ ordering of
$S_{\parallel}$ or of ${\bf S}_{\perp}$. The smallest exponent $\lambda_{4}$
describes the crossovers away from the isotropic FP, towards either the
biconical or the cubic FP. Writing the quartic terms as
$\displaystyle U_{4}=(u^{I}+p_{0}){\cal P}_{4,0}+p_{2}{\cal
P}_{4,2}-p_{4}{\cal P}_{4,4},$ (6)
with arbitrary coefficients $p_{i}$, $i=0,2,4$ (which vanish at the isotropic
FP), implies the linear recursion relations near the isotropic FP,
$\displaystyle dp_{i}/d\ell\approx\lambda_{i}p_{i}\ \ \ \ \Rightarrow\ \ \
p_{i}(\ell)=p_{i}(0)e^{\lambda_{i}\ell}.$ (7)
Finite sizes. The calculations of the stability exponents, Eqs. (1), apply
only in the asymptotic limit, for infinite samples and very close to the
multicritical point, i.e., at very large $\ell$. The explanation of the
experiments (carried out at a finite $\xi$) and simulations (accomplished at a
finite $L$) requires the usage of a finite number of RG iterations,
$\ell=\ell_{f}$, at which the fluctuations have been eliminated: The
renormalized correlation length $\xi(\ell_{f})={\cal O}(1)$, with
$\xi(0)\sim|t|^{-\nu}$ ($t=T/T_{c}-1$ measures the distance from the
transition temperature $T_{c}$, and $\nu\approx 0.711$ is the critical
exponent), or the system size $L(\ell_{f})={\cal O}(1)$ [2] (lengths are
measured in units of the lattice constant). $\ell_{f}$ increases with the
system’s size $L$ (at criticality), or when the initial parameters are closer
to criticality (i.e., a larger initial correlation length). At this stage, one
can solve the problem using the mean-field Landau theory [2]. An analysis of
this situation requires the full RG flow of the system’s Hamiltonian [27].
Such an analysis, based on resummation of (approximate) second-order
$\epsilon-$expansions, was performed by Folk et al. [28]. That paper presented
numerical RG flows in the parameter space, and observed the slow flow close to
the isotropic and biconical FP’s.
Our calculation. This Letter presents a more precise way to perform this
analysis, based on the following steps. (1) Using the stability exponents of
the isotropic FP at three dimensions, Eq. (1), we construct flow recursion
relations near that FP. (2) Equating Eq. (4) with Eq. (6), the initial quartic
parameters $\\{u_{i}\\}$ are expressed in terms of the $p_{i}$’s, with
coefficients true to all orders in $\epsilon$ [see Eq. (11) below]. (3) Since
$p_{0}$ and $p_{2}$ are strongly irrelevant ($\lambda_{0}$ and $\lambda_{2}$
are negative and large [Eq. (1)]) near the isotropic FP, they decay after a
small number $\ell_{1}$ of ‘transient’ RG iterations (irrespective of non-
linear terms in their recursion relations). After that, the RG iterations
continue on a single universal straight line in the three-dimensional
parameter space, given in Eq. (12). In a way, this line generalizes the
concept of universality. (4) On this universal line, Eq. (7) for $p_{4}$
yields a slow flow [as $p_{4}(\ell)\sim e^{\lambda_{4}\ell}$] away from the
isotropic FP for both positive and negative $p_{4}$. The smallness of
$\lambda_{4}$ allows us to expand in powers of $p_{4}$ around the isotropic FP
[instead of the ‘usual’ expansion in all the $u$’s near the Gaussian FP]. To
second order in $p_{4}$ [for $\ell>\ell_{1}$],
$\displaystyle dp_{4}/d\ell=\lambda_{4}p_{4}-Bp^{2}_{4},$ (8)
where the (positive) coefficient $B$ (the only unknown parameter) is
presumably of order $1$. This yields explicit solutions for $p_{4}(\ell)$, Eq.
(13), and typical solutions are shown in Fig. 2. (5) For $p_{4}>0$ the
trajectories flow to the stable biconical FP, and the stability exponents at
that point agree (approximately) with the full calculation in Ref. 16 – adding
credibility to our approximate expansion. On these trajectories the
coefficients are shown to yield a tetracritical phase diagram. (6) For
$p_{4}<0$ the trajectories eventually flow to a fluctuation-driven first-order
transition, which occurs when $p_{4}(\ell)$ crosses the horizontal line in
Fig. 2. In the wide intermediate range of $\ell$, before that crossing, the
parameters yield a bicritical phase diagram. Beyond that crossing, for very
large $\ell$ (corresponding to very large $L$ or $\xi$) the bicritical point
turns into a triple point. The bicritical phase-diagrams observed in the
experiments/simulations apparently occur at this intermediate range.
Figure 2: (color online) The function $p_{4}(\ell-\ell_{1})$ (blue) for $B=1$
and $p_{4}(\ell_{1})=0.3$ and $-0.1$. Below the horizontal (orange) line at
$p_{4}=-35u^{I}/8=-1.75$, the transition becomes first order and the
bicritical point becomes a triple point.
Criteria for tetracriticality. Eliminating the small (non-critical)
paramagnetic moment (generated by $H_{\parallel}$) from the free energy
renormalizes the three $u$’s in Eq. (4), with corrections of order
$H^{2}_{\parallel}$ [4]. Although these corrections are small, so that the new
coefficients remain close to the isotropic $u$, they are important because
they determine the ultimate shape of the phase diagram. The tetracritical
phase diagram [Fig. 1(b)] requires that on the line $g=0$ both order
parameters are non-zero, implying that the mean-field free energy has a
minimum at $0<x<1$ [29]. Presenting Eq. (4) as
$\displaystyle U_{4}=|{\bf
S}|^{4}\big{[}u_{\parallel}x^{2}+u_{\perp}(1-x)^{2}+2u_{\times}x(1-x)\big{]}\
,$ (9)
this minimum is at
$x=(u_{\perp}-u_{\times})/(u_{\parallel}+u_{\perp}-2u_{\times})$, provided
that
$\displaystyle u_{\times}<u_{\parallel}\ \ {\rm and}\ \ u_{\times}<u_{\perp}.$
(10)
These conditions for tetracriticality are more restrictive than the condition
found before, $u_{\parallel}u_{\perp}-u^{2}_{\times}>0$ [4]. When even one of
them is violated, the minimum of $U_{4}$ is at $x=1$ or at $x=0$, implying
that the mixed phase does not exist; it is replaced by a first-order
transition line, as in Figs. 1(a,c).
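As a quick symbolic check of this criterion (a sketch of ours, not from the paper), one can verify that the bracket in Eq. (9) is stationary at the quoted $x$, with curvature $2(u_{\parallel}+u_{\perp}-2u_{\times})$; when both inequalities in Eq. (10) hold, the curvature is positive (a sum of two positive terms) and $0<x<1$, so the minimum is indeed interior:

```python
import sympy as sp

x, u_par, u_perp, u_x = sp.symbols('x u_par u_perp u_x', real=True)
U = u_par * x**2 + u_perp * (1 - x)**2 + 2 * u_x * x * (1 - x)  # bracket of Eq. (9)

x_star = sp.solve(sp.diff(U, x), x)[0]
assert sp.simplify(x_star - (u_perp - u_x) / (u_par + u_perp - 2 * u_x)) == 0

# curvature: 2[(u_par - u_x) + (u_perp - u_x)] > 0 under Eq. (10)
assert sp.simplify(sp.diff(U, x, 2) - 2 * (u_par + u_perp - 2 * u_x)) == 0
```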
Renormalization group. Comparing Eqs. (4) and (6) for $U_{4}$ one finds
$\displaystyle\delta u_{\parallel}=p_{0}+(70p_{2}+24p_{4})/105,$
$\displaystyle\delta u_{\perp}=p_{0}-(35p_{2}-9p_{4})/105,$
$\displaystyle\delta u_{\times}=p_{0}+(35p_{2}-72p_{4})/210,$ (11)
with $\delta u_{i}=u_{i}-u^{I}$. According to Eq. (10), the multicritical
point is tetracritical if both anisotropy parameters
$u_{\parallel}-u_{\times}=p_{2}/2+4p_{4}/7$ and
$u_{\perp}-u_{\times}=-p_{2}/2+3p_{4}/7$ are positive, i.e., when
$|p_{2}(\ell)|<6p_{4}(\ell)/7$. Since $p_{2}(\ell)\approx
p_{2}(0)e^{\lambda_{2}\ell}$ decays rather quickly, and $p_{4}(\ell)$ varies
slowly (see below), this will happen when
$e^{\lambda_{2}\ell}<6p_{4}(0)/[7|p_{2}(0)|]$. Assuming that
$p_{4}(0)[=u_{\parallel}+u_{\perp}-2u_{\times}]$ and
$p_{2}(0)[=2(3u_{\parallel}-4u_{\perp}+u_{\times})/7]$ are small and of the
same order, this happens for a small $\ell<\ell_{1}$. We conclude that the
phase diagram is in fact tetracritical whenever $p_{4}(0)>0$, for practically
all $\ell$, irrespective of the value of $B$. Since the experiments and
simulations do not exhibit this phase diagram, we conclude that they probably
have $p_{4}(0)<0$.
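The coefficients above are exact consequences of Eq. (11); since the algebra is elementary but error-prone, here is a short symbolic check of ours confirming the two anisotropy combinations and the universal line of Eq. (12):

```python
import sympy as sp

p0, p2, p4 = sp.symbols('p0 p2 p4')
du_par  = p0 + (70 * p2 + 24 * p4) / 105    # Eq. (11)
du_perp = p0 - (35 * p2 - 9 * p4) / 105
du_x    = p0 + (35 * p2 - 72 * p4) / 210

# anisotropy combinations entering the tetracriticality condition, Eq. (10)
assert sp.simplify(du_par - du_x - (p2 / 2 + 4 * p4 / 7)) == 0
assert sp.simplify(du_perp - du_x - (-p2 / 2 + 3 * p4 / 7)) == 0

# universal semi-asymptotic line, Eq. (12): set p0 = p2 = 0
on_line = [e.subs({p0: 0, p2: 0}) for e in (du_par, du_perp, du_x)]
assert on_line == [sp.Rational(8, 35) * p4,
                   sp.Rational(3, 35) * p4,
                   sp.Rational(-12, 35) * p4]
```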
To complete the RG analysis, we note that both $p_{0}$ and $p_{2}$ decay
quickly, so there is no need to add higher-order terms for them in Eq. (7).
They can be neglected in Eq. (11) after a transient stage of $\ell_{1}$
iterations [30], and then all the flows continue on the universal semi-
asymptotic line,
$\displaystyle\big{(}\delta u_{\parallel},~{}\delta u_{\perp},~{}\delta
u_{\times}\big{)}=\big{(}8,~{}3,~{}-12\big{)}p_{4}/35.$ (12)
Higher-order terms in the RG recursion relations may turn this line non-linear
[5].
For $\ell>\ell_{1}$ the recursion relation for $p_{4}$, Eq. (8), gives the
solution [5]
$\displaystyle
p_{4}(\ell)=\frac{p_{4}(\ell_{1})e^{\lambda_{4}(\ell-\ell_{1})}}{1+Bp_{4}(\ell_{1})(e^{\lambda_{4}(\ell-\ell_{1})}-1)/\lambda_{4}}.$
(13)
For $p_{4}(\ell_{1})>0$, the flow approaches the biconical FP,
$p_{4}(\ell)\rightarrow p^{B}_{4}=\lambda_{4}/B$, with $p^{B}_{4}\ll 1$ –
justifying stopping the expansion in Eq. (8) at second order [31, 32]. Near
the biconical FP one finds that (to linear order in $p_{4}-p^{B}_{4}$)
$d[p_{4}-p^{B}_{4}]/d\ell=-\lambda_{4}[p_{4}-p^{B}_{4}]$, identifying the
stability exponent at this FP as $\lambda^{B}_{4}=-\lambda_{4}\approx-0.01$,
independent of $B$, and the biconical FP is indeed stable. Within our
approximate recursion relations for $p_{0}$ and $p_{2}$, the other two
exponents approximately remain unchanged,
$\lambda^{B}_{0,2}\approx\lambda_{0,2}$. All three values are close to those
found near the biconical FP by the full sixth-order calculation in Ref. 16,
confirming the validity of our approximate expansion near the isotropic FP.
For $p_{4}(\ell_{1})<0$, Eq. (8) implies that $p_{4}(\ell)$ grows more and
more negative (note: both $B$ and $\lambda_{4}$ were assumed to be positive).
At $\ell=\ell_{f}$, Eq. (10) is not obeyed, the minimum of $U_{4}$ is at
$x=1$, with $U_{4,min}=|{\bf S}|^{4}u_{\parallel}=|{\bf
S}|^{4}[u^{I}+8p_{4}(\ell_{f})/35]$, where we used Eq. (12). This becomes
negative when $p_{4}(\ell_{f})<-35u^{I}/8$. The resummation of the
$\epsilon-$expansion gives $u^{I}\sim 0.4$ [5], leading to $35u^{I}/8\sim
1.75$ [the orange horizontal line in Fig. 2], which is quite large compared to
reasonable values of $p_{4}(\ell_{1})$, and probably out of the region of
applicability of the quadratic approximation which yielded Eq. (13). However,
it may still be reasonable for intermediate values of $\ell$ (e.g.,
$\ell-\ell_{1}<8$ in Fig. 2). Equation (13) diverges at a large
$\ell=\ell_{2}$ [33], and we expect $p_{4}(\ell)$ to cross the value $-1.75$
not very far below $\ell_{2}$. With the parameters used in Fig. 2, the
divergence occurs at
$\ell_{2}-\ell_{1}\sim\log[1-\lambda_{4}/(Bp_{4}(\ell_{1}))]/\lambda_{4}\sim
9.5$, and the transition to first-order occurs at $\ell_{x}-\ell_{1}\sim 9$.
These numbers become smaller for larger values of $Bp_{4}(\ell_{1})$. In this
example, the bicritical point turns into a triple point at $\xi\sim
e^{\ell_{x}}\sim e^{8+9}\sim 10^{7}$, which cannot be reached experimentally.
Even if this approximation is improved, and if $Bp_{4}(0)$ increases (see the
end of the paper), there will still be a wide range of parameters where
experiments and simulations will follow the bicritical phase-diagram. In this
range, the effective exponents near the bicritical point may depend on
$\ell_{f}$ and differ significantly from their isotropic-FP values [5].
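The numbers quoted in this paragraph follow directly from Eq. (13); a minimal numerical sketch of ours (with the Fig. 2 parameters, and the assumption $B=1$) reproduces them:

```python
import numpy as np
from scipy.optimize import brentq

lam4, B, p41 = 0.01, 1.0, -0.1   # Fig. 2 parameters; B = O(1) is an assumption

def p4(x):
    """Closed-form flow of Eq. (13), with x = ell - ell_1."""
    e = np.exp(lam4 * x)
    return p41 * e / (1.0 + B * p41 * (e - 1.0) / lam4)

# divergence of Eq. (13): the denominator vanishes
ell2 = np.log(1.0 - lam4 / (B * p41)) / lam4
# crossing of the first-order threshold p4 = -35 u^I / 8 = -1.75
ellx = brentq(lambda x: p4(x) + 1.75, 0.0, ell2 - 1e-6)
print(round(ell2, 2), round(ellx, 2))   # 9.53 and 8.96, i.e. ~9.5 and ~9
```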
Other examples. Similar phase diagrams pertain to the structural transitions
in uniaxially stressed perovskites, which are described by the cubic model [5,
17, 12]. Similarly to the XXZ antiferromagnet, the almost isotropic SrTiO$_3$
(with $p_{4}\lessapprox 0$) yielded an apparent bicritical phase diagram.
However, the more anisotropic RbCaF$_3$ did yield the diagram of Fig. 1(c), as
predicted by the RG calculations [5].
In reality, cubic anisotropic antiferromagnets are subject to both the
anisotropic and cubic terms, $U_{4}$ and $U_{c}$ (or other crystal-field
terms). In most magnetic cases, the cubic terms are small [7]. Since both
${\cal P}_{4,4}$ and $U_{c}$ scale with the same small exponent $\lambda_{4}$,
we expect the same qualitative flow diagrams as discussed above. However, the
competition (within this subgroup) between the biconical and the cubic FP’s
(which are degenerate at linear order), can only be settled by including
higher-order terms in the RG recursion relations, and still awaits further
analysis. Studies with other crystal symmetries (e.g., tetragonal), and
detailed studies of the sixth-order terms which dominate the fluctuation-
driven tricritical point, also await a detailed analysis (and corresponding
dedicated experiments).
For larger values of $n=n_{1}+n_{2}>3$, the biconical FP becomes unstable,
being replaced by the decoupled FP, at which $u^{D}_{\times}=0$ [34], implying
a tetracritical phase diagram. This has been particularly expected for the
SO(5) theory aimed to describe the competition between superconductivity
($n_{1}=2$) and antiferromagnetism ($n_{2}=3$) in the cuprates [35]. In
contrast, Monte Carlo simulations of this model gave a bicritical phase
diagram, with isotropic $n=5$ critical exponents [36]. Similar results were
reported for the iron pnictides [37]. Assuming that the parameters of these
materials obey $u_{\times}(\ell_{f})\nless
u_{\parallel}(\ell_{f}),~{}u_{\perp}(\ell_{f})$, preferring the bicritical
scenario, and that the RG trajectories stay close to the isotropic FP, could
also resolve that long-standing puzzle.
A very recent experiment [38] studied a critical pressure-temperature phase
diagram with competing ferromagnetic and antiferromagnetic phases; it also
appears to contradict the RG results, which predict for $n_{1}=n_{2}=3$ an
asymptotic decoupled tetracritical phase diagram (or a triple point). It would
be interesting to study the RG trajectories for these experiments.
Competing order parameters, with larger values of $n$, also arise in certain
field-theory models [20, 39], which are similar in structure to the standard
model of particle interactions. It would be interesting to see whether those
theories yield puzzles of the sort discussed here.
Summary. In conclusion, experiments and simulations do not contradict the
renormalization-group predictions. The new system of recursion relations
presented here, which is based on group-theoretical exact coefficients for an
expansion near the isotropic fixed point, clearly shows that the simulations
and experiments are in a crossover regime, between the bicritical point and
the triple point. Our quantitative estimates show that it will probably be
very difficult to reach the triple point experimentally. However, in principle
the renormalization group also supplies intermediate effective exponents [5],
whose measurements can confirm its validity. Dedicated experiments (carried
out on larger samples, at temperatures closer to the multicritical point), and
exploiting a wider range of the initial Hamiltonians, which will allow
increasing $p_{4}(0)$ by moving away from the parameters characterizing the
isotropic fixed point (e.g., by adding single-ion anisotropies [40]), may find
the tetracritical or the triple point, or – at least – detect the variation of
the non-asymptotic (effective) critical exponents.
Acknowledgement: We thank Andrey Kudlis, Walter Selke, David Landau and Andrea
Pelissetto for helpful correspondence.
## References
* [1] K. G. Wilson, The RG and critical phenomena (1982, Nobel Prize Lecture), Rev. Mod. Phys. 55, 583 (1983).
* [2] e.g., M. E. Fisher, Renormalization group theory: Its basis and formulation in statistical physics, Rev. Mod. Phys. 70, 653 (1998).
* [3] C. Domb and M. S. Green, eds., Phase Transitions and Critical Phenomena, Vol. 6 (Academic Press, NY, 1976).
* [4] J. M. Kosterlitz, D. R. Nelson and M. E. Fisher, Bicritical and tetracritical points in anisotropic antiferromagnetic systems, Phys. Rev. B 13, 412 (1976).
* [5] A. Aharony, O. Entin-Wohlman and A. Kudlis, Different critical behaviors in cubic to trigonal and tetragonal perovskites, Phys. Rev. B 105, 104101 (2022).
* [6] A. R. King and H. Rohrer, Spin-flop bicritical point in MnF2, Phys. Rev. B 19, 5864 (1979).
* [7] For a review, see Y. Shapira, Experimental Studies of Bicritical Points in 3D Antiferromagnets, in R. Pynn and A. Skjeltorp, eds., Multicritical Phenomena, Proc. NATO advanced Study Institute series B, Physics; Vol. 6, Plenum Press, NY 1984, p. 35; Y. Shapira, Phase diagrams of pure and diluted low‐anisotropy antiferromagnets: Crossover effects (invited), J. of Appl. Phys. 57, 3268 (1985).
* [8] W. Selke, M. Holtschneider, R. Leidl, S. Wessel, and G. Bannasch, Uniaxially anisotropic antiferromagnets in a field along the easy axis, Physics Procedia 6, 84-94 (2010).
* [9] G. Bannasch and W. Selke, Heisenberg antiferromagnets with uniaxial exchange and cubic anisotropies in a field, Eur. Phys. J. B 69, 439 (2009).
* [10] J. Xu, S.-H. Tsai, D. P. Landau, and K. Binder, Finite-size scaling for a first-order transition where a continuous symmetry is broken: The spin-flop transition in the three-dimensional XXZ Heisenberg antiferromagnet, Phys. Rev. E 99, 023309 (2019).
* [11] Y. Yamamoto, M. Ichioka, and H. Adachi, Antiferromagnetic spin Seebeck effect across the spin-flop transition: A stochastic Ginzburg-Landau simulation, Phys. Rev. B 105, 104417 (2022).
* [12] A. D. Bruce and A. Aharony, Coupled order parameters, symmetry-breaking irrelevant scaling fields, and tetracritical points, Phys. Rev. B 11, 478 (1975).
* [13] A. Aharony, Dependence of universal critical behavior on symmetry and range of interaction, in Ref. 3, p. 357.
* [14] E. Domany, D. Mukamel, and M. E. Fisher, Destruction of first-order transitions by symmetry-breaking fields, Phys. Rev. B 15, 5432 (1977).
* [15] E. Brezin, J. C. Le Guillou, J. Zinn-Justin and B. G. Nickel, Higher order contributions to critical exponents, Phys. Lett. 44A, 227 (1973).
* [16] P. Calabrese, A. Pelissetto, and E. Vicari, Multicritical phenomena in $O(n_{1})\bigoplus O(n_{2})$-symmetric theories, Phys. Rev. B 67, 054505 (2003).
* [17] A. Aharony, Critical behavior of anisotropic cubic systems, Phys. Rev. B 8, 4270 (1973).
* [18] L. T. Adzhemyan, E. V. Ivanova, M. V. Kompaniets, A. Kudlis, and A. I. Sokolov, Six-loop $\epsilon$ expansion study of three-dimensional $n$-vector model with cubic anisotropy, Nucl. Phys. B 940, 332 (2019).
* [19] J. M. Carmona, A. Pelissetto and E. Vicari, N-Component Ginzburg-Landau Hamiltonians with cubic anisotropy: A six-loop study, Phys. Rev. B 61, 15136 (2000).
* [20] S. M. Chester, W. Landry, J. Liu, D. Poland, D. Simmons-Duffin, N. Su and A. Vichi, Bootstrapping Heisenberg magnets and their cubic anisotropy, Phys. Rev. D 104, 105013 (2021).
* [21] M. Hasenbusch and E. Vicari, Anisotropic perturbations in three-dimensional O(N)-symmetric vector models, Phys. Rev. B 84, 125136 (2011).
* [22] P. Butera and M. Comi, Renormalized couplings and scaling correction amplitudes in the $N$-vector spin models on the sc and the bcc lattices, Phys. Rev. B 58, 11552 (1998).
* [23] One way to introduce $u$ in the field theory is to replace the discrete spin ${\bf S}({\bf r})$ by a continuous variable, with a distribution $\exp[-|{\bf S}|^{2}/2-u|{\bf S}|^{4}/2]$. The additional symmetry-breaking deviations, $u_{i}-u$ and $g$, result from applying an external field, or follow from crystal-field potentials.
* [24] F. J. Wegner, Critical Exponents in Isotropic Spin Systems, Phys. Rev. B 6, 1891 (1972). See also F. J. Wegner, The critical state, General Aspects, in Ref. 3, p. 7.
* [25] A. Codello, M. Safari, G. P. Vacca, and O. Zanusso, Critical models with $n\leqq 4$ scalars in $d=4-\epsilon$, Phys. Rev. D 102, 065017 (2020) and references therein.
* [26] A. Pelissetto and E. Vicari, Critical phenomena and renormalization-group theory, Phys. Rep. 368, 542 (2002).
* [27] Describing realistic systems by a finite number of RG iterations is a well-known procedure, see e.g., E. K. Riedel and F. J. Wegner, Effective critical and tricritical exponents, Phys. Rev. B 9, 294 (1974); S. T. Bramwell and P. C. W. Holdsworth, Magnetization: A characteristic of the Kosterlitz-Thouless-Berezinskii transition, Phys. Rev. B 49, 8811 (1994). Our aim here is to apply it for the XXZ phase diagrams.
* [28] R. Folk, Yu. Holovatch, and G. Moser, Field theory of bicritical and tetracritical points. I. Statics, Phys. Rev. E 78, 041124 (2008).
* [29] To identify the mixed phase it is sufficient to find it for $g=0$. Keeping that term allows to calculate the boundaries of the mixed phase in Fig. 1(b), and is important for calculating the crossover exponents which determine the shapes of the lines in the phase diagrams in Fig. 1 [4, 12]. These tasks are beyond the scope of the present paper.
* [30] Assuming that these coefficients can be neglected when their magnitude is smaller than 1/1000, we end up with $\ell_{1}\sim\max[\ln(1000p_{2,4}(0))/\lambda_{2,4}]$. This yields $\ell_{1}\sim 8$ if $p_{0}$ and $p_{2}$ are of order 0.1. These transients then disappear for $\xi$ or $L$ of order $e^{\ell_{1}}\sim e^{8}\sim 3000$.
* [31] The coefficient $B$ can be deduced from $p^{B}_{4}$. Unfortunately, present papers, e.g., Ref. 16, report only the universal exponents, and not the values of the FP parameters. $B$ can also be deduced from a resummation of the quadratic coefficients in the recursion relations near the isotropic FP [5].
* [32] Reference 5 performed a similar analysis for the cubic Hamiltonian, which contains a single anisotropic term, $v\sum_{m=1}^{3}(S_{m})^{4}$. A numerical resummation of the linear scaling fields from the sixth-order $\epsilon-$expansion yielded the semi-asymptotic universal line $u=-0.595v$, while group-theory based arguments give $u=-3v/5$, corroborating the resummation techniques. That calculation also resummed the quadratic terms in the recursion relations, yielding the analog of the coefficient $B$ here.
* [33] The expression in Eq. (13) diverges at $\ell_{2}$, when $1/p_{4}(\ell_{1})+(e^{\lambda_{4}(\ell_{2}-\ell_{1})}-1)/p_{4}^{B}=0$. Therefore, this approximation may not apply for very large $\ell_{f}$. However, the qualitative behavior is expected to remain valid after adding higher orders in the flow equations. It certainly remains true if $\ell_{f}\ll\ell_{2}$. Assuming Eq. (13), $B=1$ and $p_{4}(\ell_{1})=.1$ the divergence happens at $\ell_{2}-\ell_{1}\sim 10$. This value decreases for larger values of $Bp_{4}(\ell_{1})$.
* [34] A. Aharony, Comment on “Bicritical and Tetracritical Phenomena and Scaling Properties of the SO(5) Theory”, Phys. Rev. Lett. 88, 059703 (2002) and references therein. See also Ref. 13.
* [35] E. Demler, W. Hanke, and S.-C. Zhang, SO(5) theory of antiferromagnetism and superconductivity, Rev. Mod. Phys. 76, 909 (2004).
* [36] X. Hu, Bicritical and Tetracritical Phenomena and Scaling Properties of the SO(5) Theory, Phys. Rev. Lett. 87, 057004 (2001).
* [37] R. M. Fernandes and J. Schmalian, Competing order and nature of the pairing state in the iron pnictides, Phys. Rev. B 82, 014521 (2010).
* [38] T. Qian, E. Emmanouilidou, C. Hu, J. C. Green, I. I. Mazin, and N. Ni, Unconventional pressure-driven metamagnetic transitions in topological van der Waals magnets, (arXiv:2203.11925).
* [39] e.g., O. Antipin, J. Bersini, F. Sannino, Z.-W. Wang, and C. Zhang, Untangling scaling dimensions of fixed charge operators in Higgs theories, Phys. Rev. D 103, 125024 (2021) and references therein.
* [40] W. Selke, Multicritical points in the three-dimensional XXZ antiferromagnet with single-ion anisotropy, Phys. Rev. E 87, 014101 (2013) added single-ion anisotropic terms, and found a mixed phase, alas only far below the bicritical point.
[Figure: Sections/figure2.pdf]
Figure : Qualitative examples on iVQA videos. A single frame and a ten-word
summary are generated from the original video for the video question answering
task. The first two examples demonstrate successful cases where both the
visual and textual signals capture the question-relevant information. The last
two examples show failure cases where the visual and/or textual signals are
distracted from the question.
## 1 Experiments
In this section, we provide our experimental results. First we detail our
implementation and the VideoQA datasets; then we provide VideoQA results under
different input sparsities, followed by multi-modal results. Finally, we offer
some qualitative visualizations to analyze our approach.
Table : Effect of the temperature $\tau$. A smaller $\tau$ leads to more exploitation while a higher $\tau$ leads to more exploration. We observe that more explorative selection is beneficial for denser inputs.

| Input Percentage | VLEP $\tau=0.01$ | VLEP $\tau=0.1$ | VLEP $\tau=0.5$ | VIOLIN $\tau=0.01$ | VIOLIN $\tau=0.1$ | VIOLIN $\tau=0.5$ |
|---|---|---|---|---|---|---|
| 10% | 60.25 | 56.01 | 58.94 | 56.25 | 60.57 | 58.80 |
| 30% | 60.95 | 63.52 | 59.13 | 61.72 | 57.64 | 62.34 |
| 50% | 63.05 | 63.64 | 64.30 | 65.57 | 64.48 | 66.06 |
| 70% | 63.73 | 65.14 | 65.32 | 65.94 | 66.52 | 67.06 |
Table : Effect of the balancing weight $\lambda$. $\lambda$ balances the selection loss and the task loss, as specified in eq. LABEL:eq:loss. We report results on the VLEP dataset and observe that $\lambda=1.0$ yields the best balance: a higher $\lambda$ may distract from the task, while a lower $\lambda$ may lead to insufficient sparsification. We pick $\lambda=1.0$ based on this ablation.

| Input Percentage | $\lambda=0.01$ | $\lambda=0.1$ | $\lambda=1.0$ | $\lambda=10.0$ |
|---|---|---|---|---|
| 10% | 59.12 | 59.85 | 60.25 | 59.90 |
| 70% | 65.23 | 65.32 | 65.32 | 65.11 |
Table : Comparison of our two Gumbel variants. Overall, the first variant performs slightly better. The second variant is superior at the highly sparsified level ($10\%$), as it adds more flexibility in individual sparsity levels across different videos.

| Method | $10\%$ | $30\%$ | $50\%$ | $70\%$ |
|---|---|---|---|---|
| Gumbel-TopK Selection | 60.25 | 63.52 | 64.30 | 65.32 |
| Ratio-controlled Gumbel | 61.43 | 63.42 | 63.49 | 65.01 |
[Figure: Sections/qa_curve_full.pdf]
Figure : Sparsified VideoQA results on the VLEP and VIOLIN datasets. Accuracy
at the $100\%$ level refers to the original full-input baseline result. We can
conclude that learnable sparsification is better than fixed sampling
(Uniform), and that stochastic sampling is better than deterministic selection
(TopK). Our Multi-Gumbel estimator achieves the best result overall.
Table : VideoQA results on iVQA. We apply our approach to the state-of-the-art method [Yang_2021_ICCV]. We consider multi-modal sparsification, where we sparsify both the visual (i.e., frames) and textual (i.e., words) inputs. Compared to a single modality, multi-modal performance is stronger at the different sparsification levels. With additional extracted words, we also outperform the state-of-the-art result on iVQA (last column). Rows: textual inputs (words); columns: visual inputs (snippets).

| Textual \ Visual | 0 snippets | 1 snippet | 2 snippets | 5 snippets | 20 snippets |
|---|---|---|---|---|---|
| 0 words | 14.6 (Q-only) | 28.65 | 30.24 | 31.26 | 35.43 [Yang_2021_ICCV] |
| 5 words | 17.5 | 28.68 | 30.31 | 31.70 | 35.43 |
| 10 words | 18.22 | 29.87 | 31.43 | 31.88 | 36.01 |
| 25 words | 20.14 | 30.16 | 31.59 | 32.03 | 36.09 |
| 100 words | 26.75 | 31.47 | 32.11 | 33.21 | 36.42 |
### Implementation Details
To verify our idea, we experimented on two state-of-the-art video-and-language
models, VQA-T [Yang_2021_ICCV] and HERO [li2020hero]. HERO considers
multi-channel videos, where videos come with subtitles as an additional input
channel. It follows a hierarchical transformer architecture that first
exploits the information within the video modalities and contexts, and then
uses a separate task head to perform the task. VQA-T consists of two
DistilBERT models that process the video+question inputs and the answer
candidates, and computes the answer based on embedding similarity. For
extracting the video features, we follow [Yang_2021_ICCV] and use the S3D
model pre-trained on the HowTo100M dataset. For extracting the key-word
candidates, we use the model offered by [Yang_2021_ICCV] and the vocabulary
from the training split of the dataset to extract words/phrases based on
feature similarity.
### Datasets and Metrics
We evaluate our idea on public VideoQA benchmarks including VLEP
[lei2020vlep], VIOLIN [liu2020violin] and iVQA [Yang_2021_ICCV]. For VLEP and
VIOLIN, we follow [li2020hero] to build our method on top of HERO. VLEP and
VIOLIN provide both raw videos and subtitles as inputs. Our selection is then
based on the multi-modal inputs. For iVQA, we follow [Yang_2021_ICCV] to build
our method on top of VQA-T. We report VideoQA accuracies across different
input sparsity levels: $10\%$, $30\%$, $50\%$, $70\%$, and full ($100\%$)
inputs.
### VideoQA Experiments
We present our single-modality sparsified VideoQA results here. First, we
study the design choices of our two multi-Gumbel estimator variants; we then
compare our approach with other token sparsification baselines.

Effect of temperature $\tau$. In our experiments, we found that varying $\tau$
can result in very different performance. We elaborate on this in Table 1 with
our Gumbel-TopK selection variant, where we choose $\tau=(0.01,0.1,0.5)$ for
each sparsity level and fix $\lambda=1.0$. A smaller $\tau$ means the model
focuses more on exploitation, while a larger $\tau$ makes the model focus more
on exploration. On both datasets, a more explorative model gives better
results with denser inputs, while with sparser inputs the model tends to stick
to exploitation.

Effect of the loss balancing weight $\lambda$. We also study how the balancing
weight $\lambda$ affects the performance with our ratio-controlled Gumbel
estimator. In Table 1, we choose $\lambda=0.01,0.1,1.0$ and $10.0$, and report
the results at the highly sparsified ($10\%$) and lightly sparsified ($70\%$)
levels on the VLEP dataset. We fix $\tau=0.01$ for the $10\%$ level and
$\tau=0.5$ for the $70\%$ level. $\lambda$ has a slightly larger impact in the
highly sparsified setting. We observe that $\lambda=1.0$ yields the best
balance and hence choose it for all the other experiments.

Comparison of the two multi-Gumbel variants. We compare the two variants of
the Gumbel estimator for token sparsification at different sparsity levels on
VLEP in Table 1. Our Gumbel-TopK selection variant is better than
ratio-controlled Gumbel overall. Ratio-controlled Gumbel is superior at the
highly sparsified level ($10\%$), as it adds more flexibility in the
individual sparsity levels across different videos.
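To make the Gumbel-TopK variant concrete, the following is a minimal PyTorch sketch of one plausible implementation (ours; the paper's exact estimator, noise handling, and straight-through details may differ). It perturbs the token keeping-scores with Gumbel noise scaled by the temperature $\tau$, keeps the top-$k$ tokens, and uses a straight-through trick so that gradients flow back to the scores:

```python
import torch

def gumbel_topk_mask(scores: torch.Tensor, k: int, tau: float = 0.1) -> torch.Tensor:
    """Stochastically select k of n tokens.

    scores: (batch, n_tokens) unnormalized keeping-scores s_i.
    Returns a (batch, n_tokens) 0/1 mask in the forward pass, with
    gradients taken through the softmax of the perturbed scores.
    """
    # Gumbel(0,1) noise: -log(-log(U)), U ~ Uniform(0,1)
    u = torch.rand_like(scores).clamp_(1e-10, 1.0)
    gumbel = -torch.log(-torch.log(u))
    perturbed = (scores + gumbel) / tau

    # hard top-k mask (forward pass)
    topk = perturbed.topk(k, dim=-1).indices
    hard = torch.zeros_like(scores).scatter_(-1, topk, 1.0)

    # straight-through: forward value is hard, gradient goes through soft
    soft = torch.softmax(perturbed, dim=-1)
    return hard + soft - soft.detach()

# usage: keep 30% of 20 tokens
scores = torch.randn(2, 20, requires_grad=True)
mask = gumbel_topk_mask(scores, k=6, tau=0.1)
```

In this sketch the temperature $\tau$ plays exactly the exploration role discussed above: a small $\tau$ makes the perturbed ranking nearly deterministic (exploitation), while a large $\tau$ makes it noisy (exploration).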
Comparison with other sparsification approaches. To our knowledge, no prior
work has studied the same topic on VideoQA, so there is no direct comparison.
To validate our Multi-Gumbel estimator, we define the following baselines:

1. Uniform (Fixed): fixed uniform sampling of inputs w.r.t. different sparsity levels.
2. TopK: during training, directly select the inputs with the highest keeping probabilities $s_{i}$ after the softmax step, without noise perturbation.
3. Multi-Gumbel (Ours): our approach, which stochastically sparsifies tokens with Gumbel perturbation; we plot the better result of the two variants introduced above.

We show the accuracy vs. density curves in Figure 1. Our Multi-Gumbel approach
achieves the best performance across different sparsity levels. Compared to
learnable selection, fixed uniform sampling is weaker as it does not contain
any form of task-adaptive selection. Direct TopK selection performs worse than
training with stochastic sampling: we observe that deterministic selection
tends toward a locally optimal choice, while our stochastic Multi-Gumbel
approach gains flexibility by adding noise during learning. One noticeable
observation is that, at the $10\%$ level, which corresponds to very few frames
(2 frames for VLEP and 4 frames for VIOLIN), the performance is still quite
good. This implies the potential of accomplishing the task with very few
inputs.
[Figure: Sections/figure4.pdf]
Figure : Frame importance visualization. A darker color means the
corresponding word/frame is more important for predicting the answer. We can
see that the model is able to discard repetitive frames or frames that are not
relevant.
### Multi-modal Sparsification Results on iVQA
In the multi-modal experiments, we would like to study the relation between
the visual and textual modalities under a controlled input setting. To do so,
we extend our learnable selection module to the multi-modal setting following
Section LABEL:sec:multimodal, generating key frames and key words from the
original video inputs. We first get a pool of candidate inputs from the raw
video. The candidate frames are directly sampled from the videos, while the
candidate key words are extracted using a CLIP-based model, which finds the
closest words or phrases by nearest-embedding matching. We use all the phrases
and words from the iVQA training set as the vocabulary to choose words from.
To better demonstrate the results, we use the format of few-word or few-frame
inputs. Visual frame inputs are processed with the same method as before. For
textual inputs, we treat five words as one unit: five words per second is the
average reading speed for adults, so a five-word unit and a single frame can
be thought of as equivalent in consuming user attention. We combine the units
into a sequence and apply the same selection method for word selection. For
the multi-modal setting, we concatenate the frames and word units into a
multi-modal sequence and select from both; a minimal sketch follows below. We
fix $\tau=0.1$ for training the models.

Our results are shown in Table 1. For single-modality inputs, we similarly
observe an increasing performance trend with an increasing number of inputs.
Even with very few inputs, the VideoQA performance is very close to the upper
bound obtained from dense inputs. We also observe a performance boost from
increasing the input density of both modalities at sparsified levels, which
validates the effectiveness of our sparsification technique. In general, the
visual inputs perform better than the textual inputs, mainly because the
visual signals are much more informative. On the other hand, we still observe
a performance increase from adding even very few inputs of the other modality;
for example, adding only five words to the visual snippet still yields some
performance gain. This indicates that the modalities are complementary, even
from the perspective of strictly controlled inputs. Noticeably, as an
intermediate output of our learnable selection, we obtain a few-frame and
few-word summarization of the original video which is human-interpretable. We
provide more examples and analysis in the following section to demonstrate
this advantage.
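As referenced above, here is a minimal sketch of the multi-modal sequence construction (ours; the function name and the mean-pooling of word units are illustrative assumptions, not necessarily the paper's exact design):

```python
import numpy as np

def build_multimodal_sequence(frame_feats: np.ndarray, word_feats: np.ndarray) -> np.ndarray:
    """Pack words into 5-word units and concatenate them with frames.

    frame_feats: (n_frames, d) candidate frame embeddings.
    word_feats:  (n_words, d) candidate word embeddings.
    Returns a (n_frames + n_units, d) joint sequence; the same learnable
    selection module is then applied to this concatenated sequence.
    """
    n_units = word_feats.shape[0] // 5
    # one 5-word unit ~ one frame in user attention; pool each unit
    units = word_feats[: n_units * 5].reshape(n_units, 5, -1).mean(axis=1)
    return np.concatenate([frame_feats, units], axis=0)
```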
### Qualitative Analysis
Here we provide visualizations of the selected frames and/or key words from
the iVQA dataset. For illustration purposes, we present the results for
single-frame selection and 10-word extraction in the qualitative-examples
figure above, along with the associated questions and the predicted answers.
An answer in green means the system predicted it correctly, while red means
the system predicted a wrong answer, with the ground truth given in
parentheses. In the successful cases, the sparsified output captures a frame
appropriate to the topic, and the extracted text also contains words related
to the answer, which leads to a correct prediction. In the first failure case,
even though the selected frame contains information related to the answer
“shirt”, the textual component is a distraction, and the system generates an
answer more related to the key words, which closely describe pipes. In the
second failure case, the generated key frame and key words are both irrelevant
to the question. This is probably because the question asks about something
minor (most of the video content concerns the architecture and surroundings),
while the model is trained to extract the information of major interest for
the overall dataset and task.

Additionally, we analyze the token importance using the tool provided by
[Chefer_2021_CVPR], which calculates the importance score of each input token
w.r.t. the task prediction. In Figure 1, we provide visualization examples
where question words and video frame inputs are highlighted according to their
importance scores. For illustration purposes, we only sample 10 frames per
example. Words or frames in darker green contribute more to the prediction.
From the given examples, we can see that not every video frame is of
significant importance. The model is able to discard frames that do not
contain any useful information for the question (e.g., in the second example,
only the frames showing the fingers contribute). On the other hand, in an
example where the scene is relatively stable, the model focuses mostly on one
of the similar frames (as in the third example), while the remaining frames
contribute little. These observations show the potential of dropping
unnecessary video inputs to improve efficiency, which validates our
motivation.
# Non-normable spaces of analytic functions
Iván Jiménez, Departamento de Matemáticas, Universidad Autónoma de Madrid,
28049 Madrid, Spain <EMAIL_ADDRESS>
Dragan Vukotić, Departamento de Matemáticas, Universidad Autónoma de Madrid,
28049 Madrid, Spain <EMAIL_ADDRESS>
(Date: 01 August, 2024.)
###### Abstract.
For each value of $p$ such that $0<p<1$, we give a specific example of two
functions in the Hardy space $H^{p}$ and in the Bergman space $A^{p}$ that do
not satisfy the triangle inequality. For Hardy spaces, this provides a much
simpler proof than the one due to Livingston that involves abstract functional
analysis arguments and an approximation theorem. For Bergman spaces, we have
not been able to locate any examples or proofs in the existing literature.
###### Key words and phrases:
Hardy spaces, Bergman spaces, normable spaces
###### 2020 Mathematics Subject Classification:
30H05
## 1\. Introduction
It is well known that, for a given positive measure $\mu$, the Lebesgue space
$L^{p}(X,\mu)$ is a complete (Banach) space when equipped with the usual norm
$\|f\|_{p}=\left(\int_{X}|f|^{p}\,d\mu\right)^{1/p}$ whenever $1\leq
p<\infty$, while the above expression $\|\cdot\|_{p}$ in general does not
define a norm when $0<p<1$ since it fails to satisfy the triangle inequality.
The standard Hardy and Bergman spaces of analytic functions in the unit disk
${\mathbb{D}}$, denoted respectively by $H^{p}$ and $A^{p}$, can be seen as
closed subspaces of the spaces $L^{p}({\mathbb{T}},dm)$ and
$L^{p}({\mathbb{D}},dA)$, where ${\mathbb{T}}$ denotes the unit circle,
$dm(\theta)=d\theta/(2\pi)$ the normalized arc length measure on it, and
$dA(z)=dx\,dy/\pi$ the normalized area measure on ${\mathbb{D}}$. These spaces
are also complete with respect to their respective $L^{p}$-type norms when
$1\leq p<\infty$, and it is also known that they are not normed space when
$0<p<1$. There are many known monographs or texts that treat Hardy spaces or
Bergman spaces [1, 2, 3, 4, 5, 6, 7, 8, 10] and this fact is mentioned in
passing in many of them.
However, it seems that the proof of this “obvious” fact is not contained in
any of the texts quoted, not even among the exercises. The likely reason is
that this is not so easy to prove in a direct way. The spaces $H^{p}$ and
$A^{p}$ consist of holomorphic functions, which have many rigidity properties
and therefore they cannot be varied in any flexible way, not even on very
small sets. Thus, specific examples are not nearly as easy to construct as in
the context of $L^{p}$ spaces where we have all the freedom of modifying
measurable functions at our pleasure. Also, it is complicated in general to
compute or estimate precisely the norms of functions in such spaces when
$p\neq 2$.
This specific issue was discussed from a different point of view in
Livingston’s paper [9] from the 1950’s. By a well-known theorem of Kolmogorov
from 1934 [11, Theorem 1.39], a topological vector space is normable (has an
equivalent normed topology) if and only if its origin has a convex bounded
neighborhood. It was shown in [9] that, for $0<p<1$, the open unit ball of
$H^{p}$ contains no convex neighborhood of the origin, which implies the non-
normability of the space. To the best of our knowledge, we have not been able
to locate a proof of the analogous well-known fact for $A^{p}$ spaces in the
literature.
It seems useful to have a ‘hard analysis’ proof that the usual expression
$\|\cdot\|_{p}$ is not a norm when $0<p<1$; of course, a ‘soft analysis’
argument seems to be called for in order to prove the stronger fact that
actually no norm can be defined on the same space defining an equivalent
topology. The purpose of this note is to fill this gap in the literature by
giving specific examples of two functions, in both $H^{p}$ and $A^{p}$ spaces
with $0<p<1$, that do not satisfy the triangle inequality. We hope that
graduate students and other young researchers may find useful the examples
given in this note.
## 2\. Preliminary facts
### 2.1. Hardy spaces
Let ${\mathbb{D}}$ denote the unit disc in the complex plane. It is well known
[2, Chapter 1] that for any function $f$ analytic in ${\mathbb{D}}$ and
$0<p<\infty$, the integral means of order $p$ of $f$:
$M_{p}(r;f)=\left(\int_{0}^{2\pi}|f(re^{i\theta})|^{p}\frac{d\theta}{2\pi}\right)^{1/p}$
are increasing functions of $r\in(0,1)$. The Hardy space $H^{p}$ is the set of
all analytic functions in ${\mathbb{D}}$ for which these means have finite
limits: $\|f\|_{H^{p}}=\lim_{r\to 1^{-}}M_{p}(r;f)<\infty$. This is not a true
norm if $0<p<1$ but the same notation and the term “norm” will still be used
in this case. We list several properties of Hardy spaces that will be needed
in the sequel.
It is a well-known fact that $H^{p}$ functions have radial limits:
$\tilde{f}(e^{i\theta})=\lim_{r\to 1^{-}}f(re^{i\theta})$ almost everywhere on
the unit circle and the norm can be computed in terms of these limits (see
[2], [4], or [5]):
$\|f\|_{H^{p}}=\left(\int_{0}^{2\pi}|\tilde{f}(e^{i\theta})|^{p}\frac{d\theta}{2\pi}\right)^{1/p}\,.$
A direct computation shows that the $H^{p}$ norm is invariant under rotations;
in particular, if $g(z)=f(-z)$, then $\|g\|_{H^{p}}=\|f\|_{H^{p}}$.
It is also well known that the composition with the function $z\mapsto z^{2}$
is bounded on any Hardy space. The following useful fact quantifies this.
###### Lemma 1.
Let $f\in H^{p}$ and $h(z)=f(z^{2})$, for $z\in{\mathbb{D}}$. Then $h\in
H^{p}$ and $\|h\|_{H^{p}}=\|f\|_{H^{p}}$.
###### Proof.
Follows by a simple change of variable $t=2\theta$ and periodicity:
$\|h\|_{H^{p}}^{p}=\int_{0}^{2\pi}|\tilde{f}(e^{2i\theta})|^{p}\frac{d\theta}{2\pi}=\frac{1}{2}\int_{0}^{4\pi}|\tilde{f}(e^{it})|^{p}\frac{dt}{2\pi}=\int_{0}^{2\pi}|\tilde{f}(e^{it})|^{p}\frac{dt}{2\pi}=\|f\|_{H^{p}}^{p}\,.$
∎
### 2.2. Bergman spaces
The Bergman “norm” is defined as
(1)
$\|f\|_{A^{p}}^{p}=\int_{\mathbb{D}}|f(z)|^{p}\,dA(z)=\int_{0}^{1}2rM_{p}^{p}(r,f)\,dr\,,$
where $dA$ denotes the normalized Lebesgue area measure on ${\mathbb{D}}$:
$dA(z)=\frac{dx\,dy}{\pi}=\frac{r\,dr\,d\theta}{\pi}\,,\qquad
z=x+iy=re^{i\theta}\,.$
In the special case when $p=2$ and $f(z)=\sum_{n=0}^{\infty}a_{n}z^{n}$
in ${\mathbb{D}}$, using orthogonality, the norm can be computed explicitly in
terms of the Taylor coefficients:
(2) $\|f\|_{A^{2}}^{2}=\sum_{n=0}^{\infty}\frac{|a_{n}|^{2}}{n+1}\,.$
We refer the readers to [3] or [6] for these basic facts.
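As a sanity check of formula (2), the following short numerical sketch (ours, not taken from the references; the sample polynomial is an arbitrary illustrative choice) compares the area integral with the coefficient sum:

```python
# Numerical sanity check of formula (2) for the sample polynomial
# f(z) = 1 + 2z + 3z^2 (an arbitrary illustrative choice).
import numpy as np

a = np.array([1.0, 2.0, 3.0])                 # Taylor coefficients a_0, a_1, a_2

# Right-hand side of (2): sum |a_n|^2 / (n+1) = 1 + 4/2 + 9/3 = 6
rhs = sum(abs(c) ** 2 / (n + 1) for n, c in enumerate(a))

# Left-hand side: integral of |f|^2 over the disk, dA = r dr dtheta / pi
r = np.linspace(0.0, 1.0, 1501)
t = np.linspace(0.0, 2.0 * np.pi, 3001)
z = r[:, None] * np.exp(1j * t[None, :])
f = np.polyval(a[::-1], z)                    # np.polyval wants highest power first
integrand = np.abs(f) ** 2 * r[:, None] / np.pi
lhs = np.trapz(np.trapz(integrand, t, axis=1), r)

print(lhs, rhs)                               # both approximately 6.0
```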
Of course, the expression in (1) is a norm only when $1\leq p<\infty$ but we
shall again use the term “norm” also when $0<p<1$. To show explicitly that
this is not a norm for small exponents $p$ becomes more involved than for the
Hardy spaces. This is due to the lack of boundary values of functions in
$A^{p}$ and to the fact that computing the norm involves integration with
respect to area.
It is readily checked that the Bergman “norm” is invariant under rotations,
hence $f(z)$ and $f(-z)$ have the same norm. Again, it is well known that the
composition with the function $z\mapsto z^{2}$ is bounded on any Bergman
space. The following related exact formula will be useful.
###### Lemma 2.
If $h\in A^{p}$, then
$\int_{\mathbb{D}}|h(z)|^{p}\,dA(z)=2\int_{\mathbb{D}}|h(z^{2})|^{p}|z|^{2}\,dA(z)\,.$
###### Proof.
By the obvious change of variable $2\theta=\varphi$ and periodicity, followed
by another change of variable $r^{2}=\rho$, we obtain
$\displaystyle\int_{\mathbb{D}}|h(z^{2})|^{p}|z|^{2}\,dA(z)$ $\displaystyle=$
$\displaystyle\int_{0}^{1}2r^{3}\int_{0}^{2\pi}|h(r^{2}e^{2i\theta})|^{p}\frac{d\theta}{2\pi}\,dr$
$\displaystyle=$
$\displaystyle\int_{0}^{1}2r^{3}\int_{0}^{2\pi}|h(r^{2}e^{i\varphi})|^{p}\frac{d\varphi}{2\pi}\,dr$
$\displaystyle=$ $\displaystyle\int_{0}^{1}\rho M_{p}^{p}(\rho,h)\,d\rho\,,$
and the statement follows. ∎
The following fact is well known. We include an indication of a proof for the
sake of completeness.
###### Lemma 3.
Let $h(z)=(1-z)^{-\alpha}$, $\alpha>0$. Then $h\in A^{p}$ if and only if
$p\alpha<2$.
###### Proof.
This is easily established by integrating in polar coordinates centered at
$z=1$ rather than at the origin: write $z=1-re^{i\theta}$, where
$-\pi/2<\theta<\pi/2$ and $0<r<2\cos\theta$. The rest is also elementary
calculus. ∎
## 3\. Examples
### 3.1. A Hardy space example
It seems intuitively clear that the nice properties that $H^{p}$ spaces enjoy
should allow us to present simple examples. Actually, it turns out that there
is one single example that works for every $p$ with $0<p<1$.
###### Theorem 4.
Let $0<p<1$. Then the functions $f$ and $g$, defined respectively by
$f(z)=\frac{1+z}{1-z}\,,\quad g(z)=-f(-z)=-\frac{1-z}{1+z}\,,$
both belong to $H^{p}$ but fail to satisfy the triangle inequality for
$\|\cdot\|_{H^{p}}$.
###### Proof.
It is a well-known exercise [2, Chapter 1, Problem 1] that, for
$h(z)=\frac{1}{1-z}$, we have $h\in H^{p}$ if and only if $0<p<1$, and the
same is easily seen for the closely related function $f$. By the basic
properties, $\|f\|_{H^{p}}=\|g\|_{H^{p}}$. Also, a direct computation shows
that
$f(z)+g(z)=\frac{4z}{1-z^{2}}\,.$
Taking into account that $|z|=1$ on the unit circle, as well as Lemma 1 and
the inequality $|1+z|\leq 2$ which is actually strict for all
$z\in{\mathbb{T}}\setminus\\{1\\}$, we obtain that
$\|f+g\|_{H^{p}}=\left\|\frac{4}{1-z^{2}}\right\|_{H^{p}}=4\left\|\frac{1}{1-z}\right\|_{H^{p}}>2\left\|\frac{1+z}{1-z}\right\|_{H^{p}}=2\|f\|_{H^{p}}=\|f\|_{H^{p}}+\|g\|_{H^{p}}\,,$
showing that the triangle inequality fails in this case. ∎
Once discovered, the last example may even look trivial. However, it should be
mentioned that it was not the first example of this kind that we found; the
earlier examples required much more involved calculations and estimates.
Moreover, the “obvious” example one would first think of does not work: for
the function $h$ mentioned in the proof and the related modified function
$k(z)=-h(-z)$, it can be easily checked by a completely similar argument that
we have equality in the triangle inequality:
$\|h+k\|_{H^{p}}=\|h\|_{H^{p}}+\|k\|_{H^{p}}$.
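Both claims are easy to confirm numerically from the boundary values; the following sketch (our own check, with the illustrative choice $p=1/2$) uses a midpoint rule to handle the integrable singularities:

```python
# Numerical check on the boundary, for the sample value p = 1/2: the pair
# (f, g) of Theorem 4 violates the triangle inequality, while the "obvious"
# pair (h, k) saturates it.
import numpy as np

p = 0.5
n = 2_000_000                                      # even n keeps theta away from
theta = (np.arange(n) + 0.5) * (2.0 * np.pi / n)   # 0, pi, 2*pi; the midpoint
z = np.exp(1j * theta)                             # rule avoids the singularities

def hp_norm(vals):
    # H^p "norm" from boundary values: (mean of |.|^p)^(1/p)
    return np.mean(np.abs(vals) ** p) ** (1.0 / p)

f, g = (1 + z) / (1 - z), -(1 - z) / (1 + z)
h, k = 1 / (1 - z), -1 / (1 + z)

print(hp_norm(f + g), hp_norm(f) + hp_norm(g))   # ~5.57 > ~4.00: fails
print(hp_norm(h + k), hp_norm(h) + hp_norm(k))   # ~2.79 = ~2.79: equality
```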
### 3.2. Bergman space examples
In this case, we do not have one single example covering the entire range of
values of the exponent $p$. Instead, we exhibit two different types of
examples, depending on the value of $p$. The first example is a modification
of the Hardy space example given earlier.
###### Theorem 5.
Let $\frac{1}{2}\leq p<1$ and let $\varepsilon$ satisfy $\varepsilon\leq 1$
and $(1-p)/p\leq\varepsilon<2(1-p)/p$. Then the functions $f$ and $g$, given by
$f(z)=\frac{(1+z)^{2-\varepsilon}}{(1-z)^{2+\varepsilon}}\,,\quad
g(z)=-f(-z)=-\frac{(1-z)^{2-\varepsilon}}{(1+z)^{2+\varepsilon}},$
fail to satisfy the triangle inequality for $\|\cdot\|_{A^{p}}$.
###### Proof.
Note that $\varepsilon=1$ if and only if $p=1/2$; otherwise we have a whole
interval to choose the value of $\varepsilon$. Also note that the numerator in
the expression for $f$ is bounded in view of the condition $\varepsilon\leq 1$
while $\varepsilon<2(1-p)/p$ implies that $p(2+\varepsilon)<2$. Thus, $f\in
A^{p}$ by Lemma 3.
We already know that $\|f\|_{A^{p}}=\|g\|_{A^{p}}$. Since
$f(z)+g(z)=\frac{(1+z)^{4}-(1-z)^{4}}{(1-z^{2})^{2+\varepsilon}}=\frac{8z(1+z^{2})}{(1-z^{2})^{2+\varepsilon}}\,,$
the desired inequality $\|f+g\|_{A^{p}}>\|f\|_{A^{p}}+\|g\|_{A^{p}}$ is
equivalent to the statement that $\|f+g\|_{A^{p}}^{p}>2^{p}\|f\|_{A^{p}}^{p}$,
that is,
$2^{3p}\int_{\mathbb{D}}\frac{|z|^{p}|1+z^{2}|^{p}}{|1-z^{2}|^{(2+\varepsilon)p}}\,dA(z)>2^{p}\int_{\mathbb{D}}\frac{|1+z|^{(2-\varepsilon)p}}{|1-z|^{(2+\varepsilon)p}}\,dA(z)=2^{p+1}\int_{\mathbb{D}}\frac{|z|^{2}|1+z^{2}|^{(2-\varepsilon)p}}{|1-z^{2}|^{(2+\varepsilon)p}}\,dA(z)\,,$
by Lemma 2 applied to the function $f$. This is clearly equivalent to
$\int_{\mathbb{D}}\frac{|z|^{p}|1+z^{2}|^{p}\left(2^{2p-1}-|z|^{2-p}|1+z^{2}|^{(1-\varepsilon)p}\right)}{|1-z^{2}|^{(2+\varepsilon)p}}\,dA(z)>0.$
By our choice of $\varepsilon$ and restrictions on $p$, we have
$(2p-1)-(1-\varepsilon)p=p+\varepsilon p-1\geq 0$ and $(1-\varepsilon)p\geq
0$, hence
$2^{2p-1}-|z|^{2-p}|1+z^{2}|^{(1-\varepsilon)p}>2^{2p-1}-2^{(1-\varepsilon)p}\geq
0$
for all $z\in{\mathbb{D}}$, and the desired integral inequality follows. ∎
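The crucial pointwise inequality in the last display can also be inspected numerically; here is a small grid check (ours, with the illustrative admissible choices $p=0.6$ and $\varepsilon=(1-p)/p$):

```python
# Grid check of the pointwise bound used in the proof of Theorem 5,
# for the sample admissible values p = 0.6, eps = (1-p)/p = 2/3.
import numpy as np

p = 0.6
eps = (1.0 - p) / p

x = np.linspace(-0.999, 0.999, 801)
X, Y = np.meshgrid(x, x)
z = (X + 1j * Y).ravel()
z = z[np.abs(z) < 1.0]                        # keep only points of the disk

lhs = 2.0 ** (2.0 * p - 1.0)
rhs = np.abs(z) ** (2.0 - p) * np.abs(1.0 + z ** 2) ** ((1.0 - eps) * p)
print(np.all(lhs > rhs))                      # True: strict positivity on the grid
```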
We now turn to the remaining range of exponents. To this end, the following
simple inequality will be useful.
###### Lemma 6.
If $a$, $b>0$ and $q>1$, then $|a^{q}-b^{q}|\geq|a-b|^{q}$.
###### Proof.
Without loss of generality, we may assume that $a\geq b>0$. Then, writing
$x=a/b$, the inequality reduces to $(x^{q}-1)-(x-1)^{q}\geq 0$, for $x\geq 1$,
and this is easily proved by elementary calculus since the function on the
left-hand side is non-decreasing in $[1,+\infty)$. ∎
Our next example covers the remaining range of values of $p$ and, actually, a
somewhat larger interval of values.
###### Theorem 7.
Let $0<p<\frac{1}{2}$ and define
$f(z)=(1+z)^{4/p}\,,\quad g(z)=-f(-z)=-(1-z)^{4/p}\,,$
choosing the appropriate branch of the complex logarithm so that, say, $\log
1=0$. Then the functions $f$ and $g$ both belong to $A^{p}$ but fail to
satisfy the triangle inequality for $\|\cdot\|_{A^{p}}$.
###### Proof.
Again, it is clear that $\|f\|_{A^{p}}=\|g\|_{A^{p}}$. We can compute this
value by using the formula (2) to obtain
$\|f\|_{A^{p}}^{p}=\int_{\mathbb{D}}|1+2z+z^{2}|^{2}\,dA(z)=1+2+\frac{1}{3}=\frac{10}{3}\,.$
Next, we use the formulas defining the functions $f$ and $g$, the standard
triangle inequality for complex numbers, then employ the elementary inequality
from Lemma 6 with
$q=1/p\,,\quad a=|1+z|^{4}\,,\quad b=|1-z|^{4}\,,$
then some basic algebra of complex numbers, and afterwards express the
integral obtained in polar coordinates and use Fubini’s theorem to obtain:
$\displaystyle\|f+g\|_{A^{p}}^{p}$ $\displaystyle=$
$\displaystyle\int_{\mathbb{D}}|(1+z)^{4/p}-(1-z)^{4/p}|^{p}dA(z)$
$\displaystyle\geq$
$\displaystyle\int_{\mathbb{D}}\left||1+z|^{4/p}-|1-z|^{4/p}\right|^{p}dA(z)$
$\displaystyle\geq$
$\displaystyle\int_{\mathbb{D}}\left||1+z|^{4}-|1-z|^{4}\right|dA(z)$
$\displaystyle=$
$\displaystyle\int_{\mathbb{D}}\left|(1+|z|^{2}+2\operatorname{Re}z)^{2}-(1+|z|^{2}-2\operatorname{Re}z)^{2}\right|dA(z)$
$\displaystyle=$ $\displaystyle
8\int_{\mathbb{D}}(1+|z|^{2})|\operatorname{Re}z|dA(z)$ $\displaystyle=$
$\displaystyle\frac{8}{\pi}\int_{0}^{1}r^{2}(1+r^{2})\,dr\cdot
2\int_{-\pi/2}^{\pi/2}\cos\theta\,d\theta$ $\displaystyle=$
$\displaystyle\frac{2^{8}}{15\pi}$ $\displaystyle>$ $\displaystyle
2^{p}\frac{10}{3}$ $\displaystyle=$
$\displaystyle(\|f\|_{A^{p}}+\|g\|_{A^{p}})^{p}\,,$
whenever $p<\frac{1}{2}$, as is easily seen by computation. Actually, the last
inequality holds for a larger range of values of $p$ which we do not need for
our purpose. ∎
For the sake of simplicity, we have avoided discussing the weighted Bergman
spaces $A^{p}_{\alpha}$ with standard radial weights. It does not appear too
difficult to find related examples in such cases as well.
Acknowledgments. The authors would like to thank Ole F. Brevig and Raymond
Mortini for their interest in the first draft of the paper and for some useful
comments.
## References
* [1] A.E. Djrbashian, F.A. Shamoian, _Topics in the Theory of $A^{p}_{\alpha}$ Spaces_, Teubner-Texte zur Mathematik, Band 105, BSB Teubner, Leipzig 1988.
* [2] P.L. Duren, _Theory of $H^{p}$ Spaces_, Academic Press, New York 1970.
* [3] P.L. Duren and A.P. Schuster, _Bergman Spaces_, Math. Surveys and Monographs Vol. 100, American Mathematical Society, Providence, Rhode Island 2004.
* [4] S.D. Fisher, _Function Theory on Planar Domains_, John Wiley & Sons, New York 1983.
* [5] J. Garnett, _Bounded Analytic Functions_, Academic Press, New York 1981.
* [6] H. Hedenmalm, B. Korenblum, K. Zhu, _Theory of Bergman Spaces_, Graduate Texts in Mathematics, Vol. 199, Springer, New York, Berlin, etc. 2000.
* [7] M. Jevtić, D. Vukotić, M. Arsenović, _Taylor Coefficients and Coefficient Multipliers of Hardy and Bergman-Type Spaces_, RSME Springer Series, Vol. 2, Springer, Cham, Switzerland 2016.
* [8] P. Koosis, _Introduction to $H^{p}$ Spaces_, Second Edition, Cambridge Univ. Press, Cambridge 1999.
* [9] A.E. Livingston, The space $H^{p}$, $0<p<1$, is not normable, Pacific J. Math. 3 (1953), No. 3, 613–616.
* [10] W. Rudin, _Real and Complex Analysis_, Third Edition, McGraw-Hill, New York 1987.
* [11] W. Rudin, _Functional Analysis_, Second Edition, McGraw-Hill, New York 1991.
# $B$ meson anomalies within the triplet vector boson model to the light of
recent measurements from LHCb
J. M. Cabarcas<EMAIL_ADDRESS>Universidad Santo Tomás, Colombia J.
H. Muñoz<EMAIL_ADDRESS>Departamento de Física, Universidad del Tolima,
Código Postal 730006299, Ibagué, Colombia Néstor Quintero
<EMAIL_ADDRESS>Facultad de Ciencias Básicas, Universidad
Santiago de Cali, Campus Pampalinda, Calle 5 No. 62-00, Código Postal 76001,
Santiago de Cali, Colombia Eduardo Rojas<EMAIL_ADDRESS>Departamento de
Física, Universidad de Nariño, A.A. 1175, San Juan de Pasto, Colombia
###### Abstract
The triplet vector boson (TVB) is a simplified new physics model involving
massive vector bosons transforming as a weak triplet vector. Such a model has
been proposed as a combined explanation of the anomalous $b\to
s\mu^{+}\mu^{-}$ and $b\to c\tau\bar{\nu}_{\tau}$ data (the so-called $B$
meson anomalies). In this work, we carry out an updated analysis of the TVB model
by incorporating the most recent 2022 and 2023 LHCb measurements on the lepton
flavor universality ratios $R(D^{(\ast)})={\rm BR}(B\to
D^{(\ast)}\tau\bar{\nu}_{\tau})/{\rm BR}(B\to
D^{(\ast)}\ell^{\prime}\bar{\nu}_{\ell^{\prime}})$, $R(\Lambda_{c})={\rm
BR}(\Lambda_{b}\to\Lambda_{c}\tau\bar{\nu}_{\tau})/{\rm
BR}(\Lambda_{b}\to\Lambda_{c}\mu\bar{\nu}_{\mu})$, and $R_{K^{(\ast)}}={\rm
BR}(B\to K^{(\ast)}\mu^{+}\mu^{-})/{\rm BR}(B\to K^{(\ast)}e^{+}e^{-})$. We
perform a global fit to explore the allowed parameter space by the new data
and all relevant low-energy flavor observables. Our results are confronted
with the recent high-mass dilepton searches at the Large Hadron Collider
(LHC). We find that for a heavy TVB mass of 1 TeV a common explanation of the
$B$ meson anomalies is possible for all data with the recent LHCb measurements
on $R(D^{(\ast)})$, in consistency with LHC constraints. However, this
framework is in strong tension with LHC bounds when one considers all data
along with the world average values (BABAR, Belle, and LHCb) on
$R(D^{(\ast)})$. Future measurements will be required in order to clarify such
a situation. In the end, the implications of our phenomenological analysis of
the TVB model to some known flavor parametrizations are also discussed.
## I Introduction
Over roughly the last ten years, the high-energy physics community has
witnessed discrepancies between experimental measurements and the
Standard Model (SM) calculations in several observables involving $b\to
s\mu^{+}\mu^{-}$ (neutral-current) and $b\to c\tau\bar{\nu}_{\tau}$ (charged-
current) transitions, which provide an important test of lepton flavor
universality (LFU). Such inconsistencies indicate strong signals of LFU
violation (for very recent interesting reviews, see Refs. London:2021lfn ;
Albrecht:2021tul ; Bifani:2018zmi ). For the neutral-current $b\to
s\mu^{+}\mu^{-}$ transition, the ratio of semileptonic decay channels,
$\displaystyle R_{K^{(\ast)}}=\frac{{\rm BR}(B\to
K^{(\ast)}\mu^{+}\mu^{-})}{{\rm BR}(B\to K^{(\ast)}e^{+}e^{-})},$ (1)
provides a test of $\mu/e$ LFU for different dilepton mass-squared range
$q^{2}$ ($q^{2}$ bins). From 2014 to 2021, the LHCb experiment reported the
existence of discrepancies between the SM predictions and the experimental
measurements (low and central $q^{2}$ bins) of $R_{K}$, $R_{K^{\ast}}$,
$R_{K_{S}}$, and $R_{K^{\ast+}}$ Aaij:2014ora ; Aaij:2019wad ; Aaij:2017vbb ;
Aaij:2021vac ; LHCb:2021lvy , hinting toward LFU violation in the $\mu/e$
sector. However, at the end of 2022, an improved LHCb analysis of the ratios
$R_{K^{(\ast)}}$, namely LHCb:2022qnv ; LHCb:2022zom
$\displaystyle R_{K}$ $\displaystyle=$
$\displaystyle\begin{cases}0.994^{+0.090+0.029}_{-0.082-0.027},\ \
q^{2}\in[0.1,1.1]\ {\rm GeV}^{2},\\\ 0.949^{+0.042+0.022}_{-0.041-0.022},\ \
q^{2}\in[1.1,6.0]\ {\rm GeV}^{2},\end{cases}$ (2)
and
$\displaystyle R_{K^{\ast}}$ $\displaystyle=$
$\displaystyle\begin{cases}0.927^{+0.093+0.036}_{-0.087-0.035},\ \
q^{2}\in[0.1,1.1]\ {\rm GeV}^{2},\\\ 1.027^{+0.072+0.027}_{-0.068-0.026},\ \
q^{2}\in[1.1,6.0]\ {\rm GeV}^{2},\end{cases}$ (3)
now shows a good agreement with the SM LHCb:2022qnv ; LHCb:2022zom . In
addition, the CMS experiment has presented a new measurement of the branching
ratio of $B_{s}\to\mu^{+}\mu^{-}$ more consistent with the SM CMS:2022mgd .
Although the tension in the $R_{K^{(\ast)}}$ ratios and ${\rm
BR}(B_{s}\to\mu^{+}\mu^{-})$ has now disappeared, there are still some
discrepancies in the measurements of additional $b\to s\mu^{+}\mu^{-}$
observables, such as angular observables and differential branching fractions
related with $B\to K^{\ast}\mu^{+}\mu^{-}$ and $B_{s}\to\phi\mu^{+}\mu^{-}$
decays Aaij:2013qta ; Aaij:2015oid ; Aaij:2020nrf ; Aaij:2013aln ;
Aaij:2015esa ; Aaij:2020ruw . Within a model-independent effective Hamiltonian
approach and under the hypothesis that New Physics (NP) couples selectively to
the muons, different scenarios with NP operators (dimension-six) have been
surveyed in the literature Aebischer:2019mlg ; Altmannshofer:2021qrr ;
Alguero:2021anc ; Alguero:2019ptt ; Geng:2021nhg ; Hurth:2021nsi ;
Angelescu:2021lln ; Carvunis:2021jga ; London:2021lfn ; Greljo:2022jac ;
Alguero:2023jeh . The most recent global fit analysis Greljo:2022jac ;
Alguero:2023jeh taking into account updated $b\to s\mu^{+}\mu^{-}$ data
(including $R_{K^{(\ast)}}$ by LHCb LHCb:2022qnv ; LHCb:2022zom and ${\rm
BR}(B_{s}\to\mu^{+}\mu^{-})$ by CMS CMS:2022mgd ), showed that the Wilson
coefficient (WC) solution $C^{bs\mu\mu}_{9}=-C^{bs\mu\mu}_{10}$, related with
the operators $(\bar{s}P_{L}\gamma_{\alpha}b)(\bar{\mu}\gamma^{\alpha}\mu)$
and $(\bar{s}P_{L}\gamma_{\alpha}b)(\bar{\mu}\gamma^{\alpha}\gamma_{5}\mu)$,
is still a viable solution to describe the data.
On the other hand, the experimental measurements collected by the BABAR,
Belle, and LHCb experiments on different charged-current $b\to
c\tau\bar{\nu}_{\tau}$ observables, indicate the existence of disagreement
with respect to the SM predictions Lees:2012xj ; Lees:2013uzd ;
Huschle:2015rga ; Sato:2016svk ; Hirose:2017vbz ; Aaij:2015yra ; Aaij:2017deq
; Aaij:2017uff ; Belle:2019rba ; Hirose:2017dxl ; Hirose:2016wfn ;
Abdesselam:2019wbt ; HFLAV:2022pwe ; LHCb2022 ; LHCb:2023zxo ; LHCb2023 ;
HFLAVsummer ; Aaij:2017tyk (see Table 1 for a summary). Regarding the ratios
of semileptonic $B$ meson decays,
$R(D^{(\ast)})=\dfrac{{\rm BR}(B\to D^{(\ast)}\tau\bar{\nu}_{\tau})}{{\rm
BR}(B\to D^{(\ast)}\ell^{\prime}\bar{\nu}_{\ell^{\prime}})},$ (4)
with $\ell^{\prime}=e\ {\rm or}\ \mu$ (the so-called $R(D^{(\ast)})$
anomalies), the LHCb has presented, very recently, the first combined
measurement using Run 1 data (3 ${\rm fb}^{-1}$) with muonic $\tau$ decay reconstruction
LHCb2022 ; LHCb:2023zxo ,
$\displaystyle R(D)_{\rm LHCb22}$ $\displaystyle=$ $\displaystyle 0.441\pm
0.060\pm 0.066,$ (5) $\displaystyle R(D^{\ast})_{\rm LHCb22}$ $\displaystyle=$
$\displaystyle 0.281\pm 0.018\pm 0.024,$ (6)
which show a tension of $1.9\sigma$ with the SM predictions. Additionally, the
LHCb also reported a preliminary measurement of $R(D^{\ast})$ using partial
Run 2 data (2 ${\rm fb}^{-1}$), where the $\tau$ is hadronically reconstructed LHCb2023 .
When combined with Run 1, the result is LHCb2023
$R(D^{\ast})_{\rm LHCb23}=0.257\pm 0.012\pm 0.018,$ (7)
that is compatible with SM at the $\sim 1\sigma$ level. Incorporating these
new LHCb results, the preliminary world average values reported by the Heavy
Flavor Averaging Group (HFLAV) are HFLAVsummer
$\displaystyle R(D)_{\rm HFLAV23}$ $\displaystyle=$ $\displaystyle 0.356\pm
0.029,$ (8) $\displaystyle R(D^{\ast})_{\rm HFLAV23}$ $\displaystyle=$
$\displaystyle 0.284\pm 0.013,$ (9)
that now exceed the SM by $3.2\sigma$. Moreover, the LHCb measurement of the
ratio $R(J/\psi)={\rm BR}(B_{c}\to J/\psi\tau\bar{\nu}_{\tau})/{\rm
BR}(B_{c}\to J/\psi\mu\bar{\nu}_{\mu})$ Aaij:2017tyk also shows tension
($\sim 2\sigma$) with regard to the SM prediction Harrison:2020nrv .
Additional hints of LFU violation in the $b\to c\tau\bar{\nu}_{\tau}$
transition have been obtained in the Belle measurements of the $\tau$ lepton
polarization $P_{\tau}(D^{\ast})$ Hirose:2017dxl ; Hirose:2016wfn and the
longitudinal polarization of the $D^{*}$ meson $F_{L}(D^{\ast})$
Abdesselam:2019wbt related with the channel $\bar{B}\to
D^{\ast}\tau\bar{\nu}_{\tau}$, which also exhibit a deviation from the SM
values Iguro:2022yzr . The tauonic channel
$B_{c}^{-}\to\tau^{-}\bar{\nu}_{\tau}$ has not been measured yet, but indirect
constraints on its branching ratio have been imposed: $<30\%$ Alonso:2016oyd
and $<10\%$ Akeroyd:2017mhr . In Table 1 we summarize the current experimental
measurements and their corresponding SM predictions. We also collect in Table
1 the experimental and theoretical values of the ratio of inclusive decays
$R(X_{c})\equiv{\rm BR}(B\to X_{c}\tau\bar{\nu}_{\tau})/{\rm BR}(B\to
X_{c}\mu\bar{\nu}_{\mu})$, which is generated via the same $b\to
c\tau\bar{\nu}_{\tau}$ transition Kamali:2018bdp . The SM estimation on
$R(X_{c})$ is based on the $1S$ mass scheme and includes nonperturbative
corrections of the order $\mathcal{O}(1/m_{b}^{3})$, while the NP effects took
into account the subleading $\mathcal{O}(1/m_{b})$ corrections Kamali:2018bdp
. The $R(D^{(\ast)})$ anomalies still exhibit the largest deviation. The other
$b\to c\tau\bar{\nu}_{\tau}$ observables also show moderate tension with the
SM, although some of them have large experimental uncertainties (such as
$R(J/\psi)$ and $P_{\tau}(D^{\ast})$), while the ratio $R(X_{c})$ is in
excellent agreement with the SM.
In addition, the LHCb Collaboration has recently released the first
measurement of the ratio of semileptonic $\Lambda_{b}$ baryon decays, namely
LHCb:2022piu
$R(\Lambda_{c})\equiv\dfrac{{\rm
BR}(\Lambda_{b}^{0}\to\Lambda_{c}^{+}\tau^{-}\bar{\nu}_{\tau})}{{\rm
BR}(\Lambda_{b}^{0}\to\Lambda_{c}^{+}\mu^{-}\bar{\nu}_{\mu})}=0.242\pm 0.076,$
(10)
in agreement at the $\sim 1.2\sigma$ level with the most recent SM
calculation, $R(\Lambda_{c})_{\rm SM}=0.324\pm 0.004$ Bernlochner:2018bfn . In
Eq. (10) we have added in quadrature the statistical and systematic
uncertainties, and the external branching ratio uncertainty from the channel
$\Lambda_{b}^{0}\to\Lambda_{c}^{+}\mu^{-}\bar{\nu}_{\mu}$ LHCb:2022piu . It
is interesting to highlight that this new measurement is below the SM value,
pointing to an opposite direction than the current $b\to
c\tau\bar{\nu}_{\tau}$ data (see Table 1). Nevertheless, in order to provide
an overall picture, all the anomalous $b\to c\tau\bar{\nu}_{\tau}$ data must
be taken into account. To the best of our knowledge, the impact of the new
LHCb measurement on $R(\Lambda_{c})$ has recently been studied in a model-
independent way (effective field theory approach) Fedele:2022iib and in the
singlet vector leptoquark model Garcia-Duque:2022tti .
Table 1: Experimental status and SM predictions on observables related to the charged-current transitions $b\to c\ell\bar{\nu}_{\ell}$ ($\ell=\mu,\tau$).
Transition | Observable | Expt. measurement | SM prediction
---|---|---|---
$b\to c\tau\bar{\nu}_{\tau}$ | $R(D)$ | $0.441\pm 0.060\pm 0.066$ (LHCb22) LHCb2022 ; LHCb:2023zxo | 0.298 $\pm$ 0.004 HFLAVsummer
| | $0.356\pm 0.029$ (HFLAV) HFLAVsummer |
| $R(D^{\ast})$ | $0.281\pm 0.018\pm 0.024$ (LHCb22) LHCb2022 ; LHCb:2023zxo | 0.254 $\pm$ 0.005 HFLAVsummer
| | $0.257\pm 0.012\pm 0.018$ (LHCb23) LHCb2023 |
| | $0.284\pm 0.013$ (HFLAV) HFLAVsummer |
| $R(J/\psi)$ | $0.71\pm 0.17\pm 0.18$ Aaij:2017tyk | 0.2582 $\pm$ 0.0038 Harrison:2020nrv
| $P_{\tau}(D^{\ast})$ | $-0.38\pm 0.51^{+0.21}_{-0.16}$ Hirose:2017dxl ; Hirose:2016wfn | $-0.497\pm 0.007$ Iguro:2022yzr
| $F_{L}(D^{\ast})$ | $0.60\pm 0.08\pm 0.035$ Abdesselam:2019wbt | $0.464\pm 0.003$ Iguro:2022yzr
| $R(X_{c})$ | 0.223 $\pm$ 0.030 Kamali:2018bdp | 0.216 $\pm$ 0.003 Kamali:2018bdp
| ${\rm BR}(B_{c}^{-}\to\tau^{-}\bar{\nu}_{\tau})$ | $<10\%$ Akeroyd:2017mhr , $<30\%$ Alonso:2016oyd | $(2.16\pm 0.16)\%$ Gomez:2019xfw
$b\to c\mu\bar{\nu}_{\mu}$ | $R_{D}^{\mu/e}$ | $0.995\pm 0.022\pm 0.039$ Glattauer:2015teq | $0.9960\pm 0.0002$ Becirevic:2020rzi
| $R_{D^{\ast}}^{\mu/e}$ | $0.961\pm 0.050$ Belle:2017rcc | $0.9974\pm 0.0001$ Bobeth:2021lya
Although the $b\to c\tau\bar{\nu}_{\tau}$ data suggest stronger signals of
LFU violation than the $b\to s\mu^{+}\mu^{-}$ data, a combined explanation of
the current data is still desirable. This simultaneous explanation can be
generated by different tree-level heavy mediators with adequate couplings, for
example, charged scalar bosons, extra gauge bosons or leptoquarks (scalar and
vector). For an extensive list of literature, see the theoretical status
report presented in Ref. London:2021lfn . In this work, we will pay particular
attention to the common explanation provides by the so-called Triplet Vector
Boson (TVB) model Calibbi:2015kma ; Bhattacharya:2014wla ; Greljo:2015mma ;
Faroughy:2016osc ; Buttazzo:2017ixm ; Bhattacharya:2016mcc ; Kumar:2018kmr ;
Guadagnoli:2018ojc ; Boucenna:2016wpr ; Boucenna:2016qad 111Let us notice
that in a recent work Capdevila:2020rrl , the TVB model was implemented as an
explanation to the Cabibbo angle anomaly and $b\to s\ell^{+}\ell^{-}$ data.,
in which the SM is extended by including a color-neutral real $SU(2)_{L}$
triplet of massive vectors $W^{\prime}$ and $Z^{\prime}$ that coupled
predominantly to left-handed (LH) fermions from the second- and third-
generations Calibbi:2015kma ; Bhattacharya:2014wla ; Greljo:2015mma ;
Faroughy:2016osc ; Buttazzo:2017ixm ; Bhattacharya:2016mcc ; Kumar:2018kmr ;
Guadagnoli:2018ojc ; Boucenna:2016wpr ; Boucenna:2016qad . The neutral boson
$Z^{\prime}$ is responsible for the $b\to s\mu^{+}\mu^{-}$ data, while the
charged boson $W^{\prime}$ generates the $b\to c\tau\bar{\nu}_{\tau}$ one. We
adopt a phenomenological approach of the TVB model based on the minimal setup
of couplings between the new gauge bosons $Z^{\prime},W^{\prime}$ and LH
fermions of the SM, without specifying the complete UV model. We present an
updated analysis of TVB model (parametric space) by including the new 2022 and
2023 LHCb data on $R_{K^{(\ast)}}$, $R(D^{(\ast)})$, and $R(\Lambda_{c})$. We
also incorporate in our study all relevant flavor observables that are also
affected by this NP model, such as $B_{s}-\bar{B}_{s}$ mixing, neutrino
trident production, LFV decays ($B\to K^{(\ast)}\mu^{\pm}\tau^{\mp}$,
$B_{s}\to\mu^{\pm}\tau^{\mp}$, $\tau\to\mu\phi$,
$\Upsilon(nS)\to\mu^{\pm}\tau^{\mp}$), rare $B$ decays ($B\to
K^{(\ast)}\nu\bar{\nu},B\to K\tau^{+}\tau^{-},B_{s}\to\tau^{+}\tau^{-}$), and
bottomonium LFU ratios. Furthermore, we study the consistency of the allowed
TVB parameter space with the Large Hadron Collider (LHC) bounds from searches
of high-mass dilepton resonances at the ATLAS experiment.
Even though our focus will be phenomenological, any ultraviolet (UV) complete
realization of the TVB model must extend the SM so as to allow for Lepton
Flavor Non Universal (LFNU) couplings to the extra gauge bosons as well as
LFV. In this direction, Ref. Boucenna:2016qad proposes adding an extra
$SU(2)$ gauge group together with extra scalars, new vector-like fermions,
and some nontrivial transformations under the SM group. Clearly, the
couplings of fermions to the extra gauge bosons of a particular UV
realization will have model-dependent consequences that might relate
different terms among them; however, since our approach is phenomenological,
we will start from the most general Lagrangian possible for the TVB model,
and we will compare with other approaches presented in Refs. Calibbi:2015kma ;
Greljo:2015mma ; Buttazzo:2017ixm ; Bhattacharya:2016mcc , where the new
physics couples predominantly to the second and third generations of left-
handed quarks and leptons, ensuring LFNU and LFV through different mechanisms.
Restricting our results to a particular UV model is beyond the scope of this
work.
This paper is structured as follows: in Sec. II we discuss the main aspects of
the TVB model to accommodate the $B$ meson anomalies. Next, in Sec. III, we
consider the most relevant flavor observables and present the TVB model
contributions to them; the LHC bounds are also studied. We then perform our
phenomenological analysis of the allowed parameter space in Sec. IV and
our conclusions are presented in Sec. V.
## II The Triplet Vector boson model
In general, flavor anomalies have been addressed in the current literature
both as a motivation to build innovative models and as a way to test well-
established New Physics (NP) models. In this section, we focus on the
previously mentioned Triplet Vector Boson (TVB) model Bhattacharya:2014wla ;
Calibbi:2015kma ; Kumar:2018kmr ; Faroughy:2016osc ; Greljo:2015mma ;
Bhattacharya:2016mcc ; Buttazzo:2017ixm ; Guadagnoli:2018ojc ;
Boucenna:2016wpr ; Boucenna:2016qad as a possible explanation that might
accommodate the observed flavor experimental results. One significant feature
of this model is the inclusion of extra SM-like vector bosons with nonzero
couplings to the SM fermions, which allow for additional interactions.
In the fermion mass basis, the most general Lagrangian describing the dynamics
of the fields can be written as
$\displaystyle\Delta{\cal
L}_{V}=g_{ij}^{q}(\bar{\Psi}_{iL}^{Q}\gamma^{\mu}\sigma^{I}\Psi^{Q}_{jL})V^{I}_{\mu}+g_{ij}^{\ell}(\bar{\Psi}_{iL}^{\ell}\gamma^{\mu}\sigma^{I}\Psi^{\ell}_{jL})V^{I}_{\mu}$
(11)
where $V_{\mu}$ stands for the new vector bosons, which transform as
(1, 3, 0) under the $SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y}$ gauge
symmetry and combine into the physical states $W^{\prime\pm},\,Z^{\prime}$. On
the other hand, the SM fermions are arranged into the doublets $\Psi^{Q}_{L}$ and
$\Psi^{\ell}_{L}$ given by
$\displaystyle\Psi^{Q}_{L}=\begin{pmatrix}V^{\dagger}u_{L}\cr
d_{L}\end{pmatrix},\qquad\Psi^{\ell}_{L}=\begin{pmatrix}\nu_{L}\cr\ell_{L}\end{pmatrix}.$
(12)
It is worth noticing here that in this particular model the CKM mixing matrix
$V$ is applied on the up-type quarks.
In order to find the effective Lagrangian for this model, the heavy degrees of
freedom corresponding to the vector bosons introduced above must be integrated
out. Introducing the definition for the currents
$J_{Q}=\bar{\Psi}_{iL}^{Q}\gamma^{\mu}\sigma^{I}\Psi^{Q}_{jL}$ and
$J_{\ell}=\bar{\Psi}_{iL}^{\ell}\gamma^{\mu}\sigma^{I}\Psi^{\ell}_{jL}$, the
effective Lagrangian is therefore
$\displaystyle{\cal L}_{eff}$ $\displaystyle=$
$\displaystyle-\frac{(g_{ij}^{q}J_{Q}+g_{ij}^{\ell}J_{\ell})^{2}}{2M^{2}_{V}}$
(13) $\displaystyle=$
$\displaystyle-\frac{(g_{ij}^{q}J_{Q})^{2}}{2M^{2}_{V}}-\frac{g_{ij}^{q}g_{kl}^{\ell}J_{Q}J_{\ell}}{M^{2}_{V}}-\frac{(g_{ij}^{\ell}J_{\ell})^{2}}{2M^{2}_{V}}.$
(14)
The middle term of the right-hand side of the above equation corresponds to
$\displaystyle\frac{g_{ij}^{q}g_{kl}^{\ell}J_{Q}J_{\ell}}{M^{2}_{V}}$
$\displaystyle=$
$\displaystyle\frac{g_{ij}^{q}g_{kl}^{\ell}}{M_{V}^{2}}(\bar{\Psi}^{Q}_{iL}\gamma_{\mu}\sigma^{I}\Psi^{Q}_{jL})(\bar{\Psi}^{\ell}_{kL}\gamma^{\mu}\sigma^{I}\Psi^{\ell}_{lL})$
(15)
Substituting equation (12) into the last expression leads to
$\displaystyle\frac{g_{ij}^{q}g_{kl}^{\ell}J_{Q}J_{\ell}}{M^{2}_{V}}$
$\displaystyle=$ $\displaystyle
2\frac{g_{kl}^{\ell}}{M_{V}^{2}}\left[(Vg^{d})_{ij}(\,\bar{u}_{iL}\gamma_{\mu}d_{jL})(\bar{\ell}_{k}\gamma^{\mu}\nu_{lL})+{\rm
h.c.}\right]$ (16)
$\displaystyle+\frac{g_{kl}^{\ell}}{M_{V}^{2}}\left[(Vg^{d}V^{\dagger})_{ij}(\bar{u}_{iL}\gamma_{\mu}u_{jL})(\bar{\nu}_{kL}\gamma^{\mu}\nu_{lL})+g_{ij}^{d}(\bar{d}_{iL}\gamma_{\mu}d_{jL})(\bar{\ell}_{kL}\gamma^{\mu}\ell_{lL})\right]$
$\displaystyle-\frac{g_{kl}^{\ell}}{M_{V}^{2}}\left[(Vg^{d}V^{\dagger})_{ij}(\bar{u}_{iL}\gamma_{\mu}u_{jL})(\bar{\ell}_{kL}\gamma^{\mu}\ell_{lL})+g_{ij}^{d}(\bar{d}_{iL}\gamma_{\mu}d_{jL})(\bar{\nu}_{kL}\gamma^{\mu}\nu_{lL})\right],$
In this expression, the first term describes an effective interaction of the
SM fields mediated by an extra charged bosonic field, while the remaining
terms are mediated by an extra neutral bosonic field. These mediators are
precisely the vector boson fields ${W^{\prime}}$ and ${Z^{\prime}}$ introduced
in this model, whose masses can naively be taken as (almost) degenerate, as
required by electroweak precision data Faroughy:2016osc . For simplicity, and
without loss of generality, we consider the couplings $g^{q,\ell}$ to be real
in order to avoid CP violation effects. Additionally, the couplings of quarks
to the vector boson fields can be written compactly with an explicit
dependence on the couplings of the down sector, keeping in mind that in the
doublets the CKM matrix acts on the up-type quarks and that the significant
contributions should be restricted to the second and third families. For this
purpose, we keep only the down-sector couplings $g_{bb}$, $g_{ss}$, and
$g_{sb}=g_{bs}$, while all other terms are set to zero. The hypothesis that
the couplings to the first generation of fermions (also in the leptonic
sector) can be neglected has been widely accepted in the literature in the
context of flavor anomaly explanations Calibbi:2015kma ; Kumar:2018kmr ;
Greljo:2015mma ; Faroughy:2016osc ; Bhattacharya:2016mcc ;
Bhattacharya:2014wla ; Buttazzo:2017ixm ; Guadagnoli:2018ojc . Lastly, the
resultant compact form of the quark-sector couplings to the $W^{\prime}$ is
$\displaystyle g_{\alpha b}$ $\displaystyle=$ $\displaystyle g_{bb}V_{\alpha
b}+g_{sb}V_{\alpha s},$ $\displaystyle g_{\alpha s}$ $\displaystyle=$
$\displaystyle g_{ss}V_{\alpha s}+g_{sb}V_{\alpha b},$ (17)
where $\alpha$ stands for $u,c$ or $t$ quark flavors. The same procedure
described above must be implemented for a compact form of the couplings of up-
type quarks to the $Z^{\prime}$ boson. In this case we find two possibilities:
a flavor-conserving interaction given by
$\displaystyle g_{\alpha\alpha}$ $\displaystyle=$ $\displaystyle
g_{bb}V_{\alpha b}^{2}+2g_{sb}V_{\alpha s}V_{\alpha b}+g_{ss}V_{\alpha
s}^{2};$ (18)
the other is related to flavor-changing $Z^{\prime}$ couplings given by
$\displaystyle g_{\alpha\beta}=g_{bb}V_{\beta b}V_{\alpha b}+g_{sb}V_{\beta
s}V_{\alpha b}+g_{sb}V_{\beta b}V_{\alpha s}+g_{ss}V_{\beta s}V_{\alpha s},$
(19)
where $\alpha\neq\beta$ labels $u,c$ or $t$ quark flavors.
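For illustration, the compact couplings of Eqs. (17)-(19) are straightforward to assemble numerically; the sketch below is ours, with placeholder values for $g_{bb}$, $g_{sb}$, $g_{ss}$ and representative CKM magnitudes (none of these numbers are fit results of this work):

```python
# Illustrative construction of the W' and Z' quark couplings of Eqs. (17)-(19).
# The down-sector couplings below are placeholders, not fit results.

# Representative CKM magnitudes (approximate PDG-like values, assumed inputs)
V = {"ub": 0.0037, "us": 0.2245, "cb": 0.0410, "cs": 0.9735,
     "tb": 0.9991, "ts": 0.0388}
g_bb, g_sb, g_ss = 0.2, -0.01, 0.0

for alpha in ("u", "c", "t"):
    Vab, Vas = V[alpha + "b"], V[alpha + "s"]
    g_alpha_b = g_bb * Vab + g_sb * Vas                    # Eq. (17), W' to b
    g_alpha_s = g_ss * Vas + g_sb * Vab                    # Eq. (17), W' to s
    g_alpha_alpha = (g_bb * Vab ** 2 + 2 * g_sb * Vas * Vab
                     + g_ss * Vas ** 2)                    # Eq. (18), Z'
    print(alpha, g_alpha_b, g_alpha_s, g_alpha_alpha)
```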
To close this parametrization, we mention that the first and last terms on
the right-hand side of equation (14) generate the $4q$ and $4\ell$
interactions, governed by the Lagrangian
$\displaystyle{\cal
L}_{NP}^{4q,4\ell}=-\frac{g_{ij}^{q}g_{kl}^{q}}{2M_{V}^{2}}(\bar{\Psi}^{Q}_{iL}\gamma_{\mu}\sigma^{I}\Psi^{Q}_{jL})(\bar{\Psi}^{Q}_{kL}\gamma^{\mu}\sigma^{I}\Psi^{Q}_{lL})-\frac{g_{ij}^{\ell}g_{kl}^{\ell}}{2M_{V}^{2}}(\bar{\Psi}^{\ell}_{iL}\gamma_{\mu}\sigma^{I}\Psi^{\ell}_{jL})(\bar{\Psi}^{\ell}_{kL}\gamma^{\mu}\sigma^{I}\Psi^{\ell}_{lL})$
(20)
### II.1 Other parametrizations
In this subsection, we compare the parameterization introduced above with
others used in representative references in which the TVB model has been
widely studied.
In the TVB model presented in Refs. Calibbi:2015kma ; Bhattacharya:2016mcc ,
the quark mixing pattern is enriched by the inclusion of mixing matrices that
rotate the fields from the gauge basis to the mass basis, together with
projectors ($X,Y$) that ensure the dominance of the second and third families
needed to explain the anomalies. In particular, the explicit forms of these
matrices for the down-type quarks and charged leptons, and of the projectors,
are
$\displaystyle D=\begin{pmatrix}1&0&0\cr 0&\cos\theta_{D}&\sin\theta_{D}\cr
0&-\sin\theta_{D}&\cos\theta_{D}\end{pmatrix},\qquad L=\begin{pmatrix}1&0&0\cr
0&\cos\theta_{L}&\sin\theta_{L}\cr
0&-\sin\theta_{L}&\cos\theta_{L}\end{pmatrix},\qquad
X=Y=\begin{pmatrix}0&0&0\cr 0&0&0\cr 0&0&1\end{pmatrix}.$ (21)
These matrices leave an explicit dependence on the mixing angles
($\theta_{D,L}$) in the couplings to the extra fields, which can then be
constrained by the experimental results on different observables. The
assumptions behind these matrices were previously introduced in
Calibbi:2015kma , and the full equivalence between the notations for the
angles is established by the relations $\theta_{D}=\alpha_{sb}$ and
$\theta_{L}=\alpha_{\mu\tau}$. These couplings can also be translated into
the generic parameterization introduced at the beginning of this section. As
explained before, all quark-sector couplings depend on the couplings of the
down-type quarks; in this parameterization, the way the couplings are
obtained can be illustrated through the effective charged-current Lagrangian,
given as
$\displaystyle{\cal L}_{\rm
eff}^{W^{\prime}}=2\frac{g_{2}^{q}g_{2}^{\ell}}{M_{V}^{2}}\left[(V\,D^{\dagger}X\,D)_{ij}(\,\bar{u}_{iL}\gamma_{\mu}d_{jL})(L^{\dagger}Y\,L)_{kl}(\bar{\ell}_{k}\gamma^{\mu}\nu_{lL})+{\rm
h.c}\right];$ (22)
thus, we obtain the equivalence
$\displaystyle g_{bb}$ $\displaystyle\to$ $\displaystyle
g_{2}^{q}\cos^{2}\theta_{D}$ $\displaystyle g_{sb}$ $\displaystyle\to$
$\displaystyle-g_{2}^{q}\sin\theta_{D}\cos\theta_{D}$ $\displaystyle g_{ss}$
$\displaystyle\to$ $\displaystyle g_{2}^{q}\sin^{2}\theta_{D},$ (23)
and for the leptonic sector
$\displaystyle g_{\tau\tau}$ $\displaystyle\to$ $\displaystyle
g_{2}^{\ell}\cos^{2}\theta_{L}$ $\displaystyle g_{\mu\tau}$ $\displaystyle\to$
$\displaystyle-g_{2}^{\ell}\sin\theta_{L}\cos\theta_{L}$ $\displaystyle
g_{\mu\mu}$ $\displaystyle\to$ $\displaystyle g_{2}^{\ell}\sin^{2}\theta_{L}.$
(24)
The comparison and equivalence among parameterizations of different
influential references can be found in Tables 2, 3, 4 and 5.
For our last comparison, we consider the parameterization given in Refs.
Greljo:2015mma ; Buttazzo:2017ixm , where the couplings to the vector bosons
have almost the same structure as in the initial parameterization presented
here; its main difference lies in the dependence on flavor matrices denoted
by the authors as $\lambda_{ij}^{(q,\ell)}$. The impact of this flavor
structure on the model can be shown using the charged-current effective
Lagrangian, as we did before,
$\displaystyle{\cal L}_{\rm
eff}^{W^{\prime}}=\frac{g_{q}g_{\ell}}{2M_{V}^{2}}\left[(V\,\lambda)_{ij}(\,\bar{u}_{iL}\gamma_{\mu}d_{jL})(\bar{\ell}_{k}\gamma^{\mu}\nu_{lL})+{\rm
h.c}\right],$ (25)
to obtain the desired dominance of the couplings to the second and third
families, the $\lambda_{ij}$ belonging to the first family must be set to
zero; additionally, one sets $\lambda_{bb}=\lambda_{\tau\tau}=1$ in order to
maximize their contribution. As an illustration, Tables 2, 3, 4 and 5 relate,
without any assumption, the flavor matrices to the construction of the
couplings for the quark sector.
Table 2: Couplings to the $W^{\prime}$ boson in different parameterizations of the TVB model.
Coupling | Parameterization in Kumar:2018kmr | Parameterization in Calibbi:2015kma ; Bhattacharya:2016mcc | Parameterization in Greljo:2015mma ; Buttazzo:2017ixm
---|---|---|---
$g_{ub}^{q}$ | $g_{bb}V_{ub}+g_{sb}V_{us}$ | $g_{2}^{q}(V_{ub}\cos^{2}\theta_{d}-V_{us}\cos\theta_{d}\sin\theta_{d})$ | $g_{q}(V_{ub}+V_{ud}\lambda_{db}+V_{us}\lambda_{sb})/\sqrt{2}$
$g_{cb}^{q}$ | $g_{bb}V_{cb}+g_{sb}V_{cs}$ | $g_{2}^{q}(V_{cb}\cos^{2}\theta_{d}-V_{cs}\cos\theta_{d}\sin\theta_{d})$ | $g_{q}(V_{cb}+V_{cd}\lambda_{db}+V_{cs}\lambda_{sb})/\sqrt{2}$
$g_{tb}^{q}$ | $g_{bb}V_{tb}+g_{sb}V_{ts}$ | $g_{2}^{q}(V_{tb}\cos^{2}\theta_{d}-V_{ts}\cos\theta_{d}\sin\theta_{d})$ | $g_{q}(V_{tb}+V_{td}\lambda_{db}+V_{ts}\lambda_{sb})/\sqrt{2}$
$g_{us}^{q}$ | $g_{ss}V_{us}+g_{sb}V_{ub}$ | $g_{2}^{q}(V_{us}\sin^{2}\theta_{d}-V_{ub}\cos\theta_{d}\sin\theta_{d})$ | $g_{q}(V_{ud}\lambda_{ds}+V_{ub}\lambda_{sb}+V_{us}\lambda_{ss})/\sqrt{2}$
$g_{cs}^{q}$ | $g_{ss}V_{cs}+g_{sb}V_{cb}$ | $g_{2}^{q}(V_{cs}\sin^{2}\theta_{d}-V_{cb}\cos\theta_{d}\sin\theta_{d})$ | $g_{q}(V_{cd}\lambda_{ds}+V_{cb}\lambda_{sb}+V_{cs}\lambda_{ss})/\sqrt{2}$
$g_{ts}^{q}$ | $g_{ss}V_{ts}+g_{sb}V_{tb}$ | $g_{2}^{q}(V_{ts}\sin^{2}\theta_{d}-V_{tb}\cos\theta_{d}\sin\theta_{d})$ | $g_{q}(V_{td}\lambda_{ds}+V_{tb}\lambda_{sb}+V_{ts}\lambda_{ss})/\sqrt{2}$
Table 3: Flavor-conserving couplings to the $Z^{\prime}$ boson in different parameterizations of the TVB model.
Coupling | Parameterization in Kumar:2018kmr | Parameterization in Calibbi:2015kma ; Bhattacharya:2016mcc | Parameterization in Greljo:2015mma ; Buttazzo:2017ixm
---|---|---|---
$g_{uu}^{q}$ | $g_{bb}V_{ub}^{2}+2g_{sb}V_{us}V_{ub}+g_{ss}V_{us}^{2}$ | $g_{2}^{q}(V_{ub}^{2}\cos^{2}\theta_{d}-2V_{us}V_{ub}\cos\theta_{d}\sin\theta_{d}+V_{us}^{2}\sin^{2}\theta_{d})$ | $g_{q}\lambda_{uu}/\sqrt{2}$
$g_{cc}^{q}$ | $g_{bb}V_{cb}^{2}+2g_{sb}V_{cs}V_{cb}+g_{ss}V_{cs}^{2}$ | $g_{2}^{q}(V_{cb}^{2}\cos^{2}\theta_{d}-2V_{cs}V_{cb}\cos\theta_{d}\sin\theta_{d}+V_{cs}^{2}\sin^{2}\theta_{d})$ | $g_{q}\lambda_{cc}/\sqrt{2}$
$g_{tt}^{q}$ | $g_{bb}V_{tb}^{2}+2g_{sb}V_{ts}V_{tb}+g_{ss}V_{ts}^{2}$ | $g_{2}^{q}(V_{tb}^{2}\cos^{2}\theta_{d}-2V_{ts}V_{tb}\cos\theta_{d}\sin\theta_{d}+V_{ts}^{2}\sin^{2}\theta_{d})$ | $g_{q}\lambda_{tt}/\sqrt{2}$
Table 4: Flavor-changing couplings to the $Z^{\prime}$ boson in different parameterizations of the TVB model.
Coupling | Parameterization in Kumar:2018kmr | Parameterization in Calibbi:2015kma ; Bhattacharya:2016mcc | Parameterization in Greljo:2015mma ; Buttazzo:2017ixm
---|---|---|---
$g_{uc}^{q}$ | $g_{bb}V_{cb}V_{ub}+g_{sb}V_{cs}V_{ub}$ | $g_{2}^{q}V_{cb}V_{ub}\cos^{2}\theta_{d}-g_{2}^{q}V_{cs}V_{ub}\cos\theta_{d}\sin\theta_{d}$ | $g_{q}\lambda_{uc}/\sqrt{2}$
| $+g_{sb}V_{cb}V_{us}+g_{ss}V_{cs}V_{us}$ | $-g_{2}^{q}V_{cb}V_{us}\cos\theta_{d}\sin\theta_{d}+g_{2}^{q}V_{cs}V_{us}\sin^{2}\theta_{d}$ |
$g_{ut}^{q}$ | $g_{bb}V_{tb}V_{ub}+g_{sb}V_{ts}V_{ub}$ | $g_{2}^{q}V_{tb}V_{ub}\cos^{2}\theta_{d}-g_{2}^{q}V_{ts}V_{ub}\cos\theta_{d}\sin\theta_{d}$ | $g_{q}\lambda_{ut}/\sqrt{2}$
| $+g_{sb}V_{tb}V_{us}+g_{ss}V_{ts}V_{us}$ | $-g_{2}^{q}V_{tb}V_{us}\cos\theta_{d}\sin\theta_{d}+g_{2}^{q}V_{ts}V_{us}\sin^{2}\theta_{d}$ |
$g_{ct}^{q}$ | $g_{bb}V_{cb}V_{tb}+g_{sb}V_{cs}V_{tb}$ | $g_{2}^{q}V_{cb}V_{tb}\cos^{2}\theta_{d}-g_{2}^{q}V_{cs}V_{tb}\cos\theta_{d}\sin\theta_{d}$ | $g_{q}\lambda_{ct}/\sqrt{2}$
| $+g_{sb}V_{cb}V_{ts}+g_{ss}V_{cs}V_{ts}$ | $-g_{2}^{q}V_{cb}V_{ts}\cos\theta_{d}\sin\theta_{d}+g_{2}^{q}V_{cs}V_{ts}\sin^{2}\theta_{d}$ |
Table 5: Couplings of leptons to the $Z^{\prime}$ boson in different parameterizations of the TVB model.
Coupling | Parameterization in Kumar:2018kmr | Parameterization in Calibbi:2015kma ; Bhattacharya:2016mcc | Parameterization in Greljo:2015mma ; Buttazzo:2017ixm
---|---|---|---
$g_{\mu\mu}$ | $g_{\mu\mu}$ | $g_{2}^{\ell}\sin^{2}\theta_{L}$ | $g_{q}(\lambda_{\mu\mu})/\sqrt{2}$
$g_{\mu\tau}$ | $g_{\mu\tau}$ | $-g_{2}^{\ell}\sin\theta_{L}\cos\theta_{L}$ | $2g_{q}(\lambda_{\mu\tau})/\sqrt{2}$
$g_{\tau\tau}$ | $g_{\tau\tau}$ | $g_{2}^{\ell}\cos^{2}\theta_{L}$ | $g_{q}/\sqrt{2}$
We emphasize that the results presented in Tables 2, 3, 4, and 5 allow us to
understand the differences and similarities among the parameterizations
presented above in the context of the TVB model; additionally, they provide a
complete interpretation of the variables appearing in each one and of the
adjustments available to explain the flavor anomalies.
## III Relevant Observables
In this section, we discuss the constraints from the most relevant flavor
observables on the TVB model couplings that simultaneously accommodate the $B$
meson anomalies. We will include the recent experimental progress from Belle
and LHCb on different LFV decays (such as
$\Upsilon(1S)\to\mu^{\pm}\tau^{\mp}$, $B\to K^{\ast}\mu^{\pm}\tau^{\mp}$, and
$\tau\to\mu\phi$).
### III.1 $b\to c\ell^{-}\bar{\nu}_{\ell}$ ($\ell=\mu,\tau$) data
The $W^{\prime}$ boson leads to additional tree-level contribution to $b\to
c\ell^{-}\bar{\nu}_{\ell}$ transitions involving leptons from second- and
third-generation $(\ell=\mu,\tau)$. The total low-energy effective Lagrangian
has the following form Gomez:2019xfw
$\displaystyle-\mathcal{L}_{\rm eff}(b\to c\ell\bar{\nu}_{\ell})_{\rm
SM+W^{\prime}}$ $\displaystyle=$
$\displaystyle\frac{4G_{F}}{\sqrt{2}}V_{cb}\Big{[}(1+C_{V}^{bc\ell\nu_{\ell}})(\bar{c}\gamma_{\mu}P_{L}b)(\bar{\ell}\gamma^{\mu}P_{L}\nu_{\ell})\Big{]},$
(26)
where $G_{F}$ is the Fermi coupling constant, $V_{cb}$ is the charm-bottom
Cabibbo-Kobayashi-Maskawa (CKM) matrix element, and $C_{V}^{bc\ell\nu_{\ell}}$
is the Wilson coefficient (WC) associated with the NP vector (left-left)
operator. This WC is defined as
$\displaystyle C_{V}^{bc\ell\nu_{\ell}}$ $\displaystyle=$
$\displaystyle\frac{\sqrt{2}}{4G_{F}V_{cb}}\frac{2(V_{cs}g^{q}_{sb}+V_{cb}g^{q}_{bb})g^{\ell}_{\ell\ell}}{M_{V}^{2}}\
\ \ (\ell=\mu,\tau),$ (27)
with $M_{V}$ the heavy boson mass. The NP effects on the LFU ratios $R(X)$
($X=D,D^{\ast},J/\psi$), the $D^{\ast}$ and $\tau$ longitudinal polarizations
related with the channel $\bar{B}\to D^{\ast}\tau\bar{\nu}_{\tau}$, the ratio
of inclusive decays $R(X_{c})$, and the tauonic decay
$B_{c}^{-}\to\tau^{-}\bar{\nu}_{\tau}$ can be easily parametrized as
Gomez:2019xfw
$\displaystyle R(X)$ $\displaystyle=$ $\displaystyle R(X)_{\rm
SM}\big{|}1+C_{V}^{bc\tau\nu_{\tau}}\big{|}^{2},$ (28) $\displaystyle
F_{L}(D^{*})$ $\displaystyle=$ $\displaystyle F_{L}(D^{*})_{\rm SM}\
r_{D^{\ast}}^{-1}\big{|}1+C_{V}^{bc\tau\nu_{\tau}}\big{|}^{2},$ (29)
$\displaystyle P_{\tau}(D^{*})$ $\displaystyle=$ $\displaystyle
P_{\tau}(D^{*})_{\rm SM}\
r_{D^{\ast}}^{-1}\big{|}1+C_{V}^{bc\tau\nu_{\tau}}\big{|}^{2}\ ,$ (30)
$\displaystyle R(X_{c})$ $\displaystyle=$ $\displaystyle R(X_{c})_{\rm
SM}\Big{(}1+2.294\ {\rm
Re}(C_{V}^{bc\tau\nu_{\tau}})+1.147\big{|}C_{V}^{bc\tau\nu_{\tau}}\big{|}^{2}\Big{)},$
(31) $\displaystyle{\rm BR}(B_{c}^{-}\to\tau^{-}\bar{\nu}_{\tau})$
$\displaystyle=$ $\displaystyle{\rm
BR}(B_{c}^{-}\to\tau^{-}\bar{\nu}_{\tau})_{\text{SM}}\big{|}1+C_{V}^{bc\tau\nu_{\tau}}\big{|}^{2},$
(32)
respectively, where $r_{D^{\ast}}=R(D^{*})/R(D^{*})_{\rm SM}$. For ${\rm
BR}(B_{c}^{-}\to\tau^{-}\bar{\nu}_{\tau})$, we will use the bound $<10\%$
Akeroyd:2017mhr . Concerning the ratio $R(\Lambda_{c})$ very recently
measured by LHCb LHCb:2022piu , the SM contribution is also rescaled by the
overall factor $\big{|}1+C_{V}^{bc\tau\nu_{\tau}}\big{|}^{2}$, namely
Datta:2017aue
$R(\Lambda_{c})=R(\Lambda_{c})_{\rm SM}\
\big{|}1+C_{V}^{bc\tau\nu_{\tau}}\big{|}^{2}.$ (33)
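As an illustration of how Eqs. (28)-(33) operate, the short sketch below (ours) rescales the SM predictions of Table 1 by $|1+C_{V}^{bc\tau\nu_{\tau}}|^{2}$; the value $C_{V}=0.05$ is purely illustrative and is not a fit result of this work:

```python
# Rescaling of SM predictions by |1 + C_V|^2, Eqs. (28)-(33), for the
# illustrative value C_V = 0.05 (not a fit result of this work).
CV = 0.05
scale = abs(1 + CV) ** 2

sm = {"R(D)": 0.298, "R(D*)": 0.254, "R(J/psi)": 0.2582,
      "R(Lambda_c)": 0.324, "BR(Bc -> tau nu)": 0.0216}
for obs, val in sm.items():
    print(f"{obs:18s} SM = {val:.4f}  ->  SM + W' = {val * scale:.4f}")

# R(X_c) follows its own quadratic parametrization, Eq. (31):
RXc_sm = 0.216
print("R(X_c)", RXc_sm * (1 + 2.294 * CV + 1.147 * CV ** 2))
```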
A long-term integrated luminosity of $50\ {\rm ab}^{-1}$ is expected to be
accumulated by the Belle II experiment Belle-II:2018jsg , allowing
improvements at the level of $\sim 3\%$ and $\sim 2\%$ in the statistical and
systematic uncertainties of $R(D)$ and $R(D^{\ast})$, respectively Belle-
II:2018jsg . Accuracy improvements are also envisioned for the angular
analysis of the $\bar{B}\to D^{\ast}\tau\bar{\nu}_{\tau}$ decay (the $\tau$
polarization observable $P_{\tau}(D^{*})$), as well as for the $q^{2}$
distribution Belle-II:2018jsg . On the other hand, LHCb will be able to improve measurements
of $R(D^{\ast})$ and $R(J/\psi)$ in the future runs of data taking
Albrecht:2021tul ; Bifani:2018zmi .
In regard to the transition $b\to c\mu\bar{\nu}_{\mu}$, the $\mu/e$ LFU ratios
$R_{D^{(\ast)}}^{\mu/e}\equiv{\rm BR}(B\to D^{(\ast)}\mu\bar{\nu}_{\mu})/{\rm
BR}(B\to D^{(\ast)}e\bar{\nu}_{e})$ have to be taken into account. The
experimental values obtained by Belle Glattauer:2015teq ; Belle:2017rcc are
in good agreement with the SM estimations Becirevic:2020rzi ; Bobeth:2021lya
(see Table 1). The $W^{\prime}$ boson coupling to lepton pair
$\mu\bar{\nu}_{\mu}$ modifies this ratio as
$R_{D^{(\ast)}}^{\mu/e}=[R_{D^{(\ast)}}^{\mu/e}]_{\rm
SM}\big{|}1+C_{V}^{bc\mu\nu_{\mu}}\big{|}^{2},$ (34)
where $C_{V}^{bc\mu\nu_{\mu}}$ is given by Eq. (27). From this LFU ratio we
get the bound
$\frac{|(V_{cs}g^{q}_{sb}+V_{cb}g^{q}_{bb})g^{\ell}_{\mu\mu}|}{M_{V}^{2}}\leqslant
0.013\ {\rm TeV}^{-2},$ (35)
which is relevant for the couplings aiming to explain the $b\to
s\mu^{+}\mu^{-}$ anomaly (see Sec. III.3).
### III.2 $b\to u\ell^{-}\bar{\nu}_{\ell}$ ($\ell=\mu,\tau$) data
The TVB model can also induce NP contributions in the leptonic decay
$B\to\ell\bar{\nu}_{\ell}$ induced via the charged-current transition $b\to
u\ell^{-}\bar{\nu}_{\ell}$ ($\ell=\mu,\tau$). The ratio
$R_{B}^{\tau/\mu}\equiv\dfrac{{\rm BR}(B^{-}\to\tau^{-}\bar{\nu}_{\tau})}{{\rm
BR}(B^{-}\to\mu^{-}\bar{\nu}_{\mu})},$ (36)
provides a clean LFU test Becirevic:2020rzi . Through this ratio the
uncertainties on the decay constant $f_{B}$ and CKM element $V_{ub}$ cancel
out (circumventing the tension between the exclusive and inclusive values of
$V_{ub}$ UTfit:2022hsi ). The NP effects on this ratio can be expressed as
$R_{B}^{\tau/\mu}=[R_{B}^{\tau/\mu}]_{\rm
SM}\Bigg{|}\dfrac{1+C_{V}^{bu\tau\nu_{\tau}}}{1+C_{V}^{bu\mu\nu_{\mu}}}\Bigg{|}^{2},$
(37)
where
$C_{V}^{bu\ell\nu_{\ell}}=\frac{\sqrt{2}}{4G_{F}V_{ub}}\frac{2(V_{us}g^{q}_{sb}+V_{ub}g^{q}_{bb})g^{\ell}_{\ell\ell}}{M_{V}^{2}},\
\ (\ell=\mu,\tau)$ (38)
and
$[R_{B}^{\tau/\mu}]_{\rm
SM}=\Big{(}\dfrac{m_{\tau}}{m_{\mu}}\Big{)}^{2}\Big{(}\dfrac{m_{B}^{2}-m_{\tau}^{2}}{m_{B}^{2}-m_{\mu}^{2}}\Big{)}^{2}=222.5\pm
3.0.$ (39)
The experimental value is $[R_{B}^{\tau/\mu}]_{\rm Exp}=205.7\pm 96.6$, which
was obtained from the values reported by the Particle Data Group (PDG) on
${\rm BR}(B^{-}\to\tau^{-}\bar{\nu}_{\tau})$ PDG2020 and the Belle experiment
on ${\rm BR}(B^{-}\to\mu^{-}\bar{\nu}_{\mu})$ Belle:2019iji .
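The SM number quoted in Eq. (39) is pure kinematics and is easy to reproduce; the following check (ours, with PDG masses in GeV as assumed inputs) confirms it:

```python
# Arithmetic check of Eq. (39) with PDG masses (GeV, assumed inputs).
m_tau, m_mu, m_B = 1.77686, 0.1056584, 5.27934

R_SM = (m_tau / m_mu) ** 2 * \
       ((m_B ** 2 - m_tau ** 2) / (m_B ** 2 - m_mu ** 2)) ** 2
print(R_SM)   # ~222.5, matching the quoted central value
```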
### III.3 $b\to s\mu^{+}\mu^{-}$ data
The NP effective Lagrangian responsible for the semileptonic transition $b\to
s\mu^{+}\mu^{-}$ can be expressed as
$\mathcal{L}(b\to s\mu^{+}\mu^{-})_{\rm
NP}=\frac{4G_{F}}{\sqrt{2}}V_{tb}V_{ts}^{\ast}(C^{bs\mu\mu}_{9}\mathcal{O}^{bs\mu\mu}_{9}+C^{bs\mu\mu}_{10}\mathcal{O}^{bs\mu\mu}_{10})+\
{\rm h.c.},$ (40)
where the NP is encoded in the WCs $C^{bs\mu\mu}_{9}$ and $C^{bs\mu\mu}_{10}$
of the four-fermion operators
$\displaystyle\mathcal{O}^{bs\mu\mu}_{9}$ $\displaystyle=$
$\displaystyle\frac{\alpha_{\rm
em}}{4\pi}(\bar{s}\gamma_{\mu}P_{L}b)(\bar{\mu}\gamma^{\mu}\mu),$ (41)
$\displaystyle\mathcal{O}^{bs\mu\mu}_{10}$ $\displaystyle=$
$\displaystyle\frac{\alpha_{\rm
em}}{4\pi}(\bar{s}\gamma_{\mu}P_{L}b)(\bar{\mu}\gamma^{\mu}\gamma_{5}\mu),$
(42)
respectively, with $\alpha_{\rm em}$ being the fine-structure constant. A
global fit analysis including most current $b\to s\mu^{+}\mu^{-}$ data, such
as $R_{K^{(\ast)}}$ by LHCb LHCb:2022qnv ; LHCb:2022zom and ${\rm
BR}(B_{s}\to\mu^{+}\mu^{-})$ by CMS CMS:2022mgd , has been recently performed
in Refs. Greljo:2022jac ; Alguero:2023jeh . Among the different NP scenarios,
the $C^{bs\mu\mu}_{9}=-C^{bs\mu\mu}_{10}$ solution is preferred by the data
Greljo:2022jac ; Alguero:2023jeh .222Let us notice that the single WC
$C^{bs\mu\mu}_{9}$ also provides a good fit of the $b\to s\mu^{+}\mu^{-}$ data
Greljo:2022jac ; Alguero:2023jeh . Some explicit model examples are shown in
Greljo:2022jac . The best fit $1\sigma$ solution is Greljo:2022jac
$C^{bs\mu\mu}_{9}=-C^{bs\mu\mu}_{10}\in[-0.23,-0.11].$ (43)
In the context of the TVB model, the $Z^{\prime}$ boson induces a tree-level
contribution to $b\to s\mu^{+}\mu^{-}$ transition via the WCs
$C^{bs\mu\mu}_{9}=-C^{bs\mu\mu}_{10}=-\frac{\pi}{\sqrt{2}G_{F}\alpha_{\rm
em}V_{tb}V_{ts}^{\ast}}\frac{g_{sb}^{q}g_{\mu\mu}^{\ell}}{M_{V}^{2}}.$ (44)
Using the result of the global fit, Eq. (43), this corresponds to
$-\frac{g_{sb}^{q}g_{\mu\mu}^{\ell}}{M_{V}^{2}}\in[1.7,3.5]\times 10^{-4}\
{\rm TeV}^{-2}.$ (45)
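The conversion from Eq. (43) to Eq. (45) follows by inverting Eq. (44); the sketch below (ours, with representative values for $G_{F}$, $\alpha_{\rm em}$, and $|V_{tb}V_{ts}^{\ast}|$, which are inputs we assume rather than quantities quoted in the text) reproduces the quoted range:

```python
# Reproduce Eq. (45) by inverting Eq. (44) over the fit range of Eq. (43).
# The numerical inputs are representative values assumed for this sketch.
import numpy as np

GF = 1.1663787e-5      # GeV^-2
alpha_em = 1 / 137.0   # representative choice for alpha_em
VtbVts = 0.040         # |V_tb V_ts*|

for C9 in (-0.23, -0.11):
    coupling = -C9 * np.sqrt(2) * GF * alpha_em * VtbVts / np.pi  # GeV^-2
    print(C9, coupling * 1e6, "TeV^-2")   # ~3.5e-4 and ~1.7e-4
```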
### III.4 Bottomonium processes: $R_{\Upsilon(nS)}$ and
$\Upsilon(nS)\to\mu^{\pm}\tau^{\mp}$
Test of LFU has also been studied in the leptonic ratio $R_{\Upsilon(nS)}$
(with $n=1,2,3$) in connection with the reported hints of LFU violation in the
charged-current transition $b\to c\tau\bar{\nu}_{\tau}$ Aloni:2017eny ;
Garcia-Duque:2021qmg .333Recently, a new method to test LFU through inclusive
dileptonic $\Upsilon(4S)$ decays has been proposed in Ref.
Descotes-Genon:2021uez . It is known that NP scenarios aiming to explain the
anomalous $b\to c\tau\bar{\nu}_{\tau}$ data also induce effects in the
neutral-current transition $b\bar{b}\to\tau^{+}\tau^{-}$ Aloni:2017eny ;
Garcia-Duque:2021qmg . Experimentally, the BABAR and CLEO Collaborations have
reported the values delAmoSanchez:2010bt ; Besson:2006gj ; Lees:2020kom
$\displaystyle R_{\Upsilon(1S)}$ $\displaystyle=$
$\displaystyle\begin{cases}\text{BABAR-10:}\ 1.005\pm 0.013\pm 0.022\ \text{delAmoSanchez:2010bt},\\\ \text{SM:}\ 0.9924\ \text{Aloni:2017eny},\end{cases}$ (46)
$\displaystyle R_{\Upsilon(2S)}$ $\displaystyle=$
$\displaystyle\begin{cases}\text{CLEO-07:}\ 1.04\pm 0.04\pm 0.05\ \text{Besson:2006gj},\\\ \text{SM:}\ 0.9940\ \text{Aloni:2017eny},\end{cases}$ (47)
$\displaystyle R_{\Upsilon(3S)}$ $\displaystyle=$
$\displaystyle\begin{cases}\text{CLEO-07:}\ 1.05\pm 0.08\pm 0.05\ \text{Besson:2006gj},\\\ \text{BABAR-20:}\ 0.966\pm 0.008\pm 0.014\ \text{Lees:2020kom},\\\ \text{SM:}\ 0.9948\ \text{Aloni:2017eny},\end{cases}$ (48)
where the theoretical uncertainty is typically of the order
$\pm\mathcal{O}(10^{-5})$ Aloni:2017eny . These measurements are in good
accordance with the SM estimations, except for the 2020 measurement on
$R_{\Upsilon(3S)}$ that shows an agreement at the $1.8\sigma$ level
Lees:2020kom . By averaging the CLEO-07 Besson:2006gj and BABAR-20
Lees:2020kom measurements we obtain $R_{\Upsilon(3S)}^{\rm Ave}=0.968\pm
0.016$, which deviates at the $1.7\sigma$ level with respect to the SM
prediction Garcia-Duque:2021qmg .
The NP effects of the TVB model on the leptonic ratio can be expressed as
Aloni:2017eny ; Garcia-Duque:2021qmg
$R_{\Upsilon(nS)}=\frac{(1-4x_{\tau}^{2})^{1/2}}{|A_{V}^{\rm
SM}|^{2}}\Big{[}|A_{V}^{b\tau}|^{2}(1+2x_{\tau}^{2})+|B_{V}^{b\tau}|^{2}(1-4x_{\tau}^{2})\Big{]},$
(49)
with $x_{\tau}=m_{\tau}/m_{\Upsilon(nS)}$, $|A_{V}^{\rm SM}|=-4\pi\alpha
Q_{b}$, and
$\displaystyle A_{V}^{b\tau}$ $\displaystyle=$ $\displaystyle-4\pi\alpha
Q_{b}+\frac{m_{\Upsilon(nS)}^{2}}{4}\frac{g^{q}_{bb}g^{\ell}_{\tau\tau}}{4M_{V}^{2}},$
(50) $\displaystyle B_{V}^{b\tau}$ $\displaystyle=$
$\displaystyle-\frac{m_{\Upsilon(nS)}^{2}}{2}\frac{g^{q}_{bb}g^{\ell}_{\tau\tau}}{4M_{V}^{2}}.$
(51)
The neutral gauge boson also generates the LFV processes
$\Upsilon\to\mu^{\pm}\tau^{\mp}$ $(\Upsilon\equiv\Upsilon(nS))$. The branching
fraction is given by Bhattacharya:2016mcc ; Kumar:2018kmr
${\rm
BR}(\Upsilon\to\mu^{\pm}\tau^{\mp})=\frac{f_{\Upsilon}^{2}m_{\Upsilon}^{3}}{48\pi\Gamma_{\Upsilon}}\Big{(}2+\frac{m_{\tau}^{2}}{m_{\Upsilon}^{2}}\Big{)}\Big{(}1-\frac{m_{\tau}^{2}}{m_{\Upsilon}^{2}}\Big{)}^{2}\Big{|}\dfrac{g^{q}_{bb}(g^{\ell}_{\mu\tau})^{\ast}}{M_{V}^{2}}\Big{|}^{2},$
(52)
where $f_{\Upsilon}$ and $m_{\Upsilon}$ are the Upsilon decay constant and
mass, respectively. The decay constant values can be extracted from the
experimental branching ratio measurements of the processes $\Upsilon\to
e^{-}e^{+}$. Using current data from PDG PDG2020 , one obtains
$f_{\Upsilon(1S)}=(659\pm 17)\ {\rm MeV}$, $f_{\Upsilon(2S)}=(468\pm 27)\ {\rm
MeV}$, and $f_{\Upsilon(3S)}=(405\pm 26)\ {\rm MeV}$. Experimentally, the
reported ULs are ${\rm BR}(\Upsilon(1S)\to\mu^{\pm}\tau^{\mp})<2.7\times
10^{-6}$ from Belle Belle:2022cce , and ${\rm
BR}(\Upsilon(2S)\to\mu^{\pm}\tau^{\mp})<3.3\times 10^{-6}$, ${\rm
BR}(\Upsilon(3S)\to\mu^{\pm}\tau^{\mp})<3.1\times 10^{-6}$ from PDG PDG2020 .
From these ULs we get
$\displaystyle\Upsilon(1S)\to\mu^{\pm}\tau^{\mp}$ $\displaystyle:$
$\displaystyle\ \frac{|g^{q}_{bb}(g^{\ell}_{\mu\tau})^{\ast}|}{M_{V}^{2}}<5.7\
{\rm TeV}^{-2},$ (53a) $\displaystyle\Upsilon(2S)\to\mu^{\pm}\tau^{\mp}$
$\displaystyle:$ $\displaystyle\
\frac{|g^{q}_{bb}(g^{\ell}_{\tau\mu})^{\ast}|}{M_{V}^{2}}<6.2\ {\rm
TeV}^{-2},$ (53b) $\displaystyle\Upsilon(3S)\to\mu^{\pm}\tau^{\mp}$
$\displaystyle:$ $\displaystyle\
\frac{|g^{q}_{bb}(g^{\ell}_{\mu\tau})^{\ast}|}{M_{V}^{2}}<5.2\ {\rm
TeV}^{-2}.$ (53c)
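As a cross-check of Eq. (53a), the bound can be recomputed from Eq. (52); the decay constant, mass, and total width of $\Upsilon(1S)$ used below are representative values we assume for this sketch:

```python
# Reproduce the Upsilon(1S) bound of Eq. (53a) from Eq. (52).
# f, m and Gamma are representative inputs assumed for this sketch (GeV).
import numpy as np

f, m, Gamma = 0.659, 9.4603, 54.02e-6     # f_Y(1S), m_Y(1S), total width
m_tau, BR_ul = 1.77686, 2.7e-6            # Belle upper limit on the BR

x = m_tau ** 2 / m ** 2
prefac = f ** 2 * m ** 3 / (48 * np.pi * Gamma) * (2 + x) * (1 - x) ** 2
bound = np.sqrt(BR_ul / prefac)           # on |g_bb (g_mutau)*| / M_V^2
print(bound * 1e6, "TeV^-2")              # ~5.6, close to the quoted 5.7
```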
### III.5 $\Delta F=2$ processes: $B_{s}-\bar{B}_{s}$ and $D^{0}-\bar{D}^{0}$ mixing
The interactions of the $Z^{\prime}$ boson with the quark pair $s\bar{b}$ relevant for
$b\to s\mu^{+}\mu^{-}$ processes also generate a contribution to
$B_{s}-\bar{B}_{s}$ mixing DiLuzio:2019jyq ; DiLuzio:2017fdq . The NP effects
to the $B_{s}-\bar{B}_{s}$ mixing can be described by the effective Lagrangian
$\mathcal{L}_{\rm\Delta
B=2}^{Z^{\prime}}=-\frac{4G_{F}}{\sqrt{2}}|V_{tb}V_{ts}^{\ast}|^{2}C_{sb}^{LL}(\bar{s}\gamma_{\mu}P_{L}b)(\bar{s}\gamma^{\mu}P_{L}b)+\
{\rm h.c.},$ (54)
where
$C_{sb}^{LL}=\frac{1}{4\sqrt{2}G_{F}|V_{tb}V_{ts}^{\ast}|^{2}}\frac{|g_{sb}^{q}|^{2}}{M_{Z^{\prime}}^{2}}.$
(55)
Thus, the NP contributions to the mass difference $\Delta M_{s}$ of the
neutral $B_{s}$ meson can be expressed as DiLuzio:2019jyq
$\dfrac{\Delta M_{s}^{\rm SM+NP}}{\Delta M_{s}^{\rm
SM}}=\Big{(}1+\frac{\eta^{6/23}}{R^{\rm loop}_{\rm SM}}C_{sb}^{LL}\Big{)},$
(56)
where $\eta=\alpha_{s}(M_{Z^{\prime}})/\alpha_{s}(m_{b})$ accounts for running
from the $M_{Z^{\prime}}$ scale down to the $b$-quark mass scale and the SM
loop function is $R^{\rm loop}_{\rm SM}=(1.310\pm 0.010)\times 10^{-3}$
DiLuzio:2019jyq . At present, $\Delta M_{s}$ has been experimentally measured
with great precision $\Delta M_{s}^{\rm Exp}=(17.757\pm 0.021)\ {\rm ps}^{-1}$
DiLuzio:2019jyq ; HFLAV:2022pwe . On the theoretical side, the average is
$\Delta M_{s}^{\rm SM}=(18.4^{+0.7}_{-1.2})\ {\rm ps}^{-1}$ implying that
$\Delta M_{s}^{\rm SM}/\Delta M_{s}^{\rm Exp}=1.04^{+0.04}_{-0.07}$
DiLuzio:2019jyq . This value leads to
$0.89\leq\Bigg{|}1+\frac{\eta^{6/23}}{R_{\rm loop}^{\rm
SM}}C_{sb}^{LL}\Bigg{|}\leq 1.11,$ (57)
which in the TVB model translates into the important $2\sigma$ bound
$\frac{|g_{sb}^{q}|}{M_{V}}\leq 3.9\times 10^{-3}\ {\rm TeV^{-1}}.$ (58)
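A rough numerical sketch of how the bound follows from Eqs. (55)-(57) is given below. The CKM and $\alpha_{s}$ inputs are assumptions of this sketch, so the output only approximately reproduces the quoted value.

```python
import math

G_F    = 1.16638e-5   # GeV^-2
Vtbs   = 0.0404       # |V_tb V_ts*| (assumed)
eta    = 0.41         # alpha_s(M_Z')/alpha_s(m_b) for M_Z' ~ 1 TeV (assumed)
R_loop = 1.310e-3     # SM loop function (DiLuzio:2019jyq)

# Upper edge of Eq. (57): (eta^{6/23}/R_loop) * C_sb^LL <= 0.11, with C_sb^LL >= 0
C_max = 0.11 * R_loop / eta**(6.0 / 23.0)
# Invert Eq. (55): |g_sb|^2 / M_V^2 = 4*sqrt(2)*G_F*|VtbVts|^2 * C_sb^LL
g_over_M = math.sqrt(4.0 * math.sqrt(2.0) * G_F * Vtbs**2 * C_max)  # GeV^-1
print(g_over_M * 1e3)   # ~4e-3 TeV^-1, cf. the bound of Eq. (58)
```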
In addition, the $Z^{\prime}$ boson can also mediate $c\to u$ transitions,
consequently generating tree-level effects on $D^{0}-\bar{D}^{0}$ mixing
Kumar:2018kmr ; Alok:2021pdh . The effective Lagrangian describing the
$Z^{\prime}$ contribution to $D^{0}-\bar{D}^{0}$ mixing can be expressed as
Kumar:2018kmr ; Alok:2021pdh
$\mathcal{L}_{\rm\Delta
C=2}^{Z^{\prime}}=-\frac{|g_{uc}|^{2}}{2M_{Z^{\prime}}^{2}}(\bar{c}\gamma_{\mu}P_{L}u)(\bar{c}\gamma^{\mu}P_{L}u)+\
{\rm h.c.},$ (59)
where
$g_{uc}=g^{q}_{bb}V_{cb}V^{\ast}_{ub}+g^{q}_{sb}(V_{cs}V^{\ast}_{ub}+V_{cb}V^{\ast}_{us})+g^{q}_{ss}V_{cs}V^{\ast}_{us}$
Kumar:2018kmr (see also Table 4). Such NP contributions are constrained by
the measured mass difference $\Delta M_{D}$ of neutral $D$ mesons. The
theoretical determination of this mass difference is limited by our
understanding of the short and long-distance contributions Kumar:2018kmr ;
Alok:2021pdh . Here we follow the recent analysis of Ref. Kumar:2018kmr
focused on short-distance SM contribution that sets the conservative (strong)
bound
$\frac{|g_{ss}^{q}|}{M_{V}}\leq 3\times 10^{-3}\ {\rm TeV^{-1}}.$ (60)
The couplings $g^{q}_{bb}$ and $g^{q}_{sb}$ are less constrained by $\Delta
M_{D}$ Kumar:2018kmr ; we therefore do not consider them further in our study.
### III.6 Neutrino Trident Production
The $Z^{\prime}$ couplings to second-generation leptons
($g_{\mu\mu}=g_{\nu_{\mu}\nu_{\mu}}$) also generate a contribution to the
cross-section of neutrino trident production (NTP),
$\nu_{\mu}N\to\nu_{\mu}N\mu^{+}\mu^{-}$ Altmannshofer:2014pba . The cross-
section is given by Altmannshofer:2014pba
$\frac{\sigma_{\rm SM+NP}}{\sigma_{\rm
SM}}=\frac{1}{1+(1+4s_{W}^{2})^{2}}\Big{[}\Big{(}1+\frac{v^{2}g_{\mu\mu}^{2}}{M_{V}^{2}}\Big{)}^{2}+\Big{(}1+4s_{W}^{2}+\frac{v^{2}g_{\mu\mu}^{2}}{M_{V}^{2}}\Big{)}^{2}\Big{]},$
(61)
where $v=(\sqrt{2}G_{F})^{-1/2}$ and $s_{W}\equiv\sin\theta_{W}$ (with
$\theta_{W}$ the Weinberg angle). The existing CCFR trident measurement
$\sigma_{\rm CCFR}/\sigma_{\rm SM}=0.82\pm 0.28$ provides the upper bound
$\frac{|g_{\mu\mu}^{\ell}|}{M_{Z^{\prime}}}\leq 1.13\ {\rm TeV^{-1}}.$ (62)
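The bound of Eq. (62) can be reproduced by solving Eq. (61) for the coupling at the $1\sigma$ upper value of the CCFR measurement; the values of $\sin^{2}\theta_{W}$ and the electroweak vev below are assumed inputs of this sketch.

```python
import math

sW2 = 0.2312                  # sin^2(theta_W) (assumed)
v   = 246.22                  # GeV, EW vev, v = (sqrt(2) G_F)^(-1/2)
ratio_max = 0.82 + 0.28       # 1-sigma upper value of sigma_CCFR/sigma_SM

# Eq. (61) with x = v^2 g^2 / M^2 and a = 1 + 4 sW^2:
#   [(1+x)^2 + (a+x)^2] / (1 + a^2) = ratio_max
a = 1.0 + 4.0 * sW2
c0 = (1.0 + a**2) * (1.0 - ratio_max)
x = (-(1.0 + a) + math.sqrt((1.0 + a)**2 - 2.0 * c0)) / 2.0
g_over_M = math.sqrt(x) / v   # GeV^-1
print(g_over_M * 1e3)         # ~1.13 TeV^-1, cf. Eq. (62)
```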
### III.7 LFV $B$ decays: $B\to K^{(\ast)}\mu^{\pm}\tau^{\mp}$ and
$B_{s}\to\mu^{\pm}\tau^{\mp}$
The $Z^{\prime}$ boson mediates LFV transitions $b\to s\mu^{\pm}\tau^{\mp}$
($B\to K^{(\ast)}\mu^{\pm}\tau^{\mp}$ and $B_{s}^{0}\to\mu^{\pm}\tau^{\mp}$)
at tree level via the WCs Calibbi:2015kma
$C^{bs\mu\tau}_{9}=-C^{bs\mu\tau}_{10}=-\frac{\pi}{\sqrt{2}G_{F}\alpha_{\rm
em}V_{tb}V_{ts}^{\ast}}\frac{g^{q}_{sb}(g^{\ell}_{\mu\tau})^{\ast}}{M_{V}^{2}}.$
(63)
The current experimental limits ($90\%$ C.L.) on the branching ratios of
$B^{+}\to K^{+}\mu^{\pm}\tau^{\mp}$ are PDG2020
$\displaystyle{\rm BR}(B^{+}\to K^{+}\mu^{+}\tau^{-})_{\rm exp}$
$\displaystyle<$ $\displaystyle 4.5\times 10^{-5},$ (64) $\displaystyle{\rm
BR}(B^{+}\to K^{+}\mu^{-}\tau^{+})_{\rm exp}$ $\displaystyle<$ $\displaystyle
2.8\times 10^{-5}.$ (65)
Let us note that the LHCb Collaboration obtained a limit of ${\rm BR}(B^{+}\to
K^{+}\mu^{-}\tau^{+})_{\rm LHCb}<3.9\times 10^{-5}$ Aaij:2020mqb , which is
comparable with the one quoted above from the PDG. On the other hand, LHCb has
recently presented the first search for $B^{0}\to K^{\ast
0}\mu^{\pm}\tau^{\mp}$ LHCb:2022wrs . The obtained UL on this LFV decay is
LHCb:2022wrs
${\rm BR}(B^{0}\to K^{\ast 0}\mu^{\pm}\tau^{\mp})_{\rm exp}<1.0\times
10^{-5}.$ (66)
From the theoretical side, the branching ratio of $B^{+}\to
K^{+}\mu^{+}\tau^{-}$ Parrott:2022zte and $B^{0}\to K^{\ast
0}\mu^{+}\tau^{-}$ Calibbi:2015kma can be written as
$\displaystyle{\rm BR}(B^{+}\to K^{+}\mu^{+}\tau^{-})$ $\displaystyle=$
$\displaystyle\big{(}a_{K}|C^{bs\mu\tau}_{9}|^{2}+b_{K}|C^{bs\mu\tau}_{10}|^{2}\big{)}\times
10^{-9},$ (67) $\displaystyle{\rm BR}(B^{0}\to K^{\ast 0}\mu^{+}\tau^{-})$
$\displaystyle=$
$\displaystyle\Big{(}(a_{K^{\ast}}+c_{K^{\ast}})|C^{bs\mu\tau}_{9}|^{2}+(b_{K^{\ast}}+d_{K^{\ast}})|C^{bs\mu\tau}_{10}|^{2}\Big{)}\times
10^{-9},$ (68)
respectively, where $(a_{K},b_{K})=(12.72\pm 0.81,13.21\pm 0.81)$
Parrott:2022zte , and
$(a_{K^{\ast}},b_{K^{\ast}},c_{K^{\ast}},d_{K^{\ast}})=(3.0\pm 0.8,2.7\pm
0.7,16.4\pm 2.1,15.4\pm 1.9)$ Calibbi:2015kma are the numerical coefficients
that have been calculated using the $B\to K^{(\ast)}$ transitions form factors
obtained from lattice QCD Parrott:2022zte ; Calibbi:2015kma . The decay
channel with final state $\mu^{-}\tau^{+}$ can be easily obtained by replacing
$\mu\leftrightarrows\tau$. The current ULs can be translated into the bounds
$\displaystyle B^{+}\to K^{+}\mu^{+}\tau^{-}$ $\displaystyle:$ $\displaystyle\
\frac{|g^{q}_{sb}(g^{\ell}_{\mu\tau})^{\ast}|}{M_{V}^{2}}<6.2\times 10^{-2}\
{\rm TeV}^{-2},$ (69a) $\displaystyle B^{+}\to K^{+}\mu^{-}\tau^{+}$
$\displaystyle:$ $\displaystyle\
\frac{|g^{q}_{sb}(g^{\ell}_{\tau\mu})^{\ast}|}{M_{V}^{2}}<4.9\times 10^{-2}\
{\rm TeV}^{-2},$ (69b) $\displaystyle B^{0}\to K^{\ast 0}\mu^{+}\tau^{-}$
$\displaystyle:$ $\displaystyle\
\frac{|g^{q}_{sb}(g^{\ell}_{\mu\tau})^{\ast}|}{M_{V}^{2}}<2.5\times 10^{-2}\
{\rm TeV}^{-2}.$ (69c)
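The following sketch shows how the UL on ${\rm BR}(B^{+}\to K^{+}\mu^{+}\tau^{-})$ maps onto the bound of Eq. (69a), via Eqs. (63) and (67) with $C_{9}=-C_{10}$. The values of $\alpha_{\rm em}$ and $|V_{tb}V_{ts}^{\ast}|$ are assumed inputs, so the result only approximately reproduces the quoted $6.2\times 10^{-2}\ {\rm TeV}^{-2}$.

```python
import math

G_F   = 1.16638e-5        # GeV^-2
alpha = 1.0 / 133.0       # alpha_em at the b scale (assumed)
Vtbs  = 0.0404            # |V_tb V_ts*| (assumed)
aK, bK = 12.72, 13.21     # coefficients of Eq. (67) (Parrott:2022zte)

# Eq. (67) with C9 = -C10: BR = (aK + bK) |C9|^2 * 1e-9 < 4.5e-5
C9_max = math.sqrt(4.5e-5 / ((aK + bK) * 1e-9))
# Eq. (63): |C9| = pi/(sqrt(2) G_F alpha |VtbVts*|) * |g_sb g_mutau^*|/M_V^2
prefac = math.pi / (math.sqrt(2.0) * G_F * alpha * Vtbs)
print(C9_max / prefac * 1e6)   # ~6e-2 TeV^-2, cf. Eq. (69a)
```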
As for the LFV leptonic decay $B_{s}\to\mu^{\pm}\tau^{\mp}$, the branching
ratio is Calibbi:2015kma
$\displaystyle{\rm BR}(B_{s}^{0}\to\mu^{\pm}\tau^{\mp})$ $\displaystyle=$
$\displaystyle\tau_{B_{s}}\frac{f_{B_{s}}^{2}m_{B_{s}}m^{2}_{\tau}}{32\pi^{3}}\alpha^{2}G_{F}^{2}|V_{tb}V_{ts}^{\ast}|^{2}\Big{(}1-\frac{m_{\tau}^{2}}{m_{B_{s}}^{2}}\Big{)}^{2}\big{(}|C^{bs\mu\tau}_{9}|^{2}+|C^{bs\mu\tau}_{10}|^{2}\big{)},$
(70)
where $f_{B_{s}}=(230.3\pm 1.3)$ MeV is the $B_{s}$ decay constant
HFLAV:2022pwe and we have used the limit $m_{\tau}\gg m_{\mu}$. Recently, the
LHCb experiment has reported the first upper limit of ${\rm
BR}(B_{s}\to\mu^{\pm}\tau^{\mp})<4.2\times 10^{-5}$ at $95\%$ CL Aaij:2019okb
. Thus, one gets the following limit
$\frac{|g^{q}_{sb}(g^{\ell}_{\mu\tau})^{\ast}|}{M_{V}^{2}}<5.1\times 10^{-2}\
{\rm TeV}^{-2}.$ (71)
### III.8 Rare $B$ decays: $B\to K^{(\ast)}\nu\bar{\nu}$, $B\to
K\tau^{+}\tau^{-}$ and $B_{s}\to\tau^{+}\tau^{-}$
Recently, the interplay between the di-neutrino channel $B\to
K^{(\ast)}\nu\bar{\nu}$ and the $B$ meson anomalies has been studied in
several works Alok:2021pdh ; Bause:2020auq ; Bause:2021cna ; Browder:2021hbl ;
He:2021yoz . In the NP scenario under study, the $Z^{\prime}$ boson can give
rise to $B\to K^{(\ast)}\nu\bar{\nu}$ at tree level. The effective Hamiltonian
for the $b\to s\nu\bar{\nu}$ transition is given by Buras:2014fpa
$\mathcal{H}_{\rm eff}(b\to s\nu\bar{\nu})=-\frac{\alpha_{\rm
em}G_{F}}{\sqrt{2}\pi}V_{tb}V_{ts}^{\ast}C_{L}^{ij}(\bar{s}\gamma^{\mu}P_{L}b)(\bar{\nu}_{i}\gamma_{\mu}(1-\gamma_{5})\nu_{j}),$
(72)
where $C_{L}^{ij}=C_{L}^{\rm SM}+\Delta C_{L}^{ij}$ is the sum of the SM
contribution $C_{L}^{\rm SM}\approx-6.4$ and the NP effects $\Delta
C_{L}^{ij}$, which in the TVB framework read as
$\Delta C_{L}^{ij}=\frac{\pi}{\sqrt{2}G_{F}\alpha_{\rm
em}V_{tb}V_{ts}^{\ast}}\frac{g^{q}_{sb}g^{\ell}_{ij}}{M_{V}^{2}},$ (73)
with $i,j=\mu,\tau$. By defining the ratio Buras:2014fpa
$R^{\nu\bar{\nu}}_{K^{(\ast)}}\equiv\frac{{\rm BR}(B\to
K^{(\ast)}\nu\bar{\nu})}{{\rm BR}(B\to K^{(\ast)}\nu\bar{\nu})_{\rm SM}},$
(74)
the NP contributions can be constrained. In the TVB model this ratio is
modified as
$\displaystyle R^{\nu\bar{\nu}}_{K^{(\ast)}}$ $\displaystyle=$
$\displaystyle\frac{\sum_{ij}|\delta_{ij}C_{L}^{\rm SM}+\Delta
C_{L}^{ij}|^{2}}{3|C_{L}^{\rm SM}|^{2}},$ (75) $\displaystyle=$ $\displaystyle
1+\frac{2\sum_{i}C_{L}^{\rm SM}\Delta C_{L}^{ii}+\sum_{ij}|\Delta
C_{L}^{ij}|^{2}}{3|C_{L}^{\rm SM}|^{2}}.$ (76)
From this expression, we can observe that the diagonal leptonic couplings
$g^{\ell}_{\mu\mu}$ and $g^{\ell}_{\tau\tau}$ contribute to $b\to
s\nu_{\mu}\bar{\nu}_{\mu}$ (relevant for $b\to s\mu^{+}\mu^{-}$ data) and
$b\to s\nu_{\tau}\bar{\nu}_{\tau}$ (relevant for $b\to c\tau\bar{\nu}_{\tau}$
data), respectively. In addition, since the neutrino flavor is experimentally
unobservable in heavy meson experiments, it is also possible to induce the LFV
transitions $b\to s\nu_{\mu}\bar{\nu}_{\tau}$ (and
$\nu_{\tau}\bar{\nu}_{\mu}$) through the off-diagonal coupling
$g^{\ell}_{\mu\tau}$.
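To illustrate Eqs. (73)-(76), the sketch below evaluates $R^{\nu\bar{\nu}}_{K^{(\ast)}}$ when a single diagonal lepton coupling is switched on. CKM signs and phases are ignored in this sketch, and the $\alpha_{\rm em}$ and $|V_{tb}V_{ts}^{\ast}|$ inputs are assumptions.

```python
import math

G_F   = 1.16638e-5
alpha = 1.0 / 133.0    # assumed
Vtbs  = 0.0404         # |V_tb V_ts*|; sign/phase ignored in this sketch
CL_SM = -6.4

def R_nunu(g_sb, g_ll, M_V=1000.0):
    """R^{nunu}_{K(*)} of Eq. (75) with one shifted diagonal lepton coupling
    g_ll (mu-mu or tau-tau) and vanishing LFV coupling; M_V in GeV."""
    dCL = (math.pi / (math.sqrt(2.0) * G_F * alpha * Vtbs)) * g_sb * g_ll / M_V**2
    # Three neutrino flavors; only one diagonal entry receives Delta C_L
    return (2.0 * abs(CL_SM)**2 + abs(CL_SM + dCL)**2) / (3.0 * abs(CL_SM)**2)

print(R_nunu(-3.2e-3, 0.70))   # best-fit-like g_sb, g_tautau from Table 6
```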
On the experimental side, the Belle experiment in 2017 obtained the following
ULs on the branching fractions ${\rm BR}(B\to K\nu\bar{\nu})<1.6\times
10^{-5}$ and ${\rm BR}(B\to K^{\ast}\nu\bar{\nu})<2.7\times 10^{-5}$
Grygier:2017tzo , resulting in limits on the ratios,
$R^{\nu\bar{\nu}}_{K}<3.9$ and $R^{\nu\bar{\nu}}_{K^{\ast}}<2.7$ ($90\%$
C.L.), respectively Grygier:2017tzo . In 2021, based on an inclusive tagging
technique, the Belle II experiment reported the bound ${\rm BR}(B^{+}\to
K^{+}\nu\bar{\nu})<4.1\times 10^{-5}$ at $90\%$ C.L. Belle-II:2021rof . A
combination of this new result with previous experimental results leads to the
weighted average ${\rm BR}(B^{+}\to K^{+}\nu\bar{\nu})=(1.1\pm 0.4)\times
10^{-5}$ Dattola:2021cmw . In turn, the ratio $R^{\nu\bar{\nu}}_{K^{+}}$ has
been calculated to be $R^{\nu\bar{\nu}}_{K^{+}}=2.4\pm 0.9$ Browder:2021hbl .
The rare $B$ processes $B_{s}\to\tau^{+}\tau^{-}$ and $B\to K\tau^{+}\tau^{-}$
(induced via $b\to s\tau^{+}\tau^{-}$ transition) are expected to receive
significant NP impact. For the leptonic process $B_{s}\to\tau^{+}\tau^{-}$,
the SM branching ratio is shifted by the factor
${\rm BR}(B_{s}\to\tau^{+}\tau^{-})={\rm
BR}(B_{s}\to\tau^{+}\tau^{-})_{\text{SM}}\Bigg{|}1+\dfrac{\pi}{\sqrt{2}G_{F}\alpha_{\rm
em}V_{tb}V_{ts}^{\ast}C_{10}^{\rm
SM}}\dfrac{g^{q}_{sb}(g^{\ell}_{\tau\tau})^{\ast}}{M_{V}^{2}}\Bigg{|}^{2},$
(77)
where $C_{10}^{\rm SM}\simeq-4.3$. The strongest experimental bound on its
branching ratio has been obtained by LHCb, ${\rm
BR}(B_{s}\to\tau^{+}\tau^{-})<6.8\times 10^{-3}$ at 95% confidence level
Aaij:2017xqt , while its SM prediction is ${\rm
BR}(B_{s}^{0}\to\tau^{+}\tau^{-})_{\rm SM}=(7.73\pm 0.49)\times 10^{-7}$
Bobeth:2013uxa . The resulting bound is
$\displaystyle\dfrac{|g^{q}_{sb}(g^{\ell}_{\tau\tau})^{\ast}|}{M_{V}^{2}}$
$\displaystyle<$ $\displaystyle 0.56\ {\rm TeV}^{-2}.$ (78)
As concerns the semileptonic decay $B\to K\tau^{+}\tau^{-}$, a handy
numerical formula for the branching ratio (over the whole kinematic range of
the lepton-pair invariant mass) has been obtained in Ref. Cornella:2019hct
for the case of a singlet vector leptoquark explanation of the $B$ meson
anomalies. Since the NP contribution is generated via the same operator, this
expression can be straightforwardly (but carefully) translated to the TVB model, namely
${\rm BR}(B\to K\tau^{+}\tau^{-})\simeq 1.5\times 10^{-7}+1.4\times
10^{-3}\Big{(}\frac{1}{2\sqrt{2}G_{F}}\Big{)}\frac{{\rm
Re}[g^{q}_{sb}(g^{\ell}_{\tau\tau})^{\ast}]}{M_{V}^{2}}+3.5\Big{(}\frac{1}{2\sqrt{2}G_{F}}\Big{)}^{2}\frac{|g^{q}_{sb}(g^{\ell}_{\tau\tau})^{\ast}|^{2}}{M_{V}^{4}}.$
(79)
This decay channel has not been observed so far, and the presently reported
bound is ${\rm BR}(B\to K\tau^{+}\tau^{-})<2.25\times 10^{-3}$ PDG2020 . We
obtain the following bound
$\frac{|g^{q}_{sb}(g^{\ell}_{\tau\tau})^{\ast}|}{M_{V}^{2}}<0.83\ {\rm
TeV}^{-2},$ (80)
which is weaker than the one obtained from $B_{s}\to\tau^{+}\tau^{-}$.
### III.9 $\tau$ decays: $\tau\to 3\mu$,
$\tau\to\mu\bar{\nu}_{\mu}\nu_{\tau}$, and $\tau\to\mu\phi$
It is known that the TVB model generates four-lepton operators
$(\bar{\mu}\gamma^{\alpha}P_{L}\tau)(\bar{\mu}\gamma_{\alpha}P_{L}\mu)$ and
$(\bar{\mu}\gamma^{\alpha}P_{L}\tau)(\bar{\nu}_{\tau}\gamma_{\alpha}P_{L}\nu_{\mu})$,
thus yielding tree-level contributions to the leptonic $\tau$ decays,
$\tau^{-}\to\mu^{-}\mu^{+}\mu^{-}\ (\tau\to 3\mu)$ and
$\tau^{-}\to\mu^{-}\bar{\nu}_{\mu}\nu_{\tau}$, respectively
Bhattacharya:2016mcc ; Kumar:2018kmr . For the LFV decay $\tau\to 3\mu$, the
expression for the branching ratio can be written as
${\rm
BR}(\tau^{-}\to\mu^{-}\mu^{+}\mu^{-})=\frac{m_{\tau}^{5}}{1536\pi^{3}\Gamma_{\tau}}\frac{|g^{\ell}_{\mu\mu}g^{\ell}_{\mu\tau}|^{2}}{M_{V}^{4}},$
(81)
where $\Gamma_{\tau}$ is the total decay width of the $\tau$ lepton. The
current experimental UL obtained by Belle (at 90% CL) is ${\rm
BR}(\tau^{-}\to\mu^{-}\mu^{+}\mu^{-})<2.1\times 10^{-8}$ PDG2020 . This
corresponds to
$\frac{|g^{\ell}_{\mu\mu}g^{\ell}_{\mu\tau}|}{M_{V}^{2}}<1.13\times 10^{-2}\
{\rm TeV}^{-2}.$ (82)
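Eq. (82) follows directly from inverting Eq. (81) at the Belle UL, as in the sketch below; the $\tau$ mass and lifetime are assumed PDG-like inputs.

```python
import math

m_tau     = 1.77686                    # GeV (assumed PDG-like)
Gamma_tau = 6.582119e-25 / 2.903e-13   # GeV, hbar / tau lifetime (assumed)
BR_UL     = 2.1e-8                     # Belle UL on BR(tau -> 3 mu)

# Eq. (81): BR = m_tau^5/(1536 pi^3 Gamma_tau) * |g_mumu g_mutau|^2 / M_V^4
prefac = m_tau**5 / (1536.0 * math.pi**3 * Gamma_tau)
coupling_UL = math.sqrt(BR_UL / prefac)   # |g_mumu g_mutau| / M_V^2 in GeV^-2
print(coupling_UL * 1e6)                  # ~1.13e-2 TeV^-2, cf. Eq. (82)
```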
The leptonic decay $\tau^{-}\to\mu^{-}\bar{\nu}_{\mu}\nu_{\tau}$ is a
lepton-flavor-conserving, SM-allowed process that receives tree-level contributions
from both $W^{\prime}$ (via lepton flavor conserving couplings) and
$Z^{\prime}$ (via LFV couplings) bosons Kumar:2018kmr . The branching ratio is
given by Kumar:2018kmr
${\rm BR}(\tau\to\mu\bar{\nu}_{\mu}\nu_{\tau})={\rm
BR}(\tau\to\mu\bar{\nu}_{\mu}\nu_{\tau})_{\rm
SM}\bigg{(}\bigg{|}1+\dfrac{1}{2\sqrt{2}G_{F}M_{V}^{2}}(2g^{\ell}_{\mu\mu}g^{\ell}_{\tau\tau}-|g^{\ell}_{\mu\tau}|^{2})\bigg{|}^{2}+\bigg{|}\dfrac{1}{2\sqrt{2}G_{F}M_{V}^{2}}|g^{\ell}_{\mu\tau}|^{2}\bigg{|}^{2}\bigg{)},$
(83)
where ${\rm BR}(\tau\to\mu\bar{\nu}_{\mu}\nu_{\tau})_{\rm SM}=(17.29\pm
0.03)\%$ Pich:2013lsa . The $Z^{\prime}$ boson can also generate one-loop
corrections, which can be safely ignored. This value has to be compared with
the experimental value reported by the PDG, ${\rm
BR}(\tau\to\mu\bar{\nu}_{\mu}\nu_{\tau})=(17.39\pm 0.04)\%$ PDG2020 .
Finally, the branching ratio of the LFV hadronic $\tau$ decay $\tau\to\mu\phi$
($\tau\to\mu s\bar{s}$ transition) can be expressed as Bhattacharya:2016mcc
${\rm
BR}(\tau^{-}\to\mu^{-}\phi)=\frac{f_{\phi}^{2}m_{\tau}^{3}}{128\pi\Gamma_{\tau}}\Big{(}1+2\frac{m_{\phi}^{2}}{m_{\tau}^{2}}\Big{)}\Big{(}1-\frac{m_{\phi}^{2}}{m_{\tau}^{2}}\Big{)}^{2}\frac{|g^{\ell}_{\mu\tau}g^{q}_{ss}|^{2}}{M_{V}^{4}},$
(84)
where $m_{\phi}$ and $f_{\phi}=(238\pm 3)$ MeV Kumar:2018kmr are the $\phi$
meson mass and decay constant, respectively. Currently, the UL reported by
Belle on the branching ratio is ${\rm BR}(\tau^{-}\to\mu^{-}\phi)<2.3\times
10^{-8}$ Belle:2023ziz . The current UL produces the bound
$\frac{|g^{\ell}_{\mu\tau}g^{q}_{ss}|}{M_{V}^{2}}<9.4\times 10^{-3}\ {\rm
TeV}^{-2}.$ (85)
Since $D^{0}-\bar{D}^{0}$ mixing imposes $|g^{q}_{ss}|/M_{V}\leq
3\times 10^{-3}\ {\rm TeV^{-1}}$ (see Eq. (60) in Sec. III.5), the constraint from
$\tau\to\mu\phi$ is easily fulfilled. We will not take this
LFV process into account in the further TVB model analysis.
### III.10 LHC bounds
LHC constraints are always important for models with non-zero $Z^{\prime}$
couplings to the SM particles Langacker:2008yv . In particular, in our study
they set important constraints on the parameter space spanned by the TVB
couplings $(g^{q}_{bb},g^{\ell}_{\mu\mu})$ and
$(g^{q}_{bb},g^{\ell}_{\tau\tau})$. We consider the ATLAS search for high-mass
dilepton resonances in the mass range of 250 GeV to 6 TeV, in proton-proton
collisions at a center-of-mass energy of $\sqrt{s}=13$ TeV during Run 2 of the
LHC with an integrated luminosity of 139 fb$^{-1}$ ATLAS:2019erb (recently, the
CMS collaboration has also reported constraints for similar luminosities
CMS:2019tbu , basically identical to ATLAS ATLAS:2019erb ), and the data from
searches of $Z^{\prime}$ bosons decaying to tau pairs with an integrated
luminosity of 36.1 fb$^{-1}$ from proton-proton collisions at $\sqrt{s}=13$ TeV
ATLAS:2017eiz . There are also searches for high-mass resonances in the
monolepton channels ($pp\to\ell\nu$) carried out by ATLAS and CMS
ATLAS:2019lsy ; ATLASmonotau ; CMS:2022ncp . However, they provide weaker
bounds than those obtained from dilepton searches, and we will not take them
into account.
For the benchmark mass value $M_{V}=1$ TeV, we obtain the lower limit on the
parameter space from the intersection of the 95$\%$ CL upper limit on the
cross-section from the ATLAS experiment ATLAS:2019erb ; ATLAS:2017eiz with
the theoretical cross-section given in Ref. Erler:2011ud . Lower limits above
$4.5$ TeV apply to models with couplings to the first family, which is not
our case. The strongest restrictions come from $Z^{\prime}$ production
processes in the $b\bar{b}$ annihilation and the subsequent $Z^{\prime}$ decay
into muons ($\mu^{+}\mu^{-}$) and taus ($\tau^{+}\tau^{-}$). Further details
are shown in Refs. Erler:2011ud ; Salazar:2015gxa ; Benavides:2018fzm . Let us
remark that within the TVB framework it is also possible to consider
annihilation between quarks of different flavors (namely, via $g^{q}_{bs}$);
however, we anticipate that, according to our phenomenological analysis in Sec.
IV, this coupling is very small; therefore, we only consider production
processes without flavor-changing neutral currents. In the next section we
will show that the TVB parameter space is limited by LHC constraints to
regions where the couplings of the leptons or the quarks are close to zero,
excluding the regions preferred by the $B$ meson anomalies and low-energy
flavor observables.
## IV Analysis on the TVB parametric space
In this section we present the parametric space analysis of the TVB model
addressing a simultaneous explanation of the $b\to s\mu^{+}\mu^{-}$ and $b\to
c\tau\bar{\nu}_{\tau}$ data. We define the pull for the $i$-th observable as
${\rm pull}_{i}=\frac{\mathcal{O}^{\rm exp}_{i}-\mathcal{O}^{\rm
th}_{i}}{\Delta\mathcal{O}_{i}},$ (86)
where $\mathcal{O}^{\text{exp}}_{i}$ is the experimental measurement,
$\mathcal{O}^{\text{th}}_{i}\equiv\mathcal{O}^{\text{th}}_{i}(g^{q}_{bs},g^{q}_{bb},g^{\ell}_{\mu\mu},g^{\ell}_{\tau\tau},g^{\ell}_{\mu\tau})$
is the theoretical prediction that includes the NP contributions, and
$\Delta\mathcal{O}_{i}=((\sigma^{\rm exp}_{i})^{2}+(\sigma^{\rm
th}_{i})^{2})^{1/2}$ corresponds to the combined experimental and theoretical
uncertainties. By means of the pull, we can compare the fitted values of each
observable to their measured values. The $\chi^{2}$ function is written as the
sum of squared pulls, i.e.,
$\chi^{2}=\sum_{i}^{N_{\rm obs}}({\rm pull}_{i})^{2},$ (87)
where the sum extends over the number of observables $(N_{\rm obs})$ to be
fitted. Our phenomenological analysis is based on the flavor observables
presented in the previous Sec. III. This full data set includes: $b\to
c\tau\bar{\nu}_{\tau}$ and $b\to s\mu^{+}\mu^{-}$ data, bottomonium ratios
$R_{\Upsilon(nS)}$, LFV decays ($B^{+}\to K^{+}\mu^{\pm}\tau^{\mp}$, $B^{0}\to
K^{\ast 0}\mu^{\pm}\tau^{\mp}$, $B_{s}\to\mu^{\pm}\tau^{\mp}$,
$\Upsilon(nS)\to\mu^{\pm}\tau^{\mp}$), rare $B$ decays ($B\to
K^{(\ast)}\nu\bar{\nu},B\to K\tau^{+}\tau^{-},B_{s}\to\tau^{+}\tau^{-}$),
$\tau$ decays ($\tau\to 3\mu$, $\tau\to\mu\bar{\nu}_{\mu}\nu_{\tau}$), $\Delta
F=2$ processes, and neutrino trident production. We will study the impact of
the most recent LHCb measurements on the ratios $R(D^{(\ast)})$ LHCb2022 ;
LHCb:2023zxo ; LHCb2023 , allowing us to present an updated status of the TVB
model as an explanation to the $B$ meson anomalies. For such a purpose, we
will consider in our analysis the following three different sets of
observables,
* •
All data with $R(D)_{\rm LHCb22}$ \+ $R(D^{\ast})_{\rm LHCb23}$,
* •
All data with $R(D^{(\ast)})_{\rm LHCb22}$,
* •
All data with $R(D^{(\ast)})_{\rm HFLAV23}$.
All three of these sets contain a total of $N_{\rm obs}=31$ observables
and five free TVB parameters ($g^{q}_{bs}$, $g^{q}_{bb}$, $g^{\ell}_{\mu\mu}$,
$g^{\ell}_{\tau\tau}$, $g^{\ell}_{\mu\tau}$) to be fitted. The heavy TVB mass
will be fixed to the benchmark value $M_{V}=1\ {\rm TeV}$. Therefore, the
number of degrees of freedom is $N_{\rm dof}=26$.
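A minimal sketch of the $\chi^{2}$ minimization of Eqs. (86)-(87) is shown below. The two mock observables, their uncertainties, and their dependence on the couplings are placeholders for illustration only, not the actual 31-observable data set used in our fits.

```python
import numpy as np
from scipy.optimize import minimize

def chi2(g, obs_exp, obs_th, sigma):
    """g = (g_bs, g_bb, g_mumu, g_tautau, g_mutau); obs_th(g) returns predictions."""
    pulls = (obs_exp - obs_th(g)) / sigma   # Eq. (86)
    return np.sum(pulls**2)                 # Eq. (87)

# Toy example: two mock observables with placeholder coupling dependence.
obs_exp = np.array([0.356, 0.280])
sigma   = np.array([0.029, 0.013])
obs_th  = lambda g: np.array([0.298 + 0.1 * g[1] * g[3],
                              0.254 + 0.05 * g[1] * g[3]])

res = minimize(chi2, x0=np.zeros(5), args=(obs_exp, obs_th, sigma),
               method="Nelder-Mead")
print(res.x, res.fun)
```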
Table 6: Best-fit point values and $1\sigma$ intervals of the five TVB couplings $(g^{q}_{bs},g^{q}_{bb},g^{\ell}_{\mu\mu},g^{\ell}_{\tau\tau},g^{\ell}_{\mu\tau})$ for the three different sets of observables and a benchmark mass value of $M_{V}=1\ {\rm TeV}$.
TVB couplings | Best-fit point | $1\sigma$ intervals
---|---|---
All data with $R(D)_{\rm LHCb22}$ \+ $R(D^{\ast})_{\rm LHCb23}$ :
$\chi^{2}_{\rm min}/N_{\rm dof}=0.63$, $p$-value $=93.7\%$
$g^{q}_{bs}$ | $-2.3\times 10^{-3}$ | $[-3.2,-1.6]\times 10^{-3}$
$g^{q}_{bb}$ | 0.73 | $[0.28,1.72]$
$g^{\ell}_{\mu\mu}$ | 0.20 | $[0.072,0.131]$
$g^{\ell}_{\tau\tau}$ | 0.49 | $[0.27,0.71]$
$g^{\ell}_{\mu\tau}$ | $\sim 0$ | $[-0.11,0.11]$
All data with $R(D^{(\ast)})_{\rm LHCb22}$ : $\chi^{2}_{\rm min}/N_{\rm
dof}=0.62$, $p$-value $=93.1\%$
$g^{q}_{bs}$ | $-3.2\times 10^{-3}$ | $[-4.4,-2.1]\times 10^{-3}$
$g^{q}_{bb}$ | 1.50 | $[0.74,2.24]$
$g^{\ell}_{\mu\mu}$ | 0.074 | $[0.052,0.095]$
$g^{\ell}_{\tau\tau}$ | 0.70 | $[0.45,0.94]$
$g^{\ell}_{\mu\tau}$ | $\sim 0$ | $[-0.15,0.15]$
All data with $R(D^{(\ast)})_{\rm HFLAV23}$ : $\chi^{2}_{\rm min}/N_{\rm
dof}=0.59$, $p$-value $=95.2\%$
$g^{q}_{bs}$ | $-3.2\times 10^{-3}$ | $[-4.4,-2.1]\times 10^{-3}$
$g^{q}_{bb}$ | 1.52 | $[1.09,1.94]$
$g^{\ell}_{\mu\mu}$ | 0.073 | $[0.052,0.095]$
$g^{\ell}_{\tau\tau}$ | 0.70 | $[0.53,0.88]$
$g^{\ell}_{\mu\tau}$ | $\sim 0$ | $[-0.14,0.14]$
For the three sets of observables we find the best-fit point values by
minimizing the $\chi^{2}$ function ($\chi^{2}_{\rm min}$). In Table 6 we
report our results of the best-fit point values and $1\sigma$ intervals of TVB
couplings. For each fit we also present in Table 6 the values of
$\chi^{2}_{\rm min}/N_{\rm dof}$ and its corresponding $p$-value to evaluate
the fit-quality. In general, it is found that the three sets of observables
provide an excellent fit of the data. In the quark sector, the TVB model
requires small $g^{q}_{bs}$ coupling, $|g^{q}_{bs}|\sim\mathcal{O}(10^{-3})$,
and opposite sign to $g^{\ell}_{\mu\mu}$ to be consistent with $b\to
s\mu^{+}\mu^{-}$ data ($C_{9}^{\mu\mu}=-C_{10}^{\mu\mu}$ solution) and
$B_{s}-\bar{B}_{s}$ mixing. On the other hand, large values of the bottom-bottom
coupling $g^{q}_{bb}\sim\mathcal{O}(1)$ are preferred. As for the
leptonic couplings, it is found that the lepton-flavor-conserving ones have a
similar size $g^{\ell}_{\mu\mu}\approx
g^{\ell}_{\tau\tau}\sim\mathcal{O}(10^{-1})$ for All data with $R(D)_{\rm
LHCb22}$ \+ $R(D^{\ast})_{\rm LHCb23}$, suggesting a non-hierarchical pattern.
For All data with $R(D^{(\ast)})_{\rm LHCb22}$ (and with $R(D^{(\ast)})_{\rm
HFLAV23}$), in contrast, these couplings exhibit the hierarchy
$g^{\ell}_{\tau\tau}>g^{\ell}_{\mu\mu}$. As far as the LFV coupling is concerned, the
obtained best-fit point values of $g^{\ell}_{\mu\tau}$ are negligible. Thus,
the TVB model does not lead to appreciable LFV effects. Last but not least, we
also probed higher mass values ($M_{V}>1\ {\rm TeV}$). We find that, in order
to avoid large values of the $g^{q}_{bb}$ coupling $(\sim\sqrt{4\pi})$, which would
put the perturbativity of the model into question, the TVB mass can be as
large as $M_{V}\sim 2$ TeV.
(a) All data with $R(D)_{\rm LHCb22}$ \+ $R(D^{\ast})_{\rm LHCb23}$
(b) All data with $R(D^{(\ast)})_{\rm LHCb23}$
(c) All data with $R(D^{(\ast)})_{\rm HFLAV23}$
Figure 1: $68\%$ (green) and $95\%$ (light-green) CL allowed regions for the
most relevant 2D parametric space of (a) All data with $R(D)_{\rm LHCb22}$ \+
$R(D^{\ast})_{\rm LHCb23}$, (b) All data with $R(D^{(\ast)})_{\rm LHCb22}$,
and (c) All data with $R(D^{(\ast)})_{\rm HFLAV23}$, respectively, for
$M_{V}=1\ {\rm TeV}$. In each plot we are marginalizing over the rest of the
parameters. The SM value is represented by the blue dot. The light-gray region
corresponds to the LHC bounds at the $95\%$ CL. The non-perturbative region
($g^{q}_{bb}\geq\sqrt{4\pi}$) is shown in yellow.
In Fig. 1, we show the allowed regions of the most relevant two-dimensional (2D)
parametric space of (a) All data with $R(D)_{\rm LHCb22}$ \+ $R(D^{\ast})_{\rm
LHCb23}$, (b) All data with $R(D^{(\ast)})_{\rm LHCb22}$, and (c) All data
with $R(D^{(\ast)})_{\rm HFLAV23}$, respectively, for a benchmark TVB mass
$M_{V}=1\ {\rm TeV}$. The $68\%$ and $95\%$ CL regions are shown in green and
light-green colors, respectively. In each plot we are marginalizing over the
rest of the parameters. Furthermore, we include the LHC bounds (light-gray
regions) obtained from searches of high-mass dilepton (dimuon and ditau)
resonances at the ATLAS experiment ATLAS:2019erb ; ATLAS:2017eiz , as
discussed in Sec. III.10. For completeness, the non-perturbative region
($g^{q}_{bb}\geq\sqrt{4\pi}$) is shown in yellow. It is observed
in the planes ($g^{q}_{bb},g^{\ell}_{\tau\tau}$) and
($g^{q}_{bb},g^{\ell}_{\mu\mu}$) for All data with $R(D^{(\ast)})_{\rm
HFLAV23}$ that the TVB model seems to be strongly ruled out by the LHC
bounds. However, for All data with $R(D)_{\rm LHCb22}$ \+ $R(D^{\ast})_{\rm
LHCb23}$ (and with $R(D^{(\ast)})_{\rm LHCb22}$) that include the very recent
LHCb measurements LHCb2022 ; LHCb:2023zxo ; LHCb2023 , the TVB model can
provide a combined explanation of the $b\to c\tau\bar{\nu}_{\tau}$ and $b\to
s\mu^{+}\mu^{-}$ anomalies, in consistency with LHC bounds. Our analysis shows
that, given the current experimental situation, particularly with LHCb, it is
premature to exclude the TVB model as an explanation of the $B$ meson anomalies.
Future improvements and new measurements on $b\to c\tau\bar{\nu}_{\tau}$ data
at the Belle II and LHCb experiments will be a matter of importance to test
the TVB model.
We close by mentioning that an analysis of the TVB model was previously
reported by Kumar, London, and Watanabe (KLW) by implementing the 2018 $b\to
c\tau\bar{\nu}_{\tau}$ and $b\to s\mu^{+}\mu^{-}$ data Kumar:2018kmr . KLW
found that the TVB model is excluded as a possible explanation of the $B$
meson anomalies due to the bound from the LHC dimuon search (3.2 fb$^{-1}$)
Kumar:2018kmr . Such a result is in agreement with ours for All data with
$R(D^{(\ast)})_{\rm HFLAV23}$ and considering recent LHC dimuon (139 fb$^{-1}$) and
ditau (36.1 fb$^{-1}$) searches. Unlike the KLW analysis, we have incorporated
several new observables and considered the most recent available experimental
measurements and ULs. Thus, our present study extends, complements, and updates
the previous analysis performed by KLW. We also extend the recent analysis
Garcia-Duque:2021qmg where only the charged-current $b\to
c\tau\bar{\nu}_{\tau}$ anomaly was addressed within this framework.
### IV.1 Implications to some flavor parametrizations
As a final step in our analysis, we will explore the implications of our
phenomenological analysis of the TVB model for some flavor
parametrizations that have already been studied in the literature. For this we
consider scenarios in which the transformations involve only the second and
third generations Bhattacharya:2016mcc ; Calibbi:2015kma . As
previously discussed in Sec. II, the equivalence in the quark
sector is given by Eq. (II.1), while for the leptonic sector we have Eq. (II.1). Taking
into account the $1\sigma$ range solutions of TVB couplings obtained in Table
6 (for the three sets of data), we get, in general, a large coupling
$g_{2}^{q}\sim\mathcal{O}(1)$ and a very small mixing angle $|\theta_{D}|\sim
10^{-3}$. Such a small mixing angle ($|\theta_{D}|\ll V_{cb}$) is still
in agreement with previous analyses Bhattacharya:2016mcc ; Calibbi:2015kma .
In the leptonic sector, on the contrary, we found that, because of the $1\sigma$
range of the LFV coupling $g_{\mu\tau}^{\ell}$, it is not possible to find a
physical solution for the mixing angle $\theta_{L}$. As an additional probe, we
have performed a global fit to the current $b\to s\mu^{+}\mu^{-}$ and $b\to
c\tau\bar{\nu}_{\tau}$ data, and the most relevant flavor observables, with
$(g_{2}^{q},g_{2}^{\ell},\theta_{D},\theta_{L})$ as free parameters. For a
fixed mass value $M_{V}=1\ {\rm TeV}$, we obtained a very poor fit
($\chi^{2}_{\rm min}/N_{\rm dof}\gg 1$), concluding that this kind of flavor
setup is not viable within the TVB model.
## V Conclusions
We have presented an updated view of the TVB model as a simultaneous
explanation of the $B$ meson anomalies ($b\to c\tau\bar{\nu}_{\tau}$ and $b\to
s\mu^{+}\mu^{-}$ data). We performed a global fit of the TVB parameter space
with the most recent 2022 and 2023 data, including the LHCb measurements on
the charged-current LFU ratios $R(D^{(\ast)})$ and $R(\Lambda_{c})$. As
concerns $b\to s\mu^{+}\mu^{-}$ data, we have taken into account the
$C^{bs\mu\mu}_{9}=-C^{bs\mu\mu}_{10}$ solution from global fit analysis
including the recent results on $R_{K^{(\ast)}}$ by LHCb and ${\rm
BR}(B_{s}\to\mu^{+}\mu^{-})$ by CMS. We have also included all relevant flavor
observables such as $B_{s}-\bar{B}_{s}$ mixing, neutrino trident production,
LFV decays ($B\to K^{(\ast)}\mu^{\pm}\tau^{\mp}$,
$B_{s}\to\mu^{\pm}\tau^{\mp}$, $\tau\to\mu\phi$,
$\Upsilon(nS)\to\mu^{\pm}\tau^{\mp}$), rare $B$ decays ($B\to
K^{(\ast)}\nu\bar{\nu},B\to K\tau^{+}\tau^{-},B_{s}\to\tau^{+}\tau^{-}$), and
bottomonium LFU ratios. We have confronted the allowed parameter space with
the LHC bounds from searches of high-mass dilepton resonances at the ATLAS
experiment.
Our analysis has shown that for a heavy TVB mass of 1 TeV and using all data
along with world averages values on $R(D^{(\ast)})$ reported by HFLAV, the TVB
model can accommodate the $b\to c\tau\bar{\nu}_{\tau}$ and $b\to
s\mu^{+}\mu^{-}$ anomalies (in consistency with other flavor observables), but
it seems to be strongly disfavoured by the LHC bounds. However, we obtained a
different situation when all data are combined with the very recent LHCb
measurements on $R(D^{(\ast)})$. In that case, the $B$ meson anomalies can be addressed
within the TVB model in consistency with LHC constraints. We concluded that
new and improved $b\to c\tau\bar{\nu}_{\tau}$ data by LHCb and Belle II will
be required to really establish the viability of the TVB model.
We have also studied the consequences of our analysis of the TVB model to
flavor parametrizations in which the transformations involve only the second
and third generations. We obtained that such a flavor ansatz is not viable
within the TVB model.
###### Acknowledgements.
J. H. M. is grateful to Vicerrectoría de Investigación-Creación of Universidad
del Tolima for financial support of Project No. 290130517. E. R. acknowledges
financial support from the “Vicerrectoría de Investigaciones e Interacción
Social VIIS de la Universidad de Nariño,” Projects No. 1928 and No. 2172. We
are grateful to Hector Gisbert for his comments on LFV effects in the
dineutrino channels $B\to K^{(\ast)}\nu\bar{\nu}$.
## References
* (1) D. London and J. Matias, $B$ Flavour Anomalies: 2021 Theoretical Status Report, Ann. Rev. Nucl. Part. Sci. 72, 37-68 (2022) [arXiv:2110.13270 [hep-ph]].
* (2) J. Albrecht, D. van Dyk and C. Langenbruch, Flavour anomalies in heavy quark decays, Prog. Part. Nucl. Phys. 120, 103885 (2021) [arXiv:2107.04822 [hep-ex]].
* (3) S. Bifani, S. Descotes-Genon, A. Romero Vidal and M. H. Schune, Review of Lepton Universality tests in $B$ decays, J. Phys. G 46, no.2, 023001 (2019) [arXiv:1809.06229 [hep-ex]].
* (4) R. Aaij et al. [LHCb], Test of lepton universality using $B^{+}\rightarrow K^{+}\ell^{+}\ell^{-}$ decays, Phys. Rev. Lett. 113, 151601 (2014) [arXiv:1406.6482 [hep-ex]].
* (5) R. Aaij et al. [LHCb], Test of lepton universality in beauty-quark decays, [arXiv:2103.11769 [hep-ex]].
* (6) R. Aaij et al. [LHCb], Tests of lepton universality using $B^{0}\to K^{0}_{S}\ell^{+}\ell^{-}$ and $B^{+}\to K^{*+}\ell^{+}\ell^{-}$ decays, [arXiv:2110.09501 [hep-ex]].
* (7) R. Aaij et al. [LHCb], Search for lepton-universality violation in $B^{+}\to K^{+}\ell^{+}\ell^{-}$ decays, Phys. Rev. Lett. 122, no.19, 191801 (2019) [arXiv:1903.09252 [hep-ex]].
* (8) R. Aaij et al. [LHCb], Test of lepton universality with $B^{0}\rightarrow K^{*0}\ell^{+}\ell^{-}$ decays, JHEP 08, 055 (2017) [arXiv:1705.05802 [hep-ex]].
* (9) [LHCb], Test of lepton universality in $b\rightarrow s\ell^{+}\ell^{-}$ decays, [arXiv:2212.09152 [hep-ex]].
* (10) [LHCb], Measurement of lepton universality parameters in $B^{+}\to K^{+}\ell^{+}\ell^{-}$ and $B^{0}\to K^{*0}\ell^{+}\ell^{-}$ decays, [arXiv:2212.09153 [hep-ex]].
* (11) [CMS], Measurement of the B${}^{0}_{\mathrm{S}}$$\to$$\mu^{+}\mu^{-}$ decay properties and search for the B0$\to$$\mu^{+}\mu^{-}$ decay in proton-proton collisions at $\sqrt{s}$ = 13 TeV, [arXiv:2212.10311 [hep-ex]].
* (12) R. Aaij et al. [LHCb], Measurement of Form-Factor-Independent Observables in the Decay $B^{0}\to K^{*0}\mu^{+}\mu^{-}$, Phys. Rev. Lett. 111, 191801 (2013) [arXiv:1308.1707 [hep-ex]].
* (13) R. Aaij et al. [LHCb], Angular analysis of the $B^{0}\to K^{*0}\mu^{+}\mu^{-}$ decay using 3 fb-1 of integrated luminosity, JHEP 02, 104 (2016) [arXiv:1512.04442 [hep-ex]].
* (14) R. Aaij et al. [LHCb], Measurement of $CP$-Averaged Observables in the $B^{0}\rightarrow K^{*0}\mu^{+}\mu^{-}$ Decay, Phys. Rev. Lett. 125, no.1, 011802 (2020) [arXiv:2003.04831 [hep-ex]].
* (15) R. Aaij et al. [LHCb], Differential branching fraction and angular analysis of the decay $B_{s}^{0}\to\phi\mu^{+}\mu^{-}$, JHEP 07, 084 (2013) [arXiv:1305.2168 [hep-ex]].
* (16) R. Aaij et al. [LHCb], Angular analysis and differential branching fraction of the decay $B^{0}_{s}\to\phi\mu^{+}\mu^{-}$, JHEP 09, 179 (2015) [arXiv:1506.08777 [hep-ex]].
* (17) R. Aaij et al. [LHCb], Angular Analysis of the $B^{+}\rightarrow K^{\ast+}\mu^{+}\mu^{-}$ Decay, Phys. Rev. Lett. 126, no.16, 161802 (2021) [arXiv:2012.13241 [hep-ex]].
* (18) J. Aebischer, W. Altmannshofer, D. Guadagnoli, M. Reboud, P. Stangl and D. M. Straub, $B$-decay discrepancies after Moriond 2019, Eur. Phys. J. C 80, no.3, 252 (2020) [arXiv:1903.10434 [hep-ph]].
* (19) W. Altmannshofer and P. Stangl, New physics in rare B decays after Moriond 2021, Eur. Phys. J. C 81, no.10, 952 (2021) doi:10.1140/epjc/s10052-021-09725-1 [arXiv:2103.13370 [hep-ph]].
* (20) M. Algueró, B. Capdevila, S. Descotes-Genon, J. Matias and M. Novoa-Brunet, $\bm{b\to s\ell\ell}$ global fits after Moriond 2021 results, [arXiv:2104.08921 [hep-ph]].
* (21) M. Algueró, B. Capdevila, A. Crivellin, S. Descotes-Genon, P. Masjuan, J. Matias, M. Novoa Brunet and J. Virto, Emerging patterns of New Physics with and without Lepton Flavour Universal contributions, Eur. Phys. J. C 79, no.8, 714 (2019) [arXiv:1903.09578 [hep-ph]].
* (22) L. S. Geng, B. Grinstein, S. Jäger, S. Y. Li, J. Martin Camalich and R. X. Shi, Implications of new evidence for lepton-universality violation in $b\to s\ell^{+}\ell^{-}$ decays, Phys. Rev. D 104, no.3, 035029 (2021) [arXiv:2103.12738 [hep-ph]].
* (23) T. Hurth, F. Mahmoudi, D. M. Santos and S. Neshatpour, More Indications for Lepton Nonuniversality in $b\to s\ell^{+}\ell^{-}$, [arXiv:2104.10058 [hep-ph]].
* (24) A. Angelescu, D. Bečirević, D. A. Faroughy, F. Jaffredo and O. Sumensari, Single leptoquark solutions to the B-physics anomalies, Phys. Rev. D 104, no.5, 055017 (2021) [arXiv:2103.12504 [hep-ph]].
* (25) A. Carvunis, F. Dettori, S. Gangal, D. Guadagnoli and C. Normand, On the effective lifetime of $B_{s}\to\mu\mu\gamma$, JHEP 12, 078 (2021) [arXiv:2102.13390 [hep-ph]].
* (26) A. Greljo, J. Salko, A. Smolkovič and P. Stangl, Rare $b$ decays meet high-mass Drell-Yan, [arXiv:2212.10497 [hep-ph]].
* (27) M. Algueró, A. Biswas, B. Capdevila, S. Descotes-Genon, J. Matias and M. Novoa-Brunet, To (b)e or not to (b)e: No electrons at LHCb, [arXiv:2304.07330 [hep-ph]].
* (28) J. P. Lees et al. [BaBar Collaboration], Evidence for an excess of $\bar{B}\to D^{(*)}\tau^{-}\bar{\nu}_{\tau}$ decays, Phys. Rev. Lett. 109, 101802 (2012) [arXiv:1205.5442 [hep-ex]].
* (29) J. P. Lees et al. [BaBar Collaboration], Measurement of an Excess of $\bar{B}\to D^{(*)}\tau^{-}\bar{\nu}_{\tau}$ Decays and Implications for Charged Higgs Bosons, Phys. Rev. D 88, no. 7, 072012 (2013) [arXiv:1303.0571 [hep-ex]].
* (30) M. Huschle et al. [Belle Collaboration], Measurement of the branching ratio of $\bar{B}\to D^{(\ast)}\tau^{-}\bar{\nu}_{\tau}$ relative to $\bar{B}\to D^{(\ast)}\ell^{-}\bar{\nu}_{\ell}$ decays with hadronic tagging at Belle, Phys. Rev. D 92, no. 7, 072014 (2015) [arXiv:1507.03233 [hep-ex]].
* (31) Y. Sato et al. [Belle Collaboration], Phys. Rev. D 94, no. 7, 072007 (2016) [arXiv:1607.07923 [hep-ex]].
* (32) S. Hirose [Belle Collaboration], $\bar{B}\rightarrow D^{(*)}\tau^{-}\bar{\nu}_{\tau}$ and Related Tauonic Topics at Belle, arXiv:1705.05100 [hep-ex].
* (33) R. Aaij et al. [LHCb Collaboration], Measurement of the ratio of branching fractions $\mathcal{B}(\bar{B}^{0}\to D^{*+}\tau^{-}\bar{\nu}_{\tau})/\mathcal{B}(\bar{B}^{0}\to D^{*+}\mu^{-}\bar{\nu}_{\mu})$, Phys. Rev. Lett. 115, no. 11, 111803 (2015) Erratum: [Phys. Rev. Lett. 115, no. 15, 159901 (2015)] [arXiv:1506.08614 [hep-ex]].
* (34) R. Aaij et al. [LHCb Collaboration], Test of Lepton Flavor Universality by the measurement of the $B^{0}\to D^{*-}\tau^{+}\nu_{\tau}$ branching fraction using three-prong $\tau$ decays, Phys. Rev. D 97, no. 7, 072013 (2018) [arXiv:1711.02505 [hep-ex]].
* (35) R. Aaij et al. [LHCb Collaboration], Measurement of the ratio of the $B^{0}\to D^{*-}\tau^{+}\nu_{\tau}$ and $B^{0}\to D^{*-}\mu^{+}\nu_{\mu}$ branching fractions using three-prong $\tau$-lepton decays, Phys. Rev. Lett. 120, no. 17, 171802 (2018) [arXiv:1708.08856 [hep-ex]].
* (36) G. Caria et al. [Belle Collaboration], Measurement of $\mathcal{R}(D)$ and $\mathcal{R}(D^{*})$ with a Semileptonic Tagging Method, Phys. Rev. Lett. 124 (2020) no.16, 161803 [arXiv:1910.05864 [hep-ex]].
* (37) S. Hirose et al. [Belle Collaboration], Measurement of the $\tau$ lepton polarization and $R(D^{*})$ in the decay $\bar{B}\rightarrow D^{*}\tau^{-}\bar{\nu}_{\tau}$ with one-prong hadronic $\tau$ decays at Belle, Phys. Rev. D 97, no. 1, 012004 (2018) [arXiv:1709.00129 [hep-ex]].
* (38) S. Hirose et al. [Belle Collaboration], Measurement of the $\tau$ lepton polarization and $R(D^{*})$ in the decay $\bar{B}\to D^{*}\tau^{-}\bar{\nu}_{\tau}$, Phys. Rev. Lett. 118, no. 21, 211801 (2017) [arXiv:1612.00529 [hep-ex]].
* (39) R. Aaij et al. (LHCb Collaboration), Measurement of the ratio of branching fractions $\mathcal{B}(B_{c}^{+}\,\to\,J/\psi\tau^{+}\nu_{\tau})$/$\mathcal{B}(B_{c}^{+}\,\to\,J/\psi\mu^{+}\nu_{\mu})$, Phys. Rev. Lett. 120, 121801 (2018) [arXiv:1711.05623 [hep-ex]].
* (40) A. Abdesselam et al. [Belle Collaboration], Measurement of the $D^{\ast-}$ polarization in the decay $B^{0}\to D^{\ast-}\tau^{+}\nu_{\tau}$, arXiv:1903.03102 [hep-ex].
* (41) Y. Amhis et al. [HFLAV], Averages of $b$-hadron, $c$-hadron, and $\tau$-lepton properties as of 2021, [arXiv:2206.07501 [hep-ex]].
* (42) LHCb Collaboration, First joint measurement of $R(D^{\ast})$ and $R(D^{0})$ at LHCb, https://indico.cern.ch/event/1187939/
* (43) [LHCb], Measurement of the ratios of branching fractions $\mathcal{R}(D^{*})$ and $\mathcal{R}(D^{0})$, [arXiv:2302.02886 [hep-ex]].
* (44) LHCb Collaboration, Measurement of $R(D^{\ast})$ with hadronic $\tau^{+}$ decays at $\sqrt{s}=$ 13 TeV by the LHCb collaboration, https://indico.cern.ch/event/1231797/.
* (45) For updated results see HFLAV preliminary average of $R(D^{(\ast)})$ for Winter 2023 in https://hflav-eos.web.cern.ch/hflav-eos/semi/winter23_prel/html/RDsDsstar/RDRDs.html.
* (46) J. Harrison et al. [LATTICE-HPQCD], $R(J/\psi)$ and $B_{c}^{-}\rightarrow J/\psi\ell^{-}\bar{\nu}_{\ell}$ Lepton Flavor Universality Violating Observables from Lattice QCD, Phys. Rev. Lett. 125, no.22, 222003 (2020) [arXiv:2007.06956 [hep-lat]].
* (47) S. Iguro, T. Kitahara and R. Watanabe, Global fit to $b\to c\tau\nu$ anomalies 2022 mid-autumn, [arXiv:2210.10751 [hep-ph]].
* (48) R. Alonso, B. Grinstein and J. Martin Camalich, Lifetime of $B_{c}^{-}$ Constrains Explanations for Anomalies in $B\to D^{(*)}\tau\nu$, Phys. Rev. Lett. 118, 081802 (2017). [arXiv:1611.06676 [hep-ph]]
* (49) A. G. Akeroyd and C. H. Chen, Constraint on the branching ratio of $B_{c}\to\tau\nu$ from LEP1 and consequences for R(D(*)) anomaly, Phys. Rev. D 96, 075011 (2017). [arXiv:1708.04072 [hep-ph]].
* (50) S. Kamali, New physics in inclusive semileptonic $B$ decays including nonperturbative corrections, Int. J. Mod. Phys. A 34, no.06n07, 1950036 (2019) [arXiv:1811.07393 [hep-ph]].
* (51) R. Aaij et al. [LHCb], Observation of the decay $\Lambda_{b}^{0}\rightarrow\Lambda_{c}^{+}\tau^{-}\overline{\nu}_{\tau}$, Phys. Rev. Lett. 128, no.19, 191803 (2022) [arXiv:2201.03497 [hep-ex]].
* (52) M. Fedele, M. Blanke, A. Crivellin, S. Iguro, T. Kitahara, U. Nierste and R. Watanabe, Impact of $\Lambda$b→$\Lambda$c$\tau$$\nu$ measurement on new physics in b→cl$\nu$ transitions, Phys. Rev. D 107, no.5, 055005 (2023) [arXiv:2211.14172 [hep-ph]].
* (53) C. H. García-Duque, J. M. Cabarcas, J. H. Muñoz, N. Quintero and E. Rojas, Singlet vector leptoquark model facing recent LHCb and BABAR measurements, Nucl. Phys. B 988, 116115 (2023) [arXiv:2209.04753 [hep-ph]].
* (54) F. U. Bernlochner, Z. Ligeti, D. J. Robinson and W. L. Sutcliffe, Precise predictions for $\Lambda_{b}\to\Lambda_{c}$ semileptonic decays, Phys. Rev. D 99, no.5, 055008 (2019) [arXiv:1812.07593 [hep-ph]].
* (55) L. Calibbi, A. Crivellin and T. Ota, Effective Field Theory Approach to $b\to s\ell\ell^{(\prime)}$, $B\to K^{(*)}\nu\overline{\nu}$ and $B\to D^{(*)}\tau\nu$ with Third Generation Couplings, Phys. Rev. Lett. 115, 181801 (2015) [arXiv:1506.02661 [hep-ph]].
* (56) A. Greljo, G. Isidori and D. Marzocca, On the breaking of Lepton Flavor Universality in B decays, JHEP 07, 142 (2015) [arXiv:1506.01705 [hep-ph]].
* (57) B. Bhattacharya, A. Datta, D. London and S. Shivashankara, Simultaneous Explanation of the $R_{K}$ and $R(D^{(*)})$ Puzzles, Phys. Lett. B 742, 370-374 (2015) [arXiv:1412.7164 [hep-ph]].
* (58) D. A. Faroughy, A. Greljo and J. F. Kamenik, Confronting lepton flavor universality violation in B decays with high-$p_{T}$ tau lepton searches at LHC, Phys. Lett. B 764, 126 (2017). [arXiv:1609.07138 [hep-ph]]
* (59) D. Buttazzo, A. Greljo, G. Isidori and D. Marzocca, B-physics anomalies: a guide to combined explanations, JHEP 11, 044 (2017) [arXiv:1706.07808 [hep-ph]].
* (60) B. Bhattacharya, A. Datta, J. P. Guévin, D. London and R. Watanabe, Simultaneous explanation of the $R_{K}$ and $R_{D^{(*)}}$ puzzles: a model analysis, JHEP 01, 015 (2017) [arXiv:1609.09078 [hep-ph]].
* (61) J. Kumar, D. London and R. Watanabe, Combined Explanations of the $b\to s\mu^{+}\mu^{-}$ and $b\to c\tau^{-}{\bar{\nu}}$ Anomalies: a General Model Analysis, Phys. Rev. D 99, no.1, 015007 (2019) [arXiv:1806.07403 [hep-ph]].
* (62) D. Guadagnoli, M. Reboud and O. Sumensari, A gauged horizontal $SU(2)$ symmetry and $R_{K^{(\ast)}}$, JHEP 11, 163 (2018) [arXiv:1807.03285 [hep-ph]].
* (63) S. M. Boucenna, A. Celis, J. Fuentes-Martin, A. Vicente and J. Virto, Non-abelian gauge extensions for B-decay anomalies, Phys. Lett. B 760, 214-219 (2016) [arXiv:1604.03088 [hep-ph]].
* (64) S. M. Boucenna, A. Celis, J. Fuentes-Martin, A. Vicente and J. Virto, Phenomenology of an $SU(2)\times SU(2)\times U(1)$ model with lepton-flavour non-universality, JHEP 12, 059 (2016) [arXiv:1608.01349 [hep-ph]].
* (65) B. Capdevila, A. Crivellin, C. A. Manzari and M. Montull, Explaining $b\to s\ell^{+}\ell^{-}$ and the Cabibbo angle anomaly with a vector triplet, Phys. Rev. D 103, no.1, 015032 (2021) [arXiv:2005.13542 [hep-ph]].
* (66) J. D. Gómez, N. Quintero and E. Rojas, Charged current $b\to c\tau\bar{\nu}_{\tau}$ anomalies in a general $W^{\prime}$ boson scenario, Phys. Rev. D 100, no.9, 093003 (2019) [arXiv:1907.08357 [hep-ph]].
* (67) A. Datta, S. Kamali, S. Meinel and A. Rashed, Phenomenology of ${\Lambda}_{b}\to{\Lambda}_{c}\tau{\overline{\nu}}_{\tau}$ using lattice QCD calculations, JHEP 08, 131 (2017) [arXiv:1702.02243 [hep-ph]].
* (68) E. Kou et al. [Belle-II], The Belle II Physics Book, PTEP 2019, no.12, 123C01 (2019) [erratum: PTEP 2020, no.2, 029201 (2020)] [arXiv:1808.10567 [hep-ex]].
* (69) R. Glattauer et al. [Belle], Measurement of the decay $B\to D\ell\nu_{\ell}$ in fully reconstructed events and determination of the Cabibbo-Kobayashi-Maskawa matrix element $|V_{cb}|$, Phys. Rev. D 93, no.3, 032006 (2016) [arXiv:1510.03657 [hep-ex]].
* (70) A. Abdesselam et al. [Belle], Precise determination of the CKM matrix element $\left|V_{cb}\right|$ with $\bar{B}^{0}\to D^{*\,+}\,\ell^{-}\,\bar{\nu}_{\ell}$ decays with hadronic tagging at Belle, [arXiv:1702.01521 [hep-ex]].
* (71) D. Bečirević, F. Jaffredo, A. Peñuelas and O. Sumensari, New Physics effects in leptonic and semileptonic decays, JHEP 05, 175 (2021) [arXiv:2012.09872 [hep-ph]].
* (72) C. Bobeth, M. Bordone, N. Gubernari, M. Jung and D. van Dyk, Lepton-flavour non-universality of ${\bar{B}}\rightarrow D^{*}\ell{{\bar{\nu}}}$ angular distributions in and beyond the Standard Model, Eur. Phys. J. C 81, no.11, 984 (2021) [arXiv:2104.02094 [hep-ph]].
* (73) M. Bona et al. [UTfit], New UTfit Analysis of the Unitarity Triangle in the Cabibbo-Kobayashi-Maskawa scheme, [arXiv:2212.03894 [hep-ph]].
* (74) R. L. Workman et al. [Particle Data Group], Review of Particle Physics, PTEP 2022, 083C01 (2022)
* (75) M. T. Prim et al. [Belle], Search for $B^{+}\to\mu^{+}\,\nu_{\mu}$ and $B^{+}\to\mu^{+}\,N$ with inclusive tagging, Phys. Rev. D 101, no.3, 032007 (2020) [arXiv:1911.03186 [hep-ex]].
* (76) C. H. García-Duque, J. H. Muñoz, N. Quintero and E. Rojas, Extra gauge bosons and lepton flavor universality violation in $\Upsilon$ and $B$ meson decays, Phys. Rev. D 103, no.7, 073003 (2021) [arXiv:2103.00344 [hep-ph]].
* (77) D. Aloni, A. Efrati, Y. Grossman and Y. Nir, $\Upsilon$ and $\psi$ leptonic decays as probes of solutions to the $R_{D}^{(*)}$ puzzle, JHEP 06, 019 (2017) [arXiv:1702.07356 [hep-ph]].
* (78) S. Descotes-Genon, S. Fajfer, J. F. Kamenik and M. Novoa-Brunet, Testing lepton flavor universality in $\Upsilon(4S)$ decays, Phys. Rev. D 103, no.11, 113009 (2021) [arXiv:2104.06842 [hep-ph]].
* (79) P. del Amo Sanchez et al. [BaBar], Test of lepton universality in $\Upsilon(1S)$ decays at BaBar, Phys. Rev. Lett. 104, 191801 (2010) [arXiv:1002.4358 [hep-ex]].
* (80) D. Besson et al. [CLEO], First Observation of $\Upsilon(3S)\to\tau^{+}\tau^{-}$ and Tests of Lepton Universality in Upsilon Decays, Phys. Rev. Lett. 98, 052002 (2007) [arXiv:hep-ex/0607019 [hep-ex]].
* (81) J. P. Lees et al. [BaBar], Precision measurement of the ${\cal B}(\Upsilon(3S)\to\tau^{+}\tau^{-})/{\cal B}(\Upsilon(3S)\to\mu^{+}\mu^{-})$ ratio, Phys. Rev. Lett. 125, 241801 (2020) [arXiv:2005.01230 [hep-ex]].
* (82) S. Patra et al. [Belle], Search for charged lepton flavor violating decays of $\Upsilon(1S)$, JHEP 05, 095 (2022) [arXiv:2201.09620 [hep-ex]].
* (83) L. Di Luzio, M. Kirk, A. Lenz and T. Rauh, $\Delta M_{s}$ theory precision confronts flavour anomalies, JHEP 12, 009 (2019) [arXiv:1909.11087 [hep-ph]].
* (84) L. Di Luzio, M. Kirk and A. Lenz, Updated $B_{s}$-mixing constraints on new physics models for $b\to s\ell^{+}\ell^{-}$ anomalies, Phys. Rev. D 97, no.9, 095035 (2018) [arXiv:1712.06572 [hep-ph]].
* (85) A. K. Alok, N. R. S. Chundawat and D. Kumar, Impact of $b\rightarrow s\ell\ell$ anomalies on rare charm decays in non-universal $Z^{\prime}$ models, Eur. Phys. J. C 82, no.1, 30 (2022) [arXiv:2110.12451 [hep-ph]].
* (86) W. Altmannshofer, S. Gori, M. Pospelov and I. Yavin, Neutrino Trident Production: A Powerful Probe of New Physics with Neutrino Beams, Phys. Rev. Lett. 113, 091801 (2014) [arXiv:1406.2332 [hep-ph]].
* (87) R. Aaij et al. [LHCb], Search for the lepton flavour violating decay $B^{+}\rightarrow K^{+}\mu^{-}\tau^{+}$ using $B_{s2}^{*0}$ decays, JHEP 06, 129 (2020) [arXiv:2003.04352 [hep-ex]].
* (88) [LHCb], Search for the lepton-flavour violating decays $B^{0}\to K^{*0}\tau^{\pm}\mu^{\mp}$, [arXiv:2209.09846 [hep-ex]].
* (89) W. G. Parrott et al. [HPQCD], Standard Model predictions for B→K$\ell$+$\ell$-, B→K$\ell$1-$\ell$2+ and B→K$\nu$$\nu$¯ using form factors from Nf=2+1+1 lattice QCD, Phys. Rev. D 107, no.1, 014511 (2023) [arXiv:2207.13371 [hep-ph]].
* (90) R. Aaij et al. [LHCb], Search for the lepton-flavour-violating decays $B^{0}_{s}\to\tau^{\pm}\mu^{\mp}$ and $B^{0}\to\tau^{\pm}\mu^{\mp}$, Phys. Rev. Lett. 123, no.21, 211801 (2019) [arXiv:1905.06614 [hep-ex]].
* (91) R. Bause, H. Gisbert, M. Golz and G. Hiller, Lepton universality and lepton flavor conservation tests with dineutrino modes, Eur. Phys. J. C 82, no.2, 164 (2022) [arXiv:2007.05001 [hep-ph]].
* (92) R. Bause, H. Gisbert, M. Golz and G. Hiller, Interplay of dineutrino modes with semileptonic rare B-decays, JHEP 12, 061 (2021) [arXiv:2109.01675 [hep-ph]].
* (93) T. E. Browder, N. G. Deshpande, R. Mandal and R. Sinha, Impact of $B\to K\nu\bar{\nu}$ measurements on beyond the Standard Model theories, Phys. Rev. D 104, no.5, 053007 (2021) [arXiv:2107.01080 [hep-ph]].
* (94) X. G. He and G. Valencia, $R^{\nu}_{K^{(\ast)}}$ and non-standard neutrino interactions, Phys. Lett. B 821, 136607 (2021) [arXiv:2108.05033 [hep-ph]].
* (95) A. J. Buras, J. Girrbach-Noe, C. Niehoff and D. M. Straub, $B\to{K}^{\left(\ast\right)}\nu\overline{\nu}$ decays in the Standard Model and beyond, JHEP 02, 184 (2015) [arXiv:1409.4557 [hep-ph]].
* (96) J. Grygier et al. [Belle], Search for $\bm{B\to h\nu\bar{\nu}}$ decays with semileptonic tagging at Belle, Phys. Rev. D 96, no.9, 091101 (2017) [arXiv:1702.03224 [hep-ex]].
* (97) F. Abudinén et al. [Belle-II], Search for $B^{+}\to K^{+}\nu\bar{\nu}$ Decays Using an Inclusive Tagging Method at Belle II, Phys. Rev. Lett. 127, no.18, 181802 (2021) [arXiv:2104.12624 [hep-ex]].
* (98) F. Dattola [Belle-II], Search for $B^{+}\to K^{+}\nu\bar{\nu}$ decays with an inclusive tagging method at the Belle II experiment, [arXiv:2105.05754 [hep-ex]].
* (99) R. Aaij et al. [LHCb], Search for the decays $B_{s}^{0}\to\tau^{+}\tau^{-}$ and $B^{0}\to\tau^{+}\tau^{-}$, Phys. Rev. Lett. 118, no.25, 251802 (2017) [arXiv:1703.02508 [hep-ex]].
* (100) C. Bobeth, M. Gorbahn, T. Hermann, M. Misiak, E. Stamou and M. Steinhauser, $B_{s,d}\to l^{+}l^{-}$ in the Standard Model with Reduced Theoretical Uncertainty, Phys. Rev. Lett. 112, 101801 (2014) [arXiv:1311.0903 [hep-ph]].
* (101) J. Albrecht, F. Bernlochner, M. Kenzie, S. Reichert, D. Straub and A. Tully, Future prospects for exploring present day anomalies in flavour physics measurements with Belle II and LHCb, [arXiv:1709.10308 [hep-ph]].
* (102) C. Cornella, J. Fuentes-Martin and G. Isidori, Revisiting the vector leptoquark explanation of the B-physics anomalies, JHEP 07, 168 (2019) [arXiv:1903.11517 [hep-ph]].
* (103) A. Pich, Precision Tau Physics, Prog. Part. Nucl. Phys. 75, 41-85 (2014) [arXiv:1310.7922 [hep-ph]].
* (104) N. Tsuzuki et al. [Belle], Search for lepton-flavor-violating $\tau$ decays into a lepton and a vector meson using the full Belle data sample, [arXiv:2301.03768 [hep-ex]].
* (105) P. Langacker, The Physics of Heavy $Z^{\prime}$ Gauge Bosons, Rev. Mod. Phys. 81, 1199-1228 (2009) [arXiv:0801.1345 [hep-ph]].
* (106) G. Aad et al. [ATLAS], Search for high-mass dilepton resonances using 139 fb-1 of $pp$ collision data collected at $\sqrt{s}=$13 TeV with the ATLAS detector, Phys. Lett. B 796, 68-87 (2019) [arXiv:1903.06248 [hep-ex]].
* (107) [CMS], Search for a narrow resonance in high-mass dilepton final states in proton-proton collisions using 140$~{}\mathrm{fb}^{-1}$ of data at $\sqrt{s}=13~{}\mathrm{TeV}$, CMS-PAS-EXO-19-019.
* (108) M. Aaboud et al. [ATLAS], Search for additional heavy neutral Higgs and gauge bosons in the ditau final state produced in 36 fb-1 of pp collisions at $\sqrt{s}=13$ TeV with the ATLAS detector, JHEP 01 (2018), 055 [arXiv:1709.07242 [hep-ex]].
* (109) G. Aad et al. [ATLAS], Search for a heavy charged boson in events with a charged lepton and missing transverse momentum from $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector, Phys. Rev. D 100, no.5, 052013 (2019) [arXiv:1906.05609 [hep-ex]].
* (110) G. Aad et al. [ATLAS], Search for high-mass resonances in final states with a tau lepton and missing transverse momentum with the ATLAS detector, ATLAS-CONF-2021-025.
* (111) [CMS], Search for new physics in the $\tau$ lepton plus missing transverse momentum final state in proton-proton collisions at $\sqrt{s}$ = 13 TeV, [arXiv:2212.12604 [hep-ex]].
* (112) J. Erler, P. Langacker, S. Munir and E. Rojas, $Z^{\prime}$ Bosons at Colliders: a Bayesian Viewpoint, JHEP 11, 076 (2011) [arXiv:1103.2659 [hep-ph]].
* (113) C. Salazar, R. H. Benavides, W. A. Ponce and E. Rojas, LHC Constraints on 3-3-1 Models, JHEP 07, 096 (2015) [arXiv:1503.03519 [hep-ph]].
* (114) R. H. Benavides, L. Muñoz, W. A. Ponce, O. Rodríguez and E. Rojas, Electroweak couplings and LHC constraints on alternative Z’ models in $E_{6}$, Int. J. Mod. Phys. A 33, no.35, 1850206 (2018) [arXiv:1801.10595 [hep-ph]].
Quantile Least Squares: A Flexible Approach for Robust
Estimation and Validation of Location-Scale Families
Mohammed Adjieteh111 Mohammed Adjieteh, ASA, is a Ph.D. candidate and Graduate
Teaching Assistant in the Department of Mathematical Sciences, University of
Wisconsin-Milwaukee, P.O. Box 413, Milwaukee, WI 53201, USA. e-mail:
<EMAIL_ADDRESS>
University of Wisconsin-Milwaukee
Vytaras Brazauskas222 Corresponding Author: Vytaras Brazauskas, Ph.D., ASA, is
a Professor in the Department of Mathematical Sciences, University of
Wisconsin-Milwaukee, P.O. Box 413, Milwaukee, WI 53201, USA. e-mail:
<EMAIL_ADDRESS>
University of Wisconsin-Milwaukee
December 15, 2023
> Abstract. In this paper, the problem of robust estimation and validation of
> location-scale families is revisited. The proposed methods exploit the joint
> asymptotic normality of sample quantiles (of i.i.d. random variables) to
> construct the ordinary and generalized least squares estimators of location
> and scale parameters. These quantile least squares (QLS) estimators are easy
> to compute because they have explicit expressions, their robustness is
> achieved by excluding extreme quantiles from the least-squares estimation,
> and efficiency is boosted by using as many non-extreme quantiles as
> practically relevant. The influence functions of the QLS estimators are
> specified and plotted for several location-scale families. They closely
> resemble the shapes of some well-known influence functions yet those shapes
> emerge automatically (i.e., do not need to be specified). The joint
> asymptotic normality of the proposed estimators is established and their
> finite-sample properties are explored using simulations. Also, computational
> costs of these estimators, as well as those of MLE, are evaluated for sample
> sizes $n=10^{6},10^{7},10^{8},10^{9}$. For model validation, two goodness-
> of-fit tests are constructed and their performance is studied using
> simulations and real data. In particular, for the daily stock returns of
> Google over the last four years, both tests strongly support the logistic
> distribution assumption and reject other bell-shaped competitors.
>
> Keywords. Goodness-of-Fit; Least Squares; Quantiles; Relative Efficiency;
> Robustness.
## 1 Introduction
The problem of robust estimation of location-scale families can be traced back
to the seminal works of Tukey (1960), Huber (1964), and Hampel (1968). Since
then, numerous robust methods for this problem have been proposed in the
literature; they are summarized in the books of Hampel et al. (1986), Maronna
et al. (2006), and Huber and Ronchetti (2009). While at first it might seem
like the topic is exhausted and fully “solved”, in this paper we argue that it
is worthwhile revisiting it. In particular, connections with best linear
unbiased estimators or BLUE, based on strategically selected order statistics,
can be exploited and studied from various theoretical and practical
perspectives: robustness, efficiency, model validation (goodness of fit), and
computational cost. All of this within the same framework.
The literature on BLUE methods for location-scale families, which are
constructed out of a few order statistics, goes back to Mosteller (1946).
Since then, numerous papers on parameter estimation, hypothesis testing,
optimal spacings, simulations, and applications have been published. A very
short list of contributions to this area includes: a first comprehensive
review of estimation problems by Sarhan and Greenberg (1962) (and many
specialized papers by at least one of these authors); estimation of parameters
of the Cauchy distribution by Chan (1970) (and multiple related papers by the
same author and his co-authors) and Cane (1974); a relatively recent review of
this literature by Ali and Umbach (1998) (and numerous technical papers by at
least one of these authors). A common theme in many of the papers in this
field is to show that for various distributions, highly-efficient estimators
can be constructed using fewer than ten order statistics. Also, when the number
of order statistics is fixed, then the optimal spacings, according to the
asymptotic relative efficiency criterion, can be determined. Computational
ease and robustness of such estimators are often mentioned, but to the best of
our knowledge no formal studies of robustness that specify breakdown points
and influence functions have been pursued. Interestingly, those optimal (most
efficient) estimators usually include order statistics that are very close to
the extreme levels of 0 or 1, making the estimators practically nonrobust.
Xu et al. (2014), focusing on the $g$-and-$h$ distributional family, did study
the breakdown points and influence functions of robust estimators that they
derived using the criterion of quantile least squares (QLS). In this paper, we
will link the QLS criterion with BLUE methods for location-scale families and
thus introduce two types of estimators: ordinary QLS and generalized QLS
(Sections 2 and 3). Besides studying small-sample properties of the new
estimators under clean and contaminated data scenarios (Section 4), we will
evaluate computational costs of these estimators and those of MLE for sample
sizes $n=10^{6},10^{7},10^{8},10^{9}$. In addition, two goodness-of-fit tests
will be constructed (Section 3.5) and their performance will be studied using
simulations (Section 4.4) and real data (Section 5).
## 2 Quantile Least Squares
In this section, a general formulation of the least squares estimators based
on sample quantiles is presented, and their asymptotic robustness and relative
efficiency properties are specified.
Suppose a sample of independent and identically distributed (i.i.d.)
continuous random variables, $X_{1},\ldots,X_{n}$, with the cumulative
distribution function (cdf) $F$, probability density function (pdf) $f$, and
quantile function (qf) $F^{-1}$, is observed. Let the cdf, pdf, and qf be
given in a parametric form, and suppose that they are indexed by an
$m$-dimensional parameter
$\mbox{\boldmath$\theta$}=(\theta_{1},\ldots,\theta_{m})$. Further, let
$X_{(1)}\leq\cdots\leq X_{(n)}$ denote the ordered sample values. The
empirical estimator of the $p$th population quantile is the corresponding
sample quantile $X_{(\lceil np\rceil)}=\widehat{F}^{-1}(p)$, where
$\lceil\cdot\rceil$ denotes the rounding up operation. Also, throughout the
paper the notation ${\cal AN}$ stands for “asymptotically normal.”
### 2.1 Regression Estimation
To specify a regression framework, we first recall the joint asymptotic
normality result of sample quantiles. (The following theorem is slightly
edited to match the context of the current paper.)
Theorem 2.1 [ Serfling (2002a, p.80, Theorem B) ]
Let $0<p_{1}<\cdots<p_{k}<1$, and suppose that pdf $f$ is continuous. Then the
$k$-variate vector of empirical quantiles
$\big{(}\widehat{F}^{-1}(p_{1}),\ldots,\widehat{F}^{-1}(p_{k})\big{)}$ is
${\cal AN}$ with the mean vector
$\big{(}F^{-1}(p_{1}),\ldots,F^{-1}(p_{k})\big{)}$ and $k\times k$ covariance-
variance matrix with elements $\sigma_{ij}/n$, where
$\sigma_{ij}=\frac{p_{i}(1-p_{j})}{f(F^{-1}(p_{i}))f(F^{-1}(p_{j}))}\qquad\mbox{for
~{}}i\leq j$ (2.1)
and $\sigma_{ij}=\sigma_{ji}\;$ for $i>j$.
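As an illustration of (2.1), the following minimal Python sketch assembles the matrix $[\sigma_{ij}]$ for a given set of increasing levels. (The paper's own computations were done in MATLAB and are not shown; the helper name `quantile_cov_matrix` and the use of `scipy.stats` are our illustrative choices.)

```python
import numpy as np
from scipy.stats import norm

def quantile_cov_matrix(p, pdf, qf):
    """Asymptotic covariance (times n) of sample quantiles; elements from eq. (2.1).

    Assumes p is an increasing sequence of levels in (0, 1)."""
    p = np.asarray(p, dtype=float)
    fq = pdf(qf(p))                                        # f(F^{-1}(p_i)) for each level
    pi, pj = np.meshgrid(p, p, indexing="ij")
    num = np.minimum(pi, pj) * (1.0 - np.maximum(pi, pj))  # p_i (1 - p_j) for i <= j, symmetrized
    return num / np.outer(fq, fq)

# Example: five levels of the standard normal distribution
p = [0.1, 0.3, 0.5, 0.7, 0.9]
Sigma = quantile_cov_matrix(p, norm.pdf, norm.ppf)
```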
For large sample size and general $F$, this result can be interpreted as a
nonlinear regression model with normally distributed error terms. That is,
$\widehat{F}^{-1}(p_{i})~{}=~{}F^{-1}(p_{i})+\varepsilon_{i},\qquad
i=1,\ldots,k,$ (2.2)
where the error term
$\mbox{\boldmath$\varepsilon$}=\left(\varepsilon_{1},\ldots,\varepsilon_{k}\right)$
is ${\cal AN}\big{(}\mbox{\bf 0},\,\mbox{\boldmath$\Sigma$}/n\big{)}$ with the
elements of $\Sigma$ given by (2.1). Since $F^{-1}(p_{i})$ is a function of
$\mbox{\boldmath$\theta$}=(\theta_{1},\ldots,\theta_{m})$, the number of
quantiles ($k$) should be at least as large as the number of parameters ($m$).
Then, the least squares problem can be formulated as follows:
$\mbox{minimize}\quad\sum_{i=1}^{k}\left(\widehat{F}^{-1}(p_{i})-F^{-1}(p_{i})\right)^{2}\quad\mbox{with
respect to $\theta_{1},\ldots,\theta_{m}$}.$ (2.3)
In general, (2.3) is a challenging computational problem that requires
numerical optimization algorithms. Moreover, the objective function may have
many local minima and even the global minimum may produce a biased estimate.
But as was demonstrated by Xu et al. (2014, Section 2.1) for the $g$-and-$h$
distributional family, this problem can be solved with rapidly converging
algorithms, and its solution possesses several desirable properties:
consistency, asymptotic normality, bounded influence functions, positive
breakdown point. We also notice that using similar arguments to those of Xu et
al. (2014) the equivalent theoretical properties can be established for other
parametric distributions, which will be discussed in Sections 2.2-2.3.
Further, it will be shown in Section 3 that for location-scale families and
their variants, the nonlinear regression model (2.2) becomes a linear
regression model with (approximately) normally distributed error terms whose
covariance-variance matrix has a convenient structure. As a result, the latter
problem has explicit solutions with known theoretical properties.
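For concreteness, here is a minimal sketch of the nonlinear QLS fit (2.3) using a general-purpose optimizer. This is not the authors' implementation: the helper names, the Nelder-Mead method, and the normal example are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def sample_quantiles(x, p):
    """Empirical quantiles X_(ceil(np)) at levels p (1-based order statistics)."""
    xs = np.sort(np.asarray(x))
    idx = np.ceil(len(xs) * np.asarray(p)).astype(int) - 1  # convert to 0-based index
    return xs[idx]

def qls_fit(x, p, model_qf, theta0):
    """Nonlinear QLS (2.3): minimize squared gaps between empirical and model quantiles."""
    emp = sample_quantiles(x, p)
    obj = lambda th: np.sum((emp - model_qf(np.asarray(p), th)) ** 2)
    return minimize(obj, theta0, method="Nelder-Mead").x

# Example: fit a normal location-scale model, theta = (mu, sigma)
rng = np.random.default_rng(1)
x = rng.normal(2.0, 3.0, size=1000)
p = np.linspace(0.05, 0.95, 25)
qf = lambda p, th: th[0] + th[1] * norm.ppf(p)
print(qls_fit(x, p, qf, theta0=[0.0, 1.0]))
```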
### 2.2 Robustness Properties
The quantile least squares (QLS) estimator found by solving (2.3) can be
viewed as an indirect estimator, robust inferential properties of which are
provided by Genton and de Luna (2000) and Genton and Ronchetti (2003). Using
those general results and the arguments of Xu et al. (2014) several properties
of the QLS estimator can be stated. First, as is clear from the choice of
quantile levels,
$0<a=p_{1}<p_{2}<\cdots<p_{k-1}<p_{k}=b<1,$
the order statistics with the index less than $\lceil na\rceil$ and more than
$\lceil nb\rceil$ play no role in estimation of the regression model (2.2).
This implies that the QLS estimator is globally robust with the (asymptotic)
breakdown point equal to:
$\mbox{BP}~{}=~{}\min\left\\{\mbox{LBP},\mbox{UBP}\right\\}~{}=~{}\min\left\\{a,1-b\right\\}>0.$
(2.4)
Note that when the underlying probability distribution $F$ is not symmetric it
makes sense to consider lower (LBP) and upper (UBP) breakdown points
separately. For more details on the relevance of LBP and UBP in applications,
see Brazauskas and Serfling (2000) and Serfling (2002b). Second, the influence
function (IF) of the QLS estimator for $\theta$ is directly related to the
influence functions of “data”, i.e., the selected sample quantiles
$\widehat{F}^{-1}(p_{1}),\ldots,\widehat{F}^{-1}(p_{k})$:
$\mbox{IF}\big{(}x,\widehat{F}^{-1}(p_{i})\big{)}~{}=~{}\frac{p_{i}-\mbox{\bf\large
1}\\{x\leq F^{-1}(p_{i})\\}}{f(F^{-1}(p_{i}))},\qquad i=1,\ldots,k,$
where $-\infty<x<\infty$ and $\mbox{\bf\large 1}\\{\cdot\\}$ denotes the
indicator function. Specifically, the IF of
$\widehat{\mbox{\boldmath$\theta$}}$ is given by
$\mbox{IF}\big{(}x,\widehat{\mbox{\boldmath$\theta$}}\big{)}~{}=~{}(\mathbf{X^{\prime}X})^{-1}\mathbf{X^{\prime}}\left(\mbox{IF}\big{(}x,\widehat{F}^{-1}(p_{1})\big{)},\ldots,\mbox{IF}\big{(}x,\widehat{F}^{-1}(p_{k})\big{)}\right)^{\prime},$
(2.5)
where $\mathbf{X}=\big{[}X_{ij}\big{]}_{k\times
m}=\left[\frac{\partial\widehat{F}^{-1}(p_{i})}{\partial\widehat{\theta}_{j}}\right]\Big{|}_{\widehat{\mbox{\boldmath$\theta$}}=\mbox{\boldmath$\theta$}}$
, and is bounded because $p_{1}=a>0$ and $p_{k}=b<1$.
### 2.3 Asymptotic Relative Efficiency
To start with, the model assumptions used in Theorem 1 of Xu et al. (2014,
Section 2.1) can be broadened to include other parametric models. Then,
repeating the arguments used by these authors to prove the theorem, it can be
established that in general the QLS estimator is consistent and ${\cal AN}$
with the mean vector $\theta$ and $m\times m$ covariance-variance matrix
$\dfrac{1}{n}\,(\mathbf{X^{\prime}X})^{-1}\mathbf{X^{\prime}}\mbox{\boldmath$\Sigma$}\mathbf{X}(\mathbf{X^{\prime}X})^{-1},$
(2.6)
where $\mathbf{X}$ is defined as in (2.5) and the elements of $\Sigma$ are
given by (2.1).
Further, under suitable regularity conditions (Serfling, 2002a, Section
4.2.2), the maximum likelihood estimator (MLE) is ${\cal AN}$ with the mean
vector $\theta$ and $m\times m$ covariance-variance matrix
$\frac{1}{n}\,\mathbf{I}^{-1}$, where $\mathbf{I}$ is the Fisher information
matrix. Since MLE is the most efficient ${\cal AN}$ estimator (i.e., its
asymptotic variance attains the Cramér-Rao bound), its performance can serve
as a benchmark for the QLS estimator. In particular, the following asymptotic
relative efficiency (ARE) criterion will be used:
$\mbox{ARE}\,\big{(}\mbox{QLS},\,\mbox{MLE}\big{)}~{}=~{}\left(\frac{\mbox{det}\left[\mathbf{I}^{-1}\right]}{\mbox{det}\big{[}(\mathbf{X^{\prime}X})^{-1}\mathbf{X^{\prime}}\mbox{\boldmath$\Sigma$}\mathbf{X}(\mathbf{X^{\prime}X})^{-1}\big{]}}\right)^{1/m},$
(2.7)
where ‘det’ stands for the determinant of a square matrix (Serfling, 2002a,
Section 4.1).
## 3 Location-Scale Families
In this section, the proposed methodology is worked out for location-scale
families. Several such families are listed in Section 3.1. Two QLS-type
estimators are developed in Section 3.2. Further, efficiency and robustness
properties of the new estimators are established in Sections 3.3 and 3.4,
respectively. Finally, in Section 3.5, we analyze model residuals and explore
their goodness-of-fit properties.
### 3.1 Preliminaries
The pdf $f$, cdf $F$, and the qf $F^{-1}$ of the location-scale family are
given by:
$f(x)=\frac{1}{\sigma}f_{*}\left(\frac{x-\mu}{\sigma}\right),\qquad
F(x)=F_{*}\left(\frac{x-\mu}{\sigma}\right),\qquad F^{-1}(u)=\mu+\sigma
F_{*}^{-1}(u),$ (3.1)
where $-\infty<x<\infty$ (or depending on the distribution, $x$ can be
restricted to some interval), $0<u<1$, $-\infty<\mu<\infty$ is the location
parameter, and $\sigma>0$ is the scale parameter. The functions $f_{*}$,
$F_{*}$, $F_{*}^{-1}$ represent pdf, cdf, qf, respectively, of the standard
location-scale family (i.e., $\mu=0$, $\sigma=1$). Choosing $\mu$ or $\sigma$
known, the location-scale family is reduced to either the scale or location
family, respectively.
In Table 3.1, we list key facts for several location-scale families. The
selected distributions include typical symmetric bell-shaped densities, with
domains on all real numbers (e.g., Cauchy, Laplace, Logistic, Normal), as well
as a few asymmetric densities with varying domains (e.g., Exponential, Gumbel,
Lévy). In the latter group, the Gumbel pdf is defined on all real numbers but
is slightly skewed; this distribution plays an important role in extreme value
theory. Two-parameter Exponential and Lévy densities are highly skewed and
have domains $(\mu,\,\infty)$. They represent examples when the aforementioned
regularity conditions are not satisfied, due to the presence of $\mu$. Both
distributions are widely used in applications and have many probabilistic
connections. For example, the Lévy distribution is directly related to the
following well-known distributions: Inverse Gamma, Stable, and Folded Normal.
Table 3.1. Key probabilistic formulas and information for selected location-
scale families.
Probability | Standard pdf | Standard qf | Information Matrix
---|---|---|---
Distribution | $f_{*}(z)$ | $F_{*}^{-1}(u)$ | $\mathbf{I_{*}}\,(=\sigma^{2}\times\mathbf{I})$
Cauchy | $\dfrac{1}{\pi(1+z^{2})}$ | $\tan(\pi(u-0.5))$ | $\begin{bmatrix}\frac{1}{2}&0\\\ 0&\frac{1}{2}\\\ \end{bmatrix}$
Laplace | $0.5\,e^{-|z|}$ | $\left\\{\begin{array}[]{cl}\ln(2u),&u\leq 0.5\\\\[1.07639pt] -\ln(2(1-u)),&u>0.5\end{array}\right.$ | $\begin{bmatrix}1&0\\\ 0&1\\\ \end{bmatrix}$
Logistic | $\dfrac{e^{-z}}{(1+e^{-z})^{2}}$ | $-\ln(1/u-1)$ | $\begin{bmatrix}\frac{1}{3}&0\\\ 0&\frac{3+\pi^{2}}{9}\\\ \end{bmatrix}$
Normal | $\frac{1}{\sqrt{2\pi}}\,e^{-z^{2}/2}$ | $\Phi^{-1}(u)$ | $\begin{bmatrix}1&0\\\ 0&2\\\ \end{bmatrix}$
Exponential | $e^{-z}$, $z>0$ | $-\ln(1-u)$ | $1$ (for $\sigma$; $\mu$ is known)
Gumbel | $\displaystyle\exp\\{-z-e^{-z}\\}$ | $-\ln(-\ln(u))$ | $\begin{bmatrix}1&\gamma-1\\\ \gamma-1&\frac{\pi^{2}}{6}+(\gamma-1)^{2}\\\ \end{bmatrix}$
Lévy | $\frac{1}{\sqrt{2\pi}}\,z^{-3/2}\,e^{-(2z)^{-1}}$, $z>0$ | $\big{(}\Phi^{-1}(1-u/2)\big{)}^{-2}$ | $\frac{1}{2}$ (for $\sigma$; $\mu$ is known)
Note: $\gamma=-\Gamma^{\prime}(1)\approx 0.5772$ is the Euler-Mascheroni
constant.
It is worthwhile mentioning that there exist numerous variants of the
location-scale family such as folded distributions (e.g., Folded Normal,
Folded Cauchy) or log-location-scale families (e.g., Lognormal, Pareto type
$I$). Since their treatment requires only suitable parameter-free
transformation of the data variable, the estimators developed in this paper
will work for those distributions as well.
### 3.2 Parameter Estimation
Incorporating expressions (3.1) into the model (2.2) yields a linear
regression model:
$\mathbf{Y}~{}=~{}\mathbf{X}\mbox{\boldmath$\beta$}+\mbox{\boldmath$\varepsilon$},$
(3.2)
where
$\mathbf{Y}=\left(\widehat{F}^{-1}(p_{1}),\ldots,\widehat{F}^{-1}(p_{k})\right)^{\prime}$,
$\beta$ $=(\mu,\sigma)^{\prime}$, and
$\mbox{\boldmath$\varepsilon$}=\left(\varepsilon_{1},\ldots,\varepsilon_{k}\right)^{\prime}$
is ${\cal AN}\big{(}\mbox{\bf
0},\,\sigma^{2}\mbox{\boldmath$\Sigma_{*}$}/n\big{)}$. The entries of
$\Sigma_{*}$ are defined by (2.1), but now they are completely known because
$f$ and $F^{-1}$ are replaced with $f_{*}$ and $F_{*}^{-1}$, respectively. The
design matrix $\mathbf{X}$ is defined as in (2.5) and has simple entries:
$\mathbf{X}~{}=~{}\begin{bmatrix}\frac{\partial
F^{-1}(p_{1})}{\partial\mu}&\cdot&\cdot&\cdot&\frac{\partial
F^{-1}(p_{k})}{\partial\mu}\\\\[4.30554pt] \frac{\partial
F^{-1}(p_{1})}{\partial\sigma}&\cdot&\cdot&\cdot&\frac{\partial
F^{-1}(p_{k})}{\partial\sigma}\\\
\end{bmatrix}^{\prime}~{}=~{}\begin{bmatrix}1&\cdot&\cdot&\cdot&1\\\\[4.30554pt]
F_{*}^{-1}(p_{1})&\cdot&\cdot&\cdot&F_{*}^{-1}(p_{k})\\\
\end{bmatrix}^{\prime}.$ (3.3)
Solving (2.3) for the model (3.2) leads to the ordinary least squares
estimator
$\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
oQLS}}~{}=~{}(\mathbf{X^{\prime}X})^{-1}\mathbf{X^{\prime}}\mathbf{Y}$ (3.4)
which is ${\cal AN}$ with the mean vector $\beta$ $=(\mu,\sigma)^{\prime}$ and
$2\times 2$ covariance-variance matrix
$\dfrac{\sigma^{2}}{n}\,(\mathbf{X^{\prime}X})^{-1}\mathbf{X^{\prime}}\mbox{\boldmath$\Sigma_{*}$}\mathbf{X}(\mathbf{X^{\prime}X})^{-1},$
(3.5)
where $\mathbf{X}$ is given by (3.3).
Further, the oQLS solution (3.4) implicitly assumes that $\Sigma_{*}$ is the
$k\times k$ identity matrix, which might be a sensible assumption for the non-
linear regression model (2.3) because the resulting estimator is consistent
while the computational complexity is significantly reduced. In general,
however, such a simplification decreases the estimator’s efficiency. Since for
the linear regression model (3.2), $\Sigma_{*}$ is known, ARE of oQLS can be
improved by employing the generalized least squares estimator
$\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
gQLS}}~{}=~{}(\mathbf{X^{\prime}}\mbox{\boldmath$\Sigma_{*}^{-1}$}\mathbf{X})^{-1}\mathbf{X^{\prime}}\mbox{\boldmath$\Sigma_{*}^{-1}$}\mathbf{Y}$
(3.6)
which is ${\cal AN}$ with the mean vector $\beta$ $=(\mu,\sigma)^{\prime}$ and
$2\times 2$ covariance-variance matrix
$\dfrac{\sigma^{2}}{n}\,(\mathbf{X^{\prime}}\mbox{\boldmath$\Sigma_{*}^{-1}$}\mathbf{X})^{-1}.$
(3.7)
Finally, note that if a one-parameter family – location or scale – needs to be
estimated, the formulas (3.4)–(3.7) still remain valid, but the design matrix
(3.3) would be a column of 1’s (for location) or a column of $F_{*}^{-1}(p)$’s
(for scale).
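The explicit formulas (3.4) and (3.6) translate directly into a few lines of code. The sketch below is a minimal Python illustration (not the authors' MATLAB code); it reuses `quantile_cov_matrix` from the Section 2 sketch, and the function name is ours.

```python
import numpy as np
from scipy.stats import cauchy

def qls_location_scale(x, p, pdf0, qf0, generalized=True):
    """oQLS (3.4) or gQLS (3.6) estimates of (mu, sigma) for a location-scale family."""
    xs = np.sort(np.asarray(x))
    n = len(xs)
    p = np.asarray(p)
    Y = xs[np.ceil(n * p).astype(int) - 1]              # sample quantiles X_(ceil(np))
    X = np.column_stack([np.ones_like(p), qf0(p)])      # design matrix (3.3)
    if generalized:
        Sigma = quantile_cov_matrix(p, pdf0, qf0)       # Sigma_* of the standard family
        W = np.linalg.solve(Sigma, X)                   # Sigma_*^{-1} X
        return np.linalg.solve(X.T @ W, W.T @ Y)        # (X' S^-1 X)^-1 X' S^-1 Y, eq. (3.6)
    return np.linalg.lstsq(X, Y, rcond=None)[0]         # (X'X)^-1 X'Y, eq. (3.4)

# Example: gQLS for Cauchy data, levels per (3.8) with (a,b)=(0.05,0.95), k=25
rng = np.random.default_rng(2)
p = 0.05 + 0.90 * np.arange(25) / 24
print(qls_location_scale(cauchy.rvs(size=10**4, random_state=rng), p,
                         cauchy.pdf, cauchy.ppf))
```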
### 3.3 Relative Efficiency Studies
To see how much efficiency is sacrificed when one uses
$\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny oQLS}}$ instead of
$\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny gQLS}}$, we will compute (2.7)
for several distributions of Table 3.1. In view of (3.5) and (3.7), the ARE
formula (2.7) is now given by
$\mbox{ARE}\,\big{(}\mbox{oQLS},\,\mbox{MLE}\big{)}~{}=~{}\left(\frac{\mbox{det}\left[\mathbf{I}_{*}^{-1}\right]}{\mbox{det}\big{[}(\mathbf{X^{\prime}X})^{-1}\mathbf{X^{\prime}}\mbox{\boldmath$\Sigma_{*}$}\mathbf{X}(\mathbf{X^{\prime}X})^{-1}\big{]}}\right)^{1/2}$
and
$\mbox{ARE}\,\big{(}\mbox{gQLS},\,\mbox{MLE}\big{)}~{}=~{}\left(\frac{\mbox{det}\left[\mathbf{I}_{*}^{-1}\right]}{\mbox{det}\big{[}(\mathbf{X^{\prime}}\mbox{\boldmath$\Sigma^{-1}_{*}$}\mathbf{X})^{-1}\big{]}}\right)^{1/2},$
where $\mathbf{I}_{*}$ is specified in Table 3.1. For one-parameter families
(location or scale), the covariance-variance matrices in the ARE formulas get
reduced to scalars and the exponents become 1.
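These two ARE expressions can be evaluated numerically in a few lines. A sketch follows (again reusing `quantile_cov_matrix` from the earlier sketch; the Normal example takes $\mathbf{I_{*}}=\mbox{diag}(1,2)$ from Table 3.1, and the function name is ours).

```python
import numpy as np
from scipy.stats import norm

def are_vs_mle(p, pdf0, qf0, info_star):
    """ARE(oQLS, MLE) and ARE(gQLS, MLE) for a location-scale family (Section 3.3)."""
    p = np.asarray(p)
    X = np.column_stack([np.ones_like(p), qf0(p)])      # design matrix (3.3)
    S = quantile_cov_matrix(p, pdf0, qf0)               # Sigma_*
    XtX_inv = np.linalg.inv(X.T @ X)
    V_o = XtX_inv @ X.T @ S @ X @ XtX_inv               # oQLS covariance factor, eq. (3.5)
    V_g = np.linalg.inv(X.T @ np.linalg.solve(S, X))    # gQLS covariance factor, eq. (3.7)
    d = np.linalg.det(np.linalg.inv(info_star))
    m = X.shape[1]
    return (d / np.linalg.det(V_o)) ** (1 / m), (d / np.linalg.det(V_g)) ** (1 / m)

# Normal family, levels per (3.8) with (a,b)=(0.05,0.95) and k=25
p = 0.05 + 0.90 * np.arange(25) / 24
print(are_vs_mle(p, norm.pdf, norm.ppf, np.diag([1.0, 2.0])))
```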
These ARE expressions are functions of $k$, the number of selected sample
quantiles, therefore the choice of $p_{1},\ldots,p_{k}$ (with $k\geq 2$) is
important. As mentioned earlier, our top priority is estimators’ robustness.
Thus, to keep the breakdown points positive and influence functions bounded,
we first fix $a=p_{1}>0$ and $b=p_{k}<1$ and then make the remaining $p_{i}$’s
equally spaced:
$p_{i}=a+\frac{i-1}{k-1}(b-a),\qquad i=1,\ldots,k.$ (3.8)
It is clear that choosing larger $a$ and smaller $b$ yields higher robustness,
while choosing larger $k$ improves efficiency. But there are practical limits
to the efficiency improvement. As can be seen from Figure 3.1, the (pointwise)
ARE curves become almost flat for $k>10$ making the efficiency gains
negligible. This holds true irrespectively of the underlying location-scale
family. On the other hand, choosing gQLS over oQLS gives a major boost to ARE,
especially for the heavier-tailed distributions such as Gumbel, Laplace, and
Cauchy. Also, the seesaw pattern of the ARE curve (for location and to a
lesser degree for location-scale) for the Laplace distribution can be
explained as follows: for $a=1-b$ and $k$ odd, one of the $p_{i}$’s is always
equal to 0.50 resulting in the selection of the sample median, which in this
case is MLE for $\mu$ and thus full efficiency is attained.
Further, to demonstrate that increasing $k$ yields no substantial gains in
efficiency, in Table 3.2 we list AREs of the generalized QLS estimators for
Cauchy, Gumbel, Laplace, Logistic, and Normal distributions, when
$k=15,\,20,\,25$. As is evident from the table, choosing $k=25$ over $k=15$
results in $\sim 1\%$ improvement of AREs. Similar magnitude improvements can
be observed when the extreme quantile levels $a$ and $b$ are changed from
$(0.02,0.98)$ to $(0.05,0.95)$ to $(0.10,0.90)$. In view of this discussion
and keeping in mind that $k$ should be odd (see the ARE entries for Laplace),
we can say that the choice of $k=15$ would suffice in most situations.
However, in order not to squander efficiency for some unusual location-
scale distributions, $k=25$ is a safer choice.
Finally, it is tempting to try the brute force approach for the ordinary QLS
estimators with the hope that it would substantially improve ARE. In Table
3.3, we list AREs for the oQLS estimator (of the joint location-scale
parameter) when $k$ ranges from 15 to 200. Depending on the distribution and
how low the ARE value is at $k=15$, some tiny improvements are still possible
even for $k=200$ (Cauchy) but they are leveling off (Logistic). More
interestingly, for the Laplace, Normal, and Gumbel distributions, their AREs
reach the peak at some lower $k$ and then start slowly declining. This
behavior is not unexpected because the oQLS estimator, while consistent, is
based on an incorrect simplifying assumption that $\Sigma_{*}$ is the $k\times
k$ identity matrix. This assumption, combined with the brute force approach,
will eventually penalize the estimator’s performance.
Figure 3.1. AREs of the ordinary and generalized QLS estimators of location,
scale, and joint
location-scale parameters for Cauchy, Gumbel, Laplace, Logistic, and Normal
distributions.
The quantiles are selected according to (3.8) with $(a,b)=(0.05,0.95)$ and
$k=2:16$.
Table 3.2. AREs of the generalized QLS estimators of location, scale, and
joint
location-scale parameters for Cauchy, Gumbel, Laplace, Logistic, and Normal
distributions.
The quantiles are selected according to (3.8) with various $(a,b)$ and
$k=15,\,20,\,25$.
Probability | Location | Scale | Location-Scale
---|---|---|---
Distribution | $k=15$ | $k=20$ | $k=25$ | $k=15$ | $k=20$ | $k=25$ | $k=15$ | $k=20$ | $k=25$
$(a,\,b)=(0.02,\,0.98)$
Cauchy | 0.986 | 0.992 | 0.995 | 0.985 | 0.992 | 0.995 | 0.985 | 0.992 | 0.995
Laplace | 1 | 0.950 | 1 | 0.930 | 0.943 | 0.949 | 0.965 | 0.946 | 0.974
Logistic | 0.996 | 0.998 | 0.998 | 0.938 | 0.951 | 0.958 | 0.966 | 0.974 | 0.978
Normal | 0.987 | 0.991 | 0.992 | 0.901 | 0.915 | 0.922 | 0.943 | 0.952 | 0.957
Gumbel | 0.985 | 0.990 | 0.991 | 0.902 | 0.913 | 0.918 | 0.933 | 0.941 | 0.946
$(a,\,b)=(0.05,\,0.95)$
Cauchy | 0.988 | 0.993 | 0.995 | 0.987 | 0.993 | 0.995 | 0.987 | 0.993 | 0.995
Laplace | 1 | 0.953 | 1 | 0.888 | 0.894 | 0.896 | 0.943 | 0.923 | 0.947
Logistic | 0.996 | 0.998 | 0.999 | 0.904 | 0.910 | 0.913 | 0.949 | 0.953 | 0.955
Normal | 0.982 | 0.984 | 0.985 | 0.836 | 0.841 | 0.843 | 0.906 | 0.909 | 0.911
Gumbel | 0.979 | 0.981 | 0.982 | 0.836 | 0.840 | 0.842 | 0.888 | 0.892 | 0.893
$(a,\,b)=(0.10,\,0.90)$
Cauchy | 0.981 | 0.985 | 0.986 | 0.989 | 0.993 | 0.995 | 0.985 | 0.989 | 0.991
Laplace | 1 | 0.958 | 1 | 0.796 | 0.798 | 0.799 | 0.892 | 0.874 | 0.894
Logistic | 0.995 | 0.997 | 0.997 | 0.814 | 0.816 | 0.817 | 0.900 | 0.902 | 0.903
Normal | 0.964 | 0.965 | 0.965 | 0.708 | 0.710 | 0.711 | 0.826 | 0.828 | 0.828
Gumbel | 0.956 | 0.957 | 0.957 | 0.719 | 0.720 | 0.721 | 0.803 | 0.805 | 0.805
Table 3.3. AREs of the ordinary QLS estimators of joint location-scale
parameters for Cauchy, Gumbel, Laplace, Logistic and Normal
distributions, with $(a,b)=(0.05,0.95)$ and various $k$ (see (3.8)).
Probability | $k$
---|---
Distribution | $15$ | $20$ | $25$ | $50$ | $75$ | $100$ | $200$
Cauchy | 0.181 | 0.211 | 0.232 | 0.282 | 0.299 | 0.308 | 0.321
Laplace | 0.742 | 0.753 | 0.757 | 0.759 | 0.758 | 0.757 | 0.756
Logistic | 0.672 | 0.693 | 0.704 | 0.718 | 0.720 | 0.721 | 0.722
Normal | 0.914 | 0.930 | 0.936 | 0.941 | 0.940 | 0.940 | 0.938
Gumbel | 0.905 | 0.905 | 0.902 | 0.890 | 0.885 | 0.882 | 0.877
### 3.4 Robustness Investigations
To see what kind of shapes the influence functions of
$\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny oQLS}}$ and
$\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny gQLS}}$ exhibit, we evaluate
and plot (2.5) for the symmetric (Figure 3.2) and asymmetric (Figure 3.3)
location-scale families of Table 3.1. In view of (3.4), (3.6), and (3.1), the
expression (2.5) is now given by
$\mbox{IF}\big{(}x,\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
oQLS}}\big{)}~{}=~{}(\mathbf{X^{\prime}X})^{-1}\mathbf{X^{\prime}}\left(\mbox{IF}\big{(}x,\widehat{F}^{-1}(p_{1})\big{)},\ldots,\mbox{IF}\big{(}x,\widehat{F}^{-1}(p_{k})\big{)}\right)^{\prime}$
and
$\mbox{IF}\big{(}x,\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
gQLS}}\big{)}~{}=~{}(\mathbf{X^{\prime}}\mbox{\boldmath$\Sigma_{*}^{-1}$}\mathbf{X})^{-1}\mathbf{X^{\prime}}\mbox{\boldmath$\Sigma_{*}^{-1}$}\left(\mbox{IF}\big{(}x,\widehat{F}^{-1}(p_{1})\big{)},\ldots,\mbox{IF}\big{(}x,\widehat{F}^{-1}(p_{k})\big{)}\right)^{\prime},$
where
$\mbox{IF}\big{(}x,\widehat{F}^{-1}(p_{i})\big{)}=\sigma\,\frac{p_{i}-\mbox{\bf
1}\\{x\,\leq\,\mu+\sigma
F_{*}^{-1}(p_{i})\\}}{f_{*}(F_{*}^{-1}(p_{i}))},~{}i=1,\ldots,k.$ In Figures
3.2 and 3.3, $\mu=0$ and $\sigma=1$.
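Evaluating these influence functions on a grid of $x$ values is straightforward; a minimal sketch follows (helper names are ours, and `quantile_cov_matrix` is the helper from the Section 2 sketch).

```python
import numpy as np

def if_qls(x_grid, p, pdf0, qf0, mu=0.0, sigma=1.0, generalized=True):
    """Influence functions (Section 3.4) of oQLS/gQLS on a grid of x values."""
    p = np.asarray(p)
    z = qf0(p)
    X = np.column_stack([np.ones_like(p), z])
    if generalized:
        S = quantile_cov_matrix(p, pdf0, qf0)
        SX = np.linalg.solve(S, X)
        H = np.linalg.solve(X.T @ SX, SX.T)             # (X'S^-1 X)^-1 X'S^-1
    else:
        H = np.linalg.solve(X.T @ X, X.T)               # (X'X)^-1 X'
    x = np.asarray(x_grid)[:, None]
    ind = (x <= mu + sigma * z[None, :]).astype(float)  # 1{x <= mu + sigma z_i}
    if_quant = sigma * (p[None, :] - ind) / pdf0(z)[None, :]
    return if_quant @ H.T                               # columns: IF(mu), IF(sigma)
```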
Figure 3.2. Influence functions of the ordinary and generalized QLS estimators
of
location and scale parameters for Cauchy, Laplace, Logistic, and Normal
distributions.
The quantiles are selected according to (3.8) with $(a,b)=(0.05,0.95)$ and
$k=25$.
In Figure 3.2, we see that the IF shapes of the ordinary QLS estimators look
familiar. For estimation of $\mu$, the estimators act like a stepwise
approximation of a trimmed/winsorized mean or Huber estimator (Hampel et al.,
1986, Figure 1, p.105). For estimation of $\sigma$, they behave like an
approximate version of an $M$-estimator for scale (Hampel et al., 1986, Figure
2, p.123). On the other hand, the generalized QLS estimators demonstrate a
remarkable flexibility. For estimation of $\sigma$, gQLS shrinks the height of
the Cauchy IF and keeps the other curves similar to those of oQLS. But most
impressively, it automatically changes the shape of the IF when estimating
$\mu$: for Normal and Logistic distributions, it acts like a
trimmed/winsorized mean; for Laplace, it behaves like a median; and for
Cauchy, its shape resembles that of a Tukey’s biweight (Hampel et al., 1986,
Figure 3, p.151).
Figure 3.3. Influence functions of the ordinary and generalized QLS estimators
of location and scale parameters for Exponential, Gumbel, and Lévy
distributions.
The quantiles are selected according to (3.8) with $(a,b)=(0.10,0.75)$ and
$k=25$.
In Figure 3.3, the shapes of IF are predictably non-symmetric. For oQLS and
gQLS at Gumbel, they are fairly similar to the IFs of Normal or Logistic
distributions. (Note that by choosing $a=0.10\neq 0.25=1-b$ we made the IF
look more symmetric than it actually is.) For Exponential and Lévy, parameter
$\mu$ is not the “center” of the pdf anymore; it is the left boundary of its
support. This fact has an effect on the shapes of IFs. For gQLS of $\mu$ and
$\sigma$, we see that points near the boundary exhibit the most dramatic swings of influence. Overall, these IFs can be seen as one half of a symmetric family's IF.
### 3.5 Model Validation
For model validation, we consider two goodness-of-fit tests, both
constructed using $\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny gQLS}}$
which possesses more favorable efficiency-robustness properties than
$\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny oQLS}}$. The first test is a
typical $\chi^{2}$ test that is based on a quadratic form in model residuals.
This approach will be called “in-sample validation” (Section 3.5.1). The
second test is conceptually similar but is based on a combination of the model
residuals and additional sample quantiles. The inclusion of quantiles that had
not been used for parameter estimation allows us to make a fair comparison
among the estimators with different $a$ and $b$. This approach will be called
“out-of-sample validation” (Section 3.5.2).
#### 3.5.1 In-Sample Validation
After the parameter estimation step is completed, the predicted value of
$\mathbf{Y}$ is defined as
$\widehat{\mathbf{Y}}~{}=~{}\mathbf{X}\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
gQLS}}$. Then the corresponding residuals are
$\widehat{\mbox{\boldmath$\varepsilon$}}~{}=~{}\mathbf{Y}-\widehat{\mathbf{Y}}~{}=~{}\mathbf{Y}-\mathbf{X}\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
gQLS}}~{}=~{}\big{(}\mbox{\boldmath$\mbox{\bf
I}_{k}$}-\mathbf{X}(\mathbf{X^{\prime}}\mbox{\boldmath$\Sigma_{*}^{-1}$}\mathbf{X})^{-1}\mathbf{X^{\prime}}\mbox{\boldmath$\Sigma_{*}^{-1}$}\big{)}\mathbf{Y},$
where $\mbox{\bf I}_{k}$ is the $k\times k$ identity matrix. Using (3.6),
(3.7) and standard statistical inference techniques for linear models (Hogg et
al., 2005, Section 12.3) the following properties can be verified:
* $\mathbf{Y}$ has an ${\cal AN}$ $\left(\mathbf{X}\mbox{\boldmath$\beta$},\;\dfrac{\sigma^{2}}{n}\,\mbox{\boldmath$\Sigma_{*}$}\right)$ distribution.
* $\widehat{\mathbf{Y}}$ has an ${\cal AN}$ $\left(\mathbf{X}\mbox{\boldmath$\beta$},\;\dfrac{\sigma^{2}}{n}\,\mathbf{X}(\mathbf{X^{\prime}}\mbox{\boldmath$\Sigma_{*}^{-1}$}\mathbf{X})^{-1}\mathbf{X^{\prime}}\right)$ distribution.
* $\widehat{\mbox{\boldmath$\varepsilon$}}$ has an ${\cal AN}$ $\left(\mathbf{0},\;\dfrac{\sigma^{2}}{n}\,\Big{(}\mbox{\boldmath$\Sigma_{*}$}-\mathbf{X}(\mathbf{X^{\prime}}\mbox{\boldmath$\Sigma_{*}^{-1}$}\mathbf{X})^{-1}\mathbf{X^{\prime}}\Big{)}\right)$ distribution.
* $\widehat{\mathbf{Y}}$ and $\widehat{\mbox{\boldmath$\varepsilon$}}$ are (asymptotically) independent.
Next, these properties can be exploited to construct a diagnostic plot (e.g.,
predicted values versus residuals) and to show that the quadratic form
$Q~{}=~{}\frac{n}{\sigma^{2}}\left(\mathbf{Y}-\mathbf{X}\mbox{\boldmath$\beta$}\right)^{\prime}\mbox{\boldmath$\Sigma_{*}^{-1}$}\left(\mathbf{Y}-\mathbf{X}\mbox{\boldmath$\beta$}\right)$
has the following orthogonal decomposition:
$Q~{}=~{}Q_{1}+Q_{2}~{}=~{}\frac{n}{\sigma^{2}}\left(\mathbf{Y}-\mathbf{X}\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
gQLS}}\right)^{\prime}\mbox{\boldmath$\Sigma_{*}^{-1}$}\left(\mathbf{Y}-\mathbf{X}\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
gQLS}}\right)+\frac{n}{\sigma^{2}}\left(\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
gQLS}}-\mbox{\boldmath$\beta$}\right)^{\prime}\mathbf{X^{\prime}}\mbox{\boldmath$\Sigma_{*}^{-1}$}\mathbf{X}\left(\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
gQLS}}-\mbox{\boldmath$\beta$}\right).$
Therefore, since asymptotically $Q$ has a $\chi^{2}_{k}$ distribution and
$Q_{2}$ has a $\chi^{2}_{2}$ distribution, the above decomposition implies
that $Q_{1}$ has an approximate $\chi^{2}_{k-2}$ distribution.
Now, to test the hypotheses
$\left\\{\begin{array}[]{cl}H_{0}:&X_{1},\ldots,X_{n}\mbox{~{} were generated
by a location-scale family }F\\\ H_{A}:&X_{1},\ldots,X_{n}\mbox{~{} were {\em
not\/} generated by }F,\\\ \end{array}\right.$
the quadratic form $Q_{1}$ can be utilized as follows. Recall that
$\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
gQLS}}=(\widehat{\mu}_{\mbox{\tiny gQLS}},\widehat{\sigma}_{\mbox{\tiny
gQLS}})^{\prime}$ is a consistent estimator of $\beta$, thus
$\widehat{\sigma}^{2}_{\mbox{\tiny gQLS}}$ converges in probability to
$\sigma^{2}$. Define a test statistic
$W~{}=~{}\frac{n}{\widehat{\sigma}^{2}_{\mbox{\tiny
gQLS}}}\left(\mathbf{Y}-\mathbf{X}\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
gQLS}}\right)^{\prime}\mbox{\boldmath$\Sigma_{*}^{-1}$}\left(\mathbf{Y}-\mathbf{X}\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
gQLS}}\right).$ (3.9)
Since $W=\frac{\sigma^{2}}{\widehat{\sigma}^{2}_{\mbox{\tiny gQLS}}}\,Q_{1}$
and $\frac{\sigma^{2}}{\widehat{\sigma}^{2}_{\mbox{\tiny gQLS}}}\rightarrow 1$
(in probability), it follows from Slutsky’s Theorem that under $H_{0}$ the
test statistic $W$ has an approximate $\chi^{2}_{k-2}$ distribution. Note that
a similar goodness-of-fit test was proposed by Ali and Umbach (1989), but
there $\sigma^{2}$ was estimated by the sample variance, which requires that
$F$ has a finite variance. The test based on (3.9) has wider applicability
(e.g., it works for heavy-tailed distributions such as Cauchy) and inherits
the robustness properties of $\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
gQLS}}$.
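The whole in-sample test (3.9) reduces to a short routine; here is a minimal sketch (reusing `quantile_cov_matrix` from the Section 2 sketch; the function name is ours).

```python
import numpy as np
from scipy.stats import chi2

def gof_in_sample(x, p, pdf0, qf0):
    """In-sample goodness-of-fit test (3.9): returns (W, approximate p-value)."""
    xs = np.sort(np.asarray(x)); n = len(xs); p = np.asarray(p)
    Y = xs[np.ceil(n * p).astype(int) - 1]              # sample quantiles
    X = np.column_stack([np.ones_like(p), qf0(p)])      # design matrix (3.3)
    S = quantile_cov_matrix(p, pdf0, qf0)               # Sigma_*
    SX = np.linalg.solve(S, X)
    beta = np.linalg.solve(X.T @ SX, SX.T @ Y)          # gQLS estimate (3.6)
    r = Y - X @ beta                                    # residuals
    W = n / beta[1] ** 2 * (r @ np.linalg.solve(S, r))  # statistic (3.9)
    return W, chi2.sf(W, df=len(p) - 2)                 # chi^2_{k-2} under H0
```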
#### 3.5.2 Out-of-Sample Validation
To compare the goodness of fit of location-scale distributions for which
$\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny gQLS}}$ are computed using
different $a$ and $b$ (i.e., $a_{1},b_{1}$ versus $a_{2},b_{2}$), we first fix
a universal set of sample quantiles. That is, select $\mathbf{Y}_{\mbox{\tiny out}}=\left(\widehat{F}^{-1}(p_{1}^{\mbox{\tiny out}}),\ldots,\widehat{F}^{-1}(p_{r}^{\mbox{\tiny out}})\right)^{\prime}$, where
$p_{1}^{\mbox{\tiny out}},\ldots,p_{r}^{\mbox{\tiny out}}$ can be all
different from, partially overlapping with, or completely match
$p_{1},\ldots,p_{k}$ (which are used for parameter estimation). Of course, the
latter choice reduces the out-of-sample validation test to the test of Section
3.5.1. After this selection is made, we proceed by mimicking the structure of
(3.9). The predicted value of $\mathbf{Y}_{\mbox{\tiny out}}$ is
$\mathbf{X_{\mbox{\tiny out}}}\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
gQLS}}$ with
$\mathbf{X}_{\mbox{\tiny
out}}~{}=~{}\begin{bmatrix}1&\cdot&\cdot&\cdot&1\\\\[4.30554pt]
F_{*}^{-1}(p_{1}^{\mbox{\tiny
out}})&\cdot&\cdot&\cdot&F_{*}^{-1}(p_{r}^{\mbox{\tiny out}})\\\
\end{bmatrix}^{\prime},$
but $\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
gQLS}}=(\widehat{\mu}_{\mbox{\tiny gQLS}},\widehat{\sigma}_{\mbox{\tiny
gQLS}})^{\prime}$ is based on
$\mathbf{Y}=\left(\widehat{F}^{-1}(p_{1}),\ldots,\widehat{F}^{-1}(p_{k})\right)^{\prime}$.
Then the test statistic is
$W_{\mbox{\tiny out}}~{}=~{}\frac{n}{\widehat{\sigma}^{2}_{\mbox{\tiny
gQLS}}}\left(\mathbf{Y}_{\mbox{\tiny out}}-\mathbf{X}_{\mbox{\tiny
out}}\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny
gQLS}}\right)^{\prime}\mbox{\boldmath$\Sigma_{\mbox{\tiny
out}}^{-1}$}\left(\mathbf{Y}_{\mbox{\tiny out}}-\mathbf{X}_{\mbox{\tiny
out}}\widehat{\mbox{\boldmath$\beta$}}_{\mbox{\tiny gQLS}}\right),$ (3.10)
where the elements of $\Sigma_{\mbox{\tiny out}}$ are
$\sigma_{ij}^{\mbox{\tiny out}}=\frac{p_{i}^{\mbox{\tiny
out}}(1-p_{j}^{\mbox{\tiny out}})}{f_{*}(F_{*}^{-1}(p_{i}^{\mbox{\tiny
out}}))f_{*}(F_{*}^{-1}(p_{j}^{\mbox{\tiny out}}))}$ for $~{}i\leq j~{}$ with
$~{}i,j=1,\ldots,r$.
Now, unless $p_{1}^{\mbox{\tiny out}},\ldots,p_{r}^{\mbox{\tiny out}}$
perfectly match $p_{1},\ldots,p_{k}$ (this case was solved in Section 3.5.1),
the theoretical derivation of the distribution of $W_{\mbox{\tiny out}}$ is a
major challenge. Therefore, to estimate the $p$-value associated with this
test statistic, the following bootstrap procedure can be employed.
Bootstrap Procedure (for finding the $p$-value of (3.10))

Step 1. Given the original sample, $X_{1},\ldots,X_{n}$, the estimates of $\beta$ and $W_{\mbox{\tiny out}}$ are obtained. Denote them $\widehat{\mbox{\boldmath$\beta$}}^{o}_{\mbox{\tiny gQLS}}=(\widehat{\mu}^{o}_{\mbox{\tiny gQLS}},\widehat{\sigma}^{o}_{\mbox{\tiny gQLS}})^{\prime}$ and $\widehat{W}^{o}_{\mbox{\tiny out}}$. Remember that $\widehat{\mbox{\boldmath$\beta$}}^{o}_{\mbox{\tiny gQLS}}$ is computed using the quantile levels $p_{1},\ldots,p_{k}$, while $\widehat{W}^{o}_{\mbox{\tiny out}}$ is based on $p_{1}^{\mbox{\tiny out}},\ldots,p_{r}^{\mbox{\tiny out}}$ and $\widehat{\mbox{\boldmath$\beta$}}^{o}_{\mbox{\tiny gQLS}}$.

Step 2. Generate an i.i.d. sample $X_{1}^{(b)},\ldots,X_{n}^{(b)}$ from $F$ (assumed under $H_{0}$) using the parameter values $\widehat{\mbox{\boldmath$\beta$}}^{o}_{\mbox{\tiny gQLS}}=(\widehat{\mu}^{o}_{\mbox{\tiny gQLS}},\widehat{\sigma}^{o}_{\mbox{\tiny gQLS}})^{\prime}$. Based on this sample, compute $\widehat{\mbox{\boldmath$\beta$}}^{(b)}_{\mbox{\tiny gQLS}}$ (using $p_{1},\ldots,p_{k}$) and $\widehat{W}^{(b)}_{\mbox{\tiny out}}$ (using $p_{1}^{\mbox{\tiny out}},\ldots,p_{r}^{\mbox{\tiny out}}$ and $\widehat{\mbox{\boldmath$\beta$}}^{(b)}_{\mbox{\tiny gQLS}}$).

Step 3. Repeat Step 2 $B$ times (e.g., $B=1000$) and save $\widehat{W}^{(1)}_{\mbox{\tiny out}},\ldots,\widehat{W}^{(B)}_{\mbox{\tiny out}}$.

Step 4. Estimate the $p$-value of (3.10) by
$\widehat{p}_{\mbox{\tiny val}}~{}=~{}\frac{1}{B}\sum_{b=1}^{B}{\mbox{\large\bf 1}}\left\\{\widehat{W}^{(b)}_{\mbox{\tiny out}}>\widehat{W}^{o}_{\mbox{\tiny out}}\right\\}$
and reject $H_{0}$ when $\widehat{p}_{\mbox{\tiny val}}\leq\alpha$ (e.g., $\alpha=0.05$).
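A compact sketch of Steps 1-4 above follows. The interface is our assumption: `fit_and_stat` returns the gQLS estimate and $W_{\mbox{\tiny out}}$ for a sample, and `sampler` draws from the fitted null model.

```python
import numpy as np

def bootstrap_pvalue(x, fit_and_stat, sampler, B=1000, seed=None):
    """Parametric bootstrap p-value for W_out (3.10), following Steps 1-4 above.

    fit_and_stat(x) -> (beta_hat, W_out);  sampler(beta_hat, n, rng) -> simulated sample.
    """
    rng = np.random.default_rng(seed)
    beta0, w0 = fit_and_stat(x)                 # Step 1: estimates from the original sample
    count = 0
    for _ in range(B):                          # Steps 2-3: resample under H0, recompute
        xb = sampler(beta0, len(x), rng)        # draw from the fitted null model
        count += fit_and_stat(xb)[1] > w0
    return count / B                            # Step 4: estimated p-value
```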
## 4 Simulations
In this section, we conduct a simulation study with the objective to verify
and augment the theoretical properties established in Section 3. We start by
describing the study design (Section 4.1). Then we explore how the MLE, oQLS
and gQLS estimators perform as the sample size $n$ increases (Section 4.2),
and when data are contaminated with outliers (Section 4.3). We finish the
study by investigating the power of the goodness-of-fit tests against several
alternatives (Section 4.4).
### 4.1 Study Design
The study design is based on the following choices.
Simulation Design

* Location-scale families ($F_{0}$ with $\mu=0$ and $\sigma=1$): Cauchy, Exponential, Gumbel, Laplace, Lévy, Logistic, Normal.
* Estimators: MLE, oQLS (abbreviated ‘o’) and gQLS (abbreviated ‘g’). For ‘o’ and ‘g’ estimators, the quantiles are selected according to (3.8) with $(a_{1},b_{1})=(0.02,0.98)$, $(a_{2},b_{2})=(0.05,0.95)$, $(a_{3},b_{3})=(0.10,0.90)$ and $k=25$ (in all cases).
* Contaminating distributions ($G$ in the contamination model $F_{\varepsilon}=(1-\varepsilon)\,F_{0}+\varepsilon\,G$, where $F_{0}$ is a location-scale family): $\mbox{Exponential}\,(\mu^{*}=1,\sigma^{*}=3)$ and $\mbox{Normal}\,(\mu^{*}=1,\sigma^{*}=3)$.
* Levels of contamination: $\varepsilon=0,\,0.03,\,0.05,\,0.08$.
* Goodness-of-fit tests (at $\alpha=0.05$): based on $W$, given by (3.9), and $W_{\mbox{\tiny out}}$, given by (3.10).
* Quantile levels for model validation: $p_{1}^{\mbox{\tiny out}}=0.01,\,p_{2}^{\mbox{\tiny out}}=0.03,\ldots,\,p_{49}^{\mbox{\tiny out}}=0.97,\,p_{50}^{\mbox{\tiny out}}=0.99$.
* Sample sizes: $n=10^{2},10^{3}$ (and $n=10^{6},10^{7},10^{8},10^{9}$ for computational time evaluations).
* Number of bootstrap samples: $B=10^{3}$.
* Number of Monte Carlo runs: $M=10^{4}$.
For any given distribution, we generate $M=10^{4}$ random samples of a
specified length $n$. For each sample, we estimate $\mu$ and $\sigma$ of the
distribution using the MLE, oQLS, and gQLS estimators. The results are then
presented using boxplots and a few summary statistics.
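As an indication of how one run of such a study can be organized, here is a minimal sketch for the Normal model (it reuses `qls_location_scale` from the Section 3.2 sketch; the setup is ours and not the authors' MATLAB code).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
M, n = 10_000, 100
p = 0.05 + 0.90 * np.arange(25) / 24                   # levels (3.8), (a,b)=(0.05,0.95), k=25

est = np.array([qls_location_scale(rng.normal(0.0, 1.0, n), p, norm.pdf, norm.ppf)
                for _ in range(M)])
rmse = np.sqrt(np.mean((est - np.array([0.0, 1.0])) ** 2, axis=0))
print(rmse)                                            # root-MSE of (mu_hat, sigma_hat)
```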
### 4.2 From Small to Big Data
The boxplots of the estimators under consideration are presented in Figures
4.1 (for Normal and Cauchy) and 4.2 (for Exponential and Lévy). Barring a few
exceptions, most estimators are correctly calibrated, i.e., they are centered
at $\mu=0$ for location and $\sigma=1$ for scale, and shrink toward the
respective targets at the rate $n^{-1/2}$. The latter statement can be illustrated by reporting the values of the ratio $\sqrt{\mbox{\sc mse}}$ (at $n=100$)$\big{/}\sqrt{\mbox{\sc mse}}$ (at $n=1000$) for each estimator and a few selected distributions.
* For Cauchy, the ratios are: 3.08 ($\widehat{\mu}_{\mbox{\tiny MLE}}$), 13.74 ($\widehat{\mu}_{\mbox{\tiny o1}}$), 3.62 ($\widehat{\mu}_{\mbox{\tiny o2}}$), 3.29 ($\widehat{\mu}_{\mbox{\tiny o3}}$), 3.33 ($\widehat{\mu}_{\mbox{\tiny g1}}$), 3.20 ($\widehat{\mu}_{\mbox{\tiny g2}}$), 3.18 ($\widehat{\mu}_{\mbox{\tiny g3}}$); and 2.52 ($\widehat{\sigma}_{\mbox{\tiny MLE}}$), 16.83 ($\widehat{\sigma}_{\mbox{\tiny o1}}$), 4.10 ($\widehat{\sigma}_{\mbox{\tiny o2}}$), 3.50 ($\widehat{\sigma}_{\mbox{\tiny o3}}$), 3.33 ($\widehat{\sigma}_{\mbox{\tiny g1}}$), 3.36 ($\widehat{\sigma}_{\mbox{\tiny g2}}$), 3.33 ($\widehat{\sigma}_{\mbox{\tiny g3}}$).
* For Normal, the ratios are: 3.14 ($\widehat{\mu}_{\mbox{\tiny MLE}}$), 3.20 ($\widehat{\mu}_{\mbox{\tiny o1}}$), 3.13 ($\widehat{\mu}_{\mbox{\tiny o2}}$), 3.13 ($\widehat{\mu}_{\mbox{\tiny o3}}$), 3.19 ($\widehat{\mu}_{\mbox{\tiny g1}}$), 3.14 ($\widehat{\mu}_{\mbox{\tiny g2}}$), 3.14 ($\widehat{\mu}_{\mbox{\tiny g3}}$); and 3.16 ($\widehat{\sigma}_{\mbox{\tiny MLE}}$), 3.15 ($\widehat{\sigma}_{\mbox{\tiny o1}}$), 3.13 ($\widehat{\sigma}_{\mbox{\tiny o2}}$), 3.13 ($\widehat{\sigma}_{\mbox{\tiny o3}}$), 3.14 ($\widehat{\sigma}_{\mbox{\tiny g1}}$), 3.13 ($\widehat{\sigma}_{\mbox{\tiny g2}}$), 3.13 ($\widehat{\sigma}_{\mbox{\tiny g3}}$).
Note that according to the asymptotic results of Section 3.2, these ratios are
expected to fall around $\sqrt{1000/100}\approx 3.16$. The “incorrect”
behavior of o1 (and to a lesser degree of o2) for Cauchy is not surprising and
can be attributed to its very poor efficiency properties:
$\mbox{ARE}\,(\widehat{\mu}_{\mbox{\tiny o1}},\widehat{\mu}_{\mbox{\tiny
MLE}})=0.029$ and $\mbox{ARE}\,(\widehat{\sigma}_{\mbox{\tiny
o1}},\widehat{\sigma}_{\mbox{\tiny MLE}})=0.107$. Similar conclusions can be
drawn for distributions in Figure 4.2.
Figure 4.1. Boxplots of $\widehat{\mu}$ and $\widehat{\sigma}$ for Normal (top
two rows) and Cauchy (bottom two rows)
distributions, using MLE, oQLS (‘o’) and gQLS (‘g’) estimators, where $(a,b)$
is equal to
$(0.02,0.98)$ for o1/g1, $(0.05,0.95)$ for o2/g2, and $(0.10,0.90)$ for o3/g3.
Figure 4.2. Boxplots of $\widehat{\sigma}$ for Exponential (top) and Lévy
(bottom) distributions,
using MLE, oQLS (‘o’) and gQLS (‘g’) estimators, where $(a,b)$ is equal to
$(0.02,0.98)$ for o1/g1, $(0.05,0.95)$ for o2/g2, and $(0.10,0.90)$ for o3/g3.
It is also of interest to see how fast these estimators can be computed when
sample sizes are very large, which is quite common nowadays. Using
$\mbox{MATLAB}^{\copyright}$ R2023a software on a basic laptop (with Apple M2
8-core CPU, RAM 8GB, Mac OS), the MLE, oQLS and gQLS estimators of six
location-scale families have been computed for samples of size
$n=10^{6},10^{7},10^{8},10^{9}$ and their computational times (in seconds)
have been recorded in Table 4.1. Note that for all these distributions, oQLS
and gQLS have explicit formulas (although they require inversion of medium-
sized matrices), and MLE has explicit formulas for four chosen distributions
but requires numerical optimization for Cauchy and Logistic. The conclusion is
clear: the computational costs of oQLS and gQLS are on a par with those of the explicit-formula MLEs and at least 10 times lower than those of the optimization-based MLEs. This computational advantage is highly relevant in situations
involving “big data”.
Table 4.1. Computational times (in seconds) of MLE, oQLS, and gQLS for large
$n$.
Sample | Estimation | Probability Distribution
---|---|---
Size | Method | Cauchy | Exponential | Laplace | Lévy | Logistic | Normal
$n=10^{6}$ | MLE | 0.76 | 0.02 | 0.07 | 0.09 | 0.76 | 0.14
| oQLS | 0.05 | 0.05 | 0.05 | 0.10 | 0.05 | 0.07
| gQLS | 0.08 | 0.07 | 0.06 | 0.11 | 0.10 | 0.10
$n=10^{7}$ | MLE | 5.32 | 0.20 | 0.36 | 0.51 | 4.51 | 0.70
| oQLS | 0.41 | 0.43 | 0.32 | 0.72 | 0.42 | 0.52
| gQLS | 0.42 | 0.48 | 0.48 | 0.75 | 0.47 | 0.60
$n=10^{8}$ | MLE | 79.44 | 1.78 | 3.19 | 4.79 | 84.24 | 3.52
| oQLS | 5.21 | 4.14 | 3.95 | 6.87 | 3.84 | 5.33
| gQLS | 4.58 | 5.40 | 4.27 | 7.36 | 4.71 | 5.83
$n=10^{9}$ | MLE | $**$ | 207 | 844 | 555 | 19716 | 498
| oQLS | 397 | 467 | 585 | 742 | 433 | 788
| gQLS | 406 | 506 | 672 | 732 | 447 | 989
$**$ MLE for Cauchy and $n=10^{9}$ did not converge.
### 4.3 Good Data, Bad Data
When the distributional assumption is correct (“clean” or “good” data
scenario), the simulated large-sample performance of MLE, oQLS, or gQLS is
consistent with the asymptotic results of Section 3.2, which was verified in
the previous section. When data are contaminated by outliers (“bad” data
scenario), all estimators are affected by it, but to a different extent. As is
evident from the boxplots of Figure 4.3, the robust QLS-type estimators
successfully cope with outliers as long as their breakdown point exceeds the
level of contamination $\varepsilon$. They work especially well for estimation
of $\mu$ and less so for estimation of $\sigma$. Further, for estimation of
$\sigma$ under Normal, MLEs completely miss the target and their variability
gets significantly inflated even for the smallest levels of contamination. For
Cauchy, which easily accommodates the outliers from
$\mbox{Normal}\,(\mu^{*}=1,\sigma^{*}=3)$, MLEs perform reasonably well. This
suggests that to mitigate the effects of potential contamination on MLEs, a
prudent approach is to always assume that data follow a heavy-tailed
distribution. Of course, if one were to take that route they would have to
accept the fact that no mean and other moments exist, and thus all subsequent
inference should be based on quantiles. Finally, more simulations have been
conducted using Laplace, Logistic, and other distributions. The conclusions
were similar to those of Normal and Cauchy: if a light-tailed distribution is
assumed, contamination is devastating to MLEs, but a heavier-tailed
distribution can correct the MLEs’ performance. Those additional studies will
not be presented here.
Figure 4.3. Boxplots of $\widehat{\mu}$ and $\widehat{\sigma}$ for Normal (top
two rows) and Cauchy (bottom two rows)
distributions, using MLE, oQLS (‘o’) and gQLS (‘g’) estimators, where $(a,b)$
is equal to
$(0.02,0.98)$ for o1/g1, $(0.05,0.95)$ for o2/g2, and $(0.10,0.90)$ for o3/g3.
### 4.4 Goodness of Fit
A simulation study has been conducted to assess the power properties of the
goodness-of-fit tests based on statistics $W$ and $W_{\mbox{\tiny out}}$. The
results are presented in Tables 4.2 (for $W$) and 4.3 (for $W_{\mbox{\tiny
out}}$). The following conclusions emerge from the tables.
* For the test based on $W$ (Table 4.2), the estimated probability of rejecting the null hypothesis approaches 1 as $n$ increases to 1000 for most distributions
under $H_{0}$ and most alternatives. The exceptions are Logistic against
Normal and $F_{0.05}$, and Normal against Logistic and $F_{0.05}$. For
$n=100$, Cauchy has very low power against all alternatives, and the test
designed for Cauchy exceeds its level of 0.05. Comparisons between different
levels of $(a,b)$, i.e., $(0.02,0.98)$ versus $(0.05,0.95)$ versus
$(0.10,0.90)$, do reveal some patterns. However, recall that choosing one pair
of $(a,b)$ versus another means comparing the model fit on two overlapping but
different ranges of data.
* For the test based on $W_{\mbox{\tiny out}}$ (Table 4.3), all model fits are compared on the same set of quantiles, ranging from the level 0.01 to 0.99 (50 quantiles in total). The estimated probability of rejecting the null hypothesis approaches 1 as $n$ increases to 1000 for most distributions under
$H_{0}$ and most alternatives. The power of Logistic against Normal and
$F_{0.05}$ is still low, but higher than that based on $W$. This time Normal
exhibits fairly high power against Logistic and $F_{0.05}$. The patterns among
different choices of $(a,b)$ are mixed and depend on $H_{0}$ and the
alternative distribution. All tests match the significance level of 0.05.
Interestingly, for $n=100$ the Cauchy-based test has no power at all against
any of the selected alternatives.
Table 4.2. Proportions of rejections of $H_{0}$ by the goodness-of-fit test
(3.9) at $\alpha=0.05$ for
several distributions under $H_{0}$ and $H_{A}$, and varying $n$. In all
cases, $\mu=0$ and $\sigma=1$,
and
$F_{0.05}=0.95\,\mbox{Normal}\,(\mu=0,\sigma=1)+0.05\,\mbox{Normal}\,(\mu^{*}=1,\sigma^{*}=3)$.
gQLS | Assumed | Data Generated by
---|---|---
Estimator | Distribution ($H_{0}$) | Cauchy | Gumbel | Laplace | Logistic | Normal | $F_{0.05}$
Sample Size: $n=100$
$a=0.02,b=0.98$ | Cauchy | 0.25 | 0.08 | 0.04 | 0.05 | 0.07 | 0.06
| Gumbel | 1.00 | 0.08 | 0.89 | 0.66 | 0.47 | 0.51
| Laplace | 0.96 | 0.43 | 0.09 | 0.21 | 0.34 | 0.27
| Logistic | 1.00 | 0.32 | 0.28 | 0.09 | 0.09 | 0.13
| Normal | 1.00 | 0.46 | 0.52 | 0.16 | 0.08 | 0.20
$a=0.05,b=0.95$ | Cauchy | 0.18 | 0.08 | 0.04 | 0.05 | 0.07 | 0.06
| Gumbel | 0.99 | 0.07 | 0.75 | 0.45 | 0.32 | 0.32
| Laplace | 0.78 | 0.37 | 0.08 | 0.17 | 0.26 | 0.21
| Logistic | 0.96 | 0.29 | 0.23 | 0.08 | 0.08 | 0.07
| Normal | 0.98 | 0.37 | 0.38 | 0.11 | 0.08 | 0.09
$a=0.10,b=0.90$ | Cauchy | 0.14 | 0.11 | 0.06 | 0.08 | 0.09 | 0.08
| Gumbel | 0.89 | 0.08 | 0.56 | 0.30 | 0.24 | 0.23
| Laplace | 0.40 | 0.28 | 0.09 | 0.16 | 0.22 | 0.18
| Logistic | 0.74 | 0.22 | 0.20 | 0.09 | 0.09 | 0.08
| Normal | 0.82 | 0.25 | 0.28 | 0.10 | 0.08 | 0.09
Sample Size: $n=1000$
$a=0.02,b=0.98$ | Cauchy | 0.08 | 1 | 0.99 | 1 | 1 | 1
| Gumbel | 1 | 0.06 | 1 | 1 | 1 | 1
| Laplace | 1 | 1 | 0.06 | 0.97 | 1 | 0.99
| Logistic | 1 | 1 | 0.98 | 0.05 | 0.35 | 0.18
| Normal | 1 | 1 | 1 | 0.57 | 0.05 | 0.53
$a=0.05,b=0.95$ | Cauchy | 0.07 | 1 | 0.99 | 1 | 1 | 1
| Gumbel | 1 | 0.05 | 1 | 1 | 1 | 1
| Laplace | 1 | 1 | 0.05 | 0.94 | 1 | 1
| Logistic | 1 | 1 | 0.96 | 0.05 | 0.19 | 0.10
| Normal | 1 | 1 | 1 | 0.29 | 0.05 | 0.13
$a=0.10,b=0.90$ | Cauchy | 0.06 | 1 | 0.67 | 1 | 1 | 1
| Gumbel | 1 | 0.05 | 1 | 1 | 1 | 0.99
| Laplace | 0.99 | 1 | 0.05 | 0.81 | 0.98 | 0.95
| Logistic | 1 | 1 | 0.86 | 0.05 | 0.10 | 0.07
| Normal | 1 | 1 | 0.99 | 0.13 | 0.05 | 0.07
Table 4.3. Proportions of rejections of $H_{0}$ by the goodness-of-fit test
(3.10) at $\alpha=0.05$ for
several distributions under $H_{0}$ and $H_{A}$, and varying $n$. In all
cases, $\mu=0$ and $\sigma=1$,
and
$F_{0.05}=0.95\,\mbox{Normal}\,(\mu=0,\sigma=1)+0.05\,\mbox{Normal}\,(\mu^{*}=1,\sigma^{*}=3)$.
gQLS | Assumed | Data Generated by
---|---|---
Estimator | Distribution ($H_{0}$) | Cauchy | Gumbel | Laplace | Logistic | Normal | $F_{0.05}$
Sample Size: $n=100$
$a=0.02,b=0.98$ | Cauchy | 0.05 | 0 | 0 | 0 | 0 | 0
| Gumbel | 1 | 0.05 | 0.83 | 0.57 | 0.32 | 0.52
| Laplace | 0.96 | 0.18 | 0.05 | 0.08 | 0.14 | 0.15
| Logistic | 1 | 0.12 | 0.19 | 0.05 | 0.04 | 0.14
| Normal | 1 | 0.25 | 0.43 | 0.14 | 0.05 | 0.32
$a=0.05,b=0.95$ | Cauchy | 0.05 | 0 | 0 | 0 | 0 | 0
| Gumbel | 1 | 0.05 | 0.90 | 0.68 | 0.41 | 0.60
| Laplace | 0.98 | 0.13 | 0.05 | 0.06 | 0.10 | 0.13
| Logistic | 1 | 0.10 | 0.21 | 0.05 | 0.03 | 0.15
| Normal | 1 | 0.26 | 0.53 | 0.19 | 0.05 | 0.37
$a=0.10,b=0.90$ | Cauchy | 0.05 | 0 | 0 | 0 | 0 | 0
| Gumbel | 1 | 0.05 | 0.94 | 0.75 | 0.46 | 0.65
| Laplace | 0.99 | 0.07 | 0.06 | 0.03 | 0.04 | 0.07
| Logistic | 1 | 0.09 | 0.28 | 0.06 | 0.02 | 0.13
| Normal | 1 | 0.28 | 0.66 | 0.24 | 0.05 | 0.38
Sample Size: $n=1000$
$a=0.02,b=0.98$ | Cauchy | 0.05 | 1 | 0.48 | 1 | 1 | 1
| Gumbel | 1 | 0.05 | 1 | 1 | 1 | 1
| Laplace | 1 | 1 | 0.05 | 0.89 | 1 | 0.99
| Logistic | 1 | 1 | 0.95 | 0.06 | 0.24 | 0.45
| Normal | 1 | 1 | 1 | 0.59 | 0.05 | 0.87
$a=0.05,b=0.95$ | Cauchy | 0.05 | 1 | 0.47 | 1 | 1 | 1
| Gumbel | 1 | 0.05 | 1 | 1 | 1 | 1
| Laplace | 1 | 1 | 0.05 | 0.87 | 1 | 0.99
| Logistic | 1 | 1 | 0.96 | 0.05 | 0.21 | 0.46
| Normal | 1 | 1 | 1 | 0.67 | 0.05 | 0.90
$a=0.10,b=0.90$ | Cauchy | 0.05 | 1 | 0.44 | 0.99 | 1 | 1
| Gumbel | 1 | 0.05 | 1 | 1 | 1 | 1
| Laplace | 1 | 1 | 0.05 | 0.82 | 1 | 0.98
| Logistic | 1 | 1 | 0.98 | 0.05 | 0.18 | 0.43
| Normal | 1 | 1 | 1 | 0.76 | 0.05 | 0.91
## 5 Real Data Examples
To illustrate how our proposed estimators and goodness-of-fit tests work on
real data, we will use the daily stock returns of Alphabet Inc., the parent
company of Google, for the period from January 2, 2020, to November 30, 2023.
The stock prices are available at the Yahoo!Finance website
https://finance.yahoo.com/quote/GOOG/. The daily returns are calculated as the
difference between the closing prices on two consecutive trading days. Below
are summary statistics for these data.
$n$ | min | $q1$ | $q2$ | $q3$ | max | mean | std. dev.
---|---|---|---|---|---|---|---
$986$ | -6.5275 | -0.8975 | $0.1487$ | $1.1140$ | $7.6735$ | $0.0870$ | $1.6738$
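For reproducibility, the returns and their summary statistics can be computed in a few lines; a sketch follows (the file name GOOG.csv is hypothetical, standing for daily quotes exported from Yahoo!Finance as CSV).

```python
import pandas as pd

# Hypothetical file: daily GOOG quotes exported from Yahoo!Finance as CSV
prices = pd.read_csv("GOOG.csv", index_col="Date", parse_dates=True)["Close"]
returns = prices.diff().dropna()        # difference of consecutive closing prices
print(len(returns), returns.min(), returns.quantile([0.25, 0.5, 0.75]).values,
      returns.max(), returns.mean(), returns.std())
```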
Diagnostic tools such as histogram, quantile-quantile plot, and probability-
probability plot were employed and revealed that a symmetric and
(approximately) bell-shaped distribution might be a suitable candidate for the
data set at hand. In view of this, all the distributions of Section 4.4 were
fitted to the daily returns using gQLS and their fits were formally validated
with the goodness-of-fit tests (3.9) and (3.10). Table 5.1 summarizes the
findings of this analysis.
Table 5.1. Parameter estimates and goodness-of-fit statistics for several
location-scale
families fitted to the daily returns of the Google stock (1/02/2020 –
11/30/2023).
gQLS Estimator | Assumed | Parameter Estimates | Goodness-of-Fit Statistics
---|---|---|---
(with $k=25$) | Distribution | $\widehat{\mu}$ | $\widehat{\sigma}$ | $W$ ($p$-value) | $W_{\mbox{\tiny out}}$ ($p$-value)
$a=0.02,b=0.98$ | Cauchy | 0.16 | 0.95 | 74.68 (0.00) | 84.85 (0.01)
| Gumbel | -0.70 | 1.61 | 259.05 (0.00) | 324.80 (0.00)
| Laplace | 0.15 | 1.28 | 50.60 (0.00) | 68.11 (0.04)
| Logistic | 0.11 | 0.92 | 29.06 (0.18) | 45.21 (0.64)
| Normal | 0.08 | 1.62 | 50.64 (0.00) | 72.13 (0.02)
$a=0.05,b=0.95$ | Cauchy | 0.15 | 0.95 | 71.67 (0.00) | 84.87 (0.01)
| Gumbel | -0.64 | 1.51 | 165.38 (0.00) | 390.07 (0.00)
| Laplace | 0.15 | 1.30 | 45.26 (0.00) | 66.81 (0.05)
| Logistic | 0.16 | 0.91 | 25.71 (0.31) | 45.62 (0.64)
| Normal | 0.09 | 1.58 | 35.06 (0.05) | 78.51 (0.01)
$a=0.10,b=0.90$ | Cauchy | 0.15 | 0.96 | 61.54 (0.00) | 83.77 (0.02)
| Gumbel | -0.57 | 1.43 | 114.11 (0.00) | 475.43 (0.00)
| Laplace | 0.15 | 1.33 | 39.27 (0.02) | 64.80 (0.08)
| Logistic | 0.12 | 0.91 | 24.58 (0.37) | 45.93 (0.62)
| Normal | 0.10 | 1.54 | 29.45 (0.17) | 87.83 (0.00)
As is evident from the table, the Logistic distribution provides the best fit
among the candidate models, with its $p$-values significantly exceeding the
0.10 level under both tests. Note that the more robust estimators (i.e., those
with higher $a=1-b$) achieve better fits according to the chi-square test
(3.9). This pattern is particularly evident in the case of the Normal
distribution and $a=0.10,\,b=0.90$, but it is not surprising because as
$a=1-b$ increases, the quantile range over which residuals are calculated shrinks, making the fit appear better. On the other hand, the test (3.10) computes
residuals on the universal set of 50 quantile levels (from 0.01 to 0.99) and
practically shows no sensitivity to the choice of $a$ and $b$.
## 6 Concluding Remarks
In this paper, two types of quantile least squares estimators for location-
scale families have been introduced: ordinary (denoted oQLS) and generalized
(denoted gQLS). Both approaches are robust. While the oQLS estimators are
quite effective for more general probability distributions, the gQLS
estimators can match the levels of robustness of oQLS yet offer much higher
efficiency for estimation of location and/or scale parameters. These
properties have been derived analytically (for large $n$) and verified using
simulations (for small and medium-sized samples). In addition, two goodness-
of-fit tests have been constructed and their power properties have been
investigated via simulations. Also, it has been established that computational
times of these estimators are similar to those of explicit-formula MLEs, and
are more than 10 times lower when MLEs have to be found using numerical
optimization. For example, oQLS and gQLS can be computed for a sample of a
billion observations in 7-15 minutes while non-explicit MLEs take more than 5
hours or do not converge at all.
The research presented in this paper can be extended in several directions.
The most obvious one is to develop QLS estimators for more general classes of
distributions. Another direction (perhaps less obvious) is to follow the
literature on $L$-moments (Hosking, 1990) and trimmed $L$-moments (see Elamir
and Seheult (2003); Hosking (2007)) and construct QLS-type statistics to
summarize the shapes of parametric distributions. This line of research is
more probabilistic in nature. On the statistical side, direct comparisons with
the MTM estimators of Brazauskas et al. (2009), MWM estimators (see Zhao et
al. (2018a); Zhao et al. (2018b)), or even more general trimming methods such
as trimmed likelihood estimators of Neykov et al. (2007) are also of interest.
## Acknowledgments
Declarations of interest: none.
## References
* Ali and Umbach (1989) Ali, M.M. and Umbach, D. (1989). A Shapiro-Wilk type goodness-of-fit test using a few order statistics. Journal of Statistical Planning and Inference, 22(2), 251–261.
* Ali and Umbach (1998) Ali, M.M. and Umbach, D. (1998). Optimal linear inference using selected order statistics in location-scale models. In Handbook of Statistics, Vol. 17; Order Statistics: Applications (N. Balakrishnan and C.R. Rao, eds.); 183–213. Elsevier Science B.V.
* Brazauskas et al. (2009) Brazauskas, V., Jones, B., and Zitikis, R. (2009). Robust fitting of claim severity distributions and the method of trimmed moments. Journal of Statistical Planning and Inference, 139(6), 2028–2043.
* Brazauskas and Serfling (2000) Brazauskas, V. and Serfling, R. (2000). Robust and efficient estimation of the tail index of a single-parameter Pareto distribution (with discussion). North American Actuarial Journal, 4(4), 12–27. Discussion: 5(3), 123–126. Reply: 5(3), 126–128.
* Cane (1974) Cane, G.J. (1974). Linear estimation of parameters of the Cauchy distribution based on sample quantiles. Journal of the American Statistical Association, 69(345), 243–245.
* Chan (1970) Chan, L.K. (1970). Linear estimation of the location and scale parameters of the Cauchy distribution based on sample quantiles. Journal of the American Statistical Association, 65(330), 851–859.
* Elamir and Seheult (2003) Elamir, E.A.H. and Seheult, A.H. (2003). Trimmed $L$-moments. Computational Statistics and Data Analysis, 43(3), 299–314.
* Genton and de Luna (2000) Genton, M.G. and de Luna, X. (2000). Robust simulation-based estimation. Statistics and Probability Letters, 48(3), 253–259.
* Genton and Ronchetti (2003) Genton, M.G. and Ronchetti, E. (2003). Robust indirect inference. Journal of the American Statistical Association, 98(461), 67–76.
* Hampel et al. (1986) Hampel, F.R., Ronchetti, E.M., Rousseeuw, P.J., and Stahel, W.A. (1986). Robust Statistics: The Approach Based on Influence Functions. Wiley.
* Hampel (1968) Hampel, F.R. (1968). Contributions to the Theory of Robust Estimation. Ph.D. thesis, University of California, Berkeley.
* Hogg et al. (2005) Hogg, R.V., McKean, J.W., and Craig, A.T. (2005). Introduction to Mathematical Statistics, 6th edition. Pearson Education.
* Hosking (1990) Hosking, J.R.M. (1990). $L$-moments: Analysis and estimation of distributions using linear combinations of order statistics. Journal of the Royal Statistical Society: Series B, 52(1), 105–124.
* Hosking (2007) Hosking, J.R.M. (2007). Some theory and practical uses of trimmed $L$-moments. Journal of Statistical Planning and Inference, 137(9), 3024–3039.
* Huber (1964) Huber, P.J. (1964). Robust estimation of a location parameter. Annals of Mathematical Statistics, 35(1), 73–101.
* Huber and Ronchetti (2009) Huber, P.J. and Ronchetti, E.M. (2009). Robust Statistics, 2nd edition. Wiley.
* Maronna et al. (2006) Maronna, R.A., Martin, R.D., and Yohai, V.J. (2006). Robust Statistics: Theory and Methods. Wiley.
* Mosteller (1946) Mosteller, F. (1946). On some useful “inefficient” statistics. Annals of Mathematical Statistics, 17(4), 377–408.
* Neykov et al. (2007) Neykov, N., Filzmoser, P., Dimova, R., and Neytchev, P. (2007). Robust fitting of mixtures using the trimmed likelihood estimator. Computational Statistics and Data Analysis, 52(1), 299–308.
* Sarhan and Greenberg (1962) Sarhan, A.E. and Greenberg, B.G. (1962). Contributions to Order Statistics. Wiley.
* Serfling (2002a) Serfling, R.J. (2002a). Approximation Theorems of Mathematical Statistics. Wiley.
* Serfling (2002b) Serfling, R. (2002b). Efficient and robust fitting of lognormal distributions (with discussion). North American Actuarial Journal, 6(4), 95–109. Discussion: 7(3), 112–116. Reply: 7(3), 116.
* Tukey (1960) Tukey, J.W. (1960). A survey of sampling from contaminated distributions. In Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling (I. Olkin et al., eds.), Stanford University Press, 448–485.
* Xu et al. (2014) Xu, Y., Iglewicz, B., and Chervoneva, I. (2014). Robust estimation of the parameters of $g$-and-$h$ distributions, with applications to outlier detection. Computational Statistics and Data Analysis, 75(July 2014), 66–80.
* Zhao et al. (2018a) Zhao, Q., Brazauskas, V., and Ghorai, J. (2018a). Robust and efficient fitting of severity models and the method of Winsorized moments. ASTIN Bulletin, 48(1), 275–309.
* Zhao et al. (2018b) Zhao, Q., Brazauskas, V., and Ghorai, J. (2018b). Small-sample performance of the MTM and MWM estimators for the parameters of log-location-scale families. Journal of Statistical Computation and Simulation, 88(4), 808–824.
# Entropy and thermodynamical stability of white dwarfs
Jiří Adam, Jr.***Corresponding author †††E-mail address<EMAIL_ADDRESS>Institute of Nuclear Physics ASCR, CZ–250 68 Řež, Czech Republic Emil
Truhlík‡‡‡E-mail address<EMAIL_ADDRESS>Institute of Nuclear Physics
ASCR, CZ–250 68 Řež, Czech Republic
###### Abstract
A structure of spherical white dwarfs is calculated for a non-zero
temperature. It is shown that the thermodynamical stability of the white dwarf
stars can be described naturally within the concept of the Helmholtz free
energy of the Coulomb fully ionized electron-ion plasma.
Keywords: Entropy; Thermodynamics; Stability; White dwarfs
###### pacs:
23.40.Bw; 14.60.Cd; 67.85.Lm; 97.10.Ld; 97.20.Rp
## I Introduction
Description of the white dwarf stars (WDs) often starts from the equation of
equilibrium in the Newtonian approximation, without the rotation and the
magnetic field included SC1 ; MC (the importance of the general relativity
for a star with the mass $M$ and the radius $R$ is given by a compactness
parameter $x_{g}$ AYP ,
$x_{g}\,=\,r_{g}/R\,,\,\,\,r_{g}\,=\,2\mathrm{G}M/c^{2}\,\approx\,2.95\,M/M_{\odot}\,,$
(1) where $r_{g}$ is the Schwarzschild radius and $M_{\odot}$ is the mass of
the Sun. If one takes for $M$ the Chandrasekhar - Landau limit mass SC2 ; LDL
$M\,\approx\,1.4\,M_{\odot}$ and the radius $R\,\approx\,5\times 10^{3}$ km,
one obtains $x_{g}\,<<\,1$. However, the effects of the general relativity are
important for stabilizing very massive and fast rotating WDs SLSSAT ),
$\frac{d\,P}{d\,r}\,=\,-\frac{\mathrm{G}\,M(r)}{r^{2}}\,\rho\,,\,\,\,\frac{d\,M(r)}{d\,r}\,=\,4\,\pi\,r^{2}\,\rho\,,$
(2)
from which it follows,
$\frac{1}{r^{2}}\,\frac{d}{d\,r}\left(\frac{r^{2}}{\rho}\,\frac{d\,P}{d\,r}\right)\,=\,-4\,\pi\,\mathrm{G}\,\rho\,,$
(3)
where $P$ is the pressure, $\rho$ is the mass density and G is the Newton
gravitational constant. One gets explicitly from (3),
$\frac{1}{\rho}\,\frac{d^{2}\,P}{d\,r^{2}}\,+\,\frac{2}{\rho\,r}\,\frac{d\,P}{d\,r}\,-\,\frac{1}{\rho^{2}}\,\frac{d\,P}{d\,r}\,\frac{d\,\rho}{d\,r}\,+\,4\,\pi\,\mathrm{G}\,\rho\,=\,0\,.$
(4)
If $P$ as a function of $\rho$ (i.e., the equation of state (EoS)) is known,
the equation above can be cast into a second order differential equation for
the function $\rho(r)$.
For instance, it was shown in SC1 that writing the EoS in the form
$P\,=\,\mathrm{K}\,\rho^{(1+\frac{1}{n})}\,,$ (5)
and $\rho$ as
$\rho\,=\,\mathrm{\lambda_{c}}\,\theta^{n}\,,$ (6)
one gets EoS eq. (5) in the form,
$P\,=\,\mathrm{K}\,\mathrm{\lambda_{c}}^{(1+\frac{1}{n})}\,\theta^{(1+n)}\,,$
(7)
and from eq. (4) it then follows the famous Lane-Emden (LE) equation,
$\frac{1}{\xi^{2}}\,\frac{d}{d\,\xi}\,\left(\xi^{2}\,\frac{d\,\theta}{d\,\xi}\right)\,=\,-\,\theta^{n}\,,$
(8)
where
$\xi\,=\,\frac{r}{a}\,,\,\,\,\,a\,=\,\sqrt{\frac{(n+1)}{4\,\pi}\,\frac{\mathrm{K}}{\mathrm{G}}\,\mathrm{\lambda_{c}}^{(\frac{1}{n}-1)}}\,.$
(9)
Further, if $\mathrm{\lambda_{c}}\,=\,\rho_{c}$, where $\rho_{c}\,=\,\rho(0)$,
then $\theta(0)\,=\,1$ and
$\left(\frac{d\,\theta}{d\,\xi}\right)\big{|}_{\xi=0}\,=\,0$.
Eq. (8) was discussed in detail in SC1 , where it was shown that for
$n\,=\,0,1,5$ one gets explicit solutions. The model with the EoS (7), known
as polytropic model, has been frequently used in the astrophysics, in
particular for treating the structure of WDs. In this case, the star is
considered as a dense Coulomb plasma of nuclei in a charge compensating
background of degenerate electron Fermi gas, where the electron - ion
interaction is taken in this polytropic approximation. However, for studying
the WDs structure, the polytropic model is of restricted use, because it
fairly approximates the equation of state only in the extreme non-relativistic
limit for $\lambda_{c}\,<<\,10^{6}\,\mathrm{g/cm^{3}}$ (with $n$ = 3/2) and in the extreme
relativistic limit for $\lambda_{c}\,>>\,10^{6}\,\mathrm{g/cm^{3}}$ (with $n$ = 3). Recall
that for $n$ = 3 the mass of the WDs is uniquely given by the Chandrasekhar -
Landau limit mass SC2 ; LDL .
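For readers who want to reproduce the quoted LE solutions, the following minimal Python sketch (our own illustration, assuming NumPy and SciPy are available; it is not the code of any of the cited works) integrates eq. (8) from a series start near $\xi=0$ and stops at the first zero $\xi_{1}$ of $\theta$; it reproduces the values $\xi_{1}$ and $f(\xi_{1})=-\xi_{1}^{2}\,\theta^{\prime}(\xi_{1})$ quoted for $n=3/2$ and $n=3$ in Appendix A.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lane_emden(n, xi_max=10.0, eps=1e-6):
    """Integrate theta'' + (2/xi) theta' = -theta^n from a series start
    near xi = 0 up to the first zero of theta."""
    def rhs(xi, y):
        theta, dtheta = y
        return [dtheta, -max(theta, 0.0)**n - 2.0*dtheta/xi]
    # series expansion theta = 1 - xi^2/6 + ... avoids the singular point xi = 0
    y0 = [1.0 - eps**2/6.0, -eps/3.0]
    surface = lambda xi, y: y[0]
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(rhs, (eps, xi_max), y0, events=surface,
                    rtol=1e-10, atol=1e-12, dense_output=True)
    xi1 = sol.t_events[0][0]
    return xi1, -xi1**2*sol.sol(xi1)[1]   # xi_1 and f(xi_1)

for n in (1.5, 3.0):
    xi1, f1 = lane_emden(n)
    print(f"n = {n}: xi_1 = {xi1:.5f}, f(xi_1) = {f1:.5f}")
# expected (Appendix A): n = 3/2: 3.65375, 2.71406; n = 3: 6.89685, 2.01824
```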
Another problem with the polytropic model is its stability. As discussed at
length in JPM , the squared Brunt-Väisälä frequency $N^{2}\,=\,0$ in this
model. Only if $N^{2}\,>\,0$, the fluid is stable, if $N^{2}\,<\,0$, it is
convectively unstable. So the fluid described by the polytropic model is only
neutrally stable. According to Ref. JPM , no magnetic equilibrium exists in
this case.
Stable stratification has an important influence on stellar magnetic
equilibria and their stability. As discussed at length in Ref. TARMM , the
study of magnetic equilibria in stably stratified stars has a long history.
It follows from these studies that simple magnetic field configurations are
always unstable. In their study, the authors of Ref. TARMM constructed simple
analytic models for axially symmetric magnetic fields, compatible with
hydromagnetic equilibria in stably stratified stars, with both poloidal and
toroidal components of adjustable strength, as well as the associated
pressure, density and gravitational potential perturbations, which made it
possible to study their stability directly. It turned out that the instability
of toroidal field can be stabilized by a poloidal field containing much less
energy than the former, as given by the condition
$E_{\mathrm{pol}}/E_{\mathrm{tor}}\,\gtrsim\,2aE_{\mathrm{pol}}/E_{\mathrm{grav}}$,
where $E_{\mathrm{pol}}$ and $E_{\mathrm{tor}}$ are the energies of the
poloidal and toroidal fields, respectively and $E_{\mathrm{grav}}$ is the
gravitational binding energy of the star. It was found in TARMM that
$a\approx 7.4$ for main-sequence stars, which compares with $a\approx 10$
obtained by Braithwaite JB , using the method of numerical simulations. But
the results for the neutron stars differ by a factor of $\approx$ 4.
The possibility of compensation of instabilities of toroidal fields by a
relatively weak poloidal field was earlier studied by Spruit HCS .
As to the instabilities in the poloidal field, stable stratification is of
less help for eliminating them TARMM . A relatively stronger toroidal field
would be needed in order to stabilize it JB .
As it follows from the discussion in Ch. 3 of the monograph CSBK , the
necessary and sufficient condition for the thermodynamical stability of stars
with optically thick media is the positive gradient of the entropy,
$\frac{d\,S}{d\,r}\,>\,0\,.$ (10)
Respecting this criterion of the stability, the authors of Ref. JPM
considered the star as a chemically uniform, monoatomic, classical ideal gas,
described in the polytropic model with the EoS (5) as
$P\,\sim\,\rho^{\gamma}$, where $\gamma\,=\,4/3\,=\,1\,+\,1/n\,\,\,(n\,=\,3)$
and with the specific entropy
$s\,\approx\,ln(P/\rho^{\Gamma})\,+\,const\ ,\qquad{\rm
where}\quad\Gamma\,=\,\left(\frac{\partial\,P}{\partial\,ln\,\rho}\right)_{ad}\,=\,\frac{5}{3}\
,$
for which it holds
$\frac{d\,s}{d\,r}\,>0\ .$
In this model, applied to the Ap stars, the constructed magnetic equilibrium
turns out to be stable. However, for $\gamma\,=\,5/3\,\,\,(n\,=\,3/2)$, one
obtains $\gamma\,=\,\Gamma$, $d\,s/d\,r\,=\,0$ and the magnetic equilibrium is
unstable. Similar calculations have recently been done in Ref. LB .
This model cannot be applied to the study of WDs, because they consist of
plasma containing the mix of the fully ionized atoms and of the fully
degenerate electron gas.
The polytropic model was used to describe super - Chandrasekhar strongly
magnetized WDs (SMWDs) in Refs. UDM1 ; UDM2 ; UDM3 and also in Ref. BB . It
was shown in BB , that axisymmetric configurations with the poloidal or
toroidal fields are unstable and it was concluded that the long lived super -
Chandrasekhar SMWDs are unlikely to occur in nature.
In Ref. DC , the authors developed a detailed and self - consistent numerical
model with a poloidal magnetic field in the SMWDs. In their model, the
rotation, the effects of general relativity and a realistic EoS were taken
into account and extensive stability analysis of such objects was performed.
As a result, it was found that the SMWDs could potentially exist, but their
stability would be mainly limited by the onset of the electron capture and of
the pycnonuclear reactions. However, it should be noted that the condition of
the thermodynamical stability (10) was not checked in Ref. DC .
In the recent paper NCLP , the authors have studied the influence of the
electron capture on the stability of the WDs. They used the EoS in the
polytropic form (5) in the ultrarelativistic limit
$P\,\approx\,K_{0}\,\rho^{4/3}$, with $A$ and $Z$ dependent constant $K_{0}$.
Besides, the electrostatic correction and the correction due to the electron-
ion interaction are also included in $K_{0}$. This allowed them to calculate
the threshold density $\rho_{\beta}$ for the capture of electrons and to set
the lower bound for the radius of the WDs. It was also found that the electron
capture reduces the mass of WDs by 3 - 13%. Solving the Einstein - Maxwell
equations, with the magnetism of the dense matter included, has shown that the
magnetized WDs with the polar magnetic field stronger than $10^{13}$ G could
be massive enough to explain overluminous type Ia supernova. It was also found
that the pure poloidal magnetic field is unstable. Actually, this result
follows from the fact that the polytropic model is only neutrally stable, as
it has already been discussed above.
In Ref. LB1 , the authors investigated the evolution of isolated massive,
highly magnetized and uniformly rotating WDs, under angular momentum loss
driven by magnetic dipole braking. They computed general relativistic
configurations of such objects using the relativistic Feynman - Metropolis -
Teller equation of state for the WD matter. One of the interesting results is
obtained for rotating magnetized WD with the mass which is not very close to
the non - rotating Chandrasekhar - Landau mass. Such a WD evolves by slowing
down, behaving as an active pulsar of the soft gamma - repeater and anomalous
X - ray pulsar type MM ; IB ; KBLI ; JAR ; JGC ; RVL ; VBB ; TRM . Let us
note that it is not clear if the condition of the thermodynamical stability
(10) is fulfilled in Ref. LB1 .
A realistic model of stars as systems of magnetized fully ionized plasma has
been developed in CP – ASJ . In the model considered in CP – PC3 , an
analytical EoS is derived from the Helmholtz free energy of the system of
magnetized fully - ionized atoms and of the degenerate electron gas, whereas,
in ASJ also positrons were included. Such an EoS covers a wide range of
temperatures and densities, from low-density classical plasmas to
relativistic, quantum plasma conditions.
Starting in Sect. II.1 from the equation of equilibrium (4), and using in
Sect. II.2 the function $P(\rho)$, obtained from the EoS mentioned above, we
get the second order equation for the matter density $\rho$. Solving this
equation, we study the structure of corresponding WDs for a representative
series of values of the central density $\rho_{c}$, chosen from the interval
$10^{4}\,\mathrm{g/cm^{3}}\,\leq\,\rho_{c}\,\leq\,2\times 10^{9}\,\mathrm{g/cm^{3}}$,
and simultaneously using the electron and ion entropies from Sects. III.1 and
III.2, we show that the criterion of the thermodynamical stability (10) is
fulfilled in all cases (see FIG. 2 and FIG. 3). Besides, using LE eq.(8), we
calculate for comparison for the extreme non - relativistic and extreme
relativistic values of $\rho_{c}$ the mass and radius of corresponding WDs,
which are presented in TABLE I.
In Appendix A, we discuss different ways of calculating the structure of WDs
and summarize relations for the scaling parameter $\mathrm{a_{s}}$ and for the
LE approximation. In Appendix B, we express the functions $f_{1}$ and $f_{2}$,
entering the pressure $P(\rho)$ in terms of the Fermi - Dirac integrals and in
Appendix C, we briefly describe how to decompose the thermodynamical
quantities for free electrons into series in powers of
$k_{\mathrm{B}}T/\tilde{E}_{F}$, where $\tilde{E}_{F}=\mu_{\mathrm{e}}(T=0)$
is the Fermi energy with the rest mass contribution subtracted.
Our results show that the realistic model developed in CP – ASJ is a good
starting point for constructing WDs with stable magnetic fields.
## II Methods and input
### II.1 Modified equation of stability
Let us write eq. (4) in the form,
$\frac{d^{2}\,P}{d\,r^{2}}\,+\,\frac{2}{r}\,\frac{d\,P}{d\,r}\,-\,\frac{1}{\rho}\,\frac{d\,P}{d\,r}\,\frac{d\,\rho}{d\,r}\,+\,4\,\pi\,\mathrm{G}\,\rho^{2}\,=\,0\,.$
(11)
Considering now the pressure $P$ as a function (solely) of the density $\rho$,
eq. (11) can be transformed into the 2nd order differential equation for
$\rho$,
$\frac{d^{2}\,\rho}{d\,r^{2}}\,+\,f_{1}(\rho)\,\left(\frac{d\,\rho}{d\,r}\right)^{2}\,+\,\frac{2}{r}\,\frac{d\,\rho}{d\,r}\,+\,\frac{4\,\pi\,\mathrm{G}}{f_{2}(\rho)}\,\rho^{2}\,=\,0\,,$
(12)
where
$f_{1}(\rho)\,=\,\left(\frac{d^{2}\,P}{d\,\rho^{2}}\right)\bigg{/}\left(\frac{d\,P}{d\,\rho}\right)\,-\,\frac{1}{\rho}\,,\,\,\,\,\,f_{2}(\rho)\,=\,\frac{d\,P}{d\,\rho}\,.$
(13)
Next we set
$r\,=\,\mathrm{a_{s}}\,x\,,\,\,\,\rho\,=\,\lambda_{c}\,y\,.$ (14)
If $\lambda_{c}$ is the matter density in the center of the star
$\lambda_{c}=\rho(0)$, then
$y(0)\,=\,1\,,\,\,\,\frac{d\,y}{d\,x}\big{|}_{0}\,=\,0\,.$ (15)
From relations
$\frac{dP}{d\rho}\,=\,\frac{dP}{\lambda_{c}\,dy}\,,\,\,\,\frac{d^{2}P}{d\rho^{2}}\,=\,\frac{d^{2}P}{\lambda_{c}^{2}\,dy^{2}}\,,$
(16)
it follows
$f_{1}(\rho)\,=\,\frac{1}{\lambda_{c}}\,f_{1}(y)\,,\,\,\,f_{2}(\rho)\,=\,\frac{1}{\lambda_{c}}\,f_{2}(y)\,.$
(17)
Then, in terms of the new variables (14), eq. (12) becomes,
$\frac{d^{2}\,y}{d\,x^{2}}\,+\,f_{1}(y)\,\left(\frac{d\,y}{d\,x}\right)^{2}\,+\,\frac{2}{x}\,\frac{d\,y}{d\,x}\,+\,\frac{\mathrm{C}}{f_{2}(y)}\,y^{2}\,=\,0\,,$
(18)
with
$\mathrm{C}\,=\,4\,\pi\,\mathrm{G}\,\mathrm{a_{s}}^{2}\,\lambda_{c}^{2}\,.$
(19)
We solved eq. (18) by the standard 4th order Runge-Kutta method for various
values of the central matter density $\lambda_{c}$. The choice of the value of
the scaling parameter $\mathrm{a_{s}}$ is discussed in Appendix A. In our
calculations we use the value (see (70))
$\mathrm{a}_{s}\,=\,8686.26\,{\rm km}.$ (20)
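The integration itself is standard; a minimal Python sketch of the RK4 driver for eq. (18) is given below (our own illustration, not the production code of this paper). The callables f1 and f2 are assumed to be supplied from the EoS, i.e. from the Fermi - Dirac representation of Appendix B; the series start $y\simeq 1-\mathrm{C}x^{2}/(6f_{2}(1))$ follows from eq. (18) with $y(0)=1$, $y'(0)=0$.

```python
import numpy as np

def integrate_density(f1, f2, C, h=1e-4, eps=1e-4, y_min=1e-8):
    """4th-order Runge-Kutta integration of eq. (18),
       y'' + f1(y) y'^2 + (2/x) y' + C y^2 / f2(y) = 0,
    with y(0) = 1, y'(0) = 0. f1 and f2 must implement the EoS-derived
    functions of Appendix B (assumed callables here)."""
    def rhs(x, s):
        y, dy = s
        return np.array([dy, -f1(y)*dy*dy - 2.0*dy/x - C*y*y/f2(y)])
    x = eps
    s = np.array([1.0 - C*eps**2/(6.0*f2(1.0)), -C*eps/(3.0*f2(1.0))])
    while s[0] > y_min:                    # stop near the stellar surface
        k1 = rhs(x, s)
        k2 = rhs(x + 0.5*h, s + 0.5*h*k1)
        k3 = rhs(x + 0.5*h, s + 0.5*h*k2)
        k4 = rhs(x + h, s + h*k3)
        s = s + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        x += h
    return x                               # dimensionless WD radius
```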
### II.2 The Helmholtz free energy and the EoS
Here we follow Ref. PC3 , where the Helmholtz free energy $F$ of the plasma is
defined as,
$\displaystyle F\,$ $\displaystyle=$
$\displaystyle\,F^{(e)}_{id}\,+\,F^{(i)}_{id}\,+\,F_{ii}\,+\,F_{ee}\,+\,F_{ie}\,,$
(21) $\displaystyle\simeq$ $\displaystyle F^{(e)}_{id}\,+\,F_{lat}\ ,\quad
F_{lat}=\,F^{(i)}_{id}\,+\,F_{ii}\ .$
On the first line, the first two terms correspond to the ideal part of the
free energy of ions and electrons, and the last three terms correspond to the
ion - ion, electron - electron, and ion - electron interactions, respectively.
The second line corresponds to the approximation adopted in this paper: the
sum of the 2nd and 3rd terms of the 1st line is denoted as $F_{lat}$ and
evaluated as in Ref. PC3 in one - component plasma model (OCP) in the regime
of Coulomb crystal. Less important and more uncertain contributions $F_{ee}$
and $F_{ie}$ are skipped. It should be noted that these contributions are less
important only in degenerate plasmas. In the non-degenerate regime, they can
be of the same order of magnitude as $F_{ii}$, especially if Z is not large
PC3 .
FIG. 1: Dependence of the electron pressure $P$ $[g/cm\cdot sec^{2}]$ on the
matter density $\rho$ $[g/cm^{3}]$ and the temperature $T$ $[K]$. In the upper
panel, somewhat wider ranges of densities and temperatures are considered,
while in the lower one, they are relevant for the carbon WDs.
The particle density $N$, internal energy $U$, the entropy $S$ and Helmholtz
free energy $F$ are related by:
$\displaystyle T\,S$ $\displaystyle=$ $\displaystyle U+P\,V-\mu\,N\ ,$ (22)
$\displaystyle F$ $\displaystyle=$ $\displaystyle U-T\,S=\mu\,N-P\,V\ .$ (23)
In the lower panel of FIG. 1, we present the dependence of the electron
pressure on $\rho$ for various values of $T$ for the carbon WDs with
$A\,=\,12\,,\ Z\,=\,6$. In this case, the dependence of the electron pressure
on the matter density is practically $T$-independent (a similar $P^{(\mathrm{e})}-\rho$
dependence for various values of $T$ was presented in Fig. 1 of Ref. KB ).
In general, the pressure and the entropy can be calculated from $F$ by:
$P\,=\,-\left(\frac{\partial\,F}{\partial\,V}\right)_{T}\,,\qquad
S\,=\,-\left(\frac{\partial\,F}{\partial\,T}\right)_{V}\ .$
But for free electrons the number density $n_{\mathrm{e}}$, pressure
$P^{(\mathrm{e})}_{id}$ and energy $U$ are known explicitly:
$\displaystyle n_{\mathrm{e}}\,$ $\displaystyle\equiv$
$\displaystyle\frac{N_{e}}{V}=\,\mathrm{c_{n}}\,\big{[}I_{1/2}(\chi_{\mathrm{e}},\tau)+\,\tau\,I_{3/2}(\chi_{\mathrm{e}},\tau)\big{]}\,,$
(24) $\displaystyle P^{(\mathrm{e})}_{id}\,$ $\displaystyle=$
$\displaystyle\,\mathrm{c_{p}}\,\big{[}I_{3/2}(\chi_{\mathrm{e}},\tau)+\,\frac{\tau}{2}\,I_{5/2}(\chi_{\mathrm{e}},\tau)\big{]}\,,$
(25) $\displaystyle U^{(\mathrm{e})}_{id}\,$ $\displaystyle=$
$\displaystyle\,\mathrm{c_{e}}\,\big{[}I_{3/2}(\chi_{\mathrm{e}},\tau)+\,\tau\,I_{5/2}(\chi_{\mathrm{e}},\tau)\big{]}\equiv{\cal
E}_{e}\,V\,,$ (26)
where $\chi_{\mathrm{e}}\,=\,\mu_{\mathrm{e}}\beta$, $\mu_{\mathrm{e}}$ is the
electron chemical potential without the rest energy $m_{\mathrm{e}}c^{2}$ and
dimensionless $\tau\,=\,T/T_{\mathrm{r}}\,,$ with
$T_{\mathrm{r}}\,=\,m_{\mathrm{e}}\,c^{2}/k_{\mathrm{B}}\,\simeq\,5.9301\times
10^{9}\,\mathrm{K}$ (from the Boltzmann constant $k_{B}\simeq 8.617\times
10^{-11}$MeV/K). In the last relation we introduce the electron energy density
${\cal E}_{e}=U^{(\mathrm{e})}_{id}/V$. Further:
$\displaystyle\mathrm{c_{n}}\,$ $\displaystyle=$
$\displaystyle\,\frac{\sqrt{2}}{\pi^{2}\hbar^{3}}\left(\frac{m_{\mathrm{e}}}{\beta}\right)^{3/2}=3\sqrt{2}\rho_{0}\,\tau^{3/2}\,,\quad\beta\,=\,\frac{1}{k_{\mathrm{B}}T}\
,\quad\rho_{0}=\frac{1}{3\pi^{2}\,\lambda_{e}^{3}}\ ,$
$\displaystyle\mathrm{c_{p}}\,$ $\displaystyle=$
$\displaystyle\,\frac{(2m_{\mathrm{e}})^{3/2}}{3\pi^{2}\hbar^{3}\beta^{5/2}}=2\sqrt{2}\,m_{e}c^{2}\,\rho_{0}\,\tau^{5/2}\
,$ $\displaystyle\mathrm{c_{e}}=\frac{3}{2}\mathrm{c_{p}}\,$ $\displaystyle=$
$\displaystyle\,\frac{\sqrt{2}\,m_{\mathrm{e}}^{3/2}}{\pi^{2}\hbar^{3}\beta^{5/2}}=3\sqrt{2}\,m_{e}c^{2}\,\rho_{0}\,\tau^{5/2}\
,$
where $\lambda_{e}$ is the electron Compton length (69).
The generalized (relativistic) Fermi-Dirac integrals
$I_{\nu}(\chi_{\mathrm{e}}\,,\,\tau)$ are defined as follows:
$I_{\nu}(\chi_{\mathrm{e}},\tau)\,=\,\int\limits_{0}^{\infty}\frac{x^{\nu}(1\,+\,\tau
x/2)^{1/2}}{e^{(x\,-\,\chi_{\mathrm{e}})}\,+\,1}\mathrm{d}x\,.$ (27)
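Numerically, these integrals are easy to evaluate by adaptive quadrature; a possible Python helper (an illustrative sketch, assuming SciPy) is:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit   # expit(z) = 1/(1 + e^(-z)), a stable Fermi factor

def fermi_dirac(nu, chi, tau):
    """Generalized relativistic Fermi-Dirac integral I_nu(chi, tau), eq. (27)."""
    integrand = lambda x: x**nu*np.sqrt(1.0 + 0.5*tau*x)*expit(chi - x)
    if chi > 0.0:  # split at the Fermi step x = chi to help the quadrature
        a, _ = quad(integrand, 0.0, chi, limit=200)
        b, _ = quad(integrand, chi, np.inf, limit=200)
        return a + b
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val
```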
The functions $f_{1}(y)$ and $f_{2}(y)$ of previous section are presented in
terms of the Fermi - Dirac integrals $I_{\nu}(\chi_{\mathrm{e}}\,,\,\tau)$ in
Appendix B.
One obtains the EoS by referring to the neutrality of the plasma, which
provides the equation between the mass density $\rho$ of the ion and the
electron number density $n_{\mathrm{e}}$,
$n_{\mathrm{e}}\,=\,Zn_{i}\,=\,\frac{Z\rho}{Am_{u}}=\frac{\rho}{\mu_{u}}\,,$
(28)
where $Z$ is the ion charge number, $A$ is the mass number, $n_{i}$ is the ion
number density,
$m_{u}$ is the atomic mass unit and $\mu_{u}=A\,m_{u}/Z$ is introduced in
(68). Given values of $A\,,Z$ and $\rho$, one gets $n_{e}$ from the neutrality
condition (28) and then, inverting eq. (24) for a given temperature $T$,
one obtains the value of $\chi_{\mathrm{e}}$. Substituting this
$\chi_{\mathrm{e}}$ into eq. (25), one gets the value of the electron pressure
$P^{(\mathrm{e})}_{id}$ corresponding to the given value of $\rho$.
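This inversion is a one-dimensional root-finding problem. A hedged Python sketch of the whole procedure (our own illustration with the cgs constants quoted in the text; the bracket for the root finder is heuristic, and the fermi_dirac helper is the one sketched above) reads:

```python
import numpy as np
from scipy.optimize import brentq

ME_C2 = 8.187e-7                            # m_e c^2 [erg]
LAMBDA_E = 3.8616e-11                       # electron Compton length [cm]
RHO0 = 1.0/(3.0*np.pi**2*LAMBDA_E**3)       # reference number density [cm^-3]
T_R = 5.9301e9                              # m_e c^2 / k_B [K]
M_U = 1.6605e-24                            # atomic mass unit [g]

def electron_pressure(rho, T, A=12, Z=6):
    """P_id^(e) [erg/cm^3] for matter density rho [g/cm^3] and T [K],
    following eqs. (24), (25) and the neutrality condition (28)."""
    tau = T/T_R
    n_e = rho/(A*M_U/Z)                     # eq. (28)
    c_n = 3.0*np.sqrt(2.0)*RHO0*tau**1.5
    c_p = 2.0*np.sqrt(2.0)*ME_C2*RHO0*tau**2.5
    # invert eq. (24) for chi_e with a bracketing root finder
    f = lambda chi: c_n*(fermi_dirac(0.5, chi, tau)
                         + tau*fermi_dirac(1.5, chi, tau)) - n_e
    chi_e = brentq(f, -50.0, 1.0e5)
    return c_p*(fermi_dirac(1.5, chi_e, tau)        # eq. (25)
                + 0.5*tau*fermi_dirac(2.5, chi_e, tau))
```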
Table 1: In the upper part of this table we present the results of calculations of the radii and masses of the WDs for representative values of the central mass densities $\lambda_{c}$. We have solved the equation of stability (18) with the scaling parameter $a_{s}$ of eq. (20). The EoS is obtained within the concept of the Helmholtz free energy of the Coulomb plasma, using eqs. (25), (24) and (28). The central fractional Fermi momentum eq. (34) $x_{rc}$ is also listed, since it indicates whether the dynamics is non-relativistic or relativistic. In the lower part of the table (below the middle double line) we present the radii and masses obtained from the LE eq. (8), which is based on the polytropic EoS eq. (7). $R_{0}$ and $M_{0}$ are radii and masses in this approximation, $r_{0}$ and $m_{0}$ the corresponding dimensionless quantities, introduced at the end of Appendix A in eqs. (78-80) and (81-83). Comparing these (LE) results with our numerical ones (which do not employ a polytropic EoS) one can see that they are close to each other in the non-relativistic case, but in the relativistic regime they approach each other only in the extreme relativistic limit of very large densities.

model | 1 | 2 | 3 | 4 | 5 | 6
---|---|---|---|---|---|---
$\lambda_{c}[g/cm^{3}]$ | $10^{4}$ | $10^{5}$ | $10^{6}$ | $10^{7}$ | $10^{8}$ | $2\cdot 10^{9}$
$x_{rc}$ | $0.173$ | $0.372$ | $0.801$ | $1.36$ | $3.72$ | $10.1$
$R_{WD}$ [km] | $2.40\cdot 10^{4}$ | $1.63\cdot 10^{4}$ | $1.09\cdot 10^{4}$ | $7.04\cdot 10^{3}$ | $4.30\cdot 10^{3}$ | $2.05\cdot 10^{3}$
$M/M_{\odot}$ | 0.048 | 0.146 | 0.391 | 0.816 | 1.15 | 1.37
n | $3/2$ | $3/2$ | | 3 | 3 | 3
$r_{0}$ | $2.78$ | $1.90$ | | $1.79$ | $0.830$ | $0.306$
$m_{0}$ | $0.0185$ | $0.0583$ | | $0.542$ | $0.542$ | $0.542$
$R_{0}$ [km] | $2.42\cdot 10^{4}$ | $1.65\cdot 10^{4}$ | | $1.55\cdot 10^{4}$ | $7.21\cdot 10^{3}$ | $2.66\cdot 10^{3}$
$M_{0}/M_{\odot}$ | 0.050 | 0.157 | | 1.46 | 1.46 | 1.46
Apart from the electron pressure, Chamel and Fantina CHFA have taken into
account also the lattice pressure $P_{\mathrm{lat}}$, derived from the
dominant static-lattice (Madelung) part of the ion free energy PC3 , i.e.
approximating $F_{lat}\simeq F_{M}$:
$F_{\mathrm{M}}\,=\,N_{i}k_{B}T\mathrm{C}_{\mathrm{M}}\,\Gamma\,,\quad
N_{i}\,=\,n_{i}V\ ,$ (29)
and for the bcc crystal the Madelung constant is BPY :
$\displaystyle\mathrm{C}_{\mathrm{M}}$ $\displaystyle=$
$\displaystyle-0.89592925568\,.$ (30)
The ion coupling parameter
$\Gamma\,=\,\frac{(Ze)^{2}}{a_{i}k_{\mathrm{B}}T}\,,$ (31)
is given in terms of the ion sphere radius $a_{i}\,=\,\big{(}\frac{4}{3}\pi
n_{i}\big{)}^{-1/3}$.
As it can be seen from Table III CHFA , the effect of the pressure
$P_{\mathrm{lat}}$ on the mass of the WDs containing the light elements is
only a few percent, and we do not take it into account.
The results of comparative calculations are presented in Table 1.
## III Entropy and its gradient in WDs
In this section we calculate one - electron and ion entropies and their
gradients within the above introduced Coulomb plasma theory based on the
Helmholtz free energy concept. We show on the representative set of WDs that
both entropies are positive and that their gradients satisfy the condition of
the thermodynamical stability, required by eq. (10). We will deal with a
reduced dimensionless entropy, defined (both for electrons and ions) as:
$\displaystyle\hat{s}$ $\displaystyle=$ $\displaystyle\frac{S}{k_{B}\,N}\ ,$
(32)
where $N$ is the number of particles and $k_{\mathrm{B}}$ is the Boltzmann
constant. We will evaluate and plot the derivative of various contributions to
this reduced entropy with respect to the dimensionless radius $x$ (see (14)),
i.e., $d\hat{s}/d\,x$.
### III.1 The electron entropy
For the free electrons it follows from (32) and relations (24-26):
$\displaystyle\hat{s}_{e}$ $\displaystyle=$
$\displaystyle\frac{1}{k_{\mathrm{B}}T}\frac{U^{(e)}_{id}+P^{(e)}_{id}\,V-N_{e}\,\mu_{e}}{V\,n_{e}}=\frac{1}{k_{\mathrm{B}}T}\,\left(\frac{{\cal
E}_{e}+P^{(e)}_{id}}{n_{e}}-\mu_{e}\right)$ (33) $\displaystyle=$
$\displaystyle\frac{5\,I_{3/2}(\chi_{e},\tau)+4\tau\,I_{5/2}(\chi_{e},\tau)}{3(I_{1/2}(\chi_{e},\tau)+\tau\,I_{3/2}(\chi_{e},\tau))}-\chi_{e}\simeq\pi^{2}\,\tau\,\frac{\epsilon_{F}}{x^{2}_{\mathrm{r}}}\,,$
where $n_{\mathrm{e}}$ is the electron density, $\tau\,=\,T/T_{\mathrm{r}}$,
$T_{\mathrm{r}}\,=\,m_{\mathrm{e}}c^{2}/k_{\mathrm{B}}$. Further,
$p_{\mathrm{F}}\,=\,\hbar(3\pi^{2}n_{\mathrm{e}})^{1/3}$ is the electron Fermi
momentum and the dimensionless Fermi momentum $x_{r}$ and energy
$\epsilon_{F}$ are
$\displaystyle x_{\mathrm{r}}=\frac{p_{\mathrm{F}}}{m_{\mathrm{e}}c}\
,\quad\epsilon_{F}=\sqrt{1+x^{2}_{\mathrm{r}}}\equiv\tilde{\epsilon}_{F}+1\ .$
(34)
The last equation in eq. (33) is obtained by the Sommerfeld expansion (see
Appendix C); it agrees with equation (6) in PC2 . We checked numerically that
for our calculations it is sufficient to take the thermodynamic quantities in
the Sommerfeld approximation (SA).
As for the derivative of the electron entropy with respect to the dimensionless
WD radius $x$, let us first consider it in the more transparent SA. Using the
charge neutrality of the plasma (28), the electron Fermi momentum can be
connected to the matter density $\rho$ as follows,
$p_{\mathrm{F}}\,=\,\hbar\,\bigg{(}\frac{3\pi^{2}\rho}{\mu_{u}}\bigg{)}^{1/3}\equiv
D\,\rho^{1/3}\ ,\quad
D\,=\,\hbar\bigg{(}\frac{3\pi^{2}}{\mu_{u}}\bigg{)}^{1/3}\,.$ (35)
Since $\rho$ decreases towards the surface of the WD, $p_{\mathrm{F}}$ and
$x_{\mathrm{r}}$ also decrease, and it follows from eq. (33) that
$\hat{s}_{\mathrm{e}}$ increases. In other words, the specific one-electron
entropy is stratified. With the help of the relations:
$x_{r}=\frac{D}{m_{e}c}\,\rho^{1/3}\
,\qquad\epsilon_{F}=\frac{\sqrt{(m_{\mathrm{e}}c)^{2}\,+\,D^{2}\rho^{2/3}}}{m_{e}c}\
,$
the electron entropy (33) can be transformed to the form, suitable for
calculations,
$\hat{s}_{\mathrm{e}}\,=\,\frac{\pi^{2}k_{\mathrm{B}}T}{cD^{2}\rho^{2/3}}\,\sqrt{(m_{\mathrm{e}}c)^{2}\,+\,D^{2}\rho^{2/3}}\,.$
(36)
It is convenient to write the gradient of $\hat{s}_{\mathrm{e}}$ as a product:
$\displaystyle\frac{d\hat{s}_{\mathrm{e}}}{dx}\,$ $\displaystyle=$
$\displaystyle\,\left(\rho\,\frac{d\hat{s}_{\mathrm{e}}}{d\rho}\right)\cdot\,\frac{1}{\rho}\frac{d\rho}{dx}\
,\qquad\frac{1}{\rho}\frac{d\rho}{dx}<0\ ,$ (37)
$\displaystyle\rho\frac{d\hat{s}_{\mathrm{e}}}{d\rho}\,$ $\displaystyle=$
$\displaystyle\,-\frac{\pi^{2}k_{\mathrm{B}}T}{3cD^{2}\,\,\rho^{2/3}}\,\frac{2(m_{\mathrm{e}}c)^{2}+D^{2}\rho^{2/3}}{\sqrt{(m_{\mathrm{e}}c)^{2}\,+\,D^{2}\rho^{2/3}}}=-\frac{\pi^{2}\,\tau}{3}\,\frac{2+x_{r}^{2}}{x_{r}^{2}\,\epsilon^{2}_{F}}<0\
.$ (38)
Obviously, both terms are dimensionless and negative, hence their product is
dimensionless and positive:
$\frac{d\hat{s}_{\mathrm{e}}}{dx}\,>\,0\,.$ (39)
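Both the SA entropy (33) and the gradient factor (38) are one-liners to evaluate; the short Python sketch below (illustrative only) expresses them through the dimensionless $x_{r}$ and $\tau$, making the sign statements above easy to verify numerically:

```python
import numpy as np

def s_e_SA(x_r, tau):
    """Reduced electron entropy in the Sommerfeld approximation, eq. (33):
    pi^2 tau eps_F / x_r^2, with eps_F = sqrt(1 + x_r^2)."""
    return np.pi**2*tau*np.sqrt(1.0 + x_r**2)/x_r**2

def rho_dse_drho(x_r, tau):
    """rho d s_e/d rho from eq. (38); negative for every x_r > 0, so that
    d s_e/dx > 0 once multiplied by the negative (1/rho) d rho/dx."""
    return -np.pi**2*tau/3.0*(2.0 + x_r**2)/(x_r**2*(1.0 + x_r**2))
```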
It should be noted that our calculations respect the criterion of strong
degeneracy,
$\theta\,=\,T/T_{\mathrm{F}}\,<<\,1\,,\quad{\rm where}\
T_{\mathrm{F}}\,=\,\frac{\tilde{E}_{\mathrm{F}}}{k_{\mathrm{B}}}\,,$ (40)
and
$\tilde{E}_{\mathrm{F}}=E_{\mathrm{F}}-m_{e}c^{2}=\,c[(m_{\mathrm{e}}c)^{2}\,+\,p^{2}_{\mathrm{F}}]^{1/2}\,-\,m_{\mathrm{e}}\,c^{2}=m_{e}c^{2}\,\tilde{\epsilon}_{\mathrm{F}}$
is the Fermi energy with the rest mass contribution subtracted.
Due to the very good thermal conductivity of the WD, the temperature $T$ (and
hence $\tau=T/T_{r}$) is nearly constant inside the WD, with the exception of a
thin skin at its surface. Therefore, in our calculations we consider $\tau$ to
be independent of the radius $x$.
We have also checked that the empirical factor
$(1\,+\,\Delta\tilde{\epsilon}/\tilde{\epsilon})^{-1}$ PC2 , minimizing the
numerical jump of the transition between the fit for $\chi_{\mathrm{e}}<14$
and the Sommerfeld expansion for $\chi_{\mathrm{e}}>14$, did not lead in our
calculations to any sizeable effect.
Let us now briefly mention the equation for the derivative of the electron
entropy following from the full form of (33). For easier comparison with equations
(37,38) we can again use the factorization (37), where the 2nd term is now
(with the help of (24) and (28)):
$\displaystyle\rho\frac{d\hat{s}_{\mathrm{e}}}{d\rho}\,$ $\displaystyle=$
$\displaystyle
n_{e}\,\frac{d\hat{s}_{\mathrm{e}}}{dn_{e}}=\frac{n_{e}}{n^{\prime}_{e}}\cdot\,\hat{s}^{\prime}_{e}\
,$ (41) $\displaystyle n^{\prime}_{e}$ $\displaystyle\equiv$
$\displaystyle\frac{dn_{e}}{d\chi_{e}}=I^{\prime}_{1/2}+\tau\,I^{\prime}_{3/2}\
,\qquad I^{\prime}_{\nu}\equiv\frac{d\,I_{\nu}(\chi_{e},\tau)}{d\chi_{e}}\ ,$
$\displaystyle\hat{s}^{\prime}_{e}$ $\displaystyle\equiv$
$\displaystyle\frac{d\hat{s}_{e}}{d\chi_{e}}=\frac{(I_{1/2}+\tau\,I_{3/2})(5I^{\prime}_{3/2}+4\tau
I^{\prime}_{5/2})-(I^{\prime}_{1/2}+\tau\,I^{\prime}_{3/2})(5I_{3/2}+4\tau
I_{5/2})}{3(I_{1/2}+\tau\,I_{3/2})^{2}}-1\ .$
To calculate (41) one needs derivatives of the Fermi-Dirac integrals
$I_{\nu}(\chi,\tau)$ with respect to $\chi$. It is also not obvious from the
general equation that (41) is negative. Nevertheless, we checked numerically
that in our calculations the general equation (41) agrees very well with the
approximate one (38).
### III.2 The ion entropy
As for the ions, we consider them in the crystalline phase, in which they are
arranged in the body-centered cubic (bcc) Coulomb lattice (see Sect. 3.2.2 of
Ref. PC3 ).
In this state, $T<T_{\mathrm{m}}$, where $T_{\mathrm{m}}$ is the melting
temperature. For the one-component Coulomb plasma, it is obtained from the
relation,
$\Gamma_{\mathrm{m}}\,=\,2.2747\times
10^{5}\,\frac{Z^{5/3}}{T_{\mathrm{m}}}\,\bigg{(}\rho\frac{Z}{A}\bigg{)}^{1/3}\,,$
(42)
where $\Gamma_{\mathrm{m}}\,=\,175\,\pm\,0.4$ PC1 .
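As an illustrative evaluation (our own arithmetic), for carbon ($Z=6$, $A=12$) at $\rho=10^{6}\,\mathrm{g/cm^{3}}$, eq. (42) with $\Gamma_{\mathrm{m}}=175$ gives

$T_{\mathrm{m}}\,=\,\frac{2.2747\times 10^{5}}{\Gamma_{\mathrm{m}}}\,Z^{5/3}\,\bigg{(}\rho\frac{Z}{A}\bigg{)}^{1/3}\,\simeq\,\frac{2.2747\times 10^{5}\times 19.8\times 79.4}{175}\,\simeq\,2.0\times 10^{6}\,\mathrm{K}\,,$

and since $T_{\mathrm{m}}$ grows as $\rho^{1/3}$, the crystalline regime $T<T_{\mathrm{m}}$ is indeed relevant for sufficiently cool, dense WD interiors.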
Beyond the harmonic-lattice approximation (29), the reduced dimensionless one-
ion free energy is given by:
$\displaystyle f_{\mathrm{lat}}(\Gamma,\eta)$ $\displaystyle\equiv$
$\displaystyle\frac{F_{\mathrm{lat}}}{N_{\mathrm{i}}k_{\mathrm{B}}T}=C_{\mathrm{M}}\,\Gamma\,+\,1.5\,u_{1}\,\eta\,+\,f_{\mathrm{th}}\,+\,f_{\mathrm{ah}}\
.$ (43)
The first three terms describe the harmonic lattice model BPY and
$f_{\mathrm{ah}}$ is the anharmonic correction to the Coulomb lattice.
Further, $C_{\mathrm{M}}$ is the Madelung constant (30) and
$u_{1}\,=\,0.5113875$. The parameter $\eta$, determining the importance of the
quantum effects in a strongly coupled plasma, is PC3 :
$\eta\,\equiv\,7.835\frac{Z}{A}\cdot\frac{\sqrt{\rho}}{T}\,\times 10^{3}\,.$
(44)
The ion coupling parameter $\Gamma\sim 1/T$ is defined by (31).
For $f_{\mathrm{th}}$ we adopt the following fitting formula, used in the
Appendix B.2 of Ref. PC3 :
$f_{\mathrm{th}}(\eta)\,=\,\sum_{i=1}^{3}ln\big{(}1-e^{-\alpha_{i}\eta}\big{)}-\frac{A(\eta)}{B(\eta)}\,,$
(45)
where
$\alpha_{1}\,=\,0.932446,\,\alpha_{2}\,=\,0.334547,\,\alpha_{3}\,=\,0.265764$
and
$\displaystyle A(\eta)\,=\,\sum_{i=1}^{7}a_{i}\,\eta^{m_{i}}\ ,\quad
B(\eta)\,=\,\sum_{i=1}^{8}b_{i}\,\eta^{n_{i}}\ .$ (46)
Table 2: In this table, the input data for eqs. (46) are presented.

$i$ | $a_{i}$ | $m_{i}$ | $b_{i}$ | $n_{i}$
---|---|---|---|---
1 | 1.0 | 0 | 261.66 | 0
2 | 0.1839 | 1 | 7.07997 | 2
3 | 0.593586 | 2 | 4.09484$\times 10^{-2}$ | 4
4 | 5.4814$\times 10^{-3}$ | 3 | 3.97355$\times 10^{-4}$ | 5
5 | 5.01813$\times 10^{-4}$ | 4 | 5.11148$\times 10^{-5}$ | 6
6 | 3.9247$\times 10^{-7}$ | 6 | 2.19749$\times 10^{-6}$ | 7
7 | 5.8356$\times 10^{-11}$ | 8 | 1.866985$\times 10^{-9}$ | 9
8 | - | - | 2.78772$\times 10^{-13}$ | 11
For the anharmonic correction of the Coulomb lattice $f_{\mathrm{ah}}$, we use
the anharmonic contribution to the one-ion entropy from the Sect. 4 of the
recent work BC .
Using eq. (43), we calculate the dimensionless one-ion entropy as,
$\displaystyle\hat{s}_{i}(\Gamma,\eta)$ $\displaystyle=$
$\displaystyle\,-\frac{1}{k_{\mathrm{B}}\,N_{i}}\,\frac{\partial
F_{lat}}{\partial T}\,=\,-\,f_{lat}-T\,\frac{\partial f_{lat}}{\partial
T}=-\,f_{lat}+\Gamma\,\frac{\partial
f_{lat}}{\partial\Gamma}+\eta\,\frac{\partial f_{lat}}{\partial\eta}\ ,$ (47)
where we used relations
$T\,\frac{\partial\Gamma}{\partial T}\,=-\Gamma\ ,\quad
T\,\frac{\partial\eta}{\partial T}\,=-\eta\ .$ (48)
It is obvious from (47) that the first two terms of (43) (linear in $\Gamma$
and $\eta$, resp.) do not contribute to the entropy (since the corresponding
contributions to $F_{lat}$ do not depend on temperature). From the last part
of harmonic contribution, i.e. from $f_{th}$, we obtain for the harmonic part
of the entropy $\hat{s}_{i}(har)$ two contributions corresponding to two terms
of (45):
$\hat{s}_{i}(har)\,=\,\hat{s}_{ths}(\eta)\,+\,\hat{s}_{thr}(\eta)\,,$ (49)
with
$\displaystyle\hat{s}_{ths}(\eta)\,$ $\displaystyle=$
$\displaystyle\,\sum_{k=1}^{3}\,\big{[}-ln(1-e^{-\alpha_{k}\,\eta})\,+\,\frac{\eta\alpha_{k}e^{-\alpha_{k}\,\eta}}{1-e^{-\alpha_{k}\,\eta}}\big{]}\,,$
(50) $\displaystyle\hat{s}_{thr}(\eta)\,$ $\displaystyle=$
$\displaystyle\,\frac{A(\eta)}{B(\eta)}\,-\,\frac{\tilde{A}(\eta)^{\prime}}{B}\,+\,A(\eta)\,\frac{\tilde{B}(\eta)^{\prime}}{B(\eta)^{2}}\
,$ (51)
where we denote
$\tilde{C}(\eta)^{\prime}\,\equiv\,\eta\,\frac{dC(\eta)}{d\eta}\,,\,\,\,\,C\,=\,A\,,B\,.$
(52)
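The harmonic terms translate directly into code; an illustrative Python transcription of eq. (50) and of its density-gradient factor (see eq. (61) below), using the $\alpha_{k}$ quoted above, is:

```python
import numpy as np

ALPHA = np.array([0.932446, 0.334547, 0.265764])

def s_ths(eta):
    """Harmonic-lattice entropy contribution, eq. (50)."""
    x = ALPHA*eta
    return np.sum(-np.log1p(-np.exp(-x)) + x*np.exp(-x)/(1.0 - np.exp(-x)))

def ds_ths(eta):
    """Density-gradient factor of eq. (61); manifestly negative."""
    x = ALPHA*eta
    return -0.5*np.sum(x**2*np.exp(-x)/(1.0 - np.exp(-x))**2)
```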
The anharmonic part of the one-ion entropy $\hat{s}_{i}(ah)$ was parametrized
in Ref. BC . From eqs. (20) - (24) of this paper one gets:
$\displaystyle\hat{s}_{i}(ah)\,$ $\displaystyle=$
$\displaystyle\,\frac{\tilde{A}^{S}_{1}(\eta)}{\Gamma}\,+\,\frac{\tilde{A}^{S}_{2}(\eta)}{\Gamma^{2}}\,+\,\frac{\tilde{A}^{S}_{3}(\eta)}{\Gamma^{3}}\,,\qquad\tilde{A}^{S}_{i}(\eta)=\eta^{i}\,A^{S}_{i}(\eta)\,,$
(53) $\displaystyle\tilde{A}^{S}_{1}(\eta)\,$ $\displaystyle=$
$\displaystyle\,-2A_{11}\,\frac{2A_{12}\eta^{2}\,+\,1}{(1\,+\,A_{12}\eta^{2})^{2}}-2A_{13}\,\frac{2A_{14}\eta^{2}\,+\,1}{(1\,+\,A_{14}\eta^{2})^{2}}\,,$
(54) $\displaystyle\tilde{A}^{S}_{2}(\eta)\,$ $\displaystyle=$
$\displaystyle\,\frac{3A_{2cl}}{2(1\,+\,A_{21}\eta^{4})^{1/4}}\,,$ (55)
$\displaystyle\tilde{A}^{S}_{3}(\eta)\,$ $\displaystyle=$
$\displaystyle\,\frac{4A_{3cl}}{3}\,.$ (56)
The parameters entering eqs. (54)-(56) are listed in Table 1 and in eqs.
(9)-(11)
of Ref. BC :
$\displaystyle A_{1cl}\,=\,10.2\,,\quad
A_{2cl}\,=\,248\,,\quad\,A_{3cl}\,=\,2.03\times 10^{5}\,,$ $\displaystyle
A_{1q}\,=\,-0.62/6\,,\quad A_{2q}\,=\,-0.56\,,$ $\displaystyle
A_{11}\,=\,-10\,,\quad A_{12}\,=\,6\times 10^{-3}\,,\quad
A_{13}\,=\,-0.2\,,\quad A_{14}\,=\,0.2167\,,\quad A_{21}\,=\,2.9624\times
10^{-4}\ ,$
As in the case of the derivative of the electron entropy (37), let us also
factorize $\hat{s}_{i}$:
$\displaystyle\frac{d\hat{s}_{\mathrm{i}}}{dx}\,$ $\displaystyle=$
$\displaystyle\,\left(\rho\,\frac{d\hat{s}_{\mathrm{i}}}{d\rho}\right)\cdot\,\frac{1}{\rho}\frac{d\rho}{dx}\
,\qquad\frac{1}{\rho}\frac{d\rho}{dx}<0\ .$ (57)
Then, using the fact that $\hat{s}_{i}=\hat{s}_{i}(\Gamma,\eta)$ and taking
into account the identities:
$\rho\,\frac{\partial\Gamma}{\partial\rho}\,=\frac{\Gamma}{3}\
,\quad\rho\,\frac{\partial\eta}{\partial\rho}\,=\frac{\eta}{2}\ ,$ (58)
we get for the ion entropy:
$\displaystyle\rho\,\frac{d\hat{s}_{i}}{d\rho}$ $\displaystyle=$
$\displaystyle\rho\,\frac{\partial\Gamma}{\partial\rho}\,\frac{\partial\hat{s}_{i}}{\partial\Gamma}+\rho\,\frac{\partial\eta}{\partial\rho}\,\frac{\partial\hat{s}_{i}}{\partial\eta}=\frac{\Gamma}{3}\,\frac{\partial\hat{s}_{i}}{\partial\Gamma}+\frac{\eta}{2}\,\frac{\partial\hat{s}_{i}}{\partial\eta}\
.$ (59)
For the harmonic part (49-51) it then follows
$\rho\frac{\partial\hat{s}_{i}(har)}{\partial\rho}\,=\frac{\eta}{2}\,\frac{\partial\hat{s}_{ths}(\eta)}{\partial\eta}+\frac{\eta}{2}\,\frac{\partial\hat{s}_{thr}(\eta)}{\partial\eta}\equiv
ds_{ths}(\eta)\,+\,ds_{thr}(\eta)\ ,$ (60)
with
$\displaystyle ds_{ths}(\eta)\,$ $\displaystyle=$
$\displaystyle\,-\frac{1}{2}\,\sum_{k=1}^{3}\,(\eta\alpha_{k})^{2}\,\frac{e^{-\alpha_{k}\eta}}{(1\,-\,e^{-\alpha_{k}\eta})^{2}}\,,$
(61) $\displaystyle ds_{thr}(\eta)\,$ $\displaystyle=$
$\displaystyle\,-\frac{A\tilde{B}^{\prime
2}}{B^{3}}\,+\,\frac{1}{2B^{2}}\,\big{(}2\tilde{A}^{\prime}\tilde{B}^{\prime}\,+\,A\tilde{B}^{\prime\prime}\big{)}-\frac{\tilde{A}^{\prime\prime}}{2B}\
,$ (62)
where
$\tilde{C}(\eta)^{\prime\prime}\,\equiv\,\eta^{2}\,\frac{d^{2}C(\eta)}{d\eta^{2}}\,,\,\,\,\,C\,=\,A\,,B\,.$
(63)
In its turn, the derivative of the anharmonic part $\hat{s}_{i}(ah)$ over
$\rho$ is,
$\rho\frac{\partial\hat{s}_{i}(ah)}{\partial\rho}\,=\,\rho\frac{\partial}{\partial\rho}\bigg{[}\frac{\tilde{A}^{S}_{1}(\eta)}{\Gamma}\bigg{]}+\rho\frac{\partial}{\partial\rho}\bigg{[}\frac{\tilde{A}^{S}_{2}(\eta)}{\Gamma^{2}}\bigg{]}+\rho\frac{\partial}{\partial\rho}\bigg{[}\frac{\tilde{A}^{S}_{3}(\eta)}{\Gamma^{3}}\bigg{]}\,,$
(64)
where we can write similar to (59),
$\displaystyle\rho\frac{\partial}{\partial\rho}\bigg{[}\frac{\tilde{A}^{S}_{n}(\eta)}{\Gamma^{n}}\bigg{]}$
$\displaystyle=$
$\displaystyle\left(\frac{\Gamma}{3}\,\frac{\partial}{\partial\Gamma}+\frac{\eta}{2}\,\frac{\partial}{\partial\eta}\right)\,\frac{\tilde{A}^{S}_{n}(\eta)}{\Gamma^{n}}=\frac{1}{\Gamma^{n}}\,\left(-\frac{n}{3}\,\tilde{A}^{S}_{n}(\eta)+\frac{\eta}{2}\,\frac{d\tilde{A}^{S}_{n}(\eta)}{d\eta}\right)\
,\quad n=1,2,3\ .$
From the explicit form (54)-(56) of factors $\tilde{A}^{S}_{n}(\eta)$ one gets
with the help of the relation above:
$\displaystyle\rho\frac{\partial}{\partial\rho}\bigg{[}\frac{\tilde{A}^{S}_{1}(\eta)}{\Gamma}\bigg{]}\,$
$\displaystyle=$
$\displaystyle\,\frac{2}{3\Gamma}\bigg{[}\frac{A_{11}}{(1\,+\,A_{12}\eta^{2})^{3}}\,(8A_{12}^{2}\eta^{4}\,+\,3A_{12}\eta^{2}\,+\,1)\bigg{]}$
(65)
$\displaystyle\,+\frac{2}{3\Gamma}\bigg{[}\frac{A_{13}}{(1\,+\,A_{14}\eta^{2})^{3}}\,(8A_{14}^{2}\eta^{4}\,+\,3A_{14}\eta^{2}\,+\,1)\bigg{]}\,,$
$\displaystyle\rho\frac{\partial}{\partial\rho}\bigg{[}\frac{\tilde{A}^{S}_{2}(\eta)}{\Gamma^{2}}\bigg{]}\,$
$\displaystyle=$
$\displaystyle\,-\frac{A_{2cl}}{4\Gamma^{2}}\frac{(4\,+\,7A_{21}\eta^{4})}{(1\,+\,A_{21}\eta^{4})^{5/4}}\,,$
(66)
$\displaystyle\rho\frac{\partial}{\partial\rho}\bigg{[}\frac{\tilde{A}^{S}_{3}(\eta)}{\Gamma^{3}}\bigg{]}\,$
$\displaystyle=$ $\displaystyle\,-\frac{4A_{3cl}}{3\Gamma^{3}}\,.$ (67)
The electron and ion entropies and their derivatives presented in this section
depend for a given WD (i.e. given $Z,\,A,\,T$ and $\lambda_{c}$) on the
chemical potential $\chi_{e}$ or on the parameters $\eta$ and $\Gamma$, resp.,
which are all determined from the density $\rho$ obtained by integrating the
equation of stability. This way we analyzed numerically the WD models 1–6 of
Table 1. The results are plotted in Figs. 2 and 3.
FIG. 2: Entropies (left column) and their derivatives (right column) for the
first three models.
On the horizontal axis we plot the dimensionless radial distance (14); on the
vertical axes we plot the dimensionless reduced entropy $\hat{s}$ (eq. (32)) in
the left plots and $d\hat{s}/d\,x$, i.e., derivatives of reduced entropies with
respect to the dimensionless radius $x$ (eq. (14)), in the right plots. As
indicated in the legends, red dotted curves are ion contributions, blue dashed
ones are electron contributions, and black solid curves display the sums (total values).
FIG. 3: The same as the previous figure for models 4-6.
For the light WDs (see Fig. 2) the ion entropy prevails, although the electron
contribution becomes gradually important, in particular in the inner parts of
the WDs. The ion contribution also dominates the entropy gradient.
For heavier WDs (see Fig. 3) the electron entropy is more important almost in
the whole star; only close to the surface does the ion part prevail. The same
trend is apparent also for the entropy gradient.
As it can be seen from our calculations, the entropy is positive (its
positivity was noted also in Ref. ASJ ) for all models, and the entropy
gradient satisfies the condition of the thermodynamical stability of stars,
eq. (10).
## IV Conclusions
The frequently used polytropic model SC1 of the description of the WDs has
two drawbacks: a) It is of restricted use, because it approximates the EoS
realistically only in the non-relativistic limit for $\lambda_{c}<<10^{6}\,\mathrm{g/cm^{3}}$
and in the extreme relativistic limit for $\lambda_{c}>>10^{6}\,\mathrm{g/cm^{3}}$.
b) The fluid, described by the polytropic model, is only neutrally stable JPM
.
In this paper, we have shown on a representative set of carbon WDs that
their description, based on the EoS formulated in the theory of the magnetized
Coulomb plasma in Refs. CP – ASJ , satisfies the stability requirement
given by eq. (10). As seen in Figs. 2 and 3, both the entropy and its
gradient are positive.
It would be important to investigate whether this requirement is satisfied
also in the presence of a strong magnetic field. Such a finding would mean
that the existence of strongly magnetized WDs is possible.
## Acknowledgments
One of us (E. T.) thanks Dr. A.Y. Potekhin for the correspondence, discussions
and advice. The correspondence with Dr. N. Chamel is acknowledged.
## References
* (1) S. Chandrasekhar, An Introduction to the Study of Stellar Structure, Dover Publications, INC., University Chicago Press, 1939.
* (2) M. Camenzind, Compact Objects in Astrophysics, Springer Verlag, Berlin, Heidelberg, 2007.
* (3) A.Y. Potekhin, Phys. Usp. 53 (2010) 1235.
* (4) S. Chandrasekhar, Astrophys. J. 74 (1931) 81.
* (5) L.D. Landau, Phys. Z. Sowjetunion 1 (1932) 285.
* (6) S.L. Shapiro, S.A. Teukolsky, Black Holes, White Dwarfs, and Neutron Stars: The Physics of Compact Objects, Wiley, New York, 1983.
* (7) J.P. Mitchell, J. Braithwaite, A. Reisenegger, H. Spruit, J.A. Valvidia and N. Langer, Mon. Not. R. Astron. Soc. 447 (2015) 1213.
* (8) T. Akgün, A. Reisenegger, A. Mastrano and P. Marchant, Mon. Not. R. Astron. Soc. 433 (2013) 2445.
* (9) J. Braithwaite, Mon. Not. R. Astron. Soc. 397 (2009) 763.
* (10) H.C. Spruit, Astron. Astrophys. 349 (1999) 189.
* (11) C.S. Bisnovatyi - Kogan, Stellar Physics 1: Fundamental Concepts and Stellar Equilibrium, Springer, 2001.
* (12) L. Becerra, A. Reisenegger, J.A. Valvidia and M.E. Gusakov, Mon. Not. R. Astron. Soc. 511 (2022) 732.
* (13) U. Dass and B. Mukhopadhyay, Int. J. Mod. Phys. D 22 (2013) 134.
* (14) U. Dass and B. Mukhopadhyay, J. Cosmol. Astropart. Phys. 06 (2014) 050.
* (15) U. Dass and B. Mukhopadhyay, J. Cosmol. Astropart. Phys. 05 (2015) 016.
* (16) D. Bera and D. Bhattacharya, Mon. Not. R. Astron. Soc. 465 (2017) 4026.
* (17) D. Chatterjee, A.F. Fantina, N. Chamel, J. Novak and M. Oertel, Mon. Not. R. Astron. Soc. 469 (2017) 95.
* (18) N. Chamel, L. Perot, A.F. Fantina, D. Chatterjee, S. Ghosh, J. Novak and M. Oertel, in The 16th Marcel Grossmann Meeting on General Relativity, 5 - 10 July 2021, eds. R. Ruffini and G. Vereshchagin, World Scientific, Singapore, 2023, pp. 4488 - 4507.
* (19) L. Becerra, K. Boshkayev, J.A. Rueda and R. Ruffini, Mon. Not. R. Astron. Soc. 487 (2019) 812.
* (20) M. Malheiro, J.A. Rueda and R. Ruffini, Publ. Astron. Soc. Jpn. 64 (2012) 56.
* (21) N.R. Ikhsanov and N.G. Beskrovnaya, Astron. Rep. 56 (2012) 595.
* (22) K. Boshkayev, L. Izzo, J.A. Hernandez Rueda and R. Ruffini, Astron. Astrophys. 555 (2013) A151.
* (23) J.A. Rueda, K. Boshkayev, L. Izzo, R. Ruffini, P. Loren - Aguilar, B. Külebi, G. Aznar - Siguán and E. Garcia - Berro, Astrophys. J. 772 (2013) L24.
* (24) J.G. Coelho and M. Malheiro, Publ. Astron. Soc. Jpn. 66 (2014) 14.
* (25) R.V. Lobato, M. Malheiro and J.G. Coelho, Int. J. Mod. Phys. D 25 (2016) 1641025.
* (26) V.B. Belyaev, P. Ricci, F. Šimkovic, J. Adam, M. Tater and E. Truhlík, Nucl. Phys. A 937 (2015) 17.
* (27) T.R. Marsh, B.T. Gänsicke, S. Hümmerich et al., Nature 537 (2016) 374.
* (28) G. Chabrier and A.Y. Potekhin, Phys. Rev. E 58 (1998) 4941.
* (29) D.A. Baiko, A.Y. Potekhin and D.G. Yakovlev, Phys. Rev. E 64 (2001) 057402.
* (30) A.Y. Potekhin and G. Chabrier, Phys. Rev. E 62 (2000) 8554.
* (31) A.Y. Potekhin and G. Chabrier, Contrib. Plasma Phys. 50 (2010) 82.
* (32) A.Y. Potekhin and G. Chabrier, Astron. Astrophys. 550 (2013) A43.
* (33) A.S. Jermyn, J. Schwab, E. Bauer, F.X. Timmes and A.Y. Potekhin,
Astrophys. J. 913 (2021) 72.
* (34) D.A. Baiko and A.I. Chugunov, Mon. Not. R. Astron. Soc. 510 (2022) 2628.
* (35) K. Boshkayev, Astron. Rep. 62(12) (2018) 847.
* (36) N. Chamel and A.F. Fantina, Phys. Rev. D 92 (2015) 023008.
* (37) C.B. Jackson, J. Taruna, S.L. Pouliot, B.W. Ellison, D.D. Lee and J. Piekarewicz, Eur. J. Phys. 26 (2005) 695, arXiv:astro-ph/0409348v2.
* (38) W. Greiner, L. Neise and H. Stoeker, Thermodynamics and Statistical Mechanics, Chap. 14, Springer-Verlag, New York, 1995.
## Appendix A Scaling And Dimensionless Equations
Let us briefly present an alternative derivation of differential equations
describing the WD, starting from the usual Newtonian formulation of the
mechanical stability for the spherical WD (which is also a starting point in
the main text (2)):
$\displaystyle\frac{d\,P_{e}(r)}{d\,r}$ $\displaystyle=$
$\displaystyle-G\,\frac{M(r)\,\rho(r)}{r^{2}}\ ,$
$\displaystyle\frac{d\,M(r)}{d\,r}$ $\displaystyle=$ $\displaystyle
4\pi\,r^{2}\,\rho(r)\ ,$
where $\rho(r)$ is a matter density:
$\displaystyle\rho(r)$ $\displaystyle=$ $\displaystyle
\mu_{u}\,n_{e}(r),\qquad\mu_{u}\equiv\frac{A}{Z}\,m_{u}\ ,$ (68)
and $P_{e}$ is an electron pressure. In the calculations of this paper we used
$A=12,Z=6$ and:
$\displaystyle m_{u}=931.494\,{\rm
MeV/c^{2}}\quad\rightarrow\quad\mu_{u}=1\,862.988\,{\rm MeV/c^{2}}\ .$
$M(r)$ is the mass contained inside the radius $r$. It appears convenient to
rewrite the electron pressure and density in terms of the dimensionless
quantities $\tilde{P}$ and $\tilde{n}_{e}$:
$\displaystyle n_{e}$ $\displaystyle=$ $\displaystyle\rho_{0}\,\tilde{n}_{e}\
,\quad\rho_{0}=\frac{1}{3\pi^{2}\,\lambda^{3}_{e}}\
,\quad\lambda_{e}=\frac{\hbar}{m_{e}c}\simeq 386.164\,{\rm fm}\ ,$ (69)
$\displaystyle P_{e}$ $\displaystyle=$ $\displaystyle
m_{e}c^{2}\,\rho_{0}\,\tilde{P}\ ,$
where $\lambda_{e}$ is the electron Compton length (its value is obtained from
$m_{e}\,c^{2}\simeq 0.511\,$MeV and $\hbar\,c\simeq 197.33\,{\rm MeV\cdot
fm}$).
The first equation then reads:
$\displaystyle\frac{d\,\tilde{P}(r)}{d\,r}$ $\displaystyle=$
$\displaystyle-G\,\frac{\mu_{u}}{m_{e}c^{2}}\
\frac{M(r)\,\tilde{n}_{e}(r)}{r^{2}}\ .$
Next we re-scale also the radius $r$ (as in (20)) and the mass $M(r)$:
$\displaystyle r=a_{s}\,x\ ,\qquad M(r)=M_{s}\,\tilde{m}(x)\ .$
In terms of $x$ and $\tilde{m}(x)$ the set of differential equations read:
$\displaystyle\frac{d\,\tilde{P}(x)}{d\,x}$ $\displaystyle=$
$\displaystyle-G\,\frac{\mu_{u}}{m_{e}c^{2}}\,\frac{M_{s}}{a_{s}}\
\frac{\tilde{m}(x)\,\tilde{n}_{e}(x)}{x^{2}}\ ,$
$\displaystyle\frac{d\,\tilde{m}(x)}{d\,x}$ $\displaystyle=$ $\displaystyle
4\pi\,\mu_{u}\,\rho_{0}\,\frac{a^{3}_{s}}{M_{s}}\ x^{2}\,\tilde{n}_{e}(x)\ .$
The dimensionless constants appearing on the r.h. sides can be fixed to our
convenience, we adopt a choice (following ref. JTPELP ):
$\displaystyle G\,\frac{\mu_{u}}{m_{e}c^{2}}\,\frac{M_{s}}{a_{s}}$
$\displaystyle=$ $\displaystyle\frac{5}{3}\ ,$ $\displaystyle
4\pi\,\mu_{u}\,\rho_{0}\,\frac{a^{3}_{s}}{M_{s}}$ $\displaystyle=$
$\displaystyle 3\ ,$
from which one gets (cp (20))
$\displaystyle a_{s}$ $\displaystyle=$
$\displaystyle\frac{\sqrt{15\pi}}{2\mu_{u}}\,\sqrt{\frac{m_{e}c^{2}\,\lambda_{e}^{3}}{G}}=\frac{\sqrt{15\pi}}{2}\,\lambda_{e}\,\frac{m_{Pl}}{\mu_{u}}\,\simeq
8686.26\,{\rm km}\ ,$ (70) $\displaystyle M_{s}$ $\displaystyle=$
$\displaystyle a_{s}\cdot\ \frac{5m_{e}c^{2}}{3\,\mu_{u}\,G}\simeq
2.646\,M_{\odot}\ ,$ (71)
where $m_{Pl}$ is the Planck mass (with a corresponding value of the
gravitational constant $G$):
$\displaystyle m_{Pl}$ $\displaystyle\equiv$ $\displaystyle\sqrt{\frac{\hbar
c}{G}}\simeq 1.2209\cdot 10^{22}\,{\rm MeV}\ ,\quad G\simeq 6.6742\,\cdot
10^{-8}\,{\rm cm^{3}/(g\cdot sec^{2})}\ .$ (72)
The set of the dimensionless DE is now:
$\displaystyle\frac{d\,\tilde{P}(x)}{d\,x}$ $\displaystyle=$
$\displaystyle-\frac{5}{3}\,\frac{\tilde{m}(x)\,\tilde{n}_{e}(x)}{x^{2}}\ ,$
$\displaystyle\frac{d\,\tilde{m}(x)}{d\,x}$ $\displaystyle=$ $\displaystyle
3\,x^{2}\,\tilde{n}_{e}(x)\ .$
To proceed further one has to realize that the quantities $\tilde{P}$ and
$\tilde{n}_{e}(x)$ are known functions of a temperature $T$ and electron
chemical potential $\mu_{e}$ (taken here without the electron rest mass), or
more conveniently, of the dimensionless variables (see the main text):
$\displaystyle\chi_{e}=\mu_{e}\,\beta\
,\qquad\tau=\frac{1}{\beta\,m_{e}c^{2}}\ ,\qquad\beta=\frac{1}{k_{B}\,T}\ .$
An important simplification then follows from an assumption (employed also in
an alternative formulation in the main text) that to a very good approximation
the temperature in the WD is constant, i.e. $T$ and hence $\tau$ do not depend
on $x$ and are fixed by their initial value. Then all quantities of interest
in the WD, in particular $\tilde{P}$ and $\tilde{n}_{e}(x)$, depend on the
radius $r$ – and hence on the dimensionless $x$ – implicitly just through a
single $x-$dependent function, e.g. $\chi_{e}(x)$. Thus, we can write:
$\displaystyle\frac{d\,\tilde{P}(x)}{d\,x}$ $\displaystyle=$
$\displaystyle\frac{d\,\tilde{P}(x)}{d\,\chi_{e}}\,\frac{d\,\chi_{e}(x)}{d\,x}\
,$
where the derivative $\tilde{P}(x)/d\,\chi_{e}$ can be calculated from the
explicit form of $\tilde{P}=\tilde{P}(\chi_{e},\tau)$. It is convenient to
consider instead of $\chi_{e}(x)$ a variable $\varphi(x)$:
$\displaystyle\varphi(x)$ $\displaystyle=$
$\displaystyle\chi_{e}\,\tau=\frac{\mu_{e}}{m_{e}c^{2}}\ ,$
which is a dimensionless electron chemical potential. Its advantage is that
for $T=0$ it just reduces to the dimensionless electron Fermi energy
$\tilde{\varepsilon}_{F}$:
$\displaystyle\varphi\ \xrightarrow[T\rightarrow 0]{}\
\frac{\tilde{E}_{F}}{m_{e}c^{2}}\equiv\tilde{\varepsilon}_{F}\ ,$ (73)
where $\tilde{E}_{F}$ (and $\tilde{\varepsilon}_{F}$) has a contribution of
the electron rest mass subtracted. The resulting set of the DE is then:
$\displaystyle\frac{d\,\varphi(x)}{d\,x}$ $\displaystyle=$
$\displaystyle-\frac{5}{3}\,\frac{\tilde{m}(x)}{x^{2}}\,g(\varphi)\ ,\qquad
g(\varphi)=\frac{\tilde{n}_{e}}{\frac{d\,\tilde{P}}{d\,\varphi}}=\frac{\tau\,\tilde{n}_{e}(\chi_{e},\tau)}{\frac{d\,\tilde{P}(\chi_{e},\tau)}{d\,\chi_{e}}}\
,\qquad\chi_{e}(x)=\frac{\varphi(x)}{\tau}\ ,$ (74)
$\displaystyle\frac{d\,\tilde{m}(x)}{d\,x}$ $\displaystyle=$ $\displaystyle
3\,x^{2}\,\tilde{n}_{e}(x)\ .$ (75)
As for the initial conditions for these equations:
$\tilde{m}(0)\equiv\tilde{m}_{c}=0$ (and $\tilde{m}(x)$ is an increasing
function of $x$), a value of $\varphi(0)\equiv\varphi_{c}>0$ is related to the
central matter density $\lambda_{c}$ and is discussed below ($\varphi(x)$ is a
decreasing function of $x$). For the WD the function $g(\varphi)$ is rather
close to unity.
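A minimal Python sketch of this integration (our illustration; g and n_tilde stand for the EoS-derived functions $g(\varphi)$ and $\tilde{n}_{e}(\varphi;\tau)$, which must be supplied externally, e.g. through the Fermi - Dirac integrals) is:

```python
import numpy as np

def solve_wd(g, n_tilde, phi_c, h=1e-4, eps=1e-4):
    """RK4 integration of the dimensionless set (74)-(75):
       dphi/dx = -(5/3) m g(phi)/x^2,   dm/dx = 3 x^2 n_tilde(phi),
    stopped at the first zero of phi, which defines the WD radius x_0."""
    def rhs(x, s):
        phi, m = s
        return np.array([-(5.0/3.0)*m*g(phi)/x**2, 3.0*x**2*n_tilde(phi)])
    x = eps
    s = np.array([phi_c, n_tilde(phi_c)*eps**3])  # m ~ n_tilde(phi_c) x^3 at the centre
    while s[0] > 0.0:
        k1 = rhs(x, s)
        k2 = rhs(x + 0.5*h, s + 0.5*h*k1)
        k3 = rhs(x + 0.5*h, s + 0.5*h*k2)
        k4 = rhs(x + h, s + h*k3)
        s = s + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        x += h
    return x, s[1]        # dimensionless radius x_0 and mass m(x_0)
```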
The set of DE formulated above has several advantages which we are going to
discuss briefly: a) there is a smooth limit of $T\rightarrow 0$. In this limit
$\varphi\rightarrow\tilde{\varepsilon}_{F}=\sqrt{1+x^{2}_{r}}-1$, where
$x_{r}$ is a dimensionless electron Fermi momentum. Further, for the free
electron Fermi gas at $T=0$ it holds:
$\displaystyle\frac{\partial P_{e}}{\partial E_{F}}=n_{e}\ \Rightarrow\
\frac{\partial\tilde{P}}{\partial\tilde{\varepsilon}_{F}}=\tilde{n}_{e}\ \Rightarrow\
g(\varphi)\xrightarrow[T\rightarrow 0]{}1\ .$
Thus, our set of coupled DEs for $T\rightarrow 0$ smoothly approaches a set of
$\displaystyle\frac{d\,\tilde{\varepsilon}_{F}(x)}{d\,x}$ $\displaystyle=$
$\displaystyle-\frac{5}{3}\,\frac{\tilde{m}(x)}{x^{2}}\ ,$
$\displaystyle\frac{d\,\tilde{m}(x)}{d\,x}$ $\displaystyle=$ $\displaystyle
3\,x^{2}\,\tilde{n}_{e}(x)=3\,x^{2}\,x^{3}_{r}\ ,$
which is equivalent to DEs considered in JTPELP . This makes comparisons of
the finite temperature solutions to $T=0$ ones very transparent.
b) A numerical solution of eqs. (74-75) is straightforward: once one specifies
the initial conditions, the equations (complemented by equations for
$\tilde{P}(\chi_{e},\tau)$ and $\tilde{n}_{e}(\chi_{e},\tau)$) are solved step
by step by appropriate numerical procedure (e.g. 4th order Runge-Kutta) and
there is no need to solve numerically at each step some transcendental equation
(cf. the procedure described in the paragraph following eq. (28)). Moreover, at
some point $x_{0}$ the numerical value of the decreasing $\varphi(x)$ crosses
zero:
$\displaystyle\varphi(x_{0})=0\ \rightarrow\ r_{0}=a_{s}\,x_{0}\ .$
As in the $T=0$ limit, we identify $r_{0}=a_{s}\,x_{0}$ with the radius of the
WD. The alternative method used in the main text does not have such a clear
criterion for the radius.
c) An initial condition for $\varphi(0)=\varphi_{c}$ is expressed from the
central matter density $\lambda_{c}$ (see eq. (14)). Let us express
$\displaystyle\lambda_{c}=\mu_{u}\,\rho_{0}\,\tilde{n}_{c}=\mu_{u}\,\rho_{0}\,x^{3}_{rc}=\mu_{u}\,\rho_{0}\,\tilde{n}_{c}(\chi_{ec},\tau)\
.$ (76)
One can invert this equation numerically to determine $\chi_{ec}$ by a
procedure mentioned below eq. (28) (and then to get
$\varphi_{c}=\tau\,\chi_{ec}$). Let us emphasize that in this formulation one
would have to solve the transcendental equation just once, for the central
initial value. But even this is actually not necessary. At the center of the
WD the density is rather high and $T<<T_{F}$, hence one can use the Sommerfeld
expansion, from which it is possible to get an algebraic equation for
$\varphi_{c}$ in terms of $\varphi_{0c}=x^{2}_{rc}$ and temperature. We
checked that $\varphi_{c}$ obtained this way reproduces very accurately the
value obtained by solving eq. (76).
In the last part of this appendix we briefly present equations for the WD radius
and mass in the Lane-Emden approximation. There are well-known textbook
equations (see e.g. SLSSAT ) in terms of the central mass density, or one can
derive very convenient representations for the dimensionless radii and masses in
terms of the central fractional electron Fermi momentum $x_{rc}$. To
cross-check the numbers in our Table 1 we used both versions, so we list below
for reference the corresponding equations and numerical values.
Recall that the radius $R_{0}$ and the mass of the object described by the
Lane-Emden equation (8) are defined by the first zero of its solution,
$\theta(\xi_{1})=0$, and by its derivative through
$f(\xi_{1})=-\xi^{2}_{1}\,\theta^{\prime}(\xi_{1})$:
$\displaystyle R_{0}=a\,\xi_{1}\ $ , $\displaystyle\quad
M_{0}=4\pi\,f(\xi_{1})\,a^{3}\,\lambda_{c}$
where according to (9) the LE scaling $a$ is
$\displaystyle
a=\sqrt{\frac{n+1}{4\pi}\,\frac{K}{G}\,\lambda_{c}^{\frac{1}{n}-1}}=\sqrt{\tilde{K}}\,\lambda_{c}^{\frac{1-n}{2n}}\ .$
For the non-relativistic case with
$n=\frac{3}{2}\ ,\quad\xi_{1}^{nr}\simeq 3.65375\ ,\quad f(\xi_{1}^{nr})\simeq
2.71406\ ,$
we get:
$\displaystyle K_{nr}$ $\displaystyle=$ $\displaystyle\frac{\hbar
c\,\lambda_{e}}{15\pi^{2}}\,\left(\frac{3\pi^{2}}{\mu_{N}\,m_{u}}\right)^{5/3}\simeq
3.16119\cdot 10^{12}\,\frac{\rm cm^{4}}{\rm g^{2/3}\cdot
sec^{2}}\simeq\frac{1.00361\cdot 10^{13}}{\mu_{N}^{5/3}}\,\frac{\rm
cm^{4}}{\rm g^{2/3}\cdot sec^{2}}\ ,$
which agrees with eq. (2.3.22) of SLSSAT . Then, introducing
$\displaystyle\tilde{K}_{nr}$ $\displaystyle=$
$\displaystyle\frac{5}{8\pi}\,\frac{K_{nr}}{G}\simeq 9.42283\cdot
10^{18}\,{\rm g^{1/3}\cdot cm}\ ,$
one gets (substituting $\lambda_{c}[{\rm g/cm^{3}}]$):
$\displaystyle R_{0}[{\rm km}]$ $\displaystyle=$ $\displaystyle
10^{-5}\cdot\frac{\sqrt{\tilde{K}_{nr}}\,\xi_{1}^{nr}}{\lambda_{c}^{1/6}}\simeq\frac{1.12158\cdot
10^{5}}{\lambda_{c}^{1/6}}=1.12158\cdot
10^{4}\,\left(\frac{\mu_{N}}{2}\right)^{-5/6}\,\left(\frac{\lambda_{c}}{10^{6}}\right)^{-1/6}\
,$
where the last equation agrees with eq. (3.3.13) of SLSSAT . For the WD mass
it follows:
$\displaystyle M$ $\displaystyle=$ $\displaystyle
4\pi\,\tilde{K}_{nr}^{3/2}\,f(\xi^{nr}_{1})\,\sqrt{\lambda_{c}}\simeq
9.86510\cdot 10^{29}\,\sqrt{\lambda_{c}}$ $\displaystyle=$ $\displaystyle
4.95993\cdot
10^{-4}\,M_{\odot}\,\sqrt{\lambda_{c}}=0.495993\,\left(\frac{\mu_{N}}{2}\right)^{-5/2}\,M_{\odot}\,\left(\frac{\lambda_{c}}{10^{6}}\right)^{1/2}\
,$
where we use $M_{\odot}\simeq 1.98896\cdot 10^{33}\,$g and the result fairly
agrees with eq. (3.3.14) of SLSSAT .
Alternatively, we can express $x_{r}$ from (34) and (35):
$\displaystyle x_{r}\,=\,\frac{\hbar
c}{m_{e}c^{2}}\,\bigg{(}\frac{3\pi^{2}\rho}{\mu_{u}}\bigg{)}^{1/3}\,,$ (77)
and write in terms of $x_{r}$ the reduced radius $r_{0}=R_{0}/a_{s}$ and
mass $m_{0}=M/M_{s}$, where the scaling factors $a_{s}$ and $M_{s}$ are
defined in eqs. (70,71). After some algebra one gets:
$\displaystyle r_{0}$ $\displaystyle=$
$\displaystyle\frac{R_{0}}{R_{s}}=\frac{\xi_{1}^{nr}}{\sqrt{10\,x_{rc}}}\simeq\frac{1.15542}{\sqrt{x_{rc}}}\
,$ (78) $\displaystyle m_{0}$ $\displaystyle\equiv$
$\displaystyle\frac{M}{M_{s}}=\frac{3\,f(\xi_{1}^{nr})}{5\,\sqrt{40}}\,x^{3/2}_{rc}\simeq
0.257478\,x^{3/2}_{rc}\ ,$ (79) $\displaystyle\frac{M}{M_{\odot}}$
$\displaystyle=$ $\displaystyle\frac{M_{s}}{M_{\odot}}\,\cdot
m_{0}=\frac{\sqrt{6\pi}}{8}\,f(\xi_{1}^{nr})\,\frac{m^{3}_{\rm
pl}}{(\mu_{N}m_{u})^{2}\,M_{\odot}}\,x^{3/2}_{rc}$ (80) $\displaystyle\simeq$
$\displaystyle 2.68849\,m_{0}\simeq 0.692227\,x^{3/2}_{rc}\ .$
These equations are convenient, since $r_{0}$ and $m_{0}$ depend only on
$x_{rc}$ and are of natural size. The numbers in Table 1 were calculated in
both ways, yielding identical results.
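To illustrate that both routes indeed agree, here is a short numerical cross-check of the mass (a sketch under the assumption $\mu_{N}=2$, which is built into the quoted numerical coefficients; the values of $\hbar/(m_{e}c)$ and $m_{u}$ are standard physical constants, not taken from the text):

```python
import numpy as np

lam_c = 1.0e6                 # central density, g/cm^3
mu_N  = 2.0                   # mean molecular weight per electron (assumed)

# route 1: textbook Lane-Emden form (see the M equation above)
M1 = 0.495993*(mu_N/2.0)**(-2.5)*np.sqrt(lam_c/1.0e6)     # in M_sun

# route 2: via the central relativity parameter x_rc of eq. (77)
lam_e = 3.8616e-11            # hbar/(m_e c), reduced Compton wavelength, cm
m_u   = 1.66054e-24           # atomic mass unit, g
x_rc  = lam_e*(3.0*np.pi**2*lam_c/(mu_N*m_u))**(1.0/3.0)
M2    = 0.692227*x_rc**1.5    # eq. (80), in M_sun

print(M1, M2)                 # both ~= 0.496
```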
For the ultra-relativistic case with
$n=3\ ,\quad\xi_{1}^{ur}\simeq 6.89685\ ,\quad f(\xi_{1}^{ur})\simeq 2.01824\
,$
we get:
$\displaystyle K_{ur}$ $\displaystyle=$ $\displaystyle\frac{\hbar
c}{12\pi^{2}}\,\left(\frac{3\pi^{2}}{\mu_{u}}\right)^{4/3}\simeq 4.93488\cdot
10^{14}\,\frac{\rm cm^{3}}{\rm g^{1/3}\cdot sec^{2}}\simeq\frac{1.24351\cdot
10^{15}}{\mu_{N}^{4/3}}\,\frac{\rm cm^{3}}{\rm g^{1/3}\cdot sec^{2}}\ ,$
which agrees with eq. (2.3.23) of SLSSAT . We again introduce the auxiliary
constant:
$\displaystyle\tilde{K}_{ur}$ $\displaystyle=$
$\displaystyle\frac{K_{ur}}{\pi\,G}\equiv\frac{(3\pi^{2})^{1/3}}{4\pi}\,\frac{m^{2}_{Pl}}{(\mu_{N}\,m_{u})^{4/3}}\simeq
2.35357\,\cdot 10^{21}\,{\rm g^{2/3}}\ .$
Then the radius in km is:
$\displaystyle R_{0}[{\rm km}]$ $\displaystyle=$ $\displaystyle
10^{-5}\cdot\frac{\sqrt{\tilde{K}_{ur}}\,\xi_{1}^{rel}}{\lambda_{c}^{1/3}}\simeq\frac{33.4591}{\lambda_{c}^{1/3}}=0.334591\,\cdot\left(\frac{\mu_{N}}{2}\right)^{-2/3}\,\left(\frac{\lambda_{c}}{10^{6}}\right)^{-1/3}\
,$
which is consistent with eq. (3.3.16) of SLSSAT . In the ultra-relativistic
limit the mass is independent of $\lambda_{c}$; it is known as the
Chandrasekhar limit:
$\displaystyle M$ $\displaystyle=$ $\displaystyle
4\pi\,\tilde{K}_{ur}^{3/2}\,f(\xi^{rel}_{1})\simeq 2.89584\cdot 10^{33}\,{\rm
g}\simeq
1.45595\,M_{\odot}=1.45595\,\left(\frac{\mu_{N}}{2}\right)^{-2}\,M_{\odot}\ .$
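The quoted value is easy to reproduce from the constants listed above; a two-line check:

```python
import numpy as np

K_ur = 2.35357e21                       # tilde{K}_ur in g^{2/3}, from above
M = 4.0*np.pi*K_ur**1.5*2.01824         # f(xi_1^{ur}) ~= 2.01824
print(M, M/1.98896e33)                  # ~2.896e33 g, ~1.456 M_sun
```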
Alternatively, we can calculate $R_{0}$ and $M/M_{\odot}$ in terms of the
dimensionless $x_{rc}$:
$\displaystyle r_{0}$ $\displaystyle=$
$\displaystyle\frac{\xi_{1}^{rel}}{\sqrt{5}\,x_{rc}}\simeq\frac{3.084370}{x_{rc}}\
,\quad R_{0}=R_{s}\cdot r_{0}\ ,$ (81) $\displaystyle m_{0}$
$\displaystyle\equiv$
$\displaystyle\frac{M}{M_{s}}=\frac{3\,f(\xi_{1}^{rel})}{5\,\sqrt{5}}\simeq
0.541550\ ,$ (82)
from which one gets the same result as above:
$\displaystyle\frac{M}{M_{\odot}}$ $\displaystyle=$
$\displaystyle\frac{M_{s}}{M_{\odot}}\,\cdot
m_{0}=\frac{\sqrt{3\pi}}{2}\,f(\xi_{1}^{ur})\,\frac{m^{3}_{\rm
pl}}{(\mu_{N}m_{u})^{2}\,M_{\odot}}\simeq 1.45595\ .$ (83)
## Appendix B Calculations of functions $f_{1}(y)$ and $f_{2}(y)$
In accordance with eq. (23) of PC3 , the derivative of the electron pressure
$P^{(e)}_{id}$ with respect to $y$ is
$\frac{\partial P^{(e)}_{id}}{\partial y}\,=\,\lambda_{c}\,\frac{\partial
P^{(e)}_{id}}{\partial\rho}\,=\,\lambda_{c}\,\frac{n_{\mathrm{e}}}{\rho}\,\bigg{(}\frac{\partial
P^{(e)}_{id}}{\partial\chi_{\mathrm{e}}}\bigg{)}_{T}\bigg{/}\bigg{(}\frac{\partial
n_{\mathrm{e}}}{\partial\chi_{\mathrm{e}}}\bigg{)}_{T}\,,$ (84)
and, similarly, the second derivative is
$\frac{\partial^{2}P^{(e)}_{id}}{\partial
y^{2}}\,=\,\lambda_{c}^{2}\,\frac{n_{\mathrm{e}}^{2}}{\rho^{2}}\,\bigg{[}\,\bigg{(}\frac{\partial^{2}P^{(e)}_{id}}{\partial\chi_{\mathrm{e}}^{2}}\bigg{)}\,-\,\bigg{(}\frac{\partial
P^{(e)}_{id}}{\partial\chi_{\mathrm{e}}}\bigg{)}\,\bigg{(}\frac{\partial^{2}n_{\mathrm{e}}}{\partial\chi_{\mathrm{e}}^{2}}\bigg{)}\,\bigg{/}\,\bigg{(}\frac{\partial
n_{\mathrm{e}}}{\partial\chi_{\mathrm{e}}}\bigg{)}\,\bigg{]}_{T}\,\bigg{/}\bigg{(}\frac{\partial
n_{\mathrm{e}}}{\partial\chi_{\mathrm{e}}}\bigg{)}^{2}_{T}\,.$ (85)
So one should calculate $n_{\mathrm{e}}$ and the derivatives of $P^{(e)}_{id}$
and $n_{\mathrm{e}}$ with respect to $\chi_{\mathrm{e}}$ in terms of the
Fermi-Dirac integrals $I_{\nu}(\chi_{\mathrm{e}}\,,\,\tau)$. Identifying
$\displaystyle I_{k+1/2}(\chi_{\mathrm{e}}\,,\,\tau)\,$ $\displaystyle\equiv$
$\displaystyle\,Wk\,,$ (86) $\displaystyle\frac{\partial
I_{k+1/2}(\chi_{\mathrm{e}}\,,\,\tau)}{\partial\chi_{\mathrm{e}}}\,$
$\displaystyle\equiv$ $\displaystyle\,WkDX\,,$ (87)
$\displaystyle\frac{\partial^{2}I_{k+1/2}(\chi_{\mathrm{e}}\,,\,\tau)}{\partial\chi_{\mathrm{e}}^{2}}\,$
$\displaystyle\equiv$ $\displaystyle\,WkDXX\,,$ (88)
one can write $n_{\mathrm{e}}$ and the derivatives of $P^{(e)}_{id}$ and
$n_{\mathrm{e}}$ with respect to $\chi_{\mathrm{e}}$ in terms of $Wk$, $WkDX$
and $WkDXX$. We calculated these quantities using the program BLIN9 PC3 . We then have
$\displaystyle n_{\mathrm{e}}\,$ $\displaystyle=$
$\displaystyle\,\mathrm{c_{n}}\,[\,W0+\,\tau\,W1\,]\,\equiv\,\mathrm{c_{n}}\,Z_{5}\,,$
(89) $\displaystyle\frac{\partial
n_{\mathrm{e}}}{\partial\chi_{\mathrm{e}}}\,$ $\displaystyle=$
$\displaystyle\,\mathrm{c_{n}}\,[\,W0DX+\,\tau\,W1DX\,]\,\equiv\,\mathrm{c_{n}}\,Z_{3}\,,$
(90) $\displaystyle\frac{\partial P^{(e)}_{id}}{\partial\chi_{\mathrm{e}}}\,$
$\displaystyle=$
$\displaystyle\,\mathrm{c_{p}}\,[\,W1DX+\,\tau/2\,W2DX\,]\,\equiv\,\mathrm{c_{p}}\,Z_{4}\,,$
(91)
$\displaystyle\frac{\partial^{2}n_{\mathrm{e}}}{\partial\chi_{\mathrm{e}}^{2}}\,$
$\displaystyle=$
$\displaystyle\,\mathrm{c_{n}}\,[\,W0DXX+\,\tau\,W1DXX\,]\,\equiv\,\mathrm{c_{n}}\,Z_{2}\,,$
(92)
$\displaystyle\frac{\partial^{2}P^{(e)}_{id}}{\partial\chi_{\mathrm{e}}^{2}}\,$
$\displaystyle=$
$\displaystyle\,\mathrm{c_{p}}\,[\,W1DXX+\,\tau/2\,W2DXX\,]\,\equiv\,\mathrm{c_{p}}\,Z_{1}\,.$
(93)
In terms of $Z_{i}$ we obtain eq. (84) in the form
$\frac{\partial P^{(e)}_{id}}{\partial
y}\,=\,\frac{\mathrm{c}_{p}}{y}\,\frac{Z_{4}\,Z_{5}}{Z_{3}}\,,$ (94)
and eq. (85) becomes
$\frac{\partial^{2}P^{(e)}_{id}}{\partial
y^{2}}\,=\,\frac{\mathrm{c}_{p}}{y^{2}}\,\frac{Z_{5}^{2}}{Z_{3}^{3}}\,(Z_{1}\,Z_{3}-Z_{2}\,Z_{4})\,,$
(95)
and finally,
$f_{1}(y)\,=\,\frac{1}{y}\,\bigg{[}\frac{Z_{5}}{Z_{3}^{2}\,Z_{4}}\,(Z_{1}\,Z_{3}-Z_{2}\,Z_{4})\,-\,1\,\bigg{]}\,,\,\,\,f_{2}(y)\,=\,\frac{\mathrm{c}_{p}}{y}\,\frac{Z_{4}\,Z_{5}}{Z_{3}}\,.$
(96)
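Eqs. (89)-(96) translate directly into code. The sketch below assumes the Fermi-Dirac integrals $Wk$ and their $\chi_{\mathrm{e}}$-derivatives have already been computed (e.g. by BLIN9, whose interface we do not reproduce); the argument names are ours:

```python
def f1_f2(y, W0, W1, W0DX, W1DX, W2DX, W0DXX, W1DXX, W2DXX, tau, c_p):
    """Evaluate f1(y) and f2(y) of eq. (96) from precomputed
    Fermi-Dirac integrals and their chi_e-derivatives."""
    Z5 = W0 + tau*W1              # n_e / c_n,            eq. (89)
    Z3 = W0DX + tau*W1DX          # dn_e/dchi / c_n,      eq. (90)
    Z4 = W1DX + 0.5*tau*W2DX      # dP/dchi / c_p,        eq. (91)
    Z2 = W0DXX + tau*W1DXX        # d2n_e/dchi2 / c_n,    eq. (92)
    Z1 = W1DXX + 0.5*tau*W2DXX    # d2P/dchi2 / c_p,      eq. (93)
    f1 = (Z5*(Z1*Z3 - Z2*Z4)/(Z3**2*Z4) - 1.0)/y
    f2 = c_p*Z4*Z5/(Z3*y)
    return f1, f2
```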
## Appendix C The Sommerfeld expansion
In this Appendix we briefly describe how to expand the thermodynamical
quantities of free electrons into series in powers of
$k_{\mathrm{B}}T/\tilde{E}_{F}$, where $\tilde{E}_{F}=\mu_{\mathrm{e}}(T=0)$
is the Fermi energy with the rest mass contribution subtracted. We start
from the simpler non-relativistic dynamics and later extend the results to the
general case.
In the non-relativistic approximation we can write the electron density,
pressure and energy density in a conveniently normalized form:
$\displaystyle\left(\frac{n_{\mathrm{e}}(T)}{\rho_{0}}\right)_{nr}$
$\displaystyle=$
$\displaystyle 3\sqrt{2}\,\tau^{3/2}\,I_{1/2}(\chi_{\mathrm{e}})\ ,$ (97)
$\displaystyle\left(\frac{P(T)}{\rho_{0}\,m_{e}c^{2}}\right)_{nr}$
$\displaystyle=$ $\displaystyle
2\sqrt{2}\,\tau^{5/2}\,I_{3/2}(\chi_{\mathrm{e}})\ ,$ (98)
$\displaystyle\left(\frac{\tilde{\cal E}(T)}{{\cal E}_{0}}\right)_{nr}$
$\displaystyle=$
$\displaystyle\sqrt{2}\,\tau^{5/2}\,I_{3/2}(\chi_{\mathrm{e}})=\frac{1}{2}\left(\frac{P(T)}{\rho_{0}\,m_{e}c^{2}}\right)_{nr}\
,\quad{\cal E}_{0}=3\rho_{0}\,m_{e}c^{2}\ ,$ (99)
where the non-relativistic Fermi-Dirac integrals are defined as
$\displaystyle
I_{\nu}(\chi)=\int\limits_{0}^{\infty}\,\frac{u^{\nu}}{e^{u-\chi}+1}\,du\ .$
(100)
Substituting into the first line of (33) one gets the reduced entropy in the
non-relativistic limit:
$\displaystyle s_{e,nr}$ $\displaystyle=$
$\displaystyle\frac{1}{k_{\mathrm{B}}T}\,\left(\frac{5\,m_{e}c^{2}\,\tau\,I_{3/2}(\chi_{\mathrm{e}})}{3\,I_{1/2}(\chi_{\mathrm{e}})}-\mu_{\mathrm{e}}\right)=\frac{5\,I_{3/2}(\chi_{\mathrm{e}})}{3\,I_{1/2}(\chi_{\mathrm{e}})}-\chi_{\mathrm{e}}\
.$
This non-relativistic limit follows also from the general results (33) making
use of the relation:
$I_{\nu}(\chi,\tau)\ \xrightarrow[\tau\rightarrow 0]{}\ I_{\nu}(\chi)\ .$
The opposite, ultra-relativistic limit is obtained from
$\displaystyle I_{\nu}(\chi,\tau)\ \xrightarrow[\tau\rightarrow\infty]{}\
\sqrt{\frac{\tau}{2}}\,I_{\nu+1/2}(\chi)\ .$
The number density, pressure and energy density in the ultra-relativistic
limit are:
$\displaystyle\left(\frac{n_{\mathrm{e}}(T)}{\rho_{0}}\right)_{ur}$
$\displaystyle=$ $\displaystyle 3\tau^{3}\,I_{2}(\chi_{\mathrm{e}})\ ,$ (101)
$\displaystyle\left(\frac{P(T)}{\rho_{0}\,m_{e}c^{2}}\right)_{ur}$
$\displaystyle=$ $\displaystyle\left(\frac{\tilde{\cal E}(T)}{{\cal
E}_{0}}\right)_{ur}=\tau^{4}\,I_{3}(\chi_{\mathrm{e}})\ .$ (102)
The equations above define the densities of the electron number, pressure and
kinetic energy as functions of $\chi_{\mathrm{e}}=\beta\,\mu_{\mathrm{e}}$ and
$\tau=k_{\mathrm{B}}T/m_{e}c^{2}$, i.e., as functions of the chemical potential
$\mu_{\mathrm{e}}$ (not yet determined) and of the temperature $T$ (or of
$\beta=1/k_{\mathrm{B}}T$). According to ref. PC3 the chemical potential
$\mu_{\mathrm{e}}(V,T)$ is obtained by (numerically) inverting the equation for
the density (24) (in its exact or non/ultra-relativistic forms). Assuming a
fixed number of electrons $N_{\mathrm{e}}$ (meaning that the electron density
$n_{\mathrm{e}}=N_{\mathrm{e}}/V$ explicitly depends only on the volume $V$), one
gets the chemical potential from the condition
$n_{\mathrm{e}}(T)=n_{\mathrm{e}}(0)=N_{\mathrm{e}}/V$, where
$n_{\mathrm{e}}(0)$ is a known function of the Fermi energy. In the end we obtain
the chemical potential as a function of the temperature $T$ and of the Fermi energy,
which is its value at zero temperature:
$\displaystyle\mu_{0}$ $\displaystyle\equiv$
$\displaystyle\mu_{\mathrm{e}}(T=0)=\tilde{E}_{F}=m_{e}\,c^{2}\,\tilde{\epsilon}_{F}\
,$ (103)
where the chemical potential and energies do not include the rest mass
contributions.
For the simple non-relativistic case, for which
$\mu_{0}/(m_{e}\,c^{2})=\tilde{E}_{Fnr}/(m_{e}\,c^{2})=\tilde{\epsilon}_{Fnr}=x_{r}^{2}/2$
with $x_{r}=p_{F}/(m_{e}c)$, the l.h.s. of (97) at $T=0$ reads:
$\displaystyle\left(\frac{n_{e}(0)}{\rho_{0}}\right)_{nr}$ $\displaystyle=$
$\displaystyle
x^{3}_{r}=\left(2\,\tilde{\epsilon}_{Fnr}\right)^{3/2}=\left(2\,\frac{\mu_{0}}{m_{e}c^{2}}\right)^{3/2}=2\,\sqrt{2}\,\left(\frac{\mu_{0}}{m_{e}c^{2}}\right)^{3/2}\
.$
Equating this to the r.h.s. and substituting for $\tau$ yields:
$\displaystyle
2\,\sqrt{2}\,\left(\frac{\mu_{0}}{m_{e}c^{2}}\right)^{3/2}=3\,\sqrt{2}\,\left(\frac{k_{\mathrm{B}}T}{m_{e}c^{2}}\right)^{3/2}\,I_{1/2}(\chi_{\mathrm{e}})\
.$
which simplifies to:
$\displaystyle
I_{1/2}(\chi_{\mathrm{e}})=\frac{2}{3}\,\left(\frac{\tilde{\mu}_{0}}{k_{\mathrm{B}}T}\right)^{3/2}=\frac{2}{3}\,\left(\frac{\tilde{E}_{Fnr}}{k_{\mathrm{B}}T}\right)^{3/2}=\frac{2}{3}\,\left(\frac{T_{F}}{T}\right)^{3/2}\
.$ (104)
Denoting by $X_{1/2}$ the inverse function to $I_{1/2}(\chi_{\mathrm{e}})$ one
gets a solution for $\chi_{\mathrm{e}}$:
$\displaystyle\chi_{\mathrm{e}}$ $\displaystyle\equiv$
$\displaystyle\frac{\mu}{k_{\mathrm{B}}T}=X_{1/2}\left(\frac{2}{3}\,\left(\frac{T_{F}}{T}\right)^{3/2}\,\right)\
,$
which is just eq. (17) of ref. CP in our notation. In general, a similar
connection is obtained from $n_{e}$ given by (24), and in the ultra-relativistic
limit from (101).
The relations above are valid for arbitrary temperature; we now turn to
a low-temperature expansion. The non-relativistic Fermi-Dirac integrals (100)
can be approximated for small temperatures (i.e. large inverse temperatures
$\beta=1/k_{B}T$ and hence also large $\chi_{\mathrm{e}}=\beta\,\mu_{\mathrm{e}}$)
by a power series:
$\displaystyle I_{\nu}(\chi)$ $\displaystyle\simeq$
$\displaystyle\frac{\chi^{\nu+1}}{\nu+1}\,\Big{(}1+\frac{\pi^{2}}{6}\,\frac{(\nu+1)\nu}{\chi^{2}}+\frac{7\,\pi^{4}}{360}\,\frac{(\nu+1)\nu(\nu-1)(\nu-2)}{\chi^{4}}+\dots\Big{)}\
.$ (105)
For the non-relativistic dynamics one needs $I_{\nu}(\chi)$ with $\nu=1/2$ and
$\nu=3/2$:
$\displaystyle I_{1/2}(\chi)$ $\displaystyle\simeq$
$\displaystyle\frac{2}{3}\,\chi^{3/2}\,\Big{(}1+\frac{\pi^{2}}{8\,\chi^{2}}+\frac{7\,\pi^{4}}{640\,\chi^{4}}+\dots\Big{)}\
,$ (106) $\displaystyle I_{3/2}(\chi)$ $\displaystyle\simeq$
$\displaystyle\frac{2}{5}\,\chi^{5/2}\,\Big{(}1+\frac{5\,\pi^{2}}{8\,\chi^{2}}-\frac{7\,\pi^{4}}{384\,\chi^{4}}+\dots\Big{)}\
.$ (107)
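The accuracy of these truncated expansions is easy to gauge by comparing with a direct quadrature of (100); a minimal sketch:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit      # expit(chi - u) = 1/(e^{u-chi} + 1), overflow-safe

def I_direct(nu, chi):
    return quad(lambda u: u**nu*expit(chi - u), 0.0, np.inf)[0]

def I12_somm(chi):                   # eq. (106)
    return 2.0/3.0*chi**1.5*(1 + np.pi**2/(8*chi**2) + 7*np.pi**4/(640*chi**4))

def I32_somm(chi):                   # eq. (107)
    return 0.4*chi**2.5*(1 + 5*np.pi**2/(8*chi**2) - 7*np.pi**4/(384*chi**4))

for chi in (5.0, 20.0):
    print(chi, I_direct(0.5, chi)/I12_somm(chi), I_direct(1.5, chi)/I32_somm(chi))
# the ratios approach 1 rapidly with growing chi
```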
Substituting these relations into (97)-(99) and using
$\tau\,\chi_{\mathrm{e}}=\mu_{\mathrm{e}}/(m_{e}c^{2})$ we get:
$\displaystyle\frac{1}{3\sqrt{2}}\left(\frac{n_{\mathrm{e}}(T)}{\rho_{0}}\right)_{nr}$
$\displaystyle=$
$\displaystyle\tau^{3/2}\,I_{1/2}(\chi_{\mathrm{e}})\simeq\frac{2}{3}\left(\frac{\mu_{\mathrm{e}}}{m_{e}c^{2}}\right)^{3/2}\,\left(1+\frac{\pi^{2}}{8\chi_{\mathrm{e}}^{2}}+\frac{7\pi^{4}}{640\chi_{\mathrm{e}}^{4}}+\dots\right)\
,$ (108)
$\displaystyle\frac{1}{2^{3/2}}\left(\frac{P(T)}{\rho_{0}\,m_{e}c^{2}}\right)_{nr}$
$\displaystyle=$ $\displaystyle\frac{1}{\sqrt{2}}\left(\frac{\tilde{\cal
E}(T)}{{\cal E}_{0}}\right)_{nr}=\tau^{5/2}\,I_{3/2}(\chi_{\mathrm{e}})$ (109)
$\displaystyle\simeq$
$\displaystyle\frac{2}{5}\left(\frac{\mu_{\mathrm{e}}}{m_{e}c^{2}}\right)^{5/2}\,\left(1+\frac{5\pi^{2}}{8\chi_{\mathrm{e}}^{2}}-\frac{7\pi^{4}}{384\chi_{\mathrm{e}}^{4}}+\dots\right)\
.$
These are formal power series in powers of the so far unknown
$1/\chi_{\mathrm{e}}^{2}\,$. Recall that $\chi_{\mathrm{e}}$ depends on the
temperature and on the chemical potential. As discussed above,
$\mu_{\mathrm{e}}$ is determined from the condition
$n_{\mathrm{e}}(T)=n_{\mathrm{e}}(0)$ (but now with $n_{\mathrm{e}}(T)$
decomposed into the power series above). Substituting
$1/\chi_{\mathrm{e}}^{2}=(k_{\mathrm{B}}T)^{2}/\mu_{\mathrm{e}}^{2}$ (and using
$\mu_{0}=\tilde{E}_{Fnr}$ for the non-relativistic Fermi energy) leads to:
$\displaystyle\mu_{0}$ $\displaystyle=$
$\displaystyle\mu_{\mathrm{e}}\,\left(1+\frac{1}{8}\frac{(\pi
k_{\mathrm{B}}T)^{2}}{\mu_{\mathrm{e}}^{2}}+\frac{7}{640}\frac{(\pi
k_{\mathrm{B}}T)^{4}}{\mu_{\mathrm{e}}^{4}}+\dots\right)^{2/3}\ .$ (110)
This relation can be perturbatively inverted by assuming a power series for
$\mu_{\mathrm{e}}$ (in powers of $(k_{\mathrm{B}}T/\mu_{0})^{2}$):
$\displaystyle\mu_{\mathrm{e}}$ $\displaystyle=$
$\displaystyle\mu_{0}\,\left(1-\frac{\pi^{2}}{12}\frac{(k_{\mathrm{B}}T)^{2}}{\mu_{0}^{2}}-\frac{\pi^{4}}{80}\frac{(k_{\mathrm{B}}T)^{4}}{\mu_{0}^{4}}+\dots\right)\
.$ (111)
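That (111) indeed inverts (110) up to the stated order is easily verified with computer algebra; a sketch using sympy (the symbol $t$ stands for $k_{\mathrm{B}}T$):

```python
import sympy as sp

t, mu0 = sp.symbols('t mu_0', positive=True)
mu  = mu0*(1 - sp.pi**2/12*(t/mu0)**2 - sp.pi**4/80*(t/mu0)**4)        # eq. (111)
rhs = mu*(1 + sp.Rational(1, 8)*(sp.pi*t/mu)**2
            + sp.Rational(7, 640)*(sp.pi*t/mu)**4)**sp.Rational(2, 3)  # eq. (110)
print(sp.simplify(sp.series(rhs, t, 0, 6).removeO()))  # -> mu_0 (corrections are O(t^6))
```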
Substituting (111) into the r.h.s. of (108) and (109) and making the Taylor
expansion in powers of $(k_{\mathrm{B}}T)^{2}$ yields the following non-
relativistic equations for the observables:
$\displaystyle P_{nr}(T)$ $\displaystyle=$ $\displaystyle
m_{e}c^{2}\,\rho_{0}\
\frac{x^{5}_{r}}{5}\,\left(1+\frac{5\pi^{2}}{12}\frac{(k_{\mathrm{B}}T)^{2}}{\mu^{2}_{0}}-\frac{\pi^{4}}{16}\frac{(k_{\mathrm{B}}T)^{4}}{\mu^{4}_{0}}+\dots\right)\
,$ (112) $\displaystyle\tilde{\cal E}_{nr}(T)$ $\displaystyle=$
$\displaystyle\frac{3}{2}\,P_{nr}(T)\ .$ (113)
These results are consistent with the equations presented in Grei (where a
slightly different notation is used).
For the ultra-relativistic dynamics one proceeds in a similar way. For the sake
of brevity, we cite just the final results (using
$\mu_{0}/(m_{e}c^{2})=\epsilon_{Fur}=x_{r}$):
$\displaystyle\mu_{\mathrm{e}}$ $\displaystyle=$
$\displaystyle\mu_{0}\,\left(1-\frac{\pi^{2}}{3}\frac{(kT)^{2}}{\mu_{0}^{2}}+O(T^{6})\right)\
.$ (114) $\displaystyle n_{e,ur}(T)$ $\displaystyle=$ $\displaystyle
n_{e,ur}(0)=\rho_{0}\,\left(\frac{\mu_{0}}{m_{e}c^{2}}\right)^{3}=\rho_{0}\,\epsilon^{3}_{Fur}=\rho_{0}\,x_{r}^{3}=\frac{N_{e}}{V}\
,$ (115) $\displaystyle P_{ur}(T)$ $\displaystyle=$
$\displaystyle\rho_{0}\,m_{e}c^{2}\,\frac{x^{4}_{r}}{4}\,\left(1+\frac{2\pi^{2}}{3}\frac{(k_{\mathrm{B}}T)^{2}}{\mu^{2}_{0}}-\frac{\pi^{4}}{5}\frac{(k_{\mathrm{B}}T)^{4}}{\mu^{4}_{0}}+\dots\right)\
,$ (116) $\displaystyle\tilde{\cal E}_{ur}(T)$ $\displaystyle=$ $\displaystyle
3\,P_{ur}(T)\ .$ (117)
When the non-relativistic or ultra-relativistic limits cannot be applied, we
start from normalized equations (24-26):
$\displaystyle\frac{n_{\mathrm{e}}(T)}{\rho_{0}}$ $\displaystyle=$
$\displaystyle
3\sqrt{2}\,\tau^{3/2}\,\left[I_{1/2}(\chi_{\mathrm{e}},\tau)+\tau\,I_{3/2}(\chi_{\mathrm{e}},\tau)\right]\
,\quad\rho_{0}=\frac{1}{3\pi^{2}\,\lambda^{3}_{e}}\ ,$ (118)
$\displaystyle\frac{P(T)}{\rho_{0}\,m_{e}c^{2}}$ $\displaystyle=$
$\displaystyle
2\sqrt{2}\,\tau^{5/2}\,\left[I_{3/2}(\chi_{\mathrm{e}},\tau)+\frac{\tau}{2}\,I_{5/2}(\chi_{\mathrm{e}},\tau)\right]\
,$ (119) $\displaystyle\frac{\tilde{\cal E}(T)}{{\cal E}_{0}}$
$\displaystyle=$
$\displaystyle\sqrt{2}\,\tau^{5/2}\,\left[I_{3/2}(\chi_{\mathrm{e}},\tau)+\tau\,I_{5/2}(\chi_{\mathrm{e}},\tau)\right]\
,\quad{\cal E}_{0}=3\rho_{0}\,m_{e}c^{2}\ .$ (120)
Now, we will need the following Sommerfeld decompositions of the generalized
Fermi-Dirac integrals (denoting
$\varphi=\tau\,\chi=\mu_{\mathrm{e}}/(m_{e}c^{2})$):
$\displaystyle I_{1/2}(\chi,\tau)$ $\displaystyle\simeq$
$\displaystyle\frac{1}{\sqrt{2}\,\tau^{3/2}}\,\left\\{\frac{1}{2}\,\left[(1+\varphi)\sqrt{\varphi(2+\varphi)}-{\rm
ln}\left(1+\varphi+\sqrt{\varphi(2+\varphi)}\right)\,\right]+\right.$ (121)
$\displaystyle\hskip
56.9055pt\left.+\frac{(1+\varphi)}{\sqrt{\varphi(2+\varphi)}}\,\frac{\pi^{2}\tau^{2}}{6}+\frac{(1+\varphi)}{[\varphi(2+\varphi)]^{5/2}}\,\frac{7\pi^{4}\tau^{4}}{120}+\dots\right\\}\
,$ $\displaystyle I_{3/2}(\chi,\tau)$ $\displaystyle\simeq$
$\displaystyle\frac{1}{\sqrt{2}\,\tau^{5/2}}\,\left\\{\frac{2\varphi^{2}+\varphi-3}{6}\,\sqrt{\varphi(2+\varphi)}+\frac{1}{2}{\rm
ln}\left(1+\varphi+\sqrt{\varphi(2+\varphi)}\right)\right.$ (122)
$\displaystyle\hskip
56.9055pt\left.+\frac{\sqrt{\varphi}\,(3+2\varphi)}{\sqrt{2+\varphi}}\,\frac{\pi^{2}\tau^{2}}{6}-\frac{1}{\varphi^{3/2}\,(2+\varphi)^{5/2}}\,\frac{7\pi^{4}\tau^{4}}{120}+\dots\right\\}\
,$ $\displaystyle I_{5/2}(\chi,\tau)$ $\displaystyle\simeq$
$\displaystyle\frac{1}{\sqrt{2}\,\tau^{7/2}}\,\left\\{\frac{(6\varphi^{3}+2\varphi^{2}-5\varphi+15)\sqrt{\varphi(2+\varphi)}}{24}\,-\frac{5}{8}{\rm
ln}\left(1+\varphi+\sqrt{\varphi(2+\varphi)}\right)\right.$ (123)
$\displaystyle\hskip
14.22636pt\left.+\frac{\varphi^{2}\,(3\varphi+5)}{\sqrt{\varphi(2+\varphi)}}\,\frac{\pi^{2}\tau^{2}}{6}+\frac{\varphi^{2}(2\varphi^{3}+10\varphi^{2}+15\varphi+5)}{[\varphi(2+\varphi)]^{5/2}}\,\frac{7\pi^{4}\tau^{4}}{120}+\dots\right\\}\
.$
Now we again first find the chemical potential from the condition
$n_{\mathrm{e}}(T)=n_{\mathrm{e}}(0)$. Substituting (121,122) into equation
(118) for $n_{\mathrm{e}}(T)$ yields (the logarithmic terms cancel each
other):
$\displaystyle\frac{n_{\mathrm{e}}(T)}{\rho_{0}}\simeq\left[\varphi(2+\varphi)\right]^{3/2}+\frac{(2\varphi^{2}+4\varphi+1)}{\sqrt{\varphi(2+\varphi)}}\,\frac{\pi^{2}\tau^{2}}{2}+\frac{1}{[\varphi(2+\varphi)]^{5/2}}\,\frac{7\pi^{4}\tau^{4}}{40}+\dots\
.$ (124)
For $T=0$ it holds $\tau=0$ and
$\displaystyle\varphi(0)\equiv\varphi_{0}=\frac{\mu_{0}}{m_{e}c^{2}}=\tilde{\epsilon}_{F}=\epsilon_{F}-1\
,\quad\epsilon_{F}=\sqrt{1+x^{2}_{r}}\ .$
This relation implies:
$\displaystyle\varphi_{0}(2+\varphi_{0})=(\epsilon_{F}-1)(\epsilon_{F}+1)=\epsilon_{F}^{2}-1=x^{2}_{r}\
,$
which reproduces the electron density at zero temperature: from the relation
(124) one gets
$\displaystyle n_{\mathrm{e}}(0)$ $\displaystyle=$
$\displaystyle\rho_{0}\,\left[\varphi_{0}(2+\varphi_{0})\right]^{3/2}=\rho_{0}\,x^{3}_{r}\equiv\rho_{F}\
.$
Using (124) one gets from $n_{\mathrm{e}}(0)=n_{\mathrm{e}}(T)$ an implicit
relation between $\varphi_{0}$ and $\varphi$:
$\displaystyle\varphi_{0}(2+\varphi_{0})=\varphi(2+\varphi)\,\left[1+\frac{(2\varphi^{2}+4\varphi+1)\,B}{2[\varphi(2+\varphi)]^{2}}+\frac{7\,B^{2}}{40[\varphi(2+\varphi)]^{4}}+\dots\right]^{2/3}\
,$ (125)
where $B=\pi^{2}\,\tau^{2}$. To solve this constraint we assume for
$\varphi=\mu_{\mathrm{e}}/(m_{e}c^{2})$ a perturbative expansion of the form:
$\displaystyle\varphi$ $\displaystyle=$
$\displaystyle\varphi_{0}\,\left(1+c_{1}\frac{B}{\varphi^{2}_{0}}+c_{2}\frac{B^{2}}{\varphi^{4}_{0}}+\dots\right)\
.$ (126)
Notice that:
$\displaystyle\frac{B}{\varphi^{2}_{0}}$ $\displaystyle=$
$\displaystyle\frac{\pi^{2}(k_{\mathrm{B}}T)^{2}}{(m_{e}c^{2})^{2}}\,\frac{(m_{e}c^{2})^{2}}{\mu_{0}^{2}}=\frac{\pi^{2}(k_{\mathrm{B}}T)^{2}}{\mu_{0}^{2}}\
,$
hence our Ansatz for $\varphi$ above is identical to the Ansatz used for
$\mu_{\mathrm{e}}$ in previous sections. Substituting (126) into (125), making
the Taylor decomposition in powers of $B$ and requiring that the coefficients
in front of $B^{n},n\geq 1$ are equal to zero, yields the coefficients $c_{i}$
in terms of $\varphi_{0}$. The first two are given by relatively simple
equations:
$\displaystyle c_{1}$ $\displaystyle=$
$\displaystyle-\frac{2\varphi^{2}_{0}+4\varphi_{0}+1}{6(\varphi^{2}_{0}+3\varphi_{0}+2)}=-\frac{2\epsilon^{2}_{F}-1}{6\epsilon_{F}(\epsilon_{F}+1)}\
,$ (127) $\displaystyle c_{2}$ $\displaystyle=$
$\displaystyle-\frac{20\varphi^{4}_{0}+80\varphi^{3}_{0}+141\varphi^{2}_{0}+122\varphi_{0}+36}{360(\varphi^{2}_{0}+3\varphi_{0}+2)^{3}}=-\frac{20\epsilon^{4}_{F}+21\epsilon^{2}_{F}-5}{360\epsilon^{3}_{F}(\epsilon_{F}+1)^{3}}\
.$ (128)
With these $c_{1}$ and $c_{2}$ in (126), the first non-zero coefficient of the
Taylor expansion of (125) (apart from the constant
$\varphi_{0}(2+\varphi_{0})$ term) appears at the power $B^{3}$.
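As a consistency check, the cancellation of the $B$ and $B^{2}$ terms for the quoted $c_{1},c_{2}$ can be confirmed with a few lines of sympy:

```python
import sympy as sp

B, p = sp.symbols('B varphi_0', positive=True)
c1 = -(2*p**2 + 4*p + 1)/(6*(p**2 + 3*p + 2))                                 # eq. (127)
c2 = -(20*p**4 + 80*p**3 + 141*p**2 + 122*p + 36)/(360*(p**2 + 3*p + 2)**3)   # eq. (128)
phi = p*(1 + c1*B/p**2 + c2*B**2/p**4)                                        # eq. (126)
A = phi*(2 + phi)
rhs = A*(1 + (2*phi**2 + 4*phi + 1)*B/(2*A**2)
           + 7*B**2/(40*A**4))**sp.Rational(2, 3)                             # eq. (125)
print(sp.simplify(sp.series(rhs, B, 0, 3).removeO() - p*(2 + p)))             # -> 0
```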
Let us now derive the perturbative series for the pressure and the kinetic
energy. Substituting the decompositions of $I_{3/2}(\chi_{\mathrm{e}},\tau)$
(see 122) and $I_{5/2}(\chi_{\mathrm{e}},\tau)$ (see 123) into (119) and (120)
yields:
$\displaystyle\frac{P(T)}{\rho_{0}\,m_{e}c^{2}}$ $\displaystyle=$
$\displaystyle\frac{(2\varphi^{3}+6\varphi^{2}+\varphi-3)\sqrt{\varphi(2+\varphi)}}{8}+\frac{3}{8}\,{\rm
ln}\left(1+\varphi+\sqrt{\varphi(2+\varphi)}\right)$
$\displaystyle+\frac{(\varphi^{3}+3\varphi^{2}+2\varphi)}{\sqrt{\varphi(2+\varphi)}}\,\frac{\pi^{2}\tau^{2}}{2}+\frac{(2\varphi^{5}+10\varphi^{4}+15\varphi^{3}+5\varphi^{2}-2\varphi)}{[\varphi(2+\varphi)]^{5/2}}\,\frac{7\pi^{4}\tau^{4}}{120}+\dots\
,$ $\displaystyle\frac{\tilde{\cal E}(T)}{{\cal E}_{0}}$ $\displaystyle=$
$\displaystyle\frac{(6\varphi^{3}+10\varphi^{2}-\varphi+3)\sqrt{\varphi(2+\varphi)}}{24}-\frac{1}{8}\,{\rm
ln}\left(1+\varphi+\sqrt{\varphi(2+\varphi)}\right)$
$\displaystyle+\frac{(3\varphi^{3}+7\varphi^{2}+3\varphi)}{\sqrt{\varphi(2+\varphi)}}\,\frac{\pi^{2}\tau^{2}}{6}+\frac{(2\varphi^{5}+10\varphi^{4}+15\varphi^{3}+5\varphi^{2}-\varphi)}{[\varphi(2+\varphi)]^{5/2}}\,\frac{7\pi^{4}\tau^{4}}{120}+\dots\
.$
What remains is to substitute $\varphi$ expressed in terms of $\varphi_{0}$
and powers of $B$ (see eqs. (126)-(128)) into the equations for the pressure (C)
and kinetic energy (C), and to expand into a Taylor series in powers of $B$.
This is not as simple as for the limiting non-relativistic and ultra-
relativistic cases considered above, since the coefficients depend on
$\varphi_{0}$, and also $P(0)$ and ${\cal E}(0)$ are now not just simple powers
of $\varphi_{0}$ and cannot be simply factorized. Nevertheless, with the help
of _Mathematica_ the power expansion can be performed. We carried out the
expansion up to the order $B^{2}$, but the terms $\sim B^{2}$ are lengthy and
cumbersome; hence, for the sake of brevity, we present just the leading
and next-to-leading orders:
$\displaystyle\frac{P(T)}{\rho_{0}\,m_{e}c^{2}}$ $\displaystyle=$
$\displaystyle\frac{P(0)}{\rho_{0}\,m_{e}c^{2}}+\frac{\varphi_{0}(2+\varphi_{0})(\varphi_{0}^{2}+2\varphi_{0}+2)\,B}{6(1+\varphi_{0})\sqrt{\varphi_{0}(2+\varphi_{0})}}$
(131) $\displaystyle=$
$\displaystyle\frac{P(0)}{\rho_{0}\,m_{e}c^{2}}+\frac{x_{r}(\epsilon_{F}^{2}+1)}{6\epsilon_{F}}\,B\
,\quad B=\pi^{2}\,\tau^{2}\ ,$ $\displaystyle\frac{\tilde{\cal E}(T)}{{\cal
E}_{0}}$ $\displaystyle=$ $\displaystyle\frac{\tilde{\cal E}(0)}{{\cal
E}_{0}}+\frac{\varphi_{0}(2+\varphi_{0})(1+\varphi_{0})\,B}{6\sqrt{\varphi_{0}(2+\varphi_{0})}}=\frac{\tilde{\cal
E}(0)}{{\cal E}_{0}}+\frac{x_{r}\,\epsilon_{F}}{6}\,B\ .$ (132)
### C.1 Sommerfeld decomposition of entropy
Recall the equation for the dimensionless reduced entropy of free electrons
(33):
$\displaystyle\hat{s}_{e}$ $\displaystyle\equiv$
$\displaystyle\frac{1}{k_{B}}\,\frac{S_{e}}{N_{e}}\equiv\frac{1}{k_{B}\,T}\,\Sigma\
,\quad\Sigma=\frac{\tilde{\cal E}+P_{e}}{n_{e}}-\mu_{\mathrm{e}}\ ,$ (133)
where it is convenient to single out, for the moment, the factor $\Sigma$. We
re-write this factor in a convenient form:
$\displaystyle\Sigma$ $\displaystyle=$ $\displaystyle\frac{\tilde{\cal
E}+P_{e}}{n_{e}}-\mu_{\mathrm{e}}=\left[\frac{\tilde{\cal
E}(0)+P_{e}(0)}{n_{e}}-\mu_{\mathrm{e}}\right]+\frac{\Delta\tilde{\cal
E}+\Delta P_{e}}{n_{e}}=\Sigma_{1}+\Sigma_{2}\ ,$ (134)
where $\Delta\tilde{\cal E}=\tilde{\cal E}(T)-\tilde{\cal E}(0)$ and $\Delta
P_{e}=P_{e}(T)-P_{e}(0)$. From relations (C,C) at $T=0$ and from
$\varphi(0)\equiv\varphi_{0}=\tilde{\epsilon}_{F}$ one gets:
$\displaystyle\tilde{\cal E}(0)+P_{e}(0)$ $\displaystyle=$ $\displaystyle
m_{e}c^{2}\,\rho_{0}\,x^{3}_{r}\,\tilde{\epsilon}_{F}=m_{e}c^{2}\,n_{e}\,\tilde{\epsilon}_{F}\
\rightarrow\ \frac{\tilde{\cal E}(0)+P_{e}(0)}{n_{e}}=m_{e}c^{2}\,\varphi_{0}\
.$
Making use of $\mu_{\mathrm{e}}=m_{e}c^{2}\,\varphi$ we write the first term
of $\Sigma$ in a compact form:
$\displaystyle\Sigma_{1}$ $\displaystyle=$ $\displaystyle\frac{\tilde{\cal
E}(0)+P_{e}(0)}{n_{e}}-\mu_{\mathrm{e}}=-m_{e}\,c^{2}\,(\varphi-\varphi_{0})\
.$ (135)
The low temperature decomposition of this equation follows from (126):
$\displaystyle\varphi-\varphi_{0}$ $\displaystyle\simeq$ $\displaystyle
c_{1}\,\frac{B}{\varphi_{0}}=-\frac{2\epsilon^{2}_{F}-1}{6\epsilon_{F}\,x^{2}_{r}}\,B\
,$
therefore:
$\displaystyle\Sigma_{1}$ $\displaystyle\simeq$ $\displaystyle
m_{e}\,c^{2}\,\frac{2\epsilon^{2}_{F}-1}{6\epsilon_{F}\,x^{2}_{r}}\,B\ ,\quad
B=\pi^{2}\,\tau^{2}\ .$ (136)
The non-relativistic ($\epsilon_{F}\rightarrow 1$) and ultra-relativistic
($\epsilon_{F}\rightarrow\tilde{\epsilon}_{F}\rightarrow x_{r}$) limits of
this equation read:
$\displaystyle\Sigma_{1,nr}=m_{e}\,c^{2}\,\frac{1}{6\,x^{2}_{r}}\,B\
,\quad\Sigma_{1,ur}=m_{e}\,c^{2}\,\frac{1}{3\,x_{r}}\,B\ .$ (137)
These equations can also be obtained by calculating $\varphi-\varphi_{0}$
directly from (111) and (114).
The second part of $\Sigma$ is obtained from (131,132):
$\displaystyle\Delta\tilde{\cal E}+\Delta P_{e}$ $\displaystyle=$
$\displaystyle
m_{e}c^{2}\,\rho_{0}\,\left(\frac{x_{r}\epsilon_{F}}{2}+\frac{x_{r}(\epsilon^{2}_{F}+1)}{6\epsilon_{F}}\right)\,B=m_{e}c^{2}\,\rho_{0}\,\frac{x_{r}\,(4\epsilon^{2}_{F}+1)}{6\epsilon_{F}}\,B\
,$
which implies:
$\displaystyle\Sigma_{2}$ $\displaystyle=$
$\displaystyle\frac{\Delta\tilde{\cal E}+\Delta
P_{e}}{n_{e}}=m_{e}c^{2}\,\frac{4\epsilon^{2}_{F}+1}{6\epsilon_{F}\,x^{2}_{r}}\,B\
,$ (138)
which in kinematic limits reduces to
$\displaystyle\Sigma_{2,nr}=m_{e}\,c^{2}\,\frac{5}{6\,x^{2}_{r}}\,B\
,\quad\Sigma_{2,ur}=m_{e}\,c^{2}\,\frac{2}{3\,x_{r}}\,B\ .$ (139)
Adding these results up:
$\displaystyle\Sigma$ $\displaystyle=$ $\displaystyle
k_{B}T\,\hat{s}_{e}\equiv\frac{TS_{e}}{N_{e}}=m_{e}c^{2}\,\frac{\epsilon_{F}}{x^{2}_{r}}\,B\
,\quad B=\pi^{2}\,\tau^{2}\ ,$ (140) $\displaystyle\Sigma_{nr}$
$\displaystyle=$ $\displaystyle m_{e}c^{2}\,\frac{1}{x^{2}_{r}}\,B\
,\qquad\Sigma_{ur}=m_{e}c^{2}\,\frac{1}{x_{r}}\,B\ .$
The equations on the second line can, of course, be obtained directly from the
non-relativistic or ultra-relativistic results for the pressure and energy
densities.
# SightSteeple:
Agreeing to Disagree with Functional Blockchain Consensus
Aditya Ahuja Indian Institute of Technology DelhiNew DelhiIndia
<EMAIL_ADDRESS>
(2022)
###### Abstract.
Classical and contemporary distributed consensus protocols, be they for
binary agreement, state machine replication, or blockchain consensus, require
all protocol participants in a peer-to-peer system to agree on exactly the
same information as part of the consensus payload. Although this model of
consensus is extensively studied, and is useful for most consensus-based
decentralized applications, it falls short of defining correct distributed
systems which mandate participant-credential-based privileged visibility into
the consensus payload, through the consensus protocol itself.
We introduce a new paradigm for distributed consensus, called _functional
blockchain consensus_. Functional blockchain consensus allows each blockchain
protocol participant to agree on some distinct sub-information of the list of
transactions, as a function of the credentials of the participant in the
blockchain system, instead of agreeing on the entire list of transactions. We
motivate two adversary models, one with a standard crash-fault adversary and
another with a novel rational-fault adversary, to compromise functional
blockchain consensus. We then present two versions of a blockchain protocol
called SightSteeple, that achieves functional blockchain consensus in the said
fault models. SightSteeple relies on a novel combination of standard
blockchain consensus and functional encryption, among other primitives, to
achieve its goals of correctness. Finally, we discuss practical uses of
functional blockchain consensus based asymmetric distributed ledgers, and
motivate off-shoot constructions that can result from this new consensus
paradigm.
Functional Blockchain Consensus, Hierarchical Blockchains
## 1\. Introduction
Distributed consensus, which can manifest in the form of binary agreement
(Dolev and Strong, 1983; Shi, 2020), state machine replication (Yin et al.,
2019; McMenamin et al., 2021), or blockchain consensus (Bano et al., 2019;
Xiao et al., 2020; Chan and Shi, 2020), requires a set of networked processes
to agree on some information. In each manifestation, the notion of consensus
is to agree on an identical snapshot of the information as part of the
consensus payload, symmetrically, by each of the processes involved. Although
this notion of consensus may be useful for symmetric information based
decentralized applications, it precludes decentralized applications requiring
consensus on sensitive information, where there is a need for privileged
visibility into the consensus payload for each of the participant processes.
From a pedagogical perspective, there is a lack of consensus paradigms and
protocols where visibility into the consensus payload is predicated on the
credentials of the consensus protocol participants. Presently, distributed
consensus is in general defined for a peer-to-peer system, and to
intentionally preclude the credentials that the consensus protocol
participants may possess: those credentials, which may define the privilege of
their visibility into the consensus payload. Consequently, as at least an
academic exercise, there is a need for defining _paradigms for asymmetric
consensus_ : the consensus protocol participants may agree on some sub-
information, which is any information that may be inferred from the complete
consensus payload, as a function of their credentials in the distributed
system, once those credentials are established and agreed to in a
decentralized setting.
One way to achieve asymmetric consensus is to ensure that the information
contained in the consensus payload that is being considered by all processes
is identical, however the agreed _view_ 111We use ‘view’ to denote any sub-
information that can be implied by the complete information contained in the
consensus payload, and will formally define a view later. or summary of the
payload, and the consequential distributed ledger, is allowed to be different
for different processes, as long as there exists a _hierarchy of inference_
across the views of each of the processes. The hierarchy of inference should
necessitate that some views are implied by other views, thereby ensuring an
asymmetric consistency across all processes. Such credential based consensus
definitions and protocols for secure consensus payload views for each of the
involved processes (similar to secure information flow (Denning, 1976)),
resulting in continuously growing logs which are the output of the consensus
protocol, do not exist yet to the best of our knowledge.
There is also a practical motivation for asymmetric consensus based
decentralized applications. For instance, cryptocurrencies (Bonneau et al.,
2015) with sensitive transactions may require asymmetric distributed ledgers,
which allow different processes to see different summaries of the list of
transactions, or allow processes to learn the list of transactions only when
certain preconditions are met. Decentralized finance (DeFi) (Werner et al.,
2021) applications may require hierarchical distributed ledgers for selective
portfolio exposure to enable asymmetrical access to automated markets. There
would also be, in general, a need for asymmetric records for agreement on
classified information in information critical decentralized sectors requiring
sensitive data distribution (Casino et al., 2019).222We motivate decentralized
applications based on functional blockchain consensus, in more detail, in
Section 6.1.
Given the explosion of blockchain based decentralized applications in recent
times (Casino et al., 2019), there is a motivation for blockchain based
information flow hierarchies in decentralized applications and organizations,
perhaps through separate yet hierarchical blockchains across the blockchain
protocol participants, especially in information critical sectors as
mentioned. Consequently, it is befitting and opportune to consider, both as an
academic exercise and a practical curiosity, asymmetric blockchain consensus
models and protocols, for defining hierarchical blockchains: models that
generalize standard blockchains by accommodating credential-based asymmetric
agreement on the list of transactions.
### Our Contributions
In this paper, we make the following contributions333Our contributions are
inspired by, and are a refinement of, a patent application on functional
blockchain consensus (Ahuja et al., 2021)..
_Introducing Functional Blockchain Consensus (Section 2)._ We present a player
model for consensus where blockchain protocol participants (or _players_) have
different credentials towards their visibility into the blockchain payload. We
formally define a block payload view, which is any information that can be
inferred from the complete list of transactions. We then introduce our new
paradigm of consensus, called _functional blockchain consensus_ , which, given
the credentials of all players in the blockchain system, allows (i) each
honest player to agree on a distinct block payload view, as a function of its
credentials in the system, and (ii) allows each honest player to know that its
honest counterparts agree on a correct block payload view. Functional
blockchain consensus may result in different blockchains for different players
(with some blockchains being implied by other blockchains), and so we formally
show that functional blockchain consensus is a generalization of traditional
blockchain consensus.
_Presenting SightSteeple under a fail-stop adversary (Section 4)._ Given a
partially synchronous network (Dwork et al., 1988) with a crash-fault adversary
that controls less than half of the players in the system, we present our
first functional blockchain consensus protocol called SightSteeple-CFT.
SightSteeple-CFT is constructed by amending the crash-fault tolerant version
of the streamlined Streamlet (Chan and Shi, 2020) blockchain protocol, and by
using functional encryption for all efficiently computable functions (Garg et
al., 2014) (among other cryptographic primitives).
_Presenting SightSteeple under an economically incentivized, payload view
compromise adversary (Section 5)._ We motivate a new adversary model under
functional blockchain consensus, termed a _rational_ adversary. A rational
adversary, apart from maximizing its revenue through the consensus protocol
(which may include any combination of block rewards, transaction fees, or
double spending transactions), would simultaneously want to maximize its block
payload view and try to learn the complete list of transactions instead of
some summary of it. To that end, the adversary would be willing to mislead the
honest players towards learning incorrect payload views. Under a rational
adversary controlling less than one-third of the players in the system, over a
partially synchronous network, we present our next protocol called
SightSteeple-RFT. SightSteeple-RFT is constructed by amending the Byzantine-
fault tolerant version of Streamlet, and by using verifiable functional
encryption schemes (Badrinarayanan et al., 2016).
#### Our goals, and open problems.
In this work, we intend to initiate the study of hierarchical visibility into
the blockchain payload, through a new functional blockchain consensus
protocol. We discuss the impossibility of Byzantine-fault tolerant
SightSteeple (Section 5.1). We will not give exact construction of any
functional encryption scheme, but point out their existence and viability for
various distributed ledgers (Section 6.1). We will discuss the subtleties of
privilege alteration attacks, both on-chain and off-chain, and point to
possible solutions to harden the protocol (Section 6.2). We will motivate
future definitions on asymmetric smart contracts and alternate asymmetric
consensus paradigms, such as consensus on transaction declassification, which
might have a construction similar to SightSteeple (Section 7).
### Related Work
_Asymmetric trust, and relaxing consensus_. There have been proposals to model
asymmetric Byzantine quorum systems over an asynchronous network, where each
consensus protocol participant is free to choose which participants it
considers faulty, and which it considers honest (non-faulty) (Cachin, 2021),
and consequential consensus protocols have been proposed (Cachin and Zanolini,
2021). There have been proposals to relax the definition of consensus (more
specifically, relaxing the definition of termination within consensus) in
blockchains, over an asynchronous network (Sliwinski and Wattenhofer, 2019).
None of these contributions permit an asymmetric _visibility_ of the consensus
payload, nor advocate for asymmetry on the agreed information for the
participants in the protocol.
_Hybrid blockchains_. Hybrid blockchains, which have a public chain and
multiple private subchains to realize the decentralized application (Zhu et
al., 2019; Cui et al., 2020a), are different from SightSteeple where
blockchain payload visibility can change for each player on the same chain.
_Solutions at the intersection of blockchains and functional encryption_.
There have been proposals to outsource decryption under a functional scheme,
with incentivization, to blockchains (Cui et al., 2020b). Privacy preserving
energy trading in blockchain empowered smart grids has been proposed by
leveraging functional encryption (Son et al., 2020). Secure distributed smart
meters have been defined using a combination of blockchains and functional
encryption (Yurchenko et al., 2020). A power efficient elliptic curve pairing
crypto-processor has been proposed for blockchains and functional encryption
(Banerjee and Chandrakasan, 2021). None of these contributions define a
consensus model that can be realized using a combination of standard
blockchains and functional encryption, which is central to our contribution.
## 2\. Functional Blockchain Consensus
In this section, we introduce functional blockchain consensus.
### 2.1. The Player Model
We refer to the blockchain protocol participants, which are (polynomial-time)
interactive Turing machines, as _players_. The set of players is given by
$[n]:=\\{1,2,...,n\\}$, where some players are honest (non-faulty) and others
are faulty. Further, each player $i\in[n]$ has some credentials
$\kappa_{i}\in\\{0,1\\}^{*}$, with the highest credential denoted by
$\kappa^{*}$. Let $\mathcal{C}=(\kappa_{i})_{i\in[n]}$ denote the list of
credentials for all players.
Further, there exists a third party for trusted setup, called init-party, that
does not participate in consensus, but distributes the credentials to each
player.
### 2.2. Block Payload View
We first introduce a block payload _view_ , which has a special connotation in
functional blockchain consensus (not to be confused with view change in state
machine replication, or a real-time snapshot of the blockchain state in
standard blockchains (Chan and Shi, 2020)). A block payload view for a
specific player in functional blockchain consensus, is the sub-information of
the list of transactions that the said player agrees upon, and includes in its
blockchain. We formalize this through the following definition.
Definition 1 (Block Payload View). _A set of functions $\mathbb{F}$ is a set
of block payload view functions iff
$\forall\textsf{txs}\in\\{0,1\\}^{*},\forall f\in\mathbb{F}$,
$f(\textsf{txs})$ is implied by txs. Further there exists an identity function
$f^{*}\in\mathbb{F}$, such that
$\forall\textsf{txs}\in\\{0,1\\}^{*},f^{*}(\textsf{txs})=\textsf{txs}$, and a
null function $f_{\bot}\in\mathbb{F}$, such that
$\forall\textsf{txs}\in\\{0,1\\}^{*},f_{\bot}(\textsf{txs})=\bot$.
Further, $\forall\textsf{txs}\in\\{0,1\\}^{*},\forall f\in\mathbb{F}$, we call
$f(\textsf{txs})$ a block payload view of txs under view function $f$._
_Examples of block payload views._ Instances of block payload views include
view functions that provide the smallest transaction in the list of
transactions, or provide the sub-list of the transactions by a particular
transacting party (say Alice), or provide the sum of the tokens exchanged in
all the transactions in the transaction list.
_Mapping players’ credentials to their permissible payload view._ Given a
player with certain credentials, there needs to be a correspondence between
the player’s credentials and the view function (s)he is eligible for. Let
$\Psi:\\{0,1\\}^{*}\rightarrow\mathbb{F}$ be the function, determined by the
init-party, that provides this mapping. Also, it is true that
$\Psi(\kappa^{*})=f^{*}$.
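As a purely illustrative sketch (the payload encoding, credential strings and function names below are ours, not part of the formal model), the example view functions and the mapping $\Psi$ could look as follows:

```python
# A block payload txs is modelled as a list of (sender, receiver, amount) tuples.
f_star  = lambda txs: txs                                   # identity view f*
f_bot   = lambda txs: None                                  # null view f_bot
f_min   = lambda txs: min(txs, key=lambda t: t[2])          # smallest transaction
f_alice = lambda txs: [t for t in txs if t[0] == "alice"]   # Alice's sub-list
f_sum   = lambda txs: sum(t[2] for t in txs)                # total tokens exchanged

# Psi maps a credential to the view function the player is eligible for;
# the highest credential kappa* maps to the identity view f*.
PSI = {"kappa*": f_star, "auditor": f_sum, "alice": f_alice, "guest": f_bot}

txs = [("alice", "bob", 5), ("carol", "dave", 2)]
for cred, f in PSI.items():
    print(cred, "->", f(txs))
```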
### 2.3. Defining Functional Blockchain Consensus
Having presented the player model and introduced block payload views, we now
formally define functional blockchain consensus.
Definition 2 (Functional Blockchain Consensus). _Assume there exist $n$
players with credentials $\mathcal{C}$, and each player is eligible to learn a
block payload view under the view function set $\mathbb{F}$, through $\Psi$. A
blockchain protocol achieves ‘functional blockchain consensus’, if it attains
the following consensus goals (with all but negligible probability in the
security parameter), for each epoch $e$ of the blockchain system when the
block payload $\textsf{txs}^{e}$ is added consistently to the blockchain:_
_1\. Functional Hierarchy Consistency: For each honest player $i\in[n]$,
player $i$ agrees on $(\Psi(\kappa_{i})=f^{e}_{i}\in\mathbb{F})_{i\in[n]}$._
_2\. Block Payload View Integrity: For each honest player $i\in[n]$, player
$i$ agrees on $f^{e}_{i}(\textsf{txs}^{e})$, and $i$ knows that each honest
player $j\in[n],j\neq i$ agrees on $f^{e}_{j}(\textsf{txs}^{e})$. Further, if
for some honest player $i\in[n]$,
$f^{e}_{i}(\textsf{txs}^{e})=f^{*}(\textsf{txs}^{e})=\textsf{txs}^{e}$, then
$i$ verifies that $\textsf{txs}^{e}$ is valid (does not contain double
spending transactions)._
_3\. Liveness: If some honest player with highest credentials receives a valid
block payload $txs$ in some round, that payload will eventually be summarized
and finalized in each honest player’s blockchain._
It is instructive to give an explanation of Definition 2. In the first
requirement for achieving functional blockchain consensus, each honest player
must agree that each player in the system is eligible for a block payload view
congruent to its credential in the system. In the second requirement, it is
ensured that each honest player knows that each honest player did indeed learn
a block payload view in accordance with its view function. In the final
requirement, it is just ascertained that every valid block payload eventually
goes on-chain.
Note that in the most general case the credentials of each player can
be a function of time (which means that the correct payload view function of
each player can also be a function of time).
### 2.4. Hierarchical Player Blockchains
We introduce some terminology first. We say a payload view is _notarized_
444An equivalent notion of a notarized block is a mined block in Nakamoto
consensus blockchains (Bonneau et al., 2015). (similar terminology in
Streamlet (Chan and Shi, 2020)), once it receives a threshold of votes from
some of the players and is eligible to be eventually confirmed in the player’s
blockchain. We say that a notarized payload view is _finalized_ once it is
confirmed as a part of the player's blockchain.
For each player $i\in[n]$, and an arbitrary epoch $e$, the player’s blockchain
under functional blockchain consensus, is given by
$\textsf{chain}^{e}_{i}:=(\textsf{chain}^{e-1}_{i},H^{*}(f^{e^{\prime}}_{i}(\textsf{txs}^{e^{\prime}})),f^{e}_{i}(\textsf{txs}^{e}))$,
with $e^{\prime}<e$, notarized $f^{e}_{i}(\textsf{txs}^{e})$ linked to
notarized $f^{e^{\prime}}_{i}(\textsf{txs}^{e^{\prime}})$, and
$\textsf{chain}^{0}_{i}$ is the genesis block. The standard blockchain, which
is ideal (corresponding to the payload view function $f^{*}$), is given by
$\textsf{chain}^{*,e}:=(\textsf{chain}^{*,e-1},H^{*}(\textsf{txs}^{e^{\prime}}),\textsf{txs}^{e})$,
similarly. Note that each player’s notarized blockchain might be a block-tree
in general, with the finalized blockchain being a sub-chain of the notarized
block-tree. We will denote each player $i$’s finalized blockchain by
$\textsf{chain}_{i}$, and the ideal finalized blockchain by
$\textsf{chain}^{*}$ (dropping the epoch superscript).
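A minimal sketch of how a player's chain grows by one notarized payload view, per the expression above (the hash and serialization choices are ours):

```python
import hashlib, json

def h_star(obj):
    # stand-in for the collision-resistant hash H* of the paper
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def extend_chain(chain, prev_view, new_view):
    # chain_i^e := (chain_i^{e-1}, H*(f_i^{e'}(txs^{e'})), f_i^e(txs^e))
    return chain + [(h_star(prev_view), new_view)]

chain = [("", "genesis")]
chain = extend_chain(chain, "genesis", {"epoch": 1, "view": 7})   # e.g. a sum view
chain = extend_chain(chain, {"epoch": 1, "view": 7}, {"epoch": 2, "view": 12})
print(chain)
```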
_View Functions’ Hierarchy._ We first define the binary relation $\preceq$
over the set of credentials. $\forall i_{1},i_{2}\in[n]$,
$\kappa_{i_{1}}\preceq\kappa_{i_{2}}$ implies that player $i_{2}$ has no
lesser credentials than player $i_{1}$, and consequently for each epoch $e$,
payload view $\textsf{txs}^{e}_{i_{1}}=f_{i_{1}}(\textsf{txs}^{e})$ should be
implied by payload view
$\textsf{txs}^{e}_{i_{2}}=f_{i_{2}}(\textsf{txs}^{e})$. This is denoted
equivalently with $f_{i_{1}}\preceq f_{i_{2}}$, or even
$\textsf{chain}_{i_{1}}\preceq\textsf{chain}_{i_{2}}$. From Definition 1, it
is evident that $\forall f\in\mathbb{F},f_{\bot}\preceq f\preceq f^{*}$.
It is easy to see that $(\mathbb{F},\preceq)$ is a partial order, as the
binary relation $\preceq$ over $\mathbb{F}$ is reflexive, anti-symmetric and
transitive555This partial order provides the hierarchy of inference on the
consensus payload, which was mentioned in Section 1.. $\forall
f_{1},f_{2}\in\mathbb{F}$, define $\textsf{dist}_{\preceq}(f_{1},f_{2})$ to be
the number of functions on the path between $f_{1}$ and $f_{2}$ in the partial
order $(\mathbb{F},\preceq)$. From Definition 1, it is evident that $\forall
f_{1},f_{2}\in\mathbb{F},\textsf{dist}_{\preceq}(f_{1},f_{2})\leq\textsf{dist}_{\preceq}(f_{\bot},f^{*})$.
For some $S\subseteq[n]$, define
$\inf_{\preceq}\\{f_{i}(\textsf{txs})\\}_{i\in
S}:=\\{f_{j}(\textsf{txs})\\}_{j\in S^{*}}$ to be the smallest
$S^{*}(\subseteq S)$ such that for each $f_{i}(\textsf{txs})\in S$, there
exists $f_{j}(\textsf{txs})\in S^{*}$ such that $f_{j}\preceq f_{i}$.
Similarly, for some $S\subseteq[n]$, define
$\sup_{\preceq}\\{f_{i}(\textsf{txs})\\}_{i\in
S}:=\\{f_{j}(\textsf{txs})\\}_{j\in S^{*}}$ to be the smallest
$S^{*}(\subseteq S)$ such that for each $f_{i}(\textsf{txs})\in S$, there
exists $f_{j}(\textsf{txs})\in S^{*}$ such that $f_{i}\preceq f_{j}$.
_Hierarchical player blockchains generalize standard blockchains._ $\forall
i\in[n],\forall e$, if it is the case that $f^{e}_{i}=f^{*}$, then it is true
that each honest player’s payload view is identical and contains all the
transactions for each block in each epoch: $\forall
e,i\in[n],\textsf{chain}_{i}=\textsf{chain}^{*}$. In this instance, each
player’s blockchain under functional blockchain consensus is no different than
a standard blockchain.
### 2.5. Alternate Functional Consensus Models
We briefly discuss possibilities of asymmetric consensus in binary agreement
and state machine replication, which can be considered in the context of
functional blockchain consensus.
_Functional Binary Agreement reduces to Binary Agreement._ Binary agreement
requires a set of processes to agree on a bit. Firstly, note that binary
agreement on constant functions of a bit does not require a consensus protocol.
If binary agreement is considered on non-constant functions of a bit, note that
every non-constant function on a bit is invertible (there are only two: identity
and negation), and consequently any functional binary agreement definition can
be reduced to standard binary agreement.
_Functional Blockchain Consensus and Functional State Machine Replication
Consensus are equivalent._ State machine replication is a method for providing
a fault-tolerant service where replicas of servers maintain the correct state
of the service, and accept commands from clients to update the state of the
service. There are direct parallels between functional blockchain consensus
and a possible ‘functional’ consensus for state machine replication: block
payload view is equivalent to a sub-state (a sub-automaton) of the service.
Thus, by replacing the list of transactions txs (the blockchain payload) with
state (the state of the system) and by replacing block payload view functions
in $\mathbb{F}$ with state machine sub-state functions in $\mathbb{F}$, in
Definitions 1 and 2, an equivalent definition of functional state machine
replication can be proposed.
## 3\. Preliminaries
We first present the preliminary assumptions and constructions required by the
SightSteeple protocols.
### 3.1. The Execution Model
_The Player Model._ We assume that the players $[n]$ are ordered with non-
increasing static credentials, by the init-party: $\forall
i_{1},i_{2}\in[n],i_{1}\leq i_{2}$, $\kappa_{i_{1}}\preceq\kappa_{i_{2}}$. We
denote the subset of players that can participate in block proposal (defined
in Section 3.3) by $[m]$, where $m\leq n$. $\forall
i\in[m],\kappa_{i}=\kappa^{*}$, and $\forall
j\in\\{m+1,m+2,...,n\\},\kappa_{j}\prec\kappa^{*}$ ($j$ has lower than highest
credentials). We refer to all the players in $[m]$ as _head_ players.
_Credentials’ Initialization._ The init-party is a trusted benevolent body
that initializes the system by distributing the credentials, does not
participate in consensus, and cannot flag adversarial players. During setup,
the init-party makes $\Psi$ public. Each player $i\in[n]$ only knows its
$\kappa_{i}$ through the init-party, unless $\kappa_{i}=\kappa^{*}$, in which
case $i$ knows $\mathcal{C}$ through the init-party.
_The Network Model._ We assume that there exists a permissioned, authenticated
blockchain network of $n$ players. We assume that the clocks of all players
are synchronized, and block proposal occurs in epochs. We assume that the
network obeys partial synchrony (Dwork et al., 1988), where there exists a
known finite number of rounds $\Delta$, and an unknown Global Stabilization
Time $GST$, such that for any message sent by any honest player at round
$r_{0}$, the said message is received by all honest players in $[n]$ by round
$\max(r_{0},GST)+\Delta$. We ignore the impact of computation times of
cryptographic routines on our message delays (as in our base protocol
Streamlet (Chan and Shi, 2020)).
_The Fault Model._ We assume there exists an unknown, static partition of
$[n]$, of honest and faulty players $(\mathcal{H},\mathcal{A})$. The honest
players in $\mathcal{H}$ follow the protocol specification as is, and the
faulty players in $\mathcal{A}$ deviate from the specified protocol under the
failure types stated next.
We assume that given the static adversary, there is at least one head player
that is not compromised by it: at least one player in $[m]$ is honest, to
eliminate the possibility of double-spending by the adversary (will be
discussed in detail in Section 5.4). We will first consider the traditional
crash-fault adversary: once a player is compromised by the adversary, it stops
sending and received all protocol specific messages. We will then define a
novel _rational-fault_ adversary under the functional blockchain consensus
paradigm: briefly, a rational adversary would try to maximize its revenue from
participation in the consensus protocol, and simultaneously try to maximize
its visibility in the blockchain payload (the list of transactions). We cover
each adversary in detail in the relevant sections that follow.
### 3.2. Streamlet: The Base Protocol
SightSteeple will be an amendment to the streamlined blockchain protocol
Streamlet (Chan and Shi, 2020). Streamlet will be considered over a partially
synchronous network, with one of crash-fault or Byzantine-fault adversaries.
For each block, consensus in Streamlet takes place in four stages: block
proposal, block vote, block notarization (when the block receives a threshold
of votes), and block finalization (when the block is confirmed). These four
stages will be revised and re-interpreted in SightSteeple. For details on
Streamlet, please see Appendix A.1.
### 3.3. Metablocks, Metachain and Player Blockchains
_The Metablock._ In SightSteeple, we introduce a ‘metablock’ as a super block
containing encrypted information about the block payload (the list of
transactions txs). Each player can selectively read part of the information
contained in the metablock, as per its privileges towards the block payload.
Since only head players have the highest credentials in the SightSteeple
system, metablocks can solely be proposed by them. We will denote, for each
epoch $e$, the metablock using $\textsf{M}^{e}$.
_The Metachain._ The ‘metachain’ would simply be the blockchain of metablocks.
We would denote, for each epoch $e$, the presently notarized metachain by
$\textsf{mchain}^{e}$ (which may be a tree of metablocks), and the final
metachain at any epoch by mchain.
_Player Blockchains are implied by the SightSteeple Metachain._ Since each
metablock in the metachain contains information that can be selectively
inferred by each player, based on the encrypted information on the list of
transactions as part of the metablock, each honest player $i\in[n]$ can deduce
$\textsf{chain}^{e}_{i}$ from $\textsf{mchain}^{e}$, for each epoch $e$.
### 3.4. Basics of Functional Encryption
Functional encryption will be extensively employed in SightSteeple to
preferentially reveal information to each player as part of each metablock.
Under a functional encryption scheme (Boneh et al., 2011), given the
encryption of a message $\textsf{msg}\in\\{0,1\\}^{*}$, the decryptor can
recover $f(\textsf{msg})$ if provided with the secret key $sk_{f}$ under the
scheme by the encryptor for a particular function $f$. Under a verifiable
functional encryption scheme (Badrinarayanan et al., 2016), the decryptor can
validate $f$ from the supplied secret key for decryption, and recover $f(\textsf{msg})$,
even if the encryptor is faulty (malicious), and wants to fool the decryptor
by supplying a key $sk_{f^{\prime}}$ for some $f^{\prime}\neq f$. A functional
encryption scheme for all circuits (Garg et al., 2014) supports the functional
encryption of all efficiently computable functions over the message space
$\\{0,1\\}^{*}$. We will denote the set of all efficiently computable
functions as $\mathbf{\hat{F}}$. It is easy to see that
$\mathbb{F}\subseteq\mathbf{\hat{F}}$. For details on functional encryption,
please see Appendix A.2.
### 3.5. Notation
Let $e$ denote an epoch of the metachain, and simultaneously that of each
player chain. $L_{e}$ will denote the metablock proposing epoch leader, and is
a random member of $[m]$. Let $H^{*}$ denote a collision resistant hash
function, which is ideal under the random oracle model (its image is uniformly
distributed). Let $\Gamma_{\text{Sig}}$ denote a signature scheme,
$\Gamma_{\text{E}}$ denote a public key encryption scheme,
$\Gamma_{\text{aFE}}$ (Garg et al., 2014) denote a functional encryption
scheme for all efficiently computable functions, and $\Gamma_{\text{vFE}}$
(Badrinarayanan et al., 2016) denote a verifiable functional encryption
scheme.
Given a message $\textsf{msg}\in\\{0,1\\}^{*}$, define the signed message under
scheme $\Gamma_{\text{Sig}}$ by player $i$ as
$(\textsf{msg})_{\Gamma_{\text{Sig}}.i}$, and the encrypted message under scheme
$\Gamma_{\text{E}}$ for player $i$ as
$(\textsf{msg})_{\Gamma_{\text{E}}.{i^{-1}}}$.
Crash-fault tolerant Streamlet will be denoted by $\Pi^{0}_{\text{cft}}$, and
Byzantine-fault tolerant Streamlet will be denoted by $\Pi^{0}_{\text{bft}}$.
The crash-fault tolerant SightSteeple protocol will be denoted by
$\mathbf{\Pi}^{\text{ss}}_{\text{cft}}$, and the rational-fault tolerant
version will be denoted by $\mathbf{\Pi}^{\text{ss}}_{\text{rft}}$.
We will use $\textrm{M}.$Add$-\textsf{msg}$ to denote the addition of a
message $\textsf{msg}\in\\{0,1\\}^{*}$ to metablock M.
## 4\. SightSteeple: Crash Fault Tolerant
We present the first version of the SightSteeple functional blockchain
consensus protocol, in the presence of a _crash-fault_ adversary
$\mathcal{A}$: all adversarial players stop sending and receiving all messages
related to the protocol. We assume $|\mathcal{A}|<\frac{n}{2}$.
### 4.1. Metablock Structure
_The genesis block._ The players in $[n]$ initialize the system by agreeing on
the genesis block
$\textsf{gen}:=(0,[n],\mathcal{C},\mathbb{F},\Psi,\Gamma_{\text{E}},\Gamma_{\text{aFE}},H^{*})$.
The genesis block is notarized when at least $\frac{n}{2}$ players vote on it
(a vote by a player is just a signed hash of the genesis block by that
player).
_The metablock._ The metablock for SightSteeple-CFT is presented next. In
brief, the metablock contains the current epoch number $e$, hash of the
previous metablock $\mathrm{M}^{e^{\prime}}$ to which the current metablock is
linked, encryption of the list of transactions $\textsf{txs}^{e}$ under
$\Gamma_{\text{aFE}}$, and, for each player $i$, hash of the current player
chain $\textsf{chain}_{i}^{e-1}$, payload view function $f^{e}_{i}$ for $i$,
and the $\Gamma_{\text{aFE}}$ secret key $\textrm{sk}_{f^{e}_{i}}$,
encrypted under $\Gamma_{\text{E}}$ so that it is recoverable only by $i$.
SS-CFT Metablock: The Contents of $\mathcal{M}^{e}_{\mathcal{H}}$ (by Leaders in $\mathcal{H}$)

Initialize $\mathcal{M}^{e}_{\mathcal{H}}\leftarrow\phi$

$\mathcal{M}^{e}_{\mathcal{H}}.$Add$-(e,H^{*}(\mathrm{M}^{e^{\prime}}),\Gamma_{\text{aFE}}.\textrm{Enc}_{\textrm{pp}^{e}}(\textsf{txs}^{e}))$

$\forall i\in[n]$: $\mathcal{M}^{e}_{\mathcal{H}}.$Add$-(i,H^{*}(\textsf{chain}_{i}^{e-1}),f^{e}_{i},(\Gamma_{\text{aFE}}.\textrm{sk}_{f^{e}_{i}})_{\Gamma_{\text{E}}.i^{-1}})$
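To make the assembly above concrete, the following Python sketch (ours) builds $\mathcal{M}^{e}_{\mathcal{H}}$; the helpers `afe_encrypt`, `afe_keygen`, `pke_encrypt_for` and `h_star` are hypothetical stand-ins for $\Gamma_{\text{aFE}}.\textsf{Enc}$, $\Gamma_{\text{aFE}}.\textsf{KeyGen}$, $\Gamma_{\text{E}}$ and $H^{*}$:

```python
# Sketch (ours): assembling the SS-CFT metablock for epoch e.
def build_cft_metablock(e, prev_metablock, txs, players, view_funcs,
                        chains, afe_encrypt, afe_keygen, pke_encrypt_for,
                        h_star):
    metablock = []
    # Header: epoch, hash-link to the previous metablock, encrypted payload.
    metablock.append((e, h_star(prev_metablock), afe_encrypt(txs)))
    # Per-player entry: chain hash, view function, and the aFE secret key
    # for that function, public-key encrypted so only player i recovers it.
    for i in players:
        sk_fi = afe_keygen(view_funcs[i])
        metablock.append((i, h_star(chains[i]), view_funcs[i],
                          pke_encrypt_for(i, sk_fi)))
    return metablock
```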
### 4.2. The SightSteeple-CFT Protocol
The SightSteeple-CFT Protocol $\mathbf{\Pi}^{\text{ss}}_{\text{cft}}$ is
presented in Algorithm 1.
#### Protocol Outline
For each epoch, the metablock proposing leader is elected as a random member
of $[m]$, as a function of $e$. If the leader is honest, it proposes
$\mathcal{M}^{e}_{\mathcal{H}}$ to the network (otherwise, no metablock is
proposed). On successfully receiving the metablock, the honest players in
$[n]$ reply by broadcasting their vote (denoted by $\textrm{V}^{e}_{i},\forall
i\in[n]$) over the network. The metablock is notarized once it receives votes
from at least $\frac{n}{2}$ players, which is guaranteed since all honest
players (a majority) vote. The metablock is finalized according to
the finalization rule of the crash-fault tolerant version of Streamlet
$\Pi^{0}_{\text{cft}}$ (Sec. 5 in (Chan and Shi, 2020)).
Algorithm 1: SightSteeple-CFT ($\mathbf{\Pi}^{\text{ss}}_{\text{cft}}$)

Leader Election: $\forall e,L_{e}:=H^{*}(e)\mod m$

Metablock Proposal: If $L_{e}\in\mathcal{H},\textrm{M}^{e}=\mathcal{M}^{e}_{\mathcal{H}}$. If $L_{e}\in\mathcal{A},\textrm{M}^{e}=\bot$. $\forall e,L_{e}$ broadcasts $\textrm{M}^{e}$.

Metablock Vote: $\forall i\in[n]$, $i$ broadcasts $\textrm{V}^{e}_{i}=(i,e,H^{*}(\textrm{M}^{e}))$.

Metablock Notarization: $\textrm{M}^{e}$ is notarized when at least $\frac{n}{2}$ players vote for it.

Metablock Finalization (from Streamlet $\Pi^{0}_{\text{cft}}$): If in any notarized metachain, there exist three hash-linked metablocks with consecutive epoch numbers, the prefix of the metachain up to the second of the three metablocks is considered final. Further, when a metablock is finalized, its parent chain is also finalized.
#### 4.2.1. Correctness
We show that the SightSteeple-CFT protocol is correct.
Theorem 3 (SS-CFT Correctness). _The SightSteeple-CFT protocol
$\mathbf{\Pi}^{ss}_{\text{cft}}$ achieves functional blockchain consensus, in
the presence of a crash-fault adversary $\mathcal{A}$, with
$|\mathcal{A}|<\frac{n}{2}$._
_Proof._ Since the notarization and finalization rules in
$\mathbf{\Pi}^{ss}_{\text{cft}}$ are equivalent to those in
$\Pi^{0}_{\text{cft}}$, the $\mathbf{\Pi}^{ss}_{\text{cft}}$ metachain will be
consistent across all players (Theorem 12 in (Chan and Shi, 2020)). We will
now show that $\mathbf{\Pi}^{ss}_{\text{cft}}$ achieves the three goals of
functional blockchain consensus (Definition 2), considering a consistent
metablock $\textrm{M}^{e}$ from an arbitrary epoch $e$, and remembering the
metablock response from honest leaders is $\mathcal{M}^{e}_{\mathcal{H}}$ and
crash-faulty leaders do not propose a metablock:
(i) Functional Hierarchy Consistency: Since all honest players vote on the
genesis block which contains $([n],\mathcal{C},\mathbb{F},\Psi)$, and vote on
the metablock $\textrm{M}^{e}$ which contains $(f^{e}_{i})_{i\in[n]}$, it is
implied that all honest players agree on
$(\Psi(\kappa_{i})=f^{e}_{i}\in\mathbb{F})_{i\in[n]}$.
(ii) Block Payload View Integrity: Since each honest player voted on the
metablock, which implies that it successfully received $\textrm{M}^{e}$, it is
true that each honest player knows that each honest player $i\in[n]$ agrees on
$f^{e}_{i}(\textsf{txs}^{e})$. Further, since each honest head player voted,
it is true that $\textsf{txs}^{e}$ doesn’t contain double spending
transactions.
(iii) Liveness: The $\mathbf{\Pi}^{ss}_{\text{cft}}$ metablock finalization
rule is identical to the $\Pi^{0}_{\text{cft}}$ block finalization rule. Thus,
the liveness of $\mathbf{\Pi}^{ss}_{\text{cft}}$ is implied by Theorem 13 in
(Chan and Shi, 2020) (details in Appendix A.1). $\hfill\square$
#### The $\mathbf{\Pi}^{\text{ss}}_{\text{cft}}$ metachain implies each player
chain
Consider, for any epoch $e$, the metachain $\textsf{mchain}^{e}$ and the most
recent metablock $\textrm{M}^{e}$ in it. Also consider, for each honest player
$i\in[n]$, the sub-metablock $\textrm{M}^{e}_{i}$ of $\textrm{M}^{e}$.
$\textrm{M}^{e}_{i}$ contains:
1\.
$(e,H^{*}(\mathrm{M}^{e^{\prime}}),\Gamma_{\text{aFE}}.\textrm{Enc}(\textsf{txs}^{e}))$
2\.
$(i,H^{*}(\textsf{chain}_{i}^{e-1}),f^{e}_{i},(\Gamma_{\text{aFE}}.\textrm{sk}_{f^{e}_{i}})_{\Gamma_{\text{E}}.i^{-1}})$
From both these messages, it is easy for player $i$ to imply
$\textsf{chain}^{e}_{i}=(\textsf{chain}^{e-1}_{i},H^{*}(f^{e^{\prime}}_{i}(\textsf{txs}^{e^{\prime}})),f^{e}_{i}(\textsf{txs}^{e}))$,
by recovering the encrypted secret key $\textrm{sk}_{f^{e}_{i}}$ under
$\Gamma_{\text{E}}$, followed by recovering $f^{e}_{i}(\textsf{txs}^{e})$
under $\Gamma_{\text{aFE}}$.
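A sketch (ours) of this derivation, with `pke_decrypt` and `afe_decrypt` as hypothetical stand-ins for decryption under $\Gamma_{\text{E}}$ and $\Gamma_{\text{aFE}}$:

```python
# Sketch (ours): player i extends chain_i^{e-1} to chain_i^e from its
# sub-metablock M_i^e.
def extend_player_chain(chain_prev, prev_view_hash, header, entry,
                        pke_decrypt, afe_decrypt):
    e, prev_mb_hash, enc_txs = header     # (e, H*(M^{e'}), Enc(txs^e))
    i, chain_hash, f_i, enc_sk = entry    # player i's per-player entry
    sk_fi = pke_decrypt(enc_sk)           # recover sk_{f^e_i} under Gamma_E
    view = afe_decrypt(sk_fi, enc_txs)    # recover f^e_i(txs^e) under aFE
    # chain^e_i = (chain^{e-1}_i, H*(f^{e'}_i(txs^{e'})), f^e_i(txs^e))
    return (chain_prev, prev_view_hash, view)
```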
## 5\. SightSteeple: Rational Fault Tolerant
### 5.1. Impossibility of (Secret Key based) BFT SightSteeple
Asymmetric block payload visibility based on encrypted on-chain information as
part of the metablock, and a secret key per player, can never be Byzantine-fault
tolerant. This is because an adversarial player can simply broadcast its
secret key after the metablock finalization, thereby violating the payload
view integrity on any lower credential honest player. Due to this payload view
malleability post payload finalization, Byzantine-fault tolerant SightSteeple
is impossible, as is formalized by the following attack.
Attack 1 (SightSteeple-BFT). _Assume there exists a Byzantine player
$i^{\prime}\in\mathcal{A}$, and an honest player $i\in\mathcal{H}$, with
$\kappa_{i}\preceq\kappa_{i^{\prime}}$ and
$\kappa_{i^{\prime}}\npreceq\kappa_{i}$. Assume that at some epoch $\tilde{e}>e$
the metablock $\textsf{M}^{e}$ is finalized; then player $i^{\prime}$ can
violate the block payload view integrity of player $i$ for epoch $e$ by
broadcasting $\Gamma_{\text{vFE}}.sk_{f^{e}_{i^{\prime}}}$ over the network at
epoch $\tilde{e}$._
Consequently, SightSteeple must be proposed for a weaker adversary.
### 5.2. Rational-fault Adversary: Motivation and Definition
We consider rational players which wish to (i) maximize their revenue from the
block payload, in terms of block reward (if the protocol is incentivized, as
in Bitcoin (Bonneau et al., 2015)), transaction fees, and by double spending
transactions in the payload which they are a part of; and (ii) maximize their
payload view (under $\preceq$).
Further, rational players may want to mislead honest players by supplying them
a secret key (under the functional encryption scheme) for an incorrect view
function, thereby forcing them to agree to an incorrect view of the payload,
and violating the block payload view integrity for honest players, even when
the metachain is consistent. An example illustrating such an attack on head
players is given below. The consequence for honest head players under such an
attack is that they cannot propose payloads afterwards (as those payloads may
not be notarizable), inducing an effective denial-of-service (different from
conventional DoS attacks as in (Mirkin et al., 2020)). Thus it is imperative
to design a protocol with verifiable view function keys for resilience to a
rational adversary.
Attack 2 (SightSteeple-RFT without $\Gamma_{vFE}$). _Let
$\tilde{f}(\textsf{txs}):=\textsf{txs}\text{ with reduced value of each tx by
1 unit}$. Suppose that, for some epoch $e$, a rational leader
$L_{e}=i^{\prime}\in\mathcal{A}$ supplies $sk_{\tilde{f}}$ instead of
$sk_{f^{*}}$ to an honest $i\in[m]$. Now, for the smallest $e^{\prime}>e$
with $L_{e^{\prime}}=i$, if $i$ proposes a metablock containing payload
$\textsf{txs}^{e}$, the said metablock will not be notarized by any honest
head player (due to the appearance of double spending)._
_Rational Players’ Utility Function._ We present the utility of the rational
adversary $\mathcal{A}$, which is a function of the metablock proposed and
notarized in the current epoch $e$. Briefly, the utility function is a convex
combination of the revenue $\tau_{\mathcal{A}}$ for the adversary resulting
from the potential confirmation of the payload $\textsf{txs}^{e}$ (which could
be any combination of block reward, if the consensus protocol is incentivized,
transaction fees, or transactions by the adversary in the payload), and the
visibility into the payload given by the payload view function
$f^{e}_{i^{\prime}}$ for each faulty player $i^{\prime}$. We give the
normalized utility function $v^{e}_{\mathcal{A}}$ next, where
$\beta_{1},\beta_{2}\in(0,1)$, with $\beta_{1}+\beta_{2}=1$:
$v^{e}_{\mathcal{A}}(\textrm{M}^{e}):=\beta_{1}\cdot\tau_{\mathcal{A}}(\textsf{txs}^{e})+\beta_{2}\cdot\frac{1}{|\mathcal{A}|}\sum_{i^{\prime}\in\mathcal{A}}\frac{\textsf{dist}_{\preceq}(f_{\bot},f^{e}_{i^{\prime}})}{\textsf{dist}_{\preceq}(f_{\bot},f^{*})}$
(1)
We assume that rational players wish to maximize their utility under
$v^{e}_{\mathcal{A}}$ from participation in rational-fault tolerant
SightSteeple, and so would choose metablock proposal strategies to that end.
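A direct transcription (ours) of Eq. (1); `revenue` and `dist` are stand-ins for $\tau_{\mathcal{A}}$ and $\textsf{dist}_{\preceq}$, both defined earlier in the paper:

```python
# Sketch (ours): the normalized rational utility of Eq. (1).
def adversary_utility(txs, view_funcs_A, f_bot, f_star, revenue, dist,
                      beta1=0.5, beta2=0.5):
    # Convex combination: beta1, beta2 in (0, 1) with beta1 + beta2 = 1.
    assert 0 < beta1 < 1 and abs(beta1 + beta2 - 1.0) < 1e-9
    visibility = sum(dist(f_bot, f) for f in view_funcs_A)
    visibility /= len(view_funcs_A) * dist(f_bot, f_star)
    return beta1 * revenue(txs) + beta2 * visibility
```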
### 5.3. Metablock Structure
_The genesis block._ The players in $[n]$ initialize the system by agreeing on
the genesis block
$\textsf{gen}:=(0,[n],\mathcal{C},\mathbb{F},\Psi,\Gamma_{\text{E}},\Gamma_{\text{vFE}},H^{*})$.
The genesis block is notarized when at least $\frac{2n}{3}$ players vote on it
(a vote by a player is just a signed hash of the genesis block by that
player).
We will modify the vote and notarization rule for the metablock.
_The metablock (by honest leaders)._ The metablock for SightSteeple-RFT by
honest leaders is presented next. The metablock contains the current epoch
number $e$, hash of the previous metablock $\mathrm{M}^{e^{\prime}}$ to which
the current metablock is linked, public parameters $\textrm{pp}^{e}$ under the
scheme $\Gamma_{\text{vFE}}$, encryption of the list of transactions
$\textsf{txs}^{e}$ under $\Gamma_{\text{vFE}}$, and, for each player $i$, hash
of the current player chain $\textsf{chain}_{i}^{e-1}$, payload view function
$f^{e}_{i}$ for $i$, and the $\Gamma_{\text{vFE}}$ secret key
$\textrm{sk}_{f^{e}_{i}}$, encrypted under $\Gamma_{\text{E}}$ so that it is recoverable only by $i$.
SS-RFT Metablock: The Contents of $\mathcal{M}^{e}_{\mathcal{H}}$ by Leaders in $\mathcal{H}$

Initialize $\mathcal{M}^{e}_{\mathcal{H}}\leftarrow\phi$

$\mathcal{M}^{e}_{\mathcal{H}}.$Add$-(e,H^{*}(\mathrm{M}^{e^{\prime}}),\Gamma_{\text{vFE}}.\textrm{pp}^{e},\Gamma_{\text{vFE}}.\textrm{Enc}_{\textrm{pp}^{e}}(\textsf{txs}^{e}))_{\Gamma_{\text{Sig}}.L_{e}}$

$\forall i\in[n]$: $\mathcal{M}^{e}_{\mathcal{H}}.$Add$-(i,H^{*}(\textsf{chain}_{i}^{e-1}),f^{e}_{i},(\Gamma_{\text{vFE}}.\textrm{sk}_{f^{e}_{i}})_{\Gamma_{\text{E}}.i^{-1}})_{\Gamma_{\text{Sig}}.L_{e}}$
_The metablock (by adversarial leaders)._ The metablock for SightSteeple-RFT
by rational leaders is presented next. It is the same as that
from the honest leaders, except that $\forall i\in\mathcal{A}$, the secret key
$\textrm{sk}_{f^{e}_{i}}$ under $\Gamma_{\text{vFE}}$ is replaced by
$\textrm{sk}_{f^{*}}$.
SS-RFT Metablock: The Contents of $\tilde{\mathcal{M}}^{e}_{\mathcal{A}}$ by Leaders in $\mathcal{A}$

Initialize $\tilde{\mathcal{M}}^{e}_{\mathcal{A}}\leftarrow\phi$

$\tilde{\mathcal{M}}^{e}_{\mathcal{A}}$.Add$-(e,H^{*}(\mathrm{M}^{e^{\prime}}),\Gamma_{\text{vFE}}.\textrm{pp}^{e},\Gamma_{\text{vFE}}.\textrm{Enc}_{\textrm{pp}^{e}}(\textsf{txs}^{e}))_{\Gamma_{\text{Sig}}.L_{e}}$

$\forall i\in[n]\setminus\mathcal{A}$: $\tilde{\mathcal{M}}^{e}_{\mathcal{A}}$.Add$-(i,H^{*}(\textsf{chain}_{i}^{e-1}),f^{e}_{i},(\Gamma_{\text{vFE}}.\textrm{sk}_{f^{e}_{i}})_{\Gamma_{\text{E}}.i^{-1}})_{\Gamma_{\text{Sig}}.L_{e}}$

$\forall i\in\mathcal{A}$: $\tilde{\mathcal{M}}^{e}_{\mathcal{A}}$.Add$-(i,H^{*}(\textsf{chain}_{i}^{e-1}),f^{e}_{i},(\mathbf{\Gamma_{\text{vFE}}.\textrm{sk}_{f^{*}}})_{\Gamma_{\text{E}}.i^{-1}})_{\Gamma_{\text{Sig}}.L_{e}}$
Note the need for a signature on metablock contents: a rational head player
that is not the current epoch leader could otherwise propose a metablock of its own.
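A unified sketch (ours) of both constructions; the only difference is the key substitution for colluding players. `vfe`, `pke_encrypt_for`, `h_star` and `sign` are hypothetical stand-ins for $\Gamma_{\text{vFE}}$, $\Gamma_{\text{E}}$, $H^{*}$ and $\Gamma_{\text{Sig}}$:

```python
# Sketch (ours): with adversary = set(), this yields the honest metablock
# M^e_H; otherwise it yields the rational leader's metablock.
def build_rft_metablock(e, prev_metablock, txs, players, adversary,
                        view_funcs, f_star, chains, vfe, pke_encrypt_for,
                        h_star, sign):
    pp = vfe.setup()  # fresh Gamma_vFE public parameters for this epoch
    block = [sign((e, h_star(prev_metablock), pp, vfe.encrypt(pp, txs)))]
    for i in players:
        f = view_funcs[i]
        # Rational leaders substitute the full-view key sk_{f*} for every
        # colluding player; honest players always get the key for f^e_i.
        target = f_star if i in adversary else f
        block.append(sign((i, h_star(chains[i]), f,
                           pke_encrypt_for(i, vfe.keygen(target)))))
    return block
```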
### 5.4. The SightSteeple-RFT Protocol
The SightSteeple-RFT Protocol $\mathbf{\Pi}^{\text{ss}}_{\text{rft}}$ is
presented in Algorithm 2. For this protocol, it is assumed that for the
rational adversary $\mathcal{A}$, $|\mathcal{A}|<\frac{n}{3}$.
#### Protocol Outline
For each epoch, the metablock proposing leader is elected as a random member
of $[m]$, as a function of $e$. If the leader is honest, it proposes
$\mathcal{M}^{e}_{\mathcal{H}}$ to the network. Otherwise, the rational leader
proposes $\tilde{\mathcal{M}}^{e}_{\mathcal{A}}$. On receiving the first
metablock from the leader, each honest player $i$ in $[n]$ validates its
contents to ensure that the secret key it received is that for
$\Psi(\kappa_{i})$. The honest head players also validate that
$\textsf{txs}^{e}$ has no double spending transactions. Post validation, the
honest players in $[n]$ reply by broadcasting their vote (denoted by
$\textrm{V}^{e}_{i},\forall i\in[n]$) to the network. Each vote is either a
‘yes’ vote if the validation succeeds, or a ‘no’ vote if the validation fails.
The metablock is notarized once it receives ‘yes’ votes from at least
$\frac{2n}{3}$ players (guaranteed, since all honest players vote ‘yes’) and
no ‘no’ votes. The metablock is finalized
according to the finalization rule of the Byzantine-fault tolerant version of
Streamlet $\Pi^{0}_{\text{bft}}$ (Sec. 3 in (Chan and Shi, 2020)).
#### Rational Player Voting Policy
We now show that it is not necessary for rational players to vote in order to
maximize their utility under $v^{e}_{\mathcal{A}}$, for any epoch $e$.
It is in the interest of rational players that, for the maximization of the
utility function $v^{e}_{\mathcal{A}}$, $\forall e,\textrm{M}^{e}$ is
notarized: if $\textrm{M}^{e}$ is not notarized, $v^{e}_{\mathcal{A}}=0$, but
if $\textrm{M}^{e}$ is notarized, there is a possibility that $\textrm{M}^{e}$
would be finalized, and consequently $v^{e}_{\mathcal{A}}>0$ (since
$\textsf{dist}_{\preceq}(f_{\bot},f^{e}_{i^{\prime}})>0,\forall
i^{\prime}\in\mathcal{A}$). This implies that for metablocks
$\mathcal{M}^{e}_{\mathcal{H}}$ and $\tilde{\mathcal{M}}^{e}_{\mathcal{A}}$,
no rational player will ever vote ‘no’. Further, since honest players will
always vote ‘yes’ for $\mathcal{M}^{e}_{\mathcal{H}}$ and
$\tilde{\mathcal{M}}^{e}_{\mathcal{A}}$, both these metablocks will be
notarized regardless, so the rational players need not vote ‘yes’ either.
Algorithm 2: SightSteeple-RFT ($\mathbf{\Pi}^{\text{ss}}_{\text{rft}}$)

Leader Election: $\forall e,L_{e}:=H^{*}(e)\mod m$

Metablock Proposal: If $L_{e}\in\mathcal{H},\textrm{M}^{e}=\mathcal{M}^{e}_{\mathcal{H}}$. If $L_{e}\in\mathcal{A},\textrm{M}^{e}=\tilde{\mathcal{M}}^{e}_{\mathcal{A}}$. $\forall e,L_{e}$ broadcasts $\textrm{M}^{e}$.

Metablock Validation and Vote (first $\textrm{M}^{e}$ from $L_{e}$): Each honest $i\in[n]$ asserts $f^{e}_{i}=\Psi(\kappa_{i})$ and $\textrm{sk}_{f^{e}_{i}}=_{\Gamma_{\text{vFE}}}\textrm{sk}_{\Psi(\kappa_{i})}$. Each honest $i\in[m]$ also asserts $\textsf{txs}^{e}$ has no double spending. If the assertions succeed for $i$, broadcast $\textrm{V}^{e}_{i}=(i,e,H^{*}(\textrm{M}^{e}),\text{yes})_{\Gamma_{\text{Sig}}.i}$, otherwise broadcast $\textrm{V}^{e}_{i}=(i,e,H^{*}(\textrm{M}^{e}),\text{no})_{\Gamma_{\text{Sig}}.i}$.

Metablock Notarization: $\textrm{M}^{e}$ is notarized when at least $\frac{2n}{3}$ players vote ‘yes’, and no player votes ‘no’.

Metablock Finalization (from Streamlet $\Pi^{0}_{\text{bft}}$): If in any notarized metachain, there exist three hash-linked metablocks with consecutive epoch numbers, the prefix of the metachain up to the second of the three metablocks is considered final. Further, when a metablock is finalized, its parent chain is also finalized.
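The validation, vote and notarization steps of Algorithm 2 admit the following sketch (ours); `verkey` is a stand-in for $\Gamma_{\text{vFE}}.\textsf{VerKey}$ (Appendix A.2.3), `psi` for the credential map $\Psi$, and the head players' payload check is abstracted as `double_spend_free`:

```python
# Sketch (ours): an honest player's vote in Algorithm 2 (the broadcast
# vote would additionally be signed under Gamma_Sig).
def honest_vote(i, e, mb_hash, f_i, sk_i, kappa_i, psi, pp, verkey,
                is_head, payload_view, double_spend_free):
    ok = (f_i == psi(kappa_i)) and verkey(pp, f_i, sk_i)
    if is_head:  # head players can additionally inspect the full payload
        ok = ok and double_spend_free(payload_view)
    return (i, e, mb_hash, "yes" if ok else "no")

def notarized(votes, n):
    """'yes' votes from at least 2n/3 players, and no 'no' votes."""
    decisions = [d for (_, _, _, d) in votes]
    return decisions.count("yes") >= 2 * n / 3 and "no" not in decisions
```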
#### 5.4.1. Correctness
We first show that the best metablock response by rational head players is
$\tilde{\mathcal{M}}^{e}_{\mathcal{A}}$.
Lemma 4 (Rational Leader Metablock). _Assuming that rational players wish to
maximize their utility under $v^{e}_{\mathcal{A}}$, the dominant strategy on
metablock proposal for each rational head player $i^{\prime}\in[m]$ is
$\sigma^{i^{\prime}}_{\tilde{\mathcal{M}}^{e}_{\mathcal{A}}}$, for each epoch
$e$ when $L_{e}=i^{\prime}$._
_Proof._ The payoff for rational leaders under $v^{e}_{\mathcal{A}}$ depends
on (i) the revenue from the block payload confirmation; and (ii) the
visibility into the list of transactions. For (i), note that the rational
leader may attempt to fork the metachain to orphan some metablocks, if it
results in a higher revenue for it. The rational leader may also consider
announcing two metablocks in quick succession for the same epoch in which it
is a leader, if it receives a second payload in the same epoch with a higher
possible revenue (consider, for some epoch $e$, that $i^{\prime}$ receives
$\textsf{txs}^{e}_{1}$ at $e$ and $\textsf{txs}^{e}_{2}$ at $e+\epsilon$, for
a small $\epsilon$, with
$\tau_{\mathcal{A}}(\textsf{txs}^{e}_{2})>\tau_{\mathcal{A}}(\textsf{txs}^{e}_{1})$;
$i^{\prime}$ would then announce metablocks for both payloads). For (ii), the
rational leaders’ payoff is maximized when all faulty players learn
$\textsf{txs}^{e},\forall e$. This can only happen when each faulty player
receives the secret key $\Gamma_{\text{vFE}}.\textrm{sk}_{f^{*}}$ for each
epoch in which a rational player is elected leader.
Finally, it is easy to see that $v^{e}_{\mathcal{A}}=0$ if the rational
leader’s block is unnotarized, and $v^{e}_{\mathcal{A}}>0$ if the rational
leader’s block is notarized (even if the payload related revenue is zero, the
payload view payoff is positive). Consequently, both (i) and (ii) are
achievable only when a rational leader’s metablock is notarized, which is only
possible when each honest player $i$ receives
$\Gamma_{\text{vFE}}.\textrm{sk}_{f^{e}_{i}}$.
These arguments imply that the best choice of a metablock from rational
leaders $i^{\prime}\in[m]$ is $\tilde{\mathcal{M}}^{e}_{\mathcal{A}}$, denoted
by the strategy $\sigma^{i^{\prime}}_{\tilde{\mathcal{M}}^{e}_{\mathcal{A}}}$.
$\hfill\square$
We now show that the SightSteeple-RFT protocol is correct.
Theorem 5 (SS-RFT Correctness). _The SightSteeple-RFT protocol
$\mathbf{\Pi}^{ss}_{\text{rft}}$ achieves functional blockchain consensus, in
the presence of a rational-fault adversary $\mathcal{A}$, with
$|\mathcal{A}|<\frac{n}{3}$._
_Proof._ Since the notarization and finalization rules in
$\mathbf{\Pi}^{ss}_{\text{rft}}$ are equivalent to those in
$\Pi^{0}_{\text{bft}}$, the $\mathbf{\Pi}^{ss}_{\text{rft}}$ metachain will be
consistent across all players (Theorem 3 in (Chan and Shi, 2020)). We will now
show that $\mathbf{\Pi}^{ss}_{\text{rft}}$ achieves the three goals of
functional blockchain consensus (Definition 2), considering a consistent
metablock $\textrm{M}^{e}$ from an arbitrary epoch $e$, and remembering the
metablock response from honest leaders is $\mathcal{M}^{e}_{\mathcal{H}}$ and
that from rational leaders is $\tilde{\mathcal{M}}^{e}_{\mathcal{A}}$ (Lemma
4):
(i) Functional Hierarchy Consistency: Since all honest players vote on the
genesis block which contains $([n],\mathcal{C},\mathbb{F},\Psi)$, and vote
‘yes’ on the metablock $\textrm{M}^{e}$ which contains
$(f^{e}_{i})_{i\in[n]}$, it is implied that all honest players agree on
$(\Psi(\kappa_{i})=f^{e}_{i}\in\mathbb{F})_{i\in[n]}$.
(ii) Block Payload View Integrity: Since each honest player voted ‘yes’ on the
metablock (which is one of $\mathcal{M}^{e}_{\mathcal{H}}$ or
$\tilde{\mathcal{M}}^{e}_{\mathcal{A}}$), and no player voted ‘no’, it is
implied that the verification of $f^{e}_{i}$ under $\Gamma_{\text{vFE}}$
succeeded for each honest player $i\in[n]$, and so it is true that each honest
player knows that each honest player $i\in[n]$ agrees on
$f^{e}_{i}(\textsf{txs}^{e})$. Further, since each honest head player voted
‘yes’, it is true that $\textsf{txs}^{e}$ doesn’t contain double spending
transactions.
(iii) Liveness: The $\mathbf{\Pi}^{ss}_{\text{rft}}$ metablock finalization
rule is identical to the $\Pi^{0}_{\text{bft}}$ block finalization rule. Thus,
the liveness of $\mathbf{\Pi}^{ss}_{\text{rft}}$ is implied by Theorem 6 in
(Chan and Shi, 2020) (details in Appendix A.1). $\hfill\square$
#### The $\mathbf{\Pi}^{\text{ss}}_{\text{rft}}$ metachain implies each player
chain
Consider, for any epoch $e$, the metachain $\textsf{mchain}^{e}$ and the most
recent metablock $\textrm{M}^{e}$ in it. Also consider, for each honest player
$i\in[n]$, the sub-metablock $\textrm{M}^{e}_{i}$ of $\textrm{M}^{e}$.
$\textrm{M}^{e}_{i}$ contains:
1\.
$(e,H^{*}(\mathrm{M}^{e^{\prime}}),\Gamma_{\text{vFE}}.\textrm{pp}^{e},\Gamma_{\text{vFE}}.\textrm{Enc}(\textsf{txs}^{e}))_{\Gamma_{\text{Sig}}.L_{e}}$
2\.
$(i,H^{*}(\textsf{chain}_{i}^{e-1}),f^{e}_{i},(\Gamma_{\text{vFE}}.\textrm{sk}_{f^{e}_{i}})_{\Gamma_{\text{E}}.i^{-1}})_{\Gamma_{\text{Sig}}.L_{e}}$
From both these messages, it is easy for player $i$ to imply
$\textsf{chain}^{e}_{i}=(\textsf{chain}^{e-1}_{i},H^{*}(f^{e^{\prime}}_{i}(\textsf{txs}^{e^{\prime}})),f^{e}_{i}(\textsf{txs}^{e}))$,
by recovering the encrypted secret key $\textrm{sk}_{f^{e}_{i}}$ under
$\Gamma_{\text{E}}$, followed by recovering $f^{e}_{i}(\textsf{txs}^{e})$
under $\Gamma_{\text{vFE}}$.
### 5.5. Special Case: Perfect SightSteeple-RFT
We outline a special case where each player, honest or rational-faulty, agrees
on a correct block payload view for each epoch of the SightSteeple metachain.
Given the player network $[n]$, consider the case where, for each credential,
there are at least $a_{0}$ players with that credential, and among those
players, there is at least one honest player, and less than $a_{0}$ rational
players. Now, by using a single $(n,a_{0})$ threshold encryption (Desmedt and
Frankel, 1989) of the secret payload view function key for all players with
the same credential, the rational leaders would be forced to encrypt the
correct view function key in the metablock for all faulty players (if the
rational leader wants its metablock to be notarized by the honest players).
Consequently, perfect SightSteeple-RFT can be achieved, where $\forall
e,\forall i\in[n]$, $i$ learns nothing other than
$f^{e}_{i}(\textsf{txs}^{e})$.
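As a toy illustration (ours) of the $(n,a_{0})$-threshold primitive underlying such a construction — Shamir secret sharing over a prime field, not the Desmedt-Frankel scheme itself — any $a_{0}$ shares of a view-function key recover it, while fewer reveal nothing:

```python
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def share(secret: int, n: int, a0: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any a0 of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(a0 - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at 0 over exactly a0 shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for k, (xk, _) in enumerate(shares):
            if k != j:
                num = num * (-xk) % P
                den = den * (xj - xk) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

key = 123456789
assert reconstruct(share(key, n=7, a0=3)[:3]) == key
```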
Giving an exact construction and correctness proof for this special case of
SightSteeple-RFT is left as a future exercise.
## 6\. Discussion
### 6.1. Functional Blockchain Consensus for dApps
We discuss possible applications of asymmetric distributed ledgers resulting
from functional blockchain consensus.
_Cryptocurrencies(Bonneau et al., 2015) with sensitive transactions._ We
demonstrate how asymmetric distributed ledgers for cryptocurrencies with
privileged transactions, based on sub-types of functional encryption, can be
constructed, assuming the init-party is a cross-jurisdictional network of
federal regulators. The first sub-type of functional encryption we consider is
attribute based encryption (ABE) (Lewko et al., 2010), which allows recovery
of the plaintext if the decryptor satisfies certain attributes. Using ABE,
SightSteeple can be defined to allow players in specific federal jurisdictions
to learn the complete list of transactions. The next sub-type of functional
encryption we consider is predicate encryption (PE) (Boneh et al., 2011),
which allows recovery of the plaintext if some predicate on the plaintext is
true (based on the key held by the decryptor). SightSteeple can be defined
with PE to allow a subset of players to learn the list of transactions if a
specific transactor (say Alice) has a transaction in it. Finally, a functional
encryption scheme with the inner-product functionality (IP) (Abdalla et al.,
2015) can be used to learn the sum of a sub-sequence of the plaintext.
SightSteeple with IP can be used to allow players to learn the sum value of
all crypto-tokens exchanged in the list of transactions.
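As a toy illustration (ours) of the inner-product functionality itself (what the decryptor learns, independent of any concrete FE scheme): with a key for the all-ones vector, the decryptor recovers only the total value exchanged.

```python
# Illustration (ours): the IP functionality <x, y> on a payload of token
# amounts x, with functional key vector y = (1, ..., 1).
tx_values = [5, 12, 3, 20]   # plaintext vector x (token amounts)
y = [1] * len(tx_values)     # key vector for "sum of all amounts"
total = sum(xi * yi for xi, yi in zip(tx_values, y))
assert total == 40           # all that SightSteeple-with-IP reveals
```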
_Asymmetric Decentralized Finance (DeFi)(Werner et al., 2021; Zetzsche et al.,
2020) applications._ We present some asymmetric financial market solutions
that can result from functional blockchain consensus. First, asymmetric
automated markets may be defined by achieving functional blockchain consensus
on a subset of asset reserves per player (thereby locking in a sub-pool of
assets in the smart contract corresponding to each player). Next, asymmetric
portfolio management and exposure can be achieved through functional
blockchain consensus, to facilitate different DeFi protocols, such as
protocols for loanable funds and automated market makers, for different
subsets of players. Finally, derivative trading under different combinations
of synthetic assets, futures, perpetual swaps and options, for different
subsets of players, may be achieved through functional blockchain consensus.
The init-party for such applications could be a benevolent dictator (Werner et
al., 2021), that initializes each application appropriately for financial
governance.
_Other dApps(Casino et al., 2019)._ As a final example, functional blockchain
consensus can facilitate the need for asymmetric records for agreement on
classified information in governance (Oliveira et al., 2020; ConsenSys, 2022)
(for instance on citizenship and voting records), healthcare (McGhin et al.,
2019; Hölbl et al., 2018; Builtin, 2022) (on patient healthcare records), and
decentralized IoT network management (Casino et al., 2019) requiring agreement
on sensitive RFID sensor data such as from supply chains, transportation
networks, and inventory management.
### 6.2. Block Payload View Privilege Alteration
It has been shown in Section 5.5, that perfect rational-fault tolerance in
SightSteeple, where, $\forall e$, each player $i\in[n]$ provably learns
$f^{e}_{i}(\textsf{txs}^{e})$, with
$(\Psi(\kappa_{i})=f^{e}_{i}\in\mathbb{F})_{i\in[n]}$, is only achievable as a
special case. In general, the rational players can escalate beyond their
privileges to learn the entire payload, whenever a rational head player is elected as the
metablock proposer. We revisit the privilege alteration properties of
SightSteeple, seen so far.
_Inherent Collusion to Supersede Privilege._ The adversary in SightSteeple-RFT
implicitly learns
$\sup_{\preceq}\\{f^{e}_{i^{\prime}}(\textsf{txs}^{e})\\}_{i^{\prime}\in\mathcal{A}}$,
for each epoch $e$, as it controls all players in $\mathcal{A}$.
_Privilege alteration does not result in escalated information going on-chain
for honest players._ It has been established that, given a rational-
fault adversary, the metablock response by honest leaders in the SightSteeple
protocol is $\mathcal{M}^{e}_{\mathcal{H}}$, and the best metablock response
by rational leaders is $\tilde{\mathcal{M}}^{e}_{\mathcal{A}}$ (Lemma 4). In
both instances, it is true that the secret functional encryption key supplied
for each honest $i\in[n]$ is no different from
$\textrm{sk}_{\Psi(\kappa_{i})}$. This implies that although the rational
players might learn the entire list of transactions, the correctness is
preserved for all honest players.
_Off-Chain Privilege Preservation._ In future, in order to ensure $\forall e$,
each player $i\in[n]$ provably learns $f^{e}_{i}(\textsf{txs}^{e})$, with
$(\Psi(\kappa_{i})=f^{e}_{i}\in\mathbb{F})_{i\in[n]}$, metablock proposal may
be made an off-chain activity. Options to outsource metablock creation include
payload view function key generation through decentralized blockchain-based
multi-party computation (Zhong et al., 2019), or through dynamic decentralized
functional encryption (Chotard et al., 2020), or through an alternate, oracle
blockchain system (Cui et al., 2020b).
### 6.3. SightSteeple Protocol Optimization
The present version of SightSteeple has some overheads in terms of space
complexity of the proposed metablock, and overall communication complexity per
epoch of metablock proposal. Both SightSteeple-CFT and SightSteeple-RFT have
metablock size $|\textsf{M}^{e}|\in\Theta(n)$. Further, since the base
protocol Streamlet echoes each message (Cohen and Malkhi, 2022), the current
communication complexity $\forall e$ is
$n^{2}(|\textsf{M}^{e}|+n|\textsf{V}^{e}|)=\Theta(n^{3})$.
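A quick sanity check (ours) on the cubic growth, with $|\textsf{M}^{e}|=c\cdot n$ and a constant vote size $v$:

```python
# Per-epoch bits exchanged: n^2 * (|M^e| + n * |V^e|) with |M^e| = c*n.
def bits_per_epoch(n: int, c: int = 1, v: int = 1) -> int:
    return n**2 * (c * n + n * v)

# Doubling n multiplies the communication by 8, as expected for Theta(n^3).
assert bits_per_epoch(200) / bits_per_epoch(100) == 8
```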
In future, we would like to reduce the metablock size, migrate the base
protocol from Streamlet to HotStuff (Yin et al., 2019) to reduce the
communication complexity, and provide an API for implementation (Cohen and
Malkhi, 2022; Viswanath, 2022).
### 6.4. Function and Block Payload Privacy
We give a brief discussion on whether any information about the payload,
beyond what is presented in the metablock, is leaked, under the associated
functional encryption scheme. The following arguments are based on message
privacy (from Appendix A.2), which translates to payload privacy in
SightSteeple, remembering that payloads are the functionally encrypted
messages in the metablocks.
_Under crash-fault tolerance._ $\Gamma_{\text{aFE}}$ achieves full message
privacy (Brakerski and Segev, 2018), which implies that SightSteeple-CFT
achieves full payload privacy for each function in $\mathbb{F}$: no player
can infer any extra information about the payload beyond what is conveyed
for them individually in the metachain.
_Under rational-fault tolerance._ Function Privacy (Brakerski and Segev, 2018)
is not achieved in the present version of SightSteeple, as the view functions
are public in the metablock, in order to ensure the functional hierarchy
consistency. Block payload security requirements are implied by the re-
instantiation of the verifiable functional encryption scheme parameters per
epoch, in SightSteeple-RFT. In the SightSteeple-RFT protocol, the adversary
sees $1$ payload and less than $\frac{n}{3}$ functions applied on the payload,
in each epoch (which has a separate instantiation of the verifiable functional
encryption scheme parameters). Thus SightSteeple-RFT requires at least
$1$-selective-message payload privacy and at least $\frac{n}{3}$-selective-
function payload privacy (Brakerski and Segev, 2018) (security notions
outlined in A.2) under $\Gamma_{\text{vFE}}$, proving which is beyond the
scope of this contribution.
## 7\. Future Directions
We have initiated a new line of enquiry into functional blockchain consensus,
and proposed a first functional blockchain consensus protocol SightSteeple
providing an asymmetric visibility into the list of transactions. To conclude,
we outline some problems emerging from this contribution, that can be
addressed in future.
_Off-chain metablock creation for privilege preservation._ Presently, the
block payload view decryption is part of the consensus protocol, as part of
the validation of the metablock. In future, SightSteeple can be amended to
eliminate privilege escalation by adversarial metablock proposers, through
outsourced (and, if needed, verifiable) decryption under a functional encryption
scheme, using standard blockchains (Cui et al., 2020b).
_In hidden credentials’ networks, understanding the tradeoff between
expressiveness of function families $\mathbb{F}$ versus function-privacy under
various FE schemes._ SightSteeple is constructed to reveal the credentials and
view functions for each player. In future, for privacy, if the credentials and
view functions per player need not be revealed while achieving functional
hierarchy consistency, then function-private functional encryption schemes
(Brakerski and Segev, 2018) may be employed to achieve functional blockchain
consensus. Given an adversary $\mathcal{A}\subseteq[n]$, collusion can be
prevented using function-private functional encryption, to prevent
$\mathcal{A}$ from learning more than
$\\{f_{i^{\prime}}(\textsf{txs})\\}_{i^{\prime}\in\mathcal{A}}$, in terms of
payload information and view functions, for each payload txs going on-chain.
However, in this case, the permissible set of view functions $\mathbb{F}$
supported by the functional encryption scheme is an open question, which may
be addressed in future.
_Functional blockchain consensus in the BAR and ByRa models(McMenamin et al.,
2021)._ SightSteeple has been constructed to be resilient to crash-faults and
rational-faults. It has been shown that SightSteeple cannot be appropriately
modified to achieve Byzantine-fault tolerance (Section 5.1). In future,
alternate protocols for functional blockchain consensus may be proposed for
tolerance to a combination of Byzantine and rational players in the presence
of altruistic/honest players (the BAR model), or functional blockchain
consensus may be attained in the absence of honest players altogether
(identical to the ByRa model of the Tenderstake (McMenamin et al., 2021)
protocol).
_Towards asymmetric smart contracts._ Traditionally, for each participant in
the distributed system, the execution logic of the smart contract is
predicated on $\textsf{chain}^{*}$. Given the hierarchical player blockchains
resulting from SightSteeple, future _functional_ smart contracts in credential
driven distributed systems, may base their execution logic on
$\textsf{chain}^{e}_{i}$ for player $i$ (or any process privy to player $i$’s
blockchain), or might even base their execution logic on
$\inf_{\preceq}\\{\textsf{chain}^{e}_{i}\\}_{i\in[n]}$ for each player (we
use the $\inf_{\preceq}$ notation on player blockchains to give the same
implication as the $\inf_{\preceq}$ notation on payload views in Section 2.4).
_Proposing declassification blockchain consensus._ In the instance that a
peer-to-peer network requires agreement on sensitive information that cannot
be revealed in completion immediately, but can safely be divulged in the
future, a declassification blockchain protocol can be defined to reach the
said goal. To that end, we propose the following definition of
declassification consensus, which may be realized using a SightSteeple-like
protocol in the future.
Proposed Definition (Declassification Consensus). _Given a declassification
window $H$, $\forall h,\exists f\in\mathbb{F},f\neq f^{*}$, such that if the
honest players in $[n]$ finalize $f(\textsf{txs}^{h})$ at height $h$, then
they also finalize $\textsf{txs}^{h}$ at height $h+H$._
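A minimal sketch (ours) of checking this property on a finalized history, where each height carries either a proper view record or the full payload:

```python
# finalized[h] is ("view", f_name, value) with f != f*, or ("full", txs).
def declassification_holds(finalized, H):
    for h, record in enumerate(finalized):
        if record[0] == "view":
            declass = h + H
            # Only heights already H blocks deep can be checked.
            if declass < len(finalized) and finalized[declass][0] != "full":
                return False
    return True
```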
We believe that through this contribution, and through the possible future
directions as stated above, SightSteeple would be a stepping stone towards
defining new consensus paradigms and protocols for an asymmetric agreement on
privileged information.
## References
* Abdalla et al. (2015) Michel Abdalla, Florian Bourse, Angelo De Caro, and David Pointcheval. 2015. Simple functional encryption schemes for inner products. In _IACR International Workshop on Public Key Cryptography_. Springer, 733–751.
* Ahuja et al. (2021) Poonam Ahuja, Deepak Ahuja, and Aditya Ahuja. WIPO International Publication No. WO 2021/205241 A1, 14 Oct. 2021. System and method for establishing a trusted functional blockchain consensus. https://patents.google.com/patent/WO2021205241A1/en.
* Badrinarayanan et al. (2016) Saikrishna Badrinarayanan, Vipul Goyal, Aayush Jain, and Amit Sahai. 2016. Verifiable functional encryption. In _International Conference on the Theory and Application of Cryptology and Information Security_. Springer, 557–587.
* Banerjee and Chandrakasan (2021) Utsav Banerjee and Anantha P Chandrakasan. 2021. A Low-Power Elliptic Curve Pairing Crypto-Processor for Secure Embedded Blockchain and Functional Encryption. In _2021 IEEE Custom Integrated Circuits Conference (CICC)_. IEEE, 1–2.
* Bano et al. (2019) Shehar Bano, Alberto Sonnino, Mustafa Al-Bassam, Sarah Azouvi, Patrick McCorry, Sarah Meiklejohn, and George Danezis. 2019\. SoK: Consensus in the age of blockchains. In _Proceedings of the 1st ACM Conference on Advances in Financial Technologies_. 183–198.
* Boneh et al. (2011) Dan Boneh, Amit Sahai, and Brent Waters. 2011. Functional encryption: Definitions and challenges. In _Theory of Cryptography Conference_. Springer, 253–273.
* Bonneau et al. (2015) Joseph Bonneau, Andrew Miller, Jeremy Clark, Arvind Narayanan, Joshua A Kroll, and Edward W Felten. 2015\. Sok: Research perspectives and challenges for bitcoin and cryptocurrencies. In _2015 IEEE symposium on security and privacy_. IEEE, 104–121.
* Brakerski and Segev (2018) Zvika Brakerski and Gil Segev. 2018. Function-private functional encryption in the private-key setting. _Journal of Cryptology_ 31, 1 (2018), 202–225.
* Builtin (2022) Builtin. (Online; Accessed 21-Feb-2022). _Blockchain in Healthcare: 15 Examples_. https://builtin.com/blockchain/blockchain-healthcare-applications-companies
* Cachin (2021) Christian Cachin. 2021\. Asymmetric distributed trust. In _International Conference on Distributed Computing and Networking 2021_. 3–3.
* Cachin and Zanolini (2021) Christian Cachin and Luca Zanolini. 2021. Asymmetric Asynchronous Byzantine Consensus. In _Data Privacy Management, Cryptocurrencies and Blockchain Technology_. Springer, 192–207.
* Casino et al. (2019) Fran Casino, Thomas K Dasaklis, and Constantinos Patsakis. 2019\. A systematic literature review of blockchain-based applications: Current status, classification and open issues. _Telematics and informatics_ 36 (2019), 55–81.
* Chan and Shi (2020) Benjamin Y Chan and Elaine Shi. 2020. Streamlet: Textbook streamlined blockchains. In _Proceedings of the 2nd ACM Conference on Advances in Financial Technologies_. 1–11.
* Chotard et al. (2020) Jérémy Chotard, Edouard Dufour-Sans, Romain Gay, Duong Hieu Phan, and David Pointcheval. 2020. Dynamic decentralized functional encryption. In _Annual International Cryptology Conference_. Springer, 747–775.
* Cohen and Malkhi (2022) Shir Cohen and Dahlia Malkhi. (Online; Accessed 21-Feb-2022). _What They Did not Teach you in Streamlet_. https://dahliamalkhi.github.io/posts/2020/12/what-they-didnt-teach-you-in-streamlet/
* ConsenSys (2022) ConsenSys. (Online; Accessed 21-Feb-2022). _Blockchain in Government and Public Sector_. https://consensys.net/blockchain-use-cases/government-and-the-public-sector/
* Cui et al. (2020b) Hui Cui, Zhiguo Wan, Xinlei Wei, Surya Nepal, and Xun Yi. 2020b. Pay as you decrypt: Decryption outsourcing for functional encryption using blockchain. _IEEE Transactions on Information Forensics and Security_ 15 (2020), 3227–3238.
* Cui et al. (2020a) Zhihua Cui, XUE Fei, Shiqiang Zhang, Xingjuan Cai, Yang Cao, Wensheng Zhang, and Jinjun Chen. 2020a. A hybrid blockchain-based identity authentication scheme for multi-WSN. _IEEE Transactions on Services Computing_ 13, 2 (2020), 241–251.
* Denning (1976) Dorothy E Denning. 1976\. A lattice model of secure information flow. _Commun. ACM_ 19, 5 (1976), 236–243.
* Desmedt and Frankel (1989) Yvo Desmedt and Yair Frankel. 1989. Threshold cryptosystems. In _Conference on the Theory and Application of Cryptology_. Springer, 307–315.
* Dolev and Strong (1983) Danny Dolev and H. Raymond Strong. 1983. Authenticated algorithms for Byzantine agreement. _SIAM J. Comput._ 12, 4 (1983), 656–666.
* Dwork et al. (1988) Cynthia Dwork, Nancy Lynch, and Larry Stockmeyer. 1988\. Consensus in the presence of partial synchrony. _Journal of the ACM (JACM)_ 35, 2 (1988), 288–323.
* Garg et al. (2016) Sanjam Garg, Craig Gentry, Shai Halevi, Mariana Raykova, Amit Sahai, and Brent Waters. 2016\. Candidate indistinguishability obfuscation and functional encryption for all circuits. _SIAM J. Comput._ 45, 3 (2016), 882–929.
* Garg et al. (2014) Sanjam Garg, Craig Gentry, Shai Halevi, and Mark Zhandry. 2014. Fully Secure Functional Encryption without Obfuscation. _IACR Cryptol. ePrint Arch._ 2014 (2014), 666.
* Gorbunov et al. (2012) Sergey Gorbunov, Vinod Vaikuntanathan, and Hoeteck Wee. 2012\. Functional encryption with bounded collusions via multi-party computation. In _Annual Cryptology Conference_. Springer, 162–179.
* Hölbl et al. (2018) Marko Hölbl, Marko Kompara, Aida Kamišalić, and Lili Nemec Zlatolas. 2018. A systematic review of the use of blockchain in healthcare. _Symmetry_ 10, 10 (2018), 470.
* Lewko et al. (2010) Allison Lewko, Tatsuaki Okamoto, Amit Sahai, Katsuyuki Takashima, and Brent Waters. 2010\. Fully secure functional encryption: Attribute-based encryption and (hierarchical) inner product encryption. In _Annual International Conference on the Theory and Applications of Cryptographic Techniques_. Springer, 62–91.
* McGhin et al. (2019) Thomas McGhin, Kim-Kwang Raymond Choo, Charles Zhechao Liu, and Debiao He. 2019. Blockchain in healthcare applications: Research challenges and opportunities. _Journal of Network and Computer Applications_ 135 (2019), 62–75.
* McMenamin et al. (2021) Conor McMenamin, Vanesa Daza, and Matteo Pontecorvi. 2021\. Achieving State Machine Replication without Honest Players. In _Proceedings of the 3rd ACM Conference on Advances in Financial Technologies_.
* Mirkin et al. (2020) Michael Mirkin, Yan Ji, Jonathan Pang, Ariah Klages-Mundt, Ittay Eyal, and Ari Juels. 2020\. BDoS: Blockchain denial-of-service. In _Proceedings of the 2020 ACM SIGSAC conference on Computer and Communications Security_. 601–619.
* Oliveira et al. (2020) Thays A Oliveira, Miquel Oliver, and Helena Ramalhinho. 2020\. Challenges for connecting citizens and smart cities: ICT, e-governance and blockchain. _Sustainability_ 12, 7 (2020), 2926.
* Shi (2020) Elaine Shi. 2020\. Foundations of Distributed Consensus and Blockchains. https://www.distributedconsensus.net/. Book (Publicly Available).
* Sliwinski and Wattenhofer (2019) Jakub Sliwinski and Roger Wattenhofer. 2019. Abc: Asynchronous blockchain without consensus. _arXiv preprint arXiv:1909.10926_ (2019).
* Son et al. (2020) Ye-Byoul Son, Jong-Hyuk Im, Hee-Yong Kwon, Seong-Yun Jeon, and Mun-Kyu Lee. 2020. Privacy-preserving peer-to-peer energy trading in blockchain-enabled smart grids using functional encryption. _Energies_ 13, 6 (2020), 1321.
* Viswanath (2022) Pramod Viswanath. (Online; Accessed 21-Feb-2022). _Blockchain protocols with finality: Streamlet and HotStuff_. https://courses.grainger.illinois.edu/ece598pv/sp2021/lectureslides2021/ECE_598_PV_course_notes14.pdf
* Werner et al. (2021) Sam M Werner, Daniel Perez, Lewis Gudgeon, Ariah Klages-Mundt, Dominik Harz, and William J Knottenbelt. 2021. Sok: Decentralized finance (defi). _arXiv preprint arXiv:2101.08778_ (2021).
* Xiao et al. (2020) Yang Xiao, Ning Zhang, Wenjing Lou, and Y Thomas Hou. 2020\. A survey of distributed consensus protocols for blockchain networks. _IEEE Communications Surveys and Tutorials_ 22, 2 (2020), 1432–1465.
* Yin et al. (2019) Maofan Yin, Dahlia Malkhi, Michael K Reiter, Guy Golan Gueta, and Ittai Abraham. 2019\. HotStuff: BFT consensus with linearity and responsiveness. In _Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing_. 347–356.
* Yurchenko et al. (2020) Artem Yurchenko, Mahbuba Moni, Daniel Peters, Jan Nordholz, and Florian Thiel. 2020. Security for Distributed Smart Meter: Blockchain-based Approach, Ensuring Privacy by Functional Encryption.. In _CLOSER_. 292–301.
* Zetzsche et al. (2020) Dirk A Zetzsche, Douglas W Arner, and Ross P Buckley. 2020\. Decentralized finance. _Journal of Financial Regulation_ 6, 2 (2020), 172–203.
* Zhong et al. (2019) Hanrui Zhong, Yingpeng Sang, Yongchun Zhang, and Zhicheng Xi. 2019. Secure multi-party computation on blockchain: An overview. In _International Symposium on Parallel Architectures, Algorithms and Programming_. Springer, 452–460.
* Zhu et al. (2019) Saide Zhu, Zhipeng Cai, Huafu Hu, Yingshu Li, and Wei Li. 2019. zkCrowd: a hybrid blockchain-based crowdsourcing platform. _IEEE Transactions on Industrial Informatics_ 16, 6 (2019), 4196–4205.
## Appendix A Background
### A.1. Streamlet: Main Results
Streamlet (Chan and Shi, 2020) is a simple blockchain protocol where consensus
evolves in four streamlined stages to achieve consistency and liveness: (i) a
block is proposed by a random leader on the set of all players; (ii) the first
correct block seen by honest players is voted on; (iii) a block is considered
‘notarized’ once a threshold of players vote on it; and lastly (iv) notarized
block(s) are finalized under different finalization rules depending on the
network model and the power of the adversary.
For our contribution, we would only consider Streamlet over a partially
synchronous network (Dwork et al., 1988), with a crash-fault adversary of size
$<\frac{n}{2}$ in the network (denoted by $\Pi^{0}_{\text{cft}}$), or a
Byzantine-fault adversary of size $<\frac{n}{3}$ in the network (denoted by
$\Pi^{0}_{\text{bft}}$).
For both $\Pi^{0}_{\text{cft}}$ and $\Pi^{0}_{\text{bft}}$, the block
finalization rule states that if a player sees three adjacent notarized
blocks, with consecutive epoch numbers, then the second of the three blocks,
along with its parent chain, is finalized. The same finalization rule is
applied to $\mathbf{\Pi}^{ss}_{\text{cft}}$ and
$\mathbf{\Pi}^{ss}_{\text{rft}}$.
Further, for both $\Pi^{0}_{\text{cft}}$ and $\Pi^{0}_{\text{bft}}$, the proof
of consistency is similar. First, it is shown that for any epoch, for any
honest player’s blockchain snapshot, at most one block is notarized. Next, it
is shown that given a block branch with three adjacent notarized blocks with
three consecutive epoch numbers, there cannot exist a notarized block at
length 2 in any competing branch. These basic arguments lead to Theorems 3 and
12 on consistency for Byzantine-fault tolerant Streamlet and crash-fault
tolerant Streamlet respectively, in (Chan and Shi, 2020).
The liveness theorems of $\Pi^{0}_{\text{cft}}$ and $\Pi^{0}_{\text{bft}}$ are
also identical, and are given next.
Streamlet Liveness (Theorems 6 and 13 in (Chan and Shi, 2020)). _After GST,
suppose that there are 5 consecutive epochs $e,e+1,...,e+4$, all with honest
leaders, then, by the beginning of epoch $e+5$, every honest node must have
observed a new final block that was not final at the beginning of epoch $e$.
Moreover, this new block was proposed by an honest leader._
### A.2. Fundamentals of Functional Encryption
Functional encryption differs from traditional encryption by allowing the
decryptor to recover _any function of_ the message from the encryption of the
message, instead of allowing the decryptor to recover the message from its
encryption. We outline the basics of functional encryption and its variants
relevant to our contribution. We then highlight various notions of security
that different functional encryption schemes may achieve.
#### A.2.1. Basic Functional Encryption
A functional encryption scheme (Boneh et al., 2011), given a set of functions
$\mathbf{F}$ over some message space $M$, is a tuple of four probabilistic
polynomial time algorithms
$(\textsf{Setup},\textsf{KeyGen},\textsf{Enc},\textsf{Dec})$ where, $\forall
m\in M$:
$(pp,msk)\leftarrow\textsf{Setup}(1^{\lambda})$
$sk_{f}\leftarrow\textsf{KeyGen}(msk,f)$ for some $f\in\mathbf{F}$
$ctx\leftarrow\textsf{Enc}_{pp}(m)$
$f(m)\leftarrow\textsf{Dec}(sk_{f},ctx)$
where the decryption succeeds with overwhelming probability in the
security parameter $\lambda$. The parameters $pp$ are public, whereas the key
$msk$ to generate the function secret key(s) is private.
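A toy, deliberately insecure instantiation (ours) that only fixes the shapes of the four algorithms; a real scheme hides $m$ inside the ciphertext, which this sketch does not attempt:

```python
import secrets

def setup(lam: int = 128):
    msk = secrets.token_bytes(lam // 8)  # master secret key (kept private)
    pp = b"public-params"                # public parameters
    return pp, msk

def keygen(msk: bytes, f):
    return ("sk", msk, f)                # sk_f binds msk and the function f

def enc(pp: bytes, m):
    return ("ctx", pp, m)                # toy: no actual hiding of m

def dec(sk, ctx):
    _, _, f = sk
    _, _, m = ctx
    return f(m)                          # decryptor learns exactly f(m)

pp, msk = setup()
sk_len = keygen(msk, len)                # a key for the length function
assert dec(sk_len, enc(pp, "hello")) == 5
```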
#### A.2.2. Functional Encryption for all Circuits
A functional encryption scheme for all circuits has the same specification as
a standard functional encryption scheme, except that it supports functionality
(decryption) $\mathbf{F}$ for all efficiently computable functions on the
message space. Examples of such schemes are (Garg et al., 2016) and (Garg et
al., 2014). The scheme (Garg et al., 2016) achieves selective-message message
privacy, and the scheme (Garg et al., 2014) achieves full message privacy
(both security notions defined below). We denote a functional encryption
scheme for all circuits by $\Gamma_{\text{aFE}}$.
#### A.2.3. Verifiable Functional Encryption
A verifiable functional encryption scheme (Badrinarayanan et al., 2016)
supports, in addition to the base algorithms
$(\textsf{Setup},\textsf{KeyGen},\textsf{Enc},\textsf{Dec})$, two additional
algorithms
$(\textsf{VerCT},\textsf{VerKey})$, such that
$0/1\leftarrow\textsf{VerCT}(pp,ctx)$ (output true if the ciphertext was
generated using the correct public parameters)
$0/1\leftarrow\textsf{VerKey}(pp,f,sk_{f})$ (output true if the secret
function key indeed corresponds to the function $f$)
Verifiable functional encryption works by modifying an existing functional
encryption scheme. Reasonable candidates for the underlying functional
encryption scheme are (Garg et al., 2016) (which achieves selective-message
message privacy) and (Gorbunov et al., 2012) (which achieves selective-
function message privacy). We denote a verifiable functional encryption scheme
by $\Gamma_{\text{vFE}}$.
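Continuing the toy interface from A.2.1 (ours), the two verification algorithms let an honest player reject a key supplied for the wrong function, which is exactly the defence against Attack 2:

```python
def ver_ct(pp, ctx):
    """True iff the ciphertext was formed under the public parameters pp."""
    return ctx[0] == "ctx" and ctx[1] == pp

def ver_key(pp, f, sk):
    """True iff the secret key really corresponds to the function f."""
    return sk[0] == "sk" and sk[2] is f
```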
#### A.2.4. Security of Functional Encryption Schemes
We briefly discuss different notions of message privacy security of functional
encryption schemes (Brakerski and Segev, 2018). This notion of security
translates to the security properties of the payload under SightSteeple. We
will denote the adversary by $\mathbb{A}$.
_Valid message privacy adversary._ $\mathbb{A}$ is a valid (polynomial time)
message privacy adversary if for all functions $\\{f_{i}\\}_{i\leq T}$ for
which it queries the KeyGen oracle of the scheme for secret keys, and for all
messages $\\{m_{j}\\}_{j\leq T^{\prime}}$ it receives encryptions of under the
scheme, it is true that $f_{i_{1}}(m_{j_{1}})=f_{i_{2}}(m_{j_{2}})$, $\forall
i_{1},i_{2}\in[T],j_{1},j_{2}\in[T^{\prime}]$. We will assume that for each of
the message privacy models below, $\mathbb{A}$ is a valid message privacy
adversary.
_Full message privacy._ Full message privacy dictates that, given an adversary
$\mathbb{A}$, that can request for any number of function keys from the key
generation oracle of the scheme, under valid message privacy, the encryptions
of any two messages received from the encryption oracle of the scheme, for
$\mathbb{A}$, are computationally indistinguishable.
_Selective-message message privacy._ Given any two vectors of messages, where
each message vector has length $k$, and allowing $\mathbb{A}$ to request any
number of function keys from the key generation oracle of the scheme, under
valid message privacy, $k$-selective-message message privacy dictates that the
encryptions of the two message vectors received from the encryption oracle of
the scheme, are computationally indistinguishable for $\mathbb{A}$. The scheme
achieves selective-message message privacy, if it is $k$-selective-message
message private for all polynomials $k$ in the security parameter.
_Selective-function message privacy._ Given secret keys of $k$ arbitrary
functions received from the key generation oracle of the scheme by the
adversary $\mathbb{A}$, $k$-selective-function message privacy dictates that
the encryptions of any two messages received from the encryption oracle of the
scheme, are computationally indistinguishable for $\mathbb{A}$. The scheme
achieves selective-function message privacy, if it is $k$-selective-function
message private for all polynomials $k$ in the security parameter.
# Stable LM 2 1.6B Technical Report
Marco Bellagente* Jonathan Tow* Dakota Mahan Duy Phung Maksym Zhuravinskyi
Reshinth Adithyan James Baicoianu Ben Brooks Nathan Cooper Ashish Datta
Meng Lee Emad Mostaque Michael Pieler Nikhil Pinnaparju Paulo Rocha
Harry Saini Hannah Teufel Niccolo Zanichelli Carlos Riquelme
Stability AI Language Team. *Equal contribution. Correspondence to:
{marco.bellagente, jonathantow<EMAIL_ADDRESS>
###### Abstract
We introduce StableLM 2 1.6B, the first in a new generation of our language
model series. In this technical report, we present in detail the data and
training procedure leading to the base and instruction-tuned versions of
StableLM 2 1.6B. The weights for both models are available via Hugging Face
for anyone to download and use (base model:
https://huggingface.co/stabilityai/stablelm-2-1_6b; instruction-tuned:
https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b).
The report contains thorough evaluations of these models, including zero- and
few-shot benchmarks, multilingual benchmarks, and the MT benchmark focusing on
multi-turn dialogues. At the time of publishing this report, StableLM 2 1.6B
was the state-of-the-art open model under 2B parameters by a significant
margin. Given its appealing small size, we also provide throughput
measurements on a number of edge devices. In addition, we open source several
quantized checkpoints and provide their performance metrics compared to the
original model.
## 1 Introduction
Following the development of the Transformer architecture [71], a remarkable
number of proprietary and open-source large language models have been trained
and deployed. While countless new ideas and artifacts are announced weekly or
daily, some key aspects remain opaque – especially around the most powerful
models. Often, the training data is not disclosed. This poses a fundamental
challenge in times where society demands transparency as it faces a brand-new
disruptive technology that is hard to inspect and audit. In this report, we
explain in a reproducible manner how to train a modest-size but state-of-the-
art language model. All the data we used is public (see Table 1), and training
required around 92k GPU hours, worth around $322k on popular cloud providers
(assuming $3.5 per GPU hour). We hope this work
contributes to the open AI community and helps set the standard for upcoming
transparent models.
This report is organized as follows: Section 2 details the process of pre-
training Stable LM 2 1.6B. We devote Section 3 to fine-tuning and human
preference alignment. Section 4 presents model evaluations on standard
downstream benchmarks. Compiling and running inference with Stable LM 2 on several
edge devices is outlined in Section 5. We consider several follow-up
directions in Section 6. Carbon emissions and societal impacts related to the
training and release of Stable LM 2 are covered in Section 7. Finally, Section
8 concludes and summarizes this work.
## 2 Model Pre-Training
The first stage in training large language models (LLMs) focuses on learning
to predict the next token in a sequence using a vast and diverse array of data
sources. We refer to this stage as _pre-training_. It enables models to build
general-purpose internal representations suitable for basic language
capabilities and even more advanced generation and comprehension tasks. In
fact, it has been hypothesized that the majority of model knowledge and
capabilities are learned during pre-training [88]. In this section, we
introduce the design principles and ablations that influenced the creation of
our training set, as well as details about the model architecture and training
procedure. While many similar reports exist for other cutting-edge models,
they often omit critical details, such as the particular data sources,
sampling weights, or the complete set of ablations they performed. As a
result, the open-source community cannot accurately reproduce these models. On
the other hand, we present a fully transparent log of our model training
details. We are confident that researchers and practitioners will find
valuable insights in this comprehensive account.
Dataset | Sampling Weight | Num Tokens | Epochs | Category
---|---|---|---|---
Arxiv | 0.0079 | 15,852,446,656 | 0.75 | Academic
PubMed | 0.012 | 24,126,600,626 | 1.0 | Academic
S2ORC | 0.0318 | 63,910,567,858 | 1.0 | Academic
PhilPapers | 0.0013 | 2,591,115,796 | 4.0 | Academic
BookCorpusOpen | 0.0045 | 9,135,954,810 | 6.0 | Books
PG-19 | 0.0091 | 18,293,226,468 | 4.0 | Books
FanFics | 0.0018 | 3,644,583,700 | 4.0 | Books
Cultura-X (EN) | 0.2521 | 506,625,211,049 | 0.72 | Web
Cultura-X (ES) | 0.0155 | 31,241,701,156 | 0.4 | Web
Cultura-X (DE) | 0.0152 | 30,628,813,934 | 0.32 | Web
Cultura-X (FR) | 0.0097 | 19,466,611,808 | 0.26 | Web
Cultura-X (IT) | 0.0096 | 19,202,903,491 | 0.4 | Web
Cultura-X (NL) | 0.0097 | 19,511,322,386 | 0.62 | Web
Cultura-X (PT) | 0.01 | 20,063,968,694 | 0.78 | Web
C4 | 0.0855 | 171,782,178,108 | 1.0 | Web
OpenWebText2 | 0.0130 | 26,161,864,434 | 3.0 | Web
RefinedWeb | 0.3292 | 661,591,178,339 | 1.15 | Web
StackExchange | 0.0231 | 46,302,993,820 | 2.5 | Social
HackerNews | 0.0019 | 3,817,060,582 | 2.0 | Social
EuroParl | 0.0023 | 4,678,506,882 | 3.0 | Law
FreeLaw | 0.0088 | 17,694,697,577 | 1.2 | Law
PileOfLaw | 0.0063 | 12,663,061,642 | 0.75 | Law
DM Math | 0.0066 | 13,321,872,138 | 3.0 | Math
AMPS | 0.0011 | 2,126,719,278 | 6.0 | Math
OpenWebMath | 0.0132 | 26,530,341,292 | 2.0 | Math
RedPajama Wiki | 0.0363 | 72,866,870,472 | 3.0 | Wiki
Starcoder | 0.0724 | 145,586,775,301 | 0.74 | Code
Restruct-v1 | 0.0102 | 20,412,655,632 | 3.0 | Instruction
Total | 1 | 2,009,831,803,933 | | -
Table 1: The complete Stable LM 2 training set with sampling weights. The
token counts refer to the Arcade100k tokenizer introduced in Sec. 2.3. The
number of tokens in the table already includes the number of epochs shown next
to it. For instance, in the case of BookCorpusOpen, we use around 9B tokens,
corresponding to 6 epochs of the original dataset (that is, each epoch is
around 1.5B tokens). Similarly, if epochs are below one, the number of tokens
shown is a subset of the total dataset.
Figure 1: Percentage of effective training tokens by domain in the Stable LM 2
pre-training dataset.
### 2.1 Training
We train Stable LM 2 to predict the next token following standard
autoregressive sequence modeling [58]. We train our model from scratch with a
context length of 4096 and benefit from the efficient sequence-wise
parallelism optimizations of FlashAttention-2 [17, 16]. Training is performed
in BFloat16 mixed precision while keeping all-reduce operations in FP32. [10,
74] found it beneficial to add a z-loss regularization term on the softmax
normalizer, $z_{\mathrm{loss}}\propto\log^{2}Z$, to mitigate instability
stemming from output logits divergence. While it did not hurt performance in
our ablations, the improvements to stability were minimal. Accordingly, it was
not applied in the final run. We employ a standard AdamW optimizer with the
following hyperparameters:
$\beta_{1}=0.9,\beta_{2}=0.95,\epsilon=1e-8,\lambda(\text{weight decay})=0.1$.
Sec. 2.5 offers details regarding the custom learning rate scheduler that we
applied.
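As a concrete reference, the optimizer setup described above corresponds to
something like the following standard PyTorch configuration; this is a minimal
sketch matching the reported hyperparameters, not the authors' actual training
code.

```python
import torch

# Hyperparameters as reported: beta1=0.9, beta2=0.95, eps=1e-8, weight decay 0.1.
# The peak LR of 1e-3 is reached after the warm-up described in Sec. 2.5.
model = torch.nn.Linear(2048, 2048)  # stand-in for the 1.6B-parameter transformer
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-3,
    betas=(0.9, 0.95),
    eps=1e-8,
    weight_decay=0.1,
)
```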
### 2.2 Data
Model performance is affected by pre-training data design decisions, including
both the source selection and the sampling weights [48]. Our approach is close
to that of [67]: the majority of our training data consists of sources
utilized in the training of other LLMs, such as RefinedWeb [57], subsets of
the Pile [22], RedPajama [13] and the Stack [39]. We supplement these with
OpenWebText [24], OpenWebMath [56], and parts of CulturaX [54]. While
inspecting randomly sampled documents from the mC4 subset of CulturaX, we
encountered frequent HTML boilerplate and decided to remove this portion
entirely, finally keeping only the OSCAR subset. Additionally, we incorporate
FanFics (https://huggingface.co/datasets/atom-in-the-universe/fanfics-10k-50k),
a subset of 50k documents from _fanfiction.net_ selected by lowest perplexity
scores according to a KenLM model (https://huggingface.co/edugp/kenlm).
Finally, following [81], we
restructure several raw datasets into rich fixed forms for downstream tasks
such as summarization, question-answering, sentiment analysis, etc., and add
instruction data from [47], the aggregate of which we collectively call
Restruct-v1. The list of sources used in Restruct-v1 is made available in Tab.
11. Stable LM’s training set is comprised entirely of open-source datasets
compatible with commercial usage, most of which are hosted on the Hugging Face
Hub. The only exception to the latter aspect (HFH), Restruct-v1, can easily be
reproduced by following the approaches and prompt templates provided by [81].
Carefully selecting the mixture proportions of the various data domains is
critical, particularly with respect to the amount of non-English and code
data. We trained several models on different mixes and evaluated them on
downstream benchmarks to pick our final dataset. The full set of ablations is
available in Appendix A, together with rationales for selecting these
particular mixes. Based on the results of the ablations, we trained our model
with the mix shown in Table 1, which accounts for around 2 trillion tokens.
Note that it includes multilingual data in German (DE), Spanish (ES), French
(FR), Italian (IT), Dutch (NL), and Portuguese (PT). The split of our dataset
across different domains is visualized in Fig. 1.
Parameters | Hidden Size | Layers | Heads | Sequence Length
---|---|---|---|---
1,644,417,024 | 2048 | 24 | 32 | 4096
Table 2: Stable LM 2 model architecture.
Data Parallel Degree | Micro Batch Size | Gradient Accumulation Steps | Activation Checkpointing
---|---|---|---
512 | 2 | 2 | disabled
Table 3: Stable LM 2 training configuration.
### 2.3 Tokenizer
We use Arcade100k, a BPE tokenizer extended from OpenAI’s
`tiktoken.cl100k_base` to include special tokens for code [40] and digit-split
handling [45, 4]. The vocabulary consists of 100,289 tokens and is padded to
the nearest multiple of 64 (100,352) during training to meet the recommended
Tensor Cores alignment on NVIDIA A100 devices. In preliminary experiments, we
did not observe statistically significant deviations in downstream natural
language performance tasks when compared against a hyperparameter matching
model trained with the smaller GPT-NeoX tokenizer [6]. Increased compression
rates for code and non-English languages influenced our decision to use
Arcade100k over existing tokenizers.
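For reference, the padded vocabulary size follows from rounding the raw
vocabulary up to the nearest multiple of 64:
$\lceil 100{,}289/64\rceil\times 64=1{,}568\times 64=100{,}352$.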
### 2.4 Architecture and Training Layout
The model is a causal decoder-only transformer similar in design to the LLaMA
architecture [67]. Table 2 shows some of the key architectural details. In
particular, the main differences with respect to LLaMA are the following:
* Position Embeddings. Rotary Position Embeddings [66] applied to the first
$25\%$ of head embedding dimensions for improved throughput following [6]; a
minimal sketch of this partial-rotary application follows this list.
* Normalization. LayerNorm [3] with learned bias terms, as opposed to RMSNorm
[84].
* Biases. We remove all bias terms from the feed-forward networks and multi-head
self-attention layers, except for the biases of the key, query, and value
projections [4].
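To illustrate the partial-rotary choice above, here is a minimal sketch,
assuming a standard interleaved RoPE formulation; the base of 10000 and the
tensor layout are illustrative, not taken from the model code.

```python
import torch

def apply_partial_rotary(x: torch.Tensor, rotary_pct: float = 0.25) -> torch.Tensor:
    """Apply rotary position embeddings to the first `rotary_pct` of head dims.

    x: (batch, seq_len, n_heads, head_dim) query or key tensor. The remaining
    dims pass through unchanged, reducing the cost of the rotation while
    keeping positional information.
    """
    b, t, h, d = x.shape
    rot_d = int(d * rotary_pct)                 # e.g. 16 of 64 dims for 25%
    x_rot, x_pass = x[..., :rot_d], x[..., rot_d:]

    # Standard RoPE frequencies over the rotated dims only.
    inv_freq = 1.0 / (10000 ** (torch.arange(0, rot_d, 2).float() / rot_d))
    pos = torch.arange(t).float()
    freqs = torch.einsum("t,f->tf", pos, inv_freq)   # (t, rot_d/2)
    cos = freqs.cos()[None, :, None, :]              # broadcast over batch/heads
    sin = freqs.sin()[None, :, None, :]

    x1, x2 = x_rot[..., 0::2], x_rot[..., 1::2]      # interleaved pairs
    rotated = torch.stack((x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos), dim=-1).flatten(-2)
    return torch.cat((rotated, x_pass), dim=-1)
```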
Stable LM 2 1.6B is trained on 64 Amazon P4d instances comprising 512 NVIDIA
A100 (40GB HBM2) GPUs. The size of our model, together with ZeRO stage 1
distributed optimization [61], eliminates the need for model sharding. Still,
different triplets of micro batch size, gradient accumulation steps, and
activation checkpointing granularity lead to different speed metrics.
Following the recommendations in [28], we obtain our final configuration by
finding the micro-batch size that allows us to completely remove activation
checkpointing. We then determine the gradient accumulation steps based on our
target batch size and data parallel degree. We employ a batch size of
$8,388,608$ tokens, based on the observations in Appendix D. With the setup in
Table 3, we achieve $\approx$170 TFLOPs/s per device, or $54.5\%$ model flops
utilization (MFU). A higher hardware utilization of $\approx$200 TFLOPs/s
($64\%$ MFU) can be trivially achieved by decreasing the degree of data
parallelism and correspondingly increasing the number of gradient accumulation
steps, at the cost of an increased iteration time.
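For reference, the token batch size follows directly from the configuration in
Table 3:
$\displaystyle 512\ \text{(data parallel)}\times 2\ \text{(micro batch)}\times 2\ \text{(grad. accum.)}\times 4096\ \text{(seq. len.)}=8{,}388{,}608\ \text{tokens}.$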
### 2.5 Learning Rate Scheduler
We propose a new learning rate scheduler. It consists of multiple stages and
is designed to favor flexibility in terms of continued pre-training. We begin
by linearly increasing the learning rate to its max value of $1e-3$ over 9720
steps. This _warm up_ stage is followed by the main training phase in which
the learning rate is decreased according to Eq. 1:
$\left\\{\begin{aligned}
&m+\frac{(M-m)}{2}*\left[\cos\left(2\pi*\frac{i}{N}\right)+1\right]\quad&\text{if}\quad
i\leq N/4&\qquad\text{\emph{cosine} decay}\\\
&\frac{\alpha}{\sqrt{i+\beta}}\quad&\text{if}\quad
i>N/4&\qquad\text{\emph{rsqrt} decay}\end{aligned}\right.$ (1)
where $m$ and $M$ are respectively the minimum and maximum learning rates, $i$
is the current step, and $N$ is the total number of steps. The free parameters
$\alpha$ and $\beta$ have been arbitrarily chosen to enforce the continuity of
the scheduler and its derivative at $i=N/4$. We finalize training by linearly
cooling the learning rate down to zero over 80k steps, which corresponds to
around 670B tokens. The full scheduler is illustrated in Fig. 2; further
details and ablations can be found in Appendix B.
Figure 2: Multi-stage infinite scheduler proposed and applied in this work.
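As a companion to Eq. 1, here is a minimal sketch of the decay schedule in
Python, omitting the final linear cool-down for brevity; the closed-form
$\alpha$ and $\beta$ below are one way to enforce continuity of the value and
slope at $i=N/4$, derived from Eq. 1 rather than taken from the released
training code.

```python
import math

def hybrid_lr(i, N, m=1e-4, M=1e-3, warmup=9720):
    """Multi-stage LR: linear warm-up, cosine to N/4, then rsqrt decay (Eq. 1).

    alpha and beta are fixed by matching the value and derivative of the two
    branches at the switch point i = N/4.
    """
    if i < warmup:                               # linear warm-up to peak M
        return M * i / warmup
    s = N / 4                                    # switch point
    if i <= s:                                   # cosine branch of Eq. 1
        return m + (M - m) / 2 * (math.cos(2 * math.pi * i / N) + 1)
    # Match value v and slope d of the cosine branch at i = s:
    v = (m + M) / 2                              # cosine value at s
    d = -(M - m) * math.pi / N                   # cosine slope at s
    beta = v / (2 * abs(d)) - s                  # from f'(s) = d
    alpha = v * math.sqrt(s + beta)              # from f(s) = v
    return alpha / math.sqrt(i + beta)           # rsqrt branch of Eq. 1
```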
## 3 Fine-tuning and Alignment
Following pre-training, we further develop the conversational skills of our
model via a fine-tuning stage that consists of three main steps: supervised
fine-tuning (SFT), direct preference optimization (DPO), and self-knowledge
learning. Importantly, we do not use multilingual data at this stage. We now
describe each of them in detail, and in Section 4 we report the results after
all three steps.
Model | Size | Avg | ARC | HS | MMLU | TQA | Wino | GSM
---|---|---|---|---|---|---|---|---
phi-2† | 2.8B | 61.3 | 61.1 | 75.1 | 58.1 | 44.5 | 74.4 | 54.8
stablelm-2-zephyr-1_6b† | 1.6B | 49.7 | 43.3 | 69.3 | 41.8 | 45.6 | 63.6 | 34.8
phi-1_5† | 1.3B | 47.7 | 52.9 | 63.8 | 43.9 | 40.9 | 72.2 | 12.4
stablelm-3b-4e1t | 2.7B | 46.6 | 46.6 | 75.9 | 45.2 | 37.2 | 71.2 | 3.3
Qwen-1.5-1.8B | 1.8B | 46.6 | 37.9 | 61.4 | 46.7 | 39.4 | 60.3 | 33.6
gemma-2b | 2.5B | 46.5 | 48.5 | 71.7 | 41.7 | 33.1 | 66.8 | 17.4
stablelm-2-1_6b | 1.6B | 45.3 | 43.3 | 70.5 | 38.9 | 36.8 | 64.6 | 17.4
gemma-2b-it† | 2.5B | 42.7 | 43.9 | 62.7 | 37.6 | 45.8 | 60.9 | 5.5
open_llama_3b | 3B | 40.3 | 39.9 | 71.6 | 27.1 | 34.8 | 67.0 | 0.9
falcon-rw-1b | 1.3B | 37.1 | 35.1 | 63.6 | 25.3 | 35.9 | 62.0 | 0.5
TinyLLama-1.1B | 1.1B | 36.4 | 33.9 | 60.3 | 26.0 | 37.3 | 59.5 | 1.4
Table 4: Comparison of Open LLM leaderboard evals. † denotes aligned models. Note that in this table, as well as in Tab. 5, we mark the Phi series of models [26] as _aligned_. While we acknowledge that they may have only been pre-trained, given the nature of the training data used and the self-disclaimed intended use for QA and chat formats, we believe this to be a fair statement.
Model | All | EN | DE | ES | FR | IT | NL | PT
---|---|---|---|---|---|---|---|---
stablelm-3b-4e1t | 41.7 | 50.9 | 39.7 | 40.2 | 41.2 | 41.1 | 36.7 | 41.9
stablelm-2-zephyr-1_6b† | 41.5 | 49.5 | 40.2 | 40.0 | 39.8 | 39.9 | 38.8 | 42.0
stablelm-2-1_6b | 40.5 | 48.7 | 39.1 | 39.0 | 39.3 | 38.8 | 37.8 | 41.2
gemma-2b | 39.8 | 48.6 | 38.3 | 38.7 | 38.7 | 38.4 | 35.1 | 40.5
gemma-2b-it† | 38.2 | 49.4 | 36.8 | 38.0 | 37.5 | 35.5 | 32.1 | 38.1
open_llama_3b | 37.5 | 47.3 | 35.2 | 36.4 | 37.6 | 37.1 | 32.2 | 36.8
Qwen-1.5-1.8B-Chat | 35.5 | 46.2 | 33.2 | 35.1 | 34.3 | 33.2 | 30.9 | 35.7
Qwen-1.5-1.8B | 34.8 | 46.3 | 31.8 | 34.0 | 34.2 | 32.8 | 29.7 | 35.0
TinyLLama-1.1B | 34.8 | 42.4 | 33.0 | 33.8 | 34.7 | 33.5 | 31.0 | 35.0
phi-2† | 34.6 | 55.8 | 29.0 | 34.3 | 32.9 | 29.9 | 27.1 | 33.4
falcon-rw-1b | 29.9 | 42.2 | 27.4 | 28.6 | 28.3 | 28.0 | 25.9 | 29.1
phi-1_5† | 29.7 | 47.1 | 25.3 | 28.7 | 26.8 | 27.2 | 24.1 | 29.0
Table 5: Multilingual evaluations. As in the previous table, † denotes aligned
models.
### 3.1 SFT
The first step is supervised fine-tuning. We fine-tune the pre-trained model
on a number of instruction datasets publicly available on the Hugging Face
Hub. In particular, we use the following _conversational_ datasets: UltraChat
[18], WizardLM [77], SlimOrca [41], ShareGPT [72], Capybara [15], Deita [46],
and MetaMathQA [80]. We removed any samples that exceeded eight turns, leading
to a total of 826,938 samples.
Dataset | Type | Source | Number of Samples
---|---|---|---
UltraChat | SFT | HuggingFaceH4/ultrachat_200k | 194,409
WizardLM | SFT | WizardLM/WizardLM_evol_instruct_V2_196k | 80,662
SlimOrca | SFT | Open-Orca/SlimOrca-Dedup | 143,789
ShareGPT | SFT | openchat/openchat_sharegpt4_dataset | 3,509
Capybara | SFT | LDJnr/Capybara | 7,291
Deita | SFT | hkust-nlp/deita-10k-v0 | 2,860
MetaMathQA | SFT | meta-math/MetaMathQA | 394,418
Ultrafeedback | DPO | argilla/ultrafeedback-binarized-preferences | 63,169
Orca Pairs | DPO | Intel/orca_dpo_pairs | 12,859
Table 6: Fine-tuning datasets
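As a consistency check, the SFT sample counts in Table 6 sum exactly to the
figure quoted above:
$194{,}409+80{,}662+143{,}789+3{,}509+7{,}291+2{,}860+394{,}418=826{,}938$.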
We train our SFT models for three epochs using a cosine learning rate
scheduler. A warm-up phase of $10\%$ of the training duration is employed
before reaching the peak learning rate of $5e-5$. We set the global batch size
to 512 sequences and pack inputs into sequences of up to 4096 tokens in
length.
### 3.2 DPO
Direct Preference Optimization (DPO) [58] has been a fundamental tool in
recent strong models such as Zephyr-7B [70], Neural-Chat-7B, and
Tulu-2-DPO-70B [32]. Accordingly, after applying SFT, we aligned the resulting
model via DPO. We use two datasets at this stage: UltraFeedback [14] and Intel
Orca Pairs. We filter the datasets by removing pairs with tied ranking, pairs
with duplicated content, and pairs in which the score for the chosen response
is less than eight out of ten. We train the model with DPO following the
Zephyr recipe [70] and borrowing most of its hyperparameters, except for
$\beta$, which we lower to $0.01$, and the learning rate, which we lower to
$4e-6$, both of which helped improve the stability of training and final
performance. This stage of training is performed using the Alignment Handbook
[69].
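To make this stage concrete, here is a minimal sketch of the standard DPO
objective (Rafailov et al., 2023) with the $\beta=0.01$ used above; it is an
illustrative implementation, not the exact training code from the Alignment
Handbook.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.01):
    """Standard DPO loss over a batch of preference pairs.

    Each argument is a tensor of summed log-probabilities of the chosen /
    rejected responses under the policy or the frozen reference model.
    """
    pi_ratio = policy_chosen_logps - policy_rejected_logps
    ref_ratio = ref_chosen_logps - ref_rejected_logps
    # A lower beta (0.01 here) applies a weaker KL pull toward the reference.
    return -F.logsigmoid(beta * (pi_ratio - ref_ratio)).mean()
```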
Figure 3: Stable LM 2 1.6B shows competitive performance, matching or even
surpassing significantly larger models on MT-Bench.
### 3.3 Self-Knowledge
After the DPO [58] stage, the model has no knowledge of who created it, or
even of what limitations a language model has. To remedy this,
we were inspired by the data generation method of Reinforcement Learning from
Contrast Distillation (RLCD) [78] and the training method of Conditioned
Reinforcement Learning Fine-Tuning (C-RLFT) [72], which we apply to self-
knowledge training.
To generate the initial prompts, we use the base model to generate 10k unique
first messages addressed to a language model. To generate continuations, we
use a few-shot prompt with self-knowledge in the previous chat turns as the
positive completion. For the negative completion, we simply sample from the
prompt with no additional prompting or few-shot turns.
We train with unpacked examples for 6 epochs using a batch size of 256, with a
100-step warm-up to a maximum LR of 3e-6 followed by a cosine decay to zero.
The positive prompts are trained in the same way as in the SFT step, while the
negative prompts are trained with a negative token instead of the Assistant
token.
## 4 Experimental Results and Benchmarks
This section presents the main experimental results for Stable LM 2 1.6B. We
compare with similarly-sized open-source models showing strong improvements on
various tasks, including multilingual capabilities in Spanish, German, French,
Italian, Portuguese, and Dutch. For context, we also provide comparisons with
much larger models. We split our experiments into three sets: few- and zero-
shot evaluations in English (as commonly done in the Hugging Face Open LLM
leaderboard), multilingual evaluations, and conversational evaluations.
### 4.1 Few-shot and Zero-shot Evaluations
First, we assess the 0-shot and few-shot capabilities of Stable LM 2 by
evaluating our model over popular benchmarks and comparing results against
similarly sized open-source pre-trained models. Table 4 presents model
evaluations in English. Our results cover the 6 benchmarks from the Open LLM
Leaderboard ([5]): ARC-Challenge 25-shot [11] (ARC), HellaSwag 10-shot [83]
(HS), MMLU 5-shot [30] (MMLU), TruthfulQA 0-shot [43] (TQA), WinoGrande 5-shot
[62] (Wino) and GSM8K 5-shot [12] (GSM). Also, as Stable LM 2 is a general-
purpose foundational model, we further assess natural language understanding
capabilities by evaluating English and machine-translated versions of LAMBADA.
All evaluations are performed with the Language Model Evaluation Harness
framework [23]
(https://github.com/Stability-AI/lm-evaluation-harness/tree/stablelm-2/multilingual-bench).
As shown in Table 4, Stable LM 2
1.6B (stablelm-2-1_6b) outperforms other base models by a significant margin.
Similarly, the instruction-tuned version (stablelm-2-zephyr-1_6b) improves on
Microsoft’s Phi-1.5 by two average points while lagging behind the larger
Phi-2.0 on few-shot accuracy. Performance versus Google’s Gemma 2B (2.5B
parameters) is also remarkable.
Model | Size | MT-Bench
---|---|---
Mistral-7B-Instruct-v0.2 | 7B | 7.61
Llama2-Chat | 70B | 6.86
stablelm-zephyr-3b | 3B | 6.64
MPT-30B-Chat | 30B | 6.39
stablelm-2-zephyr-1.6b | 1.6B | 5.42
Qwen-1.5-1.8B-Chat | 1.8B | 5.29
gemma-2b-it | 2.5B | 5.19
Falcon-40B-Instruct | 40B | 5.17
dolphin-2.6-phi-2 | 2.7B | 4.93
phi-2 | 2.7B | 4.29
TinyLlama-1.1B-Chat-v1.0 | 1.1B | 3.46
Table 7: MT-Bench results
### 4.2 Multilingual Evaluations
We assess knowledge and reasoning in the multilingual setting for non-English
languages seen during pre-training by evaluating on ChatGPT-translated
versions of ARC, HS, TQA, and MMLU ([38]). In addition, we test next-word
prediction capabilities using the machine-translated LAMBADA datasets from
[23]. After manual inspection by native speakers, we deemed the existing
machine translations
(https://huggingface.co/datasets/EleutherAI/lambada_openai) too noisy to draw
accurate performance signals from. We instead evaluate multilingual next-word
prediction with new translations, which are made available for researchers
(https://huggingface.co/datasets/marcob/lambada_multilingual).
The zero-shot results are presented in Tab. 5 and highlight Stable LM 2’s
superior performance compared to models even twice its size.
### 4.3 MT Benchmark Evaluations
Finally, we also test the conversational skills of our model on the popular
multi-turn benchmark MT-Bench [86]. The results are provided in Fig. 3 and
Tab. 7. While lagging behind much more powerful models such as Mistral 7B
Instruct v0.2 (more than 4x the size of Stable LM 2), our model delivers
strong chat performance and beats Phi-2, Gemma 2B, and TinyLlama 1.1B by a
wide margin, despite the larger sizes of the first two.
## 5 Inference and Quantization
This model represents a substantial leap towards making advanced generation
capabilities available directly on-device without the computational overhead
of larger models. We believe this model strikes a strong balance between
efficiency and effectiveness in inference tasks when paired with optimized
inference frameworks and quantization methods. As part of our release, we
provide quantized weights of stablelm-2-1_6b supported on popular inference
libraries such as llama.cpp (https://github.com/ggerganov/llama.cpp), Apple
MLX [29], and Intel OpenVINO (https://github.com/openvinotoolkit/openvino).
### 5.1 Quantization
We make available quantization files for various models and formats to support
easier integration with different inference frameworks, including:
* Two 4-bit quantized models (Q4_0, Q4_1) and a 5-bit quantized model (Q5_K_M)
in GGUF format
* INT4 for OpenVINO, quantized with Intel's Neural Network Compression
Framework (NNCF)
* INT4 for MLX, quantized with MLX
These quantization files can be found in the model’s Hugging Face repository
for the convenience of developers and researchers working with our models. We
aim to facilitate smoother deployment experiences across various deep-learning
framework ecosystems by offering a range of quantization formats.
### 5.2 Throughput
In Tab. 8 we provide throughput numbers obtained from our model running on
consumer-grade devices, along with the system environments used. Our initial
runs show that lower precision yields almost 2x higher throughput. Note that
these figures are provided as a
reference, and they are not the result of rigorous benchmarking but are rather
intended to give users a practical insight into what they can expect in terms
of performance on commonly used devices. Likewise, as lower precision
quantization is expected to reduce the model’s performance, we encourage
researchers and developers to assess the potential degradation in real-world
scenarios.
Framework | Device | Precision | Throughput (Tok/s) | Power consumption (W)
---|---|---|---|---
MLX | M2 Mac Mini (8GB) | FP16 | 71 | 6
MLX | M2 Mac Mini (8GB) | INT4 | 127 | 11
GGUF | M2 Pro Max 2023 MacBook Pro (16GB) | FP16 | 46 | 14
GGUF | M2 Pro Max 2023 MacBook Pro (16GB) | INT4 | 99 | 14
Table 8: Throughput and power usage on various devices using different
quantization frameworks. We employ a batch size of 1 for all benchmarks. INT4
numbers for GGUF were collected using Q4_0 quantization.
## 6 Future Work
There are a number of research avenues we would like to explore to further
improve the model:
1. Data. In this work, we focused on publicly available data. In particular,
most of the data comes from web-crawled content, as is common for most models.
This data is known to contain many low-quality documents [20] that can
potentially harm training. We believe there is significant potential in smart
filtering, re-writing, and synthetic data generation with strong models.
2. Hallucination Mitigation. Language models are prone to generating incorrect
or misleading information, and small language models are even more prone to
doing so. Finding reliable ways to detect hallucinations in these models will
unlock new use cases in areas that are sensitive to hallucinations.
3. Long Contexts and Retrieval. The ability to retrieve information across
long context windows is essential for applications such as chat models or
dataset integration. Accordingly, in Appendix C, we explore the current
capabilities and limitations of Stable LM 2 1.6B on the Needle-in-the-Haystack
task. Going forward, we plan to build further upon this work as well as to
extend our models to context lengths beyond 4k.
4. Conditional Computation. Small models are often capacity-constrained; that
is, with the current training approaches, they lack the capacity to process
and exploit all of the training data. Recently, ideas such as Mixture of
Experts have been successfully applied to take a dense model and extend it to
contain more parameters that are selectively applied to certain inputs (for
instance, via sparse upcycling [37]). Importantly, if each token selects only
one expert, the overall inference FLOPs do not significantly change. Applying
this to the Stable LM 2 1.6B model is a natural extension we will investigate.
## 7 Environmental and Societal Impact
### 7.1 Carbon Footprint
The training of Stable LM 2 has consumed energy with associated carbon dioxide
emissions. In line with [67] we report our carbon footprint based on the
formula
$\displaystyle\text{Total Wh}=\text{GPU-h}\times\text{power consumption}\times\text{PUE}$
where the Power Usage Effectiveness (PUE) is set to 1.1. We trained Stable LM 2
for $\approx 92,000$ GPU-hours, giving a total energy consumption of 30 MWh
considering our average power usage. The emitted carbon in tCO2eq can be
estimated using the US national average carbon intensity factor of 0.385 kg
CO2eq/kWh, leading to a final figure of 11 tCO2eq.
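As a rough sanity check on these figures, assuming an average draw of about
300 W per GPU (our assumption for illustration; the exact average is not
reported above):
$\displaystyle 92{,}000\ \text{GPU-h}\times 0.3\ \text{kW}\times 1.1\approx 30\ \text{MWh},\qquad 30{,}000\ \text{kWh}\times 0.385\ \text{kg CO}_{2}\text{eq/kWh}\approx 11.6\ \text{tCO}_{2}\text{eq}.$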
### 7.2 Societal impact
Stability AI is committed to releasing open models to help improve access to
foundational AI technology. Open access to model weights enables researchers
to inspect models for suitability and vulnerabilities, test the effectiveness
of different optimization strategies, and correct for biases observed in the
model. To that end, this model is released under an open non-commercial
license. However, open release can introduce challenges in assessing the
societal impact of a model. For example, Stability AI does not have direct
visibility into downstream applications of Stable LM 2 1.6B, the distribution
of applications by sector, or the distribution of model usage by geography.
Since the model is released with a noncommercial license, we expect a limited
number of applications outside of fine-tuning or evaluation interfaces and a
limited number of third parties affected by the model.
We will continue to monitor openly released fine-tuned models to understand
the extent of fine-tuning research or development activity that uses Stable LM
2 1.6B as a base model, including the evaluation results of these derivative
models.
## 8 Conclusion
In this report, we introduced Stable LM 2 1.6B, a compact decoder-only
language model trained on multilingual datasets. It fluidly handles seven
languages: English, Spanish, German, Italian, French, Portuguese, and Dutch.
To ensure that the community can reproduce our run, we detail all datasets
used during training (with the _exact_ data mix) and our newly designed
learning rate schedule. We also conduct extensive model evaluations
and comparisons with other similarly-sized models, demonstrating Stable LM 2
1.6B’s exceptional performance. Finally, we profile the model on common edge
computing architectures. We hope the current report contributes to the
improvement and further research on small language models.
## Acknowledgments
We thank our awesome MLOps team members, particularly Richard Vencu, for the
support provided. We also thank Christian Laforte, Cedric Wagrez, and Jerry
Chi for their feedback, useful ideas, and comments.
## References
* [1] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.
* [2] b-mc2. sql-create-context dataset, 2023.
* [3] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization, 2016.
* [4] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. Qwen technical report, 2023.
* [5] Edward Beeching, Clémentine Fourrier, Nathan Habib, Sheon Han, Nathan Lambert, Nazneen Rajani, Omar Sanseviero, Lewis Tunstall, and Thomas Wolf. Open llm leaderboard. https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard, 2023.
* [6] Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. Gpt-neox-20b: An open-source autoregressive language model, 2022.
* [7] British Library, Victoria Morris, Daniel van Strien, Giorgia Tolfo, Lora Afric, Stewart Robertson, Patricia Tiney, Annelies Dogterom, and Ildi Wollner. 19th century books - metadata with additional crowdsourced annotations, 2021.
* [8] Isabel Cachola, Kyle Lo, Arman Cohan, and Daniel S. Weld. TLDR: Extreme summarization of scientific documents. arXiv:2004.15011, 2020.
* [9] Iñigo Casanueva, Tadas Temcinas, Daniela Gerz, Matthew Henderson, and Ivan Vulic. Efficient intent detection with dual sentence encoders. In Proceedings of the 2nd Workshop on NLP for ConvAI - ACL 2020, mar 2020. Data available at https://github.com/PolyAI-LDN/task-specific-datasets.
* [10] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways, 2022.
* [11] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge, 2018.
* [12] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems, 2021.
* [13] Together Computer. Redpajama: An open source recipe to reproduce llama training dataset, 2023.
* [14] Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun. Ultrafeedback: Boosting language models with high-quality feedback, 2023.
* [15] Luigi Daniele and Suphavadeeprasit. Amplify-instruct: Synthetically generated diverse multi-turn conversations for efficient llm training. arXiv preprint arXiv:(coming soon), 2023.
* [16] Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning, 2023.
* [17] Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems, 2022.
* [18] Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations, 2023.
* [19] Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathleen Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. Glam: Efficient scaling of language models with mixture-of-experts, 2022.
* [20] Yanai Elazar, Akshita Bhagia, Ian Magnusson, Abhilasha Ravichander, Dustin Schwenk, Alane Suhr, Pete Walsh, Dirk Groeneveld, Luca Soldaini, Sameer Singh, Hanna Hajishirzi, Noah A. Smith, and Jesse Dodge. What’s in my big data?, 2023.
* [21] Yao Fu, Rameswar Panda, Xinyao Niu, Xiang Yue, Hannaneh Hajishirzi, Yoon Kim, and Hao Peng. Data engineering for scaling language models to 128k context, 2024.
* [22] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The pile: An 800gb dataset of diverse text for language modeling, 2020.
* [23] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, September 2021.
* [24] Aaron Gokaslan, Vanya Cohen, Ellie Pavlick, and Stefanie Tellex. Openwebtext corpus, 2019.
* [25] Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour, 2018.
* [26] Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al. Textbooks are all you need. arXiv preprint arXiv:2306.11644, 2023.
* [27] Himanshu Gupta, Neeraj Varshney, Swaroop Mishra, Kuntal Kumar Pal, Saurabh Arjun Sawant, Kevin Scaria, Siddharth Goyal, and Chitta Baral. “john is 50 years old, can his son be 65?” evaluating NLP models’ understanding of feasibility. In Andreas Vlachos and Isabelle Augenstein, editors, Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 407–417, Dubrovnik, Croatia, May 2023. Association for Computational Linguistics.
* [28] Johannes Hagemann, Samuel Weinbach, Konstantin Dobler, Maximilian Schall, and Gerard de Melo. Efficient parallelization layouts for large-scale distributed model training, 2023.
* [29] Awni Hannun, Jagrit Digani, Angelos Katharopoulos, and Ronan Collobert. MLX: Efficient and flexible machine learning on apple silicon, 2023.
* [30] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding, 2021.
* [31] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. Training compute-optimal large language models, 2022.
* [32] Hamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep Dasigi, Joel Jang, David Wadden, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. Camels in a changing climate: Enhancing lm adaptation with tulu 2, 2023.
* [33] Mingi Jeon, Seung-Yeop Baik, Joonghyuk Hahn, Yo-Sub Han, and Sang-Ki Ko. Deep Learning-based Code Complexity Prediction, 2022.
* [34] Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. arXiv e-prints, page arXiv:1705.03551, 2017.
* [35] Greg Kamradt and Ikko Ashimine. Needle in a haystack - pressure testing llms, 2023.
* [36] Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317–328, 2018.
* [37] Aran Komatsuzaki, Joan Puigcerver, James Lee-Thorp, Carlos Riquelme Ruiz, Basil Mustafa, Joshua Ainslie, Yi Tay, Mostafa Dehghani, and Neil Houlsby. Sparse upcycling: Training mixture-of-experts from dense checkpoints. arXiv preprint arXiv:2212.05055, 2022.
* [38] Viet Dac Lai, Chien Van Nguyen, Nghia Trung Ngo, Thuat Nguyen, Franck Dernoncourt, Ryan A Rossi, and Thien Huu Nguyen. Okapi: Instruction-tuned large language models in multiple languages with reinforcement learning from human feedback. arXiv preprint arXiv:2307.16039, 2023.
* [39] Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. Starcoder: may the source be with you!, 2023.
* [40] Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. StarCoder: may the source be with you!, 2023.
* [41] Wing Lian, Guan Wang, Bleys Goodson, Eugene Pentland, Austin Cook, Chanvichet Vong, and "Teknium". Slimorca: An open dataset of gpt-4 augmented flan reasoning traces, with verification, 2023.
* [42] Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. CommonGen: A constrained text generation challenge for generative commonsense reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1823–1840, Online, November 2020. Association for Computational Linguistics.
* [43] Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods, 2022.
* [44] Emmy Liu, Chen Cui, Kenneth Zheng, and Graham Neubig. Testing the ability of language models to interpret figurative language, 2022.
* [45] Tiedong Liu and Bryan Kian Hsiang Low. Goat: Fine-tuned llama outperforms gpt-4 on arithmetic tasks, 2023.
* [46] Wei Liu, Weihao Zeng, Keqing He, Yong Jiang, and Junxian He. What makes good data for alignment? a comprehensive study of automatic data selection in instruction tuning, 2023.
* [47] Shayne Longpre, Robert Mahari, Anthony Chen, Naana Obeng-Marnu, Damien Sileo, William Brannon, Niklas Muennighoff, Nathan Khazam, Jad Kabbara, Kartik Perisetla, et al. The data provenance initiative: A large scale audit of dataset licensing & attribution in ai. arXiv preprint arXiv:2310.16787, 2023.
* [48] Shayne Longpre, Gregory Yauney, Emily Reif, Katherine Lee, Adam Roberts, Barret Zoph, Denny Zhou, Jason Wei, Kevin Robinson, David Mimno, and Daphne Ippolito. A pretrainer’s guide to training data: Measuring the effects of data age, domain coverage, quality, & toxicity, 2023.
* [49] Risto Luukkonen, Ville Komulainen, Jouni Luoma, Anni Eskelinen, Jenna Kanerva, Hanna-Mari Kupari, Filip Ginter, Veronika Laippala, Niklas Muennighoff, Aleksandra Piktus, Thomas Wang, Nouamane Tazi, Teven Le Scao, Thomas Wolf, Osma Suominen, Samuli Sairanen, Mikko Merioksa, Jyrki Heinonen, Aija Vahtola, Samuel Antao, and Sampo Pyysalo. Fingpt: Large generative models for a small language, 2023.
* [50] Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics.
* [51] Sam McCandlish, Jared Kaplan, Dario Amodei, and OpenAI Dota Team. An empirical model of large-batch training, 2018.
* [52] John P. McCrae, Alexandre Rademaker, Francis Bond, Ewa Rudnicka, and Christiane Fellbaum. English WordNet 2019 – an open-source WordNet for English. In Piek Vossen and Christiane Fellbaum, editors, Proceedings of the 10th Global Wordnet Conference, pages 245–252, Wroclaw, Poland, July 2019. Global Wordnet Association.
* [53] Niklas Muennighoff, Alexander M. Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. Scaling data-constrained language models, 2023.
* [54] Thuat Nguyen, Chien Van Nguyen, Viet Dac Lai, Hieu Man, Nghia Trung Ngo, Franck Dernoncourt, Ryan A. Rossi, and Thien Huu Nguyen. Culturax: A cleaned, enormous, and multilingual dataset for large language models in 167 languages, 2023.
* [55] OpenAI, :, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mo Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, Simón Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua Gross, Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan, Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Hendrik Kirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel Levy, Chak Ming Li, Rachel Lim, Molly Lin, Stephanie Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, Bob McGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David Mély, Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen O’Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto, Michael, Pokorny, Michelle Pokrass, Vitchyr Pong, Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, Ted Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky, Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang, Nikolas Tezak, Madeleine Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, 
Preston Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cerón Uribe, Andrea Vallone, Arun Vijayvergiya, Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan Ward, Jason Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, and Barret Zoph. Gpt-4 technical report, 2023.
* [56] Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, and Jimmy Ba. Openwebmath: An open dataset of high-quality mathematical web text, 2023.
* [57] Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb dataset for falcon llm: Outperforming curated corpora with web data, and web data only, 2023.
* [58] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training, 2018.
* [59] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2023.
* [60] Vipul Raheja, Dhruv Kumar, Ryan Koo, and Dongyeop Kang. Coedit: Text editing by task-specific instruction tuning, 2023.
* [61] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models, 2020.
* [62] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale, 2019.
* [63] Eva Sharma, Chen Li, and Lu Wang. BIGPATENT: A large-scale dataset for abstractive and coherent summarization. CoRR, abs/1906.03741, 2019.
* [64] Zhengxiang Shi, Qiang Zhang, and Aldo Lipani. Stepgame: A new benchmark for robust multi-hop spatial reasoning in texts. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 11321–11329, Jun. 2022.
* [65] Gizem Soğancıoğlu, Hakime Öztürk, and Arzucan Özgür. Biosses: a semantic sentence similarity estimation system for the biomedical domain. Bioinformatics, 33(14):i49–i58, 2017.
* [66] Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding, 2023.
* [67] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023.
* [68] Jonathan Tow, Marco Bellagente, Dakota Mahan, and Carlos Riquelme. Stablelm 3b 4e1t, 2023.
* [69] Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Shengyi Huang, Kashif Rasul, Alexander M. Rush, and Thomas Wolf. The alignment handbook. https://github.com/huggingface/alignment-handbook, 2023.
* [70] Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Clémentine Fourrier, Nathan Habib, et al. Zephyr: Direct distillation of lm alignment. arXiv preprint arXiv:2310.16944, 2023.
* [71] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
* [72] Guan Wang, Sijie Cheng, Xianyuan Zhan, Xiangang Li, Sen Song, and Yang Liu. Openchat: Advancing open-source language models with mixed-quality data, 2023.
* [73] Zhilin Wang, Yi Dong, Jiaqi Zeng, Virginia Adams, Makesh Narsimhan Sreedhar, Daniel Egert, Olivier Delalleau, Jane Polak Scowcroft, Neel Kant, Aidan Swope, and Oleksii Kuchaiev. Helpsteer: Multi-attribute helpfulness dataset for steerlm, 2023.
* [74] Mitchell Wortsman, Peter J. Liu, Lechao Xiao, Katie Everett, Alex Alemi, Ben Adlam, John D. Co-Reyes, Izzeddin Gur, Abhishek Kumar, Roman Novak, Jeffrey Pennington, Jascha Sohl-dickstein, Kelvin Xu, Jaehoon Lee, Justin Gilmer, and Simon Kornblith. Small-scale proxies for large-scale transformer training instabilities, 2023.
* [75] Qizhe Xie, Guokun Lai, Zihang Dai, and Eduard H. Hovy. Large-scale cloze test dataset designed by teachers. CoRR, abs/1711.03225, 2017.
* [76] Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy Liang, Quoc V. Le, Tengyu Ma, and Adams Wei Yu. Doremi: Optimizing data mixtures speeds up language model pretraining, 2023.
* [77] Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023.
* [78] Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, and Yuandong Tian. Rlcd: Reinforcement learning from contrast distillation for language model alignment, 2023.
* [79] Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training bert in 76 minutes, 2020.
* [80] Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284, 2023.
* [81] Weizhe Yuan and Pengfei Liu. reStructured pre-training, 2022.
* [82] Armel Randy Zebaze. Self-instruct-starcoder.
* [83] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? In Anna Korhonen, David Traum, and Lluís Màrquez, editors, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791–4800, Florence, Italy, July 2019. Association for Computational Linguistics.
* [84] Biao Zhang and Rico Sennrich. Root mean square layer normalization, 2019.
* [85] Jingmiao Zhao and Carolyn Jane Anderson. Solving and generating npr sunday puzzles with large language models, 2023.
* [86] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.
* [87] Ben Zhou, Kyle Richardson, Qiang Ning, Tushar Khot, Ashish Sabharwal, and Dan Roth. Temporal reasoning on implicit events from distant supervision. In NAACL, 2021.
* [88] Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. Lima: Less is more for alignment, 2023.
## Appendix A Data Ablations
How to select the optimal training mix for pre-training from a set of sources
is an open problem. Tuning weights based on downstream tasks [10, 19] can be
extremely costly and bears the risk of overfitting on particular tasks as well
as exploiting data leakage. While the computationally cheap, principled
approach introduced in [76] is promising, we found it delivers sub-optimal
weights when the data sources are highly imbalanced and have different
information content (e.g., large web sources vs curated datasets).
Furthermore, multilingual evaluations introduce a more explicit dependence on
the tokenizer, with increased noise due to the lack of high-quality, non-
machine-translated benchmarks. We, therefore, aim to find general guiding
principles that are expected to hold against changes in the tokenizer or in
the absence of high-quality benchmarks for each data category while keeping
the cost of these ablations low.
We trained a set of 1B models on a total of 100B tokens sampled according to
Tab. 9.
Sampling weights
---
Source | Control | Mix 1 | Mix 2 | Mix 3 | Mix 4 | Mix 5
Cultura-x En | 0.6894 | 0.5694 | 0.3294 | 0.1494 | 0.49 | 0.49
Cultura-x De | 0. | 0.03 | 0.06 | 0.09 | 0. | 0.
Cultura-x Es | 0. | 0.03 | 0.06 | 0.09 | 0. | 0.
Cultura-x Fr | 0. | 0.03 | 0.06 | 0.09 | 0. | 0.
Cultura-x It | 0. | 0.03 | 0.06 | 0.09 | 0. | 0.
Cultura-x Pt | 0. | 0.03 | 0.06 | 0.09 | 0. | 0.
Cultura-x Nl | 0. | 0.03 | 0.06 | 0.09 | 0. | 0.
Starcoder | 0.0702 | 0.0702 | 0.0702 | 0.0702 | 0.0497 | 0.28
Others | 0.2292 | 0.2292 | 0.2292 | 0.2292 | 0.4648 | 0.2292
Table 9: Data ablations with corresponding sampling weights. Column 2
(Control) is our reference, containing only English and code data, with a
standard, significant amount of web data. Mix 1-3 test our strategy for adding
multilingual data, capping non-English sources to a fixed percentage. In Mix
4-5 we reduce the amount of web data, increasing respectively books and
academic sources and code. Source Others contains the same data as in Tab. 1
from the categories: Academic, Books, Social, Law, Math, and Wiki. In Mix 4,
we uniformly upsample Academic and Books sources.
Evaluations of each model on English and non-English benchmarks are shown in
Tab. 10. We observe the following trends:
* Contrary to [53], we find less conclusive evidence that code can be used as
a neutral filler for training data, as increasing the amount of code leads to
a degradation in language model performance. We leave a more thorough
exploration of this to future work, including math and reasoning tasks that
may benefit from a higher proportion of code data.
* Performance on non-English benchmarks increases for each language by adding
any amount of data in that same language. However, this increase saturates
very quickly, and we observe only modest gains beyond $6\%$. We hypothesize
that this might be due to the lack of high-quality, structured data sources in
non-English languages, which we only sample from the web.
* Upsampling Academic and Books sources improves downstream performance over
the control run, particularly in natural language understanding.
Data | Avg | English Commonsense | LAMBADA | Okapi
---|---|---|---|---
Control | 38.76 | 66.51 | 30.93 | 33.98
Mix 1 | 39.91 | 63.92 | 34.25 | 35.39
Mix 2 | 40.69 | 64.71 | 35.74 | 35.93
Mix 3 | 39.87 | 63.02 | 35.22 | 35.25
Mix 4 | 39.41 | 65.87 | 32.73 | 34.58
Mix 5 | 38.66 | 64.95 | 31.4 | 34.07
Table 10: Downstream evaluations of the models considered in our data
ablations. English commonsense includes: ARC-Easy, PIQA, SciQ, WinoGrande.
Each score is averaged over EN, DE, ES, FR, and IT.
## Appendix B Scheduler Ablations
[31] shows that the widely adopted cosine learning rate decay achieves optimal
performance only when performing a full cosine period, forcing practitioners
to fix the number of steps beforehand. As multi-epoch training performs well
for LLMs [53, 49, 68], and larger and cleaner data sources are made accessible
by the OS community, it becomes more and more important to alleviate this
limitation. To this end, we experimented with the "inverse square root" (rsqrt) learning rate scheduler [59] of Eq. 2,
$\frac{1}{\sqrt{\max\left(i,k\right)}}$ (2)
where $i$ is the current iteration and $k$ the number of warmup steps. As the
scheduler is strictly convex and reaches zero asymptotically, it can be used
to train for infinite iterations.
However, in standard scenarios, of which we show an example in Fig. 4, rsqrt
consistently underperforms cosine. We make the comparison by decaying the
learning rate to 0 in both cases, with a linear cool down for the last $10\%$
of the steps for the rsqrt scheduler.
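For reference, a minimal sketch of the rsqrt schedule of Eq. 2, together with the linear cool down used in this comparison, could look as follows; the linear warmup shape and the peak-rate normalization are assumptions.

```python
import math

def rsqrt_lr(i: int, k: int, peak_lr: float) -> float:
    """Eq. 2 scaled so the rate equals peak_lr at the end of warmup.
    The linear warmup shape and the normalization are assumptions."""
    if i < k:
        return peak_lr * i / k          # assumed linear warmup
    return peak_lr * math.sqrt(k / i)   # 1/sqrt(max(i, k)), normalized

def rsqrt_with_cooldown(i: int, k: int, total: int, peak_lr: float) -> float:
    """rsqrt with a linear cool down to 0 over the last 10% of the steps,
    mirroring the comparison setup described above (a sketch)."""
    start = int(0.9 * total)
    if i < start:
        return rsqrt_lr(i, k, peak_lr)
    return rsqrt_lr(start, k, peak_lr) * max(0.0, 1.0 - (i - start) / (total - start))
```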
A sharp difference between the two schedulers is how they start from the peak
learning rate: with a flat slope, negative second derivative for cosine and a
large slope, positive second derivative for rsqrt. This allows rsqrt to escape
the high learning rate region quickly, which we identified as the main cause
of the performance gap. We then get the best of both worlds by combining both
schedulers into a single function that is again the cosine for the first
section, and smoothly switches to rsqrt, as described in Eq. 1.
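As an illustration of the idea (not the exact functional form of Eq. 1, which is given in the main text), a hybrid schedule with an assumed turning point at a quarter of the training steps can be sketched as:

```python
import math

def hybrid_lr(i: int, k: int, total: int, peak_lr: float) -> float:
    """Illustration of the cosine-to-rsqrt idea: cosine decay up to a
    turning point assumed at total/4 (cf. the hyb variant), then an
    rsqrt tail matched for continuity. This sketches the concept only."""
    turn = total // 4
    if i < k:                                   # assumed linear warmup
        return peak_lr * i / k
    if i <= turn:                               # cosine section
        progress = (i - k) / max(turn - k, 1)
        return peak_lr * 0.5 * (1.0 + math.cos(0.5 * math.pi * progress))
    lr_turn = peak_lr * 0.5                     # cosine value at the switch
    return lr_turn * math.sqrt(turn / i)        # rsqrt tail, continuous at turn
```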
Finally, we experimented with two different versions of our scheduler, whose
performance we show in the right panels of Fig. 4: hybrid (hyb) corresponds to
our final scheduler defined in Eq. 1, while for hybrid2 (hyb2) we double the
cosine period and move the turning point at half the training steps instead of
at a fourth. We attribute the difference in performance between the two
versions to the different average learning rates. For the set of experiments
in Fig. 4, integrating the scheduler over the training volume, we obtain an
average learning rate of $1.5\times 10^{-4}$, $1.13\times 10^{-4}$, and $1.67\times 10^{-4}$ for cosine, hyb, and hyb2, respectively. We leave to future work a proof that schedulers with the
same average value lead to statistically equivalent models under mild
conditions, such as monotonicity and fixed endpoints.
## Appendix C Evaluation of Performance Across Different Sized Context
Windows
Figure 4: Learning rate scheduler ablations. Left figures: comparison between
rsqrt and cosine decay scheduler. Right figures: final loss of models trained
with the two variants of the hybrid scheduler and cosine for different number
of training tokens (top). The difference in final loss between the 3
schedulers is $<1\%$ (bottom).
The Needle-in-a-haystack test, as introduced in [35], is commonly used to
assess the retrieval capabilities of LLMs across different context window
sizes. Following the methodology of [35], a fact ("the needle") is embedded
within a context of unrelated essays ("the haystack"). The model under
evaluation is tasked to answer a question requiring the retrieval of the
needle from the context. The evaluation is carried out systematically by
placing the needle at 35 different depths within the context and comparing the
results across context window sizes from 500 to 4000 tokens. Notably, these
window sizes are specifically chosen to assess the performance within our
trained context size of 4096 tokens. An AI judge, typically GPT-4 [55], scores
the answers from 1 to 10 based on whether or not the fact was correctly
retrieved.
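A minimal sketch of how such an evaluation grid could be constructed is shown below; the helper function and the spacing of the context sizes are assumptions for illustration.

```python
import numpy as np

# Sketch of the evaluation grid: the needle is inserted at 35 depths for
# each context size between 500 and 4000 tokens.
def embed_needle(haystack: list, needle: list, depth_frac: float) -> list:
    pos = int(depth_frac * len(haystack))
    return haystack[:pos] + needle + haystack[pos:]

context_sizes = np.linspace(500, 4000, 8, dtype=int)  # assumed spacing
depths = np.linspace(0.0, 1.0, 35)                    # 35 depths, as in the text
grid = [(int(c), float(d)) for c in context_sizes for d in depths]
print(len(grid), "needle placements to score")
```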
While performing the evaluation, we noticed that the order of the context was
not fixed or controlled by a seed, leading in some cases to significantly
different scores. For this reason, we show in Fig. 5 the average results over
10 different runs, together with the corresponding standard deviation. For
Stable LM 2, we employ the prompt of [21], whereas for our fine-tuned version
we use the official repository as is. Averaging the scores over the evaluated
grid, we observe a slight degradation from $\approx 7.7$ for our base model to
$\approx 6.8$ for the fine-tuned version. The different prompt structures make
it hard to directly compare the results, however, we will investigate in
future work how different elements such as the distribution of the document
lengths and the attention mask correlate with this behavior.
Figure 5: Needle-in-a-haystack evaluation of Stable LM 2 on context window sizes from 500 to 4000 tokens.
Figure 6: Number of iterations performed at various batch sizes to match the same final loss.
## Appendix D Global Batch Size
The role of the batch size in stochastic optimization convergence is
illustrated in [51] as follows. The loss $L({\theta})$ of a model
parameterized by $\theta$ over a training set $D$ is estimated by
independently drawing random samples from $D$ to form a batch $B$. Thus, the
loss gradient used to update the model’s parameters is given by
$\nabla L\left(\theta\right)=\frac{1}{|B|}\sum_{i=1}^{|B|}\nabla_{\theta}L_{x_{i}}\left(\theta\right)$ (3)
and its variance scales with $1/|B|$. In other words, a bigger batch has a
lower variance and thus provides a more accurate estimate of the gradient. A
more accurate gradient suggests that we should increase the learning rate
accordingly to converge faster; however, in practice, this is far from trivial
and requires special handling such as fine-tuning the learning rate scheduler
or performing per-layer updates [25, 79].
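The $1/|B|$ scaling can be verified numerically on a toy loss; the following sketch is our own illustration, not part of the training setup.

```python
import numpy as np

# Toy illustration of the 1/|B| scaling of the gradient variance in Eq. 3,
# using the loss 0.5*(theta - x)^2 on synthetic data (an assumed setup).
rng = np.random.default_rng(0)
theta = 2.0
data = rng.normal(loc=1.0, scale=3.0, size=100_000)

def batch_grad(batch: np.ndarray) -> float:
    return float(np.mean(theta - batch))  # gradient of the toy loss

for B in (16, 64, 256, 1024):
    grads = [batch_grad(rng.choice(data, size=B)) for _ in range(2000)]
    v = np.var(grads)
    print(f"|B| = {B:4d}  var = {v:.5f}  var * |B| = {v * B:.3f}")
# var * |B| stays roughly constant, i.e., the variance scales with 1/|B|.
```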
To choose the global batch size for Stable LM 2, we make the following
assumptions:
1. We do not change the learning rate based on the batch size.
2. The batch size can be approximately scaled at no cost.
With $1.$, we are, in principle, giving up on potential further gains by
tuning the step size to the gradient noise. In practice, we notice that using
a large learning rate at the boundary between convergence and divergence of
our ablations is more than enough to compensate for this. $2.$ follows from a
combination of theoretical results, hardware optimizations, and increased
training data availability. [53] empirically demonstrated how multiple epochs
of repeated data are as good as fresh data in LLMs pre-training, while the
computational overhead of increasing data parallel workers is minimal.
The data volume available for pre-training is consistently increasing through
larger datasets, and there are promising results of multi-epoch training.
Hence, we explore training with larger batch sizes, which requires more
overall training tokens to reach the final loss but significantly decreases
training time. To determine our final batch size, we start by training a
baseline with a batch $B_{0}$ of 4M tokens on $T_{0}=50B$ total training
tokens. Subsequently, we train new models from scratch with batches $B_{i}=$
8M, 12M, and 16M, increasing $T_{i}$ tokens until the same final loss is
reached. We do this with an rsqrt scheduler as the number of required training
steps is unknown beforehand.
In Fig. 6, we show the number of iterations $T_{i}/B_{i}$ which are required
to match the baseline loss. Assuming for simplicity that the iteration time is
independent of the batch size, this gives an upper bound on the speed up we
can achieve by increasing the batch.
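Using the numbers quoted in this appendix, that bound can be computed directly; the snippet below is a back-of-the-envelope illustration.

```python
# Back-of-the-envelope version of Fig. 6 using the numbers quoted in the
# text: the baseline (4M-token batch, 50B tokens) versus the largest batch
# (16.7M tokens, matched at 96B tokens).
runs = {"baseline": (4.0e6, 50e9), "largest": (16.7e6, 96e9)}
iters = {name: T / B for name, (B, T) in runs.items()}
print({k: round(v) for k, v in iters.items()})             # ~12500 vs ~5749
print("upper-bound speedup:", round(iters["baseline"] / iters["largest"], 2))
```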
In the regime considered, we observe that an increase in batch size leads to a
decrease in the number of iterations required to match the baseline loss all
the way to the biggest batch considered of $16.7$M tokens, which speeds up
training by a factor 2x. However, to achieve an equal loss, we require
$T_{i}=96B$ training tokens, which is a factor 1.96 increase compared to the
baseline. Therefore to train Stable LM 2, we opt for a global batch of
$8,388,608$, which with the layout of Tab. 3 offers the best compromise
between decrease in training time and additional required training tokens.
## Appendix E Restructured Pre-training Sources
While many sources are restructured in [81], in this work, we have only
considered those listed in Tab. 11 with commercially viable licenses such as
BIGPATENT, CLOTH, SciTLDR, TriviaQA, WordNet, WikiHow, etc. We also
restructure additional sources with similar licensing compatibility and list
them below.
Dataset | Prefix
---|---
Banking77 [9] | banking77
BigPatent [63] | big_patent
BIOSSES [65] | biosses
BLBooksGenre [7] | TheBritishLibrary/blbooksgenre
CodeComplex [33] | codeparrot/codecomplex
CoEdIT [60] | grammarly/coedit
CLOTH [75] | AndyChiang/cloth
CommonGen [42] | common_gen
FigQA [44] | nightingal3/fig-qa
FeasibilityQA [27] | jon-tow/feasibility_qa
Flan 2021 [47] | DataProvenanceInitiative/flan2021_submix_original
Flan Chain of Thought [47] | DataProvenanceInitiative/cot_submix_original
Flan NIv2 [47] | DataProvenanceInitiative/niv2_submix_original
Flan T0 [47] | DataProvenanceInitiative/t0_submix_original
HelpSteer [73] | nvidia/HelpSteer
IMDB Review [50] | ajaykarthick/imdb-movie-reviews
Joke Explanation [47] | dim/joke_explaination
MBPP [1] | mbpp
NarrativeQA [36] | narrativeqa
PuzzleQA [85] | Jingmiao/PUZZLEQA
SciTLDR [8] | allenai/scitldr
Self-Instruct Starcoder [82] | codeparrot/self-instruct-starcoder
SQL Create Context [2] | b-mc2/sql-create-context
StepGame [64] | tasksource/stepgame
TRACIE [87] | tasksource/tracie
TriviaQA [34] | trivia_qa
WikiHow | wikihow
WordNet [52] | jon-tow/open-english-wordnet-synset-2023
Yahoo Answers Topics | yahoo_answers_topics
Table 11: The original sources for restructured and instruction pre-training
datasets can be found at https://huggingface.co/datasets/ followed by the
provided prefix.
# Chemical Abundance of the LINER galaxy UGC 4805 with SDSS-IV MaNGA
A.C. Krabbe,1 C. B. Oliveira Jr.,1 I. A. Zinchenko,2,3 J. A. Hernández-
Jiménez,4 O. L. Dors Jr.,1 G. F. Hägele,5,6 M. V. Cardaci,5,6 N. R. Telles1
1 Universidade do Vale do Paraíba, Av. Shishima Hifumi, 2911, Zip Code
12244-000, São José dos Campos, SP, Brazil
2 Faculty of Physics, Ludwig-Maximilians-Universität, Scheinerstr. 1, 81679
Munich, Germany
3 Main Astronomical Observatory, National Academy of Sciences of Ukraine, 27
Akad. Zabolotnoho St 03680 Kyiv, Ukraine
4 Departamento de Ciencias Físicas, Universidad Andrés Bello, Fernández
Concha, 700, Las Condes, Santiago, Chile.
5 Instituto de Astrofísica de La Plata (CONICET La Plata–UNLP), Argentina.
6 Facultad de Ciencias Astronómicas y Geofísicas, Universidad Nacional de La
Plata, Paseo del Bosque s/n, 1900 La Plata, Argentina
E-mail: <EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
Chemical abundance determinations in Low-Ionization Nuclear Emission-line Regions (LINERs) are especially complex and uncertain because the nature of the ionizing source of this kind of object is unknown. In this work, we study the oxygen abundance in relation to the hydrogen abundance (O/H) of the gas phase of the UGC 4805 LINER nucleus. Optical spectroscopic data from the Mapping Nearby Galaxies at the Apache Point Observatory (MaNGA) survey were employed to derive the O/H abundance of the
UGC 4805 nucleus based on the extrapolation of the disk abundance gradient, on
calibrations between O/H abundance and strong emission-lines for Active
Galactic Nuclei (AGNs) as well as on photoionization models built with the
Cloudy code, assuming gas accretion onto a black hole (AGN) and post-
Asymptotic Giant Branch (p-AGB) stars with different effective temperatures.
We found that abundance gradient extrapolations, AGN calibrations, AGN and
p-AGB photoionization models produce similar O/H values for the UGC 4805
nucleus and similar ionization parameter values. The study demonstrated that
the methods used to estimate the O/H abundance using nuclear emission-line
ratios produce reliable results, which are in agreement with the O/H values
obtained from the independent method of galactic metallicity gradient
extrapolation. Finally, the results from the WHAN diagram combined with the
fact that the high excitation level of the gas has to be maintained at kpc
scales, we suggest that the main ionizing source of the UGC 4805 nucleus
probably has a stellar origin rather than an AGN.
###### keywords:
galaxies:abundances – ISM:abundances – galaxies:nuclei
## 1 Introduction
Determinations of the chemical abundances of Active Galactic Nuclei (AGNs) and
Star-Forming regions (SFs) are essential for understanding the chemical
evolution of galaxies and, consequently, of the Universe.
Among the heavy elements present in the gas phase of AGNs and SFs (e.g., O, N,
S), oxygen is the element with more accurate abundance determinations. This is
because prominent emission-lines from the main ionic stages of oxygen can be
easily detected in the optical spectra of these objects, making it a good
tracer of the metallicity (e.g., Kennicutt et al. 2003; Hägele et al. 2008;
Dors et al. 2015, 2020a). Therefore, hereafter we use metallicity ($Z$) and
oxygen abundance [12 + $\log$(O/H)] interchangeably. Abundance estimations
based on the direct method, also known as $T_{\rm e}$-method, are commonly
used to determine chemical abundances of gas phase of SFs (for a review see
Peimbert et al. 2017; Pérez-Montero 2017). These estimations seem to be more
reliable than those derived using empirical or theoretical relations between
the different electron temperatures (Hägele et al., 2006, 2008). In fact, the
compatibility between oxygen abundances in nebulae located in the solar
neighborhood and the ones derived from observations of the weak interstellar O
i$\lambda$1356 line towards the stars (see Pilyugin 2003 and references
therein) sustains the accuracy of the $T_{\rm e}$-method. This method is based
on determinations of nebular electron temperatures, which requires
measurements of auroral emission-lines, such as [O iii]$\lambda$ 4363 and [N
ii]$\lambda$ 5755, generally weak (about 100 times weaker than H$\beta$) or
not measurable in objects with high metallicity and/or low excitation (e.g.,
van Zee et al. 1998; Díaz et al. 2007). In the cases that auroral lines can
not be measured, indirect or strong-line methods can be used to estimate the
oxygen abundance, as proposed by Jensen et al. (1976) and Pagel et al. (1979).
This method is based on calibrations between the oxygen abundance or
metallicity and strong emission-lines, easily measured in SF spectra (for a
review see López-Sánchez & Esteban 2010b; Maiolino & Mannucci 2019; Kewley et
al. 2019).
For AGNs, chemical abundance determinations are preferably carried out in
Narrow Line Regions (NLRs) of Seyfert 2 nuclei due to the relatively low
velocity ($v\>\la\>400\>\rm km\>s^{-1}$, Contini 2017) of the shock waves
present in the gas and their low electron density ($N_{\rm e}\>\la\>2000\>\rm
cm^{-3}$, Zhang et al. 2013; Dors et al. 2014; for a review see Dors et al.
2020a). Oxygen abundance estimations for NLRs of Seyfert 2 have been obtained
by using the $T_{\rm e}$-method (Alloin et al. 1992; Izotov & Thuan 2008; Dors
et al. 2015, 2020a) and strong-line methods (e.g., Storchi-Bergmann et al.
1998; Castro et al. 2017; Carvalho et al. 2020). Studies based on strong-line
methods have indicated that Seyfert 2 nuclei in the local universe
($z\><\>0.4$) present similar metallicity (or abundances) as those in metal
rich H ii regions, i.e., no extraordinary enrichment has been observed in
AGNs, with these objects exhibiting solar or slightly over-solar
metallicities. This result agrees with predictions of chemical evolution
models for spiral and elliptical galaxies (e.g., Mollá & Díaz 2005).
An opposite situation is found for Low-Ionization Nuclear Emission-line
Regions (LINERs), whose chemical abundance studies are rare in the literature.
This class of objects appears in 1/3 of galaxies in the local universe (Netzer, 2013), and the nature of their ionization sources is still an open problem in astronomy.
Heckman (1980) suggested that these nuclei have gas shocks as their main
ionization/heating source. Later, Ferland & Netzer (1983) proposed that LINERs
could be ionized by gas accretion onto a central black hole (AGN) but with
lower ionization parameters (U) than those found in Seyferts. Therefore, the
difference between LINERs and other AGN types would consist of the order of
the ionization parameter (Ho et al., 1993). However, Terlevich & Melnick
(1985) and Shields (1992) proposed a new ionization model, i.e., LINERs are
ionized by hot stars, but, contrary to SFs, these are old stars (0.1-0.5 Gyr) that have evolved off the main sequence (e.g., in the post-Asymptotic Giant
Branch, p-AGB). Based on this scenario, Taniguchi et al. (2000) showed that
photoionization models considering Planetary Nebula Nuclei (PNNs) with a
temperature of $10^{5}$ K as ionizing sources can reproduce the region
occupied, at least, by a subset of type 2 LINERs in optical emission-line
ratio diagnostic diagrams. Winkler (2014) found that these objects have
composite ionizing sources, i.e., more than one mechanism is responsible for
the ionization of the gas. This explanation was also proposed by Yan & Blanton
(2012), Singh et al. (2013), and Bremer et al. (2013).
The unknown nature of the ionizing sources and excitation mechanisms of LINERs
hinders the determination of their metallicity using the $T_{\rm e}$-method and/or
strong-line methods (e.g., Storchi-Bergmann et al. 1998). Annibali et al.
(2010) analysed intermediate-resolution optical spectra of a sample of LINERs
and derived oxygen abundances considering these objects as AGNs (by using the
Storchi-Bergmann et al. 1998 calibrations) and as SFs (by using the Kobulnicky
et al. 1999 calibration). These authors found that when AGNs are assumed as
ionizing sources, higher oxygen values are derived than for those assuming hot
stars, which provide sub-solar abundances. On the other hand, oxygen abundance
estimations based on the extrapolation of disk abundance gradients to the
central part of the galaxies (an independent method) by Florido et al. (2012)
indicate over-solar oxygen abundances for three LINERs (NGC 2681, NGC 4314,
and NGC 4394).
Recently, semi-empirical calibrations between the oxygen abundance (or
metallicity) and strong-emission lines of Seyfert 2 were obtained by Castro et
al. (2017) and Carvalho et al. (2020). In addition, several methods to
determine the oxygen abundance gradients in spiral galaxies are available in
the literature (see Vila-Costas & Edmunds, 1992; Zaritsky et al., 1994; van
Zee et al., 1998; Pilyugin et al., 2004, 2007; Lopez-Sanchez & Esteban,
2010a). These methods, together with data from the Mapping Nearby Galaxies at
the Apache Point Observatory (MaNGA, Bundy et al., 2015), offer a powerful
opportunity to determine the chemical abundances of LINERs and to produce
insights about the ionization mechanisms of these objects.
In previous papers, we have analysed oxygen abundance in Seyfert 2 nuclei
using the $T_{\rm e}$-method, photoionization model grids, and HCM code (see
Dors et al. 2014; Castro et al. 2017; Pérez-Montero et al. 2019; Dors et al.
2019; Carvalho et al. 2020; Dors et al. 2020a, b). Although the semi-empirical calibrations between metallicity and strong emission lines of Seyfert 2 obtained by Castro et al. (2017) and Carvalho et al. (2020), along with the AGN photoionization model grids and SF calibrations, are applied in this paper, the object class studied here and the methodology applied are different. The main goal of this work is to determine the oxygen abundance in
relation to the hydrogen abundance (O/H) in the central region of the LINER
galaxy UGC 4805 (redshift $z=0.02698$), in combination with data from the
Mapping Nearby Galaxies at the Apache Point Observatory (MaNGA, Bundy et al. 2015). We assumed a spatially flat cosmology with $H_{0}$ = 71 $\rm
km\>s^{-1}Mpc^{-1}$, $\Omega_{m}=0.270$, and $\Omega_{\rm vac}=0.730$ (Wright,
2006), which leads to a spatial scale of 0.535 kpc/arcsec at the distance of
UGC 4805. This paper is organized as follows: in Section 2 the observational
data of UGC 4805 are described; Section 3 contains the methodology used to
estimate the oxygen abundance of the nucleus and along the disk of UGC 4805;
in Section 4, the results for the nuclear oxygen abundance are given; while
discussion and conclusions of the outcome are presented in Sections 5 and 6,
respectively.
## 2 Data
Figure 1: Left panel: SDSS false colour image combining the $gri$ bands of UGC
4805 taken from the MaNGA survey (Blanton et al., 2017). The IFU field of view
is indicated in purple. Right panel: Map of the H$\alpha$ flux (in units of
$10^{-17}$ erg/cm2/spaxel).
The MaNGA survey is an Integral Field Spectroscopy (IFS) survey (sdss.org/surveys/manga/; Blanton et al., 2017) developed to observe about 10 000 galaxies until 2020 using Integral Field Units (IFUs). This
survey is part of the fourth version of the Sloan Digital Sky Survey (SDSS-IV,
Blanton et al. 2017) and utilises the 2.5 m Sloan Telescope in its
spectroscopic mode. The spectra have a wavelength coverage of 3 600 - 10 300
Å, with a spectral resolution of $R\sim$ 1 400 at $\lambda\sim$4 000 Å and
$R\sim$ 2 600 at $\lambda\sim$9 000 Å. The angular size of each spaxel is 0.5
arcsec, and the average Full Width Half Maximum (FWHM) of the MaNGA data is
2.5 arcsec. For details about the strategy of observations and data reduction
see Law et al. (2015) and Law et al. (2016), respectively. From the objects
whose data are available in the MaNGA survey, we selected those presenting
LINER nuclei and disk emission, preferably from objects classified as SFs.
Based on these selection criteria, we selected 81 objects. In this work, we
present a detailed analysis of the spiral galaxy UGC 4805, an object with a
classical LINER nuclear emission and with the largest number of star-forming
emission spaxels along the disk. The spectrum of each spaxel was processed
according to the steps listed below:
* To obtain the nebular spectrum of each spaxel not contaminated by the stellar
population continuum, i.e., the pure nebular spectrum, we use the stellar
population synthesis code STARLIGHT developed by Cid Fernandes et al. (2005);
Mateus et al. (2006); Asari et al. (2007). This code fits the observed
spectrum of a galaxy using a combination of Simple Stellar Populations (SSPs),
in different proportions and excluding the emission lines. We use a spectral
basis of 45 synthetic SSP spectra with three metallicities $Z$ = 0.004, 0.02
($Z_{\odot}$), and 0.05, and 15 ages from 1 Myr up to 13 Gyr, taken from the
evolutionary synthesis models of Bruzual & Charlot (2003). The reddening law
by Cardelli et al. (1989) was considered. The stellar spectra of the SSPs were
convolved with an elliptical Gaussian function to achieve the same spectral
resolution as the observational data and transformed to the rest frame.
* The emission lines are fitted with Gaussian profiles. For more details about
the synthesis method and the fitting of emission lines, see Zinchenko et al.
(2016).
* The residual extinction associated with the gaseous component for each spatial bin was calculated by comparing the observed value of the H$\alpha$/H$\beta$ ratio to the theoretical value of 2.86 obtained by Hummer & Storey (1987) for an electron temperature of 10 000 K and an electron density of 100 cm-3 (a minimal sketch of this step is given below).
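The extinction-correction step can be sketched as follows, assuming the common convention I($\lambda$)/I(H$\beta$) = [F($\lambda$)/F(H$\beta$)] 10^{c(H$\beta$)f($\lambda$)} with f(H$\alpha$) = $-$0.35 as in Table 1; the observed decrement used below is a hypothetical value.

```python
import numpy as np

# Sketch of the residual-extinction step: c(Hbeta) from the Balmer
# decrement against the theoretical Halpha/Hbeta = 2.86 (Hummer & Storey
# 1987), assuming I(lam)/I(Hb) = [F(lam)/F(Hb)] * 10**(c * f(lam)).
def c_hbeta(obs_ha_over_hb: float, f_ha: float = -0.35) -> float:
    return float(np.log10(2.86 / obs_ha_over_hb) / f_ha)

def deredden(flux_over_hb: float, f_lam: float, c: float) -> float:
    return flux_over_hb * 10.0 ** (c * f_lam)

c = c_hbeta(3.33)       # hypothetical observed decrement
print(round(c, 2))      # ~0.19, the order of the c(Hbeta) quoted in Table 1
```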
Fig. 1 presents the SDSS false colour image obtained combining the $gri$ bands
of UGC 4805 and the resulting 2D map of the H$\alpha$ flux. Note the clear, well-separated nucleus and a bright star-forming ring in the disk at
$\sim$8 arcsec ($\sim 4.2$ kpc). In Fig. 2 (top panel), the observed (in
black) and synthetic (in red) spectra of the central region of UGC 4805 are
shown. Fig. 2 (bottom panel) also presents the pure emission spectrum, i.e.,
after the SSP subtraction, as well as emission line identifications. The
nuclear emission was obtained by integrating the flux of the central region
considering a radius of 1.5 arcsec ($\sim$1 kpc), which corresponds
approximately to the mean value of the seeing during the observations. In
Table 1 the reddening corrected emission-line intensities (in relation to
H$\beta$=100), the reddening function $f(\lambda)$, the logarithmic extinction
coefficient $c$(H$\beta$), the visual extinction AV, and the equivalent width
of H$\alpha$ [$\rm W_{H\alpha}$] of the LINER nucleus of UGC 4805 are listed.
The H$\beta$ luminosity (in units of erg/s) was also calculated and listed in
Table 1, considering a distance of 119 Mpc.
Figure 2: Upper panel: Stellar population synthesis for the nuclear region of
UGC 4805 within a circular aperture with a radius equal to 1.5 arcsec ($\sim$1
Kpc). Observed and synthetic spectra are in black and red, respectively. Lower
Panel: Pure emission spectrum of the UGC 4805 nucleus. Emission lines are
identified in the plot. The flux is in units of
$10^{-15}\,\rm{erg\,cm^{-2}\,s^{-1}\,\AA^{-1}}$.
The identification of the dominant ionization mechanism of the emitting gas
across the galaxy is essential to determine chemical abundances. To do that,
we used the $[\text{O\,{iii}}]\lambda 5007/\rm H\beta$ versus
$[\text{N\,{ii}}]\lambda 6584/\rm H\alpha$, $[\text{O\,{iii}}]\lambda 5007/\rm
H\beta$ versus $[\text{S\,{ii}}](\lambda\lambda 6716+31)/\rm H\alpha$, and
$[\text{O\,{iii}}]\lambda 5007/\rm H\beta$ versus $[\text{O\,{i}}]\lambda
6300/\rm H\alpha$ diagnostic diagrams proposed by Baldwin et al. (1981),
commonly known as BPT diagrams, to classify each spaxel of UGC 4805. The
empirical and theoretical criteria proposed by Kewley et al. (2001) and
Kauffmann et al. (2003), respectively, were considered to classify objects in
H ii-like regions, composite, and AGN-like objects. Furthermore, the
separation between Seyferts and LINERs proposed by Kewley et al. (2006) was
used. Fig. 3 shows these BPT diagrams for each spaxel of UGC 4805 and the
distribution of the regions in the galaxy according to $[\text{O\,{iii}}]/\rm
H\beta$ versus $[\text{N\,{ii}}]/\rm H\alpha$ diagram. As can be seen in these
diagrams, the central area of the galaxy is classified as LINER. Fig. 4 shows
the same $[\text{O\,{iii}}]\lambda 5007/\rm H\beta$ versus
$[\text{N\,{ii}}]\lambda 6584/\rm H\alpha$ diagram as Fig. 3 (top left panel),
but as a function of the distance to the centre of the galaxy. The colour of
each point corresponds to its distance from the galaxy centre, with the
reddest points representing the central spaxels. As can be noted in this
figure, the points closest to the centre lie in the LINER region. In addition, the distance to the galaxy centre and the location in the diagram are connected, so that points approaching the centre of the galaxy move away from the line that separates SF-like objects from AGN-like ones.
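For illustration, the classification in the [N ii] BPT diagram can be sketched with the standard forms of the Kauffmann et al. (2003) and Kewley et al. (2001) demarcation curves; the Seyfert/LINER division of Kewley et al. (2006) is omitted here for brevity.

```python
import numpy as np

# Sketch of the spaxel classification in the [N ii] BPT diagram.
def ka03(x: float) -> float:
    return 0.61 / (x - 0.05) + 1.30   # empirical SF upper limit

def ke01(x: float) -> float:
    return 0.61 / (x - 0.47) + 1.19   # theoretical SF upper limit

def classify(log_n2_ha: float, log_o3_hb: float) -> str:
    if log_n2_ha < 0.05 and log_o3_hb < ka03(log_n2_ha):
        return "star-forming"
    if log_n2_ha < 0.47 and log_o3_hb < ke01(log_n2_ha):
        return "composite"
    return "AGN-like"

# UGC 4805 nucleus, from Table 1: [N ii]6584/Ha = 321/286, [O iii]5007/Hb = 242/100.
print(classify(np.log10(321 / 286), np.log10(242 / 100)))  # AGN-like
```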
On the other hand, the diagram introduced by Cid Fernandes et al. (2011) uses
the equivalent width of H$\alpha$ ($\rm W_{H\alpha}$) and is known as a WHAN
diagram. This diagram can discriminate genuine low-ionization AGNs from
galaxies that are ionized by evolved low-mass stars, i.e. the post-Asymptotic
Giant Branch (post-AGB). The WHAN diagram identifies 5 classes of galaxies,
namely:
1. Pure star forming galaxies: $\log(\text{N\,{ii}}/\rm H\alpha)\><\>-0.4$ and $\rm W_{H\alpha}\>>\>3$ Å.
2. Strong AGN (i.e., Seyferts): $\log(\text{N\,{ii}}/\rm H\alpha)\>>\>-0.4$ and $\rm W_{H\alpha}\>>\>6$ Å.
3. Weak AGN: $\log(\text{N\,{ii}}/\rm H\alpha)\>>\>-0.4$ and $\rm W_{H\alpha}$ between 3 and 6 Å.
4. Retired galaxies (i.e., fake AGN): $\rm W_{H\alpha}\><\>3$ Å.
5. Passive galaxies (actually, line-less galaxies): $\rm W_{H\alpha}$ and $\rm W_{\text{N\,{ii}}}\><\>0.5$ Å.
According to this classification, the UGC 4805 nucleus is a Retired Galaxy
and, thus, it is ionized by post-AGB stars.
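The five classes above translate directly into a simple classifier; the sketch below is a plain transcription of these criteria.

```python
def whan_class(log_n2_ha, w_ha, w_n2=None):
    """Direct transcription of the five WHAN classes listed above
    (Cid Fernandes et al. 2011); equivalent widths in Angstrom."""
    if w_n2 is not None and w_ha < 0.5 and w_n2 < 0.5:
        return "passive"
    if w_ha < 3.0:
        return "retired"
    if log_n2_ha < -0.4:
        return "pure star forming"
    return "strong AGN" if w_ha > 6.0 else "weak AGN"

# UGC 4805 nucleus: log([N ii]/Halpha) ~ 0.05 and W_Halpha = 1.65 A (Table 1).
print(whan_class(0.05, 1.65))  # retired
```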
Figure 3: Top left panel: $\log([\text{O\,{iii}}]\lambda 5007/\rm H\beta)$ versus $\log([\text{N\,{ii}}]$ $\lambda 6584/\rm H\alpha)$ diagnostic diagram. The black solid curve represents the theoretical upper limit for star-forming regions proposed by Kewley et al. 2001 (Ke01), the black dashed curve is the empirical star-forming limit proposed by Kauffmann et al. (2003) (Ka03), and the blue solid line represents the separation between Seyferts and LINERs from Kewley et al. (2006) (Ke06). The region between the Ke01 and Ka03 lines is denominated the composite region (black points). Top right panel: Distribution of the UGC 4805 regions according to their main excitation mechanism as shown in the $\log([\text{O\,{iii}}]\lambda 5007/\rm H\beta)$ versus $\log([\text{N\,{ii}}]$ $\lambda 6584/\rm H\alpha)$ diagram (top left panel). Bottom left panel: $\log([\text{O\,{iii}}]\lambda 5007/\rm H\beta)$ versus $\log([\text{S\,{ii}}](\lambda\lambda 6716+31)/\rm H\alpha$) diagram. Bottom right panel: $\log([\text{O\,{iii}}]\lambda 5007/\rm H\beta)$ versus $\log([\text{O\,{i}}]\lambda 6300/\rm H\alpha)$ diagram. Red points represent the AGN-like spaxels and blue points the SF-like spaxels of UGC 4805, according to the $\log([\text{O\,{iii}}]\lambda 5007/\rm H\beta)$ versus $\log([\text{N\,{ii}}]$ $\lambda 6584/\rm H\alpha)$ diagram.
Figure 4: $\log([\text{O\,{iii}}]\lambda 5007/\rm H\beta)$ versus $\log([\text{N\,{ii}}]$ $\lambda 6584/\rm H\alpha)$ diagnostic diagram. The colour of each point corresponds to its distance from the galaxy centre, with the reddest points representing the central spaxels.
Table 1: Reddening corrected emission-line intensities (in relation to H$\beta$=100), the reddening function $f(\lambda)$, the logarithmic extinction coefficient $c$(H$\beta$), the visual extinction AV, and the H$\beta$ luminosity (erg/s) of the UGC 4805 nucleus. The estimations were obtained considering a radius of 1 kpc.
 | $f(\lambda)$ | Measurements
---|---|---
[O ii] $\lambda$3727 | 0.33 | 327 $\pm$ 5
[O iii] $\lambda$4959 | $-$0.02 | 91 $\pm$ 2
H$\beta$ $\lambda$4861 | 0.00 | 100 $\pm$ 3
[O iii] $\lambda$5007 | $-$0.04 | 242 $\pm$ 3
[N ii] $\lambda$6548 | $-$0.35 | 126 $\pm$ 3
[O i] $\lambda$6300 | $-$0.29 | 21 $\pm$ 5
H$\alpha$ $\lambda$6563 | $-$0.35 | 286 $\pm$ 3
[N ii] $\lambda$6584 | $-$0.35 | 321 $\pm$ 4
[S ii] $\lambda$6717 | $-$0.36 | 135 $\pm$ 3
[S ii] $\lambda$6731 | $-$0.37 | 96 $\pm$ 3
$c$(H$\beta$) | — | 0.19 $\pm$ 0.005
WHα | — | 1.65 $\pm$ 0.21 [Å]
AV | — | 0.37 [mag]
$\log$[$L$(H$\beta$)] | — | 38.86 [erg/s]
## 3 Oxygen abundance determination
To obtain the oxygen abundance of the UGC 4805 nucleus, five calibrations of
SFs were used to extrapolate the radial oxygen abundance for the central
region. This method has been used by several authors (e.g., Vila-Costas &
Edmunds 1992; van Zee et al. 1998; Pilyugin et al. 2004; Zinchenko et al.
2019) and it produces an independent estimation of the oxygen abundance of
nuclear regions. Recently, Mingozzi et al. (2020) measured gas-phase
metallicity, ionisation parameter and dust extinction for a representative
sample of 1795 local star-forming galaxies using integral field spectroscopy
from the SDSS-IV MaNGA survey, demonstrating the reliability of this survey for this type of study. In addition, calibrations between the gas phase
O/H abundance (or metallicity) and strong emission-lines for Seyfert 2 AGNs
and photoionization model results were considered to estimate the UGC 4805
nucleus oxygen abundance. Each method is described below.
### 3.1 Star-forming regions
The goal of this work is to determine the oxygen abundance in the nuclear
region of UGC 4805. In principle, determinations of oxygen abundances based on
measurements of temperature sensitive line ratios, for example
$[\text{O\,{iii}}]\lambda\,4363$ and $[\text{N\,{ii}}]\lambda\,5755$, should
provide more accurate estimates of O/H (Kennicutt et al., 2003), because these
are free from the uncertainties of photoionization models (e.g., Viegas 2002;
Kennicutt et al. 2003), considered in the majority of strong-line methods
(e.g., Kewley & Dopita 2002). Unfortunately, electron temperature sensitive
line ratios were not measured in the UGC 4805 spectra. In such cases, only strong-line methods can be used to determine the oxygen abundances in the H
ii regions along the UGC 4805 disk and, then, to obtain the central intersect
O/H abundance. The strong-line methods considered in this work to derive the
O/H gradient are briefly described below.
* Edmunds & Pagel (1984): This theoretical calibration, obtained by using the
model calculations by Dufour et al. (1980) and Pagel et al. (1980), is based
on the $R_{23}$=([O ii]$\lambda$3727+[O
iii]$\lambda\lambda$4959+5007)/H$\beta$ index and the equations are given by
$12+\log{\rm(O/H)_{up}}=8.76-0.69\log R_{23}$ (1)
and
$12+\log{\rm(O/H)_{low}}=6.43+1.67\log R_{23},$ (2)
where "up" and "low" mean the equations for the upper and lower branch of the
(O/H)-$R_{23}$ calibration, respectively.
* Denicoló et al. (2002): These authors proposed a calibration between the O/H
abundance and the $N2=\log$([N ii]$\lambda$6584/H$\alpha$) line ratio,
originally proposed by Storchi-Bergmann et al. (1994) as a metallicity
indicator for H ii regions. For the low metallicity regime ($\rm
12+\log(O/H)\><\>8.4$), Denicoló et al. (2002) considered O/H values
calculated through the $T_{\rm e}$-method and for the high metallicity regime
abundance estimations based on calibrations by McGaugh (1991) and Díaz &
Pérez-Montero (2000). The expression proposed by Denicoló et al. (2002) is
$12+\mathrm{\log(O/H)}=9.12+0.73\times N2.$
This calibration is valid for the range of $7.2<12+\mathrm{\log(O/H)}<9.1$.
* Pettini & Pagel (2004): These authors used a sample of extragalactic H ii
regions and the
$O3N2=\log\left(\frac{[\mathrm{OIII}]\lambda\,5007/\mathrm{H}\beta}{[\mathrm{NII}]\lambda\,6583/\mathrm{H}\alpha}\right)$
parameter to derive the calibration:
$12+\mathrm{\log(O/H)}=8.73-0.32\times\textit{O3N2},$
valid for the range of $8.0<12+\mathrm{\log(O/H)}<9.0$. Pettini & Pagel (2004)
considered O/H values calculated using the $T_{\rm e}$-method for most cases
and a few estimations based on detailed photoionization models.
* Dors & Copetti (2005): These authors built photoionization model sequences to
reproduce the emission-line ratio intensities of H ii regions located along
the disks of a sample of spiral galaxies to derive O/H gradients. Dors &
Copetti (2005) obtained the semi-empirical calibration
$12+\mathrm{\log(O/H)}=8.96-0.03x-0.1x^{2}-0.21x^{3}-0.26x^{4},$
with $x=\log{R_{23}}$. This calibration is valid for the upper branch of the
(O/H)-$R_{23}$ relation (i.e., $12+\log({\rm O/H})\>>\>8.2$).
* Pilyugin & Grebel (2016): These authors used a sample of H ii regions with
abundances determined by the ‘counterpart’ method ($C$ method) to derive a
calibration based on oxygen and nitrogen emission lines. These empirical
calibrations use the excitation parameter $P=R_{3}/(R_{2}+R_{3})$, and $N2$,
where $R_{2}$ = [O ii]($\lambda\,$3726 + $\lambda\,$3729)/H$\beta$ and $R_{3}$ = [O iii]($\lambda$4959 + $\lambda$5007)/H$\beta$.
Two equations were obtained, one for H ii regions with $N2\lid-0.6$ (the lower
branch), defined by
$12+\log(\mathrm{O/H})=7.932+0.944\times\log(R_{3}/R_{2})+0.695\times N2+(0.970-0.291\times\log(R_{3}/R_{2})-0.019\times N2)\times\log R_{2},$
and another for $N2\>\gid\>-0.6$ (the upper branch), where the following equation is valid
$12+\log(\mathrm{O/H})=8.589+0.022\times\log(R_{3}/R_{2})+0.399\times N2+(-0.137+0.164\times\log(R_{3}/R_{2})+0.589\times N2)\times\log R_{2}.$
This method is similar to the $C_{NS}$ method proposed by Pilyugin et al.
(2012) and it yields O/H abundance values similar to those derived through the
$T_{\rm e}$-method.
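As a concrete illustration, two of the calibrations above can be transcribed directly from the equations given in the text. Note that the paper applies them to the disk H ii regions; the nuclear ratios of Table 1 are used below only as example inputs.

```python
import numpy as np

# Two of the strong-line calibrations above, transcribed from the text.
def oh_n2_denicolo(n2: float) -> float:
    """Denicolo et al. (2002): 12+log(O/H) = 9.12 + 0.73 N2."""
    return 9.12 + 0.73 * n2

def oh_o3n2_pettini(o3n2: float) -> float:
    """Pettini & Pagel (2004): 12+log(O/H) = 8.73 - 0.32 O3N2."""
    return 8.73 - 0.32 * o3n2

n2 = np.log10(321 / 286)                       # log([N ii]6584/Halpha)
o3n2 = np.log10((242 / 100) / (321 / 286))     # log(([O iii]/Hb)/([N ii]/Ha))
print(round(oh_n2_denicolo(n2), 2), round(oh_o3n2_pettini(o3n2), 2))
```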
### 3.2 Active Galactic Nuclei
* Storchi-Bergmann et al. (1998): The first calibrations between the oxygen
abundance and strong narrow emission-line ratios of AGNs were the theoretical
ones proposed by Storchi-Bergmann et al. (1998). These authors used
photoionization model sequences, built with the Cloudy code (Ferland, 1996),
and proposed the calibrations
$(\mathrm{O/H})=8.34+0.212x-0.012x^{2}-0.002y+0.007xy-0.002x^{2}y+6.52\times 10^{-4}y^{2}+2.27\times 10^{-4}xy^{2}+8.87\times 10^{-5}x^{2}y^{2},$
$(\mathrm{O/H})=8.643-0.275u+0.164u^{2}+0.655v-0.154uv-0.021u^{2}v+0.288v^{2}+0.162uv^{2}+0.0353u^{2}v^{2},$
where $x$ = [N ii]($\lambda\lambda$6548,6584)/H$\alpha$, $y=$ [O iii]($\lambda\lambda$4959,5007)/H$\beta$, $u=\log$([O ii]($\lambda\lambda$3726,3729)/[O iii]($\lambda\lambda$4959,5007)), and $v=\log$([N ii]($\lambda\lambda$6548,6584)/H$\alpha$).
These calibrations are valid for the range of $8.4\><\>12+\log(\mathrm{O/H})\><\>9.1$. Differences between O/H estimations derived using these calibrations are of the order of 0.1 dex (Storchi-Bergmann
et al., 1998; Annibali et al., 2010; Dors et al., 2020a, 2015). For LINERs,
Storchi-Bergmann et al. (1998) found that the calibrations above yield lower
values than those derived from the extrapolation of O/H abundance gradients,
suggesting that the assumptions of their models are not representative for
LINERs. It should be mentioned that they indicated that their sample of LINERs
was too small (only four objects) to provide a firm conclusion about the
application of their method to this kind of object. They also suggest that a
more extensive sample needs to be used to test their calibrations.
* Castro et al. (2017) proposed a semi-empirical calibration between the
metallicity and the
N2O2$=\log([\mathrm{\text{N\,{ii}}}]\lambda\,6584/[\mathrm{\text{O\,{ii}}}]\lambda\,3727)$
index. The calibration derived by these authors was obtained upon a comparison
between observational and photoionization model predictions of the [O
iii]$\lambda$5007/[O ii]$\lambda$3727 versus $N2O2$ line ratios and given by
$(Z/{\rm Z_{\odot}})=1.08(\pm 0.19)\times N2O2^{2}+1.78(\pm 0.07)\times N2O2+1.24(\pm 0.01).$
This calibration is valid for
$-1.4\>\la\>([\text{O\,{iii}}]/[\text{O\,{ii}}])\>\la\>2$ and
$-1.0\>\la\>N2O2\>\la\>1$.
* Carvalho et al. (2020) used the same methodology as Castro et al. (2017) to
calibrate NLRs metallicities of Seyfert 2 nuclei with the $N2$ emission-line
ratio. This ratio is practically independent of the flux calibration and
reddening correction. These authors proposed the following calibration
$(Z/Z_{\odot})=a^{N2}+b,$ (3)
where $a=4.01\pm 0.08$ and $b=-0.07\pm 0.01$. This calibration is valid for
$-1.4\>\la\>([\text{O\,{iii}}]/[\text{O\,{ii}}])\>\la\>2$ and
$-0.7\>\la\>(N2)\>\la\>0.6$. Carvalho et al. (2020) also proposed a relation
between the ionization parameter ($U$) and the [O iii]$\lambda$5007/[O
ii]$\lambda$3727 line ratio, almost independent of other nebular parameters,
and given by
$\log U=(0.57\pm 0.01\>x^{2})+(1.38\pm 0.01\>x)-(3.14\pm 0.01),$ (4)
where $x=\log$([O iii]$\lambda$5007/[O ii]$\lambda$3727).
Although the AGN calibrations above were developed for Seyfert 2 nuclei, in
this paper, we consider them to derive the O/H abundance in the LINER nucleus
of UGC 4805, and we compared the resulting values to those derived from
extrapolation of oxygen abundance gradients for central parts of this galaxy.
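For illustration, the Carvalho et al. (2020) calibrations of Eqs. 3 and 4 can be evaluated directly on the nuclear ratios of Table 1, using the central values of the fitted coefficients; the derived $\log U$ depends on the exact line combinations adopted.

```python
import numpy as np

# Sketch of the Carvalho et al. (2020) calibrations (Eqs. 3 and 4).
def z_carvalho(n2: float) -> float:
    """Eq. 3: (Z/Zsun) = a**N2 + b with a = 4.01, b = -0.07."""
    return 4.01 ** n2 - 0.07

def log_u_carvalho(x: float) -> float:
    """Eq. 4: log U = 0.57 x^2 + 1.38 x - 3.14, x = log([O iii]/[O ii])."""
    return 0.57 * x**2 + 1.38 * x - 3.14

n2 = np.log10(321 / 286)     # [N ii]6584/Halpha from Table 1
x = np.log10(242 / 327)      # [O iii]5007/[O ii]3727 from Table 1
z = z_carvalho(n2)
print(round(z, 2), round(8.69 + np.log10(z), 2))   # ~1.00 and ~8.69, cf. Table 2
print(round(log_u_carvalho(x), 2))                 # value depends on the ratio used
```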
### 3.3 Photoionization models
To reproduce the observed line ratios of UGC 4805 LINER nucleus with the goal
of deriving the O/H abundance and the ionization parameter ($U$), we built
photoionization model grids using version 17.00 of the CLOUDY code (Ferland et
al., 2017). These models are similar to the ones used in Carvalho et al.
(2020), and a brief description of the input parameters is presented below:
1. SED: The models consider two distinct Spectral Energy Distributions (SEDs):
one to represent an AGN and another representing p-AGB stars. The AGN SED is a
multi-component continuum, similar to that observed in typical AGNs. As
described in the Hazy manual of the Cloudy code (http://web.physics.ucsb.edu/~phys233/w2014/hazy1_c13.pdf), it is composed of
the sum of two components. The first one is a Big Bump component peaking at
$\approx$ 1 Ryd, parametrized by the temperature of the bump, assumed to be
$5\>\times\>10^{5}$ K, with a high-energy exponential cutoff and an infrared
exponential cutoff at 0.01 Ryd. The second component is an X-ray power law
with spectral index $\alpha_{x}=-1$ that is only added for energies greater
than 0.1 Ryd to prevent it from extending into the infrared. The X-ray power
law is not extrapolated below 1.36 eV or above 100 keV: for energies lower
than 1.36 eV it is set to zero (since the bump dominates for these energies),
and for energies above 100 keV, the continuum falls off as $\nu^{-2}$. The
$\alpha_{ox}$ spectral index defined as the slope of a power law between 2 keV
and 2500 Å is the parameter that provides the normalization of the X-ray power
law to make it compatible with the thermal component. It is given by
$\alpha_{ox}=\frac{\log[F(2\>{\rm
keV})/F(2500\>\textrm{\AA})]}{\log[\nu(2\>{\rm
keV})/\nu(2500\>\textrm{\AA})]},$ (5)
where $F$ is the flux at 2 keV, 2500 Å and $\nu$ are the corresponding
frequencies (Tananbaum et al., 1979). This AGN SED generates a continuum
similar to that used by Korista et al. (1997). In all our AGN models, a fixed
value of $\alpha_{ox}=-1.0$ is assumed. Carvalho et al. (2020) found that
models with $\alpha_{ox}\>\la\>-1.0$ tend not to reproduce optical emission
line ratios of Seyfert 2 nuclei (see also Dors et al. 2017b; Pérez-Montero et
al. 2019). Moreover, $\alpha_{ox}\sim-1.0$ has been derived in observational
studies of LINERs and low luminosity AGNs (see Ho 1999; Eracleous et al. 2010;
Maoz 2007; Younes et al. 2012).
In the case of the stellar SED, we consider p-AGB stars atmosphere models by
Rauch (2003) assuming the available values for the effective temperatures:
$T_{\rm eff}=50$, 100, and 190 kK, with the logarithm of the surface gravity
$\log(\rm g)=6$. In Fig. 5, we present a comparison between the SEDs assumed
in our models. The AGN SED maintains a high ionization flux even at high
energies (low wavelengths) somewhat similar to the p-AGB one with the highest
$T_{\rm eff}$ value. Some soft emission is noted for p-AGB stars with 100 kK
and mainly with 50 kK. Both AGN and p-AGB SED models can be considered as the
main ionizing source, i.e., responsible for the ionization of the gas; an underlying stellar population was not considered in the models. Therefore, our
models are designed to investigate what kind of object would be producing the
gas ionization in UGC 4805 based on emission line intensity ratios. These
models are not intended for analysing the equivalent width of lines, as
performed by Cid Fernandes et al. (2011), which also strongly depends on the
underlying stellar population (Dottori & Bica, 1981).
2. Metallicity: We assumed ($Z/{\rm Z_{\odot}}$) = 0.2, 0.5, 0.75, 1.0, 2.0, and
3.0 for the models. We assumed the solar oxygen abundance to be 12 + $\log$
(O/H)⊙ = 8.69 (Asplund et al., 2009; Prieto et al., 2001) and it is equivalent
to ($Z/{\rm Z_{\odot}}$)=1.0. All the abundances of heavy elements were scaled
linearly with the metallicity, except the nitrogen for which we assumed the
relation $\rm\log(N/O)=1.29\times[12+\log(O/H)]-11.84$ derived by Dors et al.
(2017b), who considered abundance estimations for type 2 AGNs and H ii
regions.
3. Electron Density: We assumed for the models an electron density value of
$N_{\rm e}$ = 500 $\rm cm^{-3}$, constant in the nebular radius. This value is
very similar to the one estimated for UGC 4805 nucleus through the relation
between $N_{\rm e}$ and $R_{S2}=$[S ii]$\lambda 6716/\lambda 6731$ line ratio
and using the IRAF/TEMDEN task. Observational estimations of $N_{\rm e}$ based
on the Ar iv$\lambda$4711/$\lambda$4740 ratio, which map a denser gas region
than the one based on [S ii] ratio, for two Seyfert nuclei (IC 5063 and NGC
7212) by Congiu et al. (2017), indicate $N_{\rm e}$ ranging from $\sim 200$ to
$\sim 13\,000\>\,\rm cm^{-3}$. Furthermore, radial gradients with electron
densities decreasing from the centres to the edges have been found in star-
forming regions (e.g., Copetti et al. 2000) and in AGNs (e.g., Revalski et al.
2018). However, Carvalho et al. (2020) showed that models with $N_{\rm
e}\><2\,000\>\rm cm^{-3}$ produce practically the same optical emission-line
ratios. In addition, photoionization models assuming electron density
variations along the radius have an almost negligible influence on predicted
optical line ratios as demonstrated by Dors et al. (2019). For a detailed
discussion about the $N_{\rm e}$ influence on metallicity estimates in Seyfert
2 AGNs, see Dors et al. (2020b).
4. Ionization Parameter: This parameter is defined as
$U=\frac{Q({\rm H)}}{4\,\pi\,R_{{\rm 0}}^{2}\,n(\rm H)\,\rm c},$ (6)
in which $Q(\rm H)$ [$\rm s^{-1}$] is the number of hydrogen-ionizing photons
emitted by the central object, $R_{0}$ [cm] is the distance from the
ionization source to the inner surface of the ionized gas cloud, $n(\rm H)$
[cm-3] is the total hydrogen density (ionized, neutral and molecular), and
$\rm c$ is the speed of light [cm s-1]. We assumed logarithm of $U$ in the
range of -4.0 $\leq\log U\leq$ -0.5, with a step of 0.5 dex, which is about
the same range of values assumed by Feltre et al. (2016), who used a
photoionization model grid to reproduce ultraviolet and optical emission-line
ratios of active and normal galaxies. Different ionization parameter values
simulate gas excitation differences, owing to variations in the mass of the
gas phase and several geometrical conditions covering a wide range of possible
scenarios (Pérez-Montero, 2014).
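Equation 6 can be evaluated directly; in the sketch below, Q(H) and $R_{0}$ are arbitrary illustrative choices and $n(\rm H)$ is set near the adopted density of 500 $\rm cm^{-3}$, not the actual inputs of the model grid.

```python
import numpy as np

# Direct evaluation of Eq. 6 with illustrative, assumed inputs.
C_LIGHT = 2.998e10  # speed of light in cm/s

def ionization_parameter(q_h: float, r0_cm: float, n_h: float) -> float:
    return q_h / (4.0 * np.pi * r0_cm**2 * n_h * C_LIGHT)

u = ionization_parameter(q_h=1e52, r0_cm=3.086e20, n_h=500.0)  # R0 = 100 pc
print(f"log U = {np.log10(u):.2f}")  # ~ -3.2, within the assumed grid range
```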
In our models, a plane-parallel geometry is adopted, and the outer radius is
assumed to be the one where the gas temperature falls to 4 000 K (default
outer radius value in the CLOUDY code), since cooler gas practically does not
produce optical emission lines. Models with different combinations of $Q(\rm
H)$, $R_{0}$, and $n(\rm H)$, resulting in similar values of $U$, are
homologous models, i.e., they predict very similar emission-line intensities.
For the ionizing sources, Cloudy is a unidimensional code that assumes a
central ionization source, which is a good approach for AGNs. However, in
giant star-forming regions (e.g., Monreal-Ibero et al. 2011), stars are
spread out through the gas. In this sense, in most cases, a central
ionization source usage would not constitute a genuine representation of the
situation. Ercolano et al. (2009) and Jamet & Morisset (2008) showed that the
distribution of the O-B stars in relation to the gas alters the ionisation
structure and the electron temperature. Hence, the ionization parameter
partially depends on the distance of the ionizing source to the gas. However,
in our case, we are considering an integrated spectrum of the UGC 4805
nucleus; thus, the stellar distribution may have a minimum effect on the
emergent spectrum. In the case of giant H ii regions ionized by stellar
clusters (e.g. Mayya & Prabhu 1996; Bosch et al. 2001), the hottest stars
dominate the gas ionization (Dors et al., 2017a). Therefore, the assumption of
a single star with a representative effective temperature as the main ionizing
source, as assumed in our p-AGB models, is a good approximation (see e.g.,
Zinchenko et al., 2019).
To estimate the O/H and $U$ values for the UGC 4805 nucleus, we compare some
observational emission line intensity ratios with photoionization model
results using diagnostic diagrams and perform a linear interpolation between
models.
Figure 5: Comparison between the p-AGB star and AGN SEDs assumed as the ionizing
source in the photoionization models. The atmosphere models by Rauch (2003)
and three effective temperature values (as indicated) are considered. The AGN
SED is represented by a multi-component continuum with spectral index
$\alpha_{ox}=-1.0$ (see Eq. 5).
Figure 6: Left panels: oxygen abundance maps obtained through the calibrations
described in Sect. 3.1 and indicated in each plot. Right panels: radial
abundance distributions along the UGC 4805 disk. The line in each plot
represents the linear fitting (Eq. 7) to the estimations, whose coefficients
are listed in Table 2.
Figure 7: Same as Fig. 6, but for the indicated calibrations.
Figure 8: Upper
left panel: $\log$([O iii]$\lambda 5007$/H$\beta$) versus $\log$([N
ii]$\lambda 6584$/H$\alpha$) diagnostic diagram. Upper right panel: $\log$([O
iii]$\lambda 5007$/H$\beta$) versus $\log$([S ii]$\lambda\lambda
6717,6731$/H$\alpha$) diagnostic diagram. Gray lines represent the separating
criteria of the BPT diagrams, from Kewley et al. (2006) (Ke06), Kauffmann et
al. (2003) (Ka03), and Kewley et al. (2001) (Ke01). Lower left panel:
$\log$([O iii]$\lambda 5007$/[O ii] $\lambda 3727$) versus $\log$([N
ii]$\lambda 6584$/H$\alpha$) diagnostic diagram. Lower right panel: $\log$([O
iii]$\lambda 5007$/[O ii] $\lambda 3727$) versus $\log$([N ii]$\lambda
6584$/[O ii]$\lambda 3727$) diagnostic diagram. Coloured solid lines connect
AGN photoionization model results (see Sect. 3.3) with the same metallicity
$(Z/Z_{\odot})$ and dotted lines models with the same ionization parameter
($U$), as indicated. The blue point represents the observational line ratios
for the UGC 4805 nucleus (see Sect. 2). Figure 9: Same as Fig. 8 but
considering p-AGB photoionization models (see Sect. 3.3) assuming $T_{\rm
eff}$ = 50 kK. Figure 10: Same as Fig. 9 but considering p-AGB
photoioniazation models (see Sect. 3.3) assuming $T_{\rm eff}$ = 100 kK.
Figure 11: Same as Fig. 9 but considering p-AGB photoioniazation models (see
Sect. 3.3) assuming $T_{\rm eff}$ = 190 kK.
## 4 Results
### 4.1 O/H calibrations
To apply some of the strong-line calibrations developed for SFs described in
Sect. 3.1 to the UGC 4805 disk H ii regions, it is necessary to select which
branch of the (O/H)-$R_{23}$ relation is adequate. We consider the Kewley &
Ellison (2008) criteria to break the degeneracy, i.e., for objects with
$\log$([N ii]$\lambda$6584/[O ii]$\lambda$3727)$\>>\>-1.2$, the upper $R_{23}$
branch must be used. The O/H abundances were estimated only for objects
classified as pure star-forming regions, i.e., those with line ratios under
the Kauffmann et al. (2003) line in the diagnostic diagram in the left panel
of Fig. 3. Figs. 6 and 7 present the abundance maps (left panels) and the
O/H values along the disk (right panels). Note that all the strong-line
calibrations applied exhibited a linear decrease of O/H along the disk in
agreement with previous results (e.g., Pilyugin et al. 2004). We derive the
central oxygen abundance $\rm 12+\log(O/H)_{0}$ extrapolating to the centre of
the galaxy the linear fit:
${\rm 12+\log(O/H)=12+\log(O/H)_{0}}+(grad\>\times R),$ (7)
where $\rm 12+\log(O/H)$ is the oxygen abundance at a given galactocentric
distance $R$ (in units of arcsec) and $grad$ is the regression slope. The
parameters of the linear regressions for the distinct calibrations used are
listed in Table 2. The star-forming ring, clearly visible in the H$\alpha$ map
(see Fig. 1), does not present any oxygen abundance discrepancy in comparison
to its neighbour regions.
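A minimal sketch of this central intersect method, on synthetic data mimicking one of the panels (with intercept and slope chosen near the Pilyugin & Grebel 2016 values of Table 2), could look as follows.

```python
import numpy as np

# Sketch of the central intersect method of Eq. 7 on synthetic data: fit
# 12+log(O/H) versus galactocentric radius R and read off the intercept
# at R = 0. The mock points below are our own illustrative assumption.
rng = np.random.default_rng(1)
R = rng.uniform(3.0, 25.0, size=200)                      # arcsec
oh = 8.76 - 0.007 * R + rng.normal(0.0, 0.02, size=200)   # mock abundances

grad, oh0 = np.polyfit(R, oh, deg=1)                      # slope, intercept
print(f"12+log(O/H)_0 = {oh0:.2f}, grad = {grad:.4f} dex/arcsec")
```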
The calibration proposed by Edmunds & Pagel (1984) resulted in 12 +
$\log$(O/H) values ranging from $\rm\sim 8.2$ to $\sim 8.8$ along the galactic
disk, while the abundance value extrapolated to the nucleus ($R=0$ arcsec) is
$\rm 12+\log(O/H)_{0}=8.72$. Considering the Denicoló et al. (2002)
calibration, we derive 12 + $\log$ (O/H)0 = 8.81 for the nucleus. By using the
calibration by Pettini & Pagel (2004), we derive a nuclear abundance of 12 +
$\log$ (O/H)0 = 8.79. Estimates of oxygen abundances obtained using the
calibration by Dors & Copetti (2005) yield a flatter gradient than the
gradients derived with other calibrations, i.e., O/H values vary along the
galactic disk in the narrow range of 8.85 < 12 + $\log$(O/H) < 9.0. The
estimated nuclear abundance is 12 + $\log$ (O/H)0 = 8.98. Note that a large
part of our estimated values are close to the upper metallicity limit for this
calibration, where the metallicity is practically constant, i.e., the O/H
abundance is saturated with the variation of $R_{23}$. Finally, the
application of the calibration proposed by Pilyugin & Grebel (2016) indicates
abundances in the range of 8.5 < 12 + $\log$(O/H) < 8.7, with an inferred
central abundance of 12 + $\log$ (O/H)0 = 8.76, which is close to the
abundance obtained through the Pettini & Pagel (2004) calibration. In summary,
the extrapolation for the UGC 4805 LINER nucleus based on the calibrations
considered above indicates an over-solar oxygen abundance, with an average
value of $\rm 12+\log(O/H)_{0}=8.82$.
To estimate the O/H abundance by using the nuclear emission of UGC 4805, we
used the line intensity ratios listed in Table 1 and applied the Storchi-
Bergmann et al. (1998), Castro et al. (2017), and Carvalho et al. (2020)
calibrations. The estimated values of O/H abundance are listed in Table 2. As
suggested by Storchi-Bergmann et al. (1998), the final (O/H) abundance derived
from their methods should be the average of the values calculated from the two
equations (Sect. 3.2), which provides 12 + $\log$ (O/H)0 $=8.93\pm$ 0.04. The
Castro et al. (2017) and Carvalho et al. (2020) calibrations provide a value
of 12 + $\log$ (O/H)0 = $8.77\pm$ 0.01 and 12 + $\log$ (O/H)0 = $8.69\pm$
0.01, respectively. An average value of 12 + $\log$ (O/H)${}_{0}=8.81\pm 0.02$
was derived considering the three calibrations.
Table 2: Oxygen abundance results for the UGC 4805 nucleus. The first set of
values are the central Z/Z⊙ estimations and the coefficients of the linear
fitting (Eq. 7) to the O/H estimations along the UGC 4805 disk (see Figs. 6
and 7) considering different calibrations for H ii regions proposed by
different authors as indicated (see Sect. 3.1). The second set of values are
the metallicities, the oxygen abundances, and $\log U$ (only for one case)
obtained by using the AGN calibrations (see Sect. 3.2). The third set of
values are metallicities, O/H abundances and $\log U$ obtained from linear
interpolations of the photoionization model results shown in Fig. 8, 9, 10 and
11. The diagnostic diagrams and the model ionizing sources considered are
indicated.
Central intersect method – H ii region calibrations
---
 | Z/Z⊙ | 12 + $\log$(O/H)$_{0}$ | grad (dex/arcsec)
Edmunds & Pagel (1984) | 1.07 | 8.72 $\pm$ 0.003 | $-0.016\pm 0.001$
Denicoló et al. (2002) | 1.32 | 8.81 $\pm$ 0.002 | $-0.002\pm 0.0001$
Pettini & Pagel (2004) | 1.26 | 8.79 $\pm$ 0.003 | $-0.003\pm 0.0002$
Dors & Copetti (2005) | 1.95 | 8.98 $\pm$ 0.002 | $-0.004\pm 0.0002$
Pilyugin & Grebel (2016) | 1.17 | 8.76 $\pm$ 0.002 | $-0.007\pm 0.0001$
Average | 1.35 | 8.82 $\pm$ 0.003 |
AGN calibrations
 | Z/Z⊙ | 12 + $\log$(O/H) | $\log U$
Storchi-Bergmann et al. (1998) | 1.74 | $8.93\pm 0.04$ | —
Castro et al. (2017) | 1.20 | $8.77\pm 0.01$ | —
Carvalho et al. (2020) | 1.00 | $8.69\pm 0.01$ | $-3.09$
Average | 1.31 | $8.81\pm 0.02$ |
Diagnostic diagrams – Photoionization models
 | Z/Z⊙ | 12 + $\log$(O/H) | $\log U$
AGN models
$\log$([O iii]/H$\beta$) vs. $\log$([N ii]/H$\alpha$) | 0.95 | $8.67\pm 0.02$ | $-3.39$
$\log$([O iii]/[O ii]) vs. $\log$([N ii]/H$\alpha$) | 0.93 | $8.66\pm 0.02$ | $-3.22$
$\log$([O iii]/[O ii]) vs. $\log$([N ii]/[O ii]) | 1.29 | $8.80\pm 0.02$ | $-3.24$
Average | 1.06 | $8.71\pm 0.02$ |
p-AGB models ($T_{\rm eff}=100$ kK)
$\log$([O iii]/H$\beta$) vs. $\log$([N ii]/H$\alpha$) | 0.85 | $8.62\pm 0.03$ | $-3.50$
$\log$([O iii]/[O ii]) vs. $\log$([N ii]/H$\alpha$) | 0.98 | $8.68\pm 0.01$ | $-3.26$
$\log$([O iii]/[O ii]) vs. $\log$([N ii]/[O ii]) | 1.32 | $8.81\pm 0.02$ | $-3.29$
Average | 1.06 | $8.71\pm 0.02$ |
p-AGB models ($T_{\rm eff}=190$ kK)
$\log$([O iii]/H$\beta$) vs. $\log$([N ii]/H$\alpha$) | 0.72 | $8.55\pm 0.01$ | $-3.57$
$\log$([O iii]/[O ii]) vs. $\log$([N ii]/H$\alpha$) | 0.81 | $8.60\pm 0.01$ | $-3.26$
$\log$([O iii]/[O ii]) vs. $\log$([N ii]/[O ii]) | 1.48 | $8.86\pm 0.01$ | $-3.31$
Average | 1.00 | $8.69\pm 0.01$ |
### 4.2 Photoionization models
As mentioned previously (see Sect. 3.3), two different photoionization model
grids were built, one assuming an AGN as the ionizing source and another
assuming p-AGB stars with different $T_{\rm{eff}}$ values as the ionizing
source. In the upper panels of Fig. 8, the observational line ratios of the
UGC 4805 nucleus are plotted in the $\log$([O iii]$\lambda 5007$/H$\beta$)
versus $\log$([N ii]$\lambda 6584$/H$\alpha$) (left panel) and $\log$([O
iii]$\lambda 5007$/H$\beta$) versus $\log$([S ii]$\lambda\lambda
6717,6731$/H$\alpha$) (right panel) diagnostic diagrams and compared to those
predicted by AGN photoionization models. These plots also show the demarcation
lines proposed by Kauffmann et al. (2003) and Kewley et al. (2006). The
observational line intensity ratios are reproduced by the AGN models;
therefore, we can infer a metallicity and an ionization parameter for the UGC
4805 nucleus. Using linear interpolation between the models in the $\log$([O
iii]/H$\beta$) versus $\log$([N ii]/H$\alpha$) diagnostic diagram (Fig. 8
upper left panel), we derive a metallicity of (Z/Z⊙) $\sim$ 0.95 and $\log
U\sim-3.39$. For $\log$([O iii]/H$\beta$) versus $\log$([S ii]/H$\alpha$)
diagnostic diagram (Fig. 8 upper right panel), which is clearly double-valued
with the upper envelope at (Z/$\rm Z_{\odot})\sim$ 1, we adopt the models with
larger values to characterise our object, since the low-metallicity models do
not represent AGN-like objects, as seen in the left panel. Then, we
derived (Z/Z⊙) $\sim$ 2.57 and $\log U\sim-3.26$ using the $\log$([O
iii]/H$\beta$) versus $\log$([S ii]/H$\alpha$) diagnostic diagram. The second
metallicity value is about three times the former one.
Dors et al. (2011), by using a grid of photoionization models, showed that
there are relations between different line ratios, such as
$[\text{O\,{iii}}]\lambda 5007$/$[\text{O\,{ii}}]\lambda 3727$ versus
$[\text{N\,{ii}}]\lambda 6584$/H$\alpha$ and [O iii]$\lambda 5007$ / [O ii]
$\lambda 3727$ versus [N ii]$\lambda 6584$/ [O ii]$\lambda 3727$, that are
more sensitive to the ionization parameter, and the metallicities obtained
through them are closer to those obtained using the $T_{\rm e}$-method. For
this reason, we use these diagnostic diagrams also employed by Castro et al.
(2017) and Carvalho et al. (2020) to perform a more reliable analysis. The
lower panels of Fig. 8 present these observational line ratios for the UGC
4805 nucleus superimposed on those ratios predicted by our AGN photoionization
models. By using linear interpolation between the models we derive (Z/Z⊙)
$\sim$ 0.93 and $\log U\sim-3.22$ from the $\log$([O iii]/[O ii]) vs.
$\log$([N ii]/H$\alpha$) diagnostic diagram (lower left panel), and (Z/Z⊙)
$\sim$ 1.29 and $\log U\sim-3.24$ from the $\log$([O iii]/[O ii]) vs.
$\log$([N ii]/[O ii]) diagnostic diagram (lower right panel).
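The linear interpolation used in these diagrams amounts to evaluating the model grids at the observed pair of line ratios. A minimal Python sketch is given below; the grid values are illustrative placeholders rather than our actual photoionization model predictions.

```python
import numpy as np
from scipy.interpolate import griddata

# Each model grid point maps (log [O III]/[O II], log [N II]/Halpha) -> (Z, log U).
# Placeholder model grid, NOT our model predictions.
ratios = np.array([[-0.8, -0.6], [-0.8, -0.2], [-0.4, -0.6],
                   [-0.4, -0.2], [0.0, -0.6], [0.0, -0.2]])
z_grid = np.array([0.5, 1.0, 0.7, 1.3, 0.9, 1.6])          # Z/Zsun
logU_grid = np.array([-3.6, -3.4, -3.3, -3.2, -3.0, -2.9])

observed = np.array([[-0.55, -0.35]])                      # hypothetical nuclear ratios
z = griddata(ratios, z_grid, observed, method="linear")[0]
logU = griddata(ratios, logU_grid, observed, method="linear")[0]
print(f"Z/Zsun ~ {z:.2f}, log U ~ {logU:.2f}")
```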
The values of the ionization parameter found using the four diagnostic
diagrams (Fig. 8) are very similar and in agreement with the typical value for
LINER galaxies estimated by Ferland & Netzer (1983).
Figs. 9, 10, and 11 contain the same diagnostic diagrams exhibited in Fig. 8
for the photoionization model results considering p-AGB stars as ionizing
sources. In Fig. 9, the models with $T_{\rm eff}=50$ kK do not reproduce the
UGC 4805 nucleus line ratios. In the upper panels of this figure, the
parameter space characterized by the models is occupied only by H ii-like
objects. Therefore, it is impossible to derive any value of $Z$ or $U$ from
these models. For models with $T_{\rm eff}=100$ and 190 kK and considering the
$\log$([O iii]/H$\beta$) versus $\log$([N ii]/H$\alpha$) (upper left panels of
Figs. 10 and 11), we derive (Z/Z⊙) $\sim$ 0.85 and $\log U\sim-3.50$, and
(Z/Z⊙) $\sim$ 0.72 and $\log U\sim-3.57$, respectively. Taking into account
$T_{\rm eff}=100$ kK and $\log$([O iii]/H$\beta$) versus $\log$([S
ii]/H$\alpha$) diagnostic diagram (upper right panel), we found $\log
U\sim-3.44$ and two values for the metallicity, i.e., Z/$\rm Z_{\odot}\sim$
2.87 and Z/$\rm Z_{\odot}\sim$ 0.42. This happens because, as in the case of
AGN models, this relation is double-valued for the metallicity. Analysing the
results of the same diagnostic diagram for the p-AGB models with $T_{\rm
eff}=190$ kK, we do not observe a double-valued relation: models with
metallicities larger than 0.75 occupy almost the same region. We obtain
(Z/Z${}_{\odot})$ $\sim$ 0.36 and $\log U=-3.35$. These results could indicate
that the high metallicity model solution found for the models with T${}_{\rm
eff}=100$ kK [$(Z/\rm Z_{\odot})\sim 2.0$] is not correct.
The lower panels of Figs. 10 and 11 display the same diagnostic diagrams as in
the lower panels of Fig. 8, but containing photoionization model results
considering p-AGB stars as ionizing source. For models with $T_{\rm eff}=100$
kK (Fig. 10), we derive from the $\log$([O iii]/[O ii]) versus $\log$([N
ii]/H$\alpha$) diagnostic diagram Z/$\rm Z_{\odot}\sim$ 0.98 and $\log
U\sim-3.26$. From the $\log$([O iii]/[O ii]) versus $\log$([N ii]/[O ii])
diagram we calculate Z/$\rm Z_{\odot}\sim$ 1.32 and $\log U\sim-3.29$. Finally,
we see that the models with $T_{\rm eff}=190$ kK (Fig. 11) provide from the
$\log$([O iii]/[O ii]) versus $\log$([N ii]/H$\alpha$) diagram a metallicity
of Z/$\rm Z_{\odot}\sim$ 0.81 and $\log U\sim-3.26$, and from the $\log$([O
iii]/[O ii]) versus $\log$([N ii]/[O ii]) diagram Z/$\rm Z_{\odot}\sim$ 1.48
and $\log U\sim-3.31$.
The models yield double-valued or saturated results for the emission-line
diagnostic diagrams that include the [S ii] emission lines and show the most
discrepant results, including super-solar metallicity values
[$(Z/Z_{\odot})\sim$ 2.57] for the AGN models and sub-solar metallicities for
the p-AGB models with $T_{\rm eff}=100$ and 190 kK [$(Z/Z_{\odot})\sim$ 0.42
and 0.36, respectively]. Hence, we do not take into account the results derived
from the $\log$([O iii]/H$\beta$) versus $\log$([S ii]/H$\alpha$) diagnostic
diagrams. The adopted $(Z/Z_{\odot})$, 12 + log(O/H) and $\log U$ values
derived from Figs. 8, 10, and 11 are listed in Table 2.
The averaged values obtained from the extrapolation of the oxygen abundance
gradient from H ii region estimations and from AGN calibrations are
(Z/Z${}_{\odot})\sim$ 1.35 and (Z/Z${}_{\odot})\sim$ 1.31, respectively. In
both cases, the estimated abundance values are over-solar and are in
agreement, taking into account their errors. On the other hand, all the
photoionization models produce very similar average values close to the solar
one: (Z/Z${}_{\odot})$ $\sim$ 1.06, 1.06, 1.00 for AGN, p-AGB with $T_{\rm
eff}=100$ kK, and p-AGB with $T_{\rm eff}=190$ kK, respectively.
## 5 Discussion
A widely accepted practice is to estimate the oxygen abundance at the central
part of a galaxy by the central intersect abundance [$\rm 12+\log(O/H)_{0}$]
obtained from the radial abundance gradient (e.g., Vila-Costas & Edmunds 1992;
Zaritsky et al. 1994; van Zee et al. 1998). This methodology has predicted
solar or slightly over-solar metallicities for the central region of spiral
galaxies, i.e., 12 + $\log$(O/H) from $\sim 8.6$ to $\sim 9.0$ (e.g., Pilyugin
et al. 2004; Dors et al. 2020a), depending on the method considered to derive
the individual disk H ii-region abundances. Comparisons of these extrapolated
oxygen abundance measurements ($\rm 12+\log(O/H)_{0}$) with the ones obtained
through the use of other methods that directly involve the nuclear emission
have achieved good agreement. Storchi-Bergmann et al. (1998) found that the
O/H abundances derived for a sample of seven Seyfert 2 galaxies through their
calibrations are in consonance with those obtained by the central intersect
abundance. This agreement was also found by Dors et al. (2015) using a larger
sample of objects than the one considered by Storchi-Bergmann et al. (1998).
The oxygen abundance profile along the UGC 4805 disk presents a negative
gradient, as expected, since it is a spiral galaxy. The negative gradient is
explained naturally by models assuming the inside-out scenario of galaxy
formation (Portinari & Chiosi, 1999; MacArthur et al., 2004; Barden et al.,
2005). According to this scenario, galaxies begin to form in the inner regions
before the outer regions. This was confirmed by studies of the stellar
populations (e.g., Boissier & Prantzos 2000; Bell & Jong 2000; Pohlen &
Trujillo 2006) and chemical abundances of spiral galaxies (e.g., Sánchez et
al. 2014). As previously shown, considering the O/H gradient extrapolation,
AGN calibrations, and AGN and p-AGB photoionization models, we derived
average oxygen abundance values for the UGC 4805 nucleus in the range of
$1.00<(Z/Z_{\odot})<1.35$, i.e., ranging from solar to slightly over-
solar metallicities.
Figure 12: Comparison between the central intersect oxygen abundances derived for
the UGC 4805 nucleus from the radial abundance gradients [$\rm 12+\log(O/H)_{0}$]
and those derived through strong-line methods and AGN and p-AGB
models (colored points as indicated). The AGN-model point is the
average of the AGN models. Black points represent the estimations performed
by Dors et al. (2015) using the observational data by Ho et al. (1997). The solid
line represents equality between the estimations.
Figure 13: Metallicity-sensitive line ratios $R_{23}$, $N2O2$ and $N2$ versus the
ionization-parameter-sensitive line ratio [O iii]$\lambda$5007/[O ii]$\lambda$3727.
Black points represent 463 Seyfert 2 nuclei studied by Dors et al. (2020a) and
blue points represent 38 LINERs compiled by Ho et al. (1993), Eracleous &
Halpern (2001), Annibali et al. (2010), and Molina et al. (2018). The red
point represents the UGC 4805 nucleus.
In Fig. 12 the O/H average values estimated for the UGC 4805 nucleus using AGN
calibrations as well as AGN and p-AGB models are compared with the average
value derived through the central intersect method. The estimations for active
and star-forming nuclei from Dors et al. (2015) are also presented in Fig. 12.
This figure clearly illustrates that the averaged O/H value derived through
the central intersect method is in consonance with the ones derived through
the use of AGN calibrations and AGN and p-AGB models, as well as with the Dors
et al. (2015) estimations.
Annibali et al. (2010) compared intermediate-resolution optical spectra of a
sample of 49 nuclei classified as LINERs/composites with photoionization model
results assuming as ionization source accretion-rate AGN (represented by a
power law SED) using the Groves et al. (2004) models and the shock models
built by Allen et al. (2008). These authors also compared the observed and
predicted equivalent widths of the lines present on their spectra using models
with p-AGB SEDs computed by Binette et al. (1994) [see also Cid Fernandes et
al. 2009], finding that photoionization by p-AGB stars alone can explain only
$\approx 22$% of the observed LINER/composite sample. They also found that the
major fraction of their sample could be characterized by nuclear emission
consistent with excitation by low-accretion-rate AGNs and/or fast shocks.
Molina et al. (2018) compared observational optical and ultraviolet spectra of
three LINERs with model results assuming four different excitation mechanisms:
shocks, photoionization by an accreting black hole, and photoionization by
young or old hot stars. These authors concluded that the model which best
describes their data has a low-luminosity accretion-powered active nucleus
that photoionizes the gas within $\sim 20$ pc of the galaxy centre, as well as
shock excitation of the gas at larger distances. These authors also indicated
that LINERs could have more than one ionizing mechanism. In the case of the
UGC 4805 nucleus, the good agreement among all the different methods applied
to derive its metallicity does not allow discrimination of the nature of the
ionizing source.
Fig. 13 illustrates the $\log(R_{23})$, $N2O2$ and $N2$ metallicity indexes as
a function of the [O iii]$\lambda$5007/[O ii]$\lambda$3727 line ratio used as
an ionization parameter indicator for the UGC 4805 nucleus. This figure
compares our results to those of a sample of 463 confirmed Seyfert 2 nuclei
studied by Dors et al. (2020a) and obtained from the Sloan Digital Sky Survey
(York et al., 2000), as well as those of a sample of 38 LINERs obtained by Ho
et al. (1993), Eracleous & Halpern (2001), Annibali et al. (2010), and Molina
et al. (2018). Both populations, LINERs and Seyfert 2s, partially
overlap in all of these diagrams, although they display slightly different
trends, with LINERs showing lower ionization ($\log U<-3.2$) following Eq.
4. As can be seen in Fig. 13, the UGC 4805 nucleus positions in these diagrams
are compatible with both populations, although they seem to follow the LINER
sequence; therefore, the nucleus likely shares similar physical properties with LINERs.
According to Fig. 13, LINERs have intermediate and low [O iii]/[O ii] line
ratio intensities, with the high values
[$\log([\text{O\,{iii}}]/[\text{O\,{ii}}])\ga 0.0$] only observed in
Seyfert 2s. Since [O iii]/[O ii] has a strong dependence on $U$, the above
results indicate a tendency of LINERs to present lower $U$ values than the
ones in Seyfert 2s, as suggested by Ferland & Netzer (1983).
test of this scenario, Fig. 14 presents $\log U$ versus $Z/\rm Z_{\odot}$,
calculated by using the Carvalho et al. (2020) calibrations (Eqs. 3 and 4),
for the same sample as the one in Fig. 13. We can see that the UGC 4805 nucleus and
the LINERs occupy the region with lower $U$ values, and that the highest values of
this parameter are only observed in Seyfert 2s.
Finally, the geometry of the UGC 4805 nucleus can provide information about the
ionization source. In view of this, we compare the ionization parameter
derived from the AGN and p-AGB photoionization models with the one estimated
from the observational data. The average value from the models is $\langle\log
U\rangle\sim-3.30$. To calculate $U$ from the observational data, we first obtained
$Q(\rm H)$ from the expression of Hekatelyne et al. (2018)
$\left(\frac{Q(\rm H)}{\rm s^{-1}}\right)=1.03\times
10^{12}\left(\frac{L_{\rm H\alpha}}{\rm erg\,s^{-1}}\right)$ (8)
and employing the luminosity value listed in Table 1. This luminosity value is
obtained from the integrated flux of the UGC 4805 nucleus. We found $\log Q({\rm
H)}=50.87$. The value $N_{\rm e}=100\,\rm cm^{-3}$ is obtained from the [S
ii]$\lambda$6716/$\lambda 6731$ line ratio intensity, also listed in Table 1.
Applying the $Q(\rm H)$ and $N_{\rm e}$ values above to Eq. 6, the innermost
radius $R_{0}$ required to reconcile the theoretical and observational $U$ values
is about 50 pc, of the order of the radius assumed by Bennert et al. (2006). As
can be noted in Fig. 3, the LINER emission extends out to $\sim 2.5$ kpc,
i.e., a high excitation level (or $U$) is maintained from $\sim 50$ pc to kpc
scales. Since $U\propto R^{-2}$, the ionization source is probably spread
along $R$. Thus, this result indicates that p-AGB stars are the preferred
ionization source rather than an AGN. This assumption is supported by the result
obtained previously from the WHAN diagram (Cid Fernandes et al., 2011).
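As a rough numerical check of this argument, the sketch below inverts the standard definition of the ionization parameter, $U=Q({\rm H})/(4\pi R^{2}N_{\rm e}c)$, which we assume here to correspond to Eq. 6 (not reproduced in this section), using the $Q(\rm H)$, $N_{\rm e}$, and $\langle\log U\rangle$ values quoted above; it recovers a radius of a few tens of parsecs, of the order of the $\sim 50$ pc quoted in the text.

```python
import numpy as np

# Invert U = Q(H) / (4 pi R^2 N_e c) for R, assuming this matches Eq. 6.
c = 2.998e10            # speed of light (cm/s)
pc = 3.086e18           # parsec (cm)
Q_H = 10**50.87         # ionizing photon rate from Eq. (8) (s^-1)
N_e = 100.0             # electron density from the [S II] doublet (cm^-3)
U = 10**-3.30           # average model ionization parameter

R = np.sqrt(Q_H / (4.0 * np.pi * N_e * c * U))
print(f"R ~ {R / pc:.0f} pc")   # of order tens of pc, close to the ~50 pc quoted
```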
Figure 14: As Fig. 13, but for the logarithm of the ionization parameter ($\log U$)
versus the metallicity ($Z/\rm Z_{\odot}$) calculated by using the Carvalho et
al. (2020) calibrations (Eqs. 3 and 4).
## 6 Conclusion
We used optical emission-line fluxes taken from the SDSS-IV MaNGA survey to
determine the oxygen abundance (metallicity) of the LINER nucleus of the UGC
4805 galaxy. The oxygen abundance was derived through the extrapolation of the
radial abundance gradient to the central part of the disk, through strong-
line calibrations for AGNs, and through photoionization model grids assuming as
ionizing sources either gas accretion onto a black hole (representing an AGN) or
p-AGB stars. We found that all the O/H abundance estimations agree with each
other. The results from these methods indicate that the UGC 4805 nucleus has
an oxygen abundance in the range of $1.0\la(Z/Z_{\odot})\la 1.35$, i.e.,
solar or slightly over-solar metallicity.
We found that the UGC 4805 nucleus and other LINERs present metallicity-
and ionization-parameter-sensitive emission-line ratios similar to those
observed in confirmed Seyfert 2 nuclei, although exhibiting a slightly
different trend. LINERs present low ionization parameter values
($\log U\la-3.2$), even though some Seyfert 2 nuclei also present low values
of the ionization parameter. Although both AGN and p-AGB models (with $T_{\rm eff}$=
100 and 190 kK) are able to reproduce the observational data, the results from
the WHAN diagram combined with the fact that the high excitation level of the
gas has to be maintained at kpc scales, suggest that the main ionizing source
of the UGC 4805 nucleus probably has a stellar origin rather than an AGN.
## Acknowledgements
ACK thanks CNPq. CBO is grateful to FAPESP for the support under grant
2019/11934-0, and to the CAPES. IAZ acknowledges support by the grant for
young scientist’s research laboratories of the National Academy of Sciences of
Ukraine. AHJ thanks CONICYT, Programa de Astronomía, Fondo ALMA-CONICYT
2017, Código de proyecto 31170038.
## 7 Data Availability
The data underlying this article will be shared on reasonable request to the
corresponding author.
## References
* Allen et al. (2008) Allen M. G., Groves B. A., Dopita M. A., Sutherland R. S., Kewley L. J., 2008, ApJS, 178, 20
* Alloin et al. (1992) Alloin D., Bica E., Bonatto C., Prugniel P., 1992, A&A, 266, 117
* Annibali et al. (2010) Annibali F., Bressan A., Rampazzo R., Zeilinger W. W., Vega O., Panuzzo P., 2010, A&A, 519, A40
* Asari et al. (2007) Asari N. V., Cid Fernandes R., Stasińska G., Torres-Papaqui J. P., Mateus A., Sodré L., Schoenell W., Gomes J. M., 2007, MNRAS, 381, 263
* Asplund et al. (2009) Asplund M., Grevesse N., Sauval A. J., Scott P., 2009, ARA&A, 47, 481
* Baldwin et al. (1981) Baldwin J. A., Phillips M. M., Terlevich R., 1981, PASP, 93, 5
* Barden et al. (2005) Barden M., et al., 2005, ApJ, 635, 959
* Bell & Jong (2000) Bell E. F., Jong R. S., 2000, MNRAS, 312, 497
* Bennert et al. (2006) Bennert N., Jungwiert B., Komossa S., Haas M., Chini R., 2006, A&A, 456, 953
* Binette et al. (1994) Binette L., Magris C. G., Stasińska G., Bruzual A. G., 1994, A&A, 292, 13
* Blanton et al. (2017) Blanton M. R., et al., 2017, AJ, 154, 28
* Boissier & Prantzos (2000) Boissier S., Prantzos N., 2000, in Franco J., Terlevich L., López-Cruz O., Aretxaga I., eds, Astronomical Society of the Pacific Conference Series Vol. 215, Cosmic Evolution and Galaxy Formation: Structure, Interactions, and Feedback. p. 53
* Bosch et al. (2001) Bosch G., Selman F., Melnick J., Terlevich R., 2001, A&A, 380, 137
* Bremer et al. (2013) Bremer M., Scharwächter J., Eckart A., Valencia-S. M., Zuther J., Combes F., Garcia-Burillo S., Fischer S., 2013, A&A, 558, A34
* Bruzual & Charlot (2003) Bruzual G., Charlot S., 2003, MNRAS, 344, 1000
* Bundy et al. (2015) Bundy K., et al., 2015, ApJ, 798, 7
* Cardelli et al. (1989) Cardelli J. A., Clayton G. C., Mathis J. S., 1989, ApJ, 345, 245
* Carvalho et al. (2020) Carvalho S. P., et al., 2020, MNRAS, 492, 5675
* Castro et al. (2017) Castro C. S., Dors O. L., Cardaci M. V., Hägele G. F., 2017, MNRAS, 467, 1507
* Cid Fernandes et al. (2005) Cid Fernandes R., Mateus A., Sodré L., Stasińska G., Gomes J. M., 2005, MNRAS, 358, 363
* Cid Fernandes et al. (2009) Cid Fernandes R., Schlickmann M., Stasinska G., Asari N. V., Gomes J. M., Schoenell W., Mateus A., Sodré L. J., 2009, The Starburst-AGN Disconnection: LINERs as Retired Galaxies. p. 122
* Cid Fernandes et al. (2011) Cid Fernandes R., Stasińska G., Mateus A., Vale Asari N., 2011, MNRAS, 413, 1687
* Congiu et al. (2017) Congiu E., et al., 2017, MNRAS, 471, 562
* Contini (2017) Contini M., 2017, MNRAS, 469, 3125
* Copetti et al. (2000) Copetti M. V. F., Mallmann J. A. H., Schmidt A. A., Castañeda H. O., 2000, A&A, 357, 621
* Denicoló et al. (2002) Denicoló G., Terlevich R., Terlevich E., 2002, MNRAS, 330, 69
* Díaz & Pérez-Montero (2000) Díaz A. I., Pérez-Montero E., 2000, MNRAS, 312, 130
* Díaz et al. (2007) Díaz Á. I., Terlevich E., Castellanos M., Hägele G. F., 2007, MNRAS, 382, 251
* Dors & Copetti (2005) Dors O. L., Copetti M. V. F., 2005, A&A, 437, 837
* Dors et al. (2011) Dors O. L. J., Krabbe A., Hägele G. F., Pérez-Montero E., 2011, MNRAS, 415, 3616
* Dors et al. (2014) Dors O. L., Cardaci M. V., Hägele G. F., Krabbe Â. C., 2014, MNRAS, 443, 1291
* Dors et al. (2015) Dors O. L., Cardaci M. V., Hägele G. F., Rodrigues I., Grebel E. K., Pilyugin L. S., Freitas-Lemes P., Krabbe A. C., 2015, MNRAS, 453, 4102
* Dors et al. (2017a) Dors O. L., Hägele G. F., Cardaci M. V., Krabbe A. C., 2017a, MNRAS, 466, 726
* Dors et al. (2017b) Dors O. L., Arellano-Córdova K. Z., Cardaci M. V., Hägele G. F., 2017b, MNRAS, 468, L113
* Dors et al. (2019) Dors O. L., Monteiro A. F., Cardaci M. V., Hägele G. F., Krabbe A. C., 2019, MNRAS, 486, 5853
* Dors et al. (2020a) Dors O. L., et al., 2020a, MNRAS, 492, 468
* Dors et al. (2020b) Dors O. L., Maiolino R., Cardaci M. V., Hägele G. F., Krabbe A. C., Pérez-Montero E., Armah M., 2020b, MNRAS, 496, 3209
* Dottori & Bica (1981) Dottori H. A., Bica E. L. D., 1981, A&A, 102, 245
* Dufour et al. (1980) Dufour R. J., Talbot R. J. J., Jensen E. B., Shields G. A., 1980, ApJ, 236, 119
* Edmunds & Pagel (1984) Edmunds M. G., Pagel B. E. J., 1984, MNRAS, 211, 507
* Eracleous & Halpern (2001) Eracleous M., Halpern J. P., 2001, ApJ, 554, 240
* Eracleous et al. (2010) Eracleous M., Hwang J. A., Flohic H. M. L. G., 2010, ApJS, 187, 135
* Ercolano et al. (2009) Ercolano B., Bastian N., Stasińska G., 2009, Ap&SS, 324, 199
* Feltre et al. (2016) Feltre A., Charlot S., Gutkin J., 2016, MNRAS, 456, 3354
* Ferland (1996) Ferland G. J., 1996, Hazy, A Brief Introduction to Cloudy 90
* Ferland & Netzer (1983) Ferland G. J., Netzer H., 1983, ApJ, 264, 105
* Ferland et al. (2017) Ferland G. J., et al., 2017, Rev. Mex. Astron. Astrofis., 53, 385
* Florido et al. (2012) Florido E., Pérez I., Zurita A., Sánchez-Blázquez P., 2012, A&A, 543, A150
* Groves et al. (2004) Groves B. A., Dopita M. A., Sutherland R. S., 2004, ApJS, 153, 75
* Hägele et al. (2006) Hägele G. F., Pérez-Montero E., Díaz Á. I., Terlevich E., Terlevich R., 2006, MNRAS, 372, 293
* Hägele et al. (2008) Hägele G. F., Díaz Á. I., Terlevich E., Terlevich R., Pérez-Montero E., Cardaci M. V., 2008, MNRAS, 383, 209
* Heckman (1980) Heckman T. M., 1980, A&A, 87, 152
* Hekatelyne et al. (2018) Hekatelyne C., et al., 2018, MNRAS, 479, 3966
* Ho (1999) Ho L. C., 1999, ApJ, 516, 672
* Ho et al. (1993) Ho L. C., Filippenko A. V., Sargent W. L. W., 1993, ApJ, 417, 63
* Ho et al. (1997) Ho L. C., Filippenko A. V., Sargent W. L. W., 1997, ApJS, 112, 315
* Hummer & Storey (1987) Hummer D. G., Storey P. J., 1987, MNRAS, 224, 801
* Izotov & Thuan (2008) Izotov Y. I., Thuan T. X., 2008, ApJ, 687, 133
* Jamet & Morisset (2008) Jamet L., Morisset C., 2008, A&A, 482, 209
* Jensen et al. (1976) Jensen E. B., Strom K. M., Strom S. E., 1976, ApJ, 209, 748
* Kauffmann et al. (2003) Kauffmann G., et al., 2003, MNRAS, 346, 1055
* Kennicutt et al. (2003) Kennicutt Robert C. J., Bresolin F., Garnett D. R., 2003, ApJ, 591, 801
* Kewley & Dopita (2002) Kewley L. J., Dopita M. A., 2002, ApJS, 142, 35
* Kewley & Ellison (2008) Kewley L., Ellison S., 2008, ApJ, 681, 1183
* Kewley et al. (2001) Kewley L. J., Dopita M. A., Sutherland R. S., Heisler C. A., Trevena J., 2001, ApJ, 556, 121
* Kewley et al. (2006) Kewley L. J., Groves B., Kauffmann G., Heckman T., 2006, MNRAS, 372, 961
* Kewley et al. (2019) Kewley L. J., Nicholls D. C., Sutherland R. S., 2019, ARA&A, 57, 511
* Kobulnicky et al. (1999) Kobulnicky H. A., Kennicutt Robert C. J., Pizagno J. L., 1999, ApJ, 514, 544
* Korista et al. (1997) Korista K., Baldwin J., Ferland G., Verner D., 1997, ApJS, 108, 401
* Law et al. (2015) Law D. R., et al., 2015, AJ, 150, 19
* Law et al. (2016) Law D. R., et al., 2016, AJ, 152, 83
* Lopez-Sanchez & Esteban (2010a) Lopez-Sanchez A. R., Esteban C., 2010a, arXiv e-prints,
* López-Sánchez & Esteban (2010b) López-Sánchez Á. R., Esteban C., 2010b, A&A, 517, A85
* MacArthur et al. (2004) MacArthur L. A., Courteau S., Bell E., Holtzman J. A., 2004, ApJS, 152, 175
* Maiolino & Mannucci (2019) Maiolino R., Mannucci F., 2019, A&ARv, 27, 3
* Maoz (2007) Maoz D., 2007, MNRAS, 377, 1696
* Mateus et al. (2006) Mateus A., Sodré L., Cid Fernand es R., Stasińska G., Schoenell W., Gomes J. M., 2006, MNRAS, 370, 721
* Mayya & Prabhu (1996) Mayya Y. D., Prabhu T. P., 1996, AJ, 111, 1252
* McGaugh (1991) McGaugh S. S., 1991, ApJ, 380, 140
* Mingozzi et al. (2020) Mingozzi M., et al., 2020, arXiv e-prints, p. arXiv:2002.05744
* Molina et al. (2018) Molina M., Eracleous M., Barth A. J., Maoz D., Runnoe J. C., Ho L. C., Shields J. C., Walsh J. L., 2018, ApJ, 864, 90
* Mollá & Díaz (2005) Mollá M., Díaz A. I., 2005, MNRAS, 358, 521
* Monreal-Ibero et al. (2011) Monreal-Ibero A., Relaño M., Kehrig C., Pérez-Montero E., Vílchez J. M., Kelz A., Roth M. M., Streicher O., 2011, MNRAS, 413, 2242
* Netzer (2013) Netzer H., 2013, The Physics and Evolution of Active Galactic Nuclei. Cambridge University Press, doi:10.1017/CBO9781139109291
* Pagel et al. (1979) Pagel B. E. J., Edmunds M. G., Blackwell D. E., Chun M. S., Smith G., 1979, MNRAS, 189, 95
* Pagel et al. (1980) Pagel B. E. J., Edmunds M. G., Smith G., 1980, MNRAS, 193, 219
* Peimbert et al. (2017) Peimbert M., Peimbert A., Delgado-Inglada G., 2017, PASP, 129, 082001
* Pérez-Montero (2014) Pérez-Montero E., 2014, MNRAS, 441, 2663
* Pérez-Montero (2017) Pérez-Montero E., 2017, PASP, 129, 043001
* Pérez-Montero et al. (2019) Pérez-Montero E., Dors O. L., Vílchez J. M., García-Benito R., Cardaci M. V., Hägele G. F., 2019, MNRAS, 489, 2652
* Pettini & Pagel (2004) Pettini M., Pagel B. E. J., 2004, MNRAS, 348, L59
* Pilyugin (2003) Pilyugin L. S., 2003, A&A, 399, 1003
* Pilyugin & Grebel (2016) Pilyugin L. S., Grebel E. K., 2016, MNRAS, 457, 3678
* Pilyugin et al. (2004) Pilyugin L. S., Vílchez J. M., Contini T., 2004, A&A, 425, 849
* Pilyugin et al. (2007) Pilyugin L. S., Thuan T. X., Vílchez J. M., 2007, MNRAS, 376, 353
* Pilyugin et al. (2012) Pilyugin L. S., Grebel E. K., Mattsson L., 2012, MNRAS, 424, 2316
* Pohlen & Trujillo (2006) Pohlen M., Trujillo I., 2006, A&A, 454, 759
* Portinari & Chiosi (1999) Portinari L., Chiosi C., 1999, A&A, 350, 827
* Prieto et al. (2001) Prieto C. A., Lambert D. L., Asplund M., 2001, ApJ, 556, L63
* Rauch (2003) Rauch T., 2003, A&A, 403, 709
* Revalski et al. (2018) Revalski M., et al., 2018, ApJ, 867, 88
* Sánchez et al. (2014) Sánchez S. F., et al., 2014, A&A, 563, A49
* Shields (1992) Shields J. C., 1992, ApJ, 399, L27
* Singh et al. (2013) Singh et al., 2013, A&A, 558, A43
* Storchi-Bergmann et al. (1994) Storchi-Bergmann T., Calzetti D., Kinney A. L., 1994, ApJ, 429, 572
* Storchi-Bergmann et al. (1998) Storchi-Bergmann T., Schmitt H. R., Calzetti D., Kinney A. L., 1998, AJ, 115, 909
* Tananbaum et al. (1979) Tananbaum H., et al., 1979, ApJ, 234, L9
* Taniguchi et al. (2000) Taniguchi Y., Shioya Y., Murayama T., 2000, AJ, 120, 1265
* Terlevich & Melnick (1985) Terlevich R., Melnick J., 1985, MNRAS, 213, 841
* Viegas (2002) Viegas S. M., 2002, in Henney W. J., Franco J., Martos M., eds, Revista Mexicana de Astronomia y Astrofisica Conference Series Vol. 12, Revista Mexicana de Astronomia y Astrofisica Conference Series. pp 219–224 (arXiv:astro-ph/0102392)
* Vila-Costas & Edmunds (1992) Vila-Costas M. B., Edmunds M. G., 1992, MNRAS, 259, 121
* Winkler (2014) Winkler H., 2014, arXiv e-prints, p. arXiv:1409.2966
* Wright (2006) Wright E. L., 2006, PASP, 118, 1711
* Yan & Blanton (2012) Yan R., Blanton M. R., 2012, ApJ, 747, 61
* York et al. (2000) York D. G., et al., 2000, AJ, 120, 1579
* Younes et al. (2012) Younes G., Porquet D., Sabra B., Reeves J. N., Grosso N., 2012, A&A, 539, A104
* Zaritsky et al. (1994) Zaritsky D., Kennicutt Jr. R. C., Huchra J. P., 1994, ApJ, 420, 87
* Zhang et al. (2013) Zhang Z. T., Liang Y. C., Hammer F., 2013, MNRAS, 430, 2605
* Zinchenko et al. (2016) Zinchenko I. A., Pilyugin L. S., Grebel E. K., Sánchez S. F., Vílchez J. M., 2016, MNRAS, 462, 2715
* Zinchenko et al. (2019) Zinchenko I. A., Dors O. L., Hägele G. F., Cardaci M. V., Krabbe A. C., 2019, MNRAS, 483, 1901
* van Zee et al. (1998) van Zee L., Salzer J. J., Haynes M. P., O’Donoghue A. A., Balonek T. J., 1998, AJ, 116, 2805
# Constant Velocity Physical Warp Drive Solution
Jared Fuchs$^{\dagger,1,2}$, Christopher Helmerich$^{1,2}$, Alexey Bobrick$^{2,3}$, Luke Sellers$^{2,4}$, Brandon Melcher$^{2}$, & Gianni Martire$^{2}$
$^{1}$The University of Alabama in Huntsville, 301 Sparkman Drive, Huntsville, Alabama, 35899, U.S.
$^{2}$Advanced Propulsion Laboratory at Applied Physics, 477 Madison Avenue, New York, 10022, U.S.
$^{3}$Technion - Israel Institute of Technology, Physics Department, Haifa 32000, Israel
$^{4}$UCLA Department of Physics & Astronomy, 475 Portola Plaza, Los Angeles, CA 90095, U.S.
$^{\dagger}$<EMAIL_ADDRESS>& <EMAIL_ADDRESS>
###### Abstract
Warp drives are exotic solutions of general relativity that offer novel means
of transportation. In this study, we present a solution for a constant-
velocity subluminal warp drive that satisfies all of the energy conditions.
The solution involves combining a stable matter shell with a shift vector
distribution that closely matches well-known warp drive solutions such as the
Alcubierre metric. We generate the spacetime metric numerically, evaluate the
energy conditions, and confirm that the shift vector distribution cannot be
reduced to a coordinate transformation. This study demonstrates that classic
warp drive spacetimes can be made to satisfy the energy conditions by adding a
regular matter shell with a positive ADM mass.
August 2023
## 1 Introduction
Warp drive spacetimes, first introduced by Alcubierre [1] and later by others
[19, 14], offer several unique transportation properties for timelike
observers. These properties include the possibility of accelerating through
geodesic motion, moving superluminally, or being in regions of modified
spacetime, all relative to external inertially moving timelike observers. All
of these classic warp drive spacetimes violate some, if not all, of the energy
conditions [10, 15], and therefore their construction has been largely
considered to be unfeasible. However, recent papers [13, 4, 9] have suggested
that ‘physical warp drives’ that satisfy some or all of the energy conditions
could possibly be constructed, reigniting interest in the subject.
Due to the complexity of the Einstein equations, progress toward finding
physical warp drive solutions through purely analytical means has been slow.
In particular, factors that increase the complexity of these equations
considerably are non-unit lapse functions and non-flat spatial metrics, both
of which have been absent from most previous warp drive solutions and both of
which have been argued to be necessary for satisfying the energy conditions
[4]. To address this challenge, the computational toolkit, Warp Factory [11],
was developed to provide numerical methods for exploring warp drive spacetimes
more comprehensively. Using this new-found flexibility afforded by Warp
Factory, we present here a new subluminal constant-velocity warp drive
solution with a non-unit lapse and non-flat spatial metric that satisfies all
of the energy conditions.
### 1.1 Key Features of a Warp Drive Spacetime
A detailed discussion of the properties of warp drive spacetimes may be found
in [11]; here we summarize the key features for clarity. Warp drive
spacetimes may be viewed as modifications of a background globally hyperbolic
and asymptotically-flat spacetime, which contains a generally non-geodesic
trajectory $\mathcal{C}_{\rm background}$ connecting two arbitrary points A
and B. A warp drive spacetime modifies this background spacetime such that the
following is true:
1. 1.
Geodesic transport: The generally non-geodesic trajectory $\mathcal{C}_{\rm
background}$ in the background spacetime maps to a geodesic trajectory
$\mathcal{C}_{\rm background}\rightarrow\mathcal{C}_{\rm warp}$ in the new
warp drive spacetime. In other words, warp drive spacetimes enable passengers
to travel between points A and B along a geodesic trajectory $\mathcal{C}_{\rm
warp}$. This means the passengers inside the warp drive do not experience
local acceleration while being transported (in the case that a local
acceleration is desired, for example 1g, the statement becomes ‘the local
acceleration of passengers should be limited’). For a non-trivial solution,
the original trajectory $\mathcal{C}_{\rm background}$ should not be a
geodesic, i.e. the passengers should not ‘already be going’ from point A to
point B. An example of a non-trivial solution is a passenger in a static
background spacetime that is initially at rest at point A (relative to the
local frame of rest), is transported to point B, and is then, again, at rest
relative to point B. In addition, the warp modification should minimally
affect the proper distances between A and B measured along the path, as
defined originally in the background spacetime.
2. 2.
Empty passenger region: The warp drive spacetime has a compact vacuum
($T^{\mu\nu}=0$, ignoring the mass of the passengers) passenger region that
is free from tidal forces and encloses the passenger trajectory
$\mathcal{C}_{\rm warp}$.
3. 3.
A spatially bounded, comoving bubble: The warp drive spacetime has a compact
non-vacuum region ($T^{\mu\nu}\neq 0$) that encloses the passenger trajectory
$\mathcal{C}_{\rm warp}$ on every spacelike slice. This means the stress-
energy distribution required for the geodesic transport does not extend to
infinity333Perhaps, some energy could be radiated to infinity but that energy
should be causally connected to the bubble. and moves along with the
transported observers. The requirement of moving with passengers distinguishes
warp drive solutions from Krasnikov tubes [8], for example.
### 1.2 Designing Warp Drive Spacetimes
The transportation element of warp drives is about designing timelike curves
for passengers to travel between points A and B in spacetime. In this paper,
we will go about developing a warp solution in the following steps:
1. 1.
Start with a Minkowski background.
2. 2.
Define two points A and B in spacetime.
3. 3.
Define the starting and end conditions for the passengers that will travel
between A and B. For example, the passengers might begin at rest at point A
and then end at rest at point B. Such starting conditions can be defined
w.r.t. an outside observer situated in Minkowski space.
4. 4.
Define a curve between points A and B that the warp drive and passengers will
travel along.
5. 5.
Construct a metric solution that will move passengers within the boundary
conditions of (iii) along geodesics matched to the curve in (iv). There are
multiple possible metrics that enable these specific geodesics, but only one
is needed.
Generally speaking, there are many ways that this can be accomplished. In a
3+1 formalism, the metric is given by:
$ds^{2}=-\alpha^{2}dt^{2}+\gamma_{ij}(dx^{i}+\beta^{i}dt)(dx^{j}+\beta^{j}dt),$
(1)
where $\alpha$ is the lapse function, $\beta^{i}$ is the shift vector and
$\gamma_{ij}$ is the spatial metric. We can consider the general geodesic
equations in a 3+1 formalism parameterized by the coordinate time [3] as:
$\begin{split}&\frac{dx^{i}}{dt}=\gamma^{ij}\frac{u_{j}}{u^{0}}-\beta^{i}\\\
&\frac{du_{i}}{dt}=-\alpha
u^{0}\partial_{i}\alpha+u_{k}\partial_{i}\beta^{k}-\frac{u_{j}u_{k}}{2u^{0}}\partial_{i}\gamma^{jk}\\\
&u^{0}=\left(\gamma^{jk}u_{j}u_{k}+\epsilon\right)^{1/2}/\alpha\end{split}$
(2)
where $u^{\mu}=dx^{\mu}/d\tau$ and $\epsilon=1$ or $0$ for timelike or null
geodesics. The quantities $dx^{i}/dt$ and $du_{i}/dt$ are coordinate dependent,
but within a fully defined spacetime and coordinate system, these values take
on a specific meaning. To illustrate this, let us consider the Alcubierre
metric [1] given by:
$ds^{2}=-dt^{2}+\left(dx-v_{s}f\left(r_{s}\right)dt\right)^{2}+dy^{2}+dz^{2}$
(3)
where $r_{s}=\sqrt{(x-x_{s}(t))^{2}+y^{2}+z^{2}}$. $x_{s}(t)$ and $v_{s}(t)$
are the center position and speed of the warp drive, respectively, and
$f(r_{s})$ is a shape function that defines the warp bubble extent from the
center. Alcubierre’s solution is a shift vector addition to an otherwise
Minkowski spacetime. Thus, it is clear that all the characteristic warp drive
features of this spacetime are sourced purely by the shift vector. We can
understand this from a different perspective, using the steps and geodesic
equations (2) from above. Inside the passenger volume region (defined as
$r_{s}<R$) the spacetime is constrained to be flat, meaning that
$\partial_{i}g_{\mu\nu}=0$ and hence $du_{i}/dt=0$. The passenger geodesic
motion in the coordinate system within that region is:
$\frac{dx^{i}}{dt}=v_{s}=\delta^{ij}\frac{u_{j}}{u^{0}}-\beta^{i}$ (4)
As seen from the equation above, the geodesic transport of passengers depends
on their $u_{i}$ at the beginning and end of the transport. Alcubierre’s
solution allows for acceleration using a time-varying $v_{s}(t)$. If we
consider the starting condition where observers are initially at rest
($u^{i}=u_{i}=0$) at some point and then imagine a warp bubble forming around
the passengers with the same constraints ($du_{i}/dt=0$ for $r_{s}<R$), then
$u_{i}(t)=0$ at any time $t$ and the transportation of passengers within the
bubble in this scenario is given by:
$\frac{dx^{i}}{dt}=v_{s}(t)=-\beta^{i}(t)$ (5)
In this context, we can consider Alcubierre’s warp drive solution as capable
of transporting observers initially at rest with respect to an external
stationary observer and up to a relative velocity of $v_{s}$, all accomplished
using a localized shift vector in the spacetime with a flat interior region.
This example of warp travel is illustrated in Figure 1.
Figure 1: Example of an Alcubierre warp trajectory with three phases of
flight:
(i) Passenger enters the warp bubble at rest w.r.t. the reference observer
at point A. The passenger will not have any coordinate velocity compared to
the reference observer ($dx^{i}/dt=u_{i}=0$)
(ii) Warp bubble begins its travel by accelerating up to a constant velocity.
The passenger inside is geodesically transported at the coordinate
velocity of the warp drive ($dx^{i}/dt=v_{\rm warp}$), while still having
$u_{i}=0$ throughout.
(iii) Warp bubble decelerates to a stop at point B, at rest w.r.t. the
reference observer and the passenger exits the drive.
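To make Eqs. (4) and (5) concrete, the sketch below (our illustration, with arbitrary parameter choices) integrates the coordinate motion of a passenger who starts at rest with $u_{i}=0$ inside an Alcubierre bubble moving at constant $v_{s}$: the passenger is carried at $dx/dt=v_{s}f(r_{s})\approx v_{s}$ and keeps a fixed offset from the bubble centre.

```python
import numpy as np

# Passenger transport per Eq. (5): dx/dt = -beta = v_s f(r_s), with u_i = 0.
v_s, R, sigma = 0.5, 1.0, 8.0        # bubble speed (c = 1), radius, wall sharpness

def f(r_s):
    # Alcubierre's shape function: ~1 inside the bubble, ~0 far outside
    return (np.tanh(sigma * (r_s + R)) - np.tanh(sigma * (r_s - R))) / (2 * np.tanh(sigma * R))

dt, n_steps = 1e-3, 20000
x_ship, x_pass = 0.0, 0.2            # bubble centre and passenger (inside, at rest)

for _ in range(n_steps):
    r_s = abs(x_pass - x_ship)
    x_pass += v_s * f(r_s) * dt      # Eq. (5) for this passenger
    x_ship += v_s * dt               # bubble centre x_s(t) at constant v_s

print(f"bubble centre: {x_ship:.3f}, passenger: {x_pass:.3f}")  # same offset as at t=0
```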
In this paper, we will focus on analyzing the constant velocity phase of warp
flight. This constant velocity focus presents some challenges, as without
acceleration, the boundary conditions of passengers entering the drive are
less interesting in the simple comoving case. For example, if the warp drive
is always at a constant velocity with its passengers having always existed
comoving inside the warp drive, then no shift vector is needed since the
passengers $u_{i}/u^{0}=v_{s}$ by definition. Bobrick and Martire discuss this
example of a constant velocity case as the basis of a physical warp solution
in [4], which results in a regular matter shell. In the Alcubierre example, we
can gain an important context for its constant velocity phase when connected
to a prior accelerating phase where we define how passengers enter the drive.
If the drive undergoes acceleration with a condition of $du_{i}/dt=0$, then
having a shift vector during the constant velocity phase is required as the
passengers prior to this point had $u_{i}/u^{0}<v_{s}$ and only a shift vector
can provide the required $dx^{i}/dt$ under the curvature constraints applied
in the passenger volume.
Almost all of the warp solutions proposed in the literature are essentially
variations of Alcubierre’s solution and rely on the shift vector to provide
the transportation of passengers in the same sense as discussed here. Van Den
Broeck’s [19] addition of a spatial term is only used to make more efficient
use of energy density by volume expansion, but the shift still plays the same
role in driving transportation as in the Alcubierre solution, as the spatial
term is flat in the passenger volume444The non-unitary spatial term will
modify the geodesic velocity as $\gamma^{ij}\beta_{j}$. Lentz [13, 12], Fell
and Heisenberg [9] and the more general Natario class [14] warp metrics all
use several shift vector components but follow in essence the same dynamics as
described here. In the spherically symmetric metric [4], no shift vector is
used; but again, this is a constant-velocity solution, and a shift vector
addition could be required when extending it to a more general accelerating
solution.
While shift vectors have been used extensively in the literature, a shift
vector is not the only ingredient we can use to build warp drives. Spatial
gradients in the lapse and metric spatial terms can affect $du_{i}/dt$ and
then $dx^{i}/dt$ but require careful management of their spatial derivatives
to avoid energy or tidal forces existing inside the passenger volume. The use
of a shift is, in many ways, the easiest method to add geodesic transportation
as it directly provides a $dx^{i}/dt$ term to observers based on its magnitude
alone. In this work, we will focus on a warp solution that uses a shift vector
to obtain the same warp drive properties as those of the Alcubierre solution
but in a manner that can maintain physicality.
### 1.3 The Problem of Physicality
The condition of a physical warp drive is discussed in detail in [4] and our
recent paper [11], but in essence, the core requirement is to satisfy the
energy conditions. The Natario class of solutions (defined by shift vectors
with unit lapse $\alpha=1$ and flat spatial metric $\gamma_{ij}=\delta_{ij}$)
has been shown to always violate the energy conditions [15]. One possible
reason for this violation is that the metrics as constructed lack the
gravitational effect of regular matter. The asymptotic gravitational field
produced by a gravitating spherically symmetric object is given by the
Schwarzschild metric, which has an asymptotic $1/r$ dependence in the lapse and
spatial terms in Schwarzschild coordinates. General warp
metrics with a compact support region, meaning metrics whose components are
bounded within some finite region and transition to Minkowski space faster
than $1/r$, therefore behave in ways different from regular matter. This can be expressed
in another way using the definition of the ADM mass [5], a quantity
describing the concept of mass as seen in faraway regions. The Alcubierre metric
and similar solutions have $M_{ADM}=0$, as opposed to the Schwarzschild metric,
which for an equivalent energy density magnitude would have $M_{ADM}>0$.
However, even if the metric has non-zero ADM mass, energy condition violations
can easily occur. This issue is demonstrated in the recent work of Schuster et
al. on the transported Schwarzschild Drive [16]. Further still, even if the
solution asymptotically approaches that of a positive-matter spacetime with a
positive ADM mass, additional constraints must be applied in the non-vacuum
warp bubble. As a rule of thumb, the Eulerian momentum flux and pressures
should be less than the energy density to satisfy the energy conditions [11].
Finally, following [4], subluminal motion is likely another important requirement
for the metric to be physical. In summary, the likely key ingredients of a
physical warp drive solution can be simply stated as:
1. 1.
The asymptotically flat spacetime should have a positive ADM mass.
2. 2.
Generally, much larger positive energy density than both pressure and momentum
flux in the non-vacuum warp bubble, as measured by Eulerian observers.
3. 3.
Subluminal speeds.
These physical ingredients will be the guiding focus of the solution
constructed in this paper.
### 1.4 Paper Structure
The paper is structured by first introducing the approach to building a
numerical model of a warp drive in Section 2. Then, in Sections 3–4, we
develop the solutions for a matter shell and its transformation to a Warp
Shell through the addition of a shift vector. In Section 5 we discuss the
implications of this solution and compare it to prior warp metrics. Finally,
we conclude and remark on future steps in Section 6.
## 2 Methods
To overcome the issues encountered by warp solutions in the past, we will use
a new approach to constructing warp solutions that maintain a Schwarzschild
vacuum solution at large distances with a compact stress-energy tensor. This
is accomplished by adding a shift vector distribution on top of a regular
shell of matter. The added shift vector is kept below the threshold at which
the momentum flux accompanying it would cause energy condition
violations. Adding a shift vector will have a similar effect on passenger
transport to that in the Alcubierre drive without any energy condition
violations.
### 2.1 Building the Bubble
To find a physical solution, we utilize a moving matter shell as the
foundation metric for our warp drive. This solution features a flat interior
with an asymptotically flat Schwarzschild solution outside the shell. The shell
solution will be constructed in comoving coordinates in which the metric
tensor does not depend on time. In this section, we provide a top-level
summary of the process, while the details are found in the next section.
First, we need to consider what warp solutions look like in a comoving frame.
Since we plan to add a single shift vector to a shell, we look at the
Alcubierre solution in a comoving frame to see the form of the shift vector we
want to add. To transform Alcubierre’s solution to the stationary frame of an
external comoving timelike observer, we can use the Lorentz transformation shown
in Appendix B, as determined for an observer at spatial infinity. It should be noted
that this transformation limits us to a subluminal regime, but that is a
natural restriction for any external comoving timelike observer and is the
regime that is most likely to lead to physical solutions. Performing the
Lorentz transformation to the Alcubierre metric results in a shift vector that
is zero at $r_{s}\gg R$ and a non-zero shift vector inside the passenger
volume, as shown in Figure 2.
Figure 2: Example of an Alcubierre solution transformed to a comoving frame
using a Lorentz transformation. The static (blue) is the metric for the form
defined by Alcubierre in [1]. The comoving (dotted red) is the inverse Lorentz
transformation applied to the static solution. Note that the shift vector
remains the same but changes occur for the spatial and lapse terms. These
changes will be superseded by the matter shell terms when building the
physical solution in this paper.
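As a concrete numerical illustration of the frame change shown in Figure 2, the toy sketch below boosts the Alcubierre metric of Eq. (3), evaluated at a single point, into the comoving frame via the tensor law $g'_{\mu\nu}=\Lambda^{\alpha}{}_{\mu}\Lambda^{\beta}{}_{\nu}g_{\alpha\beta}$. This is our own illustration, not the Warp Factory implementation, and sign conventions may differ from those used in Appendix B.

```python
import numpy as np

# Alcubierre metric at one sample point, Eq. (3): g_tt = -1 + beta^2, g_tx = beta.
v_s = 0.5                  # bubble speed in units of c (illustrative)
f_val = 0.7                # shape function f(r_s) at the sample point (illustrative)
beta = -v_s * f_val        # off-diagonal term from (dx - v_s f dt)^2

g = np.diag([-1.0 + beta**2, 1.0, 1.0, 1.0])
g[0, 1] = g[1, 0] = beta

# Lorentz boost along x by v_s (toward the comoving frame).
gam = 1.0 / np.sqrt(1.0 - v_s**2)
L = np.eye(4)
L[0, 0] = L[1, 1] = gam
L[0, 1] = L[1, 0] = gam * v_s

g_comoving = L.T @ g @ L   # g'_{ab} = L^m_a L^n_b g_{mn}
print(np.round(g_comoving, 4))
# Far from the bubble (f -> 0) the same boost returns exact Minkowski form;
# inside, an off-diagonal (shift-like) term survives while g_tt and g_xx change.
```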
With the transformed Alcubierre solution above, we can now compare this to the
same setup for a regular matter shell, which is typically defined in a
comoving frame, and see that the warp effect addition can simply be expressed
as adding a shift vector to the shell metric. This added shift vector
$\beta_{i}$ is applied inside the interior of the shell, where the matter
exists to manage the non-vacuum physicality constraints:
$g_{warpshell}=g_{shell}+\delta g_{warp}$ (6)
where $\delta g_{warp}$ is a metric only containing a shift-vector component
along a single direction:
$\delta g_{warp}=\begin{pmatrix}0&\beta_{1}&0&0\\\ \beta_{1}&0&0&0\\\
0&0&0&0\\\ 0&0&0&0\end{pmatrix}$ (7)
The details for the shell and its warp modification are described in Sections
3 and 4.
As we did for the Alcubierre metric we can return to the geodesic equations
(2) to describe how this solution would impact passengers. Although we do not
model the acceleration phase here, it may be considered similar to the
Alcubierre case. Specifically, the passenger region of this shell interior in the constant-velocity case will need to be flat, so we can again assume that $du_{i}/dt=0$. We will also let $u_{i}=0$, just as we did for the Alcubierre metric, to consider possible solutions connected to some prior acceleration phase which maintained $du_{i}/dt=0$. However, this time, the
shell solution will not have Minkowski spatial terms and thus the coordinate
motion is now given by:
$\frac{dx^{i}}{dt}=-\beta^{i}=-\gamma^{ij}\beta_{j}$ (8)
Lastly, a question occasionally discussed in warp theory is the nature of the passenger transport with respect to the bubble motion, especially as the shift vector
decreases to zero at the boundary. In the case of a constant velocity warp
drive, where the metric is not changing in the comoving frame, the bubble
matter itself will always by definition “move” aligned with the passenger
volume. This is a consequence of the fact that the bubble itself is a
generating function for the shift vector, determined by the metric through the
Einstein field equation. This is true for all constant velocity warp
solutions.
### 2.2 Numerical Methods
In this section, we present a summary of our numerical method. We perform
numerical analysis using the Warp Factory toolkit presented in detail in [11].
Throughout this paper, we will adopt the 3+1 formalism and always report the
Eulerian stress-energy tensor and its components (pressure, momentum flux, and
energy density) using the methods from Section 3 in [11]. In Warp Factory, the
frame transformation is done on the metric tensor locally at each point using
a tetrad corresponding to Eulerian observers, which also transforms the local
metric to Minkowski form. This transformation tetrad is applied to the stress-
energy tensor, returning the Eulerian-measured stress-energy tensor. The
Eulerian observer in this frame is defined in a standard way as:
$n_{\mu}=(1,0,0,0)$ (9)
The observation of energy density $\rho$, momentum flux $p_{i}$, isotropic pressure $P_{i}$, and stress tensor $\sigma_{ij}$ from the Eulerian tensor $T^{\hat{\mu}\hat{\nu}}$ by the Eulerian observer at any location is:
$\begin{split}\rho=T^{\hat{0}\hat{0}}\\\ p_{i}=T^{\hat{0}\hat{i}}\\\
P_{i}=T^{\hat{i}\hat{i}}\\\ \sigma_{ij}=T^{\hat{i}\hat{j}}\end{split}$ (10)
The process for defining and evaluating the energy conditions, which is
described in detail in Section 3 of [11], is shown here for clarity. The Null
Energy Condition (NEC) is given by the contraction of null observers with the
stress-energy tensor at all points of the spacetime ($X$):
$\Xi_{NEC}(X)=T_{\hat{\mu}\hat{\nu}}(X)k^{\hat{\mu}}k^{\hat{\nu}}\geq 0\ \
\forall\ \ k^{\hat{\mu}}$ (11)
where $k^{\hat{\mu}}$ are null observers. The Weak Energy Condition (WEC) is
similar to the NEC but with the contraction of timelike observers at all
points of spacetime:
$\Xi_{WEC}(X)=T_{\hat{\mu}\hat{\nu}}(X)V^{\hat{\mu}}V^{\hat{\nu}}\geq 0\ \
\forall\ \ V^{\hat{\mu}}$ (12)
where $V^{\hat{\mu}}$ are timelike observers. The Strong Energy Condition
(SEC) is also found using timelike observers contracted with the stress-energy
tensor:
$\Xi_{SEC}(X)=\left(T_{\hat{\mu}\hat{\nu}}(X)-\frac{1}{2}T(X)\eta_{\hat{\mu}\hat{\nu}}\right)V^{\hat{\mu}}V^{\hat{\nu}}\geq
0\ \ \forall\ \ V^{\hat{\mu}}$ (13)
Finally, the Dominant Energy Condition (DEC) is given by contracting the
stress-energy tensor in the mixed form using the timelike observers:
$\Upsilon^{\hat{\mu}}(X)=-T^{\hat{\mu}}_{\ \ \hat{\nu}}(X)V^{\hat{\nu}}$ (14)
where $\Upsilon^{\hat{\mu}}\left(X\right)$ must be future pointing, meaning $\Upsilon^{\hat{\mu}}$ is either timelike or null, satisfying (in this work, we flip the sign of this condition in Warp Factory so that negative values mean violations in all of the energy conditions shown):
$\xi_{D}(X)=\eta_{\hat{\mu}\hat{\nu}}\Upsilon^{\hat{\mu}}(X)\Upsilon^{\hat{\nu}}(X)\leq
0\ \ \forall\ \ V^{\hat{\mu}}$ (15)
The observer vector field is sampled with a spatial orientation density of 100 samples and, for timelike observers, an additional velocity magnitude density of 10 samples (see [11] for a detailed discussion of this method).
## 3 Shell Metric
The base for the warp solution is a stable matter shell. We start by
constructing this shell in a comoving frame in Schwarzschild coordinates.
### 3.1 Metric Definition
The shell solution is built starting from a general static, spherically
symmetric metric, which has the form of [6]:
$ds^{2}=-e^{2a}dt^{2}+e^{2b}dr^{2}+d\Omega^{2}$ (16)
The functions $a$ and $b$ can be solved for using the field equations with a known stress-energy tensor. For a simple solution based on the stress-energy
tensor for an isotropic fluid, this is a straightforward process where the
stress-energy tensor components in the Eulerian frame are given as:
$T^{iso}_{\hat{\mu}\hat{\nu}}=\textrm{diag}(\rho,P,P,P)$ (17)
However, for a stable shell the pressures $P$ cannot be assumed isotropic, since the interior radius must withstand the inward pull of gravity, resulting in non-uniform pressure terms along the $\theta$ and $\phi$ directions, akin to hoop stress in a cylinder. With non-isotropic pressure,
the solution takes the form:
$T^{shell}_{\hat{\mu}\hat{\nu}}=\textrm{diag}(\rho,P_{1},P_{2},P_{3})$ (18)
To solve for the non-isotropic shell solution, we will take an iterative
approach to find $a$ and $b$ as modifications from the isotropic solution by
changing the assumed pressure and density used to determine $a$ and $b$ in the
isotropic case. A short summary of the process will be as follows:
1.
Start with an initial guess solution for the shell metric assuming a constant
density $\rho^{\prime}$ between the inner radius $R_{1}$ and the outer radius
$R_{2}$.
2.
Solve for the initial guess pressure profile $P^{\prime}(r)$ using the Tolman-Oppenheimer-Volkoff (TOV) equation, assuming the pressure in the shell is isotropic and zero at $r=R_{2}$. After solving the differential equation with this single boundary condition, we are left with a constant pressure inside; this pressure is set to zero for $r<R_{1}$ to enforce a vacuum interior.
3.
The constant density assumption creates sharp boundaries at $R_{1}$ and $R_{2}$, and the isotropic pressure assumption is not valid at the $R_{1}$ boundary if the shell is to remain stable. To address this issue, we soften the boundaries by applying radial smoothing to $\rho^{\prime}$ and $P^{\prime}$ with an $f_{smooth}$ function.
4.
The smoothed $\tilde{\rho}$ and $\tilde{P}$ are then used to solve for the
terms of $a$ and $b$ which build the actual metric. Solving the stress-energy
tensor using the Einstein field equations then provides the true $\rho$ and
$P_{i}$ that correspond to the metric obtained in this step.
5.
Finally, the smoothing function for pressure and density is iterated upon in
steps (iii) and (iv) until we find a metric that satisfies the energy
conditions at the boundaries.
The flow for constructing the metric solution using the process described above is shown in Figure 3.
Figure 3: Metric creation method where trial solutions are used and then
modified to construct a physical shell solution. The process starts with
density on the left and then generates a solution on the right.
The detailed version of the process outlined above is as follows. The starting
assumption of the density profile $\rho^{\prime}$ is that of a spherical shell
with an inner radius of $R_{1}$ and outer radius of $R_{2}$ with a constant
density and total mass $M$. This defines the density as:
$\rho^{\prime}(r)=\begin{cases}0&0\leq r\leq R_{1}\\\
\frac{3}{4\pi}\frac{M}{R_{2}^{3}-R_{1}^{3}}&R_{1}\leq r\leq R_{2}\\\
0&R_{2}\leq r<\infty\end{cases}$ (19)
and the associated cumulative mass profile $m^{\prime}(r)$ is just the
integration of the density below a given radius $r$, which results in:
$m^{\prime}(r)=\int_{0}^{r}4\pi r^{2}\rho^{\prime}(r)dr=\begin{cases}0&0\leq
r\leq R_{1}\\\
M\left(\frac{r^{3}-R_{1}^{3}}{R_{2}^{3}-R_{1}^{3}}\right)&R_{1}\leq r\leq
R_{2}\\\ M&R_{2}\leq r<\infty\end{cases}$ (20)
From the density and cumulative mass definitions, we can numerically solve the
TOV equation for $P^{\prime}$ when $R_{1}<r<R_{2}$ with a boundary of zero
pressure at $r=R_{2}$ and enforce that $P^{\prime}=0$ for $r<R_{1}$:
$\frac{dP^{\prime}}{dr}=\begin{cases}0&0\leq r<R_{1}\\\
-G\left(\rho^{\prime}/c^{2}+P^{\prime}/c^{4}\right)\left(m^{\prime}/r^{2}+4\pi
rP^{\prime}/c^{2}\right)\left(1-\frac{2Gm^{\prime}}{c^{2}r}\right)^{-1}&R_{1}<r\leq
R_{2}\\\ 0&R_{2}\leq r<\infty\end{cases}$ (21)
This initial solution has issues at $R_{1}$ and $R_{2}$ due to the discontinuity of the density and pressure; this problem is alleviated by applying numerical smoothing to both $\rho^{\prime}$ and $P^{\prime}$:
$\begin{split}&\tilde{\rho}=f_{smooth}(\rho^{\prime})\\\
&\tilde{P}=f_{smooth}(P^{\prime})\\\ \end{split}$ (22)
The smoothing function applied uses a moving average, which is a lowpass filter with filter coefficients equal to the reciprocal of the span of the average (see the MATLAB ‘smooth’ function for more details). The smoothing fixes the discontinuity by producing finite derivatives at the boundaries while maintaining a physical solution. The smoothing coefficients are selected iteratively until the solution has no violations of the Null, Weak, Dominant, and Strong energy conditions. Once the
smoothing is applied, we must recompute the new mass profile as before with
the new density $\tilde{\rho}(r)$:
$m(r)=\int_{0}^{r}4\pi r^{2}\tilde{\rho}(r)dr$ (23)
The smoothed values of pressure and density can now be used to solve the
metric terms of $a$ and $b$. The mass profile directly determines $b$ which
provides $e^{2b}$ as a simple extension of the Schwarzschild solution where
$M=m$ [6, Section 5.8, Eq. 5.143]:
$e^{2b}=\left(1-\frac{2Gm}{c^{2}r}\right)^{-1}$ (24)
The other metric term, $e^{2a}$, is found by solving for $a$ [6, Section 5.8, Eq.
5.152]:
$\frac{da}{dr}=G\left(\frac{m}{c^{2}r^{2}}+\frac{4\pi
r}{c^{4}}\tilde{P}\right)\left(1-\frac{2Gm}{c^{2}r}\right)^{-1}$ (25)
This equation is integrated using the condition that at $r\gg R_{2}$ the boundary is set by $e^{2a}=e^{-2b}$, which corresponds to a Schwarzschild solution in the vacuum region. The span, or window, $s$ of the smoothing function is selected differently for density and pressure; we have found that a ratio between the spans of density and pressure of $s_{\rho}/s_{P}\approx 1.72$ works (the specific numbers in this setup were found by trial and error; improved approaches could use more complex techniques than simple moving-average smoothing to resolve the boundary violation issues). The moving-average smoothing is applied four times, with the same span and ratios, to the density and pressure for the final solution.
For a shell with parameters $R_{1}=10$ m, $R_{2}=20$ m, and $M=4.49\times 10^{27}$ kg ($2.365$ Jupiter masses; the mass parameter is selected to allow the largest shift vector in the drive while balancing physicality, given the selected radial distribution of the shift vector), the pressure and density before and after smoothing are shown in Figure 4. The process so far provides a solution for the metric in spherical coordinates.
The last step is to transform this solution to pseudo-Cartesian coordinates,
which are convenient for defining the numerical grid, by changing the
coordinates and the coordinate differentials using the standard spherical to
Cartesian relations. When doing this numerically, the radial solutions for
$e^{2a}$ and $e^{2b}$ are interpolated to the Cartesian grid points using
Legendre polynomials. The metric, which is built from these parameters, is
plotted in Figure 5.
Figure 4: Density and pressure profiles before and after smoothing for constructing the Shell metric.
Figure 5: Shell and Schwarzschild metric components for a slice along the y-axis. Only the non-Minkowski components for this slice are shown. In $g_{22}$ the vertical dashed line is where $r=r_{s}$ for the reference Schwarzschild metric, where the sign flips for the spatial parts.
Figure 6: Shell stress-energy components for a slice along the y-axis. Only the non-zero components are shown for this slice. The $\tilde{\rho}$ and $\tilde{P}$ lines are the smoothed density and pressure as computed from Eq. (22). Note that the energy density is scaled by a factor of $c^{2}$.
### 3.2 Stress-Energy Tensor
The resulting stress-energy terms, as measured by Eulerian observers, are
plotted in Figure 6. Along each of the principal coordinate directions, the
input pressure $\tilde{P}$ used to solve for $a$ from Eq. (25) is equal to the
calculated stress-energy pressure $P$ along that direction since it is aligned
with the radial direction. The pressures along the x- and z- directions are
equal and differ from the y-pressure, with a large spike on the inner bound of
the shell. The choice to smooth the pressure and density is made purposely to find a solution with non-isotropic pressures, modified from the isotropic solution with a smoothing filter. For a static shell, the inner boundary at
$R_{1}$ requires a difference in pressure between the radial pressures and the
angular pressure to ensure the shell is stable from gravitational collapse.
This manifests as a kind of hoop stress around the inner radius of the shell.
It is also important that these pressures are all lower in magnitude than the
value of the energy density at that point to ensure that the shell is
physical. For realistic materials that may have a limited range of pressures
possible, this requirement can always be satisfied by making the shell large
and modifying the density profile, hence, reducing the gravitational forces.
The physicality of the solution is demonstrated by checking the energy
conditions using the Warp Factory Toolkit [11], shown in Figure 7. No energy
condition violations exist beyond the numerical precision limits that exist at
$10^{34}$ in this setup (see Appendix A for a detailed discussion of errors and
numerical limitations).
Figure 7: Shell energy conditions for a slice along the y-axis. Negative
values represent violations of the condition. No negative values are found.
Units are in $[J/m^{3}]$.
## 4 Constant Velocity Warp Shell
As described in the introduction, a warp drive that can enable the transport
of different observers can do so using a shift vector inside the passenger
volume. Therefore, the task is to add a shift vector field to a shell solution
while maintaining the energy conditions.
### 4.1 Metric Definition
From the Shell solution constructed in Section 3, we now modify the interior
region of the comoving shell to have a shift vector along the direction of
motion, in this case along x. The modification must follow a few constraints
to create a sensible warp drive within our definition of warp:
1.
The interior region should remain flat, meaning all spatial derivatives of the
metric are zero ($\partial_{i}g_{\mu\nu}=0$). Such a choice ensures that the
passengers will be in a vacuum and experience no tidal forces.
2.
The transition region of the shift vector must occur between $R_{1}$ and
$R_{2}$ and smoothly connect with the exterior solution at $R_{2}$ where
$\beta_{i}=0$.
The shift modification changes the $g_{01}$ term as:
$g^{warp}_{01}=g_{01}-S_{warp}(r)\left(g_{01}+\beta_{warp}\right)$ (26)
where $S_{warp}$ is a compact sigmoid function defined as:
$S_{warp}(r)=\begin{cases}1&r<R_{1}+R_{b}\\\
1-f(r)&R_{1}+R_{b}<r<R_{2}-R_{b}\\\ 0&r>R_{2}-R_{b}\end{cases}$ (27)
and $f(r)$ is given by:
$f(r)=\left(\exp\left[(R_{2}-R_{1})\left(\frac{1}{r-R_{2}}+\frac{1}{r-R_{1}}\right)\right]+1\right)^{-1}$
(28)
where $R_{b}>0$ is a buffer region that keeps the derivatives interior to the bubble. We construct a matter shell with the same parameters as in Section 3. Varying the value of $\beta_{warp}$, we find that the addition of shift inside the shell is possible for $\beta_{warp}=0.02$ without any energy condition violation (this is likely not an upper limit, as optimizations could be considered). The components for this metric are plotted in Figure 8.
Figure 8: Constant velocity Warp Shell metric components for a slice along the
y-axis. Only the non-Minkowski components for this slice are shown. Direction
of motion is along +X.
### 4.2 Physicality
To understand the physicality of this solution, we start by plotting the
resulting stress-energy terms in Figure 9. The energy density remains mostly
unchanged compared to that of a standard moving shell, but the modification of
the shift vector causes a difference in the momentum and pressure values for
Eulerian observers. The change in the momentum is most noticeable compared to the shell metric, which had zero momentum density between $R_{1}$ and $R_{2}$. This modified solution has both positive and negative momentum density around the middle of the shell wall, $r\approx(R_{1}+R_{2})/2$. This is indicative of a circulation pattern forming
in the momentum flow of the shell. The same kind of momentum flow structure is
also observed for an Alcubierre solution [11]. The energy conditions are evaluated for this metric and are shown in Figure 10. Modifying the shift vector in this fashion introduces no violations compared to the normal matter shell solution.
Surf plots of the solution for a slice centered in the $Z$ direction are shown
for the metric in Figure 11, energy density in Figure 12, other components of
stress-energy in Figure 13, and energy condition evaluations in Figure 14.
Figure 9: Constant velocity Warp Shell stress-energy components for a slice along the Cartesian y-axis. The direction of motion is along +X. Note that the energy density is scaled by a factor of $c^{2}$ and the momentum density by a factor of $c$.
Figure 10: Constant velocity Warp Shell energy conditions for a slice along the y-axis. The direction of motion is along +X. Negative values represent violations of the condition. No negative values are found.
### 4.3 Cross-Sections
Figure 11: Metric for the constant velocity Warp Shell in the comoving frame. The direction of motion is along +X. The cross-section is centered in Z. Only the non-zero cross-sections are shown.
Figure 12: Energy density for the constant velocity Warp Shell. The direction of motion is along +X. The cross-section along Z is aligned with the bubble center. Units are $[J/m^{3}]$.
Figure 13: The stress-energy tensor for the constant velocity Warp Shell in the comoving frame, for Eulerian observers. The energy density is shown in Figure 12. The direction of motion is along +X. The cross-section along Z is aligned with the bubble center. Only the non-zero cross-sections are shown. Units are $[J/m^{3}]$.
Figure 14: Energy condition evaluation for the constant velocity Warp Shell. The direction of motion is along +X. The cross-section along Z is aligned with the bubble center. The minimum value across all observers is shown. Positive (blue) and zero (white) are physical and negative (red) is violating. Units are in $[J/m^{3}]$.
## 5 Discussion
### 5.1 Measuring Shift
The addition of a shift vector to the passenger volume of a shell creates
several changes to the solution. To fully differentiate a warp shell from a
normal matter shell, an invariant test can be constructed using a comparison
of light rays traveling through the bubble, measuring the difference in
transit time between two paths of rays as they transit along and against the
shift vector direction. Since the metric is already defined in a comoving
frame, we simply have to run null geodesics through the center of the shell
directed forward and backward along the direction of the shift vector and
record the proper time $\delta t$ for each photon to return as measured at the
emitting points, ignoring photon interaction with the stress-energy tensor. In
Figure 15, a diagram of the test setup is shown for each of the photon paths
as they travel through the shell and return to the emitting point. This test
configuration is constructed within Warp Factory and the light-ray times are
numerically determined. Running this test we find that the Warp Shell (from
Section 4) has $\delta t\approx 7.6$ ns and the Matter Shell (from Section 3)
has $\delta t=0$ ns. As expected, a normal shell has an equal transit time
between both light rays, whereas the Warp Shell has a difference in transit
time depending on the ray’s direction through the Warp Shell. This delay is
not a unique feature of the Warp Shell in this paper but is also true of other
proposed warp drives that utilize a shift vector. Using Warp Factory, we conducted the same numerical experiment for a few of the warp drives discussed in the literature, and all of them have $\delta t>0$, as shown in Table 1 (reference warp metrics are converted to a comoving frame using a Galilean transformation of their coordinates). This experiment demonstrates that a Lense-Thirring effect exists for warp drives with shift vectors, creating a linear frame dragging (an analogous example uses photon paths circling a Kerr black hole, whose transit times differ depending on whether they travel with or against the rotation; the same effect occurs here, except these paths are straight through the warp bubble center, aligned with or against the shift vector [7]). Since the photon travel time is a measurable quantity, the shift-vector modification of the shell metric cannot be reduced to a coordinate transformation.
Figure 15: Diagram of the light-ray test. The emitters, detectors, and mirrors are comoving with the shell of interest. Note that both beams pass through the center, but are offset in the diagram for visual clarity. Emitter-detector B is vertically aligned with the mirrors on the left and emitter-detector A is vertically aligned with the mirrors on the right. Emitter-detectors A and B are equidistant to the center of the shell. The return path of the two light beams can be anywhere outside of the shell. The Warp Shell’s warp effect is in the horizontal direction away from B and toward A.

Table 1: Comparison of time delay between different warp models and the matter shell for $v_{warp}=0.04$ c.

Name | Parameters | $\delta t$ [ns]
---|---|---
Alcubierre [1] | $R$ = 15 m | 8.0
Van Den Broeck [19] | $R_{1}$ = 10 m, $R_{2}$ = 15 m, $\alpha$ = $0.1$ | 9.1
Modified Time [4] | $R$ = 15 m, A = 2 | 6.7
Matter Shell (Section 3) | $R_{1}$ = 10 m, $R_{2}$ = 20 m, M = $4.49\times 10^{27}$ kg | 0
Warp Shell (Section 4) | $R_{1}$ = 10 m, $R_{2}$ = 20 m, M = $4.49\times 10^{27}$ kg | 7.6
### 5.2 Positive Energy Density
Whether or not a spacetime satisfies the energy conditions is best understood
by the relationships between energy density, pressures, and momentum flux in
the Eulerian frame ($T^{\hat{\mu}\hat{\nu}}$) [11] since both the NEC and WEC
are just expressions of how the different elements of the stress-energy tensor
are perceived by different observers. In the Eulerian frame, the observer contraction can be simplified into roughly a weighting of pressure and momentum flux against the energy density (the full energy conditions are determined by contracting the set of all observers, null and timelike, with the tensor, which weights all of the different tensor elements together). We can generally say that a physical solution satisfying those conditions can exist only if, for Eulerian observers, the energy density is larger than the magnitude of all of the other tensor terms combined.
One method of creating positive energy is to use spherical matter shell
solutions that have a defined ADM mass [5]. In a coordinate system that is
asymptotically Minkowski, these are Schwarzschild-like solutions that can be
parameterized by their ADM mass (given by $M$ in this paper). Building our
warp solution from a matter shell allowed us to use the ADM mass parameter to
engineer positive energy density into the solution, while the modified shift
vector gave us a warp effect by creating a linear frame dragging inside the
shell. However, the amount of mass is limited by the shell radius and
thickness so as not to produce an event horizon within the shell
($R_{shell}>2GM_{shell}/c^{2}$), so only a limited amount of energy density
can be added. Increasing the shift vector will continue to add more momentum
flux to the stress-energy tensor, so there is an upper limit to the magnitude
of the shift vector that keeps the warp drive physical before the momentum
flux exceeds energy density. This upper limit is a future direction of work.
We can say that the shift vector distribution considered here is very
conservative in terms of its magnitude since we keep the shell at a constant
density. However, there are certainly ways to greatly improve this through
optimizing the shift vector and energy density profiles to strategically place
energy density where the momentum flux is highest.
Another interesting point to note is that, while there does exist a shift vector in the direction of A in the light-ray setup, the Shapiro time delay [17] (the delay of light travel time due to gravitational time dilation) from B to A is still a delay compared to the propagation time in the corresponding flat region. This is in contrast to the Alcubierre metric, in which an advance is perceived. This result is created by the presence of a changing lapse rate, a feature the Alcubierre metric does not have, which is related to the solution having ADM mass. This constraint may be another important aspect of physicality [21], namely that physical solutions might require a changed lapse rate which maintains a Shapiro time delay rather than an advance.
### 5.3 Acceleration
The warp solution created here is evaluated for the constant velocity case,
but the immediate question is how it applies to the acceleration phase. One
possible approach is to have the bubble accelerate by simply accelerating the
coordinate center and increasing the magnitude of the shift vector
accordingly. However, this approach gives the exact same issue as the
Schwarzschild Drive [16], which takes a regular black hole solution and simply
moves its center through the timeslices. This approach changes the metric such
that it now requires a negative energy density throughout space,
asymptotically approaching zero at infinity.
An obvious alternative is to imagine that some basic momentum transfer occurs,
where mass is shed in the process of creating the momentum flux in the bubble.
In this way, a kind of rocket-like solution could be possible that cancels out
the acceleration effects for passengers inside. However, this approach also
presents its own problems since the bubble likely requires large amounts of
matter to cancel out acceleration inside, thus requiring an even larger
ejection of mass to accelerate itself, which quickly becomes untenable. In addition, creating the metric itself that describes this situation has not been done in detail before, beyond simple photon rockets [22].
Insight might be gathered by considering the ADM momentum [2], where conservation of the 4-momentum might be a key element in understanding the constraints on warp solutions when creating physically accelerating solutions with ADM mass. The key question in this regard is whether the ‘spinning-up’ of
the warp drive results in the forward motion of the entire structure without
the need for any energy ejection. Analyzing non-vacuum spacetimes with non-
Schwarzschild boundary conditions might yield valuable insight.
Another alternative is to explore the use of focused gravitational radiation
emission as a way to accelerate drives over traditional momentum transfer
methods, such as recently discussed in [20]. In the work here, we assumed that
$du_{i}/dt=0$ in the passenger region during the acceleration phase of the
warp drive, but this is not a requirement in general solutions. It is possible
that the presence of a non-zero shift vector may not be the key source of
geodesic transport in all solutions if carefully constructed spatially varying
lapse and metric spatial terms exist in the passenger volume. In fact, in the
scenario of ejecting matter, the lapse and spatial terms will vary in time and
spatially across a given spatial slice. Ultimately, the question of how to
make physical and efficient acceleration is one of the foremost problems in
warp drive research.
## 6 Conclusion
In this paper, we have developed the first constant velocity subluminal
physical warp drive solution to date that is fully consistent with the
geodesic transport properties of the Alcubierre metric. It provides geodesic
transport of observers while also satisfying the NEC, WEC, DEC, and SEC. This
solution was constructed from a stable shell of matter with a modified shift
vector on its interior, creating a warp solution with positive ADM mass.
Analysis and construction of the shell used a new numerical toolkit called
Warp Factory, which was developed by APL for warp research. This exciting new
result offers an important first step toward understanding what makes warp solutions physical. Moreover, the warp drive spacetime constructed here is a new
type of warp drive beyond the Natario class and hence not subject to the same
scope discussed in [9] and [18] due to its use of modified spatial terms in
the metric. This new solution shows that a more generic constant velocity warp
drive spacetime can be constructed that satisfies the energy conditions.
We intend to explore this solution further and find areas of optimization to
improve the mass-to-velocity ratio required to maintain physicality. The
metric construction process of smoothing can be replaced by a direct 1D
optimization of the radial profiles for density, pressure, and shift vector,
possibly reducing required mass by orders of magnitude. In addition, the
question of accelerating the drive efficiently without breaking physicality is
a major direction of work for the field of warp drive research. The code for
the metrics and figures shown here will be provided as an update to the Warp
Factory codebase.
Warp Factory can be found at
https://github.com/NerdsWithAttitudes/WarpFactory.
## Appendix A Numerical Error Analysis
Using numerical methods for analysis puts constraints on the accuracy of the
results due to limitations in finite differencing methods for solving the
field equations, representing the spacetime with precision-limited numbers,
and discretizing the grid. These errors are summarized below:
1.
Spherical to Cartesian Interpolation Error: The conversion of the spherical
metric to the Cartesian metric uses Legendre polynomials to interpolate points
in $r$ to points in $x$, $y$, and $z$. This interpolation introduces errors in
the final metric.
2.
Finite Difference Discretization Error: This error comes about from the
discretization of the space into a grid. With lower spatial resolution, the
finite difference methods deviate from the true analytical derivatives since
the step size of the finite difference algorithm is larger. This error is
largest when $f(x+h)-f(x)$ is large compared to the step size $h$.
3.
Floating Point Round-Off Error: The numerical calculations are done in double
precision. This restricts the maximum possible range of floating point values
to about 16 orders of magnitude. The solver for the Einstein Field Equations
is written to reduce catastrophic cancellation of small numbers, but the
double precision limit still restricts meaningful results for the stress-
energy tensor and energy condition violation to those above $10^{34}$ in
magnitude.
4.
Finite Difference Truncation Error: Finally, finite difference truncation
error happens when the infinite series that calculates the derivatives is cut
off to the fourth order. For the fourth-order finite difference method, the
truncation error is below the double precision round-off error floor.
As an example of the error floor in this analysis, we can look at the full
returned values of the energy conditions in Figure 16 for the Shell, Boosted
Shell, and Warp Shell.
Figure 16: Energy condition evaluation all the way to below the double
precision floor for the Shell and Warp Shell. Only the violating values are
shown. The region between $R_{1}$ and $R_{2}$ is empty as no violation exists
and only positive values for all observers are found. No systematic deviation
in errors is seen between the metrics. The values of the stress-energy tensor
in this work are on the order of $10^{39}$, which leaves a relative noise floor of around $10^{-6}$.
## Appendix B Lorentz Transformation
The Lorentz factor is given in the usual manner:
$\gamma=\frac{1}{\sqrt{(1-\beta^{2})}}$ (29)
Applying the Lorentz transformation corresponding to a boost along the
positive x-dimension to a comoving metric $g$ results in the new metric
$g^{\prime}$ in terms of the old components as:
$\displaystyle g^{\prime}_{00}$ $\displaystyle=\gamma^{2}\left(g_{00}-2\beta
g_{01}+\beta^{2}g_{11}\right)$ (30) $\displaystyle g^{\prime}_{01}$
$\displaystyle=\gamma^{2}\left(g_{01}-\beta g_{11}-\beta
g_{00}+\beta^{2}g_{01}\right)$ (31) $\displaystyle g^{\prime}_{10}$
$\displaystyle=g^{\prime}_{01}$ (32) $\displaystyle g^{\prime}_{02}$
$\displaystyle=\gamma\left(g_{02}-\beta g_{12}\right)$ (33) $\displaystyle
g^{\prime}_{20}$ $\displaystyle=g^{\prime}_{02}$ (34) $\displaystyle
g^{\prime}_{03}$ $\displaystyle=\gamma\left(g_{03}-\beta g_{13}\right)$ (35)
$\displaystyle g^{\prime}_{30}$ $\displaystyle=g^{\prime}_{03}$ (36)
$\displaystyle g^{\prime}_{11}$
$\displaystyle=\gamma^{2}\left(g_{11}+\beta^{2}g_{00}-2\beta g_{01}\right)$
(37) $\displaystyle g^{\prime}_{12}$ $\displaystyle=\gamma\left(g_{12}-\beta
g_{02}\right)$ (38) $\displaystyle g^{\prime}_{21}$
$\displaystyle=g^{\prime}_{12}$ (39) $\displaystyle g^{\prime}_{13}$
$\displaystyle=\gamma\left(g_{13}-\beta g_{03}\right)$ (40) $\displaystyle
g^{\prime}_{31}$ $\displaystyle=g^{\prime}_{13}$ (41) $\displaystyle
g^{\prime}_{22}$ $\displaystyle=g_{22}$ (42) $\displaystyle g^{\prime}_{33}$
$\displaystyle=g_{33}$ (43)
The direction of the transformation is opposite to the direction of $\beta$.
## References
* [1] Miguel Alcubierre. LETTER TO THE EDITOR: The warp drive: hyper-fast travel within general relativity. Classical and Quantum Gravity, 11(5):L73–L77, May 1994.
* [2] Richard Arnowitt, Stanley Deser, and Charles W. Misner. Republication of: The dynamics of general relativity. General Relativity and Gravitation, 40(9):1997–2027, September 2008.
* [3] F. Bacchini, B. Ripperda, A. Y. Chen, and L. Sironi. Generalized, Energy-conserving Numerical Simulations of Particles in General Relativity. I. Time-like and Null Geodesics. Astrophysical Journal, 237(1):6, July 2018.
* [4] Alexey Bobrick and Gianni Martire. Introducing physical warp drives. Classical and Quantum Gravity, 38(10):105009, May 2021.
* [5] Leo Brewin. A simple expression for the ADM mass. General Relativity and Gravitation, 39(4):521–528, April 2007.
* [6] Sean M. Carroll. Spacetime and geometry. An introduction to general relativity. 2004\.
* [7] L. Filipe. O. Costa and José Natário. Frame-Dragging: Meaning, Myths, and Misconceptions. Universe, 7(10):388, October 2021.
* [8] Allen E. Everett and Thomas A. Roman. Superluminal subway: The Krasnikov tube. Physical Review D, 56(4):2100–2108, August 1997.
* [9] Shaun D. B. Fell and Lavinia Heisenberg. Positive energy warp drive from hidden geometric structures. Classical and Quantum Gravity, 38(15):155020, August 2021.
* [10] L. H. Ford and Thomas A. Roman. Averaged energy conditions and quantum inequalities. Physical Review D, 51(8):4277–4286, April 1995.
* [11] Christopher Helmerich, Jared Fuchs, Alexey Bobrick, Luke Sellers, Brandon Melcher, and Gianni Martire. Analyzing warp drive spacetimes with Warp Factory. Classical and Quantum Gravity, 41(9):095009, May 2024.
* [12] Erik W. Lentz. Breaking the Warp Barrier: Hyper-Fast Solitons in Einstein-Maxwell-Plasma Theory. arXiv e-prints, page arXiv:2006.07125, June 2020.
* [13] Erik W. Lentz. Hyper-Fast Positive Energy Warp Drives. arXiv e-prints, page arXiv:2201.00652, December 2021.
* [14] José Natário. Warp drive with zero expansion. Classical and Quantum Gravity, 19(6):1157–1165, March 2002.
* [15] Jessica Santiago, Sebastian Schuster, and Matt Visser. Generic warp drives violate the null energy condition. Physical Review D, 105(6):064038, March 2022.
* [16] Sebastian Schuster, Jessica Santiago, and Matt Visser. ADM mass in warp drive spacetimes. arXiv e-prints, page arXiv:2205.15950, May 2022.
* [17] Irwin I. Shapiro. Fourth Test of General Relativity. Physical Review Letters, 13(26):789–791, December 1964.
* [18] Barak Shoshany and Ben Snodgrass. Warp Drives, Rest Frame Transitions, and Closed Timelike Curves. arXiv e-prints, page arXiv:2309.10072, September 2023.
* [19] Chris Van Den Broeck. A ‘warp drive’ with more reasonable total energy requirements. Classical and Quantum Gravity, 16(12):3973–3979, December 1999.
* [20] Vijay Varma, Sylvia Biscoveanu, Tousif Islam, Feroz H. Shaik, Carl-Johan Haster, Maximiliano Isi, Will M. Farr, Scott E. Field, and Salvatore Vitale. Evidence of Large Recoil Velocity from a Black Hole Merger Signal. Physical Review Letters, 128(19):191102, May 2022.
* [21] Matt Visser, Bruce Bassett, and Stefano Liberati. Perturbative superluminal censorship and the null energy condition. In C. P. Burgess and R. C. Myers, editors, General Relativity and Relativistic Astrophysics, volume 493 of American Institute of Physics Conference Series, pages 301–305, November 1999.
* [22] Y. Wang and X. Lin. Field of an arbitrarily accelerating charged point mass. In Third Marcel Grossmann Meeting on General Relativity, pages 1001–1004, January 1983.
# Effect of thresholding on avalanches and their clustering
for interfaces with long-range elasticity
Juha Savolainen, Aalto University, Department of Applied Physics, PO Box 11000, 00076 Aalto, Finland
Lasse Laurson, Computational Physics Laboratory, Tampere University, P.O. Box 692, FI-33101 Tampere, Finland
Mikko Alava, Aalto University, Department of Applied Physics, PO Box 11000, 00076 Aalto, Finland, and NOMATEN Centre of Excellence, National Centre for Nuclear Research, A. Soltana 7, 05-400 Otwock-Swierk, Poland
###### Abstract
Avalanches are often defined as signals higher than some detection level in
bursty systems. The choice of the detection threshold affects the number of
avalanches, but it can also affect their temporal correlations. We simulated
the depinning of a long-range elastic interface and applied different
thresholds including a zero one on the data to see how the sizes and durations
of events change and how this affects temporal avalanche clustering. Higher
thresholds result in steeper size and duration distributions and cause the
avalanches to cluster temporally. Using methods from seismology, the frequency
of the events in the clusters was found to decrease as a power-law of time,
and the size of an event in a cluster was found to help predict how many
events it is followed by. The results bring closer theoretical studies of this
class of models to real experiments, but also highlight how different
phenomena can be obtained from the same set of data.
## I Introduction
Figure 1: A snapshot of simulated avalanches with a visualized detection
threshold. The dark blue line shows the velocity $V$ of the interface as a
function of the simulation time $t$, and the orange region depicts the
threshold. When a threshold is used, only movement above it is considered, so
whenever the velocity signal goes inside the orange region, the velocity is
set to zero.
A slide in a sandpile [1], Barkhausen noise in magnets [2], and solar flares
[3] are examples of avalanches in physics. Avalanches are intermittent events
with scale-free sizes and durations, defined as the events that are large
enough to stand out from some background activity or noise. The choice of
which events are large enough is done by setting a detection threshold, below
which all data are ignored.
Filtering out small signals also affects the larger events, as has been shown for random walks [4, 5, 6] and elastic interfaces [7]. In experiments, however, a threshold might be unavoidable: even if all background activity could be removed from the data by other means, the detection devices might not be able to record the smallest relevant signals. Therefore, it is both interesting and useful to study how a threshold affects the results in different systems.
Elastic interfaces model, for example, magnetic domain walls [8, 9] and fluid invasion in porous media [10, 11]. We use a system similar to the one used in [7], in which the elastic interactions are long-ranged with a quadratic decay.
For example planar cracks in fracture mechanics [12], contact lines in wetting
[13], and low-angle grain boundaries in dislocations [14] exhibit this type of
elasticity.
Planar cracks, as the name suggests, are tears that move in a plane in a
material. In experiments they can be created by pulling apart an object with a
pre-existing crack. The front of the crack behaves like an elastic interface
that moves intermittently whenever the pulling force is enough to overcome a
weak spot in the object. As one part of the crack front moves forward, it
tends to pull neighbouring parts with it, creating avalanches in the movement.
[15, 16] The crack front of course deviates from perfectly planar movement,
but the phenomenon was also demonstrated by attaching two sandblasted
Plexiglas plates on top of each other and tearing them apart [17, 18].
Tools from seismology are often borrowed to study correlations in avalanches
[19, 20, 21]. The fracture mechanics model [22, 23, 24], as well as other
phenomena like wood compression [25], follow scaling laws similar to those found for earthquakes.
In seismology, earthquakes divide into so-called mainshocks and aftershocks. A
mainshock is an event that triggers smaller earthquakes in the nearby region,
and the aftershocks are the triggered events. The productivity law states that
the number of triggered aftershocks grows exponentially with the magnitude of
the mainshock, or equivalently as a power law of the mainshock’s energy. The
Omori-Utsu law states that the frequency of the aftershocks decreases as a
power law of the time elapsed after the mainshock. There are also small events
known as foreshocks that precede mainshocks [26, 27].
Barés et al. used a similar division into mainshocks and aftershocks for
activity in the interface model and the related planar crack experiment,
treating event sizes analogously to the energies of earthquakes [22, 23, 24].
The avalanches followed the productivity law, the Omori-Utsu law, and a law
called Båth’s law, which states that the magnitude of a mainshock is on average 1.2 units larger than the magnitude of its largest aftershock, regardless of the mainshock’s magnitude.
The most obvious side effect of a threshold is that it makes avalanches
smaller by removing a part of the movement. The smallest avalanches vanish
completely, which reduces the number of events. A perhaps more interesting
effect is that different peaks of the same event can get labelled as separate
events, as every time the velocity drops below the detection threshold and
comes back up, the avalanche is assumed to have stopped and a new one to have
initiated. Thus, a threshold both removes events and creates new ones.
A threshold creates power law distributed waiting times between avalanches in
the interface model [7]. We expect a threshold to also affect the analogues of
the productivity and Omori-Utsu laws, as the choice of a threshold affects how
many events and thus aftershocks arise from an underlying signal.
In subsection III.1 we look at how a threshold affects the size and duration
distributions of avalanches, as well as repeat the earlier results found for
the waiting times in [7]. Subsection III.2 discusses the frequency of
avalanches and aftershocks. In subsection III.3 the productivity law is examined with two different definitions of the aftershocks. First, the aftershocks are defined similarly to [22, 23, 24], and then the aftershocks are required to be within a specific window of time after the mainshock.
## II The numerical model
We simulated the movement of a long range elastic interface around the
depinning point, which is the point where the system is driven just enough to
cause movement, with a cellular automaton model. The interface consists of
$L=2^{17}$ points moving in a direction perpendicular to the initial direction
of the interface. Each point experiences the same external driving force and
individual pinning and elastic forces. The pinning force for each point is a
Gaussian random variable with variance 0.3, and it changes every time the
point moves.
The elastic force depends on how far each point has advanced, and it uses the
quadratically decaying form
$f_{i}=k\sum_{j\neq i}\dfrac{h_{j}-h_{i}}{(j-i)^{2}},$ (1)
where $k=0.3$ is a spring constant and $h_{l}$ denotes how many steps the point at site $l$ has moved. The sum is over all points of the interface. Periodic
boundary conditions modify the elastic term to
$f_{i}=\dfrac{\pi^{2}k}{L^{2}}\sum_{j\neq i}\dfrac{h_{j}-h_{i}}{\sin^{2}\Big{(}\dfrac{\pi}{L}(j-i)\Big{)}},$ (2)
where we have used $\sin^{-2}x=\sum_{n={-\infty}}^{\infty}(n\pi+x)^{-2}$.
Each time step starts by calculating the elastic force for all the points.
Each point for which the sum of the elastic, pinning, and driving forces is
positive moves one step.
The interface starts with a straight configuration, so it experiences a large
initial movement until the elastic forces grow large enough to balance out the
pinning forces. This initial roughening is not included in the data. When the
system stops after the initial reconfiguration, the external driving is
increased until at least one point becomes unstable and the first recorded
avalanche initiates.
The implementation for the external force is somewhat simpler than the common
comoving approach, in which the interface follows an average velocity set by
the experiment or simulation with a set spring tension [28, 29]. Now the
driving force changes with a constant rate at each time step, so that during
timesteps when at least one point in the interface moves, the driving
decreases by $10^{-7}$, and at quiescent steps the driving increases by
$10^{-7}$. This way the driving force balances as close to a theoretical
critical value as possible, and as a result roughly half of the timesteps
contain movement. The naturally occurring waiting times between events allow
us to study avalanches with no threshold at all.
The simulations run for $2^{18}$ timesteps. The data are averaged over 100
runs. The size of an avalanche is how much the sum of all $h_{l}$ changed,
i.e., how much the interface moved in total. A threshold subtracts a constant amount of movement from each timestep, with negative results set to zero.
Durations and waiting times are the number of time steps spent above or below
the threshold in simulation time.
## III Results
Figure 2: A space-time map of the avalanches during one simulation. The
horizontal axis is simulation time and the vertical axis is position along the
interface. The black dots denote which parts of the interface moved at that
time. Note that there are periodic boundary conditions, so the points in the
upper and lower boundaries in the graph are next to each other.
Each dataset contains 1174.53 avalanches on average. The average signal is
52.9 and during avalanches the average signal is 104.5. The main results are
in Figures 4-8, which show different avalanche distributions using thresholds
0, 1, 3, 10, 32, 100, and 316.
Figure 2 shows the spatial and temporal distribution of the activity in one
simulation. All simultaneous movement belongs to the same avalanche, even if
there is a large spatial separation, as even distant points have direct
elastic interactions. The avalanches consist of clusters of movement that are
dense in the middle and turn into sparse clouds farther away. Adding a
threshold to the global movement signal might have a similar effect as
removing some movement of the remote points. The remote points cause
avalanches to start and end more smoothly, and possibly unify the dense cores
of avalanches that are not simultaneous.
### III.1 Increased number of small events
Figure 3: The number of events $N_{S}$ per the simulation’s duration $T_{tot}$ as a function of the threshold $V_{0}$. The number is highest at threshold 18. The continuous line is a fit by a function $\propto V_{0}^{A}e^{-BV_{0}}$, where $A\approx 0.11$ and $B\approx 0.0059$ are constants.
Figure 4: The size distribution of the avalanches fitted as $\propto(1+S/S_{min})^{-\tau_{S}}e^{-S/S_{max}}$, where $S$ is the size, $\tau_{S}$ is the power-law exponent, and $S_{min}$ and $S_{max}$ are the cutoffs at small and large avalanches. The different graphs represent different thresholds and have been shifted vertically to avoid overlapping. The legend and the inset show the thresholds and the fitted exponents.
Figure 5: The duration distribution fitted using a similar function $\propto(1+T/T_{min})^{-\tau_{T}}e^{-T/T_{max}}$ as for the size distribution. Again, the different graphs represent different thresholds and the legend and the inset show the thresholds and the fitted exponents. The graphs have been moved vertically to avoid overlapping.
Figure 6: The waiting time distribution. The continuous lines are power-law fits with functions $\propto(\Delta t)^{-\tau_{\Delta t}}$ and the dashed lines are exponential fits. The lowermost graph shows that without a threshold the waiting times follow an exponential distribution, and moving upwards the graphs start exhibiting a power-law region which grows with the threshold. Note that the power-law region starts forming already in the second graph with threshold $V_{0}=1$, although the fitted exponent is very inaccurate due to the limited number of datapoints. The power-law distribution describes the waiting times between the sub-events created with the threshold. The original events get further away from each other as the threshold increases, and consequently the exponentially distributed region moves to longer times. As before, the graphs have been moved vertically for visual clarity. The legend and the inset show the thresholds and the fitted exponents for the power-law region.
Figure 3 shows the number of events at each threshold. At small thresholds the number grows, until it starts decreasing exponentially past its peak at threshold 18. The number stays above its original value until the threshold is increased to 126.
The change in the number of events has an effect on the size and duration
distributions, shown in Figures 4 and 5 respectively. As the size of every
event decreases and small events are both destroyed and created, the net
effect is an increase in small and short events and a decrease in the larger
and longer ones. Thus the magnitudes of the size and duration distributions’
power-law exponents increase with the threshold. At thresholds close to the
average velocity of 105 during avalanches, the exponents of the size and
duration distributions change by roughly 10 percent compared to the zero-threshold graphs. Consequently, experiments should yield slightly larger exponents than those found in theoretical studies, which do not necessarily require a threshold.
As in [7], the waiting time distribution changes from an exponential
one into a power law with an exponential bump at the end. As shown in [30] and
[31], avalanches start and end, on average, with slower movement. Thus a
threshold typically cuts out the beginning and the end of the events,
increasing the waiting times between the original avalanches. Because of this,
the exponential waiting time region starts at later times as the threshold
increases. The new events created by splitting the original avalanches on the
other hand must have waiting times shorter than the avalanche durations, so
they fill the short time-scales in the waiting time distribution.
Interestingly, the power-law region starts forming already at threshold
$V_{0}=1$, which is the smallest non-zero velocity that the interface can
have. Therefore, any choice of a threshold in an experiment should create an
increase in the waiting time distribution for at least the smallest values.
As the threshold increases, the amount of datapoints in the power-law regions
in the waiting time distributions grows, and the exponents in the duration and
waiting time distributions both approach $1.6$. This means that the interface
velocity makes symmetric visits above and below a large threshold before the
underlying event ends. In other words, at large velocities the velocity starts
to resemble a symmetric random walk, as discussed in [7].
### III.2 Temporal clustering of events
Figure 7: The frequency of aftershocks and clustered avalanches as a function of time. Panel (a) shows the rate of aftershocks after a mainshock, and panel (b) shows the rate of avalanches after any avalanche. The continuous lines are fits using a function $\propto t^{-p}e^{-t/t_{P}}$, where $t$ is time and $p$ and $t_{P}$ are constants. As previously, the different graphs show different thresholds, and they have been shifted vertically for clarity. The legend and the inset show the thresholds and the fitted exponents. However, the fits for small thresholds are very inaccurate due to the small number of fitted data points.
The division of avalanches into series of smaller ones changes the temporal
clustering of events. Barés et al. studied the clustering of avalanches in
elastic interfaces with the concept of mainshocks and aftershocks used in
seismology [24, 22, 23]. Any event could take the role of a mainshock, and
after that all the subsequent events were labelled as aftershocks, until an
event at least as large as the mainshock was encountered. Seismologists often
require the aftershocks to be within some distance of the mainshock [32, 33],
but that is not feasible in the interface problem, when only the velocity of
the whole interface is looked at, and not local movement.
The productivity law in seismology means that the number of aftershocks that
follow a mainshock is proportional to a power of the mainshock’s energy. The
Omori-Utsu law states that the frequency of the aftershocks decreases as a power of the time elapsed after the triggering event [26, 27]. Barés et al. found
that similar laws also applied to the mainshocks and aftershocks in interface
dynamics. The number of aftershocks was proportional to a power of the
mainshocks’ size, and the aftershock frequency decreased as a power of time.
Figure 7(a) shows the aftershock frequency in our system, with the definition
that all shocks after a mainshock are aftershocks, until a shock at least as
large as the mainshock is encountered. Interestingly, we find a decreasing
aftershock frequency only when using a threshold. As the waiting times in the
underlying pure signal showed no correlations, the frequency of the events
without a threshold only increases with time, possibly as more exponentially
distributed waiting times have ended and new avalanches initiated.
Just as for the waiting times, even the minimal positive threshold $V_{0}=1$ already causes a dramatic increase in the aftershock frequency for small
times. With higher thresholds, the increased activity extends to longer times,
and a power-law region starts forming.
Contrary to the findings of Barés et al., we see a plateau and even a slight
increase in the aftershock frequencies for longer times. As the increased
activity results from a threshold dividing underlying avalanches, the rate of
events initially decreases as more of the avalanches in the pure signal have
ended. Then as the waiting times in the underlying signal end and new
avalanches begin, the aftershock frequency for a thresholded signal plateaus
and possibly grows if there are enough new avalanches to divide.
Since the increased frequency of events seems to arise from the altered
waiting time distribution, we should get similar results even without dividing
the avalanches into mainshocks and aftershocks. Figure 7(b) shows the average
rate of events after each event, without requiring the following events to be
smaller than the initial one.
The event frequency looks very similar to that in Figure 7(a), where the
aftershocks were limited in size. However, the increased amount of data delays
the cutoffs in the graphs, making the power-law fits more reasonable and also
altering the exponents. Now the fitted exponents decrease monotonically with
the threshold, approaching $0.4$ for the largest thresholds.
Similarly to the durations of the avalanches in the underlying pure signal,
the durations of the avalanche clusters in the thresholded data probably also
follow a decreasing distribution. As the number of active avalanche clusters
decreases, the average frequency of avalanches decreases, causing the
decreasing rate of events in Figures 7(a) and 7(b).
### III.3 Number of aftershocks
Figure 8: The number of aftershocks per mainshock as a function of the
mainshock’s size and the threshold used. In (a) all events after a mainshock
are counted as aftershocks, until there is an event at least the size of the
mainshock. The continuous lines are fits using Eq. (3). Figures (b),
(c), and (d) require the aftershock sequences to last for at least 100, 1000
and 10 000 timesteps respectively, and no further shocks are recorded. A
power-law region becomes more apparent for larger time windows and thresholds.
For large time windows and small thresholds there are no data for the small
mainshocks, as none of their aftershock sequences are long enough for the time
window. The continuous lines are fits with a function
$\propto(S^{\alpha}-1)/(S^{\alpha}+S_{P}^{\alpha})$, where $S$ is the
mainshock’s size and $S_{P}$ and $\alpha$ are constants. Again, the graphs
showing data for different thresholds have been moved vertically to avoid
overlapping, and the legends show the thresholds that were used and the fitted
exponents.
The relationship between the size of a mainshock and the number of aftershocks
turns out to be slightly complicated in our system. Doing a similar analysis
as in the previous studies [22, 23, 24] with the definition that all the
events after a mainshock before another at least as large event are
aftershocks, we find that the number of aftershocks grows as a power of the
mainshock’s size, as is shown in Figure 8(a).
This apparent productivity law does not describe how many shocks a mainshock
triggers, but rather how long the defined aftershock sequence lasts.
As larger avalanches are more scarce, there are of course more shocks between
two large mainshocks than two small ones on average. Similarly, there should
be more events between longer avalanches and more events between rare events
in general.
It is worth mentioning that the aftershocks in Figure 8(a) are not aftershocks
in the same sense as in seismology, as their frequency does not necessarily
follow the Omori-Utsu law for the duration of the whole sequence. Looking at
Figure 7, we see that the Omori-like aftershock sequences in our system last
roughly 10–10 000 timesteps, depending on the threshold.
As was shown in [22, 23, 24], with this definition of aftershocks the
productivity law does not change even after randomly permuting the events. The
authors found that the behaviour indeed follows from the ratio between the
number of events smaller than a mainshock and the number of events at least as
large as the mainshock.
In Figure 8(a) the number of aftershocks for an avalanche of size $S$ is fitted
using the integral of a size distribution of the form
$(1+S/S_{min})^{-\tau_{S}}$ to get the number of events smaller than $S$
divided by the number of events with size $S$ or larger. The resulting
aftershock number is
$N_{AS}=\dfrac{(1+(S-1)/S_{min})^{1-\tau_{S}}-(1+S_{0}/S_{min})^{1-\tau_{S}}}{(1+S_{1}/S_{min})^{1-\tau_{S}}-(1+S/S_{min})^{1-\tau_{S}}},$
(3)
where $S_{0}=1$ is the lower boundary and $S_{1}$ the upper boundary of the
integral. The values of $\tau_{S}$ in Figure 8(a) are indeed very close to the
values in the size distribution in Figure 4, despite neglecting the exponential
cutoff of the size distribution.
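As a concrete illustration, Eq. (3) can be evaluated directly; the sketch below treats $S_{min}$, $\tau_{S}$, and the integration boundaries as known fit constants (the default $S_{1}$ here is an arbitrary placeholder).

```python
# Sketch evaluating the aftershock-number prediction of Eq. (3) for a
# size distribution of the form (1 + S/S_min)**(-tau_S).
def n_aftershocks(S, S_min, tau_S, S0=1.0, S1=1e6):
    a = 1.0 - tau_S
    num = (1 + (S - 1) / S_min)**a - (1 + S0 / S_min)**a
    den = (1 + S1 / S_min)**a - (1 + S / S_min)**a
    return num / den
```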
A different and probably more interesting way to look at the number of
aftershocks is to use a time window. In Figures 8(b), 8(c), and 8(d) the aftershocks
are still smaller than the initiating mainshock, but the sequences have to
last for at least some specific duration, and aftershocks are counted only for
that time. If the window is for example 5000 timesteps, sequences where there
is an event larger than the mainshock after 4000 steps are ignored, and only
the first 5000 steps of a sequence that lasts for 6000 steps are looked at.
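A hedged sketch of this time-window variant is given below; `times` and `sizes` are assumed to be chronological arrays of event start times and sizes, and the helper name is illustrative.

```python
# Count aftershocks inside a fixed window after each mainshock. Sequences
# that end (an at-least-as-large event occurs) before the window elapses
# are ignored; longer sequences contribute only their first `window` steps.
def windowed_aftershocks(times, sizes, window):
    result = []
    for i, (t0, mainshock) in enumerate(zip(times, sizes)):
        n, valid = 0, True
        for t, s in zip(times[i + 1:], sizes[i + 1:]):
            if t - t0 > window:
                break
            if s >= mainshock:
                valid = False  # sequence ended inside the window
                break
            n += 1
        if valid:
            result.append((mainshock, n))
    return result
```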
When the aftershocks are counted only for a set time, the behaviour divides
into three categories. For short time windows, the aftershock number consists
mostly of the increased activity in the shock frequency distribution shown in
Figure 7. Consequently, in Figure 8(b), where the aftershocks are counted for 100
time steps, the aftershock number increases more for graphs with a higher
threshold. The graphs are fitted with a monotonically increasing function
$\propto(S^{\alpha}-1)/(S^{\alpha}+S_{P}^{\alpha})$, where $S_{P}$ is the
value of the shock size $S$ at which the aftershock number starts to plateau.
Without a threshold, the data do not follow a similar function, but instead
the aftershock number decreases after some value of the mainshock’s size.
For slightly larger time windows such as in Figure 8(c), where the window is 1000
time steps, the aftershock number includes more of the average activity in the
simulations, and hence the behaviour becomes more similar for all thresholds.
All graphs can be fitted with the function
$\propto(S^{\alpha}-1)/(S^{\alpha}+S_{P}^{\alpha})$, with the exponent
$\alpha$ around one.
In Figure 8(d) the time window is 10 000 steps. With a small threshold there are
no small mainshocks with long enough aftershock sequences, so the graphs start
at large mainshocks. With large thresholds the aftershock number increases for
almost the whole range of shock sizes, and the exponents are again close to
one.
Combining the findings in Figures 8(b), 8(c), and 8(d), we can deduce that a large
avalanche in interface depinning is most likely followed by a large number of
smaller avalanches on a variety of time scales. Increasing the detection
threshold extends the effect to a wider range of avalanche sizes.
It is important to note that the results do not say that a small avalanche is
followed by a small number of events. Large avalanches can still be preceded
by small ones, so that the events that follow the large avalanches also follow
the preceding small avalanches. But if we ignore small events that build up to
larger ones, then the larger an avalanche is, the more events it is followed
by, as long as a detection threshold is used.
## IV Discussion
We simulated the depinning of a long range elastic interface using a cellular
automaton model. Avalanches in the movement were defined using various
thresholds to study their effect. As the driving force balanced around the
depinning point, the interface moved intermittently and avalanches could also
be defined without a threshold.
A threshold divides avalanches into separate events whenever the velocity of
the interface visits below the threshold [7]. Consequently, we found that
higher thresholds increased small and short avalanches and decreased large and
long ones. Thresholds close to the average velocity changed the exponents of
the size and duration distributions by about 10 percent compared to the pure
signal with no threshold.
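For concreteness, the thresholding procedure itself can be sketched as follows; the event size is taken here as the summed velocity during the event, though the summed excess above the threshold is an equally common convention.

```python
# Minimal sketch of threshold-based avalanche detection: contiguous runs
# with v(t) >= V0 are single events; durations and sizes follow directly.
import numpy as np

def detect_avalanches(v, V0):
    active = v >= V0
    edges = np.diff(active.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if active[0]:
        starts = np.r_[0, starts]
    if active[-1]:
        ends = np.r_[ends, len(v)]
    durations = ends - starts
    sizes = np.array([v[s:e].sum() for s, e in zip(starts, ends)])
    return sizes, durations
```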
The seismic-like clustering of avalanches discussed in previous interface
studies [22, 23, 24] was investigated to see if a detection threshold would
affect it. We found that the power-law distributed frequency of aftershocks
depends on the use of a threshold. With no threshold, the shock frequency
initially increases with time, as more waiting times between events end. With
a threshold however, the aftershock frequency starts at a higher value and
decreases as a power of time until meeting some background event rate. A
higher threshold decreases the background activity and makes the power-law
region longer.
The results applied also if the aftershocks could be larger than the mainshock
they followed, so in general we found that a threshold causes avalanches in
interface depinning to cluster in time with a power-law frequency. This
clustering is probably a natural result of the power-law distributed waiting
times caused by a threshold shown previously in [7].
We studied also the dependence of the number of aftershocks on the size of a
mainshock. The aftershocks were looked at for different time scales. The
number of aftershocks was proportional to a power of a mainshock’s size as
long as the timescale was long enough or a threshold was used. For small
timescales and no threshold, the aftershock number did not grow monotonically
with the mainshock’s size, but rather decreased and plateaued after some
value. A larger threshold and a larger time window led to longer and more
apparent power laws, with exponents close to one.
The fairly simple detection threshold as well as the method for classifying
avalanches could be modified to study local events or a local threshold.
First, lonely events that do not have enough activity around them inside some
space-time window could be filtered out. We have already done initial tests
using this type of a local threshold, and the results seem to mimic what was
found here with the global threshold. The next step is to also classify the
avalanches using a space-time window to separate simultaneous but spatially
distant events.
## References
* Bak _et al._ [1987] P. Bak, C. Tang, and K. Wiesenfeld, Self-organized criticality: An explanation of the 1/f noise, Physical Review Letters 59, 381 (1987).
* Durin and Zapperi [2006] G. Durin and S. Zapperi, The Barkhausen effect, in _The Science of Hysteresis_, edited by G. Bertotti and I. D. Mayergoyz (Academic Press, Oxford, 2006) pp. 181–267.
* Aschwanden _et al._ [2014] M. J. Aschwanden, N. B. Crosby, M. Dimitropoulou, M. K. Georgoulis, S. Hergarten, J. McAteer, A. V. Milovanov, S. Mineshige, L. Morales, N. Nishizuka, and et al., 25 years of self-organized criticality: Solar and astrophysics, Space Science Reviews 198, 47–166 (2014).
* Laurson _et al._ [2009] L. Laurson, X. Illa, and M. J. Alava, The effect of thresholding on temporal avalanche statistics, Journal of Statistical Mechanics: Theory and Experiment , P01019 (2009).
* Font-Clos _et al._ [2015] F. Font-Clos, G. Pruessner, N. R. Moloney, and A. Deluca, The perils of thresholding, New Journal of Physics 17, 043066 (2015).
* Villegas _et al._ [2019] P. Villegas, S. di Santo, R. Burioni, and M. A. Muñoz, Time-series thresholding and the definition of avalanche size, Physical Review E 100, 012133 (2019).
* Janićević _et al._ [2016] S. Janićević, L. Laurson, K. Måløy, S. Santucci, and M. Alava, Interevent correlations from avalanches hiding below the detection threshold, Physical Review Letters 117, 230601 (2016).
* Lemerle _et al._ [1998] S. Lemerle, J. Ferré, C. Chappert, V. Mathet, T. Giamarchi, and P. Le Doussal, Domain wall creep in an ising ultrathin magnetic film, Physical Review Letters 80, 849 (1998).
* Zapperi _et al._ [1998] S. Zapperi, P. Cizeau, G. Durin, and H. E. Stanley, Dynamics of a ferromagnetic domain wall: Avalanches, depinning transition, and the barkhausen effect, Physical Review B 58, 6353 (1998).
* Rost _et al._ [2007] M. Rost, L. Laurson, M. Dubé, and M. Alava, Fluctuations in fluid invasion into disordered media, Physical Review Letters 98, 054502 (2007).
* Planet _et al._ [2009] R. Planet, S. Santucci, and J. Ortín, Avalanches and non-gaussian fluctuations of the global velocity of imbibition fronts, Physical Review Letters 102, 094502 (2009).
* Rice [1985] J. Rice, First-order variation in elastic fields due to variation in location of a planar crack front, Journal of Applied Mechanics 52, 571 (1985).
* Joanny and de Gennes [1984] J. F. Joanny and P. G. de Gennes, A model for contact angle hysteresis, The Journal of Chemical Physics 81, 552 (1984).
* Moretti _et al._ [2004] P. Moretti, M. Miguel, M. Zaiser, and S. Zapperi, Depinning transition of dislocation assemblies: Pileups and low-angle grain boundaries, Physical Review B 69, 214103 (2004).
* Gao and Rice [1989] H. Gao and J. R. Rice, A First-Order Perturbation Analysis of Crack Trapping by Arrays of Obstacles, Journal of Applied Mechanics 56, 828 (1989).
* Ponson and Pindra [2017] L. Ponson and N. Pindra, Crack propagation through disordered materials as a depinning transition: A critical test of the theory, Physical Review E 95, 053004 (2017).
* Delaplace _et al._ [1999] A. Delaplace, J. Schmittbuhl, and K. J. Måløy, High resolution description of a crack front in a heterogeneous plexiglas block, Physical Review E 60, 1337 (1999).
* Tallakstad _et al._ [2011] K. T. Tallakstad, R. Toussaint, S. Santucci, J. Schmittbuhl, and K. J. Måløy, Local dynamics of a randomly pinned crack front during creep and forced propagation: An experimental study, Physical Review E 83, 046108 (2011).
* Ferrero _et al._ [2017] E. E. Ferrero, L. Foini, T. Giamarchi, A. B. Kolton, and A. Rosso, Spatiotemporal patterns in ultraslow domain wall creep dynamics, Physical Review Letters 118, 147208 (2017).
* Ferrero _et al._ [2021] E. E. Ferrero, L. Foini, T. Giamarchi, A. B. Kolton, and A. Rosso, Creep motion of elastic interfaces driven in a disordered landscape, Annual Review of Condensed Matter Physics 12, 111–134 (2021).
* Mäkinen [2020] T. Mäkinen, _Collective phenomena in deformation_ , Doctoral thesis, Aalto University (2020).
* Barés and Bonamy [2018] J. Barés and D. Bonamy, Crack growth in heterogeneous brittle solids: intermittency, crackling and induced seismicity, Philosophical Transactions of the Royal Society A 377, 20170386 (2018).
* Barés _et al._ [2018] J. Barés, A. Dubois, L. Hattali, D. Dalmas, and D. Bonamy, Aftershock sequences and seismic-like organization of acoustic events produced by a single propagating crack, Nature Communications 9, 1253 (2018).
* Barés _et al._ [2019] J. Barés, D. Bonamy, and A. Rosso, Seismic-like organization of avalanches in a driven long-range elastic string as a paradigm of brittle cracks, Physical Review E 100, 023001 (2019).
* Mäkinen _et al._ [2015] T. Mäkinen, A. Miksic, M. Ovaska, and M. J. Alava, Avalanches in wood compression, Physical Review Letters 115, 055501 (2015).
* Utsu [1971] T. Utsu, Aftershocks and earthquake statistics (2): Further investigation of aftershocks and other earthquake sequences based on a new classification of earthquake sequences, Journal of the Faculty of Science, Hokkaido University, Series 7, Geophysics 3, 197 (1971).
* Utsu _et al._ [1995] T. Utsu, Y. Ogata, and R. S. Matsu’ura, The centenary of the Omori formula for a decay law of aftershock activity, Journal of Physics of the Earth 43, 1 (1995).
* Bonamy _et al._ [2008] D. Bonamy, S. Santucci, and L. Ponson, Crackling dynamics in material failure as the signature of a self-organized dynamic phase transition, Physical Review Letters 101, 045501 (2008).
* Chauve _et al._ [2000] P. Chauve, T. Giamarchi, and P. Le Doussal, Creep and depinning in disordered media, Physical Review B 62, 6241 (2000).
* Laurson _et al._ [2013] L. Laurson, X. Illa, S. Santucci, K. Tallakstad, K. Måløy, and M. Alava, Evolution of the average avalanche shape with the universality class, Nature Communications 4, 2927 (2013).
* Dobrinevski _et al._ [2014] A. Dobrinevski, P. Le Doussal, and K. J. Wiese, Avalanche shape and exponents beyond mean-field theory, EPL (Europhysics Letters) 108, 66002 (2014).
* Shebalin _et al._ [2020] P. N. Shebalin, C. Narteau, and S. V. Baranov, Earthquake productivity law, Geophysical Journal International 222, 1264 (2020).
* Helmstetter [2003] A. Helmstetter, Is earthquake triggering driven by small earthquakes?, Physical Review Letters 91, 058501 (2003).
# Online Information-Aware Motion Planning with Inertial Parameter Learning
for Robotic Free-Flyers
Monica Ekal1∗, Keenan Albee2∗, Brian Coltin3, Rodrigo Ventura1, Richard Linares2, and David W. Miller2
∗Both authors contributed equally to this work.
1Institute for Systems and Robotics, Instituto Superior Técnico, {mekal, <EMAIL_ADDRESS>
2Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, {albee, linaresr, <EMAIL_ADDRESS>
3SGT Inc., NASA Ames Research Center, <EMAIL_ADDRESS>
###### Abstract
Space free-flyers like the Astrobee robots currently operating aboard the
International Space Station must operate with inherent system uncertainties.
Parametric uncertainties like mass and moment of inertia are especially
important to quantify in these safety-critical space systems and can change in
scenarios such as on-orbit cargo movement, where unknown grappled payloads
significantly change the system dynamics. Cautiously learning these
uncertainties en route can potentially avoid time- and fuel-consuming pure
system identification maneuvers. Recognizing this, this work proposes RATTLE,
an online information-aware motion planning algorithm that explicitly weights
parametric model learning, coupled with a real-time replanning capability that
can take advantage of improved system models. The method consists of a two-
tiered (global and local) planner, a low-level model predictive controller,
and an online parameter estimator that produces estimates of the robot’s
inertial properties for more informed control and replanning on-the-fly; all
levels of the planning and control feature online update-able models.
Simulation results of RATTLE for the Astrobee free-flyer grappling an
uncertain payload are presented alongside results of a hardware demonstration
showcasing the ability to explicitly encourage parametric model learning while
achieving otherwise useful motion.
## I Introduction
Robotic space systems are gearing up to perform a variety of tasks
autonomously, including in-space assembly and payload transportation [1] [2]
[3] [4] [5]. Precise execution of these tasks means that acceptable
characterization of the dynamical system involved is often necessary. However,
there is frequently underlying uncertainty in these systems; in addition to
any existing model uncertainty, fuel depletion and grasping of payloads can
further modify the inertial characteristics of the system during operation.
Moreover, operation in cluttered, dynamic environments such as the interior of
an orbiting space station calls for re-planning of trajectories in real-time
to account for system and environmental changes, e.g., other free-floating
payloads. Fortunately, some forms of uncertainty like inertial properties are
parametric and can be resolved using knowledge of the system model. One key
example of this is payload manipulation by robotic free-flyers, robots that
propel themselves in microgravity.
Figure 1: The Astrobee robotic free-flyer on an air-bearing grasping a new
payload using its underactuated gripper.
NASA’s Astrobee robot (Fig. 1), recently deployed to the International Space
Station (ISS), is a prime example of a free-flying robotic system [6] [7] [8]
[9]. With proposed uses including automatic cargo repositioning onboard future
microgravity outposts, Astrobee includes a robotic manipulator for grappling
payloads and perching [10]. Astrobee’s proposed functionality is a key example
of the need to account for system model changes and the underlying or
inherited system uncertainty. Operating around moving astronauts and free-
floating cargo, these systems must account for parametric model uncertainty or
face poor trajectory tracking and inefficient execution, as recently shown in
[11].
This places the robotic free-flying motion planning problem in the context of
motion planning under parametric uncertainty. Existing planning under
parametric uncertainty approaches are wide-ranging, but can be broadly placed
into two categories. Some approaches attempt full system identification (sys
ID) before even attempting motion planning [12], [13, 14] followed by planning
with the estimated nominal model; otherwise, robust or chance-constrained
approaches that operate under an assumed uncertainty bound [15] [16] [17] are
applied. Many approaches from robust and adaptive control can be applied to
the uncertain tracking problem, but do not address the higher-level motion
planning. Direct adaptive control [18] and sliding mode control can be
employed to reject disturbances from parametric model uncertainty [19], [20],
[21]. Indirect adaptive control approaches on the other hand, pair an
estimator and a controller, relying on the estimator to provide a better model
for the control method under the condition of a persistently exciting control
signal [22], [23]. Providing robustness against assumed uncertainty bounds
neglects the higher-level motion planning problem, and does not consider
online reduction of uncertainty. For learnable robotic parametric unknowns,
replanning capability is desirable due to an evolving understanding of the
unknown parameters; further, excitation might be desired in order to aid in
this model-learning process.
Relevant approaches explicitly consider the value of parametric information-
awareness in the motion planning. They include a POMDP formulation with
covariance minimization in the cost function [24], which was only demonstrated
for an unconstrained double integrator system with uncertain mass, and recent
work on covariance steering [25] [26], which attempts to answer exactly when
system excitation is most useful for model uncertainty resolution. However,
these approaches have not yet been implemented on hardware, their scalability
has not yet been demonstrated, and they do not address some of the
practicalities of motion planning such as dealing with global long-horizon
planning.
This paper proposes a method called RATTLE (Real-time information-Aware
Targeted Trajectory planning for Learning via Estimation) that combines
parameter information-aware motion planning with real-time model updates.
trajectory optimization. Building on the authors’ previous work [27], where the idea of making
trajectory progress while accounting for information gain was explored, RATTLE
proposes a user-adjustable weighting toward gaining information about
parametric uncertainties to aid an online parameter estimator, with the
ultimate goal of incorporating model updates. The approach is real-time
receding-horizon, with the ability to also incorporate replanning. Further,
the user-specified information weighting can be customized for the environment
and dynamics at hand. Compared to traditional up-front sys ID, this approach
only applies as much excitation as desired simultaneously with goal-achieving
motion. The intent is to avoid interrupting the current maneuver and spending
time on full sys ID if a sufficient amount of uncertainty reduction can
instead be performed online, during otherwise useful motion.
RATTLE is especially relevant for free-flyer load transportation scenarios,
where an uncharacterized grappled payload might change the dynamics
dramatically (and parametrically). Space systems requiring careful execution
in cluttered space station interiors will benefit from a learning, replanning
approach that is not overly conservative. To the authors’ knowledge, this is
the first time that a parametric information-aware planning algorithm with
uncertainty reduction by parameter learning has been used for robotic
free-flyers. Though RATTLE has been employed specifically for this robotic
free-flyer load transportation scenario, the algorithm’s applicability extends
to many systems with parametric model unknowns.
The main contributions of this paper are:
1. RATTLE, a novel motion planning method for creating selectively information-aware plans with online parameter estimation to reduce parametric uncertainty;
2. The incorporation of global motion planning into such an approach;
3. Validation of the approach via a high-fidelity simulation of the Astrobee free-flyer transporting a payload under ground testing dynamics, and proof-of-concept results on the Astrobee hardware, demonstrating improved parametric model learning under information-aware planning.
Section I has introduced planning under parametric uncertainty and
applications to robotic free-flyers in particular; Section II formulates the
parametric information-aware motion planning problem and introduces the free-
flying dynamics; Section III introduces RATTLE, a novel parametric
information-aware motion planning algorithm; Section IV demonstrates RATTLE’s
implementation, shows simulation and hardware results, and explains some of
the method’s key characteristics; Section V discusses the implications of the
approach and what improvements are now being pursued.
## II Problem Formulation
A robotic system with state $\mathbf{x}\in\mathbb{R}^{n}$, input
$\mathbf{u}\in\mathbb{R}^{m}$, and uncertain parameters
$\boldsymbol{\theta}\in\mathbb{R}^{j}$ is initially positioned at state
$\mathbf{x}_{0}$. A region of the state space that is admissible is specified
as $\mathcal{X}_{free}$, and a constraint on inputs may also be provided as
$\mathcal{U}$. A goal region $\mathcal{X}_{g}$ is also specified. Let the
dynamics and measurement models of the system be represented as
$\displaystyle\dot{\mathbf{x}}=f(\mathbf{x},\mathbf{u},\boldsymbol{\theta})+\mathbf{w}_{x}$
(1)
$\displaystyle\tilde{\mathbf{y}}=h(\mathbf{x},\mathbf{u},\boldsymbol{\theta})+\mathbf{w}_{y},$
(2)
where the vector of the measured quantities is
$\tilde{\mathbf{y}}\in\mathbb{R}^{l}$,
$\mathbf{w_{x}}\sim\mathcal{N}\left(0,\bm{\Sigma}_{Q}\right)$, and
$\mathbf{w_{y}}\sim\mathcal{N}\left(0,\bm{\Sigma}_{R}\right)$ where
$\mathcal{N}$ represents a Gaussian. Only initial estimates of the parameters
are known,
${\boldsymbol{\theta}_{0}}\sim\mathcal{N}(\boldsymbol{\hat{\theta}}_{0},\mathbf{\Sigma}_{\bm{\theta},0})$.
The aim is to plan a trajectory minimizing the following cost function while
respecting the input and state constraints, $\mathcal{U}$ and
$\mathcal{X}_{free}$,
$\displaystyle J(\mathbf{x},\mathbf{u},t)=g(\mathbf{x}(t_{f}),\mathbf{u}(t_{f}))+\int_{t_{0}}^{t_{f}}l(\mathbf{x}(t),\mathbf{u}(t))\,dt.$ (3)
Here, $g(\mathbf{x}(t_{f}),\mathbf{u}(t_{f}))$ is a terminal cost and
$l(\mathbf{x}(t),\mathbf{u}(t))$ is an accumulated cost, computed over the
current nominal system model. Since knowledge of $\boldsymbol{\theta}$ can be
improved through parameter estimation, it is possible to obtain an enhanced
dynamics model that is a closer representation of reality. Details on the
problem setup are also provided in [27]. Even in the deterministic case, the
motion planning problem is known to be at least PSPACE-hard and often requires
approximate solutions [28].
### II-A Rigid body dynamics
The dynamics model of interest for robotic free-flyers is the rigid body
dynamics with uncertain inertial parameters. The linear and angular dynamics
for a 6 DOF rigid body expressed in a body-fixed frame not coincident with the
center of mass are
$\displaystyle\begin{split}\begin{bmatrix}\mathbf{F}\\ \boldsymbol{\tau}_{{CM}_{0}}\end{bmatrix}&=\begin{bmatrix}m\mathbf{I}_{3}&-m[\mathbf{c}]_{\times}\\ m[\mathbf{c}]_{\times}&\mathbf{I}_{CM}-m[\mathbf{c}]_{\times}[\mathbf{c}]_{\times}\end{bmatrix}\begin{bmatrix}\dot{\mathbf{v}}\\ \dot{\boldsymbol{\omega}}\end{bmatrix}+\\ &\begin{bmatrix}m[\boldsymbol{\omega}]_{\times}[\boldsymbol{\omega}]_{\times}\mathbf{c}\\ [\boldsymbol{\omega}]_{\times}\left(\mathbf{I}_{CM}-m[\mathbf{c}]_{\times}[\mathbf{c}]_{\times}\right)\boldsymbol{\omega}\end{bmatrix}\end{split}$
(4)
where ${\mathbf{v}}$, $\bm{\omega}\in\mathbb{R}^{3}$ denote the linear
velocity and angular velocity of the original center of mass (CM0),
$\mathbf{I}_{CM}$ is the inertia tensor about the center of mass (CM), $m$ is
the system mass, and $\mathbf{c}\in\mathbb{R}^{3}$ is the CM offset from CM0.
$\mathbf{F},\bm{\tau}\in\mathbb{R}^{3}$ are the forces and torques applied
through the $\mathcal{F}_{B}$ body frame, where $\mathcal{F}$ indicates a
frame as in Fig. 2. $[\cdot]_{\times}$ denotes the cross-product matrix.
Note that these dynamics are significantly more complex than the Newton-Euler
equations of forces and torques in the center of mass fixed frame. For a 3 DOF
case commonly used in granite table free-flyer testing as in Fig. 1, the
equations can be written as,
$\displaystyle
F_{x}=m\left[\dot{v}_{x}-\dot{\omega}_{z}c_{y}-\omega_{z}^{2}c_{x}\right]$ (5)
$\displaystyle
F_{y}=m\left[\dot{v}_{y}+\dot{\omega}_{z}c_{x}-\omega_{z}^{2}c_{y}\right]$ (6)
$\displaystyle\tau_{z_{0}}=mc_{x}\dot{v}_{y}-mc_{y}\dot{v}_{x}+\left[{I_{zz,CM}}+m\left(c_{y}^{2}+c_{x}^{2}\right)\right]\dot{\omega}_{z}$
(7)
which can be conveniently grouped into matrix form,
$\displaystyle\mathbf{F}=\begin{bmatrix}\mathbf{M}\end{bmatrix}\mathbf{\dot{x}}+\begin{bmatrix}\mathbf{C}\end{bmatrix}\mathbf{x}.$
(8)
These dynamics are also described in [29, 30] and are shown for the 3 DOF case
in Fig. 2. The parameter vector of interest is
$\boldsymbol{\theta}=\left\\{m,c_{x},c_{y},I_{zz}\right\\}$.
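A compact illustration of these 3 DOF equations is shown below; it solves Eqs. (5)-(7) for the accelerations given the applied wrench, and is an assumption-laden sketch rather than the flight implementation.

```python
# Sketch of the 3 DOF model of Eqs. (5)-(7), grouped as in Eq. (8) and
# solved for the accelerations. State x = [vx, vy, omega_z], input
# u = [Fx, Fy, tau_z0], parameters theta = (m, c_x, c_y, I_zz).
import numpy as np

def dynamics_3dof(x, u, theta):
    m, cx, cy, Izz = theta
    vx, vy, wz = x
    M = np.array([
        [m,       0.0,    -m * cy],
        [0.0,     m,       m * cx],
        [-m * cy, m * cx,  Izz + m * (cx**2 + cy**2)],
    ])
    # velocity-dependent (centripetal) terms of Eqs. (5)-(6)
    c = np.array([-m * wz**2 * cx, -m * wz**2 * cy, 0.0])
    return np.linalg.solve(M, np.asarray(u) - c)
```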
Figure 2: The 3 DOF rigid body dynamics model for free-flyers. An original
system CM0 located at $\mathcal{F}_{B}$ is offset by $\mathbf{c}$, with new
system mass $m$ and $\hat{z}$ moment of inertia $I_{zz}$. Note that
$\mathbf{T}_{WB}$ indicates body pose with respect to world frame.
## III Approach: Information-Aware Planning Algorithm
Figure 3: A sketch of the RATTLE planning framework, demonstrating high-level
(global) long-horizon planning via kino-RRT, and mid-level shorter-horizon
(local) planning incorporating information-aware planning via an adjustable
weighting term, $\gamma$. An online update-able controller, MPC, also benefits
from a more accurate system model. Note that horizon lengths are not
necessarily to scale.
### III-A RATTLE Overview
RATTLE is an information-aware motion planning method which aims to directly
add informative motion when desired en route, allowing one to improve model
parameter estimates online. Compared to full system identification performed
prior to planning, this approach offers time savings (and potential fuel
savings) by allowing useful model information to be learned en route via an
explicit weighting on information-awareness in the motion planning. Compared
to non-informative planning approaches, RATTLE offers a framework for trading
off standard state and fuel cost minimization with the ability to perform
model-improving actions; this allows the robot to take control of its level of
parametric model knowledge directly via motion planning, rather than ignoring
model improvement altogether.
RATTLE consists of four key ingredients:
* A high-level (global) planner
* A mid-level (local) information-aware planner
* A low-level model predictive controller
* An online parameter estimator
As shown in Fig. 3, a global planner that excels at handling e.g., obstacle
constraints and long time horizons is used to produce a nominal global plan,
using a nominal set of dynamics (Section III-B). Portions of this global plan
are used as waypoints in guiding the local planner, which incorporates an
information awareness metric and operates over a shorter time horizon (Section
III-C). Online, waypoints and information weighting may be updated at each
replan of the local plan. The division into a global and local planner is in
recognition of the fact that the general informative long-horizon trajectory
planning problem is not computationally tractable; the common approach of
using e.g., a sampling-based planner to perform global planning is proposed.
At the lowest level, a model predictive controller runs at the fastest rate
and continually incorporates model updates (Section III-D). A recursive
parameter estimator runs continually, passing off the latest available model
information for each planning/control element to use as desired (Section
III-E). The RATTLE algorithm is outlined in Fig. 4. The subsections that
follow describe each of these components and the estimator in further detail.
### III-B High-Level (Global) Planner: Kinodynamic RRT
Sampling-based planners (SBPs) operate based on growing a tree or graph of
sample points $\mathbf{x}_{i}$ within a sample space $\mathcal{X}_{free}$ and
have been applied to a large number of robotic motion planning problems [31].
A key advantage of SBPs is that difficult constraints, like collision-
checking, can be explicitly checked during exploration of the state space.
This framework uses kino-RRT, a variant of the popular rapidly exploring
random tree (RRT) algorithm. kino-RRT includes the robot dynamics and is a
good candidate for a long-horizon planner when numerical optimization-based
planning becomes impractical [32]. The reader is referred to Karaman and
Frazzoli [33] for implementation specifics. Taking advantage of a direct
collision-checking module, one may, for instance, use ellipsoidal constraints
to perform simple collision checking; such constraints are common in space
robotics motion planning scenarios [34]. The result of this initial long-
horizon planning is a path, $\mathcal{P}_{g}$, of
$\mathbf{x}_{0:{N_{g}}}\in\mathcal{X}_{free}$, where each node obeys
$\mathbf{x}_{k+1}=f(\mathbf{x}_{k},\mathbf{u}_{k})$ and any additional
enforced constraints. Dynamics propagation is typically accomplished using a
set of representative motion primitives. The kino-RRT is represented as the
green solid line in Fig. 3, with motion primitive actions connecting adjacent
waypoints. Global planning is nominally performed only once prior to motion,
but online recomputation is enabled by reasonable solve times relative to the
dynamics of interest (e.g., a few seconds for robotic free-flyers).
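A toy version of this global planner is sketched below for a 2D double-integrator with constant-acceleration motion primitives and circular obstacles; sampling bounds, primitive magnitudes, and tolerances are all illustrative assumptions, and collision checking is done only at primitive endpoints for brevity.

```python
# Minimal kino-RRT sketch: grow a tree by sampling states, expanding the
# nearest node with each motion primitive, and keeping collision-free states.
import numpy as np

def kino_rrt(x0, goal, obstacles, dt=1.0, max_iters=5000):
    """x = (px, py, vx, vy); obstacles = [(center_2d, radius), ...]."""
    primitives = [np.array([ax, ay]) for ax in (-0.1, 0.0, 0.1)
                  for ay in (-0.1, 0.0, 0.1)]
    nodes = [np.asarray(x0, dtype=float)]
    parents = {0: None}
    for _ in range(max_iters):
        x_rand = np.random.uniform(-2.0, 2.0, size=4)   # sample the state space
        i_near = min(range(len(nodes)),
                     key=lambda i: np.linalg.norm(nodes[i] - x_rand))
        x_near = nodes[i_near]
        # propagate each motion primitive; keep the end state nearest x_rand
        new_states = [x_near + dt * np.r_[x_near[2:], a] for a in primitives]
        new_states = [x for x in new_states
                      if all(np.linalg.norm(x[:2] - c) > r for c, r in obstacles)]
        if not new_states:
            continue
        x_new = min(new_states, key=lambda x: np.linalg.norm(x - x_rand))
        parents[len(nodes)] = i_near
        nodes.append(x_new)
        if np.linalg.norm(x_new[:2] - goal) < 0.1:      # reached the goal region
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None
```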
### III-C Mid-Level (Local) Planner: Information-Aware Receding Horizon
Trajectory Planner
The mid-level planner performs receding-horizon, information-aware planning.
Starting off with the high-level, global plan $\mathcal{P}_{g}$ given by the
kino-RRT, the planner plans trajectories between selected waypoints using
updated information about the robot’s model $\mathcal{M}$ based on the latest
parameter knowledge $\boldsymbol{\theta}_{k}$. Significantly, this planner has
the ability to optimize a cost function that introduces excitation or richness
in the trajectories, thus facilitating the estimation of dynamic parameters
alongside traditional state error and input use. The result is the ability to
assign system excitation as desired while accomplishing otherwise useful
motion.
#### III-C1 Calculation of Fisher Information
Fisher information is employed as an information-theoretic metric in the cost.
The Fisher Information Matrix (FIM) [35] is a measure of the amount of
information given by an observation $\tilde{y}$ about a parameter of interest,
$\theta$. Assuming that there is no process noise in the parameter model,
i.e., $\boldsymbol{\theta}_{k+1}=\boldsymbol{\theta}_{k}$, and due to the
Gaussian nature of the measurement noise and linear measurement model, over
time $t_{0}..,t_{N}$, the FIM is
$\mathbf{F}=\sum_{k=0}^{N}\mathbf{H}(t_{k})^{T}\mathbf{\Sigma}^{-1}\mathbf{H}(t_{k})$
(9) $\begin{gathered}\mathbf{H}(t_{k})=\frac{\partial
h(\mathbf{x}(t_{k}),\mathbf{u}(t_{k}),\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}+\\\
\frac{\partial
h(\mathbf{x}(t_{k}),\mathbf{u}(t_{k}),\boldsymbol{\theta})}{\partial\mathbf{x}}\cdot\frac{\partial\mathbf{x}(\mathbf{x}(t_{k}),\mathbf{u}(t_{k}),\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}.\end{gathered}$
(10)
More details on calculation of the FIM can be found in [36], [37] and the
authors’ previous work [27]. A cost function is constructed to minimize the
trace of the inverse of the FIM, also known as the A-optimality criterion.
This is equivalent to minimizing the axis lengths of the uncertainty ellipsoid
over the parameters.
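A minimal numerical sketch of this criterion is given below; `H_fn` is an assumed callable returning the sensitivity matrix $\mathbf{H}(t_{k})$ of Eq. (10), and the small regularizer is an implementation convenience, not part of the formulation.

```python
# Accumulated Fisher information of Eq. (9) and the A-optimality cost
# tr(F^{-1}) minimized by the mid-level planner.
import numpy as np

def fim_trace_inverse(H_fn, xs, us, theta, Sigma):
    Sigma_inv = np.linalg.inv(Sigma)
    F = sum(H_fn(x, u, theta).T @ Sigma_inv @ H_fn(x, u, theta)
            for x, u in zip(xs, us))
    # tiny regularization keeps the inverse defined before the trajectory
    # has excited every parameter
    F = F + 1e-9 * np.eye(F.shape[0])
    return np.trace(np.linalg.inv(F))
```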
As is common in trajectory optimization problems, the dynamics, equation (1),
are discretized. The optimization problem solved by the mid-level planner over
the horizon is then
$\begin{aligned}
&\underset{\mathbf{u}}{\text{minimize}}&&J=\sum_{k=0}^{N-1}{\mathbf{x}^{T}_{t+k}\mathbf{Q}\mathbf{x}_{t+k}+\mathbf{u}^{T}_{t+k}\mathbf{R}\mathbf{u}_{t+k}}+\gamma
tr\left(\mathbf{F}^{-1}\right)\\\ &\text{subject
to}&&\mathbf{x}_{t+k+1}=f(\mathbf{x}_{t+k},\mathbf{u}_{t+k}),k=0,..,N-1\\\
&&&\mathbf{x}_{t+k}\in\mathcal{X}_{free},k=0,..,N,\\\
&&&\mathbf{u}_{t+k}\in\mathcal{U},k=0,..,N-1,\\\ \end{aligned}$
(11)
where $N$ is the length of the horizon and $\mathbf{Q}\succ 0$ and
$\mathbf{R}\succ 0$ are positive definite weighting matrices. The extent of
information-richness in the trajectory can be adjusted with the relative
weighting term $\gamma$. Mid-level planning occurs on timescales of
approximately every few seconds, providing local plans of sufficient length to
allow for system excitation without excessive recomputation “chattering”.
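To make the structure of problem (11) concrete, a single-shooting sketch is shown below. It reuses `fim_trace_inverse` from the previous sketch, treats the dynamics `f_fn` and sensitivity `H_fn` as supplied by the current model, and omits the state and input constraints for brevity; a derivative-free solver is used only to keep the illustration short.

```python
# Illustrative single-shooting version of problem (11): decision variables
# are the stacked inputs over the horizon; gamma trades off the quadratic
# tracking/effort cost against the information term tr(F^{-1}).
import numpy as np
from scipy.optimize import minimize

def plan_local(x0, theta, f_fn, H_fn, Q, R, Sigma, N, m, gamma):
    def cost(u_flat):
        us = u_flat.reshape(N, m)
        xs, x = [], x0
        for u in us:
            x = f_fn(x, u, theta)          # x_{k+1} = f(x_k, u_k)
            xs.append(x)
        quad = sum(x @ Q @ x for x in xs) + sum(u @ R @ u for u in us)
        return quad + gamma * fim_trace_inverse(H_fn, xs, us, theta, Sigma)
    res = minimize(cost, np.zeros(N * m), method="Nelder-Mead")
    return res.x.reshape(N, m)
```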
### III-D Low-Level Controller: Nonlinear Model Predictive Controller
Model predictive control (MPC) is a control scheme that casts an optimal
control problem as a mathematical optimization problem that is solved
repeatedly online. Using the discretized inputs as decision variables, inputs
are found that minimize a cost function while satisfying constraints,
including the system dynamics based on model $\mathcal{M}$. At its core, MPC relies on a
mathematical optimization solver to provide inputs over the designated time
horizon, only the first of which is executed before recomputation is performed
online. In the RATTLE framework, nonlinear MPC (NMPC) solves the optimization
problem given in equation (11) with $\gamma=0$ over a shorter horizon,
allowing for faster control free of information metrics. NMPC was selected as
the controller mainly for its ability to update model parameters on-the-fly
and to incorporate input and state constraints while determining control
inputs [38]. Low-level NMPC operates on timescales of approximately a few tens
or hundreds of milliseconds, depending on the system of interest.
### III-E Online Parameter Estimation: Extended Kalman Filter
An extended Kalman filter (EKF) is used for parameter estimation in this
framework. The EKF is a non-linear extension of the Kalman filter, obtained by
first-order Taylor linearization of the error dynamics about the current
estimate. Employing a filtering approach for parameter estimation in the
RATTLE framework allows the estimation and thus the model updating to be
performed sequentially and in real-time.
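A generic sketch of such a filter is given below for the state augmented with the parameters; `f_fn`, `h_fn`, and the Jacobian callables are assumed to come from the system model, and the random-walk parameter model ($\boldsymbol{\theta}_{k+1}=\boldsymbol{\theta}_{k}$) is absorbed into `f_fn`.

```python
# One predict/update cycle of an EKF on the augmented state z = [x; theta].
import numpy as np

def ekf_step(z, P, y, u, f_fn, h_fn, F_jac, H_jac, Q, R):
    # predict
    z_pred = f_fn(z, u)
    Fk = F_jac(z, u)
    P_pred = Fk @ P @ Fk.T + Q
    # update with measurement y
    Hk = H_jac(z_pred, u)
    S = Hk @ P_pred @ Hk.T + R
    K = P_pred @ Hk.T @ np.linalg.inv(S)
    z_new = z_pred + K @ (y - h_fn(z_pred, u))
    P_new = (np.eye(len(z)) - K @ Hk) @ P_pred
    return z_new, P_new
```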
1:procedure
RATTLE($\mathbf{x}_{0},\mathcal{X}_{g},\boldsymbol{\theta}_{0},\mathbf{Q},\mathbf{R},\mathcal{M}$)
2: InitParamEst($\boldsymbol{\theta}_{0}$)
3: $\mathcal{P}_{g}\leftarrow$
GlobalPlan($\mathbf{x}_{0},\mathcal{X}_{g},\mathcal{M}$)
4: $k=0$
5: while $\mathbf{x}_{k}\not\in\mathcal{X}_{g}$ do
6: if $\mathcal{P}_{g}\texttt{ replan requested}$ then
7: $\mathcal{P}_{g}\leftarrow$
GlobalPlan($\mathbf{x}_{k},\boldsymbol{\theta}_{k},\mathcal{X}_{g},\mathcal{M}$)
8: $\gamma\leftarrow\texttt{GetInfoWeight}(k)$
9:
$\mathcal{P}_{l},N_{l}\leftarrow\texttt{LocalInfoPlan}(\mathbf{x}_{k},\boldsymbol{\theta}_{k},\mathcal{P}_{g},\mathcal{M},\gamma)$
10: while $k<N_{l}$ do
11:
$\mathbf{u}_{k}\leftarrow\texttt{NmpcControl}(\mathbf{x}_{k},\boldsymbol{\theta}_{k},\mathcal{P}_{l},\mathcal{M})$
12: $\mathbf{x}_{k+1}=f(\mathbf{x}_{k},\mathbf{u}_{k})$ $\triangleright$
system dynamics
13:
$\boldsymbol{\theta}_{k+1}\leftarrow\texttt{ParamEst}(\boldsymbol{\theta}_{k},\mathbf{\tilde{y}}_{k},\mathbf{u}_{k})$
14: $k=k+1$
15:
16:procedure
LocalInfoPlan($\mathbf{x}_{k},\boldsymbol{\theta}_{k},\mathcal{P}_{g},\mathcal{M},\gamma$)
17:
$\mathcal{M}\leftarrow\texttt{UpdateModel}(\boldsymbol{\theta}_{k},\mathcal{M})$
18: UpdateCost$(k)$
19: $\mathbf{x}_{N},N_{l}\leftarrow\texttt{UpdateWaypoint}(k,\mathcal{P}_{g})$
20:
$\mathcal{P}_{l}\leftarrow\texttt{RunTrajOpt}(\mathbf{x}_{k},\mathbf{x}_{N},\texttt{CalcFisher}(),\mathcal{M})$
21: return $\mathcal{P}_{l},N_{l}$
Figure 4: The algorithmic overview of the RATTLE framework. Note that
$\mathcal{M}$ indicates a system model, consisting of $f(\cdot)$ and $h(\cdot)$ as in
equations (1) and (2), with accompanying constraints $\mathcal{X}_{free}$ and
$\mathcal{U}$. $N_{l}$ indicates a local plan horizon length index, and
$\mathcal{P}_{[-]}$ indicates a plan, i.e., a set of $\mathbf{x}_{k:k+N}$ and
$\mathbf{u}_{k:k+N}$ over a time horizon.
## IV Results
RATTLE was validated in a high-fidelity simulator of the free-flyer dynamics
of NASA’s Astrobee robot; a proof of concept demonstration of the information-
aware planning and parameter estimator was also carried out on Astrobee
hardware at the NASA Ames granite table facility (a granite table is a
near-frictionless surface used for simulating the microgravity free-flyer
dynamics; such tables also require impeccable cleaning for successful tests,
which the authors were able to partake in firsthand).
The Astrobee free-flyer is a cube-shaped robot, measuring 32 cm per side [8].
Its holonomic propulsion system draws in air through two central impellers,
which is expelled precisely through 12 exhaust nozzles for thrust [7]. The
Astrobee Robot Software uses ROS as middleware for communication, with about
46 nodelets grouped into approximately 14 processes running on two ARM
processors [6] [39]. The Astrobee Robot Software consists of a simulator,
which enables testing of developed algorithms before implementation on
hardware. The simulator is essentially a set of plug-ins for the Gazebo robot
simulator, which offer the same ROS interfaces as the hardware. The
ROS/Gazebo-based simulation environment includes extensive modeling of
Astrobee including its impeller propulsion system, onboard visual navigation,
environmental disturbances, and many more true-to-life models [6].
A few key properties of the motion planning method were demonstrated.
Primarily, the ability of the method to selectively add parameter information
gathering was shown by setting informative values of $\gamma$. The convergence
of system parameter estimates was then compared to tests in which no
information weighting was provided. This illustrated the improved quality of
parameter estimates with on-the-fly parameter learning, thus offering the
ability to make goal-achieving plans that also accomplish parameter learning,
as opposed to conventional system identification. The full RATTLE pipeline was
demonstrated in simulation to show the selective addition of informativeness
to goal-achieving plans. Hardware results were also obtained specifically for
the mid-level planner, showing targeted parameter learning for a regulation
task. Ground truth parameter values used for both experiments are shown in
Table I.
Figure 5: Examples of the global planner under different obstacle constraints.
Here, global plans are shown in green with obstacles shown in red.
### IV-A Simulation Demonstration: RATTLE in the Astrobee High-Fidelity 3 DOF
Simulation
Figure 6: Parameter estimates while tracking without information-aware
planning (blue), and with information-awareness (red) for the robot-grasping-
payload system in simulation. Four parameters of interest are shown for each
case, and ground truth values are shown in black. Three runs of each case are
illustrated with 1-sigma confidence shown as a highlight.
The RATTLE framework was implemented in the high-fidelity Astrobee
simulation (https://github.com/nasa/astrobee) to demonstrate its capabilities
for a 3 DOF cargo re-positioning scenario, matching the environment and
dynamics used for hardware testing. After Astrobee rigidly grapples a payload
as in Fig. 1 or Fig. 8, the ground truth parameters
$\boldsymbol{\theta}=\left\\{m,c_{x},c_{y},I_{zz}\right\\}$ change and
parametric uncertainty enters the problem. Equipped with the payload, Astrobee
was tasked with moving within a tolerance of a goal region $\mathcal{X}_{g}$.
Note that Astrobee has severe input limits of $u_{max}\leq 0.4\ [N]$, meaning
that system inertial parameters are particularly important to know before
safety-critical maneuvering is needed.
A kinodynamic RRT for the translational states was used as a global planner.
Some examples of the planner’s flexibility are shown in Fig. 5, which also
shows an example of the granite table simulation environment. Any real-time
global planning method could be used, but kinodynamic RRT was selected because
it uses dynamics model knowledge in its planning but with real-time
computation capabilities. An $n=50$ Monte Carlo set of test runs was performed
on the randomly reordered obstacle world of the bottom left of Fig. 5 to
demonstrate real-time properties. Running on a quad-core Intel i7-4700MQ
machine alongside the full Astrobee autonomy stack, a C++ implementation of
the global planner computed plans in $3.59\pm 3.63\ [s]$ for $\sim 2\ [m]$
global plans with obstacle density of $\sim 30\%$. This was a particularly
challenging scenario—in practice for tests as in Fig. 8 runtimes were usually
below $0.5\ $ [s].
The ACADO toolkit [40] was used to implement the nonlinear programming-based
information-aware local planner running with a replan period of $12\ $[s] and
the low-level model predictive controller running at $10\ $[Hz]. The
information-aware mid-level planning scheme of Section III-C was used, with an
exponentially decaying weighting on $\mathbf{\gamma}$ with time constant
$\tau$ of $\frac{1}{10}$ the global plan horizon. The number of local replans
used was 11. A sample run of RATTLE can be seen in Fig. 7, where the global
plan is tracked by local plans containing desired levels of information-
awareness. As this weighting decreases and estimation accuracy improves, the
controller and planner models resemble the system behaviour to a greater
degree of accuracy. The parameter estimator used poses and twists from
Astrobee’s localization algorithm, along with the applied forces and torques
as inputs. Estimated parameters were incorporated into the system model of the
local planner and controller at a period of $16\ $[s]. This avoided updates
using transient estimates and controller instabilities due to a rapidly
changing system model. The parameter estimation comparison is shown explicitly
in Fig. 6, where non-informative plans for three representative runs are shown
at left (blue) compared to information-aware plans at right (red). $\hat{m}$
and $\hat{I}_{zz}$ in particular show improvement in information-aware plans,
while poor observability rendered accurate center of mass estimates difficult
to obtain for both cases.
Figure 7: An example of a robot-grasping-payload information-aware trajectory using RATTLE, in simulation. The yellow dot denotes the start point. Note the reduction of excitation in local plans towards the end of the trajectory.

| | Astrobee + Arm + Carriage | Payload + Carriage | Combined System ($I_{zz}$ about CM) |
|---|---|---|---|
| Sim | | | |
| $m$ [kg] | 19.568 | 11.8 | 31.368 |
| $I_{zz}$ [kg-m$^{2}$] | 0.282 | 0.015 | 0.980 |
| $c_{x}$ [m] | 0.0 | 0.0 | 0.0 |
| $c_{y}$ [m] | 0.0 | -0.305 | -0.115 |
| Hardware | | | |
| $m$ [kg] | 19.0 | 11.8 | 30.8 |
| $I_{zz}$ [kg-m$^{2}$] | 0.25 | 0.015 | 0.94 |
| $c_{x}$ [m] | 0.0 | 0.0 | 0.0 |
| $c_{y}$ [m] | 0.0 | -0.305 | -0.12 |

Table I: Simulation and hardware ground truth values. Note that hardware values are approximations, accounting for gas level, arm extension, and number of batteries used.

Figure 8: Top-down view of the test setup used for Fig. 6, representing a room with a narrow opening and cluttered obstacles inside (inflated for the Astrobee radius). Here, the global plan can be seen in green with a local plan (with some information weighting) in red.
### IV-B Hardware Demonstration: Information-Aware Motion Planning Proof of
Concept on 3 DOF Astrobee Testbed
A series of hardware tests were conducted on the Astrobee free-flyer granite
table facility, using “without payload” and “with payload” configurations,
shown in Fig. 1, with nominal and information-aware versions of the mid-level
planner. Experiments included a three-waypoint maneuver; variance of estimates
of the targeted parameters was compared post-maneuver between nominal and
information-aware planning, with results indicated in Table II. The mid-level
planner ran onboard Astrobee, providing real-time updates at 3 Hz. $I_{zz}$ in
particular saw a large reduction in variance when information-aware planning
was used, as rotational excitation was not as frequently used in nominal
planning. This indicates the dramatic effect of intentional excitation in
parameter information-awareness for parameters which are not otherwise
excited; notably, mass saw little variation between nominal and information-
aware planning since nominal plans already include translational excitation.
| | Without Payload | With Payload |
|---|---|---|
| $I_{zz}$ Covariance [% Change] | -25.01% | -38.05% |
| $m$ Covariance [% Change] | 2.47% | -3.71% |
Table II: Parameter estimate variance reduction of information-aware plans
relative to non-informative plans for hardware testing. Decreases indicate
greater precision of estimated model parameters. Both “without payload” and
“with payload” cases are shown (average of three runs at the final timestep of
motion).
## V Conclusion
This paper introduced RATTLE (Real-time information-Aware Targeted Trajectory
planning for Learning via Estimation) for robotic systems operating under
parametric uncertainty. Particularly relevant for free-flyer cargo
transportation scenarios, this method encourages model-learning through
information-awareness in motion planning while fulfilling the primary control
objectives. A sampling-based global planner (kinodynamic RRT) and a receding
horizon planner that maximizes information content of a local trajectory
constitute the planning module. Non-linear model predictive control (NMPC) is
used for trajectory tracking. An online filtering estimator, EKF in this case,
provides real-time model updates for all planning and control elements,
providing an improved system model for future use. The ability of this
framework to plan for information gain and the resulting improvement in
estimate accuracy was validated with results from high-fidelity 3 DOF
simulation of the Astrobee free-flyer, as well as granite table hardware
testing using Astrobee; video results are
available (https://youtu.be/Kim32sjs2VM). Future work aims to expand
robustness guarantees to the approach, refine methods of updating the global
plan, discuss RATTLE tuning in further detail, and explore the application of
the approach to 6 DOF Astrobee cargo transportation on the International Space
Station.
## Acknowledgments
Funding for this work was provided by the NASA Space Technology Mission
Directorate through a NASA Space Technology Research Fellowship under grant
80NSSC17K0077. This work was also supported by the LARSyS - FCT Plurianual
funding 2020-2023, P2020 INFANTE project 10/SI/2016, and an MIT Seed Project
under the MIT Portugal Program. The authors gratefully acknowledge the support
that enabled this research. The authors would like to thank Marina Moreira,
Ruben Garcia Ruiz, and the Astrobee team at NASA Ames for their help in
setting up the hardware for experiments. Thank you to Alex Cabrales and Oliver
Jia-Richards for insightful conversations.
## References
* [1] J. R. Brophy, L. Friedman, N. J. Strange, D. Landau, T. Jones, R. Schweickart, C. Lewicki, M. Elvis, and D. Manzella, “Synergies of Robotic Asteroid Redirection Technologies and Human Space Exploration,” in 65th International Astronautical Congress, 2014.
* [2] A. Flores-Abad, O. Ma, K. Pham, and S. Ulrich, “A review of space robotics technologies for on-orbit servicing,” Progress in Aerospace Sciences, vol. 68, pp. 1–26, 2014.
* [3] J. M. DiFrancesco and J. M. Olson, “The economics of microgravity research,” Nature Partner Journals - Microgravity, vol. 1, nov 2015.
* [4] B. H. Wilcox, “ATHLETE: A Cargo-Handling Vehicle for Solar System Exploration,” in IEEE Aerospace Conference, 2011.
* [5] P. Roque and R. Ventura, “Space CoBot: modular design of an holonomic aerial robot for indoor microgravity environments,” tech. rep.
* [6] L. Fluckiger, K. Browne, B. Coltin, J. Fusco, T. Morse, and A. Symington, “Astrobee robot software: A modern software system for space,” 2018.
* [7] T. Smith, J. Barlow, M. Bualat, T. Fong, C. Provencher, H. Sanchez, and E. Smith, “Astrobee: A New Platform for Free-Flying Robotics on the ISS,” in iSAIRAS 2016, 2016.
* [8] M. Bualat, J. Barlow, T. Fong, C. Provencher, and T. Smith, “Astrobee: Developing a Free-flying Robot for the International Space Station,” in AIAA SciTech, 2015.
* [9] B. Coltin, J. Fusco, Z. Moratto, O. Alexandrov, and R. Nakamura, “Localization from visual landmarks on a free-flying robot,” in Proc. of the Int. Conf. on Intelligent Robots and Systems (IROS), pp. 4377–4382, 2016.
* [10] I. W. Park, T. Smith, H. Sanchez, S. W. Wong, P. Piacenza, and M. Ciocarlie, “Developing a 3-DOF compliant perching arm for a free-flying robot on the International Space Station,” IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM, pp. 1135–1141, 2017.
* [11] K. Albee and A. C. Hernandez, “The Case for Parameter-Aware Control of Assistive Free-Flyers,” in AIAA SciTech GNC, 2021.
* [12] R. Lampariello and G. Hirzinger, “Modeling and experimental design for the on-orbit inertial parameter identification of free-flying space robots,” in ASME 2005 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. 881–890, American Society of Mechanical Engineers Digital Collection, 2005.
* [13] K. Yoshida and S. Abiko, “Inertia parameter identification for a free-flying space robot,” in AIAA Guidance, Navigation, and Control Conference and Exhibit, p. 4568, 2002.
* [14] M. Ekal and R. Ventura, “On the accuracy of inertial parameter estimation of a free-flying robot while grasping an object,” Journal of Intelligent & Robotic Systems, vol. 98, no. 1, pp. 153–163, 2020.
* [15] J. P. How and M. Tillerson, “Analysis of the impact of sensor noise on formation flying control,” Proceedings of the American Control Conference, vol. 5, pp. 3986–3991, 2001.
* [16] A. Majumdar and R. Tedrake, “Funnel libraries for real-time robust feedback motion planning,” The International Journal of Robotics Research, vol. 36, no. 8, pp. 947–982, 2017.
* [17] B. T. Lopez, J. P. Howl, and J.-J. E. Slotine, “Dynamic tube mpc for nonlinear systems,” in 2019 American Control Conference (ACC), pp. 1655–1662, IEEE, 2019.
* [18] J.-J. E. Slotine and W. Li, Applied Nonlinear Control. Prentice Hall, 1991.
* [19] S. Abiko, R. Lampariello, and G. Hirzinger, “Impedance control for a free-floating robot in the grasping of a tumbling target with parameter uncertainty,” in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1020–1025, IEEE, 2006.
* [20] Y.-P. Chen and S.-C. Lo, “Sliding-mode controller design for spacecraft attitude tracking maneuvers,” IEEE transactions on aerospace and electronic systems, vol. 29, no. 4, pp. 1328–1333, 1993.
* [21] S. Ulrich, A. Saenz-Otero, and I. Barkana, “Passivity-based adaptive control of robotic spacecraft for proximity operations under uncertainties,” Journal of Guidance, Control, and Dynamics, vol. 39, no. 6, pp. 1444–1453, 2016.
* [22] Y. Xu, H.-Y. Shum, T. Kanade, and J.-J. Lee, “Parameterization and adaptive control of space robot systems,” IEEE transactions on Aerospace and Electronic Systems, vol. 30, no. 2, pp. 435–451, 1994.
* [23] A. T. Espinoza and D. Roascio, “Concurrent adaptive control and parameter estimation through composite adaptation using model reference adaptive control/kalman filter methods,” in 2017 IEEE Conference on Control Technology and Applications (CCTA), pp. 662–667, IEEE, 2017.
* [24] D. J. Webb, K. L. Crandall, and J. Van Den Berg, “Online parameter estimation via real-time replanning of continuous Gaussian POMDPs,” Proceedings - IEEE International Conference on Robotics and Automation, pp. 5998–6005, 2014.
* [25] K. Okamoto, M. Goldshtein, and P. Tsiotras, “Optimal Covariance Control for Stochastic Systems under Chance Constraints,” IEEE Control Systems Letters, vol. 2, no. 2, pp. 266–271, 2018.
* [26] K. Okamoto and P. Tsiotras, “Optimal Stochastic Vehicle Path Planning Using Covariance Steering,” IEEE Robotics and Automation Letters, vol. 4, no. 3, pp. 2276–2281, 2019.
* [27] K. Albee, M. Ekal, R. Ventura, and R. Linares, “Combining parameter identification and trajectory optimization: Real-time planning for information gain,” arXiv preprint arXiv:1906.02758, 2019.
* [28] J. Reif, “Complexity of the Generalized Mover’s Problem,” tech. rep., Harvard University, 1985.
* [29] C. M. Jewison, D. Miller, and A. Saenz-Otero, “Reconfigurable Thruster Selection Algorithms for Aggregative Spacecraft Systems,” 2014.
* [30] K. Albee, Toward Optimal Motion Planning for Dynamic Robots: Applications On-Orbit. Master’s thesis, Massachusetts Institute of Technology, 2019.
* [31] S. M. LaValle, Planning Algorithms, vol. 9780521862. 2006\.
* [32] S. LaValle and J. Kuffner, “Randomized Kinodynamic Planning,” The International Journal of Robotics Research, 2001.
* [33] S. Karaman and E. Frazzoli, “Sampling-based Algorithms for Optimal Motion Planning,” The International Journal of Robotics Research, 2011.
* [34] C. Jewison, R. S. Erwin, and A. Saenz-Otero, “Model Predictive Control with ellipsoid obstacle constraints for spacecraft rendezvous,” IFAC-PapersOnLine, vol. 28, no. 9, pp. 257–262, 2015.
* [35] R. A. Fisher, “Statistical methods and scientific inference.,” 1956.
* [36] A. D. Wilson, J. A. Schultz, and T. D. Murphey, “Trajectory synthesis for fisher information maximization,” IEEE Transactions on Robotics, vol. 30, no. 6, pp. 1358–1370, 2014.
* [37] J. H. Taylor, “The cramer-rao estimation error lower bound computation for deterministic nonlinear systems,” in 1978 IEEE Conference on Decision and Control including the 17th Symposium on Adaptive Processes, pp. 1178–1181, Jan 1978.
* [38] J. B. Rawlings, D. Q. Mayne, and M. M. Diehl, Model predictive control: theory, computation, and design, vol. 197. 2019\.
* [39] L. Fluckiger, K. Browne, B. Coltin, J. Fusco, T. Morse, and A. Symington, “Astrobee robot software: Enabling mobile autonomy on the iss,” in Proc. of the Int. Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS), 2018.
* [40] B. Houska, H. Ferreau, and M. Diehl, “ACADO Toolkit – An Open Source Framework for Automatic Control and Dynamic Optimization,” Optimal Control Applications and Methods, vol. 32, no. 3, pp. 298–312, 2011.
|
# Zero-Shot Knowledge Distillation from a Decision-Based Black-Box Model
Zi Wang
###### Abstract
Knowledge distillation (KD) is a successful approach for deep neural network
acceleration, with which a compact network (student) is trained by mimicking
the softmax output of a pre-trained high-capacity network (teacher).
Traditionally, KD relies on access to the training samples and the
parameters of the white-box teacher to acquire the transferred knowledge.
However, these prerequisites are not always realistic due to storage costs or
privacy issues in real-world applications. Here we propose the concept of
decision-based black-box (DB3) knowledge distillation, with which the student
is trained by distilling the knowledge from a black-box teacher (parameters
are not accessible) that only returns classes rather than softmax outputs. We
start with the scenario when the training set is accessible. We represent a
sample’s robustness against other classes by computing its distances to the
teacher’s decision boundaries and use it to construct the soft label for each
training sample. After that, the student can be trained via standard KD. We
then extend this approach to a more challenging scenario in which even
accessing the training data is not feasible. We propose to generate pseudo
samples distinguished by the teacher’s decision boundaries to the largest
extent and construct soft labels for them, which are used as the transfer set.
We evaluate our approaches on various benchmark networks and datasets, and
experimental results demonstrate their effectiveness. Code is available at:
_https://github.com/zwang84/zsdb3kd_.
## 1 Introduction
Training compact deep neural networks (DNNs) (Howard et al., 2017) efficiently
has become an appealing topic because of the increasing demand for deploying
DNNs on resource-limited devices such as mobile phones and drones (Moskalenko
et al., 2018). Recently, a large number of approaches have been proposed for
training lightweight DNNs with the help of a cumbersome, over-parameterized
model, such as network pruning (Li et al., 2016; He et al., 2019; Wang et al.,
2021), quantization (Han et al., 2015), factorization (Jaderberg et al.,
2014), and knowledge distillation (KD) (Hinton et al., 2015; Phuong & Lampert,
2019; Jin et al., 2020; Yun et al., 2020; Passalis et al., 2020; Wang, 2021).
Among all these approaches, knowledge distillation is a popular scheme with
which a compact student network is trained by mimicking the softmax output
(class probabilities) of a pre-trained deeper and wider teacher model (Hinton
et al., 2015). By doing so, the rich information learned by the powerful
teacher can be imitated by the student, which often exhibits better
performance than solely training the student with a cross-entropy loss. Many
variants have been developed to improve the vanilla KD approach by not only
mimicking the softmax output but also matching extra elements in the teacher.
The success of KD relies on three factors: (1) access to the teacher’s
training dataset, (2) the white-box teacher model, i.e., access to the
teacher’s parameters, and (3) the score-based outputs, i.e., class
probabilities of the training samples outputted by the teacher. In real-world
applications, however, these prerequisites are usually unrealistic. Due to
storage costs of large training datasets (such as ImageNet (Deng et al.,
2009)) or privacy issues (such as sensitive patient data or personal photos),
accessing the training samples is sometimes not feasible. With this concern,
the concept of zero-shot knowledge distillation (ZSKD) (Nayak et al., 2019;
Chen et al., 2019; Yin et al., 2020; Wang, 2021) is proposed. ZSKD generates
pseudo training samples via backpropagation with access to the parameters of
the white-box teacher, which are then used as the transfer set for training
the student model via KD. However, we argue that this scenario is still not
realistic under certain circumstances.
In some cases, training samples are publicly available, but pre-trained models
are not. For example, YouTube’s recommendation system (Covington et al., 2016)
is trained with tons of videos that can be accessed by any user. However, the
trained model is a core competitiveness of the company and its parameters are
not released. One can argue that a surrogate teacher can be trained locally
on the accessible training set, but due to limitations such as computing
resources, its performance is usually not satisfactory compared to the
provided powerful model, which has far more parameters and a more
sophisticated architecture.
Moreover, a much more challenging scenario is that, in many real-world
applications, none of the three factors mentioned above is available. A pre-
trained model stored on the remote server may only provide APIs for inference,
neither the model parameters nor the training samples are accessible to the
users. Worse still, these APIs usually return a category index for each
sample (i.e., a hard label) rather than the class probabilities over all
classes. For example, speech recognition systems like Siri and Cortana are
trained with internal datasets and only return the results to users (López et
al., 2017). Cloud-based object classification systems like Clarifai (Clarifai,
2020) just give the top-1 classes of the identified objects in the images
uploaded by users.
With these concerns, we propose the concept of decision-based black-box
knowledge distillation (DB3KD), i.e., training a student model by transferring
the knowledge from a black-box teacher that only returns hard-labels rather
than probability distributions. We start with the scenario when the training
data is available. Our key idea is to extract the class probabilities of the
training samples from the DB3 teacher. We claim that the decision boundary of
a well-trained model distinguishes the training samples of different classes
to the largest extent. Therefore, the distance from a sample to the targeted
decision boundary (the boundary to the samples of a certain class) can be used
as a representation of a sample’s robustness, which determines how much
confidence of a specific class is assigned to the sample. Based on this, the
soft label of each training sample can be constructed with the value of sample
robustness and used for training the student via KD.
We further extend DB3KD to the scenario when training data are not accessible.
As the decision boundary makes every effort to differentiate the training
samples of all classes, samples used for training the teacher tend to have
longer distances to the boundary than other inputs. We propose to optimize randomly
generated noises away from the boundary to obtain robust pseudo samples that
simulate the distribution of the training samples. This is achieved by
iteratively estimating the gradient direction on the boundary and pushing the
samples away from the boundary in that direction. After that, pseudo samples
are used for training the student via DB3KD. To our best knowledge, this is
the first study of KD from a DB3 teacher, both with and without access to the
training set.
The contribution of this study is summarized as follows. (1) We propose the
concept of decision-based black-box knowledge distillation for the first time,
with which a student is trained by transferring knowledge from a black-box
teacher that only returns hard-labels. (2) We propose to use sample
robustness, i.e., the distance from a training sample to the decision
boundaries of a DB3 teacher, to construct soft labels for DB3KD when training
data is available. (3) We extend the DB3KD approach to a more challenging
scenario when accessing training data is not feasible and name it zero-shot
decision-based black-box knowledge distillation (ZSDB3KD). (4) Extensive
experiments validate that the proposed approaches achieve competitive
performance compared to existing KD methods in more relaxed scenarios.
## 2 Related Work
Knowledge distillation. Knowledge distillation is first introduced in (Buciluǎ
et al., 2006) and generalized in (Ba & Caruana, 2014; Hinton et al., 2015),
which is a popular network compression scheme to train a compact student
network by mimicking the softmax output predicted by a high-capacity teacher
or ensemble of models. Besides transferring the knowledge of class
probabilities, many variants have been proposed to add extra regulations or
alignments between the teacher and the student to improve the performance
(Romero et al., 2014; Yim et al., 2017; Kim et al., 2018; Heo et al., 2019).
For example, FitNet (Romero et al., 2014) introduces an extra loss term that
matches the values of the intermediate hidden layers of the teacher and the
student, which allows fast training of deeper student models. (Zagoruyko &
Komodakis, 2016) defines the attention of DNNs and uses it as the additional
transferred knowledge.
Knowledge distillation with limited data. To mitigate the storage and
transmission costs of large training datasets, several studies propose the
concept of few-shot KD, which generates pseudo samples with the help of a
small number of the original training samples (Kimura et al., 2018; Wang et
al., 2020; Li et al., 2020). Another study suggests that instead of the raw
data, some surrogates with much smaller sizes (also known as metadata) can be
used to distill the knowledge from the teacher. (Lopes et al., 2017) leverages
the statistical features of the activations of the teacher to train a compact
student without access to the original data. However, releasing this kind of
metadata along with the pre-trained teacher is usually not a common scenario.
Zero-shot knowledge distillation. To deal with the scenario when training data
is not accessible, (Nayak et al., 2019) proposes zero-shot knowledge
distillation (ZSKD). The authors model the softmax output space of the teacher
with a Dirichlet distribution and sample soft labels as the targets. Randomly
generated noise inputs are optimized towards these targets via backpropagation
and are used as the transfer set. (Wang, 2021) replaces the Dirichlet
distribution with a multivariate normal distribution to model the softmax
output space of the generated samples. Therefore, pseudo samples of different
classes can be generated simultaneously rather than one after another as in
(Nayak et al., 2019). Generative adversarial networks (GANs) (Goodfellow et
al., 2014) are leveraged in (Chen et al., 2019; Micaelli & Storkey, 2019) to
solve this task so that pseudo sample synthesis and student network training
can be conducted simultaneously. Another study (Yin et al., 2020) proposes to
use the features in the batch normalization layers to generate pseudo samples.
However, these methods still need access to the parameters of the teacher for
backpropagation, which is unrealistic in many cases.
Black-box knowledge distillation. Although the vanilla KD is built with a
black-box teacher (Hinton et al., 2015), the whole training dataset is used
for training. (Wang et al., 2020) investigates the possibility that a student
is trained with limited samples and a black-box teacher. Other than zero-shot
KD methods that generate pseudo inputs, (Orekondy et al., 2019) proposes to
sample from a large pool (such as ImageNet) to get the transfer set to train
the student. Therefore, there is no need to access the teacher’s parameters.
Although the prerequisites in these methods are relaxed, weak assumptions on
the training samples and a score-based teacher that outputs class
probabilities are still needed. Different from these studies, we consider a
much more challenging case in which knowledge is transferred from a black-box
teacher that only returns top-1 classes.
Decision-based adversarial attack. Our approach leverages the distance from a
sample to the decision boundary for soft label construction, which is related
to the research of decision-based black-box adversarial attack (Brendel et
al., 2017; Cheng et al., 2018, 2019; Liu et al., 2019). These methods aim to
add some imperceptible perturbations to the inputs to create adversarial
samples that fool a well-trained DNN with high confidence. This is achieved by
identifying the points on the decision boundary with minimal distance to the
original inputs. Inspired by these studies, we use the distance from a sample
to the targeted decision boundaries as a representation of a sample’s
robustness against other categories, which can be converted to a probability
distribution of all classes with proper operations.
Figure 1: The overall workflow of the proposed approach. Left: classic KD.
Bottom: decision-based black-box KD (DB3KD). Samples are iteratively fed to
the DB3 teacher to compute the sample robustness, which is transformed as soft
labels for training the student via KD. Right: Zero-shot DB3KD (ZSDB3KD).
Pseudo samples are generated by moving random noises away from the decision
boundary and approaching the distribution of the original training samples,
which are used as the transfer set for training the student via DB3KD.
## 3 Methodology
We first formulate KD in its standard form and present our approach that
creates soft labels of the training samples with a DB3 teacher. Finally, we
extend our approach to the scenario in which the training set is not
accessible.
### 3.1 Knowledge Distillation
KD is used for training a compact student by matching the softmax outputs of a
pre-trained, cumbersome teacher (Hinton et al., 2015) (Fig. 1(left)). For an
object classification task, denote $F_{t}(x)$ and $F_{s}(x)$ the teacher and
the student DNNs, respectively, which take an image $x$ as the input, and
output a vector $P\in\left[0,1\right]^{L}$, i.e.,
$F_{t}(x)=P_{t}=\text{softmax}(a_{t})$,
$F_{s}(x)=P_{s}=\text{softmax}(a_{s})$, where $L$ is the number of classes and
$a$ is the pre-softmax activation. In a KD procedure, a temperature $\tau$ is
usually introduced to soften the softmax output, i.e.,
$P^{\tau}=\text{softmax}(a/\tau)$, which has proved effective in boosting
the training process. The student is trained by minimizing the loss function
in Eq. (1).
$\mathcal{L}=\mathcal{L}_{CE}(P_{s},y)+\lambda\mathcal{L}_{KD}(P^{\tau}_{t},P^{\tau}_{s}),$
(1)
where $y$ is the ground truth label, $\mathcal{L}_{CE}$ and $\mathcal{L}_{KD}$
are the cross-entropy loss and the distillation loss. A scaling factor
$\lambda$ is used for balancing the importance of the two losses.
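For concreteness, the objective in Eq. (1) might be implemented as in the following sketch, assuming PyTorch and a KL-based distillation term; the $\tau^{2}$ scaling is a common convention rather than something specified above, and all names are illustrative.

```python
# A minimal sketch of the KD objective in Eq. (1), assuming PyTorch and a
# KL-based distillation loss; names are illustrative.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, tau=4.0, lam=1.0):
    """Cross-entropy plus softened distillation term, as in Eq. (1)."""
    ce = F.cross_entropy(student_logits, targets)
    p_t = F.softmax(teacher_logits / tau, dim=1)          # softened teacher output
    log_p_s = F.log_softmax(student_logits / tau, dim=1)  # softened student output
    # KL(P_t^tau || P_s^tau); tau^2 keeps gradient magnitudes comparable
    kd = F.kl_div(log_p_s, p_t, reduction="batchmean") * tau * tau
    return ce + lam * kd
```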
Figure 2: Strategies for computing sample robustness.
### 3.2 Decision-Based Black-Box Knowledge Distillation
As mentioned, in many real-world applications, users are prohibited from
querying any internal configuration of the teacher except for the final
decision (top-1 label). Denote $F_{t}^{B}(x)$ the DB3 teacher, then
$F_{t}^{B}(x)=l,l\in\\{1,2,\cdots,L\\}$. In this case, $P_{t}$ cannot be
obtained and the student cannot be trained with Eq. (1). We claim that a
sample’s robustness against a specific class can be used as a representation
of how much confidence should be assigned to this class, with proper post-
operations. Therefore, we extract the sample’s robustness against each class
from the DB3 teacher and convert it to a class distribution $\hat{P_{t}}$ as
an estimate of $P_{t}$ (Fig. 1(bottom)). In the following, we propose three
metrics to measure sample robustness and present how to construct class
distributions with the sample robustness measurements. Intuitively, if a
sample is closer to some points in the region of a specific class, it is more
vulnerable to this class and thus should be assigned higher confidence.
#### 3.2.1 Sample Robustness
Sample Distance (SD). The most straightforward way to quantify the sample
robustness is to compute the minimal $\ell_{2}$-norm distance from a sample to
those of other classes (Fig. 2(left)). Denote $x_{0}^{m}\in\mathbb{R}^{C\times
W\times H}$ a sample of the $m$-th class,
$\mathbf{x}^{n}=\\{x_{1}^{n},x_{2}^{n},\cdots,x_{S}^{n}\\}$ a batch of $S$
samples from the $n$-th class, where $n\neq m$, $C,W,H$ are the number of
channels, width and height of the sample, respectively. The robustness of
$x_{0}^{m}$ against class $n$ is computed with Eq. (2).
$r_{0}^{m,n}=\min_{1\leq i\leq S}||x_{i}^{n}-x_{0}^{m}||_{2}.$ (2)
The advantage of using SD is that it can be computed without querying the
teacher. However, SD is a rough estimate of sample robustness since it does
not mine any information from the teacher. Therefore, we introduce two
more refined strategies to measure sample robustness.
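A minimal sketch of the SD metric in Eq. (2), assuming NumPy and flattened image batches; all names are illustrative:

```python
# A sketch of the Sample Distance (SD) metric in Eq. (2), assuming NumPy.
# x0 is one flattened sample of class m; xs_n is a batch of S flattened
# samples of class n (names are illustrative).
import numpy as np

def sample_distance(x0, xs_n):
    """Minimal l2 distance from x0 to a batch of samples of another class."""
    diffs = xs_n - x0[None, :]            # shape (S, C*W*H)
    return np.linalg.norm(diffs, axis=1).min()
```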
Boundary Distance (BD). To obtain a better representation of sample robustness,
we propose to leverage the distances from a sample to the targeted decision
boundaries of the teacher (Fig. 2(middle)). For each
$x_{i}^{n}\in\mathbf{x}^{n}$, we implement a binary search in the direction
$(x_{i}^{n}-x_{0}^{m})$ and find the corresponding point $\bar{x}_{i}^{n}$ on
the decision boundary (Eq. (3)).
$\bar{x}_{i}^{n}=\min_{\alpha}\Big(x_{0}^{m}+\alpha\cdot\frac{x_{i}^{n}-x_{0}^{m}}{||x_{i}^{n}-x_{0}^{m}||_{2}}\Big),~i=1,2,\cdots,S,\quad\text{s.t.}~F_{t}^{B}(\bar{x}_{i}^{n}+\epsilon)=n,~||\epsilon||_{2}\to 0.$ (3)
We then compute the sample robustness with Eq. (2) in which $x_{i}^{n}$ is
replaced by $\bar{x}_{i}^{n}$.
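The binary search in Eq. (3) might look like the following sketch, assuming the decision-based teacher is exposed as a function `f_b` that returns a top-1 label; for simplicity it stops at the first label flip away from class $m$, whereas Eq. (3) targets class $n$ specifically, so a real implementation could additionally check the flipped label.

```python
# A sketch of the binary search in Eq. (3), assuming a decision-based
# teacher `f_b` mapping a sample to its top-1 label. `x0` has class m and
# `xn` another class; all names are illustrative.
import numpy as np

def boundary_point(f_b, x0, xn, m, tol=1e-5):
    """Return the point on the segment x0 -> xn where the decision flips."""
    lo, hi = 0.0, 1.0                      # f_b is m at lo, not m at hi
    direction = xn - x0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f_b(x0 + mid * direction) == m:
            lo = mid                       # still inside class m: move outward
        else:
            hi = mid                       # already past the boundary: move inward
    return x0 + hi * direction
```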
Minimal Boundary Distance (MBD). Inspired by recent studies of decision-based
black-box adversarial attack (Brendel et al., 2017; Cheng et al., 2018; Liu et
al., 2019; Cheng et al., 2019), we further optimize $\bar{x}_{i}^{n}$ by
moving it along the decision boundary to the point $x_{i}^{*n}$ where
$||x_{i}^{*n}-x_{0}^{m}||_{2}$ is minimized (Fig. 2(right)). Starting from
$\bar{x}_{i}^{n}$, we first estimate the gradient of the boundary $\nabla
F_{t}^{B}(\bar{x}_{i}^{n})$ via zeroth order optimization (Wang et al., 2018),
which is achieved by sampling $Q$ Gaussian random vectors
$\mathbf{u}_{q}\in\mathbb{R}^{C\times W\times H}~{}(q=1,2,\cdots,Q)$ and
averaging them (Fig. 3, Eq. (4)).
$\nabla
F_{t}^{B}(\bar{x}_{i}^{n})=\frac{1}{Q}\sum_{q=1}^{Q}\text{sign}(\bar{x}_{i}^{n}+\epsilon_{g}\mathbf{u}_{q})\mathbf{u}_{q},$
(4)
where $\epsilon_{g}$ is a very small scalar, and
$\text{sign}(\bar{x}_{i}^{n}+\epsilon_{g}\mathbf{u}_{q})$ is a sign function, i.e.,
$\text{sign}(\bar{x}_{i}^{n}+\epsilon_{g}\mathbf{u}_{q})=\begin{cases}+1,~{}~{}F_{t}^{B}(\bar{x}_{i}^{n}+\epsilon_{g}\mathbf{u}_{q})=n,\\\ -1,~{}~{}\text{Otherwise}.\\\ \end{cases}$ (5)
Figure 3: The iterative procedure for the optimization of MBD.
Once the gradient is determined, we get a new sample outside the decision
boundary $\hat{x}_{i}^{n}\leftarrow\bar{x}_{i}^{n}+\xi_{d}\nabla
F_{t}^{B}(\bar{x}_{i}^{n})$ with a step size $\xi_{d}$. Then we conduct the
same binary search procedure (Eq. (3)) in the direction
$(\hat{x}_{i}^{n}-x_{0}^{m})$ and obtain an updated $\bar{x}_{i}^{n}$. Since
the search is within a very small region, the decision boundary in such a
region is smooth. Therefore, the new $\bar{x}_{i}^{n}$ has a smaller distance
to $x_{0}^{m}$ (Fig. 3). We repeat the procedure above to get the optimal
solution $x_{i}^{*n}=\bar{x}_{i}^{n}$ until
$||\bar{x}_{i}^{n}-x_{0}^{m}||_{2}$ cannot be further minimized or the query
limit is reached. Finally, we compute the sample robustness with Eq. (2) in
which $x_{i}^{n}$ is replaced by $x_{i}^{*n}$.
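Putting Eqs. (4)-(5) and the re-projection step together, the MBD refinement might be sketched as follows, reusing the `boundary_point` helper above; $Q$, the step size, and the iteration count are illustrative, and in practice one would stop early when the distance stops shrinking or the query budget is reached.

```python
# A sketch of the MBD refinement (Eqs. (4)-(5) and Fig. 3), assuming NumPy
# and the `boundary_point` helper sketched above; parameters are illustrative.
import numpy as np

def estimate_gradient(f_b, x_bar, n, Q=200, eps_g=1e-2):
    """Eqs. (4)-(5): average signed random directions at a boundary point."""
    grad = np.zeros_like(x_bar)
    for _ in range(Q):
        u = np.random.randn(*x_bar.shape)
        sign = 1.0 if f_b(x_bar + eps_g * u) == n else -1.0  # Eq. (5)
        grad += sign * u
    return grad / Q

def minimal_boundary_distance(f_b, x0, x_bar, m, n, steps=50, xi_d=0.2):
    """Walk x_bar along the boundary to shrink its distance to x0 (Fig. 3)."""
    for _ in range(steps):
        grad = estimate_gradient(f_b, x_bar, n)
        x_hat = x_bar + xi_d * grad                 # step to the class-n side
        x_bar = boundary_point(f_b, x0, x_hat, m)   # re-project onto the boundary
    return np.linalg.norm(x_bar - x0)
```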
#### 3.2.2 Soft Label Construction
After obtaining all the samples’ robustness on all classes, we construct the
soft labels for them with proper manipulations. We start with the pre-softmax
activations for better illustration. Suppose the pre-softmax activation of a
sample $x_{s}^{m}$ is
$\mathbf{a_{s}^{m}}=\\{a_{s,1}^{m},a_{s,2}^{m},\cdots,a_{s,L}^{m}\\}$. Then
the pre-softmax activation and the sample robustness should be related
through the following conditions. (1) $\text{argmax}_{i}a_{s,i}^{m}=m$. It is
obvious that $a_{s,m}^{m}$ should be the largest number to ensure that the
sample is assigned to the correct class. (2) If $r_{s}^{m,j}>r_{s}^{m,k}$,
then $a_{s,j}^{m}<a_{s,k}^{m}$. This is because a larger sample robustness
indicates a longer distance to the targeted decision boundary, which means that
the sample is more robust against that class and should be assigned a
lower confidence. (3) If
$\sum_{j=1}^{L}r_{s}^{m,j}>\sum_{j=1}^{L}r_{p}^{m,j},j\neq m$, then
$a_{s,m}^{m}>a_{p,m}^{m}$. This is because when the sum of a sample’s
distances to its targeted decision boundaries is larger, the probability mass
of this sample is more concentrated in its top-1 class. Otherwise, the mass is
more dispersed among all elements.
With the above design philosophy, to meet requirement (1) and (2), we define
$\hat{a}_{s,n}^{m}(n=1,2,\cdots,L)$ in Eq. (6).
$\hat{a}_{s,n}^{m}=\begin{cases}\frac{1}{r_{s}^{m,n}},~{}~{}~{}~{}\text{for}~{}n\neq
m,\\\ \sum_{i=1}^{L}\frac{1}{r_{s}^{m,i}},i\neq
m,~{}~{}~{}~{}\text{for}~{}n=m.\\\ \end{cases}$ (6)
$\hat{a}_{s,n}^{m}$ is then divided by
$(\sum_{i=1}^{L}\frac{1}{r_{s}^{m,i}})^{2}$ to meet requirement (3), as
presented in Eq. (7).
${a}_{s,n}^{m}=\frac{\hat{a}_{s,n}^{m}}{(\sum_{i=1}^{L}\frac{1}{r_{s}^{m,i}})^{2}},~{}~{}i\neq
m,~{}\text{for}~{}n=1,2,\cdots,L.$ (7)
Finally, we get $\hat{P}_{t}=\text{softmax}(\mathbf{a}_{s}^{m})$ for sample
$x_{s}^{m}$.
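The label construction in Eqs. (6)-(7), followed by the softened softmax of Section 3.1, might be sketched as follows, assuming NumPy; `r` holds one sample's robustness against every class, with its own entry ignored, and the function name is illustrative.

```python
# A sketch of the soft-label construction in Eqs. (6)-(7), assuming NumPy.
# `r` holds the robustness r_s^{m,j} of one sample against every class j;
# its own entry r[m] is ignored. Names are illustrative.
import numpy as np

def soft_label(r, m, tau=0.3):
    """Convert robustness values into a soft label (Eqs. (6)-(7))."""
    inv = np.zeros_like(r, dtype=float)
    other = np.arange(len(r)) != m
    inv[other] = 1.0 / r[other]      # 1 / r_s^{m,n} for n != m
    a = inv.copy()
    a[m] = inv.sum()                 # Eq. (6): own class gets the sum
    a /= inv.sum() ** 2              # Eq. (7): rescale by (sum of 1/r)^2
    e = np.exp(a / tau)              # softened softmax, as in Section 3.1
    return e / e.sum()
```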
#### 3.2.3 Training of Student Model
Once the soft labels of all the training samples are constructed with the
above approach, we can train the student with standard KD, using the objective
function in Eq. (1).
### 3.3 Zero-shot Decision-Based Black-Box Knowledge Distillation
In zero-shot KD, pseudo samples are usually generated by optimizing some noise
inputs via backpropagation towards some soft labels sampled from a prior
distribution, which are then used as the transfer set. However, with a DB3
teacher, backpropagation cannot be implemented and the prior distribution
cannot be obtained, which makes ZSDB3KD a much more challenging task. Since
the teacher is trained to largely distinguish the training samples, the
distance between a training sample to the teacher’s decision boundary is
usually much larger than the distance between a randomly generated noise image
to the boundary. With this claim, we propose to iteratively push random noise
inputs towards the region that is away from the boundary to simulate the
distribution of the original training data (Fig. 1(right)).
Denote $o_{0}^{m}$ and
$\mathbf{o}^{\bar{m}}=\left[o_{1}^{\bar{m}},o_{2}^{\bar{m}},\cdots,o_{T}^{\bar{m}}\right]$
a random noise input of the $m$-th class and a batch of $T$ random noises of
any other class, respectively. Similar to but slightly different from Eq. (3),
for each $o_{i}^{\bar{m}}\in\mathbf{o}^{\bar{m}}$, we first identify its
corresponding point on the boundary $\bar{o}_{i}^{m}$ with Eq. (8).
$\bar{o}_{i}^{m}=\min_{\alpha}\Big(o_{0}^{m}+\alpha\cdot\frac{o_{i}^{\bar{m}}-o_{0}^{m}}{||o_{i}^{\bar{m}}-o_{0}^{m}||_{2}}\Big),~i=1,2,\cdots,T,\quad\text{s.t.}~F_{t}^{B}(\bar{o}_{i}^{m}+\epsilon)\neq m,~||\epsilon||_{2}\to 0.$ (8)
Similarly, the MBDs of $o_{0}^{m}$, i.e., $o_{i}^{*m}$, can be iteratively
estimated with Eq. (4) and (5). Let $o^{*m}$ be the one of
$o_{i}^{*m}~{}(i=1,2,\cdots,T)$ such that $||o^{*m}-o_{0}^{m}||_{2}$ attains
its minimal value, i.e.,
$||o^{*m}-o_{0}^{m}||_{2}=\min_{i}||o_{i}^{*m}-o_{0}^{m}||_{2}$. We then
estimate the gradient at the boundary $\nabla F_{t}^{B}({o}^{*m})$ with Eq.
(4) and update $o^{m}$ as $o^{m}\leftarrow o^{m}-\xi_{o}\nabla
F_{t}^{B}({o}^{*m})$ with the step size $\xi_{o}$. The new $o^{m}$ usually
has a longer distance to the boundary. We repeat the above process until
$||o^{*m}-o^{m}||_{2}$ cannot be further maximized or the query limit is
reached. Finally, we use the generated pseudo samples with the DB3KD approach
to train the student as described in Section 3.2.
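The generation loop might be sketched as follows, reusing the `boundary_point` helper above and flipping the sign convention of Eq. (5) so that $+1$ marks the non-$m$ side; the per-point MBD refinement is omitted for brevity, and the clipping to a valid pixel range, step counts, and names are added assumptions.

```python
# A sketch of the ZSDB3KD sample-generation loop (Section 3.3), assuming NumPy
# and the `boundary_point` helper sketched earlier. The gradient here uses +1
# on the non-m side, so subtracting it moves the pseudo sample deeper into
# class m, away from the boundary. All parameters are illustrative.
import numpy as np

def gradient_away_from_class(f_b, x, m, Q=200, eps_g=1e-2):
    grad = np.zeros_like(x)
    for _ in range(Q):
        u = np.random.randn(*x.shape)
        sign = 1.0 if f_b(x + eps_g * u) != m else -1.0  # +1 outside class m
        grad += sign * u
    return grad / Q

def generate_pseudo_sample(f_b, o, noises, m, iters=40, xi_o=0.5):
    """Push a random noise image `o` of class m away from the boundary."""
    for _ in range(iters):
        # boundary points toward each noise of another class (Eq. (8))
        pts = [boundary_point(f_b, o, z, m) for z in noises]
        o_star = min(pts, key=lambda p: np.linalg.norm(p - o))  # closest point
        grad = gradient_away_from_class(f_b, o_star, m)
        o = np.clip(o - xi_o * grad, 0.0, 1.0)  # step away from the boundary
    return o
```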
## 4 Experiments
In this section, we first demonstrate the performance of DB3KD when training
samples are accessible. Then we show the results of ZSDB3KD under the
circumstance that training data is not accessible.
Algorithm | MNIST | MNIST | Fashion-MNIST | Fashion-MNIST | CIFAR10 | CIFAR10 | FLOWERS102
---|---|---|---|---|---|---|---
 | LeNet5-half | LeNet5-1/5 | LeNet5-half | LeNet5-1/5 | AlexNet-half | AlexNet-quarter | ResNet-18
Teacher CE | 99.33% | 99.33% | 91.63% | 91.63% | 79.30% | 79.30% | 95.07%
Student CE | 99.11% | 98.77% | 90.21% | 88.75% | 77.28% | 72.21% | 92.18%
Standard KD | 99.33% | 99.12% | 90.82% | 89.09% | 77.81% | 73.14% | 94.05%
Surrogate KD | 99.13% | 98.85% | 90.27% | 88.72% | 77.49% | 72.49% | 92.93%
Noise logits | 99.01% | 98.72% | 89.81% | 88.20% | 77.04% | 72.06% | 91.99%
DB3KD-SD | 99.15% | 98.98% | 90.86% | 89.31% | 77.66% | 72.78% | 93.18%
DB3KD-BD | 99.51% | 99.19% | 90.68% | 89.47% | 77.92% | 72.94% | 93.30%
DB3KD-MBD | 99.52% | 99.22% | 91.45% | 89.80% | 78.30% | 73.78% | 93.77%
Table 1: Performance evaluation of the proposed DB3KD approach.
### 4.1 Experiment Setup of DB3KD
We demonstrate the effectiveness of DB3KD with several widely used DNNs and
datasets as follows. (1) A LeNet-5 (LeCun et al., 1998) with two convolutional
layers is pre-trained on MNIST (LeCun et al., 1998) as the teacher, following
the configurations in (Lopes et al., 2017; Chen et al., 2019). A LeNet-5-Half
and a LeNet-5-1/5 are designed as the student networks, which contains half
and 1/5 number of convolutional filters in each layer compared to LeNet-5,
respectively. (2) The same teacher and student networks as in (1) are used but
are trained and evaluated on the Fashion-MNIST dataset. (3) An AlexNet
(Krizhevsky et al., 2012) pre-trained on CIFAR-10 (Krizhevsky et al., 2009) is
used as the teacher. An AlexNet-Half and an AlexNet-Quarter with half and 25%
filters are used as student networks. (4) A ResNet-34 (He et al., 2016) pre-
trained on the high-resolution, fine-grained dataset FLOWERS102 (Nilsback &
Zisserman, 2008) is used as the teacher, and the student is a ResNet-18.
We evaluate our approach with the three strategies for sample robustness
calculation as described in Section 3.2.1, represented as DB3KD-SD, DB3KD-BD,
and DB3KD-MBD, respectively. For DB3KD-SD, we use 100 samples from each class
to compute the sample robustness $r$ for MNIST, Fashion-MNIST, and CIFAR-10.
Since there are only 20 samples in each class of FLOWERS102, we use all of
them. Starting with these samples, $\epsilon$ is set to $1e^{-5}$ as the stop
condition of the binary search in DB3KD-BD. In DB3KD-MBD, we use 200 Gaussian
random vectors to estimate the gradient and try different numbers of queries
from 1000 to 20000 with $\xi_{d}=0.2$ to optimize the MBD and report the best
test accuracies. The sample robustness is calculated in parallel with a batch
size of 20 with FLOWERS102, and 200 with the other datasets.
With the constructed soft labels, we train the student networks for 100
epochs, using an Adam optimizer (learning rate $5e^{-3}$), for all the
datasets except for FLOWERS102, which is trained for 200 epochs. The scaling
factor $\lambda$ is set to 1 for simplicity. Since Eq. (7) has a similar
functionality to the temperature $\tau$, $\tau$ need not be as large
as in previous studies (Hinton et al., 2015). With a hyperparameter search, we
find that smaller values of $\tau$ between $0.2$ and $1.0$ lead to good performance.
We use $\tau=0.3$ in our experiments. All experiments are evaluated for 5 runs
with random seeds.
Approach | Teacher | Student | Accuracy
---|---|---|---
Cross-entropy | ResNet-34 | - | 78.63%
Cross-entropy | ResNet-18 | - | 75.91%
Standard KD | ResNet-34 | ResNet-18 | 77.18%
Surrogate KD | 76.52%
BAN∗ | 76.84%
TF-KD | 77.23%
SSKD | 76.20%
DB3KD | 77.31%
DB3KD | ResNet-50 | ResNet-18 | 78.65%
Table 2: Performance comparison to self-distillation approaches with ResNet on
CIFAR-100. $*$ indicates the results are based on our own implementation.
(a) LeNet5-MNIST
(b) LeNet5-Fashion-MNIST
(c) AlexNet-CIFAR10
(d) ResNet-FLOWERS102
Figure 4: Performance comparison with different numbers of queries for
computing sample robustness.
### 4.2 Performance Evaluation of DB3KD
The performance of DB3KD is presented in Table 1. To understand the proposed
approach better, we also present the performance of the following training
strategies. (1) The teacher and the student networks trained solely with the
cross-entropy loss. (2) The standard KD with Eq. (1) (Hinton et al., 2015).
(3) Training the student network via KD with a surrogate white-box teacher
(Surrogate KD in Table 1), which is used for simulating the scenario in which
one can train a smaller but affordable surrogate model with full access to its
parameters, in contrast to the powerful DB3 teacher. Here the surrogate has the
same architecture as the student. The performance of surrogate KD is
considered as the lower bound of DB3KD. (4) Training with the soft labels
constructed with randomly generated sample robustness (Noise logits in Table
1), which is used for verifying the effectiveness of DB3KD for soft label
construction.
We observe from the results that DB3KD works surprisingly well. With the most
straightforward strategy SD, our approach still achieves competitive
performance on all experiments compared to standard KD and outperforms
surrogate KD. When using MBD to compute sample robustness, DB3KD-MBD
outperforms standard KD on all the experiments except for FLOWERS102. On
FLOWERS102, the performance of DB3KD is slightly worse due to the complexity
of the pre-trained teacher model. However, DB3KD still outperforms the
surrogate KD by a clear margin. These results validate the effectiveness of
DB3KD and indicate that sample robustness with proper post-operations provides
an informative representation of a sample’s probabilities over all classes and
can be used as an alternative to the softmax output when only a DB3 teacher is
provided.
We also observe the following phenomena in the experiments. (1) Training with
noise logits via KD does not work and even results in worse performance than
training with cross-entropy alone. This indicates that noise logits cannot capture the
distribution of class probabilities and are even harmful due to the wrong
information they introduce. (2) Training a student with a surrogate teacher not
only results in unsatisfactory performance, but is also a difficult task due
to the low capacity of the surrogate model. Also, the performance is sensitive
to hyperparameter selection ($\lambda$, $\tau$, learning rate, etc.).
Therefore, training an extra affordable surrogate teacher is not an optimal
solution compared to DB3KD.
We notice that in some experiments, surprisingly, DB3KD even works better than
standard KD, though the models are trained with a more challenging setting. A
reasonable hypothesis is that, for some problems, the distance between a
training sample to the decision boundary may provide more information than the
softmax output. These results suggest a future research direction: in certain
cases, the dark knowledge behind the teacher’s decision boundary may be more
instructive than the teacher’s logits.
### 4.3 Comparison with Self-Distillation Approaches
Similar to our proposed scenario, in the absence of a pre-trained teacher,
self-knowledge distillation aims to improve the performance of the student by
distilling the knowledge within the network itself (Furlanello et al., 2018).
Since self-distillation approaches can also deal with our proposed scenario,
we compare the performance of DB3KD to recent self-distillation approaches,
including born-again neural networks (BAN) (Furlanello et al., 2018), teacher-
free knowledge distillation (TF-KD) (Yuan et al., 2020), and self-supervision
knowledge distillation (SSKD) (Xu et al., 2020). We use ResNet-34/18 as the
teacher and the student on CIFAR-100 for illustration. For further comparison,
we also implement DB3KD with a ResNet-50 teacher.
The results are shown in Table 2. It is observed that our approach is still
competitive in this case. With the same network configuration, our student
achieves a test accuracy of 77.31%, which outperforms other self-distillation
approaches, even with a DB3 teacher. It is also worth mentioning that, given a
fixed student, the performance of self-distillation has an upper bound because
it is teacher-free. One advantage of our approach is that the student can
leverage the information from a stronger teacher and its performance can be
further improved. As an example, when we substitute the DB3 teacher with a
ResNet-50 network and keep all other configurations unchanged, the
performance of our student network is further increased by 1.34%,
outperforming the self-distillation approaches by a clear margin.
Figure 5: (a) The average minimal boundary distances over the number of queries; error bars indicate one standard deviation. (b-d) Normalized average minimal boundary distances of the samples of different classes on MNIST, Fashion-MNIST, and CIFAR-10; darker colors indicate smaller distances between two classes.
### 4.4 Ablation Studies and Analyses of DB3KD
We conduct several ablation studies and analyses for further understanding of
the effectiveness of DB3KD.
Number of queries in label construction. We first investigate whether
different numbers of queries used for computing sample robustness have any
influence on the performance. For each dataset, we query the teacher a varying
number of times, from 1000 to 20000, to compute the sample robustness (Fig.
4). It can be observed that with more queries, the student models perform
slightly better, especially for deeper architectures (ResNet) and high-
resolution datasets (FLOWERS102). In general, the student models perform well
with various numbers of queries. Even using a binary search with around 100
queries (DB3KD-BD), the performance is satisfactory on all student models.
This is because the quality of a sample’s soft label is largely related to its
robustness against different classes. Moreover, the MBD used for computing
sample robustness shows a highly positive correlation with the number of
queries (Fig. 5(a)). The ratios of sample robustness against different classes
remain stable as the number of queries varies. Therefore, it is not necessary to
optimize the MBD with a large number of queries, which indicates that DB3KD is
query efficient. It is also worth noting that the performance is not linearly
correlated with the number of queries. This is because, for all experiments, we
use the same set of hyperparameters for fair comparison, which may not be optimal
as the query number increases. However, we would like to emphasize that the
performance is not sensitive to the query number and is satisfactory across a
wide range of numbers (from 2k to 20k).
Although the boundary may be complex in the pixel domain and the boundary
sample may be fragile, what we actually care about is the minimal boundary
distance (MBD). It measures how fragile a training sample is against
other classes and is a robust measurement. As supplementary evidence, the
standard deviations of the MBDs are relatively small (shown with the error
bars in Fig. 5(a)), indicating the robustness of the proposed approach.
Algorithm | Data | Model | MNIST | FMNIST
---|---|---|---|---
Teacher CE | Yes | White | 99.33% | 91.63%
Student CE | Yes | White | 99.11% | 90.21%
Standard KD | Yes | Black-S | 99.33% | 90.82%
FSKD | Few | White | 86.70% | 72.60%
BBKD | Few | Black-S | 98.74% | 80.90%
Meta KD | Meta | White | 92.47% | -
DAFL | No | White | 98.20% | -
ZSKD | No | White | 98.77% | 79.62%
DFKD | No | White | 99.08% | -
ZSDB3KD | No | Black-D | 96.54% | 72.31%
Table 3: Result of ZSDB3KD with MNIST and Fashion-MNIST. S: score-based
teacher. D: decision-based teacher.
Correlation between sample robustness and class probability. To further
analyze the effectiveness of DB3KD for constructing soft labels, we visualize
the normalized average MBDs of samples from different classes (Fig.
5(b-d)). It is observed that classes that are semantically closer to each other
have smaller distances to their decision boundary. For example, in MNIST, the
distance between ‘8’ and ‘9’ is smaller than that between ‘8’ and ‘1’ because ‘8’ looks
more like ‘9’ than ‘1’. Therefore, a sample of ‘8’ is assigned higher
confidence in class ‘9’ than in ‘1’. Similarly, in Fashion-MNIST, ‘T-shirt’ looks
more like ‘shirt’ than ‘sneaker’, so their distance is smaller. In
CIFAR-10, samples of the ‘dog’ class have smaller distances to the
boundary with ‘cat’ than with ‘truck’, since ‘dog’ and ‘cat’ are semantically
closer. These analyses confirm the consistency between sample robustness and
class probability distribution.
### 4.5 Experiment Setup of ZSDB3KD
We evaluate ZSDB3KD with (1) a LeNet-5 and a LeNet-5-Half (on MNIST and
Fashion-MNIST), and (2) an AlexNet and an AlexNet-Half (on CIFAR-10) as the
teacher and the student. The networks are the same as in Section 4.1.
We optimize the pseudo samples for 40 ($\xi_{o}=0.5$) and 100 iterations
($\xi_{o}=3.0$) for the two LeNet-5 and the AlexNet experiments, respectively.
The number of queries is limited to 5000 when iteratively searching for the MBD. We
generate 8000 samples for each class with a batch size of 200 for all the
experiments. We use data augmentation to enrich the transfer set (see
Appendix). We use 5000 queries for computing the sample robustness, since we
have shown that the performance is not sensitive to the number of queries.
Other parameters are the same as
the DB3KD experiments. We compare the performance of ZSDB3KD with several
popular KD approaches in more relaxed scenarios, including FSKD (Kimura et
al., 2018), BBKD (Wang et al., 2020), Meta KD (Lopes et al., 2017), DAFL (Chen
et al., 2019), ZSKD (Nayak et al., 2019) and DFKD (Wang, 2021).
Algorithm | Data | Model | Accuracy
---|---|---|---
Teacher CE | Yes | White | 79.30%
Student CE | Yes | White | 77.28%
Standard KD | Yes | Black-S | 77.81%
FSKD | Few | White | 40.58%
BBKD | Few | Black-S | 74.60%
DAFL | No | White | 66.38%
ZSKD | No | White | 69.56%
DFKD | No | White | 73.91%
Noise input | No | Black-S | 14.79%
Noise input | No | Black-D | 13.53%
ZSDB3KD | No | Black-D | 59.46%
Table 4: Result of ZSDB3KD on AlexNet with CIFAR-10.
Figure 6: Analysis and
ablation study of ZSDB3KD with MNIST. Left: evolution of pseudo images over
iterations. Middle: averaged images compared to other white-box zero-shot KD
approaches. Upper right: the accuracies with different iterations of sample
generation. Bottom right: the accuracies with different numbers of samples
used for training the student.
### 4.6 Performance Comparison of ZSDB3KD
The performance of ZSDB3KD on MNIST, Fashion-MNIST, and CIFAR-10, presented
in Tables 3 and 4, shows that ZSDB3KD achieves competitive performance.
accuracies of the student networks are $96.54\%$ and $72.31\%$ on MNIST and
Fashion-MNIST, which are quite close to other KD approaches with more relaxed
scenarios (training data or the teacher’s parameters are accessible). On
CIFAR-10, our AlexNet-Half model achieves an accuracy of 59.46% without
accessing any training samples and the softmax outputs of the teacher. It is
worth noting that using random noise as the input results in very poor
performance with a DB3 teacher. These results indicate that the samples
generated with our proposed approach indeed capture the distribution of the
samples used for training the teachers.
### 4.7 Ablation Studies and Analyses of ZSDB3KD
In this subsection, we perform several studies to understand the effectiveness
of ZSDB3KD, using LeNet-5-Half trained on MNIST as an example.
Iteration of sample generation. We first evaluate the performance of the
student with pseudo samples generated with different iterations (Fig. 6(upper
right)). As expected, the performance is improved as the samples are optimized
away from the decision boundaries with more iterations. As shown in Fig.
6(left), with more steps, more pixels in the pseudo samples are activated,
with sharper edges and recognizable digits, which indicates that the samples
become more robust as we keep moving them in the direction opposite to the
gradient on the decision boundaries.
Number of samples used for training. We then investigate the effect of the
number of pseudo samples used for training on the performance of the student
network. The results of training the student network with different numbers of
generated samples (from 1k to 8k per class) are presented in Fig. 6(bottom
right). Not surprisingly, with more samples, the test accuracy increases. Even
with a small number of samples (1k per class), the student network can still
achieve a competitive performance of 94% test accuracy. With 8k samples per
class, the student’s performance gets saturated and is comparable to the
performance of standard KD.
Visualization of generated samples. As mentioned above, we have shown the
evolution of individual samples over iterations (Fig. 6(left)), which
gradually exhibit clear digits. For a further visualization of the
generated pseudo samples, we average 1k samples for each class as
shown in Fig. 6(middle). Even though generated with a DB3 teacher, the samples
are of satisfactory quality compared with the averaged samples generated
with ZSKD and DAFL, which use white-box teachers.
## 5 Conclusion
In this study, we introduced KD from a decision-based black-box teacher for
the first time. We proposed DB3KD to deal with this problem, which uses sample
robustness to construct the soft labels for the training samples by
iteratively querying from the teacher. We also extend DB3KD to a much more
challenging scenario in which the training set is not accessible and named it
Zero-shot DB3KD (ZSDB3KD). Experiments on various networks and datasets
validated the effectiveness of the proposed approaches.
Our study motivated a new line of research on KD, in which the black-box
teacher only returns top-1 classes. It is a much more challenging scenario
because the class probabilities of the training samples need to be constructed
by iteratively querying from the DB3 teacher. With the training set
accessible, our DB3KD achieved competitive performance on FLOWERS102, whose
samples largely overlap with ImageNet. We believe that DB3KD can work
effectively on large-scale datasets. With the training samples not available,
like most of the existing works, a large amount of computing resources is
required for pseudo sample generation, making zero-shot KD hard to accomplish
with large-scale datasets. With a DB3 teacher, even more iterations are needed
compared to learning from a white-box model. Although we proposed the first
principled solution, we hope it helps to raise attention to this area and
promote more efficient approaches.
## References
* Ba & Caruana (2014) Ba, J. and Caruana, R. Do deep nets really need to be deep? In _Advances in neural information processing systems_ , pp. 2654–2662, 2014.
* Brendel et al. (2017) Brendel, W., Rauber, J., and Bethge, M. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. _arXiv preprint arXiv:1712.04248_ , 2017.
* Buciluǎ et al. (2006) Buciluǎ, C., Caruana, R., and Niculescu-Mizil, A. Model compression. In _Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining_ , pp. 535–541, 2006.
* Chen et al. (2019) Chen, H., Wang, Y., Xu, C., Yang, Z., Liu, C., Shi, B., Xu, C., Xu, C., and Tian, Q. Data-free learning of student networks. In _Proceedings of the IEEE International Conference on Computer Vision_ , pp. 3514–3522, 2019.
* Cheng et al. (2018) Cheng, M., Le, T., Chen, P.-Y., Yi, J., Zhang, H., and Hsieh, C.-J. Query-efficient hard-label black-box attack: An optimization-based approach. _arXiv preprint arXiv:1807.04457_ , 2018.
* Cheng et al. (2019) Cheng, M., Singh, S., Chen, P., Chen, P.-Y., Liu, S., and Hsieh, C.-J. Sign-opt: A query-efficient hard-label adversarial attack. _arXiv preprint arXiv:1909.10773_ , 2019.
* Clarifai (2020) Clarifai, I. Clarifai: Computer vision and AI enterprise platform. 2020. URL http://www.clarifai.com.
* Covington et al. (2016) Covington, P., Adams, J., and Sargin, E. Deep neural networks for youtube recommendations. In _Proceedings of the 10th ACM conference on recommender systems_ , pp. 191–198, 2016.
* Deng et al. (2009) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In _2009 IEEE conference on computer vision and pattern recognition_ , pp. 248–255. Ieee, 2009.
* Furlanello et al. (2018) Furlanello, T., Lipton, Z., Tschannen, M., Itti, L., and Anandkumar, A. Born again neural networks. In _International Conference on Machine Learning_ , pp. 1607–1616. PMLR, 2018.
* Goodfellow et al. (2014) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In _Advances in neural information processing systems_ , pp. 2672–2680, 2014.
* Han et al. (2015) Han, S., Mao, H., and Dally, W. J. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. _arXiv preprint arXiv:1510.00149_ , 2015.
* He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 770–778, 2016.
* He et al. (2019) He, Y., Liu, P., Wang, Z., Hu, Z., and Yang, Y. Filter pruning via geometric median for deep convolutional neural networks acceleration. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 4340–4349, 2019.
* Heo et al. (2019) Heo, B., Lee, M., Yun, S., and Choi, J. Y. Knowledge transfer via distillation of activation boundaries formed by hidden neurons. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pp. 3779–3787, 2019.
* Hinton et al. (2015) Hinton, G., Vinyals, O., and Dean, J. Distilling the knowledge in a neural network. _arXiv preprint arXiv:1503.02531_ , 2015.
* Howard et al. (2017) Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. _arXiv preprint arXiv:1704.04861_ , 2017.
* Jaderberg et al. (2014) Jaderberg, M., Vedaldi, A., and Zisserman, A. Speeding up convolutional neural networks with low rank expansions. _arXiv preprint arXiv:1405.3866_ , 2014.
* Jin et al. (2020) Jin, X., Lan, C., Zeng, W., and Chen, Z. Uncertainty-aware multi-shot knowledge distillation for image-based object re-identification. _arXiv preprint arXiv:2001.05197_ , 2020.
* Kim et al. (2018) Kim, J., Park, S., and Kwak, N. Paraphrasing complex network: Network compression via factor transfer. In _Advances in neural information processing systems_ , pp. 2760–2769, 2018.
* Kimura et al. (2018) Kimura, A., Ghahramani, Z., Takeuchi, K., Iwata, T., and Ueda, N. Few-shot learning of neural networks from scratch by pseudo example optimization. _arXiv preprint arXiv:1802.03039_ , 2018.
* Krizhevsky et al. (2009) Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
* Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. In _Advances in neural information processing systems_ , pp. 1097–1105, 2012.
* LeCun et al. (1998) LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. _Proceedings of the IEEE_ , 86(11):2278–2324, 1998.
* Li et al. (2016) Li, H., Kadav, A., Durdanovic, I., Samet, H., and Graf, H. P. Pruning filters for efficient convnets. _arXiv preprint arXiv:1608.08710_ , 2016.
* Li et al. (2020) Li, T., Li, J., Liu, Z., and Zhang, C. Few sample knowledge distillation for efficient network compression. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 14639–14647, 2020.
* Liu et al. (2019) Liu, Y., Moosavi-Dezfooli, S.-M., and Frossard, P. A geometry-inspired decision-based attack. In _Proceedings of the IEEE International Conference on Computer Vision_ , pp. 4890–4898, 2019.
* Lopes et al. (2017) Lopes, R. G., Fenu, S., and Starner, T. Data-free knowledge distillation for deep neural networks. _arXiv preprint arXiv:1710.07535_ , 2017.
* López et al. (2017) López, G., Quesada, L., and Guerrero, L. A. Alexa vs. siri vs. cortana vs. google assistant: a comparison of speech-based natural user interfaces. In _International Conference on Applied Human Factors and Ergonomics_ , pp. 241–250. Springer, 2017.
* Micaelli & Storkey (2019) Micaelli, P. and Storkey, A. J. Zero-shot knowledge transfer via adversarial belief matching. In _Advances in Neural Information Processing Systems_ , pp. 9551–9561, 2019.
* Moskalenko et al. (2018) Moskalenko, V., Moskalenko, A., Korobov, A., Boiko, O., Martynenko, S., and Borovenskyi, O. Model and training methods of autonomous navigation system for compact drones. In _2018 IEEE Second International Conference on Data Stream Mining & Processing (DSMP)_, pp. 503–508. IEEE, 2018.
* Nayak et al. (2019) Nayak, G. K., Mopuri, K. R., Shaj, V., Babu, R. V., and Chakraborty, A. Zero-shot knowledge distillation in deep networks. _arXiv preprint arXiv:1905.08114_ , 2019.
* Nilsback & Zisserman (2008) Nilsback, M.-E. and Zisserman, A. Automated flower classification over a large number of classes. In _2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing_, pp. 722–729. IEEE, 2008.
* Orekondy et al. (2019) Orekondy, T., Schiele, B., and Fritz, M. Knockoff nets: Stealing functionality of black-box models. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 4954–4963, 2019.
* Passalis et al. (2020) Passalis, N., Tzelepi, M., and Tefas, A. Heterogeneous knowledge distillation using information flow modeling. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 2339–2348, 2020.
* Phuong & Lampert (2019) Phuong, M. and Lampert, C. Towards understanding knowledge distillation. In _International Conference on Machine Learning_ , pp. 5142–5151. PMLR, 2019.
* Romero et al. (2014) Romero, A., Ballas, N., Kahou, S. E., Chassang, A., Gatta, C., and Bengio, Y. Fitnets: Hints for thin deep nets. _arXiv preprint arXiv:1412.6550_ , 2014.
* Wang et al. (2020) Wang, D., Li, Y., Wang, L., and Gong, B. Neural networks are more productive teachers than human raters: Active mixup for data-efficient knowledge distillation from a blackbox model. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 1498–1507, 2020.
* Wang et al. (2018) Wang, Y., Du, S., Balakrishnan, S., and Singh, A. Stochastic zeroth-order optimization in high dimensions. In _International Conference on Artificial Intelligence and Statistics_ , pp. 1356–1365, 2018.
* Wang (2021) Wang, Z. Data-free knowledge distillation with soft targeted transfer set synthesis. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 35, pp. 10245–10253, 2021.
* Wang et al. (2021) Wang, Z., Li, C., and Wang, X. Convolutional neural network pruning with structural redundancy reduction. _arXiv preprint arXiv:2104.03438_ , 2021.
* Xu et al. (2020) Xu, G., Liu, Z., Li, X., and Loy, C. C. Knowledge distillation meets self-supervision. In _European Conference on Computer Vision_ , pp. 588–604. Springer, 2020.
* Yim et al. (2017) Yim, J., Joo, D., Bae, J., and Kim, J. A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 4133–4141, 2017.
* Yin et al. (2020) Yin, H., Molchanov, P., Alvarez, J. M., Li, Z., Mallya, A., Hoiem, D., Jha, N. K., and Kautz, J. Dreaming to distill: Data-free knowledge transfer via deepinversion. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 8715–8724, 2020.
* Yuan et al. (2020) Yuan, L., Tay, F. E., Li, G., Wang, T., and Feng, J. Revisiting knowledge distillation via label smoothing regularization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 3903–3911, 2020.
* Yun et al. (2020) Yun, S., Park, J., Lee, K., and Shin, J. Regularizing class-wise predictions via self-knowledge distillation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 13876–13885, 2020.
* Zagoruyko & Komodakis (2016) Zagoruyko, S. and Komodakis, N. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. _arXiv preprint arXiv:1612.03928_ , 2016.
## Appendix
### A. Architectures Used in DB3KD and ZSDB3KD Experiments
We use several networks to evaluate the performance of DB3KD and ZSDB3KD.
LeNet-5 (teacher)/ LeNet-5-Half (student) and AlexNet (teacher)/ AlexNet-Half
(student) are used for both DB3KD and ZSDB3KD experiments (Table S1-S4). For
the DB3KD experiments, we also design two student networks, i.e., LeNet-5-1/5
and AlexNet-Quarter for further evaluation (Table S5-S6). We also conduct
experiments with ResNet-34 (teacher)/ ResNet-18 (student) with DB3KD and the
architectures used are the same as the original ResNet architectures.
Index | Layer | Type | Feature map | Kernel size | Stride | Padding | Activation
---|---|---|---|---|---|---|---
0 | Input | Input | 1 | - | - | - | -
1 | conv1 | conv | 20 | 5x5 | 1 | 0 | ReLu
2 | maxpool1 | pooling | - | 2x2 | 2 | 1 | -
3 | conv2 | conv | 50 | 5x5 | 1 | 0 | ReLu
4 | maxpool2 | pooling | - | 2x2 | 2 | 1 | -
5 | fc1 | fc | 200 | - | - | - | ReLu
6 | fc2 | fc | 10 | - | - | - | Softmax
Table S1: The architecture of the teacher model LeNet-5 with MNIST and Fashion-MNIST, for both DB3KD and ZSDB3KD experiments.
Index | Layer | Type | Feature map | Kernel size | Stride | Padding | Activation
---|---|---|---|---|---|---|---
0 | Input | Input | 1 | - | - | - | -
1 | conv1 | conv | 10 | 5x5 | 1 | 0 | ReLu
2 | maxpool1 | pooling | - | 2x2 | 2 | 1 | -
3 | conv2 | conv | 25 | 5x5 | 1 | 0 | ReLu
4 | maxpool2 | pooling | - | 2x2 | 2 | 1 | -
5 | fc1 | fc | 100 | - | - | - | ReLu
6 | fc2 | fc | 10 | - | - | - | Softmax
Table S2: The architecture of the student model LeNet-5-Half with MNIST and Fashion-MNIST, for both DB3KD and ZSDB3KD experiments.
Index | Layer | Type | Feature map | Kernel size | Stride | Padding | Activation
---|---|---|---|---|---|---|---
0 | Input | Input | 1 | - | - | - | -
1 | conv1 | conv | 64 | 3x3 | 2 | 1 | ReLU
2 | maxpool1 | pooling | - | 3x3 | 2 | 0 | -
3 | bn1 | batch norm | - | - | - | - | -
4 | conv2 | conv | 192 | 3x3 | 1 | 2 | ReLU
5 | maxpool2 | pooling | - | 3x3 | 2 | 0 | -
6 | bn2 | batch norm | - | - | - | - | -
7 | conv3 | conv | 384 | 3x3 | 1 | 1 | ReLU
8 | bn3 | batch norm | - | - | - | - | -
9 | conv4 | conv | 256 | 3x3 | 1 | 1 | ReLU
10 | bn4 | batch norm | - | - | - | - | -
11 | conv5 | conv | 256 | 3x3 | 1 | 1 | ReLU
12 | maxpool3 | pooling | - | 3x3 | 2 | 0 | -
13 | bn5 | batch norm | - | - | - | - | -
14 | fc1 | fc | 4096 | - | - | - | ReLU
15 | bn6 | batch norm | - | - | - | - | -
16 | fc2 | fc | 4096 | - | - | - | ReLU
17 | bn7 | batch norm | - | - | - | - | -
18 | fc3 | fc | 10 | - | - | - | Softmax
Table S3: The architecture of the teacher model AlexNet with CIFAR-10, for both DB3KD and ZSDB3KD experiments.

Index | Layer | Type | Feature map | Kernel size | Stride | Padding | Activation
---|---|---|---|---|---|---|---
0 | Input | Input | 1 | - | - | - | -
1 | conv1 | conv | 32 | 3x3 | 2 | 1 | ReLU
2 | maxpool1 | pooling | - | 3x3 | 2 | 0 | -
3 | bn1 | batch norm | - | - | - | - | -
4 | conv2 | conv | 96 | 3x3 | 1 | 2 | ReLU
5 | maxpool2 | pooling | - | 3x3 | 2 | 0 | -
6 | bn2 | batch norm | - | - | - | - | -
7 | conv3 | conv | 192 | 3x3 | 1 | 1 | ReLU
8 | bn3 | batch norm | - | - | - | - | -
9 | conv4 | conv | 128 | 3x3 | 1 | 1 | ReLU
10 | bn4 | batch norm | - | - | - | - | -
11 | conv5 | conv | 128 | 3x3 | 1 | 1 | ReLU
12 | maxpool3 | pooling | - | 3x3 | 2 | 0 | -
13 | bn5 | batch norm | - | - | - | - | -
14 | fc1 | fc | 2048 | - | - | - | ReLU
15 | bn6 | batch norm | - | - | - | - | -
16 | fc2 | fc | 2048 | - | - | - | ReLU
17 | bn7 | batch norm | - | - | - | - | -
18 | fc3 | fc | 10 | - | - | - | Softmax
Table S4: The architecture of the student model AlexNet-Half with CIFAR-10, for both DB3KD and ZSDB3KD experiments.

Index | Layer | Type | Feature map | Kernel size | Stride | Padding | Activation
---|---|---|---|---|---|---|---
0 | Input | Input | 1 | - | - | - | -
1 | conv1 | conv | 4 | 5x5 | 1 | 0 | ReLU
2 | maxpool1 | pooling | - | 2x2 | 2 | 1 | -
3 | conv2 | conv | 10 | 5x5 | 1 | 0 | ReLU
4 | maxpool2 | pooling | - | 2x2 | 2 | 1 | -
5 | fc1 | fc | 40 | - | - | - | ReLU
6 | fc2 | fc | 10 | - | - | - | Softmax
Table S5: The architecture of the student model LeNet-5-1/5 with MNIST and Fashion-MNIST, for DB3KD experiments.

Index | Layer | Type | Feature map | Kernel size | Stride | Padding | Activation
---|---|---|---|---|---|---|---
0 | Input | Input | 1 | - | - | - | -
1 | conv1 | conv | 16 | 3x3 | 2 | 1 | ReLU
2 | maxpool1 | pooling | - | 3x3 | 2 | 0 | -
3 | bn1 | batch norm | - | - | - | - | -
4 | conv2 | conv | 48 | 3x3 | 1 | 2 | ReLU
5 | maxpool2 | pooling | - | 3x3 | 2 | 0 | -
6 | bn2 | batch norm | - | - | - | - | -
7 | conv3 | conv | 96 | 3x3 | 1 | 1 | ReLU
8 | bn3 | batch norm | - | - | - | - | -
9 | conv4 | conv | 64 | 3x3 | 1 | 1 | ReLU
10 | bn4 | batch norm | - | - | - | - | -
11 | conv5 | conv | 64 | 3x3 | 1 | 1 | ReLU
12 | maxpool3 | pooling | - | 3x3 | 2 | 0 | -
13 | bn5 | batch norm | - | - | - | - | -
14 | fc1 | fc | 1024 | - | - | - | ReLU
15 | bn6 | batch norm | - | - | - | - | -
16 | fc2 | fc | 1024 | - | - | - | ReLU
17 | bn7 | batch norm | - | - | - | - | -
18 | fc3 | fc | 10 | - | - | - | Softmax
Table S6: The architecture of the student model AlexNet-Quarter with CIFAR-10,
for DB3KD experiments.
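For reference, here is a minimal PyTorch sketch of the student LeNet-5-Half as we read Table S2 (this is not the authors' released code; the flattened input size of fc1 depends on the image resolution, so it is inferred lazily):

```python
import torch.nn as nn

class LeNet5Half(nn.Module):
    """Student model per Table S2: conv(10)-pool-conv(25)-pool-fc(100)-fc(10)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 10, kernel_size=5, stride=1, padding=0),   # conv1
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=1),       # maxpool1
            nn.Conv2d(10, 25, kernel_size=5, stride=1, padding=0),  # conv2
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=1),       # maxpool2
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100),           # fc1; in_features inferred at first use
            nn.ReLU(),
            nn.Linear(100, num_classes),  # fc2; softmax is applied in the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```

The other tables translate analogously, with the listed feature-map counts.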
### B. Experiment Details
#### B.1. Training of the Models with Cross-Entropy
In this subsection, we introduce the details of training the models with the
cross-entropy loss, both for the pre-trained models used as the DB3 teachers
and for the student models trained solely with the cross-entropy loss, as
reported in Tables 1, 2, 3, and 4.
**LeNet-5 on MNIST and Fashion-MNIST.** For the LeNet-5 architecture on MNIST
and Fashion-MNIST, we train the teacher model for 200 epochs with a batch size
of 1024 and an Adam optimizer with a learning rate of 0.001. For the student
models trained with cross-entropy (reported in Tables 1 and 3), we use the
same hyperparameters as above.
**AlexNet on CIFAR-10.** For the AlexNet architecture on CIFAR-10, we train
the teacher model for 300 epochs with a batch size of 1024 and an SGD
optimizer. We set the momentum to 0.9 and the weight decay to 0.0001. The
learning rate is set to 0.1 at the beginning and is divided by 10 at epochs
60, 120, and 180. For the student models trained with cross-entropy (reported
in Tables 1 and 4), we use the same hyperparameters as above.
**ResNet on CIFAR-100.** For ResNet-{50,34} on CIFAR-100, we train the teacher
models for 300 epochs with a batch size of 256 and an SGD optimizer. We set
the momentum to 0.9 and the weight decay to 0.0001. The learning rate is set
to 0.1 at the beginning and is divided by 10 at epochs 60, 120, and 180. For
the student model (ResNet-18) trained with cross-entropy (reported in Table
2), we use the same hyperparameters as above.
**ResNet-34 on FLOWERS102.** For the ResNet-34 architecture on FLOWERS102, we
start from the model pre-trained on ImageNet provided by PyTorch and fine-tune
it for 200 epochs with an SGD optimizer. We set the batch size to 64 and the
momentum to 0.9. The learning rate is set to 0.01 at the beginning and lowered
to 0.005 and 0.001 at epochs 60 and 100, respectively. For the student model
(ResNet-18) trained with cross-entropy (reported in Table 1), we use the same
hyperparameters as above.
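These schedules translate directly into PyTorch; a sketch for the AlexNet/CIFAR-10 teacher is given below (function and variable names are ours, not the authors'):

```python
import torch
import torch.nn.functional as F

def train_teacher(model, loader, epochs=300, device="cuda"):
    # SGD with momentum 0.9 and weight decay 1e-4; the learning rate starts
    # at 0.1 and is divided by 10 at epochs 60, 120, and 180, as described above.
    opt = torch.optim.SGD(model.parameters(), lr=0.1,
                          momentum=0.9, weight_decay=1e-4)
    sched = torch.optim.lr_scheduler.MultiStepLR(
        opt, milestones=[60, 120, 180], gamma=0.1)
    model.to(device).train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
        sched.step()
    return model
```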
#### B.2. Standard Knowledge Distillation Training Details
For the standard knowledge distillation results reported in Tables 1, 2, 3,
and 4, we train the student models via standard KD with the following
hyperparameters. The scaling factor $\lambda$ that balances the importance of
cross-entropy loss and knowledge distillation loss is set to 1. The Adam
optimizer is used for all experiments and the student networks are trained for
200 epochs with a temperature of 20. For the experiments with MNIST, Fashion-
MNIST, and CIFAR-10, we set the batch size to 512; for the experiments with
CIFAR-100 and FLOWERS102, we set the batch size to 64. The learning rate is
set to 0.001 for MNIST and Fashion-MNIST, 0.005 for CIFAR-10/100, and 0.0005
for FLOWERS102.
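For concreteness, here is a sketch of the standard KD objective with the hyperparameters above ($\lambda=1$, temperature 20); we assume the usual Hinton-style softened-KL formulation, which this section does not spell out:

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=20.0, lam=1.0):
    """Cross-entropy plus temperature-scaled KL to the teacher's soft outputs."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return ce + lam * kl
```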
#### B.3. Surrogate Knowledge Distillation Training Details
Training the student networks by transferring the knowledge from a surrogate,
low-capacity white-box teacher whose parameters can be fully accessed is
sensitive to hyperparameter selection. We did an extensive hyperparameter
search in our experiments and report the best numbers in Table 1. We use the
hyperparameters listed below. The optimizer and batch size used for surrogate
KD are the same as in standard KD. We train the student models for 300 epochs
for all experiments. For MNIST and Fashion-MNIST, the scaling factor $\lambda$
is set to 0.7, the temperature is set to 3, and the learning rate is set to
0.005. For CIFAR-10/100, $\lambda$ is set to 0.5, the temperature is set to 5,
and the learning rate is set to 0.005. For FLOWERS102, $\lambda$ is set to 1,
the temperature is set to 10, and the learning rate is set to 0.001.
#### B.4. Data Augmentation Used in ZSDB3KD Experiments
In the ZSDB3KD experiments, we found that data augmentation can improve
performance. Since the number of queries used for soft-label construction has
only a minor effect on performance, as shown in the DB3KD experiments (Fig.
4), we can apply various augmentation strategies to enrich the transfer set at
an affordable extra computing cost. In our study, we implement the following
data augmentation strategies.
* •
(1) Padding and crop. We first pad two pixels on each side of the generated
samples and crop them back to the original size, sliding the crop window from
the upper left corner to the bottom right corner with an interval of 1.
* •
(2) Horizontal and vertical flip. We flip the generated samples horizontally
and vertically to create mirrored samples.
* •
(3) Rotation. We rotate each generated image starting from $-15^{\circ}$ to
$15^{\circ}$ with an interval of $5^{\circ}$ to create 6 more rotated samples.
* •
(4) Flip after padding and crop. We flip the images after (1), horizontally
and vertically.
* •
(5) Rotation after padding and crop. We rotate the images after (1), using the
same operation as (3).
For the MNIST and Fashion-MNIST datasets, only strategies (1) and (2) are
used. For the CIFAR-10 dataset, all five strategies are used. For the DB3KD
experiment with CIFAR-100, we also use the above five strategies.
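A sketch of strategies (1)–(3) applied to a batch of generated pseudo samples, using torchvision's functional API (the tooling is our assumption; strategies (4)–(5) compose (2) and (3) with (1) in the same way):

```python
import torch
import torchvision.transforms.functional as TF

def augment(samples):
    """samples: (N, C, H, W) tensor of pseudo samples; returns enlarged batch."""
    out = [samples]
    h, w = samples.shape[-2:]
    padded = TF.pad(samples, [2, 2, 2, 2])                # (1) pad 2 px per side,
    for top in range(5):                                  # then slide the HxW
        for left in range(5):                             # crop window with an
            out.append(TF.crop(padded, top, left, h, w))  # interval of 1
    out.append(TF.hflip(samples))                         # (2) horizontal flip
    out.append(TF.vflip(samples))                         #     and vertical flip
    for deg in range(-15, 16, 5):                         # (3) rotations from -15
        if deg != 0:                                      # to 15 deg, interval 5:
            out.append(TF.rotate(samples, float(deg)))    # 6 extra samples
    return torch.cat(out)
```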
It is also worth mentioning that after generating images with the above
operations, the top-1 class of some samples changes. When this happens, we use
the approach described in Section 3 to find the sample’s corresponding point
on the targeted decision boundary, i.e., $x^{*}$, and recover its top-1 class
to that of the sample before augmentation.
Table S7 presents the performance comparison with and without data
augmentation on each dataset used in the ZSDB3KD experiments. It is observed
that training the student networks with more samples augmented with the above
strategies can improve the performance.
Dataset | Acc. without augmentation | Acc. with augmentation
---|---|---
MNIST | 94.20% | 96.54%
Fashion-MNIST | 67.24% | 72.31%
CIFAR-10 | 37.58% | 59.46%
Table S7: Performance comparison of the ZSDB3KD experiments with and without
data augmentation, with LeNet-5-Half on the MNIST and Fashion-MNIST datasets,
and with AlexNet-Half on the CIFAR-10 dataset, respectively.
### C. More Experiment Results
#### C.1. Comparison of the Sample Robustness Computed with DB3KD and the Logits Generated by the Teacher
To further understand the effectiveness of the label construction with sample
robustness in our DB3KD approach, we visualize the sample distances computed
with the softmax outputs of the teacher networks, by accessing the teachers’
parameters. We first feed the training samples to the teacher model and obtain
the softmax output. For a training sample, a larger probability assigned to a
class means a smaller distance between the sample and that class. Therefore,
we simply use _1 - class probability_ to represent the sample distance. The
results are presented in Fig. S1. It can be observed that the visualized
heatmaps look similar to those visualized with the sample robustness computed
with our DB3 approach (Fig. 5(b-d)). For example, both of the MNIST heatmaps
indicate that digit ’4’ is close to digit ’9’. For Fashion-MNIST, Fig. S1(b)
shows that class ’T-shirt’ is semantically close to classes ’Shirt’ and
’Pullover’, which is consistent with the results in Fig. 5(c). These results
further validate that our proposed approach of constructing soft labels with
sample robustness is meaningful.
Figure S1: Normalized average distances of the samples of different classes,
computed with the softmax outputs of the pre-trained teachers: (a) MNIST, (b)
Fashion-MNIST, (c) CIFAR-10. Darker colors indicate smaller distances between
two classes.
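A minimal sketch (not the authors' code) of how the matrices in Fig. S1 can be computed from a white-box teacher:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def class_distance_matrix(teacher, loader, num_classes=10, device="cpu"):
    """Average '1 - class probability' per (true class, target class) pair."""
    sums = torch.zeros(num_classes, num_classes)
    counts = torch.zeros(num_classes)
    teacher.eval().to(device)
    for images, labels in loader:
        probs = F.softmax(teacher(images.to(device)), dim=1).cpu()
        dist = 1.0 - probs                     # larger prob = smaller distance
        for c in range(num_classes):
            mask = labels == c
            if mask.any():
                sums[c] += dist[mask].sum(dim=0)
                counts[c] += mask.sum()
    return sums / counts.unsqueeze(1)          # row c: avg distances of class c
```

Row-normalizing this matrix and plotting it as a heatmap reproduces the visualization described above.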
#### C.2. Ablation Studies of ZSDB3KD on Fashion-MNIST and CIFAR-10
Figure S2: Performance of ZSDB3KD on the Fashion-MNIST dataset with (a)
different numbers of iterations for sample generation and (b) pseudo samples
used for KD training. Data augmentation is not used for the study.
Figure S3: Performance of ZSDB3KD on the CIFAR-10 dataset with (a) different
numbers of iterations for sample generation and (b) pseudo samples used for KD
training. Data augmentation is not used for the study.
Similar to the ablation studies of ZSDB3KD on the MNIST dataset, we also
investigate the effect of (1) different numbers of iterations for sample
generation and (2) different numbers of pseudo samples used for KD training on
the performance of the student networks (without using data augmentation). The
results are presented in Fig. S2 and Fig. S3, respectively.
Similar to the results on MNIST, we observe that with more iterations of
sample optimization, more robust pseudo samples are generated and the
performance of the student networks trained via DB3KD increases. For example,
when optimizing the randomly generated noise for only 5 iterations, the
performance of the student network on Fashion-MNIST is below 58% without data
augmentation. After 40 iterations, the performance increases by around 7%. The
performance of the AlexNet-Half network on CIFAR-10 is only around 15% when
using pseudo samples optimized for only 10 iterations; after 70 iterations, it
increases to 37%.
The test accuracies of the student networks are also higher when using more
pseudo samples as the transfer set. For the Fashion-MNIST dataset, the
performance increases from 61.39% to 67.24% as the number of pseudo samples
used as the transfer set increases from 1000 to 8000 per category. For the
CIFAR-10 dataset, the performance is less than 28% when using only 1000
samples per class. When the number of samples for each class increases to
8000, an accuracy of 37.58% can be achieved.
|
# A simple algorithm for expanding a power series
as a continued fraction
Alan D. Sokal
Department of Mathematics
University College London
London WC1E 6BT
UNITED KINGDOM
Department of Physics
New York University
726 Broadway
New York, NY 10003
USA
<EMAIL_ADDRESS>
(June 30, 2022; revised December 15, 2022)
###### Abstract
I present and discuss an extremely simple algorithm for expanding a formal
power series as a continued fraction. This algorithm, which goes back to Euler
(1746) and Viscovatov (1805), deserves to be better known. I also discuss the
connection of this algorithm with the work of Gauss (1812), Stieltjes (1889),
Rogers (1907) and Ramanujan, and a combinatorial interpretation based on the
work of Flajolet (1980).
Key Words: Formal power series, continued fraction, Euler–Viscovatov
algorithm, Gauss’s continued fraction, Euler–Gauss recurrence method, Motzkin
path, Dyck path, Stieltjes table, Rogers’ addition formula.
Mathematics Subject Classification (MSC 2010) codes: 30B70 (Primary); 05A10,
05A15, 05A19 (Secondary).
* Surely the story unfolded here emphasizes how valuable it is to study and understand the central ideas behind major pieces of mathematics produced by giants like Euler.
— George Andrews [3, p. 284]
## 1 Introduction
The expansion of power series into continued fractions goes back nearly 300
years. Euler [41] showed circa 1746 that (footnote 1: The paper [41], which is
E247 in Eneström’s [39] catalogue, was probably written circa 1746; it was
presented to the St. Petersburg Academy in 1753 and published in 1760. See
also [13, 12, 99] for some commentary on the analytic aspects of this paper.)
$\sum_{n=0}^{\infty}n!\>t^{n}\;=\;\cfrac{1}{1-\cfrac{1t}{1-\cfrac{1t}{1-\cfrac{2t}{1-\cfrac{2t}{1-\cfrac{3t}{1-\cfrac{3t}{1-\cdots}}}}}}}$
(1.1)
and more generally that
$\sum_{n=0}^{\infty}a(a+1)(a+2)\cdots(a+n-1)\>t^{n}\;=\;\cfrac{1}{1-\cfrac{at}{1-\cfrac{1t}{1-\cfrac{(a+1)t}{1-\cfrac{2t}{1-\cfrac{(a+2)t}{1-\cfrac{3t}{1-\cdots}}}}}}}\;\,.$
(1.2)
Lambert [70] showed circa 1761 that (footnote 2: Several sources (e.g. [68,
105] [22, p. 110] [71, p. 327]) date Lambert’s proof to 1761, though I am not
sure what the evidence for this is. Lambert’s paper was read to the Royal
Prussian Academy of Sciences in 1767, and published in 1768. See [68, 105] for
analyses of Lambert’s remarkable work.)
${\tan t\over t}\;=\;\cfrac{1}{1-\cfrac{{1\over 1\cdot
3}t^{2}}{1-\cfrac{{1\over 3\cdot 5}t^{2}}{1-\cfrac{{1\over 5\cdot
7}t^{2}}{1-\cfrac{{1\over 7\cdot 9}t^{2}}{1-\cdots}}}}}$ (1.3)
and used it to prove the irrationality of $\pi$ [68, 105] (footnote 3: In
fact, as noted by Brezinski [22, p. 110], a formula equivalent to (1.3)
appears already in Euler’s first paper on continued fractions [40]: see top of
p. 321 in the English translation. The paper [40], which is E71 in Eneström’s
[39] catalogue, was presented to the St. Petersburg Academy in 1737 and
published in 1744.). Many similar expansions were discovered in the eighteenth
and nineteenth centuries: most notably, Gauss [51] found in 1812 a continued-
fraction expansion for the ratio of two contiguous hypergeometric functions
${}_{2}F_{1}$, from which many previously obtained expansions can be deduced
by specialization or taking limits [104, Chapter XVIII]. A detailed history of
continued fractions can be found in the fascinating book of Brezinski [22].
Let us stress that this subject has two facets: algebraic and analytic. The
algebraic theory treats both sides of identities like (1.1)–(1.3) as formal
power series in the indeterminate $t$; convergence plays no role (footnote 4:
See [78], [58, Chapter 1] or [109, Chapter 2] for an introduction to formal
power series; and see [21, Section IV.4] for a more complete treatment.).
Thus, (1.1) is a perfectly meaningful (and true!) identity for formal power
series, despite the fact that the left-hand side has zero radius of
convergence. By contrast, the analytic theory seeks to understand the regions
of the complex $t$-plane in which the left or right sides of the identity are
well-defined, to study whether they are equal there, and to investigate
possible analytic continuations. In this paper we shall be concerned solely
with the algebraic aspect; indeed, the coefficients in our formulae need not
be complex numbers, but may lie in an arbitrary field $F$.
The central goal of this paper is to present and discuss an extremely simple
algorithm for expanding a formal power series
$f(t)=\sum\limits_{n=0}^{\infty}a_{n}t^{n}$ with $a_{0}\neq 0$ as a continued
fraction of the form
$f(t)\;=\;\cfrac{\alpha_{0}}{1-\cfrac{\alpha_{1}t^{p_{1}}}{1-\cfrac{\alpha_{2}t^{p_{2}}}{1-\cdots}}}$
(1.4)
with integer powers $p_{i}\geq 1$, or more generally as
$f(t)\;=\;\cfrac{\alpha_{0}}{1-\sum\limits_{j=1}^{M_{1}}\delta_{1}^{(j)}t^{j}-\cfrac{\alpha_{1}t^{p_{1}}}{1-\sum\limits_{j=1}^{M_{2}}\delta_{2}^{(j)}t^{j}-\cfrac{\alpha_{2}t^{p_{2}}}{1-\cdots}}}$
(1.5)
with integers $M_{i}\geq 0$ and $p_{i}\geq M_{i}+1$. Most generally, we will
consider continued fractions of the form
$f(t)\;=\;\cfrac{A_{0}(t)}{1-\Delta_{1}(t)-\cfrac{A_{1}(t)}{1-\Delta_{2}(t)-\cfrac{A_{2}(t)}{1-\cdots}}}$
(1.6)
where $A_{0}(t)$ is a formal power series with nonzero constant term, and
$\Delta_{k}(t)$ and $A_{k}(t)$ for $k\geq 1$ are formal power series with zero
constant term.
In the classical literature on continued fractions [81, 104, 65, 61, 73, 27],
(1.4) is called a (general) C-fraction [72]; with $p_{1}=p_{2}=\ldots=1$ it is
called a regular C-fraction; and (1.5) with $M_{1}=M_{2}=\ldots=1$ and
$p_{1}=p_{2}=\ldots=2$ is called an associated continued fraction. In the
recent combinatorial literature on continued fractions [43, 101], the regular
C-fraction is called a Stieltjes-type continued fraction (or S-fraction), and
the associated continued fraction is called a Jacobi-type continued fraction
(or J-fraction) (footnote 5: In the classical literature on continued
fractions, the terms “S-fraction” and “J-fraction” refer to closely related
but different objects.).
Although the algorithm to be presented here is more than two centuries old, it
does not seem to be very well known, or its simplicity adequately appreciated.
In the special case (1.1) it goes back to Euler in 1746 [41, section 21], as
will be explained in Section 3 below. While reading [41] I realized that
Euler’s method is in fact a completely general algorithm, applicable to
arbitrary power series (perhaps Euler himself already knew this). Only later
did I learn that a substantially equivalent algorithm was proposed by
Viscovatov [102] in 1805 and presented in modern notation in the book of
Khovanskii [65, pp. 27–31] (footnote 6: Some post-Khovanskii books on
continued fractions also discuss the Viscovatov algorithm [28, pp. 2, 16–17,
89–90] [73, pp. 259–265] [11, pp. 133–141] [27, pp. 20–21, 112–113, 118–119],
but in my opinion they do not sufficiently stress its simplicity and
importance. Viscovatov’s work is also discussed briefly in Brezinski’s
historical survey [22, p. 190]. Perron, in his classic monograph [81],
mentions in a footnote the “useful recursive formula” of Viscovatov [81, 1st
ed., p. 304; 2nd ed., p. 304; 3rd ed., vol. 2, p. 120], but without further
explanation. See also the Historical Remark at (2.12)/(2.13) below.). I
therefore refer to it as the Euler–Viscovatov algorithm. This algorithm was
rediscovered several times in the
the mid-twentieth century [53] [107] [79, 76, 77] and probably many times
earlier as well. I would be very grateful to any readers who could point out
additional relevant references.
Many other algorithms for expanding a power series as a continued fraction are
known, notably the quotient-difference algorithm [61, Section 7.1.2]. The key
advantage of the algorithm presented here is that it avoids all nonlinear
operations on power series (such as multiplication or division).
But the Euler–Viscovatov algorithm is more than just an algorithm for
computing continued fractions; suitably reinterpreted, it becomes a method for
proving continued fractions. Since this method was employed implicitly by
Euler [41, section 21] for proving (1.1) and explicitly by Gauss [51, sections
12–14] for proving his continued fraction for ratios of contiguous
${}_{2}F_{1}$, I shall call it the Euler–Gauss
recurrence method, and I will illustrate it with a variety of examples.
Unless stated otherwise, I shall assume that the coefficients $a_{i}$,
$\alpha_{i}$ and $\delta_{i}^{(j)}$ belong to a field $F$. Later I shall make
some brief remarks about what happens when the coefficients lie instead in a
commutative ring-with-identity-element $R$.
## 2 Expansion as a C-fraction
To each continued fraction of the form (1.4) there manifestly corresponds a
unique formal power series $f(t)=\sum_{n=0}^{\infty}a_{n}t^{n}$; and clearly
$\alpha_{0}=0$ if and only if $f(t)$ is identically zero. Since we are always
assuming that $a_{0}\neq 0$, it follows that $\alpha_{0}=a_{0}\neq 0$.
We say that a continued fraction of the form (1.4) with $\alpha_{0}\neq 0$ is
terminating of length $\bm{k}$ ($k\geq 0$) if
$\alpha_{1},\ldots,\alpha_{k}\neq 0$ and $\alpha_{k+1}=0$; we say that it is
nonterminating if all the $\alpha_{i}$ are nonzero. Two continued fractions of
the form (1.4) will be called equivalent if they are both terminating of the
same length $k$ and they have the same values for
$\alpha_{1},\ldots,\alpha_{k}$ and $p_{1},\ldots,p_{k}$ (and of course for
$\alpha_{k+1}=0$); they then correspond to the same power series $f(t)$,
irrespective of the values of $\alpha_{k+2},\alpha_{k+3},\ldots$ and
$p_{k+1},p_{k+2},\ldots\,$, which play no role whatsoever.
We shall use the notation $[t^{m}]\,g(t)$ to denote the coefficient of $t^{m}$
in the formal power series $g(t)$.
Given a continued fraction of the form (1.4), let us define for $k\geq 0$
$f_{k}(t)\;=\;\cfrac{1}{1-\cfrac{\alpha_{k+1}t^{p_{k+1}}}{1-\cfrac{\alpha_{k+2}t^{p_{k+2}}}{1-\cdots}}}\;\,;$
(2.1)
of course these are formal power series with constant term 1. We thus have
$f(t)=\alpha_{0}f_{0}(t)$ and the recurrence
$f_{k}(t)\;=\;{1\over 1\,-\,\alpha_{k+1}t^{p_{k+1}}f_{k+1}(t)}\qquad\hbox{for
$k\geq 0$}\;.$ (2.2)
Given $f(t)$, we can reconstruct $(\alpha_{k})_{k\geq 0}$, $(p_{k})_{k\geq 1}$
and $(f_{k})_{k\geq 0}$ by the following obvious algorithm:
**Primitive algorithm.**
1. Set $\alpha_{0}=a_{0}=[t^{0}]\,f(t)$ and $f_{0}(t)=\alpha_{0}^{-1}f(t)$.
2. For $k=1,2,3,\ldots$, do:
* (a)
If $f_{k-1}(t)=1$, set $\alpha_{k}=0$ and terminate. [Then
$\alpha_{k+1},\alpha_{k+2},\ldots$ and $p_{k},p_{k+1},\ldots$ can be given
completely arbitrary values.]
* (b)
If $f_{k-1}(t)\neq 1$, let $p_{k}$ be the smallest index $n\geq 1$ such that
$[t^{n}]\,f_{k-1}(t)\neq 0$; set $\alpha_{k}=[t^{p_{k}}]\,f_{k-1}(t)$; and set
$f_{k}(t)\;=\;\alpha_{k}^{-1}t^{-p_{k}}\biggl{(}1\,-\,{1\over
f_{k-1}(t)}\biggr{)}\;.$ (2.3)
If this algorithm terminates, then obviously $f$ is a rational function.
Conversely, if $f$ is a rational function, then it is not difficult to show,
by looking at the degrees of numerator and denominator, that the algorithm
must terminate. (I will give the details of this argument a bit later.) The
algorithm therefore proves:
###### Proposition 2.1.
(Leighton and Scott [72]) Let $f(t)=\sum_{n=0}^{\infty}a_{n}t^{n}$ be a
formal power series with coefficients in a field $F$, with $a_{0}\neq 0$. Then
$f(t)$ can be represented by a continued fraction of the form (1.4), which is
unique modulo equivalence. This continued fraction is terminating if and only
if $f(t)$ represents a rational function.
The disadvantage of the foregoing algorithm is that it requires division of
power series in the step (2.3). To eliminate this, let us define
$g_{k}(t)\;=\;\prod_{i=0}^{k}f_{i}(t)\qquad\hbox{for $k\geq-1$}\;;$ (2.4)
these are formal power series with constant term 1, which satisfy
$g_{-1}(t)=1$ and
$f_{k}(t)\;=\;{g_{k}(t)\over g_{k-1}(t)}\qquad\hbox{for $k\geq 0$}\;.$ (2.5)
Then the nonlinear two-term recurrence (2.2) for the $(f_{k})$ becomes the
linear three-term recurrence
$g_{k}(t)-g_{k-1}(t)\;=\;\alpha_{k+1}t^{p_{k+1}}g_{k+1}(t)$ (2.6)
for the $(g_{k})$. Rewriting the algorithm in terms of $(g_{k})_{k\geq-1}$, we
have:
**Refined algorithm.**
1. Set $g_{-1}(t)=1$, $\alpha_{0}=a_{0}=[t^{0}]\,f(t)$ and
$g_{0}(t)=\alpha_{0}^{-1}f(t)$.
2. For $k=1,2,3,\ldots$, do:
* (a)
If $g_{k-1}(t)=g_{k-2}(t)$, set $\alpha_{k}=0$ and terminate.
* (b)
If $g_{k-1}(t)\neq g_{k-2}(t)$, let $p_{k}$ be the smallest index $n$ such
that $[t^{n}]\,g_{k-1}(t)\neq[t^{n}]\,g_{k-2}(t)$; set
$\alpha_{k}=[t^{p_{k}}]\,\bigl{(}g_{k-1}(t)-g_{k-2}(t)\bigr{)}$; and set
$g_{k}(t)\;=\;\alpha_{k}^{-1}t^{-p_{k}}\bigl{(}g_{k-1}(t)-g_{k-2}(t)\bigr{)}\;.$
(2.7)
This algorithm requires only linear operations on power series (together, of
course, with a nonlinear operation in the field $F$, namely, division by
$\alpha_{k}$).
Let us also observe that it is not mandatory to take $g_{-1}=1$. In fact, we
can let $g_{-1}$ be any formal power series with constant term 1, and replace
(2.4) by
$g_{k}(t)\;=\;g_{-1}(t)\prod_{i=0}^{k}f_{i}(t)\qquad\hbox{for $k\geq 0$}\;;$
(2.8)
then the key relation (2.5) still holds. The algorithm becomes:
**Refined algorithm, generalized version.**
1. Choose any formal power series $g_{-1}(t)$ with constant term 1; then set
$\alpha_{0}=a_{0}=[t^{0}]\,f(t)$ and $g_{0}(t)=\alpha_{0}^{-1}g_{-1}(t)f(t)$.
2. As before.
This generalization is especially useful in case $f(t)$ happens to be given to
us as an explicit fraction; then we can (if we wish) choose $g_{-1}$ to be the
denominator.
In particular, suppose that $f=P/Q$ is a rational function normalized to
$Q(0)=1$, and that we choose $g_{-1}=Q$ and $g_{0}=P/P(0)$. Then all the
$g_{k}$ are polynomials, and we have
$\deg g_{k}\;\leq\;\max(\deg g_{k-1},\deg g_{k-2})-p_{k}\;\leq\;\max(\deg
g_{k-1},\deg g_{k-2})-1\;.$ (2.9)
It follows by induction that
$\deg g_{k}\;\leq\;d-\lceil k/2\rceil$ (2.10)
where $d=\max(\deg P,\deg Q)$ is the degree of $f$. Hence the algorithm (in
any of its versions) must terminate no later than step $k=2d$. This completes
the proof of Proposition 2.1.
Of course, the foregoing algorithms, interpreted literally, require
manipulations on power series with infinitely many terms. Sometimes this can
be done by hand (as we shall see in Sections 3–5) or by a sufficiently
powerful symbolic-algebra package, if explicit formulae for the series
coefficients are available. But in many cases we are given the initial series
$f(t)$ only through some order $t^{N}$, and we want to find a continued
fraction of the form (1.4) that represents $f(t)$ at least through this order.
This can be done as follows: We start by writing
$f(t)=\sum_{n=0}^{N}a_{n}t^{n}+O(t^{N+1})$ and then carry out the algorithm
(in any version) where each $f_{k}$ or $g_{k}$ is written as a finite sum plus
an explicit error term $O(t^{N_{k}+1})$.777 This is how Mathematica
automatically handles SeriesData objects, and how Maple handles the series
data structure. Clearly $N_{k}=N-\sum_{i=1}^{k}p_{i}$. The algorithm
terminates when $f_{k-1}(t)=1+O(t^{N_{k-1}+1})$ or
$g_{k-1}(t)-g_{k-2}(t)=O(t^{N_{k-1}+1})$. In terms of the coefficients
$g_{k,n}$ in $g_{k}(t)=\sum_{n=0}^{\infty}g_{k,n}t^{n}$ (where $g_{k,0}=1$),
the refined algorithm (in the generalized version) is as follows:
**Refined algorithm, finite-$\bm{N}$ version.**
INPUT: Coefficients $g_{k,n}$ for $k=-1,0$ and $0\leq n\leq N$, where
$g_{-1,0}=g_{0,0}=1$.
1. Set $N_{0}=N$.
2. For $k=1,2,3,\ldots$, do:
* (a)
If $g_{k-1,n}=g_{k-2,n}$ for $0\leq n\leq N_{k-1}$, set $\alpha_{k}=0$ and
terminate.
* (b)
Otherwise, let $p_{k}$ be the smallest index $n$ ($\leq N_{k-1}$) such that
$g_{k-1,n}\neq g_{k-2,n}$; set $\alpha_{k}=g_{k-1,p_{k}}-g_{k-2,p_{k}}$; set
$N_{k}=N_{k-1}-p_{k}$; and set
$g_{k,n}\;=\;\alpha_{k}^{-1}(g_{k-1,n+p_{k}}-g_{k-2,n+p_{k}})\quad\hbox{for
$0\leq n\leq N_{k}$}\;.$ (2.11)
It is easy to see that this algorithm requires $O(N^{2})$ field operations to
find a continued fraction that represents $f(t)$ through order $t^{N}$. Note
also that if it is subsequently desired to extend the computation to larger
$N$, one can return to $k=0$ and compute the new coefficients $g_{k,n}$ using
(2.11), without needing to revisit the old ones; this is a consequence of the
method’s linearity.
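For concreteness, here is a short Python sketch of the finite-$N$ algorithm; the function name and the use of exact rational arithmetic via `fractions.Fraction` are implementation choices of ours, not part of the algorithm itself:

```python
from fractions import Fraction
from math import factorial

def euler_viscovatov(a, N):
    """Expand f(t) = sum_n a[n] t^n (a[0] != 0) as a C-fraction
    alpha_0/(1 - alpha_1 t^{p_1}/(1 - alpha_2 t^{p_2}/(1 - ...))),
    determined through order t^N.  Returns (alphas, ps)."""
    a = [Fraction(c) for c in a[:N + 1]]
    alphas, ps = [a[0]], []
    g_prev = [Fraction(1)] + [Fraction(0)] * N   # g_{-1}(t) = 1
    g_curr = [c / a[0] for c in a]               # g_0(t) = f(t)/alpha_0
    Nk = N
    while True:
        diff = [g_curr[n] - g_prev[n] for n in range(Nk + 1)]
        # p_k = smallest n <= N_{k-1} with g_{k-1,n} != g_{k-2,n}
        p = next((n for n, d in enumerate(diff) if d != 0), None)
        if p is None:          # g_{k-1} = g_{k-2} + O(t^{N_{k-1}+1}): stop
            return alphas, ps
        alpha = diff[p]
        Nk -= p                # N_k = N_{k-1} - p_k
        g_next = [diff[n + p] / alpha for n in range(Nk + 1)]   # eq. (2.11)
        alphas.append(alpha)
        ps.append(p)
        g_prev, g_curr = g_curr[:Nk + 1], g_next

# Euler's example (1.1): f(t) = sum n! t^n yields alphas 1,1,1,2,2,3,3,4,4,...
print(euler_viscovatov([factorial(n) for n in range(11)], 10))
```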
Historical remark. While preparing this article I learned that the “refined
algorithm” is essentially equivalent (when $p_{1}=p_{2}=\ldots=1$) to a method
presented by Viscovatov [102, p. 228] in 1805. In terms of Khovanskii’s [65,
pp. 27–31] quantities $\alpha_{m,n}$, it suffices to define
$g_{m}(t)\;=\;\sum_{n=0}^{\infty}{\alpha_{m,n}\over\alpha_{m,0}}\,t^{n}$
(2.12)
and
$\alpha_{m}\;=\;-\,{\alpha_{m,0}\over\alpha_{m-1,0}\,\alpha_{m-2,0}}\;;$
(2.13)
then Khovanskii’s recurrence
$\alpha_{m,n}=\alpha_{m-1,0}\alpha_{m-2,n+1}-\alpha_{m-2,0}\alpha_{m-1,n+1}$
[65, p. 28] is equivalent to our (2.11) specialized to $p_{k}=1$. See also
[59, p. 547, eqns. (12.6-26) and (12.6-27)], [28, p. 17] and [27, p. 20, eq.
(1.7.7) and p. 112, eq. (6.1.12c)]. This same recurrence was independently
discovered in the mid-twentieth century by Gordon [53, Appendix A], who named
it the “product-difference algorithm”; by P.J.S. Watson [107, p. 94]; and by
O’Donohoe [79, 76, 77], who named it the “corresponding sequence (CS)
algorithm”. The presentation in [79, Chapter 3] [77] is particularly clear.
It should be mentioned, however, that some of the modern works that refer to
the “Viscovatov algorithm” fail to distinguish clearly between the primitive
algorithm (2.3) and the refined algorithm (2.7)/(2.11). However, the modern
authors should not be blamed: Viscovatov [102] himself fails to make this
distinction clear. As Khovanskii [65, p. 27] modestly says, “This procedure
was in principle [emphasis mine] proposed by V. Viskovatoff; we have merely
developed a more convenient notation for this method of calculation.”
See also [74] for fascinating information concerning the life of Vasiliĭ
Ivanovich Viscovatov (1780–1812).
A very similar algorithm was presented by Christoph (or Christian) Friedrich
Kausler (1760–1825) in 1802 [63] (see also [64, pp. 112 ff.]), but the precise
relation between the two algorithms is not clear to me. $\blacksquare$
We can also run this algorithm in reverse. Suppose that we have a sequence
$(g_{k})_{k\geq-1}$ of formal power series with constant term 1, which satisfy
a recurrence of the form
$g_{k}(t)-g_{k-1}(t)\;=\;\alpha_{k+1}t^{p_{k+1}}g_{k+1}(t)\qquad\hbox{for
$k\geq 0$}\;.$ (2.14)
(We need not assume that $g_{-1}=1$.) Then the series $(f_{k})_{k\geq 0}$
defined by $f_{k}=g_{k}/g_{k-1}$ satisfy the recurrence (2.2); and iterating
this recurrence, we see that they are given by the continued fractions (2.1).
This method for proving continued fractions was employed implicitly by Euler
[41, section 21] for proving (1.1) — as we shall see in the next section — and
explicitly by Gauss [51, sections 12–14] for proving his continued fraction
for ratios of contiguous ${}_{2}F_{1}$. We therefore
call it the Euler–Gauss recurrence method.
Suppose, finally, that the coefficients of $f(t)$ lie in a commutative ring-
with-identity-element $R$, not necessarily a field. There are two cases:
(a) If $R$ is an integral domain (i.e. has no divisors of zero), then we can
carry out the preceding algorithm (in any version) in the field of fractions
$F(R)$, yielding coefficients $(\alpha_{k})_{k\geq 0}$ that lie in $F(R)$ and
are unique modulo equivalence. In some cases these coefficients may lie in
$R$, in other cases not. If $(\alpha_{k})_{k\geq 0}$ do lie in $R$, then so
will all the coefficients of the series $(f_{k})_{k\geq 0}$ and
$(g_{k})_{k\geq 0}$ (footnote 8: I am assuming here that the coefficients of
the chosen $g_{-1}$ lie in $R$.). In this case the algorithm can be carried
out entirely in $R$; it requires divisions $a/b=c$, but only in cases where
$c$ lies in $R$ (and $c$ is of course unique because $R$ has no divisors of
zero).
(b) If, by contrast, $R$ has divisors of zero, then the expansion as a
continued fraction can be highly nonunique. For instance, in
$R={\mathbb{Z}}_{4}$, the series $f(t)=1+2t$ is represented in the form (1.4)
with $p_{1}=p_{2}=\ldots=1$ for any coefficients $(\alpha_{k})_{k\geq 0}$ in
${\mathbb{Z}}_{4}$ satisfying $\alpha_{0}=1$, $\alpha_{1}=2$ and
$\alpha_{2}\in\\{0,2\\}$. But one can say this: if the series $f(t)$ possesses
an expansion (1.4) with coefficients $(\alpha_{k})_{k\geq 0}$ in $R$ and none
of these coefficients is a divisor of zero, then this expansion is unique
modulo equivalence and the algorithm will find it.
The generalization from fields to commutative rings is important in
applications to enumerative combinatorics [43, 101, 94, 93], where $R$ is
often the ring ${\mathbb{Z}}[{\bf x}]$ of polynomials with integer
coefficients in some indeterminates ${\bf x}=(x_{i})_{i\in I}$. In particular,
the Euler–Gauss recurrence method applies in an arbitrary commutative ring
(with identity element) and is a useful method for proving continued fractions
in this context.
## 3 Example 1: From factorials to $\bm{{}_{2}F_{0}}$
Let us now examine Euler’s [41, section 21] derivation of the identity (1.1),
which expresses the formal power series $f(t)=\sum_{n=0}^{\infty}n!\,t^{n}$ as
a regular C-fraction [that is, (1.4) with $p_{1}=p_{2}=\ldots=1$] with
coefficients
$\alpha_{2j-1}=j,\quad\alpha_{2j}=j\;.$ (3.1)
Euler starts by writing out $f$ through order $t^{7}$; he then computes
$\alpha_{k}$ and $f_{k}$ for $1\leq k\leq 7$, writing each $f_{k}$ as an
explicit ratio $g_{k}/g_{k-1}$. It is thus evident that Euler is using what we
have here called the “refined algorithm” (with $g_{-1}=1$). Moreover, Euler
writes out each series $g_{k}$ through order $t^{7-k}$, to which he appends “+
etc.”; clearly he is using the “finite-$N$ algorithm” explained in the
preceding section, with $N=7$. After these computations he says:
> And therefore it will become clear, that it will analogously be
>
> $I\,=\,{4x\over 1+K},\quad K\,=\,{5x\over 1+L},\quad L\,=\,{5x\over
> 1+M},\quad\hbox{etc.\ to infinity,}$
>
> so that the structure of these formulas is easily perceived.
And he concludes by writing out the continued fraction (1.1) through
$\alpha_{13}=7$ (!), making clear that the intended coefficients are indeed
$\alpha_{2j-1}=j$ and $\alpha_{2j}=j$.
Euler does not give a proof of this final formula or an explicit expression
for the series $g_{k}$, but this is not difficult to do. One approach (the
first one I took) is to follow Euler and compute the first few coefficients of
the first few $g_{k}$; having done this, one can try, by inspecting this
finite array of numbers, to guess the general formula; once this has been
done, it is not difficult to prove the recurrence (2.14).
But in this case a better approach is available: namely, compute the full
infinite series $g_{k}(t)$ for small $k$, before trying to guess the general
formula. Thus, we begin by setting $g_{-1}=1$ and
$g_{0}(t)=\sum_{n=0}^{\infty}n!\,t^{n}$. We then use the recurrence (2.14)
[with all $p_{i}=1$] to successively compute $g_{1}(t)$, $g_{2}(t)$, …,
extracting at each stage the factor $\alpha_{k+1}t$ that makes $g_{k+1}(t)$
have constant term 1. After a few steps of this computation, we may be able to
guess the general formulae for $\alpha_{k}$ and $g_{k}(t)$ and then prove the
recurrence (2.14). Here are the details for this example:
The first step is
$g_{0}-g_{-1}\;=\;\sum_{n=1}^{\infty}n!\,t^{n}\;=\;t\sum_{n=0}^{\infty}(n+1)!\,t^{n}\;,$
(3.2)
from which we deduce that $\alpha_{1}=1$ and
$g_{1}(t)=\sum_{n=0}^{\infty}(n+1)!\,t^{n}$. The second step is
$g_{1}-g_{0}\;=\;\sum_{n=1}^{\infty}n\,n!\,t^{n}\;=\;t\sum_{n=0}^{\infty}(n+1)\,(n+1)!\,t^{n}\;,$
(3.3)
so that $\alpha_{2}=1$ and $g_{2}(t)=\sum_{n=0}^{\infty}(n+1)\,(n+1)!\,t^{n}$.
Next
$g_{2}-g_{1}\;=\;\sum_{n=1}^{\infty}n\,(n+1)!\,t^{n}\;=\;2t\sum_{n=0}^{\infty}(n+1)\,{(n+2)!\over
2}\,t^{n}\;,$ (3.4)
so that $\alpha_{3}=2$ and $g_{3}(t)=\sum_{n=0}^{\infty}(n+1)\,{(n+2)!\over
2}\,t^{n}$. And then
$g_{3}-g_{2}\;=\;\sum_{n=1}^{\infty}{n(n+1)\over
2}\,(n+1)!\,t^{n}\;=\;2t\sum_{n=0}^{\infty}{(n+1)(n+2)\over 2}\>{(n+2)!\over
2}\,t^{n}\;,$ (3.5)
so that $\alpha_{4}=2$ and $g_{4}(t)=\sum_{n=0}^{\infty}{(n+1)(n+2)\over
2}\>{(n+2)!\over 2}\,t^{n}$. At this stage it is not difficult to guess the
general formulae for $\alpha_{k}$ and $g_{k}(t)$: we have
$\alpha_{2j-1}=\alpha_{2j}=j$ and
$g_{2j-1}(t)=\sum_{n=0}^{\infty}\binom{n+j}{n}\binom{n+j-1}{n}\,n!\;t^{n}\,,\qquad g_{2j}(t)=\sum_{n=0}^{\infty}\binom{n+j}{n}^{\!2}\,n!\;t^{n}$ (3.6)
for $j\geq 0$ (as Euler himself may well have known). Having written down
these expressions, it is then straightforward to verify that they satisfy the
recurrence
$g_{k}(t)-g_{k-1}(t)\;=\;\alpha_{k+1}t\,g_{k+1}(t)\qquad\hbox{for $k\geq 0$}$
(3.7)
with the given coefficients ${\bm{\alpha}}=(\alpha_{k})_{k\geq 1}$. This
completes the proof of (1.1).
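The verification of (3.7) can also be done mechanically; here is a minimal sketch (names are ours) that checks the closed forms (3.6) coefficient-wise:

```python
from math import comb, factorial

N = 12  # check coefficients through t^N

def g(k, n):
    """Coefficient of t^n in g_k(t), from the closed forms (3.6)."""
    if k == -1:
        return 1 if n == 0 else 0                       # g_{-1}(t) = 1
    j = (k + 1) // 2
    if k % 2:                                           # k = 2j - 1
        return comb(n + j, n) * comb(n + j - 1, n) * factorial(n)
    return comb(n + j, n) ** 2 * factorial(n)           # k = 2j

alpha = lambda k: (k + 1) // 2    # alpha_{2j-1} = alpha_{2j} = j, as in (3.1)

# (3.7) coefficient-wise: g_{k,n} - g_{k-1,n} = alpha_{k+1} * g_{k+1,n-1}
for k in range(8):
    assert all(g(k, n) - g(k - 1, n) == alpha(k + 1) * g(k + 1, n - 1)
               for n in range(1, N + 1))
    assert g(k, 0) == g(k - 1, 0) == 1                  # constant terms agree
print("recurrence (3.7) with data (3.1)/(3.6) verified through t^12")
```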
In section 26 of the same paper [41], Euler says that the same method can be
applied to the more general series (1.2), which reduces to (1.1) when $a=1$;
but he does not provide the details, and he instead proves (1.2) by an
alternative method. Three decades later, however, Euler [42] returned to his
original method and presented the details of the derivation of (1.2) (footnote
9: The paper [42], which is E616 in Eneström’s [39] catalogue, was apparently
presented to the St. Petersburg Academy in 1776, and published posthumously in
1788.). By a method similar to the one just shown, one can be led to guess
$\alpha_{2j-1}=a+j-1,\quad\alpha_{2j}=j$ (3.8)
and
$g_{2j-1}(t)=\sum_{n=0}^{\infty}(a+j)^{\overline{n}}\,\binom{n+j-1}{n}\,t^{n}\,,\qquad g_{2j}(t)=\sum_{n=0}^{\infty}(a+j)^{\overline{n}}\,\binom{n+j}{n}\,t^{n}$ (3.9)
where we have used the notation [67, 55]
$x^{\overline{n}}=x(x+1)\cdots(x+n-1)$. The recurrence (3.7) can once again be
easily checked.
We can, in fact, carry this process one step farther, by introducing an
additional parameter $b$. Let
$\alpha_{2j-1}=a+j-1,\quad\alpha_{2j}=b+j-1$ (3.10)
and
$g_{2j-1}(t)=\sum_{n=0}^{\infty}{(a+j)^{\overline{n}}\,(b+j-1)^{\overline{n}}\over n!}\;t^{n}\,,\qquad g_{2j}(t)=\sum_{n=0}^{\infty}{(a+j)^{\overline{n}}\,(b+j)^{\overline{n}}\over n!}\;t^{n}\,.$ (3.11)
The recurrence (3.7) can again be easily checked; in fact, the reasoning is
somewhat more transparent in this greater generality. We no longer have
$g_{-1}=1$ (unless $b=1$), but no matter; we can still conclude that
$g_{0}(t)/g_{-1}(t)$ is given by the continued fraction with coefficients
(3.10).
The series appearing in (3.11) are nothing other than the hypergeometric
series ${}_{2}F_{0}$, defined by
${}_{2}F_{0}\biggl(\begin{matrix}a,b\\ \hbox{---}\end{matrix}\biggm|\,t\biggr)\;=\;\sum_{n=0}^{\infty}{a^{\overline{n}}\,b^{\overline{n}}\over n!}\;t^{n}\;;$ (3.12)
and the recurrence (3.7) is simply the contiguous relation
${}_{2}F_{0}\biggl(\begin{matrix}a,b\\ \hbox{---}\end{matrix}\biggm|\,t\biggr)\>-\>{}_{2}F_{0}\biggl(\begin{matrix}a,b-1\\ \hbox{---}\end{matrix}\biggm|\,t\biggr)\;=\;at\;{}_{2}F_{0}\biggl(\begin{matrix}a+1,b\\ \hbox{---}\end{matrix}\biggm|\,t\biggr)\;,$ (3.13)
applied with interchanges $a\leftrightarrow b$ at alternate levels. We have
thus proven the continued fraction for the ratio of two contiguous
hypergeometric series ${}_{2}F_{0}$ [104, section 92]:
${{}_{2}F_{0}\bigl(\begin{smallmatrix}a,b\\ \hbox{---}\end{smallmatrix}\bigm|\,t\bigr)\over {}_{2}F_{0}\bigl(\begin{smallmatrix}a,b-1\\ \hbox{---}\end{smallmatrix}\bigm|\,t\bigr)}\;\>=\;\>\cfrac{1}{1-\cfrac{at}{1-\cfrac{bt}{1-\cfrac{(a+1)t}{1-\cfrac{(b+1)t}{1-\cfrac{(a+2)t}{1-\cfrac{(b+2)t}{1-\cdots}}}}}}}\;\,.$ (3.14)
At this point let me digress by making three remarks:
1) The hypergeometric series (3.12) is of course divergent for all $t\neq 0$
(unless $a$ or $b$ is zero or a negative integer, in which case the series
terminates). We can nevertheless give the formula (3.14) an analytic meaning
by defining
$F_{a,b}(t)\;=\;{1\over\Gamma(a)}\int\limits_{0}^{\infty}{e^{-x}\,x^{a-1}\over(1-tx)^{b}}\>dx\;,$
(3.15)
which is manifestly an analytic function jointly in $a,b,t$ for $a>0$,
$b\in{\mathbb{C}}$ and $t\in{\mathbb{C}}\setminus[0,\infty)$; moreover, its
asymptotic expansion at $t=0$ (valid in a sector staying away from the
positive real axis) is the hypergeometric series (3.12). It can also be shown
that $F_{a,b}(t)=F_{b,a}(t)$ where both sides are defined [66, p. 277].
Furthermore, by integration by parts the definition (3.15) can be extended to
arbitrary $a\in{\mathbb{C}}$ (footnote 10: This is a special case of the more
general result that the tempered distribution $x_{+}^{a-1}/\Gamma(a)$, defined
initially for $a>0$, can be analytically continued to an entire tempered-
distribution-valued function of $a$ [52, section I.3]. And this is, in turn, a
special case of a spectacular result, due to Bernstein and S.I. Gel’fand [16,
15] and Atiyah [10], on the analytic continuation of distributions of the form
$P(x_{1},\ldots,x_{n})^{\lambda}$ where $P$ is a real polynomial. (Here I
digress too far, I know … but this is really beautiful mathematics, on the
borderline between analysis, algebraic geometry, and algebra: see e.g. [20,
26].)). It can then be shown [103] that the continued fraction on the right-
hand side of (3.14) converges throughout ${\mathbb{C}}\setminus[0,\infty)$
except possibly at certain isolated points (uniformly over bounded regions
staying away from the isolated points) and defines an analytic function having
these isolated points as poles; and this analytic function equals
$F_{a,b}(t)/F_{a,b-1}(t)$. (I know I had promised to stay away from analysis;
but this was too beautiful to resist.)
2) If we expand the ratio (3.14) as a power series,
${{}_{2}F_{0}\bigl(\begin{smallmatrix}a,b\\ \hbox{---}\end{smallmatrix}\bigm|\,t\bigr)\over {}_{2}F_{0}\bigl(\begin{smallmatrix}a,b-1\\ \hbox{---}\end{smallmatrix}\bigm|\,t\bigr)}\;=\;\sum_{n=0}^{\infty}P_{n}(a,b)\>t^{n}\;,$ (3.16)
it follows easily from the continued fraction that $P_{n}(a,b)$ is a
polynomial of total degree $n$ in $a$ and $b$, with nonnegative integer
coefficients. It is therefore natural to ask: What do these nonnegative
integers count?
Euler’s continued fraction (1.1) tells us that $P_{n}(1,1)=n!$; and there are
$n!$ permutations of an $n$-element set. It is therefore reasonable to guess
that $P_{n}(a,b)$ enumerates permutations of an $n$-element set according to
some natural bivariate statistic. This is indeed the case; and Dumont and
Kreweras [33] have identified the statistic. Given a permutation $\sigma$ of
$\\{1,2,\ldots,n\\}$, let us say that an index $i\in\\{1,2,\ldots,n\\}$ is a
* •
record (or left-to-right maximum) if $\sigma(j)<\sigma(i)$ for all $j<i$ [note
in particular that the index 1 is always a record];
* •
antirecord (or right-to-left minimum) if $\sigma(j)>\sigma(i)$ for all $j>i$
[note in particular that the index $n$ is always an antirecord];
* •
exclusive record if it is a record and not also an antirecord;
* •
exclusive antirecord if it is an antirecord and not also a record.
Dumont and Kreweras [33] then showed that
$P_{n}(a,b)\;=\;\sum_{\sigma\in{\mathfrak{S}}_{n}}a^{{\rm rec}(\sigma)}b^{{\rm
earec}(\sigma)}$ (3.17)
where ${\rm rec}(\sigma)$ [resp. ${\rm earec}(\sigma)$] is the number of
records (resp. exclusive antirecords) in $\sigma$. Some far-reaching
generalizations of this result can be found in [94].
3) Euler also observed [41, section 29] that the case $a={1\over 2}$ of (1.2)
leads to
$\sum_{n=0}^{\infty}(2n-1)!!\>t^{n}\;=\;\cfrac{1}{1-\cfrac{1t}{1-\cfrac{2t}{1-\cfrac{3t}{1-\cfrac{4t}{1-\cdots}}}}}$
(3.18)
with coefficients $\alpha_{k}=k$. Since $(2n-1)!!=(2n)!/(2^{n}n!)$ is the
number of perfect matchings of a $2n$-element set (i.e. partitions of the $2n$
objects into $n$ pairs), it is natural to seek generalizations of (3.18) that
enumerate perfect matchings according to some combinatorially interesting
statistics. Some formulae of this type can be found in [31, 94]. The proofs
use the bijective method to be discussed at the end of Section 8; I don’t know
whether results of this complexity can be proven by the Euler–Gauss recurrence
method.
This is by no means the end of the matter: by an argument similar to the one
we have used for ${}_{2}F_{0}$, Gauss [51] found in 1812 a continued fraction
for the ratio of two contiguous hypergeometric functions ${}_{2}F_{1}$.
Moreover, the formula for ${}_{2}F_{0}$, as well as analogous formulae for
ratios of ${}_{1}F_{1}$, ${}_{1}F_{0}$ or ${}_{0}F_{1}$, can be deduced from
Gauss’ formula by specialization or taking limits. In fact, one of the special
cases of the ${}_{0}F_{1}$ formula is Lambert’s continued fraction (1.3) for
the tangent function. See [104, Chapter XVIII] for details (footnote 11:
Laczkovich [68] and Wallisser [105] give nice elementary proofs of the
continued fraction for ${}_{0}F_{1}$, using the Euler–Gauss recurrence method.
As Wallisser [105, p. 525] points out, this argument is due to Legendre [71,
Note IV, pp. 320–322]. There is also a nice explanation at [108], which makes
clear the general principle of the Euler–Gauss recurrence method: any
recurrence of the form (2.14) for a sequence $(g_{k})_{k\geq-1}$ of series
with constant term 1 leads to a continued-fraction representation (2.1) for
the ratios $f_{k}=g_{k}/g_{k-1}$.). In Sections 6 and 7 we will see even more
general versions of this principle.
## 4 Example 2: Bell polynomials
Here is an example from enumerative combinatorics. The Bell number $B_{n}$ is,
by definition, the number of partitions of an $n$-element set into nonempty
blocks; by convention we set $B_{0}=1$. The Stirling subset number (also
called Stirling number of the second kind) ${n \brace k}$ is, by definition,
the number of partitions of an $n$-element set into $k$ nonempty blocks; for
$n=0$ we make the convention ${0 \brace k}=\delta_{k0}$. The Stirling subset
numbers satisfy the recurrence
${n \brace k}\;=\;k\,{n-1 \brace k}\>+\>{n-1 \brace k-1}\qquad\hbox{for $n\geq 1$}$ (4.1)
with initial conditions ${0 \brace k}=\delta_{k0}$ and ${n \brace -1}=0$.
[Proof: Consider a partition $\pi$ of
the set $[n]\stackrel{{\scriptstyle\rm def}}{{=}}\\{1,\ldots,n\\}$ into $k$
nonempty blocks, and ask where the element $n$ goes. If the restriction of
$\pi$ to $[n-1]$ has $k$ blocks, then $n$ can be adjoined to any one of those
$k$ blocks. If the restriction of $\pi$ to $[n-1]$ has $k-1$ blocks, then $n$
must be a singleton in $\pi$. These two cases give the two terms on the right-
hand side of (4.1).]
Now define the Bell polynomials
$B_{n}(x)\;=\;\sum_{k=0}^{n}{n \brace k}\,x^{k}$ (4.2)
and their homogenized version
$B_{n}(x,y)\;=\;y^{n}B_{n}(x/y)\;=\;\sum_{k=0}^{n}{n \brace k}\,x^{k}y^{n-k}\;,$ (4.3)
so that $B_{n}=B_{n}(1)=B_{n}(1,1)$. Then the ordinary generating function
${\mathcal{B}}_{x,y}(t)\;=\;\sum_{n=0}^{\infty}B_{n}(x,y)\,t^{n}$ (4.4)
turns out to have a beautiful continued fraction:
${\mathcal{B}}_{x,y}(t)\;=\;\cfrac{1}{1-\cfrac{xt}{1-\cfrac{yt}{1-\cfrac{xt}{1-\cfrac{2yt}{1-\cfrac{xt}{1-\cfrac{3yt}{1-\cdots}}}}}}}$
(4.5)
with coefficients $\alpha_{2k-1}=x$ and $\alpha_{2k}=ky$.
Once again we can guess the continued fraction, and then prove it, by the
Euler–Gauss recurrence method. Take $g_{-1}=1$ and
$g_{0}(t)={\mathcal{B}}_{x,y}(t)$, and use the recurrence (3.7) to
successively compute $g_{1}(t)$, $g_{2}(t)$, …, extracting at each stage the
factor $\alpha_{k+1}t$ that makes $g_{k+1}(t)$ have constant term 1. This
computation is left as an exercise for the reader; by the stage $g_{6}$ (if
not earlier) the reader should be able to guess the general formulae for
$g_{2j-1}(t)$ and $g_{2j}(t)$. (In order not to spoil the fun, the answer is
given in the Appendix.) Once one has the formulae for $g_{k}(t)$, it is then
easy to verify the recurrence (3.7) with the given coefficients
${\bm{\alpha}}$ by using the recurrence (4.1) for the Stirling subset numbers
together with the Pascal recurrence for the binomial coefficients.
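For readers who prefer to check (4.5) by machine, here is a sketch using sympy (our tooling choice): it builds $\sum B_{n}(x,y)\,t^{n}$ from the recurrence (4.1) and compares it with a truncation of the continued fraction:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
N = 6  # compare through order t^N

# Stirling subset numbers via the recurrence (4.1)
S = {(0, 0): 1}
for n in range(1, N + 1):
    for k in range(n + 1):
        S[n, k] = k * S.get((n - 1, k), 0) + S.get((n - 1, k - 1), 0)

lhs = sum(sum(S.get((n, k), 0) * x**k * y**(n - k) for k in range(n + 1)) * t**n
          for n in range(N + 1))

# alpha_{2k-1} = x, alpha_{2k} = k*y; N+1 levels give agreement through t^N
alphas = []
k = 1
while len(alphas) < N + 1:
    alphas += [x, k * y]
    k += 1
T = sp.Integer(1)
for a in reversed(alphas[:N + 1]):
    T = 1 - a * t / T
rhs = sp.series(1 / T, t, 0, N + 1).removeO()

assert all(sp.cancel(lhs.coeff(t, m) - rhs.coeff(t, m)) == 0
           for m in range(N + 1))
print("continued fraction (4.5) verified through t^6")
```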
Remarks. I am not sure who first derived the continued fraction (4.5) for the
Bell polynomials, or its specialization to $x=y=1$ for the Bell numbers. An
associated continued fraction121212 In the terminology of combinatorialists, a
J-fraction. that is equivalent by contraction [104, p. 21] [101, p. V-31] to
(4.5) was found for the case $x=y=1$ by Touchard [98, section 4] in 1956, and
for the general case by Flajolet [43, Theorem 2(ia)] in 1980. Flajolet’s proof
was combinatorial, using ideas that will be explained in Section 8. Flajolet
also observed [43, pp. 141–142] that this associated continued fraction is
implicit in the three-term recurrence relation for the Poisson–Charlier
polynomials [23, p. 25, Exercise 4.10]; see [23, 101, 112] for the general
connection between continued fractions and orthogonal polynomials. The
continued fraction (4.5) can also be derived directly from a functional
equation satisfied by ${\mathcal{B}}_{x,y}(t)$: this elegant method is due to
the late Dominique Dumont [32]; see also [111, proof of Lemma 3] for some
$q$-generalizations. I have not seen the elementary derivation by the
Euler–Gauss recurrence method anywhere in the literature, but it is probably
not new.
See [62, 94] for some generalizations of this continued fraction, which
enumerate set partitions with respect to a larger set of simultaneous
statistics; these formulae are proven by the bijective method to be discussed
at the end of Section 8. $\blacksquare$
## 5 Example 3: Some $\bm{q}$-continued fractions of Ramanujan
Next I would like to show, following Bhatnagar [17], how the Euler–Gauss
recurrence method can be used to give simple proofs of some continued
fractions of Ramanujan. We use the standard notation for $q$-shifted
factorials,
$(a;q)_{n}\;=\;\prod_{j=0}^{n-1}(1-aq^{j})$ (5.1)
for integers $n\geq 0$; here $a$ and $q$ are to be interpreted as algebraic
indeterminates.
**The Rogers–Ramanujan continued fraction.** Rogers [88, p. 328, eq. (4)] proved
in 1894 the following beautiful continued fraction, which was later
rediscovered and generalized by Ramanujan [106] [14, p. 30, Entry 15 and
Corollary]:
${\displaystyle\;\sum_{n=0}^{\infty}{q^{n^{2}}\over(q;q)_{n}}\>t^{n}\;\over\displaystyle\;\sum_{n=0}^{\infty}{q^{n(n-1)}\over(q;q)_{n}}\>t^{n}\;}\;\>=\;\>\cfrac{1}{1+\cfrac{t}{1+\cfrac{qt}{1+\cfrac{q^{2}t}{1+\cfrac{q^{3}t}{1+\cdots}}}}}$
(5.2)
with coefficients $\alpha_{k}=-q^{k-1}$. The proof by the Euler–Gauss
recurrence method is extraordinarily easy. Define
$g_{k}(t)\;=\;\sum_{n=0}^{\infty}{q^{n(n+k)}\over(q;q)_{n}}\>t^{n}\qquad\hbox{for
$k\geq-1$}\;,$ (5.3)
so that the left-hand side of (5.2) is indeed $g_{0}/g_{-1}$. Then compute
$g_{k}-g_{k-1}\;=\;\sum_{n=0}^{\infty}{q^{n(n+k-1)}\,(q^{n}-1)\over(q;q)_{n}}\>t^{n}\;=\;-\sum_{n=1}^{\infty}{q^{n(n+k-1)}\over(q;q)_{n-1}}\>t^{n}\;=\;-\sum_{n=0}^{\infty}{q^{(n+1)(n+k)}\over(q;q)_{n}}\>t^{n+1}\;=\;-q^{k}t\sum_{n=0}^{\infty}{q^{n(n+k+1)}\over(q;q)_{n}}\>t^{n}\;=\;\alpha_{k+1}t\,g_{k+1}\;,$ (5.4)
which completes the proof (see also [9, eqns. (4.43)/(4.44)] [18]).
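Readers can also confirm (5.2) numerically; here is a sketch using sympy (our tooling choice), comparing truncations of both sides over $\mathbb{Q}(q)$:

```python
import sympy as sp

t, q = sp.symbols('t q')
N = 6  # compare through order t^N

def qpoch(n):
    """(q;q)_n."""
    return sp.prod([1 - q**j for j in range(1, n + 1)])

num = sum(q**(n**2) / qpoch(n) * t**n for n in range(N + 1))
den = sum(q**(n*(n - 1)) / qpoch(n) * t**n for n in range(N + 1))
lhs = sp.series(num / den, t, 0, N + 1).removeO()

# Build 1/(1 + t/(1 + qt/(1 + q^2 t/...))) from the bottom up;
# N+1 levels are enough for agreement through t^N.
T = sp.Integer(1)
for k in reversed(range(1, N + 2)):
    T = 1 + q**(k - 1) * t / T
rhs = sp.series(1 / T, t, 0, N + 1).removeO()

assert all(sp.cancel(lhs.coeff(t, m) - rhs.coeff(t, m)) == 0
           for m in range(N + 1))
print("Rogers-Ramanujan continued fraction (5.2) verified through t^6")
```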
In terms of the Rogers–Ramanujan function
$R(t,q)\;=\;\sum_{n=0}^{\infty}{q^{n(n-1)}\over(q;q)_{n}}\>t^{n}\;,$ (5.5)
we have $g_{k}(t)=R(q^{k+1}t,q)$; the left-hand side of (5.2) is
$f_{0}(t)=R(qt,q)/R(t,q)$, and more generally we have
$f_{k}(t)=R(q^{k+1}t,q)/R(q^{k}t,q)$. It is worth remarking that the
Rogers–Ramanujan function arises in a two-variable identity due to Ramanujan
and Rogers [85] from which the famous one-variable Rogers–Ramanujan identities
[2, Chapter 7] [4, 90] can be deduced. The Rogers–Ramanujan function has also
been studied as an entire function of $t$ for $|q|<1$ [5].
In fact, Ramanujan [14, p. 30, Entry 15] gave a generalization of (5.2) with
an additional free parameter; this result can be rewritten [17, p. 57,
Exercise] as
${\displaystyle\;\sum_{n=0}^{\infty}{q^{n^{2}}\over(q;q)_{n}\,(a;q)_{n}}\>t^{n}\;\over\displaystyle\;\sum_{n=0}^{\infty}{q^{n(n-1)}\over(q;q)_{n}\,(a;q)_{n}}\>t^{n}\;}\;\>=\;\>\cfrac{1}{1+\cfrac{{\displaystyle\frac{1}{1-a}}\,t}{1+\cfrac{{\displaystyle\frac{q}{(1-a)(1-aq)}}\,t}{1+\cfrac{{\displaystyle\frac{q^{2}}{(1-aq)(1-aq^{2})}}\,t}{1+\cfrac{{\displaystyle\frac{q^{3}}{(1-aq^{2})(1-aq^{3})}}\,t}{1+\cdots}}}}}$
(5.6)
with coefficients
$\alpha_{1}\;=\;-\,{1\over
1-a}\>,\qquad\alpha_{k}\;=\;-\,{q^{k-1}\over(1-aq^{k-2})(1-aq^{k-1})}\quad\hbox{for
$k\geq 2$}\;.$ (5.7)
(Note the difference in form between $\alpha_{1}$ and the remaining
coefficients: one factor in the denominator versus two.) This result can be
derived by a slight generalization of the computation (5.4), using
$g_{-1}(t)\;=\;\sum_{n=0}^{\infty}{q^{n(n-1)}\over(q;q)_{n}\,(a;q)_{n}}\>t^{n}\,,\qquad g_{k}(t)\;=\;\sum_{n=0}^{\infty}{q^{n(n+k)}\over(q;q)_{n}\,(aq^{k};q)_{n}}\>t^{n}\quad\hbox{for $k\geq 0$}\,.$ (5.8)
(Note the corresponding difference between $k=-1$ and $k\geq 0$.) The proof,
which is not difficult, is left as an exercise for the reader.
On the other hand, there is a variant of (5.6) that is even simpler. Namely,
use $(aq;q)_{n}$ instead of $(a;q)_{n}$ in the numerator of the left-hand side
(but not the denominator); then we have
${\displaystyle\;\sum_{n=0}^{\infty}{q^{n^{2}}\over(q;q)_{n}\,(aq;q)_{n}}\>t^{n}\;\over\displaystyle\;\sum_{n=0}^{\infty}{q^{n(n-1)}\over(q;q)_{n}\,(a;q)_{n}}\>t^{n}\;}\;\>=\;\>\cfrac{1}{1+\cfrac{{\displaystyle\frac{1}{(1-a)(1-aq)}}\,t}{1+\cfrac{{\displaystyle\frac{q}{(1-aq)(1-aq^{2})}}\,t}{1+\cfrac{{\displaystyle\frac{q^{2}}{(1-aq^{2})(1-aq^{3})}}\,t}{1+\cfrac{{\displaystyle\frac{q^{3}}{(1-aq^{3})(1-aq^{4})}}\,t}{1+\cdots}}}}}$
(5.9)
with coefficients
$\alpha_{k}\;=\;-\,{q^{k-1}\over(1-aq^{k-1})(1-aq^{k})}\;.$ (5.10)
Now there is no difference between the first step and the rest, and we can use
the single formula
$g_{k}(t)\;=\;\sum_{n=0}^{\infty}{q^{n(n+k)}\over(q;q)_{n}\,(aq^{k+1};q)_{n}}\>t^{n}$
(5.11)
for all $k\geq-1$. In terms of the basic hypergeometric series
${}_{r}\phi_{s}$ defined by [50, p. 4]
${}_{r}\phi_{s}\!\left(\begin{matrix}a_{1},\ldots,a_{r}\\ b_{1},\ldots,b_{s}\end{matrix};\,q,\,t\right)\;=\;\sum_{n=0}^{\infty}{(a_{1};q)_{n}\,(a_{2};q)_{n}\,\cdots\,(a_{r};q)_{n}\over(b_{1};q)_{n}\,(b_{2};q)_{n}\,\cdots\,(b_{s};q)_{n}\,(q;q)_{n}}\>\Bigl((-1)^{n}q^{n(n-1)/2}\Bigr)^{\!s+1-r}\>t^{n}\;,$ (5.12)
the left-hand side of (5.9) is
${}_{0}\phi_{1}\!\left(\begin{smallmatrix}\hbox{---}\\ aq\end{smallmatrix};\,q,\,qt\right)\Big/\,{}_{0}\phi_{1}\!\left(\begin{smallmatrix}\hbox{---}\\ a\end{smallmatrix};\,q,\,t\right)$,
and the continued fraction (5.9) can alternatively be derived as a limiting
case of Heine’s [57] [27, p. 395] continued fraction for ratios of contiguous
${}_{2}\phi_{1}$.
**The partial theta function.** The function
$\Theta_{0}(t,q)\;=\;\sum_{n=0}^{\infty}q^{n(n-1)/2}\,t^{n}$ (5.13)
is called the partial theta function [7, Chapter 13] [8, Chapter 6] [6, 91]
because of its resemblance to the ordinary theta function, in which the sum
runs down to ${n=-\infty}$. A continued-fraction expansion for the partial
theta function was discovered by Eisenstein [35, 36] in 1844 and rediscovered
by Ramanujan [14, pp. 27–29, Entry 13] (see also [84, 45]). It reads
$\sum_{n=0}^{\infty}q^{n(n-1)/2}\,t^{n}\;=\;\cfrac{1}{1-\cfrac{t}{1-\cfrac{(q-1)t}{1-\cfrac{q^{2}t}{1-\cfrac{q(q^{2}-1)t}{1-\cfrac{q^{4}t}{1-\cfrac{q^{2}(q^{3}-1)t}{1-\cdots}}}}}}}$
(5.14)
with coefficients
$\alpha_{2j-1}\;=\;q^{2j-2},\qquad\alpha_{2j}\;=\;q^{j-1}(q^{j}-1)\;.$ (5.15)
Once again we can guess the continued fraction, and then prove it, by the
Euler–Gauss recurrence method with $g_{-1}=1$; but here it is a bit trickier
than in the previous examples to guess the coefficients ${\bm{\alpha}}$ and
the series $g_{k}(t)$. The computation is once again left as an exercise for
the reader; by the stage $g_{6}$ it should become clear that the coefficients
${\bm{\alpha}}$ are given by (5.15) and the series $g_{k}(t)$ by
$g_{2j-1}(t)=\sum_{n=0}^{\infty}\genfrac{(}{)}{0.0pt}{}{n+j-1}{n}_{\\!\\!q}\;q^{n(n+2j-1)/2}\>t^{n}\,,\qquad g_{2j}(t)=\sum_{n=0}^{\infty}\genfrac{(}{)}{0.0pt}{}{n+j}{n}_{\\!\\!q}\;q^{n(n+2j-1)/2}\>t^{n}$ (5.16)
where $\genfrac{(}{)}{0.0pt}{}{n}{k}_{\\!q}$ denotes the $\bm{q}$-binomial
coefficient
$\genfrac{(}{)}{0.0pt}{}{n}{k}_{\\!\\!q}\;=\;{(q;q)_{n}\over(q;q)_{k}\,(q;q)_{n-k}}\;.$
(5.17)
The right-hand side of (5.17) looks like a rational function of $q$, but it is
a nontrivial fact (though not terribly difficult to prove) that
$\genfrac{(}{)}{0.0pt}{}{n}{k}_{\\!q}$ is in fact a polynomial in $q$, with
nonnegative integer coefficients that have a nice combinatorial interpretation
[2, Theorem 3.1]. The $q$-binomial coefficients satisfy two “dual”
$q$-generalizations of the Pascal recurrence:
$\genfrac{(}{)}{0.0pt}{}{n}{k}_{\\!\\!q}\;=\;\genfrac{(}{)}{0.0pt}{}{n-1}{k}_{\\!\\!q}\,+\,q^{n-k}\genfrac{(}{)}{0.0pt}{}{n-1}{k-1}_{\\!\\!q}\qquad\hbox{for $n\geq 1$}$ (5.18)
$\genfrac{(}{)}{0.0pt}{}{n}{k}_{\\!\\!q}\;=\;q^{k}\genfrac{(}{)}{0.0pt}{}{n-1}{k}_{\\!\\!q}\,+\,\genfrac{(}{)}{0.0pt}{}{n-1}{k-1}_{\\!\\!q}\qquad\hbox{for $n\geq 1$}$ (5.19)
(Of course, it follows immediately from either of these recurrences that
$\genfrac{(}{)}{0.0pt}{}{n}{k}_{\\!q}$ is a polynomial in $q$, with
nonnegative integer coefficients.) Using the recurrence (5.18), it is now
straightforward to verify the Euler–Gauss recurrence (3.7) for the given
${\bm{\alpha}}$ and $g_{k}$. This completes the proof of (5.14).
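This verification, too, can be mechanized. The following sympy sketch (again my own illustration) builds the $q$-binomial coefficients from the recurrence (5.18) and checks the Euler–Gauss recurrence $g_{k}(t)-g_{k-1}(t)=\alpha_{k+1}\,t\,g_{k+1}(t)$ for the data (5.15)/(5.16) through order $t^{N}$:

```python
import sympy as sp

q, t = sp.symbols('q t')
N = 6                                        # check through order t^N

def qbinom(n, k):                            # q-binomial via the recurrence (5.18)
    if k < 0 or k > n:
        return sp.Integer(0)
    if n == 0:
        return sp.Integer(1)
    return sp.expand(qbinom(n-1, k) + q**(n-k) * qbinom(n-1, k-1))

def g(k):                                    # the series (5.16); g_{-1} = 1
    if k == -1:
        return sp.Integer(1)
    j = (k + 1) // 2
    b = (lambda n: qbinom(n + j - 1, n)) if k % 2 else (lambda n: qbinom(n + j, n))
    return sum(b(n) * q**(n*(n + 2*j - 1)//2) * t**n for n in range(N + 1))

def alpha(k):                                # the coefficients (5.15)
    j = (k + 1) // 2
    return q**(2*j - 2) if k % 2 else q**(j - 1) * (q**j - 1)

for k in range(4):                           # g_k - g_{k-1} = alpha_{k+1} t g_{k+1}
    expr = sp.expand(g(k) - g(k-1) - alpha(k+1) * t * g(k+1))
    assert all(sp.expand(expr.coeff(t, n)) == 0 for n in range(N + 1))
```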
A different (but also simple) proof of (5.14) is given in [14, pp. 27–28,
Entry 13]. A more general continued fraction can be found in Ramanujan’s lost
notebook: see [7, Section 6.2].
The reader is referred to Bhatnagar’s beautiful survey articles [17, 19] for
derivations of many other continued fractions of Ramanujan by the Euler–Gauss
recurrence method (among other methods). See also [56] for a cornucopia of
related results.
## 6 Expansion in the form (1.5)
Let us now consider expansion in the form (1.5), which generalizes the
C-fraction (1.4) and reduces to it when $M_{1}=M_{2}=\ldots=0$. Here we
consider the integers $M_{i}\geq 0$ to be pre-specified, while the integers
$p_{i}\geq M_{i}+1$ are chosen by the algorithm.
Since the treatment closely parallels that of (1.4), I will be brief and
stress only the needed modifications. It is convenient to use the abbreviation
$\Delta_{i}(t)\;=\;\sum\limits_{j=1}^{M_{i}}\delta_{i}^{(j)}t^{j}$ (6.1)
for the “additive” coefficient in (1.5); it is a polynomial of degree $\leq
M_{i}$ in $t$, with zero constant term.
As usual we define
$f_{k}(t)\;=\;\cfrac{1}{1-\Delta_{k+1}(t)-\cfrac{\alpha_{k+1}t^{p_{k+1}}}{1-\Delta_{k+2}(t)-\cfrac{\alpha_{k+2}t^{p_{k+2}}}{1-\cdots}}}$
(6.2)
and observe that $f(t)=\alpha_{0}f_{0}(t)$ and
$f_{k}(t)\;=\;{1\over
1\,-\,\Delta_{k+1}(t)\,-\,\alpha_{k+1}t^{p_{k+1}}\,f_{k+1}(t)}\qquad\hbox{for
$k\geq 0$}\;.$ (6.3)
The primitive algorithm is then:
Primitive algorithm.
1\. Set $\alpha_{0}=a_{0}=[t^{0}]\,f(t)$ and $f_{0}(t)=\alpha_{0}^{-1}f(t)$.
2\. For $k=1,2,3,\ldots$, do:
* (a)
Set $\Delta_{k}(t)$ equal to the expansion of $1-f_{k-1}(t)^{-1}$ through
order $t^{M_{k}}$.
* (b)
If $1-f_{k-1}(t)^{-1}=\Delta_{k}(t)$, set $\alpha_{k}=0$ and terminate.
* (c)
Otherwise, let $p_{k}$ be the smallest index $n>M_{k}$ such that
$[t^{n}]\,f_{k-1}(t)^{-1}\neq 0$; set
$\alpha_{k}=-[t^{p_{k}}]\,f_{k-1}(t)^{-1}$; and set
$f_{k}(t)\;=\;\alpha_{k}^{-1}t^{-p_{k}}\biggl{(}1\,-\,{1\over
f_{k-1}(t)}\,-\,\Delta_{k}(t)\biggr{)}\;.$ (6.4)
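To make the steps concrete, here is a Python sketch of the primitive algorithm in exact rational arithmetic (my own illustration; the names primitive_15 and series_inv are ad hoc, not from any library, and a serious implementation would handle the truncation bookkeeping more carefully). Applied to $a_{n}=n!$ with all $M_{k}=1$, it reproduces the classical J-fraction data $\Delta_{k}(t)=(2k-1)\,t$, $p_{k}=2$, $\alpha_{k}=k^{2}$ — the case $a=1$ of the J-fraction found in Example 9.11 below:

```python
from fractions import Fraction
from math import factorial

def series_inv(f):
    """Coefficients of 1/f, truncated like f (requires f[0] != 0)."""
    N = len(f) - 1
    g = [Fraction(1) / f[0]] + [Fraction(0)] * N
    for n in range(1, N + 1):
        g[n] = -g[0] * sum(f[j] * g[n - j] for j in range(1, n + 1))
    return g

def primitive_15(a, M):
    """Primitive algorithm for an expansion in the form (1.5).
    a = list of power-series coefficients; M[k-1] = M_k.
    Returns alpha_0 and a list of levels (Delta_k as a coefficient
    list, p_k, alpha_k), stopping when the truncated data run out."""
    f = [Fraction(c) for c in a]
    alpha0, levels, k = f[0], [], 1
    f = [c / alpha0 for c in f]                              # f_0 = f / alpha_0
    while len(f) > 1 and k <= len(M):
        r = [Fraction(0)] + [-c for c in series_inv(f)[1:]]  # 1 - 1/f_{k-1}
        Mk = M[k - 1]
        Delta, rest = r[: Mk + 1], r[Mk + 1 :]               # step (a)
        if all(c == 0 for c in rest):                        # step (b) (or out of data)
            levels.append((Delta, None, Fraction(0)))
            break
        j = next(i for i, c in enumerate(rest) if c != 0)    # step (c)
        p, alph = Mk + 1 + j, rest[j]
        levels.append((Delta, p, alph))
        f = [c / alph for c in rest[j:]]                     # f_k, shorter by p_k orders
        k += 1
    return alpha0, levels

a0, levels = primitive_15([factorial(n) for n in range(14)], [1] * 6)
for k, (Delta, p, alph) in enumerate(levels, start=1):
    print(k, Delta, p, alph)   # Delta_k = (2k-1)*t, p_k = 2, alpha_k = k^2
```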
Historical remark. The case $M_{1}=M_{2}=\ldots=1$ of the primitive algorithm
was proposed in 1772 by Lagrange [69]. See Brezinski [22, pp. 119–120] and
especially Galuzzi [49] for further discussion of this work. $\blacksquare$
Let us now discuss the refined algorithm, passing immediately to the
generalized version in which $g_{-1}$ is an arbitrary series with constant
term 1. The series $(g_{k})_{k\geq 0}$ are therefore defined by (2.8), so that
$f_{k}=g_{k}/g_{k-1}$ as before. Then the nonlinear recurrence (6.3) for the
$(f_{k})$ becomes the linear recurrence
$g_{k}(t)-g_{k-1}(t)\;=\;\Delta_{k+1}(t)g_{k}(t)\,+\,\alpha_{k+1}t^{p_{k+1}}g_{k+1}(t)$
(6.5)
for the $(g_{k})$. The occurrence here of the term $\Delta_{k+1}g_{k}$ means
that division of power series is now required in order to determine
$\Delta_{k+1}$; but this division need only be exact through order
$t^{M_{k+1}}$, which is not onerous if $M_{k+1}$ is small. Rewriting the
algorithm in terms of $(g_{k})_{k\geq-1}$, we have:
Refined algorithm.
1\. Choose any formal power series $g_{-1}(t)$ with constant term 1; then set
$\alpha_{0}=a_{0}=[t^{0}]\,f(t)$ and $g_{0}(t)=\alpha_{0}^{-1}g_{-1}(t)f(t)$.
2\. For $k=1,2,3,\ldots$, do:
* (a)
Set $\Delta_{k}(t)$ equal to the expansion of $1-g_{k-2}(t)/g_{k-1}(t)$
through order $t^{M_{k}}$.
* (b)
If $g_{k-1}(t)-g_{k-2}(t)-\Delta_{k}(t)g_{k-1}(t)=0$, set $\alpha_{k}=0$ and
terminate.
* (c)
Otherwise, let $p_{k}$ be the smallest index $n$ (necessarily $>M_{k}$) such
that
$[t^{n}]\,\bigl{(}g_{k-1}(t)-g_{k-2}(t)-\Delta_{k}(t)g_{k-1}(t)\bigr{)}\neq
0$; set
$\alpha_{k}=[t^{p_{k}}]\,\bigl{(}g_{k-1}(t)-g_{k-2}(t)-\Delta_{k}(t)g_{k-1}(t)\bigr{)}$;
and set
$g_{k}(t)\;=\;\alpha_{k}^{-1}t^{-p_{k}}\bigl{(}g_{k-1}(t)-g_{k-2}(t)-\Delta_{k}(t)g_{k-1}(t)\bigr{)}\;.$
(6.6)
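A matching sketch of the refined algorithm (again hypothetical code of mine, reusing series_inv, primitive_15 and factorial from the previous sketch) is as follows; on the same input it produces the same levels $(\Delta_{k},p_{k},\alpha_{k})$, but via the $g_{k}$ rather than the $f_{k}$:

```python
def series_mul(f, g):
    """Product of two truncated series (shorter truncation wins)."""
    N = min(len(f), len(g)) - 1
    return [sum(f[j] * g[n - j] for j in range(n + 1)) for n in range(N + 1)]

def refined_15(a, M):
    """Refined algorithm (6.6), with the choice g_{-1} = 1."""
    g_prev = [Fraction(1)] + [Fraction(0)] * (len(a) - 1)    # g_{-1}
    alpha0 = Fraction(a[0])
    g_cur = [Fraction(c) / alpha0 for c in a]                # g_0
    levels, k = [], 1
    while k <= len(M) and len(g_cur) > M[k - 1] + 1:
        Mk = M[k - 1]
        q = series_mul(g_prev, series_inv(g_cur))            # g_{k-2}/g_{k-1}
        Delta = [Fraction(0)] + [-q[n] for n in range(1, Mk + 1)]
        h = [g_cur[n] - g_prev[n]
             - sum(Delta[i] * g_cur[n - i] for i in range(1, min(n, Mk) + 1))
             for n in range(len(g_cur))]                     # g_{k-1} - g_{k-2} - Delta_k g_{k-1}
        nz = [n for n, c in enumerate(h) if c != 0]
        if not nz:                                           # terminating (or out of data)
            levels.append((Delta, None, Fraction(0)))
            break
        p, alph = nz[0], h[nz[0]]
        levels.append((Delta, p, alph))
        g_prev, g_cur = g_cur, [c / alph for c in h[p:]]     # g_k from (6.6)
        k += 1
    return alpha0, levels

N14 = [factorial(n) for n in range(14)]
print(refined_15(N14, [1] * 6)[1] == primitive_15(N14, [1] * 6)[1])   # True
```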
We can also run this algorithm in reverse, leading to a generalization of the
Euler–Gauss recurrence method as presented in (2.14). Suppose that we have a
sequence $(g_{k})_{k\geq-1}$ of formal power series with constant term 1,
which satisfy a recurrence of the form
$g_{k}(t)-g_{k-1}(t)\;=\;\Delta_{k+1}(t)\,g_{k}(t)\>+\>A_{k+1}(t)\,g_{k+1}(t)\qquad\hbox{for
$k\geq 0$}$ (6.7)
where the $\Delta_{k}(t)$ and $A_{k}(t)$ are formal power series with zero
constant term. (We need not assume that $g_{-1}=1$, nor that $\Delta_{k}(t)$
is a polynomial, nor that $A_{k}(t)$ is simply a monomial
$\alpha_{k}t^{p_{k}}$.) Dividing by $g_{k}$ and defining
$f_{k}=g_{k}/g_{k-1}$, we have
$f_{k}(t)\;=\;{1\over
1\,-\,\Delta_{k+1}(t)\,-\,A_{k+1}(t)\,f_{k+1}(t)}\qquad\hbox{for $k\geq
0$}\;,$ (6.8)
which by iteration yields the continued-fraction expansions
$f_{k}(t)\;=\;\cfrac{1}{1-\Delta_{k+1}(t)-\cfrac{A_{k+1}(t)}{1-\Delta_{k+2}(t)-\cfrac{A_{k+2}(t)}{1-\cdots}}}\;.$
(6.9)
When $\Delta_{k}(t)$ is a polynomial of degree $\leq M_{k}$ and
$A_{k}(t)=\alpha_{k}t^{p_{k}}$, this reduces to (6.2). This method was used by
Rogers [89, p. 76] in 1907 to obtain expansions as an associated continued
fraction (i.e. $M_{1}=M_{2}=\ldots=1$ and $p_{1}=p_{2}=\ldots=2$) for the
Laplace transforms of the Jacobian elliptic functions sn and cn (see also [44,
p. 237]). Some spectacular extensions of these results, using the same method,
were given in the early 2000s by Milne [75, section 3] and Conrad and Flajolet
[24, 25]. On the other hand, the special case $\Delta_{k}(t)=\delta_{k}t$ and
$A_{k}(t)=\alpha_{k}t$ is also important, and is called a T-fraction [97, 87,
93, 83, 38].
## 7 Expansion in the form (1.6)
The continued-fraction schema (1.6) is so general that the expansion of a
given series $f(t)$ in this form is far from unique. Indeed, the series
$\Delta_{k}(t)$ can be chosen completely arbitrarily (with zero constant
term), while the $A_{k}(t)$ need only have the correct leading terms and are
otherwise also completely arbitrary. Let us define as usual
$f_{k}(t)\;=\;\cfrac{1}{1-\Delta_{k+1}(t)-\cfrac{A_{k+1}(t)}{1-\Delta_{k+2}(t)-\cfrac{A_{k+2}(t)}{1-\cdots}}}\qquad\hbox{for
$k\geq 0$}\;;$ (7.1)
these are formal power series with constant term 1, which satisfy
$f(t)=A_{0}(t)\,f_{0}(t)$ and
$f_{k}(t)\;=\;{1\over
1\,-\,\Delta_{k+1}(t)\,-\,A_{k+1}(t)\,f_{k+1}(t)}\qquad\hbox{for $k\geq
0$}\;.$ (7.2)
The procedure for finding a continued-fraction expansion of a given series
$f(t)$ in the form (1.6) — I am reluctant to call it an “algorithm”, as it now
involves so many arbitrary choices — is then as follows:
Primitive procedure.
1\. Let $A_{0}(t)$ be any formal power series having the same leading term as
$f(t)$; and set $f_{0}(t)=A_{0}(t)^{-1}f(t)$.
2\. For $k=1,2,3,\ldots$, do:
* (a)
Let $\Delta_{k}(t)$ be any formal power series with zero constant term.
* (b)
If $1-f_{k-1}(t)^{-1}=\Delta_{k}(t)$, set $A_{k}(t)=0$ and terminate.
* (c)
Otherwise, let $p_{k}$ be the smallest index $n$ such that
$[t^{n}]\,[1-f_{k-1}(t)^{-1}-\Delta_{k}(t)]\neq 0$; set
$\alpha_{k}=[t^{p_{k}}]\,[1-f_{k-1}(t)^{-1}-\Delta_{k}(t)]$; let $A_{k}(t)$ be
any formal power series with leading term $\alpha_{k}t^{p_{k}}$; and set
$f_{k}(t)\;=\;A_{k}(t)^{-1}\biggl{(}1\,-\,{1\over
f_{k-1}(t)}\,-\,\Delta_{k}(t)\biggr{)}\;.$ (7.3)
The corresponding refined procedure is now left as an exercise for the reader;
it is a minor modification of the one presented in the preceding section. And
the corresponding generalization of the Euler–Gauss recurrence method was
already discussed in that section.
## 8 Combinatorial interpretation
A combinatorial interpretation of continued fractions in terms of lattice
paths was given in a seminal 1980 paper by the late Philippe Flajolet [43]; we
review it here, and then show how it can be used to interpret the series
$(f_{k})_{k\geq 0}$ and $(g_{k})_{k\geq 0}$ arising in our algorithm.
A Motzkin path is a path in the upper half-plane
${\mathbb{Z}}\times{\mathbb{N}}$, starting and ending on the horizontal axis,
using steps $(1,1)$ [“rise”], $(1,0)$ [“level step”] and $(1,-1)$ [“fall”].
More generally, a Motzkin path at level $\bm{k}$ is a path in
${\mathbb{Z}}\times{\mathbb{N}}_{\geq k}$, starting and ending at height $k$,
using these same steps. We denote by ${\mathcal{M}}_{k\to k}$ the set of all
Motzkin paths at level $k$ that start at $(0,k)$. We stress that a Motzkin
path must always stay on or above the horizontal axis, and that a Motzkin path
at level $k$ must always stay at height $\geq k$. A Motzkin path is called a
Dyck path if it has no level steps; obviously a Dyck path must have even
length.
Now let ${\bf a}=(a_{i})_{i\geq 0}$, ${\bf b}=(b_{i})_{i\geq 1}$ and ${\bf
c}=(c_{i})_{i\geq 0}$ be indeterminates; we will work in the ring
${\mathbb{Z}}[[{\bf a},{\bf b},{\bf c}]]$ of formal power series in these
indeterminates. We assign to each Motzkin path $\omega$ a weight
$W(\omega)\in{\mathbb{Z}}[[{\bf a},{\bf b},{\bf c}]]$ that is the product of
the weights for the individual steps, where a rise starting at height $i$ gets
weight $a_{i}$, a fall starting at height $i$ gets weight $b_{i}$, and a level
step at height $i$ gets weight $c_{i}$ (see Figure 1).
Figure 1: A Motzkin path of length 9, which gets weight $a_{0}^{2}a_{1}b_{1}^{2}b_{2}c_{0}c_{1}c_{2}$.
Define now for $k\geq 0$ the generating functions
$f_{k}\;=\;\sum_{\omega\in{\mathcal{M}}_{k\to k}}W(\omega)\;.$ (8.1)
These are well-defined elements of ${\mathbb{Z}}[[{\bf a},{\bf b},{\bf c}]]$
because there are finitely many $n$-step paths in ${\mathcal{M}}_{k\to k}$, so
each monomial occurs at most finitely many times.
Flajolet [43] showed how to express the generating functions $f_{k}$ as a
continued fraction:
###### Theorem 8.1 (Flajolet’s master theorem).
For each $k\geq 0$,
$f_{k}\;=\;\cfrac{1}{1-c_{k}-\cfrac{a_{k}b_{k+1}}{1-c_{k+1}-\cfrac{a_{k+1}b_{k+2}}{1-\cdots}}}$
(8.2)
as an identity in ${\mathbb{Z}}[[{\bf a},{\bf b},{\bf c}]]$.
Of course, the identity (8.2) for one value of $k$ trivially implies it for
all $k$, by redefining heights; but in the proof it is natural to consider all
$k$ simultaneously.
Proof [43]. Observe first that the right-hand side of (8.2) is a well-defined
element of ${\mathbb{Z}}[[{\bf a},{\bf b},{\bf c}]]$, because all terms
involving only $(a_{i})_{i\leq k+r-1}$, $(b_{i})_{i\leq k+r}$ and
$(c_{i})_{i\leq k+r-1}$ can be obtained by cutting off the continued fraction
at level $r$, yielding a rational fraction that expands into a well-defined
formal power series.
To prove (8.2), we proceed as follows. First define
$f_{k}^{\star}\;=\;\sum_{\omega\in{\mathcal{M}}_{k\to k}^{\rm
irred}}W(\omega)\;,$ (8.3)
where the sum is taken over irreducible Motzkin paths at level $k$, i.e. paths
of length $\geq 1$ that do not return to height $k$ until the final step.
Since a Motzkin path can be uniquely decomposed as a concatenation of some
number $m\geq 0$ of irreducible Motzkin paths, we have
$f_{k}\;=\;\sum_{m=0}^{\infty}(f_{k}^{\star})^{m}\;=\;{1\over
1-f_{k}^{\star}}\;.$ (8.4)
On the other hand, an irreducible Motzkin path at level $k$ is either a single
level step at height $k$ or else begins with a rise $k\to k+1$ and ends with a
fall $k+1\to k$, with an arbitrary Motzkin path at level $k+1$ in-between;
thus
$f_{k}^{\star}\;=\;c_{k}\,+\,a_{k}b_{k+1}f_{k+1}\;.$ (8.5)
Putting together (8.4) and (8.5), we have
$f_{k}\;=\;{1\over 1-c_{k}-a_{k}b_{k+1}f_{k+1}}\;.$ (8.6)
Iterating (8.6), we obtain (8.2). $\square$
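Flajolet's master theorem is easy to test numerically. The sketch below (an illustration of mine, reusing Fraction and series_inv from the Section 6 sketch) counts Motzkin paths by dynamic programming on the endpoint height and compares with a truncation of the continued fraction (8.2), in the simplest specialization $a_{i}=b_{i}=c_{i}=1$, where both computations yield the Motzkin numbers $1,1,2,4,9,21,51,\ldots$:

```python
from fractions import Fraction   # series_inv as in the Section 6 sketch

def motzkin_counts(n_max):
    """Motzkin paths of length 0..n_max (all weights = 1), counted by
    dynamic programming on the height of the endpoint."""
    counts, heights = [], {0: 1}
    for n in range(n_max + 1):
        counts.append(heights.get(0, 0))
        new = {}
        for h, c in heights.items():
            for dh in (1, 0, -1):                 # rise, level step, fall
                if h + dh >= 0:
                    new[h + dh] = new.get(h + dh, 0) + c
        heights = new
    return counts

def cf_taylor(n_max, depth):
    """Taylor coefficients of (8.2) with a_i = b_i = c_i = 1, truncating
    the continued fraction at the given depth (f_depth := 1)."""
    f = [Fraction(1)] + [Fraction(0)] * n_max
    for _ in range(depth):                        # f <- 1/(1 - t - t^2 f)
        denom = [Fraction(1), Fraction(-1)] + [-c for c in f[: n_max - 1]]
        f = series_inv(denom)
    return f

print(motzkin_counts(8))   # [1, 1, 2, 4, 9, 21, 51, 127, 323]
print(cf_taylor(8, 8))     # the same numbers, as Fractions
```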
Let us now generalize this setup slightly by defining, for any $k,\ell\geq 0$,
a Motzkin path at level $\bm{k\to\ell}$ to be a path in
${\mathbb{Z}}\times{\mathbb{N}}$, starting at height $k$ and ending at height
$\ell$, that stays always at height $\geq\min(k,\ell)$. We write
${\mathcal{M}}_{k\to\ell}$ for the set of all Motzkin paths at level
$k\to\ell$ that start at $(0,k)$. For $\ell=k$ this reduces to the previous
definition. We then define the generating function
$g_{k\to\ell}\;=\;\sum_{\omega\in{\mathcal{M}}_{k\to\ell}}W(\omega)\;.$ (8.7)
The generating functions $g_{k\to\ell}$ have a simple expression in terms of
the $f_{k}$:
###### Proposition 8.2.
For $k,\ell\geq 0$ we have
$g_{k\to\ell}\;=\;\begin{cases}f_{k}a_{k}f_{k+1}a_{k+1}\cdots
f_{\ell-1}a_{\ell-1}f_{\ell}&\textrm{if $k\leq\ell$}\\\\[2.84526pt]
f_{k}b_{k}f_{k-1}b_{k-1}\cdots f_{\ell+1}b_{\ell+1}f_{\ell}&\textrm{if
$k\geq\ell$}\end{cases}$ (8.8)
Proof [54, pp. 295–296] [101, pp. II-7–II-8]. For $k<\ell$, any path in
${\mathcal{M}}_{k\to\ell}$ can be uniquely decomposed by cutting it at its
last return to height $k$, then at its last return to height $k+1$, …, and so
forth through its last return to height $\ell-1$. The pieces of this
decomposition are an arbitrary Motzkin path at level $k$, followed by a rise
$k\to k+1$, followed by an arbitrary Motzkin path at level $k+1$, followed by
a rise $k+1\to k+2$, …, followed by an arbitrary Motzkin path at level $\ell$.
A similar argument handles the case $k>\ell$. $\square$
We can now specialize the foregoing results to interpret continued fractions
of the general form (1.6). Indeed, by taking $a_{i}=1$, $b_{i}=A_{i}(t)$ and
$c_{i}=\Delta_{i+1}(t)$, we see that (1.6) is $A_{0}(t)$ times the generating
function for Motzkin paths at level 0 with the above weights. Furthermore, the
recurrence (8.6) relating $f_{k}$ to $f_{k+1}$ is identical to the recurrence
(7.2); so the series $(f_{k})_{k\geq 0}$ arising in our algorithm are
identical to those defined in (8.1), which enumerate Motzkin paths at level
$k$. And finally, by Proposition 8.2, the series $(g_{k}/g_{-1})_{k\geq 0}$
arising in our refined algorithm are identical to $(g_{0\to k})_{k\geq 0}$
defined in (8.7), which enumerate Motzkin paths at level $0\to k$. We can
therefore state:
###### Proposition 8.3.
The continued fraction (1.6) is $A_{0}(t)$ times the generating function for
Motzkin paths at level $0$ in which each rise gets weight $1$, each fall
starting at height $i$ gets weight $A_{i}(t)$, and each level step at height
$i$ gets weight $\Delta_{i+1}(t)$.
Moreover, $f_{k}$ is the generating function for Motzkin paths at level $k$
with these weights, and $g_{k}$ ($k\geq 0$) is $g_{-1}(t)$ times the
generating function for Motzkin paths at level $0\to k$ with these weights.
Specializing this result we obtain interpretations of (1.5) and (1.4). In the
latter case the level steps get weight $c_{i}=0$, so the relevant paths are
Dyck paths.
Theorem 8.1 provides a powerful tool for proving continued fractions in
enumerative combinatorics. Suppose that $P_{n}({\bf x})$ is the generating
polynomial for some class ${\mathcal{O}}_{n}$ of combinatorial objects of
“size $n$” with respect to some set of statistics. (Example: The polynomials
$P_{n}(a,b)$ defined in (3.17), which enumerate the set ${\mathfrak{S}}_{n}$
of permutations of $\\{1,\ldots,n\\}$ with respect to records and exclusive
antirecords.) And suppose that we can find a bijection from
${\mathcal{O}}_{n}$ to some set ${\mathcal{L}}_{n}$ of labeled Motzkin paths,
i.e. Motzkin paths augmented by putting labels on the steps, where the label
for a rise (resp. fall, level step) starting at height $i$ belongs to some
specified set ${\mathcal{A}}_{i}$ (resp. ${\mathcal{B}}_{i}$,
${\mathcal{C}}_{i}$) of allowed labels. Then the weights $a_{i},b_{i},c_{i}$
in the continued fraction (8.2) can be obtained by summing over the labels.
This method goes back to Flajolet [43]; for a detailed presentation with
application to permutations and set partitions, see [94, Sections 5–7].
## 9 Connection with the work of Stieltjes and Rogers
From now on we restrict attention to regular C-fractions
$\cfrac{1}{1-\cfrac{\alpha_{1}t}{1-\cfrac{\alpha_{2}t}{1-\cdots}}}$ (9.1)
and associated continued fractions
$\cfrac{1}{1-\gamma_{0}t-\cfrac{\beta_{1}t^{2}}{1-\gamma_{1}t-\cfrac{\beta_{2}t^{2}}{1-\cdots}}}$
(9.2)
— what combinatorialists call S-fractions and J-fractions, respectively.
It is instructive to treat the coefficients
${\bm{\alpha}},{\bm{\beta}},{\bm{\gamma}}$ in these continued fractions as
algebraic indeterminates. We therefore write the S-fraction as
$\cfrac{1}{1-\cfrac{\alpha_{1}t}{1-\cfrac{\alpha_{2}t}{1-\cdots}}}\;\;=\;\;\sum_{n=0}^{\infty}S_{n}({\bm{\alpha}})\,t^{n}$
(9.3)
where $S_{n}({\bm{\alpha}})$ is obviously a homogeneous polynomial of degree
$n$ with nonnegative integer coefficients; following Flajolet [43], we call it
the Stieltjes–Rogers polynomial of order $n$. Likewise, we write the
J-fraction as
$\cfrac{1}{1-\gamma_{0}t-\cfrac{\beta_{1}t^{2}}{1-\gamma_{1}t-\cfrac{\beta_{2}t^{2}}{1-\cdots}}}\;\;=\;\;\sum_{n=0}^{\infty}J_{n}({\bm{\beta}},{\bm{\gamma}})\,t^{n}$
(9.4)
where $J_{n}({\bm{\beta}},{\bm{\gamma}})$ is a polynomial with nonnegative
integer coefficients that is quasi-homogeneous of degree $n$ if we assign
weight 1 to each $\gamma_{i}$ and weight 2 to each $\beta_{i}$; again
following Flajolet [43], we call it the Jacobi–Rogers polynomial of order $n$.
Since these are polynomials with nonnegative integer coefficients, it is
natural to ask what they count. Flajolet’s master theorem provides the
immediate answer:
###### Theorem 9.1 (Combinatorial interpretation of J-fractions and
S-fractions).
* (a)
The Jacobi–Rogers polynomial $J_{n}({\bm{\beta}},{\bm{\gamma}})$ is the
generating polynomial for Motzkin paths of length $n$, in which each rise gets
weight 1, each fall from height $i$ gets weight $\beta_{i}$, and each level
step at height $i$ gets weight $\gamma_{i}$.
* (b)
The Stieltjes–Rogers polynomial $S_{n}({\bm{\alpha}})$ is the generating
polynomial for Dyck paths of length $2n$, in which each rise gets weight 1 and
each fall from height $i$ gets weight $\alpha_{i}$.
(We here made the arbitrary choice to weight the falls and not the rises. Of
course we could have done the reverse.)
But we can go farther. Let us define a partial Motzkin path to be a path in
the upper half-plane ${\mathbb{Z}}\times{\mathbb{N}}$, starting on the
horizontal axis but ending anywhere, using the steps $(1,1)$, $(1,0)$ and
$(1,-1)$. Now define the generalized Jacobi–Rogers polynomial
$J_{n,k}({\bm{\beta}},{\bm{\gamma}})$ to be the generating polynomial for
partial Motzkin paths from $(0,0)$ to $(n,k)$, in which each rise gets weight
1, each fall from height $i$ gets weight $\beta_{i}$, and each level step at
height $i$ gets weight $\gamma_{i}$. Obviously $J_{n,k}$ is nonvanishing only
for $0\leq k\leq n$, so we have an infinite lower-triangular array ${\sf
J}=\big{(}J_{n,k}({\bm{\beta}},{\bm{\gamma}})\big{)}_{\\!n,k\geq 0}$ in which
the zeroth column displays the ordinary Jacobi–Rogers polynomials
$J_{n,0}=J_{n}$. On the diagonal we have $J_{n,n}=1$, and on the first
subdiagonal we have $J_{n,n-1}=\sum_{i=0}^{n-1}\gamma_{i}$. By considering the
last step of the path, we see that the polynomials
$J_{n,k}({\bm{\beta}},{\bm{\gamma}})$ satisfy the recurrence
$J_{n+1,k}\;=\;J_{n,k-1}\>+\>\gamma_{k}J_{n,k}\>+\>\beta_{k+1}J_{n,k+1}$ (9.5)
with the initial condition $J_{0,k}=\delta_{k0}$ (where of course we set
$J_{n,-1}=0$).
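The recurrence (9.5) translates immediately into code. In the sketch below (illustrative code of mine, in the same style as the earlier sketches), the callables beta and gamma supply the coefficients; the example $\gamma_{k}=k+1$, $\beta_{k}=k$ is the classical J-fraction, going back to Flajolet [43], whose zeroth column gives the Bell numbers:

```python
def jacobi_rogers(beta, gamma, N):
    """Rows 0..N of the triangle J_{n,k}, built from the recurrence (9.5);
    beta and gamma are callables giving beta_i and gamma_i."""
    rows = [[1]]
    for n in range(N):
        old = rows[-1]
        val = lambda k: old[k] if 0 <= k < len(old) else 0
        rows.append([val(k - 1) + gamma(k) * val(k) + beta(k + 1) * val(k + 1)
                     for k in range(n + 2)])
    return rows

# gamma_k = k+1, beta_k = k: the classical J-fraction for the Bell numbers
T = jacobi_rogers(lambda k: k, lambda k: k + 1, 7)
print([row[0] for row in T])   # [1, 1, 2, 5, 15, 52, 203, 877]
```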
Similarly, let us define a partial Dyck path to be a partial Motzkin path
without level steps. Clearly a partial Dyck path starting at the origin must
stay on the even sublattice. Now define the generalized Stieltjes–Rogers
polynomial of the first kind $S_{n,k}({\bm{\alpha}})$ to be the generating
polynomial for Dyck paths starting at $(0,0)$ and ending at $(2n,2k)$, in
which each rise gets weight 1 and each fall from height $i$ gets weight
$\alpha_{i}$. Obviously $S_{n,k}$ is nonvanishing only for $0\leq k\leq n$, so
we have an infinite lower-triangular array ${\sf
S}=(S_{n,k}({\bm{\alpha}}))_{n,k\geq 0}$ in which the zeroth column displays
the ordinary Stieltjes–Rogers polynomials $S_{n,0}=S_{n}$. We have $S_{n,n}=1$
and $S_{n,n-1}=\sum_{i=1}^{2n-1}\alpha_{i}$.
Likewise, let us define the generalized Stieltjes–Rogers polynomial of the
second kind $S^{\prime}_{n,k}({\bm{\alpha}})$ to be the generating polynomial
for Dyck paths starting at $(0,0)$ and ending at $(2n+1,2k+1)$, in which again
each rise gets weight 1 and each fall from height $i$ gets weight
$\alpha_{i}$. Since $S^{\prime}_{n,k}$ is nonvanishing only for $0\leq k\leq
n$, we obtain a second infinite lower-triangular array ${\sf
S}^{\prime}=(S^{\prime}_{n,k}({\bm{\alpha}}))_{n,k\geq 0}$. We have
$S^{\prime}_{n,n}=1$ and $S^{\prime}_{n,n-1}=\sum_{i=1}^{2n}\alpha_{i}$.
The polynomials $S_{n,k}({\bm{\alpha}})$ and $S^{\prime}_{n,k}({\bm{\alpha}})$
manifestly satisfy the joint recurrence
$S^{\prime}_{n,k}\;=\;S_{n,k}\>+\>\alpha_{2k+2}\,S_{n,k+1}\,,\qquad S_{n+1,k}\;=\;S^{\prime}_{n,k-1}\>+\>\alpha_{2k+1}\,S^{\prime}_{n,k}$ (9.6)
for $n,k\geq 0$, with the initial conditions $S_{0,k}=\delta_{k0}$ and
$S^{\prime}_{n,-1}=0$. It follows that the $S_{n,k}$ satisfy the recurrence
$S_{n+1,k}\;=\;S_{n,k-1}\>+\>(\alpha_{2k}+\alpha_{2k+1})\,S_{n,k}\>+\>\alpha_{2k+1}\alpha_{2k+2}\,S_{n,k+1}$
(9.7)
(where $S_{n,-1}=0$ and $\alpha_{0}=0$), while the $S^{\prime}_{n,k}$ satisfy
the recurrence
$S^{\prime}_{n+1,k}\;=\;S^{\prime}_{n,k-1}\>+\>(\alpha_{2k+1}+\alpha_{2k+2})\,S^{\prime}_{n,k}\>+\>\alpha_{2k+2}\alpha_{2k+3}\,S^{\prime}_{n,k+1}\;.$
(9.8)
Note that (9.7) and (9.8) have the same form as (9.5), when ${\bm{\beta}}$ and
${\bm{\gamma}}$ are defined suitably in terms of the ${\bm{\alpha}}$: these
correspondences are examples of contraction formulae [104, p. 21] [101, p.
V-31] that express an S-fraction as an equivalent J-fraction. The recurrences
(9.5)/(9.7)/(9.8) define implicitly the (tridiagonal) production matrices for
${\sf J}$, ${\sf S}$ and ${\sf S}^{\prime}$: see [29, 30, 83]. Some workers
call the arrays ${\sf J}$, ${\sf S}$ and/or ${\sf S}^{\prime}$ the Stieltjes
table.
The columns of the arrays ${\sf S}$ and ${\sf S}^{\prime}$ are closely related
to the series $g_{k}(t)$ of the Euler–Viscovatov algorithm (2.7)/(2.11) with
$g_{-1}=1$ for the S-fraction (9.1), as follows:
###### Proposition 9.2.
Let $g_{-1}(t)=1$, let $g_{0}(t)$ be given by the S-fraction (9.1), and let
the series $(g_{k})_{k\geq-1}$ satisfy the recurrence
$g_{k}(t)-g_{k-1}(t)\;=\;\alpha_{k+1}t\,g_{k+1}(t)\qquad\hbox{for $k\geq
0$}\;.$ (9.9)
Then, in terms of the coefficients $g_{k,n}$ defined by
$g_{k}(t)=\sum\limits_{n=0}^{\infty}g_{k,n}t^{n}$, we have
$g_{2j,n}\;=\;S_{n+j,j}\,,\qquad g_{2j+1,n}\;=\;S^{\prime}_{n+j,j}\;.$ (9.10)
In other words, the columns of ${\sf S}$ (resp. ${\sf S}^{\prime}$) coincide
with the coefficients of the even (resp. odd) $g_{k}$, but shifted downwards
to start at the diagonal.
We will give two proofs of Proposition 9.2: one combinatorial and one
algebraic.
First Proof. We apply Flajolet’s master theorem (Theorem 8.1) with $a_{i}=1$,
$b_{i}=\alpha_{i}t$ and $c_{i}=0$. Then $f_{0}(t)$ is the S-fraction (9.3),
and $f_{k}(t)$ is the analogous S-fraction but starting at $\alpha_{k+1}$. By
Proposition 8.2 we have $g_{0\to\ell}=f_{0}f_{1}\cdots f_{\ell}$, which equals
the $g_{\ell}$ of the Euler–Gauss recurrence (9.9) (since $g_{-1}=1$). So
$g_{\ell}$ is the generating function for Dyck paths at level $0\to\ell$ with
the weights given above. The coefficient of $t^{n}$ in $g_{\ell}$ corresponds
to paths with $n$ falls and $n+\ell$ rises, so the endpoint is
$(2n+\ell,\ell)$. If $\ell=2j$, this gives $S_{n+j,j}$; if $\ell=2j+1$ this
gives $S^{\prime}_{n+j,j}$. $\square$
Second Proof. The recurrence (9.9) can be written in terms of the coefficients
$g_{k,n}$ as
$g_{k,n}-g_{k-1,n}\;=\;\alpha_{k+1}\,g_{k+1,n-1}\;.$ (9.11)
Evaluating this for $k=2j$ and $k=2j+1$ and using (9.10), we recover the
recurrences (9.6). Note also that $S^{\prime}_{n,-1}=g_{-1,n+1}=0$ by
hypothesis. $\square$
###### Example 9.3.
Consider the continued fraction (1.1). With $g_{-1}=1$, the first few $g_{k}$
are
$g_{0}(t)=1+t+2t^{2}+6t^{3}+24t^{4}+120t^{5}+720t^{6}+\ldots$
$g_{1}(t)=1+2t+6t^{2}+24t^{3}+120t^{4}+720t^{5}+5040t^{6}+\ldots$
$g_{2}(t)=1+4t+18t^{2}+96t^{3}+600t^{4}+4320t^{5}+35280t^{6}+\ldots$
$g_{3}(t)=1+6t+36t^{2}+240t^{3}+1800t^{4}+15120t^{5}+141120t^{6}+\ldots$
$g_{4}(t)=1+9t+72t^{2}+600t^{3}+5400t^{4}+52920t^{5}+564480t^{6}+\ldots$
$g_{5}(t)=1+12t+120t^{2}+1200t^{3}+12600t^{4}+141120t^{5}+1693440t^{6}+\ldots$
$g_{6}(t)=1+16t+200t^{2}+2400t^{3}+29400t^{4}+376320t^{5}+5080320t^{6}+\ldots$ (9.12)
while the first few rows of ${\sf S}$ and ${\sf S}^{\prime}$ are
${\sf S}\;=\;\begin{bmatrix}1&&&&&&\\ 1&1&&&&&\\ 2&4&1&&&&\\ 6&18&9&1&&&\\ 24&96&72&16&1&&\\ 120&600&600&200&25&1&\\ 720&4320&5400&2400&450&36&1\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\end{bmatrix}$ (9.13)
${\sf S}^{\prime}\;=\;\begin{bmatrix}1&&&&&&\\ 2&1&&&&&\\ 6&6&1&&&&\\ 24&36&12&1&&&\\ 120&240&120&20&1&&\\ 720&1800&1200&300&30&1&\\ 5040&15120&12600&4200&630&42&1\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\end{bmatrix}$ (9.14)
The correspondences (9.10) can be observed. From (3.6) we have
$g_{2j-1,n}=\binom{n+j}{n}\binom{n+j-1}{n}\,n!\,,\qquad g_{2j,n}=\binom{n+j}{n}^{\\!2}\,n!$ (9.15)
and hence
$S_{n,k}=g_{2k,n-k}\;=\;\binom{n}{k}^{\\!2}\,(n-k)!\,,\qquad S^{\prime}_{n,k}=g_{2k+1,n-k}\;=\;\binom{n+1}{k+1}\,\binom{n}{k}\,(n-k)!$ (9.16)
The recurrences (9.6)–(9.8) with $\alpha_{2j-1}=\alpha_{2j}=j$ can easily be
checked. $\blacksquare$
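They can equally well be checked by machine: the joint recurrence (9.6) gives the following sketch (same illustrative style as the earlier ones), which with $\alpha_{2j-1}=\alpha_{2j}=j$ rebuilds the arrays (9.13) and (9.14):

```python
def stieltjes_rogers(alpha, N):
    """Rows 0..N of S_{n,k} and S'_{n,k}, built from the joint
    recurrence (9.6); alpha is a callable giving alpha_i."""
    S, Sp = [[1]], []
    for n in range(N + 1):
        row = S[n]
        val = lambda k: row[k] if 0 <= k < len(row) else 0
        Sp.append([val(k) + alpha(2*k + 2) * val(k + 1) for k in range(n + 1)])
        rowp = Sp[n]
        valp = lambda k: rowp[k] if 0 <= k < len(rowp) else 0
        S.append([valp(k - 1) + alpha(2*k + 1) * valp(k) for k in range(n + 2)])
    return S[: N + 1], Sp

S, Sp = stieltjes_rogers(lambda i: (i + 1) // 2, 6)   # alpha_{2j-1} = alpha_{2j} = j
print(S[6])    # [720, 4320, 5400, 2400, 450, 36, 1]: last row shown in (9.13)
print(Sp[6])   # [5040, 15120, 12600, 4200, 630, 42, 1]: last row shown in (9.14)
```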
Exercise. Work out the corresponding formulae for the continued fractions
(1.2) and (4.5). $\blacksquare$
An analogous result connects the series $g_{k}(t)$ of the Euler–Viscovatov
algorithm (6.6) with $g_{-1}=1$ for the J-fraction (9.2) to the columns of the
matrix ${\sf J}$:
###### Proposition 9.4.
Let $g_{-1}(t)=1$, let $g_{0}(t)$ be given by the J-fraction (9.2), and let
the series $(g_{k})_{k\geq-1}$ satisfy the recurrence
$g_{k}(t)-g_{k-1}(t)\;=\;\gamma_{k}t\,g_{k}(t)\>+\>\beta_{k+1}t^{2}\,g_{k+1}(t)\qquad\hbox{for
$k\geq 0$}\;.$ (9.17)
Then, in terms of the coefficients $g_{k,n}$ defined by
$g_{k}(t)=\sum\limits_{n=0}^{\infty}g_{k,n}t^{n}$, we have
$g_{k,n}\;=\;J_{n+k,k}\;.$ (9.18)
Once again, this can be proven either combinatorially or algebraically; these
are left as exercises for the reader.
We can also interpret the exponential generating functions of the columns of
these lower-triangular arrays, by using Hankel matrices. Given a sequence
${\bm{a}}=(a_{n})_{n\geq 0}$ and an integer $m\geq 0$, we define the
$m$-shifted infinite Hankel matrix
$H_{\infty}^{(m)}({\bm{a}})=(a_{i+j+m})_{i,j\geq 0}$. We will apply this to
the sequences ${\bm{J}}=(J_{n}({\bm{\beta}},{\bm{\gamma}}))_{n\geq 0}$ and
${\bm{S}}=(S_{n}({\bm{\alpha}}))_{n\geq 0}$ of Jacobi–Rogers and
Stieltjes–Rogers polynomials. It turns out that the corresponding Hankel
matrices have beautiful $LDL^{\rm T}$ factorizations in terms of the
triangular arrays of generalized Jacobi–Rogers and Stieltjes–Rogers
polynomials:
###### Theorem 9.5 ($LDL^{\rm T}$ factorization of Hankel matrices of
Jacobi–Rogers and Stieltjes–Rogers polynomials).
We have the factorizations
* (a)
$H_{\infty}^{(0)}({\bm{J}})\,=\,{\sf J}D{\sf J}^{\rm T}$ where $D=\mathop{\rm
diag}\nolimits(1,\beta_{1},\beta_{1}\beta_{2},\ldots)$;
* (b)
$H_{\infty}^{(0)}({\bm{S}})\,=\,{\sf S}D{\sf S}^{\rm T}$ where $D=\mathop{\rm
diag}\nolimits(1,\alpha_{1}\alpha_{2},\alpha_{1}\alpha_{2}\alpha_{3}\alpha_{4},\ldots)$;
* (c)
$H_{\infty}^{(1)}({\bm{S}})\,=\,{\sf S}^{\prime}D^{\prime}({\sf S}^{\prime})^{\rm T}$ where $D^{\prime}=\mathop{\rm diag}\nolimits(\alpha_{1},\alpha_{1}\alpha_{2}\alpha_{3},\alpha_{1}\alpha_{2}\alpha_{3}\alpha_{4}\alpha_{5},\ldots)$.
Proof. It suffices to note the identity [1, p. 351] [60, Remark 2.2]
$J_{n+n^{\prime},0}({\bm{\beta}},{\bm{\gamma}})\;=\;\sum_{k=0}^{\infty}J_{n,k}({\bm{\beta}},{\bm{\gamma}})\biggl{(}\prod_{i=1}^{k}\beta_{i}\\!\biggr{)}J_{n^{\prime},k}({\bm{\beta}},{\bm{\gamma}})\;,$
(9.19)
which arises from splitting a Motzkin path of length $n+n^{\prime}$ into its
first $n$ steps and its last $n^{\prime}$ steps, and then imagining the second
part run backwards: the factor $\prod_{i=1}^{k}\beta_{i}$ arises from the fact
that when we reversed the path we interchanged rises with falls and thus lost
a factor $\prod_{i=1}^{k}\beta_{i}$ for those falls that were not paired with
rises. The identity (9.19) can be written in matrix form as in part (a).
The proofs of (b) and (c) are similar. $\square$
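Part (a) can be tested numerically; continuing the Bell-number example from the sketch for (9.5) above (illustrative code, reusing jacobi_rogers), one verifies the identity (9.19) directly:

```python
# Check of Theorem 9.5(a) on the Bell-number example: the Hankel matrix
# of the zeroth column equals J D J^T with D = diag(1, beta_1, beta_1*beta_2, ...)
n = 4
T = jacobi_rogers(lambda k: k, lambda k: k + 1, 2 * n)
bell = [row[0] for row in T]
d = [1] * (n + 1)
for k in range(1, n + 1):
    d[k] = d[k - 1] * k                     # beta_k = k, so prod_{i<=k} beta_i = k!
J = [[T[i][k] if k <= i else 0 for k in range(n + 1)] for i in range(n + 1)]
H = [[sum(J[i][k] * d[k] * J[j][k] for k in range(n + 1)) for j in range(n + 1)]
     for i in range(n + 1)]
assert H == [[bell[i + j] for j in range(n + 1)] for i in range(n + 1)]
```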
We can now prove an important equivalent formulation of the factorization
$H_{\infty}^{(0)}({\bm{J}})={\sf J}D{\sf J}^{\rm T}$, known as Rogers’
addition formula [89]. We start with a simple observation:
###### Lemma 9.6 (Bivariate egf of a Hankel matrix).
Let ${\bm{a}}=(a_{n})_{n\geq 0}$ be a sequence in a commutative ring $R$
containing the rationals, and let
$A(t)\;=\;\sum_{n=0}^{\infty}a_{n}\,{t^{n}\over n!}$ (9.20)
be its exponential generating function. Then
$A(t+u)\;=\;\sum_{n,n^{\prime}=0}^{\infty}a_{n+n^{\prime}}\>{t^{n}\over
n!}\,{u^{n^{\prime}}\over n^{\prime}!}\;.$ (9.21)
That is, $A(t+u)$ is the bivariate exponential generating function of the
Hankel matrix $H_{\infty}^{(0)}({\bm{a}})$.
Proof. An easy computation. $\square$
As an immediate consequence, we get:
###### Corollary 9.7.
Let $L=(\ell_{nk})_{n,k\geq 0}$ be a lower-triangular matrix with entries in a
commutative ring $R$ containing the rationals, let
$L_{k}(t)\;=\;\sum_{n=k}^{\infty}\ell_{nk}\,{t^{n}\over n!}$ (9.22)
be the exponential generating function of the $k$th column of $L$, and let
$D=\mathop{\rm diag}\nolimits(d_{0},d_{1},\ldots)$ be a diagonal matrix with
entries in $R$. Let ${\bm{a}}=(a_{n})_{n\geq 0}$ be a sequence in $R$, and let
$A(t)\;=\;\sum_{n=0}^{\infty}a_{n}\,{t^{n}\over n!}$ (9.23)
be its exponential generating function. Then $LDL^{\rm
T}=H_{\infty}^{(0)}({\bm{a}})$ if and only if
$A(t+u)\;=\;\sum\limits_{k=0}^{\infty}d_{k}\,L_{k}(t)\,L_{k}(u)\;.$ (9.24)
On the other hand, a converse to the factorization of Theorem 9.5(a) can be
proven. We recall that an element of a commutative ring $R$ is called regular
if it is neither zero nor a divisor of zero, and that a diagonal matrix is
called regular if all its diagonal elements are. We then have the following
result ([93], based on [80, Theorem 1] [110, Theorem 2.1]), which we state
here without proof:
###### Proposition 9.8.
Let $R$ be a commutative ring, let $L$ be a unit-lower-triangular matrix with
entries in $R$, let $D=\mathop{\rm diag}\nolimits(d_{0},d_{1},\ldots)$ be a
regular diagonal matrix with entries in $R$, and let ${\bm{a}}=(a_{n})_{n\geq
0}$ be a sequence in $R$. If $LDL^{\rm T}=H_{\infty}^{(0)}({\bm{a}})$, then
there exist sequences ${\bm{\beta}}=(\beta_{n})_{n\geq 1}$ and
${\bm{\gamma}}=(\gamma_{n})_{n\geq 0}$ in $R$ such that
$d_{n}=d_{0}\beta_{1}\cdots\beta_{n}$, $L={\sf J}({\bm{\beta}},{\bm{\gamma}})$
and ${\bm{a}}=d_{0}{\bm{J}}({\bm{\beta}},{\bm{\gamma}})$. In particular,
${\bm{a}}$ equals $d_{0}$ times the zeroth column of $L$.
Putting together Theorem 9.5, Corollary 9.7 and Proposition 9.8, we conclude
(compare [104, Theorem 53.1]):
###### Theorem 9.9 (Rogers’ addition formula).
The column exponential generating functions of the matrix of generalized
Jacobi–Rogers polynomials,
${\mathcal{J}}_{k}(t;{\bm{\beta}},{\bm{\gamma}})\;\stackrel{{\scriptstyle\rm
def}}{{=}}\;\sum_{n=k}^{\infty}J_{n,k}({\bm{\beta}},{\bm{\gamma}})\,{t^{n}\over
n!}\;,$ (9.25)
satisfy
${\mathcal{J}}_{0}(t+u;{\bm{\beta}},{\bm{\gamma}})\;=\;\sum\limits_{k=0}^{\infty}\beta_{1}\cdots\beta_{k}\,{\mathcal{J}}_{k}(t;{\bm{\beta}},{\bm{\gamma}})\,{\mathcal{J}}_{k}(u;{\bm{\beta}},{\bm{\gamma}})\;.$
(9.26)
And conversely, if $A(t)$ and $F_{0}(t),F_{1}(t),\ldots$ are formal power
series (with elements in a commutative ring $R$ containing the rationals)
satisfying
$A(t)\;=\;1+O(t)\,,\qquad F_{k}(t)\;=\;{t^{k}\over
k!}\,+\,\mu_{k}{t^{k+1}\over(k+1)!}\,+\,O(t^{k+2})$ (9.27)
and
$A(t+u)\;=\;\sum\limits_{k=0}^{\infty}\beta_{1}\cdots\beta_{k}\,F_{k}(t)\,F_{k}(u)$
(9.28)
for some regular elements ${\bm{\beta}}=(\beta_{k})_{k\geq 1}$, then
$A(t)=F_{0}(t)$ and $F_{k}(t)={\mathcal{J}}_{k}(t;{\bm{\beta}},{\bm{\gamma}})$
with the given ${\bm{\beta}}$ and with $\gamma_{k}=\mu_{k}-\mu_{k-1}$ (where
$\mu_{-1}\stackrel{{\scriptstyle\rm def}}{{=}}0$).
Here the formula for $\gamma_{k}$ follows from
$J_{k+1,k}=\sum_{i=0}^{k}\gamma_{i}$.
###### Example 9.10.
The secant numbers $E_{2n}$ (see [92] and the references cited therein for more information concerning the secant numbers and the closely-related tangent numbers) are defined by the exponential generating function
$\sec t\;=\;\sum_{n=0}^{\infty}E_{2n}\>{t^{2n}\over(2n)!}\;.$ (9.29)
More generally, the secant power polynomials $E_{2n}(x)$ are defined by the
exponential generating function
$(\sec t)^{x}\;=\;\sum_{n=0}^{\infty}E_{2n}(x)\>{t^{2n}\over(2n)!}\;.$ (9.30)
From the high-school angle-addition formula
$\cos(t+u)\;=\;(\cos t)(\cos u)\,-\,(\sin t)(\sin u)\;=\;(\cos t)(\cos u)\,[1\,-\,(\tan t)(\tan u)]$ (9.31)
we obtain
$[\sec(t+u)]^{x}\;=\;(\sec t)^{x}(\sec
u)^{x}\>\sum_{k=0}^{\infty}\binom{x+k-1}{k}\,(\tan t)^{k}\,(\tan u)^{k}\;,$
(9.32)
which is of the form (9.27)/(9.28) with
$\beta_{k}\>=\;k(x+k-1)\,,\qquad F_{k}(t)\>=\>{(\sec t)^{x}\,(\tan t)^{k}\over
k!}\>=\>{t^{k}\over k!}\,+\,O(t^{k+2})\;,$ (9.33)
so that $\mu_{k}=0$ and hence $\gamma_{k}=0$. Theorem 9.9 then implies that
the ordinary generating function of the secant power polynomials is given by
the J-fraction
$\sum_{n=0}^{\infty}E_{2n}(x)\,t^{2n}\;=\;\cfrac{1}{1-\cfrac{1\cdot
xt^{2}}{1-\cfrac{2(x+1)t^{2}}{1-\cfrac{3(x+2)t^{2}}{1-\cdots}}}}\;.$ (9.34)
After renaming $t^{2}\to t$, this is actually an S-fraction with coefficients
$\alpha_{n}=n(x+n-1)$. This example is due to Stieltjes [95] and Rogers [89].
$\blacksquare$
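For $x=1$ this prediction is easy to test with the expansion algorithm of Section 6: running the sketch primitive_15 from above with all $M_{k}=0$ (so that it reduces to the regular C-fraction algorithm) on the secant numbers should return $\alpha_{n}=n^{2}$:

```python
# Secant numbers E_0, E_2, E_4, ... from (9.29)
E = [1, 1, 5, 61, 1385, 50521, 2702765]
a0, levels = primitive_15(E, [0] * 6)            # all M_k = 0: regular C-fraction
print([int(alph) for (_, _, alph) in levels])    # [1, 4, 9, 16, 25, 36] = n^2
```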
###### Example 9.11.
Let us use Rogers’ addition formula to give a second proof of Euler’s
continued fraction (1.2) for the sequence of rising powers
$(a^{\overline{n}})_{n\geq 0}$. This sequence has the exponential generating
function
$\sum_{n=0}^{\infty}a^{\overline{n}}\>{t^{n}\over
n!}\;=\;\sum_{n=0}^{\infty}\binom{a+n-1}{n}\,t^{n}\;=\;(1-t)^{-a}\;,$ (9.35)
which satisfies the addition formula
$(1-t-u)^{-a}\;=\;(1-t)^{-a}\,(1-u)^{-a}\,\Bigl{[}1\,-\,{tu\over(1-t)(1-u)}\Bigr{]}^{-a}\;=\;(1-t)^{-a}\,(1-u)^{-a}\sum_{k=0}^{\infty}\binom{a+k-1}{k}\,\Bigl{(}{t\over 1-t}\Bigr{)}^{\\!k}\,\Bigl{(}{u\over 1-u}\Bigr{)}^{\\!k}\;.$ (9.36)
This expansion is of the form (9.27)/(9.28) with $\beta_{k}=k(a+k-1)$ and
$F_{k}(t)\;=\;(1-t)^{-a}\,\Bigl{(}{t\over 1-t}\Bigr{)}^{\\!k}\,{1\over
k!}\;=\;{t^{k}\over k!}\,+\,(k+1)(k+a){t^{k+1}\over(k+1)!}\,+\,O(t^{k+2})\;,$
(9.37)
so that $\mu_{k}=(k+1)(k+a)$ and hence $\gamma_{k}=2k+a$. Moreover, the
J-fraction (9.2) with $\beta_{k}=k(a+k-1)$ and $\gamma_{k}=2k+a$ is connected
by the contraction formula [104, p. 21] [101, p. V-31]
$\gamma_{0}=\alpha_{1}\,,\qquad\gamma_{n}=\alpha_{2n}+\alpha_{2n+1}\quad\hbox{for $n\geq 1$}\,,\qquad\beta_{n}=\alpha_{2n-1}\alpha_{2n}$ (9.38)
with the S-fraction having $\alpha_{2k-1}=a+k-1$ and $\alpha_{2k}=k$. This
completes the proof of (1.2).
We also see from this proof that the generalized Jacobi–Rogers polynomials for
the J-fraction with $\beta_{k}=k(a+k-1)$ and $\gamma_{k}=2k+a$ are
$J_{n,k}\;\stackrel{{\scriptstyle\rm def}}{{=}}\;\Bigl{[}{t^{n}\over n!}\Bigr{]}\,F_{k}(t)\;=\;{n!\over k!}\>[t^{n}]\,(1-t)^{-a}\,\Bigl{(}{t\over 1-t}\Bigr{)}^{\\!k}\;=\;{n!\over k!}\>[t^{n-k}]\,(1-t)^{-(a+k)}\;=\;{n!\over k!}\,\binom{a+n-1}{n-k}\;=\;\binom{n}{k}\,(a+k)^{\overline{n-k}}\;.$ (9.39)
These also coincide with the generalized Stieltjes–Rogers polynomials of the
first kind $S_{n,k}$ for the corresponding S-fraction (1.2), since the
contraction formula (9.38) corresponds combinatorially [101, p. V-31] to
grouping pairs of steps of the Dyck path to create a Motzkin path living on
even heights. Then (9.39) agrees with the formula found earlier for the even-index series $g_{2j}$ associated to (1.2), in view of the even case of (9.10). $\blacksquare$
See [104, pp. 203–207] [60] for further discussion of Rogers’ addition formula
and its applications to the derivation of continued fractions.
Historical remarks. The generalized Stieltjes–Rogers polynomials $S_{n,k}$ and
$S^{\prime}_{n,k}$ were introduced by Stieltjes [95] in 1889 (his notation is
$\alpha_{k,n}$ and $\beta_{k,n}$): he defined them by the recurrences (9.6).
He then proved the factorizations in Theorem 9.5(b,c) by considering the
quadratic forms associated to the symmetric matrices ${\sf S}D{\sf S}^{\rm T}$
and ${\sf S}^{\prime}D^{\prime}({\sf S}^{\prime})^{\rm T}$: he used the
recurrence to prove that the matrix ${\sf S}D{\sf S}^{\rm T}$ is Hankel, i.e.
is $H_{\infty}^{(0)}({\bm{b}})$ for some sequence ${\bm{b}}=(b_{n})_{n\geq
0}$; then, using the previously known formula (for which he cited Frobenius
and Stickelberger [48, 47]) relating the coefficients
${\bm{\alpha}}=(\alpha_{n})_{n\geq 1}$ in an S-fraction to the Hankel
determinants of the power-series coefficients ${\bm{a}}=(a_{n})_{n\geq 0}$, he
concluded that ${\bm{a}}={\bm{b}}$. Stieltjes went on to use this matrix-
decomposition method to determine several explicit continued fractions related
to trigonometric functions and Jacobian elliptic functions. See also the
summary of this work given in Stieltjes’ 1894 memoir [96, pp. J.18–J.19],
where the matrix factorizations are made explicit.
The reformulation of Stieltjes’ factorization as an addition formula is due to
Rogers [89] in 1907.
The interpretation of $J_{n,k}$, $S_{n,k}$ and $S^{\prime}_{n,k}$ in terms of
partial Motzkin and Dyck paths is post-Flajolet folklore; it goes back at
least to [60, Theorem 2.1 and Remark 2.2]. $\blacksquare$
## 10 Timing tests
How do the primitive algorithm (2.3) and the refined algorithm (2.7)/(2.11)
compare in computational efficiency?
Numerical timing experiments for the continued fractions (1.1) and (1.2) are
reported in Table 1/Figure 2 and Table 2/Figure 3, respectively. The
computations were carried out in Mathematica version 11.1 under Linux on a
machine with an Intel Xeon W-2133 CPU running at 3.60 GHz. The primitive
algorithm was programmed in both recursive and iterative forms; the timings
for the two versions were essentially identical.
For the numerical series $a_{n}=n!$, the CPU time for the primitive algorithm behaves roughly like $N^{\approx 2}$ for the smaller values of $N$, rising gradually to $N^{\approx 4.6}$ for $1000\lesssim N\lesssim 3000$. The CPU time for the refined algorithm behaves roughly like $N^{\approx 2}$ over the range $N\lesssim 2000$, rising gradually to $N^{\approx 2.8}$ for $N\approx 9000$. (Our computation for $N=10000$ required more memory than the available 256 GB, which led to paging and an erratic timing; we have therefore suppressed this data point as unreliable.) This latter behavior is consistent with the theoretically expected (but not yet reached) asymptotic CPU time of order $N^{3}\log^{2}N$, arising as $\sim N^{2}$ field operations in (2.7)/(2.11) times a CPU time of order $N\log^{2}N$ per operation: here the operations are subtraction of numbers of magnitude roughly $N!$ (hence with $\sim N\log N$ digits) and their division by the integers $\alpha_{k}$ of order $N$ (hence with $\sim\log N$ digits). The advantage of the refined algorithm grows from a factor $\approx 6$ at $N=200$ to $\approx 500$ at $N=3000$.
For the polynomial series $a_{n}=a^{\overline{n}}$, the CPU time for the primitive algorithm behaves roughly like $N^{\approx 3.3}$ for $5\lesssim N\lesssim 30$, bending suddenly at $N=30$ to a much more rapid growth $N^{\approx 10}$ [see Figure 3(a)]. However, another possible interpretation is that the behavior is exponential in $N$ [see Figure 3(b)]. The CPU time for the refined algorithm, by contrast, behaves like $N^{\approx 3}$ over the whole range $5\leq N\leq 1000$, with a slightly lower power ($\approx 2.7$) at the smallest values of $N$ and a slightly higher power ($\approx 3.1$) at the largest. I am not sure what the expected asymptotic behavior should be for either algorithm. The advantage of the refined algorithm grows from a factor $\approx 1.2$ at $N=10$ to $\approx 3$ at $N=30$ and $\approx 10000$ at $N=80$.
$N$ | Primitive algorithm | Refined algorithm | Ratio
---|---|---|---
100 | 0.20 | 0.15 | 1.33
200 | 0.87 | 0.14 | 6.32
300 | 2.20 | 0.29 | 7.47
400 | 4.87 | 0.51 | 9.53
500 | 9.41 | 0.79 | 11.86
600 | 17.32 | 1.15 | 15.06
700 | 30.26 | 1.58 | 19.17
800 | 51.10 | 2.09 | 24.44
900 | 83.48 | 2.69 | 31.07
1000 | 131.90 | 3.25 | 40.63
1100 | 200.71 | 4.14 | 48.46
1200 | 297.45 | 5.10 | 58.38
1300 | 429.43 | 6.21 | 69.18
1400 | 606.35 | 7.20 | 84.20
1500 | 840.25 | 8.75 | 95.99
1600 | 1128.79 | 9.54 | 118.28
1700 | 1490.64 | 11.00 | 135.50
1800 | 1947.84 | 12.59 | 154.68
1900 | 2505.78 | 14.40 | 174.06
2000 | 3176.93 | 15.74 | 201.85
3000 | 20896.0 | 43.85 | 476.52
4000 | | 94.49 |
5000 | | 170.51 |
6000 | | 277.10 |
7000 | | 420.58 |
8000 | | 604.25 |
9000 | | 835.81 |
Table 1: Timings (in seconds) for the primitive and refined algorithms applied to the numerical series (1.1).
Figure 2: Timings (in seconds) for the primitive algorithm (upper curve) and refined algorithm (lower curve) applied to the numerical series (1.1).
$N$ | Primitive algorithm | Refined algorithm | Ratio
---|---|---|---
10 | 0.02 | 0.02 | 1.21
15 | 0.08 | 0.06 | 1.46
20 | 0.27 | 0.12 | 2.25
25 | 0.50 | 0.21 | 2.40
30 | 1.04 | 0.36 | 2.85
35 | 3.15 | 0.56 | 5.64
40 | 16.13 | 0.77 | 21.07
45 | 57.23 | 1.04 | 55.14
50 | 139.52 | 1.41 | 98.66
55 | 283.39 | 1.72 | 164.86
60 | 505.61 | 2.15 | 234.67
65 | 1029.79 | 2.90 | 355.29
70 | 5390.53 | 3.44 | 1567.81
75 | 20714.2 | 4.23 | 4893.62
80 | 54919.5 | 4.75 | 11560.1
90 | | 6.35 |
100 | | 8.60 |
110 | | 10.79 |
120 | | 13.52 |
130 | | 16.54 |
140 | | 19.97 |
150 | | 24.06 |
160 | | 28.42 |
170 | | 33.76 |
180 | | 39.46 |
190 | | 45.91 |
200 | | 52.23 |
300 | | 158.25 |
400 | | 360.65 |
500 | | 691.27 |
600 | | 1184.81 |
700 | | 1910.57 |
800 | | 2909.85 |
900 | | 4244.91 |
1000 | | 5960.16 |
Table 2: Timings (in seconds) for the primitive and refined algorithms
applied to the polynomial series (1.2).
Figure 3: Timings (in seconds) for the primitive algorithm (upper curve) and
refined algorithm (lower curve) applied to the polynomial series (1.2): log-
log plot in (a), linear-log plot for the primitive algorithm in (b).
Some remarks. 1\. When the primitive algorithm is programmed recursively in Mathematica, it is necessary to set $RecursionLimit to a large enough value (or to Infinity) in order to avoid incomplete execution.
2\. Because of quirks in Mathematica's treatment of power series with symbolic coefficients, the primitive algorithm (in either version) applied to (1.2) becomes exceedingly slow for $N\gtrsim 10$ if the basic step is programmed simply as f[k] = (1 - 1/f[k-1])/(alpha[k]*t). Instead, it is necessary to write f[k] = Map[Together, (1 - 1/f[k-1])/(alpha[k]*t)] in order to force the simplification of rational-function expressions to polynomials. I thank Daniel Lichtblau for this crucial suggestion. The results reported in Table 2 and Figure 3 refer to this latter version of the program.
3\. The timings reported here were obtained using Mathematica’s command
Timing, which under this operating system apparently includes the total CPU
time in all threads. The real time elapsed was in some instances up to a
factor $\approx 2$ smaller than this, due to partially parallel execution on
this multi-core CPU.
4\. One might wonder: Why on earth would one want to compute 1000 or more
continued-fraction coefficients? One answer (perhaps not the only one) is that
the nonnegativity of the S-fraction coefficients $\alpha_{n}$ is a necessary
and sufficient condition for a sequence ${\bm{a}}=(a_{n})_{n\geq 0}$ of real
numbers to be a Stieltjes moment sequence, i.e. the moments of a positive
measure on $[0,\infty)$; this was shown by Stieltjes [96] in 1894. On the
other hand, it is easy to concoct sequences that are not Stieltjes moment
sequences but which have $\alpha_{n}>0$ until very high order. Consider, for
instance, the sequence (a closely related form of $a_{n}$ was suggested to me by Andrew Elvey Price [37])
$a_{n}\;\stackrel{{\scriptstyle\rm def}}{{=}}\;(1+\epsilon)\,n!\,-\,{\epsilon\over(n+1)^{2}}\;=\;\int\limits_{0}^{\infty}x^{n}\>\Bigl{[}(1+\epsilon)\,e^{-x}\,+\,\epsilon\,(\log x)\,I[0\leq x\leq 1]\Bigr{]}\>dx\;,$ (10.1)
which fails to be a Stieltjes moment sequence whenever $\epsilon>0$, because the density is negative near $x=0$ (apply [92, Corollary 2.10]). For $\epsilon=1$, the first negative coefficient $\alpha_{n}$ occurs at $n=6$; for $\epsilon=1/2$ it occurs at $n=20$; for $\epsilon=1/4$ at $n=178$; and for $\epsilon=1/8$ at some (to me unknown) $n>1500$. So it can be important to
compute S-fraction coefficients to very high order when trying to determine
empirically whether a given sequence is or is not a Stieltjes moment sequence.
$\blacksquare$
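To illustrate how such a computation might be organized (a sketch only, reusing primitive_15 from the Section 6 sketch with all $M_{k}=0$; a serious computation would use the refined algorithm and far more terms), one can search for the first negative $\alpha_{n}$ in exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial    # primitive_15 as in the Section 6 sketch

def first_negative_alpha(eps, N):
    """First n with alpha_n < 0 for the sequence (10.1), via the
    S-fraction expansion of a_0, ..., a_N (None if none is found)."""
    a = [(1 + eps) * factorial(n) - eps / Fraction((n + 1) ** 2)
         for n in range(N + 1)]
    _, levels = primitive_15(a, [0] * N)
    for n, (_, _, alph) in enumerate(levels, start=1):
        if alph < 0:
            return n
    return None

print(first_negative_alpha(Fraction(1), 12))      # 6, as stated above
print(first_negative_alpha(Fraction(1, 2), 30))   # 20
```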
## 11 Final remarks
The algorithm presented here is intended, in the first instance, for use in
exact arithmetic: the field $F$ could be (for example) the field
${\mathbb{Q}}$ of rational numbers, or more generally the field
${\mathbb{Q}}(x_{1},\ldots,x_{n})$ of rational fractions in indeterminates
$x_{1},\ldots,x_{n}$ with coefficients in ${\mathbb{Q}}$. I leave it to others
to analyze the numerical (in)stability of this algorithm when carried out in
$F={\mathbb{R}}$ or ${\mathbb{C}}$ with finite-precision arithmetic, and/or to
devise alternative algorithms with improved numerical stability.
The continued fractions discussed here are what could be called classical
continued fractions. Very recently combinatorialists have developed a theory
of branched continued fractions, based on generalizing Flajolet’s master
theorem (Theorem 8.1) to other classes of lattice paths. This idea was
suggested by Viennot [101, section V.6], carried forward in the Ph.D. theses
of Roblet [86] and Varvak [100], and then comprehensively developed by
Pétréolle, Sokal and Zhu [83, 82]. There is a corresponding generalization of
the Euler–Gauss recurrence method: for instance, for the $m$-S-fractions,
which generalize the regular C-fractions, the recurrence (9.9) is generalized
to
$g_{k}(t)-g_{k-1}(t)\;=\;\alpha_{k+m}t\,g_{k+m}(t)\qquad\hbox{for }k\geq 0$
(11.1)
for a fixed integer $m\geq 1$. Furthermore, Gauss’ [51] continued fraction for
the ratio of contiguous hypergeometric functions
${{\tensor[_{2\\!}]{F}{{}_{1}}\\!}}$ can be generalized to
${\tensor[_{r\\!}]{F}{{}_{s}}\\!}$ for arbitrary $r,s$, where now
$m=\max(r-1,s)$; the proof is based on (11.1). See [83] for details on all of
this, and [82] for further applications. On the other hand, branched continued
fractions are highly nonunique, and I do not know any algorithm for computing
them.
## Appendix
Answer to the exercise posed in Section 4:
$g_{2j-1}(t)=\sum_{n=0}^{\infty}\sum_{k=0}^{n}\genfrac{\\{}{\\}}{0.0pt}{}{n+j}{k+j}\,\binom{k+j-1}{k}\,x^{k}y^{n-k}\,t^{n}\,,\qquad g_{2j}(t)=\sum_{n=0}^{\infty}\sum_{k=0}^{n}\genfrac{\\{}{\\}}{0.0pt}{}{n+j}{k+j}\,\binom{k+j}{k}\,x^{k}y^{n-k}\,t^{n}$ (A.1)
## Acknowledgments
I wish to thank Gaurav Bhatnagar, Bishal Deb, Bill Jones, Xavier Viennot and
Jiang Zeng for helpful conversations and/or correspondence. I am especially
grateful to Gaurav Bhatnagar for reemphasizing to me the power and elegance of
the Euler–Gauss recurrence method, and for drawing my attention to Askey’s
masterful survey [9] as well as to his own wonderful survey article [17]. I
also thank Daniel Lichtblau for help with Mathematica.
This research was supported in part by the U.K. Engineering and Physical
Sciences Research Council grant EP/N025636/1.
## References
* [1] M. Aigner, Catalan and other numbers: a recurrent theme, in: Algebraic Combinatorics and Computer Science, edited by H. Crapo and D. Senato (Springer-Verlag Italia, Milan, 2001), pp. 347–390.
* [2] G.E. Andrews, The Theory of Partitions (Addison-Wesley, Reading MA, 1976). Reprinted with a new preface by Cambridge University Press, Cambridge, 1998.
* [3] G.E. Andrews, Euler’s pentagonal number theorem, Math. Mag. 56, 279–284 (1983).
* [4] G.E. Andrews, On the proofs of the Rogers–Ramanujan identities, in $q$-Series and Partitions, IMA Volumes in Mathematics and its Applications #18, edited by D. Stanton (Springer, New York, 1989), pp. 1–14.
* [5] G.E. Andrews, Ramanujan’s “lost” notebook. VIII. The entire Rogers–Ramanujan function, Adv. Math. 191, 393–407 (2005).
* [6] G.E. Andrews, Ramanujan’s “lost” notebook. IX. The partial theta function as an entire function, Adv. Math. 191, 408–422 (2005).
* [7] G.E. Andrews and B.C. Berndt, Ramanujan’s Lost Notebook, Part I (Springer-Verlag, New York, 2005).
* [8] G.E. Andrews and B.C. Berndt, Ramanujan’s Lost Notebook, Part II (Springer-Verlag, New York, 2009).
* [9] R. Askey, Ramanujan and hypergeometric and basic hypergeometric series, in Ramanujan International Symposium on Analysis (Pune, 1987), edited by N.K. Thakare, K.C. Sharma and T.T. Raghunathan (Macmillan India, New Delhi, 1989), pp. 1–83; reprinted in Russian Math. Surveys 45(1), 37–86 (1990), and in Ramanujan: Essays and Surveys, edited by B.C. Berndt and R.A. Rankin (American Mathematical Society, Providence, RI, 2001), pp. 277–324.
* [10] M.F. Atiyah, Resolution of singularities and division of distributions, Comm. Pure Appl. Math. 23, 145–150 (1970).
* [11] G.A. Baker, Jr. and P. Graves-Morris, Padé Approximants, 2nd ed., Encyclopedia of Mathematics and its Applications #59 (Cambridge University Press, Cambridge, 1996).
* [12] E.J. Barbeau, Euler subdues a very obstreperous series, Amer. Math. Monthly 86, 356–372 (1979).
* [13] E.J. Barbeau and P.J. Leah, Euler’s 1760 paper on divergent series, Historia Math. 3, 141–160 (1976); errata and additions 5, 332 (1978).
* [14] B.C. Berndt, Ramanujan’s Notebooks, Part III (Springer-Verlag, New York, 1991).
* [15] I.N. Bernšteĭn, The analytic continuation of generalized functions with respect to a parameter, Funkcional. Anal. i Priložen. 6(4), 26–40 (1972) [= Funct. Anal. Appl. 6, 273–285 (1972)].
* [16] I.N. Bernšteĭn and S.I. Gel’fand, Meromorphy of the function $P^{\lambda}$, Funkcional. Anal. i Priložen. 3(1), 84–85 (1969) [= Funct. Anal. Appl. 3, 68–69 (1969)].
* [17] G. Bhatnagar, How to prove Ramanujan’s $q$-continued fractions, in Ramanujan 125, Contemporary Mathematics #627, edited by K. Alladi, F. Garvan and A.J. Yee (American Math. Soc., Providence RI, 2014), pp. 49–68.
* [18] G. Bhatnagar, How to discover the Rogers–Ramunujan identities, Resonance 20(5), 416–430 (2015).
* [19] G. Bhatnagar, Ramanujan’s $q$-continued fractions, preprint (August 2022), arXiv:2208.12656 [math.CA] at arXiv.org.
* [20] J.-E. Björk, Rings of Differential Operators (North-Holland, Amsterdam–Oxford–New York, 1979).
* [21] N. Bourbaki, Algebra II (Springer-Verlag, Berlin–Heidelberg–New York, 1990).
* [22] C. Brezinski, History of Continued Fractions and Padé Approximants, Springer Series in Computational Mathematics #12 (Springer-Verlag, Berlin, 1991).
* [23] T.S. Chihara, An Introduction to Orthogonal Polynomials (Gordon and Breach, New York–London–Paris, 1978). Reprinted by Dover, Mineola NY, 2011.
holomorphic structure $\partial^{\nabla}$ on $\overline{\Sigma}$. By [7],
this assumption does not hold for all irreducible flat ${\rm
SL}(n,{\mathbb{C}})$-connections. Under this assumption, we obtain a section
$s\,=\,s^{\nabla}$ as follows. If $\overline{\partial}^{\nabla}$ is stable, then
$\lambda\,\longmapsto\,(\lambda,\,\overline{\partial}^{\nabla},\,\lambda\partial^{\nabla})$
is an irreducible section over ${\mathbb{C}}\,\subset\,{\mathbb{C}}P^{1}$. If
$\overline{\partial}^{\nabla}$ is unstable, we consider its destabilizing
subbundle $L\,\subset\,V$ of positive degree. The connection induces a
nilpotent Higgs field $\Phi$ on the holomorphic vector bundle
$L\oplus(V/L)\,=\,L\oplus L^{*}$ via
$\Phi\,=\,\pi^{V/L}\circ\nabla_{\mid L}.$
This is a special case of [45] and can be interpreted from a gauge theoretic
point of view (see also [4, $\S~{}4$] for details): Consider a complementary
bundle $\widetilde{L}\,\subset\,V$ of $L$, and the family of gauge-
transformations
$g(\lambda)\,=\,\begin{pmatrix}1&0\\\ 0&\lambda\end{pmatrix}.$
The family
$\lambda\,\longmapsto\,(\lambda,\,\overline{\partial}^{\nabla.g(\lambda)},\,\partial^{\nabla.g(\lambda)})$
extends to an irreducible (stable) Higgs pair at $\lambda\,=\,0$ which
identifies with $(L\oplus L^{*},\,\Phi)$.
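To make the limiting Higgs pair explicit, here is a minimal sketch of the gauge computation, carried out in a fixed smooth splitting $V\,\cong\,L\oplus V/L$ (the block notation below is ours and is meant only as an illustration). Since $L$ is a holomorphic subbundle, we may write
$\overline{\partial}^{\nabla}\,=\,\begin{pmatrix}\overline{\partial}_{L}&\alpha\\\ 0&\overline{\partial}_{V/L}\end{pmatrix},\qquad\partial^{\nabla}\,=\,\begin{pmatrix}\partial_{L}&\gamma\\\ \Phi&\partial_{V/L}\end{pmatrix},$
with $\Phi\,=\,\pi^{V/L}\circ\nabla_{\mid L}$ as above. As $g(\lambda)$ is constant on $\Sigma$, the gauge action is plain conjugation, which rescales upper-right blocks by $\lambda$ and lower-left blocks by $\lambda^{-1}$:
$\overline{\partial}^{\nabla.g(\lambda)}\,=\,\begin{pmatrix}\overline{\partial}_{L}&\lambda\alpha\\\ 0&\overline{\partial}_{V/L}\end{pmatrix},\qquad\lambda\partial^{\nabla.g(\lambda)}\,=\,\begin{pmatrix}\lambda\partial_{L}&\lambda^{2}\gamma\\\ \Phi&\lambda\partial_{V/L}\end{pmatrix}.$
At $\lambda\,=\,0$ only the direct-sum holomorphic structure on $L\oplus L^{*}$ and the nilpotent Higgs field $\Phi$ survive, which is the claimed limit.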
## 4\. Energy functional on sections of the Deligne–Hitchin moduli space
### 4.1. The energy as a moment map
It was proven in [4, Corollary 3.11] that the energy of an irreducible section
$s$ with lift $\widehat{s}$ as in (3.23) is given by
$\mathcal{E}(s)\,=\,\tfrac{1}{2\pi\mathsf{i}}\int_{\Sigma}\mathrm{tr}(\Phi\wedge\Psi).$
(4.1)
In particular, this integral is independent of the lift $\widehat{s}$. The
reader should be aware of the different prefactors in (4.1) and in (3.11). In
particular, if we think of $\mathcal{E}$ as the energy of a harmonic map, it
should be real-valued, while we want a moment map for the $S^{1}$-action to be
$\mathsf{i}{\mathbb{R}}$-valued. Working with the prefactor
$\tfrac{1}{2\pi\mathsf{i}}$ also has the advantage that we get fewer factors
of $2\pi\mathsf{i}$ in the statements of the results below.
###### Remark 4.1.
As pointed out in [6, Remark 2.3], the energy in the present example is
defined for all local sections around $\lambda\,=\,0$ which admit a lift as in
(3.23).
Let us write again
$\mathcal{S}^{\prime}\,=\,\mathcal{S}_{\mathcal{M}_{\mathrm{DH}}}^{\prime}$
for the space of _irreducible_ sections whose normal bundle is isomorphic to
$\mathcal{O}_{{\mathbb{C}}P^{1}}(1)^{\oplus 2d}$. Take any
$s\,\in\,\mathcal{S}^{\prime}$. In terms of lifts of sections, a tangent
vector $V\in T_{s}\mathcal{S}^{\prime}$ is expressed as follows. Let
$\widehat{s}$ be a lift of $s$ as in (3.23), and denote the curvature of the
connection $\partial+\overline{\partial}$ by
$F^{\partial+\overline{\partial}}\,=\,\overline{\partial}\partial+\partial\overline{\partial}$.
Expanding the integrability condition
$\overline{\partial}(\lambda)D(\lambda)+D(\lambda)\overline{\partial}(\lambda)\,=\,0$
(4.2)
in powers of $\lambda$, the zeroth and first order coefficients yield
$\displaystyle\overline{\partial}\Phi$ $\displaystyle\,=\,0$ (4.3)
$\displaystyle F^{\partial+\overline{\partial}}+[\Phi\wedge\Psi]$
$\displaystyle\,=\,0\,.$ (4.4)
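For concreteness, the expansion can be spelled out as follows; this is a sketch assuming, as the expansions (4.10)–(4.11) below indicate, that the lift (3.23) has the form $\overline{\partial}(\lambda)\,=\,\overline{\partial}+\lambda\Psi+O(\lambda^{2})$ and $D(\lambda)\,=\,\Phi+\lambda\partial+O(\lambda^{2})$. Then
$\overline{\partial}(\lambda)D(\lambda)+D(\lambda)\overline{\partial}(\lambda)\,=\,\overline{\partial}\Phi\,+\,\lambda\left(F^{\partial+\overline{\partial}}+[\Phi\wedge\Psi]\right)\,+\,O(\lambda^{2}),$
where the $\lambda^{0}$-term is the anticommutator of $\overline{\partial}$ and $\Phi$, i.e. the $(0,1)$-derivative of the form $\Phi$, and the $\lambda^{1}$-term collects $\overline{\partial}\partial+\partial\overline{\partial}\,=\,F^{\partial+\overline{\partial}}$ together with the anticommutator of $\Psi$ and $\Phi$. Equating the coefficients to zero gives (4.3) and (4.4).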
Consider a family of sections
$(s_{t}\,\in\,{\mathcal{S}}_{\mathcal{M}_{\mathrm{DH}}})_{t}$ with
$s\,=\,s_{0}$ which represents $V\,\in\,T_{s}\mathcal{S}^{\prime}$. The
corresponding (lifted) infinitesimal variation
$\dot{\widehat{s}}\,=\,(\dot{\overline{\partial}}(\lambda),\,\dot{D}(\lambda),\,\lambda)$
satisfies the linearisation of (4.2), i.e.,
$\overline{\partial}(\lambda)(\dot{D}(\lambda))+D(\lambda)(\dot{\overline{\partial}}(\lambda))\,=\,0.$
(4.5)
Expanding $\dot{\widehat{s}}$ into a power series
$\dot{\widehat{s}}(\lambda)\,=\,\left(\sum_{k=0}^{\infty}\psi_{k}\lambda^{k},\,\sum_{k=0}^{\infty}\varphi_{k}\lambda^{k},\,\lambda\right)\,,$
(4.6)
for
$\varphi_{k}\,\in\,\Omega^{1,0}(\mathfrak{sl}(E)),\,\psi_{k}\,\in\,\Omega^{0,1}(\mathfrak{sl}(E))$,
the linearisations of (4.3) and (4.4) become
$\displaystyle\overline{\partial}\varphi_{0}+[\psi_{0}\wedge\Phi]$
$\displaystyle\,=\,0$ (4.7)
$\displaystyle\overline{\partial}\varphi_{1}+\partial\psi_{0}+[\varphi_{0}\wedge\Psi]+[\Phi\wedge\psi_{1}]$
$\displaystyle\,=\,0\,.$ (4.8)
Variations along the gauge orbit of $\widehat{s}$ are determined by
infinitesimal gauge transformations
${\mathbb{C}}\,\ni\,\lambda\,\longmapsto\,\xi(\lambda)\,\in\,\Gamma(\Sigma,\,\operatorname{\mathfrak{sl}}(E))$
and are of the form
$(\overline{\partial}(\lambda)\xi(\lambda),\,D(\lambda)\xi(\lambda),\,\lambda).$
(4.9)
By expanding $\xi(\lambda)\,=\,\sum_{k=0}^{\infty}\xi_{k}\lambda^{k}$, we get
with (4.9) and (3.23)
$\displaystyle\overline{\partial}(\lambda)\xi(\lambda)$
$\displaystyle\,=\,\overline{\partial}\xi_{0}+(\overline{\partial}\xi_{1}+[\Psi,\,\xi_{0}])\lambda+O(\lambda^{2})$
(4.10) $\displaystyle D(\lambda)\xi(\lambda)$
$\displaystyle\,=\,[\Phi,\,\xi_{0}]+(\partial\xi_{0}+[\Phi,\xi_{1}])\lambda+O(\lambda^{2})\,.$
(4.11)
Now let $s\,\in\,\mathcal{S}^{\prime}$ with lift $\widehat{s}$ over
${\mathbb{C}}$, and consider $V_{j}\,\in\,T_{s}\mathcal{S}^{\prime}$,
$j\,=\,1,\,2$, represented by
$\dot{\widehat{s}}_{j}\,=\,(\dot{\overline{\partial}}_{j}(\lambda),\,\dot{D}_{j}(\lambda),\,\lambda)\,=\,(\psi^{(j)}_{0}+\psi^{(j)}_{1}\lambda,\,\varphi_{0}^{(j)}+\varphi_{1}^{(j)}\lambda,\lambda)+O(\lambda^{2}).$
(4.12)
Then we define, recalling the definition of $\omega_{\lambda}$ given in
(3.22),
$\displaystyle\widehat{\Omega}_{\widehat{s}}(V_{1},\,V_{2})$
$\displaystyle\,=\,-\tfrac{\mathsf{i}}{2}\tfrac{\partial}{\partial\lambda}_{|\lambda=0}\omega_{\lambda}(V_{1}(\lambda),\,V_{2}(\lambda))$
(4.13)
$\displaystyle=-\tfrac{\mathsf{i}}{2}\tfrac{\partial}{\partial\lambda}_{|\lambda=0}2\mathsf{i}\int_{\Sigma}\mathrm{tr}\left(-\dot{D}_{1}(\lambda)\wedge\dot{\overline{\partial}}_{2}(\lambda)+\dot{D}_{2}(\lambda)\wedge\dot{\overline{\partial}}_{1}(\lambda)\right)$
(4.14)
$\displaystyle=\,\int_{\Sigma}\mathrm{tr}\left(-\varphi_{0}^{(1)}\wedge\psi_{1}^{(2)}+\varphi_{0}^{(2)}\wedge\psi_{1}^{(1)}-\varphi_{1}^{(1)}\wedge\psi_{0}^{(2)}+\varphi_{1}^{(2)}\wedge\psi_{0}^{(1)}\right).$
(4.15)
We view $\widehat{\Omega}$ as a two-form on the infinite-dimensional space of
germs of sections of $\varpi$ at $\lambda\,=\,0$. Note that the formula for
$\widehat{\Omega}$ is exactly (1.32) in the present context.
###### Proposition 4.2.
The two-form $\widehat{\Omega}$ descends to a holomorphic two-form on the
space of irreducible sections, which on
$\mathcal{S}_{\mathcal{M}_{\mathrm{DH}}}^{\prime}$ coincides with the
holomorphic symplectic form $\Omega_{0}$ defined in (1.32).
###### Proof.
We will show that $\widehat{\Omega}_{\widehat{s}}$ is degenerate along the
gauge orbits. To this end, let $\widehat{s}$ be a germ of a section near
$\lambda\,=\,0$, and let
$\xi(\lambda)\,=\,\sum_{k=0}^{\infty}\xi_{k}\lambda^{k}$ be an infinitesimal
gauge transformation. The corresponding tangent vector $V_{1}$ is represented
by
$\dot{\widehat{s}}_{1}\,=\,(\overline{\partial}(\lambda)\xi(\lambda),\,D(\lambda)\xi(\lambda),\,\lambda).$
Then for an arbitrary tangent vector $V_{2}$ represented by
$\dot{\widehat{s}}_{2}\,=\,(\dot{\overline{\partial}}(\lambda),\,\dot{D}(\lambda),\,\lambda)$,
we find
$\displaystyle\widehat{\Omega}_{\widehat{s}}(V_{1},\,V_{2})$
$\displaystyle\,=\,\tfrac{\partial}{\partial\lambda}_{|\lambda=0}\int_{\Sigma}\mathrm{tr}\left(-\dot{D}(\lambda)\wedge\overline{\partial}(\lambda)\xi(\lambda)+D(\lambda)\xi(\lambda)\wedge\dot{\overline{\partial}}(\lambda)\right)$
(4.16) (Stokes)
$\displaystyle\,=\,\tfrac{\partial}{\partial\lambda}_{|\lambda=0}\int_{\Sigma}\mathrm{tr}\left(\left(\overline{\partial}(\lambda)(\dot{D}(\lambda))+D(\lambda)(\dot{\overline{\partial}}(\lambda))\right)\xi(\lambda)\right)$
(4.17) $\displaystyle\,=\,0;$ (4.18)
we used (4.5). This shows that $\widehat{\Omega}$ descends to
$\mathcal{S}^{\prime}$. ∎
Theorem 2.3 thus allows us to make the following conclusion.
###### Corollary 4.3.
The restriction of
$2\pi\mathsf{i}\mathcal{E}\,:\,\mathcal{S}^{\prime}_{\mathcal{M}_{\mathrm{DH}}}\,\longrightarrow\,{\mathbb{C}}$
is a holomorphic moment map for the natural ${\mathbb{C}}^{*}$-action on
$\mathcal{S}^{\prime}_{\mathcal{M}_{\mathrm{DH}}}$ with respect to the
holomorphic symplectic form $\Omega_{0}$. In particular, the
${\mathbb{C}}^{*}$-orbits in
$\mathcal{S}^{\prime}_{\mathcal{M}_{\mathrm{DH}}}$ are exactly the critical
points of $\mathcal{E}|_{\mathcal{S}^{\prime}_{\mathcal{M}_{\mathrm{DH}}}}$.
### 4.2. Explicit description of some ${\mathbb{C}}^{*}$-fixed sections
Corollary 4.3 shows a close relationship between ${\mathbb{C}}^{*}$-orbits in
$\mathcal{S}_{\mathcal{M}_{\mathrm{DH}}}$ and the energy functional. We
therefore examine the ${\mathbb{C}}^{*}$-orbits more closely in this section.
Before explicitly determining the ${\mathbb{C}}^{*}$-fixed _irreducible_
sections, we first observe:
###### Lemma 4.4.
The set $\mathcal{S}_{\mathcal{M}_{\mathrm{DH}}}^{{\mathbb{C}}^{*}}$ of all
${\mathbb{C}}^{*}$-fixed sections is in a natural bijection with
$\mathcal{M}_{\mathrm{dR}}$, the moduli space of flat completely reducible
${\rm SL}(n,{\mathbb{C}})$-connections.
In particular, the critical points of
$\mathcal{E}\,\colon\,\mathcal{S}_{\mathcal{M}_{\mathrm{DH}}}^{\prime}\,\longrightarrow\,{\mathbb{C}}$
correspond to an open subset of $\mathcal{M}_{\mathrm{dR}}^{irr}$, the moduli
space of flat irreducible ${\rm SL}(n,{\mathbb{C}})$-connections.
###### Proof.
Let $\nabla\,\in\,\mathcal{M}_{\mathrm{dR}}$. As in Section 2.3, we obtain the
following ${\mathbb{C}}^{*}$-invariant section
$s_{\nabla}\,\colon\,{\mathbb{C}}^{*}\,\longrightarrow\,\mathcal{M}_{\mathrm{DH}}$:
$s_{\nabla}(\lambda)\,=\,[(\overline{\partial}^{\nabla},\,\lambda\partial^{\nabla},\,\lambda)],\quad\partial^{\nabla}\,=\,\nabla^{1,0},\quad\overline{\partial}^{\nabla}\,=\,\nabla^{0,1}.$
(4.19)
By a crucial result of Simpson ([44] for existence and [45] for a more
explicit approach), the limits of $s_{\nabla}(\lambda)$ for $\lambda\to 0$ and
$\lambda\,\to\,\infty$ _always_ exist in $\mathcal{M}_{\mathrm{Higgs}}(\Sigma,{\rm
SL}(n,{\mathbb{C}}))$ and $\mathcal{M}_{\mathrm{Higgs}}(\overline{\Sigma},{\rm
SL}(n,{\mathbb{C}}))$ respectively. The resulting section of $\mathcal{M}_{\mathrm{DH}}$, also denoted by
$s_{\nabla}$, is ${\mathbb{C}}^{*}$-invariant
by continuity. Evaluation of sections
$s\,\colon\,{\mathbb{C}}P^{1}\,\longrightarrow\,\mathcal{M}_{\mathrm{DH}}$ at
$\lambda\,=\,1$ gives the inverse of the map
$\nabla\,\longmapsto\,s_{\nabla}$.
The last statement in the lemma is a direct consequence of Theorem 2.3 and
Corollary 4.3. ∎
We next determine explicitly the ${\mathbb{C}}^{*}$-fixed sections
$s\,\in\,\mathcal{S}_{\mathcal{M}_{\mathrm{DH}}}$ such that $s$ is irreducible
over ${\mathbb{C}}$, by using some results of [10]. In terms of Lemma 4.4,
these are precisely the sections $s_{\nabla}$ such that $s_{\nabla}(0)$ is
stable. Indeed, since irreducibility is an open condition,
$s_{\nabla}(\lambda)$ is an irreducible $\lambda$-connection for $\lambda$
close to $0$. Using the ${\mathbb{C}}^{*}$-invariance, we see that
$s(\lambda)$ is irreducible for every $\lambda\,\in\,{\mathbb{C}}$.
For any ${\mathbb{C}}^{*}$-fixed section $s_{\nabla}$, its values at $0$ and
$\infty$ are ${\mathbb{C}}^{*}$-fixed Higgs bundles on $\Sigma$ and
$\overline{\Sigma}$ respectively. These are called complex variations of Hodge
structures (VHS). Let $(\overline{\partial},\,\Phi)$ be any VHS on $\Sigma$.
The fact that $(\overline{\partial},\,\Phi)$ is a ${\mathbb{C}}^{*}$-fixed
point yields a splitting
$E\,=\,\bigoplus_{j=1}^{l}E_{j}$ (4.20)
into a direct sum of holomorphic bundles. With respect to this splitting,
$\overline{\partial}$ and $\Phi$ are given in the following block form
$\overline{\partial}=\begin{pmatrix}\overline{\partial}_{E_{1}}&0&\dots&\dots&0\\\
0&\overline{\partial}_{E_{2}}&\ddots&&\vdots\\\
\vdots&\ddots&\ddots&\ddots&\vdots\\\ \vdots&&\ddots&\ddots&0\\\
0&\dots&\dots&0&\overline{\partial}_{E_{l}}\end{pmatrix},\qquad\Phi=\begin{pmatrix}0&\dots&\dots&\dots&0\\\
\Phi^{(1)}&\ddots&&&\vdots\\\ 0&\Phi^{(2)}&\ddots&&\vdots\\\
\vdots&\ddots&\ddots&\ddots&\vdots\\\ 0&\dots&0&\Phi^{(l-1)}&0\end{pmatrix}.$
(4.21)
where $\Phi^{(j)}\,\in\,H^{0}(\Sigma,\,{\rm Hom}(E_{j},E_{j+1})\otimes
K_{\Sigma})$. The sheaf $\mathfrak{sl}(E)$ of trace-free holomorphic
endomorphisms of $E$ further decomposes into
$\displaystyle\mathfrak{sl}(E)=\bigoplus_{k\in\mathbb{Z}}\mathfrak{sl}(E)_{k},\quad\mathfrak{sl}(E)_{k}=\\{\psi\in\mathfrak{sl}(E)~{}|~{}\psi(E_{i})\subset
E_{i-k}\\}.$ (4.22)
By construction,
$\Phi\,\in\,H^{0}(\Sigma,\,K_{\Sigma}\otimes\mathfrak{sl}(E)_{-1})$. To define
the next notion, let
$N_{+}\,=\,\bigoplus_{k>0}\mathfrak{sl}(E)_{k},\qquad
N_{-}\,=\,\bigoplus_{k<0}\mathfrak{sl}(E)_{k},\qquad\mathbb{L}\,=\,\mathfrak{sl}(E)_{0}.$
(4.23)
Note that $N_{+}$ (respectively, $N_{-}$) is the subspace of
$\mathfrak{sl}(E)$ consisting of endomorphisms of $E$ that are strictly upper
(respectively, lower) block-triangular with respect to the splitting (4.20),
while $\mathbb{L}$ is the space of block-diagonal elements of
$\mathfrak{sl}(E)$.
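As a simple illustration (this special case is ours, though it reappears in Example 4.11 below), take $l\,=\,2$, so that $E\,=\,E_{1}\oplus E_{2}$. Then the grading (4.22) reduces to
$N_{-}\,=\,\mathfrak{sl}(E)_{-1}\,=\,\mathrm{Hom}(E_{1},\,E_{2}),\qquad N_{+}\,=\,\mathfrak{sl}(E)_{1}\,=\,\mathrm{Hom}(E_{2},\,E_{1}),$
with $\Phi$ taking values in $K_{\Sigma}\otimes N_{-}$, while $\mathbb{L}$ consists of the trace-free block-diagonal endomorphisms of $E_{1}\oplus E_{2}$.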
Now let $(\overline{\partial},\,\Phi)\,\in\,\mathcal{M}_{\mathrm{Higgs}}({\rm
SL}(n,{\mathbb{C}}))$ be a _stable_ complex variation of Hodge structures.
Then the BB-slice ([10, Definition 3.7]) through
$(\overline{\partial},\,\Phi)$ is defined by
$\displaystyle\mathcal{B}^{+}_{(\overline{\partial},\Phi)}\,=\,\\{(\beta,\,\phi)\,\in\,\Omega^{0,1}(N_{+})$
$\displaystyle\oplus\Omega^{1,0}(\mathbb{L}\oplus
N_{+})~{}\mid~{}D^{\prime\prime}(\beta,\,\phi)+[\beta\wedge\phi]\,=\,0,\quad
D^{\prime}(\beta,\,\phi)\,=\,0\\}.$ (4.24)
Here we denote by
$D\,:=\,\overline{\partial}+\partial^{h}+\Phi+\Phi^{*_{h}}$ (4.25)
the non-abelian Hodge connection attached to $(\overline{\partial},\,\Phi)$
with harmonic metric $h$, and
$D^{\prime\prime}\,:=\,\overline{\partial}+\Phi,\qquad
D^{\prime}\,:=\,\partial^{h}+\Phi^{*_{h}}.$
Hence the equations in (4.24) are explicitly given by
$D^{\prime\prime}(\beta,\,\phi)+[\beta\wedge\phi]\,=\,\overline{\partial}\phi+[(\Phi+\phi)\wedge\beta]\,=\,0,\qquad
D^{\prime}(\beta,\,\phi)\,=\,\partial^{h}\beta+[\Phi^{*_{h}}\wedge\phi]\,=\,0.$
(4.26)
Note that $\mathcal{B}^{+}_{(\overline{\partial},\Phi)}$ is a finite-
dimensional affine space. Then, [10, Theorem 1.4 (3)] states that the map
$p\,\colon\,\mathcal{B}^{+}_{(\overline{\partial},\Phi)}\times{\mathbb{C}}\,\longrightarrow\,\mathcal{M}_{\mathrm{Hod}},\quad((\beta,\,\phi),\,\lambda)\,\longmapsto\,[\lambda,\,\overline{\partial}+\lambda\Phi^{*_{h}}+\beta,\,\lambda\partial^{h}+\Phi+\phi]$
(4.27)
is a holomorphic embedding onto the “attracting set”
$W(\overline{\partial},\,\Phi)\,=\,\\{m\,\in\,\mathcal{M}_{\mathrm{Hod}}^{irr}~{}\mid~{}\lim_{\zeta\to
0}\zeta\cdot m\,=\,(\overline{\partial},\,\Phi)\\}$
and is compatible with the obvious projections to ${\mathbb{C}}$. In
particular, if $W^{\lambda}(\overline{\partial},\,\Phi)$ denotes the
intersection of $W(\overline{\partial},\,\Phi)$ with the fiber
$\varpi^{-1}(\lambda)$, then $W^{\lambda}(\overline{\partial},\Phi)$ is
biholomorphic to the affine space
$\mathcal{B}^{+}_{(\overline{\partial},\Phi)}$ via the map
$p_{\lambda}\,:=\,p(\bullet,\,\lambda)$. Thus,
$\mathcal{M}_{\mathrm{Hod}}^{irr}$ is stratified by affine spaces.
Given $(\beta,\,\phi)\,\in\,\mathcal{B}^{+}_{(\overline{\partial},\,\Phi)}$,
we can use Lemma 4.4 and (4.19) to define the ${\mathbb{C}}^{*}$-fixed section
$s_{(\beta,\phi)}\,:=\,s_{p_{1}(\beta,\phi)}\,\in\,\mathcal{S}_{\mathcal{M}_{\mathrm{DH}}}.$
(4.28)
As observed earlier, $s_{(\beta,\phi)}$ is an irreducible section over
${\mathbb{C}}\,\subset\,{\mathbb{C}}P^{1}$ but not necessarily over all of
${\mathbb{C}}P^{1}$.
###### Proposition 4.5.
Over ${\mathbb{C}}$, the ${\mathbb{C}}^{*}$-fixed section $s_{(\beta,\phi)}$
may be expressed as
$s_{(\beta,\phi)}(\lambda)\,=\,\left[\lambda,\overline{\partial}+\lambda(\Phi^{*_{h}}+\beta_{1})+\sum_{j=2}^{l}\lambda^{j}\beta_{j},\Phi+\lambda\partial^{h}+\sum_{j=0}^{l}\lambda^{j+1}\phi_{j}\right],$
(4.29)
where $\beta\,=\,\sum_{j=1}^{l}\beta_{j}$, with
$\beta_{j}\,\in\,\Omega^{0,1}(\mathfrak{sl}(E)_{j})$ and
$\phi\,=\,\sum_{j=0}^{l}\phi_{j}$ with
$\phi_{j}\,\in\,\Omega^{1,0}(\mathfrak{sl}(E)_{j})$.
###### Proof.
Let $\nabla\,=\,p_{1}(\beta,\,\phi)\,=\,[D+\beta+\phi]$ so that
$\overline{\partial}^{\nabla}\,=\,\overline{\partial}+\Phi^{*_{h}}+\beta,\quad\partial^{\nabla}\,=\,\partial^{h}+\Phi+\phi.$
Hence $s_{(\beta,\phi)}\,=\,s_{\nabla}$ is given by
$s_{(\beta,\phi)}(\lambda)\,=\,[\lambda,\,\overline{\partial}+(\Phi^{*_{h}}+\beta),\,\lambda\partial^{h}+\lambda\Phi+\lambda\phi]$
(4.30)
for $\lambda\,\in\,{\mathbb{C}}^{*}$ (see (4.19)). This does _not_ give a lift
of $s_{(\beta,\phi)}$ over all of ${\mathbb{C}}$, unless the holomorphic
bundle $(E,\,\overline{\partial})$ is stable, in which case we must have
$\beta\,=\,0$ and $\Phi\,=\,0$.
To construct a lift over all of ${\mathbb{C}}$ we use the
${\mathbb{C}}^{*}$-family of gauge transformations
$g(\lambda)\,=\,\lambda^{m}\begin{pmatrix}\lambda^{1-l}\mathrm{id}_{E_{1}}&0&\dots&\dots&0\\\
0&\lambda^{2-l}\mathrm{id}_{E_{2}}&\ddots&&\vdots\\\
\vdots&0&\ddots&\ddots&\vdots\\\ \vdots&&\ddots&\ddots&0\\\
0&\dots&\dots&0&\lambda^{0}\mathrm{id}_{E_{l}}\end{pmatrix},$ (4.31)
where $m\,=\,\frac{1}{n}\sum_{j=1}^{l}(l-j)\mathrm{rk}(E_{j})$ in order to
ensure $\det g(\lambda)\,=\,1$. Then any $\xi\,\in\,\mathfrak{sl}(E)_{j}$
satisfies
$g(\lambda)^{-1}\xi g(\lambda)\,=\,\lambda^{j}\xi.$
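This identity can be verified in one line by tracking weights (a direct check under the stated conventions): $g(\lambda)$ acts on $E_{i}$ as multiplication by $\lambda^{m+i-l}$, and $\xi\,\in\,\mathfrak{sl}(E)_{j}$ maps $E_{i}$ to $E_{i-j}$, so on $E_{i}$ we have
$g(\lambda)^{-1}\xi g(\lambda)\,=\,\lambda^{-(m+(i-j)-l)}\,\xi\,\lambda^{m+i-l}\,=\,\lambda^{j}\xi.$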
Let $\beta\,=\,\sum_{j=1}^{l}\beta_{j}$, with
$\beta_{j}\,\in\,\Omega^{0,1}(\mathfrak{sl}(E)_{j})$, and similarly
$\phi\,=\,\sum_{j=0}^{l}\phi_{j}$ with
$\phi_{j}\,\in\,\Omega^{1,0}(\mathfrak{sl}(E)_{j})$. Then using
$\Phi\,\in\,H^{0}(K\otimes\mathfrak{sl}(E)_{-1})$ and
$\Phi^{*_{h}}\,\in\,\Omega^{0,1}(\mathfrak{sl}(E)_{1})$, we get that
$(\overline{\partial}+(\Phi^{*_{h}}+\beta),\,\lambda\partial^{h}+\lambda\Phi+\lambda\phi).g(\lambda)\,=\,\left(\overline{\partial}+\lambda(\Phi^{*_{h}}+\beta_{1})+\sum_{j=2}^{l}\lambda^{j}\beta_{j},\,\Phi+\lambda\partial^{h}+\sum_{j=0}^{l}\lambda^{j+1}\phi_{j}\right).$
The result follows. ∎
We next discuss the implications for the ${\mathbb{C}}^{*}$-fixed leaves of
the foliation $\mathcal{F}^{+}$ on
$\mathcal{S}^{\prime}\,=\,\mathcal{S}_{\mathcal{M}_{\mathrm{DH}}}^{\prime}$.
Recall that these leaves consist, in particular, of _irreducible_ sections (on
all of ${\mathbb{C}}P^{1}$) by definition. We denote by
$\mathcal{S}_{(\overline{\partial},\Phi)}^{\prime}$ all sections in
$\mathcal{S}^{\prime}$ which pass through the stable complex variation of
Hodge structure
$(\overline{\partial},\,\Phi)\,\in\,\mathcal{M}_{\mathrm{Higgs}}^{{\mathbb{C}}^{*}}$
at $\lambda\,=\,0$.
###### Proposition 4.6.
The ${\mathbb{C}}^{*}$-fixed point locus
$(\mathcal{S}^{\prime}_{(\overline{\partial},\Phi)})^{{\mathbb{C}}^{*}}$ is
isomorphic to an open and non-empty subset of the affine space
$\mathcal{B}^{+}_{(\overline{\partial},\Phi)}$.
###### Proof.
Consider the section
$s_{(\beta,\phi)}\,\colon\,{\mathbb{C}}P^{1}\,\longrightarrow\,\mathcal{M}_{\mathrm{DH}}$
for $(\beta,\,\phi)\,\in\,\mathcal{B}^{+}_{(\overline{\partial},\Phi)}$ which
is irreducible over ${\mathbb{C}}$. Since the complement of
$\mathcal{M}_{\mathrm{Higgs}}^{irr}(\overline{\Sigma},\,{\rm
SL}(n,{\mathbb{C}}))$ in
$\mathcal{M}_{\mathrm{Higgs}}(\overline{\Sigma},\,{\rm SL}(n,{\mathbb{C}}))$
is closed and of codimension at least two (cf. [17]), it follows that
$s_{(\beta,\phi)}$ is an irreducible section for all $(\beta,\,\phi)$ in an open
and dense subset of $\mathcal{B}^{+}_{(\overline{\partial},\Phi)}$.
Note that $(\beta,\,\phi)\,=\,(0,\,0)$ corresponds to the twistor line
$s_{(\overline{\partial},\Phi)}$ through $(\overline{\partial},\,\Phi)$, which
lies in $\mathcal{S}^{\prime}$. Since $\mathcal{S}^{\prime}$ is open and non-
empty in the space of all irreducible sections, we therefore see that the
irreducible and ${\mathbb{C}}^{*}$-fixed section $s_{(\beta,\phi)}$ has the
desired normal bundle for $(\beta,\,\phi)$ in an open and non-empty subset
$U\,\subset\,\mathcal{B}^{+}_{(\overline{\partial},\Phi)}$. Altogether we
obtain the isomorphism
$p_{1}^{-1}\circ\mathrm{ev}_{1}\,\colon\,(\mathcal{S}_{(\overline{\partial},\Phi)}^{\prime})^{{\mathbb{C}}^{*}}\,\overset{\cong}{\longrightarrow}\,U$.
∎
From Theorem 2.3, we immediately obtain:
###### Corollary 4.7.
The locus of critical points $s\,\in\,\mathcal{S}^{\prime}$ of
$\mathcal{E}\,\colon\,\mathcal{S}^{\prime}\,\longrightarrow\,{\mathbb{C}}$ is
isomorphic to an open and non-empty subset in
$\mathcal{M}_{\mathrm{dR}}^{irr}$. It is foliated by leaves which are
isomorphic to open and non-empty subsets of affine spaces.
###### Proof.
The first statement follows, by a genericity argument, from Lemma 4.4. The
second one is a consequence of Proposition 4.6. ∎
###### Remark 4.8.
Let
$s\,:\,{\mathbb{C}}P^{1}\,\longrightarrow\,\mathcal{M}_{\mathrm{DH}}$
be a ${\mathbb{C}}^{*}$-fixed section such that
$s(0)\,=\,(\overline{\partial},\,\Phi)$ and $s(\infty)\,=\,(\partial,\,\Psi)$
are _stable_ VHS on $\Sigma$ and $\overline{\Sigma}$ respectively, with
respective splittings of the underlying smooth bundle $E$ of the form
$E\,=\,\bigoplus_{j=1}^{l}E_{j},\qquad
E\,=\,\bigoplus_{j=1}^{l^{\prime}}E^{\prime}_{j}.$
With respect to these splittings the respective holomorphic structures are
diagonal and the Higgs fields $\Phi$ and $\Psi$ are lower triangular as in
(4.21). Then we have the BB-slices
$\mathcal{B}^{+}_{(\overline{\partial},\Phi)}(\Sigma)$ and
$\mathcal{B}^{+}_{(\partial,\Psi)}(\overline{\Sigma})$. By Proposition 4.6 and
its analog on $\overline{\Sigma}$ we see that, on the one hand, $s$
corresponds to
$(\beta,\,\phi)\,\in\,\mathcal{B}^{+}_{(\overline{\partial},\Phi)}(\Sigma)$,
and on the other hand to
$(\widetilde{\beta},\,\widetilde{\phi})\,\in\,\mathcal{B}^{+}_{(\partial,\Psi)}(\overline{\Sigma})$.
Therefore, we obtain two distinguished lifts of $s$ over ${\mathbb{C}}$ and
${\mathbb{C}}^{*}\cup\\{\infty\\}$ of the form
$s(\lambda)\,=\,[\lambda,\,\widehat{s}_{(\beta,\phi)}(\lambda)]_{\Sigma}\,=\,\left[\lambda,\,\overline{\partial}+\lambda(\Phi^{*_{h}}+\beta_{1})+\sum_{j=2}^{l}\lambda^{j}\beta_{j},\,\Phi+\lambda\partial^{h}+\sum_{j=0}^{l}\lambda^{j+1}\phi_{j}\right]_{\Sigma},$
$s(\lambda)\,=\,[\lambda^{-1},\,\widehat{s}_{(\widetilde{\beta},\widetilde{\phi})}(\lambda^{-1})]_{\overline{\Sigma}}\,=\,\left[\lambda^{-1},\,\partial+\lambda^{-1}(\Psi^{*_{\widetilde{h}}}+\widetilde{\beta}_{1})+\sum_{j=2}^{l^{\prime}}\lambda^{-j}\widetilde{\beta}_{j},\,\Psi+\lambda^{-1}\overline{\partial}^{\widetilde{h}}+\sum_{j=0}^{l^{\prime}}\lambda^{-(j+1)}\widetilde{\phi}_{j}\right]_{\overline{\Sigma}}.$
Let $g_{0}$ be a gauge transformation such that
$(\partial+\overline{\partial}^{\widetilde{h}}+\Psi+\Psi^{*_{\widetilde{h}}}+\widetilde{\beta}+\widetilde{\phi}).g_{0}\,=\,\overline{\partial}+\partial^{h}+\beta+\phi.$
Going through the proof of Proposition 4.5 and writing $g(\lambda)$ and
$\widetilde{g}(\lambda^{-1})$ for the respective ${\mathbb{C}}^{*}$-families
of gauge transformations we get that
$\widehat{s}_{(\beta,\phi)}(\lambda)\,=\,\widehat{s}_{(\widetilde{\beta},\widetilde{\phi})}(\lambda^{-1}).\widetilde{g}(\lambda^{-1})^{-1}g_{0}g(\lambda)$
for any $\lambda\,\in\,{\mathbb{C}}^{*}$.
In general, starting only with the lift $\widehat{s}_{(\beta,\phi)}$ over
${\mathbb{C}}$ obtained above, it seems hard to determine explicitly the lift
$\widehat{s}_{(\widetilde{\beta},\widetilde{\phi})}(\lambda^{-1})$ over
${\mathbb{C}}P^{1}\setminus\\{0\\}$ or even the limiting VHS
$s_{(\beta,\phi)}(\infty)$. The next two examples discuss some situations in
which the limit can be computed.
###### Example 4.9.
Suppose the holomorphic structure $\partial^{h}+\Phi+\phi$ is stable on
$\overline{\Sigma}$. Then we can argue as follows. For
$\lambda\in{\mathbb{C}}^{*}$ we can write, using the Deligne gluing:
$s_{(\beta,\phi)}(\lambda)=[\lambda,\overline{\partial}+\Phi^{*_{h}}+\beta,\lambda(\partial^{h}+\Phi+\phi)]_{\Sigma}=[\lambda^{-1},\partial^{h}+\Phi+\phi,\lambda^{-1}(\overline{\partial}+\Phi^{*_{h}}+\beta)]_{\overline{\Sigma}}.$
Under our assumption that $\partial^{h}+\Phi+\phi$ is stable, this allows us
to conclude $s_{(\beta,\phi)}(\infty)=(\partial^{h}+\Phi+\phi,0)$. We will see
in the proof of Theorem 4.17 that this situation does in fact occur, at least
for rank $2$ bundles.
###### Example 4.10.
Consider the rank two case, $n=2$. If $s$ is the twistor line through a VHS
$(\overline{\partial},\Phi)$ on $\Sigma$, then we have $E=V\oplus V^{*}$,
where $V$ is a line bundle with $0<\deg V\leq g-1$ and $V^{*}=\ker\Phi$. Then
$s(\infty)=(\partial^{h},\Phi^{*_{h}})$ and the corresponding splitting is
$E=V^{*}\oplus V$. Note that, since $\overline{\Sigma}$ and $\Sigma$ come with
opposite orientations, we have $\deg V^{*}>0$, as a bundle on
$\overline{\Sigma}$. Then $\widetilde{g}(\lambda^{-1})=g(\lambda)$ in this
case, as the order is reversed. The associated lifts are thus just the lifts
of $s$ over ${\mathbb{C}}$ and ${\mathbb{C}}^{*}$ given by the harmonic
metric, i.e. the associated solution of the self-duality equations.
###### Example 4.11 (Grafting sections).
In [24] a special class of ${\mathbb{C}}^{*}$-invariant sections of
$\mathcal{M}_{\mathrm{DH}}(\Sigma,{\rm SL}(2,{\mathbb{C}}))$, called _grafting
sections_ , have been constructed by using grafting of projective structures
on $\Sigma$. We recover them from the previous proposition as follows.
Consider the ${\mathbb{C}}^{*}$-fixed stable Higgs bundle
$(\overline{\partial},\Phi)$ with
$E=K_{\Sigma}^{\frac{1}{2}}\oplus
K^{-\frac{1}{2}}_{\Sigma},\quad\Phi=\begin{pmatrix}0&0\\\ 1&0\end{pmatrix}$
(4.32)
where $K_{\Sigma}^{\frac{1}{2}}$ is a square root of the canonical bundle
$K_{\Sigma}$. To determine (4.23) in this example, we define
$E_{1}:=K_{\Sigma}^{\frac{1}{2}}$, $E_{2}:=K_{\Sigma}^{-\frac{1}{2}}$. Then we
see that
$N_{+}\cong K_{\Sigma},\quad N_{-}\cong K^{-1}_{\Sigma},\quad
\mathbb{L}\cong\mathcal{O}_{\Sigma}.$ (4.33)
By (4.26), $(0,\phi)\in\mathcal{B}_{(\overline{\partial},\Phi)}^{+}$ if and
only if $\overline{\partial}\phi=0$ and $[\phi\wedge\Phi^{*_{h}}]=0$. Hence
$\phi$ is of the form
$\phi=\begin{pmatrix}0&q\\\ 0&0\end{pmatrix},\quad q\in
H^{0}(\Sigma,K_{\Sigma}^{\otimes 2}),$ (4.34)
with respect to the splitting $E=E_{1}\oplus E_{2}$. For those $q$ such that
the monodromy of the corresponding flat connection at $\lambda=1$ is real, the
sections $s_{(0,\phi)}$ are precisely the grafting sections of [24, §2.1].
Since $\beta=0$ in this case, we see that the energy of a grafting section is
the same as the energy of the twistor line associated with the stable Higgs
pair $(\overline{\partial},\Phi)$. If the monodromy of the corresponding flat
connection is real, then [24] shows that the section $s_{(0,\phi)}$ is real
and defines an element of
$(\mathcal{S}^{\prime}_{\mathcal{M}_{\mathrm{DH}}})^{\tau}$, in particular it
has the correct normal bundle $\mathcal{O}_{{\mathbb{C}}P^{1}}(1)^{\oplus
2d}$. But the section $s_{(0,\phi)}$ is not admissible and thus cannot
correspond to a solution of the self-duality equations. This shows that
$\mathcal{M}_{\mathrm{SD}}(\Sigma,{\rm
SL}(2,{\mathbb{C}}))\subsetneq(\mathcal{S}_{\mathcal{M}_{\mathrm{DH}}}^{\prime})^{\tau}$.
### 4.3. The energy of a ${\mathbb{C}}^{*}$-fixed section
Proposition 4.5 gives concrete formulas for all ${\mathbb{C}}^{*}$-fixed
points $s\,\in\,\mathcal{S}_{\mathcal{M}_{\mathrm{DH}}}^{{\mathbb{C}}^{*}}$
such that $s(0)$ is a stable VHS. We next compute the energy of such sections.
###### Proposition 4.12.
Let $(\overline{\partial},\,\Phi)$ be a stable ${\mathbb{C}}^{*}$-fixed ${\rm
SL}(n,{\mathbb{C}})$-Higgs bundle, and let $s_{(\beta,\phi)}$ be the
${\mathbb{C}}^{*}$-fixed section corresponding to
$(\beta,\,\phi)\,\in\,\mathcal{B}^{+}_{(\overline{\partial},\Phi)}$. Its
energy is given by
$\mathcal{E}(s_{(\beta,\phi)})\,=\,\mathcal{E}(s_{0})\,=\,\sum_{k=2}^{l}(k-1)\deg(E_{k}),$
where $s_{0}$ is the twistor line through $(\overline{\partial},\,\Phi)$.
###### Proof.
Write $s_{(\beta,\phi)}$ in a form as in (4.29). Then the definition of
$\mathcal{E}$ immediately implies that
$\mathcal{E}(s_{(\beta,\phi)})\,=\,\mathcal{E}(s_{0})+\tfrac{1}{2\pi\mathsf{i}}\int_{\Sigma}\mathrm{tr}(\Phi\wedge\beta_{1}).$
Next we will show that $\int_{\Sigma}\mathrm{tr}(\Phi\wedge\beta_{1})\,=\,0$.
To this end, let us write
$\Phi\,=\,\sum_{k=1}^{l-1}\Phi^{(k)},\qquad\beta_{1}\,=\,\sum_{k=1}^{l-1}\beta^{(k)},$
where $\Phi^{(k)}\,\in\,\Omega^{1,0}(\mathrm{Hom}(E_{k},\,E_{k+1}))$,
$\beta^{(k)}\,\in\,\Omega^{0,1}(\mathrm{Hom}(E_{k+1},\,E_{k}))$; see the block
form in (4.21). It follows that
$\mathrm{tr}(\Phi\wedge\beta_{1})\,=\,\sum_{k=1}^{l}\mathrm{tr}_{E_{k}}(\Phi^{(k-1)}\wedge\beta^{(k-1)}).$
Note that each summand $\Phi^{(k-1)}\wedge\beta^{(k-1)}$ belongs to
$\Omega^{1,1}(\mathrm{End}(E_{k}))$ and we have adopted the convention that
$\Phi^{(k)}\,=\,0\,=\,\beta^{(k)}$ if $k\,=\,0,\,l$.
Now, equation (4.26) implies that
$\overline{\partial}\phi_{0}+[\Phi\wedge\beta_{1}]\,=\,0$
and we can write
$[\Phi\wedge\beta_{1}]\,=\,\sum_{k=1}^{l}\left(\Phi^{(k-1)}\wedge\beta^{(k-1)}+\beta^{(k)}\wedge\Phi^{(k)}\right).$
(4.35)
Thus, for each $k\,=\,1,\,\cdots,\,l$,
$\overline{\partial}\phi_{0}^{(k)}+\Phi^{(k-1)}\wedge\beta^{(k-1)}+\beta^{(k)}\wedge\Phi^{(k)}\,=\,0.$
Consider the case of $k\,=\,l$:
$\overline{\partial}\phi_{0}^{(l)}+\Phi^{(l-1)}\wedge\beta^{(l-1)}\,=\,0.$
Taking the trace of this equation and integrating over $\Sigma$, we find,
using Stokes’ theorem, that
$\int_{\Sigma}\mathrm{tr}_{E_{l}}(\Phi^{(l-1)}\wedge\beta^{(l-1)})\,=\,0.$
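In more detail, the Stokes argument reads as follows (a short sketch, using only that $\Sigma$ is a closed Riemann surface):
$\int_{\Sigma}\mathrm{tr}_{E_{l}}(\overline{\partial}\phi_{0}^{(l)})\,=\,\int_{\Sigma}\overline{\partial}\,\mathrm{tr}_{E_{l}}(\phi_{0}^{(l)})\,=\,\int_{\Sigma}d\,\mathrm{tr}_{E_{l}}(\phi_{0}^{(l)})\,=\,0,$
since $\mathrm{tr}_{E_{l}}(\phi_{0}^{(l)})$ is a $(1,0)$-form, so its $\partial$-derivative is a $(2,0)$-form and vanishes on $\Sigma$.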
Now assume that
$\int_{\Sigma}\mathrm{tr}_{E_{k+1}}(\Phi^{(k)}\wedge\beta^{(k)})\,=\,0$ for
all $k\,\geq\,k_{0}$. Then we have
$\overline{\partial}\phi_{0}^{(k_{0})}+\Phi^{(k_{0}-1)}\wedge\beta^{(k_{0}-1)}+\beta^{(k_{0})}\wedge\Phi^{(k_{0})}\,=\,0.$
Taking the trace and integrating yields
$\displaystyle 0$
$\displaystyle=\int_{\Sigma}\mathrm{tr}_{E_{k_{0}}}(\Phi^{(k_{0}-1)}\wedge\beta^{(k_{0}-1)}+\beta^{(k_{0})}\wedge\Phi^{(k_{0})})$
(4.36)
$\displaystyle=\int_{\Sigma}\mathrm{tr}_{E_{k_{0}}}(\Phi^{(k_{0}-1)}\wedge\beta^{(k_{0}-1)})-\int_{\Sigma}\mathrm{tr}_{E_{k_{0}+1}}(\Phi^{(k_{0})}\wedge\beta^{(k_{0})})$
(4.37)
$\displaystyle=\int_{\Sigma}\mathrm{tr}_{E_{k_{0}}}(\Phi^{(k_{0}-1)}\wedge\beta^{(k_{0}-1)}).$
(4.38)
It follows inductively that
$\int_{\Sigma}\mathrm{tr}(\Phi\wedge\beta_{1})\,=\,0$.
It remains to compute the energy of the twistor line $s_{0}$. To this end, we
observe that
$\mathcal{E}(s_{0})\,=\,\tfrac{1}{2\pi\mathsf{i}}\int_{\Sigma}\mathrm{tr}(\Phi\wedge\Phi^{*_{h}})\,=\,\tfrac{1}{2\pi\mathsf{i}}\int_{\Sigma}\sum_{k=2}^{l}\mathrm{tr}_{E_{k}}(\Phi^{(k-1)}\wedge(\Phi^{(k-1)})^{*_{h}})\,=\,\sum_{k=2}^{l}\mathcal{E}_{k}(s_{0}),$
where we put
$\mathcal{E}_{k}(s_{0})\,=\,\tfrac{1}{2\pi\mathsf{i}}\int_{\Sigma}\mathrm{tr}_{E_{k}}(\Phi^{(k-1)}\wedge(\Phi^{(k-1)})^{*_{h}})$
for $k\,\geq\,2$. The equation $F^{\nabla^{h}}+[\Phi\wedge\Phi^{*_{h}}]\,=\,0$
is block-diagonal with respect to the splitting
$E\,=\,\bigoplus_{k=1}^{l}E_{k}$, with components
$F^{\nabla^{h}_{E_{k}}}+\Phi^{(k-1)}\wedge(\Phi^{(k-1)})^{*_{h}}+(\Phi^{(k)})^{*_{h}}\wedge\Phi^{(k)}=0.$
This gives the following recursive relations:
$\displaystyle\mathcal{E}_{k}(s_{0})$
$\displaystyle=\tfrac{1}{2\pi\mathsf{i}}\int_{\Sigma}\mathrm{tr}_{E_{k}}(\Phi^{(k-1)}\wedge(\Phi^{(k-1)})^{*_{h}})$
(4.39)
$\displaystyle=\tfrac{\mathsf{i}}{2\pi}\int_{\Sigma}\mathrm{tr}_{E_{k}}(F^{\nabla^{h}_{E_{k}}})+\tfrac{1}{2\pi\mathsf{i}}\int_{\Sigma}\mathrm{tr}_{E_{k+1}}(\Phi^{(k)}\wedge(\Phi^{(k)})^{*_{h}})$
(4.40) $\displaystyle=\deg(E_{k})+\mathcal{E}_{k+1}(s_{0}).$ (4.41)
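To spell out the step from (4.39) to (4.41): one takes $\mathrm{tr}_{E_{k}}$ in the block-diagonal component of the self-duality equation displayed above and integrates, using the cyclicity identity and the Chern–Weil formula
$\mathrm{tr}_{E_{k}}\left((\Phi^{(k)})^{*_{h}}\wedge\Phi^{(k)}\right)\,=\,-\,\mathrm{tr}_{E_{k+1}}\left(\Phi^{(k)}\wedge(\Phi^{(k)})^{*_{h}}\right),\qquad\deg(E_{k})\,=\,\tfrac{\mathsf{i}}{2\pi}\int_{\Sigma}\mathrm{tr}_{E_{k}}(F^{\nabla^{h}_{E_{k}}}),$
together with $\tfrac{1}{2\pi\mathsf{i}}\,=\,-\tfrac{\mathsf{i}}{2\pi}$ (this unpacking is ours, but it uses only standard identities).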
Thus, if $k\,=\,l$, we find that
$\mathcal{E}_{l}(s_{0})\,=\,\deg(E_{l})\,,$
and for general $k$ we get that
$\mathcal{E}_{k}(s_{0})\,=\,\sum_{j=k}^{l-1}\deg(E_{j})+\mathcal{E}_{l}(s_{0})\,=\,\sum_{j=k}^{l}\deg(E_{j}).$
Therefore,
$\mathcal{E}(s_{0})\,=\,\sum_{k=2}^{l}\mathcal{E}_{k}(s_{0})=\sum_{k=2}^{l}\sum_{j=k}^{l}\deg(E_{j})\,=\,\sum_{k=2}^{l}(k-1)\deg(E_{k}),$
and this completes the proof. ∎
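As a quick sanity check (our illustration, anticipating Example 4.11 and the proof of Theorem 4.17 below): for the rank-two VHS with $E_{1}\,=\,K_{\Sigma}^{\frac{1}{2}}$ and $E_{2}\,=\,K_{\Sigma}^{-\frac{1}{2}}$, the formula yields
$\mathcal{E}(s_{0})\,=\,(2-1)\deg(E_{2})\,=\,\deg K_{\Sigma}^{-\frac{1}{2}}\,=\,1-g.$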
### 4.4. The second variation of the energy at a ${\mathbb{C}}^{*}$-fixed
section
Next we study the second variation of the energy functional $\mathcal{E}$ at a
${\mathbb{C}}^{*}$–fixed point.
Examining the proof of Proposition 4.5, we can check explicitly that the
sections $s_{(\beta,\phi)}$ satisfy for any $\zeta\in{\mathbb{C}}^{*}$ the
relation
$\zeta.\widehat{s}_{(\beta,\phi)}=\widehat{s}_{(\beta,\phi)}.g(\zeta)^{-1}.$
Moreover, if we use the notation of equation (4.31) and put
$\xi=\begin{pmatrix}(m+1-l)\mathrm{id}_{E_{1}}&0&\dots&\dots&0\\\
0&(m+2-l)\mathrm{id}_{E_{2}}&\ddots&&\vdots\\\
\vdots&0&\ddots&\ddots&\vdots\\\ \vdots&&\ddots&\ddots&0\\\
0&\dots&\dots&0&m\mathrm{id}_{E_{l}}\end{pmatrix},$ (4.42)
then $[\xi,\cdot]$ acts as multiplication by $k$ on $\mathfrak{sl}(E)_{k}$ and
we see that
$-\mathsf{i}\lambda\frac{d}{d\lambda}\overline{\partial}(\lambda)=\overline{\partial}(\lambda)\xi(\lambda),\qquad\mathsf{i}D(\lambda)-\mathsf{i}\lambda\frac{d}{d\lambda}D(\lambda)=D(\lambda)\xi(\lambda)$
(4.43)
with
$(\overline{\partial}(\lambda),D(\lambda))=\left(\overline{\partial}+\lambda(\Phi^{*_{h}}+\beta_{1})+\sum_{j=2}^{l}\lambda^{j}\beta_{j},\Phi+\lambda\partial^{h}+\sum_{j=0}^{l}\lambda^{j+1}\phi_{j}\right).$
For $\xi(\lambda)=\sum_{k=0}^{\infty}\xi_{k}\lambda^{k}$ we deduce from (4.43)
the following equations
$\displaystyle 0$ $\displaystyle=\overline{\partial}\xi_{0}$
$\displaystyle\Phi$ $\displaystyle=[\Phi,\xi_{0}]$ (4.44) $\displaystyle-\Psi$
$\displaystyle=\overline{\partial}\xi_{1}+[\Psi,\xi_{0}]$ $\displaystyle 0$
$\displaystyle=[\Phi,\xi_{1}]+\partial\xi_{0}.$ (4.45)
We can now compute the second variation of $\mathcal{E}$ at such fixed points.
###### Proposition 4.13.
The second variation of $\mathcal{E}$ at a ${\mathbb{C}}^{*}$–fixed point $s$
with lift $\widehat{s}$ as in (3.23) is given by
$d^{2}\mathcal{E}(\dot{s})\,=\,\frac{1}{2\pi\mathsf{i}}\int_{\Sigma}\mathrm{tr}\left(\psi_{0}\wedge[\varphi_{1},\xi]+\varphi_{1}\wedge[\psi_{0},\xi]+\psi_{1}\wedge[\varphi_{0},\xi]+\varphi_{0}\wedge[\psi_{1},\xi]+2\varphi_{0}\wedge\psi_{1}\right)\,.$
###### Proof.
Let $(s_{t})$ be a family of sections with $s_{0}=s$. We compute, using the
notation for $\widehat{s}$ and $\dot{s}$ as in Section 4
$2\pi\mathsf{i}\frac{d^{2}}{dt^{2}}|_{t=0}\mathcal{E}(s_{t})=\int_{\Sigma}\mathrm{tr}(\Phi\wedge\dot{\psi}_{1}+\dot{\varphi}_{0}\wedge\Psi+2\varphi_{0}\wedge\psi_{1}).$
(4.46)
Since $s_{0}\,=\,s$ is fixed by the action of ${\mathbb{C}}^{*}$, we can use
(4.44) (with $\xi=\xi_{0},\xi_{1}=0$) to write
$\displaystyle 2\pi\mathsf{i}\frac{d^{2}}{dt^{2}}|_{t=0}\mathcal{E}(s_{t})$
$\displaystyle=\int_{\Sigma}\mathrm{tr}([\Phi,\xi]\wedge\dot{\psi}_{1}-\dot{\varphi}_{0}\wedge[\Psi,\xi]+2\varphi_{0}\wedge\psi_{1})$
(4.47)
$\displaystyle=\int_{\Sigma}\mathrm{tr}(-\xi([\Phi\wedge\dot{\psi}_{1}]+[\dot{\varphi}_{0}\wedge\Psi])+2\varphi_{0}\wedge\psi_{1})$
(4.48)
$\displaystyle=\int_{\Sigma}\mathrm{tr}(\xi(\overline{\partial}\dot{\varphi}_{1}+\partial\dot{\psi}_{0}+2[\psi_{0}\wedge\varphi_{1}]+2[\varphi_{0}\wedge\psi_{1}])+2\varphi_{0}\wedge\psi_{1})$
(4.49) $\displaystyle({\rm using}~{}\overline{\partial}\xi=0=\partial\xi)$
$\displaystyle=\int_{\Sigma}\mathrm{tr}(\xi(2[\psi_{0}\wedge\varphi_{1}]+2[\varphi_{0}\wedge\psi_{1}])+2\varphi_{0}\wedge\psi_{1})$
(4.50)
$\displaystyle=\int_{\Sigma}\mathrm{tr}(\psi_{0}\wedge[\varphi_{1},\xi]+\varphi_{1}\wedge[\psi_{0},\xi]+\psi_{1}\wedge[\varphi_{0},\xi]+\varphi_{0}\wedge[\psi_{1},\xi]+2\varphi_{0}\wedge\psi_{1})\,.$
(4.51)
In the third equation from above we made use of the second linearisation of
(4.3). ∎
Proposition 4.13 shows that the second variation is closely related to the
infinitesimal ${\mathbb{C}}^{*}$-action on the tangent space. The following
proposition is obtained.
###### Proposition 4.14.
Let
$\dot{s}(\lambda)\,=\,(\dot{\overline{\partial}}(\lambda),\,\dot{D}(\lambda),\lambda)\,=\,(\sum_{k=0}^{\infty}\psi_{k}\lambda^{k},\,\sum_{k=0}^{\infty}\varphi_{k}\lambda^{k},\,\lambda)$
be an infinitesimal deformation of the critical point $s\,\in\,\mathcal{S}$.
Suppose that $\dot{s}$ satisfies
$[\psi_{0},\xi]=n_{0}\psi_{0},\qquad[\psi_{1},\xi]=n_{1}\psi_{1},\qquad[\varphi_{0},\xi]=m_{0}\varphi_{0},\qquad[\varphi_{1},\xi]=m_{1}\varphi_{1}$
for some $m_{i},\,n_{i}\,\in\,\mathbb{Z}$. Then
$d^{2}\mathcal{E}(\dot{s})=\frac{1}{2\pi\mathsf{i}}\int_{\Sigma}\mathrm{tr}((m_{1}+n_{0})\psi_{0}\wedge\varphi_{1}+(m_{0}+n_{1}+2)\psi_{1}\wedge\varphi_{0})\,.$
###### Remark 4.15.
Note that this resembles the discussion surrounding Eq. (8.10) in [28]. In
fact, it does reproduce Hitchin’s result in the case that $s$ is the twistor
line corresponding to a ${\mathbb{C}}^{*}$-fixed point in
$\mathcal{M}_{\mathrm{Higgs}}$ and the deformation $\dot{s}$ is
real, so that $\psi_{1}=\varphi_{0}^{*},\psi_{0}=-\varphi_{1}^{*}$.
### 4.5. Sections and the degree of the hyperholomorphic line bundle
Our previous results together with the energy can be used to show that the
space of irreducible sections is not connected. We begin with the following
###### Proposition 4.16.
Let $(\overline{\partial},\Phi)$ be a stable ${\mathbb{C}}^{*}$-fixed Higgs
bundle and let $s_{(\beta,\phi)}$ be a ${\mathbb{C}}^{*}$-fixed section
corresponding to
$(\beta,\phi)\in\mathcal{B}^{+}_{(\overline{\partial},\Phi)}$. If
$s_{(\beta,\phi)}(\infty)$ is given by a VHS on $\overline{\Sigma}$ with
underlying holomorphic bundle $E=\bigoplus_{k=1}^{l^{\prime}}E^{\prime}_{k}$,
then we have
$\deg(s_{(\beta,\phi)}^{*}L_{Z})\,=\,\sum_{k=1}^{l}(k-1)\deg(E_{k})+\sum_{k=1}^{l^{\prime}}(k-1)\deg(E_{k}^{\prime}).$
###### Proof.
Proposition 4.12, applied on $\Sigma$ and on $\overline{\Sigma}$, allows us to compute $\mathcal{E}(s_{(\beta,\phi)})$ and
$\mathcal{E}_{\infty}(s_{(\beta,\phi)})$. The assertion now follows from the formula
$\deg(s_{(\beta,\phi)}^{*}L_{Z})=\mathcal{E}(s_{(\beta,\phi)})+\mathcal{E}_{\infty}(s_{(\beta,\phi)})\,.$
∎
###### Theorem 4.17.
There exist irreducible sections $s$ of
$\varpi\,:\,\mathcal{M}_{\mathrm{DH}}(\Sigma,{\rm
SL}(2,{\mathbb{C}}))\,\longrightarrow\,{\mathbb{C}}P^{1}$ such that the
pullback $s^{*}L_{Z}$ of the holomorphic line bundle
$L_{Z}\,\longrightarrow\,\mathcal{M}_{\mathrm{DH}}(\Sigma,{\rm
SL}(2,{\mathbb{C}}))$ has non-zero degree. In particular, the space of
irreducible sections is not connected.
###### Proof.
Let $K_{\Sigma}^{\frac{1}{2}}$ be a square-root of the canonical line bundle
$K_{\Sigma}$. Consider the uniformization (Fuchsian) flat connection
$\nabla^{Fuchs}=\begin{pmatrix}\nabla^{K_{\Sigma}^{\frac{1}{2}}}&1^{*}\\\
1&\nabla^{K_{\Sigma}^{-\frac{1}{2}}}\end{pmatrix}$
on the rank two bundle $K_{\Sigma}^{\frac{1}{2}}\oplus
K_{\Sigma}^{-\frac{1}{2}}.$ For a generic holomorphic quadratic differential
$q\,\in\,H^{0}(\Sigma,\,K_{\Sigma}^{2})$, the anti-holomorphic structure
$\begin{pmatrix}\partial_{K_{\Sigma}^{\frac{1}{2}}}&q\\\
1&\partial_{K_{\Sigma}^{-\frac{1}{2}}}\end{pmatrix}$
is stable (i.e., it defines a stable holomorphic bundle on
$\overline{\Sigma}$). Then,
$\nabla:=\nabla^{Fuchs}+\begin{pmatrix}0&q\\\ 0&0\end{pmatrix}$
gives a ${\mathbb{C}}^{*}$-invariant section
$s_{\nabla}\in\mathcal{S}_{\mathcal{M}_{\mathrm{DH}}}$ by the construction of
Lemma 4.4. In view of Proposition 4.12, the energy at $\lambda=0$ is given by
$\deg{K_{\Sigma}^{-\frac{1}{2}}}=1-g\neq 0.$
By assumption, $\partial^{\nabla}$ is stable, so the anti-Higgs field of $s$
at $\lambda=\infty$ vanishes, and the energy at $\lambda=\infty$ is given by
$\mathcal{E}_{\infty}=0.$ Finally, we have
$\deg(s^{*}L_{Z})=\mathcal{E}(s)+\mathcal{E}_{\infty}(s)\neq 0$
by the residue formula for the pull-back under $s$ of the meromorphic
connection to ${\mathbb{C}}P^{1}$ (see Section 3 of [4]). ∎
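To make the degree count concrete (simple arithmetic from the proof): $\deg(s^{*}L_{Z})\,=\,\mathcal{E}(s)+\mathcal{E}_{\infty}(s)\,=\,(1-g)+0\,=\,1-g$, e.g. $\deg(s^{*}L_{Z})\,=\,-1$ for genus $g\,=\,2$. On the other hand, by Proposition 4.12 a twistor line through a stable bundle with vanishing Higgs field (so $l\,=\,1$) has energy $0$ at both $\lambda\,=\,0$ and $\lambda\,=\,\infty$, hence degree $0$; since the degree is constant on connected components, such sections cannot lie in the same component (this unpacking of the "in particular" is our reading of the proof).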
Given an (irreducible) section $s\in\mathcal{S}_{\mathcal{M}_{\mathrm{DH}}}$,
it is in general very difficult to compute its normal bundle $N_{s}$. However,
by using the methods of [24], it can be shown that the
${\mathbb{C}}^{*}$-fixed points considered in the proof of Theorem 4.17 do not
have normal bundles of generic type, i.e., their normal bundles admit
holomorphic sections with double zeros.
## References
* [1] D. V. Alekseevsky, V. Cortés and T. Mohaupt, Conification of Kähler and hyper-Kähler manifolds, Comm. Math. Phys. 324 no. 2, (2013), 637 – 655
* [2] M. F. Atiyah and R. S. Ward, Instantons and algebraic geometry, Comm. Math. Phys. 55 no. 2, (1977), 117 – 124.
* [3] R. J. Baston and M. G. Eastwood, The Penrose transform: its interaction with representation theory, Oxford Mathematical Monographs, Clarendon Press, Oxford (1989).
* [4] F. Beck, S. Heller and M. Roeser, Energy of sections of the Deligne–Hitchin twistor space, Math. Ann. (2020), https://doi.org/10.1007/s00208-020-02042-0.
* [5] I. Biswas and S. Heller, On the Automorphisms of a Rank One Deligne–Hitchin Moduli Space, SIGMA 13 (2017), https://doi.org/10.3842/SIGMA.2017.072.
* [6] I. Biswas, S. Heller and M. Röser, Real holomorphic sections of the Deligne–Hitchin twistor space, Comm. Math. Phys. 366 (2019), 1099–1133.
* [7] I. Biswas, S. Dumitrescu and S. Heller, Irreducible flat $\mathrm{SL}(2,{\mathbb{R}})$-connections on the trivial holomorphic bundle, Jour. Math. Pures Appl. (to appear), arXiv:2003.06997.
* [8] I. Biswas and N. Raghavendra, Line bundles over a moduli space of logarithmic connections on a Riemann surface, Geom. Funct. Anal. 15 (2005), 780–808.
* [9] N. Buchdahl, On the relative de Rham sequence, Proc. AMS 87, no. 2 (1983), 363–366.
* [10] B. Collier and R. Wentworth, Conformal limits and the Bialynicki-Birula stratification of the space of $\lambda$-connections, Adv. Math. 350 (2019), 1193–1225.
* [11] K. Corlette, Flat $G$-bundles with canonical metrics, J. Diff. Geom. 28 (1988), 361 – 382
* [12] S. K. Donaldson, Twisted Harmonic Maps and the self-duality equations, Proc. London Math. Soc. (3) 55, no. 1, (1987), 127–131.
* [13] J. Dorfmeister, F. Pedit and H. Wu, Weierstrass type representation of harmonic maps into symmetric spaces, Comm. Anal. Geom. 6 (1998), no. 4, 633–668.
* [14] J.-M. Drézet and M. S. Narasimhan, Groupe de Picard des variétés de modules de fibrés semi-stables sur les courbes algébriques, Invent. Math. 97 (1989), 53–94.
* [15] O. Dumitrescu, L. Fredrickson, G. Kydonakis, R. Mazzeo, M. Mulase and A. Neitzke, From the Hitchin section to opers through Nonabelian Hodge, Journal of Differential Geometry, 117, No. 2, (2021), 223 – 253
* [16] O. Dumitrescu and M. Mulase, Interplay between opers, quantum curves, WKB analysis, and Higgs bundles, preprint arXiv:1702.00511v2 [math.AG]
* [17] G. Faltings, Stable $G$-bundles and projective connections, Journal of Algebraic Geometry 2 (1993), No.3, 507–568
* [18] B. Feix, Hyperkähler metrics on cotangent bundles, J. Reine Angew. Math. 532 (2001), 33–46.
* [19] B. Feix, Hypercomplex manifolds and hyperholomorphic bundles, Math. Proc. Cam. Philos. Soc. 133, (2002), 443–457.
* [20] B. Feix, Twistor spaces of hyperkähler manifolds with $S^{1}$-actions, Differential Geom. Appl. 19 (2003), 15–28.
* [21] H. Grauert and R. Remmert, _Coherent analytic sheaves_ , Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 265, Springer-Verlag, Berlin, 1984.
* [22] R. Hartshorne, Algebraic Geometry, Graduate Texts in Mathematics, 52, New York: Springer-Verlag, (1977).
* [23] A. Haydys, HyperKähler and quaternionic Kähler manifolds with $S^{1}$-symmetries, J. Geom. Phys. 58 (2008), 293–306.
* [24] S. Heller, Real projective structures on Riemann surfaces and new hyper-Kähler manifolds, arXiv:1906.10350, (2019).
* [25] L. Heller and S. Heller, Higher solutions of Hitchin's self-duality equations, Journal of Integrable Systems, Volume 5, Issue 1, (2020).
* [26] N. J. Hitchin, The self-duality equations on a Riemann surface, Proc. London Math. Soc. (3) 55, no. 1, (1987) 59 –126.
* [27] N. J. Hitchin, Harmonic maps from a 2-torus to the 3-sphere, J. Differential Geom. 31, no. 3, (1990), 627 – 710.
* [28] N. J. Hitchin, Lie groups and Teichmüller space, Topology 31 (1992), 449 – 473
* [29] N. J. Hitchin, On the hyperkähler/quaternion Kähler correspondence, Comm. Math. Phys. 324 (2013), 77 – 106.
* [30] N. J. Hitchin, The hyperholomorphic line bundle, in Algebraic and Complex Geometry. In honour of Klaus Hulek’s 60th birthday, Springer Publishing (2014)
* [31] N. J. Hitchin, A. Karlhede, U. Lindström and M. Roček, Hyper-Kähler metrics and supersymmetry, Comm. Math. Phys. 108 (1987), 535–589.
* [32] Z. Hu and P. Huang, Flat $\lambda$-Connections, Mochizuki Correspondence and Twistor Spaces, arXiv:1905.10765.
* [33] P. Huang, Non-Abelian hodge theory and related topics, SIGMA 16 (2020), https://doi.org/10.3842/SIGMA.2020.029.
* [34] S. A. Huggett and S. A. Merkulov, Twistor Transform of vector bundles, Math. Scand. Vol. 85, No. 2 (1999), pp. 219 – 244
* [35] M. Jardim and M. Verbitsky, Trihyperkähler reduction and instanton bundles on ${\mathbb{C}}P^{3}$, Compositio Math. 150 (2014), 1836–1868.
* [36] M. Jardim and M. Verbitsky, Moduli spaces of framed instanton bundles on ${\mathbb{C}}P^{3}$ and twistor sections of moduli spaces of instantons on ${\mathbb{C}}^{2}$ Adv. Math., 227 (2011), 1526–1538.
* [37] C. LeBrun, Quaternionic-Kähler manifolds and conformal geometry. Math. Ann. 284, 353–376 (1989).
* [38] M. Maruyama, Openness of a family of torsion free sheaves, Jour. Math. Kyoto Univ. 16 (1976), 627–637.
* [39] M. Mayrand, Hyperkahler metrics near Lagrangian submanifolds and symplectic groupoids, arXiv:2011.09282.
* [40] M. Namba, On maximal families of compact complex submanifolds of complex fiber spaces, Tohoku Math. J. (2) Volume 25, Number 2 (1973), 237–262.
* [41] M. S. Narasimhan and C. S. Seshadri, Stable and unitary vector bundles on a compact Riemann surface, Ann. of Math. 82 (1965), 540–567.
* [42] K. Pohlmeyer, Integrable Hamiltonian systems and interactions through quadratic constraints, Comm. Math. Phys. 46, no. 3, 1976.
* [43] D. G. Quillen, Determinants of Cauchy–Riemann operators over a Riemann surface, Funct. Anal. Appl. 19 (1985), 31–34.
* [44] C. Simpson, The Hodge filtration on nonabelian cohomology, Algebraic geometry—Santa Cruz 1995, Proc. Sympos. Pure Math., vol. 62, Amer. Math. Soc., Providence, RI, 1997, pp. 217–281.
* [45] C. Simpson, Iterated destabilizing modifications for vector bundles with connection, Contemporary Math. 522, 2010 (Proceedings of the Ramanan Conference, Madrid, 2008)., 2008.
* [46] C. Simpson, Higgs bundles and local systems, Inst. Hautes Études Sci. Publ. Math. 75 (1992), 5–95.
* [47] K. Uhlenbeck, Harmonic maps into Lie groups (classical solutions of the chiral model),J. Diff. Geom., Vol. 30, pages 1–50, 1989. |
# In-situ study of the impact of temperature and architecture on the
interfacial structure of microgels
Steffen Bochenek (Institute of Physical Chemistry, RWTH Aachen University, Landoltweg 2, 52056 Aachen, Germany, European Union), Fabrizio Camerin (CNR-ISC, Sapienza University of Rome, Piazzale Aldo Moro 2, 00185 Roma, Italy, European Union), Emanuela Zaccarelli (CNR-ISC, Sapienza University of Rome, Piazzale Aldo Moro 2, 00185 Roma, Italy, European Union), Armando Maestro (Institut Laue-Langevin ILL DS/LSS, 71 Avenue des Martyrs, 38000 Grenoble, France, European Union), Maximilian M. Schmidt (Institute of Physical Chemistry, RWTH Aachen University, Landoltweg 2, 52056 Aachen, Germany, European Union), Walter Richtering (Institute of Physical Chemistry, RWTH Aachen University, Landoltweg 2, 52056 Aachen, Germany, European Union), and Andrea Scotti (Institute of Physical Chemistry, RWTH Aachen University, Landoltweg 2, 52056 Aachen, Germany, European Union)<EMAIL_ADDRESS>
###### Abstract
The structural characterization of microgels at interfaces is fundamental to
understand both their 2D phase behavior and their role as stabilizers that
enable emulsions to be broken on demand. However, this characterization is
usually limited by available experimental techniques, which do not allow a
direct investigation at interfaces. To overcome this difficulty, here we
employ neutron reflectometry, which allows us to probe the structure and
responsiveness of the microgels in-situ at the air-water interface. We
investigate two types of microgels with different cross-link density, thus
having different softness and deformability, both below and above their volume
phase transition temperature, combining experiments with computer simulations
of realistic in silico synthesized microgels. We find that temperature only
affects the portion of microgels in water, while the strongest effect of the
microgels softness is observed in their ability to protrude into the air. In
particular, standard microgels have an apparent contact angle of few degrees,
while ultra-low cross-linked microgels form a flat polymeric layer with zero
contact angle. Altogether, this study provides an in-depth microscopic
description of how different microgel architectures affect their arrangements
at interfaces, and will be the foundation for a better understanding of their
phase behavior and assembly. This manuscript has been accepted for publication
in Nature Communications (open access). The final version of the manuscript
including the Supplementary Information will be available in the future.
## 1 Introduction
Soft nano- and microgels - cross-linked polymer networks swollen in a good
solvent - reveal peculiar properties that are different from those of other
colloidal systems such as hard nanoparticles, polymers and surfactants.1, 2,
3, 4, 5 The impact of softness, for instance, emerges when micro- and nanogels
adsorb at interfaces: they stretch and deform to maximize the coverage of the
interface and minimize the interfacial energy 6, 7, 8, 9, 10, 11. At the same
time, they do not completely disassemble but remain individual particles, in
contrast to other macromolecules such as block copolymer micelles, which
irreversibly change their internal conformation upon adsorption at an
interface 12, 13.
Nano- and microgels based on poly-$N$-isopropylacrylamide (pNIPAM) have a
high interfacial activity14 and at the same time maintain their thermo-
responsiveness once adsorbed to air-15, 16, 17, liquid-18, 19, 20, 21, or
solid interfaces 22, 23, 24, 25. They can be used to prepare smart emulsions
18, 19, 26, 27, 28 that can be broken on demand as a function of external
stimuli such as temperature and pH 18, 29, 19, 30, 31, 32.
A detailed knowledge of the 3D structure of microgels at an interface is
essential to understand fundamental aspects such as their 2D-phase behavior
33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43 or their functionality in emulsion
stabilization. While there has been significant progress in studying microgels
at solid substrates, in-situ experiments at fluid interfaces are still scarce.
A powerful technique to obtain experimental insight into the structure and
composition of surfaces and/or thin films with sub-nanometric resolution is
specular neutron reflectometry (SNR), which has been employed to study NIPAM-
based systems, such as linear polymers and nanogels 44, 45.
Recently, Zielińska et al. probed the structure of pNIPAM nanogels (with
diameter smaller than 40 nm) below and at the lower critical solution
temperature of pNIPAM of 32$\,{}^{\circ}$C 44, 46. They found that nanogels protrude by $\approx 2$ nm into the air phase and form a thick polymeric layer at the interface. Beneath this, two layers of highly solvated pNIPAM were observed. As highlighted in these studies, a key aspect which
determines the monolayer structure is represented by the nanogel
deformability. More generally, the extent of the microgels’ deformation, their
final shape, and their phase behaviour strongly depend on their softness and
internal architecture.
It can be expected that size and cross-linker density of the microgels
strongly influence the structure of the microgel-covered interface and indeed
a transition from particle-to-polymer-like behavior has been observed for
ultra-soft microgels adsorbed to solid interfaces 39. Atomic force microscopy
(AFM), cryo-scanning electron (cryoSEM) microscopy, and computer simulations
show that adsorbed standard microgels, i.e. microgels with a cross-linker
content of few mol %, have a core-corona or fried-egg-like shape when dried,
where the fuzzy shell of the microgels forms a thin layer at the interface
with the more cross-linked core in the center 8, 47, 6, 33, 48. The core-
corona structure gives rise to a rich 2D-phase behavior of the microgel
monolayer characterized by a solid-to-solid phase transition 33. In contrast,
AFM measurements demonstrate that ultra-soft microgels have a flat and
homogeneous pancake-like structure 25. Furthermore, depending on the monolayer
concentration, they can form both flat films and behave as polymers or as a
disordered arrangement of particles 39.
In this contribution, we address the following questions: Do microgels
protrude into the air and if so how far? Is it possible to determine a contact
angle for microgels? How are these quantities affected by the cross-linking
density and by the collapse of the microgels in the water phase? In
particular, we employ SNR to determine in-situ the structure of microgels
along the normal to the interface and compare the resulting polymer fraction
profiles with those obtained by computer simulations.
We investigate two different types of microgels. The first one is a standard
microgel synthesized with a cross-linker content of 5 mol %. This has an
architecture characterized by a more cross-linked core and a gradual decrease
of the cross-linking density and the polymer segment density towards the
periphery. Finally, dangling chains decorate the outer shells 49. This
architecture is a consequence of the fact that the cross-linker agent reacts
faster than the monomer during the precipitation polymerization 50. We
prepared two separate batches, where in one case the isopropyl group of the
monomer was deuterated to improve the contrast for neutron reflectivity (NR).
pNIPAM microgels can also be synthesized via precipitation polymerization
without addition of a cross-linker agent 51. The network is formed by self-
cross-linking of NIPAM due to transfer reactions 52. As with the standard
microgels, we use a partially deuterated monomer in which the vinyl group is
deuterated 52 to increase and vary the contrast in neutron reflectometry.
Given the absence of cross-linker agent, these ultra-low cross-linked (ULC)
microgels are ultra-soft 54, 53 and have an almost uniform, albeit very low,
internal density of polymer segments 39. Nonetheless, such particles remain
fundamentally different from linear polymers. For instance, in bulk solution,
ULC microgels were found to form colloidal crystals in clear contrast to
linear or branched chains 55, 54. Furthermore, their behavior can be tuned
between that of a polymer and that of a colloidal particle depending on the
compression of the monolayer 39. These microgels also differ from linear
polymers once adsorbed at a solid interface where their architecture is the
one of ultra-soft disks 25.
The differences in internal architecture between standard and ULC microgel
affect their compressibility and deformability. For instance, the presence of
a more cross-linked and denser core inhibits large compression in bulk 56,
whereas the poorly cross-linked network of the ULC microgels is easy to
compress in crowded solutions 57, 53. While compressibility is the key aspect
for the three-dimensional response of microgels, their deformability is
pivotal once they are confined in two dimensions, i.e. onto liquid or solid
interfaces.
The analysis of our data shows the effects of the microgel internal
architecture on their structure orthogonal to the interface. For both systems,
the protrusion in the air and the polymeric layer sitting at the interface are
independent of the temperature, T. Furthermore, simple geometrical
considerations on the density profiles combined with the in-plane microgel
radius determined by AFM, allow us to determine the apparent contact angle of
the adsorbed microgels. We show that the morphology of ULC microgels is more
similar to linear polymers and macromolecules, while standard microgels
resemble more closely hard colloids.
## 2 Results
### 2.1 Microgel structure in bulk solution
The ratio between the hydrodynamic radius in the swollen and collapsed state -
swelling ratio - is a good measurement of the softness of the microgel
network: The larger this ratio, the softer the microgel 58, 59, 60. All
microgels studied here have a comparable hydrodynamic radius at
20$\,{}^{\circ}$C, see Table 1 and Supplementary Fig. 1a. They do however
exhibit different swelling ratios, see Supplementary Fig. 1b.
Table 1: Characteristic lengths of the individual pNIPAM based microgels below
and above their VPTT.
Name | T | $R_{h}$ | $R_{\text{SANS}}$ | $R_{\text{SANS,c}}$ | 2$\sigma_{\text{SANS}}$ | 2$R_{\text{2D}}$ | 2$R_{\text{2D,c}}$ | h${}_{\text{2D}}$
---|---|---|---|---|---|---|---|---
| (°C) | (nm) | (nm) | (nm) | (nm) | (nm) | (nm) | (nm)
5 mol% D0 | 20 | 150 | 151 | 32 | 119 | 688 | 360 | 21
5 mol% D0 | 40 | 85 | 72 | 59 | 13 | 651 | 289 | 26
5 mol% D7 | 20 | 153 | 120 | 33 | 87 | - | - | -
5 mol% D7 | 40 | 72 | 62 | 57 | 5 | - | - | -
ULC D3 | 20 | 138 | 134 | 53 | 81 | 733 | - | 3
ULC D3 | 40 | 54 | 56 | 41 | 15 | 689 | - | 4
* •
Hydrodynamic radius in water, $R_{\text{h}}$, radius from SANS in D2O,
$R_{\text{SANS}}~{}=~{}R_{\text{SANS,c}}~{}+~{}$2$\sigma_{\text{SANS}}$ where
$R_{\text{SANS,c}}$ is the core radius in bulk and 2$\sigma_{\text{SANS}}$ is
the fuzziness of the shell in bulk determined by SANS. 2$R_{\text{2D}}$ is the
interfacial (dry) diameter and 2$R_{\text{2D,c}}$ is the interfacial (dry)
diameter of the core. h${}_{\text{2D}}$ is the maximum height once adsorbed
(dry). The last three quantities are determined by AFM, see Supplementary
Figs. 3 and 4. The values including the errors are given in Supplementary
Table 1.
For the hydrogenated 5 mol% cross-linked standard pNIPAM microgels, 5 mol% D0,
the swelling ratio is $1.76\pm 0.03$. For the deuterated pNIPAM microgels
synthesized with the same amount of cross-linker - 5 mol% D7 - the swelling
ratio is $2.12\pm 0.04$. Finally, the swelling ratio of the deuterated pNIPAM
ULC microgels, ULC D3, is $2.56\pm 0.05$. This confirms that the ULC microgels
are the softest, according to this parameter.
Small-angle neutron scattering (SANS) is used to determine the characteristic
lengths of the microgels, such as total radius, $R_{\text{SANS}}$, radius of
the more cross-linked core, $R_{\text{SANS,c}}$, and extension of the fuzzy
shell, 2$\sigma_{\text{SANS}}$. The values of these quantities are determined
fitting the form factors with the fuzzy-sphere model 49 and are reported in
Table 1. The data and the fits in Supplementary Figs. 2a-d confirm the
different internal architecture between standard and ULC microgels.
We note that the main effect of selective deuteration and of using deuterated solvents is to shift the VPTT of pNIPAM to a higher temperature 61, 62, 63,
64. However, at the lowest and highest temperatures measured, the microgels
are in the fully swollen and collapsed state (see Supplementary Figs. 1c and
2a-d), respectively, allowing for an appropriate comparison of the different
architectures.
### 2.2 Standard Microgels at the interface
For each monolayer of hydrogenated and deuterated microgels studied here, the
intensities of the reflected neutrons, R(Q), were recorded as a function of
momentum transfer normal to the interface, $Q$, in two isotopic contrasts: D2O
and air contrast matched water (ACMW). The latter consists of a mixture of D2O
and H2O (8.92% v/v), which matches the scattering length density (SLD) of air
($b_{\text{air}}=0\cdot 10^{-4}$ nm$^{-2}$), and therefore only the polymer
contributes to the reflected signal of the curves in Figures 1a and b. R(Q)
for the same microgels, measured in D2O as sub-phase, is plotted in the insets
of the Figures 1a and b. In this case, when a neutron beam is reflected from
air at D2O, which has a higher SLD ($b_{\text{D${}_{2}$O}}=6.36\cdot 10^{-4}$ nm$^{-2}$) or, equivalently, a lower refractive index $n=1-\lambda^{2}b/(2\pi)$ (with $\lambda$ the neutron wavelength), total reflection occurs below a critical value of the momentum transfer, $Q_{\text{c}}=0.16$ nm$^{-1}$. Above this value the reflectivity decays as $Q^{-4}$.
Figure 1: Reflectivity curves of 5 mol% cross-linked microgels at different
temperatures. a Reflectivity curves, reflectivity, R(Q), versus momentum
transfer, $Q$, of pNIPAM microgels at the air-ACMW interface and corresponding
fits. b Reflectivity curves of D7-NIPAM microgels at the air-ACMW interface
with fits. Insets: Reflectivity curves at air-D2O interfaces. The curves are
shifted in y-direction for clarity. The unshifted curves are shown in
Supplementary Figs. 6a and b. The error bars represent the statistical errors
on R(Q).
The samples studied here yielded laterally homogeneous interfaces on the
length scale of the in-plane neutron coherence length, on the order of several
microns 65. This implies that the measured SNR can be correlated with the
averaged SLD depth profile across the interface delimited by this coherence
length and, therefore, the in-situ structure of the microgels as a function of
the distance from the interface $z$ can be determined. This is done by fitting
the reflectivity curves with a model consisting of different layers
characterized by a thickness, $d$, a roughness, $\sigma$, and a SLD, $b$. The
latter contains information on the atomic density of the NIPAM molecules and,
therefore, is linked to the polymer concentration and solvation of the
different layers (see Methods section for further details). Here, we find that
a model composed of 4 layers is the most suitable to describe the density
profile of the standard pNIPAM microgels perpendicular to the plane of the
interface where the layers 1-to-4 are sandwiched between the bulk air (layer
0) and the bulk solvent (background layer).
The length and width of the slab are delimited by the illuminated area, which is roughly $10^{9}$ times the interfacial area of a single microgel.
Therefore, in contrast to microscopy-based techniques, our measurements probe
a statistically significant ensemble of microgels. We fit the R(Q)-curves of
the same sample at the same temperature for both contrasts simultaneously to
reduce the number of free parameter. The best fits are shown by the black full
lines in Figures 1a and 1b. The parameters of the fits are reported in Table
2. The use of models with a smaller number of layers cannot reproduce the
experimental data or it leads to density profile inconsistent with previous
studies 66, 6, 67, 10, 35, 68, 69, see Supplementary Figs. 7a-c.
Additionally, to verify the validity of the four slab-models, the data for the
deuterated microgels at 20$\,{}^{\circ}$C have been fitted using a continuous
variation of the SLD profile sliced into many ($>$ 1000) thin layers of 1.5 Å
thickness. As shown in the Supporting Information (Supplementary Fig. 10a and
b), the fit leads to identical results and, therefore, validates the findings
from the four slab-models used. From this discussion, it is clear that the
model employed here can reproduce the data with the due accuracy and the
lowest number of free fitting parameters.
Table 2: Parameters of the 4-layers fit for the 5% cross-linked microgels in
Figure 1.
T | Layer 1 | Layer 2 | Layer 3 | Layer 4 | Background
---|---|---|---|---|---
| $d_{\text{1}}$ | $\sigma_{\text{1}}$ | $b_{1}$ | $d_{\text{2}}$ | $\sigma_{\text{2}}$ | $b_{2}$ | $d_{\text{3}}$ | $\sigma_{\text{3}}$ | $b_{3}$ | $d_{\text{4}}$ | $\sigma_{\text{4}}$ | $b_{4}$ | $\sigma_{\text{bkg}}$ | $d_{\text{total}}$
(°C) | (nm) | (nm) | ($10^{-6}$ Å$^{-2}$) | (nm) | (nm) | ($10^{-6}$ Å$^{-2}$) | (nm) | (nm) | ($10^{-6}$ Å$^{-2}$) | (nm) | (nm) | ($10^{-6}$ Å$^{-2}$) | (nm) | (nm)
5 mol% D0 Microgels, $b_{\text{theo}}$ = 0.93 $\cdot$ $10^{-6}$ Å$^{-2}$
10 | 14 | 8 | 0.06 | 2.1 | 0.7 | 0.32 | 4.4 | 0.4 | 0.14 | 122 | 3.5 | 0.06 | 31 | 220
20 | 14 | 8 | 0.06 | 2.1 | 0.7 | 0.31 | 4.3 | 0.8 | 0.19 | 117 | 3.5 | 0.07 | 28 | 210
30 | 14 | 8 | 0.08 | 2.2 | 1.0 | 0.35 | 4.7 | 0.6 | 0.20 | 99 | 4.0 | 0.08 | 29 | 194
40 | 14 | 7 | 0.10 | 2.7 | 0.5 | 0.35 | 6.8 | 1.0 | 0.23 | 48 | 3.2 | 0.10 | 26 | 140
5 mol% D7 Microgels, $b_{\text{theo}}$ = 4.78 $\cdot$ $10^{-6}$ Å$^{-2}$
20 | 16 | 11 | 0.1 | 2.3 | 0.5 | 1.58 | 3.0 | 0.2 | 0.49 | 136 | 3.4 | 0.21 | 33 | 245
40 | 16 | 8 | 0.2 | 2.6 | 0.2 | 1.73 | 4.7 | 0.3 | 0.62 | 66 | 2.6 | 0.26 | 27 | 160
* •
$d_{\text{i}}$ is the thickness of a layer with the scattering length density
$b_{i}$. $\sigma_{i}$ is the roughness between a layer and the layer above it.
$d_{\text{total}}$ is the total film thickness and $\sigma_{\text{bkg}}$ is
the roughness between the last layer and the background. The uncertainties
from the fits are given as errors in Supplementary Table 2.
Figures 2a and b show the polymer fraction normal to the interface
(z-distance) of the hydrogenated (5 mol% D0) and deuterated (5 mol% D7)
microgels, respectively. These curves are calculated from the SLD profiles
obtained from the fits and shown in Supplementary Figs. 8a and b.
We note that the extension of the dangling, highly hydrated polymeric chains at the periphery of the swollen microgels is accounted for by the roughness between the last layer and the background, i.e. it equals $2\sigma_{\text{bkg}}$.
The profiles of the polymer fraction normal to the interface show that the
microgels deswell in the vertical direction with increasing temperature. The
total film thickness
$d_{\text{total}}=d_{\text{1}}+...+d_{\text{N}}+2\sigma_{\text{1}}+2\sigma_{\text{bkg}}$
is reported in the last column of Table 2.
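As a bookkeeping aid, a minimal Python sketch of this sum (values taken from Table 2 for the 5 mol% D0 microgels at 20$\,{}^{\circ}$C; the function name is ours):

```python
def total_thickness(d, sigma_1, sigma_bkg):
    """d_total = d_1 + ... + d_N + 2*sigma_1 + 2*sigma_bkg, all in nm."""
    return sum(d) + 2 * sigma_1 + 2 * sigma_bkg

# 5 mol% D0 microgels at 20 C (Table 2)
d = [14, 2.1, 4.3, 117]          # d_1 ... d_4 in nm
d_total = total_thickness(d, sigma_1=8, sigma_bkg=28)
d_water = d[2] + d[3] + 2 * 28   # d_3 + d_4 + 2*sigma_bkg
print(d_total, d_water)          # ~209 nm and ~177 nm, cf. 210 +/- 6 and 178 +/- 5 nm
```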
Below the VPTT, the 5 mol% D0 microgels are fully swollen and have a
$d_{\text{total}}$ between $210\pm 6$ and $220\pm 5$ nm. Once collapsed at 40$\,{}^{\circ}$C, the microgels have a thickness of
$d_{\text{total}}$ = (140 $\pm$ 5) nm. In the literature, a very similar value
of the thickness was measured for the same microgels in the swollen and
collapsed state with ellipsometry 34. Also the deuterated microgels show the
deswelling with temperature. The thickness of the monolayer in the swollen and
the deswollen state is $d_{\text{total}}=245\pm 14$ nm and
$d_{\text{total}}=160\pm 2$ nm, respectively; see Table 2.
Figure 2: Structure of 5 mol% cross-linked microgels at liquid interfaces.
Polymer fractions of the adsorbed 5 mol% D0 a and 5 mol% D7 b microgels at
different temperatures. c Density profiles of simulated microgels at different
effective temperatures, corresponding to $\alpha=0,0.5$. Horizontal and
vertical dashed lines are guidelines for the eyes and represent zero polymer
fraction/density and zero z-distance from the interface, respectively.
Negative values of z represent the air phase and positive values represent the
water phase. d Simulation snapshots showing the side perspective of an
adsorbed standard microgel for $\alpha=0,0.5$. Solvent particles are not shown
for visual clarity.
In our model, the protrusion of the microgel into the air is
$d_{p}=d_{\text{1}}+2\sigma_{\text{1}}$ and is calculated using the values
given in Table 2. For clarity, we have shifted the position of the polymer
profiles along the z-distance to have this protrusion layer at negative
distances from the interface, Figures 2a and b. The unshifted polymer fraction
profiles are shown in the Supporting Information, Supplementary Fig. 9a and b.
At 20$\,{}^{\circ}$C, the 5 mol% D0 and 5 mol% D7 microgels protrude
approximately $30\pm 2$ and $37\pm 2$ nm into the air, respectively. This
corresponds to about 10$\%$ of the diameter of the swollen microgels in
solution or 15$\%$ of their $d_{\text{total}}$. The protrusion into the air
phase does not change significantly with increasing temperature. Geisel et al.
determined a protrusion height below 70 nm for microgels of similar size. They
noted that this value is the maximum protrusion height according to
geometrical calculations from the cryoSEM images and has to be interpreted as
an upper limit 6.
The estimated values of $d_{\text{p}}$ allow us to calculate the apparent
contact angles of the microgels assuming a simple orthogonal triangle. To this
aim, we make use of the total interfacial diameter 2$R_{\text{2D}}$ of the
individual microgels determined by AFM measurements, see Table 1. The apparent
contact angle, $\theta_{\text{C,app}}=\arctan(d_{p}/R_{2D})$ is found to be
approximately 5$\,{}^{\circ}$ at 20 and 40$\,{}^{\circ}$C. Since the corona of
the microgels is expected to form a flat layer within the interfacial plane,
the interfacial diameter of the core, 2$R_{\text{2D,c}}$, can be used instead.
This results in $\theta_{\text{C,app}}$ of 9$\,{}^{\circ}$ and
11$\,{}^{\circ}$ at 20 and 40$\,{}^{\circ}$C, respectively.
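To make this estimate reproducible, a short sketch (values from Tables 1 and 2; the function name is ours):

```python
import math

def apparent_contact_angle(d_p, R):
    """theta_C,app = arctan(d_p / R), returned in degrees."""
    return math.degrees(math.atan(d_p / R))

# 5 mol% D0 microgels at 20 C: d_p = d_1 + 2*sigma_1 = 14 + 2*8 = 30 nm
d_p = 30.0
R_2D, R_2D_c = 688 / 2, 360 / 2             # nm, dry radii from AFM (Table 1)
print(apparent_contact_angle(d_p, R_2D))    # ~5 degrees, whole microgel
print(apparent_contact_angle(d_p, R_2D_c))  # ~9 degrees, core only
```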
The second region is a thin, polymer-rich layer lying at $z=0$ (Figures 2a
and b). In our model, this region is described by layer 2 in Table 2. We
assume slabs parallel to the interface and, therefore, we only determine an
average SLD which is proportional to the average polymer fraction at the
interface. Similarly to the protrusion of the microgels in air, also this
polymer-rich layer is temperature independent and has a constant volume
fraction of $\approx 0.33$, as indicated by the constant values of SLD
reported in Table 2. The high polymer content in these regions implies that
the network expelled a significant amount of solvent compared to the solvated
part in water. Therefore, we can compare the combined thickness of these two layers ($\approx 40$ nm) to the extension of the collapsed shell at high temperatures in bulk (Table 1), which is found to be much smaller. From this, we can infer that a part of the more cross-linked
core protrudes into the air, as shown in Figure 2 and in the sketch in Figure
3a-c.
Our model also reproduces the portion of a microgel in the aqueous phase, i.e.
the third region, as shown by the polymer fractions at $z>0$ in Figures 2a and
b. This portion of the microgel is described by the third and fourth layers,
and the corresponding parameters are reported in Table 2. Its extension is
calculated as $d_{\text{water}}=d_{3}+d_{4}+2\sigma_{\text{bkg}}$ and shows
the strongest reaction to a change of temperature. For the hydrogenated
microgels, $d_{\text{water}}$ goes from $178\pm 5$ to $106\pm 5$ nm when
temperature increases from 20 to 40$\,{}^{\circ}$C. A change in
$d_{\text{water}}$ from $205\pm 6$ to $125\pm 2$ nm for the same temperature
increase is determined for the 5 mol% D7 microgels. This collapse is
accompanied by an increase of the polymer fraction in the layers 3 and 4 for
both microgels as indicated by the increases in the values of $b_{i}$. We note
that both below and above the VPTT, the values of $d_{\text{water}}$ are
smaller than the hydrodynamic diameters of the swollen and collapsed microgels
in bulk, $2R_{h}$ in Table 1. This observation, combined with the large values
of the interfacial diameters, indicates a strong deformation of the adsorbed
microgels, see Figure 3a-c. On the other hand, the swelling ratio in 2D,
defined as the ratio between $d_{\text{water}}$ at 20 and 40$\,{}^{\circ}$C,
is found to be $1.68\pm 0.09$ and $1.65\pm 0.05$ for the hydrogenated and
deuterated 5 mol% cross-linked microgels, respectively. These values are
smaller than the corresponding ratios in 3D, implying that the adsorption
leads to a stiffening of the polymeric networks swollen in water, as also
found in computer simulations 37. Furthermore, given that both microgels have
the same 2D swelling ratio, the 5 mol% cross-linked standard microgels have
similar softness at the interface, whereas in bulk the deuterated ones appear
to be slightly softer.
We also note that the slight difference in polymer fraction in the water phase
between deuterated and hydrogenated microgels stems from the fact that they
have slightly different masses and molecular weights $M_{\text{w}}$. Combining
viscosimetry measurements and dynamic light scattering measurements 70, we
found that the 5 mol% D7 microgels have a mass of $6.3\pm 0.6\cdot 10^{-19}$
kg ($M_{\text{w}}=3.8\pm 0.4\cdot 10^{8}$ g mol$^{-1}$), while the 5 mol% D0 microgels have a mass of $7.7\pm 0.7\cdot 10^{-19}$ kg ($M_{\text{w}}=4.6\pm 0.4\cdot 10^{8}$ g mol$^{-1}$).
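As a consistency check on these numbers (a one-line conversion via Avogadro's number; variable names are ours):

```python
N_A = 6.022e23          # Avogadro's number, mol^-1
m_D7_g = 6.3e-19 * 1e3  # particle mass of the 5 mol% D7 microgels, in g
print(m_D7_g * N_A)     # ~3.8e8 g/mol, matching the quoted M_w
```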
The conformation of the regular microgel at the interface is in excellent
agreement with numerical simulations. In this case, microgels are synthesized
in silico through the self-assembly of patchy particles 71, 37. The resulting
polymer network is disordered and accounts for a higher concentration of
cross-linkers in the core of the particle, with a bulk density profile that
progressively rarefies in the outer corona. The microgel is embedded within
two different types of immiscible solvents, mimicking air and water, which
gives rise to a surface tension similar to experiments. In this way, the
simulated microgel spontaneously acquires the typical fried-egg-like shape.
More details on the assembly process and on the simulations at the interface
can be found in the Methods section.
In order to compare with the experimental profiles of the microgel normal to the plane of the interface, we calculate the numerical number density profiles
by dividing the simulation box into three-dimensional slabs along the
z-direction, i.e. orthogonally to the interfacial plane. In this way, we have
direct access to the polymer network, without any interference given by the
presence of the solvent. The resulting profiles are reported in Fig. 2c for
two different effective temperatures.
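A minimal numpy sketch of this slab binning (coordinates, box size, and bin count are placeholders, not the production analysis):

```python
import numpy as np

def density_profile(z, z_min, z_max, n_bins, area):
    """Monomer number density along z: histogram normalized by slab volume."""
    counts, edges = np.histogram(z, bins=n_bins, range=(z_min, z_max))
    dz = edges[1] - edges[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / (dz * area)

rng = np.random.default_rng(0)
z = rng.normal(20.0, 30.0, size=42000)   # placeholder monomer z-coordinates
centers, rho = density_profile(z, -60.0, 120.0, 90, area=200.0 * 200.0)
```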
The three regions described experimentally are also present in the numerical
profiles. At all temperatures, we detect the presence of a protrusion into the
air phase and a polymer layer lying on the interface. As shown by the
snapshots reported in Fig. 2d, the protrusion is given by the fact that the
more cross-linked core cannot fully expand, as it happens for the corona, on
the interfacial plane. In fact, the corona creates the second part of the
density profile that is characterized by a pronounced peak at the interface.
The polymer network accumulates onto the interface to reduce the surface
tension between the two fluids as much as possible. The third region of the
profile is inside the aqueous phase. As in the experiments, this region is
most affected by temperature. While at low temperatures a large portion of the
microgel protrudes significantly into the aqueous phase, at high temperatures
the microgel tends to assume a more spherical and compact shape, contracting
the polymer chains toward the interfacial plane. The consistency between
simulations and experiments also allows us to confirm the robustness of the
four layers fitting model used in experiments.
Figure 3: Sketch of the adsorbed microgels. Panels a and f show the vertical profiles of standard and ULC microgels, respectively, below and above the VPTT. Their corresponding shapes are outlined in b-e. The shapes are based on the combination of our polymer fraction profiles, simulations, and AFM measurements at the liquid-solid interface from the literature 25.
### 2.3 ULC microgels at the interface
The reflectivity curves of deuterated ULC microgels at the air-ACMW interface
are shown in Figure 4. In the inset, the measurements with pure D2O as sub-
phase are shown. In contrast to standard microgels, a three-layer model can
successfully fit the data (solid lines in Figure 4). The fit parameters are
obtained by fitting neutron reflectivity (NR) curves of the same sample at the
same temperature simultaneously for both contrasts. Their values are reported
in Table 3.
Once more, we checked the validity of the three-layer model by comparing the
results from a fit with a model consisting of a continuous variation of the
SLD with many thin layers. In the Supplementary Information, it is shown that
the results from the two models are identical (Supplementary Figs. 10c and d).
This further demonstrates that a slab model including a Gaussian error
function can successfully reproduce the experimental NR data of ULC microgels
with the smallest number of free parameters.
Figure 4: Reflectivity curves of ULC D3 microgels at different temperatures.
Reflectivity, R(Q), versus momentum transfer, $Q$, at the air-ACMW interface.
The fits are shown by continuous lines. Inset: Reflectivity curves at air-D2O
interfaces. The curves are shifted in y-direction for clarity. The unshifted
curves are shown in Supplementary Figs. 6c. The error bars represent the
statistical errors on R(Q).
The structure of the deuterated ULC microgels as a function of the distance to
the interface is described by the shifted and unshifted polymer fraction
profiles in Figure 5a and Supplementary Fig. 9c, respectively. At
$20~{}^{\circ}$C, the length of the protrusion of ULC microgels into air is
$d_{p}=8\pm 3$ nm. This is less than 3% of the ULC swollen diameter in
solution and approximately 5% of the total thickness of the ULC,
$d_{\text{total}}=157\pm 7$ nm, see Table 3. Similarly to standard microgels,
the ULC protrusion into air does not change once temperature rises above the
VPTT. Another similarity with the standard microgels is the presence of a dense polymer layer of $\approx 3$ nm sitting on the interface. Adding the
length of the protrusion in air, $d_{\text{p}}$, to this extension, we obtain
$\approx 11-15$ nm which is consistent with the extension of the collapsed
fuzzy shell measured by SANS for the D3-ULC microgels, see Table 1. This
indicates that, in contrast to standard microgels, only the collapsed external
shell protrudes into air and lies on the interface, as shown in Figure 5 and
sketched in Figure 3d-f.
Table 3: Summary of the model fits of the reflectivity curves of the ULC D3
microgels in Figure 4.
T | Layer 1 | Layer 2 | Layer 3 | Background
---|---|---|---|---
| $d_{\text{1}}$ | $\sigma_{\text{1}}$ | $b_{1}$ | $d_{\text{2}}$ | $\sigma_{\text{2}}$ | $b_{2}$ | $d_{\text{3}}$ | $\sigma_{\text{3}}$ | $b_{3}$ | $\sigma_{\text{bkg}}$ | $d_{\text{total}}$
(°C) | (nm) | (nm) | ($10^{-6}$ Å$^{-2}$) | (nm) | (nm) | ($10^{-6}$ Å$^{-2}$) | (nm) | (nm) | ($10^{-6}$ Å$^{-2}$) | (nm) | (nm)
ULC D3 Microgels, $b_{\text{theo}}$ = 2.57 $\cdot$ $10^{-6}$ Å$^{-2}$
20 | 3 | 2 | 0.04 | 2.2 | 0.4 | 1.01 | 86 | 0.4 | 0.09 | 30 | 157
30 | 3 | 2 | 0.070 | 2.4 | 0.4 | 1.08 | 64 | 0.2 | 0.09 | 26 | 125
36 | 3 | 2 | 0.110 | 2.6 | 0.2 | 1.08 | 61 | 0.2 | 0.05 | 25 | 120
40 | 3 | 1 | 0.120 | 2.7 | 0.4 | 1.08 | 52 | 0.4 | 0.008 | 15 | 89
* •
$d_{\text{i}}$ is the thickness of a layer with the scattering length density
$b_{i}$. $\sigma_{i}$ denotes the roughness between a layer and the layer
above it. $d_{\text{total}}$ is the approximate total film thickness and $\sigma_{\text{bkg}}$ is the roughness between the last layer and the background.
The uncertainties from the fits are given as errors in Supplementary Table 3.
The third region of the ULC microgels has a lower polymer fraction (below
0.04, Fig. 5a) compared to the standard microgels (above 0.05, Figs. 2a and b)
below the VPTT. Unfortunately, due to the resolution of NR and the fact that
we average the SLD over the entire monolayer, it is not possible to finely
resolve the structure of the collapsed ULC. Above the VPTT, the polymer
fraction in the third region of the collapsed ULC is estimated from the value
of $b_{3}$ to be $\approx 0.003$. This small value might result from the
average between regions with no polymer and denser globules of collapsed
microgels around the few cross-linking points. Such globules have been
observed by AFM on re-hydrated ULC microgels adsorbed onto solid interfaces
after transferring from a Langmuir-Blodgett trough 38.
As for the regular microgels, we can use the estimated $d_{\text{p}}$ and the
2D radius of the ULC microgels to compute their apparent contact angles. The
resulting angles are negligible, $\approx$ 1$\,{}^{\circ}$, at both
temperatures. This behavior is close to what one can expect for macromolecules
adsorbed at interfaces in contrast to colloidal particles. This is consistent
with recent literature on these ultra-soft microgels. Indeed, it has been
shown that, due to their high compressibility and deformability 25, 54, these
microgels show the typical behavior of polymers. For instance, their bulk
viscosity does not diverge in proximity of the glass transition but at much
higher concentrations, indicating a high degree of deformability 72. Also at
the interface it has been shown that, depending on their concentration, they
can cover the interface uniformly as a linear polymer or create a disordered
array of individual particles as hard colloids 39.
Figure 5: Structure of ULC microgels at liquid interfaces. a Results of the
fits of the experimental data for the ULC D3 microgels. Inset: Zoom of polymer
fraction profiles. b Density profiles of simulated ultra-low cross-linked
microgels at different effective temperatures, corresponding to
$\alpha=0,0.5$. Horizontal and vertical dashed lines are guidelines for the
eyes and represent zero polymer fraction and zero z-distance from the
interface, respectively. Negative values of z represent the air phase and
positive values represent the water phase. c Simulation snapshots showing the
side perspective of an adsorbed ULC microgel for $\alpha=0,0.5$. Solvent
particles are not shown for visual clarity.
To gain more information on the adsorbed ULC microgels, we also performed
computer simulations of such a system. The corresponding density profiles and
simulation snapshots are reported in Figure 5b and 5c, respectively. At both
effective temperatures, the ULC microgels show a flat profile. The polymer
network appears to be equally distributed across the interface, with only a
slight preference for the water phase. Consistent with the experiments, no
effect of temperature change is observed for the fraction of polymer in the
air side and on the plane of the interface. Furthermore, as in the experiment,
the contact angle for the ULC microgels is virtually zero.
For standard microgels, the presence of the well-defined core generates a
noticeable dense protrusion into the aqueous phase (Fig. 2a); for the ULC
microgels, the amount of polymer in water is considerably lower (Fig. 5a). The
ULC microgels extend into the aqueous phase for
$d_{\text{water}}=d_{3}+2\sigma_{\text{bkg}}=144\pm 8$ nm at T =
20$\,{}^{\circ}$C. Furthermore, they remain thermo-responsive and their
extension in water decreases to $d_{\text{water}}=83\pm 3$ nm when temperature
changes from 20 to 40$\,{}^{\circ}$C. The 2D swelling ratio equals $1.7\pm
0.1$, a value much smaller than the corresponding 3D ratio and comparable to
the swelling ratio of the standard microgels in 2D. This implies that also ULC
microgels experience a significant stiffening of the polymeric network in
water due to their large deformation. This takes place both in the lateral
and in the vertical directions, as indicated by their large in-plane diameter
and by the fact that $d_{\text{water}}\ll 2R_{h}$, see Table 1. Furthermore,
$d_{\text{water}}$ at 40$\,{}^{\circ}$C is slightly larger than the region
with more homogeneous polymer distribution of the collapsed ULC as measured by
SANS, see Table 1. Therefore, we can assume that this region does not protrude
into the air as shown in the sketch in Figure 3d-e.
While the experimental and numerical descriptions of ULC microgels agree
regarding the microgel portion which protrudes in air and sits onto the
interface, there is a difference in what we observe in the water phase. This
is most likely generated by the presence of a few dangling chains that do not adsorb on the plane of the interface and, therefore, protrude into the aqueous
phase. The reason why this protrusion is not observed in the numerical
profiles is most likely due to the small size of the simulated microgel. In
fact, the number of monomers and the minimal percentage of cross-linkers
employed for the in silico synthesis cause the microgel to be highly extended, allowing all simulated monomers to adsorb at the interface. On the contrary, we expect that a significantly larger microgel would have enough monomers to form a complete layer at the interface, so that some chains would be
desorbed into the aqueous phase, as is the case in experiments. Nevertheless,
at present, this is computationally unfeasible due to the huge number of
particles that would be involved in an explicit solvent simulation with such a
large-sized microgel. For the same reason, an accurate quantitative comparison
between numerical and experimental density profiles is, at the moment, out of
reach.
## 3 Discussion
In this article, we used neutron reflectometry and computer simulations to
probe the structure of microgels orthogonal to the air-water interface, below
and above the VPTT. The advantage of neutron reflectometry is that it allows one to probe the structure of a statistically significant ensemble of microgels
in-situ at the interface. Using NR, we can directly measure the protrusion of
the microgels in the air and estimate how it changes with temperature.
Microscopy-based techniques such as transmission X-ray microscopy (TXM) or
cryoSEM are usually limited by the small number of observed particles, the
size of the particles, an observation direction perpendicular to the
interface, and complicated sample preparation 6, 66, 10, 8. The latter makes it
particularly difficult, for example, to observe the effect of temperature on
the swelling of microgels.
In the future, super-resolved fluorescence microscopy techniques, which in
principle can resolve sizes below 30 nm 5, could also be used at the air-water
interface to obtain complementary data. To date, however, even these
techniques are limited by the spatial resolution in the $z$-direction that is
$\approx 60$ nm 73 and by the difficulties in the analysis of the point clouds
generated by the blinking of the dyes 74, 75.
For both 5 mol% cross-linked and ultra-low cross-linked microgels, we find
that the portion of microgels protruding in air is insensitive to changes in
temperature (Figs. 3a and f). Concerning standard microgels, the more cross-
linked core is found to partially protrude in the air, leading to an estimate
of the apparent contact angle of a few degrees (Figs. 3b and c). This value is
significantly smaller than the angle estimated using cryoSEM and TXM of
microgels protruding into different n-alkanes 6, 66. The reason for this
discrepancy is probably that the cryoSEM estimates were limited either by the
smallest angle employed, which was about 30$\,{}^{\circ}$ 6, or by the size of the employed microgels 66.
In contrast, ULC microgels form a flat polymer layer that protrudes only a few
nanometers into the air, resulting in a nearly null apparent contact angle
(Figs. 3d and e). We also note that the length of such a layer is
approximately equal to the extent of its collapsed fuzzy shell (Table 1),
supporting the idea that only this part protrudes into the air. Again, since
these microgels are ultra-soft and extremely deformable, they stretch as much
as possible after adsorption at the interface to minimize the interfacial
energy. This behavior is consistent with the experiments of Richardson and co-
workers that used neutron reflectivity to probe linear pNIPAM solutions and
nanogels with a mesh-size comparable to their dimensions and, therefore,
highly stretchable at the interfaces 45, 44. Above the pNIPAM LCST, the
collapsed film protrudes about 4 nm into air 45, which is practically the same
as the protrusion height estimated here for the ULC microgels. These
observations are consistent with the fact that the adsorbed ULC microgels
behave more like linear polymers rather than rigid particles 39.
The present study can also contribute to the current debate on the role and
importance of capillary interactions for microgels adsorbed at the interface,
which seem to be significant only for large particles 76, 77. Indeed, the
strength of capillary interactions depends on the size of the particles, the
density difference between the particles and the liquid, and the contact angle
78. Therefore, our measurements reinforce the idea that for small microgels
with low contact angle, such as the one investigated here, capillary forces
are negligible.
Finally, our work is important to shed light on the collective behavior of
microgels at interfaces. The differences we highlighted in the structure may
be relevant for a more comprehensive understanding of microgels’ effective
interactions, paving the way for a better description of their 2D assembly and
for a clever design of their applications such as emulsion stabilizers.
Recent literature has also shown that the substitution between air and
alkanes, such as decane, only slightly changes the stretching of the microgels
at the interface 36. This is due to the high interfacial tension of the two
systems and the insolubility of the microgels in the alkane/oil. However, at
lower interfacial tensions, a greater reduction in the spreading of the
microgels is observed 79. Therefore, we expect that our results on the
protrusion of the microgels into the hydrophobic phase and the observed
difference between ULC and standard microgels at an alkane/(oil)-water
interface will not change qualitatively.
## 4 Methods
### 4.1 Synthesis
Standard 5 mol% D0 (SFB985_B8_SB_M000325), 5 mol% D7 (SFB985_A3_MB_M000238),
and ULC D3 (SFB985_A3_MB_M000301) microgels were synthesized by precipitation
polymerization. 34, 56, 57, 52 The main monomers for all microgels were NIPAM
(D0) or deuterated NIPAM, in which three (D3) or seven (D7) hydrogen atoms
have been exchanged by deuterium. The deuterated monomers were obtained from
Polymer Source, Canada, hydrogenated monomers were obtained from Acros
Organics, Belgium. Surfactants, sodium dodecyl sulfate (SDS) or
cetyltrimethylammonium bromide (CTAB), were added during the synthesis to
control the size polydispersity and final microgel size. Briefly, for the
three different syntheses, 5.4546 g of D0-NIPAM (5 mol% D0 microgels), or
1.5072 g of D7-NIPAM (5 mol% D7 microgels), or 1.0093 g of D3-NIPAM (ULC D3
microgels) were dissolved in 330 mL, 83 mL, and 70 mL double-distilled water,
respectively. For the 5 mol% microgels 0.3398 g (5 mol% D0) or 0.1021 g (5
mol% D7) of the cross-linker $N$,$N$’-methylenebisacrylamide (BIS) were added.
No additional cross-linker was included during the synthesis of the ULC D3
microgels. The reaction flask of the 5 mol% D0 microgels contained
additionally 0.1474 g of $N$-(3-aminopropyl) methacrylamide hydrochloride
(APMH) as co-monomer. The monomer solutions were purged with nitrogen under
stirring and heated to 65$\,{}^{\circ}$C (5 mol% D0), 70$\,{}^{\circ}$C (5
mol% D7) and 70$\,{}^{\circ}$C (ULC D3). The initiators and the surfactants
were dissolved in a few milliliters of double-distilled water in separated
vessels and degassed for at least one hour. For the deuterated 5 mol% D7 and
ULC D3 microgels 0.372 g and 0.0506 g of potassium peroxydisulfate (KPS) and
0.202 g and 0.0277 g of SDS were used, respectively. For the 5 mol% D0
microgels, 0.2253 g $2$,$2$’-Azobis-(2-methyl-propionamidin) dihydrochlorid
(V50) and 0.0334 g of CTAB were used. After adding the surfactant to the
reaction flask, the polymerization was initiated by injecting the dissolved
initiators. The reactions were carried out for 4 h at the given temperatures
and under constant nitrogen flow and stirring. The obtained microgels were
purified by threefold ultra-centrifugation and re-dispersion in fresh double-
distilled water. Lyophilization was applied for storage for all microgels.
### 4.2 Dynamic light scattering
A laser with vacuum wavelength $\lambda_{0}=633$ nm was used to probe diluted
suspensions of the different microgels in water and heavy water. The
temperature was changed from 20$\,{}^{\circ}$C to 50$\,{}^{\circ}$C in steps of 2$\,{}^{\circ}$C using a thermal bath filled with toluene to match the refractive index of the glass. The momentum transfer, $Q=(4\pi n/\lambda_{0})\sin(\theta/2)$ with $n$ the refractive index of the solvent, was changed by varying the scattering angle, $\theta$, between 30 and 130 degrees, in steps of 5 degrees.
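For orientation, the corresponding $Q$ range in water can be sketched as follows (the refractive index $n\approx 1.33$ is our assumption):

```python
import math

def q_dls(theta_deg, lambda0_nm=633.0, n=1.33):
    """Q = (4*pi*n/lambda0) * sin(theta/2), in nm^-1."""
    return 4 * math.pi * n / lambda0_nm * math.sin(math.radians(theta_deg) / 2)

print(q_dls(30), q_dls(130))   # ~0.0068 to ~0.024 nm^-1
```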
### 4.3 Small-angle neutron scattering
SANS experiments were performed at the KWS-2 instrument operated by the JCNS
at the MLZ, Garching, Germany, and at the D11 instrument at the Institut Laue-
Langevin (ILL, Grenoble, France). For KWS-2, the $q$-range of interest was
covered by using a wavelength for the neutron beam of $\lambda=0.5$ and $1$ nm
and three sample-detector distances: 20, 8 and 2 m. The detector is a 2D array of 3He tubes with a pixel size of 0.75 cm and a wavelength resolution $\Delta\lambda/\lambda=10\%$.
For D11, three configurations were used: sample-detector distance,
$d_{\text{SD}}=34\,$m with $\lambda=0.6\,$nm; $d_{\text{SD}}=8\,$m with
$\lambda=0.6\,$nm; and $d_{\text{SD}}=2\,$m with $\lambda=0.6\,$nm. Due to the
velocity selector, the resolution in $\lambda$ was 9 %. The D11 is equipped
with a 3He detector with a pixel size of $7.5$ mm.
### 4.4 Compression isotherms and depositions
Gradient Langmuir-Blodgett type deposition 33, 34, 36 from air-water
interfaces were performed to study the mechanical properties of the microgels
and microgel monolayers and visualize them ex-situ. The Langmuir-Blodgett
trough was made from polyoxymethylene (POM) and was equipped with two movable
POM barriers. For each deposition, the trough was carefully cleaned, heated to
the appropriated temperature (20 or 40$\,{}^{\circ}$C) with an external water
bath, and a fresh air-water interface was created. The surface pressure was
monitored during the depositions with an electric balance fitted with a
platinum Wilhelmy plate. The substrates were rectangular pieces of ultra-flat
silicon wafer ($\approx$ 1.1 x 6 cm, P100). The substrates were carefully
cleaned with distilled water, isopropyl alcohol and ultrasonication. They were
mounted to the dipper arm of the Langmuir-Blodgett trough with an inclination
with respect to the liquid interface of about 25 $\,{}^{\circ}$. After moving
the substrate to the starting position, the microgels were spread at the air-
water interface. For this purpose, microgels were suspended either in 50/50
vol$\%$ mixtures of water-propan-2-ol or in pure chloroform. This was done to
maximize the adsorption of the microgels to air-water interfaces and minimize
partial loss of microgels into the sub-phase. This loss is unavoidable if the
surface-active component is soluble in either phase. After equilibration for
at least 30 minutes, the substrates were lifted through the interface while
the barriers of the Langmuir-Blodgett trough compressed the interface. The
speed of the barriers ($v_{\rm barrier}=6.48$ cm2 min-1) was matched to the
speed of the dipper arm ($v_{\rm dipper}=0.15$ mm min-1). This, together with
the tilt of the substrate, allowed the microgels to be deposited on the
substrate with increasing concentration 33.
### 4.5 Atomic force microscopy
Deposited, dried microgels were imaged using a Dimension Icon atomic force
microscope with closed loop (Veeco Instruments Inc., USA, Software: Nanoscope
9.4, Bruker Co., USA) in tapping mode. The probes were OTESPA tips with a
resonance frequency of 300 kHz, a nominal spring constant of 26 N m-1 of the
cantilever and a nominal tip radius of $<$ 7 nm (Opus by Micromasch, Germany).
### 4.6 Image analysis
The open-source analysis software Gwyddion 2.54 was used to process the AFM
images. All images were leveled to remove the tilt and zero height was fixed
as the minimum z-value of the image.
Height profiles of single dried microgels were extracted through their apices
and at different angles with respect to the fast scan direction. Multiple
height profiles of one image were summarized and aligned to the apices (zero
coordinate of the x-axis) to obtain averaged microgel profiles and not to bias
the results. The profiles are presented with standard deviations as the error.
The apices and heights of microgels were computed using the Matlab function
findpeaks.
The AFM phase images were used to determine the interfacial (dry) diameter,
2$R_{\text{2D}}$, of the all microgels and the interfacial (dry) diameter of
the core, 2$R_{\text{2D,c}}$, of the standard microgels. For this, the
interfacial areas, $A_{\text{2D}}$ and $A_{\text{2D,core}}$, of at least 200
well separated, isolated, and uncompressed microgels were measured.
2$R_{\text{2D}}$ and 2$R_{\text{2D,c}}$ were calculated as 2$R_{\text{2D}}=\sqrt{4A_{\text{2D}}/\pi}$ and 2$R_{\text{2D,c}}=\sqrt{4A_{\text{2D,core}}/\pi}$, i.e. as the diameters of circles with the same areas.
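A one-line illustration of this conversion (the input area is chosen so that the output matches 2$R_{\text{2D}}$ of the 5 mol% D0 microgels in Table 1):

```python
import math

def equivalent_diameter(area):
    """Diameter of the circle with the same area: 2R = sqrt(4*A/pi)."""
    return math.sqrt(4 * area / math.pi)

print(equivalent_diameter(0.372))   # ~0.688 um for A_2D = 0.372 um^2
```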
### 4.7 Specular neutron reflectometry
Specular neutron reflectometry measurements were conducted on FIGARO, a time-
of-flight reflectometer at the Institut Laue-Langevin, Grenoble, France. Two
angles of incidence ($\theta_{\rm in}=$ 0.615 and 3.766$\,{}^{\circ}$) and a
wavelength resolution of 7% $\Delta\lambda/\lambda$ were used yielding a
momentum transfer range of 0.089 $<Q<$ 3.5 nm$^{-1}$, normal to the interface. The
wavelength of the neutron beam, $\lambda$, was 0.2 to 3 nm.
An area of $\approx$ 10 $\times$ 40 mm2 was illuminated with the neutron beam.
The reflected neutrons were detected by a two-dimensional 3He detector. The
raw time-of-flight experimental data at these two angles of incidence were
calibrated with respect to the incident wavelength distribution and the
efficiency of the detector. Using COSMOS 80, in the framework of LAMPS 81, this
yielded the resulting reflectivity profiles R(Q), where $R$ is defined as the
ratio of the intensity of the neutrons scattered at the air-water interface
over the incident intensity of the neutron beam.
SNR experiments were performed using D2O and 8.92% v/v D2O:H2O mixtures as
sub-phase. The latter is generally known as air contrast matched water (ACMW)
since its scattering length density is equal to the one of air. A
polytetrafluoroethylene (PTFE) Langmuir trough with an area of 100 cm2 and a
volume of $\approx$ 60 mL equipped with two parallel moving PTFE barriers was
used. The trough was placed inside a gas-tight box with heated sapphire or
quartz glass windows to prevent condensation. The box is placed on an active
anti-vibration stage which can be moved vertically and horizontally. Prior to
a measurement series (measurements at different temperatures), the trough was
carefully cleaned and a fresh air-water (D2O or ACMW) interface was created.
For temperature control, the trough was connected to an external water bath.
The trough was cooled down to the lowest temperature and left to equilibrate
for 30 mins. The microgels were added to the interface from solution with a
concentration of 1 mg mL$^{-1}$ in deuterated chloroform or 50/50 vol$\%$ mixtures
of water-propan-2-ol. Subsequently, the interface was compressed to $\approx$
13 mN m-1 and the first measurement was conducted. At this surface pressure
the average nearest neighbour distance between the microgels is $\approx 500$
nm as determined from AFM, see Supplementary Fig. 5. Afterwards the trough was
set to the next temperature, left to equilibrate for 30 mins, and subsequently a measurement was conducted. This was repeated until
40$\,{}^{\circ}$C was reached. A feedback loop controlled and adjusted the
surface pressure during the experiments. Surface pressures were measured with
electric balances equipped with paper Wilhelmy plates.
In the literature it is shown that the polymer fraction within a ULC microgel in bulk is much lower than for cross-linked microgels 39, 72, 54. As a
consequence, their contrast is very low both in the bulk and at the interface,
and long measurement times would be required to collect statistically reliable
data. For this reason, only deuterated ULC microgels were measured at the
interface. The substitution of 3 atoms of hydrogen with 3 atoms of deuterium
improves the contrast of the ULC microgels when both ACMW and pure D2O are
used for the water-phase.
### 4.8 Analysis and model for neutron reflectometry data
As mentioned above, SNR allowed us to determine the density profile of the
microgel monolayer in-situ along the z-direction, normal to the interface. The
measured R(Q) profile can be linked to an in-plane averaged scattering length
density (SLD) profile of the monolayer along the $z$-direction, $b(z)$, thus
giving information on a statistically significant number of microgels.
Here, SNR data modeling was performed by minimizing the difference between the
experimental and the calculated reflectivity profile using the Parratt’s
recursive formalism 82. The calculated profiles were obtained under the
assumption that the $z$-profile of the SLD can be decomposed into $N$ layers,
with an error function connecting adjacent layers. Every layer was
characterized by a constant scattering length density $b_{\rm i}$, which
depends on the volume fraction of polymer and solvent in this layer. Data
analysis was performed using constraints between layer parameters (thickness,
roughness, and degree of hydration or SLD) and simultaneous co-refinement of
data sets at two contrasts (D2O and ACMW) to reduce ambiguity in modeling with
Motofit83 in IGOR Pro (Wavemetrics). Thus, all parameters in Tables 2 and 3,
except $b_{i}$, were co-refined for the two contrasts. The model was fitted to
the data using global minimization of a least squares function $\chi^{\rm 2}$.
In each $i$-layer, the SLD and the polymer fraction $x$ follow $b_{\rm
i}=xb_{\rm pNIPAM}+(1-x)b_{\rm solvent}$, where $b_{\rm pNIPAM}$ and $b_{\rm
solvent}$ are the theoretically calculated values. The polymer fraction
distribution $x(z)$ normal to the plane of the interface for each $i$-layer was
calculated as the sum of two error functions as follows:
$x(z)=\frac{1}{2}x_{i}\left[\operatorname{erf}\left(\frac{z-d_{i}/2}{\sqrt{2}\sigma_{i}}\right)-\operatorname{erf}\left(\frac{z+d_{i}/2}{\sqrt{2}\sigma_{i+1}}\right)\right],\quad d_{i}<z<d_{i+1}$ (1)
where $d_{i}$ represents the thickness of the layer with scattering
length density $b_{\rm i}$, and $\sigma_{\rm i}$ denotes the roughness between
layer $i$ and the layer above it, $i-1$. A similar model has been successfully
used to fit NR-curves of pNIPAM nanogels 44, 46.
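For illustration, the sketch below evaluates Eq. (1) and the per-layer SLD mixing rule with NumPy/SciPy. The function and variable names are our own, and the parameter values are placeholders, not those of the actual fits; the code simply transcribes the printed formulas.

```python
import numpy as np
from scipy.special import erf

def polymer_fraction(z, x_i, d_i, sigma_i, sigma_ip1):
    """Polymer fraction profile of layer i, transcribing Eq. (1):
    two error functions smear the layer edges with roughnesses
    sigma_i (top) and sigma_{i+1} (bottom)."""
    return 0.5 * x_i * (erf((z - d_i / 2) / (np.sqrt(2) * sigma_i))
                        - erf((z + d_i / 2) / (np.sqrt(2) * sigma_ip1)))

def layer_sld(x, b_pnipam, b_solvent):
    """SLD of a layer with polymer fraction x:
    b_i = x * b_pNIPAM + (1 - x) * b_solvent."""
    return x * b_pnipam + (1 - x) * b_solvent

# Placeholder example: a 30%-polymer layer, 50 A thick, 10/12 A roughnesses.
z = np.linspace(-100, 100, 401)  # depth in Angstrom
x_z = polymer_fraction(z, x_i=0.3, d_i=50.0, sigma_i=10.0, sigma_ip1=12.0)
```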
For the regular microgels, $N$ was chosen equal to four to satisfactorily fit the
experimental curves. In contrast, good fits of the R(Q)s of monolayers of
ultra-low cross-linked microgels were obtained using three layers.
Additionally, to demonstrate that a Fresnel reflectivity calculation of a slab
model that includes a Gaussian error function connecting the layers is valid
even in our case, where the obtained roughness values are of the order of the
layer thicknesses, an alternative model based on a continuous variation of the
SLD profile was used. The SLD profiles were divided into many thin layers (1.5
Å), which preserve the same physical polymer fraction distribution. The results
are compared in the Supplementary Information, Supplementary Figs. 10a-d. In
particular, two sets of data (5 mol% D7 and ULC D3) were fitted with this
alternative method (see Supplementary Information) yielding similar results
and, therefore, validating the findings from the different slab-models used.
### 4.9 Computer simulations
**Standard and ULC microgel modeling.** Individual microgels were obtained by
self-assembling a binary mixture of patchy particles with valence two and four
71 mimicking the NIPAM monomers and the BIS cross-linkers, respectively. The
assembly was carried out through the oxDNA simulation package 84. Standard
microgels were created from a total number of monomers $N$ $\approx$ 42000
within a sphere with the radius $Z=100\sigma_{\rm m}$, where $\sigma_{\rm m}$
is the unit of length in simulations. The cross-linkers, whose concentration
was set to be the $5\%$ of the total number of monomers, experienced an
additional designing force during the assembly so that they were more densely
distributed in the center of the particle. The effect of this additional force
has been extensively studied in previous works 85. For ultra-low-cross-linked
(ULC) microgels, we used $N\approx 21000$ and a sphere with $Z=55.5\sigma_{\rm
m}$, as determined from the comparison of the form factors in bulk. In this
case, the number of cross-linkers was set to $0.3\%$ of the total number of
monomers. In both standard and ULC microgels, the assembly was carried out
until $>99.9\%$ of the possible bonds in the network were formed.
At this stage, reversible patchy interactions were made permanent by allowing
the microgel beads to interact via the Kremer-Grest model 86, according to
which all beads interact via the Weeks-Chandler-Andersen (WCA) potential:
$V_{\rm WCA}(r)=\begin{cases}4\epsilon\left[\left(\frac{\sigma_{m}}{r}\right)^{12}-\left(\frac{\sigma_{m}}{r}\right)^{6}\right]+\epsilon&\text{if }r\leq 2^{1/6}\sigma_{m}\\0&\text{otherwise,}\end{cases}$ (2)
where $\epsilon$ sets the energy scale and $r$ is the distance between two
particles. Connected beads also interacted via the Finitely Extensible
Nonlinear Elastic (FENE) potential,
$V_{\rm FENE}(r)=-\epsilon k_{F}R_{0}^{2}\ln\left[1-\left(\frac{r}{R_{0}\sigma_{m}}\right)^{2}\right]\quad\text{if }r<R_{0}\sigma_{m},$ (3)
where $k_{\rm F}=15$ determines the stiffness of the bond and $R_{\rm
0}=1.5$ is the maximum bond distance.
To account for the responsivity of the microgel at different temperatures,
monomers also interact via an additional potential
$V_{\alpha}(r)=\begin{cases}-\epsilon\alpha&\text{if }r\leq 2^{1/6}\sigma_{m}\\\frac{1}{2}\alpha\epsilon\left\{\cos\left[\gamma\left(\frac{r}{\sigma_{m}}\right)^{2}+\beta\right]-1\right\}&\text{if }2^{1/6}\sigma_{m}<r\leq R_{0}\sigma_{m}\\0&\text{if }r>R_{0}\sigma_{m}\end{cases}$ (4)
with $\gamma=\pi\left(\frac{9}{4}-2^{1/3}\right)^{-1}$ and
$\beta=2\pi-\frac{9}{4}\gamma$ 87. $V_{\rm\alpha}$ introduces an effective
attraction among polymer beads, modulated by the parameter $\alpha$, whose
increase mimics the collapse of the microgel observed at high
temperatures.
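As a cross-check of Eqs. (2)-(4), the following NumPy sketch transcribes the three bead potentials in reduced units ($\epsilon=\sigma_m=1$); it is an illustration of the interaction model, not the production simulation code.

```python
import numpy as np

def v_wca(r, eps=1.0, sigma=1.0):
    """WCA potential, Eq. (2): purely repulsive Lennard-Jones core."""
    rc = 2.0 ** (1.0 / 6.0) * sigma
    v = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6) + eps
    return np.where(r <= rc, v, 0.0)

def v_fene(r, eps=1.0, sigma=1.0, k_f=15.0, r0=1.5):
    """FENE bond potential, Eq. (3); valid for r < R0*sigma,
    diverging at the maximum bond extension."""
    return -eps * k_f * r0 ** 2 * np.log(1.0 - (r / (r0 * sigma)) ** 2)

def v_alpha(r, alpha, eps=1.0, sigma=1.0, r0=1.5):
    """Solvophobic attraction, Eq. (4); increasing alpha mimics
    raising the temperature."""
    gamma = np.pi / (9.0 / 4.0 - 2.0 ** (1.0 / 3.0))
    beta = 2.0 * np.pi - (9.0 / 4.0) * gamma
    rc = 2.0 ** (1.0 / 6.0) * sigma
    mid = 0.5 * alpha * eps * (np.cos(gamma * (r / sigma) ** 2 + beta) - 1.0)
    v = np.where(r <= rc, -eps * alpha, mid)
    return np.where(r > r0 * sigma, 0.0, v)
```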
**Behavior at the interface.** To investigate the behavior of a microgel adsorbed
at an interface, we reproduced the effects of the surface tension by placing a
microgel between two fluids. Such fluids were modeled with soft beads within
the dissipative particle dynamics (DPD) framework 88, 89. The total
interaction force among beads is $\vec{F}_{\rm ij}=\vec{F}^{C}_{\rm
ij}+\vec{F}^{D}_{\rm ij}+\vec{F}^{R}_{\rm ij}$, where:
$\vec{F}^{C}_{ij}=a_{ij}\,w(r_{ij})\,\hat{r}_{ij},$ (5)
$\vec{F}^{D}_{ij}=-\gamma\,w^{2}(r_{ij})\,(\vec{v}_{ij}\cdot\vec{r}_{ij})\,\hat{r}_{ij},$ (6)
$\vec{F}^{R}_{ij}=2\gamma\frac{k_{B}T}{m}\,w(r_{ij})\,\frac{\theta}{\sqrt{\Delta t}}\,\hat{r}_{ij},$ (7)
where $\vec{F}^{C}_{ij}$ is a conservative repulsive force, with $w(r_{\rm
ij})=1-r_{\rm ij}/r_{\rm c}$ for $r_{\rm ij}<r_{c}$ and $0$ elsewhere,
$\vec{F}^{D}_{\rm ij}$ and $\vec{F}^{R}_{\rm ij}$ are a dissipative and a
random contribution of the DPD, respectively; $a_{\rm ij}$ quantifies the
repulsion between two particles, $\gamma=2.0$ is a friction coefficient,
$\theta$ is a Gaussian random variable with zero average and unit variance,
and $\Delta t=0.002$ is the integration time-step. Following previous works
10, 37, we chose $a_{\rm 11}=a_{\rm 22}=8.8$ and $a_{\rm 12}=31.1$ for the
interactions within and between fluid 1 and fluid 2. Instead, for the monomer-solvent
interactions we chose $a_{\rm m1}=4.5$ and $a_{\rm m2}=5.0$. In this way, we
made fluid 1 the preferred phase for the microgel particle. The cut-off radius
was always set to be $r_{\rm c}=1.9\sigma_{m}$ and the reduced solvent density
$\rho_{\rm DPD}=4.5$. In this way, the total number of particles was about
$2.6\times 10^{6}$ for simulating standard microgels and $\approx 5.3\times
10^{6}$ for ULC microgels. The reduced temperature of the system $T^{*}$ was
fixed to $1$ via the DPD thermostat. We note that by adjusting $V_{\rm\alpha}$
to reproduce the effect of temperature on the microgel, we did not change the
features of the interface, which remain defined by the DPD parameters listed
above. Simulations were performed with the LAMMPS simulation package 90.
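For concreteness, a minimal NumPy transcription of the pairwise DPD force of Eqs. (5)-(7) is given below, using the parameter values quoted above. It mirrors the printed expressions (including the $\vec{v}_{ij}\cdot\vec{r}_{ij}$ projection) and is a sketch, not the LAMMPS implementation.

```python
import numpy as np

def dpd_pair_force(r_vec, v_vec, a_ij, gamma=2.0, kT=1.0, m=1.0,
                   dt=0.002, r_c=1.9, rng=None):
    """Total DPD force on bead i from bead j, Eqs. (5)-(7).
    r_vec = r_i - r_j, v_vec = v_i - v_j (3-component arrays)."""
    if rng is None:
        rng = np.random.default_rng()
    r = np.linalg.norm(r_vec)
    if r >= r_c:                      # w(r) = 0 beyond the cut-off
        return np.zeros(3)
    r_hat = r_vec / r
    w = 1.0 - r / r_c
    f_c = a_ij * w * r_hat                                   # conservative, Eq. (5)
    f_d = -gamma * w**2 * np.dot(v_vec, r_vec) * r_hat       # dissipative, Eq. (6)
    theta = rng.standard_normal()                            # zero mean, unit variance
    f_r = 2.0 * gamma * (kT / m) * w * theta / np.sqrt(dt) * r_hat  # random, Eq. (7)
    return f_c + f_d + f_r
```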
## 5 Data availability
Raw data were generated at the Institut Laue-Langevin (ILL, Grenoble, France)
using the Fluid Interfaces Grazing Angles Reflectometer (FIGARO). The NR raw
data used in this study are available in the ILL Data Portal database under
accession codes 10.5291/ILL-DATA.9-11-1871 (ref. 91) and 10.5291/ILL-DATA.EASY-462 (ref. 92).
The raw data, associated data, and derived data supporting the results of this
study have been deposited in the RADAR4Chem database under DOI 10.22000/603 (ref. 93)
or are available from the corresponding author at the link
http://hdl.handle.net/21.11102/b0e200f4-d196-44bd-874a-2f5f79d22527.
## References
* 1 Van Der Scheer, P., Van De Laar, T., Van Der Gucht, J., Vlassopoulos, D. & Sprakel, J. Fragility and strength in nanoparticle glasses. ACS Nano 11, 6755-6763 (2017).
* 2 Keidel, R., Ghavami, A., Lugo, D., Lotze, G., Virtanen, O., Beumers, P., Pedersen, J., Bardow, A., Winkler, R. & Richtering, W. Time-resolved structural evolution during the collapse of responsive hydrogels: The microgel-to-particle transition. Science Advances 4, eaao7086 (2018).
* 3 Brijitta, J. & Schurtenberger, P. Responsive hydrogel colloids: Structure, interactions, phase behavior, and equilibrium and nonequilibrium transitions of microgel dispersions. Current Opinion In Colloid & Interface Science 40, 87-103 (2019).
* 4 Karg, M., Pich, A., Hellweg, T., Hoare, T., Lyon, L., Crassous, J., Suzuki, D., Gumerov, R., Schneider, S., Potemkin, I. & Richtering, W. Nanogels and microgels: From model colloids to applications, recent developments, and future trends. Langmuir 35, 6231-6255 (2019).
* 5 Scheffold, F. Pathways and challenges towards a complete characterization of microgels. Nature Communications 11, 1-13 (2020).
* 6 Geisel, K., Isa, L. & Richtering, W. Unraveling the 3D localization and deformation of responsive microgels at oil/water interfaces: A step forward in understanding soft emulsion stabilizers. Langmuir 28, 15770-15776 (2012).
* 7 Rey, M., Fernandez-Rodriguez, M., Karg, M., Isa, L. & Vogel, N. Poly-N-isopropylacrylamide nanogels and microgels at fluid interfaces. Accounts Of Chemical Research 53, 414-424 (2020).
* 8 Destribats, M., Lapeyre, V., Wolfs, M., Sellier, E., Leal-Calderon, F., Ravaine, V. & Schmitt, V. Soft microgels as Pickering emulsion stabilisers: Role of particle deformability. Soft Matter 7, 7689-7698 (2011).
* 9 Fernandez-Rodriguez, M., Martin-Molina, A. & Maldonado-Valderrama, J. Microgels at interfaces, from mickering emulsions to flat interfaces and back. Advances In Colloid And Interface Science , 102350 (2020).
* 10 Camerin, F., Fernández-Rodriguez, M., Rovigatti, L., Antonopoulou, M., Gnan, N., Ninarello, A., Isa, L. & Zaccarelli, E. Microgels adsorbed at liquid-liquid interfaces: A joint numerical and experimental study. ACS Nano 13, 4548-4559 (2019).
* 11 Minato, H., Murai, M., Watanabe, T., Matsui, S., Takizawa, M., Kureha, T. & Suzuki, D. The deformation of hydrogel microspheres at the air/water interface. Chemical Communications 54, 932-935 (2018).
* 12 Cox, J., Yu, K., Constantine, B., Eisenberg, A. & Lennox, R. Polystyrene- poly (ethylene oxide) diblock copolymers form well-defined surface aggregates at the air/water interface. Langmuir 15, 7714-7718 (1999).
* 13 Cox, J., Bruce Lennox, R. & Others Compression of polystyrene–poly (ethylene oxide) surface aggregates at the air/water interface. Physical Chemistry Chemical Physics 1, 4417-4421 (1999).
* 14 Zhang, J. & Pelton, R. Poly (N-isopropylacrylamide) microgels at the air- water interface. Langmuir 15, 8032-8036 (1999).
* 15 Fameau, A., Carl, A., Saint-Jalmes, A. & Von Klitzing, R. Responsive aqueous foams. ChemPhysChem 16, 66-75 (2015).
* 16 Wu, D., Mihali, V. & Honciuc, A. pH-responsive pickering foams generated by surfactant-free soft hydrogel particles. Langmuir 35, 212-221 (2018).
* 17 Horiguchi, Y., Kawakita, H., Ohto, K. & Morisada, S. Temperature-responsive Pickering foams stabilized by poly (N-isopropylacrylamide) nanogels. Advanced Powder Technology 29, 266-272 (2018).
* 18 Ngai, T., Behrens, S. & Auweter, H. Novel emulsions stabilized by pH and temperature sensitive microgels. Chemical Communications, 331-333 (2005).
* 19 Brugger, B., Rosen, B. & Richtering, W. Microgels as stimuli-responsive stabilizers for emulsions. Langmuir 24, 12202-12208 (2008).
* 20 Fujii, S., Read, E., Binks, B. & Armes, S. Stimulus-responsive emulsifiers based on nanocomposite microgel particles. Advanced Materials 17, 1014-1018 (2005).
* 21 Bochenek, S., McNamee, C., Kappl, M., Butt, H. & Richtering, W. Interactions between a responsive microgel monolayer and a rigid colloid: from soft to hard interfaces. Physical Chemistry Chemical Physics 23, 16754-16766 (2021).
* 22 Nerapusri, V., Keddie, J., Vincent, B. & Bushnak, I. Swelling and deswelling of adsorbed microgel monolayers triggered by changes in temperature, pH, and electrolyte concentration. Langmuir 22, 5036-5041 (2006).
* 23 Schmidt, S., Zeiser, M., Hellweg, T., Duschl, C., Fery, A. & Möhwald, H. Adhesion and mechanical properties of PNIPAM microgel films and their potential use as switchable cell culture substrates. Advanced Functional Materials 20, 3235-3243 (2010).
* 24 Cors, M., Wiehemeier, L., Hertle, Y., Feoktystov, A., Cousin, F., Hellweg, T. & Oberdisse, J. Determination of internal density profiles of smart acrylamide-based microgels by small-angle neutron scattering: A multishell reverse monte carlo approach. Langmuir 34, 15403-15415 (2018).
* 25 Schulte, M., Bochenek, S., Brugnoni, M., Scotti, A., Mourran, A. & Richtering, W. Stiffness tomography of ultra-soft nanogels by atomic force microscopy. Angewandte Chemie International Edition 60, 2280-2287 (2021).
* 26 Richtering, W. Responsive emulsions stabilized by stimuli-sensitive microgels: emulsions with special non-Pickering properties. Langmuir 28, 17218-17229 (2012).
* 27 Lefroy, K., Murray, B. & Ries, M. Advances in the use of microgels as emulsion stabilisers and as a strategy for cellulose functionalisation. Cellulose 28, 647-670 (2021).
* 28 Stock, S. & Klitzing, R. Microgels at droplet interfaces of water-in-oil emulsions-challenges and progress. Current Opinion In Colloid & Interface Science, 101561 (2022).
* 29 Nguyen, B., Wang, W., Saunders, B., Benyahia, L. & Nicolai, T. pH-responsive water-in-water Pickering emulsions. Langmuir 31, 3605-3611 (2015).
* 30 Monteux, C., Marliere, C., Paris, P., Pantoustier, N., Sanson, N. & Perrin, P. Poly (N-isopropylacrylamide) microgels at the oil- water interface: Interfacial properties as a function of temperature. Langmuir 26, 13839-13846 (2010).
* 31 Faulde, M., Tonn, J. & Jupke, A. Microgels for the intensification of liquid-liquid extraction processes–feasibility and advantages. Chemical Engineering & Technology 43, 137-142 (2020).
* 32 Mehrabian, H., Snoeijer, J. & Harting, J. Desorption energy of soft particles from a fluid interface. Soft Matter 16, 8655-8666 (2020).
* 33 Rey, M., Fernández-Rodriguez, M., Steinacher, M., Scheidegger, L., Geisel, K., Richtering, W., Squires, T. & Isa, L. Isostructural solid–solid phase transition in monolayers of soft core–shell particles at fluid interfaces: structure and mechanics. Soft Matter 12, 3545-3557 (2016).
* 34 Bochenek, S., Scotti, A., Ogieglo, W., Fernández-Rodríguez, M., Schulte, M., Gumerov, R., Bushuev, N., Potemkin, I., Wessling, M., Isa, L. & Richtering, W. Effect of the 3D swelling of microgels on their 2D phase behavior at the liquid–liquid interface. Langmuir 35, 16780-16792 (2019).
* 35 Harrer, J., Rey, M., Ciarella, S., Löwen, H., Janssen, L. & Vogel, N. Stimuli-responsive behavior of PNiPAm microgels under interfacial confinement. Langmuir 35, 10512-10521 (2019).
* 36 Bochenek, S., Scotti, A. & Richtering, W. Temperature-sensitive soft microgels at interfaces: air–water versus oil–water. Soft Matter 17, 976-988 (2021).
* 37 Camerin, F., Gnan, N., Ruiz-Franco, J., Ninarello, A., Rovigatti, L. & Zaccarelli, E. Microgels at interfaces behave as 2D elastic particles featuring reentrant dynamics. Physical Review X 10, 031012 (2020).
* 38 Schulte, M., Scotti, A., Brugnoni, M., Bochenek, S., Mourran, A. & Richtering, W. Tuning the structure and properties of ultra-low cross-linked temperature-sensitive microgels at interfaces via the adsorption pathway. Langmuir 35, 14769-14781 (2019).
* 39 Scotti, A., Bochenek, S., Brugnoni, M., Fernandez-Rodriguez, M., Schulte, M., Houston, J., Gelissen, A., Potemkin, I., Isa, L. & Richtering, W. Exploring the colloid-to-polymer transition for ultra-low crosslinked microgels from three to two dimensions. Nature Communications 10, 1418 (2019).
* 40 Vialetto, J., Camerin, F., Grillo, F., Ramakrishna, S., Rovigatti, L., Zaccarelli, E. & Isa, L. Effect of internal architecture on the assembly of soft particles at fluid interfaces. ACS Nano 15, 13105-13117 (2021).
* 41 Ciarella, S., Rey, M., Harrer, J., Holstein, N., Ickler, M., Lowen, H., Vogel, N. & Janssen, L. Soft particles at liquid interfaces: From molecular particle architecture to collective phase behavior. Langmuir 37, 5364-5375 (2021).
* 42 Menath, J., Eatson, J., Brilmayer, R., Andrieu-Brunsen, A., Buzza, D. & Vogel, N. Defined core–shell particles as the key to complex interfacial self-assembly. Proceedings Of The National Academy Of Sciences 118 (2021).
* 43 Grillo, F., Fernandez-Rodriguez, M., Antonopoulou, M., Gerber, D. & Isa, L. Self-templating assembly of soft microparticles into complex tessellations. Nature 582, 219-224 (2020).
* 44 Zielińska, K., Sun, H., Campbell, R., Zarbakhsh, A. & Resmini, M. Smart nanogels at the air/water interface: structural studies by neutron reflectivity. Nanoscale 8, 4951-4960 (2016).
* 45 Richardson, R., Pelton, R., Cosgrove, T. & Zhang, J. A Neutron reflectivity study of poly (N-isopropylacrylamide) at the air- water interface with and without sodium dodecyl sulfate. Macromolecules 33, 6269-6274 (2000).
* 46 Zielińska, K., Campbell, R., Zarbakhsh, A. & Resmini, M. Adsorption versus aggregation of NIPAM nanogels: new insight into their behaviour at the air/water interface as a function of concentration. Physical Chemistry Chemical Physics 19, 17173-17179 (2017).
* 47 Mourran, A., Wu, Y., Gumerov, R., Rudov, A., Potemkin, I., Pich, A. & Möller, M. When colloidal particles become polymer coils. Langmuir 32, 723-730 (2016).
* 48 Schmidt, S., Liu, T., Rütten, S., Phan, K., Möller, M. & Richtering, W. Influence of microgel architecture and oil polarity on stabilization of emulsions by stimuli-sensitive core–shell poly (N-isopropylacrylamide-co-methacrylic acid) microgels: Mickering versus Pickering behavior? Langmuir 27, 9801-9806 (2011).
* 49 Stieger, M., Richtering, W., Pedersen, J. S. & Lindner, P. Small-angle neutron scattering study of structural changes in temperature sensitive microgel colloids. The Journal Of Chemical Physics 120, 6197-6206 (2004).
* 50 Pelton, R. & Chibante, P. Preparation of aqueous latices with N-isopropylacrylamide. Colloids And Surfaces 20, 247-256 (1986).
* 51 Gao, J. & Frisken, B. Cross-linker-free N-isopropylacrylamide gel nanospheres. Langmuir 19, 5212-5216 (2003).
* 52 Brugnoni, M., Nickel, A., Kröger, L., Scotti, A., Pich, A., Leonhard, K. & Richtering, W. Synthesis and structure of deuterated ultra-low cross-linked poly (N-isopropylacrylamide) microgels. Polymer Chemistry 10, 2397-2405 (2019).
* 53 Houston, J. E., Fruhner, L., de la Cotte, A., Rojo González, J., Petrunin, A., Gasser, U., Schweins, R., Allgaier, J., Richtering, W., Fernandez-Nieves, A. & Scotti A. Resolving the different bulk moduli within individual soft nanogels using small-angle neutron scattering. Science Advances 8, eabn6129 (2022).
* 54 Islam, M., Nguyen, R. & Lyon, L. Emergence of non-hexagonal crystal packing of deswollen and deformed ultra-soft microgels under osmotic pressure control. Macromolecular Rapid Communications 42, 2100372 (2021).
* 55 Scotti, A., Houston, J., Brugnoni, M., Schmidt, M., Schulte, M., Bochenek, S., Schweins, R., Feoktystov, A., Radulescu, A. & Richtering, W. Phase behavior of ultrasoft spheres show stable bcc lattices. Physical Review E 102, 052602 (2020).
* 56 Scotti, A. Characterization of the volume fraction of soft deformable microgels by means of small-angle neutron scattering with contrast variation. Soft Matter 17, 5548-5559 (2021)
* 57 Scotti, A., Denton, A., Brugnoni, M., Houston, J., Schweins, R., Potemkin, I. & Richtering, W. Deswelling of microgels in crowded suspensions depends on cross-Link density and architecture. Macromolecules 52, 3995-4007 (2019).
* 58 Flory, P. & Rehner, J. Statistical mechanics of cross-linked polymer networks I. rubberlike elasticity. The Journal Of Chemical Physics 11, 512-520 (1943).
* 59 Flory, P. & Rehner, J. Statistical mechanics of cross-linked polymer networks II. swelling. The Journal Of Chemical Physics 11, 521-526 (1943).
* 60 Lopez, C. & Richtering, W. Does Flory-Rehner theory quantitatively describe the swelling of thermoresponsive microgels? Soft Matter 13, 8271-8280 (2017).
* 61 Shirota, H., Kuwabara, N., Ohkawa, K. & Horie, K. Deuterium isotope effect on volume phase transition of polymer gel: Temperature dependence. The Journal Of Physical Chemistry B 103, 10400-10408 (1999).
* 62 Shirota, H. & Horie, K. Deuterium isotope effect on swelling process in aqueous polymer gels. Chemical Physics 242, 115-121 (1999).
* 63 Cors, M., Wiehemeier, L., Oberdisse, J. & Hellweg, T. Deuteration-induced volume phase transition temperature shift of PNIPMAM microgels. Polymers 11, 620 (2019).
* 64 Buratti, E., Tavagnacco, L., Zanatta, M., Chiessi, E., Buoso, S., Franco, S., Ruzicka, B., Angelini, R., Orecchini, A., Bertoldo, M. & Others The role of polymer structure on water confinement in poly (N-isopropylacrylamide) dispersions. Journal Of Molecular Liquids 355, 118924 (2022).
* 65 Maestro, A. & Gutfreund, P. In situ determination of the structure and composition of Langmuir monolayers at the air/water interface by neutron and X-ray reflectivity and ellipsometry. Advances In Colloid And Interface Science 293, 102434 (2021).
* 66 Geisel, K., Henzler, K., Guttmann, P. & Richtering, W. New insight into microgel-stabilized emulsions using transmission x-ray microscopy: nonuniform deformation and arrangement of microgels at liquid interfaces. Langmuir 31, 83-89 (2014).
* 67 Destribats, M., Lapeyre, V., Sellier, E., Leal-Calderon, F., Schmitt, V. & Ravaine, V. Water-in-oil emulsions stabilized by water-dispersible poly (N-isopropylacrylamide) microgels: Understanding anti-Finkle behavior. Langmuir 27, 14096-14107 (2011).
* 68 Geisel, K., Rudov, A., Potemkin, I. & Richtering, W. Hollow and core–shell microgels at oil–water interfaces: Spreading of soft particles reduces the compressibility of the monolayer. Langmuir 31, 13145-13154 (2015).
* 69 Kleinschmidt, D., Nothdurft, K., Anakhov, M., Meyer, A., Mork, M., Gumerov, R., Potemkin, I., Richtering, W. & Pich, A. Microgel organocatalysts: Modulation of reaction rates at liquid–liquid interfaces. Materials Advances 1, 2983-2993 (2020).
* 70 Romeo, G., Imperiali, L., Kim, J., Fernández-Nieves, A. & Weitz, D. Origin of de-swelling and dynamics of dense ionic microgel suspensions. The Journal Of Chemical Physics 136, 124905 (2012).
* 71 Gnan, N., Rovigatti, L., Bergman, M. & Zaccarelli, E. In silico synthesis of microgel particles. Macromolecules 50, 8777-8786 (2017).
* 72 Scotti, A., Brugnoni, M., Lopez, C., Bochenek, S., Crassous, J. & Richtering, W. Flow properties reveal the particle-to-polymer transition of ultra-low crosslinked microgels. Soft Matter 16, 668-678 (2020).
* 73 Conley, G., Nöjd, S., Braibanti, M., Schurtenberger, P. & Scheffold, F. Superresolution microscopy of the volume phase transition of pNIPAM microgels. Colloids And Surfaces A: Physicochemical And Engineering Aspects 499, 18-23 (2016).
* 74 Andronov, L., Orlov, I., Lutz, Y., Vonesch, J. & Klaholz, B. ClusterViSu, a method for clustering of protein complexes by Voronoi tessellation in super-resolution microscopy. Scientific Reports 6, 1-9 (2016).
* 75 Rubin-Delanchy, P., Burn, G., Griffié, J., Williamson, D., Heard, N., Cope, A. & Owen, D. Bayesian cluster identification in single-molecule localization microscopy data. Nature Methods 12, 1072-1076 (2015).
* 76 Huang, S., Gawlitza, K., Klitzing, R., Gilson, L., Nowak, J., Odenbach, S., Steffen, W. & Auernhammer, G. Microgels at the water/oil interface: In situ observation of structural aging and two-dimensional magnetic bead Microrheology. Langmuir 32, 712-722 (2016).
* 77 Fernandez-Rodriguez, M., Antonopoulou, M. & Isa, L. Near-zero surface pressure assembly of rectangular lattices of microgels at fluid interfaces for colloidal lithography. Soft Matter 17, 335-340 (2021).
* 78 Kralchevsky, P. & Nagayama, K. Capillary interactions between particles bound to interfaces, liquid films and biomembranes. Advances In Colloid And Interface Science 85, 145-192 (2000).
* 79 Vialetto, J., Nussbaum, N., Bergfreund, J., Fischer, P. & Isa, L. Influence of the interfacial tension on the microstructural and mechanical properties of microgels at fluid interfaces. Journal Of Colloid And Interface Science 608, 2584-2592 (2022).
* 80 Gutfreund, P., Saerbeck, T., Gonzalez, M., Pellegrini, E., Laver, M., Dewhurst, C. & Cubitt, R. Towards generalized data reduction on a chopper-based time-of-flight neutron reflectometer. Journal Of Applied Crystallography 51, 606-615 (2018).
* 81 Richard, D., Ferrand, M. & Kearley, G. Lamp, the large array manipulation program. J. Neutron Res 4, 33-39 (1996).
* 82 Parratt, L. Surface studies of solids by total reflection of X-Rays. Phys. Rev. 95, 359-369 (1954).
* 83 Nelson, A. Co-refinement of multiple-contrast neutron/X-ray reflectivity data using MOTOFIT. Journal Of Applied Crystallography 39, 273-276 (2006).
* 84 Rovigatti, L., Šulc, P., Reguly, I. & Romano, F. A comparison between parallelization approaches in molecular dynamics simulations on GPUs. Journal Of Computational Chemistry 36, 1-8 (2015).
* 85 Ninarello, A., Crassous, J., Paloli, D., Camerin, F., Gnan, N., Rovigatti, L., Schurtenberger, P. & Zaccarelli, E. Modeling microgels with a controlled structure across the volume phase transition. Macromolecules 52, 7584-7592 (2019).
* 86 Kremer, K. & Grest, G. Dynamics of entangled linear polymer melts: A molecular-dynamics simulation. The Journal Of Chemical Physics 92, 5057-5086 (1990).
* 87 Soddemann, T., Dünweg, B. & Kremer, K. A generic computer model for amphiphilic systems. The European Physical Journal E 6, 409-419 (2001).
* 88 Groot, R. & Warren, P. Dissipative particle dynamics: Bridging the gap between atomistic and mesoscopic simulation. J. Chem. Phys. 107, 4423-4435 (1997).
* 89 Camerin, F., Gnan, N., Rovigatti, L. & Zaccarelli, E. Modelling realistic microgels in an explicit solvent. Scientific Reports 8, 1-12 (2018).
* 90 Plimpton, S. Fast parallel algorithms for short-range molecular dynamics. Journal Of Computational Physics 117, 1-19 (1995).
* 91 Bochenek, S., Gutfreund, P., Lima, L., Maestro, A., Richtering, W., Schmidt, M. M., Scotti, A. Thermo-responsive microgels at air-water interfaces: specular neutron reflectometry study to obtain out-of-plane density profiles. Institut Laue-Langevin Data Portal (ILL Data Portal), doi:10.5291/ILL-DATA.9-11-1871 (2018).
* 92 Bochenek, S., Maestro, A., Scotti, A. Deuterated microgels at the liquid-air interface: Effect of crosslinking and temperature. (Continuation of experiment 9-11-1871). Institut Laue-Langevin Data Portal (ILL Data Portal), doi:10.5291/ILL-DATA.EASY-462 (2019).
* 93 Bochenek, S., Camerin, F., Zaccarelli, E., Maestro, A., Schmidt, M. M., Richtering, W., Scotti, A. Dataset: In-situ study of the impact of temperature and architecture on the interfacial structure of thermo-responsive microgels. RADAR4Chem, doi:10.22000/603 (2022).
## 6 Acknowledgements
The authors thank Yuri Gerelli for valuable discussions and Monia Brugnoni for
the synthesis of the deuterated microgels. SB, MMS, WR and AS acknowledge funding
from the Deutsche Forschungsgemeinschaft within SFB 985 "Functional Microgels
and Microgel Systems", projects A3 and B8. FC and EZ acknowledge financial
support from the European Research Council (Consolidator Grant 681597, MIMIC).
This work is based upon NR experiments performed at the Institut Laue-
Langevin (ILL, Grenoble, France) using the Fluid Interfaces Grazing Angles
Reflectometer (FIGARO). This work is partially based on SANS experiments
performed at the D11 instrument at the Institut Laue-Langevin (ILL), Grenoble,
France and at the KWS-2 instrument operated by JCNS at the Heinz Maier-
Leibnitz Zentrum (MLZ), Garching, Germany.
## 7 Author contributions statement
W.R., A.S., E.Z., F.C., and S.B. designed this study. A.S., M.S., A.M., and
S.B. performed the NR measurements. A.S., A.M. and S.B. designed the model for
the NR data. A.M. and S.B. analyzed the NR data. S.B. synthesized and
characterized the hydrogenated microgels. S.B. performed Langmuir-Blodgett and
AFM measurements. S.B. analyzed the AFM data. F.C. performed the computer
simulations. All authors participated in discussing the results, writing,
finalizing, and revising the manuscript.
## 8 Competing interests statement
The authors declare no competing interests.
# Context-aware Adversarial Training for Name Regularity Bias in
Named Entity Recognition
Abbas Ghaddar, Philippe Langlais†, Ahmad Rashid and Mehdi Rezagholizadeh
Huawei Noah’s Ark Lab, Montreal Research Center, Canada
†RALI/DIRO, Université de Montréal, Canada
<EMAIL_ADDRESS><EMAIL_ADDRESS>
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
In this work, we examine the ability of NER models to use contextual
information when predicting the type of an ambiguous entity. We introduce NRB,
a new testbed carefully designed to diagnose Name Regularity Bias of NER
models. Our results indicate that all state-of-the-art models we tested show
such a bias, with fine-tuned BERT models significantly outperforming feature-based
(LSTM-CRF) ones on NRB, despite having comparable (sometimes lower)
performances on standard benchmarks.
To mitigate this bias, we propose a novel model-agnostic training method which
adds learnable adversarial noise to some entity mentions, thus forcing
models to focus more strongly on the contextual signal, leading to significant
gains on NRB. Combining it with two other training strategies, data
augmentation and parameter freezing, leads to further gains.
## 1 Introduction
Recent advances in language model pre-training Peters et al. (2018); Devlin et
al. (2019); Liu et al. (2019) have greatly improved the performance of many
Natural Language Understanding (NLU) tasks. Yet, several studies McCoy et al.
(2019); Clark et al. (2019); Utama et al. (2020b) revealed that state-of-the-
art NLU models often make use of surface patterns in the data that do not
generalize well. Named-Entity Recognition (NER), a downstream task that
consists in identifying textual mentions and classifying them into a
predefined set of types, is no exception.
**Gonzales, Louisiana**
"Gonzales [gold: LOC, predicted: PER] is a small city in Ascension Parish, Louisiana."
**Obama, Fukui**
"Obama [gold: LOC, predicted: PER] is located in far southwestern Fukui Prefecture."
**Patricia A. Madrid**
"Madrid [gold: PER, predicted: LOC] won her first campaign in 1978 .."
**Asda Jayanama**
"Asda [gold: PER, predicted: ORG] joined his brother, Surapong …"
Figure 1: Examples extracted from Wikipedia (title in bold) that illustrate
name regularity bias in NER. Entities of interest are shown with their gold
types and model predictions in brackets. Models employed in this study disregard
contextual information and rely instead on some signal from the named-entity
itself.
The robustness of modern NER models has received considerable attention
recently Mayhew et al. (2019, 2020); Agarwal et al. (2020a); Zeng et al.
(2020); Bernier-Colborne and Langlais (2020). Name Regularity Bias Lin et al.
(2020); Agarwal et al. (2020b); Zeng et al. (2020) in NER occurs when a model
relies on a signal coming from the entity name, and disregards evidence
within the local context. Figure 1 shows examples where state-of-the-art
models Peters et al. (2018); Akbik et al. (2018); Devlin et al. (2019) fail to
exploit contextual information. For instance, the entity Gonzales in the first
sentence of the figure is wrongly recognized as a person, while the context
clearly signals that it is a location (city).
To better highlight this issue, we propose NRB, a testbed designed to
accurately diagnose name regularity bias of NER models by harvesting natural
sentences from Wikipedia that contain challenging entities, such as those in
Figure 1. This is different from previous works that evaluate models on
artificial data obtained by either randomizing entities Lin et al. (2020) or
substituting them with ones from a pre-defined list Agarwal et al. (2020a).
NRB is compatible with any annotation scheme, and is intended to be used as an
auxiliary validation set.
We conduct experiments with the feature-based LSTM-CRF architecture Peters et
al. (2018); Akbik et al. (2018) and the BERT Devlin et al. (2019) fine-tuning
approach trained on standard benchmarks. The best LSTM-based model we tested
is able to correctly predict 38% of the entities in NRB. BERT-based models
perform much better (+37%), even if they (slightly) underperform on in-
domain development and test sets. This mismatch in performance between NRB and
standard benchmarks indicates that context awareness of models is not rewarded
by existing benchmarks, thus justifying NRB as an additional validation set.
We further propose a novel architecture-agnostic adversarial training
procedure Miyato et al. (2016) in which learnable noise vectors are added to
named-entity words, weakening their signal, thus encouraging the model to pay
more attention to contextual information. Applying it to both feature-based
LSTM-CRF and fine-tuned BERT models leads to consistent gains on NRB (+13
points) while maintaining the same level of performance on standard
benchmarks.
The remainder of the paper is organized as follows. We discuss related works
in Section 2. We describe how we built NRB in Section 3, and its use in
diagnosing named-entity bias of state-of-the-art models in Section 4. In
Section 5, we present a novel adversarial training method that we compare and
combine with two simpler ones. We further analyze these training methods in
Section 6, and conclude in Section 7.
## 2 Related Work
Robustness and out-of-distribution generalization have always been a persistent
concern in deep learning applications such as computer vision Szegedy et al.
(2013); Recht et al. (2019), speech processing Seltzer et al. (2013); Borgholt
et al. (2020), and NLU Søgaard (2013); Hendrycks and Gimpel (2017); Ghaddar
and Langlais (2017); Yaghoobzadeh et al. (2019); Hendrycks et al. (2020). One
key challenge behind this issue in NLU is the tendency of models to quickly
leverage surface form features and annotation artifacts Gururangan et al.
(2018), which is often referred to as dataset biases Dasgupta et al. (2018);
Shah et al. (2020). We discuss related works along two axes: diagnosis and
mitigation.
### 2.1 Diagnosing Bias
A growing number of studies Zellers et al. (2018); Poliak et al. (2018); Geva
et al. (2019); Utama et al. (2020b); Sanh et al. (2020) are showing that NLU
models rely heavily on spurious correlations between output labels and surface
features (e.g. keywords, lexical overlap), impacting their generalization
performance. Therefore, considerable attention has been paid to design
diagnostic benchmarks where models relying on bias would perform poorly. For
instance, HANS McCoy et al. (2019), FEVER Symmetric Schuster et al. (2019),
and PAWS Zhang et al. (2019) are benchmarks that contain counterexamples to
well-known biases in the training data of textual entailment Williams et al.
(2017), fact verification Thorne et al. (2018), and paraphrase identification
Wang et al. (2018) respectively.
Naturally, many entity names have a strong correlation with a single type
(e.g. <Gonzales, PER> or <Madrid, LOC>). Recent works have noted that over-
relying on entity name information negatively impacts NLU tasks.
Balasubramanian et al. (2020) found that substituting named-entities in
standard test sets of natural language inference, coreference resolution, and
grammar error correction has a negative impact on those tasks. In political
claims detection Padó et al. (2019), Dayanik and Padó (2020) show that claims
made by frequently occurring politicians in the training data are better
recognized than those made by less frequent ones.
Recently, Zeng et al. (2020) and Agarwal et al. (2020b) conducted two separate
analyses on the decision making mechanism of NER models. Both works found that
context tokens do contribute to system performance, but that entity names play
a major role in driving high performances. Agarwal et al. (2020a) reported a
performance drop in NER models when entities in standard test sets are
substituted with other ones pulled from pre-defined lists. Concurrently, Lin
et al. (2020) conducted an empirical analysis on the robustness of NER models
in the open domain scenario. They show that models are biased by strong entity
name regularity, and train/test overlap in standard benchmarks.
They observe a drop in performance of 34% when entity mentions are randomly
replaced by other mentions.
The aforementioned studies certainly demonstrate name regularity bias. Still,
in many cases the entity mention is the only key to infer its type, as in
"James won the league". Thus, randomly swapping entity names, as proposed by
Lin et al. (2020), typically introduces false positive examples, which
obscures observations. Furthermore, creating artificial word sequences
introduces a mismatch between the pre-training and the fine-tuning phases of
large-scale language models.
NER is also challenging because of compounding factors such as entity boundary
detection Zheng et al. (2019), rare words and emerging entities Strauss et al.
(2016), document-level context Durrett and Klein (2014), capitalization
mismatch Mayhew et al. (2019), unbalanced datasets Nguyen et al. (2020), and
domain shift Alvarado et al. (2015); Augenstein et al. (2017). It is unclear
to us how randomizing mentions in a corpus, as proposed by Lin et al. (2020),
is interfering with these factors.
NRB gathers genuine entities that appear in natural sentences extracted from
Wikipedia. Examples are selected so that entity boundaries are easy to
identify, and their types can be inferred from the local context, thus
avoiding compounding many factors responsible for lack of robustness.
### 2.2 Mitigating Bias
The prevailing approach to address dataset biases consists in adjusting the
training loss for biased examples. A number of recent studies Clark et al.
(2019); Belinkov et al. (2019); He et al. (2019); Mahabadi et al. (2020);
Utama et al. (2020a) proposed to train a shallow model that exploits manually
designed biased features. A main model is then trained in an ensemble with
this pre-trained model, in order to discourage the main model from adopting
the naive strategy of the shallow one.
Adversarial training Miyato et al. (2016) is a regularization method which has
been shown to improve not only robustness Ebrahimi et al. (2018); Bekoulis et
al. (2018), but also generalization Cheng et al. (2019); Zhu et al. (2019) in
NLU. It builds on the idea of adding adversarial examples Goodfellow et al.
(2014); Fawzi et al. (2016) to the training set, that is, small perturbations
of the data that can change the prediction of a classifier. These
perturbations for NLP tasks are done at the token embedding level and are norm
bounded. Typically, adversarial training algorithms can be defined as a minmax
optimization problem wherein the adversarial examples are generated to
maximize the loss, while the model is trained to minimize it.
Belinkov et al. (2019) used adversarial training to mitigate the hypothesis-
only bias in textual entailment models. Clark et al. (2020) adversarially
trained a low and a high capacity model in an ensemble in order to ensure that
the latter model is focusing on patterns that should generalize better.
Dayanik and Padó (2020) used an extra adversarial loss in order to encourage a
political claims detection model to learn more from samples with infrequent
politician names. Le Bras et al. (2020) proposed an adversarial technique to
filter out biased examples from training material. Models trained on the
filtered datasets show improved out-of-distribution performances on various
computer vision and NLU tasks.
Data augmentation is another strategy for enhancing robustness. It was
successfully used in Min et al. (2020) and Moosavi et al. (2020) to improve
textual entailment performances on the HANS benchmark. The former approach
proposes to append original training sentences with their corresponding
predicate-arguments triplets generated by a semantic role labelling tagger;
while the latter generates new examples by applying syntactic transformations
to the original training instances.
Zeng et al. (2020) created new examples by randomly replacing an entity by
another one of the same type that occurs in the training data. New examples
are considered valid if the type of the replaced entity is correctly predicted
by a NER model trained on the original dataset. Similarly, Dai and Adel (2020)
explored different entity substitution techniques for data augmentation
tailored to NER. Both studies conclude that data augmentation techniques based
on entity substitution improves the overall performances on low resource
biomedical NER.
Studies discussed above have the potential to mitigate name regularity bias of
NER models. Still, we are not aware of any dedicated work that shows it is so.
In this work, we propose ways of mitigating name regularity bias for NER,
including an elaborate adversarial method that forces the model to capture
more signal from the context. Our methods do not require an extra training
stage, or to manually characterize biased features. They are therefore
conceptually simpler, and can potentially be combined to any of the discussed
techniques. Furthermore, our proposed methods are effective under both low and
high resource settings.
## 3 The NRB Benchmark
NRB is a diagnosing testbed exclusively dedicated to name regularity bias in
NER. To this end, it gathers named-entities that satisfy 4 criteria:
1. Must be real-world entities within natural sentences $\rightarrow$ We select
sentences from Wikipedia articles.
2. Must be compatible with any annotation scheme $\rightarrow$ We restrict our
focus to the 3 most common types found in NER benchmarks: person, location,
and organization.
3. Boundary detection (segmentation) should not be a bottleneck $\rightarrow$ We
only select single-word entities that start with a capital letter.
4. Supporting evidence of the type must be restricted to the local context only
(a window of 2 to 4 tokens) $\rightarrow$ We developed a primitive context-only
tagger to filter out entities with no close-context signal.
Disambiguation page | Bromwich (disambiguation)
---|---
Query term | Bromwich
Wikipedia article | Kenny Bromwich
Freebase type | PER
Sentence | Round 5 of the 2013 NRL season Bromwich made his NRL debut for the Melbourne Storm
Taggers | weak supervision: ORG (confidence: 0.97); context-only: PER 0.58, ORG 0.30, LOC 0.12
Figure 2: Selection of a sentence in NRB.
The strategy used to gather examples in NRB is illustrated in Figure 2. We
first select Wikipedia articles that are listed in a disambiguation page.
Disambiguation pages group different topics that could be referred to by the
same query term
(https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Disambiguation_pages).
The query term Bromwich in Figure 2 has its own disambiguation page that
contains a link to the city of West Bromwich, West Bromwich Albion Football
Club, and Kenny Bromwich the rugby league player.
We associate each article in a disambiguation page to the entity type found in
its corresponding Freebase page Bollacker et al. (2008), considering only
articles whose Freebase type can be mapped to a person, a location, or an
organization. We assume that occurrences of the query term within the article
are of this type. This assumption was found accurate in previous works on
Wikipedia distant supervision for NER Ghaddar and Langlais (2016, 2018). The
sentence in our example is extracted from the Kenny Bromwich article, whose
Freebase type can be mapped to a person. Therefore, we assume Bromwich in this
sentence to be a person.
To decide whether a sentence containing a query term is worth being included
in NRB, we rely on two NER taggers. One is a popular NER system which provides
a confidence score to each prediction, and which acts as a weak superviser,
the other is a context-only tagger we designed specifically (see section 3.1)
to detect entities with a strong signal from their local context. A sentence
is selected if the query term is incorrectly labeled with high confidence
(score $>$ 0.85) by the former tagger, while the latter one labels it
correctly with high confidence (a gap of at least 0.25 in probability between
the first and second predicted types). This is the case of the sentence in
Figure 2 where Bromwich is incorrectly labeled as an organisation by the weak
supervision tagger, but correctly labeled as a person by the context-only
tagger.
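A compact sketch of this selection rule is given below. The tagger interfaces (`weak_tagger`, `context_tagger`) are hypothetical stand-ins for the systems described in Section 3.1; only the two thresholds come from the text.

```python
def select_for_nrb(sentence, query_term, freebase_type,
                   weak_tagger, context_tagger,
                   conf_thresh=0.85, gap_thresh=0.25):
    """Keep a sentence if the weak tagger is confidently wrong on the
    query term while the context-only tagger is confidently right."""
    weak_type, weak_conf = weak_tagger(sentence, query_term)   # e.g. ("ORG", 0.97)
    probs = context_tagger(sentence, query_term)               # {"PER": .58, "ORG": .30, ...}
    ranked = sorted(probs.items(), key=lambda kv: -kv[1])
    (top, p1), (_, p2) = ranked[0], ranked[1]
    weak_is_wrong = weak_type != freebase_type and weak_conf > conf_thresh
    context_is_right = top == freebase_type and (p1 - p2) >= gap_thresh
    return weak_is_wrong and context_is_right
```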
### 3.1 Implementation
We used the Stanford CoreNLP Manning et al. (2014) tagger as our weak
supervision tagger and developed a simple yet efficient method to build a
context-only tagger. For this, we first applied the Stanford tagger to the
entire Wikipedia dump and replaced all entity mentions identified by their
tag. Then, we train a 5-gram language model on the resulting corpus using
kenLM Heafield (2011). Figure 3 illustrates how this model is deployed as an
entity tagger: the mention is replaced by an empty slot and the language model
is queried for each type. We rank the tags using the perplexity score given by
the model to the resulting sentences, then we normalize those scores to get a
probability distribution over types.
Obama is located in far southwestern Fukui Prefecture.
$<$?$>$ is located in far southwestern Fukui Prefecture.
{LOC: 0.61, ORG: 0.28, PER: 0.11}
Figure 3: Illustration of a language model used as a context-only tagger.
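The following sketch shows how such a language model could be queried as a context-only tagger via the kenlm Python bindings; the model file name and the string-replacement details are our own simplifications of the procedure above.

```python
import kenlm  # Python bindings for the kenLM language-model toolkit

# Hypothetical 5-gram model trained on Wikipedia with entity mentions
# replaced by their tags, as described above.
lm = kenlm.Model("wiki_entity_tags.arpa")
TYPES = ["PER", "LOC", "ORG"]

def context_only_probs(sentence, mention):
    """Score the sentence with the mention replaced by each tag, then
    turn perplexities into a probability distribution over types."""
    scores = {}
    for t in TYPES:
        filled = sentence.replace(mention, t, 1)
        scores[t] = 1.0 / lm.perplexity(filled)  # lower perplexity -> higher weight
    z = sum(scores.values())
    return {t: s / z for t, s in scores.items()}
```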
We downloaded the Wikipedia dump of June 2020, which contains 30k
disambiguation pages. These pages contain links to 263k articles, where only
107k (40%) of them have a type in Freebase that can be mapped to the 3 types
of interest. The Stanford tagger identified 440k entities that match the query
term of the disambiguation pages. The thresholds discussed previously were
chosen to select around 5000 of the most challenging examples in terms of name
regularity bias. This figure aligns with the number of entities present in the
test set of the well-studied CoNLL benchmark Tjong Kim Sang and De Meulder
(2003).
We assessed the annotation quality, by asking a human to filter out noisy
examples. A sentence was removed if it contains an annotation error, or if the
type of the query term cannot be inferred from the local context. Only 1.3% of
the examples were removed, which confirms the accuracy of our automatic
procedure. NRB is composed of 5275 examples, and each sentence contains a
single annotation (see Figure 1 for examples).
Model | CoNLL Dev | CoNLL Test | CoNLL NRB | CoNLL WTS | OntoNotes Dev | OntoNotes Test | OntoNotes NRB | OntoNotes WTS
---|---|---|---|---|---|---|---|---
*Feature-based* | | | | | | | |
Flair-LSTM | - | 93.03 | 27.56 | 99.58 | - | 89.06 | 33.67 | 93.98
ELMo-LSTM | 96.69 | 92.47 | 31.65 | 98.24 | 88.31 | 89.38 | 34.34 | 94.90
BERT-LSTM | 95.94 | 91.94 | 38.34 | 98.08 | 86.12 | 87.28 | 43.07 | 92.04
*Fine-tuning* | | | | | | | |
BERT-base | 96.18 | 92.19 | 75.54 | 98.67 | 87.23 | 88.19 | 75.34 | 94.22
BERT-large | 96.90 | 92.86 | 75.55 | 98.51 | 89.26 | 89.93 | 75.41 | 95.06
Table 1: Mention-level F1 scores of models trained on CoNLL (left four columns) and OntoNotes (right four columns), evaluated on the in-domain Dev and Test sets, as well as on NRB and WTS.
### 3.2 Control Set (WTS)
In addition to NRB, we collected a set of domain control sentences — called
WTS for Witness — that contain the very same query terms selected in NRB, but
which were correctly labeled by both the Stanford (score $>$ 0.85) and the
context-only taggers. We selected examples with a small gap ($<$ 0.1) between
the first and second ranked type assigned to the query term by the latter
tagger. Thus, examples in WTS should be easy to tag. For example, because
Obama the Japanese city (see Figure 3) is selected among the query terms in
NRB, we added an instance of Obama the president.
Performing poorly on such examples (that is, failing to tag Obama
the president as a person) indicates a domain shift between NRB (Wikipedia)
and whatever dataset a model is trained on (we call it the in-domain corpus).
WTS is composed of 5192 sentences that have also been manually checked.
## 4 Diagnosing Bias
### 4.1 Data
To be comparable with state-of-the-art models, we consider two standard
benchmarks for NER: CoNLL-2003 Tjong Kim Sang and De Meulder (2003) and
OntoNotes 5.0 Pradhan et al. (2012) which include 4 and 18 types of named-
entities respectively. OntoNotes is 4 times larger than CoNLL, and both
benchmarks mainly cover the news domain. We run experiments on the official
train/dev/test splits, and report mention-level F1 scores, following previous
works. Since in NRB, there is only one entity per sentence to annotate, a
system is evaluated on its ability to correctly identify the boundaries of
this entity and its type. When we train on OntoNotes (18 types) and evaluate
on NRB (3 types), we perform type mapping using the scheme of Augenstein et
al. (2017).
### 4.2 Systems
Following Devlin et al. (2019), we term all approaches that learn the encoder
from scratch as feature-based, as opposed to the ones that fine-tune a pre-
trained model for the downstream task. We conduct experiments using 3 feature-
based and 2 fine-tuning approaches for NER:
* **Flair-LSTM**: An LSTM-CRF model that uses Flair Akbik et al. (2018)
contextualized embeddings as main features.
* **ELMo-LSTM**: The LSTM-CRF tagging model of Peters et al. (2018) that uses ELMo
contextualized embeddings at the input layer.
* **BERT-LSTM**: Similar to the previous model, but replacing ELMo by a
representation gathered from the last four layers of BERT.
* **BERT-base**: The fine-tuning approach proposed by Devlin et al. (2019) using the
BERT-base model.
* **BERT-large**: The fine-tuning approach using the BERT-large model.
We used Flair-LSTM off-the-shelf (https://github.com/flairNLP/flair) and re-
implemented the other approaches using the default settings proposed in the
respective papers. For our reimplementations, we used early stopping based on
performance on the development set, and report average performance over 5
runs. For BERT-based solutions, we adopt spanBERT Joshi et al. (2020) as a
backbone model since it was found by Li et al. (2020) to perform better on
NER.
### 4.3 Results
Table 1 shows the mention level F1 score of the systems considered. Flair-LSTM
and BERT-large are the best performing models on in-domain test sets, the
maximum gap with other models being 1.1 and 2.7 on CoNLL and OntoNotes
respectively. These figures are in line with previous works. What is more
interesting is the performance on NRB. Feature-based models do poorly, Flair-
LSTM underperforms compared to other models (F1 score of 27.6 and 33.7 when
trained on CoNLL and OntoNotes respectively). Fine-tuned BERT models clearly
perform better (around 75), but far from in-domain results (92.9 and 89.9 on
CoNLL and OntoNotes respectively). Domain shift is not a reason for those
results, since the performances on WTS are rather high (92 or higher).
Furthermore, we found that the boundary detection (segmentation) performance
on NRB is above 99.2% across all settings. Since errors made on NRB are
neither due to segmentation nor to domain shift, they must be imputed to name
regularity bias of models.
It is worth noting that BERT-LSTM outperforms ELMo-LSTM on NRB, despite
underperforming on in-domain test sets. This may be because BERT was pre-
trained on Wikipedia (same domain of NRB), while ELMo embeddings were trained
on the One Billion Word corpus Chelba et al. (2014). Also, we observe that
switching from BERT-base to BERT-large, or training on 4 times more data
(CoNLL versus OntoNotes) does not help on NRB. This suggests that name
regularity bias is neither a data nor a model capacity issue.
### 4.4 Feature-based vs. Fine-tuning
In this section, we analyze reasons for the drastic superiority of fine-tuned
models on NRB. First, the large gap between BERT-LSTM and BERT-base on NRB
suggests that this is not related to the representations being used at the
input layer.
Second, we tested several configurations of ELMo-LSTM where we scale up the
number of LSTM layers and hidden units. We observed a degradation of
performance on dev, test and NRB sets, mostly due to over-parameterized
models. We also trained 9-, 6- and 4-layer BERT-base models (using early
exit Xin et al. (2020) at the $k^{th}$ layer), and still noticed a large
advantage of BERT models on NRB: the 4-layer model has 53M parameters and
reaches 52% on NRB. This suggests that the higher capacity of BERT alone can
not explain all the gains.
Third, since by design, evidences on the entity type in NRB reside within the
local context, it is unlikely that gains on this set come from the ability of
Transformers Vaswani et al. (2017) to better handle long dependencies than
LSTM Hochreiter and Schmidhuber (1997). To further validate this statement, we
fine-tuned BERT models with randomly initialized weights, except the embedding
layer. We noticed that this time, the performances on NRB fall into the same
range of those of feature-based models, and a drastic decrease (12-15%) on
standard benchmarks. These observations are in keeping with results from
Hendrycks et al. (2020) on the out-of-distribution robustness of fine-tuning
pre-trained transformers, and also confirms observations made by Agarwal et
al. (2020b).
From these analyses, we conclude that the Masked Language Model (MLM)
objective Devlin et al. (2019) that the BERT models were pre-trained with is a
key factor driving superior performances of the fine-tuned models on NRB. In
most cases, the target word is masked or randomly selected, therefore the
model must rely on the context to predict the correct target, which is what a
model should do to correctly predict the type of entities in NRB. We think
that in fine-tuning, training for a few epochs with a small learning rate,
helps the model to preserve the contextual behaviour induced by the MLM
objective.
Nevertheless, fine-tuned models recording at best an F1 score of 75.6 on NRB
do show some name regularity bias, and fail to capture useful local contextual
information.
Figure 4: Illustration of our adversarial method applied on the entity New
York. First, we generate a noisy type (PER), and then add a learnable noise
embedding (LOC$\rightarrow$PER) to the input representation of that entity.
This will make entity patterns (hashed rectangles) unreliable for the model,
hence forcing it to collect evidence (dotted arrow) from the context. The
noise embedding matrix and the noise label projection layer weights (dotted
rectangle) are trained independently from the model parameters.
## 5 Mitigating Bias
In this section, we investigate training procedures that are designed to
enhance the contextual awareness of a model, leading to a better performance
on NRB without impacting in-domain performance. These training procedures are
not supposed to use any external data. In fact, NRB is only used as a
diagnosing corpus, once the model is trained. We propose 3 training procedures
that can be combined, two of them are architecture-agnostic, and one is
specific to fine-tuning BERT.
### 5.1 Entity Masking
Inspired by the masking strategy applied during the pre-training phase of
BERT, we propose a data augmentation approach that introduces a special [MASK]
token in some of the training examples. Specifically, we search for entities
in the training material that are preceded or followed by 3 non-entity words.
This criterion applies to 35% and 39% of entities in the training data of
CoNLL and OntoNotes respectively. For each such entity, we create a new
training example (new sentence) by replacing the entity by [MASK], thus
forcing the model to infer the type of masked tokens from the context. We call
this procedure mask.
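A possible implementation of this augmentation is sketched below; `entity_spans` is a hypothetical helper that extracts (start, end, type) spans from the label sequence, and the 3-word window matches the criterion above.

```python
MASK = "[MASK]"

def mask_augment(tokens, labels, window=3):
    """Create one extra training example per entity that is preceded or
    followed by `window` non-entity (O) words, replacing it by [MASK]."""
    new_examples = []
    for start, end, _ in entity_spans(labels):  # hypothetical helper
        left_ok = start >= window and all(
            l == "O" for l in labels[start - window:start])
        right_ok = end + window <= len(labels) and all(
            l == "O" for l in labels[end:end + window])
        if left_ok or right_ok:
            # Collapse the entity into a single [MASK] token carrying
            # the label of the entity's first token.
            masked_tokens = tokens[:start] + [MASK] + tokens[end:]
            masked_labels = labels[:start] + [labels[start]] + labels[end:]
            new_examples.append((masked_tokens, masked_labels))
    return new_examples
```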
### 5.2 Parameter Freezing
Another simple strategy, specific to fine-tuning BERT, consists of freezing
part of the network. More precisely, we freeze the bottom half of BERT,
including the embedding layer. The intuition is to preserve part of the
predicting-by-context mechanism that BERT acquired during the pre-training
phase. This training procedure is expected to reinforce the contextual ability
of the model, thus adding to our analysis of the critical role of the MLM
objective in pre-training BERT. We name this method freeze.
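A minimal sketch of freeze, assuming a HuggingFace `transformers`-style model whose encoder exposes `bert.embeddings` and `bert.encoder.layer`; exact attribute names vary across implementations:

```python
def freeze_bottom_half(model):
    """Freeze the embedding layer and the bottom half of the encoder stack."""
    num_layers = len(model.bert.encoder.layer)  # 24 for BERT-large
    for param in model.bert.embeddings.parameters():
        param.requires_grad = False
    for layer in model.bert.encoder.layer[: num_layers // 2]:
        for param in layer.parameters():
            param.requires_grad = False
```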
### 5.3 Adversarial Noise
We propose an adversarial learning algorithm that makes entity type patterns
in the input representation less reliable, thus forcing the model to rely more
heavily on the context. To do so, we add a learnable adversarial noise vector
(only) to the input representation of entities. We refer to this method as
adv.
Let $T=\{t_{1},t_{2},\ldots,t_{K}\}$ be a predefined set of types, such as
PER, LOC, and ORG in our case. Let $x=x_{1},x_{2},\ldots,x_{n}$ be the input
sequence of length $n$, $y=y_{1},y_{2},\ldots,y_{n}$ be the gold label
sequence following the IOB tagging scheme (our approach naturally applies to
other schemes, such as BILOU, which Ratinov and Roth (2009) found more
informative), and $y^{\prime}=y^{\prime}_{1},y^{\prime}_{2},\ldots,y^{\prime}_{n}$
be a sequence obtained by adding noise to $y$ at the mention level, that is,
by randomly replacing the type of mentions in $y$ with some noisy type sampled
from $T$.
Let $\mathcal{Y}_{ij}(t)=y_{i},\ldots,y_{j}$ be a mention of type $t\in T$,
spanning the sequence of indices $i$ to $j$ in $y$. We derive a noisy mention
$\mathcal{Y}^{\prime}_{ij}$ in $y^{\prime}$ from $\mathcal{Y}_{ij}(t)$ as
follows:
$\mathcal{Y}^{\prime}_{ij}=\begin{cases}\mathcal{Y}_{ij}(t^{\prime}),\quad t^{\prime}\sim\underset{\gamma\in T\setminus\{t\}}{\text{Cat}}\left(\gamma\,\middle|\,\xi=\frac{1}{K-1}\right)&\text{if }p\sim U(0,1)\leq\lambda\\ \mathcal{Y}_{ij}(t)&\text{otherwise}\end{cases}$

where $\lambda$ is a threshold parameter, $U(0,1)$ is the uniform
distribution on $[0,1]$, $\text{Cat}(\gamma\mid\xi=\frac{1}{K-1})$ is the
categorical distribution whose outcomes are all equally likely with
probability $\xi$, and $T\setminus\{t\}=\{t^{\prime}:t^{\prime}\in T\wedge t^{\prime}\neq t\}$
denotes the set $T$ excluding type $t$.
The above procedure only applies to entities that are preceded or
followed by 3 context words. For instance, in Figure 4, we produce a noisy
type for New York (PER), but not for John ($p>\lambda$). Also note that we
generate a different sequence $y^{\prime}$ from $y$ at each training epoch.
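The sampling step can be sketched as follows; we reuse the illustrative `entity_spans` helper from the mask sketch, and the eligibility test and variable names are again our own:

```python
import random

def noisy_labels(labels, types=("PER", "LOC", "ORG"), lam=0.8, window=3):
    """Produce y' from y: with probability `lam`, switch the type of each
    eligible mention to a uniformly sampled different type."""
    y_prime = list(labels)
    for start, end, etype in entity_spans(labels):
        left = labels[max(0, start - window):start]
        right = labels[end:end + window]
        eligible = (len(left) == window and all(l == "O" for l in left)) or \
                   (len(right) == window and all(l == "O" for l in right))
        if eligible and random.random() <= lam:
            new_type = random.choice([t for t in types if t != etype])
            for i in range(start, end):
                y_prime[i] = labels[i][:2] + new_type  # keep the B-/I- prefix
    return y_prime
```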
Next, we define a learnable noisy embedding matrix
$E^{\prime}\in\mathbb{R}^{m\times d}$, where $m=|T|\times(|T|-1)$ is the number
of valid type-switching possibilities and $d$ is the dimension of the input
representations of $x$. For each token with a noisy label, we add the
corresponding noise embedding to its input representation; for all other
tokens, we add a zero vector of size $d$. As depicted in Figure 4, the noisy
type of the entity New York is PER, so we add the noise embedding at
index $LOC\rightarrow{PER}$ to its input representation.
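A sketch of the noise embedding matrix $E^{\prime}$ as a PyTorch module follows; reserving index 0 as a frozen zero row is one simple way to realize the zero vector for non-noised tokens, and is our own convention rather than a detail from the method description:

```python
import torch
import torch.nn as nn

class NoiseEmbedding(nn.Module):
    """Learnable noise embedding matrix E' (m = |T|*(|T|-1) rows, dim d);
    row 0 is a frozen zero vector used for tokens without a noisy label."""
    def __init__(self, types, dim):
        super().__init__()
        pairs = [(a, b) for a in types for b in types if a != b]
        self.switch_idx = {p: i + 1 for i, p in enumerate(pairs)}  # 0 = no noise
        self.emb = nn.Embedding(len(pairs) + 1, dim, padding_idx=0)

    def forward(self, input_repr, switches):
        """input_repr: (n, d) token representations; switches: (n,) long tensor.
        For the Figure 4 example, the New York tokens would carry
        switches[i] = switch_idx[("LOC", "PER")], and all others 0."""
        return input_repr + self.emb(switches)
```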
The input representation of the sequence is then fed to an encoder followed
by an output layer, such as the LSTM-CRF of Peters et al. (2018) or BERT-Softmax
of Devlin et al. (2019). We extend these models by
generating an extra logit $f^{\prime}$ using a projection layer parametrized
by $W^{\prime}$ and followed by a softmax function. As shown in Figure 4, for
each token the model produces two logits, relative to the true and noisy tags.
We then train the entire model to minimize two losses: $L_{true}(\theta)$ and
$L_{noisy}(\theta^{\prime})$, where $\theta$ is the original set of parameters
and $\theta^{\prime}=\{E^{\prime},W^{\prime}\}$ is the extra set we added
(dotted boxes in Figure 4). $L_{true}(\theta)$ is the regular loss on the true
tags, while $L_{noisy}(\theta^{\prime})$ is the loss on the noisy tags, defined
as follows:
$L_{\text{noisy}}(\theta^{\prime})=\sum_{i=1}^{n}\mathbbm{1}(y^{\prime}_{i}\neq y_{i})\,\text{CE}(f^{\prime}_{i},y^{\prime}_{i})$
where CE is the cross-entropy loss function. Both losses are minimized using
gradient descent. It is worth mentioning that $\lambda$ is the only hyper-
parameter of our adv method: it controls how often noisy embeddings are added
during training. Higher values of $\lambda$ increase the amount of uncertainty
around salient patterns in the input representation of entities, preventing
the model from overfitting those patterns and pushing it to rely more on
context information. We tried values of $\lambda$ between $0.3$ and $0.9$,
and found $\lambda=0.8$ to be the best based on the CoNLL and OntoNotes
development sets.
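The following is a minimal PyTorch sketch of $L_{noisy}$; `noise_head` stands in for the extra projection layer $W^{\prime}$, the shapes assume one flattened sequence of $n$ tokens, and how gradients are partitioned between $\theta$ and $\theta^{\prime}$ (e.g. via separate optimizers) is an implementation choice we leave out:

```python
import torch
import torch.nn.functional as F

def noisy_loss(hidden, true_ids, noisy_ids, noise_head):
    """L_noisy: cross-entropy on the extra logits f', restricted to tokens
    whose noisy tag differs from the gold one.
    hidden:     (n, d) encoder outputs
    true_ids:   (n,) gold tag ids y;  noisy_ids: (n,) noisy tag ids y'
    noise_head: the extra projection layer producing logits over the tag set."""
    logits_prime = noise_head(hidden)          # f', shape (n, num_tags)
    changed = noisy_ids != true_ids            # the indicator 1(y'_i != y_i)
    if not changed.any():
        return logits_prime.new_zeros(())      # no noised mention in this batch
    return F.cross_entropy(logits_prime[changed], noisy_ids[changed])
```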
### 5.4 Results
We trained models on CoNLL and OntoNotes, and evaluated them on their
respective test sets (performances on dev show very similar trends). Recall
that NRB and WTS are only used as auxiliary diagnostic sets. Table 2 shows the
impact of our training methods when fine-tuning the BERT-large model (the one
that performs best on NRB).
First, we observe that each training method significantly improves
performance on NRB. Adding adversarial noise is the best performing
method on NRB, with an additional gain of 10.5 and 10.4 F1 points over the
respective baselines. On the other hand, we observe only minor variations on in-
domain test sets, as well as on WTS. A paired sample t-test Cohen (1996)
confirms that these variations are not statistically significant ($p>0.05$).
Indeed, the number of decisions that differ between the baseline and the
best model on a given in-domain set is less than 20.
Method | CoNLL Test | CoNLL NRB | CoNLL WTS | OntoNotes Test | OntoNotes NRB | OntoNotes WTS
---|---|---|---|---|---|---
BERT-lrg | 92.8 | 75.6 | 98.6 | 89.9 | 75.4 | 95.1
+mask | 92.9 | 82.9 | 98.4 | 89.8 | 77.3 | 96.5
+freeze | 92.7 | 83.1 | 98.4 | 89.9 | 79.8 | 96.0
+adv | 92.7 | 86.1 | 98.3 | 90.1 | 85.8 | 95.2
+f&m | 92.8 | 85.5 | 97.8 | 89.9 | 80.6 | 95.9
+a&m | 92.8 | 87.7 | 98.1 | 89.7 | 87.6 | 95.9
+a&f | 92.7 | 88.4 | 98.2 | 90.0 | 88.1 | 95.7
+a&m&f | 92.8 | 89.7 | 97.9 | 89.9 | 88.8 | 95.6
Table 2: Impact of training methods on BERT-large models fine-tuned on CoNLL
or OntoNotes.
Second, we observe that combining methods always leads to improvements on NRB,
the best configuration being the combination of all three methods. It is
interesting to note that combining training methods leads to performance on
NRB that does not depend much on the training set used: CoNLL (89.7) and
OntoNotes (88.8). This suggests that name regularity bias is a modelling
issue, and not the effect of factors such as training data size, domain, or
type granularity.
Method | CoNLL Test | CoNLL NRB | CoNLL WTS | OntoNotes Test | OntoNotes NRB | OntoNotes WTS
---|---|---|---|---|---|---
E-LSTM | 92.5 | 31.7 | 98.2 | 89.4 | 34.3 | 94.9
+mask | 92.4 | 40.8 | 97.5 | 89.3 | 38.8 | 95.3
+adv | 92.4 | 42.4 | 97.8 | 89.4 | 40.7 | 95.0
+a&m | 92.4 | 45.7 | 96.8 | 89.3 | 46.6 | 93.7
Table 3: Impact of training methods on the ELMo-LSTM trained on CoNLL or
OntoNotes.
To validate that our training methods are not specific to the fine-tuning
approach, we replicated the same experiments with the ELMo-LSTM. Table 3 shows
the performance of the mask and adv procedures (the freeze method does not
apply here). The results are in line with those observed with BERT-large:
significant gains on NRB of 14 and 12 points for the CoNLL and OntoNotes
models respectively, and no statistically significant changes on in-domain
test sets. Again, combining training methods leads to systematic gains on NRB
(13 points on average). Unlike with fine-tuned BERT, we observe a slight drop
in performance of 1.2% on WTS when both methods are used.
The performance of the ELMo-LSTM on NRB does not rival that obtained by
fine-tuning the BERT-large model, which confirms that BERT is a key factor in
enhancing robustness, even if in-domain performance is not necessarily
rewarded McCoy et al. (2019); Hendrycks et al. (2020).
## 6 Analysis
So far, we have shown that state-of-the-art models do suffer from name
regularity bias, and we have proposed model-agnostic training methods that
mitigate this bias to some extent. In Section 6.1, we provide further
evidence that our training methods force the BERT-large model to concentrate
better on contextual cues. In Section 6.2, we replicate the evaluation
protocol of Lin et al. (2020) in order to rule out the possibility that our
training methods are only valid on NRB. Last, we perform extensive experiments
on name regularity bias under low-resource (Section 6.3) and multilingual
(Section 6.4) settings.
### 6.1 Attention Heads
We leverage the attention map of BERT to better understand how our method
enhances context encoding. To this end, we calculate the average number of
attention heads that point to the entity mentions being predicted at each
layer. We conduct this experiment on NRB with the BERT-large model (24 layers
with 16 attention heads at each layer) fine-tuned on CoNLL.
Figure 5: Average number of attention heads (y-axis) pointing to NRB entity
mentions at each layer (x-axis) of the BERT-large model fine-tuned on CoNLL.
At each layer, we average the number of heads whose highest attention weight
(argmax) points to the entity name (we used the weights of the first
sub-token, since NRB only contains single-word entities). Figure 5 shows the
average number of attention heads that point to an entity mention in the
BERT-large model fine-tuned without our methods, with the adversarial noise
method (adv), and with all three methods.
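One plausible way to compute this statistic, assuming a HuggingFace-style model called with `output_attentions=True`; reading a head as "pointing at" the entity when its argmax lands on the entity's first sub-token from at least one query position is our interpretation, not a detail fixed by the procedure above:

```python
import torch

def heads_pointing_to_entity(attentions, entity_idx):
    """For each layer, count heads whose maximum attention weight (argmax over
    keys) lands on the entity's first sub-token from some query position.
    attentions: tuple of (1, heads, seq, seq) tensors, one per layer."""
    counts = []
    for layer_att in attentions:
        argmax = layer_att[0].argmax(dim=-1)       # (heads, seq)
        hits = (argmax == entity_idx).any(dim=-1)  # (heads,) boolean
        counts.append(int(hits.sum()))
    return counts  # averaged over NRB sentences in the actual experiment
```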
We observe an increasing number of heads pointing to entity names as we get
closer to the output layer: at the bottom layers (left part of the figure)
only a few heads point to entity names, in contrast to the last 2 layers
(right part) where almost all heads do so. This observation is in line with
Jawahar et al. (2019), who show that bottom and intermediate BERT layers
mainly encode lexical and syntactic information, while top layers represent
task-related information. Our training methods lead to fewer heads at the top
layers pointing to entity mentions, suggesting the model focuses more on
contextual information.
### 6.2 Random Permutations
Following the protocol described in Lin et al. (2020), we modified the dev and
test sets of standard benchmarks by randomly permuting entity mentions
dataset-wise, keeping the types untouched. For instance, the span of a
specific mention of a person can be replaced by the span of a location,
wherever it appears in the dataset. These randomized tests are highly
challenging, as discussed in Section 2, since the context is the only
available clue for solving the task, and many false positive examples are
introduced this way.
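A minimal sketch of the permutation protocol; representing the corpus as a flat list of mention records is our own simplification of the setup in Lin et al. (2020):

```python
import random

def permute_mentions(corpus_mentions):
    """Dataset-wise random permutation of mention surface forms, types kept.
    corpus_mentions: list of mention records, each a dict holding the
    sentence id, span position, and surface tokens of one mention."""
    surfaces = [m["tokens"] for m in corpus_mentions]
    random.shuffle(surfaces)
    for mention, new_surface in zip(corpus_mentions, surfaces):
        mention["tokens"] = new_surface  # e.g. a PER span replaced by a LOC span
    return corpus_mentions
```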
Method | $\pi($dev$)$ | $\pi($test$)$
---|---|---
BERT-large | 23.45 | 25.46
+adv | 31.98 | 31.99
+adv&mask | 35.02 | 34.09
+adv&mask&freeze | 40.39 | 38.62
Table 4: F1 scores of BERT-large models fine-tuned on CoNLL and evaluated on
randomly permuted versions of the dev and test sets: $\pi($dev$)$ and
$\pi($test$)$.
Table 4 shows the results of the BERT-large model fine-tuned on CoNLL and
evaluated on the permuted in-domain dev and test sets. F1 scores are much
lower here, confirming that this is a hard testbed, but they do provide
evidence of the name regularity bias of BERT. Our training methods improve the
model's F1 score by 17 and 13 points on the permuted dev and test sets
respectively, an increase much in line with what we observed on NRB.
### 6.3 Low Resource Setting
Similar to Zhou et al. (2019); Ding et al. (2020), we simulate a low-resource
setting by randomly sampling tiny subsets of the training data. Since our
focus is on measuring the contextual learning ability of models, we first
selected sentences of the CoNLL training data that contain at least one entity
followed or preceded by 3 non-entity words.
Figure 6: Performance on NRB of BERT-large models as a function of the number
of sentences used to fine-tune them.
Then, we randomly sampled $k\in\{100,500,1000,2000\}$ sentences
($\{0.7,3.5,7.1,14.3\}$% of the training sentences, respectively) with which we
fine-tuned BERT-large. Figure 6 shows the performance of the resulting models
on NRB. Expectedly, the F1 scores of models fine-tuned on few examples are
rather low on NRB, as well as on the in-domain test set. Although not shown in
Figure 6, fine-tuning on 100 and 2000 sentences leads to performance of 14%
and 45% respectively on the CoNLL test set. Nevertheless, we observe that our
training methods, and adv in particular, improve performance on NRB even under
extremely low-resource settings. On the CoNLL test and WTS sets, scores vary
within a range of $\pm 0.5$ and $\pm 0.7$ respectively when our methods are
added to BERT-large.
### 6.4 Multilingual Setting
#### 6.4.1 Experimental Protocol
For in-domain data, we use the German, Spanish, and Dutch CoNLL-2002 Tjong Kim
Sang (2002) NER datasets. Those benchmarks, also from the news domain, come
with a train/dev/test split, and their training material is comparable in size
to the English CoNLL dataset. In addition, we experiment with four non-CoNLL
benchmarks: Finnish Luoma et al. (2020), Danish Hvingelby et al. (2020),
Croatian Ljubešić et al. (2018), and Afrikaans Eiselen (2016) data. These
corpora have more diversified text genres, yet mainly follow the CoNLL
annotation scheme (the Finnish data is tagged with EVENT, PRODUCT, and DATE in
addition to the 4 CoNLL classes). The Finnish and Afrikaans datasets are
comparable in size to English CoNLL, the Danish one is 60% smaller, while the
Croatian one is twice as large. We use the provided train/dev/test splits for
Danish and Finnish, and randomly split (80/10/10) the Croatian and Afrikaans
datasets.
Since NRB and WTS are in English, we designed a simple yet generic method for
projecting them to another language. First, both test sets are translated to
the target language using an online translation service. To ensure a
high-quality corpus, we eliminate a sentence if the BLEU score Papineni et al.
(2002) between the original (English) sentence and its back-translation is
below 0.65.
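A sketch of this round-trip filter; since we do not pin down a BLEU implementation above, NLTK's sentence-level BLEU with smoothing is used here purely for illustration:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def keep_sentence(original_tokens, back_translated_tokens, threshold=0.65):
    """Keep a translated sentence only if the BLEU score between the original
    English sentence and its back-translation is at least `threshold`."""
    smooth = SmoothingFunction().method1
    score = sentence_bleu([original_tokens], back_translated_tokens,
                          smoothing_function=smooth)
    return score >= threshold
```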
Lang | NRB | WTS | Lang | NRB | WTS
---|---|---|---|---|---
de | 37% | 44% | fi | 53% | 62%
es | 20% | 22% | da | 19% | 24%
nl | 20% | 24% | hr | 39% | 48%
 |  |  | af | 26% | 32%
Table 5: Percentage of translated sentences from NRB and WTS discarded for
each language.
Model | German test | German nrb | German wts | Spanish test | Spanish nrb | Spanish wts | Dutch test | Dutch nrb | Dutch wts | Finnish test | Finnish nrb | Finnish wts | Danish test | Danish nrb | Danish wts | Croatian test | Croatian nrb | Croatian wts | Afrikaans test | Afrikaans nrb | Afrikaans wts
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Feature-based
BERT-LSTM | 78.9 | 36.4 | 84.2 | 85.6 | 59.9 | 90.8 | 84.9 | 45.4 | 85.7 | 76.0 | 38.9 | 84.5 | 76.4 | 42.6 | 78.1 | 78.0 | 28.4 | 79.3 | 76.2 | 39.7 | 65.8
+adv | 78.2 | 44.1 | 82.8 | 85.0 | 65.8 | 90.2 | 84.3 | 57.8 | 83.5 | 75.1 | 52.9 | 81.0 | 75.4 | 47.2 | 76.9 | 77.5 | 35.2 | 75.5 | 75.7 | 42.3 | 63.3
+adv&mask | 78.1 | 47.6 | 82.9 | 84.9 | 72.2 | 88.7 | 84.0 | 62.8 | 83.5 | 74.6 | 54.3 | 81.8 | 75.1 | 48.4 | 76.6 | 76.9 | 36.8 | 76.7 | 75.1 | 52.8 | 63.1
Fine-tuning
BERT-base | 83.8 | 64.0 | 93.3 | 88.0 | 72.3 | 93.9 | 91.8 | 56.1 | 92.0 | 91.3 | 64.6 | 91.9 | 83.6 | 56.6 | 86.2 | 89.7 | 54.7 | 95.6 | 80.4 | 54.3 | 91.6
+adv | 83.7 | 68.9 | 93.6 | 87.9 | 75.9 | 93.9 | 91.9 | 58.3 | 91.8 | 90.2 | 66.4 | 92.5 | 82.7 | 58.4 | 86.5 | 89.5 | 57.9 | 95.5 | 79.7 | 60.2 | 92.1
+a&m&f | 83.2 | 73.3 | 94.0 | 87.4 | 81.6 | 93.7 | 91.2 | 63.6 | 91.0 | 89.8 | 67.4 | 92.7 | 82.3 | 63.1 | 85.4 | 88.8 | 59.6 | 94.9 | 79.4 | 64.2 | 91.6
Table 6: Mention level F1 scores of 7 multilingual models trained on their
respective training data, and tested on their respective in-domain test, NRB,
and WTS sets.
Table 5 reports the percentage of discarded sentences for each language. While
for Finnish (fi), Croatian (hr), and German (de) we remove a large proportion
of sentences, we found our translation approach simpler and more systematic
than generating an NRB corpus from scratch for each language. The latter
approach depends on the robustness of the weak tagger, the number of Wikipedia
articles and disambiguation pages per language, as well as the existence of
type information. This is left as future work.
For experiments with fine-tuning, we use language-specific BERT models
(reported to be more accurate than multilingual ones in a monolingual setting
Martin et al. (2019); Le et al. (2020); Delobelle et al. (2020); Virtanen et
al. (2019)) for German Chan et al. (2020), Spanish Canete et al. (2020), Dutch
de Vries et al. (2019), Finnish Virtanen et al. (2019), Danish
(https://github.com/botxo/nordic_bert), and Croatian Ulčar and Robnik-Šikonja
(2020), while we use mBERT Devlin et al. (2019) for Afrikaans.
For feature-based approaches, we use the same ELMo-LSTM architecture as Peters
et al. (2018), except that we replace the English word embeddings with
language-specific ones: FastText Bojanowski et al. (2017) for static
representations, and the aforementioned BERT-base models for contextualized
ones.
#### 6.4.2 Results
Table 6 reports the performance on the test, NRB, and WTS sets for both
feature-based and fine-tuning approaches, with and without our training
methods. We used the hyper-parameters of the English CoNLL experiments with no
further tuning. We selected the best performing models based on development
set scores, and report results averaged over 5 runs.
Mainly due to implementation details and hyper-parameter settings, our fine-
tuned BERT-base models perform better on the CoNLL test sets for German (83.8
vs. 80.4) and Dutch (91.8 vs. 90.0), and slightly worse on Spanish (88.0 vs.
88.4), compared to the results reported in their respective BERT papers.
Consistent with the results obtained on English for feature-based (Table 1)
and fine-tuned (Table 3) models, the latter approach performs better on NRB,
although by a smaller margin than on English (+37%). More precisely, we
observe a gain of +28% and +26% on German and Croatian respectively, and a
gain ranging between 11% and 15% for the other languages.
Nevertheless, our training methods lead to systematic and often drastic
improvements on NRB, coupled with a statistically non-significant overall
decrease on in-domain test sets. They do, however, incur a slight but
significant drop of around 2 F1 points on WTS for feature-based models.
Similar to what was previously observed, the best scores on NRB are obtained
by BERT models when the training methods are combined. For Dutch, we observe
that once trained with our methods, the type of model used (feature-based
versus fine-tuned BERT) makes much less difference on NRB. Altogether, these
results demonstrate that name regularity bias is not specific to a particular
language, even if its severity varies from one language to another, and that
the proposed training methods notably mitigate this bias.
## 7 Conclusion
In this work, we focused on the name regularity bias of NER models, a problem
first discussed in Lin et al. (2020). We proposed NRB, a benchmark
specifically designed to diagnose this bias. As opposed to existing strategies
devised to measure it, NRB is composed of real sentences with
easy-to-identify mentions.
We showed that current state-of-the-art models perform from poorly (feature-
based) to decently (fine-tuned BERT) on NRB. To mitigate this bias, we
proposed a novel adversarial training method based on adding learnable noise
vectors to entity words. These learnable vectors encourage the model to better
incorporate contextual information. We demonstrated that this approach greatly
improves the contextual ability of existing models, and that it can be
combined with the other training methods we proposed. Significant gains are
observed in both low-resource and multilingual settings. To foster research on
NER robustness, we encourage others to report results on NRB and WTS (English
and multilingual NRB and WTS are available at
http://rali.iro.umontreal.ca/rali/?q=en/wikipedia-nrb-ner).
This study opens up new avenues of investigation. One of them is conducting a
large-scale multilingual experiment characterizing the name regularity bias of
more morphologically diverse language families, possibly leveraging massively
multilingual resources such as WikiAnn Pan et al. (2017), Polyglot-NER Al-Rfou
et al. (2015), or Universal Dependencies Nivre et al. (2016). We can also
develop a more challenging NRB by selecting sentences with multi-word
entities.
Also, non-sequential labelling approaches for NER, such as those of Li et al.
(2020); Yu et al. (2020), have reported impressive results on both flat and
nested NER. We plan to measure their bias on NRB and study the benefits of
applying our training methods to those approaches. Finally, we want to
investigate whether our adversarial training method can be successfully
applied to other NLP tasks.
## 8 Acknowledgments
We are grateful to the reviewers of this work for their constructive comments
that greatly contributed to improving this paper.
## References
* Agarwal et al. (2020a) Oshin Agarwal, Yinfei Yang, Byron C Wallace, and Ani Nenkova. 2020a. Entity-Switched Datasets: An Approach to Auditing the In-Domain Robustness of Named Entity Recognition Models. _arXiv preprint arXiv:2004.04123_.
* Agarwal et al. (2020b) Oshin Agarwal, Yinfei Yang, Byron C Wallace, and Ani Nenkova. 2020b. Interpretability analysis for named entity recognition to understand system predictions and how they can improve. _arXiv preprint arXiv:2004.04564_.
* Akbik et al. (2018) Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In _Proceedings of the 27th International Conference on Computational Linguistics_ , pages 1638–1649.
* Al-Rfou et al. (2015) Rami Al-Rfou, Vivek Kulkarni, Bryan Perozzi, and Steven Skiena. 2015. Polyglot-ner: Massive multilingual named entity recognition. In _Proceedings of the 2015 SIAM International Conference on Data Mining_ , pages 586–594. SIAM.
* Alvarado et al. (2015) Julio Cesar Salinas Alvarado, Karin Verspoor, and Timothy Baldwin. 2015. Domain adaption of named entity recognition to support credit risk assessment. In _Proceedings of the Australasian Language Technology Association Workshop 2015_ , pages 84–90.
* Augenstein et al. (2017) Isabelle Augenstein, Leon Derczynski, and Kalina Bontcheva. 2017. Generalisation in named entity recognition: A quantitative analysis. _Computer Speech & Language_, 44:61–83.
* Balasubramanian et al. (2020) Sriram Balasubramanian, Naman Jain, Gaurav Jindal, Abhijeet Awasthi, and Sunita Sarawagi. 2020. What’s in a name? are bert named entity representations just as good for any other name? _arXiv preprint arXiv:2007.06897_.
* Bekoulis et al. (2018) Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Adversarial training for multi-context joint entity and relation extraction. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 2830–2836.
* Belinkov et al. (2019) Yonatan Belinkov, Adam Poliak, Stuart M Shieber, Benjamin Van Durme, and Alexander M Rush. 2019. On adversarial removal of hypothesis-only bias in natural language inference. In _Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (SEM 2019)_ , pages 256–262.
* Bernier-Colborne and Langlais (2020) Gabriel Bernier-Colborne and Phillippe Langlais. 2020. HardEval: Focusing on Challenging Tokens to Assess Robustness of NER. In _Proceedings of The 12th Language Resources and Evaluation Conference_ , pages 1697–1704, Marseille, France. European Language Resources Association.
* Bojanowski et al. (2017) Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. _Transactions of the Association for Computational Linguistics_ , 5:135–146.
  * Bollacker et al. (2008) Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In _Proceedings of the 2008 ACM SIGMOD international conference on Management of data_ , pages 1247–1250.
  * Borgholt et al. (2020) Lasse Borgholt, Jakob D Havtorn, Anders Søgaard, Zeljko Agic, Lars Maaløe, and Christian Igel. 2020. Do end-to-end speech recognition models care about context? In _Proc. of Interspeech_.
* Canete et al. (2020) José Canete, Gabriel Chaperon, Rodrigo Fuentes, and Jorge Pérez. 2020. Spanish pre-trained bert model and evaluation data. _PML4DC at ICLR_ , 2020.
* Chan et al. (2020) Branden Chan, Stefan Schweter, and Timo Möller. 2020. German’s next language model. _arXiv preprint arXiv:2010.10906_.
* Chelba et al. (2014) Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2014. One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling. In _Fifteenth Annual Conference of the International Speech Communication Association_.
* Cheng et al. (2019) Yong Cheng, Lu Jiang, and Wolfgang Macherey. 2019. Robust neural machine translation with doubly adversarial inputs. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4324–4333.
* Clark et al. (2019) Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019. Don’t take the easy way out: Ensemble based methods for avoiding known dataset biases. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 4060–4073.
* Clark et al. (2020) Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2020. Learning to model and ignore dataset bias with mixed capacity ensembles. _arXiv preprint arXiv:2011.03856_.
* Cohen (1996) Paul R Cohen. 1996. Empirical methods for artificial intelligence. _IEEE Intelligent Systems_.
* Dai and Adel (2020) Xiang Dai and Heike Adel. 2020. An analysis of simple data augmentation for named entity recognition. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 3861–3867.
* Dasgupta et al. (2018) Ishita Dasgupta, Demi Guo, Andreas Stuhlmüller, Samuel J Gershman, and Noah D Goodman. 2018. Evaluating compositionality in sentence embeddings. _arXiv preprint arXiv:1802.04302_.
* Dayanik and Padó (2020) Erenay Dayanik and Sebastian Padó. 2020. Masking actor information leads to fairer political claims detection. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 4385–4391.
* Delobelle et al. (2020) Pieter Delobelle, Thomas Winters, and Bettina Berendt. 2020. Robbert: a dutch roberta-based language model. _arXiv preprint arXiv:2001.06286_.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186.
* Ding et al. (2020) Bosheng Ding, Linlin Liu, Lidong Bing, Canasai Kruengkrai, Thien Hai Nguyen, Shafiq Joty, Luo Si, and Chunyan Miao. 2020. Daga: Data augmentation with a generation approach for low-resource tagging tasks. _arXiv preprint arXiv:2011.01549_.
* Durrett and Klein (2014) Greg Durrett and Dan Klein. 2014. A Joint Model for Entity Analysis: Coreference, Typing, and Linking. _Transactions of the Association for Computational Linguistics_ , 2:477–490.
* Ebrahimi et al. (2018) Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. Hotflip: White-box adversarial examples for text classification. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 31–36.
* Eiselen (2016) Roald Eiselen. 2016. Government domain named entity recognition for south african languages. In _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16)_ , pages 3344–3348.
* Fawzi et al. (2016) Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. 2016. Robustness of classifiers: from adversarial to random noise. In _Proceedings of the 30th International Conference on Neural Information Processing Systems_ , pages 1632–1640.
* Geva et al. (2019) Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 1161–1166.
* Ghaddar and Langlais (2018) Abbas Ghaddar and Philippe Langlais. 2018. Transforming Wikipedia into a Large-Scale Fine-Grained Entity Type Corpus. In _Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)_.
* Ghaddar and Langlais (2016) Abbas Ghaddar and Phillippe Langlais. 2016. Coreference in Wikipedia: Main concept resolution. In _Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning_ , pages 229–238.
* Ghaddar and Langlais (2017) Abbas Ghaddar and Phillippe Langlais. 2017. Winer: A wikipedia annotated corpus for named entity recognition. In _Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 413–422.
* Goodfellow et al. (2014) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. _arXiv preprint arXiv:1412.6572_.
* Gururangan et al. (2018) Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A Smith. 2018. Annotation artifacts in natural language inference data. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 107–112.
* He et al. (2019) He He, Sheng Zha, and Haohan Wang. 2019. Unlearn dataset bias in natural language inference by fitting the residual. _EMNLP-IJCNLP 2019_ , page 132.
* Heafield (2011) Kenneth Heafield. 2011. KenLM: faster and smaller language model queries. In _Proceedings of the EMNLP 2011 Sixth Workshop on Statistical Machine Translation_ , pages 187–197, Edinburgh, Scotland, United Kingdom.
* Hendrycks and Gimpel (2017) Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. _Proceedings of International Conference on Learning Representations_.
* Hendrycks et al. (2020) Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020. Pretrained transformers improve out-of-distribution robustness. _arXiv preprint arXiv:2004.06100_.
* Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. _Neural computation_ , 9(8):1735–1780.
* Hvingelby et al. (2020) Rasmus Hvingelby, Amalie Brogaard Pauli, Maria Barrett, Christina Rosted, Lasse Malm Lidegaard, and Anders Søgaard. 2020. Dane: A named entity resource for danish. In _Proceedings of the 12th Language Resources and Evaluation Conference_ , pages 4597–4604.
* Jawahar et al. (2019) Ganesh Jawahar, Benoît Sagot, and Djamé Seddah. 2019. What Does BERT Learn about the Structure of Language? In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3651–3657.
* Joshi et al. (2020) Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. _Transactions of the Association for Computational Linguistics_ , 8:64–77.
* Le et al. (2020) Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoit Crabbé, Laurent Besacier, and Didier Schwab. 2020. Flaubert: Unsupervised language model pre-training for french. In _Proceedings of The 12th Language Resources and Evaluation Conference_ , pages 2479–2490.
* Le Bras et al. (2020) Ronan Le Bras, Swabha Swayamdipta, Chandra Bhagavatula, Rowan Zellers, Matthew Peters, Ashish Sabharwal, and Yejin Choi. 2020. Adversarial filters of dataset biases. In _International Conference on Machine Learning_ , pages 1078–1088. PMLR.
  * Li et al. (2020) Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified MRC framework for named entity recognition. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 5849–5859.
* Lin et al. (2020) Hongyu Lin, Yaojie Lu, Jialong Tang, Xianpei Han, Le Sun, Zhicheng Wei, and Nicholas Jing Yuan. 2020. A rigorous study on named entity recognition: Can fine-tuning pretrained model lead to the promised land? In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 7291–7300.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_.
* Ljubešić et al. (2018) Nikola Ljubešić, Željko Agić, Filip Klubička, Vuk Batanović, and Tomaž Erjavec. 2018. Training corpus hr500k 1.0. Slovenian language resource repository CLARIN.SI.
* Luoma et al. (2020) Jouni Luoma, Miika Oinonen, Maria Pyykönen, Veronika Laippala, and Sampo Pyysalo. 2020. A broad-coverage corpus for finnish named entity recognition. In _Proceedings of The 12th Language Resources and Evaluation Conference_ , pages 4615–4624.
* Mahabadi et al. (2020) Rabeeh Karimi Mahabadi, Yonatan Belinkov, and James Henderson. 2020. End-to-end bias mitigation by modelling biases in corpora. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 8706–8716. Association for Computational Linguistics.
* Manning et al. (2014) Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In _ACL (System Demonstrations)_ , pages 55–60.
* Martin et al. (2019) Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot. 2019. Camembert: a tasty french language model. _arXiv preprint arXiv:1911.03894_.
* Mayhew et al. (2020) Stephen Mayhew, Gupta Nitish, and Dan Roth. 2020. Robust named entity recognition with truecasing pretraining. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , pages 8480–8487.
* Mayhew et al. (2019) Stephen Mayhew, Tatiana Tsygankova, and Dan Roth. 2019. ner and pos when nothing is capitalized. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 6257–6262.
* McCoy et al. (2019) Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3428–3448.
* Min et al. (2020) Junghyun Min, R Thomas McCoy, Dipanjan Das, Emily Pitler, and Tal Linzen. 2020. Syntactic data augmentation increases robustness to inference heuristics. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 2339–2352.
* Miyato et al. (2016) Takeru Miyato, Andrew M Dai, and Ian Goodfellow. 2016. Adversarial training methods for semi-supervised text classification. _arXiv preprint arXiv:1605.07725_.
* Moosavi et al. (2020) Nafise Sadat Moosavi, Marcel de Boer, Prasetya Ajie Utama, and Iryna Gurevych. 2020\. Improving robustness by augmenting training sentences with predicate-argument structures. _arXiv preprint arXiv:2010.12510_.
* Nguyen et al. (2020) Thong Nguyen, Duy Nguyen, and Pramod Rao. 2020. Adaptive Name Entity Recognition under Highly Unbalanced Data. _arXiv preprint arXiv:2003.10296_.
* Nivre et al. (2016) Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collection. In _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16)_ , pages 1659–1666.
* Padó et al. (2019) Sebastian Padó, André Blessing, Nico Blokker, Erenay Dayanik, Sebastian Haunss, and Jonas Kuhn. 2019. Who sides with whom? towards computational construction of discourse networks for political debates. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 2841–2847.
* Pan et al. (2017) Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross-lingual name tagging and linking for 282 languages. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1946–1958.
* Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th annual meeting of the Association for Computational Linguistics_ , pages 311–318.
* Peters et al. (2018) Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 2227–2237.
* Poliak et al. (2018) Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In _Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics_ , pages 180–191.
* Pradhan et al. (2012) Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In _Joint Conference on EMNLP and CoNLL-Shared Task_ , pages 1–40.
* Ratinov and Roth (2009) Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In _Proceedings of the Thirteenth Conference on Computational Natural Language Learning_ , pages 147–155. Association for Computational Linguistics.
* Recht et al. (2019) Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. 2019. Do imagenet classifiers generalize to imagenet? In _International Conference on Machine Learning_ , pages 5389–5400. PMLR.
* Sanh et al. (2020) Victor Sanh, Thomas Wolf, Yonatan Belinkov, and Alexander M Rush. 2020. Learning from others’ mistakes: Avoiding dataset biases without modeling them. _arXiv preprint arXiv:2012.01300_.
* Schuster et al. (2019) Tal Schuster, Darsh Shah, Yun Jie Serene Yeo, Daniel Roberto Filizzola Ortiz, Enrico Santus, and Regina Barzilay. 2019. Towards debiasing fact verification models. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3410–3416.
* Seltzer et al. (2013) Michael L Seltzer, Dong Yu, and Yongqiang Wang. 2013. An investigation of deep neural networks for noise robust speech recognition. In _2013 IEEE international conference on acoustics, speech and signal processing_ , pages 7398–7402. IEEE.
* Shah et al. (2020) Deven Santosh Shah, H Andrew Schwartz, and Dirk Hovy. 2020. Predictive biases in natural language processing models: A conceptual framework and overview. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 5248–5264.
* Søgaard (2013) Anders Søgaard. 2013. Part-of-speech tagging with antagonistic adversaries. In _Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 640–644.
* Strauss et al. (2016) Benjamin Strauss, Bethany Toma, Alan Ritter, Marie-Catherine de Marneffe, and Wei Xu. 2016. Results of the wnut16 named entity recognition shared task. In _Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)_ , pages 138–144.
* Szegedy et al. (2013) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. _arXiv preprint arXiv:1312.6199_.
* Thorne et al. (2018) James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018\. Fever: a large-scale dataset for fact extraction and verification. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 809–819.
* Tjong Kim Sang (2002) Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In _COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002)_.
* Tjong Kim Sang and De Meulder (2003) Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In _Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4_ , pages 142–147. Association for Computational Linguistics.
* Ulčar and Robnik-Šikonja (2020) Matej Ulčar and Marko Robnik-Šikonja. 2020. Finest bert and crosloengual bert: less is more in multilingual models. _arXiv preprint arXiv:2006.07890_.
* Utama et al. (2020a) Prasetya Ajie Utama, Nafise Sadat Moosavi, and Iryna Gurevych. 2020a. Mind the trade-off: Debiasing nlu models without degrading the in-distribution performance. _arXiv preprint arXiv:2005.00315_.
* Utama et al. (2020b) Prasetya Ajie Utama, Nafise Sadat Moosavi, and Iryna Gurevych. 2020b. Towards debiasing nlu models from unknown biases. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 7597–7610.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Advances in Neural Information Processing Systems_ , pages 5998–6008.
* Virtanen et al. (2019) Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: Bert for finnish. _arXiv preprint arXiv:1912.07076_.
* de Vries et al. (2019) Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, and Malvina Nissim. 2019. Bertje: A dutch bert model. _arXiv preprint arXiv:1912.09582_.
* Wang et al. (2018) Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. In _Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_ , pages 353–355.
* Williams et al. (2017) Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. _arXiv preprint arXiv:1704.05426_.
* Xin et al. (2020) Ji Xin, Raphael Tang, Jaejun Lee, Yaoliang Yu, and Jimmy Lin. 2020. DeeBERT: Dynamic early exiting for accelerating BERT inference. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 2246–2251, Online. Association for Computational Linguistics.
  * Yaghoobzadeh et al. (2019) Yadollah Yaghoobzadeh, Remi Tachet, Timothy J Hazen, and Alessandro Sordoni. 2019. Robust natural language inference models with example forgetting. _arXiv preprint arXiv:1911.03861_.
* Yu et al. (2020) Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020. Named entity recognition as dependency parsing. _arXiv preprint arXiv:2005.07150_.
* Zellers et al. (2018) Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 93–104.
* Zeng et al. (2020) Xiangji Zeng, Yunliang Li, Yuchen Zhai, and Yin Zhang. 2020. Counterfactual generator: A weakly-supervised method for named entity recognition. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 7270–7280.
* Zhang et al. (2019) Yuan Zhang, Jason Baldridge, and Luheng He. 2019. Paws: Paraphrase adversaries from word scrambling. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 1298–1308.
* Zheng et al. (2019) Changmeng Zheng, Yi Cai, Jingyun Xu, Ho-fung Leung, and Guandong Xu. 2019. A Boundary-aware Neural Model for Nested Named Entity Recognition. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 357–366.
* Zhou et al. (2019) Joey Tianyi Zhou, Hao Zhang, Di Jin, Hongyuan Zhu, Meng Fang, Rick Siow Mong Goh, and Kenneth Kwok. 2019. Dual adversarial neural transfer for low-resource named entity recognition. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3461–3471.
  * Zhu et al. (2019) Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Thomas Goldstein, and Jingjing Liu. 2019. Freelb: Enhanced adversarial training for language understanding. _arXiv preprint arXiv:1909.11764_.
# Elucidating the local atomic and electronic structure of amorphous oxidized
superconducting niobium films
Thomas F. Harrelson Materials Science Division, Lawrence Berkeley National
Laboratory, Berkeley, CA 94720, USA Molecular Foundry, Lawrence Berkeley
National Laboratory, Berkeley, CA 94720, USA. Evan Sheridan Materials
Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720,
USA Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, CA
94720, USA Theory and Simulation of Condensed Matter, Department of Physics,
King’s College London, The Strand, London WC2R 2LS, UK. Ellis Kennedy
Department of Materials Science and Engineering, University of California,
Berkeley, CA 94720, USA John Vinson Material Measurement Laboratory,
National Institute of Standards and Technology, Gaithersburg, MD 20899, USA
Alpha T. N’Diaye Advanced Light Source, Lawrence Berkeley National
Laboratory, Berkeley, CA 94720, USA M. Virginia P. Altoé Molecular Foundry,
Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Adam
Schwartzberg Molecular Foundry, Lawrence Berkeley National Laboratory,
Berkeley, CA 94720, USA Irfan Siddiqi Materials Science Division, Lawrence
Berkeley National Laboratory, Berkeley, CA 94720, USA Department of Physics,
University of California, Berkeley, CA 94720, USA D. Frank Ogletree
Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, CA 94720,
USA Mary C. Scott Department of Materials Science and Engineering,
University of California, Berkeley, CA 94720, USA NCEM, Molecular Foundry,
Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Sinéad M.
Griffin Materials Science Division, Lawrence Berkeley National Laboratory,
Berkeley, CA 94720, USA Molecular Foundry, Lawrence Berkeley National
Laboratory, Berkeley, CA 94720, USA
###### Abstract
Qubits made from superconducting materials are a mature platform for quantum
information science applications such as quantum computing. However, materials-
based losses are now a limiting factor in reaching the coherence times needed
for applications. In particular, knowledge of the atomistic structure and
properties of the circuit materials is needed to identify, understand, and
mitigate materials-based decoherence channels. In this work we characterize
the atomic structure of the native oxide film formed on Nb resonators by
comparing fluctuation electron microscopy experiments to density functional
theory calculations, finding an amorphous layer consistent with Nb2O5
stoichiometry. Comparing X-ray absorption measurements at the oxygen K edge
with first-principles calculations, we find evidence of d-type magnetic
impurities in our sample, known to cause impedance in proximal
superconductors. This work identifies the structural and chemical composition
of the oxide layer grown on Nb superconductors, and shows that soft X-ray
absorption can fingerprint magnetic impurities in these superconducting
systems.
Superconducting qubits are one of the leading solid-state platforms for QIS
(quantum information science) applications, with reported coherence times
reaching $\sim$100 microseconds Devoret and Schoelkopf (2013); Kjaergaard _et
al._ (2020). Despite this, materials-based decoherence channels contribute
significantly to microwave losses, and are now a central hurdle in device
coherence and scaling McDermott (2009). In particular, the inevitable
inhomogeneities that arise from growth and fabrication, such as interfaces,
defects, and structural disorder, each contribute to decoherence in qubits
made from superconducting materials Oliver and Welander (2013); de Leon _et
al._ (2021).
Precise knowledge of the atomistic structural and chemical makeup of
superconducting qubit materials is particularly necessary for understanding
materials-dependent decoherence processes. Intrinsic noise sources in
superconducting qubits are typically classified into two categories: two-
level system (TLS) noise and non-TLS noise Müller, Cole, and Lisenfeld (2019).
TLSs are fluctuating two-level states comprising local energy minima in the
atomic structural potential, originally proposed to describe the
microstructure of amorphous materials Phillips (1987). TLSs can couple to
electric and magnetic fields, reducing a qubit's coherence time. Since the
amorphous materials present on superconducting qubit surfaces consist of a
variety of bonding environments, TLSs can host a range of barrier heights and
tunnelling rates, and correspondingly a distribution of fluctuation
frequencies even within a given material Burnett, Faoro, and Lindström (2016).
Because of this, characterization of the local atomic arrangements is needed
to build any predictive description of TLS-related decoherence.
Non-TLS noise intrinsic in QIS materials includes the presence of
nonequilibrium quasiparticles (QPs) Wilen _et al._ (2020); Cardani _et al._
(2020) and of magnetic impurities Kharitonov _et al._ (2012); Proslier _et
al._ (2011); Sheridan _et al._ (2021). While careful shielding can mitigate
some of these effects, the decay and control of QPs is materials-dependent,
and can be crucially influenced by nanofabrication and materials
control Vepsäläinen _et al._ (2020); Wilen _et al._ (2020); Martinis (2020).
Another key non-TLS loss mechanism is Cooper pair breaking induced by the
presence of magnetic impurities Kharitonov _et al._ (2012), which can arise
from materials defects, interfaces, and surfaces, and cause impedance
losses in the superconductor Kharitonov _et al._ (2012); Proslier _et al._
(2011); Sheridan _et al._ (2021). Therefore, to understand the structure-
coherence relationships associated with materials properties in
superconducting qubits, knowledge of the local structural and chemical
environment is needed, regardless of the origin (TLS or non-TLS) of the noise.
Superconducting qubits typically comprise Al/AlOx/Al Josephson
junctions with superconducting circuit elements commonly made from Al, Nb, Ta,
and alloys containing these Oliver and Welander (2013); Place _et al._ (2021).
Of these, Nb has many advantages over other superconducting materials,
including a low kinetic inductance resulting in reduced variability, and a
higher superconducting gap making it less susceptible to QP poisoning Kaplan
_et al._ (1976). Importantly, Nb forms a relatively clean surface, and is a
mature material for the advanced processing and lithographic patterning
required for contemporary qubit fabrication and for future scaling of
highly coherent superconducting architectures. However, Nb readily forms
surface oxides such as NbO, NbO2, and Nb2O5, which introduce both TLS and non-
TLS losses in the qubit Delheusy _et al._ (2008); Altoé _et al._ (2020).
Previous work has examined the use of ultrahigh vacuum packaging to reduce
surface contamination Mergenthaler _et al._ (2021), in addition to the
influence of both oxide surface removal Altoé _et al._ (2020) and
regrowth Verjauw _et al._ (2021) on the performance of superconducting
resonators.
Despite extensive research on the variety of loss channels and their
mitigation through surface treatments and fabrication Romanenko and Schuster
(2017); Romanenko _et al._ (2020), the precise microscopic origins of TLS and
non-TLS losses in superconducting systems are not known. This is primarily due
to the difficulty in accessing information about the local structural and
chemical environments which critically control the presence of these losses.
Since the native oxides formed on Nb are often amorphous, conventional
diffraction and computational techniques cannot be used for structural
information. Theoretical treatments often either rely on having crystalline
materials with periodic boundary conditions Heinrich, Pascual, and Franke
(2018), or propose phenomenological models without incorporating nanoscale
structural information. Instead, in this work, we combine Fluctuation Electron
Microscopy (FEM), X-Ray Absorption Spectroscopy (XAS), and first-principles
calculations to investigate the structural and chemical composition of
amorphous oxides on superconducting Nb films. We classify the short- and mid-
range structural properties of our oxides by comparing our ab initio
calculations with experiments, identifying the structural and chemical makeup
of surface Nb oxides on superconducting resonators.
To characterize the mid-range atomic structure of the amorphous films we used
FEM, a 4-D scanning transmission electron microscopy technique that is
sensitive to medium-range atomic ordering in disordered materials Voyles and
Muller (2002). FEM experiments were performed using an FEI TitanX operated at
an acceleration voltage of 200 kV. Additionally, XAS measurements of the O
K-edge were performed at the bending magnet beamline 6.3.1 at the Advanced
Light Source at Lawrence Berkeley National Laboratory. We consider three
different Nb treatments: (1) unpatterned, oxidized Nb films without any
treatments, (2) Nb film from a chip patterned with qubits, and (3) Nb film
from a chip with resonators only (no Josephson junctions), which allows us to
potentially observe changes in the Nb oxides with these different fabrication
steps (Table 1). XAS was performed on all three samples. FEM was performed on
Sample 2 because it had the thickest oxide layer, which was required for
improved signal in FEM analysis. Further details of the FEM and XAS
measurements are given in the SI.
Electronic and magnetic properties were calculated using density functional
theory (DFT) as implemented in the Vienna Ab initio Simulation Package (VASP)
Kresse and Hafner (1993). We used Nb2O5 amorphous structures generated
previously with ab initio molecular dynamics, as detailed in
Ref. Sheridan _et al._ (2021), which are available on Zenodo Harrelson _et
al._ (2021a). X-ray absorption calculations were carried out using the Bethe-
Salpeter equation (BSE) formalism as implemented within the ocean code Vinson
_et al._ (2011). The BSE calculations use a basis of electron orbitals from
DFT calculated with Quantum ESPRESSO Giannozzi _et al._ (2017), with
pseudopotentials from the PseudoDojo collection van Setten _et al._ (2018);
Hamann (2013). More details on the DFT and XAS calculations are given in the
SI.
Figure 1: (a) Representative ab initio molecular dynamics generated amorphous
structure of Nb2O5. (b) Averaged speckle pattern of Nb2O5 from FEM over many
diffraction patterns. (c) Radial distribution function for Nb2O5 obtained by
averaging over nine amorphous stoichiometric configurations generated using ab
initio molecular dynamics. (d) Annular mean of the normalized variances of the
FEM data, measuring the average interatomic spacing between Nb centers in
Nb2O5.
We first describe our FEM diffraction results of a representative oxidized Nb
sample with the largest oxide thickness (Sample 2), and compare the short-
range structural description to ab initio generated structures. In contrast to
other diffraction techniques, which generally identify long-range ordering,
FEM is uniquely sensitive to the medium-range ordering on the size scale of
the electron beam probe Voyles and Abelson (2003); Daulton, Bondi, and Kelton
(2010). FEM data is acquired by rastering a small electron probe over a sample
and capturing a diffraction pattern at each probe location. The diffraction
patterns are digitally preprocessed to remove imaging distortions, and the
variance of the measured intensity as a function of scattering vector is
calculated Kennedy _et al._ (2020). As Bragg scattering in the diffraction
patterns creates large variance in intensity, the calculated variance is a
metric for ordering in the amorphous material on the length scale of the
electron probe Hwang and Voyles (2011). Full details of the FEM method and
data analysis are given in the SI. In Figure 1b, we show the average speckle
pattern of many nanodiffraction patterns taken over the Nb oxide region of the
film cross-section (see SI). The brighter spots in the speckled halo primarily
represent Nb-Nb distances because electron scattering from Nb atoms dominates
over scattering from O atoms. The broad diffuse halo present in the average
nanodiffraction pattern suggests that the Nb oxide film is amorphous. In Figure
1(d) we show an average spatial variance computed from six regions of the
sample, where each region differs in its thickness, as shown in Figure S1 of
the Supplementary material. The broad peak centered at the wavevector
$\approx$3 nm$^{-1}$ is a measure of the average interatomic spacing between Nb
centers, corresponding to an average Nb-Nb distance of 3.37 Å.
We next analyze ab initio-generated amorphous structures to investigate the
short- and medium-range structural order across a sample of stoichiometric
Nb2O5 amorphous configurations. Figure 1(a) illustrates a representative
stoichiometric amorphous configuration of Nb2O5 containing 105 atoms in the
unit cell. The solid line indicates the Nb-Nb distance for edge sharing Nb
sites in Nb2O5, while the dashed line highlights the longer Nb-Nb distances
for corner sharing Nb sites. These features are also present in Figure 1(c),
where we show the averaged radial distribution function (RDF) obtained from
nine stoichiometric amorphous configurations of Nb2O5 whose volume and
internal coordinates were optimized using DFT. We see from the first peak that
the shorter edge sharing Nb sites are typically 3.15 Å apart, and the longer
corner sharing Nb sites are 3.8 Å apart as indicated by the second peak. The
immediate dip of the RDF at 4 Å suggests that the edge- and corner-sharing
environments shown in Figure 1(a) are the primary structural motifs present in
our amorphous Nb2O5. Given the reasonable comparison between ab initio-
generated amorphous structures and FEM analysis of our Nb oxide thin films, we
can conclude that indeed our films are amorphous, lacking any long-range
order, and that our generated structures can be used for further analysis.
Additionally, we find the average Nb-Nb distance in the FEM measurement to be
3.37 Å, which is between the average corner- and edge-shared Nb-Nb distances
in the ab initio structures, suggesting our amorphous films comprise a mix of
corner- and edge-sharing polyhedra.
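For illustration, the Nb-Nb radial distribution function underlying Figure
1(c) can be computed from a relaxed configuration along the following lines.
This is a minimal sketch assuming a cubic periodic cell for simplicity (the
relaxed DFT cells need not be cubic), and the published RDF averages over nine
configurations:

```python
import numpy as np

def pair_rdf(positions, box, r_max=6.0, dr=0.05):
    """g(r) for one species (e.g. Nb) in a periodic cubic box of side
    `box` (Angstrom), using the minimum-image convention.

    positions : (N, 3) Cartesian coordinates in Angstrom
    """
    n = len(positions)
    bins = np.arange(0.0, r_max + dr, dr)
    hist = np.zeros(len(bins) - 1)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)               # minimum-image convention
        hist += np.histogram(np.linalg.norm(d, axis=1), bins=bins)[0]
    r = 0.5 * (bins[:-1] + bins[1:])
    shell_vol = 4.0 * np.pi * r**2 * dr
    rho = n / box**3                               # number density
    return r, 2.0 * hist / (n * rho * shell_vol)   # factor 2: pairs counted once
```

In the structures discussed above, the first and second peaks of the Nb-Nb
g(r), at 3.15 Å and 3.8 Å, correspond to the edge- and corner-sharing motifs
of Figure 1(a).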
Figure 2: Measured XAS spectra of O K-edge for the three different samples
described in Table 1. Inset: Sketch of the electronic structure of
octahedrally coordinated Nb forming $t_{2g}$ and $e_{g}$ split orbital sets.
These hybridize with the unoccupied O orbitals that are excited upon X-ray
absorption, creating the observed splitting between the peaks in the data.
Spectra were normalized by matching the baselines, and dividing by the maximum
value in the 525 eV to 550 eV window.
We next use X-ray absorption spectroscopy (XAS) to obtain information about
the local morphology, electronic structure, and potential magnetism. We focus
on O K-edge spectra for three different NbOx samples, which are described in
Table 1. While XAS of the O K edge probes unoccupied p-type states surrounding
the oxygen atoms, these states are hybridized with the neighboring Nb, and so
provide information on both the Nb and O species. In Figure 2 we plot the
measured XAS for the three samples, and find the XAS is similar for all three.
As is typical in transition metal oxides, we identify the two peaks at 533 eV
and 537 eV as hybridized with the empty Nb 4d orbitals, split by the crystal
field splitting $\Delta$ into lower-energy $t_{2g}$ and higher-energy $e_{g}$
states. The broad feature near 544 eV reflects hybridization with Nb 5sp-like
states. Changes in the relative intensities of the $t_{2g}$ and $e_{g}$ peaks,
in the splitting $\Delta$ between them, and (less reliably) in the position of
the edge onset reflect changes in the Nb d-manifold occupation, the strength
of the Nb-O bonding, and the oxidation state of the metal ion, respectively
Frati, Hunault, and de Groot (2020).
Sample | NbOx Thickness | Description
---|---|---
1 | 3 nm | Unpatterned, oxidized Nb film.
2 | 15 nm | Nb film fabricated with qubits including AlOx Josephson junctions.
3 | 5 nm | Nb film fabricated with resonators only (no Josephson junctions).
Table 1: Summary of sample details used in experiments. XAS measurements were
performed on all three samples, whereas FEM measurements were performed on
Sample 2.
Comparing the XAS spectra of the three measured samples shows that the
unpatterned film (Sample 1) and the resonator chip (Sample 3) are the most
similar. We observe a slight increase in energy of the peak near 537 eV, and a
slight increase in intensity of the broad feature near 544 eV for the
resonator sample (Sample 3) compared to the unpatterned sample (Sample 1). The
qubit sample (Sample 2) has the largest NbOx thickness ($\sim 15$ nm), and
largest increase in energy of the 537 eV peak. The observed increase in the
energy of the 537 eV peak in the patterned samples suggests a greater crystal-
field splitting, and hence more crystalline character, compared to the
unpatterned films.
We use a combination of DFT and BSE calculations to further analyze the XAS
spectra. We calculate spectra for fifteen different Nb2O5 amorphous
configurations (both stoichiometric and non-stoichiometric), and five
different crystalline phases of Nb-oxides. In Figure 3(a), we plot the
calculated crystalline spectra for NbO ($Pm3m$), NbO2 ($P4_{2}/mnm$), and the
average of 3 different Nb2O5 phases (N-phase ($C_{2}/m$), M-phase ($I4/mmm$),
and B-phase ($C_{2}/c$)), and compare to the experimental spectrum of Sample 1.
We find that the measured XAS spectra are best described by Nb2O5. The
splitting between the two dominant peaks is larger in the crystalline
reference samples, while the relative heights of the two dominant peaks is
qualitatively described by Nb2O5, suggesting amorphous structures with Nb2O5
stoichiometry. We further find that as the oxidation state of the Nb atom
increases ($+2$ in NbO to $+5$ in Nb2O5), both the intensity of the first peak
and the ratio of the intensity of the first peak to the second peak increase.
This is partially explained by considering the resulting filling of
the $t_{2g}$ and $e_{g}$ states of an octahedrally coordinated Nb atom (see
inset of Figure 2); NbO deviates slightly from the trend because the
coordination of the Nb atoms is square planar.
Figure 3: (a) Calculated XAS spectra for crystalline NbO, NbO2 and Nb2O5
(averaged over all three calculated phases) and XAS measurements of the O K
edge of Sample 1. (b) Calculated XAS spectra for crystalline Nb2O5 in the N-,
M-, and B-phases, a representative ab initio generated amorphous structure,
and XAS measurements of Sample 1. Experimental data is normalized by rigidly
shifting the spectrum to the relative scale, removing the background signal,
and normalizing the heights to be comparable to our XAS calculations.
In Figure 3(b), we compare the calculated XAS spectra for three different
crystalline polymorphs and an amorphous structure of stoichiometric Nb2O5 to
the measured XAS of Sample 1. We choose Sample 1 since we anticipate the
oxidized film with no additional fabrication steps is most similar to a
completely amorphous phase. We find that both the average amorphous spectrum
and the crystalline N-phase spectrum are most similar to the experimental
spectrum from Sample 1. Of the crystalline phases, we find that the N-phase
best agrees with the XAS measurements, but the calculation shows a larger
splitting between the two dominant peaks than the measured spectrum. This is
the case for all of the considered crystalline phases of Nb2O5 (Figure 3(b)),
which is caused by the crystalline order increasing the crystal field
splitting.
Figure 4: (a) Comparison between stoichiometric, oxygen-deficient, and oxygen-
rich amorphous calculated spectra versus the experimental spectrum of sample
1. (b) Statistical analysis of the expected relative changes in structural
descriptors given the three experimental XAS spectra. The highest variance
descriptors are on the left and lowest variance descriptors are on the right.
Previous works suggest magnetic impurities contribute to impedance-based
losses in superconducting qubits Kharitonov _et al._ (2012); Sheridan _et
al._ (2021). In particular, $d$-type magnetic impurities on Nb atoms were
found to be more detrimental than $p$-type impurities on O atoms in Nb
oxides Sheridan _et al._ (2021). To investigate if our XAS measurements can
identify a low density of magnetic impurities, we compare our calculated XAS
spectra with those measured. We divide our calculations into three groups:
stoichiometric Nb2O5, oxygen rich Nb2O5 (includes oxygen interstitials or Nb
vacancies), and oxygen poor Nb2O5 (includes oxygen vacancies), with the results
given in Figure 4. As expected, we find a pre-edge feature in the oxygen rich
calculations coming from O-O dangling bonds, which gives rise to p-type
magnetic impurities. However, such a pre-peak feature is not observed in any
of the measured XAS spectra, so we can conclude that there is not a
significant density of p-type magnetic impurities in our measured samples. We
find a slightly better agreement between the oxygen-poor calculated spectra
and the measured spectra, with the shape of the second peak at $\approx 13$ eV
being a closer match in this case. This suggests the presence of d-type magnetic
impurities associated with oxygen poor (and Nb rich) samples. To further
quantify this, we perform statistical analysis on the 1250 distinct calculated
atomic spectra and compare them to the measured XAS to correlate spectral
changes with structural and magnetic changes (details in the SI). These
results suggest that small densities of d-type magnetic impurities are present
in all three measured samples, with estimated densities of 1.8$\times
10^{22}$, 1.7$\times 10^{22}$, and 1.5$\times 10^{22}$ d-type impurities per
mol, formed from oxygen vacancies, in Samples 1, 2, and 3, respectively.
This density of magnetic moments is higher than previous magnetic measurements
on bulk crystalline T- and TT-Nb2O5, where a density of $10^{21}$ –
$10^{22}$ effective magnetic moments per mol was estimated Herval _et al._
(2015). However, the greater density of magnetic moments in our samples is
consistent with the increased disorder and off-stoichiometry that we expect in
our amorphous surface oxides. To elucidate the structural changes amongst the
different samples that are correlated with changes in the XAS spectra, we
calculate the conditional mean for each structural and magnetic descriptor
given the experimental spectrum. The relative expected changes for the most
and least varying descriptors are plotted in Figure 4b. The shape descriptors
(volume, area, etc.) refer to Voronoi polyhedra constructed around each atom,
$\sigma$ values refer to distortions within those descriptors (full
descriptions are in the SI). We find that the fabrication procedure has a
reasonably large effect on the p-type impurity density along with the shape
descriptors of both Nb and O atoms. Bond length and coordination
characteristics along with d-type impurity density showed less variation
amongst the samples.
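A minimal sketch of this conditional-mean analysis is given below. The
Gaussian weighting of the calculated atomic spectra by their mismatch to the
measured spectrum, and the noise scale `sigma`, are illustrative assumptions
rather than the exact procedure (which is detailed in the SI):

```python
import numpy as np

def conditional_means(calc_spectra, descriptors, exp_spectrum, sigma=0.05):
    """E[descriptor | measured spectrum] as a weighted average over the
    calculated atomic spectra.

    calc_spectra : (n_calc, n_energy) calculated XAS spectra
    descriptors  : (n_calc, n_desc) structural/magnetic descriptors
    exp_spectrum : (n_energy,) measured spectrum on the same energy grid
    """
    mismatch = np.mean((calc_spectra - exp_spectrum) ** 2, axis=1)
    logw = -mismatch / (2.0 * sigma**2)            # Gaussian log-weights
    w = np.exp(logw - logw.max())
    w /= w.sum()                                   # normalized posterior weights
    return w @ descriptors                         # conditional mean, per descriptor
```

Repeating this for each of the three measured spectra and comparing the
resulting expectations descriptor by descriptor yields the relative changes
plotted in Figure 4b.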
In summary, our FEM measurements confirm the lack of long-range order in a
representative Nb-oxide film, observed from the broad halo in the average
speckle pattern of the FEM image. Comparing the calculated RDF of our ab
initio-generated stoichiometric Nb2O5 amorphous structures to the angular
average of the FEM pattern indicates that our generated amorphous
configurations are a good representation of the distribution of structures
observed in real Nb-oxide films, containing a mix of edge and corner-sharing
polyhedra motifs. We next compared our measured XAS spectra for a selection of
Nb-oxide samples (Table 1) to first-principles calculations of both
crystalline and amorphous Nb-oxide compounds, of which the amorphous phase
most closely matched the data, which is consistent with our FEM results and
prior elemental analysis of Nb-oxide films Altoé _et al._ (2020). Finally, we
analyze our first-principles predictions for signatures of magnetic impurities
in the amorphous configurations to identify experimental markers of these
magnetic impurities in the XAS spectra. We find a better fit of the XAS
spectra for Nb2O5 configurations with oxygen vacancies, suggesting the
presence of d-type magnetic impurities. We find no evidence for pre-edge
impurity states associated with p-type magnetic impurities. Our results give
an estimate of the density of decoherence-inducing local magnetic moments, and
suggest experimental fingerprints for the characterization of superconducting
thin films using spectroscopic approaches.
## Data Availability
The data that support the findings of this study are openly available in
Zenodo at Ref. Harrelson _et al._ (2021b).
## Acknowledgments
We thank John Clarke and David Santiago for useful discussions. Specific
software and hardware is identified for information purposes only and is not
intended to imply recommendation or endorsement by NIST. This work was funded
by the U.S. Department of Energy, Office of Science, Office of Basic Energy
Sciences, Materials Sciences and Engineering Division under Contract No. DE-
AC02-05-CH11231 “High-Coherence Multilayer Superconducting Structures for
Large Scale Qubit Integration and Photonic Transduction program (QIS-LBNL)”.
This research used resources of the National Energy Research Scientific
Computing Center (NERSC), a U.S. Department of Energy Office of Science User
Facility located at Lawrence Berkeley National Laboratory, operated under
Contract No. DE-AC02-05CH11231. E.S. acknowledges support from the US-Irish
Fulbright Commission, the Air Force Office of Scientific Research under award
number FA9550-18-1-0480 and the EPSRC Centre for Doctoral Training in Cross-
Disciplinary Approaches to Non-Equilibrium Systems (EP/L015854/1). This work
also used the Extreme Science and Engineering Discovery Environment (XSEDE),
which is supported by National Science Foundation grant number ACI-1548562.
Electron microscopy data acquisition for this work was supported by National
Science Foundation STROBE Grant No. DMR-1548924. Work at the Molecular Foundry
was supported by the Office of Science, Office of Basic Energy Sciences, of
the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This
research used resources of the Advanced Light Source, which is a DOE Office of
Science User Facility under Contract no. DE-AC02-05CH11231.
## Author Declarations
The authors have no conflicts to disclose.
## References
* Devoret and Schoelkopf (2013) M. H. Devoret and R. J. Schoelkopf, “Superconducting circuits for quantum information: an outlook,” Science 339, 1169–1174 (2013).
* Kjaergaard _et al._ (2020) M. Kjaergaard, M. E. Schwartz, J. Braumüller, P. Krantz, J. I.-J. Wang, S. Gustavsson, and W. D. Oliver, “Superconducting qubits: Current state of play,” Annual Review of Condensed Matter Physics 11, 369–395 (2020).
* McDermott (2009) R. McDermott, “Materials origins of decoherence in superconducting qubits,” IEEE Trans. Appl. Supercond. 19, 2–13 (2009).
* Oliver and Welander (2013) W. D. Oliver and P. B. Welander, “Materials in superconducting quantum bits,” MRS Bull. 38 (2013), 10.1557/mrs.2013.229.
* de Leon _et al._ (2021) N. P. de Leon, K. M. Itoh, D. Kim, K. K. Mehta, T. E. Northup, H. Paik, B. S. Palmer, N. Samarth, S. Sangtawesin, and D. W. Steuerman, “Materials challenges and opportunities for quantum computing hardware,” Science 372 (2021), 10.1126/science.abb2823.
* Müller, Cole, and Lisenfeld (2019) C. Müller, J. H. Cole, and J. Lisenfeld, “Towards understanding two-level-systems in amorphous solids: Insights from quantum circuits,” Reports on Progress in Physics 82, 1–34 (2019).
* Phillips (1987) W. A. Phillips, “Two-level states in glasses,” Rep. Prog. Phys. 50, 1657–1708 (1987).
* Burnett, Faoro, and Lindström (2016) J. Burnett, L. Faoro, and T. Lindström, “Analysis of high quality superconducting resonators: consequences for TLS properties in amorphous oxides,” Supercond. Sci. Technol. 29, 044008 (2016).
* Wilen _et al._ (2020) C. D. Wilen, S. Abdullah, N. A. Kurinsky, C. Stanford, L. Cardani, G. D’Imperio, C. Tomei, L. Faoro, L. B. Ioffe, C. H. Liu, A. Opremcak, B. G. Christensen, J. L. DuBois, and R. McDermott, “Correlated charge noise and relaxation errors in superconducting qubits,” (2020), arXiv:2012.06029 [quant-ph] .
* Cardani _et al._ (2020) L. Cardani, F. Valenti, N. Casali, G. Catelani, T. Charpentier, M. Clemenza, I. Colantoni, A. Cruciani, L. Gironi, L. Grünhaupt, D. Gusenkova, F. Henriques, M. Lagoin, M. Martinez, G. Pettinari, C. Rusconi, O. Sander, A. V. Ustinov, M. Weber, W. Wernsdorfer, M. Vignati, S. Pirro, and I. M. Pop, “Reducing the impact of radioactivity on quantum circuits in a deep-underground facility,” (2020), arXiv:2005.02286 [cond-mat.supr-con] .
* Kharitonov _et al._ (2012) M. Kharitonov, T. Proslier, A. Glatz, and M. J. Pellin, “Surface impedance of superconductors with magnetic impurities,” Phys. Rev. B Condens. Matter 86, 024514 (2012).
* Proslier _et al._ (2011) T. Proslier, M. Kharitonov, M. Pellin, J. Zasadzinski, and G. Ciovati, “Evidence of surface paramagnetism in niobium and consequences for the superconducting cavity surface impedance,” IEEE Trans. Appl. Supercond. 21, 2619–2622 (2011).
* Sheridan _et al._ (2021) E. Sheridan, T. F. Harrelson, E. Sivonxay, K. A. Persson, M. V. P. Altoe, I. Siddiqi, D. F. Ogletree, D. I. Santiago, and S. M. Griffin, “Microscopic theory of magnetic Disorder-Induced decoherence in superconducting nb films,” (2021).
* Vepsäläinen _et al._ (2020) A. P. Vepsäläinen, A. H. Karamlou, J. L. Orrell, A. S. Dogra, B. Loer, F. Vasconcelos, D. K. Kim, A. J. Melville, B. M. Niedzielski, J. L. Yoder, S. Gustavsson, J. A. Formaggio, B. A. VanDevender, and W. D. Oliver, “Impact of ionizing radiation on superconducting qubit coherence,” Nature 584, 551–556 (2020).
* Martinis (2020) J. M. Martinis, “Saving superconducting quantum processors from qubit decay and correlated errors generated by gamma and cosmic rays,” (2020), arXiv:2012.06137 [quant-ph] .
* Place _et al._ (2021) A. P. M. Place, L. V. H. Rodgers, P. Mundada, B. M. Smitham, M. Fitzpatrick, Z. Leng, A. Premkumar, J. Bryon, A. Vrajitoarea, S. Sussman, G. Cheng, T. Madhavan, H. K. Babla, X. H. Le, Y. Gang, B. Jäck, A. Gyenis, N. Yao, R. J. Cava, N. P. de Leon, and A. A. Houck, “New material platform for superconducting transmon qubits with coherence times exceeding 0.3 milliseconds,” Nat. Commun. 12, 1779 (2021), arXiv:2003.00024 [quant-ph] .
* Kaplan _et al._ (1976) S. B. Kaplan, C. Chi, D. Langenberg, J.-J. Chang, S. Jafarey, and D. Scalapino, “Quasiparticle and phonon lifetimes in superconductors,” Physical Review B 14, 4854 (1976).
* Delheusy _et al._ (2008) M. Delheusy, A. Stierle, N. Kasper, R. Kurta, A. Vlad, H. Dosch, C. Antoine, A. Resta, E. Lundgren, and J. Andersen, “X-ray investigation of subsurface interstitial oxygen at nb/oxide interfaces,” Applied Physics Letters 92, 101911 (2008).
* Altoé _et al._ (2020) M. V. P. Altoé, A. Banerjee, C. Berk, A. Hajr, A. Schwartzberg, C. Song, M. A. Ghadeer, S. Aloni, M. J. Elowson, J. M. Kreikebaum, E. K. Wong, S. Griffin, S. Rao, A. Weber-Bargioni, A. M. Minor, D. I. Santiago, S. Cabrini, I. Siddiqi, and D. F. Ogletree, “Localization and reduction of superconducting quantum coherent circuit losses,” (2020), arXiv:2012.07604 .
* Mergenthaler _et al._ (2021) M. Mergenthaler, S. Paredes, P. Müller, C. Müller, S. Filipp, M. Sandberg, J. Hertzberg, V. P. Adiga, M. Brink, and A. Fuhrer, “Ultrahigh vacuum packaging and surface cleaning for quantum devices,” Review of Scientific Instruments 92, 025121 (2021).
* Verjauw _et al._ (2021) J. Verjauw, A. Potočnik, M. Mongillo, R. Acharya, F. Mohiyaddin, G. Simion, A. Pacco, T. Ivanov, D. Wan, A. Vanleenhove, _et al._ , “Investigation of microwave loss induced by oxide regrowth in high-q niobium resonators,” Physical Review Applied 16, 014018 (2021).
* Romanenko and Schuster (2017) A. Romanenko and D. I. Schuster, “Understanding quality factor degradation in superconducting niobium cavities at low microwave field amplitudes,” Phys. Rev. Lett. 119, 264801 (2017).
* Romanenko _et al._ (2020) A. Romanenko, R. Pilipenko, S. Zorzetti, D. Frolov, M. Awida, S. Belomestnykh, S. Posen, and A. Grassellino, “Three-Dimensional superconducting resonators at $T<20$ mK with photon lifetimes up to $\tau=2$ s,” Phys. Rev. Applied 13, 034032 (2020).
* Heinrich, Pascual, and Franke (2018) B. W. Heinrich, J. I. Pascual, and K. J. Franke, “Single magnetic adsorbates on s-wave superconductors,” Progress in Surface Science 93, 1–19 (2018).
* Voyles and Muller (2002) P. Voyles and D. Muller, “Fluctuation microscopy in the STEM,” Ultramicroscopy 93, 147–159 (2002).
* Kresse and Hafner (1993) G. Kresse and J. Hafner, “Ab initio molecular dynamics for liquid metals,” Physical Review B 47, 558–561 (1993).
* Harrelson _et al._ (2021a) T. F. Harrelson, E. Sheridan, E. Sivonxay, K. A. Persson, and S. M. Griffin, “Amorphous Niobium Oxide Structures Calculated from First Principles using Density Functional Theory and Molecular Dynamics,” (2021a), 10.5281/ZENODO.5139270.
* Vinson _et al._ (2011) J. Vinson, J. J. Rehr, J. J. Kas, and E. L. Shirley, “Bethe-salpeter equation calculations of core excitation spectra,” Phys. Rev. B 83, 115106 (2011).
* Gilmore _et al._ (2015) K. Gilmore, J. Vinson, E. Shirley, D. Prendergast, C. Pemmaraju, J. Kas, F. Vila, and J. Rehr, “Efficient implementation of core-excitation bethe-salpeter equation calculations,” Comput. Phys. Comm. 197, 109 – 117 (2015).
* (30) www.ocean-code.com v 2.9.7.
* Giannozzi _et al._ (2017) P. Giannozzi, O. Andreussi, T. Brumme, O. Bunau, M. B. Nardelli, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, M. Cococcioni, N. Colonna, I. Carnimeo, A. D. Corso, S. de Gironcoli, P. Delugas, R. A. DiStasio, A. Ferretti, A. Floris, G. Fratesi, G. Fugallo, R. Gebauer, U. Gerstmann, F. Giustino, T. Gorni, J. Jia, M. Kawamura, H.-Y. Ko, A. Kokalj, E. Küçükbenli, M. Lazzeri, M. Marsili, N. Marzari, F. Mauri, N. L. Nguyen, H.-V. Nguyen, A. O. de-la Roza, L. Paulatto, S. Poncé, D. Rocca, R. Sabatini, B. Santra, M. Schlipf, A. P. Seitsonen, A. Smogunov, I. Timrov, T. Thonhauser, P. Umari, N. Vast, X. Wu, and S. Baroni, “Advanced capabilities for materials modelling with quantum ESPRESSO,” Journal of Physics: Condensed Matter 29, 465901 (2017).
* (32) www.quantum-espresso.org v 6.7.
* van Setten _et al._ (2018) M. van Setten, M. Giantomassi, E. Bousquet, M. Verstraete, D. Hamann, X. Gonze, and G.-M. Rignanese, “The pseudodojo: Training and grading a 85 element optimized norm-conserving pseudopotential table,” Computer Physics Communications 226, 39 – 54 (2018).
* (34) http://www.pseudo-dojo.org Scalar-relativistic v. 0.4.
* Hamann (2013) D. R. Hamann, “Optimized norm-conserving vanderbilt pseudopotentials,” Phys. Rev. B 88, 085117 (2013).
* (36) The open-source code oncvpsp is available at http://www.mat-simresearch.com v. 3.3.1.
* Voyles and Abelson (2003) P. M. Voyles and J. R. Abelson, “Medium-range order in amorphous silicon measured by fluctuation electron microscopy,” Solar energy materials and solar cells 78, 85–113 (2003).
* Daulton, Bondi, and Kelton (2010) T. Daulton, K. Bondi, and K. Kelton, “Nanobeam diffraction fluctuation electron microscopy technique for structural characterization of disordered materials — application to Al$_{88-x}$Y$_{7}$Fe$_{5}$Ti$_{x}$ metallic glasses,” Ultramicroscopy 110, 1279–1289 (2010).
* Kennedy _et al._ (2020) E. Kennedy, N. Reynolds, L. Rangel DaCosta, F. Hellman, C. Ophus, and M. Scott, “Tilted fluctuation electron microscopy,” Applied Physics Letters 117, 091903 (2020).
* Hwang and Voyles (2011) J. Hwang and P. Voyles, “Variable resolution fluctuation electron microscopy on cu-zr metallic glass using a wide range of coherent stem probe size,” Microscopy and Microanalysis 17, 67–74 (2011).
* Frati, Hunault, and de Groot (2020) F. Frati, M. O. Hunault, and F. M. de Groot, “Oxygen k-edge x-ray absorption spectra,” Chemical reviews 120, 4056–4110 (2020).
* Herval _et al._ (2015) L. K. Herval, D. Von Dreifus, A. C. Rabelo, A. D. Rodrigues, E. C. Pereira, Y. G. Gobato, A. J. De Oliveira, and M. P. De Godoy, “The role of defects on the structural and magnetic properties of Nb2O5,” Journal of Alloys and Compounds 653, 358–362 (2015).
* Harrelson _et al._ (2021b) T. F. Harrelson, J. Vinson, E. Sheridan, and S. M. Griffin, “Calculated O K-edge XAS Spectra of Niobium Oxide Phases using Bethe-Salpeter equation,” (2021b), 10.5281/ZENODO.5156863.
# A Minimal Approach to Baryogenesis via Affleck-Dine and Inflaton Mass Terms
Amy Lloyd-Stubbs and John McDonald<EMAIL_ADDRESS><EMAIL_ADDRESS>Dept. of Physics, Lancaster University, Lancaster
LA1 4YB, UK
###### Abstract
We present a minimal approach to the generation of the baryon ($B$) asymmetry
of the Universe, in which the asymmetry is generated in a complex inflaton
condensate via $B$-violating quadratic inflaton potential terms and the
Affleck-Dine (AD) mechanism. We show that the $B$-violating quadratic mass
terms create an oscillating asymmetry in the complex inflaton condensate at
late times. The final asymmetry transferred to the Standard Model sector at
reheating is naturally reduced to the magnitude of the observed $B$ asymmetry
by the effect of averaging over the $B$ oscillations. This approach to
baryogenesis can easily be realised in a wide range of inflation models.
## I Introduction
The Affleck-Dine (AD) mechanism [1, 2] provides a remarkably simple and
elegant explanation for the baryon ($B$) asymmetry of the Universe. A complex
scalar with a $U(1)$ global symmetry, corresponding to conserved baryon
number, evolves into a coherently oscillating condensate. $B$ violating terms
in the potential act on the field, pushing it into an elliptical trajectory in
the complex field plane, which is equivalent to a $B$ asymmetry in the scalar
field.
The conventional AD mechanism is based on a complex scalar field $\Phi$ with a
potential which at late times is dominated by a $|\Phi|^{2}$ mass term.
Higher-order operators that violate baryon number cause the real ($\phi_{1}$)
and imaginary ($\phi_{2}$) parts of $\Phi$ to evolve differently when the
$|\Phi|^{2}$ term comes to dominate the potential, pushing the trajectory into
an ellipse in the $(\phi_{1},\phi_{2})$ plane. The higher-order operators
become less important as the magnitude of $\Phi$ decreases due to expansion,
effectively switching off the $B$ violation and leaving a conserved baryon
asymmetry in the complex field at late times.
Here we will present a new and unconventional implementation of AD
baryogenesis, in which $B$-violating $\Phi^{2}$ terms in the potential of a
complex inflaton $\Phi$ generate the asymmetry111The same model can also be
used to generate a lepton asymmetry which is subsequently processed via
sphalerons into a baryon asymmetry.. (Applications of the conventional AD
mechanism to a complex inflaton have been considered in [3, 4, 5, 6, 7].) We
will show that these terms generate a $B$ asymmetry in the $\Phi$ condensate
which oscillates about zero. When the condensate asymmetry is transferred to
the Standard Model (SM) sector by $\Phi$ decay, a net asymmetry is left in the
SM sector. The oscillating baryon asymmetry initially generated in the $\Phi$
condensate is typically much larger than that required to explain the observed
baryon-to-entropy ratio. The asymmetry transferred to the SM is subsequently
suppressed by averaging over the condensate asymmetry oscillations, reducing
the asymmetry to the observed value222AD baryogenesis via mass terms has
previously been considered in the context of a different class of model in
[8]. The analysis of [8] assumes that the averaging over of asymmetry
oscillations washes out the final asymmetry. We will show that although the
asymmetry is suppressed, it is significantly non-zero. This suppression plays
an important role in the model described here.. The resulting model is
dynamically quite different from existing inflaton-based AD baryogenesis
models, with the inflaton asymmetry being generated at late times during
inflaton oscillations rather than during or shortly after inflation.
The paper is organised as follows. In Section 2 we discuss the generation of
the asymmetry via quadratic B-violating potential terms. In Section 3 we
consider possible washout of the asymmetry via inflaton exchange operators. In
Section 4 we discuss the validity of the classical calculation of the
asymmetry. In Section 5 we present our conclusions.
## II Affleck-Dine Baryogenesis via Quadratic Potential Terms
We will consider a renormalisable $B$ symmetric inflaton potential together
with $B$-violating $\Phi^{2}$ terms,
$V(\Phi)=m_{\Phi}^{2}|\Phi|^{2}+\lambda_{\Phi}|\Phi|^{4}-(A\Phi^{2}+{\rm h.\,c.})\,,$ (1)
where $A$ is real and positive. Such potentials are naturally compatible with
inflation models which are non-minimally coupled to gravity [9]. More
generally, they represent the leading order terms of an inflaton potential
during post-inflation evolution333Whilst the inflaton is the natural candidate
for the field responsible for reheating, we note that the model can apply to
any coherently oscillating complex scalar that is responsible for reheating..
$\Phi$ is initially coherently oscillating, with the potential dominated by
the $|\Phi|^{4}$ term and with no asymmetry in the field. In terms of
$\Phi=(\phi_{1}+i\phi_{2})/\sqrt{2}$, the potential becomes
$V(\Phi)=\frac{1}{2}(m_{\Phi}^{2}-2A)\phi_{1}^{2}+\frac{1}{2}(m_{\Phi}^{2}+2A)\phi_{2}^{2}+\frac{\lambda_{\Phi}}{4}(\phi_{1}^{2}+\phi_{2}^{2})^{2}\,.$
(2)
The field equations are
$\ddot{\phi}_{1}+3H\dot{\phi}_{1}=-m_{1}^{2}\phi_{1}-\lambda_{\Phi}(\phi_{1}^{2}+\phi_{2}^{2})\phi_{1}$
(3)
and
$\ddot{\phi}_{2}+3H\dot{\phi}_{2}=-m_{2}^{2}\phi_{2}-\lambda_{\Phi}(\phi_{1}^{2}+\phi_{2}^{2})\phi_{2}\,,$
(4)
where
$m_{1}^{2}=m_{\Phi}^{2}-2A\;\;\;;\;\;\;m_{2}^{2}=m_{\Phi}^{2}+2A\,.$
(5)
In the limit $\lambda_{\Phi}\rightarrow 0$ the equations for $\phi_{1}$ and
$\phi_{2}$ are decoupled from each other, with coherently oscillating
solutions for $\phi_{1}$ and $\phi_{2}$ which have angular frequencies $m_{1}$
and $m_{2}$, respectively.
We first derive an analytical expression for the asymmetry using a threshold
approximation, which we then compare to a complete numerical solution. In the
threshold approximation we consider the potential to be approximated by
$V(\Phi)=\lambda_{\Phi}|\Phi|^{4}\;\;\;;\;\;\;\phi>\phi_{*}$
$V(\Phi)=m_{\Phi}^{2}|\Phi|^{2}-(A\Phi^{2}+{\rm h.c.})\;\;\;;\;\;\;\phi<\phi_{*}\,,$ (6)
where $\phi_{*}=m_{\Phi}/\sqrt{\lambda_{\Phi}}$ is the value of $\phi$ at
which $V^{\prime}(\phi)$ becomes dominated by the $|\Phi|^{4}$ term (here
$\Phi=\phi e^{i\theta}/\sqrt{2}$ and we have set $A=0$ when determining
$\phi_{*}$). The potential is initially strongly dominated by the $|\Phi|^{4}$
term, with $\phi_{i}\gg\phi_{*}$, and the field is initially at rest with
initial values $(\phi_{1,\;i},\phi_{2,\;i})$. Assuming rapid coherent
oscillations, the field amplitude will initially evolve as $\phi\propto 1/a$
when $\phi>\phi_{*}$. Therefore the field amplitudes at $a_{*}$ are
$\phi_{1,\;*}=\left(\frac{a_{i}}{a_{*}}\right)\phi_{1,\;i}=\left(\frac{\phi_{*}}{\phi_{i}}\right)\phi_{1,\;i}\;\;\;;\;\;\;\phi_{2,\;*}=\left(\frac{a_{i}}{a_{*}}\right)\phi_{2,\;i}=\left(\frac{\phi_{*}}{\phi_{i}}\right)\phi_{2,\;i}\,,$
(7)
where $\phi_{i}=\left(\phi_{1,\;i}^{2}+\phi_{2,\;i}^{2}\right)^{1/2}$. The
field evolves purely due to the mass squared terms once $a>a_{*}$. We assume
that $m_{1,2}\gg H$, so that we can neglect the effect of expansion on the
rapid $\phi_{1,2}$ oscillations and simply factor in the effect of expansion
by damping the oscillation amplitude. The solution for $\phi_{1}$ and
$\phi_{2}$ is then
$\phi_{1}=\phi_{1,\;*}\left(\frac{a_{*}}{a}\right)^{3/2}\cos(m_{1}(t-t_{*}))\;\;\;;\;\;\;\phi_{2}=\phi_{2,\;*}\left(\frac{a_{*}}{a}\right)^{3/2}\cos(m_{2}(t-t_{*}))\,.$
(8)
The baryon asymmetry in the $\Phi$ condensate is
$n(t)=i\left(\Phi^{\dagger}\dot{\Phi}-\dot{\Phi}^{\dagger}\Phi\right)=\dot{\phi}_{1}\phi_{2}-\dot{\phi}_{2}\phi_{1}\,.$
(9)
Therefore
$n(t)=\phi_{1,\;*}\phi_{2,\;*}\left(\frac{a_{*}}{a}\right)^{3}\left[m_{2}\sin(m_{2}(t-t_{*}))\cos(m_{1}(t-t_{*}))-m_{1}\sin(m_{1}(t-t_{*}))\cos(m_{2}(t-t_{*}))\right]\,.$
(10)
We will assume that $2A\ll m_{\Phi}^{2}$. In this limit, to leading order in
$A/m_{\Phi}^{2}$, the condensate baryon asymmetry becomes
$n(t)=\phi_{1,\;*}\phi_{2,\;*}\left(\frac{a_{*}}{a}\right)^{3}\left[m_{\Phi}\sin\left(\frac{2A(t-t_{*})}{m_{\Phi}}\right)+\frac{A}{m_{\Phi}}\sin\left(2m_{\Phi}(t-t_{*})\right)\right]\,.$
(11)
During averaging over the $\phi_{1\,,2}$ coherent field oscillations, we can
consider the scale factor to be constant since $H\ll m_{\Phi}$. The second
term in Eq. (11) then averages to zero. The condensate asymmetry at $t>t_{*}$,
in terms of the initial field values, is then
$n(t)=\phi_{1,\;i}\phi_{2,\;i}\left(\frac{\phi_{i}}{\phi_{*}}\right)\left(\frac{a_{i}}{a}\right)^{3}m_{\Phi}\sin\left(\frac{2A(t-t_{*})}{m_{\Phi}}\right)\,.$
(12)
Thus the baryon asymmetry in the $\Phi$ condensate oscillates about zero with
period $T_{asy}=\pi m_{\Phi}/A$.
It is useful to define a comoving asymmetry
$n_{c}(t)\equiv(a(t)/a_{i})^{3}n(t)$, which is constant when there is no
production or decay of the asymmetry. For the threshold model at $t>t_{*}$
$n_{c}(t)=\phi_{1,\;i}\phi_{2,\;i}\left(\frac{\phi_{i}}{\phi_{*}}\right)m_{\Phi}\sin\left(\frac{2A(t-t_{*})}{m_{\Phi}}\right)\,,$
(13)
with $n_{c}(t)=0$ at $t<t_{*}$. The $\Phi$ condensate asymmetry is assumed to
transfer to a conserved SM baryon asymmetry via $B$-conserving $\Phi$ decays
to SM particles444A specific implementation of the model to baryogenesis from
AD leptogenesis via a decaying inflaton will be presented in a future work
[10]. Here we focus on the general features of inflaton mass term AD
baryogenesis.,555It is also possible for the inflaton to decay via gravity
mediated modes [11]. The importance of this process will depend upon the
coupling of the inflaton to the Ricci curvature in a given inflation model.
The condensate will decay away completely after a time
$t_{R}\approx\Gamma_{\Phi}^{-1}$, where $R$ denotes reheating, with continuous
production of SM baryon asymmetry due to decay of the condensate asymmetry
from $t_{*}$ to $t_{R}$. Neglecting any reduction of the $\Phi$ field due to
decays at $t<t_{R}$, the comoving baryon asymmetry transferred to the SM
sector, which we denote by $\hat{n}_{c}$(t), is
$\hat{n}_{c}(t)=\int_{t_{i}}^{t}\Gamma_{\Phi}n_{c}(t^{\prime})\,dt^{\prime}\,.$
(14)
Thus the comoving baryon asymmetry transferred out of the $\Phi$ condensate as
a function of $t$ is
$\hat{n}_{c}(t)=\frac{\Gamma_{\Phi}\phi_{1,\;i}\phi_{2,\;i}m_{\Phi}^{2}}{2A}\left(\frac{\phi_{i}}{\phi_{*}}\right)\left[1-\cos\left(\frac{2A(t-t_{*})}{m_{\Phi}}\right)\right]\,.$
(15)
$\hat{n}_{c}(t)$ increases linearly with $t-t_{*}$ until $t-t_{*}\approx\pi
m_{\Phi}/4A$. On longer timescales, $\hat{n}_{c}(t)$ oscillates between a
maximum value and zero with period $T_{asy}$. The maximum possible asymmetry
is obtained when $A=A_{max}=\pi m_{\Phi}\Gamma_{\Phi}/2$.
The $\Phi$ condensate decays away completely once
$t-t_{*}\;^{>}{}_{\sim}\;\Gamma_{\Phi}^{-1}$. To take into account the
B-conserving decay of the condensate asymmetry, we include in Eq. (14) an
exponential decay factor,
$\hat{n}_{c}(t)=\int_{t_{*}}^{t}\Gamma_{\Phi}n_{c}(t^{\prime})e^{-\Gamma_{\Phi}(t^{\prime}-t_{*})}\,dt^{\prime}\,.$
(16)
The total comoving asymmetry transferred to the SM sector as
$t\rightarrow\infty$ is then
$\hat{n}_{c,\;tot}=\frac{\Gamma_{\Phi}\phi_{1,\;i}\phi_{2,\;i}m_{\Phi}^{2}}{2A}\left(\frac{\phi_{i}}{\phi_{*}}\right)\left(1+\left(\frac{\Gamma_{\Phi}m_{\Phi}}{2A}\right)^{2}\right)^{-1}\,.$
(17)
The transferred asymmetry is proportional to $A$ until
$A>\Gamma_{\Phi}m_{\Phi}/2$, in which case $\tau_{\Phi}>T_{asy}$ and the
transferred asymmetry decreases as $A^{-1}$ and $\tau_{\Phi}^{-1}$, where
$\tau_{\Phi}=\Gamma_{\Phi}^{-1}$ is the lifetime of the $\Phi$ scalars. This
can be understood as due to the effect of averaging condensate oscillations
over the time taken for the condensate to decay. When $\tau_{\Phi}\gg
T_{asy}$, the asymmetry in the condensate will undergo many oscillations from
positive to negative values during the decay of the condensate. Therefore the
asymmetry produced during a positive half-cycle will almost cancel against
that produced during the following negative half-cycle, up to the effect of
the small decrease in the condensate asymmetry amplitude due to the decay of
the condensate during $\Delta t\sim T_{asy}$. Therefore only a small net
asymmetry is produced during each condensate oscillation cycle as compared to
the case with $T_{asy}\;^{>}{}_{\sim}\;\tau_{\Phi}$, where there is no
averaging over oscillations.
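Both regimes of Eq. (17) can be checked by direct quadrature of Eq. (16). In
the sketch below the overall prefactor
$\phi_{1,\;i}\phi_{2,\;i}\left(\phi_{i}/\phi_{*}\right)m_{\Phi}$ is set to
one, $m_{\Phi}=1$, and the two $(A,\Gamma_{\Phi})$ pairs are illustrative
choices probing $\tau_{\Phi}\gg T_{asy}$ and $\tau_{\Phi}\ll T_{asy}$:

```python
import numpy as np
from scipy.integrate import quad

m_Phi = 1.0
for A, Gamma in [(1e-3, 1e-4), (1e-4, 1e-3)]:
    omega = 2.0 * A / m_Phi                           # slow frequency of Eq. (13)
    f = lambda t: Gamma * np.sin(omega * t) * np.exp(-Gamma * t)
    n_hat, _ = quad(f, 0.0, 50.0 / Gamma, limit=2000) # Eq. (16), prefactor = 1
    x = Gamma * m_Phi / (2.0 * A)
    print(f"A={A:g}, Gamma={Gamma:g}:",
          f"numeric={n_hat:.4f}, Eq.(17)={x / (1.0 + x**2):.4f}")
```

The two columns agree, confirming the $A^{-1}$ suppression of the transferred
asymmetry once $\tau_{\Phi}\gg T_{asy}$.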
We first consider the case where the lifetime of $\Phi$ is much longer than
$T_{asy}$, such that $2A/m_{\Phi}\Gamma_{\Phi}\gg 1$. $\hat{n}_{c\;tot}$ can
then be expressed as
$\hat{n}_{c,\;tot}=\frac{\Gamma_{\Phi}\phi_{i}^{2}m_{\Phi}^{2}\sin\left(2\theta\right)}{4A}\left(\frac{\phi_{i}}{\phi_{*}}\right)\,,$
(18)
where $\theta$ is the initial phase of $\Phi$. The total baryon asymmetry
transferred to the SM, $\hat{n}_{tot}$, is then
$\hat{n}_{tot}=\left(\frac{a_{i}}{a_{R}}\right)^{3}\hat{n}_{c,\;tot}=\frac{3M_{Pl}^{2}\Gamma_{\Phi}^{3}\sin\left(2\theta\right)}{2A}\,,$
(19)
where we have used $a\propto H^{-2/3}$ when $a>a_{*}$ and $a\propto 1/\phi$
when $a<a_{*}$ to obtain the final expression. This can also be expressed in
terms of the baryon-to-entropy ratio, $n_{B}/s$. Using $s=4k_{T}^{2}T^{3}$ and
$\Gamma_{\Phi}=H_{R}=k_{T_{R}}T_{R}^{2}/M_{Pl}$, where $T_{R}$ is the
reheating temperature and $k_{T}=(\pi^{2}g(T)/90)^{1/2}$, the baryon-to-
entropy ratio is
$\frac{n_{B}}{s}\equiv\frac{\hat{n}_{tot}}{s}=\frac{3}{8}\frac{k_{T_{R}}T_{R}^{3}\sin\left(2\theta\right)}{AM_{Pl}}=5.2\times
10^{-21}\frac{m_{\Phi}^{2}}{A}\left(\frac{T_{R}}{10^{8}{\rm\
GeV}}\right)^{3}\left(\frac{10^{13}{\rm\
GeV}}{m_{\Phi}}\right)^{2}\sin\left(2\theta\right)\,,$ (20)
where we have normalised the expression to some representative
values666$T_{R}=10^{8}{\rm\ GeV}$ is within the range of reheating
temperatures that may be detectable in the spectrum of primordial
gravitational waves [12]. of $T_{R}$ and $m_{\Phi}$. The observed baryon-to-
entropy ratio is $(n_{B}/s)_{obs}=(0.861\pm 0.005)\times 10^{-10}$. In order to
account for the observed asymmetry, we require that
$\frac{A^{1/2}}{m_{\Phi}}=7.8\times
10^{-6}\sin^{1/2}\left(2\theta\right)\left(\frac{10^{13}{\rm\
GeV}}{m_{\Phi}}\right)\left(\frac{T_{R}}{10^{8}{\rm\
GeV}}\right)^{3/2}\,.$ (21)
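As a quick numerical cross-check of the coefficient in Eq. (20), the
following sketch assumes $g(T_{R})\approx 100$ and the reduced Planck mass
$M_{Pl}=2.435\times 10^{18}$ GeV (inputs not quoted explicitly in the text):

```python
import numpy as np

g, M_Pl = 100.0, 2.435e18                 # assumed g(T_R) and reduced M_Pl (GeV)
T_R, m_Phi, sin2theta = 1e8, 1e13, 1.0    # GeV, GeV, dimensionless
k_TR = np.sqrt(np.pi**2 * g / 90.0)       # k_T = (pi^2 g(T)/90)^(1/2)
A = m_Phi**2                              # so that m_Phi^2 / A = 1
nB_s = (3.0 / 8.0) * k_TR * T_R**3 * sin2theta / (A * M_Pl)
print(nB_s)                               # ~ 5e-21, reproducing Eq. (20)
```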
The maximum possible asymmetry, which corresponds to
$A=\Gamma_{\Phi}m_{\Phi}/2$ in Eq. (17), is
$\frac{n_{B,\;max}}{s}=\frac{3T_{R}\sin\left(2\theta\right)}{8m_{\Phi}}=3.8\times
10^{-6}\,\left(\frac{T_{R}}{10^{8}{\rm\ GeV}}\right)\left(\frac{10^{13}{\rm\
GeV}}{m_{\Phi}}\right)\sin\left(2\theta\right)\,.$ (22)
This can easily be much larger than the observed baryon asymmetry. Therefore
the suppression of the asymmetry by averaging over oscillations plays an
important role in this model.
In the case where $\Phi$ decays before any condensate asymmetry oscillations
can occur, corresponding to $\Gamma_{\Phi}m_{\Phi}/2A\gg 1$ in Eq. (17), the
total transferred asymmetry obtains an additional factor
$(2A/\Gamma_{\Phi}m_{\Phi})^{2}$ compared to Eq. (20). Therefore
$\frac{n_{B}}{s}=\frac{3}{2}\frac{AM_{Pl}\sin(2\theta)}{k_{T_{R}}T_{R}m_{\Phi}^{2}}$
(23)
and we find that the required value of $A^{1/2}/m_{\Phi}$ is
$\frac{A^{1/2}}{m_{\Phi}}=8.9\times
10^{-11}\,\left(\frac{T_{R}}{10^{8}{\rm\
GeV}}\right)^{1/2}\left(\frac{1}{\sin\left(2\theta\right)}\right)^{1/2}\,.$
(24)
This is typically much smaller than in the case with asymmetry oscillations,
due to the lack of additional suppression of the baryon asymmetry from
averaging over condensate oscillations.
The threshold asymmetry is a good approximation if the $B$-violating mass
terms do not cause the field to significantly evolve until the potential is
$|\Phi|^{2}$ dominated. The condition for this to be true, which we have
confirmed in our numerical solutions, is that the mass of the angular field
perturbations about the minimum of the potential as a function of $\theta$,
$m_{\delta\theta}=2A^{1/2}$, is less than $H$ when $\phi=\phi_{*}$. This is
satisfied if
$\frac{A^{1/2}}{m_{\Phi}}\;^{<}_{\sim}\;\frac{A^{1/2}_{th}}{m_{\Phi}}=\frac{m_{\Phi}}{4\sqrt{\lambda_{\Phi}}M_{Pl}}\equiv
1.0\times 10^{-6}\lambda_{\Phi}^{-1/2}\left(\frac{m_{\Phi}}{10^{13}\;{\rm\
GeV}}\right)\,.$ (25)
We finally compare the threshold approximation to the complete numerical
solution for the case $\Gamma_{\Phi}(t-t_{*})\ll 1$777Further details of the
numerical analysis will be presented in [10]. As an example, we show in Figure
1 the numerical results for the case $m_{\Phi}=10^{16}{\rm\ GeV}$ and
$\lambda_{\Phi}=0.1$ for a range of values of $A^{1/2}/m_{\Phi}$. The
analytical approximation in the left-hand figure is given by Eq. (13) and in
the right-hand figure by Eq. (15), with $\Gamma_{\Phi}$ corresponding to
$T_{R}=10^{8}{\rm\ GeV}$. For this case, the upper limit for the threshold
approximation to be valid is $A_{th}^{1/2}/m_{\Phi}\approx 3\times 10^{-3}$.
We find that the threshold approximation is in perfect agreement with the
numerical solution for both the condensate and transferred asymmetries when
Eq. (25) is satisfied. For larger $A^{1/2}/m_{\Phi}$, the evolution during the
$|\Phi|^{4}$ dominated era modifies the asymmetries. The amplitude of the
transferred asymmetry $A\hat{n}_{c}$ rapidly decreases with increasing
$A>A_{th}$ down to an approximately constant value, which is suppressed
relative to the threshold value of $A\hat{n}_{c}$ by a factor that numerically
is approximately $m_{\Phi}/10^{17}{\rm GeV}$. The transferred asymmetry
$A\hat{n}_{c}$ oscillates between zero and a maximum when the threshold
approximation is valid, but for larger $A^{1/2}/m_{\Phi}$ it oscillates about
zero. However, since the transferred asymmetry is the total asymmetry
transferred to the SM sector as a function of time after averaging over
condensate asymmetry oscillations, the oscillation of the transferred
asymmetry about zero has no impact on the typical magnitude of the baryon
asymmetry transferred to the SM.
Figure 1: The condensate asymmetry (left) and transferred asymmetry (right)
for the case $m_{\Phi}=10^{16}{\rm\ GeV}$, $\lambda_{\Phi}=0.1$ and
$T_{R}=10^{8}{\rm\ GeV}$. The threshold asymmetry and the numerical results
for $A^{1/2}/m_{\Phi}=0.001,0.005,0.007,0.01$ and 0.05 are shown. (The
numerical result for $A^{1/2}/m_{\Phi}=0.001$ coincides with the threshold
result, in agreement with Eq. (25).)
## III Baryon Washout due to Inflaton Exchange
In application to a specific model, the possible washout of the asymmetry must
be considered. The interaction which allows the decay of the inflaton will
generally result in a B-violating operator via $\Phi$ exchange. Dimensionally,
the rate of B-violating scattering processes at reheating due to $\phi_{1}$
and $\phi_{2}$ exchange is
$\Gamma_{\Delta B}\sim\frac{\lambda_{\psi}^{2}A^{2}T_{R}^{5}}{m_{\Phi}^{8}}\,,$
(26)
(26)
where $\lambda_{\psi}$ is the coupling responsible for $\Phi$ decay and $A$ is
necessary in the scattering amplitude in order to have B-violation. Washout
due to $\Phi$ exchange will be negligible if $\Gamma_{\Delta B}<H(T_{R})$,
which is satisfied if
$\lambda_{\psi}\lesssim\frac{m_{\Phi}^{4}}{M_{Pl}^{1/2}T_{R}^{3/2}A}=6\times
10^{4}\left(\frac{m_{\Phi}^{2}}{A}\right)\left(\frac{m_{\Phi}}{10^{13}{\rm\
GeV}}\right)^{2}\left(\frac{10^{8}{\rm\
GeV}}{T_{R}}\right)^{3/2}\,.$ (27)
The inflaton decay rate is
$\Gamma_{\Phi}\approx\lambda_{\psi}^{2}m_{\Phi}/4\pi$, therefore the reheating
temperature from $H(T_{R})=\Gamma_{\Phi}$ is
$T_{R}\approx\lambda_{\psi}(m_{\Phi}M_{Pl})^{1/2}$. Thus Eq. (27) is satisfied
if
$T_{R}\;^{<}{}_{\sim}\;\left(\frac{m_{\Phi}^{2}}{A}\right)^{2/5}m_{\Phi}\,,$
(28)
where $A<m_{\Phi}^{2}$. Therefore washout due to B-violating $\Phi$ exchange
is negligible if $T_{R}\;^{<}{}_{\sim}\;m_{\Phi}$, and so it is unlikely to
present a serious obstacle to this class of model. A complete analysis of
washout will depend upon the specific model for the decay of the inflaton and
the transfer of the baryon asymmetry.
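For orientation, the numerical coefficient in Eq. (27) can be reproduced with
the same assumed inputs as before (reduced Planck mass; the $\sim$ in Eq. (26)
means this is only an order-of-magnitude statement):

```python
# Coefficient of Eq. (27): m_Phi^4 / (M_Pl^(1/2) T_R^(3/2) A), with A = m_Phi^2.
M_Pl, T_R, m_Phi = 2.435e18, 1e8, 1e13         # GeV (reduced M_Pl assumed)
A = m_Phi**2
print(m_Phi**4 / (M_Pl**0.5 * T_R**1.5 * A))   # ~ 6e4, as quoted in Eq. (27)
```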
## IV Validity of the Classical Analysis of the Baryon Asymmetry
Throughout our analysis we have assumed that classical fields can be used to
calculate the baryon asymmetry. When the potential is dominated by quadratic
terms, the $\phi_{1}$ and $\phi_{2}$ fields evolve as independent non-
interacting coherently oscillating scalars. In general, a classical
oscillating scalar field corresponds to a quantum coherent state in the limit
where the occupation number of the state is large compared to one [13, 14].
The condition for this to be true is that $\phi_{i}>m_{\Phi}$ ($i=1,\,2$).
However, this is typically not satisfied at inflaton decay in the present
model. Nevertheless, the classical calculation of the baryon asymmetry remains
correct. This is because it is the coherent state corresponding to the
classical field that is important for AD baryogenesis.
By construction, the expectation value of the field operator $\hat{\phi}_{i}$
in the coherent state $|\phi_{i}(t)>$ is equal to the classical field
$\phi_{i,\;cl}(t)$
$<\phi_{i}(t)|\hat{\phi}_{i}|\phi_{i}(t)>=\phi_{i,\;cl}(t)\,.$
(29)
We have included a time dependence in the coherent state to take into account
the dilution of the number density by expansion. Since the scalar fields
$\phi_{1}$ and $\phi_{2}$ are independent fields, the coherent state of the
complex field is a product of the coherent states for $\phi_{1}$ and
$\phi_{2}$, $|\Phi(t)>=|\phi_{1}(t)>|\phi_{2}(t)>$. Therefore, with the baryon
number density operator given by
$\hat{n}=\hat{\dot{\phi}}_{1}\hat{\phi}_{2}-\hat{\dot{\phi}}_{2}\hat{\phi}_{1}$,
the expectation value of the baryon asymmetry in the coherent state is given
by
$<\Phi(t)|\hat{n}|\Phi(t)>=<\Phi(t)|\hat{\dot{\phi}}_{1}\hat{\phi}_{2}-\hat{\dot{\phi}}_{2}\hat{\phi}_{1}|\Phi(t)>=\dot{\phi}_{1,\;cl}\phi_{2,\;cl}-\dot{\phi}_{2,\;cl}\phi_{1,\;cl}\equiv
n_{cl}\,.$ (30)
Therefore the expectation value of the baryon number density operator is equal
to the baryon number density $n_{cl}$ calculated using the classical fields.
When $\phi_{i}<m_{\Phi}$, the variance of the field in the coherent state will
become large compared to the squared classical field. Therefore there will be
large quantum fluctuations of the fields about their expectation values and so
the field cannot be considered classical. However, the correlation length of
the quantum fluctuations cannot be larger than the horizon at inflaton decay.
Since the volume that evolves into the presently observed Universe will be
very much larger than the horizon volume at inflaton decay, the observed
baryon asymmetry will be given by its spatial average value and so will equal
the expectation value of the baryon asymmetry. Therefore the baryon asymmetry
will equal its classical value even when $\phi_{i}<m_{\Phi}$. This shows that
it is the coherent state describing the scalar field, rather than its
classical nature, that is essential for AD baryogenesis.
In reaching this conclusion we have assumed that the mean asymmetry
transferred from the condensate by decay is equal to the mean asymmetry in the
coherent state of the condensate and that there is no additional washout
effect due to the decay process. Condensate decay in this model occurs when
the occupation number is less than one, therefore the conventional classical
analysis based on production of particles due to a time-dependent classical
field is no longer valid. Whilst there is no obvious reason to expect an
additional source of washout due to the decay process when the coherent state
is no longer in the classical limit, this should be confirmed by a full
quantum field theory analysis.
## V Conclusions
We have presented a new minimal approach to baryogenesis which is based on
$B$-violating mass terms for the inflaton. The resulting model requires only
the addition of $B$-violating mass terms to an existing inflaton potential and
therefore can easily be realised in a wide range of inflation models. The
asymmetry is generated at late times during inflaton oscillations, in contrast
to existing inflaton-based AD baryogenesis models which generate the asymmetry
during or shortly after inflation. The model also provides exact analytical
expressions for the resulting baryon asymmetry.
In this analysis we have not addressed the question of baryon isocurvature
perturbations. We note that these can easily be controlled by including a
$\Phi^{4}+\Phi^{\dagger\;4}$ term in the potential which is significant during
inflation and becomes negligible after inflation, whilst leaving open the
possibility of observable isocurvature perturbations. A detailed
implementation of the mechanism to baryogenesis from AD leptogenesis via
inflaton decay, including a discussion of isocurvature perturbations, will be
presented in a future work [10]. The model also raises new questions regarding
the Affleck-Dine mechanism in the limit where the classical approximation is
no longer valid, which requires a dedicated analysis.
## Acknowledgements
The work of ALS is supported by STFC.
## References
* [1] I. Affleck and M. Dine, Nucl. Phys. B 249 (1985), 361-380 doi:10.1016/0550-3213(85)90021-5
* [2] M. Dine, L. Randall and S. D. Thomas, Nucl. Phys. B 458 (1996), 291-326 doi:10.1016/0550-3213(95)00538-2 [arXiv:hep-ph/9507453 [hep-ph]].
* [3] J. M. Cline, M. Puel and T. Toma, Phys. Rev. D 101 (2020) no.4, 043014 doi:10.1103/PhysRevD.101.043014 [arXiv:1909.12300 [hep-ph]]; J. M. Cline, M. Puel and T. Toma, JHEP 05 (2020), 039 doi:10.1007/JHEP05(2020)039 [arXiv:2001.11505 [hep-ph]].
* [4] Y. Y. Charng, D. S. Lee, C. N. Leung and K. W. Ng, Phys. Rev. D 80 (2009), 063519 doi:10.1103/PhysRevD.80.063519 [arXiv:0802.1328 [hep-ph]].
* [5] M. P. Hertzberg and J. Karouby, Phys. Lett. B 737 (2014), 34-38 doi:10.1016/j.physletb.2014.08.021 [arXiv:1309.0007 [hep-ph]]; M. P. Hertzberg and J. Karouby, Phys. Rev. D 89 (2014) no.6, 063523 doi:10.1103/PhysRevD.89.063523 [arXiv:1309.0010 [hep-ph]].
* [6] N. Takeda, Phys. Lett. B 746 (2015), 368-371 doi:10.1016/j.physletb.2015.05.039 [arXiv:1405.1959 [astro-ph.CO]].
* [7] C. M. Lin and K. Kohri, [arXiv:2003.13963 [hep-ph]].
* [8] E. Babichev, D. Gorbunov and S. Ramazanov, Phys. Lett. B 792 (2019), 228-232 doi:10.1016/j.physletb.2019.03.046 [arXiv:1809.08108 [astro-ph.CO]].
* [9] D. S. Salopek, J. R. Bond and J. M. Bardeen, Phys. Rev. D 40 (1989), 1753 doi:10.1103/PhysRevD.40.1753
* [10] A. Lloyd-Stubbs and J. McDonald, in progress.
* [11] Y. Ema, R. Jinno, K. Mukaida and K. Nakayama, JCAP 05 (2015), 038 doi:10.1088/1475-7516/2015/05/038 [arXiv:1502.02475 [hep-ph]].
* [12] K. Nakayama, S. Saito, Y. Suwa and J. Yokoyama, JCAP 06 (2008), 020 doi:10.1088/1475-7516/2008/06/020 [arXiv:0804.1827 [astro-ph]].
* [13] S. Davidson, Astropart. Phys. 65 (2015), 101-107 doi:10.1016/j.astropartphys.2014.12.007 [arXiv:1405.1139 [hep-ph]].
* [14] K. D. Lozanov, [arXiv:1907.04402 [astro-ph.CO]].
# Diophantine sets in general are Cantor sets
Fernando Argentieri
###### Abstract
Let ${\gamma}\in(0;\frac{1}{{2}}),{\tau}\geq 1$ and define the
“${\gamma},{\tau}$ Diophantine set” as:
$D_{\gamma,\tau}:=\\{{\alpha}\in(0;1):||q{\alpha}||\geq\frac{{\gamma}}{q^{{\tau}}}\quad\forall
q\in{N}\\},\qquad||x||:=\inf_{p\in{Z}}|x-p|.$
In this paper we study the topology of these sets and we show that, for large
${\tau}$ and for almost all ${\gamma}>0$, $D_{\gamma,\tau}$ is a Cantor set.
## 1 Introduction
Diophantine sets play an important role in dynamical systems, in particular,
in small divisors problems with applications to KAM theory, Aubry-Mather
theory, conjugation of circle diffeomorphisms, etc. (see, for example, [3],
[5], [9], [12], [13], [14], [16]).
The set $D_{\gamma,\tau}$ is compact and totally disconnected (since
$D_{\gamma,\tau}\cap{Q}=\emptyset$); however, these sets need not be Cantor
sets. In fact, in [17] we have shown various examples in which
$D_{\gamma,\tau}$ has isolated points. In this paper we prove the following:

Theorem. Let ${\tau}>\frac{3+\sqrt{17}}{2}$. Then, for almost all
${\gamma}>0$, $D_{\gamma,\tau}$ is a Cantor set.
By [6], for ${\tau}=1$ and $\frac{1}{3}<{\gamma}<\frac{1}{2}$,
$D_{\gamma,\tau}$ is countable (and non-empty for ${\gamma}>\frac{1}{3}$
sufficiently close to $\frac{1}{3}$). In particular, this result does not hold
for ${\tau}=1$. We expect that the condition ${\tau}>\frac{3+\sqrt{17}}{2}$
can be improved to ${\tau}>3$; however, it is not clear what the best constant
is. By the same proof, we can also show that, for fixed
${\tau}>\frac{3+\sqrt{17}}{2}$ and almost all ${\gamma}>0$, if ${\alpha}\in
D_{\gamma,\tau}$ and $U$ is an open neighborhood of ${\alpha}$, then
${\mu}(D_{\gamma,\tau}\cap U)>0$.
The paper is organized as follows: in the second section we give some basic
definitions and remarks, in the third section we prove our result, and in the
last section we present some natural questions.
## 2 Definitions and remarks
### 2.1 Definitions
* •
${N}:=\\{1,2,3,...\\}$, ${N}_{0}:=\\{0,1,2,3,...\\}$
* •
Given $a,b\in{Z}-\\{0\\}$, we indicate with $(a,b)$ the greatest common divisor
of $a$ and $b$.
* •
Let ${\alpha}$ be a real number. We indicate with $[{\alpha}]$ the integral
part of ${\alpha}$ and with $\\{{\alpha}\\}$ the fractional part of ${\alpha}$.
* •
Given E$\subseteq{{R}}$, we indicate with $\mathcal{I}$(E) the set of isolated
points of E.
* •
Given E$\subseteq{{R}}$, we indicate with $\mathcal{A}$(E) the set of
accumulation points of E.
* •
We say that E$\subseteq{{R}}$ is perfect if $\mathcal{A}$(E)=E.
* •
Given a Borel set E$\subseteq{{R}}$ we denote with ${\mu}$(E) the Lebesgue
measure of E.
* •
A topological space X is a totally disconnected space if the points are the
only connected subsets of X.
* •
$X\subseteq{R}$ is a Cantor set if it is closed, totally disconnected and
perfect.
* •
For $E\subseteq{R}^{n}$, $\dim_{H}E$ is the Hausdorff dimension of $E$.
* •
Given ${\alpha}\in{R}$ we define:
$||{\alpha}||:=\min_{p\in{Z}}|{\alpha}-p|$
* •
Given ${\gamma}>0,{\tau}\geq 1$, we define the $({\gamma},{\tau})$ Diophantine
points in $(0;1)$ as the numbers in the set:
$D_{\gamma,\tau}:=\\{{\alpha}\in(0;1):||q{\alpha}||\geq\frac{{\gamma}}{q^{{\tau}}}\quad\forall
q\in{N}\\}$
* •
$D^{{R}}_{{\gamma},{\tau}}:=\\{{\alpha}\in{R}:||q{\alpha}||\geq\frac{{\gamma}}{q^{\tau}}\quad\forall
q\in{N}\\},$ $D_{{\tau}}:=\bigcup_{{\gamma}>0}D_{{\gamma},{\tau}},\quad
D:=\bigcup_{{\tau}\geq 1}D_{{\tau}}.$
We call $D$ the set of Diophantine numbers.
* •
Given ${\tau}\geq 1,{\alpha}\in{R}$, we define:
$\gamma(\alpha,\tau):=\inf_{q\in{N}}q^{{\tau}}||q{\alpha}||$
* •
Given ${\alpha}\in{R}$ we define:
${\tau}({\alpha}):=\inf\{{\tau}\geq 1:\gamma(\alpha,\tau)>0\}$
* •
Given an irrational number ${\alpha}=[a_{0};a_{1},...]:=a_{0}+\frac{1}{{a_{1}+\frac{1}{{a_{2}+...}}}}$, we denote with $\{\frac{p_{n}}{q_{n}}\}_{n\in{N}_{0}}$ the convergents of ${\alpha}$, and we set ${\alpha}_{n}:=[a_{n};a_{n+1},...]$ (for background on continued fractions see [4], [8], [15]).
* •
We indicate with
$[a_{1},a_{2},a_{3},...]:=\frac{1}{{a_{1}+\frac{1}{{a_{2}+\frac{1}{{a_{3}+...}}}}}}$.
* •
Let ${\alpha}$ be an irrational number. We define:
${\gamma}_{n}({\alpha},{\tau}):=q_{n}^{{\tau}}||q_{n}{\alpha}||=q_{n}^{{\tau}}|q_{n}{\alpha}-p_{n}|$
* •
Let ${\tau}\geq 1$,
${\gamma}_{-}({\alpha},{\tau}):=\inf_{n\in
2{N}_{0}}{\gamma}_{n}({\alpha},{\tau}),$
${\gamma}_{+}({\alpha},{\tau}):=\inf_{n\in
2{N}_{0}+1}{\gamma}_{n}({\alpha},{\tau}),$
${\mathcal{D}_{{\tau}}}:=\{{\alpha}\in D_{{\tau}}:{\tau}({\alpha})={\tau}\},$
${\mathcal{I}}^{1}_{{\gamma},{\tau}}:=\{{\alpha}\in D_{{\gamma},{\tau}}:\exists n\not\equiv m\ (\mathrm{mod}\ 2),\ {\gamma}_{n}({\alpha},{\tau})={\gamma}_{m}({\alpha},{\tau})=\gamma(\alpha,\tau)\},$
${\mathcal{I}}^{2}_{{\gamma},{\tau}}:=\{{\alpha}\in D_{{\gamma},{\tau}}:\exists n\in{N}_{0},\ {\gamma}_{n}({\alpha},{\tau})={\gamma}({\alpha},{\tau})\}\cap({\mathcal{I}}^{1}_{{\gamma},{\tau}})^{c},$
${\mathcal{I}}^{3}_{{\gamma},{\tau}}:={\mathcal{I}}(D_{\gamma,\tau})\cap({\mathcal{I}}^{1}_{{\gamma},{\tau}}\cup{\mathcal{I}}^{2}_{{\gamma},{\tau}})^{c},$
${\mathcal{I}}^{1}_{{\tau}}:=\bigcup_{{\gamma}>0}{\mathcal{I}}^{1}_{{\gamma},{\tau}},\quad{\mathcal{I}}^{2}_{{\tau}}:=\bigcup_{{\gamma}>0}{\mathcal{I}}^{2}_{{\gamma},{\tau}},\quad{\mathcal{I}}^{3}_{{\tau}}:=\bigcup_{{\gamma}>0}{\mathcal{I}}^{3}_{{\gamma},{\tau}}.$
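To make these definitions concrete, the following minimal numerical sketch (Python; the helper names are ours, not from the paper) computes partial quotients, convergents $p_{n}/q_{n}$ and the quantities ${\gamma}_{n}({\alpha},{\tau})$ for a sample ${\alpha}$. By remark (f) in the next subsection, $\gamma(\alpha,\tau)$ is the infimum of the ${\gamma}_{n}$, so truncating the list gives a numerical approximation.

```python
import math

def partial_quotients(alpha, n):
    """First n partial quotients of alpha = [0; a_1, a_2, ...], alpha in (0,1)."""
    a, x = [], alpha
    for _ in range(n):
        x = 1.0 / x
        a.append(int(x))
        x -= a[-1]
    return a

def convergents(a):
    """Convergents p_k/q_k via the recursion p_k = a_k p_{k-1} + p_{k-2}."""
    p0, q0, p1, q1, out = 1, 0, 0, 1, []
    for ak in a:
        p0, q0, p1, q1 = p1, q1, ak * p1 + p0, ak * q1 + q0
        out.append((p1, q1))
    return out

def gamma_n(alpha, tau, p, q):
    """gamma_n(alpha, tau) = q_n^tau * |q_n*alpha - p_n| = q_n^tau * ||q_n*alpha||."""
    return q ** tau * abs(q * alpha - p)

alpha = (math.sqrt(5) - 1) / 2           # [0;1,1,1,...]: Fibonacci denominators
cs = convergents(partial_quotients(alpha, 20))
gammas = [gamma_n(alpha, 1.0, p, q) for p, q in cs]
print(min(gammas))                       # ~0.381966: approximates gamma(alpha, 1)
print(gammas[-1])                        # ~0.447214 = 1/sqrt(5): the limit value
# alpha belongs to D_{gamma,tau} iff gamma <= gamma(alpha,tau) (remark (f)):
print(all(g >= 1 / 3 for g in gammas))   # True: alpha in D_{1/3, 1}
```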
### 2.2 Remarks
We list here some simple remarks; for proofs see [17].
1. (a)
${\alpha}\in D_{\gamma,\tau}\iff 1-{\alpha}\in D_{\gamma,\tau}$.
2. (b)
$\gamma(\alpha,\tau)\leq\min\{{\alpha},1-{\alpha}\}.$
3. (c)
For fixed ${\tau}\geq 1$, ${\gamma}(\cdot,{\tau})$ maps $D_{{\tau}}$ into $(0,\frac{1}{2})$.
4. (d)
$D_{\gamma,\tau}^{{R}}=\bigcup_{n\in{Z}}(D_{\gamma,\tau}+n)$, thus we can restrict attention to the Diophantine points in $(0,1)$.
5. (e)
$\left\{\begin{array}{l}{\gamma}_{n}({\alpha},{\tau})=\frac{q_{n}^{{\tau}}}{{\alpha}_{n+1}q_{n}+q_{n-1}},\\ \frac{1}{{\gamma}_{n}({\alpha},{\tau})}=\frac{q_{n+1}}{q_{n}^{{\tau}}}+\frac{1}{{\alpha}_{n+2}q_{n}^{{\tau}-1}}\end{array}\right.$ (1)
(a worked instance is given right after this list)
6. (f)
$\gamma(\alpha,\tau)=\inf_{n\in{N}_{0}}{\gamma}_{n}({\alpha},{\tau})$.
7. (g)
If ${\tau}<{\tau}({\alpha})$, then $\gamma(\alpha,\tau)=0$; if ${\tau}>{\tau}({\alpha})$, then $\gamma(\alpha,\tau)>0$. Moreover, for ${\tau}>{\tau}({\alpha})$ the infimum is attained (it is a minimum).
8. (h)
${\alpha}\in{\mathcal{D}_{{\tau}}}\iff{\tau}({\alpha})={\tau}$ and
$\gamma(\alpha,\tau)>0$.
9. (i)
If ${\alpha}\in{\mathcal{I}}^{1}_{{\gamma},{\tau}}$, then ${\alpha}$ is an
isolated point of $D_{\gamma,\tau}$.
10. (j)
The cardinality of ${{\mathcal{I}}^{1}_{{\tau}}}$ is at most countable.
11. (k)
${\mu}({\mathcal{D}_{{\tau}}})=0$ for all ${\tau}\geq 1$.
12. (l)
${\gamma}_{0}({\alpha},{\tau})=\{{\alpha}\}$; in particular ${\gamma}_{0}({\alpha},{\tau})$ does not depend on ${\tau}$.
13. (m)
Let $\frac{p}{q}$ be a rational number. Then
${\alpha}\in D_{{\tau}}\iff\left\{{\alpha}+\frac{p}{q}\right\}\in D_{\tau},$ (2)
${\alpha}\in{\mathcal{D}_{{\tau}}}\iff\left\{{\alpha}+\frac{p}{q}\right\}\in{\mathcal{D}_{{\tau}}}.$ (3)
14. (n)
If ${\tau}>{\tau}({\alpha})$ and ${\gamma}_{-}({\alpha},{\tau})={\gamma}_{+}({\alpha},{\tau})$, then ${\alpha}\in{\mathcal{I}}^{1}_{{\tau}}$.
15. (o)
${\alpha}\in D_{\tau}\iff q_{n+1}=O(q_{n}^{{\tau}}).$
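As a worked instance of remarks (b), (e) and (f) (a direct computation, not taken from [17]), consider ${\alpha}=\frac{\sqrt{5}-1}{2}=[0;1,1,1,...]$ and ${\tau}=1$. Here ${\alpha}_{n+1}=\varphi:=\frac{1+\sqrt{5}}{2}$ for every $n$ and $\frac{q_{n-1}}{q_{n}}\rightarrow\varphi^{-1}$ (the $q_{n}$ are Fibonacci numbers), so by (e):
${\gamma}_{n}({\alpha},1)=\frac{q_{n}}{{\alpha}_{n+1}q_{n}+q_{n-1}}=\frac{1}{\varphi+\frac{q_{n-1}}{q_{n}}}\rightarrow\frac{1}{\varphi+\varphi^{-1}}=\frac{1}{\sqrt{5}},$
which is Hurwitz's constant. The infimum in (f) is instead attained at $n=1$, where $q_{1}=1$ and ${\gamma}_{1}({\alpha},1)=||{\alpha}||=1-{\alpha}=\frac{3-\sqrt{5}}{2}\approx 0.382$; this matches the upper bound $\min\{{\alpha},1-{\alpha}\}$ of remark (b), so $\gamma({\alpha},1)=\frac{3-\sqrt{5}}{2}$ (easily checked with the numerical sketch above).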
## 3 Proof of the Theorem
In the first part of this section we suppose, without loss of generality, that $n$ is always even. In fact, for $n$ odd it suffices to consider $1-{\alpha}$ (${\alpha}=[a_{0};...,a_{n},...]\in D_{\gamma,\tau}\iff 1-{\alpha}\in D_{\gamma,\tau}$, and the denominators of the odd convergents to $1-{\alpha}$ are the same as those of the even convergents to ${\alpha}$; hence, by symmetry, everything proved for $n$ even continues to hold for $n$ odd). Moreover, throughout this section $0<{\gamma}<\frac{1}{2}$ (otherwise $D_{\gamma,\tau}=\emptyset$). We want to prove that, for ${\tau}>\frac{3+\sqrt{17}}{2}$:
${\tau}>\frac{3+\sqrt{17}}{2}$:
${\mu}\left(\left\{0<{\gamma}<\frac{1}{2}:{\mathcal{I}}(D_{\gamma,\tau})\not=\emptyset\right\}\right)=0.$
By Remark (j) it is enough to prove it for
${\mathcal{I}}^{2}_{{\gamma},{\tau}}$ and
${\mathcal{I}}^{3}_{{\gamma},{\tau}}$. Observe that the isolated points of types 2 and 3 arise from infinitely many intersections of intervals centred at rational numbers $\frac{p}{q}$ of length $\frac{2{\gamma}}{q^{{\tau}+1}}$. Thus, the first step is to show that, given ${\alpha}\in D_{\gamma,\tau}$, it is enough (up to a set of measure zero, and for ${\tau}$ big enough) to control the intersections of the intervals centred at the convergents. The second step will be to show that, if the intervals centred at the convergents intersect, then the partial quotients of the continued fraction cannot grow too fast. In the final step we prove that, when the intervals centred at the convergents do not intersect and the convergents are large, the interval between two consecutive convergents (of the same parity) contains a Diophantine set of positive measure.
###### Lemma 1
Let ${\gamma}>0,{\tau}>1,{\alpha}\in D_{\gamma,\tau}$, $\frac{p_{n}}{q_{n}}$
the convergents to ${\alpha}$,
$I_{n}:=\left(\frac{p_{n}}{q_{n}},\frac{p_{n+2}}{q_{n+2}}\right).$
Suppose that $\exists N\in{N}$ such that, for all $n>N$ even:
$\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}<\frac{p_{n+2}}{q_{n+2}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}.$
(4)
For $n>N$ define
$A_{n}:=\left(\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}},\frac{p_{n+2}}{q_{n+2}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}\right).$
Moreover, suppose that for every $n$ (even):
${\alpha}-\frac{p_{n}}{q_{n}}>\frac{{\gamma}}{q_{n}^{{\tau}+1}}$ (5)
Then, there exists $N_{1}\in{N}$ such that, for all $n>N_{1}$:
$\frac{p}{q}\not\in I_{n}\ \Longrightarrow\
\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}},\frac{p}{q}-\frac{{\gamma}}{q^{{\tau}+1}}\not\in
A_{n}.$
Proof Note that it is enough to verify the inequality when $\frac{p}{q}<{\alpha}$. In fact the inequality is trivial if $\frac{p}{q}>{\alpha}$, because ${\alpha}\in D_{\gamma,\tau}$ implies $\frac{p}{q}-\frac{{\gamma}}{q^{{\tau}+1}}\geq{\alpha}>\frac{p_{n+2}}{q_{n+2}}+\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}$, where the second inequality follows from (5). By (4) it follows that $A_{n}\cap A_{m}=\emptyset$ for $n\not=m$,
with $n,m>N$ even. From
${\alpha}-\frac{p_{n}}{q_{n}}>\frac{{\gamma}}{q_{n}^{{\tau}+1}}$
for $n$ even, we get
$\max_{2n\leq
N}\frac{p_{2n}}{q_{2n}}+\frac{{\gamma}}{q_{2n}^{{\tau}+1}}=:C<{\alpha},$
from which it follows that there exists $N_{1}\in{N}$ such that for $n$ even,
$n>N_{1}$:
$\frac{p_{n}}{q_{n}}-\frac{{\gamma}}{q_{n}^{{\tau}+1}}>C.$
If $\frac{p}{q}=\frac{p_{m}}{q_{m}}\not\in I_{n}$ is an even convergent to ${\alpha}$, with $n>N_{2}:=\max\{N,N_{1}\}$, then for $m\leq N$ even:
$\frac{p_{m}}{q_{m}}<\frac{p_{n}}{q_{n}}.$
Moreover, by definition of $N_{1}$ it follows that:
$\frac{p_{m}}{q_{m}}+\frac{{\gamma}}{q_{m}^{{\tau}+1}}\leq
C<\frac{p_{n}}{q_{n}}-\frac{{\gamma}}{q_{n}^{{\tau}+1}},$
from which it follows that the Lemma holds if
$\frac{p}{q}=\frac{p_{m}}{q_{m}}$ is an even convergent to ${\alpha}$ with
$m\leq N$. If $m>N$ and $n>m$ is even:
$\frac{p_{m}}{q_{m}}+\frac{{\gamma}}{q_{m}^{{\tau}+1}}<\frac{p_{m+2}}{q_{m+2}}-\frac{{\gamma}}{q_{m+2}^{{\tau}+1}}\leq\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}$
while, for $n<m$ even:
$\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}>\frac{p_{m-2}}{q_{m-2}}+\frac{{\gamma}}{q_{m-2}^{{\tau}+1}}\geq\frac{p_{n+2}}{q_{n+2}}+\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}.$
So Lemma 1 is true if $\frac{p}{q}$ is an even convergent to ${\alpha}$. Thus, Lemma 1 remains to be verified when $\frac{p}{q}$ is not a convergent to ${\alpha}$. It is not restrictive to suppose that there exists $m\not=n$ even for which $\frac{p}{q}\in I_{m}$, otherwise Lemma 1 is trivial. Now we show that, for $m$ big enough:
that, for $m$ big enough:
$\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}},\frac{p}{q}-\frac{{\gamma}}{q^{{\tau}+1}}\in\left(\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}},\frac{p_{m+2}}{q_{m+2}}+\frac{{\gamma}}{q_{m+2}^{{\tau}+1}}\right)$
from which Lemma 1 follows immediately by (5). By the properties of the Farey sequence, for the rationals $\frac{p}{q}\in I_{m}$ we have $q>q_{m}$, so the inequality:
$\frac{p}{q}-\frac{{\gamma}}{q^{{\tau}+1}}>\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}$
holds. It remains to show that:
$\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}<\frac{p_{m+2}}{q_{m+2}}+\frac{{\gamma}}{q_{m+2}^{{\tau}+1}}.$
This inequality holds for $q\geq\frac{q_{m+2}}{2}$ and $m$ big enough. In
fact, in that case:
$\frac{p_{m+2}}{q_{m+2}}-\frac{p}{q}\geq\frac{1}{qq_{m+2}}>\frac{{\gamma}}{q^{{\tau}+1}}-\frac{{\gamma}}{q_{m+2}^{{\tau}+1}},$
that is true for $m$ big enough (because ${\tau}>1$). So, we can assume that $q_{m}<q<\frac{q_{m+2}}{2}$. Since we have assumed that $\frac{p}{q}$ is not a convergent, by Legendre's Theorem (see [8]) we have:
${\alpha}-\frac{p}{q}>\frac{1}{2q^{2}},$
while, since $\frac{p_{m+2}}{q_{m+2}}$ is a convergent, we have:
${\alpha}-\frac{p_{m+2}}{q_{m+2}}<\frac{1}{q_{m+2}^{2}}.$
So, putting together the two inequalities, if $q<\frac{q_{m+2}}{2}$:
$\frac{p_{m+2}}{q_{m+2}}-\frac{p}{q}=\frac{p_{m+2}}{q_{m+2}}-{\alpha}+{\alpha}-\frac{p}{q}>\frac{1}{2q^{2}}-\frac{1}{q_{m+2}^{2}}>-\frac{{\gamma}}{q_{m+2}^{{\tau}+1}}+\frac{{\gamma}}{q^{{\tau}+1}}\iff$
$\frac{1}{2q^{2}}-\frac{{\gamma}}{q^{{\tau}+1}}>\frac{1}{q_{m+2}^{2}}-\frac{{\gamma}}{q_{m+2}^{{\tau}+1}},$
which is true for $m$ big enough (it follows from $q_{m}<q<\frac{q_{m+2}}{2}$). So Lemma 1 is proved.
We know, by the properties of the Farey sequence, that $\frac{p}{q}\in I_{n}$ implies $q>q_{n+1}$. So, there are only finitely many $\frac{p}{q}\in I_{n}$ with $q<q_{n+2}$. In the next Lemma we control the distance between these numbers and $\frac{p_{n+2}}{q_{n+2}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}$.
###### Lemma 2
Let ${\gamma}>0$, ${\tau}>3,{\alpha}\in D_{\gamma,\tau},\frac{p_{n}}{q_{n}}$
the convergents to ${\alpha}$. There exists $N_{1}\in{N}$ such that, for
$n>N_{1}$:
$\frac{p}{q}\in I_{n},q<q_{n+2}\ \Longrightarrow\
\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}<\frac{p_{n+2}}{q_{n+2}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}-\frac{2{\gamma}}{q_{n+2}^{{\tau}-1}}.$
Proof Let $n>N$ and $\frac{p}{q}\in I_{n}$; by the definition of the convergents and the fact that $\frac{p_{n}}{q_{n}}<\frac{p}{q}<\frac{p_{n+2}}{q_{n+2}}$, we get that $\frac{p}{q}$ is not a convergent. If $q\geq\frac{q_{n+2}}{2}$ we get:
$\frac{p_{n+2}}{q_{n+2}}-\frac{p}{q}\geq\frac{1}{qq_{n+2}}\geq\frac{1}{q_{n+2}^{2}}>\frac{{\gamma}2^{{\tau}+1}}{q_{n+2}^{{\tau}+1}}+\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}+\frac{2{\gamma}}{q_{n+2}^{{\tau}-1}}\geq\frac{{\gamma}}{q^{{\tau}+1}}+\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}+\frac{2{\gamma}}{q_{n+2}^{{\tau}-1}}$
for $n$ big enough (because ${\tau}>3$). So, for $n$ big enough, the inequality remains to be proved for $q<\frac{q_{n+2}}{2}$. In that case:
$\frac{p_{n+2}}{q_{n+2}}-\frac{p}{q}=\frac{p_{n+2}}{q_{n+2}}-{\alpha}+{\alpha}-\frac{p}{q}>\frac{1}{2q^{2}}-\frac{1}{q_{n+2}^{2}}>\frac{{\gamma}}{q^{{\tau}+1}}+\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}+\frac{2{\gamma}}{q_{n+2}^{{\tau}-1}}\iff$
$\frac{1}{2q^{2}}-\frac{{\gamma}}{q^{{\tau}+1}}>\frac{1}{q_{n+2}^{2}}+\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}+\frac{2{\gamma}}{q_{n+2}^{{\tau}-1}}.$
From the fact that
$G(x):=\frac{1}{2x^{2}}-\frac{{\gamma}}{x^{{\tau}+1}}$
is a decreasing function for $x$ big enough, it is enough to show the
inequality for $q=[\frac{q_{n+2}}{2}]$. In this case we get:
$\frac{1}{2q^{2}}-\frac{1}{q_{n+2}^{2}}\geq\frac{2}{q_{n+2}^{2}}-\frac{1}{q_{n+2}^{2}}=\frac{1}{q_{n+2}^{2}}>\frac{{\gamma}}{q^{{\tau}+1}}+\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}+\frac{2{\gamma}}{q_{n+2}^{{\tau}-1}}$
for $n$ big enough (since ${\tau}>3$), so $\exists N_{1}\in{N}$ such that, when $n>N_{1}$ is even, the inequality is verified.
###### Lemma 3
Let ${\tau}>3$, ${\alpha}=[a_{1},a_{2},...]\in D_{\gamma,\tau}$, and $\frac{p_{n}}{q_{n}}$ the convergents to ${\alpha}$. Then $\exists N\in{N}$ such that for all $n>N$ even:
${\mu}\left(\bigcup_{\frac{p}{q}\in I_{n},q\geq
q_{n+2}}\left(\frac{p}{q}-\frac{{\gamma}}{q^{{\tau}+1}},\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}\right)\right)<\frac{2{\gamma}}{q_{n+2}^{{\tau}-1}}$
Proof
${\mu}\left(\bigcup_{\frac{p}{q}\in I_{n},q\geq
q_{n+2}}\left(\frac{p}{q}-\frac{{\gamma}}{q^{{\tau}+1}},\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}\right)\right)$
$<\sum_{q\geq q_{n+2}}\sum_{q\frac{p_{n}}{q_{n}}<p<q\frac{p_{n+2}}{q_{n+2}}}\frac{2{\gamma}}{q^{{\tau}+1}}<2{\gamma}\left(\frac{p_{n+2}}{q_{n+2}}-\frac{p_{n}}{q_{n}}\right)\sum_{q\geq q_{n+2}}\frac{1}{{q^{{\tau}}}}$
$<2{\gamma}C\left(\frac{p_{n+2}}{q_{n+2}}-\frac{p_{n}}{q_{n}}\right)\frac{1}{{q_{n+2}^{{\tau}-1}}}=o\left(\frac{2{\gamma}}{q_{n+2}^{{\tau}-1}}\right)$
for some constant $C>0$; here we used $\sum_{q\geq Q}q^{-{\tau}}\leq CQ^{1-{\tau}}$, and the final bound is $o\left(\frac{2{\gamma}}{q_{n+2}^{{\tau}-1}}\right)$ because $\frac{p_{n+2}}{q_{n+2}}-\frac{p_{n}}{q_{n}}\rightarrow 0$.
###### Lemma 4
Let ${\tau}>1$, ${\gamma}>0$, ${\alpha}=[a_{1},a_{2},...]\in D_{\gamma,\tau}$ and $\frac{p_{n}}{q_{n}}$ be the convergents to ${\alpha}$. Then:
$\quad\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}<\frac{p_{n+2}}{q_{n+2}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}\iff$
(6)
$a_{n+2}>\frac{q_{n}}{{\gamma}q_{n+1}}\frac{1}{{(\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}})-\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}}}-\frac{q_{n}}{q_{n+1}}\quad$
(7)
Proof (6) is true if and only if:
$\frac{p_{n+2}}{q_{n+2}}-\frac{p_{n}}{q_{n}}=\frac{p_{n+2}}{q_{n+2}}-\frac{p_{n+1}}{q_{n+1}}+\frac{p_{n+1}}{q_{n+1}}-\frac{p_{n}}{q_{n}}=$
$\frac{1}{q_{n}q_{n+1}}-\frac{1}{q_{n+1}q_{n+2}}>\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}\iff$
$\frac{1}{q_{n+2}q_{n+1}}<\frac{1}{q_{n}q_{n+1}}-\frac{{\gamma}}{q_{n}^{{\tau}+1}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}\iff$
$\frac{1}{q_{n+2}q_{n+1}}<\frac{{\gamma}}{q_{n}q_{n+1}}(\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}})-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}\iff$
$\frac{1}{{q_{n+2}}}<\frac{{\gamma}}{q_{n}}(\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}})-q_{n+1}\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}\iff$
$\left\{\begin{array}{l}\displaystyle\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}}>\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}},\\ \displaystyle q_{n+2}>\frac{q_{n}}{{\gamma}}\frac{1}{{(\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}})-\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}}}\end{array}\right.$ (8)
The first inequality is always true because, by (1) and ${\alpha}\in D_{\gamma,\tau}$:
$\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}}>\frac{1}{{\alpha}_{n+2}q_{n}^{{\tau}-1}}>\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}.$
So Lemma 4 follows from the fact that $q_{n+2}=a_{n+2}q_{n+1}+q_{n}$.
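The equivalence of Lemma 4 can be tested numerically. The sketch below (Python; the parameters are our own illustrative choices, not from the paper) uses exact rational arithmetic to compare (6) and (7) at every even $n$, for ${\tau}=2$ and an ${\alpha}$ engineered, through the large partial quotient $a_{3}=50$, so that (6) genuinely fails at $n=2$:

```python
from fractions import Fraction

def convergents(a):
    """p_k/q_k from partial quotients via p_k = a_k p_{k-1} + p_{k-2}."""
    p0, q0, p1, q1, out = 1, 0, 0, 1, []
    for ak in a:
        p0, q0, p1, q1 = p1, q1, ak * p1 + p0, ak * q1 + q0
        out.append((p1, q1))
    return out

# alpha = [0; 1, 1, 50, 1, 1, ...]: gamma_2(alpha,2) ~ 0.03913 is the binding
# value, so with gamma = 0.039 we have alpha in D_{gamma,2} (checked numerically).
a = [1, 1, 50] + [1] * 20
cs = convergents(a)
gamma, tau = Fraction(39, 1000), 2

for i in range(1, len(cs) - 2, 2):        # cs[i] has paper index n = i+1, even
    (pn, qn), (_, qn1), (pn2, qn2) = cs[i], cs[i + 1], cs[i + 2]
    cond6 = (Fraction(pn, qn) + gamma / qn ** (tau + 1)
             < Fraction(pn2, qn2) - gamma / qn2 ** (tau + 1))
    bound = (Fraction(qn) / (gamma * qn1)
             / ((1 / gamma - Fraction(qn1, qn ** tau))
                - Fraction(qn * qn1, qn2 ** (tau + 1)))
             - Fraction(qn, qn1))
    cond7 = a[i + 2] > bound              # a[i+2] is the paper's a_{n+2}
    assert cond6 == cond7                 # both False at n = 2, both True after
```

The assertion passes because Lemma 4 is an algebraic identity on the convergents; exact `Fraction` arithmetic avoids the borderline floating-point comparisons that arise when the two sides of (6) are close.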
###### Lemma 5
Let ${\tau}>1$. For almost all ${\gamma}\in(0,\frac{1}{2})$ (recall that $D_{\gamma,\tau}=\emptyset$ for ${\gamma}\geq\frac{1}{2}$), given ${\epsilon}>0$ there exists $C=C({\epsilon},{\gamma})>0$ such that:
$\left|\frac{1}{{{\gamma}}}-\frac{p}{q^{{\tau}}}\right|\geq\frac{C}{q^{{\tau}+1+{\epsilon}}}$
for all $\frac{p}{q}\in{Q}$.
Proof Define $B_{C,k}:=\left\{{\alpha}:|{\alpha}-\frac{p}{q^{{\tau}}}|\geq\frac{C}{q^{k}}\quad\forall\frac{p}{q}\in{Q}\right\}$, so ${\alpha}\in B_{C,k}^{c}$ if and only if there exists $\frac{p}{q}$ such that ${\alpha}\in\left(\frac{p}{q^{{\tau}}}-\frac{C}{q^{k}},\frac{p}{q^{{\tau}}}+\frac{C}{q^{k}}\right)$. So, given $N\in{N}$, we get:
${\mu}\left(B_{C,k}^{c}\cap\left(-N,N\right)\right)<\sum_{q>0}\sum_{-Nq^{{\tau}}<p<Nq^{{\tau}}}\frac{2C}{q^{k}}<\sum_{q>0}\frac{4NC}{q^{k-{\tau}}},$
and for $k>{\tau}+1$, as $C$ tends to zero, ${\mu}\left(B_{C,k}^{c}\cap\left(-N,N\right)\right)$ also goes to zero. From the arbitrariness of $N$ we obtain:
${\mu}\left(\bigcap_{C>0}B_{C,k}^{c}\right)=0$
for $k>{\tau}+1$, and Lemma 5 follows applying this to $\frac{1}{{\gamma}}$ with $k={\tau}+1+{\epsilon}$.
###### Lemma 6
Let ${\tau}>1$, ${\alpha}=[a_{1},a_{2},...]\in D_{\gamma,\tau}$,
$\frac{p_{n}}{q_{n}}$ the convergents to ${\alpha}$. The inequality:
$\quad\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}<\frac{p_{n+2}}{q_{n+2}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}-\frac{2{\gamma}}{q_{n+2}^{{\tau}-1}}$
(9)
is eventually verified (i.e. for all $n$ even big enough) if and only if, eventually:
${a_{n+2}>\frac{q_{n}}{{\gamma}q_{n+1}}\frac{1}{{(\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}})-\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}-\frac{2q_{n}q_{n+1}}{q_{n+2}^{{\tau}-1}}}}-\frac{q_{n}}{q_{n+1}}}$
(10)
###### Remark 1
Observe that (10) is eventually true if:
$\limsup\frac{q_{n+1}}{q_{n}^{{\tau}}}<\frac{1}{{\gamma}},$
because in that case:
$\limsup\frac{q_{n}}{{\gamma}q_{n+1}}\frac{1}{{(\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}})-\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}-\frac{2q_{n}q_{n+1}}{q_{n+2}^{{\tau}-1}}}}-\frac{q_{n}}{q_{n+1}}<1.$
Thus, if (10) is not verified for infinitely many $n$ even, then for those $n$, with $n$ big enough:
$\frac{q_{n+1}}{q_{n}^{{\tau}}}\sim\frac{1}{{\gamma}},$
so $q_{n+1}\sim\frac{q_{n}^{{\tau}}}{{\gamma}}$.
Proof
In a similar way to Lemma 4, (9) is verified if and only if:
$\left\{\begin{array}{l}\displaystyle\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}}>\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}+\frac{2q_{n}q_{n+1}}{q_{n+2}^{{\tau}-1}},\\ \displaystyle q_{n+2}>\frac{q_{n}}{{\gamma}}\frac{1}{{(\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}})-\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}-\frac{2q_{n}q_{n+1}}{q_{n+2}^{{\tau}-1}}}}\end{array}\right.$ (11)
Since ${\alpha}\in D_{\gamma,\tau}$, the first of the two conditions eventually holds; in fact, for $n$ big enough:
$\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}+\frac{2q_{n}q_{n+1}}{q_{n+2}^{{\tau}-1}}<\frac{1}{{\alpha}_{n+2}q_{n}^{{\tau}-1}}<\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}}$
So, from the fact that $q_{n+2}=a_{n+2}q_{n+1}+q_{n}$, we are done.
###### Lemma 7
Let ${\tau}>\frac{3+\sqrt{17}}{2}$. For almost all ${\gamma}\in(0,\frac{1}{2})$: if ${\alpha}=[a_{0},a_{1},...]\in D_{\gamma,\tau}$, then for $n$ even big enough, (6) holds if and only if (9) holds.
Proof If (9) is true, then trivially (6) is true. So we have to show that, for almost all ${\gamma}\in(0,\frac{1}{2})$ and for all ${\alpha}\in D_{\gamma,\tau}$ (with ${\tau}>\frac{3+\sqrt{17}}{2}$), the converse holds. Suppose then, by contradiction, that there exists $A\subseteq\left(C_{1},C_{2}\right)$, with $0<C_{1}<C_{2}<\frac{1}{2}$ and ${\mu}(A)>0$, such that for all ${\gamma}\in A$ there exists ${\alpha}\in D_{\gamma,\tau}$ satisfying (6) but not (9) for infinitely many $n$ even. By Lemma 4 and Lemma 6 it follows that for all ${\gamma}\in A$ there exists ${\alpha}\in D_{\gamma,\tau}$ such that for infinitely many $n$ even:
$\frac{q_{n}}{{\gamma}q_{n+1}}\frac{1}{{(\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}})-\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}-\frac{2q_{n}q_{n+1}}{q_{n+2}^{{\tau}-1}}}}-\frac{q_{n}}{q_{n+1}}\geq
a_{n+2}>\frac{q_{n}}{{\gamma}q_{n+1}}\frac{1}{{(\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}})-\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}}}-\frac{q_{n}}{q_{n+1}},$
and by Remark 1 it follows that, for this $n$:
$q_{n+1}\sim\frac{q_{n}^{{\tau}}}{{\gamma}}.$
So, for $n$ big enough such that (6) holds but (9) does not, we get:
$\frac{q_{n}^{{\tau}}}{C_{2}}<q_{n+1}<\frac{q_{n}^{{\tau}}}{C_{1}}.$
Moreover:
$a_{n+2}>\frac{q_{n}}{{\gamma}q_{n+1}}\frac{1}{{(\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}})-\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}}}-\frac{q_{n}}{q_{n+1}}\iff$
$\frac{a_{n+2}q_{n+1}}{q_{n}}+1=\frac{q_{n+2}}{q_{n}}>\frac{1}{{1-\frac{{\gamma}q_{n+1}}{q_{n}^{{\tau}}}-\frac{{\gamma}q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}}}\iff$
$1-\frac{{\gamma}q_{n+1}}{q_{n}^{{\tau}}}-\frac{{\gamma}q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}>\frac{q_{n}}{q_{n+2}}\iff$
${\gamma}<\frac{1-\frac{q_{n}}{q_{n+2}}}{\frac{q_{n+1}}{q_{n}^{{\tau}}}+\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}}$
In a similar way:
$\frac{q_{n}}{{\gamma}q_{n+1}}\frac{1}{{(\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}})-\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}-\frac{2q_{n}q_{n+1}}{q_{n+2}^{{\tau}-1}}}}-\frac{q_{n}}{q_{n+1}}\geq
a_{n+2}\iff$
${\gamma}\geq\frac{1-\frac{q_{n}}{q_{n+2}}}{\frac{q_{n+1}}{q_{n}^{{\tau}}}+\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}+\frac{2q_{n}q_{n+1}}{q_{n+2}^{{\tau}-1}}}.$
Thus:
$\frac{1-\frac{q_{n}}{q_{n+2}}}{\frac{q_{n+1}}{q_{n}^{{\tau}}}+\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}+\frac{2q_{n}q_{n+1}}{q_{n+2}^{{\tau}-1}}}\leq{\gamma}<\frac{1-\frac{q_{n}}{q_{n+2}}}{\frac{q_{n+1}}{q_{n}^{{\tau}}}+\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}}$
for infinitely many $n$ even, so for all ${\gamma}\in A$ there exist
infinitely many $q\in{N}$ such that:
$\frac{1-\frac{q}{Np+q}}{\frac{p}{q^{{\tau}}}+\frac{qp}{(Np+q)^{{\tau}+1}}+\frac{2qp}{(Np+q)^{{\tau}-1}}}\leq{\gamma}<\frac{1-\frac{q}{Np+q}}{\frac{p}{q^{{\tau}}}+\frac{qp}{(Np+q)^{{\tau}+1}}}$
for some $N\in{N}$ and some
$\frac{q^{{\tau}}}{C_{2}}<p<\frac{q^{{\tau}}}{C_{1}}$. So for all $M\in{N}$:
$A\subseteq\bigcup_{q>M}\bigcup_{\frac{q^{{\tau}}}{C_{2}}<p<\frac{q^{{\tau}}}{C_{1}}}\bigcup_{N>0}\left(\frac{1-\frac{q}{Np+q}}{\frac{p}{q^{{\tau}}}+\frac{qp}{(Np+q)^{{\tau}+1}}+\frac{2qp}{(Np+q)^{{\tau}-1}}},\frac{1-\frac{q}{Np+q}}{\frac{p}{q^{{\tau}}}+\frac{qp}{(Np+q)^{{\tau}+1}}}\right),$
moreover:
$\frac{1-\frac{q}{Np+q}}{\frac{p}{q^{{\tau}}}+\frac{qp}{(Np+q)^{{\tau}+1}}}-\frac{1-\frac{q}{Np+q}}{\frac{p}{q^{{\tau}}}+\frac{qp}{(Np+q)^{{\tau}+1}}+\frac{2qp}{(Np+q)^{{\tau}-1}}}<$
$\frac{2qp}{(Np+q)^{{\tau}-1}}\left(\frac{1}{{\frac{p}{q^{{\tau}}}+\frac{qp}{(Np+q)^{{\tau}+1}}}}\right)^{2}<\frac{2qC_{2}^{2}}{N^{{\tau}-1}p^{{\tau}-2}}$
so we obtain:
${\mu}(A)\leq\sum_{q>M}\sum_{\frac{q^{{\tau}}}{C_{2}}<p<\frac{q^{{\tau}}}{C_{1}}}\sum_{N>0}\frac{2qC_{2}^{2}}{N^{{\tau}-1}p^{{\tau}-2}}<$
${\beta}\sum_{q>M}\frac{q^{{\tau}+1}}{q^{{\tau}^{2}-2{\tau}}}={\beta}\sum_{q>M}\frac{1}{{q^{{\tau}^{2}-3{\tau}-1}}}$
for some constant ${\beta}>0$. The hypothesis ${\tau}>\frac{3+\sqrt{17}}{2}$ is exactly the condition ${\tau}^{2}-3{\tau}-1>1$, so the series converges; letting $M$ go to infinity we get ${\mu}(A)=0$, which contradicts the hypothesis ${\mu}(A)>0$. Thus, for almost all ${\gamma}\in(C_{1},C_{2})$: if (6) holds, then (9) holds, and from the arbitrariness of $C_{1},C_{2}$ Lemma 7 follows.
###### Proposition 1
Let ${\tau}>\frac{3+\sqrt{17}}{2}$. For almost every $0<{\gamma}<\frac{1}{2}$: if ${\alpha}\in D_{\gamma,\tau}$, $\frac{p_{n}}{q_{n}}$ are the convergents to ${\alpha}$, ${\alpha}-\frac{p_{n}}{q_{n}}>\frac{{\gamma}}{q_{n}^{{\tau}+1}}$ for all $n$ even, and eventually:
$\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}<\frac{p_{n+2}}{q_{n+2}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}},$
then ${\alpha}$ is an accumulation point of $D_{\gamma,\tau}$ and in
particular, for $n$ even big enough:
${\mu}\left(D_{\gamma,\tau}\cap\left(\frac{p_{n}}{q_{n}},\frac{p_{n+2}}{q_{n+2}}\right)\right)>0$
Proof By Lemma 1 it follows that $\exists N_{1}\in{N}$ such that for $n>N_{1}$
even:
$\frac{p}{q}\not\in I_{n}\ \Longrightarrow\
\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}},\frac{p}{q}-\frac{{\gamma}}{q^{{\tau}+1}}\not\in
A_{n},$
and by Lemma 7 for almost all ${\gamma}\in(0,\frac{1}{{2}})$:
$\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}<\frac{p_{n+2}}{q_{n+2}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}\
\Longrightarrow\
\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}<\frac{p_{n+2}}{q_{n+2}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}-\frac{2{\gamma}}{q_{n+2}^{{\tau}-1}},$
therefore, up to a set of measure zero, we can suppose that ${\gamma}$ satisfies this property. Moreover, by Lemma 2, for $n$ even big enough, if $\frac{p}{q}\in I_{n}$ and $q<q_{n+2}$, then:
$\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}<\frac{p_{n+2}}{q_{n+2}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}-\frac{2{\gamma}}{q_{n+2}^{{\tau}-1}}.$
So, if we define:
$c_{n}:=\max_{\frac{p}{q}\in[\frac{p_{n}}{q_{n}},\frac{p_{n+2}}{q_{n+2}}),q<q_{n+2}}\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}},$
we obtain:
$c_{n}<\frac{p_{n+2}}{q_{n+2}}-\frac{2{\gamma}}{q_{n+2}^{{\tau}-1}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}.$
By Lemma 1, if $n>N_{1}$ is even and $\frac{p}{q}\not\in I_{n}$, then
$\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}},\frac{p}{q}-\frac{{\gamma}}{q^{{\tau}+1}}\not\in
A_{n},$
so:
$\frac{p}{q}<\frac{p_{n}}{q_{n}}\ \Longrightarrow\ \frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}<\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}\leq c_{n},$
while for $\frac{p}{q}>\frac{p_{n+2}}{q_{n+2}}$ we get $q>q_{n+2}$, so:
$\frac{p}{q}-\frac{{\gamma}}{q^{{\tau}+1}}>\frac{p_{n+2}}{q_{n+2}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}},$
but from:
${\beta}\in
D_{\gamma,\tau}^{c}\iff\exists\frac{p}{q}\in(0,1):{\beta}\in\left(\frac{p}{q}-\frac{{\gamma}}{q^{{\tau}+1}},\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}\right)$
we get that, for $n>N_{1}$ even:
${\mu}\left(D_{\gamma,\tau}^{c}\cap
I_{n}\right)\leq{\mu}\left(\bigcup_{\frac{p}{q}\in[\frac{p_{n}}{q_{n}},\frac{p_{n+2}}{q_{n+2}}),q<q_{n+2}}\left(\frac{p}{q}-\frac{{\gamma}}{q^{{\tau}+1}},\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}\right)\cap
I_{n}\right)$ $+{\mu}\left(\bigcup_{\frac{p}{q}\in I_{n},q\geq
q_{n+2}}\left(\frac{p}{q}-\frac{{\gamma}}{q^{{\tau}+1}},\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}\right)\right)+{\mu}\left(\frac{p_{n+2}}{q_{n+2}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}},\frac{p_{n+2}}{q_{n+2}}\right).$
So by Lemma 3:
${\mu}(D_{\gamma,\tau}^{c}\cap I_{n})\leq
c_{n}-\frac{p_{n}}{q_{n}}+\frac{2{\gamma}}{q_{n+2}^{{\tau}-1}}+\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}<{\mu}(I_{n})=\frac{p_{n+2}}{q_{n+2}}-\frac{p_{n}}{q_{n}}\iff$
$c_{n}<\frac{p_{n+2}}{q_{n+2}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}-\frac{2{\gamma}}{q_{n+2}^{{\tau}-1}},$
which follows from the definition of $c_{n}$. Hence ${\mu}(D_{\gamma,\tau}\cap I_{n})>0$ for $n$ even big enough, and Proposition 1 is proved.
So, given ${\tau}>3$, for almost all ${\gamma}>0$: if ${\alpha}\in D_{\gamma,\tau}$ is not an isolated point of the first type and the intervals centred at the convergents eventually have empty intersection, then ${\alpha}$ is an accumulation point of $D_{\gamma,\tau}$. The second step is to show that: if ${\tau}>3$, ${\gamma}>0$, ${\alpha}\in D_{\gamma,\tau}$, ${\alpha}$ is not an isolated point of the first type and ${\tau}>{\tau}({\alpha})$, then ${\alpha}$ is an accumulation point of $D_{\gamma,\tau}$.
###### Lemma 8
Let ${\tau}>3$. For almost all ${\gamma}\in(0,\frac{1}{{2}})$: given
${\alpha}\in D_{\gamma,\tau}$, if for infinitely many $n$ even:
$\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}>\frac{p_{n+2}}{q_{n+2}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}},$
then there exists $C>0$ such that for this $n$:
$a_{n+2}\leq Cq_{n}^{2+{\epsilon}},$
with ${\epsilon}>0$ arbitrarily small.
Proof By Lemma 4 it follows that, given ${\alpha}\in D_{\gamma,\tau}$ satisfying the hypothesis of Lemma 8, for $n$ even big enough:
$a_{n+2}\leq\frac{q_{n}}{{\gamma}q_{n+1}}\frac{1}{{(\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}})-\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}}}-\frac{q_{n}}{q_{n+1}},$
so, up to a set of measure zero, by Lemma 5 we can suppose that there exist
${\epsilon}>0,C>0$ such that $\frac{1}{{{\gamma}}}\in
B_{C,{\tau}+1+{\epsilon}}$ with ${\tau}+1+{\epsilon}<{\tau}^{2}-1$, from which
it follows that:
$\frac{q_{n}}{{\gamma}q_{n+1}}\frac{1}{{(\frac{1}{{\gamma}}-\frac{q_{n+1}}{q_{n}^{{\tau}}})-\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}}}-\frac{q_{n}}{q_{n+1}}\leq\frac{q_{n}}{{\gamma}q_{n+1}}\frac{1}{{\frac{C}{q_{n}^{{\tau}+1+{\epsilon}}}-\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}}}-\frac{q_{n}}{q_{n+1}},$
moreover, by Remark 1 it follows that
$q_{n+1}\sim\frac{q_{n}^{{\tau}}}{{\gamma}}$, from which we obtain:
$\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}<\frac{q_{n}}{q_{n+1}^{{\tau}}}\sim\frac{{\gamma}^{{\tau}}}{q_{n}^{{\tau}^{2}-1}},$
so, if $n$ is big enough, by ${\tau}+1+{\epsilon}<{\tau}^{2}-1$ we have:
$\frac{C}{q_{n}^{{\tau}+1+{\epsilon}}}-\frac{q_{n}q_{n+1}}{q_{n+2}^{{\tau}+1}}>\frac{C}{2q_{n}^{{\tau}+1+{\epsilon}}}.$
So we obtain:
$a_{n+2}<\frac{q_{n}}{q_{n+1}}\frac{2q_{n}^{{\tau}+1+{\epsilon}}}{C}\sim\frac{2{\gamma}}{C}q_{n}^{2+{\epsilon}}<\frac{4{\gamma}}{C}q_{n}^{2+{\epsilon}}=C^{\prime}q_{n}^{2+{\epsilon}}$
for $n$ big enough, from which we get Lemma 8.
###### Lemma 9
Let ${\tau}>\frac{3+\sqrt{17}}{2},{\gamma}>0$, ${\alpha}\in D_{\gamma,\tau}$. Suppose that for infinitely many $m$ even the inequality
$\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}<\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}-\frac{2{\gamma}}{q_{m}^{{\tau}-1}}$ (12)
holds for all $n<m$ even, and that ${\alpha}-\frac{p_{n}}{q_{n}}>\frac{{\gamma}}{q_{n}^{{\tau}+1}}$ for all $n$ even. Then ${\alpha}\in{\mathcal{A}}(D_{\gamma,\tau})$.
Proof Let $\frac{p_{n}}{q_{n}}<\frac{p}{q}<\frac{p_{n+2}}{q_{n+2}}$ with $n$ even and $n<m-2$. For $q\geq\frac{q_{n+2}}{2}$ the inequality:
$\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}<\frac{p_{n+2}}{q_{n+2}}+\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}$
eventually holds, while for $q<\frac{q_{n+2}}{2}$:
$\frac{p_{n+2}}{q_{n+2}}-\frac{p}{q}=\frac{p_{n+2}}{q_{n+2}}-{\alpha}+{\alpha}-\frac{p}{q}>\frac{1}{2q^{2}}-\frac{1}{q_{n+2}^{2}}>\frac{{\gamma}}{q^{{\tau}+1}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}}\iff$
$\frac{1}{2q^{2}}-\frac{{\gamma}}{q^{{\tau}+1}}>\frac{1}{q_{n+2}^{2}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}},$
which is true for $q$ big enough; so $\exists T\in{N}$ such that the inequality is verified for $q\geq T$ (from the fact that $G(x):=\frac{1}{2x^{2}}-\frac{{\gamma}}{x^{{\tau}+1}}$ is eventually decreasing and ${\tau}>3>1$). From the hypothesis that
${\alpha}-\frac{p_{n}}{q_{n}}>\frac{{\gamma}}{q_{n}^{{\tau}+1}}$ for all $n$
even:
$v:=\max_{\frac{p}{q}<{\alpha},q\leq
T}\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}<{\alpha},$
so there exists $T_{1}\in{N}$ such that for $n>T_{1}$:
$\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}>v.$
By Lemma 2, for $m$ big enough and $\frac{p}{q}\in I_{n}$ with $n<m-2$ even:
$\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}\leq\max\left\{\frac{p_{n+2}}{q_{n+2}}+\frac{{\gamma}}{q_{n+2}^{{\tau}+1}},v\right\}\leq\frac{p_{m-2}}{q_{m-2}}+\frac{{\gamma}}{q_{m-2}^{{\tau}+1}}<\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}-\frac{2{\gamma}}{q_{m}^{{\tau}-1}},$
while by Lemma 2, for $m$ big enough:
$\frac{p}{q}\in I_{m-2},q<q_{m}\ \Longrightarrow\ \frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}<\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}-\frac{2{\gamma}}{q_{m}^{{\tau}-1}},$
so if we define:
$c_{m}:=\max\left\{\max_{\frac{p}{q}\in I_{m-2},q<q_{m}}\left(\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}\right),\max_{\frac{p}{q}\leq\frac{p_{m-2}}{q_{m-2}}}\left(\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}\right)\right\},$
for $m$ even big enough:
$c_{m}<\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}-\frac{2{\gamma}}{q_{m}^{{\tau}-1}}.$
Moreover, by Lemma 3, from ${\tau}>3>2$, for $m$ even big enough:
${\mu}\left(\bigcup_{\frac{p}{q}\in I_{m-2},q\geq
q_{m}}\left(\frac{p}{q}-\frac{{\gamma}}{q^{{\tau}+1}},\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}\right)\right)<\frac{2{\gamma}}{q_{m}^{{\tau}-1}}.$
Finally, if $\frac{p}{q}>\frac{p_{m}}{q_{m}}$, by the properties of continued
fractions we obtain $q>q_{m}$, so
$\frac{p}{q}-\frac{{\gamma}}{q^{{\tau}+1}}>\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}$.
Thus:
${\mu}\left(D_{\gamma,\tau}^{c}\cap\left(\frac{p_{m-2}}{q_{m-2}},\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}\right)\right)<c_{m}-\frac{p_{m-2}}{q_{m-2}}+\frac{2{\gamma}}{q_{m}^{{\tau}-1}}<\frac{p_{m}}{q_{m}}-\frac{p_{m-2}}{q_{m-2}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}={\mu}\left(\left(\frac{p_{m-2}}{q_{m-2}},\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}\right)\right),$
then
$D_{\gamma,\tau}\cap\left(\frac{p_{m-2}}{q_{m-2}},\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}\right)\not=\emptyset,$
and since this holds for infinitely many $m$ even, ${\alpha}$ is an accumulation point of $D_{\gamma,\tau}$.
###### Remark 2
Let ${\tau}>\frac{\sqrt{17}+3}{2}$, ${\gamma}>0$ and ${\alpha}\in D_{\gamma,\tau}$. If ${\alpha}\in{\mathcal{I}}^{2}_{{\gamma},{\tau}}$ or ${\alpha}\in{\mathcal{I}}^{3}_{{\gamma},{\tau}}$, then ${\tau}({\alpha})={\tau}$. Indeed, suppose ${\tau}({\alpha})\not={\tau}$; since ${\alpha}\in D_{\gamma,\tau}$ forces ${\tau}({\alpha})\leq{\tau}$, we would have ${\tau}({\alpha})<{\tau}$. From ${\alpha}\not\in{\mathcal{I}}^{1}_{{\gamma},{\tau}}$ we get that for all $n$ even or for all $n$ odd:
$\left|{\alpha}-\frac{p_{n}}{q_{n}}\right|>\frac{{\gamma}}{q_{n}^{{\tau}+1}}.$
Suppose for example that this property holds for all $n$ even. Then, since ${\tau}({\alpha})<{\tau}$, by Remark 1 the hypotheses of Proposition 1 are satisfied, so ${\alpha}\in{\mathcal{A}}(D_{\gamma,\tau})$: contradiction.
###### Corollary 1
If ${\tau}>\frac{3+\sqrt{17}}{2}$:
${\mu}\left(\left\{{\gamma}>0:{\mathcal{I}}^{2}_{{\gamma},{\tau}}\not=\emptyset\right\}\right)=0.$
Proof Observe that, if ${\alpha}\in{\mathcal{I}}^{2}_{{\gamma},{\tau}}$, then
there exists $n\in{N}$ such that
$\left|{\alpha}-\frac{p_{n}}{q_{n}}\right|=\frac{{\gamma}}{q_{n}^{{\tau}+1}}.$
Suppose for example that $n$ is even, thus:
${\alpha}=\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}.$
Moreover, for almost all ${\gamma}\in(0,\frac{1}{{2}})$:
${\tau}\left(\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}\right)={\tau}\left(\frac{{\gamma}}{q^{{\tau}+1}}\right)=1.$
Taking the union over all $\frac{p}{q}$, we obtain that for almost all ${\gamma}\in(0,\frac{1}{2})$ and for all $\frac{p}{q}\in{Q}$:
${\tau}\left(\frac{p}{q}+\frac{{\gamma}}{q^{{\tau}+1}}\right)=1.$
So Corollary 1 follows by Remark 2.
One last step remains, from which we obtain the Theorem.
###### Lemma 10
Let ${\tau}>3$. For almost all ${\gamma}>0$, if
${\alpha}\in{\mathcal{I}}(D_{\gamma,\tau})$, there exists $N\in{N}$ such that,
for all $m>N$ even there is some $n<m$ even with:
$\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}\geq\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}-\frac{2{\gamma}}{q_{m}^{{\tau}-1}}.$
Proof By Corollary 1 and Remark (j) it follows that, up to a set of measure zero, we can suppose that ${\mathcal{I}}^{1}_{{\gamma},{\tau}}={\mathcal{I}}^{2}_{{\gamma},{\tau}}=\emptyset$. Now observe that if the Lemma were not true, there would exist ${\alpha}\in{\mathcal{I}}^{3}_{{\gamma},{\tau}}$ whose even convergents satisfy the hypothesis of Lemma 9, which implies ${\alpha}\in{\mathcal{A}}(D_{\gamma,\tau})$: contradiction.
Theorem Let ${\tau}>\frac{3+\sqrt{17}}{2}$. Then, for almost all ${\gamma}>0$
$D_{\gamma,\tau}$ is a Cantor set.
Proof By Corollary 1 and Remark (j) it follows that, up to a set of measure
zero, we can suppose that
${\mathcal{I}}^{1}_{{\gamma},{\tau}}={\mathcal{I}}^{2}_{{\gamma},{\tau}}=\emptyset$.
Suppose by contradiction that the statement does not hold, and take $0<C_{1}<C_{2}$ such that:
${\mu}\left(\left\{C_{1}<{\gamma}<C_{2}:{\mathcal{I}}(D_{\gamma,\tau})\not=\emptyset\right\}\right)>0,$
and define $A:=\{C_{1}<{\gamma}<C_{2}:{\mathcal{I}}(D_{\gamma,\tau})\not=\emptyset\}$.
By Lemma 10, for almost all ${\gamma}\in A$ there exist ${\alpha}\in{\mathcal{I}}(D_{\gamma,\tau})$ and $N\in{N}$ such that for all $m>N$ even there is some $n<m$ even with:
$\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}\geq\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}-\frac{2{\gamma}}{q_{m}^{{\tau}-1}}.$
Now we want to show that, for almost every such choice of ${\gamma}\in A$, we have:
$\limsup\frac{q_{2k+2}}{q_{2k+1}^{{\tau}}}<\frac{1}{{\gamma}}.$
In fact, if this does not hold, by Remark 1 we get that for infinitely many $m$ even:
$q_{m}\sim\frac{q_{m-1}^{{\tau}}}{{\gamma}},$
and for $m>N$ there exists $n<m$ even with:
$\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}\geq\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}-\frac{2{\gamma}}{q_{m}^{{\tau}-1}}$
By Lemma 7, up to a set of measure zero in $A$:
$\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}\geq\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}-\frac{2{\gamma}}{q_{m}^{{\tau}-1}}\iff\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}\geq\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}.$
By the properties of convergents:
${\alpha}-\frac{p_{m}}{q_{m}}<\frac{1}{q_{m}^{2}},$
from which we get:
$\frac{1}{q_{m}^{2}}>{\alpha}-\frac{p_{n}}{q_{n}}-\frac{{\gamma}}{q_{n}^{{\tau}+1}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}.$
Moreover:
${\alpha}-\frac{p_{n}}{q_{n}}=\frac{1}{{q_{n}(q_{n+1}+\frac{{\alpha}_{n+2}}{q_{n}})}},$
so:
$\frac{1}{q_{m}^{2}}>\frac{1}{{q_{n}(q_{n+1}+\frac{{\alpha}_{n+2}}{q_{n}})}}-\frac{{\gamma}}{q_{n}^{{\tau}+1}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}$
For $m$ big enough:
$\frac{1}{q_{m}^{2}}+\frac{{\gamma}}{q_{m}^{{\tau}+1}}<\frac{2}{q_{m}^{2}},$
so:
$\frac{2}{q_{m}^{2}}>\frac{1}{{q_{n}(q_{n+1}+\frac{{\alpha}_{n+2}}{q_{n}})}}-\frac{{\gamma}}{q_{n}^{{\tau}+1}}\iff$
${\gamma}>\frac{q_{n}^{{\tau}}}{q_{n+1}+\frac{{\alpha}_{n+2}}{q_{n}}}-\frac{2q_{n}^{{\tau}+1}}{q_{m}^{2}},$
moreover:
${\gamma}\leq\frac{q_{n}^{{\tau}}}{q_{n+1}+\frac{{\alpha}_{n+2}}{q_{n}}}.$
So we obtain:
$\frac{q_{n}^{{\tau}}}{q_{n+1}+\frac{{\alpha}_{n+2}}{q_{n}}}-\frac{2q_{n}^{{\tau}+1}}{q_{m}^{2}}<{\gamma}\leq\frac{q_{n}^{{\tau}}}{q_{n+1}+\frac{{\alpha}_{n+2}}{q_{n}}}$
From
$\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}\geq\frac{p_{m}}{q_{m}}-\frac{{\gamma}}{q_{m}^{{\tau}+1}}-\frac{2{\gamma}}{q_{m}^{{\tau}-1}},$
we get:
$\frac{p_{n}}{q_{n}}+\frac{{\gamma}}{q_{n}^{{\tau}+1}}\geq\frac{p_{n+2}}{q_{n+2}}-\frac{{\gamma}}{q_{n+2}^{{\tau}+1}},$
moreover, from ${\alpha}-\frac{p_{n}}{q_{n}}>\frac{{\gamma}}{q_{n}^{{\tau}+1}}$ for all $n$ even, as $m$ increases $n$ increases as well, and by the last inequality and Remark 1 we get $q_{n+1}\sim\frac{q_{n}^{{\tau}}}{{\gamma}}$. So
$q_{m}\sim\frac{q_{m-1}^{{\tau}}}{{\gamma}}\geq\frac{q_{n+1}^{{\tau}}}{{\gamma}}\sim\frac{q_{n}^{{\tau}^{2}}}{{\gamma}^{{\tau}}}\geq\frac{q_{n}^{{\tau}^{2}}}{C_{2}^{{\tau}}}.$
So we obtain:
$\frac{q_{n}^{{\tau}}}{q_{n+1}+\frac{{\alpha}_{n+2}}{q_{n}}}-\frac{C}{q_{n}^{2{\tau}^{2}-{\tau}-1}}<{\gamma}\leq\frac{q_{n}^{{\tau}}}{q_{n+1}+\frac{{\alpha}_{n+2}}{q_{n}}}$
with a constant $C>0$. By Lemma 8, up to a set of measure zero, we can suppose
that there exists ${\epsilon}>0$ arbitrarily small such that, for $n$ big
enough:
$a_{n+2}<q_{n}^{2+{\epsilon}}.$
So, up to a set of measure zero, we can suppose that for all ${\gamma}\in A$ there exist infinitely many $q>0$, $\frac{q^{{\tau}}}{2C_{2}}<p<\frac{2q^{{\tau}}}{C_{1}}$ and $N<q^{2+{\epsilon}}$ such that:
$\frac{q^{{\tau}}}{p+\frac{N}{q}}-\frac{C}{q^{2{\tau}^{2}-{\tau}-1}}<{\gamma}\leq\frac{q^{{\tau}}}{p+\frac{N}{q}}.$
So, for all $M\in{N}$:
$A\subseteq\bigcup_{q>M}\bigcup_{\frac{q^{{\tau}}}{2C_{2}}<p<\frac{2q^{{\tau}}}{C_{1}}}\bigcup_{N<q^{2+{\epsilon}}}\left(\frac{q^{{\tau}}}{p+\frac{N}{q}}-\frac{C}{q^{2{\tau}^{2}-{\tau}-1}},\frac{q^{{\tau}}}{p+\frac{N}{q}}\right),$
Thus:
${\mu}(A)<\sum_{q>M}\sum_{\frac{q^{{\tau}}}{2C_{2}}<p<\frac{2q^{{\tau}}}{C_{1}}}\sum_{N<q^{2+{\epsilon}}}\frac{C}{q^{2{\tau}^{2}-{\tau}-1}}$
$<{\beta}\sum_{q>M}\frac{1}{{q^{2{\tau}^{2}-2{\tau}-3-{\epsilon}}}}$
with some constant ${\beta}>0$. Since ${\tau}>\frac{3+\sqrt{17}}{2}$, for ${\epsilon}$ small enough we have $2{\tau}^{2}-2{\tau}-3-{\epsilon}>1$, so the series converges; letting $M$ tend to infinity we obtain ${\mu}(A)=0$: contradiction. So we have proved that:
$\limsup\frac{q_{2k+2}}{q_{2k+1}^{{\tau}}}<\frac{1}{{\gamma}}.$
But then, by Remark 1 and Proposition 1 (used with $n$ odd), we have ${\alpha}\in{\mathcal{A}}(D_{\gamma,\tau})$: contradiction. So ${\mu}(A)=0$.
The bound ${\tau}>\frac{3+\sqrt{17}}{2}$ could be improved by putting a sharper inequality in Lemma 5. Probably the result holds also for ${\tau}>3$.
## 4 Questions
* •
By [17] we know that, for some choices of ${\gamma},{\tau}$,
${\mathcal{I}}^{1}_{{\gamma},{\tau}}\not=\emptyset$. What about
${\mathcal{I}}^{2}_{{\gamma},{\tau}},{\mathcal{I}}^{3}_{{\gamma},{\tau}}$?
* •
What is the best ${\tau}>1$ such that the result holds?
* •
Is it true that, for all ${\tau}\geq 1$ there exists
${\gamma}_{\tau}\in(0,\frac{1}{{2}})$ such that $D_{\gamma,\tau}$ is a Cantor
set for almost all ${\gamma}\in(0,{\gamma}_{\tau})$?
#### Acknowledgement
I am very grateful to Prof. Luigi Chierchia for his suggestions, remarks, for
his special support and for encouraging me to complete this work.
## References
* [1] M. E. Borel, “Les probabilités dénombrables et leurs applications arithmétiques”, Rendiconti del circolo mat. di Palermo, Vol. 27, 1909
* [2] H. Broer, Do Diophantine vectors form a Cantor bouquet? J. Difference Equ. Appl. 16 (2010), no. 5-6, 433-434.
* [3] Broer HW (2004) “KAM theory: the legacy of AN Kolmogorov’s 1954 paper”. Comment on: “The general theory of dynamical systems and classical mechanics”. (French in: Proceedings of the International Congress of Mathematicians, Amsterdam, 1954, vol 1, pp 315-333, Erven P, Noordhoff NV, Groningen, 1957). Bull Amer Math Soc (N.S.) 41(4):507-521 (electronic)
* [4] J. W. S. Cassels, “An introduction to Diophantine approximation”, Cambridge University Press, 1957
* [5] L. Chierchia, A. N. Kolmogorov’s 1954 paper on nearly-integrable Hamiltonian systems, Regul. Chaotic Dyn. 13 (2008), no. 2, 130-139. 37J40 (70H08)
* [6] T. W. Cusick, M. E. Flahive, “The Markoff and Lagrange spectra”, Mathematical Surveys and Monographs, 1989
* [7] M. M. Dodson, S. Kristensen, “Hausdorff Dimension and Diophantine Approximation”, Proceedings of Symposia in Pure Mathematics, 2003
* [8] G. H. Hardy and E. M. Wright, “An Introduction to the Theory of Numbers”, Oxford University Press
* [9] M.R. Herman, Sur la conjugaison différentiable des difféomorphismes du cercle à des rotations. (French) Inst. Hautes Études Sci. Publ. Math. No. 49 (1979), 5-233.
* [10] V. Jarnik, Diophantischen Approximationen und Hausdorffschess Mass, Mat. Sbornik 36, 1929, 371-382
* [11] W. J. LeVeque, “Topics In Number Theory, Vol. II”, Addison-Wesley Publishing Company, Inc, 1956
* [12] G. Popov, “KAM theorem for Gevrey Hamiltonians”, Erg. Th. Dyn. Sys. 24 (2004),no. 5, 1753-1786
* [13] J. Pöschel, “Integrability of Hamiltonian systems on Cantor sets”, Comm. Pure Appl. Math. 35 (1982), no. 5, 653-696.
* [14] H. Rüssmann, KAM iteration with nearly infinitely small steps in dynamical systems of polynomial character. Discrete Contin. Dyn. Syst. Ser. S 3 (2010), no. 4, 683-718.
* [15] W.M. Schmidt, “Diophantine Approximation”, LNM 785, Springer Verlag, 1980
* [16] J. C. Yoccoz, Conjugaison différentiable des difféomorphismes du cercle dont le nombre de rotation vérifie une condition diophantienne, Ann. Sci. École Norm. Sup. (4) 17 (1984), no. 3, 333-359
* [17] F. Argentieri, Isolated points of Diophantine sets, Preprint, 2020.
# Shannon theory for quantum systems and beyond: information compression for
fermions
Paolo Perinotti<EMAIL_ADDRESS>QUIT group, Physics Dept., Pavia
University, and INFN Sezione di Pavia, via Bassi 6, 27100 Pavia, Italy
Alessandro Tosini<EMAIL_ADDRESS>QUIT group, Physics Dept., Pavia
University, and INFN Sezione di Pavia, via Bassi 6, 27100 Pavia, Italy
Leonardo Vaglini<EMAIL_ADDRESS>QUIT group, Physics
Dept., Pavia University, and INFN Sezione di Pavia, via Bassi 6, 27100 Pavia,
Italy
(27th August 2024)
###### Abstract
We address the task of compression of fermionic quantum information. Due to the parity superselection rule, differently from the case of encoding of quantum information in qubit states, part of the information carried by fermionic systems is encoded in their delocalised correlations. As a consequence, the reliability of a compression protocol must be assessed in a way that necessarily accounts also for the preservation of correlations. This implies that input/output fidelity is not a satisfactory figure of merit for fermionic compression schemes. We then discuss various aspects of the assessment of reliability of an encoding scheme, and show that entanglement fidelity in the fermionic case is capable of evaluating the preservation of correlations, thus revealing itself to be strictly stronger than input/output fidelity, unlike in the qubit case. We then introduce a fermionic version of the source coding theorem, showing that, as in the quantum case, the von Neumann entropy is the minimal rate for which there exists a fermionic compression scheme that is reliable according to the entanglement fidelity criterion.
## I Introduction
The task of _data compression_ addresses the primary question in information theory of how redundant the information contained in a message is, and to what extent the message can then be compressed.
In classical information theory this question is answered by the source coding
theorem 6773024 , which establishes the fundamental role of Shannon entropy in
information theory and its operational interpretation. The coding theorem
recognizes the Shannon entropy as the fundamental limit for the compression
rate in the i.i.d. setting. This means that if one compresses at a rate above
the Shannon entropy, then the compressed data can be recovered perfectly in
the asymptotic limit of infinitely long messages, while this is not possible at compression rates below the Shannon entropy. As a result, the Shannon
entropy, which can be intuitively thought of as the uncertainty about the
outcome of an experiment that we are going to perform on a classical system,
quantifies in a stringent way the amount of “non-redundant” information that
is encoded in the state of the classical system: what one would naturally call its _information content_.
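As a small numerical illustration (Python; ours, not from the paper), here is the Shannon entropy of a biased binary source and the compression rate it implies:

```python
import math

def shannon_entropy(p):
    """H(p) = -sum_x p(x) log2 p(x) for a discrete distribution p."""
    return -sum(px * math.log2(px) for px in p if px > 0)

p = [0.9, 0.1]                     # biased binary source
H = shannon_entropy(p)
print(H)                           # ~0.469 bits per letter
# An N-letter message is compressible to ~0.469*N bits, not N bits:
# there are only about 2**(N*H) typical sequences.
print(2 ** (1000 * H))             # vs 2**1000 raw sequences for N = 1000
```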
In quantum information theory the Shannon entropy is replaced by the von
Neumann entropy. In particular, the quantum source coding theorem
PhysRevA.51.2738 identifies von Neumann entropy as the rate at which quantum
compression can be reliably achieved. Consider a quantum information source
described by a system $\mathrm{A}$ and density operator
$\rho\in\mathsf{St}{(\mathrm{A})}$, with $\mathsf{St}{(\mathrm{A})}$ the set
of states of system $\mathrm{A}$. The density operator describes the
preparation of a state $\sigma_{i}$ from any ensemble $\{p_{i}\sigma_{i}\}$,
with probabilities $p_{i}$, such that $\sum_{i}p_{i}\sigma_{i}=\rho$. A
quantum message of $N$ letters can now be understood in terms of $N$ uses of
the quantum source, which output a sequence of $N$ states $\sigma_{i_{j}}$, with $1\leq j\leq N$, each drawn independently with probability $p_{i_{j}}$. One
instance of this preparation protocol thus produces
$\sigma_{\mathrm{\mathbf{i}}}\coloneqq\bigotimes_{j=1}^{N}\sigma_{i_{j}}$,
with probability $p_{\mathrm{\mathbf{i}}}\coloneqq\prod_{j}p_{i_{j}}$. Each of
the $N$ systems has density operator $\rho$, and the density operator of the
entire message is then given by $\rho^{\otimes N}$. A compression scheme for
messages from the above described source consists of two steps. _Encoding:_
Alice encodes the system $\mathrm{A}^{\otimes N}$ according to a compression
map given by a channel $\mathscr{E}:\mathsf{St}(\mathrm{A}^{\otimes
N})\rightarrow\mathsf{St}(B)$, where $\mathrm{B}$ is generally a system with
dimension $d_{\mathrm{B}}(N)$ smaller then $\mathrm{A}^{\otimes N}$. The
compression rate is defined as the asymptotic quantity
$R=\lim_{N\rightarrow\infty}\log_{2}d_{\mathrm{B}}(N)/N$. Typically, one
estimates the “size” of the compressed message through the capacity of system
$\mathrm{B}$ given in terms of $\log_{2}d_{\mathrm{B}}(N)$, namely the number
of qubits that are needed to simulate $\mathrm{B}$. Alice then sends the
system $\mathrm{B}$ to Bob using $NR$ noiseless qubit channels. _Decoding:_
Finally, Bob applies a decompression map
$\mathscr{D}:\mathsf{St}(B)\rightarrow\mathsf{St}(\mathrm{A}^{\otimes N})$ to
the message encoded in system $\mathrm{B}$, with the purpose of recovering the
original message as reliably as possible.
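As a concrete illustration of the quantities just defined, the minimal sketch below (Python with NumPy; an illustration of ours, not code from the paper) computes the von Neumann entropy $S(\rho)=-\operatorname{Tr}[\rho\log_{2}\rho]$ for a standard single-qubit example source; by Schumacher's theorem, recalled below, this is the optimal asymptotic rate $R$ in qubits per letter:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log2(evals)))

# Source emitting |0> and |+> with probability 1/2 each:
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ketp, ketp)

print(von_neumann_entropy(rho))           # ~0.6009 qubits per letter
# A reliable scheme thus needs ~0.6009*N qubits (not N) for an N-letter message.
```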
As one might expect, the above scheme generally introduces an error in the
decoding: we now discuss the figure of merit by which we estimate the error
introduced by the compression scheme. In order to understand the operational
meaning of the figure of merit, think of a referee (Charlie) who prepares the
states $\sigma_{\mathrm{\mathbf{i}}}$ with probability
$p_{\mathrm{\mathbf{i}}}$, and receives the final states
$\mathscr{D}\mathscr{E}(\sigma_{\mathrm{\mathbf{i}}})$. The figure of merit
that we use corresponds to the probability that, after receiving Bob’s final
state, Charlie is able to distinguish it from the input one. For a single
instance, this probability is a linear function of the trace-norm distance
$\|\sigma_{\mathrm{\mathbf{i}}}-\mathscr{D}\mathscr{E}(\sigma_{\mathrm{\mathbf{i}}})\|_{1}$.
The probability of successful discrimination is thus evaluated to
$\sum_{\mathrm{\mathbf{i}}}p_{\mathrm{\mathbf{i}}}\|\sigma_{\mathrm{\mathbf{i}}}-\mathscr{D}\mathscr{E}(\sigma_{\mathrm{\mathbf{i}}})\|_{1}=\|\rho^{\otimes
N}-\mathscr{D}\mathscr{E}(\rho^{\otimes N})\|_{1}$. The protocol has then
error $\epsilon$ if the compressed and decompressed states
$\mathscr{D}\mathscr{E}(\sigma_{\mathrm{\mathbf{i}}})$ are $\epsilon$-close to
the original states $\sigma_{\mathrm{\mathbf{i}}}$, in the trace-norm
distance. In the case of qubits the above quantity is equivalent to fidelity,
thanks to the Fuchs-van der Graaf inequalities 761271 . The optimal quantum
encoding will then make the error arbitrarily small for $N$ large enough, with
rate $R$ as small as possible. Schumacher’s quantum source coding theorem
shows that the optimal rate is equal to the von Neumann entropy $S(\rho)$ of
the state $\rho$.
Another way to evaluate the error for a compression scheme is the following:
Charlie prepares a purification of the density operator $\rho^{\otimes N}$ and
sends the $N$ copies of system $\mathrm{A}$ to Alice. Alice then sends her
share of the pure state to Bob, sending as few qubits to Bob as possible.
After decompressing the received qubits, Bob shares an entangled state with
Charlie. The quality of the compression scheme can then be evaluated
considering how well Charlie can distinguish the initial state from the final
one, after receiving Bob’s $N$ systems. The probability that Charlie detects a
compression error can be evaluated through the input/output fidelity. Again,
Schumacher’s theorem states that Alice can transfer her share of the pure
state to Bob by sending $NS(\rho)$ qubits and achieving arbitrarily good
fidelity as the length $N$ of the message increases. This second perspective answers the question of whether the compression protocol preserves the correlations that system $\mathrm{A}^{\otimes N}$ has with a remote system $\mathrm{C}$.
Equivalence of the two approaches in assessing the quality of a compression
scheme shows that the ability to send quantum superpositions is equivalent to
the ability to send entanglement. In other terms, the amount of quantum
information preserved by a compression scheme represents the dimension of the
largest Hilbert space whose superpositions can be reliably compressed, or
equivalently the amount of entanglement that can be reliably compressed.
According to the above discussion a crucial point in the compression protocol
is to quantify the reliability of the compression map
$\mathscr{C}:=\mathscr{D}\mathscr{E}$, which in the asymptotic limit of
$N\to\infty$ must coincide with the identity map. In quantum theory, checking the reliability of the compression map by looking only at its local action, namely via the fidelity between the states $\mathscr{C}(\rho^{\otimes N})$ and $\rho^{\otimes N}$, is equivalent to checking its effect on correlations, namely via the entanglement fidelity. This follows from _local process tomography_ of
quantum theory where, given a map $\mathscr{C}$ on system $\mathrm{A}$ one has
$(\mathscr{C}\otimes\mathscr{I}_{\mathrm{C}})(\Psi)=\Psi\quad\forall\Psi\in\mathsf{St}(\mathrm{A}\mathrm{C})\quad\Leftrightarrow\quad\mathscr{C}(\rho)=\rho\quad\forall\rho\in\mathsf{St}{(\mathrm{A})}.$ (1)
This equivalence is due to _local discriminability_ PhysRevA.81.062348 ;
PhysRevA.84.012311 ; DAriano:2017up of quantum theory, where the
discrimination of bipartite quantum states can always be performed using local
measurements only (this property is equivalent to the one known in the
literature as local tomography, or tomographic locality Araki:1980tr ;
dakic_brukner_2011 ; Masanes_2011 ; Barnum:2014vt ). However, in the absence
of local discriminability, a map preserving local states can still affect correlations with remote systems DAriano2020information . This raises a crucial issue if one aims at studying the compression task beyond quantum theory, where the reliability of a protocol generally needs to be verified on extended systems. Indeed, in general, testing a compression scheme using ancillary systems is strictly stronger than testing it with local schemes.
As a first step in the direction of generalizing the compression protocol to
an arbitrary information theory, in this paper we consider the case of
fermionic systems as carriers of information. Fermionic computation has been
proposed in Ref. Bravyi2002210 and later studied in several works Wolf2006 ;
Banuls2007 ; Friis2013 ; DAriano2014a ; PhysRevA.101.052326 . Differently from
quantum systems, fermions obey the _parity superselection rule_. As a
consequence, fermionic information theory does not satisfy local
discriminability, thus providing a physically relevant example of a theory
where the task of compression is not straightforward. Indeed, in the case of
study, a map $\mathscr{C}$ that acts as the identity on local states
$\rho^{\otimes N}$ could still destroy the correlations with remote systems,
and then be mistakenly considered as a reliable compression map.
After reviewing the structure of fermionic quantum information, we prove that
the entanglement fidelity is a valid criterion for the reliability of a
fermionic compression map. We then show an analogous of the quantum source
coding theorem in the fermionic scenario, showing that the minimal compression
rate for which a reliable compression scheme exists is the von Neumann entropy
of the fermionic state. We conclude therefore that the von Neumann entropy
provides the informational content of the state also in the case of fermionic
theory, namely in the presence of parity superselection. The above result,
however, is not a straightforward consequence of Schumacher’s source coding
theorem.
## II Fermionic information theory
We now briefly review fermionic information theory. The systems of the theory are made of local fermionic modes (LFMs). An LFM is the counterpart of the qubit in quantum theory, and can be thought of as a cell that can be either empty or occupied by a fermionic excitation. An $L$-LFMs system, denoted
$\mathrm{L}_{\mathrm{F}}$, is described by $L$ fermionic fields $\varphi_{i}$,
satisfying the canonical anticommutation relations (CAR) $\{\varphi_{i},\varphi_{j}^{\dagger}\}=\delta_{ij}I$, $\{\varphi_{i},\varphi_{j}\}=0$, where $i,j=1,\dots,L$. With these fields one
constructs the occupation number operators $\varphi_{i}^{\dagger}\varphi_{i}$,
which can be easily proved to have only eigenvalues 0 and 1. The common
eigenvector $\left|{\Omega}\right\rangle$ of the operators
$\varphi_{i}^{\dagger}\varphi_{i}$, $i=1,\ldots,L$ with eigenvalue 0 defines
the vacuum state $\left|{\Omega}\right\rangle\left\langle{\Omega}\right|$ of
$\mathrm{L}_{\mathrm{F}}$, representing the state in which no mode is excited. The fermionic vacuum state in terms of the field operators is
given by
$\left|{\Omega}\right\rangle\left\langle{\Omega}\right|=\prod_{i=1}^{L}\varphi_{i}\varphi_{i}^{\dagger}$.
By applying the operators $\varphi_{i}^{\dagger}$ to
$\left|{\Omega}\right\rangle$ the corresponding $i$-th mode is excited and, by
raising $\left|{\Omega}\right\rangle$ in all possible ways, we get the $2^{L}$
orthonormal vectors forming the Fock basis in the occupation number
representation: a generic element of this basis is
$\displaystyle\left|{n_{1},\dots,n_{L}}\right\rangle:=(\varphi_{1}^{\dagger})^{n_{1}}\dots(\varphi_{L}^{\dagger})^{n_{L}}\left|{\Omega}\right\rangle,$
(2)
with $n_{i}\in\{0,1\}$ corresponding to the occupation number at the $i$-th
site. The linear span of these vectors corresponds to the antisymmetric Fock
space $\mathcal{F}_{L}$ of dimension $d_{\mathcal{F}_{L}}=2^{L}$. Notice that
the Fock space $\mathcal{F}_{L}$ is isomorphic to the Hilbert space
$\mathcal{H}_{L}$ of $L$ qubits, by the trivial identification of the
occupation number basis with the qubit computational basis. This
correspondence lies at the basis of the Jordan-Wigner isomorphism Jordan1928 ;
Verstraete2005 ; Pineda2010 typically used in the literature to map fermionic
systems to qubit systems and vice versa. We recall here the definition of the
Jordan-Wigner map
$\displaystyle J_{L}(\varphi_{i})=\left(\bigotimes_{l=1}^{i-1}\sigma^{z}_{l}\right)\otimes\sigma^{-}_{i}\otimes\left(\bigotimes_{k=i+1}^{L}I_{k}\right),\qquad J_{L}(\varphi^{\dagger}_{i})=J_{L}(\varphi_{i})^{\dagger},$ (3)
$\displaystyle J_{L}(XY)=J_{L}(X)J_{L}(Y),\qquad J_{L}(aX+bY)=aJ_{L}(X)+bJ_{L}(Y),$
with $X,Y$ linear combinations of products of field operators on the $L$-LFMs,
and where we used the standard notation for Pauli sigma operators. In the
following we will drop the dependence on the number of LFMs in the Jordan-
Wigner map, namely we will write $J(X)$ in place of $J_{L}(X)$ when this is clear from the context. Notice that the Jordan-Wigner isomorphism is
implicitly defined in Eq. (2), and, as such, it depends on the arbitrary
ordering of the modes. All such representations are unitarily equivalent.
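As a concrete illustration, the representatives of Eq. (3) can be built and tested numerically. The following sketch (ours, not part of the original treatment; it assumes numpy, and the helper name `field` is illustrative) constructs $J_{L}(\varphi_{i})$ for $L=3$ and checks the CAR together with the vacuum formula $\left|{\Omega}\right\rangle\left\langle{\Omega}\right|=\prod_{i=1}^{L}\varphi_{i}\varphi_{i}^{\dagger}$:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Zs = np.diag([1.0, -1.0])                      # sigma^z
Sm = np.array([[0.0, 1.0], [0.0, 0.0]])        # sigma^- = |0><1| (occupation basis)

def field(i, L):
    """Jordan-Wigner representative J_L(phi_i) of Eq. (3)."""
    return reduce(np.kron, [Zs] * (i - 1) + [Sm] + [I2] * (L - i))

L = 3
phi = [field(i, L) for i in range(1, L + 1)]
for a in range(L):
    for b in range(L):
        # CAR: {phi_a, phi_b^dag} = delta_ab I  and  {phi_a, phi_b} = 0
        assert np.allclose(phi[a] @ phi[b].conj().T + phi[b].conj().T @ phi[a],
                           (a == b) * np.eye(2 ** L))
        assert np.allclose(phi[a] @ phi[b] + phi[b] @ phi[a], 0)

vac = reduce(np.matmul, [p @ p.conj().T for p in phi])   # J(|Omega><Omega|)
assert np.allclose(vac, np.diag([1.0] + [0.0] * (2 ** L - 1)))
print("CAR and vacuum formula verified for L =", L)
```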
Differently from standard qubits, fermionic systems satisfy the parity
superselection rule [16, 21, 22, 23, 24]. One can decompose the Fock space $\mathcal{F}_{L}$ of
system $\mathrm{L}_{\mathrm{F}}$ in the direct sum
$\mathcal{F}_{L}=\mathcal{F}_{L}^{e}\oplus\mathcal{F}_{L}^{o}$, with
$\mathcal{F}^{e}_{L}$ and $\mathcal{F}^{o}_{L}$ the spaces generated by
vectors with even and odd total occupation number, respectively. The convex
set of states $\mathsf{St}{(\mathrm{L}_{\mathrm{F}})}$ corresponds, in the
Jordan-Wigner representation, to the set of density matrices on
$\mathcal{F}_{L}$ of the form $\rho=\rho_{e}+\rho_{o}$, with
$\rho_{e},\rho_{o}\geq 0$,
$\operatorname{Tr}[\rho_{o}]+\operatorname{Tr}[\rho_{e}]\leq 1$ and with
$\rho_{e}$ and $\rho_{o}$ having support on $\mathcal{F}_{L}^{e}$ and
$\mathcal{F}_{L}^{o}$, respectively; pure states are represented by rank-one
density operators. Moreover, the density matrices representing fermionic
states are the Jordan-Wigner representatives of linear combinations of
products of an even number of field operators (see Appendix A and Ref. [24]
for further details). Vice versa, every linear combination of products of an
even number of field operators that is represented by a density matrix is an
admissible state. Analogously, effects
in the set $\mathsf{Eff}{(\mathrm{L}_{\mathrm{F}})}$ are represented by
positive operators on $\mathrm{L}_{\mathrm{F}}$ of the form $a=a_{e}+a_{o}$,
with $a_{e}$ and $a_{o}$ having support on $\mathcal{F}_{L}^{e}$ and
$\mathcal{F}_{L}^{o}$, respectively. Notice that the sets of states and effects
of system $\mathrm{L}_{\mathrm{F}}$ have dimension
$d^{2}_{\mathcal{F}_{L}}/2=2^{2L-1}$, compared to the quantum case, where the
sets of states and effects associated to the Hilbert space $\mathcal{H}_{L}$ of
$L$ qubits have dimension $d^{2}_{\mathcal{H}_{L}}=2^{2L}$.
Given a state $\rho\in\mathsf{St}(\mathrm{L}_{\mathrm{F}})$ we define the
refinement set of $\rho$ as
$\mathsf{Ref}(\rho):=\\{\sigma\in\mathsf{St}(\mathrm{L}_{\mathrm{F}})|\exists\tau\in\mathsf{St}(\mathrm{L}_{\mathrm{F}}):\rho=\sigma+\tau\\}$,
and a state is pure when all the elements in the refinement are proportional
to the state itself. In the following we will denote by
$\mathsf{PurSt}(\mathrm{L}_{\mathrm{F}})$ and
$\mathsf{St}_{1}(\mathrm{L}_{\mathrm{F}})$ the set of pure states and the set
of normalized states (of trace one) of system $\mathrm{L}_{\mathrm{F}}$,
respectively.
Given two fermionic systems $\mathrm{L}_{\mathrm{F}}$ and
$\mathrm{M}_{\mathrm{F}}$, we introduce the composition of the two as the
system made of $K\equiv L+M$ LFMs,
denoted with the symbol
$\mathrm{K}_{\mathrm{F}}\coloneqq\mathrm{L}_{\mathrm{F}}\boxtimes\mathrm{M}_{\mathrm{F}}$,
or simply
$\mathrm{K}_{\mathrm{F}}\coloneqq\mathrm{L}_{\mathrm{F}}\mathrm{M}_{\mathrm{F}}$.
We use the symbol $\boxtimes$ to distinguish the fermionic parallel
composition rule from the quantum one, corresponding to the tensor product
$\otimes$.
Given a state
$\Psi\in\mathsf{St}(\mathrm{L}_{\mathrm{F}}\mathrm{M}_{\mathrm{F}})$, one can
discard the subsystem $\mathrm{M}_{\mathrm{F}}$ and consider the marginal
state, which we denote by
$\sigma:=\operatorname{Tr}^{f}_{\mathrm{M}_{\mathrm{F}}}(\Psi)$. We use the
symbol $\operatorname{Tr}^{f}_{\mathrm{M}_{\mathrm{F}}}$ to denote the
fermionic partial trace on the subsystem $\mathrm{M}_{\mathrm{F}}$. This is
computed by performing the following steps (see Ref. [24] for
further details): (i) drop all those terms in $\Psi$ containing an odd number
of field operators in any of the LFMs in $\mathrm{M}_{\mathrm{F}}$; (ii)
remove all the field operators corresponding to the LFMs in
$\mathrm{M}_{\mathrm{F}}$ from the remaining terms. The fermionic trace
$\operatorname{Tr}^{f}(\rho)$ of a state
$\rho\in\mathsf{St}(\mathrm{M}_{\mathrm{F}})$ is then defined as a special
case of the partial one, corresponding to the case in which $L=0$.
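As a concrete illustration of the two-step rule (a worked sketch we add here, with numbers chosen by us), consider the two-mode state $J^{-1}(\left|{\Psi}\right\rangle\left\langle{\Psi}\right|)$ with $\left|{\Psi}\right\rangle=\frac{1}{\sqrt{2}}(\left|{00}\right\rangle+\left|{11}\right\rangle)$, which reappears below. Written in terms of field operators, the cross terms $\left|{00}\right\rangle\left\langle{11}\right|$ and $\left|{11}\right\rangle\left\langle{00}\right|$ contain an odd number of mode-2 operators and are dropped by step (i), while $\left|{00}\right\rangle\left\langle{00}\right|$ and $\left|{11}\right\rangle\left\langle{11}\right|$ survive and lose their mode-2 factors in step (ii), leaving the maximally mixed marginal $\frac{1}{2}(\left|{0}\right\rangle\left\langle{0}\right|+\left|{1}\right\rangle\left\langle{1}\right|)$. For this even state, with the traced mode last in the ordering, the result coincides with the ordinary qubit partial trace of the Jordan-Wigner representative, as the following numpy sketch confirms:

```python
import numpy as np

psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)         # (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi)                 # Jordan-Wigner representative J(Psi)

# qubit partial trace over the second (traced-out) tensor factor
marg = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(marg)                              # [[0.5, 0.0], [0.0, 0.5]]
```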
Finally, the set of transformations from $\mathrm{L}_{\mathrm{F}}$ to
$\mathrm{M}_{\mathrm{F}}$, denoted by
$\mathsf{Tr}(\mathrm{L}_{\mathrm{F}}\rightarrow\mathrm{M}_{\mathrm{F}})$, is
given by completely positive maps from $\mathsf{St}(\mathrm{L}_{\mathrm{F}})$
to $\mathsf{St}(\mathrm{M}_{\mathrm{F}})$ in the Jordan-Wigner representation.
Moreover, we denote by
$\mathsf{Tr}_{1}(\mathrm{L}_{\mathrm{F}}\rightarrow\mathrm{M}_{\mathrm{F}})$
the set of deterministic transformations, also called _channels_ , from
$\mathrm{L}_{\mathrm{F}}$ to $\mathrm{M}_{\mathrm{F}}$, corresponding to
trace-preserving completely positive maps. Like in quantum theory, any
fermionic transformation
$\mathscr{C}\in\mathsf{Tr}(\mathrm{L}_{\mathrm{F}}\rightarrow\mathrm{M}_{\mathrm{F}})$
can be expressed in Kraus form $\mathscr{C}(\rho)=\sum_{i}C_{i}\rho
C^{\dagger}_{i}$, with deterministic transformations having Kraus operators
$\\{C_{i}\\}$ such that $J(\sum_{i}C_{i}^{\dagger}C_{i})=I_{\mathcal{H}_{L}}$,
$I_{\mathcal{H}_{L}}$ denoting the identity operator on $\mathcal{H}_{L}$. For
a map
$\mathscr{C}\in\mathsf{Tr}(\mathrm{L}_{\mathrm{F}}\rightarrow\mathrm{M}_{\mathrm{F}})$
with Kraus operators $\\{C_{i}\\}$, we define its Jordan-Wigner representative
$J(\mathscr{C})$ as the quantum map with Kraus operators $\\{J(C_{i})\\}$.
Now, given two transformations
$\mathscr{C}\in\mathsf{Tr}(\mathrm{L}_{\mathrm{F}}\rightarrow\mathrm{M}_{\mathrm{F}})$
and
$\mathscr{D}\in\mathsf{Tr}(\mathrm{K}_{\mathrm{F}}\rightarrow\mathrm{N}_{\mathrm{F}})$,
we denote by
$\mathscr{C}\boxtimes\mathscr{D}\in\mathsf{Tr}(\mathrm{L}_{\mathrm{F}}\mathrm{K}_{\mathrm{F}}\to\mathrm{M}_{\mathrm{F}}\mathrm{N}_{\mathrm{F}})$
the _parallel composition_ of $\mathscr{C}$ and $\mathscr{D}$, with Kraus
operators $\\{C_{i}D_{j}\\}$, where $\\{C_{i}\\}$ are Kraus operators for
$\mathscr{C}$ and $\\{D_{j}\\}$ for $\mathscr{D}$. We observe that in the
Jordan-Wigner representation one generally has $J_{L+K}(C_{i}D_{j})\neq
J_{L}(C_{i})\otimes J_{K}(D_{j})$, and
$J_{L+K}(\mathscr{C}\boxtimes\mathscr{D})\neq J_{L}(\mathscr{C})\otimes
J_{K}(\mathscr{D})$. If $\mathscr{C}$ is a transformation in
$\mathsf{Tr}(\mathrm{L}_{\mathrm{F}}\rightarrow\mathrm{M}_{\mathrm{F}})$, its
extension to a composite system
$\mathrm{L}_{\mathrm{F}}\mathrm{N}_{\mathrm{F}}$, is given by
$\mathscr{C}\boxtimes\mathscr{I}$, with $\mathscr{I}$ the identity map of
system $\mathrm{N}_{\mathrm{F}}$—whose Jordan-Wigner representative is the
quantum identity map—and its Kraus operators involve field operators on the
$\mathrm{L}_{\mathrm{F}}$ modes only. It is worth noticing that, although the
Jordan-Wigner representative of this map is not necessarily of the form
$J_{L}(\mathscr{C})\otimes\mathscr{I}$, upon a suitable choice of the ordering
of the LFMs that defines the representation one can always reduce to the case
where, indeed,
$J_{L+N}(\mathscr{C}\boxtimes\mathscr{I})=J_{L}(\mathscr{C})\otimes\mathscr{I}$.
As a special case of the above composition rule, one can define
$\rho\boxtimes\sigma\coloneqq\rho\sigma\in\mathsf{St}(\mathrm{L}_{\mathrm{F}}\mathrm{M}_{\mathrm{F}})$
for the parallel composition of states
$\rho\in\mathsf{St}(\mathrm{L}_{\mathrm{F}})$ and
$\sigma\in\mathsf{St}(\mathrm{M}_{\mathrm{F}})$, and similarly $a\boxtimes
b\coloneqq ab\in\mathsf{Eff}(\mathrm{L}_{\mathrm{F}}\mathrm{M}_{\mathrm{F}})$
for the parallel composition of effects
$a\in\mathsf{Eff}(\mathrm{L}_{\mathrm{F}})$ and
$b\in\mathsf{Eff}(\mathrm{M}_{\mathrm{F}})$.
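The sign obstruction behind the non-factorization noted above shows up already for single field operators. The following sketch (ours, assuming numpy) checks that $J_{2}(\varphi_{1}\varphi_{2})=-J_{1}(\varphi_{1})\otimes J_{1}(\varphi_{1})$: the Jordan-Wigner representative of a parallel composition is in general not the tensor product of the local representatives, but equal to it only up to signs produced by the $\sigma^{z}$ strings:

```python
import numpy as np
from functools import reduce

I2, Zs = np.eye(2), np.diag([1.0, -1.0])
Sm = np.array([[0.0, 1.0], [0.0, 0.0]])

def field(i, L):
    """Jordan-Wigner representative J_L(phi_i) of Eq. (3)."""
    return reduce(np.kron, [Zs] * (i - 1) + [Sm] + [I2] * (L - i))

lhs = field(1, 2) @ field(2, 2)             # J_2(phi_1 phi_2)
rhs = np.kron(field(1, 1), field(1, 1))     # J_1(phi_1) (x) J_1(phi_1)
print(np.allclose(lhs, -rhs))               # True: equal only up to a sign
```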
A useful characterization of fermionic maps in
$\mathsf{Tr}(\mathrm{L}_{\mathrm{F}}\rightarrow\mathrm{L}_{\mathrm{F}})$,
proved in Ref. [16], is the following:
###### Proposition II.1 (Fermionic transformations).
All the transformations in
$\mathsf{Tr}(\mathrm{L}_{\mathrm{F}}\rightarrow\mathrm{L}_{\mathrm{F}})$ with
Kraus operators being linear combinations of products of either an even number
or an odd number of field operators are admissible fermionic transformations.
Vice versa, each admissible fermionic transformation in
$\mathsf{Tr}(\mathrm{L}_{\mathrm{F}}\rightarrow\mathrm{L}_{\mathrm{F}})$ has
Kraus operators being superpositions of products of either an even number or
an odd number of field operators.
###### Corollary II.1 (Fermionic effects).
Fermionic effects are positive operators bounded by the identity operator that
are linear combinations of products of an even number of field operators.
Vice versa, every linear combination of products of an even number of field
operators that is represented by a positive operator bounded by the identity
is a fermionic effect.
The corollary follows immediately from Proposition II.1, since an effect $A$
is obtained as a fermionic transformation $\mathscr{A}$ followed by the
discard map, i.e. the trace. Thus
$\displaystyle\operatorname{Tr}[\rho A]=\operatorname{Tr}[\mathscr{A}(\rho)]=\sum_{i}\operatorname{Tr}[K_{i}\rho K_{i}^{\dagger}]=\operatorname{Tr}[\rho\sum_{i}K_{i}^{\dagger}K_{i}],$
namely $A=\sum_{i}K_{i}^{\dagger}K_{i}$. Since each polynomial $K_{i}$ has a
definite parity (though not necessarily the same for every $i$), each product
$K_{i}^{\dagger}K_{i}$ is even, and hence $A$ is an even polynomial.
In the following we denote by $\mathcal{L}(\mathcal{H}_{L})$ the set of linear
operators on the Hilbert space $\mathcal{H}_{L}$ of $L$-qubits and by
$\mathcal{L}(\mathcal{H}_{L},\mathcal{H}_{M})$ the set of linear operators
from $\mathcal{H}_{L}$ to $\mathcal{H}_{M}$. It is useful to introduce the
isomorphism between operators $X$ in
$\mathcal{L}(\mathcal{H}_{L},\mathcal{H}_{M})$ and vectors
$|X\rangle\\!\rangle$ in $\mathcal{H}_{M}\otimes\mathcal{H}_{L}$ given by
$|X\rangle\\!\rangle=(X\otimes
I_{\mathcal{H}_{L}})|I_{\mathcal{H}_{L}}\rangle\\!\rangle=(I_{\mathcal{H}_{M}}\otimes
X^{T})|I_{\mathcal{H}_{M}}\rangle\\!\rangle,$ (4)
where $I_{\mathcal{H}_{L}}$ is the identity operator in $\mathcal{H}_{L}$,
$|I_{\mathcal{H}_{L}}\rangle\\!\rangle\in\mathcal{H}_{L}^{\otimes 2}$ is the
maximally entangled vector
$|I_{\mathcal{H}_{L}}\rangle\\!\rangle=\sum_{l}|l\rangle|l\rangle$ (with
$\\{|l\rangle\\}$ a fixed orthonormal basis for $\mathcal{H}_{L}$), and
$X^{T}\in\mathcal{L}(\mathcal{H}_{M},\mathcal{H}_{L})$ is the transpose of $X$
with respect to the two fixed bases chosen in $\mathcal{H}_{L}$ and
$\mathcal{H}_{M}$. Notice also the useful identity
$\displaystyle Y\otimes Z|X\rangle\\!\rangle=|YXZ^{T}\rangle\\!\rangle,$ (5)
where $X\in\mathcal{L}(\mathcal{H}_{L},\mathcal{H}_{M})$,
$Y\in\mathcal{L}(\mathcal{H}_{M},\mathcal{H}_{N})$ and
$Z\in\mathcal{L}(\mathcal{H}_{L},\mathcal{H}_{K})$. Moreover, for
$X,Y\in\mathcal{L}(\mathcal{H}_{L},\mathcal{H}_{M})$, one has
$\operatorname{Tr}_{\mathcal{H}_{L}}[|X\rangle\\!\rangle\langle\\!\langle
Y|]=XY^{\dagger}$, and
$\operatorname{Tr}_{\mathcal{H}_{M}}[|X\rangle\\!\rangle\langle\\!\langle
Y|]=X^{T}Y^{*}$. We remark that, in the above paragraph, we are dealing with
abstract linear operators on a Hilbert space, disregarding their possible
interpretation as Jordan-Wigner representatives of some fermionic operator.
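The identities above are easy to verify numerically. In the following sketch (ours; it assumes numpy, and `dket` is an illustrative helper implementing $|X\rangle\\!\rangle=\sum_{m,l}X_{ml}|m\rangle|l\rangle$ as row-major vectorization) we check Eq. (5) and the two partial-trace rules on random rectangular matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
dL, dM, dN, dK = 2, 3, 4, 5

def dket(X):
    """|X>> = sum_{m,l} X_{ml} |m>|l>, i.e. row-major vectorization."""
    return X.reshape(-1)

def cplx(shape):
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

X, Xp = cplx((dM, dL)), cplx((dM, dL))
Y, Z = cplx((dN, dM)), cplx((dK, dL))

# Eq. (5): (Y (x) Z)|X>> = |Y X Z^T>>
assert np.allclose(np.kron(Y, Z) @ dket(X), dket(Y @ X @ Z.T))

# Tr_L |X>><<Xp| = X Xp^dag   and   Tr_M |X>><<Xp| = X^T Xp^*
O = np.outer(dket(X), dket(Xp).conj()).reshape(dM, dL, dM, dL)
assert np.allclose(O.trace(axis1=1, axis2=3), X @ Xp.conj().T)
assert np.allclose(O.trace(axis1=0, axis2=2), X.T @ Xp.conj())
print("double-ket identities verified")
```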
A notion that will be used in the following is that of state dilation.
###### Definition II.1 (Dilation set of a state $\rho$).
For any $\rho\in\mathsf{St}{(\mathrm{L}_{\mathrm{F}})}$, we say that
$\Psi_{\rho}\in\mathsf{St}(\mathrm{L}_{\mathrm{F}}\mathrm{M}_{\mathrm{F}})$,
for some system $\mathrm{M}_{\mathrm{F}}$, is a dilation of $\rho$ if
$\rho=\operatorname{Tr}^{f}_{\mathrm{M}_{\mathrm{F}}}[\Psi_{\rho}]$. We denote
by $D_{\rho}$ the set of all possible dilations of $\rho$. A pure dilation
$\Psi_{\rho}\in\mathsf{PurSt}(\mathrm{L}_{\mathrm{F}}\mathrm{M}_{\mathrm{F}})$
of $\rho$ is called a purification.
Naturally, any purification of $\rho$ belongs to $D_{\rho}$; more precisely,
the set of purifications of $\rho$ is the subset of $D_{\rho}$ containing pure
states. A main feature of quantum theory that remains valid for fermionic
systems is the existence of a purification of any state, that is unique modulo
channels on the purifying system.
###### Proposition II.2 (Purification of states).
For every $\rho\in\mathsf{St}{(\mathrm{L}_{\mathrm{F}})}$, there exists a
purification
$\Psi_{\rho}\in\mathsf{PurSt}(\mathrm{L}_{\mathrm{F}}\mathrm{M}_{\mathrm{F}})$
of $\rho$ for some system $\mathrm{M}_{\mathrm{F}}$. Moreover, the
purification is unique up to channels on the purifying system: if
$\Psi_{\rho}\in\mathsf{PurSt}(\mathrm{L}_{\mathrm{F}}\mathrm{M}_{\mathrm{F}})$
and
$\Phi_{\rho}\in\mathsf{PurSt}(\mathrm{L}_{\mathrm{F}}\mathrm{K}_{\mathrm{F}})$
are two purifications of $\rho$ then there exists a channel
$\mathscr{V}\in\mathsf{Tr}_{1}(\mathrm{M}_{\mathrm{F}}\rightarrow\mathrm{K}_{\mathrm{F}})$
such that
$(\mathscr{I}_{\mathrm{L}_{\mathrm{F}}}\boxtimes\mathscr{V})(\Psi_{\rho})=\Phi_{\rho}$.
###### Proof.
It can be easily verified that every purification of
$\rho\in\mathsf{St}{(\mathrm{L}_{\mathrm{F}})}$, having even part $\rho_{e}$
and odd part $\rho_{o}$, can be obtained in terms of the minimal one
$J^{-1}(|F\rangle\\!\rangle\langle\\!\langle
F|)\in\mathsf{PurSt}(\mathrm{L}_{\mathrm{F}}\mathrm{M}_{\mathrm{F}})$, with
$F=J(\rho)^{\frac{1}{2}}$, $M=\lceil\log_{2}{2r}\rceil$ and
$r=\max(\operatorname{rank}(\rho_{e}),\operatorname{rank}(\rho_{o}))$. Now,
let
$\Psi_{\rho}\in\mathsf{PurSt}(\mathrm{L}_{\mathrm{F}}\mathrm{M}_{\mathrm{F}})$
and
$\Phi_{\rho}\in\mathsf{PurSt}(\mathrm{L}_{\mathrm{F}}\mathrm{K}_{\mathrm{F}})$
be two purifications of $\rho$. If $M=K$, let us choose the ordering defining
the Jordan-Wigner isomorphism of Eq. (2) in such a way that the modes in the
purifying system $\mathrm{M}_{\mathrm{F}}$ precede the modes of
$\mathrm{L}_{\mathrm{F}}$. Then, using the quantum purification theorem, we
know that there exists a reversible map $\mathscr{U}$ with unitary Kraus operator $U$
such that $|F_{\rho}\rangle\\!\rangle=(U\otimes I)|P_{\rho}\rangle\\!\rangle$,
where
$\displaystyle|F_{\rho}\rangle\\!\rangle\langle\\!\langle
F_{\rho}|=J(\Phi_{\rho}),\quad|P_{\rho}\rangle\\!\rangle\langle\\!\langle
P_{\rho}|=J(\Psi_{\rho}).$
The unitary $U$ can be chosen in such a way that $J^{-1}(\mathscr{U})$ is an
admissible fermionic map, namely in such a way that it respects the parity
superselection rule (see Lemma B.1 in Appendix B). Moreover, due to Lemma B.2
in Appendix B, $J^{-1}(U\otimes I)$ cannot contain field operators on the
modes in $\mathrm{L}_{\mathrm{F}}$, and is then local on the purifying system
$\mathrm{K}_{\mathrm{F}}$. Now, let $K>M$. Then, we can consider a pure state
$\omega$ on the $K-M$ modes and take the parallel composition
$\Psi_{\rho}\boxtimes\omega$. This is still a purification of $\rho$, and by
the previous argument, there exists a reversible channel
$\mathscr{U}\in\mathsf{Tr}_{1}(\mathrm{K}_{\mathrm{F}}\rightarrow\mathrm{K}_{\mathrm{F}})$
such that
$\Phi_{\rho}=(\mathscr{I}_{\mathrm{L}_{\mathrm{F}}}\boxtimes\mathscr{U})(\Psi_{\rho}\boxtimes\omega)=(\mathscr{I}_{\mathrm{L}_{\mathrm{F}}}\boxtimes\mathscr{V})(\Psi_{\rho})$
where $\mathscr{V}$ is the channel defined by
$\mathscr{V}=\mathscr{U}(\mathscr{I}\boxtimes\omega)$. If $K<M$, we consider
$\Phi_{\rho}\boxtimes\omega$, where $\omega$ is any pure state of a system of
$N=M-K$ modes, and we have
$\Phi_{\rho}\boxtimes\omega=(\mathscr{I}_{\mathrm{L}_{\mathrm{F}}}\boxtimes\mathscr{U})(\Psi_{\rho})$.
Now we discard the additional modes, and the channel connecting the
purifications is the sequential composition of $\mathscr{U}$ and the
discarding map:
$\mathscr{V}:=(\mathscr{I}_{\mathrm{K}_{\mathrm{F}}}\boxtimes\operatorname{Tr}^{f}_{\mathrm{N}_{\mathrm{F}}})\mathscr{U}$.
∎
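The minimal purification used in the proof is easy to exhibit numerically. In the sketch below (ours, assuming numpy and scipy; the single-mode state is an arbitrary illustrative choice), $J(\rho)=\mathrm{diag}(p_{0},p_{1})$ gives $r=1$ and $M=1$, the vector $|F\rangle\\!\rangle$ with $F=J(\rho)^{1/2}$ is supported on the even-parity vectors $\left|{00}\right\rangle,\left|{11}\right\rangle$ only, and discarding the purifying factor recovers $J(\rho)=FF^{\dagger}$:

```python
import numpy as np
from scipy.linalg import sqrtm

Jrho = np.diag([0.7, 0.3])        # parity-superselected single-mode state
F = sqrtm(Jrho)                   # F = J(rho)^{1/2}
v = F.reshape(-1)                 # |F>>, supported on |00> and |11> (even parity)
purif = np.outer(v, v.conj())     # J of the minimal purification

marg = purif.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # = F F^dag
assert np.allclose(marg, Jrho)
print("minimal purification reproduces the marginal J(rho)")
```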
The main difference between fermionic and quantum information lies in the
notion of which Kraus operators correspond to local maps. While in the case of
qubit systems local maps acting on the $i$-th qubit of a composite system have
Kraus operators that can be factorized as a non-trivial operator on the $i$-th
tensor factor $\mathbb{C}^{2}$ of the total Hilbert space, in the case of the
fermionic Fock space $\mathcal{F}_{L}$ a local transformation on the $i$-th
mode can be represented in the Jordan-Wigner isomorphism by operators that act
non-trivially on factors $\mathbb{C}^{2}$ different from the $i$-th one. This
fact is the source of all the differences between the theory of qubits and
fermionic theory, including superselection and the features that it affects,
such as the notion of entanglement [16] and local state discrimination
protocols [26, 27]. Due to parity superselection, fermionic
theory does not satisfy _local process tomography_, namely the property
stating that two transformations
$\mathscr{C}_{1},\mathscr{C}_{2}\in\mathsf{Tr}(\mathrm{L}_{\mathrm{F}}\rightarrow\mathrm{M}_{\mathrm{F}})$
are equal iff they act in the same way on the local states in
$\mathsf{St}(\mathrm{L}_{\mathrm{F}})$, namely
$\mathscr{C}_{1}(\rho)=\mathscr{C}_{2}(\rho)$ for every
$\rho\in\mathsf{St}(\mathrm{L}_{\mathrm{F}})$ (see for example Eq. (1) in the
introduction on the equality between the compression map $\mathscr{C}$ and the
identity map). As a consequence, fermionic theory also violates _local
tomography_. A typical example of a transformation that is locally equivalent
to the identity but differs from it when extended to multipartite systems is
the parity transformation, as shown in the following. Let us consider a single
fermionic mode system $\mathrm{1}_{\mathrm{F}}$, whose possible states are
constrained to be of the form
$J(\rho)=q_{0}\left|{0}\right\rangle\left\langle{0}\right|+q_{1}\left|{1}\right\rangle\left\langle{1}\right|$
by the parity superselection rule. Let $P_{0}$ and $P_{1}$ be the projectors
on $\left|{0}\right\rangle$ and $\left|{1}\right\rangle$ respectively, namely
on the even and odd sector of the Fock space. The parity transformation
$\mathscr{P}$, that in the Jordan-Wigner representation $J(\mathscr{P})$ has
Kraus operators $P_{0}$ and $P_{1}$, acts as the identity
$\mathscr{I}_{\mathrm{1}_{\mathrm{F}}}$ when applied to states in
$\mathsf{St}{(\mathrm{1}_{\mathrm{F}})}$. However, taking the system
$\mathrm{2}_{\mathrm{F}}$ and considering the extended transformation
$\mathscr{P}\boxtimes\mathscr{I}_{\mathrm{1}_{\mathrm{F}}}$ on
$\mathsf{St}{(\mathrm{2}_{\mathrm{F}})}$, one notices that it differs from the
identity map $\mathscr{I}_{\mathrm{2}_{\mathrm{F}}}$. Indeed,
the state $J^{-1}(\left|{\Psi}\right\rangle\left\langle{\Psi}\right|)$, with
$\left|{\Psi}\right\rangle=\frac{1}{\sqrt{2}}(\left|{00}\right\rangle+\left|{11}\right\rangle)$
is a legitimate fermionic state in $\mathsf{St}{(\mathrm{2}_{\mathrm{F}})}$,
and one can straightforwardly verify that
$\displaystyle(\mathscr{P}\boxtimes\mathscr{I}_{\mathrm{1}_{\mathrm{F}}})[J^{-1}(\left|{\Psi}\right\rangle\left\langle{\Psi}\right|)]$
$\displaystyle=\frac{1}{2}J^{-1}(\left|{00}\right\rangle\left\langle{00}\right|+\left|{11}\right\rangle\left\langle{11}\right|)$
$\displaystyle\neq
J^{-1}(\left|{\Psi}\right\rangle\left\langle{\Psi}\right|).$
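The verification can be carried out numerically in the Jordan-Wigner picture (a sketch of ours, assuming numpy; with the trivial ordering the Kraus operators of $J(\mathscr{P}\boxtimes\mathscr{I})$ are $P_{0}\otimes I$ and $P_{1}\otimes I$):

```python
import numpy as np

P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])    # projectors on |0>, |1>
I2 = np.eye(2)

psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)                             # J(Psi)

out = sum(np.kron(Pk, I2) @ rho @ np.kron(Pk, I2) for Pk in (P0, P1))
print(np.allclose(out, np.diag([0.5, 0.0, 0.0, 0.5])))   # True
print(np.abs(np.linalg.eigvalsh(out - rho)).sum())       # 1-norm of the difference: 1.0
```

Since $\lVert(\mathscr{P}\boxtimes\mathscr{I})(\Psi)-\Psi\rVert_{1}=1$, the two channels can be told apart on this dilation with success probability up to $3/4$, consistent with the discrimination bound discussed in the next subsection.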
### II.1 Identical channels upon-input of $\rho$
In the following we will be interested in quantitatively assessing how closely
a channel (the compression map) resembles another one (the identity map),
provided that we know that the input state corresponds to a given $\rho$. To
this end we introduce the notion of identical fermionic channels upon-input of
$\rho$.
Given two fermionic channels
$\mathscr{C}_{1},\mathscr{C}_{2}\in\mathsf{Tr}_{1}{(\mathrm{L}_{\mathrm{F}}\to\mathrm{M}_{\mathrm{F}})}$
and a state $\rho\in\mathsf{St}(\mathrm{L}_{\mathrm{F}})$, we say that
$\mathscr{C}_{1}$ and $\mathscr{C}_{2}$ are equal upon-input of $\rho$ if
$(\mathscr{C}_{1}\boxtimes\mathscr{I})(\Sigma)=(\mathscr{C}_{2}\boxtimes\mathscr{I})(\Sigma)\qquad\forall\Sigma\in\mathsf{Ref}(D_{\rho}).$
(6)
Operationally, this means that one cannot discriminate between
$\mathscr{C}_{1}$ and $\mathscr{C}_{2}$ when applied to any dilation
$\Psi_{\rho}$ of the state $\rho$, independently of how $\Psi_{\rho}$ has been
prepared. Suppose that $\Psi_{\rho}\in D_{\rho}$ was prepared as
$\Psi_{\rho}=\sum_{i}\Sigma_{i}$, for some refinement of $\Psi_{\rho}$. Even
using the knowledge of the preparation, one cannot distinguish between
$\mathscr{C}_{1}$ and $\mathscr{C}_{2}$. Notice that, differently from the
quantum case, here it is necessary to check the identity between channels on
bipartite systems.
Following the above definition, one can quantify how close two channels are.
One has that $\mathscr{C}_{1}$ and $\mathscr{C}_{2}$ are $\varepsilon$-close
upon-input of $\rho$ if
$\displaystyle\sum_{i}\lVert[(\mathscr{C}_{1}-\mathscr{C}_{2})\boxtimes\mathscr{I}](\Sigma_{i})\rVert_{1}\leq\varepsilon\quad\forall\\{\Sigma_{i}\\}:\
\sum_{i}\Sigma_{i}\in D_{\rho},$ (7)
where $\lVert X\rVert_{1}$ is the $1$-norm of $J(X)$. One can
straightforwardly prove that the trace distance
$d(\rho,\sigma):=\frac{1}{2}\lVert\rho-\sigma\rVert_{1}$ has a clear
operational interpretation in terms of the maximum success probability of
discrimination between the two states $\rho$ and $\sigma$. Eq. (7) provides
then an upper bound for the probability of discriminating between
$\mathscr{C}_{1}$ and $\mathscr{C}_{2}$ when applied to the dilations of
$\rho$, including their refinements: $\mathscr{C}_{1}$ and $\mathscr{C}_{2}$
cannot be distinguished with a success probability greater than
$\frac{1}{2}+\frac{1}{4}\varepsilon$. Accordingly, a sequence of channels
$\mathscr{C}_{N}\in\mathsf{Tr}_{1}{(\mathrm{L}_{\mathrm{F}}\to\mathrm{M}_{\mathrm{F}})}$
converges to the channel
$\mathscr{C}\in\mathsf{Tr}_{1}{(\mathrm{L}_{\mathrm{F}}\to\mathrm{M}_{\mathrm{F}})}$
upon-input of $\rho$ if
$\lim_{N\to\infty}\lVert[(\mathscr{C}_{N}-\mathscr{C})\boxtimes\mathscr{I}](\Sigma)\rVert_{1}=0\quad\forall\Sigma\in\mathsf{Ref}(D_{\rho}).$
## III Fermionic compression
Consider now a system $\mathrm{L}_{\mathrm{F}}$ and let
$\rho\in\mathsf{St}{(\mathrm{L}_{\mathrm{F}})}$ be the generic state of the
system. As usual the source of fermionic information is supposed to emit $N$
independent copies of the state $\rho$. A fermionic compression scheme
$(\mathscr{E}_{N},\mathscr{D}_{N})$ consists of the following two steps:
1. Encoding: Alice encodes the system $\mathrm{L}_{\mathrm{F}}^{\boxtimes N}$ via
a channel $\mathscr{E}_{N}:\mathsf{St}(\mathrm{L}_{\mathrm{F}}^{\boxtimes
N})\rightarrow\mathsf{St}(\mathrm{M}_{\mathrm{F}})$, where the target system
is generally a system of $M$ LFMs. The map $\mathscr{E}_{N}$ produces a
fermionic state $\mathscr{E}_{N}(\rho^{\boxtimes N})$ with support
$\mathsf{Supp}(\mathscr{E}_{N}(\rho^{\boxtimes N}))$ on a Fock space
$\mathcal{F}_{M}$ of dimension $d_{\mathcal{F}_{M}}(N)$ smaller than that
of the original state $\rho^{\boxtimes N}$. The compression rate is defined as
the quantity
$\displaystyle R=\log_{2}d_{\mathcal{F}_{M}}(N)/N.$
Alice sends the system $\mathrm{M}_{\mathrm{F}}$ to Bob using $\lceil
NR\rceil$ noiseless fermionic channels.
2. Decoding: Finally, Bob sends the system $\mathrm{M}_{\mathrm{F}}$ through a
decompression channel
$\mathscr{D}_{N}:\mathsf{St}(\mathrm{M}_{\mathrm{F}})\rightarrow\mathsf{St}(\mathrm{L}_{\mathrm{F}}^{\boxtimes
N})$.
Overall, the scheme $(\mathscr{E}_{N},\mathscr{D}_{N})$ transforms the $N$
copies of $\mathrm{L}_{\mathrm{F}}$ through the compression map
$\mathscr{C}_{N}:=\mathscr{D}_{N}\mathscr{E}_{N}$. The latter can be more or
less “good” (in a sense that will be precisely defined) in preserving the
information which is contained in the system, depending on $\rho$ itself. The
goal now is to define the notion of reliable compression scheme once we are
provided with an information source $\rho$.
### III.1 Reliable compression scheme
The aim of a compression scheme, besides reducing the amount of information
carriers used, is to preserve all the information that is possibly encoded in
a given state $\rho$. What we actually mean is not only to preserve the input
state and the correlations of our system with an arbitrary ancilla, but also
to preserve this information for any procedure by which the input system and
its correlations may have been prepared. In other words, even the agent that
prepared the system, along with possible ancillary systems, must have a small
success probability of detecting the effects of compression on the original
preparation. This amounts to requiring that the compression channel
$\mathscr{C}_{N}$ be approximately equal to the identity channel upon-input of
$\rho$, and more precisely that in the limit $N\to\infty$ the two channels
coincide upon-input of $\rho$.
In Section II.1 we introduced the notion of $\varepsilon$-close channels upon-
input of $\rho$. This notion can now be used to quantify the error, say
$\varepsilon$, introduced by the map $\mathscr{C}_{N}$ in a compression
protocol given the source $\rho$. According to Eq. (7) we have indeed the
following definition of a reliable compression scheme:
###### Definition III.1 (Reliable compression scheme).
Given a state $\rho\in\mathsf{St}(\mathrm{L}_{\mathrm{F}})$, a compression
scheme $(\mathscr{E}_{N},\mathscr{D}_{N})$ is $\varepsilon$-reliable if
$\sum_{i}\left\lVert(\mathscr{C}_{N}\boxtimes\mathscr{I})(\Sigma_{i})-\Sigma_{i}\right\rVert_{1}<\varepsilon$
for every $\\{\Sigma_{i}\\}$ such that $\sum_{i}\Sigma_{i}\in
D_{\rho^{\boxtimes N}}$, where
$\mathscr{C}_{N}:=\mathscr{D}_{N}\mathscr{E}_{N}$.
It is clear from the definition that in order to check the reliability of a
fermionic compression map one should test it on states of an arbitrarily large
system, since the dilation set $D_{\rho^{\boxtimes N}}$ includes dilations on
any possible ancillary system. It is then necessary to find a simpler
criterion to characterize the reliability of a compression scheme. Let us
start with a preliminary definition.
###### Definition III.2.
Let $\rho\in\mathsf{St}(\mathrm{L}_{\mathrm{F}})$. We define its _square root_
$\rho^{\frac{1}{2}}$ as follows
$\displaystyle\rho^{\frac{1}{2}}\coloneqq J^{-1}[J(\rho)^{\frac{1}{2}}].$ (8)
One can easily prove that the square root of a fermionic state is well
defined, i.e. it does not depend on the particular Jordan-Wigner
representation $J$ chosen (see Appendix C). In the following we show that a
useful criterion for reliability can be expressed via _entanglement fidelity_:
###### Definition III.3 (Entanglement fidelity).
Let $\rho\in\mathsf{St}_{1}(\mathrm{L}_{\mathrm{F}})$,
$\mathscr{C}\in\mathsf{Tr}_{1}(\mathrm{L}_{\mathrm{F}}\rightarrow\mathrm{M}_{\mathrm{F}})$
and
$\Phi_{\rho}\in\mathsf{PurSt}(\mathrm{L}_{\mathrm{F}}\mathrm{K}_{\mathrm{F}})$
be any purification of $\rho$. The entanglement fidelity is defined as
$F(\rho,\mathscr{C}):=F(\Phi_{\rho},(\mathscr{C}\boxtimes\mathscr{I})(\Phi_{\rho}))^{2}$,
where $F(\rho,\sigma):=\operatorname{Tr}[J(\rho^{1/2}\sigma\rho^{1/2})^{1/2}]$
denotes the Uhlmann fidelity between states
$\rho,\sigma\in\mathsf{St}_{1}(\mathrm{L}_{\mathrm{F}})$.
We notice that the Uhlmann fidelity of fermionic states is well defined,
namely it is independent of the ordering of the fermionic modes (see Appendix
C). As a consequence, the entanglement fidelity, being given in terms of the
Uhlmann one, is also well defined.
Since by definition the Uhlmann fidelity of fermionic states coincides with
that of their Jordan-Wigner representatives, and the same holds for their
trace-norm distance, given $\rho,\sigma\in\mathsf{St}(\mathrm{L}_{\mathrm{F}})$
the Fuchs-van de Graaf inequalities [3] hold as a trivial consequence of
their quantum counterparts:
$1-F(\rho,\sigma)\leq\frac{1}{2}\lVert\rho-\sigma\rVert_{1}\leq\sqrt{1-F(\rho,\sigma)^{2}}.$
(9)
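These inequalities can be probed numerically. The sketch below (ours, assuming numpy and scipy) generates random parity-superselected states by pinching away the even-odd blocks of a random positive matrix, an operation that preserves positivity, and then checks Eq. (9):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)

def random_fermionic_state(L):
    """Random density matrix on L modes commuting with parity (JW picture)."""
    d = 2 ** L
    parity = np.array([bin(k).count("1") % 2 for k in range(d)])
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    G = A @ A.conj().T
    G[parity[:, None] != parity[None, :]] = 0    # parity pinching keeps G >= 0
    return G / np.trace(G).real

rho, sig = random_fermionic_state(2), random_fermionic_state(2)
sq = sqrtm(rho)
F = np.trace(sqrtm(sq @ sig @ sq)).real                # Uhlmann fidelity F(rho, sigma)
T = 0.5 * np.abs(np.linalg.eigvalsh(rho - sig)).sum()  # (1/2)||rho - sigma||_1
print(1 - F <= T <= np.sqrt(1 - F ** 2) + 1e-9)        # Eq. (9): True
```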
The following proposition summarizes the main properties of fermionic
entanglement fidelity that will be used in the remainder.
###### Proposition III.1.
Let $\rho\in\mathsf{St}_{1}(\mathrm{L}_{\mathrm{F}})$,
$\mathscr{C}\in\mathsf{Tr}_{1}(\mathrm{L}_{\mathrm{F}}\rightarrow\mathrm{L}_{\mathrm{F}})$
and
$\Phi_{\rho}\in\mathsf{PurSt}(\mathrm{L}_{\mathrm{F}}\mathrm{K}_{\mathrm{F}})$
be any purification of $\rho$. Entanglement fidelity has the following
properties.
1. $F(\rho,\mathscr{C})$ is independent of the particular choice of the
purification $\Phi_{\rho}$.
2. If the ordering is chosen in such a way that the $L$ modes all precede the
purifying ones, the following identity holds:
$F(\rho,\mathscr{C})=\sum_{i}|\operatorname{Tr}[J(\rho)C_{i}]|^{2}$ (10)
for an arbitrary Kraus decomposition $J(\mathscr{C})=\sum_{i}C_{i}\cdot
C_{i}^{\dagger}$ of the Jordan-Wigner representative $J(\mathscr{C})$. From
the second inequality in (9) it follows that, if $F(\rho,\mathscr{C})\geq
1-\delta$, one has
$\lVert(\mathscr{C}\boxtimes\mathscr{I}_{\mathrm{K}_{\mathrm{F}}})(\Phi_{\rho})-\Phi_{\rho}\rVert_{1}\leq
2\sqrt{\delta}$ (11)
for every purification $\Phi_{\rho}$ of $\rho$.
###### Proof.
Let
$\Phi_{\rho}\in\mathsf{PurSt}(\mathrm{L}_{\mathrm{F}}\mathrm{M}_{\mathrm{F}})$
be a purification of $\rho$. If we choose the trivial ordering for the LFMs,
the Kraus of $J(\mathscr{C}\boxtimes\mathscr{I})$ are of the form
$C_{i}\otimes I$. Moreover, since the minimal purification
$|F\rangle\\!\rangle\langle\\!\langle F|$ (introduced in the proof of
proposition II.2) and $J(\Phi_{\rho})$ both purify the same quantum state,
they are connected through an isometry $V$. Recalling that for quantum states
$\left|{\psi}\right\rangle\left\langle{\psi}\right|$ and $\sigma$ the quantum
Uhlmann fidelity is given by
$F(\left|{\psi}\right\rangle\left\langle{\psi}\right|,\sigma)=\left\langle{\psi}\right|\sigma\left|{\psi}\right\rangle^{1/2}$,
we find
$\displaystyle F(\rho,\mathscr{C})=\sum_{i}\operatorname{Tr}(|FV^{T}\rangle\\!\rangle\langle\\!\langle FV^{T}|C_{i}FV^{T}\rangle\\!\rangle\langle\\!\langle C_{i}FV^{T}|)=\sum_{i}|\operatorname{Tr}(|C_{i}FV^{T}\rangle\\!\rangle\langle\\!\langle FV^{T}|)|^{2}=\sum_{i}|\operatorname{Tr}[J(\rho)C_{i}]|^{2},$
namely, the claimed formula in (10). Since $\Phi_{\rho}$ is arbitrary, this
also implies independence from the choice of the purification. ∎
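Property 2 can be checked directly. In the sketch below (ours, assuming numpy and scipy) the channel is an amplitude-damping map, chosen by us as a simple parity-respecting fermionic transformation with one even and one odd Kraus operator, and formula (10) is compared with the direct evaluation $\langle\\!\langle F|J(\mathscr{C}\boxtimes\mathscr{I})(|F\rangle\\!\rangle\langle\\!\langle F|)|F\rangle\\!\rangle$ on the minimal purification, $F=J(\rho)^{1/2}$:

```python
import numpy as np
from scipy.linalg import sqrtm

g = 0.3                                                # damping parameter (arbitrary)
K = [np.diag([1.0, np.sqrt(1 - g)]),                   # even-parity Kraus operator
     np.sqrt(g) * np.array([[0.0, 1.0], [0.0, 0.0]])]  # odd Kraus: sqrt(g) J(phi)
Jrho = np.diag([0.6, 0.4])

F_formula = sum(abs(np.trace(Jrho @ k)) ** 2 for k in K)   # Eq. (10)

phi = sqrtm(Jrho).reshape(-1)                          # |F>>
Phi = np.outer(phi, phi.conj())
out = sum(np.kron(k, np.eye(2)) @ Phi @ np.kron(k, np.eye(2)).conj().T for k in K)
F_direct = (phi.conj() @ out @ phi).real               # <<F|(C (x) I)(Phi)|F>>
print(np.isclose(F_formula, F_direct))                 # True
```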
In quantum theory a compression scheme $(\mathscr{E}_{N},\mathscr{D}_{N})$ is
reliable when the entanglement fidelity $F(\rho^{\boxtimes
N},\mathscr{C}_{N})$, with $\mathscr{C}_{N}:=\mathscr{D}_{N}\mathscr{E}_{N}$,
approaches $1$ as $N\rightarrow\infty$. Here we prove an analogous criterion
for the fermionic case: the following proposition and the subsequent corollary
provide a simple reliability criterion for fermionic compression.
###### Proposition III.2.
Given a state $\rho\in\mathsf{St}_{1}(\mathrm{L}_{\mathrm{F}})$ and a channel
$\mathscr{C}\in\mathsf{Tr}_{1}(\mathrm{L}_{\mathrm{F}}\to\mathrm{L}_{\mathrm{F}})$,
$\forall\varepsilon>0$ there exists $\delta>0$ such that if
$F(\rho,\mathscr{C})>1-\delta$ then
$\sum_{i}\left\lVert[(\mathscr{C}-\mathscr{I})\boxtimes\mathscr{I}](\Sigma_{i})\right\rVert_{1}\leq\varepsilon$
for every $\\{\Sigma_{i}\\}$ such that $\sum_{i}\Sigma_{i}\in D_{\rho}$.
###### Proof.
Firstly we observe that, given a set of states $\\{\Sigma_{i}\\}$ such that
$\sum_{i}\Sigma_{i}\in D_{\rho}$ and considering any purification
$\Psi_{\rho}\in\mathsf{PurSt}(\mathrm{L}_{\mathrm{F}}\mathrm{K}_{\mathrm{F}}\mathrm{N}_{\mathrm{F}})$
of $\Sigma\coloneqq\sum_{i}\Sigma_{i}$, one can find a POVM
$\\{b_{i}\\}\subseteq\mathsf{Eff}(\mathrm{N}_{\mathrm{F}})$ such that
$\Sigma_{i}=\operatorname{Tr}^{f}_{\mathrm{N}_{\mathrm{F}}}[(I_{\mathrm{L}_{\mathrm{F}}\mathrm{K}_{\mathrm{F}}}\boxtimes
b_{i})\Psi_{\rho}]$. Recall the equivalent definition of the $1$-norm for
$X\in\mathsf{St}_{\mathbb{R}}(\mathrm{L}_{\mathrm{F}})$,
$\displaystyle\|X\|_{1}=\max_{b\in\mathsf{Eff}(\mathrm{L}_{\mathrm{F}})}\operatorname{Tr}[Xb],$
so that for every $i$ there is an effect
$a_{i}\in\mathsf{Eff}(\mathrm{L}_{\mathrm{F}}\mathrm{K}_{\mathrm{F}})$
achieving the maximum, namely
$\displaystyle\lVert\operatorname{Tr}^{f}_{\mathrm{N}_{\mathrm{F}}}[\\{(\mathscr{C}-\mathscr{I})\boxtimes\mathscr{I}_{\mathrm{N}_{\mathrm{F}}}\\}(\Psi_{\rho})(I_{\mathrm{L}_{\mathrm{F}}\mathrm{K}_{\mathrm{F}}}\boxtimes b_{i})]\rVert_{1}=\operatorname{Tr}[\\{(\mathscr{C}-\mathscr{I})\boxtimes\mathscr{I}_{\mathrm{N}_{\mathrm{F}}}\\}(\Psi_{\rho})(a_{i}\boxtimes b_{i})].$
As a consequence, we have
$\displaystyle\sum_{i}\lVert[(\mathscr{C}-\mathscr{I})\boxtimes\mathscr{I}](\Sigma_{i})\rVert_{1}=\operatorname{Tr}[\\{(\mathscr{C}-\mathscr{I})\boxtimes\mathscr{I}_{\mathrm{N}_{\mathrm{F}}}\\}(\Psi_{\rho})A]\leq\lVert\\{(\mathscr{C}-\mathscr{I})\boxtimes\mathscr{I}_{\mathrm{N}_{\mathrm{F}}}\\}(\Psi_{\rho})\rVert_{1},$
where $A\coloneqq\sum_{i}(a_{i}\boxtimes b_{i})$ satisfies $0\leq A\leq I$,
since $\sum_{i}b_{i}=I$ and $a_{i}\leq I$ for every $i$. Now, by the Fuchs-van
de Graaf inequalities, if $F(\rho,\mathscr{C})\geq 1-\delta$, then
$\displaystyle\lVert\\{(\mathscr{C}-\mathscr{I})\boxtimes\mathscr{I}_{\mathrm{N}_{\mathrm{F}}}\\}(\Psi_{\rho})\rVert_{1}\leq
2\sqrt{1-F(\rho,\mathscr{C})}\leq 2\sqrt{\delta}.$
The thesis is then obtained just taking $\delta\leq\varepsilon^{2}/4$. ∎
###### Corollary III.1 (Reliable compression scheme).
Given a state $\rho\in\mathsf{St}(\mathrm{L}_{\mathrm{F}})$, a compression
scheme $(\mathscr{E}_{N},\mathscr{D}_{N})$ is $\varepsilon$-reliable if one has
$F(\rho^{\boxtimes N},\mathscr{C}_{N})>1-\delta$, where
$\delta=\varepsilon^{2}/4$ and
$\mathscr{C}_{N}:=\mathscr{D}_{N}\mathscr{E}_{N}$.
### III.2 Fermionic typical subspace
At the basis of the quantum source coding theorem lies the notion of typical
subspace, which in turn generalizes to the quantum case the classical notions
of typical sequences and typical sets. We now introduce the
notion of typical subspace also for fermionic systems and use it to show that,
like in quantum theory, the von Neumann entropy of a fermionic state is the
rate that separates the region of rates for which a reliable compression
scheme exists from that of unachievable rates. In order to do this we have to
verify that the compression map given in terms of the projection on the
typical subspace represents an admissible fermionic map.
We start by defining the notion of the logarithm of a fermionic state.
###### Definition III.4.
Let $\rho$ be a fermionic state. We define its logarithm as
$\displaystyle\log_{2}\rho=J^{-1}[\log_{2}J(\rho)].$ (12)
Then we define the von Neumann entropy of a fermionic state via its Jordan-
Wigner representative.
###### Definition III.5.
Given a fermionic state $\rho$, its von-Neumann entropy is defined as
$\displaystyle
S_{f}(\rho):=S(J(\rho))=-\operatorname{Tr}(J(\rho)\log_{2}J(\rho)).$ (13)
These definitions are independent of the particular Jordan-Wigner transform
corresponding to a given ordering of the modes (see Appendix C).
When we use the orthonormal decomposition
$J(\rho)=\sum_{i}p_{i}|{x_{i}}\rangle\langle{x_{i}}|$, this reduces to the
Shannon entropy of the classical random variable $X$ that takes values in
$\mathsf{Rng}(X)=\\{x_{1},x_{2},\ldots,x_{n}\\}$, called the range of $X$, with
probability distribution $(p_{1},p_{2},\ldots,p_{n})$:
$S_{f}(\rho)=H(X)=-\sum_{i}p_{i}\log_{2}p_{i}$. We recall that $N$ i.i.d.
copies of the state $\rho$ are represented as $J(\rho^{\boxtimes
N})=J(\rho)^{\otimes
N}=\sum_{x_{\mathrm{\mathbf{i}}}\in\mathsf{Rng}(X)^{N}}p_{\mathrm{\mathbf{i}}}|{x_{\mathrm{\mathbf{i}}}}\rangle\langle{x_{\mathrm{\mathbf{i}}}}|$,
with $p_{\mathrm{\mathbf{i}}}=\prod_{k=1}^{N}p_{i_{k}}$. With
$\mathsf{T}_{N,\varepsilon}(X)$ we will denote the typical set of the
random variable $X$.
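For instance (a numerical sketch of ours, with an arbitrary single-mode state), for $J(\rho)=\mathrm{diag}(0.75,0.25)$ the fermionic von Neumann entropy is just the Shannon entropy of the occupation probabilities:

```python
import numpy as np

Jrho = np.diag([0.75, 0.25])           # J(rho) for one mode
p = np.linalg.eigvalsh(Jrho)           # spectrum of J(rho)
S_f = -(p * np.log2(p)).sum()          # S_f(rho) = S(J(rho)) = H(X)
print(f"S_f(rho) = {S_f:.4f} bits")    # ~0.8113
```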
###### Definition III.6 (Typical subspace).
Let $\rho\in\mathsf{St}(\mathrm{L}_{\mathrm{F}})$ with orthonormal
decomposition
$J(\rho)=\sum_{x_{i}\in\mathsf{Rng}(X)}p_{i}|{x_{i}}\rangle\langle{x_{i}}|$.
The $\varepsilon$-typical subspace $\mathsf{F}_{N,\varepsilon}(\rho)$ of
$\mathcal{H}^{\otimes N}_{L}$ is defined as
$\mathsf{F}_{N,\varepsilon}(\rho):=\mathsf{Span}\\{\left|{x_{\mathrm{\mathbf{i}}}}\right\rangle\
|\ x_{\mathrm{\mathbf{i}}}\in\mathsf{T}_{N,\varepsilon}(X)\\},$ (14)
where
$\left|{x_{\mathrm{\mathbf{i}}}}\right\rangle:=\left|{x_{i_{1}}}\right\rangle\left|{x_{i_{2}}}\right\rangle\ldots\left|{x_{i_{N}}}\right\rangle$,
and $X$ is the random variable with $\mathsf{Rng}(X)=\\{x_{i}\\}$ and
$\mathbb{P}_{X}(x_{i}):=p_{i}$.
It is an immediate consequence of the definition of typical subspace that
$\mathsf{F}_{N,\varepsilon}(\rho):=\mathsf{Span}\left\\{\left|{x_{\mathrm{\mathbf{i}}}}\right\rangle\
|\
\left|\frac{1}{N}\log_{2}\frac{1}{\mathbb{P}_{X^{N}}(x_{\mathrm{\mathbf{i}}})}-S_{f}(\rho)\right|\leq\varepsilon\right\\}.$
We will denote the projector on the typical subspace as
$\displaystyle P_{N,\varepsilon}(\rho):=$
$\displaystyle\sum_{x_{\mathrm{\mathbf{i}}}\in\mathsf{T}_{N,\varepsilon}(X)}|{x_{\mathrm{\mathbf{i}}}}\rangle\langle{x_{\mathrm{\mathbf{i}}}}|$
(15) $\displaystyle=$
$\displaystyle\sum_{x_{\mathrm{\mathbf{i}}}\in\mathsf{T}_{N,\varepsilon}(X)}|{x_{i_{1}}}\rangle\langle{x_{i_{1}}}|\otimes\cdots\otimes|{x_{i_{N}}}\rangle\langle{x_{i_{N}}}|,$
and we have that
$\dim(\mathsf{F}_{N,\varepsilon}(\rho))=\operatorname{Tr}[P_{N,\varepsilon}(\rho)]=|\mathsf{T}_{N,\varepsilon}(X)|$.
Notice that some superpositions of vectors in the typical subspace might not
be legitimate fermionic pure states, as their parities might differ. However,
up to now we have only defined the typical subspace as a mathematical tool,
and it does not need a consistent physical interpretation. We will come back
to this point later (see Lemma III.1), where we discuss the physical meaning
of the projection $P_{N,\varepsilon}(\rho)$. Now, it is immediate to see that
$\displaystyle\operatorname{Tr}[P_{N,\varepsilon}(\rho)J(\rho)^{\otimes N}]=\sum_{x_{\mathrm{\mathbf{i}}}\in\mathsf{T}_{N,\varepsilon}(X)}\mathbb{P}_{X^{N}}(x_{\mathrm{\mathbf{i}}})=\mathbb{P}_{X^{N}}[x_{\mathrm{\mathbf{i}}}\in\mathsf{T}_{N,\varepsilon}(X)].$
(16)
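The following sketch (ours, assuming numpy; the state and the parameters are illustrative) builds $\mathsf{T}_{N,\varepsilon}(X)$ for a single-mode source with $J(\rho)=\mathrm{diag}(0.75,0.25)$ and evaluates both sides of Eq. (16), together with the dimension of the typical subspace. Note that in this example each $\left|{x_{\mathrm{\mathbf{i}}}}\right\rangle$ is an occupation-number state, hence of definite parity, in line with Lemma III.1 below:

```python
import itertools
import numpy as np

p = np.array([0.75, 0.25])                  # spectrum of J(rho), one mode
S = -(p * np.log2(p)).sum()                 # S_f(rho)
N, eps = 12, 0.15

strings = np.array(list(itertools.product([0, 1], repeat=N)))
logprob = np.log2(p)[strings].sum(axis=1)   # log2 P_{X^N}(x_i)
typical = np.abs(-logprob / N - S) <= eps   # membership in T_{N,eps}(X)

print("Tr[P J(rho)^N] =", np.exp2(logprob[typical]).sum())     # Eq. (16)
print("dim F_{N,eps}  =", typical.sum(),
      "<= 2^{N(S+eps)} =", 2 ** (N * (S + eps)))               # cf. Eq. (18)
```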
As in quantum theory, also the fermionic typical subspace has the following
features:
###### Proposition III.3 (Typical subspace).
Let $\rho\in\mathsf{St}(\mathrm{L}_{\mathrm{F}})$. The following statements
hold:
1. For every $\varepsilon>0$ and $\delta>0$ there exists $N_{0}$ such that for
every $N\geq N_{0}$
$\operatorname{Tr}[P_{N,\varepsilon}(\rho)J(\rho)^{\otimes N}]\geq 1-\delta.$
(17)
2. For every $\varepsilon>0$ and $\delta>0$ there exists $N_{0}$ such that for
every $N\geq N_{0}$ the dimension of the typical subspace
$\mathsf{F}_{N,\varepsilon}(\rho)$ is bounded as
$(1-\delta)2^{N(S_{f}(\rho)-\varepsilon)}\leq\dim(\mathsf{F}_{N,\varepsilon}(\rho))\leq
2^{N(S_{f}(\rho)+\varepsilon)}$ (18)
3. For given $N$, let $S_{N}$ denote an arbitrary orthogonal projection on a
subspace of $\mathcal{F}_{L}^{\otimes N}$ with dimension
$\operatorname{Tr}(S_{N})<2^{NR}$, with $R<S_{f}(\rho)$ fixed. Then for every
$\delta>0$ there exists $N_{0}$ such that for every $N\geq N_{0}$ and every
choice of $S_{N}$
$\operatorname{Tr}[S_{N}J(\rho)^{\otimes N}]\leq\delta.$ (19)
The proof of the above properties is exactly the same as in quantum theory
(see for instance [28]). However, in order to exploit the same scheme proposed
by Schumacher for the quantum case, one has to check that the encoding and
decoding channels given in the constructive part of the proof are admissible
fermionic maps. In particular, the encoding channel makes use of the projector
$P_{N,\varepsilon}(\rho)$ as a Kraus operator; therefore, we have to show that
it is a legitimate Kraus operator for a fermionic map. This is proved in the
following lemma, based on the characterization of fermionic transformations of
Proposition II.1.
###### Lemma III.1.
Let $\rho$ be a fermionic state. The projector $P_{N,\varepsilon}(\rho)$ of
Eq. (15) is the Kraus operator of an admissible fermionic transformation.
###### Proof.
By proposition II.1 the projector on the typical subspace
$P_{N,\varepsilon}(\rho)$ is a legitimate fermionic Kraus if it is the sum of
products of either an even or an odd number of fermionic fields. Let us
consider the single projection
$\left|{x_{\mathrm{\mathbf{i}}}}\right\rangle\left\langle{x_{\mathrm{\mathbf{i}}}}\right|$.
This is given by the tensor product
$\left|{x_{i_{1}}}\right\rangle\left\langle{x_{i_{1}}}\right|\otimes\dots\otimes\left|{x_{i_{N}}}\right\rangle\left\langle{x_{i_{N}}}\right|$,
where each $\left|{x_{i_{k}}}\right\rangle$ is an eigenvector of the density
matrix $J(\rho)$ representing the fermionic state $\rho$, and, as such, it has
a definite parity. Thus, each factor in the above expression of
$\left|{x_{\mathrm{\mathbf{i}}}}\right\rangle\left\langle{x_{\mathrm{\mathbf{i}}}}\right|$
is the Jordan-Wigner representative of an even polynomial, and also the
projection
$\left|{x_{\mathrm{\mathbf{i}}}}\right\rangle\left\langle{x_{\mathrm{\mathbf{i}}}}\right|$
is thus the representative of an even polynomial for every
$\mathrm{\mathbf{i}}$, which is given, in detail, by the product
$J^{-1}(\left|{x_{\mathrm{\mathbf{i}}}}\right\rangle\left\langle{x_{\mathrm{\mathbf{i}}}}\right|)=\prod_{j=1}^{N}J^{-1}(\left|{x_{i_{j}}}\right\rangle\left\langle{x_{i_{j}}}\right|)$.
Now, by Proposition II.1, $P_{N,\varepsilon}(\rho)$ is the Jordan-Wigner
representative of a legitimate fermionic Kraus operator. ∎
### III.3 Fermionic source coding theorem
We can now prove the source coding theorem for fermionic information theory.
###### Theorem III.1 (Fermionic source coding).
Let $\rho\in\mathsf{St}_{1}(\mathrm{L}_{\mathrm{F}})$ be a state of system
$\mathrm{L}_{\mathrm{F}}$. Then for every $\delta>0$ and $R>S_{f}(\rho)$ there
exists $N_{0}$ such that for every $N\geq N_{0}$ one has a compression scheme
$\\{\mathscr{E}_{N},\mathscr{D}_{N}\\}$ with rate $R$, and $F(\rho^{\boxtimes
N},\mathscr{D}_{N}\mathscr{E}_{N})\geq 1-\delta$. Conversely, for every
$R<S_{f}(\rho)$ and every $\delta>0$ there exists $N_{0}$ such that for every
$N\geq N_{0}$ and every compression scheme
$\\{\mathscr{E}_{N},\mathscr{D}_{N}\\}$ with rate $R$ one has
$F(\rho^{\boxtimes N},\mathscr{D}_{N}\mathscr{E}_{N})\leq\delta$.
The proof follows exactly the lines of the original proof for standard quantum
compression, which can be found e.g. in Ref. [28]. As the direct proof is
constructive, we only need to take care of the legitimacy of the compression
protocol as a fermionic map. To this end, we recapitulate the construction
here.
1. Encoding: Perform the measurement
$\\{P_{N,\varepsilon}(\rho),I-P_{N,\varepsilon}(\rho)\\}$. If the outcome
corresponding to $P_{N,\varepsilon}(\rho)$ occurs, then leave the state
unchanged. Otherwise, if the outcome corresponding to
$I-P_{N,\varepsilon}(\rho)$ occurs, replace the state by a standard state
$|{S}\rangle\langle{S}|$, with
$\left|{S}\right\rangle\in\mathsf{F}_{N,\varepsilon}(\rho)$. Such a map is
described by the channel $\mathscr{M}_{N}:\mathrm{L}_{\mathrm{F}}^{\boxtimes
N}\to\mathrm{L}^{\boxtimes N}_{\mathrm{F}}$ given by
$\displaystyle J(\mathscr{M}_{N})(\sigma)\coloneqq P_{N,\varepsilon}(\rho)\sigma P_{N,\varepsilon}(\rho)+\operatorname{Tr}[(I-P_{N,\varepsilon}(\rho))\sigma]\left|{S}\right\rangle\left\langle{S}\right|.$
Notice that this is a well defined transformation since, by Lemma III.1, the
projector on the typical subspace is a legitimate fermionic Kraus operator.
The second term is a measure-and-prepare channel, which is also a legitimate
fermionic transformation. Then consider a system $\mathrm{M}_{\mathrm{F}}$
made of $M:=\lceil NR\rceil$ LFMs and the (partial) isometric embedding
$V:\mathsf{F}_{N,\varepsilon}(\rho)\rightarrow\mathcal{H}_{M}$
such that $V^{\dagger}V=I_{\mathsf{F}_{N,\varepsilon}(\rho)}$. Since the first
stage of the protocol never produces states in the complement of
$\mathsf{F}_{N,\varepsilon}(\rho)$, we can complete the map $V\cdot
V^{\dagger}$ to a fermionic channel $\mathscr{V}_{N}$. The encoding is then
given by the composite map $\mathscr{E}_{N}:=\mathscr{V}_{N}\mathscr{M}_{N}$.
2. Decoding: For the decoding channel, we simply choose the co-isometry
$V^{\dagger}$, which inverts $V$ on $\mathsf{F}_{N,\varepsilon}(\rho)$.
As for the converse statement, the proof for quantum compression is based on
item 3 of Proposition III.3, which we proved for fermionic theory as well.
Thus, the quantum proof applies to the fermionic case.
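To close the loop, here is a minimal end-to-end sketch of the above protocol for a single-mode source (ours, assuming numpy; the numbers are illustrative). Since the decoding inverts $V$ exactly on $\mathsf{F}_{N,\varepsilon}(\rho)$, the overall map acts as $\sigma\mapsto P\sigma P+\operatorname{Tr}[(I-P)\sigma]\left|{S}\right\rangle\left\langle{S}\right|$ with $P=P_{N,\varepsilon}(\rho)$, and for a diagonal $J(\rho)^{\otimes N}$ the entanglement fidelity of Eq. (10) reduces to $|\operatorname{Tr}[J(\rho)^{\otimes N}P]|^{2}$, the prepare-part Kraus operators contributing nothing:

```python
import itertools
import numpy as np

p = np.array([0.75, 0.25])                  # J(rho) = diag(p), S_f(rho) ~ 0.811
S = -(p * np.log2(p)).sum()
N, eps = 14, 0.15
R = S + eps                                 # any rate R > S_f(rho)

strings = np.array(list(itertools.product([0, 1], repeat=N)))
prob = np.exp2(np.log2(p)[strings].sum(axis=1))   # diagonal of J(rho)^{(x)N}
typical = np.abs(-np.log2(prob) / N - S) <= eps   # T_{N,eps}(X)

M = int(np.ceil(N * R))                     # LFMs of the target system M_F
assert typical.sum() <= 2 ** M              # F_{N,eps} embeds into F_M via V

pi = prob[typical].sum()                    # Tr[P_{N,eps} J(rho)^{(x)N}]
F_ent = pi ** 2                             # entanglement fidelity, Eq. (10)
print(f"rate M/N = {M / N:.3f},  F(rho^N, D_N E_N) = {F_ent:.4f}")
```

As $N$ grows at fixed $\varepsilon$, $\operatorname{Tr}[P_{N,\varepsilon}(\rho)J(\rho)^{\otimes N}]\to 1$ by property 1 of Proposition III.3, so the printed fidelity approaches $1$ at any rate above $S_{f}(\rho)$, as the theorem states.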
## IV Discussion
We have studied information compression for fermionic systems, proving the
fermionic counterpart of the quantum source coding theorem. In spite of the
parity superselection rule and the non-locality of the Jordan-Wigner
representation of fermionic operators, the von Neumann entropy of fermionic
states can still be interpreted as their information content, providing the
minimal rate for which reliable compression is achievable.
The novelty of this paper is the analysis of compression in the absence of
local tomography. Here the properties of a map, in the case at hand those of
the compression map, cannot be accessed locally. This poses stronger
constraints on the set of reliable compression maps.
Despite the significant differences between fermionic and quantum information
[16], the source coding theorem holds also for fermions. We can now
wonder what minimal features of a theory lie behind the coding
theorem. As we learn from classical, quantum, and now also fermionic
information theory, the task of information compression is intimately related
to the notion of entropy. However, it is known that information theories
beyond quantum exhibit inequivalent notions of entropy [29, 30, 31]. This is
the main issue one has to face in order to
introduce the notion of information content in the general case. On the one
hand, one has to provide a definition of information content covering a broad
class of probabilistic theories. On the other hand, one can compare such a notion
with the different notions of entropy, identifying the one that plays the same
role as Shannon entropy in the compression task.
###### Acknowledgements.
A.T. acknowledges financial support from the Elvia and Federico Faggin
Foundation through the Silicon Valley Community Foundation, Grant No.
2020-214365. This work was supported by MIUR Dipartimenti di Eccellenza
2018-2022 project F11I18000680001.
## References
* [1] C. E. Shannon. A mathematical theory of communication. The Bell System Tech. Jour., 27(3):379–423, July 1948.
* [2] Benjamin Schumacher. Quantum coding. Phys. Rev. A, 51:2738–2747, Apr 1995.
* [3] C. A. Fuchs and J. van de Graaf. Cryptographic distinguishability measures for quantum-mechanical states. IEEE Transactions on Information Theory, 45(4):1216–1227, 1999.
* [4] Giulio Chiribella, Giacomo Mauro D’Ariano, and Paolo Perinotti. Probabilistic theories with purification. Phys. Rev. A, 81:062348, Jun 2010.
* [5] Giulio Chiribella, Giacomo Mauro D’Ariano, and Paolo Perinotti. Informational derivation of quantum theory. Phys. Rev. A, 84:012311, Jul 2011.
* [6] Giacomo Mauro D’Ariano, Giulio Chiribella, and Paolo Perinotti. Quantum Theory from First Principles: An Informational Approach. Cambridge University Press, Cambridge, 2017.
* [7] Huzihiro Araki. On a characterization of the state space of quantum mechanics. Communications in Mathematical Physics, 75(1):1–24, 1980.
* [8] Borivoje Dakić and Časlav Brukner. Quantum Theory and Beyond: Is Entanglement Special?, pages 365–392. Cambridge University Press, 2011.
* [9] Lluís Masanes and Markus P Müller. A derivation of quantum theory from physical requirements. New Journal of Physics, 13(6):063001, jun 2011.
* [10] Howard Barnum and Alexander Wilce. Local tomography and the jordan structure of quantum theory. Foundations of Physics, 44(2):192–212, 2014.
* [11] Giacomo Mauro D’Ariano, Paolo Perinotti, and Alessandro Tosini. Information and disturbance in operational probabilistic theories. Quantum, 4:363, November 2020.
* [12] Sergey B. Bravyi and Alexei Yu. Kitaev. Fermionic quantum computation. Annals of Physics, 298(1):210 – 226, 2002.
* [13] Michael M. Wolf. Violation of the entropic area law for fermions. Physical Review Letters, 96(1):010404, January 2006.
* [14] Mari-Carmen Bañuls, Juan Ignacio Cirac, and Michael M. Wolf. Entanglement in fermionic systems. Physical Review A, 76(2):022311, August 2007.
* [15] Nicolai Friis, Antony R. Lee, and David Edward Bruschi. Fermionic-mode entanglement in quantum information. Physical Review A, 87(2):022338, February 2013.
* [16] Giacomo Mauro D’Ariano, Franco Manessi, Paolo Perinotti, and Alessandro Tosini. Fermionic computation is non-local tomographic and violates monogamy of entanglement. EPL (Europhysics Letters), 107(2):20009, July 2014.
* [17] Tiago Debarba, Fernando Iemini, Geza Giedke, and Nicolai Friis. Teleporting quantum information encoded in fermionic modes. Phys. Rev. A, 101:052326, May 2020.
* [18] Ernst Pascual Jordan and Eugene Paul Wigner. Über das paulische äquivalenzverbot. Zeitschrift für Physik, 47(9-10):631–651, September 1928.
* [19] Frank Verstraete and Juan Ignacio Cirac. Mapping local hamiltonians of fermions to local hamiltonians of spins. Journal of Statistical Mechanics: Theory and Experiment, 2005(09):P09012–P09012, September 2005.
* [20] Carlos Pineda, Thomas Barthel, and Jens Eisert. Unitary circuits for strongly correlated fermions. Physical Review A, 81(5):050303, May 2010.
* [21] Norbert Schuch, Frank Verstraete, and Juan Ignacio Cirac. Nonlocal resources in the presence of superselection rules. Physical Review Letters, 92(8):087904, February 2004.
* [22] Alexei Kitaev, Dominic Mayers, and John Preskill. Superselection rules and quantum protocols. Physical Review A, 69(5):052326, May 2004.
* [23] Norbert Schuch, Frank Verstraete, and Juan Ignacio Cirac. Quantum entanglement theory in the presence of superselection rules. Physical Review A, 70(4):042310, October 2004.
* [24] Giacomo Mauro D’Ariano, Franco Manessi, Paolo Perinotti, and Alessandro Tosini. The feynman problem and fermionic entanglement: Fermionic theory versus qubit theory. International Journal of Modern Physics A, 29(17):1430025, June 2014.
* [26] Matteo Lugli, Paolo Perinotti, and Alessandro Tosini. Fermionic state discrimination by local operations and classical communication. Phys. Rev. Lett., 125:110403, Sep 2020.
* [27] Matteo Lugli, Paolo Perinotti, and Alessandro Tosini. Unambiguous discrimination of fermionic states through local operations and classical communication. Phys. Rev. A, 103:012416, Jan 2021.
* [28] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press, USA, 10th edition, 2011.
* [29] Gen Kimura, Koji Nuida, and Hideki Imai. Distinguishability measures and entropies for general probabilistic theories. Rep. on Math. Phys., 66(2):175 – 206, 2010.
* [30] Howard Barnum, Jonathan Barrett, Lisa Orloff Clark, Matthew Leifer, Robert Spekkens, Nicholas Stepanik, Alex Wilce, and Robin Wilke. Entropy and information causality in general probabilistic theories. New Jour. of Phys., 12(3):033024, mar 2010.
* [31] Anthony J Short and Stephanie Wehner. Entropy in general physical theories. New Journal of Physics, 12(3):033023, mar 2010.
## Appendix A Fermionic States
In an $L$-LFM system, fermionic states in
$\mathsf{St}(\mathrm{L}_{\mathrm{F}})$ are represented by density matrices on
the antisymmetric Fock space $\mathcal{F}_{L}$ satisfying the parity
superselection rule. As such they can be written as combinations of products
of field operators. Indeed, a fermionic state $\rho$ can be split in its even
and odd part as follows
$\displaystyle\rho=\sum_{e}E_{e}\left|{\Omega}\right\rangle\left\langle{\Omega}\right|E^{\dagger}_{e}+\sum_{o}O_{o}\left|{\Omega}\right\rangle\left\langle{\Omega}\right|O^{\dagger}_{o},$
where $E_{e}$ and $O_{o}$ are linear combinations of products of even and odd
number of field operators respectively. By recalling that
$\left|{\Omega}\right\rangle\left\langle{\Omega}\right|=\prod_{i=1}^{L}\varphi_{i}\varphi_{i}^{\dagger}$
one can easily realize that $\rho$ can be written as a combination of products
of an even number of field operators. Moreover, by using the CAR, the generic
state can be written as follows
$\rho=\sum_{\underline{s},\underline{t}}\rho_{\underline{s}\underline{t}}\prod_{i=1}^{L}\varphi_{i}^{\dagger
s_{i}}\varphi_{i}\varphi_{i}^{\dagger}\varphi_{i}^{t_{i}},$
where $\underline{s},\underline{t}\in\\{0,1\\}^{L}$ and
$\rho_{\underline{s}\underline{t}}\in\mathbb{C}$.
## Appendix B Technical Lemmas
Here we show two lemmas that are used in the proof of Proposition II.2 in the
main text.
As a preliminary notion we define quantum states with definite parity. Let
$\mathcal{H}_{L}$ be a Hilbert space of $L$ qubits and let
$\mathsf{St}{(\mathcal{H}_{L})}$ be the corresponding set of states. The
vectors of the computational basis
$\displaystyle\left|{s_{1},s_{2},\ldots,s_{L}}\right\rangle,\quad
s_{i}\in\\{0,1\\},\quad i=1,\ldots,L,$ (20)
can be divided into even $p=0$ and odd $p=1$ vectors according to their parity
$p:=\oplus_{i=1}^{L}s_{i}$. Denoting by $\mathcal{H}_{L}^{0}$ and
$\mathcal{H}_{L}^{1}$, with
$\mathcal{H}_{L}=\mathcal{H}_{L}^{0}\oplus\mathcal{H}_{L}^{1}$, the spaces
generated by even and odd vectors respectively, one says that a state
$\rho\in\mathsf{St}{(\mathcal{H}_{L})}$ has definite parity if it is of the
form $\rho=\rho_{0}+\rho_{1}$, with $\rho_{0}$ and $\rho_{1}$ having support
on $\mathcal{H}_{L}^{0}$ and $\mathcal{H}_{L}^{1}$ respectively. As a special
case, a pure state of definite parity $p$ must have support only on
$\mathcal{H}_{L}^{p}$. We can now prove the following lemma.
###### Lemma B.1.
Consider a quantum state $\rho\in\mathsf{St}{(\mathcal{H}_{L})}$ and two
purifications $\Psi,\Phi\in\mathsf{St}{(\mathcal{H}_{L}\mathcal{H}_{M})}$ with
definite parity. Then it is always possible to find a unitary channel
$\mathscr{U}$ that maps states of definite parity into states of definite
parity and such that $(\mathscr{I}\otimes\mathscr{U})(\Psi)=\Phi$.
###### Proof.
Let $\left|{\Psi}\right\rangle\in\mathcal{H}_{LM}^{p}$ and
$\left|{\Phi}\right\rangle\in\mathcal{H}_{LM}^{q}$, for $p,q\in\\{0,1\\}$.
Since the two states are purifications of the same state
$\rho\in\mathsf{St}{(\mathcal{H}_{L})}$, their Schmidt decompositions can always
be taken as follows
$\displaystyle\left|{\Psi}\right\rangle=\sum_{i}\lambda_{i}\left|{i}\right\rangle\left|{\Psi_{i}}\right\rangle,\qquad\left|{\Phi}\right\rangle=\sum_{i}\lambda_{i}\left|{i}\right\rangle\left|{\Phi_{i}}\right\rangle,$
where $\\{\left|{i}\right\rangle\\}\in\mathcal{H}_{L}$ is the same orthonormal
set for the two states, while
$\\{\left|{\Psi_{i}}\right\rangle\\},\\{\left|{\Phi_{i}}\right\rangle\\}\in\mathcal{H}_{M}$
are two generally different orthonormal sets. Notice that, since $\Psi$ and
$\Phi$ are pure states of definite parity, any element in the above orthonormal
sets must be a vector of definite parity. Within the set
$\\{\left|{i}\right\rangle\\}=\\{\\{\left|{i_{0}}\right\rangle\\},\\{\left|{i_{1}}\right\rangle\\}\\}$
one can separate even $\\{\left|{i_{0}}\right\rangle\\}$ and odd
$\\{\left|{i_{1}}\right\rangle\\}$ parity vectors, and then write $\Psi$ and
$\Phi$ (respectively of parity $p$ and $q$) as
$\displaystyle\left|{\Psi}\right\rangle=\sum_{i_{0}}\lambda_{i_{0}}\left|{i_{0}}\right\rangle\left|{\Psi^{p}_{i_{0}}}\right\rangle+\sum_{i_{1}}\lambda_{i_{1}}\left|{i_{1}}\right\rangle\left|{\Psi^{\bar{p}}_{i_{1}}}\right\rangle,$
$\displaystyle\left|{\Phi}\right\rangle=\sum_{i_{0}}\lambda_{i_{0}}\left|{i_{0}}\right\rangle\left|{\Phi^{q}_{i_{0}}}\right\rangle+\sum_{i_{1}}\lambda_{i_{1}}\left|{i_{1}}\right\rangle\left|{\Phi^{\bar{q}}_{i_{1}}}\right\rangle,$
where $\bar{r}=r\oplus 1$, and in the orthonormal sets
$\\{\left|{\Psi^{p}_{i_{0}}}\right\rangle,\left|{\Psi^{\bar{p}}_{i_{1}}}\right\rangle\\}$
and
$\\{\left|{\Phi^{q}_{i_{0}}}\right\rangle,\left|{\Phi^{\bar{q}}_{i_{1}}}\right\rangle\\}$
we separated vectors according to their parity. We can now complete the above
two sets to orthonormal bases in such a way that all vectors in both bases
have definite parity. Let us take for example the bases
$\\{\left|{\Psi^{p}_{i_{0}}}\right\rangle,\left|{\Psi^{\bar{p}}_{i_{1}}}\right\rangle,|\Psi_{k}^{r(k)}\rangle\\}$
and
$\\{\left|{\Phi^{q}_{i_{0}}}\right\rangle,\left|{\Phi^{\bar{q}}_{i_{1}}}\right\rangle,|\Phi_{k}^{t(k)}\rangle\\}$
with $r(k),t(k)\in\\{0,1\\}$. It is now straightforward to see that the
unitary map $\mathscr{U}$ having Kraus operator
$\displaystyle
U=\sum_{i_{0}}\left|{\Psi^{p}_{i_{0}}}\right\rangle\left\langle{\Phi^{q}_{i_{0}}}\right|+\sum_{i_{1}}\left|{\Psi^{\bar{p}}_{i_{1}}}\right\rangle\left\langle{\Phi^{\bar{q}}_{i_{1}}}\right|+\sum_{k}|\Psi^{r(k)}_{k}\rangle\langle\Phi^{t(k)}_{k}|$
is such that $(I\otimes
U)\left|{\Psi}\right\rangle=\left|{\Phi}\right\rangle$. Moreover $\mathscr{U}$
maps states of definite parity into states of definite parity. ∎
###### Lemma B.2.
Let $\mathrm{N}_{\mathrm{F}}:=\mathrm{L}_{\mathrm{F}}\mathrm{K}_{\mathrm{F}}$
and
$\mathscr{C}\in\mathsf{Tr}(\mathrm{N}_{\mathrm{F}}\rightarrow\mathrm{N}_{\mathrm{F}})$
be a single-Kraus transformation with Kraus operator $C$ having Jordan-Wigner
representative $J(C)=U\otimes I_{\mathrm{K}_{\mathrm{F}}}$, with $U$ acting on the first $L$
qubits. Then $\mathscr{C}$ is local on the first $L$ modes.
###### Proof.
Due to Proposition II.1, the Kraus operator of $\mathscr{C}$ can be written as
$C=\sum_{i}C_{i}$, where either each $C_{i}$ is a product of an even number of
field operators, or each $C_{i}$ is a product of an odd number. The set
$\\{C_{i}\\}$ can be taken to be linearly independent without loss of
generality. Let us assume by contradiction that $\mathscr{C}$ is not local on
the first $L$ modes. Therefore, since a set of independent operators
generating the algebra of the $j$-th mode is
$\\{\varphi_{j},\varphi_{j}^{\dagger},\varphi_{j}^{\dagger}\varphi_{j},\varphi_{j}^{\dagger}\varphi_{j}+\varphi_{j}\varphi_{j}^{\dagger}\\}$,
there exists at least one product $C_{i}$ that contains one of the factors
$\varphi_{j}$, $\varphi_{j}^{\dagger}$, or $\varphi_{j}^{\dagger}\varphi_{j}$,
for some mode $j$ of the system $\mathrm{K}_{\mathrm{F}}$. Let $j(i)$ be the
mode with the largest label in the chosen ordering of the $N=L+K$ modes such
that the corresponding factor in the product $C_{i}$ is not the identity (i.e.
not $\varphi_{j}^{\dagger}\varphi_{j}+\varphi_{j}\varphi_{j}^{\dagger}$).
Accordingly, one has that the Jordan-Wigner representative of $C_{i}$ is of
the form
$\displaystyle J(C_{i})=K\otimes
O_{j(i)}\otimes\left(\bigotimes_{l=j(i)+1}^{N}I_{l}\right),$
where $K$ is an operator on qubits $1,\ldots,{j(i)-1}$, and
$O_{j(i)}$ is one of the factors
$\sigma_{j(i)}^{+},\sigma_{j(i)}^{-},\sigma_{j(i)}^{+}\sigma_{j(i)}^{-}$ on
the $j(i)$-th qubit. This contradicts the hypothesis on the form of $J(C)$. ∎
## Appendix C Jordan-Wigner independence
In this appendix we show the consistency of definitions III.2, III.3, III.4
and III.5 given in text. In particular, we check that they are independent of
the particular choice of the order of the fermionic modes, which defines the
Jordan-Wigner transform. We recall that all Jordan-Wigner representations
are unitarily equivalent.
###### Lemma C.1.
Let $\rho$ be a fermionic state. The square root and the logarithm of $\rho$
are well defined.
###### Proof.
Once we have fixed the ordering of the modes, the square root of a fermionic
state $\rho$ is defined via its Jordan-Wigner representative as
$\rho^{\frac{1}{2}}:=J^{-1}[J(\rho)^{\frac{1}{2}}].$
If $\tilde{J}$ is the Jordan-Wigner isomorphism associated to a different
ordering, then consider $X:=\tilde{J}^{-1}[\tilde{J}(\rho)^{\frac{1}{2}}]$. We
can now prove that $X=\rho^{\frac{1}{2}}$, and hence the independence of the
square root from the ordering. Indeed, one has
$\tilde{J}(X)^{2}=\tilde{J}(\rho)=UJ(\rho)U^{\dagger},$
with $U$ unitary. It follows that
$J(\rho)=U^{\dagger}\tilde{J}(X)UU^{\dagger}\tilde{J}(X)U=J(X)^{2}\implies
J(X)=J(\rho)^{\frac{1}{2}}.$
Since $J$ is an isomorphism, by taking $J^{-1}$ we finally get
$X=J^{-1}[J(\rho^{\frac{1}{2}})]=\rho^{\frac{1}{2}}.$
Analogously, the logarithm of a fermionic state is defined through its
Jordan-Wigner representative
$\log_{2}(\rho):=J^{-1}[\log_{2}(J(\rho))].$
Again, let $\tilde{J}$ be the Jordan-Wigner isomorphism corresponding to a
different ordering, and let $X=\tilde{J}^{-1}[\log_{2}(\tilde{J}(\rho))]$.
Firstly we notice that
$\log_{2}[\tilde{J}(\rho)]=\log_{2}[UJ(\rho)U^{\dagger}]=U\log_{2}[J(\rho)]U^{\dagger}$
since $U$ is unitary (we remind that the logarithm of a positive operator is
defined via its spectral decomposition, and a unitary map preserves the
spectrum). Therefore, we find
$J(X)=\log_{2}[J(\rho)]\implies X=J^{-1}[\log_{2}(J(\rho))]=\log_{2}(\rho),$
which concludes the proof. ∎
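The key step in both arguments is that the operator square root and logarithm commute with unitary conjugation. This can be checked numerically; the sketch below (ours, with a random density matrix standing in for a Jordan-Wigner representative and a random unitary standing in for the mode reordering) verifies both identities:

```python
import numpy as np
from scipy.linalg import logm, sqrtm

rng = np.random.default_rng(0)
d = 8  # illustrative dimension

# Random density matrix rho (stand-in for J(rho)) and random unitary U.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

# sqrt and log commute with unitary conjugation, as used in the proof:
assert np.allclose(sqrtm(U @ rho @ U.conj().T), U @ sqrtm(rho) @ U.conj().T)
assert np.allclose(logm(U @ rho @ U.conj().T), U @ logm(rho) @ U.conj().T)
```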
Based on the above lemma we have the following proposition.
###### Proposition C.1.
Let $\rho$ and $\sigma$ be two fermionic states. The Uhlmann fidelity
$F(\rho,\sigma)$ and the von Neumann entropy $S_{f}(\rho)$ of definitions
III.3 and III.5 are well defined.
###### Proof.
These two quantities are given by a trace of two well defined operators, as
proved in the previous lemma. Moreover, since a reordering of the modes
corresponds to a unitary change of basis, the trace is Jordan-Wigner
independent, and so are $F(\rho,\sigma)$ and $S_{f}(\rho)$.
∎
# dm2gal: Mapping Dark Matter to Galaxies with Neural Networks
Noah Kasmanoff
Center for Data Science
New York University
New York, NY 10011
<EMAIL_ADDRESS>
Francisco Villaescusa-Navarro
Department of Astrophysical Sciences
Princeton University
Princeton NJ 08544
<EMAIL_ADDRESS>
Jeremy Tinker
Center for Cosmology and Particle Physics
New York University
New York, NY 10011
<EMAIL_ADDRESS>
Shirley Ho
Center for Computational Astrophysics
Flatiron Institute
New York, NY 10010
<EMAIL_ADDRESS>
###### Abstract
Maps of cosmic structure produced by galaxy surveys are one of the key tools
for answering fundamental questions about the Universe. Accurate theoretical
predictions for these quantities are needed to maximize the scientific return
of these programs. Simulating the Universe by including gravity and
hydrodynamics is one of the most powerful techniques to accomplish this;
unfortunately, these simulations are very expensive computationally.
Alternatively, gravity-only simulations are cheaper, but do not predict the
locations and properties of galaxies in the cosmic web. In this work, we use
convolutional neural networks to paint galaxy stellar masses on top of the
dark matter field generated by gravity-only simulations. The stellar mass of a
galaxy is important for galaxy selection in surveys and is thus a key
quantity that needs to be predicted. Our model outperforms the state-of-the-
art benchmark model and allows the generation of fast and accurate models of
the observed galaxy distribution.††Code available at
https://github.com/nkasmanoff/dm2gal
## 1 Introduction
Galaxies are not randomly distributed in the sky, but follow a particular
pattern known as the cosmic web. Galaxies concentrate in high-density regions
composed of dark matter halos; galaxies and galaxy clusters usually lie within
these dark matter halos, which are connected via long, thin filaments. Those
filaments are surrounded by very large low-density regions with almost no
galaxies in them: cosmic voids. Cosmologists use the cosmic web as a
laboratory to learn about the fundamental laws and constituents of our
Universe. The scientific community has invested billions of dollars in
missions, both from ground and space, to survey the cosmic web as accurately
as possible. In order to maximize the scientific return of those missions,
accurate theoretical predictions are needed to extract the relevant
information from observational data. Since these surveys observe galaxies and
their properties such as stellar masses (the galaxy mass in stars), we need
theoretical predictions for those quantities.
Cosmological hydrodynamic simulations are probably the best way to obtain
these predictions; however, due to their large computational cost (millions of
CPU hours), they only allow predictions of very small volumes. On the other
hand, gravity-only simulations are much cheaper, but do not model galaxies nor
their properties. In this work we try to bridge the gap between these two
approaches using convolutional neural networks. Our purpose is to show that
neural networks can learn to paint galaxy properties on top of gravity-only
simulations. This will speed up the process of creating predicted galaxy
distributions used to analyze data from astronomical missions. In this work,
we focus our attention on one of the most important galaxy properties, the
stellar mass, i.e. the mass in stars a galaxy contains.
The mapping we want to perform is
$M^{h}_{*}(\vec{x})=f(M^{g}_{\rm dm}(\vec{x}),M^{g}_{\rm dm}(\vec{y}))~{},$
(1)
where $M_{*}^{h}(\vec{x})$ represents the stellar mass at position $\vec{x}$
according to the hydrodynamic simulation, $M^{g}_{\rm dm}(\vec{x})$
corresponds to the dark matter mass from the gravity-only simulation at
position $\vec{x}$. We emphasize that the stellar mass of a galaxy will likely
depend on its environment in a very complicated way. Although the underlying
structure of the simulation pair is the same, baryonic effects give rise to
minor variations. That is the reason why we included the term $M^{g}_{\rm
dm}(\vec{y})$ in the above equation, where $\vec{y}\neq\vec{x}$. Our purpose in
this paper is to show that convolutional neural networks can approximate the
function $f$.
Some of the studies that inspired this work are [1], [2], and [3].
## 2 Methods
### 2.1 Data
We use data from the state-of-the-art magneto-hydrodynamic simulation TNG100-1
[4, 5], and its gravity-only counterpart, TNG100-1-Dark, at present time.
Those simulations contain, among other things, the position and mass of all
particles in the simulations. Each simulation also contains a catalogue of
dark matter halos with their properties (e.g. mass and position). We construct
the stellar mass and dark matter mass density fields from the particle
positions and masses of the hydrodynamic and gravity-only simulations,
respectively. Since galaxies are expected to reside in dark matter subhalos,
we facilitate the training of the network by using also the mass-weighted
subhalo field, that we construct from the gravity-only simulation. The fields
span a volume of $(75~{}h^{-1}{\rm Mpc})^{3}$ ($1$ ${\rm Mpc}$ corresponds to
$3.26$ million light-years) and they contain $2048^{3}$ voxels.
One of the greatest challenges of working with this data is its sparsity: most
of the voxels in these simulations do not contain galaxies (i.e. the stellar mass
is zero). We circumvent this problem by training the network only on regions
centered on a subhalo with a stellar mass larger than
$10^{8}~{}h^{-1}M_{\odot}$.
### 2.2 Model
Our network takes as input a two-channel 3D volume with $65^{3}$ voxels each:
the dark matter and subhalos fields. The output of the model is the value of
the stellar mass in the central voxel of the 3D fields. Our architecture
consists of a series of blocks composed of convolutional, batch normalization,
and ReLU activation layers that alternate between having a larger kernel size
of $k\geq 5$ and stride 1, to a smaller kernel size, $k=3$, with stride 2.
Both block types capture information on different scales, while efficiently
down-sampling this large input to a single value. After six blocks, the latent
representation is flattened into a vector that is passed through two fully
connected layers which produce the predicted stellar mass. We will refer to
this network as dm2gal, as its purpose is to map dark matter from gravity-only
simulations to galaxies in hydrodynamic simulations.
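As a rough illustration of this architecture, the following PyTorch sketch implements the block pattern just described. Only the kernel sizes, strides, block structure, and input/output shapes come from the text; the channel widths (the argument c), the fully connected width of 128, and the use of unpadded convolutions are our assumptions.

```python
import torch
import torch.nn as nn

def block(c_in, c_out, k, s):
    # The repeating unit described above: convolution -> batch norm -> ReLU.
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=k, stride=s),
        nn.BatchNorm3d(c_out),
        nn.ReLU(inplace=True),
    )

class DM2Gal(nn.Module):
    """Sketch of the dm2gal regressor: 2-channel 65^3 input -> scalar M*."""

    def __init__(self, c=16):  # c stands in for the capacity hyper-parameter
        super().__init__()
        self.features = nn.Sequential(
            block(2, c, k=5, s=1),           # 65 -> 61
            block(c, 2 * c, k=3, s=2),       # 61 -> 30
            block(2 * c, 2 * c, k=5, s=1),   # 30 -> 26
            block(2 * c, 4 * c, k=3, s=2),   # 26 -> 12
            block(4 * c, 4 * c, k=5, s=1),   # 12 -> 8
            block(4 * c, 8 * c, k=3, s=2),   # 8  -> 3
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(8 * c * 3 ** 3, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 1),  # predicted stellar mass of the central voxel
        )

    def forward(self, x):  # x: (batch, 2, 65, 65, 65)
        return self.head(self.features(x)).squeeze(-1)

model = DM2Gal()
y = model(torch.randn(4, 2, 65, 65, 65))  # -> shape (4,)
```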
Weighted sampling. The abundance of voxels with different stellar masses (the
target) varies by many orders of magnitude. This poses a problem to our
network, that learns to predict the stellar masses of the voxels with lowest
stellar masses (the most abundant), but fails for the less frequent voxels
with large stellar masses. We overcome this problem with a weighted sampling
procedure. We first bin the voxels with stellar masses of the training data
into $100$ bins logarithmically spaced between the minimum
($10^{8}~{}h^{-1}M_{\odot}$) and maximum ($10^{11.5}~{}h^{-1}M_{\odot}$)
target values. We associate to each training sample a weight equal to
the inverse of the number count of values within its assigned bin. We also made
use of data augmentation (3D random rotations) to increase our training data
set.
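A minimal sketch of this weighted sampling with NumPy and PyTorch follows; the array train_log_masses is a hypothetical stand-in for the $\log_{10}$ of the training targets.

```python
import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler

# Stand-in for the log10 stellar masses of the training voxels.
train_log_masses = np.random.uniform(8.0, 11.5, size=10_000)

# 100 logarithmic bins between 10^8 and 10^11.5 h^-1 Msun.
edges = np.linspace(8.0, 11.5, 101)
counts, _ = np.histogram(train_log_masses, bins=edges)
bin_idx = np.clip(np.digitize(train_log_masses, edges) - 1, 0, 99)

# Each sample is weighted by the inverse count of its bin, so rare
# high-mass voxels are drawn as often as the abundant low-mass ones.
weights = 1.0 / counts[bin_idx]
sampler = WeightedRandomSampler(torch.as_tensor(weights, dtype=torch.double),
                                num_samples=len(weights), replacement=True)
# loader = DataLoader(train_set, batch_size=..., sampler=sampler)
```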
Training and validation. From the $2048^{3}$ voxel fields, we reserve two
cubes, one for validation and one for testing. The validation and testing
cubes have $840^{3}$ ($30.76~{}h^{-1}\rm Mpc$) and $868^{3}$
($31.78~{}h^{-1}\rm Mpc$) voxels, respectively. We save the model that best
matched the cosmological statistics first on the validation cube, and then
report performance on the testing cube. These regions were selected by
requiring that they be representative enough, i.e. avoiding regions that
contain big voids or very massive halos. The remaining voxels are used for
training.
We train our model by minimizing the mean square error
$L_{\mathrm{MSE}}=(\frac{1}{n})\sum_{i=1}^{n}(M_{*}^{h}(i)-M_{*}^{\mathrm{NN}}(i))^{2}$,
where $M_{*}^{h}(i)$ and $M_{*}^{\mathrm{NN}}(i)$ are the stellar masses from
the simulation and the prediction of the neural network, respectively. The sum
runs over all samples in the training set. After training to convergence, we
select for testing the models that best match the stellar mass power spectrum
of the validation region. Because a low validation MSE may reflect good
performance only on reconstructing the abundant low-mass galaxies, we avoid
using it as the indicator of performance after training.
Hyper-parameter search. We utilize PyTorch and PyTorch-Lightning [6] to
quickly train in parallel a broad range of hyper-parameter configurations,
with learning rates between $10^{-5}$ and $10^{-1}$, weight decay between $0$
and $10^{-1}$, and capacity (the number of channels in each layer). We employ a
learning rate scheduler which decreases the learning rate by a factor of $10$
whenever the validation loss does not improve for 5 consecutive epochs. Each
model’s best performing
validation score was achieved within 24 hours of training on a single NVIDIA
P100 GPU.
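The schedule described above maps directly onto a standard plateau scheduler; a hedged sketch (the optimizer type and base learning rate are our assumptions):

```python
import torch
import torch.nn as nn

net = nn.Linear(4, 1)  # stand-in for the dm2gal network sketched above
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
# Drop the learning rate by 10x after 5 epochs without validation improvement.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5)
# After each epoch: scheduler.step(val_loss)
```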
### 2.3 Benchmark model
We now describe the benchmark model we use to compare our results with. We
refer to this method as HOD, from halo occupation distribution [7, 8, 9, 10].
The most important assumption of this model is that all
galaxies reside within dark matter halos. The method works as follows. First,
the dark matter halos from the hydrodynamic simulation are assigned to
different bins according to their halo masses. Within each halo mass bin,
galaxies are split into centrals and satellites, and their stellar mass
distribution is calculated. Each halo mass bin will then have two stellar mass
distributions: one for the centrals and one for the satellites. We then take a
halo from the gravity-only simulation and split its subhalos
into centrals and satellites; the subhalo stellar masses are assigned
by sampling the distributions obtained from the hydrodynamic simulation. We
also correct for the effects of baryons on the halo mass function and the
number of satellites by multiplying the HOD prediction by the ratios of the
number of satellites and of the overall halo mass between the two simulations.
We expect our HOD to perform
better than the traditional one, where neither subhalo positions, nor halo
mass corrections are considered.
## 3 Results
We now investigate the performance of our network on the test set, and compare
the results against the HOD model. The first two panels of the upper row of
Fig. 1 show the spatial distribution of dark matter and subhalos from a
$(2.4~{}h^{-1}{\rm Mpc})^{3}$ ($65^{3}$ voxels) region of the test set. These
represent the input to the network, that outputs the value of the stellar mass
in the central voxel. With inputs to the network at different spatial
positions, the 3D stellar mass field can be predicted; we show it in the third
panel. The stellar mass fields from the hydrodynamic simulation and the HOD
model are shown in the fourth and fifth panels, respectively. From visual
inspection, we find that dm2gal performs better than the HOD, and closely
matches the results of the hydrodynamic simulation.
Figure 1: The upper row shows the spatial distribution of dark matter and
subhalos from the fast gravity-only simulations. Those fields are the inputs
of the network, that outputs the stellar mass in the central voxel. By
choosing different input regions the 3D stellar mass field can be predicted;
we show it in the third panel. The fourth and fifth panels display the stellar
mass from the expensive hydrodynamic simulation and the benchmark HOD model.
The bottom panels compare different summary statistics (power spectrum-left,
bispectrum-center, PDF-right) from the simulations (blue), dm2gal (red), and
HOD (green). As can be seen, dm2gal outperforms the HOD in the clustering
statistics (power spectrum and bispectrum) while yielding performance similar
to that of the HOD over the relevant range of stellar masses.
We now quantify the agreement between the predicted, HOD, and simulation
stellar mass fields using three different summary statistics: 1) the power
spectrum, 2) the bispectrum, and 3) the probability distribution function
(PDF).
Power spectrum. Given a 3D field, $\delta(\vec{x})$, we can compute its
Fourier transform as $\delta(\vec{k})=\int
e^{-i\vec{k}\cdot\vec{x}}\delta(\vec{x})d^{3}\vec{x}$ (using the discrete
version for finite fields). The power spectrum can be computed as
$P(k_{i})=1/N_{k_{i}}\sum_{k\in[k,k+dk]}|\delta(\vec{k})|^{2}$, where
$N_{k_{i}}$ is the number of independent modes in the interval $[k,k+dk]$. The
power spectrum is one of the most important quantities in cosmology, as it
describes the cosmic web on large, linear, scales. The first panel on the
bottom row of Fig. 1 shows the results. We find strong agreement on large
scales (low values of $k$) between all fields for the power spectrum. This is
expected for the HOD, but is a prediction for dm2gal. On smaller scales,
dm2gal outperforms the HOD, with the exception of scales $k\geq 30~{}h{\rm
Mpc}^{-1}$. We emphasize that these are extremely small scales (in
cosmological terms), deep in the non-linear regime. We believe that
with more training on a higher-resolution input, this fit will improve.
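A sketch of this estimator on a gridded field follows (NumPy only, our helper; normalization conventions differ between codes, so the prefactors are illustrative).

```python
import numpy as np

def power_spectrum(delta, box_size, n_bins=64):
    """Spherically averaged P(k) of a 3D field on a cubic grid (sketch)."""
    n = delta.shape[0]
    delta_k = np.fft.fftn(delta) * (box_size / n) ** 3   # discrete FT
    pk3d = np.abs(delta_k) ** 2 / box_size ** 3          # |delta(k)|^2 / V
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)  # axis wavenumbers
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    edges = np.linspace(0.0, kmag.max(), n_bins + 1)
    idx = np.clip(np.digitize(kmag, edges) - 1, 0, n_bins - 1)
    # Average |delta(k)|^2 over the N_k modes in each shell [k, k+dk].
    pk = np.bincount(idx, weights=pk3d.ravel(), minlength=n_bins)
    nk = np.bincount(idx, minlength=n_bins)
    centers = 0.5 * (edges[1:] + edges[:-1])
    good = nk > 0
    return centers[good], pk[good] / nk[good]
```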
Bispectrum. The bispectrum is a higher-order statistic that contains non-
Gaussian information from density fields [11]. It is calculated as
$B(k_{1},k_{2},\theta)=1/N_{k}\sum_{\vec{k}_{1},\vec{k}_{2}|\vec{k}_{1}+\vec{k}_{2}+\vec{k}_{3}=\vec{0}}[\delta(\vec{k}_{1})\delta(\vec{k}_{2})\delta(\vec{k}_{3})]$,
where $N_{k}$ is the number of independent modes in the considered interval in
$k_{1}$, $k_{2}$ and $\theta$, that is the angle between $\vec{k}_{1}$ and
$\vec{k}_{2}$. We have taken a configuration with $k_{1}=3~{}h{\rm Mpc}^{-1}$
and $k_{2}=4~{}h{\rm Mpc}^{-1}$ and show the results of the bispectrum, as a
function of $\theta$, in the middle panel of the bottom row of Fig. 1. In this
case, we find that dm2gal outperforms the HOD for all angles. We have repeated
the exercise for other triangle configurations, finding similar results.
Probability distribution function. Finally, we consider the probability
distribution function, that we compute as the number of voxels with a certain
stellar mass, as a function of the stellar mass value (for clarity, we do not
normalize the distribution to have an area equal to 1 under it). This quantity
contains additional information to the one embedded into the power spectrum
and bispectrum [12], and therefore, represents a different way to quantify the
agreement between the different methods. We show the results in the bottom
right panel of Fig. 1. We find that for stellar masses
$M_{*}>10^{8.5}~{}h^{-1}M_{\odot}$, both dm2gal and the HOD output a
distribution very similar to the one from the hydrodynamic simulation. On the
other hand, at the low mass end of the stellar mass PDF, the HOD outperforms
dm2gal. We note that it is expected that the HOD model works very well for the
PDF, as it is built to reproduce this statistic. We believe that with further
training and tuning of the hyperparameters we can improve the results of the
network in that regime. However, we emphasize that the low stellar mass regime
is not very important for cosmological analysis, as astronomical surveys will
have a hard time detecting such low mass objects.
## 4 Conclusion
We have shown, for the first time, that convolutional neural networks can be
used to paint stellar masses into the dark matter field of computationally
cheap gravity-only simulations. This method allows the production of stellar
mass fields over large cosmological volumes. Generating these fields using
hydrodynamic simulations will have a computational cost between 10x and 100x
higher than with our method, that only requires running a gravity-only
simulation. In terms of its performance, we have shown, that our model
outperforms the traditional HOD method, while being more computationally
efficient.
This work has made use of simulations where the cosmological and astrophysics
model is fixed. In the future, we plan to generalize our network to models
with different cosmologies and astrophysics. We have also neglected any
dependence on time that the mapping between dark matter to stellar mass may
have. We plan to quantify the importance of that term by training the network
using inputs at different times or by training using information from the
merger trees. We also plan to extend the network to be able to predict other
galactic properties such as metallicity, luminosity, and radius.
## Acknowledgments
We thank Jacky Yip, Carlos Fernandez-Granda, Gabriella Contardo, Yin Li, and
Sigurd Naess for insightful discussions. This work was conducted using the
computational resources at New York University and Princeton University.
## Broader Impact
This work will benefit upcoming cosmological missions by speeding up the
computational time needed to generate mock galaxy catalogues, needed to
analyze the collected data. Before using this method for cosmological
analyses, it is important to perform blind tests with simulations to
corroborate that the network produces unbiased results for the required
precision of the data. No ethical aspects are relevant for this work.
## References
* [1] Siyu He, Yin Li, Yu Feng, Shirley Ho, Siamak Ravanbakhsh, Wei Chen, and Barnabás Póczos. Learning to predict the cosmological structure formation. Proceedings of the National Academy of Sciences, 116(28):13825–13832, Jun 2019.
* [2] Xinyue Zhang, Yanfang Wang, Wei Zhang, Yueqiu Sun, Siyu He, Gabriella Contardo, Francisco Villaescusa-Navarro, and Shirley Ho. From dark matter to galaxies with convolutional networks, 2019.
* [3] Digvijay Wadekar, Francisco Villaescusa-Navarro, Shirley Ho, and Laurence Perreault-Levasseur. Hinet: Generating neutral hydrogen from dark matter with neural networks, 2020.
* [4] R. Weinberger, V. Springel, L. Hernquist, A. Pillepich, F. Marinacci, R. Pakmor, D. Nelson, S. Genel, M. Vogelsberger, J. Naiman, and P. Torrey. Simulating galaxy formation with black hole driven thermal and kinetic feedback. mnras, 465:3291–3308, March 2017.
* [5] Annalisa Pillepich, Dylan Nelson, Lars Hernquist, Volker Springel, Rüdiger Pakmor, Paul Torrey, Rainer Weinberger, Shy Genel, Jill P. Naiman, Federico Marinacci, and Mark Vogelsberger. First results from the IllustrisTNG simulations: the stellar mass content of groups and clusters of galaxies. mnras, 475(1):648–675, March 2018.
* [6] WA Falcon. Pytorch lightning. GitHub. Note: https://github.com/PyTorchLightning/pytorch-lightning, 3, 2019.
* [7] Román Scoccimarro, Ravi K. Sheth, Lam Hui, and Bhuvnesh Jain. How Many Galaxies Fit in a Halo? Constraints on Galaxy Formation Efficiency from Spatial Clustering. apj, 546(1):20–34, January 2001.
* [8] Uroš Seljak. Analytic model for galaxy and dark matter clustering. mnras, 318(1):203–213, October 2000.
* [9] J. A. Peacock and R. E. Smith. Halo occupation numbers and galaxy bias. mnras, 318(4):1144–1156, November 2000.
* [10] Andreas A. Berlind and David H. Weinberg. The Halo Occupation Distribution: Toward an Empirical Determination of the Relation between Galaxies and Mass. apj, 575(2):587–616, August 2002.
* [11] ChangHoon Hahn, Francisco Villaescusa-Navarro, Emanuele Castorina, and Roman Scoccimarro. Constraining Mν with the bispectrum. Part I. Breaking parameter degeneracies. jcap, 2020(3):040, March 2020.
* [12] Cora Uhlemann, Oliver Friedrich, Francisco Villaescusa-Navarro, Arka Banerjee, and Sandrine Codis. Fisher for complements: extracting cosmology and neutrino mass from the counts-in-cells PDF. mnras, 495(4):4006–4027, May 2020.
but not sensitive at all to $M$. This is encouraging since a large amount of
computational overhead comes from the ensemble. Note that while smaller
$\varphi$ performs the best here, we use a larger value ($\varphi=30$) in
combination with TEE.
In Figure 17(c), we show the effect of only training the feature extractor
using the gradient from one member of the ensemble at every iteration. The
results are computed on ninja, plunder, jumper, caveflyer, bigfish, leaper, and
climber with 1 seed. We observe that always training the feature extractor
leads to lower performance, corroborating our intuition that the feature
extractor should be trained at the same speed as the individual ensemble
members.
(a) $\varphi$ ablation.
(b) $M$ ablation.
(c) Feature extractor ablation.
Figure 17: Aggregated min-max normalized test scores for $\varphi$ (for fixed
$M=3$, and training feature extractor for all value heads), $M$ (for fixed
$\varphi=50$ and training feature extractor for all value heads), and whether
feature extractor is trained with all value heads (for fixed $\varphi=50$ and
$M=3$).
In Figure 18, we study the performance under different hyperparameter values
of TEE. We use fixed $M=5$ and $\varphi=30$ and vary the values of either
$\alpha$ or $\lambda$ while holding the other one fixed. We observe no
significant difference across these, suggesting that the algorithm is robust
to the values of $\alpha$ and $\lambda$.
Figure 18: Aggregated min-max normalized test scores for $\lambda$ (for fixed
$\alpha=7$) and $\alpha$ (for fixed $\lambda=0.6$) on 4 games: bossfight,
climber, plunder, starpilot.
## Appendix K Hardware
The experiments are conducted on NVIDIA 2080 and V100 GPUs and take
approximately 250 GPU-days.
# XUV ionization of the H2 molecule studied with attosecond angular streaking
Vladislav V. Serov
General, Theoretical and Computer Physics, Saratov State University, Saratov 410012, Russia
Anatoli S. Kheifets
Research School of Physics, The Australian National University, Canberra ACT 2601, Australia
<EMAIL_ADDRESS>
###### Abstract
We study orientation and two-center interference effects in attosecond time-
resolved photoionization of the H2 molecule. Time resolution of XUV ionization
of H2 is gained through the phase retrieval capability of attosecond angular
streaking demonstrated earlier by Kheifets et al [arXiv:2202.06147 (2022)].
Once applied to H2 , this technique delivers an anisotropic phase and time
delay which both depend sensitively on the molecular axis orientation. In
addition, the photoelectron momentum distribution displays a very clear two-
center interference pattern. When the interference formula due to Walter and
Briggs [J. Phys. B 32 2487 (1999)] is applied, an effective photoelectron
momentum appears to be greater than the asymptotic momentum at the detector.
This effect is explained by a molecular potential well surrounding the
photoemission center.
###### pacs:
32.80.Rm 32.80.Fb 42.50.Hz
Attosecond time resolved studies of molecular photoionization have become a
rapidly growing field. Starting from the pioneering experiment of Huppert et
al. (2016) on H2O and N2O, the method of attosecond interferometry has been
progressively used combining an extreme-ultraviolet (XUV) attosecond pulse
train (APT) and a synchronized infrared (IR) pulse. This technique has also
been known as reconstruction of attosecond beating by interference of two-
photon transitions (RABBITT) Muller (2002); Toma and Muller (2002). Recent
applications of RABBITT to molecular photoionization include attosecond
resolution of coupled electron and nuclear dynamics in dissociative ionization
of H2 Cattaneo et al. (2018) and orientation-dependent time delay and electron
localization studies in CO Vos et al. (2018). Nandi et al. (2020) resolved
attosecond timing of electron emission from a shape resonance in N2. Kamalov
et al. (2020) recorded electron correlation effects in attosecond
photoionization of CO2. Wang et al. (2021) explored the role of nuclear-
electronic coupling in attosecond photoionization of H2 .
The roadmap of atomic and molecular physics Young et al. (2018) has identified
X-ray free-electron lasers (XFELs) as a promising tool for resolving ultrafast
molecular dynamics. Attosecond time-energy structure of XFEL pulses has been
recently demonstrated Hartmann et al. (2018); Duris et al. (2020). This
demonstration makes XFEL sources potentially suitable for attosecond time
resolution of atomic and molecular photoionization. The only stumbling block
preventing such an application is the stochastic nature and inherent time
jitter of XFEL radiation.
The method of attosecond angular streaking of XUV ionization was developed to
overcome this obstacle. Prompted by theoretical works Zhao et al. (2005);
Kazansky et al. (2016); Li et al. (2018); Kazansky et al. (2019), this method
was eventually implemented in practice for a shot-to-shot characterization of
isolated attosecond pulses (IAP) at XFEL Hartmann et al. (2018); Duris et al.
(2020). Angular streaking of XUV ionization (ASXUVI or ASX for brevity) has
common elements with the two previously developed techniques: attosecond
angular streaking known as the attoclock Eckle et al. (2008a, b); Pfeiffer et
al. (2012) and the attosecond streak camera (ASC) Constant et al. (1997);
Itatani et al. (2002); Goulielmakis et al. (2004); Kienberger et al. (2004);
Yakovlev et al. (2005); Frühling et al. (2009); Zhang and Thumm (2011); Ivanov
and Smirnova (2011). As in ASC, ASX uses XUV pulses to ionize the target.
Then, similarly to the attoclock, the photoelectrons are steered by a
circularly polarized laser field which makes its imprint on the photoelectron
momentum distribution (PMD). This imprint is most visible in the plane
perpendicular to the laser propagation direction. In its original form
Kazansky et al. (2016); Li et al. (2018); Kazansky et al. (2019); Hartmann et
al. (2018); Duris et al. (2020), ASX employed an intense IR laser field and
was interpreted within the strong field approximation (SFA) Zhao et al.
(2022). In these strong field settings, the phase of the XUV ionization is
usually neglected and the timing information associated with this phase is
lost. An alternative view within the lowest order perturbation theory (LOPT)
Dahlström et al. (2012); Dahlström et al (2013); Maquet et al. (2014)
considers IR streaking as an interference phenomenon which opens a natural
access to the streaking phase $\Phi_{S}$. The latter is typically decomposed
into the XUV ionization phase (or Wigner phase) and the continuum-continuum
(CC) phase from the IR interaction. These two phases can be converted to the
corresponding time delay components, which add up to the atomic time delay
$\tau_{a}$.
Phase retrieval capability of ASX based on this analysis was demonstrated
recently by Kheifets et al. (2022). In their numerical simulations on the
hydrogen atom, they recovered accurately the streaking phase and the atomic
time delay across a wide range of photon energies starting from the threshold
and exceeding it many times. Most importantly, this phase retrieval could be
conducted from a single XUV shot. This is a significant advantage over the
existing interferometric techniques which require a systematic and
controllable variation of the XUV/IR pulse delay in one set of measurements in
order to record a streaking spectrogram or a RABBITT trace. This recording
require a precise and stable temporal synchronization of the XUV/IR pulses
which is not feasible at XFEL at present.
In this paper, we extend ASX to molecular photoionization. We solve
numerically the time-dependent Schrödinger equation (TDSE) describing the
hydrogen molecule driven by a combination of the linearly polarized XUV and
circularly polarized IR pulses. In our simulations, the XUV/IR pulse delay is
incremented in several steps. By augmenting the isochrone analysis proposed by
Kazansky et al. (2016) with the energy dependent XUV ionization phase, we are
able to interpret the molecular TDSE results in terms of the atomic time
delay. While the phase and time delay determination is most accurate combining
several increments of the XUV/IR delay, the accuracy is not significantly
compromised with just a single XUV/IR pulse delay. We make a comparison with
the previous RABBITT simulations on H2 Serov and Kheifets (2017) and confirm
the validity of our interpretation and the accuracy of our numerical results. We also
demonstrate a strong dependence of the time delay on the molecular axis
orientation discovered earlier in H${}_{2}^{+}$ ion Ning et al. (2014); Serov
and Kheifets (2016).
The paper is organized into the following sections. In Sec. I we outline
basics of the ASX method. In Sec. II we describe our computational procedure.
In Sec. III we analyze and interpret our numerical results. In Sec. IV we give
our concluding remarks.
## I Basic considerations
The proposed phase retrieval by ASX is outlined in our preceding work Kheifets
et al. (2022). The basics of the molecular ASX are essentially the same as for
atoms. We proceed as follows. We apply the SFA and write the photoionization
amplitude as Kitzler et al. (2002)
$a({\bm{k}},\tau)=i\int_{t_{0}}^{\infty}\\!\\!dt\
E_{x}(t-\tau)D_{x}\left[{\bm{k}}-{\bm{A}}(t)\right]e^{-i\Phi(t)}\ .$ (1)
Here the electric field of the XUV pulse $E_{x}$ is advancing the streaking
pulse by the time $\tau$. The streaking field is described by its vector
potential
${\bm{A}}(t)=A_{0}\cos(\omega t)\hat{{\bm{x}}}+A_{0}\sin(\omega
t)\hat{{\bm{y}}}\ .$
The photoelectron momentum is confined to the polarization plane
${\bm{k}}=k\cos\phi~{}\hat{{\bm{x}}}+k\sin\phi~{}\hat{{\bm{y}}}$, where $\phi$
is the emission angle.
The exponential term contains the phase factor
$\Phi(t)=\frac{1}{2}\int_{t}^{\infty}dt^{\prime}\left[{\bm{k}}-{\bm{A}}(t^{\prime})\right]^{2}-E_{0}\,t\
,$ (2)
which involves the photoelectron energy in the absence of streaking
$E_{0}=\Omega-I_{p}$. The most probable photoelectron trajectory, starting at
the time $t_{\rm st}$, keeps the phase stationary:
$\Phi^{\prime}(t_{\rm st})=\frac{1}{2}|{\bm{k}}-{\bm{A}}(t_{\rm
st})|^{2}-E_{0}=0$ (3)
We assume that the XUV pulse is short relative to the IR pulse and shifted
relative to its peak position by the time $\tau$. Under these conditions, Eq.
(3) is transformed to the following isochrone equation Kazansky et al.
(2016):
$k^{2}/2-E_{0}=kA_{0}\cos(\phi-\omega\tau)$ (4)
Here we neglect the ponderomotive energy $U_{p}=A_{0}^{2}/2$ in a weak
streaking field.
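For a given emission angle $\phi$ and delay $\tau$, Eq. (4) is a quadratic in $k$ whose positive root traces the isochrone; a short sketch (ours, with illustrative parameter values in atomic units):

```python
import numpy as np

def isochrone_k(phi, tau, E0, A0, omega):
    """Positive root of Eq. (4): k^2/2 - E0 = k*A0*cos(phi - omega*tau)."""
    c = A0 * np.cos(phi - omega * tau)
    return c + np.sqrt(c**2 + 2.0 * E0)  # quadratic formula, physical k > 0

# Omega = 1.5 au and I_p = 0.57 au give E0 = 0.93 au; omega = 0.038 au is the
# 1200 nm IR photon energy; A0 = 0.05 au is a placeholder weak-field amplitude.
phi = np.linspace(0.0, 2.0 * np.pi, 361)
k = isochrone_k(phi, tau=0.0, E0=0.93, A0=0.05, omega=0.038)
# The ring k(phi) peaks at phi = omega*tau, which is how the streaking
# field leaves its imprint on the PMD.
```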
The above stationary phase analysis should be modified to account for the
photoelectron energy dependence of the dipole matrix element Schultze et al.
(2010)
$\arg\left\\{D\left[{\bm{k}}-{\bm{A}}(t)\right]\right\\}\propto\alpha|{\bm{k}}-{\bm{A}}(t)|^{2}/2\
,$ (5)
where
$\alpha=\partial\arg D(\sqrt{2E})/\partial E$ (6)
The modified stationary phase equation reads
$\frac{1}{2}\left|{\bm{k}}-{\bm{A}}(t_{st})\right|^{2}-E_{0}+\frac{\alpha}{2}\frac{d}{dt}\left[\left({\bm{k}}-{\bm{A}}(t_{\rm
st})\right)^{2}\right]=0$ (7)
This leads to a generalized isochrone equation
$\displaystyle k^{2}/2-E_{0}$ $\displaystyle=$ $\displaystyle
kA_{0}\left[\cos(\phi-\omega\tau)-\alpha\omega\sin(\phi-\omega\tau)\right]$
(8) $\displaystyle\approx$ $\displaystyle
kA_{0}\cos[\phi-\omega\tau+\omega\alpha]$
Here $\alpha=\Phi_{S}/\omega=\tau_{a}$ under certain XUV and IR pulse
parameters as demonstrated in Kheifets et al. (2022).
## II Computational details
We solve numerically the molecular TDSE using the computer code Serov
(2011) to obtain the ionization amplitude $f({\bm{k}})$. We use an angular
basis that includes spherical harmonics up to $l_{max}=7$ and $|m_{max}|=7$.
Unlike in atomic XUV photoionization, where the dipole selection rules apply
directly, the quantum numbers $l,m$ here are restricted by parity conservation.
The photoelectron momentum spectrum $P({\bm{k}})$ is obtained as the modulus
squared of the ionization amplitude
$P({\bm{k}})\propto|f({\bm{k}})|^{2}\ .$ (9)
The PMD is restricted to the polarization plane $P(k_{x},k_{y},k_{z}=0)$ and
converted to the polar coordinates $P(k,\phi)$ where
$k=(k_{x}^{2}+k_{y}^{2})^{1/2}\ ,\ \phi=\tan^{-1}(k_{y}/k_{x})\ .$ (10)
In these coordinates, we define the directional probability of the
photoelectron emission
$P(\phi)=\int dk\ P(k,\phi)$ (11)
and the mean (central) radial momentum in the given direction
$\bar{k}(\phi)=\int kP(k,\phi)dk/P(\phi)\ .$ (12)
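A sketch of Eqs. (11) and (12) applied to a PMD sampled on a polar grid (our helper functions, not the authors' code):

```python
import numpy as np

def angular_profiles(P, k, phi):
    """P(phi) and mean radial momentum kbar(phi) on a (k, phi) grid.

    P : array of shape (len(k), len(phi)) holding P(k, phi)
    k : 1D radial momentum grid; phi : 1D grid of emission angles
    """
    P_phi = np.trapz(P, k, axis=0)                      # Eq. (11)
    kbar = np.trapz(k[:, None] * P, k, axis=0) / P_phi  # Eq. (12)
    return P_phi, kbar

# The k_-/k_+ momenta used below are kbar evaluated at phi = 0 and phi = pi:
# k_minus = kbar[np.argmin(np.abs(phi))]
# k_plus  = kbar[np.argmin(np.abs(phi - np.pi))]
```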
The TDSE is driven by the XUV and IR pulses with the following parameters. The
XUV pulse with a Gaussian envelope has a FWHM of 2 fs and an intensity of
$\rm{6}\times 10^{{13}}~{}W/cm^{2}$. The XUV photon energy $\Omega$ ranges
from 0.7 au to 3 au. A relatively low XUV field intensity is required to
remain within the LOPT framework. A fairly large pulse duration is employed to
ensure a moderately narrow spectral width to probe XUV ionization sufficiently
close to the threshold at 15.6 eV (0.57 au). At the same time, the spectral
width $\Gamma$ should be kept sufficiently large to make sure the IR assisted
XUV absorption process overlaps spectrally with unassisted XUV ionization
Kheifets et al. (2022). This requires $\Gamma>2\omega$, where $\omega$ is the
laser photon energy. To satisfy this requirement, we chose a mid-IR laser
pulse with $\omega=0.038$ au corresponding to $\lambda=1200$ nm. The pulse has
a cosine-squared envelope with a FWHM of 25 fs and an intensity of
$\rm{1.5}\times 10^{{11}}~{}W/cm^{2}$. The XUV pulse is linearly polarized
along the $\hat{\bm{x}}$ axis whereas the IR pulse is circularly polarized in
the $(xy)$ plane. At each XUV photon energy, we scan the delay between the XUV
pulse and the IR laser field ($\tau$) in the range of 0 to 60 au in 7
increments.
## III Numerical results
We identify three regions in the photoelectron energies which display
distinctively different PMD in the polarization plane. These regions can be
characterized by the strength of the molecular two-center interference. The
theory of this interference was proposed by Cohen and Fano (1966) and Kaplan
and Markin (1969) and further developed for diatomic molecules fixed in space
by Walter and Briggs (1999). In the latter formulation, the ionization
amplitude is approximated by the expression
$f_{\rm
WB}({\bm{k}})\propto({\bm{e}}\cdot{\bm{k}})\cos({\bm{k}}\cdot{\bm{R}}/2)\ ,$
(13)
where $\bm{e}$ is the polarization vector of light and ${\bm{R}}$ is the
vector connecting the nuclei. The first term in the RHS of Eq. (13) is the
atomic hydrogen dipole factor whereas the second term represents the molecular
two-center interference. In the following, we will use a scalar coefficient
$c=kR/2$ to identify the strength of this interference.
Figure 1: PMD of H2 at $\Omega=0.7$ au in the parallel field orientation
with the XUV-only pulse (top) and the XUV+IR pulses (middle). The horizontal
dashed line visualizes the photoelectron momentum
$k_{0}=\sqrt{2(\Omega-I_{p})}$ from energy conservation. The vertical dashed
line marks half of the angular width.
### III.1 Weak interference
At low photoelectron energy when $c\ll 1$, the PMD of H2 looks essentially
atomic-like, with very little anisotropy between the parallel and
perpendicular orientation of the molecular axis ${\bm{R}}$ relative to the
linear polarization axis $\bm{e}$ of the XUV pulse. This behavior is featured
in Fig. 1 which displays the PMD at $\Omega=0.7$ au. The top and middle panels
both illustrate the case of the parallel orientation with the XUV only pulse
(top) and XUV+IR pulses (middle). The bottom panel displays the radially
integrated PMD of the middle panel in the form of the angular distribution
$P(\phi)$ which is overlapped with the analogous distribution for the
perpendicular orientation. Except for an overall magnitude factor $\times
1.8$, the $\perp$ angular distribution looks essentially the same as the
$\parallel$ one.
Meanwhile, the PMD of the top panel (XUV only) and the middle panel (XUV+IR)
differ by a noticeable displacement of the radial momentum by the vector-
potential $A_{\small\tt IR}$ of the streaking field. To quantify this
displacement, we use the central photoelectron momenta (12) in the downwards
(-) and upwards (+) shifted lobes of the PMD
$k_{-}\equiv\bar{k}(\phi=0)\ \ ,\ \ k_{+}\equiv\bar{k}(\phi=\pi)\ ,$
These momenta $k_{\pm}(\tau)$, which depend sensitively on the XUV/IR time
delay $\tau$, are then used to obtain the isochrone phase offset:
$k_{\pm}^{2}(\tau)/2-E_{0}=\pm A_{0}\,k_{\pm}(\tau)\cos(\omega\tau+\Phi_{S})\
.$ (14)
This determination is illustrated in the top panel of Fig. 2. Here we
determine $\Phi_{S}=-0.216\pm 0.003$ rad by fitting either of the
$k_{\pm}(\tau)$ branches with a common streaking phase value over the whole
set of the time delays $\tau$. Alternatively, we can apply Eq. (14) to
individual $\tau$ values and to determine the instantaneous $\Phi_{S}(\tau)$.
These values are displayed along with the average streaking phase on the
bottom panel of Fig. 2. Even though the variation of $\Phi_{S}(\tau)$ exceeds
the error bars of the average value, the accuracy of the instantaneous
streaking phase determination is not significantly compromised.
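The fit of Eq. (14) can be posed as a one-parameter least-squares problem over both momentum branches; a sketch using SciPy (function and variable names are ours):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_streaking_phase(tau, k_plus, k_minus, E0, A0, omega):
    """Least-squares estimate of Phi_S in Eq. (14) from k_+/-(tau)."""
    # Stack both branches; the sign s = +/-1 selects the branch.
    t = np.concatenate([tau, tau])
    k = np.concatenate([k_plus, k_minus])
    s = np.concatenate([np.ones_like(tau), -np.ones_like(tau)])
    y = k**2 / 2.0 - E0

    def model(_, phi_s):  # y = s * A0 * k * cos(omega*t + Phi_S)
        return s * A0 * k * np.cos(omega * t + phi_s)

    popt, pcov = curve_fit(model, t, y, p0=[0.0])
    return popt[0], np.sqrt(pcov[0, 0])  # Phi_S and its 1-sigma error
```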
Figure 2: Top: Radial momentum displacements $k_{\pm}^{2}/2-k_{0}^{2}/2$ are
shown at various XUV/IR delays $\tau$. The dashed line represents the fit with
Eq. (14). The arrow indicates the streaking phase $\Phi_{S}$. Bottom: the fit
with Eq. (14) is applied to individual $\tau$ values to determine the
instantaneous $\Phi_{S}(\tau)$. The average $\Phi_{s}$ is shown as a solid
line with error bars visualized by dotted lines. Figure 3: PMD of H2 at
$\Omega=1.5$ au for the parallel (top) and perpendicular (middle) field
orientation with the XUV-only pulse. The horizontal dashed line visualizes the
photoelectron momentum $k_{0}$, while the vertical line marks half of the
angular width.
### III.2 Moderate interference
This region is characterized by a moderate factor $c\lesssim 1$. A typical PMD
in this region is presented in the top and middle panels of Fig. 3. Here the
XUV photon energy $\Omega=1.5$ au and the molecule is oriented parallel (top)
and perpendicular (middle) to the polarization axis. Both panels visualize
single-photon XUV ionization. Adding the IR streaking field does not change
the PMD structure except for a vertical up and down displacement by the amount
of $A_{\small\tt IR}$ as in the middle panel of Fig. 1.
In the case of $c\lesssim 1$, unlike $c\ll 1$, the PMD shapes corresponding to
the parallel and perpendicular orientations differ significantly.
The PMD lobes are noticeably elongated for the parallel orientation and
acquire a greater angular width. The photoelectron angular distribution shown
in the bottom panel is markedly different for the $\parallel$ and $\perp$
orientations. While the latter retains the atomic like structure, the former
widens significantly and becomes drastically, by a factor $\times 10$,
suppressed. This parallel emission suppression is documented in the literature
and termed the “confinement effect” Fernández et al. (2007, 2009). It
corresponds to the dominant photoelectron $p$-wave being trapped inside a one-
dimensional box of length $R$ when the momentum quantization condition
$kR=\pi$ is satisfied, i.e. at $c=\pi/2$.
### III.3 Strong interference
This region is characterized by a large interference factor
$c\gtrapprox\pi/2$. In this region, the shape distortion of the PMD is most
striking, as shown in Fig. 4 for $\Omega=2.5$ au. While the perpendicular
orientation (middle panel) retains an atomic like shape, the parallel
orientation (top panel) displays very clear interference fringes. These
fringes are also seen in the angular resolved cross-section exhibited in the
bottom panel of Fig. 4.
Figure 4: Same as Fig. 3 for $\Omega=2.5$ au. Figure 5: Top: expansion
coefficients of the ionization amplitude over the spherical harmonics (16)
plotted as functions of the interference factor $c=kR/2$. Bottom: angular half
width of the PMD lobes as a function of the photoelectron energy
$E=\Omega-I_{p}$. The upper horizontal scale marks the corresponding interference
factors.
To quantify the two-center interference effects across a wide range of the
photon energies, we plot in the bottom panel of Fig. 5 the half width of the
PMD lobes. The atomic-like half width of $45^{\circ}$ corresponds to the
dipole $\cos^{2}\phi$ angular shape. It is retained consistently over the
whole photon energy range in the perpendicular molecular orientation for XUV
only photoionization. Adding a streaking IR field reduces this width
insignificantly for the $\perp$ orientation. Meanwhile, the $\parallel$
orientation, both in XUV and XUV+IR fields, displays a wide oscillation of the
width in the range of moderate to strong two-center interference.
To understand the nature of this oscillation, we note that the amplitude (13)
for the parallel orientation is reduced to
$f^{\parallel}_{\rm WB}(\phi)\propto\cos\phi\cdot\cos(0.5kR\cos\phi)\ .$ (15)
This amplitude can be expanded over the spherical harmonics with the expansion
coefficients given by the following expression Serov et al. (2012)
$A_{\ell}(c)=\left\langle Y_{\ell
0}|f^{\parallel}_{\text{WB}}\right\rangle=\sqrt{2\pi}\int_{-1}^{1}\bar{P}_{\ell}(\eta)\eta\cos(c\eta)d\eta.$
(16)
Here $\bar{P}_{\ell}(\eta)$ are the normalized Legendre polynomials which
depend on $\eta=\cos\phi$. The expansion coefficients (16) for various $\ell$
are plotted in the top panel of Fig. 5. From this graph we see clearly that
$c\simeq 1$ corresponds to a noticeable contribution of the $f$-wave whereas
at $c\simeq\pi/2$ the $p$- and $f$-wave contributions become of the same
magnitude. These two boundaries correspond to the region of moderate and
strong two-center interference according to our classification in Sec. III.2
and Sec. III.3. In the meantime, the weak interference $c\ll 1$ considered in
Sec. III.1 corresponds to a nearly sole contribution of the $p$-wave.
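The coefficients (16) are one-dimensional integrals that can be evaluated directly; a sketch (ours) using SciPy quadrature, with $\bar{P}_{\ell}(\eta)=\sqrt{(2\ell+1)/2}\,P_{\ell}(\eta)$:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

def A_ell(ell, c):
    """Expansion coefficient of Eq. (16) for given ell and c = kR/2."""
    Pl = Legendre.basis(ell)
    norm = np.sqrt((2 * ell + 1) / 2.0)  # normalized Legendre prefactor
    val, _ = quad(lambda eta: norm * Pl(eta) * eta * np.cos(c * eta), -1.0, 1.0)
    return np.sqrt(2.0 * np.pi) * val

# Even-ell coefficients vanish by parity; around c ~ pi/2 the f- and p-wave
# weights become comparable (cf. the top panel of Fig. 5).
print(abs(A_ell(3, np.pi / 2)) / abs(A_ell(1, np.pi / 2)))
```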
Fitting the numerical TDSE results for the photoelectron angular distributions
with the squared amplitude (15) gives systematically higher effective momenta
$k_{\rm eff}$ in comparison with the nominal momenta $k$ determined by the
energy conservation. We find $k_{\rm eff}$ from the moduli ratio of the $f$-
and $p$-waves
$\left|\frac{A_{3}(c_{a})}{A_{1}(c_{a})}\right|=\left|\frac{\left\langle
Y_{30}|f({\bm{k}})\right\rangle}{\left\langle
Y_{10}|f({\bm{k}})\right\rangle}\right|.$ (17)
This ratio equates the expansion coefficients $A_{\ell}$ from Eq. (16)
evaluated at $c_{a}=k_{\rm eff}R/2$ with the corresponding expansion
coefficients of the exact numerical amplitude $f({\bm{k}})$ found by the TDSE
solution.
The deviation of $k_{\rm eff}$ from $k$ displayed in the top panel of Fig. 6 can
be explained by the effective potential of the residual ion. Due to this
potential, the momentum of the electron near the nucleus is greater and,
accordingly, a larger phase difference between the emitting centers is
accumulated. We can introduce an average effective potential related to the
effective momentum through the following expression
$k_{\rm eff}/k=\sqrt{1+2|\bar{U}_{\rm eff}|/k^{2}}\ .$ (18)
The values of $\bar{U}_{\rm eff}$ are presented in the bottom panel of Fig. 6.
A gradual reduction of $\bar{U}_{\rm eff}$ with a decreasing XUV photon energy
can be understood as follows. By the uncertainty principle, a slower
photoelectron has a larger birthplace area across which the ionic potential is
sampled. Therefore, its effective depth becomes smaller.
### III.4 Streaking phase and time delay
The streaking phase results for the H2 molecule in the $\parallel$ and $\perp$
orientations are summarized in the top panel of Fig. 7 where they are compared
with the corresponding values of the H atom. While the molecular $\Phi_{S}$ in
the $\perp$ orientation is very similar to the atomic one, the $\parallel$
orientation displays a systematically higher values, especially at the onset
of the strong interference when the $c$ factor approaching $\pi/2$. The atomic
time delay derived from the streaking phase $\tau_{a}=\Phi_{S}/\omega$ is
shown in the bottom panel of Fig. 7 where it is compared with the
corresponding values returned by the RABBITT simulations Serov and Kheifets
(2017). Numerical $\tau_{a}$ values from the ASX and RABBITT simulations are
slightly different because of a difference in the wavelength $\lambda=1200$ nm
in the former and 800 nm in the latter. The IR photon wavelength and energy
affect the CC component of the atomic time delay Dahlström et al (2013); Serov
et al. (2015) which becomes particularly noticeable close to the threshold.
Nevertheless, the qualitative behavior of $\tau_{a}$ is very similar in both
sets of simulations. The atomic time delay in the H atom and the H2 molecule
in the $\perp$ orientation remains negative in the studied XUV photon energy
range. At the same time, the $\parallel$ orientation displays a sharp rise of
the time delay to positive values. This effect is also recorded in the
H${}_{2}^{+}$ ion Ning et al. (2014); Serov and Kheifets (2016). It was
attributed in Ning et al. (2014) to the destructive two-center interference.
We offer a more physically appealing interpretation of the positive time delay
due to the trapping the photoelectron in the molecular potential well. From
the condition of this trapping $k_{\rm eff}R=\pi$ occurring at $kR\simeq 2.4$
we can estimate $|U_{\rm eff}|\simeq 1$ au. This determination is consistent
with the values of $U_{\rm eff}$ presented in the bottom panel of Fig. 6.
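Taking the equilibrium internuclear separation of H2, $R\approx 1.4$ au (a standard value not quoted explicitly above), this estimate can be made explicit: the trapping condition gives $k\simeq 2.4/R\approx 1.71$ au and $k_{\rm eff}=\pi/R\approx 2.24$ au, so inverting Eq. (18) yields $|\bar{U}_{\rm eff}|=(k_{\rm eff}^{2}-k^{2})/2\approx 1.0$ au.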
Figure 6: Top: Effective momentum $k_{\rm eff}/k$. Bottom: effective potential
$U_{\rm eff}$. Figure 7: Top: Streaking phase $\Phi_{S}$ as a function of the
photoelectron energy for the hydrogen atom and the H2 molecule in the
$\parallel$ and $\perp$ orientations. Bottom: the atomic time delay derived
from the streaking phase $\tau_{a}=\Phi_{S}/\omega$ is compared with the
corresponding values returned from the RABBITT simulations Serov and Kheifets
(2017).
## IV Conclusions
In the present work, we employed the angular streaking of XUV ionization of
the H2 molecule to determine the streaking phase and time delay corresponding
to various orientations of the inter-nuclear axis relative to the polarization
axis of ionizing radiation. The ASX technique was originally developed to
characterize isolated attosecond pulses from an XFEL source on a shot-to-shot
basis. This technique was adapted to determine the streaking phase and applied
in our previous work Kheifets et al. (2022) to atomic hydrogen. In the
present work we expand this technique to diatomic homonuclear molecules. We
converted the streaking phase to the atomic time delay and found it in good
agreement with our earlier RABBITT simulations Serov and Kheifets (2017).
Unlike RABBITT, which requires an accurate and stable synchronization of the
ionizing XUV and probing IR pulses, ASX can determine the streaking phase and
time delay from a single XUV shot. This is essential in XFEL sources with
their inherent time jitter.
As in earlier works Ning et al. (2014); Serov and Kheifets (2016, 2017) we
observe a strong orientation dependence of the molecular time delay. In most
cases, $\tau_{a}$ remains negative in H, H2 and H${}_{2}^{+}$ due to a large
negative CC component. However, $\tau_{a}$ becomes positive in H2 and
H${}_{2}^{+}$ in the parallel orientation ${\bm{R}}\parallel\bm{e}$. This
happens when the photoelectron in the dominant $p$-wave becomes trapped in the
molecular potential well. From the condition of this trapping we can estimate
the depth of this well $U_{\rm eff}$.
While the streaking phase retrieval by ASX was demonstrated here for the
diatomic homonuclear molecule H2, the proposed method should work for arbitrary
molecular targets. Its application in XFEL will be particularly beneficial for
studying inner shell ionization in atomic and molecular targets which cannot
be ionized at present with conventional laser HHG sources.
#### Acknowledgment:
We thank Rickson Wielian for reviewing the literature and useful discussions.
This work is supported by the Discovery grant DP190101145 of the Australian
Research Council. Resources of National Computational Infrastructure facility
(NCI Australia) have been employed.
## References
* Huppert et al. (2016) M. Huppert, I. Jordan, D. Baykusheva, A. von Conta, and H. J. Wörner, _Attosecond delays in molecular photoionization_ , Phys. Rev. Lett. 117, 093001 (2016).
* Muller (2002) H. Muller, _Reconstruction of attosecond harmonic beating by interference of two-photon transitions_ , Appl. Phys. B 74, s17 (2002).
* Toma and Muller (2002) E. S. Toma and H. G. Muller, _Calculation of matrix elements for mixed extreme-ultraviolet-infrared two-photon above-threshold ionization of argon_ , J. Phys. B 35(16), 3435 (2002).
* Cattaneo et al. (2018) L. Cattaneo, J. Vos, R. Y. Bello, A. Palacios, S. Heuser, L. Pedrelli, M. Lucchini, C. Cirelli, F. Martín, and U. Keller, _Attosecond coupled electron and nuclear dynamics in dissociative ionization of H 2_, Nature Physics 14, 733 (2018).
* Vos et al. (2018) J. Vos, L. Cattaneo, S. Patchkovskii, T. Zimmermann, C. Cirelli, M. Lucchini, A. Kheifets, A. S. Landsman, and U. Keller, _Orientation-dependent stereo Wigner time delay in a small molecule_ , Science 360(6395), 1326 (2018).
* Nandi et al. (2020) S. Nandi, E. Plésiat, S. Zhong, A. Palacios, D. Busto, M. Isinger, L. Neoricic, C. L. Arnold, R. J. Squibb, R. Feifel, et al., _Attosecond timing of electron emission from a molecular shape resonance_ , Science Advances 6(31), eaba7762 (2020).
* Kamalov et al. (2020) A. Kamalov, A. L. Wang, P. H. Bucksbaum, D. J. Haxton, and J. P. Cryan, _Electron correlation effects in attosecond photoionization of CO 2_, Phys. Rev. A 102, 023118 (2020).
* Wang et al. (2021) A. L. Wang, V. V. Serov, A. Kamalov, P. H. Bucksbaum, A. Kheifets, and J. P. Cryan, _Role of nuclear-electronic coupling in attosecond photoionization of H${}_{2}$_, Phys. Rev. A 104, 063119 (2021).
* Young et al. (2018) L. Young, K. Ueda, M. Gühr, P. H. Bucksbaum, M. Simon, S. Mukamel, N. Rohringer, K. C. Prince, C. Masciovecchio, M. Meyer, et al., _Roadmap of ultrafast x-ray atomic and molecular physics_ , J. Phys. B 51(3), 032003 (2018).
* Hartmann et al. (2018) N. Hartmann, G. Hartmann, R. Heider, M. S. Wagner, M. Ilchen, J. Buck, A. O. Lindahl, C. Benko, J. Grünert, J. Krzywinski, et al., _Attosecond time-energy structure of x-ray free-electron laser pulses_ , Nature Photonics 12, 215 (2018).
* Duris et al. (2020) J. Duris, S. Li, T. Driver, E. G. Champenois, J. P. MacArthur, A. A. Lutman, Z. Zhang, P. Rosenberger, J. W. Aldrich, R. Coffee, et al., _Tunable isolated attosecond x-ray pulses with gigawatt peak power from a free-electron laser_ , Nature Photonics 14, 30 (2020).
* Zhao et al. (2005) Z. X. Zhao, Z. Chang, X. M. Tong, and C. D. Lin, _Circularly-polarized laser-assisted photoionization spectra of argon for attosecond pulse measurements_ , Opt. Express 13(6), 1966 (2005).
* Kazansky et al. (2016) A. K. Kazansky, A. V. Bozhevolnov, I. P. Sazhina, and N. M. Kabachnik, _Interference effects in angular streaking with a rotating terahertz field_ , Phys. Rev. A 93, 013407 (2016).
* Li et al. (2018) S. Li, Z. Guo, R. N. Coffee, K. Hegazy, Z. Huang, A. Natan, T. Osipov, D. Ray, A. Marinelli, and J. P. Cryan, _Characterizing isolated attosecond pulses with angular streaking_ , Opt. Express 26(4), 4531 (2018).
* Kazansky et al. (2019) A. K. Kazansky, I. P. Sazhina, and N. M. Kabachnik, _Fast retrieval of temporal characteristics of FEL pulses using streaking by THz field_ , Opt. Express 27(9), 12939 (2019).
* Eckle et al. (2008a) P. Eckle, M. Smolarski, P. Schlup, J. Biegert, A. Staudte, M. Schoffler, H. G. Muller, R. Dorner, and U. Keller, _Attosecond angular streaking_ , Nat. Phys. 4, 565 (2008a).
* Eckle et al. (2008b) P. Eckle, M. Smolarski, P. Schlup, J. Biegert, A. Staudte, M. Schoffler, H. G. Muller, R. Dorner, and U. Keller, _Attosecond angular streaking_ , Nat Phys 4, 565 (2008b).
* Pfeiffer et al. (2012) A. N. Pfeiffer, C. Cirelli, M. Smolarski, D. Dimitrovski, M. Abu-samha, L. B. Madsen, and U. Keller, _Attoclock reveals natural coordinates of the laser-induced tunnelling current flow in atoms_ , Nat Phys 8, 76 (2012).
* Constant et al. (1997) E. Constant, V. D. Taranukhin, A. Stolow, and P. B. Corkum, _Methods for the measurement of the duration of high-harmonic pulses_ , Phys. Rev. A 56, 3870 (1997).
* Itatani et al. (2002) J. Itatani, F. Quéré, G. L. Yudin, M. Y. Ivanov, F. Krausz, and P. B. Corkum, _Attosecond streak camera_ , Phys. Rev. Lett. 88, 173903 (2002).
* Goulielmakis et al. (2004) E. Goulielmakis, M. Uiberacker, R. Kienberger, A. Baltuska, V. Yakovlev, A. Scrinzi, T. Westerwalbesloh, U. Kleineberg, U. Heinzmann, M. Drescher, et al., _Direct measurement of light waves_ , Science 305(5688), 1267 (2004).
* Kienberger et al. (2004) R. Kienberger, E. Goulielmakis, M. Uiberacker, A. Baltuska, V. Yakovlev, F. Bammer, A. Scrinzi, T. Westerwalbesloh, U. Kleineberg, U. Heinzmann, et al., _Atomic transient recorder_ , Nature 427, 817 (2004).
* Yakovlev et al. (2005) V. S. Yakovlev, F. Bammer, and A. Scrinzi, _Attosecond streaking measurements_ , J. Mod. Opt. 52(2-3), 395 (2005).
* Frühling et al. (2009) U. Frühling, M. Wieland, M. Gensch, T. Gebert, B. Schütte, M. Krikunova, R. Kalms, F. Budzyn, O. Grimm, J. Rossbach, et al., _Single-shot terahertz-field-driven x-ray streak camera_ , Nature Photonics 3, 523 (2009).
* Zhang and Thumm (2011) C.-H. Zhang and U. Thumm, _Streaking and Wigner time delays in photoemission from atoms and surfaces_ , Phys. Rev. A 84, 033401 (2011).
* Ivanov and Smirnova (2011) M. Ivanov and O. Smirnova, _How accurate is the attosecond streak camera?_ , Phys. Rev. Lett. 107, 213605 (2011).
* Zhao et al. (2022) X. Zhao, S. Li, T. Driver, V.-H. Hoang, A.-T. Le, J. P. Cryan, A. Marinelli, and C. D. Lin, _Characterization of single-shot attosecond pulses with angular streaking photoelectron spectra_ , Phys. Rev. A 105, 013111 (2022).
* Dahlström et al. (2012) J. M. Dahlström, A. L. Huillier, and A. Maquet, _Introduction to attosecond delays in photoionization_ , J. Phys. B 45(18), 183001 (2012).
* Dahlström et al (2013) J. M. Dahlström et al, _Theory of attosecond delays in laser-assisted photoionization_ , Chem. Phys. 414, 53 (2013).
* Maquet et al. (2014) A. Maquet, J. Caillat, and R. Taïeb, _Attosecond delays in photoionization: time and quantum mechanics_ , J. Phys. B 47(20), 204004 (2014).
* Kheifets et al. (2022) A. S. Kheifets, R. Wielian, I. A. Ivanov, A. L. Wang, A. Marinelli, and J. P. Cryan, _Phase retrieval from angular streaking of XUV atomic ionization_ (2022), URL https://arxiv.org/abs/2202.06147.
* Serov and Kheifets (2017) V. V. Serov and A. S. Kheifets, _Time delay in XUV/IR photoionization of H 2O_, J. Chem. Phys. 147(20), 204303 (2017).
* Ning et al. (2014) Q.-C. Ning, L.-Y. Peng, S.-N. Song, W.-C. Jiang, S. Nagele, R. Pazourek, J. Burgdörfer, and Q. Gong, _Attosecond streaking of Cohen-Fano interferences in the photoionization of H ${}_{2}^{+}$_, Phys. Rev. A 90, 013423 (2014).
* Serov and Kheifets (2016) V. V. Serov and A. S. Kheifets, _Angular anisotropy of time delay in XUV+IR photoionization of H ${}_{2}^{+}$_, Phys. Rev. A 93, 063417 (2016).
* Kitzler et al. (2002) M. Kitzler, N. Milosevic, A. Scrinzi, F. Krausz, and T. Brabec, _Quantum theory of attosecond XUV pulse measurement by laser dressed photoionization_ , Phys. Rev. Lett. 88, 173904 (2002).
* Schultze et al. (2010) M. Schultze, M. Fiess, N. Karpowicz, J. Gagnon, M. Korbman, M. Hofstetter, S. Neppl, A. L. Cavalieri, Y. Komninos, T. Mercouris, et al., _Delay in Photoemission_ , Science 328(5986), 1658 (2010).
* Serov (2011) V. V. Serov, _Calculation of intermediate-energy electron-impact ionization of molecular hydrogen and nitrogen using the paraxial approximation_ , Phys. Rev. A 84, 062701 (2011).
* Cohen and Fano (1966) H. D. Cohen and U. Fano, _Interference in the photo-ionization of molecules_ , Phys. Rev. 150, 30 (1966).
* Kaplan and Markin (1969) I. G. Kaplan and A. P. Markin, _Interference phenomena in photoionization of molecules_ , Sov. Phys. Dokl. 14, 36 (1969).
* Walter and Briggs (1999) M. Walter and J. Briggs, _Photo-double ionization of molecular hydrogen_ , J. Phys. B 32(11), 2487 (1999).
* Fernández et al. (2007) J. Fernández, O. Fojón, A. Palacios, and F. Martín, _Interferences from fast electron emission in molecular photoionization_ , Phys. Rev. Lett. 98, 043005 (2007).
* Fernández et al. (2009) J. Fernández, F. L. Yip, T. N. Rescigno, C. W. McCurdy, and F. Martín, _Two-center effects in one-photon single ionization of H ${}_{2}^{+}$, H2, and Li${}_{2}^{+}$ with circularly polarized light_, Phys. Rev. A 79, 043409 (2009).
* Serov et al. (2012) V. V. Serov, I. A. Ivanov, and A. S. Kheifets, _Single-photon double ionization of H 2 away from equilibrium: A showcase of two-center electron interference_, Phys. Rev. A 86, 025401 (2012).
* Serov et al. (2015) V. V. Serov, V. L. Derbov, and T. A. Sergeeva, _Interpretation of the Time Delay in the Ionization of Coulomb Systems by Attosecond Laser Pulses_ , _Advanced Lasers: Laser Physics and Technology for Applied and Fundamental Science_ (Springer Netherlands, Dordrecht, 2015), pp. 213–230, ISBN 978-94-017-9481-7.
|
# On Intersection Graph of Dihedral Group
Sanhan M.S. Khasraw
Department of Mathematics, College of Basic Education,
Salahaddin University-Erbil, Erbil, Kurdistan Region, Iraq
<EMAIL_ADDRESS>
Abstract
Let $G$ be a finite group. The intersection graph of $G$ is a graph whose
vertex set is the set of all proper non-trivial subgroups of $G$ and two
distinct vertices $H$ and $K$ are adjacent if and only if $H\cap
K\neq\\{e\\}$, where $e$ is the identity of the group $G$. In this paper, we
investigate some properties of the intersection graph of $D_{2n}$ for $n=p^{2}$, where $p$ is prime, and compute several of its topological indices: the Wiener, hyper-Wiener, first and second Zagreb, Schultz, Gutman, and eccentric connectivity indices. We also find the metric dimension and the resolving polynomial of
the intersection graph of $D_{2p^{2}}$.
Keywords: Intersection graph of subgroups, Wiener index, Zagreb indices,
Schultz index, resolving polynomial of a graph.
## 1 Introduction
The notion of the intersection graph of a finite group was introduced by
Csákány and Pollák in 1969 [1]. For a finite group $G$, one associates a graph
$\Gamma(G)$ with it in such a way that the set of vertices of $\Gamma(G)$ is
the set of all proper non-trivial subgroups of $G$, and two vertices are joined
if their intersection is non-trivial. For further studies of intersection
graphs of subgroups, we refer the reader to [9, 2, 3, 6, 7].
Suppose that $\Gamma$ is a simple graph, which is undirected and contains no
multiple edges or loops. We denote the set of vertices of $\Gamma$ by
$V(\Gamma)$ and the set of edges of $\Gamma$ by $E(\Gamma)$. We write $uv\in
E(\Gamma)$ if $u$ and $v$ form an edge in $\Gamma$. The size of the vertex-set
of $\Gamma$ is denoted by $|V(\Gamma)|$ and the number of edges of $\Gamma$ is
denoted by $|E(\Gamma)|$. The degree of a vertex $v$ in $\Gamma$, denoted by
$deg(v)$, is defined as the number of edges incident to $v$. The distance
between any pair of vertices $u$ and $v$ in $\Gamma$, denoted by $d(u,v)$, is
the length of a shortest $u$-$v$ path in $\Gamma$. For a vertex $v$ in $\Gamma$, the
eccentricity of $v$, denoted by $ecc(v)$, is the largest distance between $v$
and any other vertex in $\Gamma$. The diameter of $\Gamma$, denoted as
$diam(\Gamma)$, is defined by $diam(\Gamma)=max\\{ecc(v):v\in V(\Gamma)\\}$. A
graph $\Gamma$ is called complete if every pair of vertices in $\Gamma$ are
adjacent. If $S\subseteq V(\Gamma)$ and no two elements of $S$ are adjacent,
then $S$ is called an independent set. The cardinality of a largest
independent set is called the independence number of the graph $\Gamma$. A graph
$\Gamma$ is called bipartite if the set $V(\Gamma)$ can be partitioned into
two disjoint independent sets such that each edge in $\Gamma$ has its ends in
different independent sets. A graph $\Gamma$ is called split if $V(\Gamma)$
can be partitioned into two different sets $U$ and $K$ such that $U$ is an
independent set and the subgraph induced by $K$ is a complete graph.
Let $W=\\{v_{1},v_{2},\cdots,v_{k}\\}\subseteq V(\Gamma)$ and let $v$ be any
vertex of $\Gamma$. The representation of $v$ with respect to $W$ is the
k-vector $r(v|W)=(d(v,v_{1}),d(v,v_{2}),$ $\cdots,d(v,v_{k}))$. If distinct
vertices have distinct representations with respect to $W$, then $W$ is called
a resolving set for $\Gamma$. A basis of $\Gamma$ is a minimum resolving set
for $\Gamma$ and the cardinality of a basis of $\Gamma$ is called the metric
dimension of $\Gamma$ and denoted by $\beta(\Gamma)$ [8]. Suppose $r_{i}$ is
the number of resolving sets for $\Gamma$ of cardinality $i$. Then the
resolving polynomial of a graph $\Gamma$ of order $n$, denoted by
$\beta(\Gamma,x)$, is defined as
$\beta(\Gamma,x)=\sum_{i=\beta(\Gamma)}^{n}r_{i}x^{i}$. The sequence
$(r_{\beta(\Gamma)},r_{\beta(\Gamma)+1},\cdots,r_{n})$ formed from the
coefficients of $\beta(\Gamma,x)$ is called the resolving sequence.
For a graph $\Gamma$, the Wiener index is defined by
$W(\Gamma)=\sum_{\\{u,v\\}\subseteq V(\Gamma)}d(u,v)$ [5]. The hyper-Wiener
index of $\Gamma$ is defined by
$WW(\Gamma)=\frac{1}{2}W(\Gamma)+\frac{1}{2}\sum_{\\{u,v\\}\subseteq
V(\Gamma)}(d(u,v))^{2}$ [10]. The Zagreb indices are defined by
$M_{1}(\Gamma)=\sum_{v\in V(\Gamma)}(deg(v))^{2}$ and
$M_{2}(\Gamma)=\sum_{uv\in E(\Gamma)}deg(u)deg(v)$ [13]. The Schultz index of
$\Gamma$, denoted by $MTI(\Gamma)$ is defined in [14] by
$MTI(\Gamma)=\sum_{\\{u,v\\}\subseteq V(\Gamma)}d(u,v)[deg(u)+deg(v)]$. In
[15, 11] the Gutman index has been defined by
$Gut(\Gamma)=\sum_{\\{u,v\\}\subseteq V(\Gamma)}d(u,v)[deg(u)\times deg(v)]$.
Sharma, Goswami and Madan defined the eccentric connectivity index of
$\Gamma$, denoted by $\xi^{c}(\Gamma)$, in [12] by $\xi^{c}(\Gamma)=\sum_{v\in
V(\Gamma)}deg(v)ecc(v)$.
For an integer $n\geq 3$, the dihedral group $D_{2n}$ of order $2n$ is defined
by
$D_{2n}=\langle r,s:r^{n}=s^{2}=1,srs=r^{-1}\rangle.$
In [6], Rajkumar and Devi studied the intersection graph of subgroups of some
non-abelian groups, especially the dihedral group $D_{2n}$, quaternion group
$Q_{n}$ and quasi-dihedral group $QD_{2^{\alpha}}$. They were only able to
obtain the clique number and degree of vertices. It seems difficult to study
most properties of the intersection graph of subgroups of these groups. In
this paper, the focus will be on the intersection graph of subgroups of the
dihedral group $D_{2n}$ for the case when $n=p^{2}$, $p$ is prime. It is clear
that when $n=p$, then the resulting intersection graph of subgroups is a null
graph, which is not of our interest. For $n=p^{2}$, the intersection graph
$\Gamma({D_{2p^{2}}})$ of the group $D_{2p^{2}}$ has $p^{2}+p+2$ vertices. We
leave the other possibilities for $n$ open for future work. So, throughout
this paper, the considered dihedral
group is of order $2p^{2}$, and by intersection graph we mean intersection
graph of subgroups.
This paper is organized as follows. In Section 2, some basic properties of the
intersection graph of $D_{2p^{2}}$ are presented. We see that the intersection
graph $\Gamma({D_{2p^{2}}})$ is split. In Section 3, we find some topological
indices of the intersection graph $\Gamma({D_{2p^{2}}})$ of $D_{2p^{2}}$ such
as the Wiener, hyper-Wiener and Zagreb indices. In Section 4, we find the
metric dimension and the resolving polynomial of the intersection graph
$\Gamma({D_{2p^{2}}})$.
## 2 Some properties of the intersection graph of $D_{2n}$
In [6], all proper non-trivial subgroups of the group $D_{2n}$ have been
classified, as shown in the following lemma.
###### Lemma 2.1.
The proper non-trivial subgroups of $D_{2n}$ are:
1. 1.
cyclic groups $H^{r}=\langle r^{\frac{n}{k}}\rangle$ of order $k$, where $k$
is a divisor of $n$ and $k\neq 1$,
2. 2.
cyclic groups $H_{s}=\langle sr^{i}\rangle$ of order 2, where
$i=1,2,\cdots,n$, and
3. 3.
dihedral groups $H_{s}^{r}=\langle r^{\frac{n}{k}},sr^{i}\rangle$ of order
$2k$, where $k$ is a divisor of $n$, $k\neq 1,n$ and
$i=1,2,\cdots,\frac{n}{k}$.
The total number of these proper subgroups is $\tau(n)+\sigma(n)-2$, where
$\tau(n)$ is the number of positive divisors of $n$ and $\sigma(n)$ is the sum
of positive divisors of $n$. Recall that we focus only on the case $n=p^{2}$,
where $p$ is prime; for this case, the intersection graph
$\Gamma({D_{2p^{2}}})$ of the group $D_{2p^{2}}$ has $p^{2}+p+2$ vertices. The
vertex set of $\Gamma({D_{2p^{2}}})$ is
$V(\Gamma({D_{2p^{2}}}))=\cup_{i=1}^{p}H_{i}\cup\\{H_{p}^{i}\\}\cup H_{1,p}$,
where
1. 1.
$H_{i}=\\{\langle sr^{i+lp}\rangle;1\leq l\leq p\\}$,
2. 2.
$H_{p}^{i}=\langle r^{p},sr^{i}\rangle$, and
3. 3.
$H_{1,p}=\\{\langle r\rangle,\langle r^{p}\rangle\\}$
The following result, given in [6], computes the degree of any vertex in
$\Gamma({D_{2n}})$. Since we only consider the case $n=p^{2}$, we restate it
as follows:
###### Theorem 2.2.
In the graph $\Gamma({D_{2p^{2}}})$,
$deg(v)=\left\\{\begin{tabular}[]{ll}$1$,&\mbox{ if }$v\in H_{i}$\\\
$2p+1$&\mbox{ if }$v=H_{p}^{i}$\\\ $p+1$,&\mbox{ if }$v\in H_{1,p}$\\\
\end{tabular}\right.$
where $i=1,2,...,p$.
The following theorem gives the exact number of edges in
$\Gamma({D_{2p^{2}}})$, which will be used in Section 3 to compute the second
Zagreb index.
###### Theorem 2.3.
In the graph $\Gamma({D_{2p^{2}}})$,
$|E(\Gamma(D_{2p^{2}}))|=\frac{1}{2}(3p^{2}+3p+2)$.
###### Proof.
It follows from Theorem 2.2 that there are $p^{2}$ vertices of degree 1, $p$
vertices of degree $2p+1$ and 2 vertices of degree $p+1$. Thus,
$|E(\Gamma(D_{2p^{2}}))|=\frac{1}{2}\sum_{v\in
V(\Gamma({D_{2p^{2}}}))}deg(v)=\frac{1}{2}(p^{2}\cdot
1+p\cdot(2p+1)+2\cdot(p+1))=\frac{1}{2}(3p^{2}+3p+2)$. ∎
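The counts above are easy to cross-check by computer. The following Python sketch (our own illustration, not part of the original argument) builds $\Gamma({D_{2p^{2}}})$ directly from the subgroup list of Lemma 2.1 and verifies Theorems 2.2 and 2.3 for the first few primes; the element encoding `('r', k)`/`('s', k)` is merely a convenience of the sketch.

```python
from itertools import combinations

def subgroups_D2p2(p):
    """Proper non-trivial subgroups of D_{2n}, n = p^2 (Lemma 2.1), each as a
    frozenset of group elements ('r', k) or ('s', k) with exponents mod n."""
    n = p * p
    cyc = lambda step: frozenset(('r', (step * j) % n) for j in range(n // step))
    subs = [cyc(p), cyc(1)]                                        # <r^p> and <r>
    subs += [frozenset({('r', 0), ('s', i)}) for i in range(n)]    # <s r^i>
    subs += [cyc(p) | frozenset(('s', (i + p * j) % n) for j in range(p))
             for i in range(p)]                                    # H_p^i
    return subs

def check(p):
    V = subgroups_D2p2(p)
    # two subgroups are adjacent iff they share more than the identity ('r', 0)
    E = [(i, j) for i, j in combinations(range(len(V)), 2)
         if len(V[i] & V[j]) > 1]
    deg = [0] * len(V)
    for i, j in E:
        deg[i] += 1
        deg[j] += 1
    assert len(V) == p * p + p + 2
    assert len(E) == (3 * p * p + 3 * p + 2) // 2                  # Theorem 2.3
    assert sorted(deg) == sorted([1] * (p * p) + [2 * p + 1] * p + [p + 1] * 2)
    print(f"p = {p}: |V| = {len(V)}, |E| = {len(E)}  (Theorems 2.2/2.3 hold)")

for p in (2, 3, 5):
    check(p)
```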
###### Theorem 2.4.
Let $\Gamma=\Gamma({D_{2p^{2}}})$ be an intersection graph on $D_{2p^{2}}$.
Then $diam(\Gamma)=3$. In particular, $\Gamma$ is connected.
###### Proof.
Suppose $u$ and $v$ are two distinct vertices of $\Gamma({D_{2p^{2}}})$. If
$u$ and $v$ are adjacent, then $d(u,v)=1$. Otherwise, let $u\cap v=\\{e\\}$.
Then there are three possibilities: both $u$ and $v$ are in $H_{i}$, one of
them is in $H_{i}$ and the other is in $H_{1,p}$, or one of them is in $H_{i}$
and the other is in $H_{j}$ for $i\neq j$. For the first case, suppose $u,v\in
H_{i}$ for some $i$. There exists $z=H_{p}^{i}$ such that $uz,vz\in E(\Gamma)$,
and then $d(u,v)=2$. If $u\in H_{i}$ and $v\in H_{1,p}$, then there exists
$z^{\prime}=H_{p}^{i}$ such that $uz^{\prime},vz^{\prime}\in E(\Gamma)$. Again
$z^{\prime}$ is adjacent to both $u$ and $v$, and then $d(u,v)=2$. Finally, if
$u\in H_{i}$ and $v\in H_{j}$ for $i\neq j$, then there exist $w=H_{p}^{i}$
and $w^{\prime}=H_{p}^{j}$ such that $uw,vw^{\prime}\in E(\Gamma)$. Since
$ww^{\prime}\in E(\Gamma)$, a shortest path from $u$ to $v$ has length
3, and so $d(u,v)=3$. ∎
From Theorem 2.4, one can see that the maximum distance between any pair of
vertices in $\Gamma(D_{2p^{2}})$ is 3. In order to explore the exact distance
between any pair of vertices in $\Gamma(D_{2p^{2}})$, we state the following
corollary which can be used in the next section to find some topological
indices of $\Gamma(D_{2p^{2}})$.
###### Corollary 2.5.
In the graph $\Gamma({D_{2p^{2}}})$,
$d(u,v)=\left\\{\begin{tabular}[]{ll}$1$,&\mbox{ if }$u,v\in
H_{1,p}\cup\\{H_{p}^{i}\\}$ \mbox{ or } $u\in H_{i},v=H_{p}^{i}$\\\
$2$,&\mbox{ if }$u,v\in H_{i}$ \mbox{ or } $u\in H_{j},v\in
H_{1,p}\cup\\{H_{p}^{i}\\},i\neq j$\\\ $3$,&\mbox{ if }$u\in H_{i},v\in
H_{j},i\neq j$\\\ \end{tabular}\right.$
where $i,j=1,2,...,p$.
###### Theorem 2.6.
Let $\Gamma=\Gamma({D_{2p^{2}}})$ be an intersection graph on $D_{2p^{2}}$.
Then for each $i$, $H_{i}$ forms an independent set.
###### Proof.
From Corollary 2.5, $d(u,v)=2$ for every distinct pair of vertices $u,v\in
H_{i}$ and so $uv\notin E(\Gamma)$. Therefore, $H_{i}$ is an independent set
for each $i$. ∎
###### Corollary 2.7.
The independence number of the graph $\Gamma({D_{2p^{2}}})$ is $p^{2}+1$.
###### Proof.
From Theorem 2.6 and Corollary 2.5, the union $\cup_{i=1}^{p}H_{i}$ of the $p$
independent sets of size $p$ is itself independent, since vertices lying in
different $H_{i}$'s are at distance 3. Also, from Corollary 2.5, none of the
vertices of $H_{1,p}$ is adjacent to any vertex in $H_{i}$ for each $i$;
however, the two vertices of $H_{1,p}$ are adjacent to each other, so only one
of them can be added. Hence the size of a largest independent set is
$p^{2}+1$. ∎
###### Theorem 2.8.
Let $H\subseteq V(\Gamma({D_{2p^{2}}}))$. Then the intersection graph
$\Gamma(H)$ is complete if and only if $H=\cup_{i=1}^{p}\\{H_{p}^{i}\\}\cup
H_{1,p}$.
###### Proof.
Suppose $H=\cup_{i=1}^{p}\\{H_{p}^{i}\\}\cup H_{1,p}$. By Corollary 2.5,
$d(u,v)=1$ for every distinct pair of vertices $u,v\in H$. Then the graph
$\Gamma(H)$ is complete. The converse follows directly from Corollary 2.5. ∎
The complete graph in the previous theorem is the largest complete subgraph of
$\Gamma({D_{2n}})$. As a consequence, the clique number of $\Gamma({D_{2n}})$
is $p+2$, which coincides with Theorem 2.3 in [6].
###### Theorem 2.9.
Let $H\subseteq V(\Gamma({D_{2p^{2}}}))$. Then $\Gamma(H)=K_{1,p}$ if and only
if $H=H_{i}\cup\\{H_{p}^{i}\\}$ for some $i$.
###### Proof.
The proof follows from Theorems 2.6 and 2.8. ∎
As a consequence of the above theorem, we have the following corollary.
###### Corollary 2.10.
The graph $\Gamma({D_{2p^{2}}})$ is split.
###### Theorem 2.11.
In the graph $\Gamma({D_{2p^{2}}})$,
$ecc(v)=\left\\{\begin{tabular}[]{ll}$2$,&\mbox{ if }$v\in
H_{1,p}\cup\\{H_{p}^{i}\\}$\\\ $3$&\mbox{ if }$v\in H_{i}$\\\
\end{tabular}\right.$
where $i=1,2,...,p$.
###### Proof.
It follows from Corollary 2.5 that no vertex of $H_{1,p}\cup\\{H_{p}^{i}\\}$
is adjacent to any vertex of $H_{j}$, where $i,j=1,2,...,p$ and $i\neq j$.
Then the maximum distance between any vertex of $H_{1,p}\cup\\{H_{p}^{i}\\}$
and any other vertex in $H_{j}$, $i\neq j$, is 2. Thus, $ecc(v)=2$ for each
$v\in H_{1,p}\cup\\{H_{p}^{i}\\}$. Again, from Corollary 2.5, the maximum
distance between any vertex of $H_{i}$ and any other vertex of $H_{j}$, $i\neq
j$, is 3, so $ecc(v)=3$ for each $v\in H_{i}$. ∎
## 3 Some Topological Indices of intersection graph on $D_{2p^{2}}$
In this section, some topological indices, such as the Wiener index, Hyper-
Wiener index, Zagreb indices, the Schultz index, the Gutman index and the
eccentric connectivity index, of the intersection graph for the dihedral group
$D_{2n}$, where $n=p^{2}$, are computed.
###### Theorem 3.1.
Let $\Gamma=\Gamma({D_{2n}})$ be an intersection graph on $D_{2n}$. Then
$W(\Gamma)=\frac{1}{2}(3p^{4}+3p^{3}+5p^{2}+3p+2).$
###### Proof.
Let $u,v\in V(\Gamma)$. It follows from Corollary 2.5 that the number of
possibilities of $d(u,v)=1$ is $p^{2}+{{p+2}\choose{2}}$, the number of
possibilities of $d(u,v)=2$ is $p\cdot{{p}\choose{2}}+p\cdot p\cdot(p+1)$ and
the number of possibilities of $d(u,v)=3$ is
${{p}\choose{2}}{{p}\choose{1}}{{p}\choose{1}}$. Thus,
$W(\Gamma({D_{2n}}))=(p^{2}+\frac{1}{2}(p+1)(p+2))\cdot
1+(\frac{1}{2}(3p^{3}+p^{2}))\cdot 2+(\frac{1}{2}(p^{4}-p^{3}))\cdot
3=\frac{1}{2}(3p^{4}+3p^{3}+5p^{2}+3p+2)$. ∎
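As a quick sanity check of this counting (ours, not part of the original proof), take $p=2$, so that $\Gamma({D_{8}})$ has $8$ vertices: the three counts are $p^{2}+{{p+2}\choose{2}}=10$, $\frac{1}{2}(3p^{3}+p^{2})=14$ and $\frac{1}{2}(p^{4}-p^{3})=4$, which together exhaust all ${{8}\choose{2}}=28$ pairs, and

$W(\Gamma({D_{8}}))=10\cdot 1+14\cdot 2+4\cdot 3=50=\frac{1}{2}(3\cdot 2^{4}+3\cdot 2^{3}+5\cdot 2^{2}+3\cdot 2+2),$

in agreement with the theorem.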
###### Theorem 3.2.
Let $\Gamma({D_{2n}})$ be an intersection graph on $D_{2n}$. Then
$WW(\Gamma({D_{2n}}))=\frac{1}{2}(6p^{4}+3p^{3}+6p^{2}+3p+2).$
###### Proof.
From Theorem 3.1 and Corollary 2.5, we can see that
$WW(\Gamma({D_{2n}}))=\frac{1}{2}\bigg{(}\frac{1}{2}(3p^{4}+3p^{3}+5p^{2}+3p+2)\bigg{)}+\frac{1}{2}\bigg{(}\bigg{(}p^{2}+\frac{1}{2}(p+1)(p+2)\bigg{)}\cdot
1^{2}+\bigg{(}\frac{1}{2}(3p^{3}+p^{2})\bigg{)}\cdot
2^{2}+\bigg{(}\frac{1}{2}(p^{4}-p^{3})\bigg{)}\cdot
3^{2}\bigg{)}=\frac{1}{2}(6p^{4}+3p^{3}+6p^{2}+3p+2)$. ∎
In the next two theorems, the first and second Zagreb indices for the
intersection graph $\Gamma({D_{2n}})$ are presented.
###### Theorem 3.3.
Let $\Gamma({D_{2n}})$ be an intersection graph on $D_{2n}$. Then
$M_{1}(\Gamma({D_{2n}}))=4p^{3}+7p^{2}+5p+2.$
###### Proof.
The proof is similar to the proof of Theorem 2.3. It follows from Theorem 2.2
that $M_{1}(\Gamma({D_{2n}}))=p^{2}\cdot
1^{2}+p\cdot(2p+1)^{2}+2\cdot(p+1)^{2}=4p^{3}+7p^{2}+5p+2$. ∎
###### Theorem 3.4.
Let $\Gamma({D_{2n}})$ be an intersection graph on $D_{2n}$. Then
$M_{2}(\Gamma({D_{2n}}))=2p^{4}+6p^{3}+\frac{13}{2}p^{2}+\frac{7}{2}p+1.$
###### Proof.
By Theorem 2.3, $\Gamma$ has $\frac{1}{2}(3p^{2}+3p+2)$ edges in which $p^{2}$
edges with one end-vertex of degree 1 and the other end-vertex of degree
$2p+1$, $\frac{p(p-1)}{2}$ edges where end-vertices have degree $2p+1$, $2p$
edges with one end-vertex of degree $2p+1$ and the other end-vertex of degree
$p+1$ and one edge where end-vertices have degree $p+1$. Thus,
$M_{2}(\Gamma({D_{2n}}))=p^{2}\cdot(1)(2p+1)+\frac{p(p-1)}{2}\cdot(2p+1)^{2}+2p\cdot(2p+1)(p+1)+1\cdot(p+1)^{2}=2p^{4}+6p^{3}+\frac{13}{2}p^{2}+\frac{7}{2}p+1$.
∎
###### Theorem 3.5.
Let $\Gamma({D_{2n}})$ be an intersection graph on $D_{2n}$. Then
$MTI(\Gamma({D_{2n}}))=7p^{4}+7p^{3}+9p^{2}+5p+2.$
###### Proof.
By Theorem 2.2 and Corollary 2.5, for $i,j=1,2,\cdots,p$,
$\displaystyle MTI(\Gamma({D_{2n}}))$ $\displaystyle=\bigg{(}\sum_{u\in
H_{i},v=H_{p}^{i}}d(u,v)[deg(u)+deg(v)]$
$\displaystyle+\sum_{u,v\in\\{H_{p}^{i}\\}}d(u,v)[deg(u)+deg(v)]$
$\displaystyle+\sum_{u,v\in H_{1,p}}d(u,v)[deg(u)+deg(v)]$
$\displaystyle+\sum_{u\in
H_{1,p},v\in\\{H_{p}^{i}\\}}d(u,v)[deg(u)+deg(v)]\bigg{)}$
$\displaystyle+\bigg{(}\sum_{u,v\in H_{i}}d(u,v)[deg(u)+deg(v)]$
$\displaystyle+\sum_{u\in H_{j},v=H_{p}^{i},i\neq j}d(u,v)[deg(u)+deg(v)]$
$\displaystyle+\sum_{u\in H_{i},v\in H_{1,p}}d(u,v)[deg(u)+deg(v)]\bigg{)}$
$\displaystyle+\bigg{(}\sum_{u\in H_{i},v\in H_{j},i\neq
j}d(u,v)[deg(u)+deg(v)]\bigg{)}$ $\displaystyle=\bigg{(}p^{2}\cdot
1\cdot[1+(2p+1)]+{{p}\choose{2}}\cdot 1\cdot[(2p+1)+(2p+1)]$
$\displaystyle+1\cdot 1\cdot[(p+1)+(p+1)]$
$\displaystyle+{{2}\choose{1}}\cdot{{p}\choose{1}}\cdot
1\cdot[(p+1)+(2p+1)]\bigg{)}+\bigg{(}p\cdot{{p}\choose{2}}\cdot 2\cdot[1+1]$
$\displaystyle+p\cdot p\cdot(p-1)\cdot 2\cdot[1+(2p+1)]$ $\displaystyle+p\cdot
p\cdot 2\cdot
2\cdot[1+(p+1)]\bigg{)}+\bigg{(}{{p}\choose{1}}\cdot{{p}\choose{1}}\cdot{{p}\choose{2}}\cdot
3\cdot[1+1]\bigg{)}$ $\displaystyle=7p^{4}+7p^{3}+9p^{2}+5p+2.$
∎
###### Theorem 3.6.
Let $\Gamma({D_{2n}})$ be an intersection graph on $D_{2n}$. Then
$Gut(\Gamma({D_{2n}}))=\frac{1}{2}(15p^{4}+15p^{3}+15p^{2}+7p+2).$
###### Proof.
Again by Theorem 2.2 and Corollary 2.5, for $i,j=1,2,\cdots,p$,
$\displaystyle Gut(\Gamma({D_{2n}}))$ $\displaystyle=\bigg{(}\sum_{u\in
H_{i},v=H_{p}^{i}}d(u,v)[deg(u)\times deg(v)]$
$\displaystyle+\sum_{u,v\in\\{H_{p}^{i}\\}}d(u,v)[deg(u)\times deg(v)]$
$\displaystyle+\sum_{u,v\in H_{1,p}}d(u,v)[deg(u)\times deg(v)]$
$\displaystyle+\sum_{u\in H_{1,p},v\in\\{H_{p}^{i}\\}}d(u,v)[deg(u)\times
deg(v)]\bigg{)}$ $\displaystyle+\bigg{(}\sum_{u,v\in H_{i}}d(u,v)[deg(u)\times
deg(v)]$ $\displaystyle+\sum_{u\in H_{j},v=H_{p}^{i},i\neq
j}d(u,v)[deg(u)\times deg(v)]$ $\displaystyle+\sum_{u\in H_{i},v\in
H_{1,p}}d(u,v)[deg(u)\times deg(v)]\bigg{)}$ $\displaystyle+\bigg{(}\sum_{u\in
H_{i},v\in H_{j},i\neq j}d(u,v)[deg(u)\times deg(v)]\bigg{)}$
$\displaystyle=\bigg{(}p^{2}\cdot 1\cdot[1\times(2p+1)]+{{p}\choose{2}}\cdot
1\cdot[(2p+1)\times(2p+1)]$ $\displaystyle+1\cdot 1\cdot[(p+1)\times(p+1)]$
$\displaystyle+{{2}\choose{1}}\cdot{{p}\choose{1}}\cdot
1\cdot[(p+1)\times(2p+1)]\bigg{)}$
$\displaystyle+\bigg{(}p\cdot{{p}\choose{2}}\cdot 2\cdot[1\times 1]+p\cdot
p\cdot(p-1)\cdot 2\cdot[1\times(2p+1)]$ $\displaystyle+p\cdot p\cdot 2\cdot
2\cdot[1\times(p+1)]\bigg{)}+\bigg{(}{{p}\choose{1}}\cdot{{p}\choose{1}}\cdot{{p}\choose{2}}\cdot
3\cdot[1\times 1]\bigg{)}$
$\displaystyle=\frac{1}{2}(15p^{4}+15p^{3}+15p^{2}+7p+2).$
∎
###### Theorem 3.7.
Let $\Gamma({D_{2n}})$ be an intersection graph on $D_{2n}$. Then
$\xi^{c}(\Gamma({D_{2n}}))=7p^{2}+6p+4.$
###### Proof.
By Theorems 2.2 and 2.11, for $i=1,2,\cdots,p$, we see that
$\displaystyle\xi^{c}(\Gamma({D_{2n}}))=$ $\displaystyle\sum_{v\in
H_{i}}deg(v)ecc(v)+\sum_{v\in\\{H_{p}^{i}\\}}deg(v)ecc(v)+\sum_{v\in
H_{1,p}}deg(v)ecc(v)$ $\displaystyle=\sum_{v\in H_{i}}1\times
3+\sum_{v\in\\{H_{p}^{i}\\}}(2p+1)\times 2+\sum_{v\in H_{1,p}}(p+1)\times 2$
$\displaystyle=p^{2}\times 1\times 3+p\times(2p+1)\times 2+2\times(p+1)\times
2$ $\displaystyle=7p^{2}+6p+4.$
∎
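All seven closed forms of this section can be cross-checked by brute force. The following Python sketch (our own illustration) rebuilds $\Gamma({D_{2p^{2}}})$ as in the sketch of Section 2, computes every index directly from its definition, and compares the results with Theorems 3.1–3.7 for $p=2,3,5$:

```python
from itertools import combinations

def gamma_D2p2(p):
    """Adjacency lists of Gamma(D_{2p^2}), from the subgroups of Lemma 2.1."""
    n = p * p
    cyc = lambda st: frozenset(('r', (st * j) % n) for j in range(n // st))
    V = [cyc(p), cyc(1)]
    V += [frozenset({('r', 0), ('s', i)}) for i in range(n)]
    V += [cyc(p) | frozenset(('s', (i + p * j) % n) for j in range(p))
          for i in range(p)]
    return [[j for j in range(len(V)) if j != i and len(V[i] & V[j]) > 1]
            for i in range(len(V))]

def indices(p):
    adj = gamma_D2p2(p)
    N = len(adj)
    deg = [len(a) for a in adj]
    def bfs(s):                                    # distances from vertex s
        d = [-1] * N
        d[s], queue = 0, [s]
        for u in queue:
            for v in adj[u]:
                if d[v] < 0:
                    d[v] = d[u] + 1
                    queue.append(v)
        return d
    D = [bfs(s) for s in range(N)]
    P = list(combinations(range(N), 2))
    W = sum(D[u][v] for u, v in P)
    return {"W": W,
            "WW": W / 2 + sum(D[u][v] ** 2 for u, v in P) / 2,
            "M1": sum(d * d for d in deg),
            "M2": sum(deg[u] * deg[v] for u in range(N) for v in adj[u]) // 2,
            "MTI": sum(D[u][v] * (deg[u] + deg[v]) for u, v in P),
            "Gut": sum(D[u][v] * deg[u] * deg[v] for u, v in P),
            "xi": sum(deg[s] * max(D[s]) for s in range(N))}

for p in (2, 3, 5):
    want = {"W": (3*p**4 + 3*p**3 + 5*p**2 + 3*p + 2) / 2,      # Theorem 3.1
            "WW": (6*p**4 + 3*p**3 + 6*p**2 + 3*p + 2) / 2,     # Theorem 3.2
            "M1": 4*p**3 + 7*p**2 + 5*p + 2,                    # Theorem 3.3
            "M2": 2*p**4 + 6*p**3 + 13*p**2/2 + 7*p/2 + 1,      # Theorem 3.4
            "MTI": 7*p**4 + 7*p**3 + 9*p**2 + 5*p + 2,          # Theorem 3.5
            "Gut": (15*p**4 + 15*p**3 + 15*p**2 + 7*p + 2) / 2, # Theorem 3.6
            "xi": 7*p**2 + 6*p + 4}                             # Theorem 3.7
    got = indices(p)
    assert all(got[k] == want[k] for k in want), (p, got, want)
    print(f"p = {p}: all seven indices match the closed forms")
```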
## 4 Metric dimension and resolving polynomial of intersection graph on
$D_{2p^{2}}$
For a vertex $u$ of a graph $\Gamma$, the set $N(u)=\\{v\in V(\Gamma):uv\in
E(\Gamma)\\}$ is called the open neighborhood of $u$ and the set
$N[u]=N(u)\cup\\{u\\}$ is called the closed neighborhood of $u$. If $u$ and
$v$ are two distinct vertices of $\Gamma$, then $u$ and $v$ are said to be
adjacent twins if $N[u]=N[v]$ and non-adjacent twins if $N(u)=N(v)$. Two
distinct vertices are called twins if they are adjacent or non-adjacent twins.
A subset $U\subseteq V(\Gamma)$ is called a twin-set in $\Gamma$ if every pair
of distinct vertices in $U$ are twins.
###### Lemma 4.1.
Let $\Gamma$ be a connected graph of order $n$ and $U\subseteq V(\Gamma)$ be a
twin set in $\Gamma$ with $|U|=m$. Then every resolving set for $\Gamma$
contains at least $m-1$ vertices of $U$.
###### Corollary 4.2.
[4] Let $\Gamma$ be a connected graph, $U$ resolves $\Gamma$ and $u$ and $v$
are twins. Then $u\in U$ or $v\in U$. In addition, if $u\in U$ and $v\notin
U$, then $(U\setminus\\{u\\})\cup\\{v\\}$ also resolves $\Gamma$.
###### Theorem 4.3.
Let $\Gamma(D_{2p^{2}})$ be an intersection graph on $D_{2p^{2}}$. Then
$\beta(\Gamma(D_{2p^{2}}))=p^{2}-p+1.$
###### Proof.
Let $W=\cup_{i=1}^{p}(H_{i}-\\{\langle sr^{i}\rangle\\})\cup\\{\langle
r^{p}\rangle\\}$. One can see that $W$ is a resolving set for
$\Gamma(D_{2p^{2}})$ of cardinality $p(p-1)+1$. Then
$\beta(\Gamma({D_{2p^{2}}}))\leq p^{2}-p+1$.
On the other hand, the sets $H_{i}$, for $i=1,2,\cdots,p$, and $H_{1,p}$ are
twin sets of cardinality $p$ and $2$, respectively. Then by Lemma 4.1, we see
that $\beta(\Gamma({D_{2p^{2}}}))\geq p^{2}-p+1$. ∎
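For small $p$, the metric dimension, and in fact the whole resolving sequence used in Theorem 4.5 below, can be checked by exhaustive search. The following Python sketch (our own illustration) does this for $p=2$, where $\Gamma(D_{8})$ has only $8$ vertices, and confirms $\beta=3=p^{2}-p+1$:

```python
from itertools import combinations

def resolving_data_p2():
    """Brute force over all vertex subsets of Gamma(D_8) (p = 2, 8 vertices):
    counts the resolving sets of each cardinality."""
    p, n = 2, 4
    cyc = lambda st: frozenset(('r', (st * j) % n) for j in range(n // st))
    V = [cyc(p), cyc(1)]
    V += [frozenset({('r', 0), ('s', i)}) for i in range(n)]
    V += [cyc(p) | frozenset(('s', (i + p * j) % n) for j in range(p))
          for i in range(p)]
    N = len(V)
    adj = [[j for j in range(N) if j != i and len(V[i] & V[j]) > 1]
           for i in range(N)]
    def bfs(s):
        d = [-1] * N
        d[s], queue = 0, [s]
        for u in queue:
            for v in adj[u]:
                if d[v] < 0:
                    d[v] = d[u] + 1
                    queue.append(v)
        return d
    D = [bfs(s) for s in range(N)]
    resolves = lambda S: len({tuple(D[v][w] for w in S) for v in range(N)}) == N
    r = [sum(map(resolves, combinations(range(N), k))) for k in range(N + 1)]
    beta = next(k for k, rk in enumerate(r) if rk)
    print("beta =", beta)                    # expect p^2 - p + 1 = 3
    print("resolving sequence:", r[beta:])   # r_n = 1, r_{n-1} = n (Lemma 4.4)

resolving_data_p2()
```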
The following is a useful property for finding a resolving polynomial of a
graph of order $n$.
###### Lemma 4.4.
If $\Gamma$ is a connected graph of order $n$, then $r_{n}=1$ and $r_{n-1}=n$.
###### Theorem 4.5.
Let $\Gamma=\Gamma(D_{2p^{2}})$ be an intersection graph on $D_{2p^{2}}$. Then
$\beta(\Gamma,x)=x^{p^{2}-p+1}\bigg{(}{{2}\choose{1}}{{p}\choose{p-1}}^{p}+\sum_{q=1}^{p}r_{p^{2}-p+1+q}x^{q}+\sum_{k=p+1}^{2p-1}r_{p^{2}-p+1+k}x^{k}+(p^{2}+p+1)x^{2p}+x^{2p+1}\bigg{)},$
where
$r_{p^{2}-p+1+q}={{p}\choose{q}}{{p}\choose{p-1}}^{p-q}{{2}\choose{1}}{{p}\choose{0}}+{{p}\choose{q-i}}{{p}\choose{p-1}}^{p-(q-i)}{{2}\choose{1}}{{p}\choose{i}}+{{p}\choose{q-1}}{{p}\choose{p-1}}^{p-(q-1)}{{2}\choose{2}}{{p}\choose{0}}\\\
+{{p}\choose{q-1-i}}{{p}\choose{p-1}}^{p-(q-1-i)}{{2}\choose{2}}{{p}\choose{i}}$,
$r_{p^{2}-p+1+k}={{p}\choose{k_{1}}}{{p}\choose{p-1}}^{p-k_{1}}{{2}\choose{1}}{{p}\choose{k_{2}}}+{{p}\choose{k_{2}}}{{p}\choose{p-1}}^{p-k_{2}}{{2}\choose{1}}{{p}\choose{k_{1}}}+\\\
{{p}\choose{k_{1}-1}}{{p}\choose{p-1}}^{p-(k_{1}-1)}{{2}\choose{2}}{{p}\choose{k_{2}}}+{{p}\choose{k_{1}}}{{p}\choose{p-1}}^{p-k_{1}}{{2}\choose{2}}{{p}\choose{k_{2}-1}}+{{p}\choose{k_{2}-1}}{{p}\choose{p-1}}^{p-(k_{2}-1)}{{2}\choose{2}}{{p}\choose{k_{1}}}\\\
+{{p}\choose{k_{2}}}{{p}\choose{p-1}}^{p-k_{2}}{{2}\choose{2}}{{p}\choose{k_{1}-1}}$,
$i\leq q,k_{1}+k_{2}=k,k_{1}\neq k_{2},k_{1}-1\neq k_{2},k_{1}\neq k_{2}-1$
and $1\leq k_{j}\leq p$ for $j=1,2$.
###### Proof.
By Theorem 4.3, $\beta(\Gamma)=p^{2}-p+1$. It is required to find the
resolving sequence
$(r_{\beta(\Gamma)},r_{\beta(\Gamma)+1},\cdots,r_{\beta(\Gamma)+2p+1})$ of
length $2p+2$.
We first find $r_{\beta(\Gamma)}$. Since $H_{i}$ $(1\leq i\leq p)$ and
$H_{1,p}$ are twin sets, by Corollary 4.2 and the principle of
multiplication we see that there are
$\underbrace{{{p}\choose{p-1}}{{p}\choose{p-1}}\cdots{{p}\choose{p-1}}}_{p-times}{{2}\choose{1}}=2p^{p}$
possibilities of resolving sets of cardinality $\beta(\Gamma)$, that is,
$r_{\beta(\Gamma)}=2p^{p}$.
For $1\leq l\leq 2p-1$, we aim to find $r_{\beta(\Gamma)+l}$.
First, we try to find $r_{\beta(\Gamma)+q}$, where $1\leq q\leq p$. Let
$u_{1},u_{2},\cdots,u_{q}$ be $q$ distinct vertices of $\Gamma$ that do not
belong to any resolving set of cardinality $\beta(\Gamma)+q-1$. Then there are
four possibilities to consider:
$u_{1},u_{2},\cdots,u_{q}\in\cup_{j=1}^{p}H_{j}$;
$u_{1},u_{2},\cdots,u_{q}\in\cup_{j=1}^{p}H_{j}\cup\\{H_{p}^{j}\\}$;
$u_{1},u_{2},\cdots,u_{q}\in\cup_{j=1}^{p}H_{j}\cup H_{1,p}$ or
$u_{1},u_{2},\cdots,u_{q}\in\cup_{j=1}^{p}H_{j}\cup\\{H_{p}^{j}\\}\cup
H_{1,p}$.
Altogether, by the principles of addition and multiplication, there are
${{p}\choose{q}}{{p}\choose{p-1}}^{p-q}{{2}\choose{1}}{{p}\choose{0}}+{{p}\choose{q-i}}{{p}\choose{p-1}}^{p-(q-i)}{{2}\choose{1}}{{p}\choose{i}}+{{p}\choose{q-1}}{{p}\choose{p-1}}^{p-(q-1)}{{2}\choose{2}}{{p}\choose{0}}+\\\
{{p}\choose{q-1-i}}{{p}\choose{p-1}}^{p-(q-1-i)}{{2}\choose{2}}{{p}\choose{i}}$
possibilities of resolving sets of cardinality $\beta(\Gamma)+q$, where $1\leq
q\leq p$ and $i\leq q$.
Second, we find $r_{\beta(\Gamma)+k}$, where $p+1\leq k\leq 2p-1$. Take the
set of vertices $v_{1},v_{2},\cdots,v_{k}$ in $\Gamma$ that do not belong to
any resolving set of cardinality $\beta(\Gamma)+k-1$. Since $k>p$, we may
write $k=k_{1}+k_{2}$ such that $k_{1}\neq k_{2},k_{1}-1\neq k_{2}$ and
$k_{1}\neq k_{2}-1$, where $1\leq k_{j}\leq p$ and $j=1,2$. Then there are the
following possibilities:
$k_{1}$ vertices of the set $\\{v_{1},v_{2},\cdots,v_{k}\\}$ are in
$\cup_{i}^{p}H_{i}$ and $k_{2}$ vertices of the set
$\\{v_{1},v_{2},\cdots,v_{k}\\}$ are in $\cup_{i}^{p}\\{H_{p}^{i}\\}$,
$k_{2}$ vertices of the set $\\{v_{1},v_{2},\cdots,v_{k}\\}$ are in
$\cup_{i}^{p}H_{i}$ and $k_{1}$ vertices of the set
$\\{v_{1},v_{2},\cdots,v_{k}\\}$ are in $\cup_{i}^{p}\\{H_{p}^{i}\\}$,
$k_{1}$ vertices of the set $\\{v_{1},v_{2},\cdots,v_{k}\\}$ are in
$\cup_{i}^{p}H_{i}\cup H_{1,p}$ and $k_{2}$ vertices of the set
$\\{v_{1},v_{2},\cdots,v_{k}\\}$ are in $\cup_{i}^{p}\\{H_{p}^{i}\\}$,
$k_{1}$ vertices of the set $\\{v_{1},v_{2},\cdots,v_{k}\\}$ are in
$\cup_{i}^{p}H_{i}$ and $k_{2}$ vertices of the set
$\\{v_{1},v_{2},\cdots,v_{k}\\}$ are in $\cup_{i}^{p}\\{H_{p}^{i}\\}\cup
H_{1,p}$,
$k_{2}$ vertices of the set $\\{v_{1},v_{2},\cdots,v_{k}\\}$ are in
$\cup_{i}^{p}H_{i}\cup H_{1,p}$ and $k_{1}$ vertices of the set
$\\{v_{1},v_{2},\cdots,v_{k}\\}$ are in $\cup_{i}^{p}\\{H_{p}^{i}\\}$ or
$k_{2}$ vertices of the set $\\{v_{1},v_{2},\cdots,v_{k}\\}$ are in
$\cup_{i}^{p}H_{i}$ and $k_{1}$ vertices of the set
$\\{v_{1},v_{2},\cdots,v_{k}\\}$ are in $\cup_{i}^{p}\\{H_{p}^{i}\\}\cup
H_{1,p}$.
Again, by the principles of addition and multiplication, there are
${{p}\choose{k_{1}}}{{p}\choose{p-1}}^{p-k_{1}}{{2}\choose{1}}{{p}\choose{k_{2}}}+{{p}\choose{k_{2}}}{{p}\choose{p-1}}^{p-k_{2}}{{2}\choose{1}}{{p}\choose{k_{1}}}+{{p}\choose{k_{1}-1}}{{p}\choose{p-1}}^{p-(k_{1}-1)}{{2}\choose{2}}{{p}\choose{k_{2}}}+{{p}\choose{k_{1}}}{{p}\choose{p-1}}^{p-k_{1}}{{2}\choose{2}}{{p}\choose{k_{2}-1}}+{{p}\choose{k_{2}-1}}{{p}\choose{p-1}}^{p-(k_{2}-1)}{{2}\choose{2}}{{p}\choose{k_{1}}}+{{p}\choose{k_{2}}}{{p}\choose{p-1}}^{p-k_{2}}{{2}\choose{2}}{{p}\choose{k_{1}-1}}$
possible resolving sets of cardinality $\beta(\Gamma)+k$, where $p<k\leq
2p-1$.
By Lemma 4.4, $r_{\beta(\Gamma)+2p}=p^{2}+p+1$ and $r_{\beta(\Gamma)+2p+1}=1$.
∎
In the following remark, some additional possibilities of
$r_{\beta(\Gamma)+k}$, where $p<k\leq 2p-1$, are given.
###### Remark 4.6.
In Theorem 4.5, we have the following additional possibilities:
1. 1.
if $k_{1}=k_{2}$, then $r_{\beta(\Gamma)+k}=\\\
{{p}\choose{k_{1}}}{{p}\choose{p-1}}^{p-k_{1}}{{2}\choose{1}}{{p}\choose{k_{2}}}+{{p}\choose{k_{1}-1}}{{p}\choose{p-1}}^{p-(k_{1}-1)}{{2}\choose{2}}{{p}\choose{k_{2}}}+{{p}\choose{k_{1}}}{{p}\choose{p-1}}^{p-k_{1}}{{2}\choose{2}}{{p}\choose{k_{2}-1}}$,
2. 2.
if $k_{1}-1=k_{2}$, then $r_{\beta(\Gamma)+k}=\\\
{{p}\choose{k_{1}}}{{p}\choose{p-1}}^{p-k_{1}}{{2}\choose{1}}{{p}\choose{k_{2}}}+{{p}\choose{k_{2}}}{{p}\choose{p-1}}^{p-k_{2}}{{2}\choose{1}}{{p}\choose{k_{1}}}+{{p}\choose{k_{1}-1}}{{p}\choose{p-1}}^{p-(k_{1}-1)}{{2}\choose{2}}{{p}\choose{k_{2}}}+{{p}\choose{k_{1}}}{{p}\choose{p-1}}^{p-k_{1}}{{2}\choose{2}}{{p}\choose{k_{2}-1}}+{{p}\choose{k_{2}-1}}{{p}\choose{p-1}}^{p-(k_{2}-1)}{{2}\choose{2}}{{p}\choose{k_{1}}}$,
and
3. 3.
if $k_{1}=k_{2}-1$, then $r_{\beta(\Gamma)+k}=\\\
{{p}\choose{k_{1}}}{{p}\choose{p-1}}^{p-k_{1}}{{2}\choose{1}}{{p}\choose{k_{2}}}+{{p}\choose{k_{2}}}{{p}\choose{p-1}}^{p-k_{2}}{{2}\choose{1}}{{p}\choose{k_{1}}}+{{p}\choose{k_{1}-1}}{{p}\choose{p-1}}^{p-(k_{1}-1)}{{2}\choose{2}}{{p}\choose{k_{2}}}+{{p}\choose{k_{1}}}{{p}\choose{p-1}}^{p-k_{1}}{{2}\choose{2}}{{p}\choose{k_{2}-1}}+{{p}\choose{k_{2}}}{{p}\choose{p-1}}^{p-k_{2}}{{2}\choose{2}}{{p}\choose{k_{1}-1}}$.
## References
* [1] B. Csákány and G. Pollák, The graph of subgroups of a finite group (Russian), Czechoslovak Math. J., 19(1969), 241–247.
* [2] Ivy Chakrabarty, Shamik Ghosh, T.K. Mukherjee and M.K. Sen, Intersection graphs of ideals of rings, Discrete Mathematics, 309(17) (2009), 5381–5392.
* [3] Rulin Shen, Intersection graphs of subgroups of finite groups, Czechoslovak Mathematical Journal, 60(4) (2010), 945–950.
* [4] C. Hernando, M. Mora, I. M. Pelayo, C. Seara and D. R. Wood, Extremal graph theory for metric dimension and diameter, Electron. J. Combin., 17 (2010), Research Paper.
* [5] H. Wiener, Structural determination of the paraffin boiling points, J. Am. Chem. Soc., 69(1947), 17–20.
* [6] R. Rajkumar and P. Devi, Intersection graph of subgroups of some non-abelian groups, Malaya Journal of Matematik, 4(2)(2016), 238–242.
* [7] R. Rajkumar and P. Devi, Intersection graphs of cyclic subgroups of groups, Electronic Notes in Discrete Mathematics, 53 (2016), 15–24.
* [8] G. Chartrand, L. Eroh, M.A. Johnson and O.R. Oellermann, Resolvability in graphs and the metric dimension of a graph, Disc. Appl. Math., 105(2000), 99–113.
* [9] Bohdan Zelinka, Intersection graphs of finite abelian groups, Czechoslovak Mathematical Journal, 25(2)(1975), 171–174.
* [10] D.J. Klein, I. Lukovits and I. Gutman, On the definition of the hyper-Wiener index for cycle-containing structures, J. Chem. Inf. Comput. Sci., 35(1995), 50–52.
* [11] I. Gutman, W. Yan, Y.-N. Yeh and B.-Y. Yang, Generalized Wiener indices of zigzagging pentachains, J. Math. Chem., 42(2007), 103–117.
* [12] V. Sharma, R. Goswami and A.K. Madan, Eccentric connectivity index: A novel highly discriminating topological descriptor for structure-property and structure-activity studies, J. Chem. Inf. Comput. Sci., 37(1997), 273–282.
* [13] I. Gutman and N. Trinajstić, Graph theory and molecular orbitals, Total ${\pi}$-electron energy of alternant hydrocarbons, Chem. Phys. Lett., 17(1972), 535–538.
* [14] H.P. Schultz, Topological organic chemistry 1. Graph theory and topological indices of Alkanes, J. Chem. Inf. Comput. Sci., 29(1989), 227–228.
* [15] I. Gutman, Selected properties of the Schultz molecular topological index, J. Chem. Inf. Comput. Sci., 34(1994), 1087–1089.
# Higher-order topological Anderson insulators in quasicrystals
Tan Peng Department of Physics, Hubei University, Wuhan 430062, China Chun-
Bo Hua Department of Physics, Hubei University, Wuhan 430062, China Rui Chen
Shenzhen Institute for Quantum Science and Engineering and Department of
Physics, Southern University of Science and Technology (SUSTech), Shenzhen
518055, China Zheng-Rong Liu Department of Physics, Hubei University, Wuhan
430062, China Dong-Hui Xu Department of Physics, Hubei University, Wuhan
430062, China Bin Zhou<EMAIL_ADDRESS>Department of Physics, Hubei
University, Wuhan 430062, China
###### Abstract
The disorder effects on higher-order topological phases in periodic systems
have attracted much attention. However, in aperiodic systems, such as
quasicrystalline systems, the interplay between disorder and higher-order
topology is still unclear. In this paper, we investigate the effects of
disorder on two types of second-order topological insulators, including a
quasicrystalline quadrupole insulator and a modified quantum spin Hall
insulator, in a two-dimensional Ammann-Beenker tiling quasicrystalline lattice.
We demonstrate that the higher-order topological insulators are robust against
weak disorder in both models. More strikingly, the disorder-induced higher-order
topological insulators called higher-order topological Anderson insulators are
found at a certain region of disorder strength in both models. Our paper
extends the study of the interplay between disorder and higher-order topology
to quasicrystalline systems.
## I Introduction
A higher-order topological insulator (HOTI), a generalization of conventional
topological insulator (TI), has been a focus of intense research in condensed-
matter physics [1, 2, 3, 4, 5, 6, 7, 8]. Unlike the conventional TI, an $n$th-
order topological insulator that has $d$ dimensions will have gapless boundary
states in $d-n$ dimensions ($d\geq n$). For instance, a two-dimensional (2D)
second-order topological insulator (SOTI) has zero-dimensional (0D) corner
states localized at its boundary. Analogously, a three-dimensional (3D)
second- (third-) order topological insulator has one-dimensional (1D) (0D)
hinge (corner) states localized at its boundary. These novel bulk-boundary
correspondences, which are quite different from the conventional TIs, can be
described by the nested-Wilson-loop method [4, 5, 9] and the real-space
quadrupole moment [4, 7, 8, 6, 10, 5].
The HOTIs have been extensively studied in various systems [10, 11, 12, 13,
14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32,
33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,
52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65]. Up to now, the great
majority of previous works on HOTIs have dealt with crystalline systems.
However, aperiodic systems, especially quasicrystalline systems, which lack
translational symmetry and possess rotational symmetries forbidden in
crystals, such as the fivefold, eightfold, and twelvefold rotational
symmetries, have also been used to realize HOTIs [66, 67, 68, 69]. For
instance, Chen _et al_. proposed that two distinct types of SOTIs can be
realized in quasicrystalline lattices [67]. One is the quasicrystalline
quadrupole insulator which can be constructed in a modified Benalcazar-
Bernevig-Hughes model [4, 5], and this kind of SOTI is protected by chiral
symmetry. The other is the modified quantum spin Hall insulator which is
formed by a TI model with a mass term which gaps the counterpropagating edge
states and induces the appearance of topological corner states. They proved
that these types of the topological corner states are protected by combined
symmetries $C_{4}m_{z}$ and $C_{8}m_{z}$ with different boundary conditions.
Very recently, Lv _et al_. reported that the HOTI has been experimentally
implemented in a quasicrystalline lattice constructed by electrical circuits
[70].
Figure 1: (a) Schematic of the Ammann-Beenker tiling quasicrystal containing
$94$ cells. Each cell includes four sites, marked by orange. (b) An Ammann-
Beenker tiling quasicrystal containing $301$ vertices with the square boundary
condition. (c) An Ammann-Beenker tiling quasicrystal containing $297$ vertices
with the octagonal boundary condition. The first three nearest-neighbor
intercell bonds correspond to the short diagonal of the rhombus tile, the edge
of square and rhombus tiles, and the diagonal of the square tile,
respectively. The distance ratio of the three bonds is
$r_{0}:r_{1}:r_{2}=2\sin\frac{\pi}{8}:1:2\sin\frac{\pi}{4}$.
Another interesting topic is disorder-induced topological phase transition.
Generally, the topological phase is robust against weak disorder and
suppressed by strong disorder where the energy gap is closed and a topological
phase transition appears. Furthermore, a fascinating phenomenon is that adding
disorder of a certain strength to a topologically trivial phase can drive it
into a topological phase. Such a disorder-induced topological phase, the
so-called topological Anderson insulator (TAI), was first proposed by Li _et
al_. in 2009 [71]. Since then, TAIs have been extensively studied in various
systems [72, 73, 74, 75, 76, 77, 78, 79, 80, 81,
82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97] and realized
in several experiment platforms [98, 99, 100], such as the 1D-disordered
atomic wires [98] and the photonic lattices [99, 100]. TAIs have been studied
in crystalline systems as well as in quasicrystalline systems. For instance,
TAI has been proposed in the Penrose tiling pentagonal quasicrystal [101] and
the Ammann-Beenker tiling octagonal quasicrystal [102, 103]. In addition, the
disorder-induced HOTI, dubbed the higher-order topological Anderson insulator
(HOTAI), has been studied in various condensed-matter systems, including
topological quadrupole insulators [104, 105], topological superconductors
[106], and topological Weyl semimetals [107]. Even more strikingly, Zhang
_et al_. demonstrated that the HOTAI can appear in a modified Haldane model
and realized it experimentally in an electric circuit setup [108]. However, the
investigation of the disorder-induced HOTI in quasicrystalline systems remains
lacking. An interesting question is whether the HOTAI can be realized in
quasicrystalline systems.
In this paper, we investigate the disorder-induced topological phase
transition in an Ammann-Beenker tiling octagonal quasicrystal hosting two
types of SOTIs: a quasicrystalline quadrupole insulator (named model I) and
a modified quantum spin Hall insulator with gapless corner states (named model
II), as introduced in Ref. [67]. For model I, the lattice is cut into a square,
and each cell contains four sites as shown in Fig. 1(a). For model II, we
divide the discussion into two cases: a square boundary [shown in Fig. 1(b)]
and an octagonal boundary [shown in Fig. 1(c)]. By calculating the quadrupole
moment and the probability density of the in-gap eigenstates, we find that the
SOTI phases in the two models with the square boundary conditions are robust
against the weak disorder, and the HOTAI phase induced from an initial
topological trivial phase occurs at a certain area of disorder strength with
four localized gapless corner states characterized by a quantized quadrupole
moment ($q_{xy}=0.5$). More strikingly, a HOTAI phase with eight localized
gapless corner states is found in model II with an octagonal boundary, and
this HOTAI phase is a unique topological phase which cannot be realized in
crystalline systems.
The rest of the paper is organized as follows. In Sec. II, we introduce two
types of SOTIs with disorder in the 2D quasicrystalline lattice and give the
details of numerical methods. Then, we provide numerical results for studying
the topological phase transitions of the two models in Secs. III and IV,
respectively. Finally, we summarize our conclusions in Sec. V.
## II Models and Method
We start with a tight-binding model of a quadrupole insulator with disorder in
an Ammann-Beenker tiling quasicrystalline lattice which has a square boundary
condition. Each vertex of the quasicrystalline lattice hosts a cell of four sites as
shown in Fig. 1(a). In this section, we consider the first three nearest-
neighbor intercell hoppings and the nearest-neighbor intracell hopping. The
model Hamiltonian is given by [67]
$\displaystyle H_{1}$ $\displaystyle=$ $\displaystyle\lambda\sum_{m\neq
n}\frac{l(r_{mn})}{2}c_{m}^{\dagger}(\left|\cos\psi_{mn}\right|\Gamma_{4}-i\cos\psi_{mn}\Gamma_{3}$
(1)
$\displaystyle+\left|\sin\psi_{mn}\right|\Gamma_{2}-i\sin\psi_{mn}\Gamma_{1})c_{n}$
$\displaystyle+\sum_{m}c_{m}^{\dagger}[(\gamma_{x}+U_{m})\Gamma_{4}+\gamma_{y}\Gamma_{2}]c_{m},$
where
$c_{m}^{{\dagger}}=(c_{m1}^{{\dagger}},c_{m2}^{{\dagger}},c_{m3}^{{\dagger}},c_{m4}^{{\dagger}})$
is the creation operator in cell $m$. $\gamma_{x,y}$ are the intracell hopping
amplitudes along the $x$ axis and $y$ axis, respectively. $\lambda$ denotes
the intercell hopping amplitude. $U_{m}$ is the uniform random variable chosen
from $[-W/2,W/2]$, and $W$ is the disorder strength.
$\Gamma_{4}=\tau_{1}\tau_{0}$ and $\Gamma_{\mu}=-\tau_{2}\tau_{\mu}$ with
$\mu=1,2,3$. $\tau_{1-3}$ are the Pauli matrices acting on the sites in one
cell, and $\tau_{0}$ is the $2\times 2$ identity matrix. In polar coordinate
space, $\psi_{mn}$ is the polar angle between cells $m$ and $n$.
$l(r_{mn})=e^{1-r_{mn}/\zeta}$ is the spatial decay factor of hopping
amplitudes with the decay length $\zeta$ and $r_{mn}$ representing the spatial
distance between any two cells. Here, the spatial decay length $\zeta$ and
the side length of the rhombus and square $r_{1}$ are fixed as $1$, and the
energy unit is set as $\lambda=1$ for simplicity. The Hamiltonian $H_{1}$
respects time-reversal symmetry, particle-hole symmetry, and chiral symmetry
in the clean limit, i.e., $W=0$, and the time-reversal symmetry, particle-hole
symmetry, and chiral symmetry operators are $T=\tau_{0}\tau_{0}K$,
$P=\tau_{3}\tau_{0}K$ and $S=TP=\tau_{3}\tau_{0}$, respectively, where $K$ is
the complex conjugate operator. When the disorder strength is not zero, the
system also maintains these three symmetries. In fact, the Hamiltonian $H_{1}$
is a derivation of the Benalcazar-Bernevig-Hughes model [4, 5] in some sense.
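A compact numerical sketch may help make Eq. (1) concrete. The snippet below (our own illustration, not the authors' code) assembles $H_{1}$ for an arbitrary list of cell coordinates. Two assumptions are baked in: the products $\tau_{i}\tau_{j}$ are read as Kronecker products, and the "first three nearest-neighbor bonds" rule is replaced by a simple distance cutoff; the toy geometry at the end is a plain square patch rather than a true Ammann-Beenker tiling.

```python
import numpy as np

t0 = np.eye(2)
tau = [np.array([[0, 1], [1, 0]], complex),        # tau_1
       np.array([[0, -1j], [1j, 0]]),              # tau_2
       np.array([[1, 0], [0, -1]], complex)]       # tau_3
G4 = np.kron(tau[0], t0)                           # Gamma_4 = tau_1 tau_0
G = [-np.kron(tau[1], tau[mu]) for mu in range(3)] # Gamma_1, Gamma_2, Gamma_3

def build_H1(cells, gx, gy, W, lam=1.0, zeta=1.0, cutoff=1.5, rng=None):
    """Sketch of Eq. (1) on an (N, 2) array `cells` of cell coordinates,
    with r_1 = zeta = 1 and lambda = 1 as in the text (hedged defaults)."""
    rng = rng or np.random.default_rng()
    N = len(cells)
    H = np.zeros((4 * N, 4 * N), complex)
    U = rng.uniform(-W / 2, W / 2, N)              # on-site disorder U_m
    for m in range(N):
        H[4*m:4*m+4, 4*m:4*m+4] = (gx + U[m]) * G4 + gy * G[1]
        for k in range(N):
            if k == m:
                continue
            dx, dy = cells[k] - cells[m]
            r = np.hypot(dx, dy)
            if r > cutoff:                         # crude neighbor rule
                continue
            psi = np.arctan2(dy, dx)               # polar angle psi_mn
            decay = np.exp(1 - r / zeta)           # l(r_mn)
            H[4*m:4*m+4, 4*k:4*k+4] = (lam * decay / 2) * (
                abs(np.cos(psi)) * G4 - 1j * np.cos(psi) * G[2]
                + abs(np.sin(psi)) * G[1] - 1j * np.sin(psi) * G[0])
    return H                                       # Hermitian by construction

# Toy usage on a small square patch (NOT a true Ammann-Beenker tiling):
cells = np.array([[i, j] for i in range(4) for j in range(4)], float)
energies = np.linalg.eigvalsh(build_H1(cells, gx=-1.5, gy=-1.5, W=1.0))
```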
In addition, we will also investigate the effects of disorder on the
higher-order topological phase of a modified Bernevig-Hughes-Zhang model in the
Ammann-Beenker tiling quasicrystalline lattice with the square boundary
condition and the octagonal boundary condition, respectively. As shown in
Figs. 1(b) and 1(c), each vertex of the quasicrystalline lattice hosts exactly
one lattice site. Only the first three nearest-neighbor hoppings are considered
in our calculations. This model lattice can be described by a tight-binding Hamiltonian
with the form of [67]
$\displaystyle H_{2}$ $\displaystyle=$ $\displaystyle-\sum_{m\neq
n}\frac{l(r_{mn})}{2}c_{m}^{\dagger}[it_{1}(s_{3}\tau_{1}\cos\psi_{mn}+s_{0}\tau_{2}\sin\psi_{mn})$
(2)
$\displaystyle+t_{2}s_{0}\tau_{3}+t_{3}s_{1}\tau_{1}\cos(\xi\psi_{mn})]c_{n}$
$\displaystyle+\sum_{m}(M+2t_{2}+U_{m})c_{m}^{\dagger}s_{0}\tau_{3}c_{m},$
Table 1: Symmetries of the Hamiltonian $H_{2}$ on an Ammann-Beenker tiling quasicrystalline lattice with the square and octagonal boundaries without ($W=0$) and with ($W\neq 0$) disorder. $\sigma_{x,y,z}$ and $\tau_{x,y,z}$ are the Pauli matrices. $K$ is the complex conjugate operator, and $\mathcal{I}$ is the $N\times N$ unit matrix with the lattice number $N$. $\mathcal{M}_{x,y}$ are orthogonal matrices permuting the sites of the tiling to flip the whole system vertically and horizontally. $\mathcal{R}_{4,8}$ are orthogonal matrices permuting the sites of the tiling to rotate the whole system by an angle of $\pi/2$ and $\pi/4$, respectively. A check mark indicates that the symmetry is preserved, and a cross mark means the symmetry is absent.

Symmetry | Relation | Square, $W=0$ | Square, $W\not=0$ | Octagon, $W=0$ | Octagon, $W\not=0$
---|---|---|---|---|---
$P=\sigma_{z}\tau_{x}\mathcal{I}K$ | $PHP^{-1}=-H$ | ✓ | $\times$ | ✓ | $\times$
$T=i\sigma_{y}\tau_{0}\mathcal{I}K$ | $THT^{-1}=H$ | $\times$ | $\times$ | $\times$ | $\times$
$S=PT$ | $SHS^{-1}=-H$ | $\times$ | $\times$ | $\times$ | $\times$
$m_{x}=\sigma_{x}\tau_{0}\mathcal{M}_{x}$ | $m_{x}Hm_{x}^{-1}=H$ | ✓ | $\times$ | ✓ | $\times$
$m_{y}=\sigma_{y}\tau_{z}\mathcal{M}_{y}$ | $m_{y}Hm_{y}^{-1}=H$ | ✓ | $\times$ | ✓ | $\times$
$m_{z}=\sigma_{z}\tau_{0}\mathcal{I}$ | $m_{z}Hm_{z}^{-1}=H$ | $\times$ | $\times$ | $\times$ | $\times$
| | | | |
$C_{4}=e^{-i\frac{\pi}{4}\sigma_{z}\tau_{z}}\mathcal{R}_{4}$ | $C_{4}HC_{4}^{-1}=H$ | $\times$ | $\times$ | $\times$ | $\times$
$C_{4}T$ | $C_{4}TH(C_{4}T)^{-1}=H$ | ✓ | $\times$ | $\times$ | $\times$
$C_{4}m_{x}$ | $C_{4}m_{x}H(C_{4}m_{x})^{-1}=H$ | $\times$ | $\times$ | ✓ | $\times$
$C_{4}m_{y}$ | $C_{4}m_{y}H(C_{4}m_{y})^{-1}=H$ | $\times$ | $\times$ | ✓ | $\times$
$C_{4}m_{z}$ | $C_{4}m_{z}H(C_{4}m_{z})^{-1}=H$ | ✓ | $\times$ | $\times$ | $\times$
| | | | |
$C_{8}=e^{-i\frac{\pi}{8}\sigma_{z}\tau_{z}}\mathcal{R}_{8}$ | $C_{8}HC_{8}^{-1}=H$ | $\times$ | $\times$ | $\times$ | $\times$
$C_{8}T$ | $C_{8}TH(C_{8}T)^{-1}=H$ | $\times$ | $\times$ | ✓ | $\times$
$C_{8}m_{x}$ | $C_{8}m_{x}H(C_{8}m_{x})^{-1}=H$ | $\times$ | $\times$ | $\times$ | $\times$
$C_{8}m_{y}$ | $C_{8}m_{y}H(C_{8}m_{y})^{-1}=H$ | $\times$ | $\times$ | $\times$ | $\times$
$C_{8}m_{z}$ | $C_{8}m_{z}H(C_{8}m_{z})^{-1}=H$ | $\times$ | $\times$ | ✓ | $\times$
where
$c_{m}^{{\dagger}}=(c_{m\alpha\uparrow}^{{\dagger}},c_{m\alpha\downarrow}^{{\dagger}},c_{m\beta\uparrow}^{{\dagger}},c_{m\beta\downarrow}^{{\dagger}})$
represents the creation operator of an electron on site $m$. On each site,
$\alpha$ ($\beta$) is the orbital index, and $\uparrow$ ($\downarrow$)
denotes the spin direction. $s_{1-3}$ and $\tau_{1-3}$ are the Pauli
matrices acting on the spin and orbital degrees of freedom, respectively.
$s_{0}$ and $\tau_{0}$ are the $2\times 2$ identity matrices. $t_{1-3}$ are
the hopping strengths, and $M$ is the Dirac mass. The term containing $t_{3}$
is equivalent to a mass term that breaks the time-reversal symmetry of the
system, so the originally helical boundary states open an energy gap and
evolve into higher-order corner states [67, 68]. $\xi$ is
the varying period of the mass term, and $\xi=2$ (4) for square (octagonal)
samples. In the clean limit, the Hamiltonian $H_{2}$ respects particle-hole
symmetry, mirror symmetry $m_{x,y}$, and some combined symmetries, such as
$C_{4}T$, $C_{4}m_{z}$ with the square boundary condition and $C_{4}m_{x}$,
$C_{4}m_{y}$, $C_{8}T$, $C_{8}m_{z}$ with the octagonal boundary condition as
shown in Table 1. In fact, it has been demonstrated that the higher-order
corner state is protected by the combined symmetry $C_{4}m_{z}$ ($C_{8}m_{z}$)
with the square (octagonal) boundary condition by employing some uniform
perturbations to test the stability of the corner states [67]. However, all
symmetries are broken when the disorder is introduced, and whether
disorder-induced higher-order corner states can appear is unclear. Without
loss of generality, we will set $t_{1}=t_{2}=1$.
The nested-Wilson-loop method [4, 9, 5] in momentum space is an efficient way
to characterize the topological phase of an electric quadrupole insulator.
However, the quasicrystalline lattice lacks translational invariance; thus, a
topological invariant defined in momentum space is no longer applicable for
our models of quasicrystals. Therefore, we employ a
real-space quadrupole moment to characterize the topological phases of the
quasicrystalline lattice with disorder. The real-space quadrupole moment is
given by [109, 110, 111, 104, 112]
$q_{xy}=\frac{1}{2\pi}{\rm{Im}}\ln[\det(\Psi_{occ}^{\dagger}\hat{U}\Psi_{occ})\sqrt{\det(\hat{U}^{\dagger})}],$
(3)
where the columns of $\Psi_{occ}$ are the eigenvectors of the occupied states.
$\hat{U}\equiv\exp[i2\pi\hat{X}\hat{Y}/N]$ where $\hat{X}$ and $\hat{Y}$ are
the position operators, and $N$ represents the total number of the lattice
sites. If $q_{xy}=0.5$, the system is a SOTI phase with topological corner
states. Besides, $q_{xy}=0$ indicates a trivial phase. Note that the
subsequent calculations of $q_{xy}$ in this paper are based on the periodic
boundary condition. It is also noted that the validity of the formulation of
the bulk quadrupole moment proposed by two previous works [109, 110] is still
controversial. Ono _et al_. showed that the proposed definition of the bulk
quadrupole moment fails even for a simple noninteracting example [113]. Thus,
a satisfactory formulation of the bulk quadrupole moment remains worthy of
further study in future works.
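For concreteness, Eq. (3) can be evaluated numerically along the following lines. This is a minimal sketch under stated assumptions: half filling (occupied states are those with $E<0$), one position per cell repeated over the internal degrees of freedom, and modest system sizes for which the determinant does not underflow; the paper itself evaluates $q_{xy}$ under periodic boundary conditions.

```python
import numpy as np

def quadrupole_moment(H, xy, n_orb=4):
    """Sketch of Eq. (3). `H` is a Hermitian real-space Hamiltonian and `xy`
    the (N, 2) cell positions, each carrying `n_orb` internal degrees of
    freedom ordered as consecutive rows of H."""
    evals, evecs = np.linalg.eigh(H)
    occ = evecs[:, evals < 0]                      # Psi_occ (half filling)
    x = np.repeat(xy[:, 0], n_orb)
    y = np.repeat(xy[:, 1], n_orb)
    N = len(xy)                   # N in Eq. (3); counting cells is a choice of the sketch
    Udiag = np.exp(2j * np.pi * x * y / N)         # diagonal of U-hat
    Uocc = occ.conj().T @ (Udiag[:, None] * occ)   # Psi_occ^dag U-hat Psi_occ
    q = np.angle(np.linalg.det(Uocc) * np.sqrt(np.prod(Udiag.conj())))
    return (q / (2 * np.pi)) % 1                   # 0.5 signals the SOTI phase

# e.g. with build_H1 from the previous sketch (clean limit, open boundaries):
# qxy = quadrupole_moment(build_H1(cells, -1.5, -1.5, W=0.0), cells)
```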
## III Model I: Chiral symmetry-protected higher-order topological insulator
In this section, we focus on the disorder-induced topological phase transition
with chiral symmetry in an Ammann-Beenker tiling quasicrystal with the square
boundary condition. The disorder is of the $U_{m}\Gamma_{4}$ type, which does
not destroy the chiral symmetry.
Figure 2 shows the real-space quadrupole moment as a function of disorder
strength $W$ and the intracell hopping amplitude along the $y$ axis,
$\gamma_{y}$, with fixed $\gamma_{x}$. The color map shows the magnitude of the real-space quadrupole
moment. It is found that when $W=0$, that is, in the clean limit, the system
is in a SOTI phase with $q_{xy}=0.5$ if $\gamma_{y}$ satisfies
$-1.8<\gamma_{y}<-0.9$. However, with the gradual increase in the disorder
strength, the SOTI phase will transform to the Anderson localized state phase
with $q_{xy}$ changing from $0.5$ to $0$. The critical disorder strength at
which this happens increases monotonically with $\gamma_{y}$. Meanwhile, we
also find a disorder-induced SOTI phase in the region where $\gamma_{y}>-0.9$.
However, in our calculation, $q_{xy}$ is not strictly quantized to $0.5$ in
this phase region. We believe that this is due to the finite-size effect of
the system, and it will be discussed below.
Figure 2: Topological phase diagram of the Ammann-Beenker tiling quasicrystal
in ($W,\gamma_{y}$) space obtained by calculating the real-space topological
invariant quadrupole moment $q_{xy}$ with $\gamma_{x}=-1.5$. The system is cut
to a square sample containing $1257$ cells with periodic boundary conditions.
The results are averaged over $100$ random disorder configurations.
In order to explore the role of the disorder effect in the quasicrystalline
lattice with chiral symmetry in more depth, we take two specific parameter
values of $\gamma_{y}$ and plot the variation of $q_{xy}$ with respect to the
strength of disorder as shown in Figs. 3(a) and 3(b). For the case of
$\gamma_{y}=-1.5$, the second-order phase remains stable in a weakly
disordered situation ($W<2.5$) with a quantized quadrupole moment plateau and
is destroyed in the strongly disordered situation ($W>7$) where $q_{xy}=0$. On
the other hand, when $\gamma_{y}=-0.75$, the system hosts a trivial phase with
$q_{xy}=0$ in the clean limit. As the disorder strength increases,
$q_{xy}$ gradually increases from $0$ and approaches $0.5$, indicating that
the system has undergone a phase transition from a trivial phase to a
topological nontrivial phase. Actually, there is not a quantized quadrupole
moment plateau in Fig. 3(b), and we attribute this to the finite-size effect.
Therefore, we plot $q_{xy}$ versus system size $N$ when $W=5.5$ with $500$
disorder configurations in the inset in Fig. 3(b). It is found that $q_{xy}$
approaches $0.5$ with a large system size ($N=16437$). To further certify the
existence of SOTI phases, we set some specific values of $W$ in Figs. 3(a) and
3(b) to give the energy spectrum and wave-function diagram of the system. It
is found that the SOTI phase is robust against the weak disorder [$W=1.5$ in
Figs. 3(c) and 3(e)] since the system hosts four zero-energy modes which are
localized at the four corners of the lattice. This robustness is similar to
that of first-order topological states. Similarly, when $W=4$, four
zero-energy modes appear at the four corners of the lattice, which indicates
the presence of the disorder-induced SOTI phase. The corner states are protected by the chiral
symmetry which is quite similar to the corner states that appeared in
crystalline systems in some previous works [104, 105].
Figure 3: The real-space quadrupole moment $q_{xy}$ versus disorder strength
$W$ with different initial states including (a) a higher-order topological
phase with $\gamma_{x}=-1.5,\gamma_{y}=-1.5$ and (b) a topologically trivial
phase with $\gamma_{x}=-1.5,\gamma_{y}=-0.75$. The periodic boundary condition
is taken, and 500 disorder configurations are performed. The inset shows the
quadrupole moments $q_{xy}$ versus $N$ when $W=5.5$. $N$ is the total number
of the cells. The energy modes near the zero energy for (c) a higher-order
topological initial phase with $\gamma_{y}=-1.5$, $W=1.5$ and (d) a trivial
initial phase with $\gamma_{y}=-0.75$, $W=4$, respectively. (e) and (f) The
wave-function distribution of the zero modes corresponds to (c) and (d),
respectively. The system contains $1257$ cells, and the open boundary
condition is taken in (c)-(f).
## IV Model II: Combined symmetry-protected higher-order topological
insulator
In this section, we concentrate on the effects of disorder on the combined
symmetry-protected higher-order topological phase in an Ammann-Beenker tiling
quasicrystal with square and octagonal boundary conditions, respectively. All
calculations are based on the Hamiltonian $H_{2}$. In the clean limit, the
HOTI phase is protected by the combined symmetries $C_{4}m_{z}$ and $C_{8}m_{z}$
for the respective boundary conditions. So far, the HOTI phase has been shown
to be protected by symmetries such as chiral, particle-hole, $C_{4}T$,
$C_{4}m_{z}$, and $C_{8}m_{z}$ symmetry. According to the previous work
[104], the values of $q_{xy}$ can be quantized to $0$ or $1/2$ only if the
system has chiral or particle-hole symmetry. However, all of these symmetries
are destroyed when the disorder is introduced in the Hamiltonian $H_{2}$ (see
Table I). Hence, the real-space quadrupole moment discussed in Sec. III may
not be appropriate for model II with the square boundary condition. In
addition, there is no well-defined topological invariant for a lattice with an
octagonal boundary condition. One appropriate way to characterize the higher-
order topological phase is to adopt the existence of the corner states as a
working definition [114, 108]. Thus, we calculate the energy spectrum and
wave-function distribution of the system to determine whether the corner
states exist. To reveal the HOTAI in the quasicrystal described by model II, in
the following calculations we not only average over a sufficiently large number
of disorder configurations but also ensure that the sample size is large
enough.
### IV.1 Square boundary condition
In Figs. 4(a) and 4(b), we plot the eigenspectrum of the open lattice as a
function of disorder strength for different $M$. For the case of $M=-1$, the
probability density of the four in-gap eigenstates near zero energy in the
clean limit presents a picture with four corner states localized at the four
corners of the lattice [see Fig. 7(a) in the Appendix], indicating that the
system hosts a SOTI phase. Upon introducing the disorder and increasing its
strength, in Fig. 4(a), it is shown that the midgap modes remain stable until
$W\approx 5.5$, beyond which the bulk gap disappears, and the system is
converted into an Anderson localized phase. To further illustrate the
stability of the SOTI phase, Figs. 4(c) and 4(e) display the eigenspectrum and
probability density of the in-gap eigenstates with $W=1.5$. It is found that
the four corner states are stable under weak disorder, indicating that the
second-order phase is robust against the weak disorder. For another case of
$M=1.6$, the system is a normal insulator phase in the clean limit due to the
fact that the middle four eigenstates near the zero energy are localized in
the bulk [see Fig. 7(b) in the Appendix]. With the increase in $W$, two
topological phase transitions occur in Fig. 4(b). First, in the region
$4<W<8$, the four middle eigenvalues gradually tend to be degenerate near the
zero energy, and midgap modes are generated, indicating that a phase
transition from a normal insulator phase to the HOTI phase occurs. Figures 4(d)
and 4(f) show the energy spectrum and probability density of the in-gap
eigenstates of $H_{2}$ under the open boundary condition with $M=1.6$ and
$W=6.6$. It is found that there are fourfold-degenerate in-gap states for this
set of parameters. More interestingly, the wave functions corresponding to
these degenerate energies are all localized at the four corners of the lattice;
these are the so-called corner states shown in Fig. 4(f). The disorder-induced
corner states are strong evidence for the emergence of the HOTAI. Then, with
increasing disorder strength, the higher-order phase converts into an Anderson
insulator phase at $W\approx 8$, where the energy gap closes and all
eigenstates become localized. Based on our calculations, we can draw two
conclusions: first, the
higher-order topological phase is relatively stable under weak disorder;
second, disorder can also induce the higher-order topological phase in model
II.
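As an illustration of the procedure behind Figs. 4(a) and 4(b), the following sketch scans the open-boundary spectrum against disorder strength, assuming on-site Anderson disorder drawn uniformly from $[-W/2, W/2]$ (a common convention, not necessarily the paper's implementation); the eigenvalues nearest zero energy are collected and averaged over configurations.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectrum_vs_disorder(H0, W_values, n_conf=200, n_keep=8):
    """Disorder-averaged low-energy spectrum of an open-boundary sample;
    an illustrative sketch (all names are ours, not the paper's code).

    H0       : (N, N) clean open-boundary Hamiltonian
    W_values : iterable of disorder strengths to scan
    n_conf   : random configurations per strength (200 in Fig. 4)
    n_keep   : number of eigenvalues nearest zero energy to record
    """
    N = H0.shape[0]
    averaged = []
    for W in W_values:
        levels = []
        for _ in range(n_conf):
            V = np.diag(W * (rng.random(N) - 0.5))   # on-site disorder
            E = np.linalg.eigvalsh(H0 + V)
            # Keep the eigenvalues closest to E = 0, where the fourfold
            # (or eightfold) degenerate corner modes would appear.
            nearest = np.sort(E[np.argsort(np.abs(E))[:n_keep]])
            levels.append(nearest)
        averaged.append(np.mean(levels, axis=0))
    return np.asarray(averaged)    # shape: (len(W_values), n_keep)
```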
Figure 4: The eigenspectrum versus disorder strength $W$ with different
initial states including: (a) a higher-order topological phase with $M=-1$ and
(b) a topological trivial phase with $M=1.6$. Some $200$ disorder
configurations are performed with a square sample containing $4061$ sites. The
energy modes near the zero energy for (c) a higher-order topological initial
phase with $M=-1$, $W=1.5$ and (d) a trivial initial phase with $M=1.6$,
$W=6.6$, respectively. (e) and (f) The wave-function distributions of the four
in-gap states corresponding to (c) and (d), respectively. All calculations are
based on the open boundary condition.
As mentioned above, since both the chiral and the particle-hole symmetries are
broken in model II with disorder, the necessary condition for applying the
real-space formula of the quadrupole moment is not satisfied, and the
real-space quadrupole moment should not be expected to characterize the
higher-order topological phase in model II with the square sample. However,
here we also try to calculate the real-space quadrupole moment $q_{xy}$ versus
disorder strength with different $M$, as shown in Fig. 5. Strikingly, it is
found that the values of $q_{xy}$ are also quantized to $1/2$ in certain
disorder strength regions. For the case of $M=-1$ [see Fig. 5(a)], the system
hosts a SOTI phase with $q_{xy}=1/2$ in the clean limit, and a plateau of
quantized $q_{xy}$ persists until $W$ reaches a certain
value ($W\approx 5.5$), indicating that the SOTI phase is robust against the
weak disorder. However, the SOTI phase is eventually destroyed by strong
disorder. For another case of $M=1.6$ [see Fig. 5(b)], the system is a normal
insulator phase with $q_{xy}=0$ in the clean limit. With the increase in $W$,
two topological phase transitions occur, accompanied by $q_{xy}$ changing from
$0$ to $0.5$ at $W\approx 4$ and returning to $q_{xy}=0$ at $W\approx 8$. In
the region $4<W<8$, a remarkable plateau of quantized $q_{xy}=0.5$ appears,
which indicates a SOTI phase induced by disorder. Thus, it is shown that the
results given by $q_{xy}$ match well with the energy spectrum [comparing Figs.
4(a) and 4(b) with Figs. 5(a) and 5(b)]. This implies that the validity of
the operator-based formulation of the bulk quadrupole moment proposed in two
previous works [109, 110] remains an open issue. An intriguing question is
whether this real-space quadrupole moment can still characterize the topology
of the system in such situations without any symmetry constraint; this will
be further investigated in future work.
Figure 5: The real-space quadrupole moment $q_{xy}$ versus disorder strength
$W$ with different initial states including (a) a higher-order topological
phase with $M=-1$ and (b) a topological trivial phase with $M=1.6$. The
periodic boundary condition is taken, and $500$ disorder configurations are
performed. The system is cut into a square sample containing $1257$ sites.
### IV.2 Octagonal boundary condition
In Fig. 6(a), we plot the eigenspectrum of the open lattice as a function of
disorder strength with $M=-1$. The probability density of the eight in-gap eigenstates
near zero energy in the clean limit presents a picture with eight corner
states localized at the eight corners of the lattice [see Fig. 7(c) in the
Appendix], indicating that the system hosts a SOTI phase. Upon introducing the
disorder and increasing its strength, the midgap modes remain stable until
$W\approx 4$, beyond which the bulk gap disappears, and the system is
converted into an Anderson localized phase. To further illustrate the
stability of the SOTI phase, Figs. 6(c) and 6(e) display the eigenspectrum and
probability density of the in-gap eigenstates with $W=2$. It is found that the
eight corner states are stable under weak disorder.
In Fig. 6(b), we plot the eigenspectrum of the open lattice as a function of
disorder strength with $M=1.6$. In the clean limit, the middle eight
eigenstates near zero energy are localized in the bulk
[see Fig. 7(d) in the Appendix], indicating that the system hosts a trivial
phase. Upon introducing the disorder and increasing its strength, a series of
interesting changes occur in the energy spectrum. First, in the region
$0<W<5.5$, the eight middle eigenvalues gradually tend to be degenerate near
the zero energy, and midgap modes are generated, indicating that a phase
transition from a normal insulator phase to a HOTI phase may occur. To verify
this conclusion, we plot the eigenspectrum and probability density of the
midgap eigenstates at $W=6.6$ as shown in Figs. 6(d) and 6(f). It is shown
that the eight midgap states are localized at the eight corners of the
lattice, and these corner states are powerful evidence of the disorder-induced
HOTI. Then, with increasing disorder strength, the higher-order phase
converts into an Anderson insulator phase at $W\approx 8$, where the energy gap
closes and all eigenstates become localized.
Figure 6: The eigenspectrum versus disorder strength $W$ with different
initial states including (a) a higher-order topological phase with $M=-1$ and
(b) a topological trivial phase with $M=1.6$. Some $200$ disorder
configurations are performed with an octagonal sample containing $13289$ sites.
The energy modes near the zero energy for (c) a higher-order topological
initial phase with $M=-1$, $W=2$ and (d) a trivial initial phase with $M=1.6$,
$W=6.6$, respectively. (e) and (f) The wave-function distributions of the eight
in-gap states corresponding to (c) and (d), respectively. All calculations are
based on the open boundary condition.
## V Conclusions and discussions
In this paper, we investigate the disorder-induced higher-order topological
phase transition in an Ammann-Beenker tiling quasicrystal. Two types of SOTI
phases are considered: one is the quasicrystalline quadrupole insulator (model
I), and the other is a quantum spin Hall insulator with a mass term that gaps
the edge states so that topological corner states emerge (model II).
Without disorder, model I (II) in the SOTI phase hosts gapless topological
corner states protected by chiral ($C_{4}m_{z}$ or $C_{8}m_{z}$) symmetry and
localized at the lattice corners. Based on calculating the quadrupole moment
and the probability density of the midgap eigenstates, it is found that in
both models the SOTI phases remain stable under weak disorder and are destroyed
by strong disorder. More interestingly, a chiral-symmetry-protected,
disorder-induced HOTAI is found by adding disorder of a certain strength to a
topologically trivial phase in model I. Meanwhile, a topological phase
transition from a topologically trivial phase to a HOTAI phase with topological
corner states is also found in model II.
Based on the self-consistent Born approximation (SCBA), the disorder-induced
topological phase transition from a topologically trivial phase to a
topologically nontrivial phase is attributed to disorder renormalizing
the system parameters, such as the mass term, hopping term, and chemical
potential [78, 115, 87, 75, 89, 81, 90, 91, 104]. However, the SCBA is
invalid for aperiodic systems, such as amorphous and quasicrystalline
lattices, which lack translation symmetry. To date, there is no
well-defined theory revealing the generating mechanism of the TAI in aperiodic
systems; this will be studied in future work. Nevertheless, by analogy
with the generating mechanism of the TAI in periodic systems, we suppose that
the generation of the TAI or HOTAI in the quasicrystalline system is also due
to the renormalization of the parameters caused by disorder, converting the
initially trivial phase into the HOTAI phase. In addition, disorder in model
I does not destroy the chiral symmetry, and this symmetry also protects the
topology of the system [104]. In model II, the introduction of disorder
breaks all of the symmetries of the system. Although it seems difficult to
find a symmetry that guarantees the topology of the system, our
calculations show that HOTAI phases can nevertheless be induced by disorder in
model II. We note that the quadrupole moment reaches a quantized value more
readily in model II. This may be due to two factors: first, model
I is more sensitive to finite-size effects; second, the wave function
of the HOTI in model II is more localized than in model I. In previous work,
Fu _et al._ proposed that, when the disorder is strong, topological surface
states can exist due to symmetries that are destroyed by disorder but remain
unbroken on average [116]. Here, two key points are employed to guarantee that
the averaged symmetries exist: one is that enough disorder configurations are
needed for the average; the other is that the system size should be
large enough. Under these conditions, we suppose that the combined symmetries
such as $C_{4}m_{z}$ and $C_{8}m_{z}$, which are broken by random disorder, are
recovered statistically by taking an ensemble average [117], and that the HOTAI
phases are protected by the average combined symmetries. More details about how
average symmetry can preserve the HOTI phases under disorder in
quasicrystalline systems will be investigated in future work.
Recently, the HOTAI has been successfully implemented in a modified Haldane
model based on an electric circuit system [108]. Moreover, quasicrystalline
quadrupole topological insulators have been experimentally realized in
electrical circuits [70]. Therefore, we propose an experimental setup to
construct the quasicrystalline lattice in electronic circuits and to realize
the introduction of random disorder by changing the magnitudes of the inductors
and capacitors. In this way, we believe that the HOTAI phase in the
quasicrystalline system can be observed.
## Acknowledgments
B.Z. was supported by the NSFC (under Grant No. 12074107), and the program of
outstanding young and middle-aged scientific and technological innovation team
of colleges and universities in Hubei Province (under Grant No. T2020001).
D.-H.X. was supported by the NSFC (under Grant No. 12074108). D.-H.X. also
acknowledges financial support from the Chutian Scholars Program in Hubei
Province.
## Appendix: Wave function with square and octagonal boundary condition in the
clean limit
In this Appendix, we plot the probability density of the four (eight)
eigenstates which are nearest to zero energy in the clean limit with different
Dirac mass $M$ to identify the initial phase of the system. All calculations
are based on $H_{2}$. As shown in Figs. 7(a) and 7(c), four and eight in-gap
states are symmetrically distributed at the corners of the square and octagonal
samples, respectively, indicating that the system is in a HOTI phase at $M=-1$
in the clean limit.
Meanwhile, when $M=1.6$, the system is in a topologically trivial phase, as
shown in Figs. 7(b) and 7(d).
Figure 7: Probability density of the eigenstates in the clean limit with
different Dirac mass: (a) and (c) $M=-1$; (b) and (d) $M=1.6$. The system is
cut into a square sample containing $4061$ sites for (a) and (b). For (c) and
(d), the system is cut into an octagonal sample with $13289$ sites.
## References
* [1] A. Saha and A. M. Jayannavar, “Higher order topological systems: A new paradigm”, arXiv:2107.00847 .
* Schindler [2020] F. Schindler, “Dirac equation perspective on higher-order topological insulators”, J. Appl. Phys. 128, 221102 (2020).
* Xie _et al._ [2021] B. Xie, H.-X. Wang, X. Zhang, P. Zhan, J.-H. Jiang, M. Lu, and Y. Chen, “Higher-order band topology”, Nat. Rev. Phys. 3, 520 (2021).
* Benalcazar _et al._ [2017a] W. A. Benalcazar, B. A. Bernevig, and T. L. Hughes, “Quantized electric multipole insulators”, Science 357, 61 (2017a).
* Benalcazar _et al._ [2017b] W. A. Benalcazar, B. A. Bernevig, and T. L. Hughes, “Electric multipole moments, topological multipole moment pumping, and chiral hinge states in crystalline insulators”, Phys. Rev. B 96, 245115 (2017b).
* Schindler _et al._ [2018a] F. Schindler, A. M. Cook, M. G. Vergniory, Z. Wang, S. S. P. Parkin, B. A. Bernevig, and T. Neupert, “Higher-order topological insulators”, Sci. Adv. 4, eaat0346 (2018a).
* Langbehn _et al._ [2017] J. Langbehn, Y. Peng, L. Trifunovic, F. von Oppen, and P. W. Brouwer, “Reflection-symmetric second-order topological insulators and superconductors”, Phys. Rev. Lett. 119, 246401 (2017).
* Song _et al._ [2017] Z. Song, Z. Fang, and C. Fang, “$(d-2)$-dimensional edge states of rotation symmetry protected topological states”, Phys. Rev. Lett. 119, 246402 (2017).
* [9] C. Shang, X. Zang, W. Gao, U. Schwingenschlogl, and A. Manchon, “Second-order topological insulator and fragile topology in topological circuitry simulation”, arXiv:2009.09167 .
* Geier _et al._ [2018] M. Geier, L. Trifunovic, M. Hoskam, and P. W. Brouwer, “Second-order topological insulators and superconductors with an order-two crystalline symmetry”, Phys. Rev. B 97, 205135 (2018).
* Fang and Fu [2019] C. Fang and L. Fu, “New classes of topological crystalline insulators having surface rotation anomaly”, Sci. Adv. 5, eaat2374 (2019).
* [12] Y. Xu, R. Xue, and S. Wan, “Topological corner states on kagome lattice based chiral higher-order topological insulator”, arXiv:1711.09202 .
* Ezawa [2018a] M. Ezawa, “Higher-order topological insulators and semimetals on the breathing kagome and pyrochlore lattices”, Phys. Rev. Lett. 120, 026801 (2018a).
* Ezawa [2018b] M. Ezawa, “Topological switch between second-order topological insulators and topological crystalline insulators”, Phys. Rev. Lett. 121, 116801 (2018b).
* Ezawa [2018c] M. Ezawa, “Magnetic second-order topological insulators and semimetals”, Phys. Rev. B 97, 155305 (2018c).
* Khalaf [2018] E. Khalaf, “Higher-order topological insulators and superconductors protected by inversion symmetry”, Phys. Rev. B 97, 205136 (2018).
* Ezawa [2018d] M. Ezawa, “Strong and weak second-order topological insulators with hexagonal symmetry and $\mathbb{z}$3 index”, Phys. Rev. B 97, 241402(R) (2018d).
* Kunst _et al._ [2018] F. K. Kunst, G. van Miert, and E. J. Bergholtz, “Lattice models with exactly solvable topological hinge and corner states”, Phys. Rev. B 97, 241405(R) (2018).
* Ezawa [2018e] M. Ezawa, “Minimal models for wannier-type higher-order topological insulators and phosphorene”, Phys. Rev. B 98, 045125 (2018e).
* Yan _et al._ [2018] Z. Yan, F. Song, and Z. Wang, “Majorana corner modes in a high-temperature platform”, Phys. Rev. Lett. 121, 096803 (2018).
* Wang _et al._ [2018a] Q. Wang, C.-C. Liu, Y.-M. Lu, and F. Zhang, “High-temperature majorana corner states”, Phys. Rev. Lett. 121, 186801 (2018a).
* Shapourian _et al._ [2018] H. Shapourian, Y. Wang, and S. Ryu, “Topological crystalline superconductivity and second-order topological superconductivity in nodal-loop materials”, Phys. Rev. B 97, 094508 (2018).
* You _et al._ [2018] Y. You, T. Devakul, F. J. Burnell, and T. Neupert, “Higher-order symmetry-protected topological states for interacting bosons and fermions”, Phys. Rev. B 98, 235102 (2018).
* Lin and Hughes [2018] M. Lin and T. L. Hughes, “Topological quadrupolar semimetals”, Phys. Rev. B 98, 241103(R) (2018).
* Kooi _et al._ [2018] S. H. Kooi, G. van Miert, and C. Ortix, “Inversion-symmetry protected chiral hinge states in stacks of doped quantum hall layers”, Phys. Rev. B 98, 245102 (2018).
* Lee _et al._ [2019] C. H. Lee, L. Li, and J. Gong, “Hybrid higher-order skin-topological modes in nonreciprocal systems”, Phys. Rev. Lett. 123, 016805 (2019).
* Liu _et al._ [2019a] T. Liu, Y.-R. Zhang, Q. Ai, Z. Gong, K. Kawabata, M. Ueda, and F. Nori, “Second-order topological phases in non-hermitian systems”, Phys. Rev. Lett. 122, 076801 (2019a).
* Fan _et al._ [2019] H. Fan, B. Xia, L. Tong, S. Zheng, and D. Yu, “Elastic higher-order topological insulator with topologically protected corner states”, Phys. Rev. Lett. 122, 204301 (2019).
* Pozo _et al._ [2019] O. Pozo, C. Repellin, and A. G. Grushin, “Quantization in chiral higher order topological insulators: Circular dichroism and local chern marker”, Phys. Rev. Lett. 123, 247401 (2019).
* Sheng _et al._ [2019] X.-L. Sheng, C. Chen, H. Liu, Z. Chen, Z.-M. Yu, Y. X. Zhao, and S. A. Yang, “Two-dimensional second-order topological insulator in graphdiyne”, Phys. Rev. Lett. 123, 256402 (2019).
* Benalcazar _et al._ [2019] W. A. Benalcazar, T. Li, and T. L. Hughes, “Quantization of fractional corner charge in ${C}_{n}$-symmetric higher-order topological crystalline insulators”, Phys. Rev. B 99, 245151 (2019).
* Rodriguez-Vega _et al._ [2019] M. Rodriguez-Vega, A. Kumar, and B. Seradjeh, “Higher-order floquet topological phases with corner and bulk bound states”, Phys. Rev. B 100, 085138 (2019).
* Okugawa _et al._ [2019] R. Okugawa, S. Hayashi, and T. Nakanishi, “Second-order topological phases protected by chiral symmetry”, Phys. Rev. B 100, 235302 (2019).
* Ding _et al._ [2020] Y.-R. Ding, D.-H. Xu, C.-Z. Chen, and X. C. Xie, “Hinged quantum spin hall effect in antiferromagnetic topological insulators”, Phys. Rev. B 101, 041404(R) (2020).
* Zhu [2019] X. Zhu, “Second-order topological superconductors with mixed pairing”, Phys. Rev. Lett. 122, 236401 (2019).
* Bultinck _et al._ [2019] N. Bultinck, B. A. Bernevig, and M. P. Zaletel, “Three-dimensional superconductors with hybrid higher-order topology”, Phys. Rev. B 99, 125149 (2019).
* Yan [2019a] Z. Yan, “Majorana corner and hinge modes in second-order topological insulator/superconductor heterostructures”, Phys. Rev. B 100, 205406 (2019a).
* Hsu _et al._ [2020] Y.-T. Hsu, W. S. Cole, R.-X. Zhang, and J. D. Sau, “Inversion-protected higher-order topological superconductivity in monolayer ${\mathrm{wte}}_{2}$”, Phys. Rev. Lett. 125, 097001 (2020).
* van Miert and Ortix [2018] G. van Miert and C. Ortix, “Higher-order topological insulators protected by inversion and rotoinversion symmetries”, Phys. Rev. B 98, 081110(R) (2018).
* Wang _et al._ [2018b] Y. Wang, M. Lin, and T. L. Hughes, “Weak-pairing higher order topological superconductors”, Phys. Rev. B 98, 165144 (2018b).
* Franca _et al._ [2018] S. Franca, J. van den Brink, and I. C. Fulga, “An anomalous higher-order topological insulator”, Phys. Rev. B 98, 201114(R) (2018).
* Huang and Liu [2020] B. Huang and W. V. Liu, “Floquet higher-order topological insulators with anomalous dynamical polarization”, Phys. Rev. Lett. 124, 216601 (2020).
* Trifunovic and Brouwer [2019] L. Trifunovic and P. W. Brouwer, “Higher-order bulk-boundary correspondence for topological crystalline phases”, Phys. Rev. X 9, 011012 (2019).
* Liu _et al._ [2019b] F. Liu, H.-Y. Deng, and K. Wakabayashi, “Helical topological edge states in a quadrupole phase”, Phys. Rev. Lett. 122, 086804 (2019b).
* Pan _et al._ [2019] X.-H. Pan, K.-J. Yang, L. Chen, G. Xu, C.-X. Liu, and X. Liu, “Lattice-symmetry-assisted second-order topological superconductors and majorana patterns”, Phys. Rev. Lett. 123, 156801 (2019).
* Zhang _et al._ [2019] R.-X. Zhang, W. S. Cole, X. Wu, and S. Das Sarma, “Higher-order topology and nodal topological superconductivity in fe(se,te) heterostructures”, Phys. Rev. Lett. 123, 167001 (2019).
* Yan [2019b] Z. Yan, “Higher-order topological odd-parity superconductors”, Phys. Rev. Lett. 123, 177001 (2019b).
* Wang _et al._ [2019] Z. Wang, B. J. Wieder, J. Li, B. Yan, and B. A. Bernevig, “Higher-order topology, monopole nodal lines, and the origin of large fermi arcs in transition metal dichalcogenides $x{\mathrm{te}}_{2}$ ($x=\mathrm{Mo},\mathrm{W}$)”, Phys. Rev. Lett. 123, 186401 (2019).
* Park _et al._ [2019] M. J. Park, Y. Kim, G. Y. Cho, and S. Lee, “Higher-order topological insulator in twisted bilayer graphene”, Phys. Rev. Lett. 123, 216803 (2019).
* Călugăru _et al._ [2019] D. Călugăru, V. Juričić, and B. Roy, “Higher-order topological phases: A general principle of construction”, Phys. Rev. B 99, 041301(R) (2019).
* Dubinkin and Hughes [2019] O. Dubinkin and T. L. Hughes, “Higher-order bosonic topological phases in spin models”, Phys. Rev. B 99, 235132 (2019).
* Khalaf _et al._ [2021] E. Khalaf, W. A. Benalcazar, T. L. Hughes, and R. Queiroz, “Boundary-obstructed topological phases”, Phys. Rev. Research 3, 013239 (2021).
* Zhang _et al._ [2020a] R.-X. Zhang, Y.-T. Hsu, and S. Das Sarma, “Higher-order topological dirac superconductors”, Phys. Rev. B 102, 094503 (2020a).
* Wieder _et al._ [2020] B. J. Wieder, Z. Wang, J. Cano, X. Dai, L. M. Schoop, B. Bradlyn, and B. A. Bernevig, “Strong and fragile topological dirac semimetals with higher-order fermi arcs”, Nat. Commun. 11, 1 (2020).
* Yang _et al._ [2020] Y.-B. Yang, K. Li, L.-M. Duan, and Y. Xu, “Type-ii quadrupole topological insulators”, Phys. Rev. Research 2, 033029 (2020).
* Schindler _et al._ [2018b] F. Schindler, Z. Wang, M. G. Vergniory, A. M. Cook, A. Murani, S. Sengupta, _et al._ , “Higher-order topology in bismuth”, Nat. Phys. 14, 918 (2018b).
* Serra-Garcia _et al._ [2018] M. Serra-Garcia, V. Peri, R. Süsstrunk, O. R. Bilal, T. Larsen, L. G. Villanueva, and S. D. Huber, “Observation of a phononic quadrupole topological insulator”, Nature (London) 555, 342 (2018).
* Xue _et al._ [2019] H. Xue, Y. Yang, F. Gao, Y. Chong, and B. Zhang, “Acoustic higher-order topological insulator on a kagome lattice”, Nat. Mater. 18, 108 (2019).
* Ni _et al._ [2019] X. Ni, M. Weiner, A. Alu, and A. B. Khanikaev, “Observation of higher-order topological acoustic states protected by generalized chiral symmetry”, Nat. Mater. 18, 113 (2019).
* Peterson _et al._ [2018] C. W. Peterson, W. A. Benalcazar, T. L. Hughes, and G. Bahl, “A quantized microwave quadrupole insulator with topologically protected corner states”, Nature (London) 555, 346 (2018).
* Mittal _et al._ [2019] S. Mittal, V. V. Orre, G. Zhu, M. A. Gorlach, A. Poddubny, and M. Hafezi, “Photonic quadrupole topological phases”, Nat. Photonics 13, 692 (2019).
* Zhang _et al._ [2020b] W. Zhang, X. Xie, H. Hao, J. Dang, S. Xiao, S. Shi, _et al._ , “Low-threshold topological nanolasers based on the second-order corner state”, Light: Sci. Appl. 9, 1 (2020b).
* Noh _et al._ [2018] J. Noh, W. A. Benalcazar, S. Huang, M. J. Collins, K. P. Chen, T. L. Hughes, and M. C. Rechtsman, “Topological protection of photonic mid-gap defect modes”, Nat. Photonics 12, 408 (2018).
* Imhof _et al._ [2018] S. Imhof, C. Berger, F. Bayer, J. Brehm, L. W. Molenkamp, T. Kiessling, _et al._ , “Topolectrical-circuit realization of topological corner modes”, Nat. Phys. 14, 925 (2018).
* Bao _et al._ [2019] J. Bao, D. Zou, W. Zhang, W. He, H. Sun, and X. Zhang, “Topoelectrical circuit octupole insulator with topologically protected corner states”, Phys. Rev. B 100, 201406(R) (2019).
* Varjas _et al._ [2019] D. Varjas, A. Lau, K. Pöyhönen, A. R. Akhmerov, D. I. Pikulin, and I. C. Fulga, “Topological phases without crystalline counterparts”, Phys. Rev. Lett. 123, 196401 (2019).
* Chen _et al._ [2020] R. Chen, C.-Z. Chen, J.-H. Gao, B. Zhou, and D.-H. Xu, “Higher-order topological insulators in quasicrystals”, Phys. Rev. Lett. 124, 036803 (2020).
* Hua _et al._ [2020] C.-B. Hua, R. Chen, B. Zhou, and D.-H. Xu, “Higher-order topological insulator in a dodecagonal quasicrystal”, Phys. Rev. B 102, 241102(R) (2020).
* Spurrier and Cooper [2020] S. Spurrier and N. R. Cooper, “Kane-mele with a twist: Quasicrystalline higher-order topological insulators with fractional mass kinks”, Phys. Rev. Research 2, 033071 (2020).
* Lv _et al._ [2021] B. Lv, R. Chen, R. Li, C. Guan, B. Zhou, G. Dong, _et al._ , “Realization of quasicrystalline quadrupole topological insulators in electrical circuits”, Commun. Phys. 4, 1 (2021).
* Li _et al._ [2009] J. Li, R.-L. Chu, J. K. Jain, and S.-Q. Shen, “Topological anderson insulator”, Phys. Rev. Lett. 102, 136806 (2009).
* Prodan _et al._ [2010] E. Prodan, T. L. Hughes, and B. A. Bernevig, “Entanglement spectrum of a disordered topological chern insulator”, Phys. Rev. Lett. 105, 115501 (2010).
* Zhang _et al._ [2013] Y.-F. Zhang, Y.-Y. Yang, Y. Ju, L. Sheng, R. Shen, D.-N. Sheng, and D.-Y. Xing, “Coupling-matrix approach to the chern number calculation in disordered systems”, Chin. Phys. B 22, 117312 (2013).
* Castro _et al._ [2015] E. V. Castro, M. P. López-Sancho, and M. A. H. Vozmediano, “Anderson localization and topological transition in chern insulators”, Phys. Rev. B 92, 085410 (2015).
* Liu _et al._ [2016] S. Liu, T. Ohtsuki, and R. Shindou, “Effect of disorder in a three-dimensional layered chern insulator”, Phys. Rev. Lett. 116, 066401 (2016).
* Kuno [2019] Y. Kuno, “Disorder-induced chern insulator in the harper-hofstadter-hatsugai model”, Phys. Rev. B 100, 054108 (2019).
* Jiang _et al._ [2009] H. Jiang, L. Wang, Q.-f. Sun, and X. C. Xie, “Numerical study of the topological anderson insulator in hgte/cdte quantum wells”, Phys. Rev. B 80, 165316 (2009).
* Groth _et al._ [2009] C. W. Groth, M. Wimmer, A. R. Akhmerov, J. Tworzydło, and C. W. J. Beenakker, “Theory of the topological anderson insulator”, Phys. Rev. Lett. 103, 196805 (2009).
* Guo _et al._ [2011] H. Guo, S. Feng, and S.-Q. Shen, “Quantum spin hall effect induced by nonmagnetic and magnetic staggered potentials”, Phys. Rev. B 83, 045114 (2011).
* Chen _et al._ [2015a] C.-Z. Chen, H. Liu, H. Jiang, Q.-f. Sun, Z. Wang, and X. C. Xie, “Tunable anderson metal-insulator transition in quantum spin-hall insulators”, Phys. Rev. B 91, 214202 (2015a).
* Chen _et al._ [2017a] R. Chen, D.-H. Xu, and B. Zhou, “Disorder-induced topological phase transitions on lieb lattices”, Phys. Rev. B 96, 205304 (2017a).
* Xing _et al._ [2011] Y. Xing, L. Zhang, and J. Wang, “Topological anderson insulator phenomena”, Phys. Rev. B 84, 035110 (2011).
* Orth _et al._ [2016] C. P. Orth, T. Sekera, C. Bruder, and T. L. Schmidt, “The topological anderson insulator phase in the kane-mele model”, Sci. Rep. 6, 24007 (2016).
* Guo _et al._ [2010] H.-M. Guo, G. Rosenberg, G. Refael, and M. Franz, “Topological anderson insulator in three dimensions”, Phys. Rev. Lett. 105, 216601 (2010).
* Guo [2010] H.-M. Guo, “Topological invariant in three-dimensional band insulators with disorder”, Phys. Rev. B 82, 115122 (2010).
* Mondragon-Shem _et al._ [2014] I. Mondragon-Shem, T. L. Hughes, J. Song, and E. Prodan, “Topological criticality in the chiral-symmetric aiii class at strong disorder”, Phys. Rev. Lett. 113, 046802 (2014).
* Chen _et al._ [2015b] C.-Z. Chen, J. Song, H. Jiang, Q.-f. Sun, Z. Wang, and X. C. Xie, “Disorder and metal-insulator transitions in weyl semimetals”, Phys. Rev. Lett. 115, 246603 (2015b).
* Shapourian and Hughes [2016] H. Shapourian and T. L. Hughes, “Phase diagrams of disordered weyl semimetals”, Phys. Rev. B 93, 075108 (2016).
* Chen _et al._ [2017b] R. Chen, D.-H. Xu, and B. Zhou, “Topological anderson insulator phase in a dirac-semimetal thin film”, Phys. Rev. B 95, 245305 (2017b).
* Chen _et al._ [2018a] R. Chen, C.-Z. Chen, J.-H. Sun, B. Zhou, and D.-H. Xu, “Phase diagrams of weyl semimetals with competing intraorbital and interorbital disorders”, Phys. Rev. B 97, 235109 (2018a).
* Chen _et al._ [2018b] R. Chen, D.-H. Xu, and B. Zhou, “Floquet topological insulator phase in a weyl semimetal thin film with disorder”, Phys. Rev. B 98, 235159 (2018b).
* Sriluckshmy _et al._ [2018] P. V. Sriluckshmy, K. Saha, and R. Moessner, “Interplay between topology and disorder in a two-dimensional semi-dirac material”, Phys. Rev. B 97, 024204 (2018).
* Borchmann _et al._ [2016] J. Borchmann, A. Farrell, and T. Pereg-Barnea, “Anderson topological superconductor”, Phys. Rev. B 93, 125133 (2016).
* Qin _et al._ [2016] W. Qin, D. Xiao, K. Chang, S.-Q. Shen, and Z. Zhang, “Disorder-induced topological phase transitions in two-dimensional spin-orbit coupled superconductors”, Sci. Rep. 6, 39188 (2016).
* Lieu _et al._ [2018] S. Lieu, D. K. K. Lee, and J. Knolle, “Disorder protected and induced local zero-modes in longer-range kitaev chains”, Phys. Rev. B 98, 134507 (2018).
* Hua _et al._ [2019] C.-B. Hua, R. Chen, D.-H. Xu, and B. Zhou, “Disorder-induced majorana zero modes in a dimerized kitaev superconductor chain”, Phys. Rev. B 100, 205302 (2019).
* Tang _et al._ [2020] L.-Z. Tang, L.-F. Zhang, G.-Q. Zhang, and D.-W. Zhang, “Topological anderson insulators in two-dimensional non-hermitian disordered systems”, Phys. Rev. A 101, 063612 (2020).
* Meier _et al._ [2018] E. J. Meier, F. A. An, A. Dauphin, M. Maffei, P. Massignan, T. L. Hughes, and B. Gadway, “Observation of the topological anderson insulator in disordered atomic wires”, Science 362, 929 (2018).
* Stützer _et al._ [2018] S. Stützer, Y. Plotnik, Y. Lumer, P. Titum, N. H. Lindner, M. Segev, M. C. Rechtsman, and A. Szameit, “Photonic topological anderson insulators”, Nature (London) 560, 461 (2018).
* Liu _et al._ [2020] G.-G. Liu, Y. Yang, X. Ren, H. Xue, X. Lin, Y.-H. Hu, _et al._ , “Topological anderson insulator in disordered photonic crystals”, Phys. Rev. Lett. 125, 133603 (2020).
* Chen _et al._ [2019] R. Chen, D.-H. Xu, and B. Zhou, “Topological anderson insulator phase in a quasicrystal lattice”, Phys. Rev. B 100, 115311 (2019).
* Peng _et al._ [2021] T. Peng, C.-B. Hua, R. Chen, D.-H. Xu, and B. Zhou, “Topological anderson insulators in an ammann-beenker quasicrystal and a snub-square crystal”, Phys. Rev. B 103, 085307 (2021).
* Hua _et al._ [2021] C.-B. Hua, Z.-R. Liu, T. Peng, R. Chen, D.-H. Xu, and B. Zhou, “Disorder-induced chiral and helical majorana edge modes in a two-dimensional ammann-beenker quasicrystal”, Phys. Rev. B 104, 155304 (2021).
* Li _et al._ [2020] C.-A. Li, B. Fu, Z.-A. Hu, J. Li, and S.-Q. Shen, “Topological phase transitions in disordered electric quadrupole insulators”, Phys. Rev. Lett. 125, 166801 (2020).
* Yang _et al._ [2021] Y.-B. Yang, K. Li, L.-M. Duan, and Y. Xu, “Higher-order topological anderson insulators”, Phys. Rev. B 103, 085408 (2021).
* Franca _et al._ [2019] S. Franca, D. V. Efremov, and I. C. Fulga, “Phase-tunable second-order topological superconductor”, Phys. Rev. B 100, 075415 (2019).
* Zhang _et al._ [2021a] Z.-Q. Zhang, B.-L. Wu, C.-Z. Chen, and H. Jiang, “Global phase diagram of disordered higher-order weyl semimetals”, Phys. Rev. B 104, 014203 (2021a).
* Zhang _et al._ [2021b] W. Zhang, D. Zou, Q. Pei, W. He, J. Bao, H. Sun, and X. Zhang, “Experimental observation of higher-order topological anderson insulators”, Phys. Rev. Lett. 126, 146802 (2021b).
* Kang _et al._ [2019] B. Kang, K. Shiozaki, and G. Y. Cho, “Many-body order parameters for multipoles in solids”, Phys. Rev. B 100, 245134 (2019).
* Wheeler _et al._ [2019] W. A. Wheeler, L. K. Wagner, and T. L. Hughes, “Many-body electric multipole operators in extended systems”, Phys. Rev. B 100, 245135 (2019).
* Li and Wu [2020] C.-A. Li and S.-S. Wu, “Topological states in generalized electric quadrupole insulators”, Phys. Rev. B 101, 195309 (2020).
* Agarwala _et al._ [2020] A. Agarwala, V. Juričić, and B. Roy, “Higher-order topological insulators in amorphous solids”, Phys. Rev. Research 2, 012067 (2020).
* Ono _et al._ [2019] S. Ono, L. Trifunovic, and H. Watanabe, “Difficulties in operator-based formulation of the bulk quadrupole moment”, Phys. Rev. B 100, 245133 (2019).
* Araki _et al._ [2019] H. Araki, T. Mizoguchi, and Y. Hatsugai, “Phase diagram of a disordered higher-order topological insulator: A machine learning study”, Phys. Rev. B 99, 085406 (2019).
* Park _et al._ [2017] M. J. Park, B. Basa, and M. J. Gilbert, “Disorder-induced phase transitions of type-ii weyl semimetals”, Phys. Rev. B 95, 094201 (2017).
* Fu and Kane [2012] L. Fu and C. L. Kane, “Topology, delocalization via average symmetry and the symplectic anderson transition”, Phys. Rev. Lett. 109, 246605 (2012).
* Yoshioka _et al._ [2018] N. Yoshioka, Y. Akagi, and H. Katsura, “Learning disordered topological phases by statistical recovery of symmetry”, Phys. Rev. B 97, 205110 (2018).
¶ Denotes equal contribution
# Molecular van der Waals fluids in cavity quantum electrodynamics
John P. Philbin¶<EMAIL_ADDRESS>Harvard John A. Paulson School of
Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
College of Letters and Science, University of California, Los Angeles, CA
90095, USA Tor S. Haugland¶ Department of Chemistry, Norwegian University of
Science and Technology, 7491 Trondheim, Norway Tushar K. Ghosh¶ Department of
Chemistry, Purdue University, West Lafayette, IN 47907, USA Enrico Ronca
Dipartimento di Chimica, Biologia e Biotecnologie, Università degli Studi di
Perugia, Via Elce di Sotto, 8, 06123, Perugia, Italy Max Planck Institute for
the Structure and Dynamics of Matter and Center for Free-Electron Laser Science,
Luruper Chaussee 149, 22761 Hamburg, Germany Ming Chen<EMAIL_ADDRESS>Department of Chemistry, Purdue University, West Lafayette, IN 47907, USA
Prineha Narang<EMAIL_ADDRESS>Harvard John A. Paulson School of Engineering
and Applied Sciences, Harvard University, Cambridge, MA 02138, USA College of
Letters and Science, University of California, Los Angeles, CA 90095, USA
Henrik Koch<EMAIL_ADDRESS>Department of Chemistry, Norwegian University
of Science and Technology, 7491 Trondheim, Norway Scuola Normale Superiore,
Piazza dei Cavalieri, 7, 56124 Pisa, Italy
###### Abstract
Intermolecular van der Waals interactions are central to chemical and physical
phenomena ranging from biomolecule binding to soft-matter phase transitions.
However, there are currently very limited approaches to manipulate van der
Waals interactions. In this work, we demonstrate that strong light-matter
coupling can be used to tune van der Waals interactions, and, thus, control
the thermodynamic properties of many-molecule systems. Our analyses reveal
orientation-dependent single-molecule energies and interaction energies for
van der Waals molecules (for example, H2); in particular, we find intermolecular
interactions that depend on the distance between the molecules $R$ as $R^{-3}$
and $R^{0}$. Moreover, we employ non-perturbative ab initio cavity quantum
electrodynamics calculations to develop machine learning-based interaction
potentials for molecules inside optical cavities. By simulating systems
ranging from $12$ H2 to $144$ H2 molecules, we demonstrate that strong light-
matter coupling can tune the structural and thermodynamic properties of
molecular fluids. In particular, we observe varying degrees of orientational
order as a consequence of cavity-modified interactions, and we explain how
quantum nuclear effects, light-matter coupling strengths, number of cavity
modes, molecular anisotropies, and system size all impact the extent of
orientational order. These simulations and analyses demonstrate both local and
collective effects induced by strong light-matter coupling and open new paths
for controlling the properties of molecular clusters.
Van der Waals interactions are ubiquitous in chemistry and physics, playing
important roles in diverse scientific fields ranging from DNA base stacking to
2D material interlayer interactions.Hobza and Šponer (2002); Novoselov _et
al._ (2016); Sternbach _et al._ (2021) There has been a long history of
attempting to elucidate the origin of van der Waals interactions;Maitland _et
al._ (1981); Stone (2013) the first quantum mechanical derivation was
performed by London in the 1930s using second-order perturbation theory.London
(1937) London found that two molecules that do not have permanent dipoles
(e.g. H2), which we refer to as van der Waals molecules, have an attractive
interaction between them that scales with the distance between the molecules
$R$ as $R^{-6}$.London (1937) This $R^{-6}$ attractive force is commonly used
as the long-distance asymptotic form of van der Waals interactions in many
force fields and to correct van der Waals interactions in ab initio
calculations, which have both achieved great successes in modeling
thermodynamic properties in a variety of systems.Halgren (1992); Grimme _et
al._ (2010) Despite van der Waals interactions being central to many
properties of molecular and condensed matter systems, limited approaches have
been proposed to manipulate intermolecular van der Waals interactions.
However, applied electromagnetic fields have been shown to modify van der
Waals interactions between atoms and molecules,Thirunamachandran (1980);
Milonni and Smith (1996); Sherkunov (2009); Fiscelli _et al._ (2020) and
Haugland et al.Haugland _et al._ (2021) recently showed numerically that van
der Waals interactions are significantly altered by strong light-matter
coupling in optical cavities. These studies open the possibility of
controlling the properties and structure of molecular fluids by tuning the
light-matter coupling parameters, namely the coupling strength and frequency.
Figure 1: (A) Schematic representation of the findings from our simulations of
a fluid of H2 molecules outside and inside a cavity. Specifically,
orientational order can be observed inside a cavity whereas the H2 molecules
can rotate freely outside of a cavity. The dashed lines represent the
different intermolecular interaction length scales outside and inside a
cavity. (B) Diagram describing the computational workflow used in this work.
Ab initio cavity QED energies and corresponding symmetry preserving features
(see Fig. S3, Table S3 and Section SIV.A.1 for details of symmetry preserving
features) of many $2$H2 configurations are used to develop neural network-
based intermolecular pair potentials capable of being utilized in path
integral molecular dynamics simulations of fluids of H2 molecules.
The goal of this work is to understand how the structure of molecular van der
Waals fluids can be modulated using enhanced vacuum electromagnetic quantum
fields, and we focus on the impact that a single strongly coupled photon mode
can have on the properties of a model molecular van der Waals fluid. To this
end, we leverage recent developments in cavity quantum electrodynamics (QED)
simulations and neural network pair potentials to simulate molecular fluids of
H2 molecules strongly coupled to a single photon mode (Fig. 1). By analyzing
how cavity-modified single molecule energies and cavity-mediated
intermolecular interactions depend on the orientation of the H2 molecules both
relative to the cavity polarization vector and relative to one another, we can
explain how cavities impact the structure and orientational order of molecular
van der Waals fluids. The findings reported herein should readily be
transferable to other molecules and light-matter regimes (e.g. vibrational
polaritons) given the generality of the cavity QED Hamiltonian used in this
work.Ribeiro _et al._ (2018); Rivera _et al._ (2019); Thomas _et al._
(2019); Li _et al._ (2020); Garcia-Vidal _et al._ (2021); Li _et al._
(2021a) We also discuss how the light-matter coupling strength, number of
cavity modes, temperature, anisotropic polarizabilities of molecules, quantum
nuclear effects, and molecular concentrations can all impact the extent of
orientational order observed in any particular cavity QED experiment.Vahala
(2003); Cortese _et al._ (2017); Joseph _et al._ (2021); Fukushima _et al._
(2022); Sandeep _et al._ (2022)
In molecular dynamics (MD) simulations, the nuclei move along electronic
potential energy surfaces. In the cavity case, where the photon contributions
are added, these surfaces have been termed polaritonic potential energy
surfaces.Galego _et al._ (2015); Lacombe _et al._ (2019); Fregoni _et al._
(2022) In both cases, the total potential energy of $N$ H2 molecules can be
calculated as a many-body expansion,
$E_{\text{total}}=\sum_{A}E_{A}+\sum_{\left\langle
A,B\right\rangle}E_{AB}+{\sum_{\left\langle A,B,C\right\rangle}E_{ABC}}+...,$
(1)
where $E_{A}$ represents the single-molecule energies, $E_{AB}$ represents the
intermolecular interaction energies between all unique pairs of molecules, and
so on for higher-body terms. In this work, we focus on contributions to the
total energy in Eq. 1 arising from at most two-body interactions. The three-
body and higher-body terms are significantly smaller than the two-body
interactions per interaction; see the Supplementary Information (SI) for
details. Outside the cavity, the one-body term does not depend on the
orientation of the H2 molecule. On the other hand, inside the cavity, the
molecule-field interaction causes the one-body energies to depend on the
orientation of the H2 molecules with respect to the optical cavity
polarization vector, $\bm{\varepsilon}$. Furthermore, the two-body energies
depend on the relative orientation of the two molecules as well as their
orientation relative to the field as a consequence of the anisotropic
polarizability of H2 molecules, in contrast to isotropic polarizabilities of
atoms.Thirunamachandran (1980); Milonni and Smith (1996); Sherkunov (2009);
Fiscelli _et al._ (2020)
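To make the truncation of Eq. 1 explicit, the total energy can be assembled from one- and two-body contributions as in the following sketch; the callables E1 and E2 stand in for the trained potentials, and their interfaces are illustrative assumptions rather than our actual code.

```python
from itertools import combinations

def total_energy(molecules, E1, E2):
    """Two-body truncation of the many-body expansion in Eq. (1); E1 and
    E2 stand in for the trained one-body and pair potentials (their
    interfaces are illustrative assumptions)."""
    energy = sum(E1(mol) for mol in molecules)            # one-body terms E_A
    energy += sum(E2(a, b)                                 # pair terms E_AB
                  for a, b in combinations(molecules, 2))
    return energy   # three-body and higher terms are neglected (see SI)
```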
Figure 2: (A-B) Snapshots taken at thermal equilibrium from molecular dynamics
(MD) simulations in the case of (A) no cavity (orange) and (B) cavity-modified
one-body and two-body terms (blue). (C) The impact of quantum nuclear effects
is demonstrated by comparing the angular probability distribution function
$P\left(\theta_{A\varepsilon}\right)$ of the angle between the molecular bond
axis and the cavity polarization vector ($\theta_{A\varepsilon}$) for path
integral molecular dynamics (PIMD) simulations of H2, D2, and T2 and a
classical MD simulation of H2. (D) The angular probability distribution
function $P\left(\theta_{AB}\right)$ of the angle between the bond axes of
molecules $A$ and $B$ ($\theta_{AB}$) and (E)
$P\left(\theta_{A\varepsilon}\right)$ are
shown for PIMD simulations for the no-cavity (orange), cavity (blue), and cavity-
modified one-body term but no cavity two-body term (green) cases. (F)
$P\left(\theta_{A\varepsilon}\right)$ is shown for two different PIMD
simulations containing different numbers of H2 molecules within the same
cavity volume (i.e. changing the molecular density). All PIMD simulations
shown in this figure were performed using neural networks trained with CCSD
(no cavity) or QED-CCSD-12-SD1 with $\lambda=0.1$ a.u. (cavity) calculated
energies. All entropic contributions to angle distribution functions are
removed.
We calculate $E_{A}$ and $E_{AB}$ by solving the Schrödinger equation for the
cavity QED Hamiltonian in the dipole approximation with a single photon mode
using accurate coupled cluster (QED-CCSD-12-SD1) and near exact full
configuration interaction (QED-FCI-5).Haugland _et al._ (2020) Our single
photon mode has a coupling constant of $\lambda=0.1$ a.u. and energy of
$\hbar\omega_{c}=13.6$ eV unless specified otherwise. This coupling constant
is rather large, as it corresponds to the coupling of at least $5$ independent
modes, each with an effective volume of $0.9$ nm3. We detail below how the
cavity-modified local interactions and cavity-induced collective effects
depend on $\lambda$. More than $100,000$ H2 dimer configurations are used as
inputs to a fully-connected neural network that serves as our intermolecular
pair potential, which is trained and tested against the calculated energies.
The trained potential energy functions were carefully tested, and, in the SI,
we demonstrate that our machine learning models are fully capable of
reproducing the potential energy surfaces. In Fig. 1B, we show the
computational workflow used in this work schematically. In this study, we
focus on path integral molecular dynamics (PIMD) simulations in order to
account for quantum nuclear effects. Our PIMD simulations of fluids of H2
molecules were performed with a fixed number of molecules ($N$), temperature
($T$), and volume ($V$). All PIMD simulations presented herein were performed
with a molecular density of $13$ molecules per nm3, temperature of $70$ K, and
$N=12$ unless otherwise specified. More details on the simulations, including
comparisons of QED-CCSD-12-SD1 with QED-FCI-5, comparisons of MD with PIMD,
and additional parameter regimes (e.g. smaller $\lambda$ values), are provided
in the SI.
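For concreteness, a minimal sketch of such a fully connected pair potential and its supervised training step is given below (in PyTorch); the layer widths, activation, and feature dimension are illustrative assumptions, and the actual architecture and symmetry-preserving features are described in the SI.

```python
import torch
import torch.nn as nn

class PairPotential(nn.Module):
    """Fully connected network mapping symmetry-preserving features of an
    H2 dimer to its interaction energy; layer widths, activation, and the
    feature dimension here are illustrative assumptions (see SI)."""

    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, features):         # features: (batch, n_features)
        return self.net(features).squeeze(-1)

model = PairPotential()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(features, energies):
    """One supervised step against the ab initio (e.g. QED-CCSD-12-SD1)
    pair energies."""
    optimizer.zero_grad()
    loss = loss_fn(model(features), energies)
    loss.backward()
    optimizer.step()
    return loss.item()
```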
Figure 3: (A) Energy difference, $\Delta E$, between a single H2 molecule
inside a cavity aligned perfectly along the cavity polarization vector,
$\bm{\varepsilon}$, and different angles relative to the cavity polarization
vector. The inset shows the energy of a single molecule within a cavity
increases with $\lambda^{2}$. (B) Intermolecular interaction energies,
$E_{AB}$, and fits to a Lennard-Jones type potential given by Eq. 3 (dashed
lines) and a cavity-modified Lennard-Jones type potential given by Eq. 4 (solid
line). (C) Intermolecular interaction energies, $E_{AB}$, at $25$ Å for
various high symmetry molecular orientations and cavity polarizations. All
calculations shown in this figure were performed using QED-CCSD-12-SD1 with
$\lambda=0.1$ a.u.
The structural properties of the molecular van der Waals fluids are analyzed
using PIMD simulation trajectories. In Fig. 2, we summarize the main
findings of our PIMD and classical MD simulations. Figures 2A and 2B show
representative thermal equilibrium configurations for the no-cavity
(orange) and cavity (blue) scenarios, respectively. The impact of the
cavity-modified interactions is observable in the orientational order of the
H2 molecules both relative to the cavity polarization vector
($\theta_{A\varepsilon}$, Figs. 2C, E, and F) and relative to other H2
molecules ($\theta_{AB}$, Fig. 2D). Specifically, Figs. 2C-F
show that the cavity-modified energies enhance the probability of finding two
molecules oriented parallel to one another (i.e. $\theta_{AB}=0,\pi$) and
perpendicular to the cavity polarization vector (i.e.
$\theta_{A\varepsilon}=\frac{\pi}{2}$). However, the extent of this
orientational order depends on many factors, including the magnitude of
quantum nuclear effects, the light-matter coupling strengths, molecular
anisotropies, and number of molecules. To elucidate the importance of quantum
nuclear effects, we compare the orientational order observed in PIMD
simulations of H2, D2, and T2 with a classical MD simulation of H2 in
Fig. 2C; the degree of orientational order monotonically increases upon
increasing the molecular masses from H2 to D2 to T2 (which reduces quantum
nuclear effects) and is further enhanced when quantum nuclear effects are
completely removed as in the classical MD simulation. Next, in Figs. 2D-F,
we show how cavity-modified one-body energies and two-body
intermolecular energies each impact the orientational order. Figures 2D
and 2E demonstrate that the cavity-modified one-body energies are
the dominant driver of the orientational order for the case of $12$ H2
molecules. The orange lines in Figs. 2D,E show that the H2 molecules
have no preferred orientation axis outside the cavity, consistent with the
global rotational symmetry of the electronic and nuclear Hamiltonian in
the absence of the cavity. However, the presence of the bilinear coupling and
dipole self-energy terms break this symmetry such that H2 molecules prefer to
orient their bond axis in specific orientations relative to the cavity
polarization vector and relative to one another. In particular, the dipole
self-energy term outcompetes the bilinear coupling term and is responsible for
the $12$ molecule simulations preferentially aligning perpendicular to the
cavity polarization vector (Fig. 3A). However, Figs. 2E,F demonstrate
that the cavity-modified one-body energies lead to this perpendicular
alignment, whereas the cavity-modified two-body intermolecular interactions
attempt to align the molecules parallel to the cavity polarization vector.
Specifically, the green line in Fig. 2E shows that the cavity-modified
one-body term causes H2 molecules to preferentially align perpendicular to the
cavity polarization vector (i.e. $\theta_{A\varepsilon}=\frac{\pi}{2}$), and
the inclusion of cavity-modified two-body interactions begins to counteract
this effect, as seen in the blue line in Fig. 2E, reducing the
orientational alignment. This effect of the two-body interactions, which cause
the H2 molecules to preferentially align parallel to the cavity polarization
vector (i.e. $\theta_{A\varepsilon}=0,\pi$), and the collective nature of the
cavity-modified intermolecular interactions are highlighted in Fig. 2F
and Fig. S13. We find that for a small number of molecules (e.g. $N=12$) the
one-body term dominates and the molecules preferentially align perpendicular
to the cavity polarization vector, but as $N$ increases to $144$ H2 molecules
with a fixed coupling and cavity volume, the orientational order is lost due
to the cavity-modified one-body and two-body effects perfectly canceling one
another. Additionally, the extent of orientational order induced by the cavity
decreases as the light-matter coupling strength decreases, as shown in Fig. S8
and explained analytically below.
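The angular distributions in Fig. 2 can be obtained from the simulation trajectories as sketched below; dividing out the $\sin\theta$ solid-angle weight removes the entropic contribution, so a freely rotating molecule yields a flat distribution. The array layouts are assumptions, not the analysis code used for this work.

```python
import numpy as np

def angle_distribution(bond_vectors, eps, n_bins=60):
    """P(theta_A_eps) with the entropic sin(theta) weight divided out,
    so that a free rotor gives a flat distribution (cf. Fig. 2).

    bond_vectors : (n_frames, n_molecules, 3) unit H-H bond vectors
    eps          : (3,) cavity polarization unit vector
    """
    cos_t = np.einsum('fmi,i->fm', bond_vectors, eps).ravel()
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    hist, edges = np.histogram(theta, bins=n_bins,
                               range=(0.0, np.pi), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist / np.sin(centers)   # remove solid-angle factor
```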
Although we performed non-perturbative ab initio cavity QED calculations,
perturbation theory can be used to further analyze and explain the major
findings of our PIMD and MD simulations. We summarize our key findings here
and in Fig. 3, and the complete analysis is provided in the SI. The cavity
modifications to the one-body energies, $E_{A}$, results in the H2 molecules
aligning their bonds orthogonal to the cavity polarization. This occurs
because H2 is most polarizable along its bond axis, and, from perturbation
theory, we can obtain an expression for the cavity-modified one-body energy as
$E_{A}^{\text{cavity}}\approx E_{A}^{\text{no
cavity}}+c\,(\alpha_{\parallel}\cos^{2}{\theta_{A\varepsilon}}+\alpha_{\perp}\sin^{2}{\theta_{A\varepsilon}}),$
(2)
where $\alpha_{\parallel}$ and $\alpha_{\perp}$ are the polarizabilities of
molecular hydrogen along its bond axis and perpendicular axes, respectively,
and $c$ is a positive scalar constant proportional to the molecule-cavity
coupling squared (i.e. $c\propto\lambda^{2}$). Eq. 2 is in agreement with the
ab initio calculations shown in Fig. 3A. Interestingly, the dipole self-energy
term increases the energy of a single molecule in a cavity more than the
bilinear coupling term decreases the energy (Eq. S12); thus, the lowest energy
orientation of a single molecule in a cavity is such that its most polarizable
axis is perpendicular to the cavity polarization vector (or vectors in terms
of multimode cavities).
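A quick numerical illustration of Eq. 2 makes the preferred orientation explicit: since $\alpha_{\parallel}>\alpha_{\perp}$ for H2 and $c>0$, the shift is minimized at $\theta_{A\varepsilon}=\pi/2$. In the sketch below, the polarizability values are literature estimates in atomic units and the prefactor is an illustrative assumption, not a value from this work.

```python
import numpy as np

def one_body_shift(theta, lam, alpha_par=6.38, alpha_perp=4.58, c0=0.5):
    """Orientation-dependent one-body shift of Eq. (2). alpha_par and
    alpha_perp are literature estimates (a.u.) for H2; the prefactor
    c = c0 * lam**2 is an illustrative assumption."""
    c = c0 * lam**2
    return c * (alpha_par * np.cos(theta)**2 + alpha_perp * np.sin(theta)**2)

# Since alpha_par > alpha_perp and c > 0, the shift is minimized at
# theta = pi/2: a single molecule prefers its bond perpendicular to eps.
print(one_body_shift(np.linspace(0.0, np.pi, 5), lam=0.1))
```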
In terms of the cavity modifications to the two-body energies, Fig. 3B shows
the intermolecular interaction between two H2 molecules as a function of the
center-to-center distance ($R$). The impact of the cavity on this dissociation
curve at first glance appears modest, even for the rather large light-matter
coupling of $\lambda=0.1$ a.u., but these modifications can impact the
structural and thermodynamic properties of molecular van der Waals systems for
a few reasons. First, a standard intermolecular van der Waals interaction
potential given by
$E_{AB}^{\text{no cavity}}=\frac{c_{6}}{R^{6}}+E_{\text{short-range}},$ (3)
where $E_{\text{short-range}}$ accounts for the short-range repulsion between
van der Waals molecules and the $R^{-6}$ term is the usual attractive London
dispersion interaction, is not applicable inside an optical cavity (Fig.
3B).Thirunamachandran (1980); Milonni and Smith (1996); Sherkunov (2009);
Fiscelli _et al._ (2020) A modified interaction potential that includes
angle-dependent terms that scale as $R^{-3}$ and $R^{0}$ is necessary inside
an optical cavity such that the interaction between two van der Waals
molecules is given by
$E_{AB}^{\text{cavity}}=\frac{c_{0}}{R^{0}}+\frac{c_{3}}{R^{3}}+\frac{c_{6}}{R^{6}}+E_{\text{short-
range}}.$ (4)
These interactions arise already at second order in perturbation theory (see SI
Eq. S9).Thirunamachandran (1980) The $R^{0}$ interaction between a single pair
of molecules is rather weak ($c_{0}\propto\lambda^{4}$) as shown in Fig. 3C.
However, due to its long-range nature, a single molecule interacts with all
other molecules, and, thus, the collective effect of this interaction can
become large in many-molecule simulations. Importantly, this interaction
strength depends on the orientations of both molecular bonds relative to the
cavity polarization (Fig. 3C). Specifically, the interaction energy is
minimized when the molecular bonds of both molecules are parallel to the
cavity polarization vector, because the interaction strength of this term is
approximately related to the product of the polarizability of each molecule
along $\bm{\varepsilon}$
($c_{0}\propto\alpha_{A\varepsilon}\alpha_{B\varepsilon}$). Because
$c_{0}$ is always negative, this $R^{0}$ intermolecular interaction increases
the probability of finding H2 molecules parallel to the cavity polarization
vector and decreases the probability of finding the molecules perpendicular to
the polarization vector (Figs. 2E,F). The collective nature of this
interaction is demonstrated in Fig. 2F and Fig. S13, where the
orientational order depends on the number of H2 molecules for simulations with
the same simulation volume but different molecular densities. At $N=144$, the
orientational order due to the two-body interactions has become so large that
it entirely cancels out the orientational effects from the cavity-modified
one-body energies, which are dominated by dipole self-energy effects for $N=12$
molecules. As $N$ increases further, we expect that the system will completely
flip, and instead align parallel to the polarization vector. This is
demonstrated in the SI (Fig. S13), but the number of molecules required
($N\geq 1000$) is too large to justify in a realistic system with the coupling
we are using currently. Both the cavity-modified $R^{-6}$ and cavity-induced
$R^{-3}$ interactions scale with $\lambda^{2}$ at lowest order. Importantly,
the $R^{-3}$ interaction is not a result of the cavity inducing a dipole
moment in the H2 molecules but rather an interaction taking place via the
cavity mode. As discussed in the SI in more detail, the intermolecular angle
and molecule-cavity angle dependencies of the perturbation potential combine
to create the orientational order shown throughout Fig. S13.
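To make the difference in asymptotics concrete, a minimal Python sketch of the two functional forms (Eqs. 3 and 4) is given below; the coefficient values are placeholders chosen only for illustration, with the signs absorbed into the $c$'s as in the main text ($c_{0},c_{6}<0$):

```python
import numpy as np

# Placeholder coefficients in the main-text sign convention (the signs are
# carried by the c's; c6 < 0 for attraction and c0 < 0 as discussed above).
# These magnitudes are illustrative only, not fitted values.
c6, c3, c0 = -1.0e2, -1.0e-2, -1.0e-6

R = np.linspace(3.0, 100.0, 500)          # center-to-center distance

E_no_cavity = c6 / R**6                   # Eq. 3 (short-range repulsion omitted)
E_cavity = c0 + c3 / R**3 + c6 / R**6     # Eq. 4 (short-range repulsion omitted)

# At large R the no-cavity curve decays to zero, while the cavity curve
# saturates at the distance-independent c0 term.
print(E_no_cavity[-1], E_cavity[-1])
```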
In summary, we have demonstrated that strong light-matter coupling to a single
photon mode can have profound impacts on the properties of molecular van der
Waals fluids by combining ab initio cavity QED calculations with path integral
molecular dynamics simulations of many H2 molecules. We found that cavity-
modified single molecule and intermolecular interaction energies result in
significantly changed molecular orientational order, even in the fluid phase.
We look forward to seeing future experimental and theoretical studies that aim
to elucidate how processes such as ion and molecular diffusion, intermolecular
energy transfer,Zhong _et al._ (2016); Du _et al._ (2018); Xiang _et al._
(2020) and chemical reactivityHerrera and Spano (2016); Thomas _et al._
(2019); Yang and Cao (2021); Li _et al._ (2021b); Simpkins _et al._ (2021);
Philbin _et al._ (2022) are impacted by the unique properties of molecular
fluids in cavity QED reported here.
###### Acknowledgements.
We thank Jonathan Curtis, Davis Welakuh, Wenjie Dou, and Rosario R. Riso for
helpful discussions. This work was primarily supported by the Department of
Energy, Photonics at Thermodynamic Limits Energy Frontier Research Center,
under Grant No. DE-SC0019140 and European Research Council under the European
Union’s Horizon 2020 Research and Innovation Programme grant agreement No.
101020016. An award of computer time was provided by the INCITE program. This
research also used resources of the Oak Ridge Leadership Computing Facility,
which is a DOE Office of Science User Facility supported under Contract DE-
AC05-00OR22725. J.P.P. also acknowledges support from the Harvard University
Center for the Environment. T.K.G. and M.C. acknowledge support from Purdue
startup funding. T.S.H. and H.K. also acknowledge funding from the Research
Council of Norway through FRINATEK project 275506. P.N. acknowledges support
as a Moore Inventor Fellow through Grant No. GBMF8048 and gratefully
acknowledges support from the Gordon and Betty Moore Foundation as well as
support from an NSF CAREER Award under Grant No. NSF-ECCS-1944085. E.R.
acknowledges funding from the European Research Council (ERC) under the
European Union’s Horizon Europe Research and Innovation Programme (Grant n.
ERC-StG-2021-101040197 - QED-SPIN).
## References
* Hobza and Šponer (2002) P. Hobza and J. Šponer, J. Am. Chem. Soc. 124, 11802 (2002).
* Novoselov _et al._ (2016) K. S. Novoselov, A. Mishchenko, A. Carvalho, and A. H. C. Neto, Science 353, aac9439 (2016).
* Sternbach _et al._ (2021) A. J. Sternbach, S. H. Chae, S. Latini, A. A. Rikhter, Y. Shao, B. Li, D. Rhodes, B. Kim, P. J. Schuck, X. Xu, X. Y. Zhu, R. D. Averitt, J. Hone, M. M. Fogler, A. Rubio, and D. N. Basov, Science 371, 617 (2021).
* Maitland _et al._ (1981) G. C. Maitland, G. D. Maitland, M. Rigby, E. B. Smith, and W. A. Wakeham, _Intermolecular Forces: Their Origin and Determination_ (Oxford University Press, USA, 1981).
* Stone (2013) A. Stone, _The Theory of Intermolecular Forces_, 2nd ed. (Oxford University Press, Oxford, 2013) p. 352.
* London (1937) F. London, Trans. Faraday Soc. 33, 8b (1937).
* Halgren (1992) T. A. Halgren, J. Am. Chem. Soc. 114, 7827 (1992).
* Grimme _et al._ (2010) S. Grimme, J. Antony, S. Ehrlich, and H. Krieg, J. Chem. Phys. 132, 154104 (2010).
* Thirunamachandran (1980) T. Thirunamachandran, Mol. Phys. 40, 393 (1980).
* Milonni and Smith (1996) P. W. Milonni and A. Smith, Phys. Rev. A 53, 3484 (1996).
* Sherkunov (2009) Y. Sherkunov, J. Phys. Conf. Ser. 161, 012041 (2009).
* Fiscelli _et al._ (2020) G. Fiscelli, L. Rizzuto, and R. Passante, Phys. Rev. Lett. 124, 013604 (2020).
* Haugland _et al._ (2021) T. S. Haugland, C. Schäfer, E. Ronca, A. Rubio, and H. Koch, J. Chem. Phys. 154, 094113 (2021).
* Ribeiro _et al._ (2018) R. F. Ribeiro, L. A. Martínez-Martínez, M. Du, J. Campos-Gonzalez-Angulo, and J. Yuen-Zhou, Chem. Sci. 9, 6325 (2018).
* Rivera _et al._ (2019) N. Rivera, J. Flick, and P. Narang, Phys. Rev. Lett. 122, 193603 (2019).
* Thomas _et al._ (2019) A. Thomas, L. Lethuillier-Karl, K. Nagarajan, R. M. A. Vergauwe, J. George, T. Chervy, A. Shalabney, E. Devaux, C. Genet, J. Moran, and T. W. Ebbesen, Science 363, 615 (2019).
* Li _et al._ (2020) T. E. Li, J. E. Subotnik, and A. Nitzan, Proc. Natl. Acad. Sci. U. S. A. 117, 18324 (2020).
* Garcia-Vidal _et al._ (2021) F. J. Garcia-Vidal, C. Ciuti, and T. W. Ebbesen, Science 373, eabd0336 (2021).
* Li _et al._ (2021a) T. E. Li, A. Nitzan, and J. E. Subotnik, Angew. Chemie 133, 15661 (2021a).
* Vahala (2003) K. J. Vahala, Nature 424, 839 (2003).
* Cortese _et al._ (2017) E. Cortese, P. G. Lagoudakis, and S. De Liberato, Phys. Rev. Lett. 119, 043604 (2017).
* Joseph _et al._ (2021) K. Joseph, S. Kushida, E. Smarsly, D. Ihiawakrim, A. Thomas, G. L. Paravicini-Bagliani, K. Nagarajan, R. Vergauwe, E. Devaux, O. Ersen, U. H. F. Bunz, and T. W. Ebbesen, Angew. Chem. Int. Ed. 60, 19665 (2021).
* Fukushima _et al._ (2022) T. Fukushima, S. Yoshimitsu, and K. Murakoshi, J. Am. Chem. Soc. 144, 12177 (2022).
* Sandeep _et al._ (2022) K. Sandeep, K. Joseph, J. Gautier, K. Nagarajan, M. Sujith, K. G. Thomas, and T. W. Ebbesen, J. Phys. Chem. Lett. 13, 1209 (2022).
* Galego _et al._ (2015) J. Galego, F. J. Garcia-Vidal, and J. Feist, Phys. Rev. X 5, 41022 (2015).
* Lacombe _et al._ (2019) L. Lacombe, N. M. Hoffmann, and N. T. Maitra, Phys. Rev. Lett. 123, 083201 (2019).
* Fregoni _et al._ (2022) J. Fregoni, F. J. Garcia-Vidal, and J. Feist, ACS Photonics 9, 1096 (2022).
* Haugland _et al._ (2020) T. S. Haugland, E. Ronca, E. F. Kjønstad, A. Rubio, and H. Koch, Phys. Rev. X 10, 041043 (2020).
* Zhong _et al._ (2016) X. Zhong, T. Chervy, S. Wang, J. George, A. Thomas, J. A. Hutchison, E. Devaux, C. Genet, and T. W. Ebbesen, Angew. Chem. Int. Ed. 55, 6202 (2016).
* Du _et al._ (2018) M. Du, L. A. Martínez-Martínez, R. F. Ribeiro, Z. Hu, V. M. Menon, and J. Yuen-Zhou, Chem. Sci. 9, 6659 (2018).
* Xiang _et al._ (2020) B. Xiang, R. F. Ribeiro, M. Du, L. Chen, Z. Yang, J. Wang, J. Yuen-Zhou, and W. Xiong, Science 368, 665 (2020).
* Herrera and Spano (2016) F. Herrera and F. C. Spano, Phys. Rev. Lett. 116, 238301 (2016).
* Yang and Cao (2021) P. Y. Yang and J. Cao, J. Phys. Chem. Lett. 12, 9531 (2021).
* Li _et al._ (2021b) X. Li, A. Mandal, and P. Huo, Nat. Commun. 12, 1315 (2021b).
* Simpkins _et al._ (2021) B. S. Simpkins, A. D. Dunkelberger, and J. C. Owrutsky, J. Phys. Chem. C 125, 19081 (2021).
* Philbin _et al._ (2022) J. P. Philbin, Y. Wang, P. Narang, and W. Dou, J. Phys. Chem. C 126, 14908 (2022).
* White _et al._ (2020) A. F. White, Y. Gao, A. J. Minnich, and G. K. L. Chan, J. Chem. Phys. 153, 224112 (2020).
* Eisenschitz and London (1930) R. Eisenschitz and F. London, Zeitschrift für Phys. 60, 491 (1930).
* London (1930) F. London, Zeitschrift für Phys. 63, 245 (1930).
* Dahlke and Truhlar (2007) E. E. Dahlke and D. G. Truhlar, J. Chem. Theory Comput. 3, 46 (2007).
* Barron (2017) J. T. Barron, Continuously differentiable exponential linear units (2017), arXiv:1704.07483 .
* Kingma and Ba (2014) D. P. Kingma and J. Ba, Adam: A method for stochastic optimization (2014), arXiv:1412.6980 .
* Paszke _et al._ (2019) A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, in _Advances in Neural Information Processing Systems 32_, edited by H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Curran Associates, Inc., 2019) pp. 8024–8035.
* Bussi and Parrinello (2007) G. Bussi and M. Parrinello, Phys. Rev. E 75, 056707 (2007).
* Ceriotti _et al._ (2009) M. Ceriotti, G. Bussi, and M. Parrinello, Phys. Rev. Lett. 103, 030603 (2009).
* Ceriotti _et al._ (2010a) M. Ceriotti, G. Bussi, and M. Parrinello, J. Chem. Theory Comput. 6, 1170 (2010a).
* Ceriotti _et al._ (2011) M. Ceriotti, D. E. Manolopoulos, and M. Parrinello, J. Chem. Phys. 134, 084104 (2011).
* Ceriotti _et al._ (2010b) M. Ceriotti, M. Parrinello, T. E. Markland, and D. E. Manolopoulos, J. Chem. Phys. 133, 124104 (2010b).
* Ceriotti _et al._ (2014) M. Ceriotti, J. More, and D. E. Manolopoulos, Comput. Phys. Commun. 185, 1019 (2014).
# Supplementary Information: Molecular van der Waals fluids in cavity quantum electrodynamics
###### Contents
1. I Ab Initio Calculations
2. II Perturbation Theory
3. III Many-body Interactions
4. IV Molecular Dynamics
1. IV.1 Training Potential Energy Functions for Simulating Fluids of H2
1. IV.1.1 Neural Network-based Pairwise Interactions
2. IV.1.2 Single Molecule Potential Energies
2. IV.2 Molecular Dynamics
1. IV.2.1 Classical Molecular Dynamics
2. IV.2.2 Path Integral Molecular Dynamics
3. IV.3 Radial Distribution Functions
4. IV.4 Angular Distribution Functions
5. V Additional Results
1. V.1 Comparison of Radial Distribution Functions
2. V.2 Comparison of Classical MD and PIMD
3. V.3 Comparison of QED-FCI-5 and QED-CCSD-12-SD1
4. V.4 $\lambda$ Dependent Molecular Alignment
## I Ab Initio Calculations
The Hamiltonian used in the ab initio calculations is the single mode Pauli-
Fierz Hamiltonian in the length gauge
$H=H_{e}+\lambda\sqrt{\frac{\omega_{c}}{2}}((\bm{d}-\langle\bm{d}\rangle)\cdot\bm{\varepsilon})(b+b^{\dagger})+\frac{\lambda^{2}}{2}((\bm{d}-\langle\bm{d}\rangle)\cdot\bm{\varepsilon})^{2}+\omega_{c}b^{\dagger}b,$ (S1)
where $H_{e}$ is the electronic Hamiltonian, $\lambda$ is the bilinear
coupling, $\omega_{c}$ is the cavity frequency, $\bm{d}$ is the molecular
dipole, $\bm{\varepsilon}$ is the cavity polarization vector, and $b$ and
$b^{\dagger}$ are the photon annihilation and creation operators,
respectively.
All electronic structure calculations are run using an aug-cc-pVDZ basis set.
The optical cavity is described by a single linearly polarized mode; the
coupling parameter $\lambda$ is set to $0.1$ a.u. and the cavity energy
$\hbar\omega_{c}$ is $13.6$ eV, unless otherwise specified.
The large value for the coupling is partially justified by the single mode
approximation. For cavity-induced changes in the ground state, each cavity
mode will, to second order in perturbation theory (see Eq. S12), enter the
energy independently. For larger frequencies, the bilinear contribution from
each mode cancels part of the dipole self-energy. For smaller frequencies
compared to electronic excitation energies, we find that only contributions
from the dipole self-energy are significant. Therefore, in the low-frequency
regime, the coupling from $N_{\rm modes}$ modes is given by an effective
coupling $\lambda^{2}_{\rm eff}\approx N_{\rm modes}\lambda^{2}$.
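For instance, under this low-frequency argument, a hypothetical multimode cavity with $N_{\rm modes}=25$ and a per-mode coupling of $\lambda=0.02$ a.u. would produce the same effective coupling as the single-mode value used here:

$\lambda_{\rm eff}=\sqrt{N_{\rm modes}}\,\lambda=\sqrt{25}\times 0.02\ {\rm a.u.}=0.1\ {\rm a.u.}$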
As shown and discussed in Ref. Haugland _et al._ (2021), cavity quantum
electrodynamics Hartree-Fock (QED-HF) and current QED density functional
theory (QEDFT) implementations do not describe intermolecular forces properly;
in particular, they fail to predict an attractive van der Waals interaction
between molecules. Therefore, we
performed the ab initio simulations with QED coupled cluster (QED-CCSD-12-SD1)
and QED full configuration interaction (QED-FCI).White _et al._ (2020) QED-
CCSD-12-SD1 is an extension of QED-CCSD-1, as described in Ref. Haugland _et
al._ (2020), with two-photon excitations. The QED-CCSD-12-SD1 cluster operator
is
$T=T_{1}+T_{2}+S_{1}b^{\dagger}+S_{2}b^{\dagger}+\gamma_{1}b^{\dagger}+\gamma_{2}(b^{\dagger})^{2},$ (S2)
where $T_{1}$ and $T_{2}$ are singles and doubles electron excitations,
$S_{1}b^{\dagger}$ and $S_{2}b^{\dagger}$ are singles and doubles coupled
electron-photon excitations, and $\gamma_{1}b^{\dagger}$ and
$\gamma_{2}(b^{\dagger})^{2}$ are singles and doubles photon excitations. The
reference state is QED-HF as described in Ref. Haugland _et al._ (2020). QED-
FCI calculations are run with up to five photons (QED-FCI-5) to ensure that
the energy with respect to photon number is converged.
We use QED-CCSD-12-SD1 instead of QED-CCSD-1 (equivalent to QED-CCSD-1-SD1)
because the two-photon excitations are important for properly modeling the
two-body interactions, as tested against QED-FCI-5 calculations. Without two-
photon excitations, the two-body interactions have the wrong sign in the case
of molecules separated by large distances (e.g. molecules separated by more
than $1$ nm). This is visualized in Fig. S1.
Figure S1: Calculated intermolecular interaction energies for a C2v
configuration of two H2 molecules with the cavity polarization vector parallel
to the center-to-center intermolecular distance vector. All calculations shown
in this figure were performed with $\lambda=0.1$ a.u.
In all of our calculations, we use a linearly polarized optical cavity with a
single photon frequency and single polarization vector. In most experiments to
date, the optical cavity is not limited to just one polarization, but rather
it hosts two degenerate cavity modes with orthogonal polarizations (both
cavity mode polarization vectors are perpendicular to the cavity wavevector).
Since the molecular orientation aligns with the transversal polarization, we
expect that a standard optical cavity, which has both polarizations, will
interact with the system differently. In particular, we
expect that for few molecules, the molecules will orient along the wavevector
$\bm{k}$, perpendicular to both cavity polarization vectors. For many
molecules, we expect that the molecules will align perpendicular to $\bm{k}$,
in the plane defined from the two transversal polarization vectors.
## II Perturbation Theory
As we demonstrate throughout this work, strong coupling to a single photon
mode fundamentally changes the length scales and orientational dependence in
which van der Waals molecules interact with one another. In this section, we
explain these observations by performing perturbation theory in a similar
spirit as Fritz London did in 1930Eisenschitz and London (1930); London (1930,
1937) but with additional perturbative potentials associated with coupling to
the cavity. This analysis reveals cavity-mediated intermolecular interactions
between van der Waals molecules that scale as $R^{-3}$, distance-independent
($R^{0}$) interactions, and modifications to the London dispersion forces that
have an $R^{-6}$ dependence.Thirunamachandran (1980);
Milonni and Smith (1996); Sherkunov (2009); Fiscelli _et al._ (2020)
The total Hamiltonian is given by $H=H^{0}+H^{1}$ with
$H^{0}=H_{e,A}+H_{e,B}+\omega_{c}b^{\dagger}b$ (S3)
where $b^{\dagger}$ and $b$ are photon creation and annihilation operators for
the cavity mode of frequency $\omega_{c}$ and $H_{e,A}$ and $H_{e,B}$ refer to
the electronic Hamiltonians of molecules $A$ and $B$, respectively. The
perturbative Hamiltonian ($H^{1}$) includes the dipolar coupling between
molecules $A$ and $B$, in the spirit of London’s first derivation of van der
Waals interactions, and the light-matter coupling to a single cavity mode
$H^{1}=-\frac{\bm{d}_{A}\cdot\bm{d}_{B}}{R^{3}}+\frac{3(\bm{d}_{A}\cdot\bm{R})(\bm{d}_{B}\cdot\bm{R})}{R^{5}}+\lambda\sqrt{\frac{\omega_{c}}{2}}(\bm{\varepsilon}\cdot\Delta\bm{d}_{A}+\bm{\varepsilon}\cdot\Delta\bm{d}_{B})(b+b^{\dagger})+\frac{\lambda^{2}}{2}(\bm{\varepsilon}\cdot\Delta\bm{d}_{A}+\bm{\varepsilon}\cdot\Delta\bm{d}_{B})^{2}$ (S4)
where $\Delta\bm{d}_{A}=\bm{d}_{A}-\langle\bm{d}_{A}\rangle$ and
$\Delta\bm{d}_{B}=\bm{d}_{B}-\langle\bm{d}_{B}\rangle$ are the fluctuations of
molecule $A$ and molecule $B$’s dipoles, respectively and $\bm{d}_{A}$ and
$\bm{d}_{B}$ are the dipole operators for molecule $A$ and molecule $B$,
respectively. Recall that in this work we consider van der Waals molecules,
which have no permanent dipoles (i.e.
$\langle\bm{d}_{A}\rangle=\langle\bm{d}_{B}\rangle=0$).
The first-order correction to the energy is given by
$E^{1}=\matrixelement{g}{H^{1}}{g}$ (S5)
where $\ket{g}$ denotes the ground state of the total system,
$\ket{g}=\ket{g_{A}}\ket{g_{B}}\ket{g_{c}}$ where molecule $A$, molecule $B$,
and the cavity are in their ground states. In this illustrative perturbation
theory, we are interested in the asymptotic behavior for when molecule $A$ and
molecule $B$ are far away from one another; thus, the antisymmetry of the
total electronic wavefunctions is ignored. Substituting in Eq. S4 into Eq. S5,
we obtain
$\displaystyle E^{1}$
$\displaystyle=\frac{\lambda^{2}}{2}(\matrixelement{g_{A}}{\left(\bm{d_{A}}\cdot\bm{\varepsilon}\right)^{2}}{g_{A}}+\matrixelement{g_{B}}{\left(\bm{d_{B}}\cdot\bm{\varepsilon}\right)^{2}}{g_{B}})$
$\displaystyle=\frac{\lambda^{2}}{2}(E^{1}_{A}+E^{1}_{B})$ (S6)
where
$E^{1}_{A}=\matrixelement{g_{A}}{\left(\bm{d_{A}}\cdot\bm{\varepsilon}\right)^{2}}{g_{A}}$
and
$E^{1}_{B}=\matrixelement{g_{B}}{\left(\bm{d_{B}}\cdot\bm{\varepsilon}\right)^{2}}{g_{B}}$
are the dipole self-energies of molecule $A$ and molecule $B$, respectively.
In Eq. S6 we have used the facts that there are no photons in the ground state
of the cavity ($\matrixelement{g_{c}}{b^{\dagger}b}{g_{c}}=0$) and that for
van der Waals molecules, by definition, there is no permanent dipole
($\matrixelement{g_{A}}{\bm{d}_{A}}{g_{A}}=\langle\bm{d}_{A}\rangle=0$ and
$\matrixelement{g_{B}}{\bm{d}_{B}}{g_{B}}=\langle\bm{d}_{B}\rangle=0$). The
fact that molecules $A$ and $B$ do not have permanent dipoles allows us to
express $E^{1}_{A}$ and $E^{1}_{B}$ with a different formula, i.e.
$\displaystyle E^{1}_{A}$
$\displaystyle=\matrixelement{g_{A}}{\left(\bm{d_{A}}\cdot\bm{\varepsilon}\right)^{2}}{g_{A}}$
(S7)
$\displaystyle=\matrixelement{g_{A}}{\left(\bm{d_{A}}\cdot\bm{\varepsilon}\right)\hat{I}\left(\bm{d_{A}}\cdot\bm{\varepsilon}\right)}{g_{A}}$
$\displaystyle=\sum_{e_{A}}|\matrixelement{e_{A}}{\left(\bm{d_{A}}\cdot\bm{\varepsilon}\right)}{g_{A}}|^{2}\;\;,$
where $\ket{e_{A}}$ is an excited state of molecule $A$. An important
observation here is that both $E^{1}_{A}$ and $E^{1}_{B}$ are single molecule
terms and are always positive; we will return to these facts after deriving
the second-order energy correction.
The second-order correction to the energy is given by
$E^{2}=-\sum_{e}\frac{\left|\matrixelement{e}{H^{1}}{g}\right|^{2}}{E_{e}-E_{g}}$ (S8)
where $\ket{g}$ is the ground state of the bi-molecule system with energy
$E_{g}$ and $\ket{e}$ indicates an excited state of the bi-molecule system
with energy $E_{e}$. Substituting Eq. S4 into Eq. S8 along with some
simplifications we obtain the second-order correction to the energy to be
$\displaystyle E^{2}=$
$\displaystyle-\sum_{e_{A}e_{B}}\frac{\left|\matrixelement{e_{A}e_{B}}{V_{AB}}{g_{A}g_{B}}\right|^{2}}{E_{e_{A}}-E_{g_{A}}+E_{e_{B}}-E_{g_{B}}}-\lambda^{2}\sum_{e_{A}e_{B}}\frac{\matrixelement{e_{A}e_{B}}{V_{AB}}{g_{A}g_{B}}\matrixelement{e_{A}}{\bm{d_{A}}\cdot\bm{\varepsilon}}{g_{A}}\matrixelement{e_{B}}{\bm{d_{B}}\cdot\bm{\varepsilon}}{g_{B}}}{E_{e_{A}}-E_{g_{A}}+E_{e_{B}}-E_{g_{B}}}$
$\displaystyle-\frac{\lambda^{2}\omega_{c}}{2}\left[\sum_{e_{A}}\frac{\left|\matrixelement{e_{A}}{\bm{d_{A}}\cdot\bm{\varepsilon}}{g_{A}}\right|^{2}}{\omega_{c}+E_{e_{A}}-E_{g_{A}}}+\sum_{e_{B}}\frac{\left|\matrixelement{e_{B}}{\bm{d_{B}}\cdot\bm{\varepsilon}}{g_{B}}\right|^{2}}{\omega_{c}+E_{e_{B}}-E_{g_{B}}}\right]$
$\displaystyle-\frac{\lambda^{4}}{4}\left[\sum_{e_{A}}\frac{\left|\matrixelement{e_{A}}{\left(\bm{d_{A}}\cdot\bm{\varepsilon}\right)^{2}}{g_{A}}\right|^{2}}{E_{e_{A}}-E_{g_{A}}}+\sum_{e_{B}}\frac{\left|\matrixelement{e_{B}}{\left(\bm{d_{B}}\cdot\bm{\varepsilon}\right)^{2}}{g_{B}}\right|^{2}}{E_{e_{B}}-E_{g_{B}}}+4\sum_{e_{A}e_{B}}\frac{\left|\matrixelement{e_{A}}{\left(\bm{d_{A}}\cdot\bm{\varepsilon}\right)}{g_{A}}\right|^{2}\left|\matrixelement{e_{B}}{\left(\bm{d_{B}}\cdot\bm{\varepsilon}\right)}{g_{B}}\right|^{2}}{E_{e_{A}}-E_{g_{A}}+E_{e_{B}}-E_{g_{B}}}\right]$
$\displaystyle=E^{2}_{AB,d^{0}}+\lambda^{2}E^{2}_{AB,d^{1}}+\frac{\lambda^{2}}{2}(E^{2}_{A,d^{1}}+E^{2}_{B,d^{1}})+\frac{\lambda^{4}}{4}(E^{2}_{A,d^{2}}+E^{2}_{B,d^{2}}+E^{2}_{AB,d^{2}})$
(S9)
where we defined
$V_{AB}=-\frac{\bm{d}_{A}\cdot\bm{d}_{B}}{R^{3}}+\frac{3(\bm{d}_{A}\cdot\bm{R})(\bm{d}_{B}\cdot\bm{R})}{R^{5}}\;\;\ldotp$ (S10)
$E^{2}_{AB,d^{0}}$, $E^{2}_{AB,d^{1}}$, $E^{2}_{A,d^{1}}$, $E^{2}_{B,d^{1}}$,
$E^{2}_{A,d^{2}}$, $E^{2}_{B,d^{2}}$, and $E^{2}_{AB,d^{2}}$ are defined as
$\displaystyle E^{2}_{AB,d^{0}}$
$\displaystyle=-\sum_{e_{A}e_{B}}\frac{\left|\matrixelement{e_{A}e_{B}}{V_{AB}}{g_{A}g_{B}}\right|^{2}}{E_{e_{A}}-E_{g_{A}}+E_{e_{B}}-E_{g_{B}}}$
(S11a) $\displaystyle E^{2}_{AB,d^{1}}$
$\displaystyle=-\sum_{e_{A}e_{B}}\frac{\matrixelement{e_{A}e_{B}}{V_{AB}}{g_{A}g_{B}}\matrixelement{e_{A}}{\bm{d_{A}}\cdot\bm{\varepsilon}}{g_{A}}\matrixelement{e_{B}}{\bm{d_{B}}\cdot\bm{\varepsilon}}{g_{B}}}{E_{e_{A}}-E_{g_{A}}+E_{e_{B}}-E_{g_{B}}}$
(S11b) $\displaystyle E^{2}_{A,d^{1}}$
$\displaystyle=-\omega_{c}\sum_{e_{A}}\frac{\left|\matrixelement{e_{A}}{\bm{d_{A}}\cdot\bm{\varepsilon}}{g_{A}}\right|^{2}}{\omega_{c}+E_{e_{A}}-E_{g_{A}}}$
(S11c) $\displaystyle E^{2}_{B,d^{1}}$
$\displaystyle=-\omega_{c}\sum_{e_{B}}\frac{\left|\matrixelement{e_{B}}{\bm{d_{B}}\cdot\bm{\varepsilon}}{g_{B}}\right|^{2}}{\omega_{c}+E_{e_{B}}-E_{g_{B}}}$
(S11d) $\displaystyle E^{2}_{A,d^{2}}$
$\displaystyle=-\sum_{e_{A}}\frac{\left|\matrixelement{e_{A}}{\left(\bm{d_{A}}\cdot\bm{\varepsilon}\right)^{2}}{g_{A}}\right|^{2}}{E_{e_{A}}-E_{g_{A}}}$
(S11e) $\displaystyle E^{2}_{B,d^{2}}$
$\displaystyle=-\sum_{e_{B}}\frac{\left|\matrixelement{e_{B}}{\left(\bm{d_{B}}\cdot\bm{\varepsilon}\right)^{2}}{g_{B}}\right|^{2}}{E_{e_{B}}-E_{g_{B}}}$
(S11f) $\displaystyle E^{2}_{AB,d^{2}}$
$\displaystyle=-4\sum_{e_{A}e_{B}}\frac{\left|\matrixelement{e_{A}}{\left(\bm{d_{A}}\cdot\bm{\varepsilon}\right)}{g_{A}}\right|^{2}\left|\matrixelement{e_{B}}{\left(\bm{d_{B}}\cdot\bm{\varepsilon}\right)}{g_{B}}\right|^{2}}{E_{e_{A}}-E_{g_{A}}+E_{e_{B}}-E_{g_{B}}}\;\;,$
(S11g)
where $\ket{g_{A}}$ ($\ket{g_{B}}$) is the ground state of molecule $A$ ($B$)
with energy $E_{g_{A}}$ ($E_{g_{B}}$), $\ket{e_{A}}$ ($\ket{e_{B}}$) indicates
an excited state of molecule $A$ ($B$) with energy $E_{e_{A}}$ ($E_{e_{B}}$),
and $\matrixelement{e_{A}}{\bm{d}_{A}}{g_{A}}$
($\matrixelement{e_{B}}{\bm{d}_{B}}{g_{B}}$) is the transition dipole moment
of molecule $A$ ($B$) associated with the excited state. Eq. S9 is an
important result in this work, and the physical interpretation, origin, and
implications of each term are worth exploring in detail. $E^{2}_{AB,d^{0}}$ in
Eq. S9 is the typical attractive London dispersion interaction with its
prototypical $R^{-6}$ dependence (as each $V_{AB}$ scales with $R^{-3}$). The
remaining terms all arise from interactions through the cavity mode.
$E^{2}_{AB,d^{1}}$ contains a single $V_{AB}$ matrix element, giving this term
an $R^{-3}$ dependence. Interestingly, this term also contains dot products of
transition dipole moments ($\matrixelement{e_{A}}{\bm{d}_{A}}{g_{A}}$) with
the cavity polarization vector ($\bm{\varepsilon}$). This $R^{-3}$ term is
central to this work, as it shows that van der Waals molecules inside a cavity
interact on this length scale with unique, coupled molecule-molecule and
molecule-cavity angle dependencies. $E^{2}_{A,d^{1}}$
and $E^{2}_{B,d^{1}}$ are very similar to $E^{1}_{A}$ and $E^{1}_{B}$ except
that $E^{2}_{A,d^{1}}$ and $E^{2}_{B,d^{1}}$ arise from the bilinear coupling
term and have the opposite sign as $E^{1}_{A}$ and $E^{1}_{B}$. Specifically,
to second-order in the coupling $\lambda$, the one-body energy (e.g. molecule
$A$) is given by
$\displaystyle E_{A}^{\text{cavity}}$ $\displaystyle=E_{A}^{\text{no cavity}}+\frac{\lambda^{2}}{2}(E^{1}_{A}+E^{2}_{A,d^{1}})$ (S12)
$\displaystyle=E_{A}^{\text{no cavity}}+\frac{\lambda^{2}}{2}\sum_{e_{A}}\left|\matrixelement{e_{A}}{\bm{d}_{A}\cdot\bm{\varepsilon}}{g_{A}}\right|^{2}-\frac{\lambda^{2}\omega_{c}}{2}\sum_{e_{A}}\frac{\left|\matrixelement{e_{A}}{\bm{d}_{A}\cdot\bm{\varepsilon}}{g_{A}}\right|^{2}}{\omega_{c}+E_{e_{A}}-E_{g_{A}}}\;\;\ldotp$
A similar energy term can be derived for molecule $B$ as well. We want to
emphasize that $E^{1}_{A}$ arises from the dipole self-energy term in first-
order perturbation theory (Eq. S6) and $E^{2}_{A,d^{1}}$ arises from the
bilinear coupling term in second-order perturbation theory (Eq. S9).
Interestingly, $E^{1}_{A}$ and $E^{2}_{A,d^{1}}$ only exactly cancel if the
cavity frequency is much larger than the electronic transition energies
($\omega_{c}\gg E_{e_{A}}-E_{g_{A}}$). Thus, for H2 molecules with a cavity in
the electronic regime ($\omega_{c}=13.6$ eV here) the total energy of a single
molecule ends up increasing with $\lambda^{2}$ (main text Fig. 3A). For H2
molecules, the one-body energy reaches a minimum when the molecular bond is
perpendicular to the cavity polarization vector
($\theta_{A\varepsilon}=\frac{\pi}{2}$). Intuitively, this occurs because H2
is most polarizable along its bond axis, which leads to
$\sum_{e_{A}}\left|\matrixelement{e_{A}}{\bm{d}_{A}\cdot\bm{\varepsilon}}{g_{A}}\right|^{2}/(E_{e_{A}}-E_{g_{A}})=\bm{\varepsilon}^{T}\bm{\alpha}\bm{\varepsilon}$
being largest when $\theta_{A\varepsilon}=0,\pi$.
$E^{2}_{A,d^{2}}$, $E^{2}_{B,d^{2}}$, and $E^{2}_{AB,d^{2}}$ arise from two
factors of the dipole self-energy part of Eq. S4 and, thus, scale with
$\lambda^{4}$. While $E^{2}_{A,d^{2}}$ and $E^{2}_{B,d^{2}}$ are corrections
to the one-body energies, $E^{2}_{AB,d^{2}}$ impacts the two-body energies
(i.e. intermolecular interaction energy). Furthermore, this term has no $R$
dependence, and, thus, $E^{2}_{AB,d^{2}}$ is the first term that we have
discussed that gives rise to the collective orientational order reported in
the main text. The magnitude of this term is greatest when both molecules have
their bonds oriented along the cavity polarization vector
($\bm{\varepsilon}$), because
$\bm{\varepsilon}^{T}\bm{\alpha}_{A}\bm{\varepsilon}$ and
$\bm{\varepsilon}^{T}\bm{\alpha}_{B}\bm{\varepsilon}$ are both largest when
both bonds are oriented parallel to $\bm{\varepsilon}$.
And because of the negative sign in front of this infinite range interaction
term, it contributes to lowering the energy of molecular configurations in
which the molecular bonds of the hydrogen molecules are oriented parallel to
the cavity polarization vector, as shown in Fig. 3C of the main text.
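To illustrate the signs and $\lambda$ scalings derived above, the following minimal Python sketch evaluates the cavity terms of Eqs. S6 and S11 for two hypothetical two-level molecules whose transition dipoles lie along their bonds; all parameter values are placeholders, not ab initio data:

```python
import numpy as np

# Hypothetical two-level molecules: one transition each, with the transition
# dipole along the molecular bond. All parameter values are placeholders.
d, dE = 1.0, 0.5   # transition dipole (a.u.) and excitation energy (a.u.)
wc = 0.5           # cavity frequency (a.u.)

def cavity_terms(lam, thA, thB):
    """Single-pair cavity terms of Eqs. S6 and S11 for two hypothetical
    two-level molecules (an illustration of the scalings, not QED-CCSD)."""
    muA = d * np.cos(thA)   # <e_A| d_A . eps |g_A>
    muB = d * np.cos(thB)
    E1 = 0.5 * lam**2 * (muA**2 + muB**2)                        # dipole self-energy, Eq. S6
    E2_blc = -0.5 * lam**2 * wc * (muA**2 + muB**2) / (wc + dE)  # Eqs. S11c, S11d
    E2_R0 = -lam**4 * muA**2 * muB**2 / (2 * dE)                 # (lam^4/4) * Eq. S11g
    return E1, E2_blc, E2_R0

for lam in (0.02, 0.1):
    E1, E2_blc, E2_R0 = cavity_terms(lam, 0.0, 0.0)  # both bonds parallel to eps
    # One-body shift is positive (E1 outweighs E2_blc since wc is finite);
    # the R^0 pair term is negative and drops by (0.1/0.02)^4 = 625x
    # between the two couplings.
    print(f"lam={lam}: one-body={E1 + E2_blc:+.2e}, R0 pair={E2_R0:+.2e}")
```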
## III Many-body Interactions
The many-body expansion,
$E=\sum_{A}E_{A}+\sum_{AB}E_{AB}+\sum_{ABC}E_{ABC}+\dots$ (S13)
is a routinely used expansion for modeling and gaining insight into
intermolecular forces.Dahlke and Truhlar (2007) For van der Waals type
intermolecular forces, the higher-order interactions such as $E_{ABC}$ quickly
become negligible with distance and they can be assumed to be much smaller
than the lower-order terms at large distances. QED electronic structure
calculations allow us to test if the three-body and higher-order terms can be
ignored for the strong light-matter coupling cavity QED Hamiltonian with
similar parameters used in the calculations of the main text. In Table S1 and
Fig. S2, we show the intermolecular interactions for molecules separated far
apart, $25$ Å. As expected, QED-HF does not capture the dynamic correlation
and cannot describe the intermolecular forces arising from either the cavity
or the van der Waals forces. QED-CCSD-1 captures the dynamic correlation, but
the sign of the two-body interaction is not consistent with QED-FCI. Adding
just one more term to the cluster operator of QED-CCSD-1, the two-photon
$(b^{\dagger})^{2}$ term in QED-CCSD-12-SD1, yields a sufficient description
of the two-body interactions. For QED-CCSD-12-SD1, we find that the higher-
order terms quickly approach zero even for the very strong coupling
$\lambda=0.1$ a.u. From perturbation theory, we find that the $N$-body
interactions are sensitive to the light-matter coupling strength and scale as
$\lambda^{2N}$ (see Fig. S2).
A few additional key points about the many-body expansion of van der Waals
interactions in the context of the nonrelativistic cavity QED Hamiltonian
given in Eq. S1 are worth mentioning here. Because the three-body interactions
have opposite sign to the two-body interactions (Table S1), we expect that the
collective orientational order induced by the infinite range cavity-induced
interactions would be reduced by including the three-body terms in the
molecular dynamics simulations. While the three-body terms are insignificant
on a per interaction basis, the lack of distance ($R$) dependence in the
cavity-induced interactions, see Eq. S9, results in all molecules in the
simulation interacting with all other molecules independent of how far away
they are from each other. In a simulation with $n$ molecules, there are
$n(n-1)/2\sim n^{2}$ two-body interactions, $n(n-1)(n-2)/6\sim n^{3}$ three-
body interactions, and similarly for higher-order terms (Table S2). Therefore,
there must exist a number of molecules where the total three-body energy is
larger than the total two-body energy. This makes it very challenging to
extrapolate our results to truly macroscopic systems. Extending these
microscopic equations and calculations to truly macroscopic systems remains an
open question.
Method | 1-body | 2-body | 3-body | 4-body
---|---|---|---|---
QED-HF | 204.9 | 0.0000 | 0.0000 | 0.0000
QED-CCSD-1 | 107.5 | 0.3238 | -0.0571 | 0.0042
QED-CCSD-12-SD1 | 107.1 | -0.5600 | 0.0104 | -0.0004
QED-FCI-5 | 106.7 | -0.6601 | $\ldots$ | $\ldots$
Table S1: Cavity-induced $N$-body effects for different QED electronic structure methodologies with $\lambda=0.1$ a.u. The cavity energy is $\hbar\omega_{c}=13.6$ eV and the polarization is perpendicular to all molecules. The molecules are placed on the edges of a line ($E_{AB}$), equilateral triangle ($E_{ABC}$), and square ($E_{ABCD}$), all with side lengths of $25$ Å. All numbers in the table are meV. QED-FCI-5 is too computationally expensive for more than two H2 molecules in the aug-cc-pVDZ basis set.

Figure S2: $N$-body effects for different coupling strengths $\lambda$. All calculations are performed on $N$ H2 molecules with QED-CCSD-12-SD1. The cavity energy is $\hbar\omega_{c}=13.6$ eV and the polarization is perpendicular to all molecules. The molecules are placed on the edges of a line ($E_{AB}$), equilateral triangle ($E_{ABC}$), and square ($E_{ABCD}$), all with side lengths of $25$ Å.

| 1-body | 2-body | 3-body | 4-body
---|---|---|---|---
Scaling with coupling | $\lambda^{2}$ | $\lambda^{4}$ | $\lambda^{6}$ | $\lambda^{8}$
Number of terms | $n\choose 1$ | $n\choose 2$ | $n\choose 3$ | $n\choose 4$

Table S2: The number of interactions and scaling of the cavity-induced
interaction energy in the $N$th body of the $N$-body expansion for a system
with $n$ molecules.
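As a rough, back-of-the-envelope illustration of the crossover discussed above, the sketch below combines the per-interaction magnitudes from Table S1 with the counting from Table S2, assuming every pair and triple contributes equally, which is plausible here only because the cavity-induced terms do not decay with $R$:

```python
from math import comb

# Per-interaction cavity-induced energies at 25 angstrom from Table S1
# (QED-CCSD-12-SD1, lambda = 0.1 a.u.), in meV.
E2, E3 = -0.5600, 0.0104

# Smallest n for which the total three-body energy outweighs the total
# two-body energy, assuming every pair/triple contributes the same magnitude.
n = 3
while comb(n, 3) * abs(E3) <= comb(n, 2) * abs(E2):
    n += 1
print(n)  # 164 with these numbers
```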
## IV Molecular Dynamics
### IV.1 Training Potential Energy Functions for Simulating Fluids of H2
#### IV.1.1 Neural Network-based Pairwise Interactions
We developed neural network-based potential energy functions (NNPs) for the
pairwise interaction of a pair of hydrogen molecules using ab initio
energy data with CCSD, FCI, QED-CCSD-12-SD1, and QED-FCI levels of theory. The
potential energy functions have the forms,
$E_{\rm AB}^{\rm no~{}cavity}=c_{\rm exp}\exp(-aR)-\frac{c_{6}\\{\theta\\}}{R^{6}}$ (S14)

$E_{\rm AB}^{\rm cavity}=E_{\rm AB}^{\rm no~{}cavity}-\frac{c_{3}\\{\theta\\}}{R^{3}}+\frac{c_{0}\\{\theta\\}}{R^{0}}$ (S15)
where $c_{\rm exp},{a},c_{6},c_{3},c_{0}$ are represented by neural networks
(NNs). Each NN takes symmetry preserved features of a pair of molecules as
input. Symmetry preserved features that have been selected as the input for
the machine learning (ML) model to get the pairwise interaction energy are
shown pictorially in Fig. S3 and are listed in Table S3. In the case without
the cavity field, the interaction energies are obtained using the input
features $\theta_{{\bf R}A},\theta_{{\bf R}B},\theta_{AB},{\left\|\bf
R\right\|}$. For the cavity case, additional terms that depend on the cavity
polarization vector are added. In particular,
$\theta_{A\varepsilon},\theta_{B\varepsilon},\text{ and }\theta_{{\bf
R}\varepsilon}$ are added, and $\left\|{\bf R}\right\|$ is replaced by $R_{\rm
cap}={\rm C}\tanh(\left\|{\bf R}\right\|/{\rm C})$, where ${\rm C}$ is
a cutoff distance. In order to account for molecular and exchange symmetries,
$\cos 2\theta$ and $\sin 2\theta$ are used for any
$\theta\in\Theta\equiv\\{\theta_{{\bf R}A},\theta_{{\bf R}B},\theta_{\rm
AB},\theta_{A\varepsilon},\theta_{B\varepsilon},\theta_{{\bf
R}\varepsilon}\\}$. For each of $c_{\rm exp}$, $a$, $c_{6}$, and $c_{3}$, we
use $F(\Theta,R_{\rm cap})+F(\tilde{\Theta},\tilde{R}_{\rm cap})$, where
$\tilde{\Theta}$ and $\tilde{R}_{\rm cap}$ are calculated by switching the
index of the two molecules. For $c_{0}$, only Type 1 features as tabulated in
Table S3 were used.
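A minimal sketch of this feature construction is shown below (the cutoff value ${\rm C}=20.0$ is a placeholder, not the value used in our fits):

```python
import torch

def pair_features(thetas, R, C=20.0):
    """Symmetry-preserved inputs for the pair NNs (Table S3 / Fig. S3).

    thetas: tensor of the six angles {theta_RA, theta_RB, theta_AB,
            theta_Aeps, theta_Beps, theta_Reps} in radians.
    R:      center-of-mass distance ||R||.
    C:      cutoff distance for R_cap (placeholder value).
    """
    R = torch.as_tensor(R, dtype=thetas.dtype)
    ang = torch.cat([torch.cos(2.0 * thetas), torch.sin(2.0 * thetas)])
    R_cap = C * torch.tanh(R / C)          # saturates smoothly at the cutoff C
    return torch.cat([ang, R_cap.reshape(1)])
```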
The neural network model has four fully-connected layers including a linear
output layer. The other three linear layers have CELU activation
functions.Barron (2017) The number of neurons per layer is 64 in our model. To
train the model, we used energy data points of pair configurations that are
generated using a classical MD simulation of liquid H2. $10^{5}$ pair
configurations generated by MD simulation were used to compute energies at the
CCSD level of theory for training the model when no cavity is present. While the
pair configurations generated by MD simulation were good enough to train a
model without a cavity, long range pair configurations are extremely important
to train the model with a cavity. Similarly, short range pair configurations
are crucial to accurately reproduce the corrected short range repulsion
energies in the potential energy functions in the presence of a cavity. While
MD of liquid H2 produces good random configurations with various possible
orientations, the probability of finding short range pair configurations is
low in an MD simulation. In order to include a sufficient number of
configurations at short range, we randomly select $10\%$ of the total
configurations obtained from MD simulation of liquid H2 molecules and scale
the intermolecular distance to be within $2-5$ Å. A similar strategy was
followed to generate very long range configurations between $18-90$ Å for
$10\%$ of the total configurations. A total of $121,000$ data points,
including both the additional short range and long range configurations, were
used to train the NN model on the QED-CCSD-12-SD1 calculated energies in
the cavity case. For training using the QED-FCI calculated data, we use a
smaller data set of $30,000$ calculated energies. In order to train the model
on this smaller data set, we initialize each NN with the parameters obtained
from our QED-CCSD-12-SD1 fits, which was trained using a larger data set of
$121,000$ calculated energies. We use the Adam optimizer Kingma and Ba (2014)
with $\beta_{1}=0.90$ and $\beta_{2}=0.99$, a constant learning
rate of $10^{-5}$, and a batch size of $32$. $90\%$ of the total data points
were used in the training data set and the remaining $10\%$ were used as a
test data set. All training and testing protocols were implemented with
PyTorch.Paszke _et al._ (2019)
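A minimal PyTorch sketch of one coefficient network and its training loop, with the architecture and hyperparameters stated above, is given below; the random tensors stand in for the ab initio data set, and the assembly of the five coefficient networks into Eqs. S14 and S15 before the loss is omitted:

```python
import torch
import torch.nn as nn

class CoefficientNet(nn.Module):
    """One network per coefficient (c_exp, a, c6, c3, c0): four
    fully-connected layers of 64 neurons with CELU activations and a
    linear output layer, as described above."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.CELU(),
            nn.Linear(64, 64), nn.CELU(),
            nn.Linear(64, 64), nn.CELU(),
            nn.Linear(64, 1),  # linear output layer
        )

    def forward(self, x):
        return self.net(x)

# Stand-in random data in place of the ab initio training set.
x = torch.randn(256, 13)   # 12 angle features + R_cap (see Table S3)
e = torch.randn(256, 1)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(x, e), batch_size=32, shuffle=True
)

model = CoefficientNet(n_features=13)
opt = torch.optim.Adam(model.parameters(), lr=1e-5, betas=(0.90, 0.99))
loss_fn = nn.MSELoss()
for x_batch, e_batch in loader:
    opt.zero_grad()
    loss = loss_fn(model(x_batch), e_batch)
    loss.backward()
    opt.step()
```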
Type of feature | Features
---|---
Type 1 | $\cos 2\theta_{A\varepsilon}$, $\sin 2\theta_{A\varepsilon}$, $\cos 2\theta_{B\varepsilon}$, $\sin 2\theta_{B\varepsilon}$
Type 2 | $\cos 2\theta_{\bf R\varepsilon}$, $\sin 2\theta_{\bf R\varepsilon}$, $\cos 2\theta_{{\bf R}A}$, $\sin 2\theta_{{\bf R}A}$, $\cos 2\theta_{{\bf R}B}$, $\sin 2\theta_{{\bf R}B}$, $\cos 2\theta_{{AB}}$, $\sin 2\theta_{{AB}}$
Type 3 | $\rm C\tanh(\left\|\bf R\right\|/C)$
Table S3: Input features involved in the energy contributions.

Figure S3: Symmetry preserved features that are considered while generating the
pair interaction potential using a neural network based machine learning model.
Various angles between a pair of molecules that are considered as input
features are shown. $\bf R$ is the distance vector between the centers of
mass (COM) of molecule $A$ and molecule $B$. $\varepsilon$ represents the
cavity polarization. The orientations of the molecules are completely specified
by the various angles $\\{\theta\\}$.

Figure S4: Energy of a single H2 molecule inside a cavity with respect to the
cavity polarization vector ${\varepsilon}$, using ab initio QED-CCSD-12-SD1 and
ML. The single molecule energy at ${\varepsilon}=0.0$ was set to zero when
plotting the energies of both QED-CCSD-12-SD1 and ML.
The energies of the ab initio (CCSD) calculations and the ML predicted
energies of the pairs of molecules without a cavity field are shown in the
Fig. S9A. A linearity plot shows the accuracy of the predicted energy using
our ML model. Apart from the linearity plot, we scanned potential energy
curves for a few selected orientations of pairs of molecules. These results
show that the ML predicted potential energy curves for pairs of hydrogen
molecules are in good agreement with the potential energy curves obtained from
ab initio calculations. These plots are shown in Fig. S9B. A linearity plot
comparing the ab initio (QED-CCSD-12-SD1) calculations and the ML predicted
energies with the cavity field turned on are shown in Fig. S10A. Potential
energy curves (Fig. S10B) were scanned for D2h configuration of a pair of
molecules along three different cavity polarization directions with respect to
the molecular bond axis. These plots show that our ML model accurately
reproduces the ab initio potential energy curves.
#### IV.1.2 Single Molecule Potential Energies
Single molecule potential energies involve intra-molecular chemical bonds and
the cavity-modified single molecule contributions. Intra-molecular chemical
bonds were modeled within the harmonic approximation. We would like to emphasize
that the intra-molecular interaction energy does not play a significant role
in determining the properties that we focused on in this study.
Single molecule energies in the presence of a cavity field are important.
The cavity-modified single molecule energies were trained with a linear
regression method. The following form of energy function is trained for
the single molecule energies,
$E_{\rm A}=\sum_{n=1}^{3}C_{n}\sin 2n{\theta}+\sum_{n=0}^{2}D_{n}\cos 2n{\theta}$ (S16)
where $\theta$ is the angle between the molecular bond axis and the cavity
polarization vector. $C_{n}$ and $D_{n}$ are the trainable parameters. Fig. S4
shows the accuracy of fitting single molecular energies with respect to the ab
initio, QED-CCSD-12-SD1 calculations.
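Since Eq. S16 is linear in $C_{n}$ and $D_{n}$, the fit reduces to ordinary least squares; a minimal sketch (with a stand-in for the QED-CCSD-12-SD1 energies) is:

```python
import numpy as np

# Fit E_A(theta) = sum_{n=1..3} C_n sin(2n theta) + sum_{n=0..2} D_n cos(2n theta)
# (Eq. S16) to single-molecule energies by linear least squares.
theta = np.linspace(0.0, np.pi, 50)        # sampled bond/polarization angles
E_ref = 0.5 * np.cos(2 * theta) + 0.1      # stand-in for QED-CCSD-12-SD1 energies

X = np.column_stack(
    [np.sin(2 * n * theta) for n in (1, 2, 3)]
    + [np.cos(2 * n * theta) for n in (0, 1, 2)]
)
coeffs, *_ = np.linalg.lstsq(X, E_ref, rcond=None)
C1, C2, C3, D0, D1, D2 = coeffs
```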
### IV.2 Molecular Dynamics
Molecular dynamics (MD) simulations were used to compute the statistical
properties of fluids of $H_{2}$ molecules at $70$ K by employing the potential
energy functions generated by our machine learning models. For computing the
statistical behaviour of the system both classical MD and path integral MD
(PIMD) were used.
#### IV.2.1 Classical Molecular Dynamics
NVT ensemble MD simulations were carried out using Langevin dynamics with a
time step of $1.0$ femtosecond (fs), and the friction coefficient for the
Langevin dynamics was chosen to be $0.0005$ a.u. ($20.7$ ps$^{-1}$). Random initial atomic
velocities and random initial positions were provided to run MD. In order to
use ML potentials generated with PyTorch, we also implement the MD engine with
PyTorch. The integrator used here is described in Ref. Bussi and Parrinello
(2007). Forces were computed using the PyTorch autograd module and the PyTorch
MD simulations were performed using GPUs.
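A minimal sketch of one such autograd-driven Langevin step is given below; it uses the standard BAOAB splitting as a simplified stand-in for the integrator of Ref. Bussi and Parrinello (2007), and `energy_fn` stands for our ML potential:

```python
import math
import torch

def autograd_force(energy_fn, pos):
    """Forces from the potential via torch.autograd (energy_fn returns a scalar)."""
    pos = pos.detach().requires_grad_(True)
    return -torch.autograd.grad(energy_fn(pos), pos)[0]

def langevin_step(pos, vel, f, masses, energy_fn, dt, gamma, kT):
    """One BAOAB-split Langevin step (a standard scheme, used here as a
    simplified stand-in for the Bussi-Parrinello integrator we employ).
    masses must broadcast against vel, e.g. shape (N, 1) for (N, 3) arrays."""
    vel = vel + 0.5 * dt * f / masses                 # B: half velocity kick
    pos = pos + 0.5 * dt * vel                        # A: half position drift
    c1 = math.exp(-gamma * dt)                        # O: exact OU thermostat
    vel = c1 * vel + math.sqrt(kT * (1.0 - c1**2)) * torch.randn_like(vel) / torch.sqrt(masses)
    pos = pos + 0.5 * dt * vel                        # A: half position drift
    f = autograd_force(energy_fn, pos)                # recompute forces
    vel = vel + 0.5 * dt * f / masses                 # B: half velocity kick
    return pos.detach(), vel, f
```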
Since we are simulating a fluid system, the system was confined within a
spherical volume, similar to a cluster of molecules. In practice, a stiff
harmonic potential was used to confine the center of mass of each molecule
within a spherical volume with radius $R_{\rm c}$ (see Fig. S5). Adopting such
a boundary condition was necessary in order to account for the non-decaying
nature of the pair interaction potential inside an optical cavity. In order to
simulate various different system sizes, $R_{\rm c}$ is scaled appropriately
to preserve the overall molecular density.
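A sketch of the confining wall, with a placeholder stiffness $k$:

```python
import torch

def confinement_energy(com_positions, R_c, k=100.0):
    """Stiff harmonic wall confining molecular centers of mass within a
    sphere of radius R_c (k is a placeholder stiffness, energy/length^2)."""
    r = com_positions.norm(dim=-1)                 # distance of each COM from origin
    overshoot = torch.clamp(r - R_c, min=0.0)      # zero inside the sphere
    return 0.5 * k * (overshoot ** 2).sum()
```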
#### IV.2.2 Path Integral Molecular Dynamics
In the previous section, we discussed the MD simulations in which the nuclei
were considered as classical particles. However, for light nuclei such as
hydrogen atoms, this assumption could lead to serious problems in predicting
the statistical properties because of strong nuclear quantum effects,
especially at low temperatures. In order to account for nuclear quantum effects
in our MD simulations, we performed path integral molecular dynamics (PIMD)
simulations.
Usually PIMD simulations require a large number of beads to converge
thermodynamic properties at low temperatures. Herein, we used the generalized
Langevin equation (GLE) in PIMD, which can significantly reduce the number of
beads.Ceriotti _et al._ (2009, 2010a, 2011) In the GLE formulation,Ceriotti
_et al._ (2010b) each bead of the simulated system is coupled to several
extended degrees of freedom with an appropriate drift matrix and a diffusion
matrix to approximate a friction kernel function. We used $8$ extra degrees of
freedom in GLE and the drift matrix and diffusion matrix used in GLE were
generated by an online tool called GLE4MD (http://gle4md.org/) with the
maximum physical frequency set to $\omega_{\rm max}=9608$ cm$^{-1}$. With the GLE
formulation, we observed that $32$ beads are sufficient to converge the
simulations, whereas more than $128$ beads are needed to converge the results
without the GLE formulation. We have developed an interface to i-PI Ceriotti
_et al._ (2014) to run the PIMD simulations using our ML potentials.
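For reference, the bead-coupling term that PIMD adds to the potential (the GLE thermostat machinery itself is handled by i-PI and omitted here) is the standard ring-polymer spring energy; a minimal sketch assuming $P$ beads:

```python
import torch

def ring_polymer_spring_energy(beads, masses, P, kT, hbar=1.0):
    """Harmonic spring energy between adjacent imaginary-time beads
    (standard primitive PIMD; GLE thermostatting is not shown).
    beads: tensor of shape (P, N, 3); masses: shape (N,)."""
    omega_P = P * kT / hbar                               # bead spring frequency
    diffs = beads - beads.roll(shifts=-1, dims=0)         # x_k - x_{k+1}, cyclic
    return 0.5 * omega_P**2 * (masses[None, :, None] * diffs**2).sum()
```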
### IV.3 Radial Distribution Functions
Figure S5: Schematic diagram of the radius cutoffs used in computing the radial
distribution functions. $R_{\rm c}$ is the distance at which a high energy
potential barrier has been applied. $R_{\rm 1}$ is the radius of the core
region where surface effects due to the spherical boundary are minimal, and
molecules found within the radius $R_{\rm 2}$ are used to compute the
histogram of pairwise distances for the calculations of the radial distribution
functions.
The radial distribution functions (g(r)) of the fluid of H2 molecules are computed
from the PIMD trajectories of $1,000$ molecules. As the system we simulated
has a spherical volume without any periodic boundary, computing a bulk-like
g(r) (i.e. a g(r) that converges to $1$ in the long distance limit) is not
straightforward. In order to compute g(r) from such a spherical system, the
following steps are taken. First, a bulk-like core region is chosen within a
certain cutoff distance $R_{1}$.
$\bar{h}(\left|\bf r\right|)=\frac{1}{N_{1}}\sum_{i,R_{i}<R_{1}}h(\left|{\bf r}-{\bf r}_{i}\right|)$ (S17)
For the $i^{\rm th}$ molecule located at ${\bf r}_{i}$ with $R_{i}=\left|{\bf
r}_{i}\right|<R_{1}$, $h(\left|{\bf r}-{\bf r}_{i}\right|)$ is the histogram
of all distances between any other molecule and the $i^{\rm th}$ molecule with
$(\left|{\bf r}-{\bf r}_{i}\right|)<R_{2}$, $R_{1}+R_{2}<R_{\rm c}$ and
$N_{1}$ is the number of molecules inside $R_{1}$. Second, the average over
each frame of MD or PIMD as well as the average over the number of beads was
computed in the calculations of the radial distribution functions. Lastly, the
averaged $\bar{h}(\left|\bf r\right|)$ was normalized by the average density
and $4\pi r^{2}$. In this study, $R_{1}=6.0$ Å and $R_{2}=12$ Å were used.
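A minimal sketch of this three-step procedure for a single frame (frame and bead averaging proceed as described above):

```python
import numpy as np

def core_region_gr(com, rho, R1=6.0, R2=12.0, nbins=120):
    """g(r) for a spherical, non-periodic sample, following the steps above.

    com: (N, 3) array of molecular centers of mass for one frame.
    rho: average number density, e.g. N / ((4/3) pi R_c^3).
    """
    edges = np.linspace(0.0, R2, nbins + 1)
    core = com[np.linalg.norm(com, axis=1) < R1]          # molecules with R_i < R1
    d = np.linalg.norm(core[:, None, :] - com[None, :, :], axis=-1)
    d = d[(d > 1e-8) & (d < R2)]                          # drop self-pairs, cap at R2
    hist, _ = np.histogram(d, bins=edges)
    hist = hist / max(len(core), 1)                       # average over core molecules
    r = 0.5 * (edges[1:] + edges[:-1])
    shell_vol = 4.0 * np.pi * r**2 * np.diff(edges)       # normalize by 4 pi r^2 dr
    return r, hist / (rho * shell_vol)
```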
### IV.4 Angular Distribution Functions
We also computed angular distribution functions for the angle between the
molecular bond axis of molecule $A$ and the molecular bond axis of molecule
$B$ ($\theta_{\rm AB}$) and angular distribution functions for the angle
between the molecular bond axis of molecule $A$ and the cavity polarization
vector ($\theta_{\rm A\varepsilon}$). The probability distributions of
$\theta_{\rm AB}$ and $\theta_{\rm A\varepsilon}$ are proportional to
sin($\theta_{\rm AB}$) and sin($\theta_{\rm A\varepsilon}$), respectively, if
molecules A and B can rotate freely without any interactions. In order to
emphasize the energy contribution, we computed the potentials of mean force by
scaling the probability distributions of $\theta_{\rm AB}$ and $\theta_{\rm
A\varepsilon}$ with their corresponding sine functions. In the case of PIMD,
the average over each frame and the average over the number of beads are
considered when computing the histograms.
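A minimal sketch of this procedure; the final $-k_{\rm B}T\ln$ step is one common convention for turning the scaled distribution into a potential of mean force:

```python
import numpy as np

def angular_pmf(theta, kT=1.0, nbins=60):
    """Potential of mean force for a sampled angle: histogram, divide out
    the free-rotor sin(theta) weight as described above, then take -kT ln P."""
    edges = np.linspace(0.0, np.pi, nbins + 1)
    centers = 0.5 * (edges[1:] + edges[:-1])
    hist, _ = np.histogram(theta, bins=edges, density=True)
    p = hist / np.sin(centers)          # remove the geometric sin(theta) factor
    p = p / p.sum()                     # renormalize over the bins
    return centers, -kT * np.log(np.clip(p, 1e-12, None))
```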
## V Additional Results
### V.1 Comparison of Radial Distribution Functions
We compute the radial distribution function in three different situations: (1)
the cavity coupling is not active, (2) the cavity-modified one-body term is
active but the cavity-modified two-body term is not, and (3) both the
cavity-modified one-body and two-body terms are active. We observe discernible
changes in the radial distribution function across the three situations. This
indicates a difference in the equilibrium structure when the cavity coupling
is on. The results are shown in Fig. S12.
### V.2 Comparison of Classical MD and PIMD
In this section we compare the results of our classical MD and the PIMD
simulations with $\lambda=0.1$ a.u. Based on Fig. S6, it is evident that
classical MD and PIMD qualitatively follow the same trend when the angular
distribution functions of $\theta_{\rm A\varepsilon}$ and $\theta_{\rm AB}$ are
compared. In particular, one observes a strong orientational alignment of the
molecules along the direction of the cavity polarization vector inside an
optical cavity. Inclusion of nuclear quantum effects does not change the
overall conclusion. However, the extent of alignment of the molecules inside
the cavity in our PIMD simulations is considerably reduced compared to our
classical MD simulations.
Figure S6: Angular distribution functions of the molecular bond axis of
molecule $A$ relative to the molecular bond axis of molecule $B$
($\theta_{\rm AB}$) and of the molecular bond axis of molecule $A$ relative to
the cavity polarization vector ($\theta_{\rm A\varepsilon}$) for $1,000$ H2
molecules from (A) a classical MD simulation and (B) a PIMD simulation.
Pair interaction potentials used for the MD simulation were obtained by
training an ML model on the calculated energies from the QED-CCSD-12-SD1 level
of theory.
### V.3 Comparison of QED-FCI-5 and QED-CCSD-12-SD1
Here we compare our results of classical MD simulations using the ML
potentials obtained from QED-FCI-5 and QED-CCSD-12-SD1 calculations. As
summarized in Fig. S7, we see that classical MD with ML potentials that are
obtained from the two different levels of ab initio calculations
qualitatively match each other. However, the intensities in the angular
distribution functions of $\theta_{\rm A\varepsilon}$ and $\theta_{\rm AB}$
for the two cases are different. These differences are due to the quantitative
differences in predicting the interaction energies using these two methods
(see Fig. S1).
Figure S7: Angular distribution functions of the molecular bond axis of
molecule $A$ relative to the molecular bond axis of molecule $B$
($\theta_{\rm AB}$) and of the molecular bond axis of molecule $A$ relative to
the cavity polarization vector ($\theta_{\rm A\varepsilon}$) for $1,000$ H2
molecules from a classical MD trajectory with the NN potentials obtained from
training the ML model on (A) QED-CCSD-12-SD1 and (B) QED-FCI-5 data sets.
### V.4 $\lambda$ Dependent Molecular Alignment
Two different $\lambda$ values were considered in our study. In the main text,
we focused our discussion on the results with $\lambda=0.1$ a.u. In this
section, we study the properties of a system with $\lambda=0.02$ a.u. and
compare these results with the results obtained using $\lambda=0.1$ a.u.
In order to train a model with $\lambda=0.02$ a.u., the NN parameters for
$c_{0}$ and $c_{3}$ were transferred from our $\lambda=0.1$ a.u. model and
scaled according to the perturbation theory analysis. The
accuracy of the model has been tested by plotting the energies obtained from
the NNPs against the ab initio energies. A linearity plot is obtained as shown
in Fig. S11A. Additionally, scanned potential energy curves of several
selected pair configurations are in good agreement with ab initio potential
energy curves. Some of these plots are shown in Fig. S11B. The accuracy of our
ML model is further demonstrated in Fig. S11C, where we show that our ML
model correctly predicts the long range interaction energy with different
directions of the cavity polarization vector.
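A sketch of this parameter transfer, assuming the leading-order scalings from the perturbation theory above ($c_{3}\propto\lambda^{2}$, $c_{0}\propto\lambda^{4}$) hold exactly:

```python
def rescale_cavity_coefficients(c3, c0, lam_old=0.1, lam_new=0.02):
    """Transfer the cavity-induced coefficient outputs between coupling
    strengths using their leading-order lambda scalings (c3 ~ lambda^2,
    c0 ~ lambda^4, from the perturbation theory above)."""
    s = lam_new / lam_old
    return c3 * s**2, c0 * s**4
```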
A significant difference in the angular distribution functions of $\theta_{\rm
A\varepsilon}$ is observed when the results of two different $\lambda$ values
are compared for $1,000$ H2 molecules. The distribution function of
$\theta_{\rm A\varepsilon}$ for $1,000$ H2 molecules with $\lambda=0.02$ a.u.
(Fig. S8A) shows molecular alignment perpendicular to the cavity polarization
($\theta_{\rm A\varepsilon}=\frac{\pi}{2}$). On the other hand, we observe in
Fig. S6A that the angular distribution function of $\theta_{\rm A\varepsilon}$
is maximized in the direction of cavity polarization vector ($\theta_{\rm
A\varepsilon}=0,\pi$) when $\lambda=0.1$ a.u. This can be explained from our
perturbation theory analysis where we showed that the cavity-modifications to
the single molecule energies scale with $\lambda^{2}$ and the extremely long
range pairwise interaction scales with $\lambda^{4}$. Thus, the importance of
the pairwise interaction decreases much faster than the single molecule energy
contribution as $\lambda$ decreases. In this particular example of $1,000$ H2
molecules with $\lambda=0.02$ a.u., the single molecule energy dominates
whereas, with $\lambda=0.1$ a.u., the pairwise interaction energy dominates.
The distribution of $\theta_{\rm AB}$ qualitatively follows the same trend as
we observed for $1,000$ H2 molecules with $\lambda=0.1$ a.u.; however, the
intensity of the
peak is reduced which suggests a weaker synchronization of molecular
orientations. This is shown in the inset of Fig. S8A.
From the above discussion, we understand that the energy contributions from a
single molecule can be altered by (1) changing the number of molecules with a
fixed $\lambda$, and (2) changing the value of $\lambda$ for a fixed number of
molecules. We ran simulations considering these two possibilities. For the
first possibility, we reduced the number of molecules from $1,000$ to $108$
while keeping $\lambda$ equal to $0.1$ a.u., and we computed the angular
distribution function for $\theta_{\rm A\varepsilon}$. We find that in the
$108$ molecule simulation the preferential alignment of the molecules is
perpendicular to the cavity polarization vector, which is opposite to the
alignment of $1,000$ molecules with $\lambda=0.1$ a.u. (aligned parallel to
the cavity polarization vector). These results are shown in Fig. S6A and Fig.
S8B. For the second possibility, we simulate $1,000$ molecules with a reduced
value of $\lambda=0.02$ a.u. The angular distribution function of $\theta_{\rm
A\varepsilon}$ in this simulation is qualitatively similar to the results
obtained in the first possibility with the molecular alignment perpendicular
to the cavity polarization vector (see Fig. S6A and Fig. S8B). All of our
numerical simulation results reported in this section further confirm the
conceptual validity of our perturbation theory analysis.
Figure S8: (A) Angular distribution functions of the molecular bond axis of
molecule $A$ relative to the molecular bond axis of molecule $B$
($\theta_{\rm AB}$) and of the molecular bond axis of molecule $A$ relative to
the cavity polarization vector ($\theta_{\rm A\varepsilon}$) for $1,000$ H2
molecules from a classical MD trajectory with the NNPs obtained from training
the ML model on QED-CCSD-12-SD1 data with a $\lambda=0.02$ a.u. coupling
constant. A zoom-in of $\theta_{\rm A\varepsilon}$ is shown in the inset.
(B) Angular distribution functions of the molecular bond axis of molecule $A$
relative to the cavity polarization vector ($\theta_{\rm A\varepsilon}$) for
$108$ molecules with $\lambda=0.1$ a.u. (dashed line) and $1,000$ molecules
with $\lambda=0.02$ a.u. (solid line).

Figure S9: (A) Pairwise interaction energies obtained from ab initio CCSD
calculations (without cavity) plotted against the ML predicted energies.
(B) Scanned potential energy curves for the D2h, C2v, and D∞h configurations of
a pair of molecules using the NNPs and from ab initio calculations.

Figure S10: (A) Pairwise interaction energies obtained from ab initio
QED-CCSD-12-SD1 calculations (with cavity) plotted against the ML predicted
energies. (B) Scanned potential energy curves for the D2h configuration with
three different directions of the cavity polarization using the NNPs and from
ab initio calculations.

Figure S11: (A) Pairwise interaction energies obtained from ab initio
QED-CCSD-12-SD1 calculations and ML predicted energies with $\lambda=0.02$ a.u.
(B) Scanned potential energy curves for the D2h configuration of a pair of
molecules using the NNPs and from ab initio calculations. The distance ($R$)
between molecule $A$ and molecule $B$ over which the potential energy is
scanned is shown in the inset of the figure. (C) Scanned potential energy
curves for the D2h configuration at long range. The ML model can accurately
distinguish different configurations at long distance.

Figure S12: Radial distribution functions generated using a PIMD trajectory of
1000 H2 molecules using the pair potential obtained through ML training on
ab initio QED-CCSD-12-SD1 calculations with $\lambda=0.1$ a.u.

Figure S13: (A-C) Snapshots taken at thermal equilibrium from the path integral
molecular dynamics (PIMD) simulations of 1000 H2 molecules in the case of (A)
no cavity (orange), (B) the cavity-modified one-body term but no cavity
two-body term (green), and (C) the cavity-modified one-body and two-body terms
(blue). For these three cases, the (D) molecular bond axis of molecule $A$ to
molecular bond axis of molecule $B$ ($\theta_{AB}$) angular probability
distribution function, $P\left(\theta_{AB}\right)$, and (E) molecular bond axis
to cavity polarization vector ($\theta_{A\varepsilon}$) angular probability
distribution function, $P\left(\theta_{A\varepsilon}\right)$, are shown. (F)
The molecular bond axis to cavity polarization vector ($\theta_{A\varepsilon}$)
angular probability distribution function,
$P\left(\theta_{A\varepsilon}\right)$, is shown for four different simulations
containing different numbers of H2 molecules. All PIMD simulations shown in
this figure were performed using neural networks trained with CCSD (no cavity)
or QED-CCSD-12-SD1 with $\lambda=0.1$ a.u. (cavity) calculated energies.
# IntelliCAT: Intelligent Machine Translation Post-Editing with Quality
Estimation and Translation Suggestion
Dongjun Lee (Bering Lab, Republic of Korea), Junhyeong Ahn (Bering Lab, Republic of Korea), Heesoo Park (Bering Lab, Republic of Korea), Jaemin Jo (Sungkyunkwan University, Republic of Korea)
###### Abstract
We present IntelliCAT, an interactive translation interface with neural models
that streamline the post-editing process on machine translation output. We
leverage two quality estimation (QE) models at different granularities:
sentence-level QE, to predict the quality of each machine-translated sentence,
and word-level QE, to locate the parts of the machine-translated sentence that
need correction. Additionally, we introduce a novel translation suggestion
model conditioned on both the left and right contexts, providing alternatives
for specific words or phrases for correction. Finally, with word alignments,
IntelliCAT automatically preserves the original document’s styles in the
translated document. The experimental results show that post-editing based on
the proposed QE and translation suggestions can significantly improve
translation quality. Furthermore, a user study reveals that the three features
provided in IntelliCAT significantly accelerate the post-editing task,
achieving a 52.9% speedup in translation time compared to translating from
scratch. The interface is publicly available at
https://intellicat.beringlab.com/.
## 1 Introduction
Existing computer-aided translation (CAT) tools incorporate machine
translation (MT) in two ways: post-editing (PE) or interactive translation
prediction (ITP). PE tools (Federico et al., 2014; Pal et al., 2016) provide a
machine-translated document and ask the translator to edit incorrect parts. By
contrast, ITP tools (Alabau et al., 2014; Green et al., 2014a; Santy et al.,
2019) aim to provide translation suggestions for the next word or phrase given
a partial input from the translator. A recent study with human translators
revealed that PE was 18.7% faster than ITP in terms of translation time (Green
et al., 2014b) and required fewer edits (Do Carmo, 2020). However, many
translators still prefer ITP over PE because of (1) high cognitive loads
(Koehn, 2009) and (2) the lack of subsegment MT suggestions (Moorkens and
O’Brien, 2017) in PE.
In this paper, we introduce IntelliCAT (a demonstration video is available at
https://youtu.be/mDmbdrQE9tc), a hybrid CAT interface designed to provide PE-
level efficiency while retaining the advantages of ITP, such as subsegment
translation suggestions. To mitigate the cognitive loads of human translators,
IntelliCAT aims to automate common post-editing tasks by introducing three
intelligent features: (1) quality estimation, (2) translation suggestion, and
(3) word alignment.
Quality estimation (QE) is the task of estimating the quality of MT output
without reference translations (Specia et al., 2020). We integrate QE into the
CAT interface so that the human translator can easily identify which machine-
translated sentences and which parts of the sentences require corrections.
Furthermore, for words that require post-editing, our interface suggests
possible translations to reduce the translators’ cognitive load. Finally,
based on word alignments, the interface aligns the source and translated
documents in terms of formatting by transferring the styles applied in the
source document (e.g., bold, hyperlink, footnote, equation) to the translated
document to minimize the post-editing time. Our contributions are:
* •
We integrate state-of-the-art sentence-level and word-level QE (Lee, 2020)
techniques into an interactive CAT tool, IntelliCAT.
* •
We introduce a novel word and phrase suggestion model, which is conditioned
on both the left and right contexts, based on XLM-RoBERTa (Conneau et al.,
2020). The model is fine-tuned with a modified translation language modeling
(TLM) objective (Lample and Conneau, 2019).
* •
We conduct quantitative experiments and a user study to evaluate IntelliCAT.
The experimental results on the WMT 2020 English-German QE dataset show that
post-editing with the proposed QE and translation suggestion models could
significantly improve the translation quality ($-$6.01 TER and $+$6.15 BLEU).
Moreover, the user study shows that the three features provided by IntelliCAT
significantly reduce post-editing time (19.2%), which led to a 52.9% reduction
in translation time compared to translating from scratch. Finally, translators
evaluate our interface to be highly effective, with a SUS score of 88.61.
Figure 1: The IntelliCAT interface. After a document (i.e., an MS Word file) is uploaded, (A) sentences from the original document (source) and (B) the initial MT output for each sentence (target) are shown side-by-side. (C) Formatting tags indicate where a specific style (identified by an integer style id) is applied and (D) are automatically inserted at the proper position of the MT output based on word alignments. (E) The interface shows the quality of each machine-translated sentence based on sentence-level QE. (F) Potentially incorrect words and (G) locations of missing words are highlighted based on word-level QE. When the user selects a sequence of words in the MT output, (H) the corresponding words in the source sentence are highlighted with a heat map, and (I) up to five alternative translations are recommended.
## 2 Related Work
#### CAT Tool and Post-Editing
In the localization industry, the use of CAT tools is a common practice for
professional translators (Van den Bergh et al., 2015). As MT has improved
substantially in recent years, approaches incorporating MT into CAT tools have
been actively researched (Alabau et al., 2014; Federico et al., 2014; Santy et
al., 2019; Herbig et al., 2020). One of the approaches is post-editing in
which the translator is provided with a machine-translated draft and asked to
improve the draft. Recent studies demonstrate that post-editing MT output not
only improves translation productivity but also reduces translation errors
(Green et al., 2013; Aranberri et al., 2014; Toral et al., 2018).
#### Translation Suggestion
Translation suggestions from interactive translation prediction (ITP) (Alabau
et al., 2014; Santy et al., 2019; Coppers et al., 2018) are conditioned only
on the left context of the word to be inserted. Therefore, ITP has intrinsic
limitations in post-editing tasks where the complete sentence is presented,
and the right context of the words that need correction should also be
considered. We propose a novel translation suggestion model in which
suggestions are conditioned on both the left and right contexts of the words
or phrases to be modified or inserted to provide more accurate suggestions
when post-editing the complete sentence.
#### Cross-Lingual Language Model
Cross-lingual language models (XLMs), which are language models pre-trained in
multiple languages, have led to advances in MT (Lample and Conneau, 2019) and
related tasks such as QE (Lee, 2020), automatic post-editing (Wang et al.,
2020; Lee et al., 2020), and parallel corpus filtering (Lo and Joanis, 2020).
Accordingly, our QE and translation suggestion models are trained on top of
XLM-R (Conneau et al., 2020), an XLM that shows state-of-the-art performance
for a wide range of cross-lingual tasks. To the best of our knowledge,
IntelliCAT is the first CAT interface that leverages XLM to assist human post-
editing for MT outputs.
## 3 System Description
### 3.1 Overview
IntelliCAT is a web-based interactive interface for post-editing MT outputs
(Figure 1). Once loaded, it shows two documents side-by-side: the uploaded
original document (an MS Word file) on the left and the machine-translated
document on the right. Each document is displayed as a list of sentences with
formatting tags inserted, tags that show the style of the original document,
including text styles (e.g., bold, italic, or hyperlinked) and inline contents
(e.g., a media element or an equation).
The user can post-edit MT outputs on the right using the following three
features: (1) sentence-level and word-level QE, (2) word or phrase suggestion,
and (3) automatic tagging based on word alignments. The sentence-level QE
shows the estimated MT quality for each sentence, and word-level QE highlights
the parts of each machine-translated sentence that need correction. When the
user selects a specific word or phrase, the top-$5$ recommended alternatives
appear below, allowing the user to replace the selected words or insert a new
word. Finally, the system automatically captures the original document style
and inserts formatting tags in machine-translated sentences at the appropriate
locations. After post-editing, the user can click on the export button to
download the translated document with the original style preserved. A sample
document and its translated document without human post-editing are presented
in Appendix A.
### 3.2 Machine Translation
Our system provides MT for each sentence in the input document. We build our
NMT model based on Transformer (Vaswani et al., 2017) using OpenNMT-py (Klein
et al., 2017). As training data, the English-German parallel corpus provided
in the 2020 News Translation Task (Barrault et al., 2020) is used. We use
unigram-LM-based subword segmentation (Kudo, 2018) with a vocabulary size of
32K for English and German, respectively, and the remaining hyperparameters
follow the base model of Vaswani et al. (2017).
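As an illustration of this preprocessing step, the following is a minimal sketch of training unigram-LM subword models with the sentencepiece library; the corpus file names and model prefixes are assumptions, not the paper's actual setup.

```python
# Minimal sketch: train unigram-LM subword models (Kudo, 2018) with a 32K
# vocabulary for English and German. File names below are assumed examples.
import sentencepiece as spm

for lang in ("en", "de"):
    spm.SentencePieceTrainer.train(
        input=f"train.{lang}",       # one sentence per line (assumed path)
        model_prefix=f"spm_{lang}",  # writes spm_<lang>.model / spm_<lang>.vocab
        vocab_size=32000,
        model_type="unigram",        # unigram-LM-based segmentation
    )

# Segment a sentence with the trained English model
sp = spm.SentencePieceProcessor(model_file="spm_en.model")
print(sp.encode("IntelliCAT streamlines post-editing.", out_type=str))
```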
### 3.3 Quality Estimation
Quality estimation (QE) is the task of estimating the quality of the MT
output, given only the source text (Fonseca et al., 2019). We estimate the
quality at two different granularities: sentence and word levels. Sentence-
level QE aims to predict the human translation error rate (HTER) (Snover et
al., 2006) of a machine-translated sentence, which measures the required
amount of human editing to fix the machine-translated sentence. By
contrast, word-level QE aims to predict whether each word in the MT output is
OK or BAD and whether there are missing words between each word.
Figure 1 demonstrates the use of QE in our interface. Based on the sentence-
level QE, we show the MT quality for each machine-translated sentence computed
as $1-(predicted\>HTER)$. In addition, based on word-level QE, we show words
that need to be corrected (with red or yellow underlines) or locations for
missing words (with red or yellow checkmarks). To display the confidence of
word-level QE predictions, we encode the predicted probability in the color of
underlines and checkmarks (yellow for $P_{BAD}>0.5$ and red for
$P_{BAD}>0.8$).
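The display logic just described is simple to state in code. The sketch below is illustrative only; the function names and return values are assumptions about how a front end might consume the QE predictions, not IntelliCAT's actual implementation.

```python
# Illustrative helpers for the QE display logic described above.
from typing import Optional

def sentence_quality(predicted_hter: float) -> float:
    """Per-sentence MT quality, computed as 1 - (predicted HTER)."""
    return 1.0 - predicted_hter

def word_flag_color(p_bad: float) -> Optional[str]:
    """Color of the underline/checkmark encoding word-level QE confidence."""
    if p_bad > 0.8:
        return "red"     # high confidence that the word/gap needs editing
    if p_bad > 0.5:
        return "yellow"  # moderate confidence
    return None          # predicted OK: no highlight

print(sentence_quality(0.25))  # 0.75
print(word_flag_color(0.9))    # 'red'
```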
For QE training, we use a two-phase cross-lingual language model fine-tuning
approach following Lee (2020), which showed the state-of-the-art performance
on the WMT 2020 QE Shared Task (Specia et al., 2020). We fine-tune XLM-RoBERTa
(Conneau et al., 2020) with a few additional parameters to jointly train
sentence-level and word-level QEs. We train our model in two phases. First, we
pre-train the model with a large artificially generated QE dataset based on a
parallel corpus. Subsequently, we fine-tune the model with the WMT 2020
English-German QE dataset (Specia et al., 2020), which consists of 7,000
triplets consisting of source, MT, and post-edited sentences.
### 3.4 Translation Suggestion
As shown in Figure 1, when the user selects a specific word or phrase to
modify or presses a hotkey (ALT+s) between words to insert a missing word, the
system suggests the top-$5$ alternatives based on fine-tuned XLM-R.
#### XLM-R Fine-Tuning
For translation suggestion, we fine-tune XLM-R with a modified translation
language modeling (TLM) objective (Lample and Conneau, 2019), which is
designed to better predict the masked spans of text in the translation.
Following Lample and Conneau (2019), we tokenize source (English) and target
(German) sentences with the shared BPE model (Sennrich et al., 2016), and
concatenate the source and target tokens with a separation token (</s>).
Unlike the TLM objective of Lample and Conneau (2019), which randomly masked
tokens in both the source and target sentences, we only mask tokens in target
sentences since the complete source sentence is always given in the
translation task. We randomly replace $p$% ($p\in[15,20,25]$) of the BPE
tokens in the target sentences by <mask> tokens and train the model to predict
the actual tokens for the masks. In addition, motivated by SpanBERT (Joshi et
al., 2020), we always mask complete words instead of sub-word tokens since
translation suggestion requires predictions of complete words. As training
data, we use the same parallel corpus that is used for MT training.
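To make the modified TLM objective concrete, the sketch below masks only complete target-side words at rate p and leaves the source untouched; the data structures and helper name are illustrative assumptions, not the paper's training code.

```python
# Sketch of the modified TLM masking: the source is left intact, and whole
# target-side words (groups of BPE tokens) are masked at rate p.
import random

def build_tlm_example(src_tokens, tgt_words, p=0.15, mask="<mask>"):
    masked_tgt, labels = [], []
    for word in tgt_words:                # word = list of its BPE tokens
        if random.random() < p:           # mask the complete word
            masked_tgt += [mask] * len(word)
            labels += word                # tokens the model must recover
        else:
            masked_tgt += word
            labels += [None] * len(word)  # not predicted
    return src_tokens + ["</s>"] + masked_tgt, labels

src = ["The", "cat", "sat", "."]
tgt = [["Die"], ["Kat", "ze"], ["saß"], ["."]]  # German words as BPE groups
inputs, labels = build_tlm_example(src, tgt, p=0.25)
```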
#### Inference
To suggest alternative translations for the selected sequence of words, we
first replace it with multiple <mask> tokens. The alternative translations may
consist of sub-word tokens of varying lengths. Hence, we generate $m$ inputs,
where $m$ denotes the maximum number of masks, and in the $i^{th}$ input
($i\in[1,...,m]$), the selected sequence is replaced with $i$ consecutive
<mask> tokens. In other words, we track all cases in which alternative
translations consist of $1$ to $m$ sub-word tokens. Then, each input is fed
into the fine-tuned XLM-R, and <mask> tokens are iteratively replaced by the
predicted tokens from left to right. In each iteration, we use a beam search
with a beam size $k$ to generate the top-$k$ candidates. Finally, all mask
prediction results from $m$ inputs are sorted based on probability, and the
top-$k$ results are shown to the user.
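The inference procedure above can be summarized as follows. In this sketch, `predict_topk` is an assumed stand-in for a forward pass of the fine-tuned XLM-R that returns the top-k (token, log-probability) pairs for one <mask> position; it is not a library call.

```python
# Sketch of the suggestion inference: try 1..m mask slots, fill them left to
# right with beam search, then rank all candidates by accumulated log-prob.

def suggest(tokens, sel_start, sel_end, predict_topk, m=4, k=5):
    candidates = []
    for n_masks in range(1, m + 1):               # i-th input uses i masks
        seq0 = tokens[:sel_start] + ["<mask>"] * n_masks + tokens[sel_end:]
        beam = [(seq0, 0.0)]
        for i in range(n_masks):                  # fill masks left to right
            pos = sel_start + i
            new_beam = []
            for seq, score in beam:
                for tok, logp in predict_topk(seq, pos):
                    filled = seq[:pos] + [tok] + seq[pos + 1:]
                    new_beam.append((filled, score + logp))
            beam = sorted(new_beam, key=lambda x: x[1], reverse=True)[:k]
        for seq, score in beam:                   # keep the filled span only
            candidates.append((seq[sel_start:sel_start + n_masks], score))
    candidates.sort(key=lambda x: x[1], reverse=True)
    return candidates[:k]                         # top-k across all m inputs
```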
### 3.5 Word Alignment and Automatic Formatting
To obtain word alignments, we jointly train the NMT model (section 3.2) to
produce both translations and alignments following Garg et al. (2019). One
attention head on the Transformer’s penultimate layer is supervised with an
alignment loss to learn the alignments. We use Giza++ (Och and Ney, 2003)
alignments as the guided labels for the training. As sub-word segmentation is
used to train the NMT model, we convert the sub-word-level alignments back to
the word-level. We consider each target word to be aligned with a source word
if any of the target sub-words is aligned with the source sub-words.
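This conversion rule is a small set operation. The sketch below is a minimal illustration, assuming alignments are given as index pairs and that maps from sub-word index to word index are available.

```python
# Sketch of the sub-word-to-word alignment conversion: a target word is
# aligned to a source word if any of their sub-words are aligned.

def to_word_alignments(subword_aligns, src_word_of, tgt_word_of):
    """subword_aligns: iterable of (src_subword_idx, tgt_subword_idx) pairs;
    src_word_of / tgt_word_of: map sub-word index -> word index."""
    return {(src_word_of[i], tgt_word_of[j]) for i, j in subword_aligns}

# Example: target word 1 is split into two sub-words (indices 1 and 2)
aligns = {(0, 0), (1, 1), (1, 2)}     # sub-word level alignments
src_map = {0: 0, 1: 1}                 # two source words
tgt_map = {0: 0, 1: 1, 2: 1}           # three target sub-words, two words
print(to_word_alignments(aligns, src_map, tgt_map))  # {(0, 0), (1, 1)}
```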
We provide two features based on word alignment information. First, when the
user selects a specific word or phrase in the machine-translated sentence, the
corresponding words or phrases in the source sentence are highlighted using a
heatmap. Second, formatting tags are automatically inserted at the appropriate
locations in the machine-translated sentences. We use two types of tags to
represent the formatting of the document: paired tags and unpaired tags.
Paired tags represent styles applied across a section of text (e.g., bold or
italic). To retain the style applied in the source sentence to the MT, we
identify the source word with the highest alignment score for each target word
and apply the corresponding source word’s style to the target word. By
contrast, unpaired tags represent inline non-text contents such as media
elements and equations. To automatically insert an unpaired tag in the MT, we
identify the target word with the highest alignment score with the source word
right before the tag and insert the corresponding tag after the target word.
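For the paired-tag case, the transfer rule can be sketched as below; the data structures are illustrative assumptions, and unpaired-tag insertion follows the symmetric rule described above.

```python
# Sketch of paired-tag style transfer: each target word inherits the styles
# of the source word with the highest alignment score.

def transfer_styles(align_scores, src_styles):
    """align_scores[t][s]: alignment score between target word t and source
    word s; src_styles[s]: set of styles (e.g., {'bold'}) on source word s."""
    tgt_styles = []
    for scores in align_scores:
        best_src = max(range(len(scores)), key=lambda s: scores[s])
        tgt_styles.append(src_styles[best_src])
    return tgt_styles

scores = [[0.9, 0.1], [0.2, 0.8]]                  # 2 target x 2 source words
print(transfer_styles(scores, [{"bold"}, set()]))  # [{'bold'}, set()]
```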
| Model | TER$\downarrow$ (Predicted QE) | BLEU$\uparrow$ (Predicted QE) | TER$\downarrow$ (Oracle QE) | BLEU$\uparrow$ (Oracle QE) |
|---|---|---|---|---|
| Baseline (MT) | 31.37 | 50.37 | 31.37 | 50.37 |
| XLM-R (Conneau et al., 2020), Top-1 | 30.28 (-1.09) | 50.78 (+0.41) | 26.57 (-4.80) | 56.02 (+5.65) |
| XLM-R (Conneau et al., 2020), Top-3 | 29.47 (-1.90) | 50.89 (+0.52) | 24.10 (-7.27) | 60.28 (+9.91) |
| XLM-R (Conneau et al., 2020), Top-5 | 28.75 (-2.62) | 51.85 (+1.48) | 22.78 (-8.59) | 62.40 (+12.03) |
| Proposed, Top-1 | 29.04 (-2.33) | 51.93 (+1.56) | 24.26 (-7.11) | 59.38 (+9.01) |
| Proposed, Top-3 | 26.69 (-4.68) | 54.70 (+4.33) | 19.08 (-12.29) | 67.51 (+17.14) |
| Proposed, Top-5 | 25.36 (-6.01) | 56.52 (+6.15) | 17.30 (-14.07) | 70.50 (+20.13) |
Table 1: TER and BLEU for machine-translated sentences (Baseline) and post-
edited sentences (XLM-R and Proposed) based on word-level QE and translation
suggestion.
## 4 Experiments
### 4.1 Model Evaluation
#### Experimental Setup
To evaluate the performance of translation suggestions, we measure MT quality
improvement when a sentence is corrected with the suggested words or phrases.
We introduce two selection conditions (Oracle QE and Predicted QE) and two
suggestion methods (XLM-R and Proposed). The selection conditions locate the
words that need to be corrected in a sentence; in Oracle QE condition, the
ground truth word-level QE label is used as a baseline, and in Predicted QE
condition, our word-level QE model is used to identify the target words. The
suggestion methods determine the words that the selected words should be
replaced with. We test two suggestion models, the pre-trained
XLM-R (https://pytext.readthedocs.io/en/master/xlm_r.html) and the proposed
model, fine-tuned with the modified TLM objective, with three different
suggestion sizes: top-1, top-3, and top-5.
Each of the QE and translation suggestion models was trained using two Tesla
V100 GPUs. As an evaluation dataset, we use the WMT 2020 English-German QE dev
dataset (Specia et al., 2020). As evaluation metrics, we use the translation
error rate (TER) (Snover et al., 2006) and BLEU (Papineni et al., 2002).
#### Experimental Result
Table 1 shows the translation quality of (1) MT sentences (baseline), (2)
post-edited sentences with XLM-R-based translation suggestion, and (3) post-
edited sentences with the proposed translation suggestion model. When MT
sentences are post-edited based on QE prediction with the top-1 suggestion,
TER and BLEU are improved over the baseline by $-$2.33 and $+$1.56,
respectively. This result suggests that our QE and translation suggestion
models can be used to improve MT performance without human intervention. When
the top-5 suggestions are provided, TER and BLEU are improved by $-$6.01 and
$+$6.15, respectively, for the QE prediction condition and improved by
$-$14.07 and $+$20.13, respectively, for the oracle QE condition. These
results imply that post-editing based on translation suggestions can
significantly improve the translation quality. Finally, the proposed model
significantly outperforms XLM-R in all experimental settings, showing that
fine-tuning XLM-R with the modified TLM objective is effective for the
suggestion performance.
### 4.2 User Study
We conducted a user study to evaluate the effectiveness of IntelliCAT.
#### Tasks and Stimuli
We asked participants to translate an English document to German using the
given interface. As stimuli, we prepared three English documents, each with 12
sentences and 130, 160, and 164 words. The documents included 22, 18, and 20
styles, respectively (e.g., bold, italic, or a footnote), and participants
were also asked to apply these styles in the target document.
#### Translation Interfaces
We compared three translation interfaces: MSWord, MT-Only, and Full. In
MSWord, the participants were asked to translate documents using a popular
word processor, Microsoft Word. In this baseline condition, two Microsoft Word
instances were shown side-by-side: one showing an English document (source)
and the other showing an empty document where one could type the translated
sentences (target). In MT-Only, participants started with a machine-translated
document on IntelliCAT without QE, translation suggestion, and word alignment;
they had to edit incorrect parts and transfer styles by themselves. In Full,
the participants could use all the features of IntelliCAT.
#### Participants and Study Design
We recruited nine participants (aged 23–31 years). All participants majored in
German and were fluent in both English and German. We adopted a within-subject
design; each participant tested all three interfaces and three documents.
Thus, our study consisted of 9 (participants) $\times$ 3 (conditions) = 27
trials in total. The order of interfaces and documents was counterbalanced
using a $3\times 3$ Latin square to alleviate the possible bias of learning
effects or fatigue. For each trial, we measured the translation completion
time.
#### Procedure
Participants attended a training session for ten minutes, where they tried
each interface with a short sample document. Subsequently, they performed
three translation tasks with different interfaces. We allowed them to look up
words for which they did not know the translation before starting each
translation task. Upon completing the three tasks, participants responded to a
system usability scale (SUS) questionnaire (Brooke, 1996), and we gathered
subjective feedback. The entire session took approximately 90 min per
participant.
Figure 2: SUS feedback. The usability of IntelliCAT was evaluated as an excellent level with a score of 88.61 ± 7.82.

| Interface | Avg. time (s) |
|---|---|
| MSWord | 1178.78 ± 280.41 |
| MT-Only | 688.00 ± 175.02 |
| Full | 555.66 ± 200.81 |
Table 2: Translation completion time. The differences between the three
interface conditions are statistically significant.
#### Result and Discussion
Table 2 summarizes the result of the user study. A repeated measures ANOVA
with a Greenhouse-Geisser correction found a significant difference in
completion time between the three translation interfaces
($F(1.306,10.449)=56.398$, $p<0.001$). Post hoc tests using the Bonferroni
correction revealed that Full (555.66 ± 200.81 s) was significantly faster than
MT-Only (688.00 ± 175.02 s) ($p=0.013$) and MT-Only was significantly faster
than MSWord (1,178.78 ± 280.41 s) ($p<0.001$). These results suggest that our
QE, translation suggestion, and word alignment features could further
accelerate post-editing (a 19.2% speedup) (Full vs. MT-Only), and our system
could reduce the translation time by more than half (52.9%) compared to
translating from scratch (Full vs. MSWord).
We could not find a significant difference between documents
($F(1.964,15.712)=0.430$, $ns$) with the same statistical procedure, which
suggests that the translation difficulties of the three English documents were
not statistically different.
Our interface received a mean SUS score of 88.61 ($\sigma=7.82)$, which is
slightly higher than the score for an “Excellent” adjective rating (85.58,
Bangor et al. (2008)). Eight out of nine participants reported that QE was
useful for proofreading purposes; P2 stated, “With QE, I could double-check
the words that are possibly wrong.” All participants evaluated the translation
suggestions to be useful; P7 mentioned “Translation suggestion was very
convenient. It might significantly reduce the dependence on the dictionary.”
Overall, the user study results demonstrated the effectiveness of IntelliCAT
both quantitatively and qualitatively, and we found that human translators
could streamline their post-editing process with the three features provided
in IntelliCAT.
## 5 Conclusion and Future Work
In this paper, we introduce IntelliCAT, an intelligent MT post-editing
interface for document translation. The interface provides three neural
network-based features to assist post-editing: (1) sentence-level and word-
level QEs, (2) alternative translation suggestions for words or phrases, and
(3) automatic formatting of the translated document based on word alignments.
The model evaluation shows that post-editing based on the proposed QE and
translation suggestion models can significantly improve the quality of
translation. Moreover, the user study shows that these features significantly
accelerate post-editing, achieving a 52.9% speedup in translation time
compared to translating from scratch. Finally, the usability of IntelliCAT was
evaluated as an “excellent” level, with a SUS score of 88.61.
In future work, we will build a pipeline that continuously improves the
performance of neural models based on automatically collected triplets
consisting of source, MT, and post-edited sentences. We will implement an
automatic post-editing (Chatterjee et al., 2020) model to continuously improve
MT performance and apply online learning to QE models to continually enhance
QE performance.
## References
* Alabau et al. (2014) Vicent Alabau, Christian Buck, Michael Carl, Francisco Casacuberta, Mercedes García-Martínez, Ulrich Germann, Jesús González-Rubio, Robin Hill, Philipp Koehn, Luis A Leiva, et al. 2014. Casmacat: A computer-assisted translation workbench. In _Proceedings of the Demonstrations at the 14th Conference of the European Chapter of the Association for Computational Linguistics_ , pages 25–28.
* Aranberri et al. (2014) Nora Aranberri, Gorka Labaka, A Diaz de Ilarraza, and Kepa Sarasola. 2014. Comparison of post-editing productivity between professional translators and lay users. In _Proceeding of AMTA Third Workshop on Post-editing Technology and Practice (WPTP-3), Vancouver, Canada_ , pages 20–33.
* Bangor et al. (2008) Aaron Bangor, Philip T Kortum, and James T Miller. 2008. An empirical evaluation of the system usability scale. _Intl. Journal of Human–Computer Interaction_ , 24(6):574–594.
* Barrault et al. (2020) Loïc Barrault, Magdalena Biesialska, Ondřej Bojar, Marta R Costa-jussà, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, et al. 2020. Findings of the 2020 conference on machine translation (wmt20). In _Proceedings of the Fifth Conference on Machine Translation_ , pages 1–55.
* Van den Bergh et al. (2015) Jan Van den Bergh, Eva Geurts, Donald Degraen, Mieke Haesen, Iulianna Van der Lek-Ciudin, Karin Coninx, et al. 2015. Recommendations for translation environments to improve translators’ workflows. _Translating and the Computer_ , 37:106–119.
* Brooke (1996) John Brooke. 1996. SUS: A ‘quick and dirty’ usability scale. _Usability evaluation in industry_ , 189.
* Chatterjee et al. (2020) Rajen Chatterjee, Markus Freitag, Matteo Negri, and Marco Turchi. 2020. Findings of the wmt 2020 shared task on automatic post-editing. In _Proceedings of the Fifth Conference on Machine Translation_ , pages 646–659.
* Conneau et al. (2020) Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Édouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 8440–8451.
* Coppers et al. (2018) Sven Coppers, Jan Van den Bergh, Kris Luyten, Karin Coninx, Iulianna Van der Lek-Ciudin, Tom Vanallemeersch, and Vincent Vandeghinste. 2018. Intellingo: An intelligible translation environment. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_ , pages 1–13.
* Do Carmo (2020) Félix Do Carmo. 2020. Comparing post-editing based on four editing actions against translating with an auto-complete feature. In _Proceedings of the 22nd Annual Conference of the European Association for Machine Translation_ , pages 421–430.
* Federico et al. (2014) Marcello Federico, Nicola Bertoldi, Mauro Cettolo, Matteo Negri, Marco Turchi, Marco Trombetti, Alessandro Cattelan, Antonio Farina, Domenico Lupinetti, Andrea Martines, et al. 2014. The matecat tool. In _COLING (Demos)_ , pages 129–132.
* Fonseca et al. (2019) Erick Fonseca, Lisa Yankovskaya, André FT Martins, Mark Fishel, and Christian Federmann. 2019. Findings of the wmt 2019 shared tasks on quality estimation. In _Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)_ , pages 1–10.
* Garg et al. (2019) Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, and Matthias Paulik. 2019. Jointly learning to align and translate with transformer models. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 4443–4452.
* Green et al. (2014a) Spence Green, Jason Chuang, Jeffrey Heer, and Christopher D Manning. 2014a. Predictive translation memory: A mixed-initiative system for human language translation. In _Proceedings of the 27th annual ACM symposium on User interface software and technology_ , pages 177–187.
* Green et al. (2013) Spence Green, Jeffrey Heer, and Christopher D Manning. 2013. The efficacy of human post-editing for language translation. In _Proceedings of the SIGCHI conference on human factors in computing systems_ , pages 439–448.
* Green et al. (2014b) Spence Green, Sida I Wang, Jason Chuang, Jeffrey Heer, Sebastian Schuster, and Christopher D Manning. 2014b. Human effort and machine learnability in computer aided translation. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 1225–1236.
* Herbig et al. (2020) Nico Herbig, Tim Düwel, Santanu Pal, Kalliopi Meladaki, Mahsa Monshizadeh, Antonio Krüger, and Josef van Genabith. 2020. Mmpe: A multi-modal interface for post-editing machine translation. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 1691–1702.
* Joshi et al. (2020) Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. _Transactions of the Association for Computational Linguistics_ , 8:64–77.
* Klein et al. (2017) Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017\. OpenNMT: Open-source toolkit for neural machine translation. In _Proc. ACL_.
* Koehn (2009) Philipp Koehn. 2009. A process study of computer-aided translation. _Machine Translation_ , 23(4):241–263.
* Kudo (2018) Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 66–75.
* Lample and Conneau (2019) Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. _arXiv preprint arXiv:1901.07291_.
* Lee (2020) Dongjun Lee. 2020. Two-phase cross-lingual language model fine-tuning for machine translation quality estimation. In _Proceedings of the Fifth Conference on Machine Translation_ , pages 1024–1028, Online. Association for Computational Linguistics.
* Lee et al. (2020) Jihyung Lee, WonKee Lee, Jaehun Shin, Baikjin Jung, Young-Gil Kim, and Jong-Hyeok Lee. 2020. Postech-etri’s submission to the wmt2020 ape shared task: Automatic post-editing with cross-lingual language model. In _Proceedings of the Fifth Conference on Machine Translation_ , pages 777–782.
* Lo and Joanis (2020) Chi-kiu Lo and Eric Joanis. 2020. Improving parallel data identification using iteratively refined sentence alignments and bilingual mappings of pre-trained language models. In _Proceedings of the Fifth Conference on Machine Translation_ , pages 972–978.
* Moorkens and O’Brien (2017) Joss Moorkens and Sharon O’Brien. 2017. Assessing user interface needs of post-editors of machine translation. _Human issues in translation technology_ , pages 109–130.
* Och and Ney (2003) Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. _Computational Linguistics_ , 29(1):19–51.
* Pal et al. (2016) Santanu Pal, Marcos Zampieri, Sudip Kumar Naskar, Tapas Nayak, Mihaela Vela, and Josef van Genabith. 2016. Catalog online: Porting a post-editing tool to the web. In _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16)_ , pages 599–604.
* Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics_ , pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
* Santy et al. (2019) Sebastin Santy, Sandipan Dandapat, Monojit Choudhury, and Kalika Bali. 2019. Inmt: Interactive neural machine translation prediction. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations_ , pages 103–108.
* Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1715–1725.
* Snover et al. (2006) Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In _Proceedings of association for machine translation in the Americas_ , volume 200.
* Specia et al. (2020) Lucia Specia, Frédéric Blain, Marina Fomicheva, Erick Fonseca, Vishrav Chaudhary, Francisco Guzmén, and André F. T. Martins. 2020. Findings of the wmt 2020 shared task on quality estimation. In _Proceedings of the Fifth Conference on Machine Translation_ , pages 743–764, Online. Association for Computational Linguistics.
* Toral et al. (2018) Antonio Toral, Martijn Wieling, and Andy Way. 2018. Post-editing effort of a novel with statistical and neural machine translation. _Frontiers in Digital Humanities_ , 5:9.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Advances in neural information processing systems_ , pages 5998–6008.
* Wang et al. (2020) Jiayi Wang, Ke Wang, Kai Fan, Yuqi Zhang, Jun Lu, Xin Ge, Yangbin Shi, and Yu Zhao. 2020. Alibaba’s submission for the wmt 2020 ape shared task: Improving automatic post-editing with pre-trained conditional cross-lingual bert. In _Proceedings of the Fifth Conference on Machine Translation_ , pages 789–796.
## Appendix A Sample Document Translation
Figure 3 shows a sample document and the translated document using IntelliCAT
without human intervention.
Figure 3: A sample document (left) and the translated document (right)
without human intervention.
# Rapid Detection of Aircrafts in Satellite Imagery based on Deep Neural
Networks
Arsalan Tahir* (RCMS, NUST, Islamabad, Pakistan) <EMAIL_ADDRESS>
Muhammad Adil (RCMS, NUST, Islamabad, Pakistan) <EMAIL_ADDRESS>
Arslan Ali (RCMS, NUST, Islamabad, Pakistan) <EMAIL_ADDRESS>
*Research Center for Modeling and Simulation (RCMS), NUST
###### Abstract
Object detection is one of the fundamental objectives in applied computer vision. In some applications, such as satellite image processing, object detection becomes very challenging. Satellite image processing has remained a focus of researchers in domains such as precision agriculture, climate change, and disaster management, and object detection in satellite imagery is one of the most researched problems in this domain. This paper focuses on aircraft detection in satellite imagery using deep learning techniques. We use the YOLO deep learning framework for aircraft detection. The method uses satellite images collected from different sources as training data for the model. Object detection in satellite images is complex because objects vary widely in type, pose, and size and appear against complex, dense backgrounds. YOLO has limitations for small objects (less than $\sim$32 pixels per object), so we upsample the prediction grid to reduce the coarseness of the model and to accurately detect densely clustered objects. The improved model shows good accuracy and performance on unseen images containing small, rotated, and densely clustered objects, meeting real-time requirements.
###### Index Terms:
Deep Learning, Satellite Images, YOLO
## I Introduction
Object detection is a fundamental challenge in computer vision and a key area of active research. The purpose of object detection is to find an instance of an object at a specific location by drawing a bounding box around it [1]. Many high-level vision tasks, such as segmentation, activity recognition, and event capturing, build on object detection. Traditional machine learning techniques are not suitable for real-time object detection. With the advent of deep learning, computers have become capable of understanding visual imagery much like humans. In satellite imagery, object detection is a very complicated task due to low resolution and densely clustered objects. Three deep learning frameworks that address object detection are Faster RCNN [2], YOLO [3], and SSD [4]. YOLO has the greatest inference speed and score on the Pascal VOC dataset [5]. The main problem is to locate aircraft efficiently within the large search area of an image.
Figure 1: Bounding boxes in the above image show the detection of aircraft.
For this purpose, a fully autonomous application is required that performs classification and localization in real time. We use a YOLO-based deep neural network to solve this problem. This paper presents aircraft detection in satellite images based on the YOLO real-time detector.

The rest of this paper is organized as follows. Section II covers related work. Section III discusses the network architecture, and Section IV describes the dataset. The methodology and results are covered in Sections V and VI, and the conclusion is given in the last section.
## II Related Work
The frequently used object detection techniques suggested by national and international academics fall primarily into three categories: those based on motion data, those based on feature extraction, and those based on template matching. Qinhan et al. [6] selected multiple windows with the highest likelihood of containing objects and then applied SVM and HOG techniques to generate proposals; a disadvantage of this method is its use of fixed-size windows. Cheng et al. [7] used subtraction and registration techniques to identify objects in satellite images. Lee et al. [8] applied RCNN to detect objects in images taken by UAVs. Azevedo et al. [9] used median background techniques to identify objects in aerial imagery. J. Khan et al. [10] proposed automated target detection for satellite images using the Edge Boxes algorithm. Junyan Lu et al. [11] proposed a method for vehicle detection using the YOLO deep learning framework. Douillard [12] proposed a deep learning method for object detection in satellite images, using the RetinaNet architecture on the COWC dataset. Lu Zhang et al. [13] presented a hierarchical oil tank detector with deep surrounding features for high-resolution optical satellite imagery; the method is divided into three modules: candidate selection, feature extraction, and classification. Marcum et al. [14] proposed a sliding-window approach to localize surface-to-air missile (SAM) sites in satellite images. This approach gives good results when the object spans hundreds of meters, but it is computationally expensive for small objects: for small objects, millions of sliding-window cutouts are generated over a DigitalGlobe image at 10-meter scale.

With the arrival of deep learning and GPU technology, development in the field of computer vision has become fast and efficient, especially for pattern recognition and image processing problems, and these methods are more robust than traditional techniques. Deep learning techniques play a very important part in object detection because they can extract features from an image automatically and provide excellent accuracy. To detect a category (a horse, a cat, a dog), we first collect a large dataset and train on it. After training, an image is given for prediction, and the output is produced as a vector of scores, one for each category. We define an objective function that measures the error between the desired output and the output score vector; to minimize this error, the machine adjusts its internal parameters (weights, which are real numbers). In deep learning, millions of weights are used for training; the gradient of the loss with respect to each weight tells by what amount the error increases or decreases as that weight changes, and each weight is then adjusted in the direction opposite to its gradient. Many practitioners therefore use stochastic gradient descent, which performs this update on randomly sampled examples, to minimize the loss function [15] [16] [17]; a minimal sketch of such an update is shown below.
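The following is a minimal sketch of a momentum-smoothed stochastic gradient descent step, with illustrative values; it is not the YOLO training code.

```python
# Minimal sketch of an SGD update with momentum: each weight moves opposite
# to its gradient, scaled by the learning rate.
import numpy as np

def sgd_step(weights, grads, lr=0.001, momentum=0.9, velocity=None):
    velocity = np.zeros_like(weights) if velocity is None else velocity
    velocity = momentum * velocity - lr * grads  # momentum-smoothed step
    return weights + velocity, velocity

w = np.array([0.5, -0.3])
g = np.array([0.2, -0.1])   # gradient of the loss w.r.t. w on a random batch
w, v = sgd_step(w, g)
```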
Therefore, this paper uses the YOLO deep learning method to obtain real-time detection performance on satellite images.
## III Network Architecture
Redmon et al. proposed YOLO [3], a real-time object detector based on a convolutional neural network. Joseph Redmon and Ali Farhadi later released a new version, YOLOv2, with improved performance and speed [18]. The latest version, YOLOv3, also by Joseph Redmon and Ali Farhadi, adds layers to the architecture to further improve speed and accuracy [19]. YOLO has many advantages over traditional algorithms because of its architecture. Traditional methods use region proposal networks to generate proposals and then apply a CNN to these proposals for feature extraction; such two-stage detection architectures are slow and not real-time for satellite imagery. YOLO takes an image with resolution 416 × 416 × 3 and divides the input image into an S × S grid. If the center of an object falls into a grid cell, then that cell is responsible for predicting the object. Each grid cell predicts B bounding boxes and confidence scores for those boxes. YOLOv1 has larger localization error and a lower recall rate than region-proposal-based techniques such as Fast R-CNN; the primary improvements of YOLOv2 are therefore an improved recall rate, batch normalization, anchor boxes, and multiscale training. Batch normalization normalizes the data during training to mean 0 and variance 1, which increases training speed, helps prevent vanishing gradients, and makes the network converge faster. Faster RCNN adds fully connected layers after the convolutional layers to predict bounding boxes directly, whereas YOLO uses anchor boxes, which improves speed and recall rate.
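The grid-assignment rule can be made concrete as below; the 416 × 416 input and the 26 × 26 grid used later in this paper are assumed, and the helper name is illustrative.

```python
# Sketch of the grid assignment: the cell containing the object's center is
# responsible for predicting that object.

def responsible_cell(cx, cy, img_w=416, img_h=416, S=26):
    col = int(cx / img_w * S)  # grid column containing the center x
    row = int(cy / img_h * S)  # grid row containing the center y
    return row, col

print(responsible_cell(208.0, 104.0))  # (6, 13) on a 26 x 26 grid
```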
Figure 2: YOLO CNN Architecture
During training, YOLO adjusts the input after every 10 epochs to make the model perform well at test time on multiscale images. The CNN architecture used in this paper has 24 convolutional layers followed by two fully connected layers. To reduce coarseness, the model uses a 26 $\times$ 26 prediction grid, corresponding to a downsampling factor of 16. This architecture shows the highest speed and accuracy. Figure 2 displays the whole CNN architecture, which is preferred due to its computational speed and accuracy. The final layer is used for the classification of objects with a probability between 0 and 1.
## IV Dataset
First, we collect datasets from DigitalGlobe satellite and apply some
preprocessing and data augmentation techniques. In preprocessing we converted
all image in 550 × 350 resolution to reduce the training time. After that we
used labeling tools for tagging of images. After tagging we converted the data
into standard architecture using python language.Datasets have played very
important role in the area of object detection. It is most important factor
for measuring and analysis of performance of different algorithms and also
pushing this field towards challenging and complex problems. The internet
makes it possible to capture diversity and richness of objects in large images
with large number of categories. The increase in large scale datasets with
millions of images has played important role and opened unprecedented
performance in object detection.
Classifiers trained on conventional datasets show poor results on satellite images due to the following conditions.

* •
Spatial resolution
* –
Objects in satellite images are very small and densely clustered rather than prominent and large; a small object like a car may span only ~15 pixels even in high-resolution images.
* •
Rotation invariance
* –
Objects in satellite imagery can have any orientation (for example, ships may be oriented anywhere from 0 to 360 degrees).
* •
Training example frequency
* –
There is a relative dearth of training data for satellite imagery, and objects are often not clearly visible in shape.
* •
Ultra-high resolution
* –
Images are of very high resolution (hundreds of megapixels), but most algorithms take input images of a few hundred pixels. Upsampling the image makes the object of interest large and dispersed, which is not feasible for standard architectures, while downsampling can distort the object's shape.
* •
Temporal (time of day/season/year)
* –
Seasonal differences and the time of day also affect satellite images.
For the reasons above, it is difficult for classifiers trained on conventional datasets to detect objects in satellite images. We therefore need specialized satellite-image datasets whose processing is computationally inexpensive and time-efficient. Some datasets used for aerial images are described below.

VEDAI Dataset: Razakarivony et al. [20] created VEDAI (Vehicle Detection in Aerial Images), collected from the public Utah AGRC database. The images have three RGB channels and one infrared channel. The authors split the images into 1024 × 1024 RGB tiles, downsampled them to 512 × 512 pixels, and ignored the infrared channel, using only the RGB channels with a GSD (ground sample distance) of 12.5 cm. This dataset consists of nine vehicle classes and 1250 images in total (“plane”, “boat”, “camping car”, “tractor”, “van”, “pick-up”, and “other”). Each image annotation has five parts: the object class and four object coordinates.

COWC Dataset: Mundhenk et al. [21] created COWC (Cars Overhead with Context), collected from six locations. The images are 2000 × 2000 pixels, and there are 53 images in TIFF format, covering Columbus and Utah (United States), Selwyn (New Zealand), Potsdam and Vaihingen (Germany), and Toronto (Canada). The images of Columbus and Vaihingen are in grayscale while the rest are RGB. The object size in the images is 24 pixels, with a GSD of 15 cm per pixel. They annotated 32,716 car objects; each annotation includes the object class and four object coordinates.

DOTA Dataset: Guisong et al. [22] created DOTA (Dataset for Object Detection in Aerial Images), collected from different sources, such as Google Earth, and different sensors. The GSD of the images is diversified, characterized by multiple resolutions and sensors. DOTA images are 4000 × 4000 pixels with 15 annotated classes, including “plane”, “storage tank”, “swimming pool”, “ship”, “harbor”, “bridge”, “helicopter”, and “other”. Each image annotation has five parts: the object class and four object coordinates.

All of the above datasets belong to aerial imagery. There are several reasons for creating our own dataset. First, satellite imagery datasets are not commonly available. Second, the two or three datasets that are available contain few objects. We therefore collected images from DigitalGlobe and converted them for standard architectures by applying preprocessing techniques. Data annotation is the process of labeling instances of specific classes (e.g., human, car) in a form understandable to machines. Annotation is performed manually by humans using an annotation tool, and large amounts of labeled data are stored for machine learning. Object regions are delimited with bounding boxes, and the object coordinates are stored in a file for training. We used the open-source Bounding Box Label tool to create ground-truth boxes for the aircraft in the dataset [23].
## V Methodology
We performed two steps in methodology in which, first we make a dataset for
standard architecture and second configure the parameters for training to
obtain results. We collect a dataset of satellite images from different
sources and manually annotate the images and draw anchor boxes on desired
objects. There are two parts in the dataset: images, which are in JPEG format
and labels, which are in text format. Evert text file is saved according to
images, which contain the annotation of objects and the format of the
annotation is:
$\displaystyle<object-class><x,y,w,h>$ (1)
Where x and y are the center points of object and w and h are the width and
height of object correspondence to the image and name of object class. The
input dimension of YOLO is 416 × 416 ×3 for training but you should care about
the image size should not large may lose the useful information. The basic
information of publically available datasets of aerial imagery is described in
Table II.
We process our dataset and convert it into this standard format (a code sketch follows the list below) using:

* •
Center points
$\displaystyle x=(x_{max}+x_{min})/2$ (2)
$\displaystyle y=(y_{max}+y_{min})/2$ (3)
* •
Width and height
$\displaystyle w=(x_{max}-x_{min})$ (4)
$\displaystyle h=(y_{max}-y_{min})$ (5)
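Eqs. (2)-(5) translate directly into code. The sketch below is illustrative; the function name is hypothetical, and normalizing by the image size, which YOLO labels conventionally use, is an assumption the equations leave implicit.

```python
# Sketch of the annotation conversion in Eqs. (2)-(5): corner coordinates
# (x_min, y_min, x_max, y_max) become a center point plus width and height.

def to_yolo_label(cls, x_min, y_min, x_max, y_max, img_w, img_h):
    x = (x_min + x_max) / 2 / img_w  # normalized center x  (Eq. 2)
    y = (y_min + y_max) / 2 / img_h  # normalized center y  (Eq. 3)
    w = (x_max - x_min) / img_w      # normalized width     (Eq. 4)
    h = (y_max - y_min) / img_h      # normalized height    (Eq. 5)
    return f"{cls} {x:.6f} {y:.6f} {w:.6f} {h:.6f}"

print(to_yolo_label(0, 100, 50, 180, 120, 550, 350))
```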
Batch training passes the dataset through the learning algorithm and saves the weights; the batch size is the number of training examples in one forward pass. The learning rate controls the optimization that minimizes the neural network's loss function, which maps variable values onto a real number representing the associated cost. During training it is common to use weight decay, which multiplies the weights by a value less than 1 after each update and prevents them from growing too large. Momentum is used to improve both training speed and accuracy. Our network uses a 26 × 26 grid, was tested on one object class, and returns a 26 × 26 × 11 tensor. We used batch_size = 64 and filters = 30 for training.
Figure 3: Testing output of YOLO
Figure 3 shows that YOLO generates a number of bounding boxes for a given input image and uses the non-max suppression technique to keep the correct bounding box around each object, i.e., the one with maximum intersection over union.
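The overlap measure used by non-max suppression is intersection over union; a minimal sketch of its computation follows, with an illustrative example.

```python
# Sketch of intersection over union (IoU), the overlap measure that non-max
# suppression uses to discard redundant boxes.

def iou(a, b):
    """a, b: boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```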
## VI Results
In this paper we used an NVIDIA GeForce GTX 1060 GPU for training. We changed the architecture of the model according to the object size ($\sim$10 pixels per object) and trained on the custom dataset. Detection results on unseen images are shown in Figure 4. Figure 4 (left) shows that our model performs well on small objects. The middle panel also shows good results, but in Figure 4 (right) one object is not detected. Overall, the model gives good results and detects objects within milliseconds, detecting more than 96% of aircraft. Table I shows that YOLO was able to identify “aircraft” objects in the dataset with 94.20% accuracy.
TABLE I: Test results on unseen testing images

| Indicator | Accuracy | Precision | F1-score | FPS |
|---|---|---|---|---|
| Value | 94.20% | 99% | 96% | 55 |

Figure 4: Test results on unseen images.

TABLE II: Comparison with other satellite imagery (aircraft) datasets

| Sr. | Name | Number of Objects | Type | Annotation Format |
|---|---|---|---|---|
| 1 | NWPU-RESISC45 Dataset | 700 | Aircraft | (class, 0, 1) |
| 2 | NWPU VHR-10 Dataset | 800 | Aircraft | (class, 0, 1) |
| 3 | Custom Dataset | 2213 | Aircraft | (class, x, y, w, h) |
## VII Conclusion
In this paper, rapid aircraft detection based on YOLO deep learning is presented. We used 2200 objects for training, tuning the parameters to calculate anchors with good intersection over union. The improved model performs well on unseen images with small and densely clustered objects and meets real-time requirements. The results show that this approach is fast and robust for aircraft detection in dense airports. Next, we will increase the number of objects and classes to achieve better performance and accuracy.
## Conflicts of Interest
The authors declare no conflicts of interest.
## References
* [1] L. Liu, W. Ouyang, X. Wang, P. Fieguth, J. Chen, X. Liu, and M. Pietikäinen, “Deep learning for generic object detection: A survey,” _arXiv preprint arXiv:1809.02165_ , 2018.
* [2] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in _Advances in neural information processing systems_ , 2015, pp. 91–99.
* [3] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 779–788.
* [4] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. E. Reed, C.-Y. Fu, and A. C. Berg, “Ssd: single shot multibox detector. corr abs/1512.02325 (2015),” _arXiv preprint arXiv:1512.02325_ , 2015.
* [5] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes (voc) challenge,” _International journal of computer vision_ , vol. 88, no. 2, pp. 303–338, 2010.
* [6] Q. Luo and Z. Shi, “Airplane detection in remote sensing images based on object proposal,” in _2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_. IEEE, 2016, pp. 1388–1391.
* [7] P. Cheng, G. Zhou, and Z. Zheng, “Detecting and counting vehicles from small low-cost uav images,” in _ASPRS 2009 Annual Conference, Baltimore_ , vol. 3, 2009, pp. 9–13.
* [8] J. Lee, J. Wang, D. Crandall, S. Sabanovic, and G. Fox, “Real-time object detection for unmanned aerial vehicles based on cloud-based convolutional neural networks,” in _Proc. IEEE International Conference on Robotic Computing (IRC). Taichung, Taiwan. doi_ , vol. 10, 2017.
* [9] C. L. Azevedo, J. L. Cardoso, M. Ben-Akiva, J. P. Costeira, and M. Marques, “Automatic vehicle trajectory extraction by aerial remote sensing,” _Procedia-Social and Behavioral Sciences_ , vol. 111, pp. 849–858, 2014.
* [10] M. J. Khan, A. Yousaf, N. Javed, S. Nadeem, and K. Khurshid, “Automatic target detection in satellite images using deep learning,” _J. Space Technol._ , vol. 7, no. 1, pp. 44–49, 2017.
* [11] J. Lu, C. Ma, L. Li, X. Xing, Y. Zhang, Z. Wang, and J. Xu, “A vehicle detection method for aerial image based on yolo,” _Journal of Computer and Communications_ , vol. 6, pp. 98–107, 2018.
* [12] A. Douillard, “Object detection with deep learning on aerial imagery,” Data from the Trenches, https://medium.com/data-from-the-trenches/object-detection-with-deep-learning-on-aerial-imagery-2465078db8a9, 2018.
* [13] L. Zhang, Z. Shi, and J. Wu, “A hierarchical oil tank detector with deep surrounding features for high-resolution optical satellite imagery,” _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_ , vol. 8, no. 10, pp. 4895–4909, 2015.
* [14] R. A. Marcum, C. H. Davis, G. J. Scott, and T. W. Nivin, “Rapid broad area search and detection of chinese surface-to-air missile sites using deep convolutional neural networks,” _Journal of Applied Remote Sensing_ , vol. 11, no. 4, p. 042614, 2017.
* [15] J. Sherrah, “Fully convolutional networks for dense semantic labelling of high-resolution aerial imagery,” _arXiv preprint arXiv:1606.02585_ , 2016\.
* [16] T. Qu, Q. Zhang, and S. Sun, “Vehicle detection from high-resolution aerial images using spatial pyramid pooling-based deep convolutional neural networks,” _Multimedia Tools and Applications_ , vol. 76, no. 20, pp. 21 651–21 663, 2017.
* [17] K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” _IEEE transactions on pattern analysis and machine intelligence_ , vol. 37, no. 9, pp. 1904–1916, 2015\.
* [18] J. Redmon and A. Farhadi, “Yolo9000: better, faster, stronger,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 7263–7271.
* [19] ——, “Yolov3: An incremental improvement,” _arXiv preprint arXiv:1804.02767_ , 2018.
* [20] S. Razakarivony and F. Jurie, “Vehicle detection in aerial imagery: A small target detection benchmark,” _Journal of Visual Communication and Image Representation_ , vol. 34, pp. 187–203, 2016.
* [21] T. N. Mundhenk, G. Konjevod, W. A. Sakla, and K. Boakye, “A large contextual dataset for classification, detection and counting of cars with deep learning,” in _European Conference on Computer Vision_. Springer, 2016, pp. 785–800.
* [22] G.-S. Xia, X. Bai, J. Ding, Z. Zhu, S. Belongie, J. Luo, M. Datcu, M. Pelillo, and L. Zhang, “Dota: A large-scale dataset for object detection in aerial images,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 3974–3983.
* [23] “BBox-Label-Tool,” https://github.com/puzzledqs/BBox-Label-Tool (accessed 12 April 2015), 2015.
# CHARTOPOLIS: A Small-Scale Labor-art-ory for Research and Reflection on
Autonomous Vehicles, Human–Robot Interaction, and Sociotechnical Imaginaries
Sangeet Sankaramangalam Ulhas, Aditya Ravichander, Kathryn A. Johnson,
Theodore P. Pavlic, Lance Gharavi, and Spring Berman
This work was supported by NSF EAGER Award #2146691 and the ASU Center for
Human, Artificial Intelligence, and Robot Teaming (CHART). All authors are
with Arizona State University (ASU), Tempe, AZ 85287. S. S. Ulhas, A.
Ravichander, and S. Berman are with the ASU School for Engineering of Matter,
Transport and Energy<EMAIL_ADDRESS>. K. A. Johnson is with the ASU Department
of Psychology<EMAIL_ADDRESS>. T. P. Pavlic is with the ASU School of
Computing and Augmented Intelligence, the School of Sustainability, and the
School of Complex Adaptive Systems<EMAIL_ADDRESS>. L. Gharavi is with the ASU
School of Music, Dance and Theatre<EMAIL_ADDRESS>.
###### Abstract
CHARTOPOLIS is a multi-faceted sociotechnical testbed meant to aid in building
connections among engineers, psychologists, anthropologists, ethicists, and
artists. Superficially, it is an urban autonomous-vehicle testbed that
includes both a physical environment for small-scale robotic vehicles as well
as a high-fidelity virtual replica that provides extra flexibility by way of
computer simulation. However, both environments have been developed to allow
for participatory simulation with human drivers as well. Each physical vehicle
can be remotely operated by human drivers that have a driver-seat point of
view that immerses them within the small-scale testbed, and those same drivers
can also pilot high-fidelity models of those vehicles in a virtual replica of
the environment. Juxtaposing human driving performance across these two
contexts will help identify to what extent human driving behaviors are
sensorimotor responses or involve psychological engagement with a system that
has physical, not virtual, side effects and consequences. Furthermore, through
collaboration with artists, we have designed the physical testbed to make
tangible the reality that technological advancement causes the history of a
city to fork into multiple, parallel timelines that take place within
populations whose increasing isolation effectively creates multiple
independent cities in one. Ultimately, CHARTOPOLIS is meant to challenge
engineers to take a more holistic view when designing autonomous systems,
while also enabling them to gather novel data that will assist them in making
these systems more trustworthy.
## I Introduction
Figure 1: CHARTOPOLIS, a reconfigurable, modular testbed for urban autonomous-
vehicle research.
We are developing _CHARTOPOLIS_ , a small-scale traffic testbed that serves as
a research laboratory, an art installation, and a boundary object [1] that is
intended to increase integration between engineering, psychology,
anthropology, and ethical philosophy. This mind–machine–motor nexus
(M3X)111NSF M3X program: https://beta.nsf.gov/funding/opportunities/mind-
machine-and-motor-nexus-m3x “labor-art-ory” consists of a model physical
driving environment with small robotic vehicles (Fig. 1), a driving station
for remote control of the vehicles by human participants (Fig. 3), and a
driving simulator that serves as a high-fidelity, virtual replica of the
physical environment (Fig. 4).
In contrast to existing small-scale self-driving car testbeds, e.g. [2, 3, 4,
5, 6, 7, 8], CHARTOPOLIS is specifically designed to facilitate participatory
studies of human sensorimotor, behavioral, cognitive, and aesthetic responses
to diverse driving scenarios with the goal of enriching autonomous vehicles
with human-like behavioral profiles. Its matching virtual and physical
environments will enable safe, controlled experimental manipulation of both
typical driving conditions and unavoidable accidents that would be difficult
or hazardous to replicate with full-scale vehicles. Furthermore, by comparing
and contrasting human driving performance in the physical testbed to
performance in the high-fidelity simulator, we can identify commonalities
among human behaviors across the two participatory platforms (i.e., the
physical testbed and virtual replica) that are likely to extend to
hypothetical behaviors in full-scale vehicles. Thus, this comparative approach
aims to elucidate the underlying problems resulting in the Sim2Real gap rather
than attempting to find costly and risk-prone stopgap solutions to them, e.g.,
[9, 10]. More generally, juxtaposing the physical and virtual environments
enables us to investigate the minimal set of features (e.g., sensory stimuli,
dynamical characteristics, psychological association with outcomes in physical
space) required for a participatory driving testbed to effectively engage a
human operator in as realistic of a driving experience as possible. Finally,
CHARTOPOLIS doubles as an art installation that makes a statement about the
effect of technology and autonomy on the evolving history of a city.
## II Goals: Human-Focused Design and Art
### II-A Trustworthy Autonomy via Participatory CHARTOPOLIS
Much of the focus in developing autonomous vehicles (AVs) has been on
improving sensors and algorithms to enable accurate perception and enhance
driver safety. To that end, researchers and manufacturers have worked
intensely at designing AVs that emulate human driving behavior, but little
effort has been placed in determining _which_ humans to emulate. To what
extent are we able to account for the significant variance in human drivers’
personalities, temperaments, values, and moral priorities? Can we design AVs
that reflect the personality types and driving styles of their owners? The
CHARTOPOLIS testbed and driving simulator allow us to investigate this
variability in human driving performance in a safe environment.
#### II-A1 Mapping Parameters to Personality and Values
Gaining the trust and acceptance of human passengers and human drivers of
other vehicles will require fully autonomous vehicles to do more than be
competent at obeying the objective rules of the road; AVs will also have to
emulate human driving behaviors that are acceptable in terms of social and
ethical norms. Within any group of human drivers who all obey driving laws,
the remaining unconstrained degrees of freedom allow for significant
differences to emerge across driving preferences (e.g., following distance,
responsiveness to light changes, responsiveness to upcoming speed-limit
changes, etc.). These driving-style differences, in turn, reflect variability
in individual drivers’ motives, values, moral priorities, and other
psychological states/traits.
Consistent suites of different driving preferences can conspicuously identify
a driver’s “personality” as benevolent/careful/pessimistic/defensive or power-
oriented/egoistic/optimistic/aggressive, and the resulting behaviors can be
placed on a normative ethical scale. An AV’s programmer is free to choose
these driving parameters, possibly reflecting their own driving personality.
In optimal control theory [11], such remaining degrees of freedom might map to
some scalar functional (e.g., energy use or some proxy for physical driving
comfort) that can be optimized through an automated design process. In either
case, no _explicit_ characterization of the _ethical_ dimension of these
choices is incorporated into the design. A major motivation of CHARTOPOLIS is
to develop a framework for formalizing the currently cryptic ethical dimension
of sociotechnical systems’ modeling and control design.
#### II-A2 Beyond the Artificial Moral Dilemma
Prior attempts to characterize morality and ethical behavior in machine
decision-making, e.g. [12], implicitly assume that ethical stances are only
evident as the outcome of often contrived, singular, pathological driving
events. For example, Awad et al. [12] asked humans to judge hypothetical AV
decision-making by using a battery of questions about driving-related
dilemmas, such as whether an AV with a sudden braking failure should crash
into a wall (certainly killing its passengers) or continue driving into a
crowded crosswalk (certainly killing others on the road). Humans evaluating
these two options had significantly more time to deliberate on the correct
answer than the AV would have. Furthermore, some scenarios were only dilemmas
in the myopic perspective; for example, crashing into pedestrians in a
crosswalk does not guarantee that the brake-less AV will not immediately
afterward hit something else that will also kill everyone in the AV. Implicit
in these studies is that the ideal AV will have an explicit rule-based
(deontological) or utility-based (utilitarian) reasoning system, c.f. [13],
that will recognize emergent dilemmas and, at those instants, assert a
(hopefully acceptable) decision.
In contrast with those prior attempts, we recognize that ethical stances are
being made continually throughout the driving process. An “aggressive” driver
might be viewed by an observer as behaving “less ethically” than a “defensive”
driver despite neither of them actually being observed in a formal dilemma.
These ethical stances emerge from the non-trivial combination of the human’s
driving preferences, sensing and actuation dynamics of the human and the
vehicle, and the physical realities of the exterior world. In other words,
ethical stances are an ecological property of the system of the driver’s mind,
the motor (human sensing and actuation dynamics), and the machine (physical
dynamics of the vehicle and the surrounding environment). Engineers should
characterize the ethical stances that their AVs implicitly take as they
operate continually and also formally recognize how their technologies
modulate the ethical stances taken by their human operators.
### II-B CHARTOPOLIS as Art Installation
CHARTOPOLIS will constitute a kind of multivalent work, not only in its
function as laboratory but also in a dual function as artwork and as a kinetic
installation whose collection of vehicles, buildings, roadways, signs, and
inhabitants serve as a boundary object [1] suggesting multiple
interpretations. It will serve as a site for both the practice of science and
for meditation on the worlds that science creates as well as a tool for
thinking through technology, specifically robots and AI, as a dynamic between
opposing imaginaries: salvation and damnation, utopia and dystopia, hope and
dread. Robots, in the differences we draw between _us_ and _them_ , become a
way to think about human values and the fantasy of automation as an ethically
immaculate source of labor. Robots serve for us the function that Viktor
Shklovsky [14] identifies for art: they make us strange.
Our ecological perspective of continually operating ethics in driving contexts
acts on both short time scales and on very long, sociotechnical evolutionary
time scales. The increased penetration of AVs in urban environments results in
the continuous operation of sensing technologies that are aware of features
familiar to human perception (e.g., visible light) as well as features that
are totally invisible to humans (e.g., electronic, subsonic, supersonic,
etc.). As urban infrastructure is altered to better enable the performance of
AVs, those in less AV-friendly areas of a city become hidden from those who
begin to depend on AVs. Moreover, with physical separation comes cultural and
historical separation.
CHARTOPOLIS will capitalize on the narrative ability of physical and computer
simulations to illuminate these ethical dimensions of AV design and control,
which unroll over a wide range of operational and evolutionary time scales.
The testbed design draws on the utopian aesthetic of architectural models and
their vision of a clean, hopeful future, as well as on the weird fiction of
China Miéville [15]. The completed testbed will embody two cities, one visible
and one transparent, that represent two realities occupying the same space.
The “invisible city” will evoke those things that are erased and made
invisible, forgotten or ignored, left out of the fables of the past and
visions of the future.
Figure 2: CHARTOPOLIS testbed layout with dimensions. Figure 3: Computer
station for remote operation of robotic car. Monitor displays a video stream
from onboard camera. Figure 4: Driving simulator utilizing CARLA [16]
environment.
## III Testbed Components
We have developed several iterations of the CHARTOPOLIS testbed. The first
[17] consisted of a grid of roadways with traffic lights at intersections and
several Pheeno [18] differential-drive robots, which were programmed with
autonomous driving functions such as lane tracking and traffic light
detection. The second version was created in conjunction with the Go-CHART
[19] miniature car robot, which emulates sensing and computation capabilities
of a full-size AV, and included traffic lights, signs, and scenery (grass and
trees). For this work, we also developed an initial version of our driving
station for remote control of the robots. Our most recent version of
CHARTOPOLIS (Fig. 1) enhances the versatility and portability of the testbed
and the onboard compute capability of the robotic cars. This version comprises
a customizable driving environment with roads, traffic lights and signs,
reconfigurable buildings, and adjustable lighting, and it uses a modified
version of the JetRacer Pro AI kit [20] (Fig. 5) as the robot car platform.
The following modifications were made to the JetRacer Pro AI kit and its
control interface. Photo-interrupt encoders were coupled with the four-wheel
drivetrain’s shaft to obtain feedback for speed control. An Arduino Nano was
added to expand the I2C buses on the Jetson Nano and to facilitate the use of
prebuilt Arduino libraries that are not supported on the Jetson Nano. An IMU
sensor, which uses the expanded I2C bus, was mounted on the back of the robot
on an elevated Z-bracket in order to prevent damage to it from collisions. The
Donkey Car web-control interface [21] was modified to include the data from
the encoders and IMU sensor and to improve the accuracy of remote steering
control of the robot via the Logitech G920 steering wheel and pedals by
mapping its throttle and steering angle to the PWM pulses sent from the
PCA9685 I2C controller board.
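As a rough illustration of this mapping, the sketch below converts a
normalized steering or throttle command into a servo-style PWM pulse width;
the pulse range and neutral point are assumptions for a typical hobby
servo/ESC, not measured values from the JetRacer:

```python
# Illustrative mapping from a normalized command in [-1, 1] (wheel/pedal) to a
# servo-style PWM pulse width in microseconds. Pulse bounds are assumptions.

def command_to_pwm(cmd, pulse_min=1000, pulse_neutral=1500, pulse_max=2000):
    cmd = max(-1.0, min(1.0, cmd))                 # clamp the input
    if cmd >= 0:
        return pulse_neutral + cmd * (pulse_max - pulse_neutral)
    return pulse_neutral + cmd * (pulse_neutral - pulse_min)

print(command_to_pwm(0.0))    # 1500.0 (neutral)
print(command_to_pwm(-0.5))   # 1250.0 (half deflection one way)
```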
In both the remote-control driving station (Fig. 3) and driving simulator
(Fig. 4), a human operator has a first-person-view of the roadway on a monitor
and drives the simulated or physical car using a Logitech G920 steering wheel
with accelerator and brake pedals. We are modeling different driving scenarios
in the driving simulator using the open-source virtual driving environment
CARLA [16]. This builds on our previous work developing a driving simulator
[22, 23] to obtain data for our earlier set of studies on human driving
responses (unpublished), described in Section IV. The environment in our
simulations is an exact replica of the road layout (Fig. 2) and buildings on
the CHARTOPOLIS testbed. The layout is replicated to scale in OpenDRIVE format
using RoadRunner [24], and the buildings are imported as assets through the
Unreal Engine editor. The complete simulated CHARTOPOLIS is finally packaged
as a portable CARLA distribution.
The control architecture of the CHARTOPOLIS testbed and simulator is
illustrated in Fig. 6. A single interface common to the physical and simulated
environments helps to characterize and mitigate the Sim2Real gap by allowing
for direct comparison of performance across the two environments. In the
physical testbed, the robot’s pose in a global coordinate frame is obtained
using an overhead camera; further experiments will determine whether a motion-
capture system is necessary to achieve a sufficiently accurate mapping of the
robot’s pose between the physical testbed and simulation. The robot measures
its velocity using its onboard encoder and obtains image data from its wide-
angle camera. The pose, velocity, and images are broadcast to the Jetson Nano,
which computes the robot’s control actions using this feedback. In the
simulator, the CARLA Server obtains the virtual vehicle’s state from simulated
GNSS and images from its onboard RGB sensor, and the Python Client uses these
data to compute the vehicle’s control actions.
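A hedged sketch of the client side of this loop using the public CARLA Python
API is shown below; the host, port, blueprint choice, and control values are
placeholders, and the actual CHARTOPOLIS client presumably adds sensor
handling and logging:

```python
# Hedged sketch of a CARLA Python client in the spirit of the setup above.
import carla

client = carla.Client('localhost', 2000)   # address of the CARLA server
client.set_timeout(5.0)
world = client.get_world()

# Spawn some vehicle at a predefined spawn point of the loaded map.
blueprint = world.get_blueprint_library().filter('vehicle.*')[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(blueprint, spawn_point)

# In the real system, throttle/steer would come from the Logitech wheel or a
# controller acting on GNSS and camera feedback; here they are fixed values.
vehicle.apply_control(carla.VehicleControl(throttle=0.4, steer=-0.1, brake=0.0))

vehicle.destroy()
```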
Figure 5: Modified JetRacer AI Pro robotic car [20]. Figure 6: CHARTOPOLIS
control architecture, including components of the human-robot interface
(green), physical testbed (orange), and CARLA driving simulator (blue).
## IV Experimental Procedures
In our planned studies, human participants will be given a battery of
questionnaires including: personality traits [25], values [26], moral
priorities [27], and driving style (e.g., positive vs. aberrant) [28]. These
same participants will then be invited to the lab, where they will experience
driving in both the simulator and through remote control of a robot car within
the physical testbed with a first-person video livestream (counterbalanced for
ordering effects).
Our data-analysis plan will be to map the pre-screened personality profiles
and self-reported driving styles of the human drivers onto the driver behavior
data collected in the virtual and physical environments. Our previous,
unpublished data show that individuals with power-oriented vs. benevolent
profiles are more likely to have self-reported aggressive vs. positive
(prosocial) driving styles, respectively. Further, the two driving styles are
highly predictive of the number of real traffic violations (power-oriented,
aggressive drivers having significantly more violations). We expect these
traits and driving styles to be evident in the driving behaviors in the
simulator and matching physical testbed. Ultimately, this information will
allow us to design controllers that mimic at least these two personality and
driving styles.
Gathered simulation data will include all state-variable and sensor data that
are necessary to reproduce aspects of the driving experience, including: the
vehicle’s ground-truth position, speed, and acceleration; data from the
onboard navigational sensors (camera, IMU, GNSS); and information about lane
changes and collisions. On the physical robot, speed and acceleration data
will be calculated using sensor fusion of data from a photo-interrupt shaft
encoder on the robotic drivetrain with IMU and positional data from an
overhead camera, and these data will be synchronously recorded. In both the
simulator and testbed, we will collect data on steering, pedal angle, and
braking.
## V Conclusion and Open Challenges
The CHARTOPOLIS labor-art-ory is a work in progress, with further technical
and conceptual challenges to overcome. The many challenges of implementing
ethics in AVs are beyond the scope of this paper, but they warrant further
consideration [29, 13, 30]. One take-away message, however, is that all
stakeholders should be consulted in implementing machine ethics [31].
## VI Acknowledgement
The authors thank Brunella Provvidente for helping with the design and
implementation of the physical testbed.
## References
* [1] S. L. Star and J. R. Griesemer, “Institutional ecology, ‘translations’ and boundary objects: Amateurs and professionals in Berkeley’s Museum of Vertebrate Zoology, 1907–39,” _Social Studies of Science_ , vol. 19, no. 3, pp. 387–420, 1989.
* [2] M. Kloock, P. Scheffe, J. Maczijewski, A. Kampmann, A. Mokhtarian, S. Kowalewski, and B. Alrifaee, “Cyber-physical mobility lab: An open-source platform for networked and autonomous vehicles,” in _2021 European Control Conference (ECC)_ , 2021, pp. 1937–1944.
* [3] L. Paull, J. Tani, H. Ahn, J. Alonso-Mora, L. Carlone, M. Cap, Y. F. Chen, C. Choi, J. Dusek, Y. Fang _et al._ , “Duckietown: an open, inexpensive and flexible platform for autonomy education and research,” in _2017 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2017, pp. 1497–1504.
* [4] A. Stager, L. Bhan, A. Malikopoulos, and L. Zhao, “A scaled smart city for experimental validation of connected and automated vehicles,” _IFAC-PapersOnLine_ , vol. 51, no. 9, pp. 130–135, 2018.
* [5] N. Hyldmar, Y. He, and A. Prorok, “A fleet of miniature cars for experiments in cooperative driving,” in _2019 International Conference on Robotics and Automation (ICRA)_. IEEE, 2019, pp. 3238–3244.
* [6] C. Berger, “From a competition for self-driving miniature cars to a standardized experimental platform: concept, models, architecture, and evaluation,” _arXiv preprint arXiv:1406.7768_ , 2014.
* [7] B. Vincke, S. Rodriguez Florez, and P. Aubert, “An open-source scale model platform for teaching autonomous vehicle technologies,” _Sensors_ , vol. 21, no. 11, p. 3850, 2021.
* [8] “Quanser self-driving car research studio,” April 2021. [Online]. Available: https://www.quanser.com/products/self-driving-car-research-studio/
* [9] X. Huang, W. Chen, W. Zhang, R. Song, J. Cheng, and Y. Li, “Autonomous multi-view navigation via deep reinforcement learning,” in _2021 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2021, pp. 13 798–13 804.
* [10] X. Huang, H. Deng, W. Zhang, R. Song, and Y. Li, “Towards multi-modal perception-based navigation: A deep reinforcement learning method,” _IEEE Robotics and Automation Letters_ , vol. 6, no. 3, pp. 4986–4993, 2021\.
* [11] R. C. Dorf and R. H. Bishop, _Modern control systems_. Pearson Prentice Hall, 2008.
* [12] E. Awad, S. Dsouza, R. Kim, J. Schulz, J. Henrich, A. Shariff, J.-F. Bonnefon, and I. Rahwan, “The moral machine experiment,” _Nature_ , vol. 563, no. 7729, pp. 59–64, 2018.
* [13] J. C. Gerdes and S. M. Thornton, “Implementable ethics for autonomous vehicles,” in _Autonomes fahren_. Springer, 2015, pp. 87–102.
* [14] V. Shklovsky _et al._ , “Art as technique,” _Literary theory: An anthology_ , vol. 3, 1917.
* [15] C. Miéville, “The limits of utopia,” _Salvage Zone_ , 2015. [Online]. Available: https://salvage.zone/the-limits-of-utopia/
* [16] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, “CARLA: An open urban driving simulator,” in _Conference on Robot Learning_. PMLR, 2017, pp. 1–16.
* [17] R. Subramanyam, “CHARTOPOLIS: A self driving car test bed,” Master’s thesis in Electrical Engineering, Arizona State University, 2018.
* [18] S. Wilson, R. Gameros, M. Sheely, M. Lin, K. Dover, R. Gevorkyan, M. Haberland, A. Bertozzi, and S. Berman, “Pheeno, a versatile swarm robotic research and education platform,” _IEEE Robotics and Automation Letters_ , vol. 1, no. 2, pp. 884–891, 2016.
* [19] S. Kannapiran and S. Berman, “Go-CHART: A miniature remotely accessible self-driving car robot,” in _2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 2020, pp. 2265–2272.
* [20] “JetRacer Pro AI kit, high speed AI racing robot powered by Jetson Nano, pro version.” [Online]. Available: https://www.waveshare.com/jetracer-pro-ai-kit.htm
* [21] W. Roscoe, “Donkey Car: An opensource DIY self driving platform for small scale cars,” 2019.
* [22] S. S. Ulhas, “Cross platform training of neural networks to enable object identification by autonomous vehicles,” Master’s thesis in Mechanical Engineering, Arizona State University, 2019.
* [23] I. Kankam, “Design of an immersive virtual environment to investigate how different drivers crash in trolley-problem scenarios,” Master’s thesis in Mechanical Engineering, Arizona State University, 2019.
* [24] “RoadRunner: Design 3D scenes for automated driving simulation.” [Online]. Available: https://www.mathworks.com/products/roadrunner.html
* [25] O. P. John and S. Srivastava, “The Big-Five trait taxonomy: History, measurement, and theoretical perspectives,” in _Handbook of Personality: Theory and Research_ , 2nd ed., L. A. Pervin and O. P. John, Eds. New York: The Guilford Press, 1999, pp. 102–138.
* [26] S. H. Schwartz, J. Cieciuch, M. Vecchione, E. Davidov, R. Fischer, C. Beierlein, A. Ramos, M. Verkasalo, J.-E. Lönnqvist, K. Demirutku _et al._ , “Refining the theory of basic individual values.” _Journal of Personality and Social Psychology_ , vol. 103, no. 4, p. 663, 2012\.
* [27] J. Graham, B. A. Nosek, J. Haidt, R. Iyer, S. Koleva, and P. H. Ditto, “Mapping the moral domain.” _Journal of Personality and Social Psychology_ , vol. 101, no. 2, p. 366, 2011.
* [28] T. Özkan and T. Lajunen, “A new addition to DBQ: Positive driver behaviours scale,” _Transportation Research Part F: Traffic Psychology and Behaviour_ , vol. 8, no. 4-5, pp. 355–368, 2005.
* [29] Y. E. Bigman and K. Gray, “People are averse to machines making moral decisions,” _Cognition_ , vol. 181, pp. 21–34, 2018.
* [30] P. Lin, “Why ethics matters for autonomous cars,” in _Autonomous driving_. Springer, Berlin, Heidelberg, 2016, pp. 69–85.
* [31] J. Stilgoe, R. Owen, and P. Macnaghten, “Developing a framework for responsible innovation,” in _The Ethics of Nanotechnology, Geoengineering and Clean Energy_. Routledge, 2020, pp. 347–359.
# Intrinsic compressibility effects in near-wall turbulence
Asif Manzoor Hasan1<EMAIL_ADDRESS>, Pedro Costa1, Johan Larsson2, Sergio
Pirozzoli3, Rene Pecnik1<EMAIL_ADDRESS>
1 Process & Energy Department, Delft University of Technology,
Leeghwaterstraat 39, 2628 CB, Delft, The Netherlands
2 Department of Mechanical Engineering, University of Maryland, College Park,
MD 20742, USA
3 Dipartimento di Ingegneria Meccanica e Aerospaziale, Sapienza Università di
Roma, Via Eudossiana 18, 00184 Roma, Italy
###### Abstract
The impact of intrinsic compressibility effects—changes in fluid volume due to
pressure variations—on high-speed wall-bounded turbulence has often been
overlooked or incorrectly attributed to mean property variations. To
unambiguously quantify these intrinsic compressibility effects, we perform
direct numerical simulations of compressible turbulent channel flows with
nearly uniform mean properties. Our simulations reveal that intrinsic
compressibility effects yield a significant upward shift in the logarithmic
mean velocity profile that can be attributed to the reduction in the turbulent
shear stress. This reduction stems from the weakening of the near-wall quasi-
streamwise vortices. We in turn attribute this weakening to the spontaneous
opposition of sweeps and ejections from the near-wall expansions and
contractions of the fluid, and provide a theoretical explanation for this
mechanism. Our results also demonstrate that intrinsic compressibility effects
are responsible for the increase in the inner-scaled streamwise turbulence
intensity in compressible flows compared to incompressible flows, previously
regarded to be an effect of mean property variations.
## 1 Introduction
Understanding the impact of compressibility effects on turbulent flow is
crucial for a wide range of engineering applications, as it influences the
performance and efficiency of aerospace vehicles, gas turbines, combustion
processes, and high-speed propulsion systems. Turbulence in compressible flow
involves effects related to heat transfer—also termed as variable-property
effects—and intrinsic compressibility (hereby IC) effects—also termed as
‘true’ compressibility effects (Smits & Dussauge, 2006), ‘genuine’
compressibility effects (Yu et al., 2019), or simply ‘compressibility’ effects
(Lele, 1994). Heat transfer is in turn responsible for two main effects.
First, heat transfer is associated with mean temperature variations and hence
variations in the mean density and viscosity. Second, it can cause
fluctuations in fluid volume (or density) as a result of a change in entropy
(Livescu, 2020). On the other hand, intrinsic compressibility effects are
associated with changes in fluid volume in response to changes in pressure
(Lele, 1994). While variable-property effects can be relevant at any (even
zero) Mach number, IC effects only become important at high Mach numbers.
In 1962, Morkovin postulated that the changes in fluid volume due to entropy
and pressure, mentioned above, are negligible such that only mean property
variations are important. This hypothesis is commonly referred to as
‘Morkovin’s hypothesis’ (Morkovin, 1962; Bradshaw, 1977; Coleman et al., 1995;
Smits & Dussauge, 2006). Some years later, Bradshaw (1977) performed a
detailed study on this hypothesis and provided an engineering estimate as to
when the hypothesis should hold. According to Bradshaw, Morkovin’s postulate
may be true in flows where the root-mean-square ($rms$) of the density
fluctuation is below 10% of the mean density. Subsequently, Coleman et al.
(1995) noted that most of these density fluctuations arise from passive mixing
across mean density gradients. Since Morkovin’s hypothesis implicitly assumes
that the spatial gradients of the mean density, and thus the fluctuations
resulting from them, are small, they argued that the density $rms$ is not a
rigorous evaluator of the hypothesis. Instead, they claimed that, consistent
with the original conjecture, the $rms$ of pressure and total temperature
scaled by their respective means should be considered. (We note that pressure
fluctuations scaled by mean pressure are a direct measure of intrinsic
compressibility effects; however, the justification for why total temperature
fluctuations should be small for the hypothesis to hold is unclear (Lele,
1994).) To our knowledge, there is no engineering estimate for these
fluctuations such as the one for density proposed by Bradshaw.
If Morkovin’s hypothesis holds, then turbulence statistics in compressible
flows can be collapsed onto their incompressible counterparts by simply
accounting for mean property variations. The first key contribution in
accounting for variable-property effects was proposed by Van Driest (1951),
who incorporated mean density variations in the mean shear formulation such
that
$\frac{d\bar{u}}{dy}=\frac{\sqrt{\tau_{w}/\bar{\rho}}}{\kappa y}\mathrm{,}$
(1)
where $u$ is the streamwise velocity, $\tau_{w}$ the wall shear stress, $\rho$
the fluid density, and $\kappa$ the von Kármán constant. The overbar denotes
Reynolds averaging and the subscript $w$ indicates wall values. Equation (1)
led to two major outcomes: (1) the Van Driest mean velocity transformation
(Van Driest, 1956a; Danberg, 1964) given as
$\bar{U}_{VD}^{+}=\int_{0}^{\bar{u}^{+}}{\sqrt{\frac{\bar{\rho}}{\rho_{w}}}}d\bar{u}^{+},$
(2)
where the superscript $+$ denotes wall scaling, and (2) the Van Driest skin-
friction theory (Van Driest, 1956b). These scaling breakthroughs are still
widely used, despite their known shortcomings (Bradshaw, 1977; Huang &
Coleman, 1994; Trettel & Larsson, 2016; Patel et al., 2016; Griffin et al.,
2021; Kumar & Larsson, 2022; Hasan et al., 2024).
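Numerically, the Van Driest transformation in equation (2) amounts to a
cumulative integration over the velocity profile; a minimal sketch (the array
inputs are illustrative wall-normal profiles) is:

```python
# Cumulative trapezoidal evaluation of eq. (2): U_VD+ = int sqrt(rho/rho_w) du+.
import numpy as np

def van_driest(u_plus, rho_bar, rho_w):
    g = np.sqrt(rho_bar / rho_w)
    du = np.diff(u_plus)
    return np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * du)))
```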
Another key contribution is attributed to Morkovin (1962) who proposed scaling
the turbulent shear stress with $\bar{\rho}/\rho_{w}$ such that
$\widetilde{u^{\prime\prime}v^{\prime\prime}}^{*}=\frac{\bar{\rho}}{\rho_{w}}\frac{\widetilde{u^{\prime\prime}v^{\prime\prime}}}{u_{\tau}^{2}}$
(3)
collapses with the incompressible distributions. Here,
$u_{\tau}=\sqrt{\tau_{w}/\rho_{w}}$ is the friction velocity scale, the tilde
denotes density-weighted (Favre) averaging, and the double primes denote
fluctuations from Favre average. The contributions of Van Driest and Morkovin
can be consolidated by interpreting their corrections as if they were to
change the definition of the friction velocity scale from $u_{\tau}$ to
$u_{\tau}^{*}=\sqrt{\tau_{w}/\bar{\rho}}$ (termed the ‘semi-local’ friction
velocity scale; it is ‘semi-local’ rather than ‘local’ because the total shear
stress in its definition is still taken at the wall), such that equations (1),
(2), and (3) can be rewritten as
$\displaystyle\frac{d\bar{u}}{dy}=\frac{u_{\tau}^{*}}{\kappa y}\mathrm{,}$
$\displaystyle\bar{U}_{VD}^{+}=\int_{0}^{\bar{u}}\frac{1}{u_{\tau}^{*}}d\bar{u},$
$\displaystyle\widetilde{u^{\prime\prime}v^{\prime\prime}}^{*}=\frac{\widetilde{u^{\prime\prime}v^{\prime\prime}}}{u_{\tau}^{*2}}\mathrm{.}$
(4)
Similarly, efforts to account for mean density and viscosity variations in the
definition of the viscous length scale were made since the 1950s (Lobb et al.,
1955), giving rise to the well-known semi-local wall-normal coordinate
$y^{*}=y/\delta_{v}^{*}$ (where
$\delta_{v}^{*}=\bar{\mu}/(\bar{\rho}u_{\tau}^{*})$ is the semi-local viscous
length scale). Much later, the companion papers by Huang et al. (1995) and
Coleman et al. (1995) performed a comprehensive analysis where they showed
that turbulence quantities show a much better collapse when reported as a
function of $y^{*}$ rather than $y^{+}$. Another major consequence of using
the semi-local wall coordinate is reflected in velocity transformations. The
semi-local velocity transformation, derived independently by Trettel & Larsson
(2016) and Patel et al. (2016), is an extension to the Van Driest velocity
transformation accounting for variations in the semi-local viscous length
scale. This transformation (also known as the TL transformation) can be
written as
$\bar{U}_{TL}^{+}=\int_{0}^{\bar{u}^{+}}\left(1-\frac{y}{\delta_{v}^{*}}\frac{d\delta_{v}^{*}}{dy}\right)\underbrace{{\frac{u_{\tau}}{u_{\tau}^{*}}}}_{\sqrt{{\bar{\rho}}/{\rho_{w}}}}d\bar{u}^{+}.$
(5)
In short, the above-mentioned scaling theories in equations (4) and (5) show
that heat transfer effects associated with mean property variations can be
accounted for in terms of the semi-local friction velocity and viscous length
scales.
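A discrete sketch of the TL transformation in equation (5), assuming the first
grid point lies at the wall so that $\rho_{w}=\bar{\rho}(0)$:

```python
# Discrete evaluation of eq. (5) on wall-normal profiles y, u+, rho, mu.
import numpy as np

def tl_transform(y, u_plus, rho_bar, mu_bar, tau_w):
    u_tau_star = np.sqrt(tau_w / rho_bar)              # semi-local friction velocity
    delta_v_star = mu_bar / (rho_bar * u_tau_star)     # semi-local viscous length
    factor = 1.0 - y / delta_v_star * np.gradient(delta_v_star, y)
    integrand = factor * np.sqrt(rho_bar / rho_bar[0]) # rho_bar[0] = wall density
    du = np.diff(u_plus)
    return np.concatenate(([0.0],
                           np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * du)))
```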
In addition to the studies mentioned above, many other studies have addressed
variable-property effects in low-Mach (Patel et al., 2015) and high-Mach
number flows (Maeder et al., 2001; Morinishi et al., 2004; Foysi et al., 2004;
Duan et al., 2010, 2011; Modesti & Pirozzoli, 2016; Zhang et al., 2018; Cogo
et al., 2022; Zhang et al., 2022; Wenzel et al., 2022; Cogo et al., 2023, to
name a few). However, less emphasis has been placed on studying intrinsic
compressibility effects, possibly due to the belief that Morkovin’s hypothesis
holds for wall-bounded flows even in the hypersonic regime (Duan et al., 2011;
Zhang et al., 2018).
Recently, by isolating intrinsic compressibility effects, Hasan et al. (2023)
found that Morkovin’s hypothesis is inaccurate at high Mach numbers. These
compressibility effects modify the mean velocity scaling, leading to an upward
shift in the logarithmic profile. The authors attributed this shift to the
modified near-wall damping of turbulence and proposed a mean velocity
transformation based on a modification of the Van Driest damping function as
$\bar{U}_{HLPP}^{+}=\int_{0}^{\bar{u}^{+}}\!\!\left({\frac{1+\kappa y^{*}{D(y^{*},M_{\tau})}}{1+\kappa{y^{*}}{D(y^{*},0)}}}\right){\left({1-\frac{y}{\delta_{v}^{*}}\frac{d\delta_{v}^{*}}{dy}}\right)}\sqrt{\frac{\bar{\rho}}{\rho_{w}}}\,{d\bar{u}^{+}}.$ (6)
This transformation was found to be accurate for a wide variety of flows
including (but not limited to) adiabatic and cooled boundary layers, adiabatic
and cooled channels, supercritical flows, and flows with non-air-like
viscosity laws. The modified damping function in (6) reads
$D(y^{*},M_{\tau})=\left[1-\mathrm{exp}\left({\frac{-y^{*}}{A^{+}+f(M_{\tau})}}\right)\right]^{2},$
(7)
with $f(M_{\tau})=19.3M_{\tau}$. Despite the evidence that intrinsic
compressibility effects modify the damping, the underlying physical mechanism
is still unknown.
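For concreteness, equations (6) and (7) can be sketched as follows; the values
of $\kappa$ and $A^{+}$ below are common choices and assumptions here, while
$f(M_{\tau})=19.3M_{\tau}$ is taken from the text:

```python
# Modified damping function of eq. (7) and the Mach-dependent ratio in eq. (6).
# kappa = 0.41 and A+ = 17 are common choices and assumptions here.
import numpy as np

def damping(y_star, M_tau, A_plus=17.0):
    return (1.0 - np.exp(-y_star / (A_plus + 19.3 * M_tau)))**2

def hlpp_ratio(y_star, M_tau, kappa=0.41, A_plus=17.0):
    return ((1.0 + kappa * y_star * damping(y_star, M_tau, A_plus))
            / (1.0 + kappa * y_star * damping(y_star, 0.0, A_plus)))
```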
More evidence on the importance of intrinsic (or ‘genuine’) compressibility
effects has been provided in a series of recent publications by Yu and co-
workers (Yu et al., 2019, 2020; Yu & Xu, 2021), who analysed these effects in
channel flows through direct numerical simulations (DNS). They performed a
Helmholtz decomposition of the velocity field and mainly focused on
dilatational motions and their direct contribution to several turbulence
statistics. Their main observations were: (1) intrinsic compressibility
effects, if present, are likely concentrated in the near-wall region, where
the wall-normal dilatational velocity field exceeds the solenoidal
counterpart; (2) the correlation between the solenoidal streamwise and the
dilatational wall-normal velocity is negative and can constitute up to 10% of
the total shear stress; (3) this negative correlation was attributed to the
opposition of sweeps near the wall by dilatational motions; and (4) the
dilatation field (and thus the dilatational velocity) exhibits a travelling
wave-packet-like structure, whose origin is yet unknown (see also Tang et al.,
2020; Gerolymos & Vallet, 2023; Yu et al., 2024).
In this paper, we will focus mainly on the indirect effects of intrinsic
compressibility, namely, those that do not result directly from contributions
by dilatational motions but result as a consequence of changes in the
solenoidal dynamics of turbulence. To achieve this, we first perform direct
numerical simulations employing the methodology described in Coleman et al.
(1995), whereby variable-property effects are essentially removed by
cancelling the aerodynamic heating term in the energy equation. These
simulations will allow us to study intrinsic compressibility effects by
isolating them. With this approach, our main goal is to answer why the near-
wall damping of turbulence changes with increasing Mach number, as observed in
Hasan et al. (2023). Since this is also observed for conventional flows, we
believe that the knowledge obtained from our simplified cases is directly
applicable to those flows. With the simulated cases, we look into various
fundamental statistics of turbulence such as turbulent stresses, pressure-
strain correlation, and into coherent structures, eventually tracing back the
change in near-wall damping of the turbulent shear stress to the weakening of
quasi-streamwise vortices. Subsequently, with the help of what is known from
the incompressible turbulence literature, we provide a theoretical explanation
as to why the vortices weaken.
The paper is structured as follows. §2 describes the cases and methodology
used in this paper. §3 explains the change in damping of near-wall turbulence
as a result of the change in turbulent stress anisotropy, caused by a
reduction in the pressure-strain correlation. §4 connects this reduced
correlation with the weakening of quasi-streamwise vortices, which is then
explained using conditional averaging. Finally, the summary and conclusions
are presented in §5.
## 2 Computational approach and case description
In order to investigate turbulence in high-speed wall-bounded channel flows
with uniform mean temperature (internal energy) in the domain, we perform
direct numerical simulations by solving the compressible Navier-Stokes
equations in conservative form, given as
$\displaystyle\frac{\partial\rho}{\partial t}+\frac{\partial\rho
u_{i}}{\partial x_{i}}$ $\displaystyle=0,$ (8)
$\displaystyle\frac{\partial\rho u_{i}}{\partial t}+\frac{\partial\rho
u_{i}u_{j}}{\partial x_{j}}$ $\displaystyle=-\frac{\partial p}{\partial
x_{i}}+\frac{\partial\tau_{ij}}{\partial x_{j}}+f\delta_{i1},$
$\displaystyle\frac{\partial\rho E}{\partial t}+\frac{\partial\rho
u_{j}E}{\partial x_{j}}$ $\displaystyle=-\frac{\partial pu_{j}}{\partial
x_{j}}-\frac{\partial q_{j}}{\partial
x_{j}}+\frac{\partial\tau_{ij}u_{i}}{\partial x_{j}}+fu_{1}+\Phi.$
The viscous stress tensor and the heat flux vector are given as
$\tau_{ij}=\mu\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial
u_{j}}{\partial x_{i}}-\frac{2}{3}\frac{\partial u_{k}}{\partial
x_{k}}\delta_{ij}\right),~{}q_{j}=-\lambda\frac{\partial T}{\partial x_{j}},$
(9)
where $u_{i}$ is the velocity component in the $i^{th}$ direction, and where
$i=1,2,3$ corresponds to the streamwise ($x$), wall-normal ($y$) and spanwise
($z$) directions, respectively. $\rho$ is the density, $p$ the pressure,
$E=c_{v}T+u_{i}u_{i}/2$ the total energy per unit mass, $\mu$ the viscosity,
$\lambda$ the thermal conductivity and $Pr=\mu c_{p}/\lambda$ the Prandtl
number. $c_{p}$ and $c_{v}$ indicate specific heats at constant pressure and
constant volume, respectively. $f$ is a uniform body force that is adjusted in
time to maintain a constant total mass flux in periodic flows (e.g., a fully
developed turbulent channel or pipe).
As outlined in the introduction, herein we attempt to remove mean property
gradients to isolate intrinsic compressibility effects. For that purpose, we
follow the approach presented by Coleman et al. (1995), whereby the energy
equation is augmented with a source term
$\Phi=-\tau_{ij}\frac{\partial u_{i}}{\partial x_{j}}$ (10)
that counteracts the effects of viscous dissipation. Consequently, the mean
internal energy remains approximately uniform across the entire domain. For an
ideal gas, this implies that the mean temperature is also approximately
constant, which, when combined with a uniform mean pressure, leads to a nearly
uniform mean density. Furthermore, the mean dynamic viscosity and mean thermal
conductivity are also uniform. However, it is important to note that the
simulations still permit fluctuations of these properties—primarily along
isentropes, as we will see below.
Using this approach, four cases with increasing Mach numbers are simulated, as
presented in table 1. These simulations are performed with STREAmS (Bernardini
et al., 2021) using the assumption of a calorically perfect ideal gas
(constant specific heat capacities), a constant Prandtl number of $0.7$, and a
power law for the viscosity with an exponent of $0.75$. The domain is periodic
in the streamwise and spanwise directions, while at the walls an isothermal
boundary condition is used for temperature, and a zero normal gradient is
specified for pressure. Since the four cases have similar $Re_{\tau}$ values,
we use the same grid for all simulations. The computational grid consists of
$n_{x}=1280$, $n_{y}=480$ and $n_{z}=384$ points for a domain of size
$L_{x}=10h$, $L_{y}=2h$ and $L_{z}=3h$, where $h$ is the channel half-height.
This gives a near-wall resolution of $\Delta x^{+}=4.3$ and $\Delta
z^{+}=4.3$. The grid in the wall-normal direction is stretched in such a way
that $y^{+}\leq 1$ is achieved for the first grid point.
Case name | $M_{b}$ | $M_{cl}$ | $M_{\tau}$ | $Re_{\tau}$ | $Re_{\tau_{c}}$
---|---|---|---|---|---
Mach 0.3 | 0.3 | 0.34 | 0.0162 | 556 | 556
Mach 2.28 | 2.28 | 2.59 | 0.1185 | 546 | 539
Mach 3 | 3.0 | 3.37 | 0.1526 | 547 | 527
Mach 4 | 4.0 | 4.47 | 0.1968 | 544 | 513
Table 1: Description of the cases. $M_{b}=U_{b}/\sqrt{\gamma RT_{w}}$ is the
bulk Mach number, $M_{cl}=U_{c}/\sqrt{\gamma RT_{c}}$ is the channel
centreline Mach number and $M_{\tau}=u_{\tau}/\sqrt{\gamma RT_{w}}$ is the
wall friction Mach number. $Re_{\tau}=\rho_{w}u_{\tau}h/\mu_{w}$ is the
friction Reynolds number based on the channel half-height $h$ and
$Re_{\tau_{c}}$ corresponds to the value of the semi-local friction Reynolds
number ($Re_{\tau}^{*}=\bar{\rho}u_{\tau}^{*}h/\bar{\mu}$) at the channel
centre.
Figure 1 shows the mean density, viscosity, and semi-local Reynolds number
profiles for the four cases introduced in table 1. The figure also shows the
profiles of a conventional boundary layer at a free-stream Mach number of 14,
taken from Zhang et al. (2018). Compared to the conventional $M_{\infty}=14$
boundary layer case, our cases show little to no variation in mean properties.
This implies that mean heat transfer effects are indeed negligible in the
present cases.
To determine whether other heat transfer effects associated with changes in
fluid volume as a result of changes in entropy are important, we compute
density fluctuations using the isentropic relation
$\frac{\rho^{is}_{rms}}{\bar{\rho}}\approx\frac{1}{\gamma}\frac{p_{rms}}{\bar{p}},$
(11)
and compare it with the density fluctuations obtained from DNS in figure 2(a).
With the exception of the viscous sublayer, the two distributions appear to
collapse, which implies that entropic heat transfer effects are negligible in
the present cases. Hence, any deviations from incompressible flows observed in
these cases should be attributed to intrinsic compressibility effects.
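The check in figure 2(a) reduces to a simple pointwise estimate; a sketch (the
profile inputs are illustrative) is:

```python
# Pointwise estimate of eq. (11): rho_rms^is / rho_bar = (1/gamma) * p_rms / p_bar.
def isentropic_rho_rms_ratio(p_rms, p_bar, gamma=1.4):   # gamma = 1.4 for air
    return p_rms / (gamma * p_bar)
# Comparing this ratio with the DNS rho_rms/rho_bar outside the viscous
# sublayer indicates whether entropic (heat-transfer) effects are negligible.
```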
Figure 1: Wall-normal distributions of (a) density $\overline{\rho}$, (b)
viscosity $\overline{\mu}$, and (c) the semi-local friction Reynolds number
$Re_{\tau}^{*}=\bar{\rho}u_{\tau}^{*}h/\bar{\mu}$ for the cases described in
table 1. The red lines represent the $M_{\infty}=14$ case of Zhang et al.
(2018). These quantities are plotted as a function of the wall-normal
coordinate scaled by the channel half-height for the channel flow cases, and
by boundary layer thickness ($\delta_{99}$) for the $M_{\infty}=14$ boundary
layer case.
Figure 2(a) also shows the total and isentropic density fluctuations for the
$M_{\infty}=14$ flow case computed by Zhang et al. (2018). As can be seen, the
total density fluctuations are much higher than the isentropic ones in the
buffer layer and beyond, corroborating that both heat transfer and intrinsic
compressibility effects are important. Interestingly, our highest Mach number
case (Mach 4) and Zhang’s $M_{\infty}=14$ boundary layer have similar
isentropic density $rms$ (or similar pressure $rms$). Given that the pressure
$rms$ scaled by mean pressure is an effective measure of intrinsic
compressibility effects (Coleman et al., 1995), we can expect that these
effects are of comparable magnitude for our Mach 4 case and the conventional
$M_{\infty}=14$ boundary layer.
Figure 2: Wall-normal distributions of (a) the root-mean-square ($rms$) of the
total (solid) and isentropic (dashed) density fluctuations [equation (11)];
(b) the turbulence Mach number $M_{t}=\sqrt{2k}/\sqrt{\gamma R\bar{T}}$; and
(c) the semi-local friction Mach number
$M_{\tau}^{*}=u_{\tau}^{*}/\sqrt{\gamma R\bar{T}}$ for the cases described in
table 1. The red lines represent the $M_{\infty}=14$ case of Zhang et al.
(2018).
In addition to the pressure $rms$, intrinsic compressibility effects can also
be quantified in terms of Mach numbers. Figure 2(b) shows the turbulence Mach
number, defined as $M_{t}=\sqrt{2k}/\sqrt{\gamma R\bar{T}}$, where
$k=\overline{\rho u_{i}^{\prime\prime}u_{i}^{\prime\prime}}/2$ is the
turbulence kinetic energy (TKE) and the denominator is the local speed of
sound for ideal gases. Three out of four cases are above the threshold of
$M_{t}=0.3$, above which intrinsic compressibility effects are considered
important (Smits & Dussauge, 2006). Due to the inhomogeneous nature of wall-
bounded flows, $M_{t}$ is not constant throughout the domain, becoming zero at
the wall where the pressure and density $rms$ are the strongest as shown in
figure 2(a).
Other parameters have been proposed in the literature as a better measure of
intrinsic compressibility effects in wall-bounded flows, most prominently the
friction Mach number $M_{\tau}=u_{\tau}/\sqrt{\gamma RT_{w}}$ (Bradshaw, 1977;
Smits & Dussauge, 2006; Yu et al., 2022; Hasan et al., 2023). When defined in
terms of local properties, one obtains the semi-local friction Mach number
$M_{\tau}^{*}=u_{\tau}^{*}/\sqrt{\gamma R\bar{T}}$. Figure 2(c) shows that, in
contrast to $M_{t}$, the distribution of $M_{\tau}^{*}$ is nearly constant,
even for flows with mean property variations. The reason $M_{\tau}^{*}$ is
nearly constant for ideal-gas flows is that
$\bar{T}/T_{w}\approx\rho_{w}/\bar{\rho}$ such that
$M_{\tau}^{*}=\frac{u_{\tau}^{*}}{\sqrt{\gamma
R\bar{T}}}=\frac{u_{\tau}\sqrt{\rho_{w}/\bar{\rho}}}{\sqrt{\gamma
R\bar{T}}}\approx\frac{u_{\tau}\sqrt{\bar{T}/T_{w}}}{\sqrt{\gamma
R\bar{T}}}=\frac{u_{\tau}}{\sqrt{\gamma RT_{w}}}=M_{\tau}.$ (12)
As seen in figure 2(b) and (c), the profiles of $M_{t}$ and $M_{\tau}^{*}$ are
equivalent for the Mach 4 constant-property and the $M_{\infty}=14$
conventional cases, further supporting the statement made above that the IC
effects in these cases are comparable.
## 3 Intrinsic compressibility effects on turbulence statistics
Having introduced the flow cases, we first discuss the modified near-wall
damping of the turbulent shear stress and its consequence on the mean velocity
scaling. Unless otherwise stated, all quantities will be presented in their
semi-locally scaled form. Nevertheless, since the cases have approximately
constant mean properties, there is no major difference between the classical
wall scaling (denoted by the superscript ‘$+$’) and the semi-local scaling
(denoted by the superscript ‘$*$’).
### 3.1 Outward shift in viscous and turbulent shear stresses
In the inner layer of parallel (or quasi-parallel) shear flows, integration of
the mean streamwise momentum equation implies that the sum of viscous and
turbulent shear stresses is equal to the total shear stress, given as
$\overline{\mu\left(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial
x}\right)}-\overline{\rho u^{\prime\prime}v^{\prime\prime}}={\tau_{tot}},$
(13)
where $\tau_{tot}\approx\tau_{w}$ in zero-pressure-gradient boundary layers,
whereas it decreases linearly with the wall distance in channel flows.
Neglecting terms due to viscosity fluctuations and normalizing equation (13)
by $\tau_{w}$, we get for the latter case
$\frac{\bar{\mu}}{\mu_{w}}\frac{d\bar{u}^{+}}{dy^{+}}-\widetilde{u^{\prime\prime}v^{\prime\prime}}^{*}\approx
1-\frac{y}{h},$ (14)
where $h$ is the channel half-height.
Integrating the viscous shear stress yields the TL-transformed mean velocity
profile (Trettel & Larsson, 2016; Patel et al., 2016) as
$\bar{U}^{+}_{TL}=\int_{0}^{y^{*}}\frac{\bar{\mu}}{\mu_{w}}\frac{d\bar{u}^{+}}{dy^{+}}dy^{*}\mathrm{.}$
(15)
Figure 3(a) shows the transformed velocity profiles for the cases listed in
table 1 (or simply $\bar{u}^{+}$, since the mean flow properties are nearly
constant). A clear shift in the logarithmic profile is seen that increases
with the Mach number. Based on equation (15), an upward shift in the mean
velocity profile corresponds to an equivalent upward shift (or increase) in
the viscous shear stress. This is evident from figure 3(b). Since the total
shear stress is universal for the four flow cases under inspection, an
increase in the viscous shear stress directly implies a decrease in the
turbulent shear stress. Indeed, figure 3(b) shows that the turbulent shear
stress reduces with increasing Mach number.
Figure 3: (a) TL-transformed mean velocity profiles [equations (5), (15)], and
(b) viscous and turbulent shear stresses for the cases described in table 1.
In other words, the log-law shift observed in figure 3(a) is a consequence of
the modified damping of the turbulent shear stress, as also noted by Hasan et
al. (2023).
### 3.2 Outward shift in wall-normal turbulent stress: change in turbulence
anisotropy
The outward shift in the turbulent shear stress corresponds to an outward
shift in the wall-normal turbulent stress, because wall-normal motions
directly contribute to turbulent shear stress by transporting momentum across
the mean shear (Townsend, 1961; Deshpande et al., 2021). This is also
reflected in the turbulent shear stress budget, whose production is controlled
by the wall-normal turbulent stress (Pope, 2001).
Figure 4: Wall-normal distributions of (a) streamwise, (b) wall-normal and (c)
spanwise turbulent stresses, and (d) the turbulence kinetic energy for the
cases described in table 1.
Figure 4(b) shows profiles of the wall-normal turbulent stress. A clear
outward shift is evident, which is consistent with the observed outward shift
in the turbulent shear stress. Now, the decrease in the wall-normal stress can
either be due to less energy being received from the streamwise component
(inter-component energy transfer), or due to an overall reduction of the
turbulence kinetic energy. In order to clarify this, we report the streamwise
and the spanwise turbulent stresses, along with the turbulence kinetic energy
in panels (a), (c) and (d) of figure 4, respectively.
Figure 4(a) shows that the streamwise turbulent stress becomes stronger with
increasing Mach number. The increase in the peak streamwise turbulence
intensity in compressible flows, compared to incompressible flows at similar
Reynolds numbers, has also been observed in several other studies (Gatski &
Erlebacher, 2002; Pirozzoli et al., 2004; Foysi et al., 2004; Duan et al.,
2010; Modesti & Pirozzoli, 2016; Zhang et al., 2018; Trettel, 2019; Cogo et
al., 2022, 2023). However, none of these studies assessed whether intrinsic
compressibility effects play a role in peak strengthening. In fact, the higher
peak observed in the $M_{\infty}=14$ boundary layer was attributed to
variable-property effects by Zhang et al. (2018). Our results instead
demonstrate unambiguously that intrinsic compressibility effects play a
central role in the strengthening of streamwise turbulence intensity, since
our flow cases are essentially free of variable-property effects.
Similar to the wall-normal stress, the spanwise turbulent stress also decreases with increasing Mach number, as shown in figure 4(c). The increase in
the streamwise stress and the decrease in the wall-normal and spanwise
stresses imply suppression of inter-component energy transfer with increasing
Mach number. However, before discussing this in more detail in the next
subsection, we first note that the increase in the streamwise turbulent stress
is much more pronounced than the decrease in the other two components, which
essentially results in an increase in the turbulence kinetic energy with Mach
number as shown in figure 4(d). This suggests that, in addition to the change
in inter-component energy transfer, there is also a change in the production of
$\widetilde{u^{\prime\prime}u^{\prime\prime}}^{*}$. This change in production
can be attributed to the changes in viscous and turbulent shear stresses
observed in figure 3, since it is their product that governs the production
term. This is further discussed in detail in Appendix A, where we present the
budget of the streamwise turbulence stress, and provide a phenomenological
explanation for the increase in
$\widetilde{u^{\prime\prime}u^{\prime\prime}}^{*}$.
### 3.3 Reduced inter-component energy transfer
The strengthening of the streamwise turbulent stress and the weakening of the
other two components, as observed in figures 4(a)-(c), imply an increase in turbulence anisotropy. Such an increase was also observed in several previous studies of compressible wall-bounded flows (Foysi et al., 2004; Duan et al., 2010; Zhang et al., 2018; Cogo et al., 2022, 2023), where it was mainly regarded as a variable-property effect.
From turbulence theory, one can argue that the change in turbulence anisotropy
is due to reduced inter-component energy transfer. Since the negative of the
streamwise pressure-strain correlation
($-\Pi_{11}=-2\,\overline{p^{\prime}\partial u^{\prime\prime}/\partial x}$) is
a measure of the energy transferred from the streamwise turbulent stress to
the cross-stream components, we expect it to decrease with increasing Mach
number for our cases. To verify this, figure 5 shows $-\Pi_{11}$ scaled by the
TKE production (Duan et al., 2010; Patel et al., 2015; Cogo et al., 2023), for
(a) Mach 2.28, (b) Mach 3 and (c) Mach 4 cases, compared to the Mach 0.3 case.
Figure 5: Wall-normal distributions of the streamwise pressure-strain
correlation ($-\Pi_{11}$) scaled by the production term ($P_{11}$) for (a)
Mach 2.28, (b) Mach 3 and (c) Mach 4 cases described in table 1, compared to
the Mach 0.3 case.
The figure clearly corroborates our claims. We further note that $\Pi_{11}$
scaled by semi-local units ($\bar{\rho}u_{\tau}^{*3}/\delta_{v}^{*}$) also
reduces for the three high-Mach-number cases compared to the Mach 0.3 case
(not shown).
### 3.4 Identifying direct and indirect effects of intrinsic compressibility
So far we have observed strong intrinsic compressibility effects on various
turbulence statistics. Are these strong effects due to a direct contribution
from the dilatational motions or due to IC effects on the solenoidal motions?
To answer this, we apply Helmholtz decomposition to the velocity field
obtained from DNS to isolate the solenoidal (divergence-free) and dilatational
(curl-free) parts, namely
$u^{\prime\prime}_{i}={u_{i}^{s}}^{\prime\prime}+{u_{i}^{d}}^{\prime\prime}.$
(16)
Appendix B reports details on how the decomposition is actually performed.
Following Yu et al. (2019), the turbulent stresses are then split as
$\widetilde{u_{i}^{\prime\prime}u_{j}^{\prime\prime}}^{*}=\widetilde{{u_{i}^{s}}^{\prime\prime}{u_{j}^{s}}^{\prime\prime}}^{*}+\widetilde{{u_{i}^{d}}^{\prime\prime}{u_{j}^{s}}^{\prime\prime}}^{*}+\widetilde{{u_{i}^{s}}^{\prime\prime}{u_{j}^{d}}^{\prime\prime}}^{*}+\widetilde{{u_{i}^{d}}^{\prime\prime}{u_{j}^{d}}^{\prime\prime}}^{*}.$
(17)
The terms involving dilatational motions are absent in incompressible flows,
and thus any contribution from them is regarded as a _direct_ effect. However,
the first term on the right-hand side is also present in incompressible flows.
Thus, any effect of compressibility on this term will be regarded as an
_indirect_ effect.
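As an illustration of how the split in equation (17) can be evaluated from snapshot data, the sketch below computes the four density-weighted (Favre) contributions. The array names and the assumption of averaging over homogeneous $(x,z)$ directions are ours, for illustration only.

```python
import numpy as np

def stress_split(rho, ui_s, ui_d, uj_s, uj_d):
    """Favre-averaged split of a turbulent stress, equation (17) (a sketch).

    rho        : instantaneous density, shape (nx, ny, nz)
    ui_s, ui_d : solenoidal / dilatational parts of the Favre
                 fluctuation u_i''; uj_s, uj_d likewise for u_j''
    Averages are taken over the homogeneous (x, z) directions.
    Returns the four contributions (ss, ds, sd, dd) as profiles in y.
    """
    def favre(f):
        return (rho * f).mean(axis=(0, 2)) / rho.mean(axis=(0, 2))
    return (favre(ui_s * uj_s), favre(ui_d * uj_s),
            favre(ui_s * uj_d), favre(ui_d * uj_d))
```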
Figure 6: Wall-normal distributions of the total and solenoidal (a)
streamwise, (b) wall-normal and (c) spanwise turbulent stresses as per
equation (17), for the cases described in table 1. Inset: profiles of the
terms $\widetilde{{v^{d}}^{\prime\prime}{v^{d}}^{\prime\prime}}^{*}$ (dotted)
and $\widetilde{{v^{s}}^{\prime\prime}{v^{d}}^{\prime\prime}}^{*}$ (dash-
dotted).
Figure 6 shows the first term on the right-hand side of equation (17), associated with solenoidal velocity fluctuations, for the normal turbulent stresses. The solenoidal contributions are seen to almost overlap with the total turbulent stresses, which are shown in grey. This implies that any change in the total stresses as
a function of the Mach number is reflected in their respective solenoidal
components, and thus intrinsic compressibility effects on turbulence
statistics are mainly indirect. The collapse of the total and solenoidal
stresses also implies that the correlations involving
${u_{i}^{d}}^{\prime\prime}$ are small. However, there are some exceptions,
particularly the terms
$\widetilde{{v^{d}}^{\prime\prime}{v^{d}}^{\prime\prime}}^{*}$ and
$\widetilde{{v^{s}}^{\prime\prime}{v^{d}}^{\prime\prime}}^{*}$, that can have
large contributions in the near-wall region as shown in the inset of figure
6(b). Negative values of
$\widetilde{{v^{s}}^{\prime\prime}{v^{d}}^{\prime\prime}}^{*}$ physically
represent opposition of solenoidal motions (sweeps/ejections) from
dilatational wall-normal velocity. This opposition was first observed by Yu et
al. (2019), and plays a key role in the forthcoming discussion.
## 4 Weakening of the quasi-streamwise vortices
Quasi-streamwise vortices play an important role in transferring energy from
the streamwise to the wall-normal and spanwise components (Jeong et al.,
1997). Thus, any reduction in this inter-component energy transfer (see figure
5), and hence any weakening of the wall-normal and spanwise velocity
fluctuations (see figure 4) is directly related to the weakening of those
vortices. To verify this claim, the root-mean-square of the streamwise
vorticity is shown in figure 7(a). This quantity indeed decreases with
increasing Mach number, implying weakening of the quasi-streamwise vortices.
In contrast, the root-mean-square of the wall-normal and spanwise vorticity
shows a weak Mach number dependence, as seen in figure 7(b) and (c).
Figure 7: Wall-normal distributions of the root-mean-square of (a) streamwise,
(b) wall-normal, and (c) spanwise vorticity fluctuations, scaled by
$u_{\tau}^{*}/\delta_{v}^{*}$, for the cases described in table 1.
Choi et al. (1994) showed that active opposition of sweeps and ejections is
effective in weakening the quasi-streamwise vortices. As noted in §3.4, a
similar opposition also occurs spontaneously in compressible flows, in which
solenoidal motions like sweeps and ejections are opposed by wall-normal
dilatational motions.
To explain the physical origin of near-wall opposition of sweeps and
ejections, and hence the weakening of the quasi-streamwise vortices, we
perform a conditional averaging procedure that identifies shear layers. Shear
layers are in fact inherently associated with quasi-streamwise vortices, being
formed as a consequence of sweeps and ejections initiated by those vortical
structures (Jeong et al., 1997). To educe shear layers, we rely on the
variable interval space averaging (VISA) technique introduced by Kim (1985),
which is the spatial counterpart of the variable interval time averaging
(VITA) technique developed by Blackwelder & Kaplan (1976). Since only the
solenoidal motions carry the imprint of incompressible turbulent structures,
like shear layers, the VISA detection criterion is directly applied to the
solenoidal velocity field. More details on the implementation of the VISA
technique are provided in Appendix C.
### 4.1 Results from the variable interval space averaging technique
Figure 8 shows the conditionally averaged $\xi^{*}-y^{*}$ planes, at
$\zeta^{*}=0$, of various quantities for the Mach 2.28, 3 and 4 cases, only
considering acceleration events. A similar plot with deceleration events is
not shown since they are much less frequent (Johansson et al., 1987). $\xi$
and $\zeta$ indicate streamwise and spanwise coordinates, respectively,
centred at the locations of the detected events.
The first row in figure 8 shows the contours of the conditionally averaged
solenoidal streamwise velocity fluctuations
$\left<{u^{s}}^{\prime\prime}\right>^{*}$, which clearly represent a shear
layer. The second row of the plot shows the contours of the conditionally
averaged solenoidal wall-normal velocity fluctuations
$\left<{v^{s}}^{\prime\prime}\right>^{*}$. Positive streamwise velocity
fluctuations are associated with negative wall-normal fluctuations, resulting
in a sweep event. Similarly, negative streamwise fluctuations are associated
with positive wall-normal velocity, resulting in an ejection event. For
greater clarity, we also show streamlines constructed using
$\left<{u^{s}}^{\prime\prime}\right>^{*}$ and
$\left<{v^{s}}^{\prime\prime}\right>^{*}$, with their thickness being
proportional to the local magnitude of
$\left<{v^{s}}^{\prime\prime}\right>^{*}$.
Figure 8: Conditionally averaged quantities, based on VISA applied to
streamwise velocity fluctuations at $y^{*}\approx 15$ (see Appendix C), for
the Mach 2.28 (left column), Mach 3 (centre column), and Mach 4 (right column)
cases in table 1. The $\xi^{*}-y^{*}$ planes are taken at the centre of the
shear layer ($\zeta^{*}=0$). The velocity contours (first, second and fifth
rows) are scaled by the semi-local friction velocity $u_{\tau}^{*}$, the
pressure contours (third row) are scaled by $\tau_{w}$, and the dilatation
contours (fourth row) are scaled by $u_{\tau}^{*}/\delta_{v}^{*}$. The
overlaying streamlines are constructed using
$\left<{u^{s}}^{\prime\prime}\right>^{*}$ and
$\left<{v^{s}}^{\prime\prime}\right>^{*}$, and their thickness is scaled by
the magnitude of $\left<{v^{s}}^{\prime\prime}\right>^{*}$. The solid black
line indicates $y^{*}\approx 15$ and the dashed black line indicates
$\xi^{*}=0$.
Similar to the velocity field, we also split pressure into solenoidal and
dilatational parts, namely
${p}^{\prime}={p^{s}}^{\prime}+{p^{d}}^{\prime}.$ (18)
Unlike the Helmholtz decomposition for velocities, this splitting is not
unique. In this work, we adhere to the definition of solenoidal pressure given
for homogeneous flows by Ristorcelli (1997); Jagannathan & Donzis (2016); Wang
et al. (2017), which we extend to inhomogeneous flows as follows:
$\frac{\partial^{2}{p^{s}}^{\prime}}{\partial x_{i}\partial
x_{i}}=-\frac{\partial(\overline{\rho}{u_{i}^{s}}^{\prime\prime}{u_{j}^{s}}^{\prime\prime}-\overline{\rho}\overline{{u_{i}^{s}}^{\prime\prime}{u_{j}^{s}}^{\prime\prime}})}{\partial
x_{i}\partial
x_{j}}-2\overline{\rho}\frac{d\widetilde{u}}{dy}\frac{\partial{v^{s}}^{\prime\prime}}{\partial
x}.$ (19)
This part of the pressure field is also referred to as pseudo-pressure
(Ristorcelli, 1997), as it propagates with the flow speed. Looking at the
source terms on the right-hand side of equation (19), the solenoidal pressure
can be interpreted as being generated from vortices and shear layers, similar
to incompressible flows (Bradshaw & Koh, 1981).
The third row of figure 8 shows the conditionally averaged solenoidal pressure
as per equation (19). Clearly, the pressure maxima occur approximately in
between the high-velocity regions, which suggests a phase shift between
velocity and pressure. To shed further light on this point, in figure 9 we
plot the wall-normal velocity at $y^{*}\approx 15$, and the solenoidal
pressure at the wall as a function of the streamwise coordinate ($\xi^{*}$).
Since the wall pressure is mainly contributed by the buffer-layer eddies (Kim,
1989; Johansson et al., 1987; Kim & Hussain, 1993; Luhar et al., 2014), its
convection velocity is comparable with the speed of the buffer-layer coherent
structures (Kim & Hussain, 1993). Using this information and Taylor’s
hypothesis, one can transform the spatial axis in figure 9 to a temporal axis
($\tau$) by taking the mean velocity at $y^{*}\approx 15$ as the propagation
velocity. Reading figure 9 using the temporal axis (axis on the top), we note
that the high negative sweep velocity corresponds to a high negative rate of
change of the wall pressure, and likewise for the ejection velocity, i.e.,
$\frac{\partial\left<{p^{s}_{w}}^{\prime}\right>^{*}}{\partial\tau^{*}}\sim\left<{v^{s}}^{\prime\prime}\right>^{*}.$
(20)
Similar observations were made by Johansson et al. (1987), using the VITA
technique, and by Luhar et al. (2014), using the resolvent analysis. Other
interesting observations can be made from figure 9. First, the magnitude of
the conditionally averaged streamwise fluctuations increases, whereas the
magnitude of the conditionally averaged wall-normal fluctuations decreases
with increasing Mach number, as also seen in the first two rows of figure 8.
This is consistent with the strengthening of the streamwise and weakening of
the wall-normal turbulent stresses observed in figure 6. Second, the wall
pressure maximum shifts upstream with increasing Mach number, as also seen in
the third row of figure 8. While we know that such a shift is attributable to the Mach number dependence of the solenoidal motions that contribute to the source terms in equation (19), at the moment we cannot provide a detailed explanation for it and leave this for future studies.
Figure 9: Conditionally averaged profiles of solenoidal streamwise and wall-
normal velocities at $y^{*}\approx 15$, and wall pressure as a function of
space ($\xi^{*}$, at $\zeta^{*}$=0; bottom-axis) and time
($\tau^{*}=\tau/(u_{\tau}^{*}/\delta_{v}^{*})$; top-axis), for (a) Mach 2.28,
(b) Mach 3 and (c) Mach 4 cases in table 1.
After establishing the relation between the solenoidal wall-normal velocity
and the rate of change of the solenoidal pressure in equation (20), we
continue in our attempt to relate the solenoidal and the dilatational velocity
fields. For that purpose, we first isolate the dilatation generated by the solenoidal pressure, also referred to as ‘pseudo-sound’ dilatation (superscript $ps$) in the literature (Ristorcelli, 1997; Wang et al., 2017), as follows
$d^{ps}\approx\frac{-1}{\gamma\bar{P}}\left(\frac{\partial{p^{s}}^{\prime}}{\partial
t}+u_{j}\frac{\partial{p^{s}}^{\prime}}{\partial x_{j}}\right).$ (21)
Pseudo-sound dilatation represents the volume changes of fluid elements caused
by pressure changes associated with solenoidal turbulent structures such as
vortices and shear layers. Normalization by the wall shear stress yields
$d^{ps}\approx\frac{-\tau_{w}}{\gamma\bar{P}}\left(\frac{\partial{p^{s}}^{\prime*}}{\partial
t}+u_{j}\frac{\partial{p^{s}}^{\prime*}}{\partial x_{j}}\right),$ (22)
where the factor ${\tau_{w}}/({\gamma\bar{P}})$ is equal to the square of the
semi-local friction Mach number for ideal gas flows. Using
$M_{\tau}^{*}\approx M_{\tau}$ (see equation (12) and figure 2), we then
rewrite equation (22) as
$d^{ps}\approx-M_{\tau}^{2}\left(\frac{\partial{p^{s}}^{\prime*}}{\partial
t}+u_{j}\frac{\partial{p^{s}}^{\prime*}}{\partial x_{j}}\right).$ (23)
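A minimal sketch of how equation (23) can be evaluated from two consecutive snapshots is given below. The mid-point time differencing, the central spatial differences, and all variable names are assumptions for illustration; a non-uniform wall-normal grid would require the proper metric terms.

```python
import numpy as np

def pseudo_sound_dilatation(ps_prev, ps_next, dt, u, v, w, dx, dy, dz, M_tau):
    """Pseudo-sound dilatation, equation (23) (a sketch).

    ps_prev, ps_next : semi-locally scaled solenoidal pressure p^s'* at
                       two consecutive snapshots separated by dt
    u, v, w          : instantaneous velocities at the mid time level
    Gradients use second-order central differences on a uniform grid.
    """
    ps = 0.5 * (ps_prev + ps_next)            # mid-point pressure field
    dpdt = (ps_next - ps_prev) / dt           # time derivative
    conv = (u * np.gradient(ps, dx, axis=0)   # convective derivative
            + v * np.gradient(ps, dy, axis=1)
            + w * np.gradient(ps, dz, axis=2))
    return -M_tau**2 * (dpdt + conv)
```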
According to the pseudo-sound theory (Ristorcelli, 1997), the inner-scaled solenoidal pressure is assumed to be unaffected by compressibility
effects. Thus, from equation (23), one would expect $d^{ps}$ to increase with
the square of the friction Mach number. However, as noted in the discussion
following figure 9, the solenoidal motions change as a function of the Mach
number, thereby affecting the solenoidal pressure as per equation (19). This
suggests that $d^{ps}$ could increase with an exponent that is close to two
but not necessarily equal to two. To assess the correct scaling, in table 2 we
report the root-mean-square of $d^{ps}$ at the wall. Data fitting yields
$d^{ps}\sim M_{\tau}^{2.42}$, hence close to what was suggested by equation
(23).
Case name | $M_{b_{w}}$ | $M_{\tau}$ | $\left(d_{w}^{ps}\right)^{*}_{rms}$ | $\left(v_{p}^{d}\right)^{*}_{rms}$ | $\left(v_{p}^{d,ps}\right)^{*}_{rms}$ | $\left(v_{p}^{d,nps}\right)^{*}_{rms}$
---|---|---|---|---|---|---
Mach 2.28 | 2.28 | 0.1185 | 0.0096 | 0.066 | 0.047 | 0.059
Mach 3 | 3 | 0.1526 | 0.0160 | 0.153 | 0.078 | 0.140
Mach 4 | 4 | 0.1968 | 0.0311 | 0.332 | 0.150 | 0.323
 | | $b$ | 2.42 | 3.1 | 2.37 | 3.3
Table 2: Root-mean-square ($rms$) of the pseudo-sound dilatation at the wall
and the peak $rms$ value of the total, pseudo-sound and non-pseudo-sound wall-
normal dilatational velocities. ‘$b$’ is the exponent obtained from power-law
fitting ($aM_{\tau}^{b}$) of the data.
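For reference, a fit of this form can be reproduced along the following lines; the exact exponent depends mildly on whether the fit is performed by nonlinear least squares or in log space, so this is a sketch rather than the exact procedure used.

```python
import numpy as np
from scipy.optimize import curve_fit

# Data from table 2: friction Mach number and wall rms of d^ps
M_tau = np.array([0.1185, 0.1526, 0.1968])
d_ps = np.array([0.0096, 0.0160, 0.0311])

# Nonlinear least-squares fit of the power law a * M_tau**b
(a, b), _ = curve_fit(lambda M, a, b: a * M**b, M_tau, d_ps, p0=(1.0, 2.0))
print(f"b = {b:.2f}")  # close to the 2.42 reported in table 2
```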
Continuing on our path to relate solenoidal and dilatational motions, close to
the wall we can write
${d_{w}^{ps}}^{*}\approx-
M_{\tau}^{2}\frac{\partial{p_{w}^{s}}^{\prime*}}{\partial t^{*}},$ (24)
where ${d_{w}^{ps}}^{*}={d_{w}^{ps}}/(u_{\tau}^{*}/\delta_{v}^{*})$. This
equation, when conditionally averaged and combined with equation (20), leads
to
$\left<d_{w}^{ps}\right>^{*}\sim-
M_{\tau}^{2}\left<{v^{s}}^{\prime\prime}\right>^{*}.$ (25)
Using this result, we expect positive dilatation events (expansions) to be
mainly associated with sweeps and negative dilatation events (compressions) to
be associated with ejections. The fourth row in figure 8 shows the contours of
conditionally averaged pseudo-sound dilatation defined in equation (23).
Consistent with our expectation, positive dilatation is indeed found to be
associated with sweeps and negative dilatation with ejections, and its
magnitude increases with the Mach number.
Having related the pseudo-sound dilatation and the solenoidal velocity in
equation (25), the next step is to introduce the pseudo-sound dilatational
velocity as
$\frac{\partial^{2}\phi^{ps}}{\partial x_{j}\partial x_{j}}=d^{ps},\qquad v^{d,ps}=\frac{\partial\phi^{ps}}{\partial y},$ (26)
where $\phi^{ps}$ is the scalar potential. Note that this equation is similar
to equations (39) and (40) used to solve for the total dilatational velocity,
as reported in Appendix B. Based on equation (26), one would expect $v^{d,ps}$
to increase with the Mach number at a similar rate as $d^{ps}$. Power-law
fitting of the data reported in table 2 indeed yields $v^{d,ps}\sim
M_{\tau}^{2.37}$, hence close to what was found for $d^{ps}$.
Equation (26) implies that the conditionally averaged pseudo-sound
dilatational velocity in the buffer layer should be proportional to and in
phase with the dilatation at the wall. Thus, we can write
$\left<{v^{d,ps}}\right>^{*}\sim\left<{d_{w}^{ps}}\right>^{*}.$ (27)
Using equation (27) and (25) we can finally develop a relation between the
solenoidal and the pseudo-sound dilatational velocity, namely
$\left<{v^{d,ps}}\right>^{*}\sim-
M_{\tau}^{2}\left<{v^{s}}^{\prime\prime}\right>^{*}.$ (28)
In our opinion, this relation is quite meaningful as it theoretically supports
near-wall opposition of sweeps and ejections by dilatational motions.
Moreover, it suggests that the opposition effect should approximately increase
with the square of $M_{\tau}$.
In order to verify this, the final row in figure 8 reports the conditionally
averaged contours of the pseudo-sound wall-normal dilatational velocity given
in equation (26). As suggested from equation (27), the contours of $v^{d,ps}$
appear to be in phase with those of $d^{ps}$. Thus, consistent with the
observations made for the pseudo-sound dilatation, the wall-normal
dilatational velocity is positive during sweeps and negative during ejections,
and its magnitude increases with the Mach number. This opposition is also
clearly seen in figure 10, which shows the conditionally averaged profiles of
${v^{s}}^{\prime\prime}$ and $v^{d,ps}$ at $y^{*}\approx 15$. Additionally, in
figures 8 and 10 we note that the pseudo-sound dilatational velocity contour
(or profile) shifts upstream (leftward) with increasing Mach number. This is
due to the upstream shift in the pressure contour mentioned above.
Figure 10: Conditionally averaged profiles of solenoidal and pseudo-sound dilatational wall-normal velocities at $y^{*}\approx 15$ as a function of $\xi^{*}$ (at $\zeta^{*}=0$) for (a) Mach 2.28, (b) Mach 3 and (c) Mach 4 cases in table 1.

Figure 11: (a) Conditionally averaged and integrated [equation (29)] correlations between solenoidal and dilatational velocities. (b) Conditionally averaged pseudo-sound correlation coefficient ($C^{ps}$) as defined in equation (30).
To further quantify the opposition effect, we analyse the conditionally
averaged correlation between solenoidal and pseudo-sound dilatational wall-
normal velocity, i.e. $\left<v^{s}v^{d,ps}\right>$. The correlation is
integrated over a window of 300 viscous units in the streamwise direction and
40 viscous units in the spanwise direction (Johansson et al., 1991), at each
wall-normal location as
$\left<{v^{s}}^{\prime\prime}v^{d,ps}\right>_{\xi\zeta}(y^{*})=\int_{\zeta^{*}=-20}^{20}\int_{\xi^{*}=-150}^{150}\left<{v^{s}}^{\prime\prime}v^{d,ps}\right>(\xi^{*},y^{*},\zeta^{*})d\xi^{*}d\zeta^{*}.$
(29)
The integrated correlation, scaled by the squared semi-local friction
velocity, is reported in figure 11 with dashed lines. Figure 11 also shows the
pseudo-sound correlation coefficient defined as
$C^{ps}=\frac{\left<{v^{s}}^{\prime\prime}v^{d,ps}\right>_{\xi\zeta}}{\sqrt{\left<{v^{s}}^{\prime\prime}{v^{s}}^{\prime\prime}\right>_{\xi\zeta}\left<v^{d,ps}v^{d,ps}\right>_{\xi\zeta}}}.$
(30)
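A sketch of how equations (29) and (30) can be evaluated from the conditionally averaged fields is given below; the array names and the trapezoidal quadrature are illustrative assumptions.

```python
import numpy as np

def integrate_window(corr, xi_star, zeta_star):
    """Windowed integral of a conditionally averaged correlation,
    equation (29): |xi*| <= 150, |zeta*| <= 20 (a sketch).

    corr      : <v^s'' v^d,ps>(xi*, y*, zeta*), shape (nxi, ny, nzeta)
    xi_star   : streamwise offsets, 1D; zeta_star : spanwise offsets, 1D
    Returns a 1D profile in y*.
    """
    mx = np.abs(xi_star) <= 150
    mz = np.abs(zeta_star) <= 20
    sub = corr[np.ix_(mx, np.arange(corr.shape[1]), mz)]
    inner = np.trapz(sub, zeta_star[mz], axis=2)  # integrate over zeta*
    return np.trapz(inner, xi_star[mx], axis=0)   # then over xi*

def correlation_coefficient(cvv, css, cdd):
    """Pseudo-sound correlation coefficient, equation (30), from the
    integrated cross and auto correlations."""
    return cvv / np.sqrt(css * cdd)
```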
The correlation and its coefficient are negative, as expected. Moreover, the magnitude of the correlation increases approximately with the square of the friction Mach number, consistent with equation (28), whereas the correlation coefficient almost collapses across all Mach numbers.
The association of the opposition effect with the quasi-streamwise vortices is
visualised in figure 12 for the Mach 2.28 case, all other cases being
qualitatively similar. Indeed, the figure insets illustrate that sweeps and
ejections initiated by quasi-streamwise vortices are opposed by the near-wall
pseudo-sound dilatational velocity, thereby resulting in their weakening.
Figure 12: Opposition of sweeps and ejections by wall-normal pseudo-sound
dilatational velocity in the context of quasi-streamwise vortices. The shaded
three-dimensional isosurfaces represent quasi-streamwise vortices identified
by applying the Q-criterion to the conditionally averaged velocity field.
Their shadows are also plotted on the wall below, showing that the vortices
are inclined and tilted. Underneath the vortices, the contours of solenoidal
wall pressure are shown. The transparent planes mark regions of high rate of
change of wall pressure and hence high wall-normal pseudo-sound dilatational
velocity $\left<v^{d,ps}\right>^{*}$ (see discussion related to equations (20)
- (28)). The arrows between the vortices indicate $\left<v^{d,ps}\right>^{*}$
as a function of $\xi^{*}$ at $\zeta^{*}=0$ and $y^{*}\approx 20$. Note that
the line along which the arrows are plotted is slightly shifted away from the
wall for better visibility. Insets: contours of pseudo-sound dilatation
$\left<d^{ps}\right>^{*}$ along the transparent planes, overlaid with the
streamlines generated by quasi-streamwise vortices. These streamlines are
constructed using the wall-normal and spanwise solenoidal velocities, i.e.
$\left<v^{s}\right>^{*}$ and $\left<w^{s}\right>^{*}$, with their thickness
being proportional to the magnitude of the local planar velocity.
$\left<v^{d,ps}\right>^{*}$ at $y^{*}\approx 15$ and $y^{*}\approx 25$ is also
shown using arrows in the left and right planes, respectively. These wall-
normal locations correspond to the maximum value of
$\left<v^{d,ps}\right>^{*}$ in those planes. The red and blue colours in the
contour plots indicate positive and negative values, respectively. An
interactive version of this figure can be accessed here.
### 4.2 Role of non-pseudo-sound dilatational velocity in near-wall
opposition
So far we have looked into the pseudo-sound dilatational velocity and provided an explanation for why it is out of phase with respect to the solenoidal motions. However, from table 2, we see that the peak root-mean-square value of
$v^{d,ps}$ is much smaller than that of the total dilatational velocity.
Hence, a large portion of the dilatational velocity and its correlation (if
any) with the solenoidal velocity is still unexplained. To address this point,
figure 11(a) shows the integrated correlation between solenoidal and total
dilatational velocities, i.e.
$\left<{v^{s}}^{\prime\prime}v^{d}\right>^{*}_{\xi\zeta}$, denoted by solid
grey lines. Except very close to the wall, the total and pseudo-sound
correlations almost overlap. This implies that the contribution from the
remaining portion of the dilatational velocity, referred to as the ‘non-
pseudo-sound’ component and given by
$v^{d,nps}=v^{d}-v^{d,ps},$ (31)
is small. In other words, despite being stronger in magnitude than the pseudo-
sound component, the non-pseudo-sound dilatational velocity does not play an
important role in opposing sweeps and ejections.
Before concluding, we would like to comment on the travelling wave-packet-like structures, first identified by Yu et al. (2019) and later studied by Yu et al. (2020), Yu & Xu (2021), Tang et al. (2020), Gerolymos & Vallet (2023) and Yu et al. (2024). Figure 13 shows the $x^{*}-z^{*}$ plane with the instantaneous
contours of the pseudo-sound and non-pseudo-sound dilatational velocity at
$y^{*}\approx 11$, for the Mach 3 case in table 1. The wave-packet structures
are predominantly present in the non-pseudo-sound component, whereas the
pseudo-sound component shows a spotty structure similar to that observed for
the streamwise gradient of wall pressure in incompressible flows (Kim, 1989).
Combining the observation above that the non-pseudo-sound component hardly
contributes to the opposition effect, and that the wave-packet-like structures
are present mainly in this component, one can argue that these structures do
not play an important role in opposing sweeps and ejections.
Figure 13: Instantaneous $x^{*}-z^{*}$ planes at $y^{*}\approx 11$ of (top)
the non-pseudo-sound and (bottom) the pseudo-sound wall-normal dilatational
velocities (see the text for definitions) scaled by their respective root-
mean-squares for the Mach 3 case in table 1. Note that, for clarity, the
colour bar is adjusted such that structures stronger than 1.33 times the root-
mean-square value are highlighted.
## 5 Conclusions
In this paper, we have attempted to provide an explanation for the underlying
mechanism through which intrinsic compressibility effects modulate the near-
wall dynamics of turbulence. To rigorously assess these effects, we have
devised four DNS cases of fully developed high-Mach-number channel flows with
approximately constant mean properties, whereby intrinsic compressibility
effects are isolated. Our findings, sketched as a flow chart in figure 14, are
summarised as follows.
Figure 14: A graphical summary of the present findings. Note that the arrows
are meant to indicate the chain of arguments made in this paper, not relations
of causality.
First, we have decomposed the velocity field into solenoidal and dilatational
parts and educed shear layers by applying conditional averaging to the
solenoidal component. We have noticed that there exists a streamwise phase
shift between the buffer-layer sweeps and ejections that form shear layers,
and the associated ‘solenoidal’ wall pressure. Equivalent observations were
made for incompressible flows by Johansson et al. (1987) and Luhar et al.
(2014). By using Taylor’s hypothesis, this streamwise shift in phase can be
interpreted as a phase shift in time, such that regions of high positive rate
of change of wall pressure correspond to regions of high positive wall-normal
velocity. Similarly, regions of high negative rate of change of wall pressure
correspond to the regions of high negative wall-normal velocity. Close to the
wall, the high rate of change of the solenoidal pressure results in large
dilatation values with an opposite sign (also referred to as pseudo-sound
dilatation), which upon integration results in a wall-normal dilatational
velocity that inherently opposes sweeps and ejections. Since sweeps and
ejections are initiated by quasi-streamwise vortices, their opposition
directly affects the evolution of those vortices, causing their weakening.
This is schematically depicted in figure 12.
Interestingly, we also found that the remaining portion of the dilatational
velocity (also referred to as the non-pseudo-sound component) does not play an
important role in the opposition mechanism. Moreover, we have observed that
the majority of the travelling wave-packet-like structures, recently
discovered in the literature, are present in this non-pseudo-sound component.
The weakening of quasi-streamwise vortices directly hinders the energy
transfer from the streamwise velocity component to the other two components,
resulting in an outward shift (reduction) in the wall-normal turbulent stress
with increasing Mach number. Since the wall-normal motions actively contribute
to the transport of momentum across mean shear, thereby generating turbulent
shear stress, the outward shift in the wall-normal turbulent stress results in
a corresponding outward shift in the turbulent shear stress. This reduction in
the turbulent shear stress is in turn responsible for an upward shift in the
logarithmic mean velocity profile (Hasan et al., 2023).
A longstanding question in the compressible flow community is why the inner-
scaled streamwise turbulent stress is higher in compressible flows than in
incompressible flows, with similar Reynolds numbers. In this respect, our
results suggest that intrinsic compressibility effects play a dominant role.
Specifically, the increase in the peak value is a consequence of the outward
shift in the turbulent and viscous shear stresses, since their product yields
the production of the streamwise turbulence stress. This implies that the
near-wall opposition mechanism outlined above is also responsible for the
strengthening of the streamwise turbulence intensity.
Some questions related to the present findings remain unanswered. First, why do the solenoidal pressure maxima shift upstream with
increasing Mach number (see figures 8 and 9)? Second, what is the Mach number
scaling of the turbulence statistics presented in the paper? This could help
explain the quasi-linear increase in the log-law constant observed by Hasan et
al. (2023). Moreover, knowing the Mach number scaling of the peak streamwise
turbulence intensity would help in developing empirical scaling laws. Third,
why is the dissipation of turbulence kinetic energy, and thus the small scales
of turbulence, not affected by intrinsic compressibility effects (see Appendix
A)? A spectral analysis of the velocity field could shed more light on this
important issue.
## Appendix A Increase in the streamwise turbulence intensity
In order to explain the increase in the streamwise turbulent stress and hence
in the turbulence kinetic energy, we consider the streamwise turbulent stress
budget for a fully-developed compressible channel flow:
$P_{11}+\epsilon_{11}+T_{11}^{\nu}+T_{11}^{u}+\Pi_{11}+C_{11}=0\mathrm{,}$
(32)
where
$P_{11}=-2\overline{\rho{u}^{\prime\prime}{v}^{\prime\prime}}\frac{\partial\widetilde{u}}{\partial y},\qquad\epsilon_{11}=-2\overline{\tau^{\prime}_{1j}\frac{\partial{u}^{\prime\prime}}{\partial x_{j}}},$
$T^{\nu}_{11}=2\frac{\partial}{\partial y}\left(\overline{{\tau^{\prime}_{12}}{u}^{\prime\prime}}\right),\qquad T_{11}^{u}=-\frac{\partial}{\partial y}\left(\overline{\rho{u}^{\prime\prime}{u}^{\prime\prime}{v}^{\prime\prime}}\right),$
$\Pi_{11}=2\overline{p^{\prime}\frac{\partial u^{\prime\prime}}{\partial x}},\qquad C_{11}=2\overline{u^{\prime\prime}}\frac{\partial\bar{\tau}_{12}}{\partial y}.$ (33)
The distributions of the production, viscous and turbulent diffusion terms, and the sum of dissipation and pressure-strain correlation, are shown in figure A.1, scaled in semi-local units by $\bar{\rho}{u_{\tau}^{*}}^{3}/\delta_{v}^{*}$. The compressibility term $C_{11}$ is omitted because of its negligible magnitude.
Figure A.1: Wall-normal distributions of (a) the streamwise turbulent stress
budget [see equation (32)] scaled in semi-local units, and (b) the sum of
viscous and turbulent fluxes obtained upon integrating the semi-locally scaled
viscous and turbulent diffusion terms [see equation (36)], for the cases
described in table 1.
Three observations can be made. First, there is an outward shift in
$P_{11}^{*}$ with increasing Mach number. Since the production term in scaled
form is simply the product of the turbulent and viscous shear stresses, its
outward shift is explained by the corresponding shift in the shear stresses in
figure 3(b) as follows. Assuming that the total stress is approximately equal
to $\tau_{w}$, such that the sum of the scaled stresses is unity, one can
substitute the viscous shear by the turbulent shear stress in $P_{11}$ to
obtain (Pope, 2001)
$P_{11}^{*}\approx-2\widetilde{u^{\prime\prime}v^{\prime\prime}}^{*}\left(1+\widetilde{u^{\prime\prime}v^{\prime\prime}}^{*}\right)=2\left(-\widetilde{u^{\prime\prime}v^{\prime\prime}}^{*}\right)-2\left(-\widetilde{u^{\prime\prime}v^{\prime\prime}}^{*}\right)^{2}.$
(34)
Taking the derivative of $P^{*}_{11}$ with respect to the turbulent shear
stress yields
$\frac{dP_{11}^{*}}{d\left(-\widetilde{u^{\prime\prime}v^{\prime\prime}}^{*}\right)}\approx
2-\,4\left(-\widetilde{u^{\prime\prime}v^{\prime\prime}}^{*}\right).$ (35)
Between the wall and the location where $-\widetilde{u^{\prime\prime}v^{\prime\prime}}^{*}$ is equal to 0.5, the derivative is positive, while it is negative above this location. On the other
hand, from figure 3(b), we observe that the rate of change of the turbulent
shear stress with the Mach number, i.e.
$\partial(-\widetilde{u^{\prime\prime}v^{\prime\prime}}^{*})/\partial M_{b}$,
at a fixed $y^{*}$ is negative. Combining these two observations, we can
conclude that the rate of change of production of the streamwise turbulent
stress with the Mach number, i.e. $\partial P_{11}^{*}/\partial M_{b}$, is
negative close to the wall and becomes positive away from it, resulting in an
effective outward shift.
Second, except very close to the wall, the sum of the two sink terms in the budget of the streamwise turbulent stress (32), namely $\epsilon_{11}^{*}$ and $\Pi_{11}^{*}$, shows a weak Mach number dependence. Interestingly, the TKE
dissipation
($2\epsilon_{k}^{*}=\epsilon^{*}_{11}+\epsilon^{*}_{22}+\epsilon^{*}_{33}$),
reported with grey dashed lines in figure A.1, also shows marginal dependence
on the Mach number. This is consistent with the observation made by Hasan et
al. (2023) regarding the universality of the local Kolmogorov length scale.
The universality of $\epsilon_{11}^{*}+\Pi_{11}^{*}$ and that of $\epsilon_{k}^{*}$ are related as follows. Any Mach-number-dependent reduction in $\Pi_{11}^{*}$
would imply that less energy is being received by the lateral turbulent
stresses, and hence, less TKE is being dissipated through the terms
$\epsilon^{*}_{22}+\epsilon^{*}_{33}$. This suggests that the Mach-number-
dependence of $\Pi_{11}^{*}$ and $\epsilon^{*}_{22}+\epsilon^{*}_{33}$ is
linked, such that the universality of $\epsilon^{*}_{11}+\Pi^{*}_{11}$ is
connected with the universality of the TKE dissipation.
Third, above $y^{*}\approx 12$, the production term is higher at higher Mach
numbers, which combined with the observation that the total sink
$\epsilon^{*}_{11}+\Pi^{*}_{11}$ is universal, implies more negative values of
the diffusion term. This means that the surplus production is transported away
from the buffer layer towards the wall. For further insight, figure A.1(b)
shows the sum of the viscous and turbulent fluxes obtained by integrating the
transport terms as
$\Phi^{*}_{11}=\int_{0}^{y^{*}}\left(T_{11}^{\nu*}+T_{11}^{u*}\right)dy^{*},$
(36)
such that positive values signify that energy is transported towards the wall,
and negative values signify the opposite. As one can observe, the flux is
positive close to the wall and increases with the Mach number. This implies
that more energy is being carried towards the wall at higher Mach numbers.
Between the wall and the peak location of the streamwise turbulence intensity,
the total flux is mainly controlled by the viscous flux, which can be
approximated as $d\widetilde{u^{\prime\prime}u^{\prime\prime}}^{*}/dy^{*}$.
Thus, a higher positive flux at increasing Mach numbers implies a higher
gradient of the streamwise turbulent stress, which results in a higher peak
value upon integration.
The strengthening of the streamwise velocity fluctuations can also be
explained based on a phenomenological mixing-length model. The semi-locally
scaled streamwise stress can be written as
$\left(\overline{u^{\prime}u^{\prime}}^{*}\right)^{1/2}\sim\ell^{*}\frac{d\bar{U}^{*}}{dy^{*}},$
(37)
where $\ell^{*}$ is the mixing length scaled by $\delta_{v}^{*}$, and
${d\bar{U}^{*}}/{dy^{*}}$ is the semi-locally scaled mean velocity gradient,
which is equivalent to $d\bar{U}^{+}_{TL}/{dy^{*}}$. Note that the streamwise
stress is written in the Reynolds-averaged form, since we observe that the peaks of the Reynolds- and Favre-averaged stresses increase alike (not shown), and therefore the error incurred by excluding density fluctuations from
equation (37) is small. The mixing length is determined as
$\ell^{*}\sim\sqrt{\overline{v^{\prime}v^{\prime}}^{*}}\,\mathcal{T}$ (Durbin,
1991), where $\mathcal{T}\sim k^{*}/\epsilon^{*}$. For the present cases this
definition of mixing length yields universal distributions across the Mach
number range (not shown). This is because the velocity with which a fluid parcel travels decreases with increasing Mach number, whereas the time scale over which it retains its streamwise momentum increases (due to higher TKE and almost universal dissipation), so that the parcel effectively travels the same distance. Due to the universality of the mixing length,
equation (37) implies that the increase in mean shear observed in figure 3 is
directly responsible for an increase in the peak streamwise turbulence
intensity. Interestingly, an increase in the mean shear was also found to be responsible for the higher production in the buffer layer (see figure A.1), which formed the basis of our explanation above, making the phenomenological model consistent with the budget analysis.
## Appendix B Helmholtz decomposition of the velocity field
The Helmholtz decomposition in compressible flows is the representation of the
velocity field as the sum of divergence-free ‘solenoidal’ and curl-free
‘dilatational’ components. This is mathematically written as
${u}_{i}={u_{i}^{s}}+{u_{i}^{d}},$ (38)
where superscripts ‘$s$’ and ‘$d$’ stand for solenoidal and dilatational
components. This equation is similar to equation (16) in the main text, the
only difference being that there the decomposition was written explicitly for
the fluctuating velocity field.
The dilatational component is computed as the gradient of a scalar potential
$\phi$, namely
${u_{i}^{d}}=\frac{\partial\phi}{\partial x_{i}},$ (39)
where $\phi$ is obtained by solving a Poisson equation as
$\displaystyle\frac{\partial^{2}\phi}{\partial x_{j}\partial
x_{j}}=\frac{\partial u_{i}}{\partial x_{i}}.$ (40)
Equation (40) is solved using a second-order accurate FFT-based Poisson solver
(see Costa (2018) for example) with periodic boundary conditions in the
streamwise and spanwise directions, and no-penetration boundary condition
$\partial\phi/\partial y=0$ (or $v^{d}=0$) at the wall. Note that with these
boundary conditions, no-slip is not satisfied at the wall, that is $u^{d}$ and
$w^{d}$ are not equal to zero. While seemingly counter-intuitive at first
glance, this is not unphysical, as pointed out in Sharma & Girimaji (2023).
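A simplified sketch of such an FFT-based solve is given below, assuming a uniform grid in all directions; the actual solver follows Costa (2018) and also handles wall-normal grid stretching, which is omitted here for brevity. All function and variable names are ours.

```python
import numpy as np
from scipy.linalg import solve_banded

def dilatational_velocity(u, v, w, dx, dy, dz):
    """Sketch of the Helmholtz decomposition, equations (39)-(40).

    FFT in the periodic (x, z) directions; second-order finite
    differences with Neumann conditions (d(phi)/dy = 0, i.e. v^d = 0)
    at both walls in y. u, v, w have shape (nx, ny, nz).
    """
    nx, ny, nz = u.shape
    # right-hand side of equation (40): the velocity divergence
    div = (np.gradient(u, dx, axis=0) + np.gradient(v, dy, axis=1)
           + np.gradient(w, dz, axis=2))
    div_hat = np.fft.fftn(div, axes=(0, 2))
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    kz = 2 * np.pi * np.fft.fftfreq(nz, d=dz)
    # modified wavenumbers consistent with the second-order scheme
    kx2 = (2 * np.sin(kx * dx / 2) / dx) ** 2
    kz2 = (2 * np.sin(kz * dz / 2) / dz) ** 2

    phi_hat = np.zeros_like(div_hat)
    off = np.full(ny - 1, 1.0 / dy**2)
    for i in range(nx):
        for k in range(nz):
            ab = np.zeros((3, ny), dtype=complex)  # banded tridiagonal
            ab[0, 1:] = off; ab[0, 1] *= 2.0       # Neumann ghost at wall
            ab[2, :-1] = off; ab[2, -2] *= 2.0     # Neumann ghost at top wall
            ab[1, :] = -2.0 / dy**2 - kx2[i] - kz2[k]
            rhs = div_hat[i, :, k].copy()
            if i == 0 and k == 0:                  # pin the singular mean mode
                ab[1, 0] = 1.0; ab[0, 1] = 0.0; rhs[0] = 0.0
            phi_hat[i, :, k] = solve_banded((1, 1), ab, rhs)

    phi = np.real(np.fft.ifftn(phi_hat, axes=(0, 2)))
    # equation (39): u^d is the gradient of the potential; note that
    # u^d and w^d need not vanish at the wall, only v^d does
    return (np.gradient(phi, dx, axis=0),
            np.gradient(phi, dy, axis=1),
            np.gradient(phi, dz, axis=2))
```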
Likewise, the solenoidal component can be obtained using the vorticity field
as described in Yu et al. (2019) and Sharma & Girimaji (2023). However, here
we will make use of the fact that the total velocity field is available from
the direct numerical simulation. Thus, the solenoidal field is simply computed
using equation (38) as
${u_{i}^{s}}={u_{i}}-{u_{i}^{d}}.$ (41)
## Appendix C Steps to perform variable interval space averaging
In this conditional average technique, strong sweep and ejection events
resulting in a shear layer are said to occur when the short-space variance,
given by
$\mathrm{var}(x,z,t)=\frac{1}{L}\int_{-\frac{L}{2}}^{\frac{L}{2}}[{u^{s}}^{\prime\prime}(x+s,y_{ref},z,t)]^{2}\,\mathrm{d}s-\left(\frac{1}{L}\int_{-\frac{L}{2}}^{\frac{L}{2}}{u^{s}}^{\prime\prime}(x+s,y_{ref},z,t)\,\mathrm{d}s\right)^{2},$
(42)
exceeds $K[u^{s}_{rms}(y_{ref})]^{2}$, where $K$ is the threshold level. Here, $y_{ref}$ is the location of the reference $x-z$ plane where the detection criterion is applied, and $L$ is the size of the averaging window,
representative of the length scale of the shear layer identified by this
technique (Johansson et al., 1987). Following Johansson et al. (1991), we take
$K=1$, $y^{*}_{ref}\approx 15$, and $L^{*}\approx 200$.
Having computed the short-space variance at the reference plane, a condition
variable C is set to non-zero values in regions where the variance exceeds the
threshold, and zero otherwise. The assigned non-zero value is 1 for
acceleration events and -1 for deceleration events. Mathematically, this is
written as
$\mathrm{C}(x,z,t)=\begin{cases}1,&\text{for }\operatorname{var}>K[u^{s}_{rms}(y_{ref})]^{2}\text{ and }\partial{u^{s}}^{\prime\prime}/\partial x<0,\\ -1,&\text{for }\operatorname{var}>K[u^{s}_{rms}(y_{ref})]^{2}\text{ and }\partial{u^{s}}^{\prime\prime}/\partial x>0,\\ 0,&\text{otherwise,}\end{cases}$ (43)
where $\partial{u^{s}}^{\prime\prime}/{\partial x}<0$ implies
$\partial{u^{s}}^{\prime\prime}/{\partial t}>0$ and vice-versa, as per
Taylor’s hypothesis. This will result in patches on the reference $x-z$ plane
with values of 1 and -1 as shown in figure C.1. Within these patches, the
location where the short-space variance is locally maximum is also shown. Let
the coordinates of these locations be denoted by $(x_{o},z_{o})$. These
coordinates, detected at $y^{*}\approx 15$, will form the basis around which
conditional averaging is performed at all wall-normal locations.
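The detection step of equations (42) and (43) can be sketched as follows; the moving-window moments are computed here with a periodic uniform filter, and all names are illustrative assumptions rather than our actual implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def visa_detect(us, dx, L, K, us_rms_ref):
    """Short-space variance (42) and event flagging (43) (a sketch).

    us         : solenoidal fluctuations u^s'' at y_ref, shape (nx, nz),
                 periodic in x
    dx         : streamwise grid spacing (same units as L)
    L          : averaging-window length (about 200 semi-local units here)
    K          : threshold level (K = 1 here)
    us_rms_ref : rms of u^s'' at the reference plane
    """
    n = max(int(round(L / dx)), 1)
    # windowed first and second moments; 'wrap' enforces periodicity in x
    m1 = uniform_filter1d(us, size=n, axis=0, mode='wrap')
    m2 = uniform_filter1d(us**2, size=n, axis=0, mode='wrap')
    var = m2 - m1**2                     # short-space variance, eq. (42)
    dudx = np.gradient(us, dx, axis=0)
    # acceleration events: du/dx < 0, i.e. du/dt > 0 by Taylor's hypothesis
    C = np.where(var > K * us_rms_ref**2,
                 np.where(dudx < 0, 1, -1), 0)
    return var, C
```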
Figure C.1: (Top) $x^{*}-z^{*}$ contour plot of the instantaneous solenoidal
streamwise velocity fluctuations at $y^{*}_{ref}\approx 15$ for the Mach 2.28
case. Boundaries of patches where the short-space variance exceeds the
Reynolds averaged value [see equation (43)] are overlaid on the contour plot.
Additionally, the location inside each patch where the short-space variance is
locally maximum is also displayed by a black circle or a grey square for
acceleration and deceleration events, respectively. (Bottom) Instantaneous
solenoidal streamwise velocity fluctuation along the horizontal line indicated
in the top plot. The black circle is the same point as in the top plot. The
short-space variance [equation (42)] is also shown using a grey dashed line.
With the detected VISA locations, the conditional average of any variable
$\Psi$ is then given as:
$\left<\Psi\right>(\xi,y,\zeta)=\frac{1}{N}\sum_{f=1}^{N_{f}}\sum_{n=1}^{N_{e}}\Psi(x^{n}_{o}+\xi,y,z_{o}^{n}+\zeta,t^{f}),$
(44)
where $\xi$ and $\zeta$ are the streamwise and spanwise displacements with
respect to the reference or detected locations $(x_{o},z_{o})$, and they vary
from $-L_{x}/2$ to $L_{x}/2$ and $-L_{z}/2$ to $L_{z}/2$, respectively. The
inner sum is over the number of detected events ($N_{e}$) in a particular
snapshot $f$ (at time instant $t^{f}$), whereas the outer sum is over the
number of snapshots ($N_{f}$), such that the global sum of the detected events
over all the snapshots is $N$.
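A minimal sketch of the sum in equation (44), assuming periodic $x$ and $z$ directions and grid-indexed event locations, reads:

```python
import numpy as np

def conditional_average(fields, events, nxi, nzeta):
    """Conditional average (44) around detected VISA locations (a sketch).

    fields     : list of snapshots Psi(x, y, z), each of shape (nx, ny, nz),
                 periodic in x and z
    events     : per-snapshot lists of detected (x0, z0) index pairs
    nxi, nzeta : half-widths of the averaging box in grid points
    """
    acc, N = None, 0
    for Psi, ev in zip(fields, events):
        for (x0, z0) in ev:
            # periodic extraction of the box centred at the event
            xi = (x0 + np.arange(-nxi, nxi + 1)) % Psi.shape[0]
            ze = (z0 + np.arange(-nzeta, nzeta + 1)) % Psi.shape[2]
            box = Psi[np.ix_(xi, np.arange(Psi.shape[1]), ze)]
            acc = box if acc is None else acc + box
            N += 1
    return acc / N
```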
Note that equation (44) leads to a conditional average from which phase jitter is yet to be removed (Johansson et al., 1987). The concept of phase jitter is explained with an example as follows. It is known that an acceleration VISA
event detected at the location ($x_{o},y^{+}\approx 15,z_{o}$) corresponds to
a wall pressure peak directly underneath, i.e. at ($x_{o},y^{+}\approx
0,z_{o}$). However, there can be a small and random phase lag or lead. This
means that in reality, the pressure peak may occur at a location that is
randomly shifted in the streamwise-spanwise direction with respect to the
detected location, i.e. it may occur at ($x_{o}+\Delta_{x},y^{+}\approx
0,z_{o}+\Delta_{z}$). This misalignment leads to a reduction in the magnitude
of the pressure peak obtained after conditional averaging.
To fix this issue, we employ a cross-correlation technique (Johansson et al.,
1987) that is described using the above example as follows. We first compute
the conditional average of wall pressure as usual without fixing the phase-
jitter issue. We then cross-correlate this conditionally averaged wall
pressure plane with the instantaneous wall pressure plane using the Fourier
transform. Having done this, we should obtain an $x-z$ plane of correlation coefficients on the wall that displays a local maximum close to, but not necessarily at, the point of detection, i.e. $(x_{o},z_{o})$. This maximum
implies that the conditionally averaged wall pressure profile has its imprint
in the instantaneous plane around the detection location. The shift between
the detection location $(x_{o},z_{o})$ and the local maximum around
$(x_{o},z_{o})$ gives the amount of phase lag or lead in the streamwise and
spanwise directions, i.e. $\Delta_{x}$ and $\Delta_{y}$ discussed above. In
order to remove the phase lag or lead, we compute a new conditional average by
shifting the instantaneous planes by this $\Delta_{x}$ and $\Delta_{y}$ around
the detection points, thereby aligning them. Mathematically, equation (44) is
modified for wall pressure as
$\left<p^{\prime}\right>(\xi,0,\zeta)=\frac{1}{N}\sum_{f=1}^{N_{f}}\sum_{n=1}^{N_{e}}p^{\prime}(x^{n}_{o}+\Delta_{x}^{n}+\xi,\,0,\,z_{o}^{n}+\Delta_{z}^{n}+\zeta,t^{f}).$
(45)
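The cross-correlation step used to estimate the shifts in equation (45) can be sketched as follows. We assume, for illustration, that the conditionally averaged template has been embedded in a full-size periodic array with its centre at index $(0,0)$, and the roughly 40-viscous-unit cap on admissible shifts is applied through `max_shift`.

```python
import numpy as np

def jitter_shift(template, p_wall, x0_idx, z0_idx, max_shift):
    """Phase-jitter shift for one detected event (a sketch).

    Circular cross-correlation of the instantaneous wall-pressure plane
    with the conditionally averaged template via FFT; the correlation
    peaks where the template pattern best aligns with the plane. The
    shift is sought within +-max_shift grid points of (x0_idx, z0_idx).
    """
    corr = np.real(np.fft.ifft2(np.fft.fft2(p_wall)
                                * np.conj(np.fft.fft2(template))))
    best, shift = -np.inf, (0, 0)
    for di in range(-max_shift, max_shift + 1):
        for dk in range(-max_shift, max_shift + 1):
            c = corr[(x0_idx + di) % corr.shape[0],
                     (z0_idx + dk) % corr.shape[1]]
            if c > best:
                best, shift = c, (di, dk)
    return shift  # (delta_x, delta_z) in grid points
```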
Now, the same procedure described for pressure at the wall can be repeated for
pressure at any wall-normal location. Doing this results in $\Delta_{x}$ and $\Delta_{z}$ that depend on $y$ for each detected event. With this, equation
(45) can be rewritten for the entire pressure field as
$\left<p^{\prime}\right>(\xi,y,\zeta)=\frac{1}{N}\sum_{f=1}^{N_{f}}\sum_{n=1}^{N_{e}}p^{\prime}(x^{n}_{o}+\Delta_{x}^{n}(y)+\xi,\,y,\,z_{o}^{n}+\Delta_{z}^{n}(y)+\zeta,t^{f}).$
(46)
Although this gives more control over the alignment of three-dimensional conditionally averaged structures, it may result in a conditionally averaged profile that is not very smooth in the wall-normal direction, such that layering is observed (some layering can be seen in figure 8).
In the phase-jitter removal procedure, events for which the required shift is
greater than approximately 40 viscous lengths in the streamwise or spanwise
directions are excluded from the averaging procedure, and the total number of
detected events ($N$) is reduced accordingly. Since the applied shifts are
wall-normal dependent, the excluded number of events would also be wall-normal
dependent.
Figure C.2 shows the $\xi^{*}-y^{*}$ pressure contours taken at the centre of
the shear layer, i.e. at $\zeta^{*}=0$, after no alignment (equation (44)) and
after one iteration of alignment (equation (46)). As seen, the pressure
contours remain qualitatively similar in both cases; however, the magnitude after one iteration of alignment increases substantially.
Figure C.2: Contours of the solenoidal pressure along the $\xi^{*}-y^{*}$
plane at $\zeta^{*}=0$ after (top row) equation (44) (no alignment), and after
(bottom row) equation (46) (first alignment iteration). The left, middle and
right columns correspond to the Mach 2.28, 3 and 4 cases in table 1,
respectively.
The conditionally averaged profile obtained from equation (46) can be cross-
correlated again with the instantaneous field, and the procedure above can be
repeated to further improve the alignment. However, as noted in Johansson et
al. (1987), and also verified for our cases, the maximum jitter is eliminated
in the first iteration. Thus, the results presented in the main text are
obtained after one iteration.
## Acknowledgments
This work was supported by the European Research Council grant no.
ERC-2019-CoG-864660, Critical; and the Air Force Office of Scientific Research
under grants FA9550-23-1-0228 and FA8655-23-1-7016. The authors acknowledge
the use of computational resources of the Dutch National Supercomputer
Snellius (grant no. 2022/ENW/01251049), and of the DelftBlue supercomputer,
provided by the Delft High Performance Computing Centre.
Declaration of Interests. The authors report no conflict of interest.
## References
* Bernardini et al. (2021) Bernardini, Matteo, Modesti, Davide, Salvadore, Francesco & Pirozzoli, Sergio 2021 STREAmS: A high-fidelity accelerated solver for direct numerical simulation of compressible turbulent flows. Computer Physics Communications 263, 107906.
* Blackwelder & Kaplan (1976) Blackwelder, RF & Kaplan, RE 1976 On the wall structure of the turbulent boundary layer. Journal of Fluid Mechanics 76 (1), 89–112.
* Bradshaw (1977) Bradshaw, Peter 1977 Compressible turbulent shear layers. Annual Review of Fluid Mechanics 9 (1), 33–52.
* Bradshaw & Koh (1981) Bradshaw, Peter & Koh, YM 1981 A note on Poisson’s equation for pressure in a turbulent flow. The Physics of Fluids 24 (4), 777–777.
* Choi et al. (1994) Choi, Haecheon, Moin, Parviz & Kim, John 1994 Active turbulence control for drag reduction in wall-bounded flows. Journal of Fluid Mechanics 262, 75–110.
* Cogo et al. (2023) Cogo, Michele, Baù, Umberto, Chinappi, Mauro, Bernardini, Matteo & Picano, Francesco 2023 Assessment of heat transfer and Mach number effects on high-speed turbulent boundary layers. Journal of Fluid Mechanics 974, A10.
* Cogo et al. (2022) Cogo, Michele, Salvadore, Francesco, Picano, Francesco & Bernardini, Matteo 2022 Direct numerical simulation of supersonic and hypersonic turbulent boundary layers at moderate-high Reynolds numbers and isothermal wall condition. Journal of Fluid Mechanics 945, A30.
* Coleman et al. (1995) Coleman, Gary N, Kim, John & Moser, Robert D 1995 A numerical study of turbulent supersonic isothermal-wall channel flow. Journal of Fluid Mechanics 305, 159–183.
* Costa (2018) Costa, Pedro 2018 A FFT-based finite-difference solver for massively-parallel direct numerical simulations of turbulent flows. Computers & Mathematics with Applications 76 (8), 1853–1862.
* Danberg (1964) Danberg, James Edward 1964 Characteristics of the turbulent boundary layer with heat and mass transfer at M= 6.7. PhD thesis, Catholic University of America.
* Deshpande et al. (2021) Deshpande, Rahul, Monty, Jason P & Marusic, Ivan 2021 Active and inactive components of the streamwise velocity in wall-bounded turbulence. Journal of Fluid Mechanics 914, A5.
* Duan et al. (2010) Duan, Lian, Beekman, I & Martin, MP 2010 Direct numerical simulation of hypersonic turbulent boundary layers. Part 2. Effect of wall temperature. Journal of Fluid Mechanics 655, 419–445.
* Duan et al. (2011) Duan, L, Beekman, I & Martin, MP 2011 Direct numerical simulation of hypersonic turbulent boundary layers. Part 3. Effect of Mach number. Journal of Fluid Mechanics 672, 245–267.
* Durbin (1991) Durbin, P. A. 1991 Near-wall turbulence closure modeling without “damping functions”. Theoretical and Computational Fluid Dynamics 3 (1), 1–13.
* Foysi et al. (2004) Foysi, H, Sarkar, S & Friedrich, R 2004 Compressibility effects and turbulence scalings in supersonic channel flow. Journal of Fluid Mechanics 509, 207–216.
* Gatski & Erlebacher (2002) Gatski, TB & Erlebacher, G 2002 Numerical simulation of a spatially evolving supersonic turbulent boundary layer. Tech. Rep.
* Gerolymos & Vallet (2023) Gerolymos, GA & Vallet, I 2023 Scaling of pressure fluctuations in compressible turbulent plane channel flow. Journal of Fluid Mechanics 958, A19.
* Griffin et al. (2021) Griffin, Kevin Patrick, Fu, Lin & Moin, Parviz 2021 Velocity transformation for compressible wall-bounded turbulent flows with and without heat transfer. Proceedings of the National Academy of Sciences 118 (34), e2111144118.
* Hasan et al. (2023) Hasan, Asif Manzoor, Larsson, Johan, Pirozzoli, Sergio & Pecnik, Rene 2023 Incorporating intrinsic compressibility effects in velocity transformations for wall-bounded turbulent flows. Physical Review Fluids 8 (11), L112601.
* Hasan et al. (2024) Hasan, Asif Manzoor, Larsson, Johan, Pirozzoli, Sergio & Pecnik, Rene 2024 Estimating mean profiles and fluxes in high-speed turbulent boundary layers using inner/outer-layer scalings. AIAA Journal 62 (2), 848–853.
* Huang & Coleman (1994) Huang, PG & Coleman, Gary N 1994 Van Driest transformation and compressible wall-bounded flows. AIAA Journal 32 (10), 2110–2113.
* Huang et al. (1995) Huang, P. G., Coleman, G. N. & Bradshaw, P. 1995 Compressible turbulent channel flows: DNS results and modelling. Journal of Fluid Mechanics 305, 185–218.
* Jagannathan & Donzis (2016) Jagannathan, Shriram & Donzis, Diego A 2016 Reynolds and Mach number scaling in solenoidally-forced compressible turbulence using high-resolution direct numerical simulations. Journal of Fluid Mechanics 789, 669–707.
* Jeong et al. (1997) Jeong, Jinhee, Hussain, Fazle, Schoppa, Wade & Kim, John 1997 Coherent structures near the wall in a turbulent channel flow. Journal of Fluid Mechanics 332, 185–214.
* Johansson et al. (1991) Johansson, Arne V, Alfredsson, P Henrik & Kim, John 1991 Evolution and dynamics of shear-layer structures in near-wall turbulence. Journal of Fluid Mechanics 224, 579–599.
* Johansson et al. (1987) Johansson, Arne V, Her, Jen-Yuan & Haritonidis, Joseph H 1987 On the generation of high-amplitude wall-pressure peaks in turbulent boundary layers and spots. Journal of Fluid Mechanics 175, 119–142.
* Kim (1985) Kim, John 1985 Turbulence structures associated with the bursting event. The Physics of Fluids 28 (1), 52–58.
* Kim (1989) Kim, John 1989 On the structure of pressure fluctuations in simulated turbulent channel flow. Journal of Fluid Mechanics 205, 421–451.
* Kim & Hussain (1993) Kim, John & Hussain, Fazle 1993 Propagation velocity of perturbations in turbulent channel flow. Physics of Fluids A: Fluid Dynamics 5 (3), 695–706.
* Kumar & Larsson (2022) Kumar, Vedant & Larsson, Johan 2022 Modular method for estimation of velocity and temperature profiles in high-speed boundary layers. AIAA Journal 60 (9), 5165–5172.
* Lele (1994) Lele, Sanjiva K 1994 Compressibility effects on turbulence. Annual Review of Fluid Mechanics 26 (1), 211–254.
* Livescu (2020) Livescu, Daniel 2020 Turbulence with large thermal and compositional density variations. Annual Review of Fluid Mechanics 52, 309–341.
* Lobb et al. (1955) Lobb, RK, Winkler, EM & Persh, Jerome 1955 NOL hypersonic tunnel No. 4, results 7: experimental investigation of turbulent boundary layers in hypersonic flow. Tech. Rep. U.S. Naval Ordnance Laboratory, White Oak, Maryland.
* Luhar et al. (2014) Luhar, M, Sharma, AS & McKeon, BJ 2014 On the structure and origin of pressure fluctuations in wall turbulence: predictions based on the resolvent analysis. Journal of Fluid Mechanics 751, 38–70.
* Maeder et al. (2001) Maeder, Thierry, Adams, Nikolaus A & Kleiser, Leonhard 2001 Direct simulation of turbulent supersonic boundary layers by an extended temporal approach. Journal of Fluid Mechanics 429, 187–216.
* Modesti & Pirozzoli (2016) Modesti, Davide & Pirozzoli, Sergio 2016 Reynolds and Mach number effects in compressible turbulent channel flow. International Journal of Heat and Fluid Flow 59, 33–49.
* Morinishi et al. (2004) Morinishi, Youhei, Tamano, Shinji & Nakabayashi, Koichi 2004 Direct numerical simulation of compressible turbulent channel flow between adiabatic and isothermal walls. Journal of Fluid Mechanics 502, 273.
* Morkovin (1962) Morkovin, Mark V 1962 Effects of compressibility on turbulent flows. Mécanique de la Turbulence 367 (380), 26\.
* Patel et al. (2016) Patel, A., Boersma, B. J. & Pecnik, R. 2016 The influence of near-wall density and viscosity gradients on turbulence in channel flows. Journal of Fluid Mechanics 809, 793–820.
* Patel et al. (2015) Patel, A., Peeters, J. W. R., Boersma, B. J. & Pecnik, R. 2015 Semi-local scaling and turbulence modulation in variable property turbulent channel flows. Physics of Fluids 27 (9), 095101\.
* Pirozzoli et al. (2004) Pirozzoli, Sergio, Grasso, F & Gatski, TB 2004 Direct numerical simulation and analysis of a spatially evolving supersonic turbulent boundary layer at M= 2.25. Physics of Fluids 16 (3), 530–545.
* Pope (2001) Pope, S. B. 2001 Turbulent flows.
* Ristorcelli (1997) Ristorcelli, JR 1997 A pseudo-sound constitutive relationship for the dilatational covariances in compressible turbulence. Journal of Fluid Mechanics 347, 37–70.
* Sharma & Girimaji (2023) Sharma, Bajrang & Girimaji, Sharath S 2023 Effect of flow–thermodynamics interactions on the stability of compressible boundary layers: insights from Helmholtz decomposition. Journal of Fluid Mechanics 962, A18.
* Smits & Dussauge (2006) Smits, Alexander J & Dussauge, Jean-Paul 2006 Turbulent shear layers in supersonic flow. Springer Science & Business Media.
* Tang et al. (2020) Tang, Jiupeng, Zhao, Zhiye, Wan, Zhen-Hua & Liu, Nan-Sheng 2020 On the near-wall structures and statistics of fluctuating pressure in compressible turbulent channel flows. Physics of Fluids 32 (11), 115121.
* Townsend (1961) Townsend, AA 1961 Equilibrium layers and wall turbulence. Journal of Fluid Mechanics 11 (1), 97–120.
* Trettel & Larsson (2016) Trettel, A. & Larsson, J. 2016 Mean velocity scaling for compressible wall turbulence with heat transfer. Physics of Fluids 28 (2), 026102.
* Trettel (2019) Trettel, Andrew James 2019 Transformations for variable-property turbulent boundary layers. PhD thesis, UCLA.
* Van Driest (1951) Van Driest, Edward R 1951 Turbulent boundary layer in compressible fluids. Journal of the Aeronautical Sciences 18 (3), 145–160.
* Van Driest (1956a) Van Driest, Edward R 1956a On turbulent flow near a wall. Journal of the Aeronautical Sciences 23 (11), 1007–1011.
* Van Driest (1956b) Van Driest, E Reginald 1956b The problem of aerodynamic heating. Institute of the Aeronautical Sciences.
* Wang et al. (2017) Wang, Jianchun, Gotoh, Toshiyuki & Watanabe, Takeshi 2017 Spectra and statistics in compressible isotropic turbulence. Physical Review Fluids 2 (1), 013403.
* Wenzel et al. (2022) Wenzel, Christoph, Gibis, Tobias & Kloker, Markus 2022 About the influences of compressibility, heat transfer and pressure gradients in compressible turbulent boundary layers. Journal of Fluid Mechanics 930, A1.
* Yu et al. (2022) Yu, Ming, Liu, PengXin, Fu, YaLu, Tang, ZhiGong & Yuan, XianXu 2022 Wall shear stress, pressure, and heat flux fluctuations in compressible wall-bounded turbulence, part I: One-point statistics. Physics of Fluids 34 (6), 065139.
* Yu & Xu (2021) Yu, Ming & Xu, Chun-Xiao 2021 Compressibility effects on hypersonic turbulent channel flow with cold walls. Physics of Fluids 33 (7), 075106.
* Yu et al. (2019) Yu, Ming, Xu, Chun-Xiao & Pirozzoli, Sergio 2019 Genuine compressibility effects in wall-bounded turbulence. Physical Review Fluids 4 (12), 123402.
* Yu et al. (2020) Yu, Ming, Xu, Chun-Xiao & Pirozzoli, Sergio 2020 Compressibility effects on pressure fluctuation in compressible turbulent channel flows. Physical Review Fluids 5 (11), 113401.
* Yu et al. (2024) Yu, Ming, Zhou, ZiSong, Dong, SiWei, Yuan, XianXu & Xu, ChunXiao 2024 On the generation of near-wall dilatational motions in hypersonic turbulent boundary layers. Journal of Fluid Mechanics 984, A44.
* Zhang et al. (2018) Zhang, Chao, Duan, Lian & Choudhari, Meelan M 2018 Direct numerical simulation database for supersonic and hypersonic turbulent boundary layers. AIAA Journal 56 (11), 4297–4311.
* Zhang et al. (2022) Zhang, Peng-Jun-Yi, Wan, Zhen-Hua, Liu, Nan-Sheng, Sun, De-Jun & Lu, Xi-Yun 2022 Wall-cooling effects on pressure fluctuations in compressible turbulent boundary layers from subsonic to hypersonic regimes. Journal of Fluid Mechanics 946, A14.
|
# Collective Almost Synchronization in Complex Networks
M. S. Baptista1, Hai-Peng Ren2,1, J. C. M. Swarts3, R. Carareto4,1, H.
Nijmeijer3, C. Grebogi1 1Institute for Complex Systems and Mathematical
Biology, University of Aberdeen, SUPA, AB24 3UE Aberdeen, United Kingdom
2Department of Information and Control Engineering, Xi’an University of
Technology, 5 Jinhua South Road, Xi’an, 710048, China 3Department of
Mechanical Engineering, Dynamics and Control Group, Eindhoven University of
Technology, WH 0.144, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
4Escola Politecnica, Universidade de São Paulo, Avenida Prof. Luciano
Gualberto, travessa 3, n. 158, 05508-900 São Paulo, SP, Brazil
###### Abstract
This work introduces the phenomenon of Collective Almost Synchronization (CAS), which describes a universal way in which patterns can appear in complex networks even for small coupling strengths. The CAS phenomenon appears due to the existence of an approximately constant local mean field and is characterized by having nodes with trajectories evolving around stable periodic orbits. Common notions based on statistical knowledge would lead one to interpret the appearance of an approximately constant local mean field as a consequence of the behavior of each node being uncorrelated with the behaviors of the others. Contrary to this common notion, we show that various well-known weaker forms of synchronization (almost, time-lag, phase synchronization, and generalized synchronization) appear as a result of the onset of an almost constant local mean field. If memory is formed in the brain by minimising the coupling strength among neurons and maximising the number of possible patterns, then the CAS phenomenon is a plausible explanation for it.
Spontaneous emergence of collective behavior is common in nature bouchaud_MD2000 ; couzin_ASB2003 ; helbing_nature2000 . It is a natural phenomenon in which a group of individuals connected in a network follows a dynamical trajectory that is different from the dynamics of each individual. Since the work of Kuramoto kuramoto_LNP1975 , the spontaneous emergence of collective behavior in networks of phase oscillators with fully connected nodes, or with nodes connected by some special topologies acebron_RMP2005 , is analytically well understood. Kuramoto considered a fully connected network of an infinite number of phase oscillators. If $\theta_{i}$ is the variable describing the phase of an oscillator $i$ in the network, and $\overline{\theta}$ represents the mean field defined as $\overline{\theta}=\frac{1}{N}\sum_{i=1}^{N}\theta_{i}$, collective behavior appears in the network because every node becomes coupled to the mean field. A peculiar characteristic of this collective behavior is that not only is $\theta_{i}\neq\overline{\theta}$, but the nodes also evolve in a way that cannot be described by the evolution of a single node isolated from the network.
In contrast to collective behavior, another widely studied behavior of a
network is when all nodes behave equally, and their evolution can be described
by an individual node when isolated from the network. This state is known as
complete synchronization fujisaka_PTP1983 . If $x_{i}$ represents the state
variables of an arbitrary node $i$ of the network and $x_{j}$ of another node
$j$, and $\overline{x}$ represents the mean field of a network, complete
synchronization appears when $x_{i}=x_{j}=\overline{x}$, for all time. The
main mechanisms responsible for the onset of complete synchronization in
dynamical networks were clarified in pecora_PRL1998 ; nijmeijer_PHYSICAD2009 ;
nijmeijer_IEEE2011 . In networks whose nodes are coupled by non-linear
functions, such as those that depend on time-delays nijmeijer_IEEE2011 or
those that describe how neurons chemically connect baptista_PRE2010 , the
evolution of the synchronous nodes might be different from the evolution of an
individual node, when isolated from the network. However, when complete
synchronization is achieved in such networks, $x_{i}=x_{j}=\overline{x}$.
In natural networks such as biological, social, metabolic, and neural networks barabasi_RMP2002 , the number of nodes is often large but finite; the network is not fully connected and is heterogeneous. The latter means that each node has a different dynamical description, or that the coupling strengths are not all equal for every pair of nodes, so that one will not find two nodes, say $x_{i}$ and $x_{j}$, that have equal trajectories. For such heterogeneous networks, as in zhou_CHAOS2006 ; gardenes_chaos2011 , found in nature and in experiments juergen_book , one expects to find other weaker forms of synchronous behavior, such as practical synchronization femat_PLA1999 , phase synchronization juergen_book , time-lag synchronization rosemblum_PRL1997 , and generalized synchronization rulkov_PRE1995 .
We report a phenomenon that may appear in complex networks “far away” from
coupling strengths that typically produce complete synchronization or these
weaker forms of synchronization. However, the reported phenomenon can be
characterized by the same conditions used to verify the existence of these
weaker forms of synchronization. We call it Collective Almost Synchronization
(CAS). It is a consequence of the appearance of an approximately constant
local mean field and is characterized by having nodes with trajectories
evolving around stable periodic orbits, denoted by $\mathbf{\Xi}_{p_{i}}(t)$,
and regarded as a CAS pattern. The appearance of an almost constant mean field
is associated with a regime of weak interaction (weak coupling strength) in
which nodes behave independently jirsa_CN2008 ; batista_PRE2007 . In such
conditions, even weaker forms of synchronization are ruled out to exist. But,
contrary to common notion based on basic statistical arguments, we show that
actually it is the existence of an approximately constant local mean field
that paves the way for weaker forms of synchronization (such as almost, time-
lag, phase, or generalized synchronization) to occur in complex networks.
Denote all the $d$ variables of a node $i$ by ${\mathbf{x}}_{i}$; we then say that this node presents CAS if the inequality
$|\mathbf{x}_{i}(t)-\mathbf{\Xi}_{p_{i}}(t-\tau_{i})|<\epsilon_{i}$ (1)
is satisfied for most of the time. The bars $|\ |$ indicate that we take the sum of the absolute differences between the vector components appearing inside them (the $L_{1}$ norm). $\epsilon_{i}$ is a small quantity, not arbitrarily small, but reasonably smaller than the envelope of the oscillations of the variables $\mathbf{x}_{i}(t)$. $\mathbf{\Xi}_{p_{i}}(t)$ is the
$d$-dimensional CAS pattern. It is determined by the effective coupling
strength $p_{i}$, a quantity that measures the influence on the node $i$ of
the nodes that are connected to it, and the expected value of the local mean
field at the node $i$, denoted by $\mathbf{C}_{i}$. The local mean field,
denoted by $\overline{\mathbf{x}}_{i}$, is defined only by the nodes that are
connected to the node $i$. The CAS pattern is the solution of a simplified set
of equations describing the network when
$\overline{\mathbf{x}}_{i}=\mathbf{C}_{i}$. According to Eq. (1), if a node in
the network presents the CAS pattern, its trajectory stays intermittently
close to the CAS pattern but with a time-lag between the trajectories of the
node and of the CAS pattern. This property of the CAS phenomenon shares
similarities with the way complete synchronization appears in networks of
nodes coupled under time-delay functions nijmeijer_IEEE2011 . In such
networks, nodes become completely synchronous to a solution of the network
that is different from the solution of an isolated node of the network.
Additionally, the trajectories of the nodes present a time-lag to this solution.
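To make the criterion in Eq. (1) operational, the following Python sketch (ours, added for illustration; the function name, array layout, and tolerance are assumptions, not part of the original analysis) estimates the fraction of time during which the inequality holds for a sampled trajectory and a candidate CAS pattern:

```python
import numpy as np

def cas_fraction(x, xi, lag, eps):
    """Fraction of samples at which Eq. (1) holds.

    x, xi : arrays of shape (T, d) -- node trajectory x_i(t) and CAS
            pattern Xi(t), sampled on the same uniform time grid.
    lag   : the time-lag tau_i, expressed in samples.
    eps   : the tolerance epsilon_i.
    """
    # Compare x(t) with Xi(t - lag) by dropping the first `lag` samples of x.
    diff = np.abs(x[lag:] - xi[:len(x) - lag]).sum(axis=1)  # L1 norm
    return np.mean(diff < eps)

# A node presents CAS if this fraction is close to 1 for some small lag.
```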
The CAS phenomenon inherits the three main characteristics of a collective
behavior: (a) the variables of a node $i$ ($\mathbf{x}_{i}$) differ from both
the mean field $\overline{\mathbf{x}}$ and the local mean field
$\overline{\mathbf{x}}_{i}$; (b) if the local mean fields of a group of nodes and their effective couplings are either equal or approximately equal, then all the nodes in this group follow the same or similar behaviors;
(c) there can exist an infinitely large number of different behaviors (CAS
patterns).
If the CAS phenomenon is present in a network, other weaker forms of
synchronization can be detected. This link is fundamental when making
measurements to detect the CAS phenomenon.
In Ref. femat_PLA1999 , the phenomenon of almost synchronization is introduced: a master and a slave in a master-slave system of coupled oscillators have equal phases, but their amplitudes can differ. If a node
$i$ presents the CAS phenomenon [satisfying Eq. (1)] and $\tau_{i}=0$ in Eq.
(1), then the node $i$ is almost synchronous to the pattern
$\mathbf{\Xi}_{p_{i}}$.
Time-lag synchronization rosemblum_PRL1997 is a phenomenon that describes two
identical signals, but whose variables have a time-lag with respect to each
other, i.e. $\mathbf{x}_{i}(t)=\mathbf{x}_{j}(t-\tau)$. In practice, however, such an equality between $\mathbf{x}_{i}(t)$ and $\mathbf{x}_{j}(t-\tau)$ is typically not found; rather, one finds
$\mathbf{x}_{i}(t)\cong\mathbf{x}_{j}(t-\tau),$ (2)
meaning that there is no constant $\tau$ such that $\mathbf{x}_{i}(t)=\mathbf{x}_{j}(t-\tau)$. Another suitable way of writing Eq. (2) is $|\mathbf{x}_{i}(t)-\mathbf{x}_{j}(t-\tau)|\leq\gamma$. If two nodes $i$ and $j$ present the CAS phenomenon, have the same CAS pattern, and $\tau_{i}\neq\tau_{j}\neq 0$, then
$|\mathbf{x}_{i}(t)-\mathbf{x}_{j}(t-\tau_{ij})|\leq\epsilon_{ij}$ (3)
or alternatively $\mathbf{x}_{i}(t)\cong\mathbf{x}_{j}(t-\tau_{ij})$, for most
of the time, $\tau_{ij}$ representing the time-lag between $\mathbf{x}_{i}$
and $\mathbf{x}_{j}$. This means that almost time-lag synchronization occurs
for two nodes that present the CAS phenomenon and that are almost locked to
the same CAS pattern. Even though nodes that have equal or similar local mean
field (which usually happens for nodes that have equal or similar degrees)
become synchronous with the same CAS pattern (a stable periodic orbit), the
value of their trajectories at a given time might be different, since their
trajectories reach the neighborhood of their CAS patterns in different places
of the orbit. As a consequence, we expect that two nodes that exhibit the same
CAS should present between themselves a time-lag synchronous behavior. For
some small amounts of time, the difference
$|\mathbf{x}_{i}(t)-\mathbf{x}_{j}(t-\tau_{ij})|$ can be large, since
$\tau_{i}\neq\tau_{j}$ and $\epsilon_{i}\neq\epsilon_{j}$ in Eq. (1). The closer $\overline{\mathbf{x}}_{i}$ and $\overline{\mathbf{x}}_{j}$ are to $\mathbf{C}_{i}$, the smaller $\epsilon_{ij}$ is in Eq. (3).
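In practice the time-lag $\tau_{ij}$ in Eq. (3) is not known a priori. A common way to estimate it, sketched below in Python (our illustration, assuming uniformly sampled scalar signals), is to maximize the normalized cross-correlation over a window of candidate lags:

```python
import numpy as np

def estimate_lag(xi_comp, xj_comp, max_lag):
    """Estimate the time-lag tau_ij (in samples) such that
    x_i(t) ~= x_j(t - tau_ij), by maximizing the normalized
    cross-correlation over |lag| <= max_lag."""
    xi0 = xi_comp - xi_comp.mean()
    xj0 = xj_comp - xj_comp.mean()

    def corr(lag):
        # Correlate x_i(t + lag) with x_j(t).
        if lag >= 0:
            a, b = xi0[lag:], xj0[:len(xj0) - lag]
        else:
            a, b = xi0[:lag], xj0[-lag:]
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    return max(range(-max_lag, max_lag + 1), key=corr)
```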
Phase synchronization juergen_book is a phenomenon where the phase difference, denoted by $\Delta\phi_{ij}$, between the phases of two signals (or nodes in a network), $\phi_{i}(t)$ and $\phi_{j}(t)$, remains bounded for all time:
$\Delta\phi_{ij}=\left|\phi_{i}(t)-\frac{p}{q}\phi_{j}(t)\right|\leq S.$ (4)
In Ref. juergen_book , $S=2\pi$ and $p$ and $q$ are two rational numbers. If $p$ and $q$ are irrational numbers and $S$ is a reasonably small constant, then phase synchronization can be referred to as irrational phase synchronization baptista_PRE2004 . The value of $S$ is chosen so as to encompass oscillatory systems that possess either a time-varying time-scale or a variable time-lag: simply let the constant $S$ represent the growth of the phase in the faster time scale during one period of the slower time scale.
Phase synchronization between two coupled chaotic oscillators was explained as
being the result of a state where the two oscillators have all their unstable
periodic orbits phase-locked juergen_book . Nodes that present the CAS
phenomenon have unstable periodic orbits that are locked to the stable
periodic orbits described by $\mathbf{\Xi}_{i}(t)$. If $\mathbf{\Xi}_{i}(t)$ has a period $P_{i}$ and the phase of this CAS pattern changes by $D\phi_{i}$ within one period, then the angular frequency is $\omega_{i}=D\phi_{i}/P_{i}$. If $\mathbf{\Xi}_{j}(t)$ has a period $P_{j}$ and the phase of its CAS pattern changes by $D\phi_{j}$ within one period, then the angular frequency is $\omega_{j}=D\phi_{j}/P_{j}$. Then, the CAS patterns of these nodes are phase synchronous by a ratio of $\frac{p}{q}=\omega_{i}/\omega_{j}$. Since the
trajectories of these nodes are locked to these patterns, the nodes are phase
synchronous by this same ratio, which can be rational or irrational. Assume
additionally that, as one changes the coupling strengths between the nodes,
the expected value $\mathbf{C}_{i}$ of the local mean field of a group of
nodes remains the same. As a consequence, as one changes the coupling
strengths, both the CAS pattern and the ratio
$\frac{p}{q}=\frac{p_{j}D\phi_{i}}{p_{i}D\phi_{j}}$ remain unaltered, and the
observed phase synchronization between nodes in this group is stable under
parameter alterations.
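The boundedness condition of Eq. (4) can be tested numerically once phases are extracted from the measured signals. A minimal Python sketch (ours; it assumes narrow-band oscillations, for which the analytic-signal phase is meaningful) reads:

```python
import numpy as np
from scipy.signal import hilbert

def phase(x):
    """Continuous (unwrapped) phase of a scalar oscillatory signal,
    obtained from the analytic signal."""
    return np.unwrap(np.angle(hilbert(x - x.mean())))

def phase_locked(xi, xj, p=1.0, q=1.0, S=2 * np.pi):
    """Check Eq. (4): is |phi_i - (p/q) phi_j| bounded by S?"""
    dphi = np.abs(phase(xi) - (p / q) * phase(xj))
    return dphi.max() <= S
```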
Consider a network of $N$ nodes with nodes connected diffusively (more general
networks are treated in the Supplementary Information) described by
$\dot{\mathbf{x}}_{i}=\mathbf{F}_{i}(\mathbf{x}_{i})+\sigma\sum_{j=1}^{N}{\mathbf{A}_{ij}}{\mathbf{E}}(\mathbf{x}_{j}-\mathbf{x}_{i}),$
(5)
where $\mathbf{x}_{i}\in\Re^{d}$ is a $d$-dimensional vector describing the state variables of the node $i$, $\mathbf{F}_{i}$ represents the dynamical system of the node $i$, and ${\mathbf{A}_{ij}}$ is the adjacency matrix. If $A_{ij}=1$, then the node $j$ is connected to the node $i$. ${\mathbf{E}}$ is the coupling function. The degree of a node can be calculated by $k_{i}=\sum_{j=1}^{N}A_{ij}$.
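For concreteness, a vectorized right-hand side of Eq. (5) can be written as in the Python sketch below (our illustration; it assumes identical node dynamics $\mathbf{F}$ and a linear coupling matrix $\mathbf{E}$, and uses the identity $\sigma\sum_{j}A_{ij}E(x_{j}-x_{i})=\sigma k_{i}E(\overline{x}_{i}-x_{i})$):

```python
import numpy as np

def network_rhs(t, X, F, A, sigma, E):
    """Right-hand side of Eq. (5) for N diffusively coupled nodes.

    X     : (N, d) array of node states (t is unused; kept for the
            call signature expected by standard ODE solvers)
    F     : callable mapping a (d,) state to its (d,) vector field
    A     : (N, N) adjacency matrix, A[i, j] = 1 if node j drives node i
    sigma : coupling strength
    E     : (d, d) coupling matrix selecting the coupled components
    """
    local = np.array([F(x) for x in X])        # F(x_i) for every node
    k = A.sum(axis=1, keepdims=True)           # degrees k_i
    xbar = (A @ X) / np.maximum(k, 1)          # local mean fields
    # sigma * sum_j A_ij E (x_j - x_i) = sigma * k_i * E (xbar_i - x_i)
    return local + sigma * k * ((xbar - X) @ E.T)
```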
The CAS phenomenon appears when the local mean field of a node $i$,
$\overline{\mathbf{x}}_{i}(t)=1/k_{i}\sum_{j}A_{ij}\mathbf{x}_{j}$, is
approximately constant and
$\overline{\mathbf{x}}_{i}(t)\approxeq\mathbf{C}_{i}$. Then, the equations for
the network can be described by
$\dot{\mathbf{x}}_{i}=\mathbf{F}_{i}(\mathbf{x}_{i})-p_{i}E(\mathbf{x}_{i})+p_{i}E(\mathbf{C}_{i})+\mathbf{\delta}_{i},$
(6)
where $p_{i}=\sigma k_{i}$ and the residual term is
$\mathbf{\delta}_{i}=p_{i}(\overline{\mathbf{x}}_{i}(t)-\mathbf{C}_{i})$. The
CAS pattern of the node $i$ (a stable periodic orbit) is calculated in the
variables that produce a finite bounded local average field. If all components
of $\mathbf{x}_{i}$ are bounded, then the CAS pattern is given by a solution
of
$\dot{\mathbf{\Xi}}_{p_{i}}=F_{i}(\mathbf{\Xi}_{p_{i}})-p_{i}E(\mathbf{\Xi}_{p_{i}})+p_{i}E(\mathbf{C}_{i}),$ (7)
which is just the set of equations (6) without the residual term. So, if $\overline{\mathbf{x}}_{i}(t)=\mathbf{C}_{i}$, the residual term $\mathbf{\delta}_{i}=0$, and if Eq. (7) has no positive Lyapunov exponents ($\mathbf{\Xi}_{p_{i}}$ is a stable periodic orbit), then the node $i$ describes a stable periodic orbit. If $\overline{\mathbf{x}}_{i}(t)-\mathbf{C}_{i}$ is nonzero but $\mathbf{\Xi}_{p_{i}}$ is a stable periodic orbit, then the node $i$ describes a perturbed version of $\mathbf{\Xi}_{p_{i}}$. The closer $\overline{\mathbf{x}}_{i}$ is to $\mathbf{C}_{i}$, the larger the fraction of time during which Eq. (1) is satisfied. The more stable the periodic orbit [the more negative the largest Lyapunov exponent of Eq. (7)], the longer Eq. (1) remains satisfied.
If the network has unbounded state variables (as is the case for Kuramoto networks kuramoto_LNP1975 ), the CAS pattern is the periodic orbit of period
$T_{i}$ defined in the velocity space such that
$\dot{\mathbf{\Xi}}_{p_{i}}(t)=\dot{\mathbf{\Xi}}_{p_{i}}(t+T_{i})$.
Notice that whereas Eqs. (5) and (6) represent a $Nd$-dimensional system, Eq.
(7) has only dimension $d$.
The existence of this approximately constant local mean field is a consequence
of the Central Limit Theorem, applied to variables with correlation (for more
details, see Supplementary Information). The expected value of the local mean
field can be calculated by
$\mathbf{C}_{i}=\lim_{t\rightarrow\infty}\frac{1}{t}\int_{0}^{t}\overline{\mathbf{x}}_{i}(t)dt,$ (8)
where in practice we consider $t$ to be large but finite. The larger the degree of a node, the higher the probability that the local mean field is close to its expected value, and the smaller its variance. If the probability of finding a certain value for the local mean field of the node $i$ does not depend on the higher-order moments of $\overline{\mathbf{x}}_{i}(t)$, then this probability tends to be Gaussian for sufficiently large $k_{i}$. As a consequence, the variance $\mu^{2}$ of the local mean field is proportional to $k_{i}^{-1}$.
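Both the expected value in Eq. (8) and the predicted degree scaling of the variance can be estimated directly from a simulated trajectory, as in the Python sketch below (ours; the array shapes and names are assumptions):

```python
import numpy as np

def local_mean_fields(A, X_t):
    """xbar[t, i] = (1/k_i) * sum_j A[i, j] * X_t[t, j], for one scalar
    component of the node states stored as X_t with shape (T, N)."""
    k = A.sum(axis=1)
    return (X_t @ A.T) / np.maximum(k, 1)

def expected_value_and_variance(A, X_t):
    """Finite-time estimate of C_i (Eq. (8)) and of the variance mu_i^2
    of the local mean field; Criterion 1 predicts mu_i^2 ~ 1/k_i."""
    xbar = local_mean_fields(A, X_t)
    return xbar.mean(axis=0), xbar.var(axis=0)

# Criterion 1 check: the log-log slope of mu^2 against k should be near -1.
# C, var = expected_value_and_variance(A, X_t)
# slope = np.polyfit(np.log(A.sum(axis=1)), np.log(var), 1)[0]
```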
There are two criteria for the node $i$ to present the CAS phenomenon:
Criterion 1:
The Central Limit Theorem can be applied, i.e., $\mu^{2}_{i}\propto
k_{i}^{-1}$. Therefore, the larger the degree of a node, the smaller the
variation of the local mean field $\overline{\mathbf{x}}_{i}(t)$ about its
expected value $\mathbf{C}_{i}$.
Criterion 2:
The CAS pattern $\mathbf{\Xi}_{i}(t)$ describes a stable periodic orbit. The node trajectory can be considered a perturbed version of its CAS pattern. The more stable the CAS pattern, the faster the node trajectories come to the neighborhood of the periodic orbit and the longer they stay around it.
In its classical form, the Central Limit Theorem assumes that the random variables involved are independent, but it can also be applied to variables with correlation. If nodes that present the CAS phenomenon are locked to the same CAS pattern, their trajectories still arrive at the CAS pattern at different “random” times, allowing the Central Limit Theorem to be applied. But the time-lag between two nodes ($\tau_{ij}$) is approximately constant, since the CAS pattern has a well-defined period and the trajectories of these nodes are locked into it. The local mean field measured at a node $i$ remains unaltered as one changes the coupling strength either when the network has an infinite number of nodes (e.g. Kuramoto networks) or when the nodes have a symmetric natural measure (see Secs. C, D, and E of the Supplementary Information). However, as we show in the following example, the local mean field remains approximately unaltered even when the network has only a finite number of nodes and a natural measure with no special symmetry properties.
As an example to illustrate how the CAS phenomenon appears in a complex network, we consider a scale-free network formed by, say, $N=1000$ Hindmarsh-Rose neurons, with the neurons coupled electrically. The network is described by
$\dot{x}_{i}=y_{i}+3x_{i}^{2}-x_{i}^{3}-z_{i}+I+\sigma\sum_{j=1}^{N}A_{ij}(x_{j}-x_{i}),\qquad\dot{y}_{i}=1-5x_{i}^{2}-y_{i},\qquad\dot{z}_{i}=-rz_{i}+4r(x_{i}+1.618),$ (9)
where $I$=3.25 and $r$=0.005. The first coordinate of the equations that
describe the CAS pattern is given by
$\dot{\Xi}_{{x}_{i}}=\Xi_{{y}_{i}}+3{\Xi}_{{x}_{i}}^{2}-{\Xi}_{{x}_{i}}^{3}-{\Xi}_{{z}_{i}}+I_{i}-p_{i}{\Xi}_{{x}_{i}}+p_{i}C_{i}.$
(10)
Figure 1: [Color online] (a) Expected value of the local mean field of the
node $i$ against the node degree $k_{i}$. The error bar indicates the variance
($\mu^{2}_{i}$) of $\overline{x}_{i}$. (b) Black points indicate the value of
$C_{i}$ and $p_{i}$ for Eq. (10) to present a stable periodic orbit (no
positive Lyapunov exponents). The maximal values of the periodic orbits obtained from Eq. (10) are shown in the bifurcation diagram in (c) considering
$C_{i}=-0.82$ and $\sigma=0.001$. (d) The CAS pattern for a neuron $i$ with
degree $k_{i}$=25 (with $\sigma=0.001$ and $C=-0.82$). In the inset, the same
CAS pattern of the neuron $i$ and some sampled points of the trajectory for
the neuron $i$ and another neuron $j$ with degree $k_{j}=25$. (e) The
difference between the first coordinates of the trajectories of neurons $i$
and $j$, with a time-lag of $\tau_{ij}=34.2$. (f) Phase difference between the
phases of the trajectories for neurons $i$ and $j$.
The others are given by $\dot{\Xi}_{{y}_{i}}=1-5\Xi_{{x}_{i}}^{2}-\Xi_{{y}_{i}}$ and $\dot{\Xi}_{{z}_{i}}=-r\Xi_{{z}_{i}}+4r(\Xi_{{x}_{i}}+1.618)$. In this
network, we have numerically verified that criterion 1 is satisfied for
neurons that have degrees $k\geq 10$ if $\sigma\leq\sigma^{*}$, with
$\sigma^{*}\cong 0.001$. In Fig. 1(a), we show the expected value $C_{i}$ of
the local mean field of the first coordinate ${x}_{i}$ of a neuron $i$ with
respect to the neuron degree (indicated in the horizontal axis), for
$\sigma=0.001$. The error bar indicates the variance of $C_{i}$, which fits $\propto k_{i}^{-1.0071}$. In (b), we show a parameter space to demonstrate
that the CAS phenomenon is a robust and stable phenomenon. Numerical
integration of Eqs. (9) for $p_{i}\in[0.001,1]$ produces $C_{i}\in[-0.9,0.7]$.
We integrate Eq. (10) by using $C_{i}\in[-0.9,0.7]$ and $p_{i}\in[0,0.2]$, to
show that the CAS pattern is stable for most of the values. So, variations in
$C_{i}$ of a network caused by changes in a parameter do not modify the
stability of the CAS pattern calculated by Eq. (10). For $\sigma=0.001$, Eqs. (9) yield many nodes for which $\overline{x}_{i}\cong-0.82$. So, to calculate the CAS pattern for these nodes, we use $C_{i}=-0.82$ and $\sigma=0.001$ in Eq. (10). The CAS pattern obtained, as we vary $p_{i}$, is shown in the
bifurcation diagram in (c), by plotting the local maximal points of the CAS
patterns. Criterion 2 is satisfied for most of the range of values of $p_{i}$
that produces a stable periodic CAS pattern. A neuron that has a degree $k_{i}$ is locked to the CAS pattern calculated by integrating Eq. (10) using $k_{i}\sigma=p_{i}$ and the measured expected value of the local mean field, $C_{i}$. In (d), we show the periodic orbit corresponding to the CAS pattern associated with a neuron $i$ with degree $k_{i}=25$ (for $\sigma$=0.001), and in the inset the sampled points of the trajectories of this same neuron $i$ and of another neuron $j$ that not only has an equal degree ($k_{j}$=25) but also feels a local mean field of $C_{j}\cong-0.82$. In (e), we show that these
two neurons have a typical time-lag synchronous behavior. In (f), we observe
$p/q=1$ phase synchronization between these two neurons for a long time,
considering that the phase difference remains bounded by $S=6\times 2\pi$ as
defined in Eq. (4), where the number 6 is the number of spikes within one period of the slower time scale. In order to verify Eq. (4) for all time, we
need to choose a ratio that is approximately equal to 1 ($p/q\cong 1$), but
not exactly 1 to account for slight differences in the local mean field of
these two neurons. Since $C_{i}$ depends on $\sigma$ for networks that have
neurons possessing a finite degree, we do not expect to observe a stable phase
synchronization in this network. Small changes in $\sigma$ may cause small
changes in the ratio $p/q$. Notice however that Eq. (4) might be satisfied for
a very long time for $p/q=1$. If neurons are locked to different CAS patterns (and therefore have different local mean fields), Eqs. (1) and (4) are both satisfied, but phase synchronization will not be 1:1, occurring instead with a ratio $p/q$ (see Sec. E in the Supplementary Information for an example).
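The bifurcation diagram of Fig. 1(c) can be reproduced, at least qualitatively, by integrating Eq. (10) together with its two companion equations. The Python sketch below is our reconstruction; the initial condition, integration time, and transient cutoff are arbitrary choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

I, r = 3.25, 0.005

def cas_pattern_rhs(t, s, p_i, C_i):
    """Eq. (10) and its two companion equations: the CAS pattern of a
    Hindmarsh-Rose neuron with effective coupling p_i = sigma * k_i and
    expected local mean field C_i."""
    x, y, z = s
    dx = y + 3 * x**2 - x**3 - z + I - p_i * x + p_i * C_i
    dy = 1 - 5 * x**2 - y
    dz = -r * z + 4 * r * (x + 1.618)
    return [dx, dy, dz]

def cas_pattern_maxima(p_i, C_i=-0.82, t_end=3000.0):
    """Local maxima of the x-component of the CAS pattern after the
    transient, as plotted in the bifurcation diagram of Fig. 1(c)."""
    t = np.arange(0.0, t_end, 0.05)
    sol = solve_ivp(cas_pattern_rhs, (0.0, t_end), [-1.0, 0.0, 3.0],
                    t_eval=t, args=(p_i, C_i), rtol=1e-8, atol=1e-8)
    x = sol.y[0][t > t_end / 2]                    # discard the transient
    return x[1:-1][(x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])]

# e.g. the pattern of a neuron with degree k_i = 25 at sigma = 0.001:
# print(cas_pattern_maxima(p_i=25 * 0.001))
```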
For neurons in this scale-free network to become completely synchronous, it is necessary that $\sigma(N)\geq 2\sigma^{CS}(N=2)/|\lambda_{2}|$ (Ref. pecora_PRL1998 ), where $\sigma^{CS}(N=2)\cong 0.5$ represents the value of the coupling strength at which two bidirectionally coupled neurons become completely synchronous, and $\lambda_{2}=-2.06$ is the largest nonzero eigenvalue of the Laplacian matrix defined as $A_{ij}-\mbox{diag}{(k_{i})}$. So, $\sigma^{CS}(N)\geq 1/2.06\cong 0.5$. The CAS phenomenon appears when $\sigma^{CAS}(N=1000)\leq 0.001$, a coupling strength 500 times smaller than the one that produces complete synchronization. Similar conclusions would be obtained for networks of different sizes, with nodes having the same dynamical descriptions and the same connecting topology.
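The threshold quoted above follows directly from the spectrum of the Laplacian matrix; the short Python sketch below (ours, assuming an undirected network so that the Laplacian is symmetric) computes it:

```python
import numpy as np

def complete_sync_threshold(A, sigma_cs_two=0.5):
    """Estimate of the coupling needed for complete synchronization,
    sigma >= 2 * sigma^CS(N=2) / |lambda_2|, where lambda_2 is the
    largest nonzero eigenvalue of the Laplacian L = A - diag(k_i)."""
    L = A - np.diag(A.sum(axis=1))
    eigs = np.sort(np.linalg.eigvalsh(L))   # ascending; the largest is ~0
    lam2 = eigs[-2]                         # largest nonzero eigenvalue
    return 2 * sigma_cs_two / abs(lam2)

# With lambda_2 = -2.06 this gives 2 * 0.5 / 2.06 = 1/2.06 ~ 0.49.
```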
Concluding, in this work we introduce the phenomenon of Collective Almost
Synchronization (CAS), a phenomenon that is characterized by having nodes
possessing approximately constant local mean fields. The appearance of an
approximately constant mean field is a consequence of a regime of weak interaction between the nodes, responsible for placing the node trajectories around stable periodic orbits. A network has the CAS phenomenon if the Central Limit Theorem can be applied and there exists an approximately constant local mean field. In other words, CAS is invariant to changes in the expected value of the local mean field that might appear due to parameter alterations (e.g. of the coupling strength). If the expected value of the local mean field changes, but the Central Limit Theorem can still be applied, nodes of the network will present the CAS phenomenon, and the observed weak forms of synchronization among the nodes might (or might not) be preserved. As examples of how common this phenomenon could be, we have asserted its appearance in large networks of
chaotic maps (see supplementary information), Hindmarsh-Rose neurons, and
Kuramoto oscillators (see supplementary information). In the Supplementary
Information, we also discuss that the CAS phenomenon is a possible source of
coherent motion in systems that are models for the appearance of collective
motion in social, economical, and animal behaviour.
## I Supplementary Information
### I.1 CAS and generalized synchronization
Generalized synchronization rulkov_PRE1995 ; abarbanel_PRE1996 is a common behavior in complex networks hung_PRE2008 ; guan_chaos2009 ; hu_chaos2010 , and should typically be expected to be found. This phenomenon is defined by $x_{i}=\Phi(y_{i})$, where $\Phi$ is considered to be a continuous function.
As explained in Refs. rulkov_PRE1995 ; abarbanel_PRE1996 , generalized
synchronization appears due to the existence of a low-dimensional synchronous
manifold, often a very complicated and unknown manifold.
Recent works zhou_CHAOS2006 ; ballerini_PNAS2008 ; pereira_PRE2010 ;
gardenes_chaos2011 have reported that nodes in the network that are highly
connected become synchronous. As shown in Ref. guan_chaos2009 , that is a manifestation of generalized synchronization rulkov_PRE1995 ; abarbanel_PRE1996 in complex networks. For a fixed coupling strength among the nodes with heterogeneous degree distributions and for the usual diffusive coupling configuration, one should expect that the set of hub nodes (highly connected nodes) provides a skeleton about which synchronization is
developed. Reference hramov_PRE2005 demonstrates how ubiquitous generalized
synchronization is in complex networks. It is shown that a necessary condition
for its appearance in oscillators coupled in a driven-response (master-slave)
configuration is that the modified dynamics of the response system presents a
stable periodic behavior. The modified dynamics is a set of equations
constructed by considering only the variables of the response system. In a
complex network, a modified dynamics of a node is just a system of equations
that contains only variables of that node.
An important contribution to understanding why generalized synchronization is a ubiquitous property in complex networks is given by the numerical work of Ref. guan_chaos2009 and the theoretical work of Ref. hu_chaos2010 . In Refs.
guan_chaos2009 ; hu_chaos2010 the ideas of Ref. hramov_PRE2005 are extended
to complex networks. In particular, the work of Ref. hu_chaos2010 shows that
generalized synchronization occurs whenever there is at least one node whose
modified dynamics is periodic. All the nodes that have a stable and periodic
modified dynamics become synchronous in the generalized sense with the nodes
that have a chaotic modified dynamics. The general theorem presented in Ref.
hu_chaos2010 is a powerful tool for the understanding of weak forms of
synchronization or desynchronous behaviors in complex networks. However,
identifying the occurrence of generalized synchronization does not give much
information about the behavior of the network, since the function that relates the trajectories of the nodes that are generalized synchronous is usually unknown. The CAS phenomenon allows one to calculate, at least in an approximate sense, the equations of motion that describe the pattern to which the nodes are locked. More specifically, we can derive the set of equations
governing, in an approximate sense, the time evolution of the nodes, not
covered by the theorem in Ref. hu_chaos2010 .
Finally, if there is a node whose modified dynamics describes a stable periodic behavior and its CAS pattern is also a stable periodic orbit, then the CAS phenomenon appears when the network presents generalized synchronization.
### I.2 CAS and other synchronous and weak-synchronous phenomena
Consider a network of $N$ nodes described by
$\dot{\mathbf{x}}_{i}=\mathbf{F}_{i}(\mathbf{x}_{i})+\sigma\sum_{j=1}^{N}{\mathbf{A}_{ij}}{\mathbf{E}}[\mathcal{H}(\mathbf{x}_{j}-\mathbf{x}_{i})]+\mathbf{\zeta}_{i}(t),$
(11)
where $\mathbf{x}_{i}\in\Re^{d}$ is a d-dimensional vector describing the
state variables of the node $i$, $\mathbf{F}_{i}$ is a $d$-dimensional vector
function representing the dynamical system of the node $i$, ${\mathbf{A}_{ij}}$ is the adjacency matrix, ${\mathbf{E}}$ is the coupling function as defined in pecora_PRL1998 , $\mathcal{H}$ is an arbitrary
differentiable transformation, and $\mathbf{\zeta}_{i}(t)$ is an arbitrary
random fluctuation. Assume in the following that $\mathbf{\zeta}_{i}(t)=0$.
Assume that the nodes in the network (11) have equal dynamical descriptions,
i.e., $\mathbf{F}_{i}=\mathbf{F}$, that the network is fully connected, so
every node has a degree $k_{i}=N-1$, and that
$\mathcal{H}(\mathbf{x}_{j}-\mathbf{x}_{i})=(\mathbf{x}_{j}-\mathbf{x}_{i})$.
We can rewrite it in terms of the average field
$\overline{\mathbf{x}}(t)=\frac{1}{N}\sum_{i=1}^{N}\mathbf{x}_{i}(t)$:
$\dot{\mathbf{x}}_{i}=\mathbf{F}_{i}(\mathbf{x}_{i})-p_{i}\mathbf{E}(\mathbf{x}_{i}-\overline{\mathbf{x}}),$
(12)
where $p_{i}=\sigma k_{i}$. Therefore every node becomes “decoupled” from the network in the sense that its interaction with the network is entirely mediated by the average field. Collective behavior is dictated by the behavior of the average field and the individual dynamics of the node. The linear stability of the network (12) was used in Ref. zhou_CHAOS2006 as an approximation to justify how desynchronous behavior about the average field can appear in complex networks. Notice that this assumption can only be rigorously fulfilled if the network is fully connected; therefore, it is natural that the desynchronous phenomena reported in Ref. zhou_CHAOS2006 happen for nodes that are highly connected. One can interpret the desynchronous behavior
observed in Ref. zhou_CHAOS2006 as an almost synchronization between a node
and the mean field $\overline{\mathbf{x}}$.
The differences between complete synchronization and synchronization in the collective sense can be explained through the following example. An interesting solution of Eq. (12) is obtained when $\overline{\mathbf{x}}=\mathbf{x}_{i}(t)$, with $\mathbf{x}_{i}(t)$ varying in time. In this case, the average field is along the synchronization manifold: the network is completely synchronous, all nodes have equal trajectories, and $\dot{\mathbf{x}}_{i}=\mathbf{F}_{i}(\mathbf{x}_{i})$. For such a special network, collective behavior and complete synchronization are the same. On the other hand, collective behavior typically appears when the coupling term $\sigma E(\mathbf{x}_{i}-\overline{\mathbf{x}})$ is different from zero for most of the time and $\dot{\mathbf{x}}_{i}\neq\mathbf{F}_{i}(\mathbf{x}_{i})$, but there is a majority of nodes with similar behavior. In this sense, the desynchronous behaviors reported in Ref. zhou_CHAOS2006 can be considered as a collective phenomenon that happens for parameters close to the ones that yield complete synchronization.
To understand when the CAS phenomenon occurs, consider the solution of Eq. (12) in the thermodynamic limit $N\rightarrow\infty$ when $\overline{\mathbf{x}}$ is constant in time, $\overline{\mathbf{x}}=\mathbf{C}$. In such a situation, the evolution of every node can be described by the same $d$-dimensional system of ODEs
$\dot{\mathbf{x}}=\mathbf{F}(\mathbf{x})-p\mathbf{E}(\mathbf{x}-\mathbf{C}),$
(13)
where $p=\sigma(N-1)$. If complete synchronization takes place, then
$\mathbf{F}_{i}(\mathbf{C})=0$, meaning that there can only exist complete
synchronization if all the nodes lock into the same stable steady state
equilibrium point, likely to happen if $\mathbf{F}_{i}$ is the same for all
the nodes.
Another possible network configuration that leads to $\overline{\mathbf{x}}=\mathbf{C}$ happens when each node is only weakly coupled (“independent”) to the others, such that the Central Limit Theorem can be applied. If the network has only a finite number of nodes and
$\overline{\mathbf{x}}(t)$ is not exactly constant in time, but
$\overline{\mathbf{x}}(t)\approxeq\mathbf{C}$, the nodes still behave in the
same predictable way if the dynamics described by
$\dot{\mathbf{x}}=\mathbf{F}(\mathbf{x})-p\mathbf{E}(\mathbf{x})+p\mathbf{E}(\mathbf{C})$
is a sufficiently stable periodic orbit. This is how the CAS phenomenon
appears in fully connected networks. All nodes become locked to the stable
periodic orbit described by
$\dot{\mathbf{x}}=\mathbf{F}(\mathbf{x})-p\mathbf{E}(\mathbf{x})+p\mathbf{E}(\mathbf{C})$.
Now, we break the symmetry of the network, allowing the nodes to be connected
arbitrarily to their neighbors. We still consider diffusive linear couplings,
$\mathcal{H}(\mathbf{x}_{j}-\mathbf{x}_{i})=(\mathbf{x}_{j}-\mathbf{x}_{i})$.
The equations of such a network can be written as
$\dot{\mathbf{x}}_{i}=\mathbf{F}_{i}(\mathbf{x}_{i})-p_{i}\mathbf{E}(\mathbf{x}_{i})+p_{i}\mathbf{E}(\overline{\mathbf{x}}_{i}(t)),$
(14)
where $k_{i}$ is the degree of node $i$ with $k_{l}\leq k_{m}$, if $l<m$, and
$\overline{\mathbf{x}}_{i}(t)$ is the local mean field defined as
$\overline{\mathbf{x}}_{i}(t)=\frac{1}{k_{i}}\sum_{j=1}^{N}A_{ij}\mathbf{x}_{j}(t).$
(15)
Our main assumption is that the local mean field of a variable that is bounded, either $\overline{\mathbf{x}}_{i}(t)$ or $\overline{\dot{\mathbf{x}}}_{i}(t)$, exhibits small oscillations about an expected constant value $\mathbf{C}_{i}$. In other words, one can define a time average $\mathbf{C}_{i}$ by either
$\mathbf{C}_{i}=\frac{1}{t}\int_{0}^{t}\overline{\mathbf{x}}_{i}(t)dt,$ (16)
or
$\mathbf{C}_{i}=\frac{1}{t}\int_{0}^{t}\overline{\dot{\mathbf{x}}}_{i}(t)dt.$
(17)
Notice that ${\mathbf{x}}_{i}\in\Re^{d}$ (or $\dot{\mathbf{x}}_{i}\in\Re^{d}$), and so $\mathbf{C}_{i}\in\Re^{d}$ as well. The CAS
phenomenon appears for a node that has at least one component of the local
mean field ($\overline{\mathbf{x}}_{i}$ or $\overline{\dot{\mathbf{x}}}_{i}$)
that is approximately constant. The appearance of this almost constant value
is a consequence of the Central Limit Theorem. For networks whose nodes are
described by only bounded variables, when calculating the local mean field we
only take into consideration the component receiving the couplings from other
nodes. For networks of Kuramoto oscillators that have one variable (the phase
$\theta$) that is not bounded, a constant local mean field appears in the
component that describes the instantaneous frequency ($\dot{\theta}_{i}$).
In Ref. hu_chaos2010 , it was shown that for chaotic networks described by a
system of equations similar to Eq. (14), generalized synchronization can
appear if the modified dynamics described by
$\dot{\mathbf{x}}_{i}=\mathbf{F}_{i}(\mathbf{x}_{i})-\sigma
k_{i}\mathbf{E}(\mathbf{x}_{i})$ of a certain number of nodes either settles into a stable equilibrium point ($\dot{\mathbf{x}}_{i}=0$) or describes a stable periodic solution (a limit cycle). Generalized synchronization appears between
the nodes that have modified dynamics describing stable periodic states and
the nodes that have modified dynamics describing chaotic states.
To understand the phenomenon of collective almost synchronization (CAS),
introduced in this work, consider that
$\mathcal{H}(\mathbf{x}_{j}-\mathbf{x}_{i})=(\mathbf{x}_{j}-\mathbf{x}_{i})$.
It is a phenomenon that appears necessarily when $\overline{\mathbf{x}}_{i}\approxeq\mathbf{C}_{i}$ or $\overline{\dot{\mathbf{x}}}_{i}\approxeq\mathbf{C}_{i}$. The equations for
the network can then be described by
$\dot{\mathbf{x}}_{i}=F_{i}(\mathbf{x}_{i})-p_{i}\mathbf{E}(\mathbf{x}_{i})+p_{i}\mathbf{E}(\mathbf{C}_{i})+\mathbf{\delta}_{i},$
(18)
where the residual term is $\delta_{i}=p_{i}(\overline{\mathbf{x}}_{i}-\mathbf{C}_{i})$. This term is small most of the time but large for some intervals of time; $|\mathbf{\delta}_{i}(t)|>0$ for all time, but $|\mathbf{\delta}_{i}(t)|<\epsilon$ for most of the time. Another requirement for the CAS phenomenon to appear is that the CAS pattern $\mathbf{\Xi}_{i}(t)$ of a node $i$, described by Eq. (18) ignoring the residual term,
$\dot{\mathbf{\Xi}}_{i}=\mathbf{F}_{i}(\mathbf{\Xi}_{i})-p_{i}\mathbf{E}(\mathbf{\Xi}_{i})+p_{i}\mathbf{E}(\mathbf{C}_{i}),$ (19)
must be a stable periodic orbit. We define that a node presents collective
almost synchronization (CAS) if
$|\mathbf{x}_{i}(t)-\mathbf{\Xi}_{i}(t-\tau_{i})|<\epsilon_{i},$ (20)
for most of the time.
Notice from Eq. (19) that for $p_{i}>0$, the CAS pattern will not be described
by $\mathbf{F}(\mathbf{x}_{i})$ and therefore does not belong to the
synchronization manifold. On the other hand, $\mathbf{\Xi}_{i}$ is induced by
the local mean field as typically happens in synchronous phenomenon due to
collective behavior. This property of the CAS phenomenon shares similarities
with the way complete synchronization appears in networks of nodes coupled
under time-delay functions nijmeijer_IEEE2011 . In such networks, nodes become
completely synchronous to a solution of the network that is different from the solution of an isolated node of the network. Additionally, the trajectories of the nodes present a time-lag to this solution.
To understand the reason why the CAS phenomenon appears when $\mathbf{\Xi}_{i}(t)$ is a sufficiently stable periodic orbit, we study the variational equation of the CAS pattern (19),
$\dot{\mathbf{\xi}}_{i}=[D\mathbf{F}_{i}(\mathbf{\Xi}_{i})-p_{i}\mathbf{E}]\mathbf{\xi}_{i},$ (21)
obtained by linearizing Eq. (19) around $\mathbf{\Xi}_{i}$ by making $\mathbf{\xi}_{i}=\mathbf{x}_{i}-\mathbf{\Xi}_{i}$. For a stable periodic CAS pattern, this equation produces no positive Lyapunov exponents. As a consequence, neglecting the existence of the time-lag between $\mathbf{x}_{i}(t)$ and $\mathbf{\Xi}_{i}(t)$, the trajectory of the node $i$ oscillates about $\mathbf{\Xi}_{i}$, and $|\mathbf{x}_{i}-\mathbf{\Xi}_{i}|\leq\epsilon_{i}$ for most of the time, satisfying Eq. (20), where $\epsilon_{i}$ depends on $\mathbf{\delta}_{i}$. If there are two nodes $i$ and $j$ which feel similar local mean fields, so that $\mathbf{\Xi}_{i}\approxeq\mathbf{\Xi}_{j}$, then $\mathbf{x}_{i}\approxeq\mathbf{x}_{j}$, for most of the time.
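Criterion 2 can be tested by integrating Eq. (19) and Eq. (21) side by side and accumulating the growth rate of a tangent vector. The Python sketch below (ours; it uses a plain Euler step for brevity, so in practice a small step and a higher-order integrator are advisable) returns the largest Lyapunov exponent, which should be negative for a node presenting CAS:

```python
import numpy as np

def largest_lyapunov(F, DF, E, p_i, C_i, x0, dt=1e-3, steps=500_000):
    """Largest Lyapunov exponent of the variational equation (21),
    integrated along the CAS pattern of Eq. (19).
    F, DF : vector field and its Jacobian; E : (d, d) coupling matrix;
    C_i   : (d,) expected local mean field."""
    x = np.asarray(x0, float)
    v = np.ones_like(x) / np.sqrt(len(x))
    acc = 0.0
    for _ in range(steps):
        x = x + dt * (F(x) - p_i * (E @ x) + p_i * (E @ C_i))  # Eq. (19)
        v = v + dt * ((DF(x) - p_i * E) @ v)                   # Eq. (21)
        n = np.linalg.norm(v)
        acc += np.log(n)
        v /= n                       # renormalize the tangent vector
    return acc / (steps * dt)        # negative values support Criterion 2
```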
To understand why the nodes that present CAS also have between them a time-lag type of synchronization, integrate Eq. (18), using Eq. (19), to obtain
$\mathbf{x}_{i}(t)=\int_{0}^{t}[\dot{\mathbf{\Xi}}_{i}(t)+\mathbf{\delta}_{i}(t)]dt.$
(22)
This integral is not trivial in the general case. But we have a simple
phenomenological explanation for its solution. When the CAS pattern is
sufficiently stable, the asymptotic time limit state of the variable
$\mathbf{x}_{i}(t)$ is the CAS pattern $\mathbf{\Xi}_{i}(t)$. But due to the residual term $\mathbf{\delta}_{i}(t)$, the trajectory of $\mathbf{x}_{i}(t)$ arrives in the neighborhood of $\mathbf{\Xi}_{i}(t)$ at time $t$ with a time-lag.
As a result, nodes that are collectively almost synchronous obey Eq. (20). In addition, two nodes that present CAS also have a time-lag between their trajectories for the same reason. There is an extra contribution to the time-
lag between the trajectories of two nodes if their initial conditions differ.
Phase synchronization juergen_book is a phenomenon where the phase difference, denoted by $\Delta\phi_{ij}$, between the phases of two signals (or nodes in a network), $\phi_{i}(t)$ and $\phi_{j}(t)$, remains bounded for all time:
$\Delta\phi_{ij}=\left|\phi_{i}(t)-\frac{p}{q}\phi_{j}(t)\right|\leq S,$ (23)
where $S=2\pi$, and $p$ and $q$ are two rational numbers juergen_book . For coupled chaotic oscillators one can also find irrational phase synchronization baptista_PRE2004 , where Eq. (23) can be satisfied for all time with $p$ and $q$ irrational. $S$ is a reasonably small constant that can be larger than $2\pi$ in order to encompass oscillatory systems that either have a time-varying time-scale or whose time-lag varies in time. This bound can be simply calculated by letting $S$ represent the growth of the phase in the faster time scale after one period of the slower time scale.
The link between the CAS phenomenon and phase synchronization can be explained
by thinking that it is a synchronous phenomenon among the nodes that is
mediated by their CAS patterns. The phase of the periodic orbit of the CAS
pattern of the node $i$ grows as
$\tilde{\phi}_{i}(t)=\omega_{i}t+\xi_{i}(t)+\phi_{i}^{0}$ and of the node $j$
grows as $\tilde{\phi}_{j}(t)=\omega_{j}t+\xi_{j}(t)+\phi_{j}^{0}$. The
quantities $\phi_{i}^{0}$ and $\phi_{j}^{0}$ are displacements of the phase
caused by the existence of time-lag, and $\xi_{i}(t)$ and $\xi_{j}(t)$ are
small fluctuations. For $t\rightarrow\infty$ these can be neglected and we
have that
$\frac{\tilde{\phi}_{i}(t)}{\tilde{\phi}_{j}(t)}=\frac{\omega_{i}}{\omega_{j}}=\frac{p}{q},$
(24)
where $\omega_{i}=\lim_{t\rightarrow\infty}\frac{\tilde{\phi}_{i}(t)}{t}$
gives the average frequency of oscillation of the CAS pattern of node $i$, and
$p$ and $q$ are two real numbers.
The phases of the nodes can be written as functions of the phases of the periodic orbits of the CAS patterns. So, $\phi_{i}(t)=\tilde{\phi}_{i}(t)+\delta\phi_{i}(t)$ and $\phi_{j}(t)=\tilde{\phi}_{j}(t)+\delta\phi_{j}(t)$, where $\delta\phi_{i}(t)$ represents a variation of the phase of the node $i$ with respect to the phase of the CAS pattern, and depends on the way the phase is defined pereira_PLA2007 . The phase difference $\Delta\phi_{ij}(t)$, as written in Eq. (23), becomes equal to $\frac{1}{q}|t(q\omega_{i}-p\omega_{j})+q\delta\phi_{i}(t)-p\delta\phi_{j}(t)|$. But, from Eq. (24), $q\omega_{i}-p\omega_{j}=0$, and therefore $\Delta\phi_{ij}(t)\leq\frac{1}{q}\max{(q\delta\phi_{i}(t)-p\delta\phi_{j}(t))}$. But since the node orbit is locked to the CAS pattern, $\Delta\phi_{ij}(t)$ is always a small quantity.
In practice, for networks composed of a finite number of nodes, we do not expect the quantities $\delta\phi_{i}(t)$ and $\delta\phi_{j}(t)$ to remain small for all time. The reason is that the CAS pattern can only be approximately calculated, and in general we do not know the precise value of the local mean field. However, our simulations show that these quantities remain small for time intervals that comprise many periods of oscillation of the node trajectories. For networks having an expected value of the mean field $\mathbf{C}_{i}$ that is independent of the coupling strength $\sigma$, the ratio $p/q$ does not change as one changes the value of $\sigma$, and then
phase synchronization is stable under a parameter variation. For the network
of Kuramoto oscillators, Eq. (23) can be verified for all time with a value of
$p/q$ that remains invariant as one changes $\sigma$.
Assume for now that the nodes have equal dynamics, so
$\mathbf{F}_{i}=\mathbf{F}$. If a node $i$ with degree $k_{i}$ has a periodic
CAS pattern that is sufficiently stable under Eq. (21), all the nodes with
degrees close to $k_{i}$ also have similar CAS patterns that are sufficiently
stable under Eq. (21). Node $i$ is locked to $\mathbf{\Xi}_{i}$ and node $j$
is locked to $\mathbf{\Xi}_{j}$. But since $\mathbf{\Xi}_{i}$ is approximately equal to $\mathbf{\Xi}_{j}$, we have $\mathbf{x}_{i}\cong\mathbf{x}_{j}$ for most of the time. So, if the pattern solution is sufficiently stable, the external noise $\mathbf{\zeta}_{i}(t)$ can be different from zero and the nodes still have similar trajectories over that interval of time. The same argument remains valid if $\mathbf{F}_{i}\neq\mathbf{F}_{j}$, as long as the CAS pattern is sufficiently stable.
In Ref. pereira_PRE2010 , synchronization was defined in terms of the node
$\mathbf{x}_{N}$ that has the largest number of connections, when
$\mathbf{x}_{i}(t)\cong\mathbf{x}_{N}$ (which is equivalent to stating that
$|\mathbf{x}_{i}(t)-\mathbf{x}_{N}|<\epsilon$), where $\mathbf{x}_{N}$ is
assumed to be very close to the synchronization manifold $\mathbf{s}$ defined
by $\dot{\mathbf{s}}=\mathbf{F}(\mathbf{s})$. This type of synchronous
behavior was shown to exist in scale-free networks whose nodes have equal dynamics and that are linearly connected. This was called hub synchronization. The link between the CAS phenomenon, the hub synchronization phenomenon pereira_PRE2010 , and generalized synchronization can be explained as follows. Nodes that present the CAS phenomenon are not required to have a small error dynamics $\mathbf{x}_{j}-\mathbf{x}_{i}$. But for the following comparison, assume that $\mathbf{\vartheta}_{ij}=\mathbf{x}_{j}-\mathbf{x}_{i}$ is small, so that we can linearise Eq. (14) about another node $j$. Assume also that
$\mathbf{F}_{i}=\mathbf{F}$. The variational equations of the error dynamics
between two nodes $i$ and $j$ that have equal degrees are described by
$\dot{\mathbf{\vartheta}}_{ij}=[D\mathbf{F}(\mathbf{x}_{i})-p_{i}E]\mathbf{\vartheta}_{ij}+\mathbf{\eta}_{i}.$
(25)
In Ref. pereira_PRE2010 , hub synchronization exists if Eq. (25), neglecting
the coupling term $\mathbf{\eta}_{i}$, has no positive Lyapunov exponents.
That is another way of stating that hub synchronization between $i$ and $j$
occurs when the variational equations of the modified dynamics $[\dot{\mathbf{x}}_{i}=\mathbf{F}(\mathbf{x}_{i})-p_{i}E(\mathbf{x}_{i})]$ present no positive Lyapunov exponent. In other words, in order to have hub
synchronization it is necessary that the modified dynamics of both nodes be
describable by stable periodic oscillations. Hub synchronization is the result
of a weak form of generalized synchronization, defined in terms of the linear
stability of the error dynamics between two highly connected nodes. Unlike
generalized synchronization, hub synchronization offers a way to predict, in
an approximate sense, the trajectory of the synchronous nodes.
In contrast, the CAS phenomenon appears when the CAS pattern, which is
different from the solution of the modified dynamics, becomes periodic.
Another difference between the CAS and the hub synchronization phenomenon is
that whereas $\overline{\mathbf{x}}_{i}\approxeq\mathbf{C}$ in the CAS
phenomenon, $\overline{\mathbf{x}}_{i}\approxeq{\mathbf{x}}_{i}$ in the hub
synchronization, in order for $\mathbf{\eta}_{i}$ to be very small, and
$\mathbf{x}_{i}$ to be close to the synchronization manifold. So, whereas hub
synchronization can be interpreted as being a type of practical
synchronization femat_PLA1999 , CAS is a type of almost synchronization.
In the work of Refs. politi_PRE2006 ; politi_PRL2010 , a new desynchronous phenomenon in complex networks was numerically reported: the network has no positive Lyapunov exponents, but it presents a desynchronous, non-trivial collective behavior. A possible situation for the phenomenon to appear is when
$\mathbf{\delta}_{i}$ and $\mathbf{C}_{i}$ in Eq. (18) are either zero or
sufficiently small such that the stability of the network is completely
determined by Eq. (21), and this equation produces no positive Lyapunov
exponent. Assume now that $p_{i}$ in Eq. (19) is appropriately adjusted such
that the CAS pattern for every node $i$ is a stable periodic orbit. The
variational Eqs. (21) for all nodes have no positive Lyapunov exponents. If
additionally, $\overline{\mathbf{x}}_{i}(t)\approxeq\mathbf{C}$, then the
network in Eq. (14) possesses no positive Lyapunov exponent. Therefore,
networks that present the CAS phenomenon for all nodes might present the
desynchronous phenomenon reported in Refs. politi_PRE2006 ; politi_PRL2010 .
The CAS phenomenon becomes different from the phenomenon of Refs. politi_PRE2006 ; politi_PRL2010 if, for at least one node, Eq. (19) produces a chaotic orbit.
To understand the occurrence of CAS in networks formed by heterogeneous nodes
connected by nonlinear functions, such as networks of Kuramoto oscillators, we rewrite Kuramoto’s network model in terms of the local mean field,
$\overline{\theta}_{i}=\frac{1}{k_{i}}\sum_{j=1}^{N}A_{ij}\theta_{j}$. Using
the coordinate transformation
$\frac{1}{k_{i}}\sum_{j=1}^{N}A_{ij}e^{\mathbb{j}(\theta_{j}-\theta_{i})}=\tilde{r}_{i}e^{\mathbb{j}(\overline{\theta}_{i}-\theta_{i})},$ (26)
the dynamics of the node $i$ is described by
$\dot{\theta}_{i}=\omega_{i}+p_{i}\tilde{r}_{i}\sin(\overline{\theta}_{i}-\theta_{i}).$ (27)
The phase ${\theta_{i}}$ is not a bounded variable, and therefore we expect that typically $\overline{\theta}_{i}$ does not have a well-defined average. But $\overline{\dot{\theta}_{i}}(t)$ is bounded and has a well-defined average value, which is an approximately constant quantity ($C_{i}$) for nodes in networks with a sufficiently large number of connections and with sufficiently small coupling strengths. When $\overline{\dot{\theta}_{i}}\cong C_{i}$, the node $i$ has the propensity to exhibit the CAS phenomenon, and the CAS pattern is calculated from Eq. (27) considering that $\overline{\theta}_{i}=C_{i}t$. Notice that $\overline{\theta}_{i}=\overline{\dot{\theta}_{i}}t\cong C_{i}t$.
Phase synchronization between two nodes in the networks of Eq. (27) is stable
under parameter variations (coupling strength in this case) if these nodes
present the CAS phenomenon. There is irrational (rational) phase
synchronization if
$\frac{\overline{\dot{\theta_{i}}}}{\overline{\dot{\theta_{j}}}}$ is
irrational (rational). If nodes are sufficiently “decoupled” we expect that
$\frac{\overline{\dot{\theta_{i}}}}{\overline{\dot{\theta_{j}}}}\approxeq\omega_{i}/\omega_{j}$.
Phase synchronization will be rational whenever nodes with different natural frequencies become locked inside Arnold tongues, induced by the coupling term $p_{i}\tilde{r}_{i}\sin(\overline{\theta}_{i}-\theta_{i})$.
There is a special solution of Eq. (27) that produces a bounded state in the variable ${\theta_{i}}$ when the network is completely synchronous to an equilibrium point. In such a case, $\overline{\theta}_{i}$ becomes constant, and
Eq. (27) has one stable equilibrium
$\theta_{i}=\arcsin{\left(\frac{\omega_{i}}{p_{i}}\right)}$, obtained when
$p_{i}>\omega_{i}$. But, the local mean field becomes constant due to complete
synchronization and not due to the fact that the nodes are “decoupled”. These
conditions do not produce the CAS phenomenon.
We take the thermodynamic limit, in which the network has infinitely many nodes with infinite degrees. $C_{i}$ calculated using Eq. (17) does not change as one changes the coupling $\sigma$, since
$\overline{\dot{\theta}}_{i}=\lim_{k_{i},N\rightarrow\infty}\frac{1}{k_{i}}\sum_{j=1}^{N}A_{ij}\left[\omega_{j}+p_{j}\tilde{r}_{j}\sin(\overline{\theta}_{j}-\theta_{j})\right]=\frac{1}{k_{i}}\sum_{j=1}^{N}A_{ij}\omega_{j}+\frac{\sigma}{k_{i}}\sum_{j=1}^{N}A_{ij}k_{j}\tilde{r}_{j}\sin(\overline{\theta}_{j}-\theta_{j}),$
using $p_{j}=\sigma k_{j}$. But, if the nodes are sufficiently decoupled, the second term approaches zero, and therefore $C_{i}$ depends only on the natural frequencies: $\overline{\dot{\theta}}_{i}=C_{i}=\frac{1}{k_{i}}\sum_{j=1}^{N}A_{ij}\omega_{j}$.
Assume that there are two nodes, $i$ and $j$, and that for most of the time
$\Psi_{i}\approxeq\Psi_{j}$. Then, for most of the time it is also true that
$\Psi_{i}-\theta_{i}\approxeq\Psi_{j}-\theta_{j}$, which allows us to write
$\sin(\Psi_{j}-\theta_{j})-\sin(\Psi_{i}-\theta_{i})\approxeq\cos{(\Psi_{i}-\theta_{i})}[(\Psi_{j}-\theta_{j})-(\Psi_{i}-\theta_{i})]\approxeq\cos{(\Psi_{i}-\theta_{i})}[\theta_{i}-\theta_{j}]$.
Since $\Psi_{i}\approxeq\theta_{i}$, then
$\cos{(\Psi_{i}-\theta_{i})}\approxeq 1$ and
$\sin(\Psi_{j}-\theta_{j})-\sin(\Psi_{i}-\theta_{i})\approxeq\theta_{i}-\theta_{j}$.
Defining the error dynamics between the two nodes to be
$\xi_{ij}=\theta_{j}-\theta_{i}$, we arrive at
$\dot{\xi}_{ij}\approxeq(\omega_{j}-\omega_{i})-p_{i}\xi_{ij}.$ (28)
Therefore, we expect to find two nodes with similar CAS behavior when their
local mean fields are close and the difference between their natural
frequencies $(\omega_{j}-\omega_{i})$ is small.
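To make the consequence of Eq. (28) concrete, the following minimal Python sketch integrates the linearized error dynamics for illustrative (not paper-derived) parameter values and confirms that the phase mismatch relaxes to the fixed point $(\omega_{j}-\omega_{i})/p_{i}$:

```python
import numpy as np

# Euler integration of the linearized error dynamics, Eq. (28):
#   d(xi_ij)/dt = (omega_j - omega_i) - p_i * xi_ij.
# The mismatch relaxes to the fixed point (omega_j - omega_i)/p_i, so two
# nodes share a similar CAS behavior whenever their natural frequencies
# are close. Parameter values below are illustrative, not from the text.
omega_i, omega_j, p_i = -5.05, -5.21, 1.3
dt, steps = 1e-3, 20_000

xi = 1.0                                   # initial phase mismatch
for _ in range(steps):
    xi += dt * ((omega_j - omega_i) - p_i * xi)

print(f"xi(end) = {xi:.4f}, fixed point = {(omega_j - omega_i) / p_i:.4f}")
```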
The CAS phenomenon can also appear in a system of driven particles
vicsek_PRL1995 , a simple but powerful model for the onset of pattern
formation in population dynamics couzin_ASB2003 , economic systems
gregoire_physicaD2003 and social systems helbing_nature2000 . In the work of
Ref. vicsek_PRL1995 , it was assumed that individual particles move at a
constant speed but with an orientation that depends on the local mean field
of the orientations of the particles within a local neighborhood, under the
effect of additional external noise. In an equivalent time-continuous
description of the Vicsek particle model vicsek_PRL1995 , the equation of
motion for the direction of movement of particle $i$ can be written as
$\dot{\mathbf{x}}_{i}=-\mathbf{x}_{i}+\overline{\mathbf{x}}_{i}+\Delta\mathbf{\theta}_{i},$
(29)
where $\overline{\mathbf{x}}_{i}$ represents the local mean field of the
orientation of the particle $i$ within a local neighborhood and
$\Delta\mathbf{\theta}_{i}$ represents a small noise term. When
$\overline{\mathbf{x}}_{i}$ is approximately constant, the CAS pattern is
described by a solution of
$\dot{\mathbf{x}}_{i}=-\mathbf{x}_{i}+\overline{\mathbf{x}}_{i}$, which will
be a stable equilibrium point as long as $\Delta\mathbf{\theta}_{i}$ is
sufficiently small. From the Central Limit Theorem,
$\overline{\mathbf{x}}_{i}$ will be approximately constant as long as the
neighborhood considered is sufficiently large or the density of particles is
sufficiently large.
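As a rough illustration of this relaxation, the sketch below integrates Eq. (29) for a fully connected neighborhood with illustrative parameter values; the noise amplitude and population size are assumptions, not values from Ref. vicsek_PRL1995:

```python
import numpy as np

# Euler integration of Eq. (29) for a fully connected neighborhood, so the
# local mean field xbar_i is simply the population mean. With small noise,
# every orientation x_i relaxes to the (nearly constant) mean field, which
# is the CAS pattern here. Sizes and amplitudes are illustrative.
rng = np.random.default_rng(0)
N, dt, steps, noise = 500, 1e-2, 5000, 0.01

x = rng.uniform(-1.0, 1.0, N)               # initial orientations
for _ in range(steps):
    xbar = x.mean()                          # local mean field
    x += dt * (-x + xbar) + noise * np.sqrt(dt) * rng.standard_normal(N)

print("spread around the mean field:", x.std())   # small residual spread
```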
### I.3 About the expected value of the local mean field: the Central Limit
Theorem
The Theorem states that, given $t$ observations, each observation containing
$k$ measurements ($x_{1},x_{2},x_{3},x_{4},\ldots,x_{k}$), the sample mean
$S_{N}=\frac{1}{k}\sum_{i=1}^{k}x_{i}(N)$ (for $N=1,2,\ldots,t$), with the
variables $x_{i}(N)$ drawn from an independent random process that has a
distribution with finite variance $\mu^{2}$ and mean $\overline{x}$, converges
in distribution to a Normal distribution for sufficiently large $k$. As a
consequence, the expected value of these $t$ observations is given by the mean
$\overline{x}$ (additionally, $\overline{x}=\frac{1}{t}\sum_{N=1}^{t}S_{N}$),
and the variance of the expected value is given by $\frac{\mu^{2}}{k}$. The
larger the number $k$ of variables being averaged, the higher the probability
that $S_{N}$ is close to the expected value. There are many situations in
which one can apply this
theorem for variables with some sort of correlation hilhorst_BJP2009 , as it
is the case for variables generated by deterministic chaotic systems with
strong mixing properties, for which the decay of correlation is exponentially
fast. In other words, a deterministic trajectory that is strongly chaotic
behaves as an independent random variable in the long-term. For that reason,
the Central Limit Theorem holds for the time average value $\overline{x}(t)$
produced by summing up chaotic trajectories from nodes belonging to a network
whose nodes are weakly connected. Consequently, the distribution of
$\overline{x}_{i}(t)=\frac{1}{k_{i}}\sum_{j}A_{ij}x_{j}(t)$ for node $i$ should
converge to a Gaussian distribution centered at
$C_{i}=\frac{1}{t}\int_{0}^{t}\overline{x}_{i}(t^{\prime})\,dt^{\prime}$ when
the degree of the node is sufficiently large. In addition, the variance
$\mu^{2}_{i}$ of the local mean field $\overline{x}_{i}(t)$ decreases
proportionally to $k_{i}^{-1}$, as we have numerically verified for networks
of Hindmarsh-Rose neurons ($\mu^{2}_{i}\propto k_{i}^{-1.0071}$) and networks
of Kuramoto oscillators ($\mu^{2}_{i}\propto k_{i}^{-1.055}$).
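The following sketch illustrates this scaling numerically for the doubling map used later in Sec. I.4; the tiny noise term is an implementation assumption (the exact map $x\mapsto 2x$ mod 1 collapses to zero in binary floating point because each step discards one mantissa bit), and all sizes are illustrative:

```python
import numpy as np

# Numerical check of the 1/k decay of the local mean field variance, using
# k uncoupled doubling maps x -> 2x mod 1 as "nodes". A tiny noise term
# (1e-12) re-injects the mantissa bits the exact map discards in floating
# point; it does not alter the statistics on the scales probed here.
rng = np.random.default_rng(1)

def local_mean_field_variance(k, T=2000):
    x = rng.random(k)                          # k chaotic node trajectories
    means = np.empty(T)
    for n in range(T):
        x = (2.0 * x + 1e-12 * rng.standard_normal(k)) % 1.0
        means[n] = x.mean()                    # local mean field of a degree-k node
    return means.var()

for k in (10, 100, 1000):
    print(k, local_mean_field_variance(k))
# The variance drops by roughly a factor of 10 per decade in k,
# i.e. mu_i^2 ~ k^{-1}, as the Central Limit Theorem predicts.
```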
If the network has no positive Lyapunov exponents, we still expect to find an
approximately constant local mean field at a node $i$, as long as the nodes
are weakly connected and its degree is sufficiently large. To understand why,
imagine that every node in the network stays close to a CAS pattern, one of
whose coordinates is described by $\sin(\omega_{i}t)$. Without loss of
generality we can assume that every node has the same frequency
$\omega_{i}=\omega$. The time-lag property of the node trajectories, when they
exhibit the CAS pattern, implies that every node is close to
$\sin(\omega_{i}t)$ but with a random time-lag relative to the CAS pattern
(due to the decorrelation between the node trajectories). So, the selected
coordinate can be described by $\sin(\omega t+\phi^{0}_{i})+\delta_{i}(t)$,
where $\phi^{0}_{i}$ is a random initial phase and $\delta_{i}(t)$ is a small
random term describing the distance between the node trajectory and the CAS
pattern. Neglecting the term $\delta_{i}(t)$, the distribution of the sum
$\sum_{i=1}^{k}\sin(\omega t+\phi^{0}_{i})$ converges to a normal distribution
with a variance that depends on the variance of $\sin(\phi^{0}_{i})$.
From the previous considerations, if the degree of some of the nodes tends to
infinity, the variance of the local mean field for those nodes tends to zero
and, in this limit, the residual term $\delta_{i}$ in Eq. (18) is zero and the
local mean field of these nodes is constant. As a consequence, the node is
perfectly locked with the CAS pattern ($\epsilon=0$ in Eq. (20)).
### I.4 CAS in a network of coupled maps
As another example to illustrate how the CAS phenomenon appears in a complex
network, we consider a network of maps whose node dynamics is described by
$F_{i}(x_{i})=2x_{i}$ mod(1). The network, composed of, say, $N=1000$ maps, is
represented by
$x_{i}^{(n+1)}=F_{i}(x_{i}^{(n)})+\sigma\sum_{j=1}^{N}A_{ij}(x_{j}^{(n)}-x_{i}^{(n)})$
mod(1), where the upper index $n$ represents the discrete iteration time and
$A_{ij}$ is the adjacency matrix of a scale-free network. The isolated map has
a constant probability density. When such a map is connected in a network, the
density is no longer constant, but it remains symmetric with an average value
of 0.5. As a consequence, nodes that have a sufficient number of connections
($k\geq 10$) feel a local mean field within, say, $[0.475,0.525]$ (deviating
by 5$\%$ about $C_{i}$=0.5), with $\mu^{2}_{i}\propto k_{i}^{-1}$ (criterion 1),
as shown in Fig. 2(a). Therefore, such nodes have a propensity to present the
CAS phenomenon. In (b) we show a bifurcation diagram of the CAS pattern,
$\Xi_{i}$, obtained from Eq. (19) by using $C_{i}=C=0.5$, as we vary $p_{i}$.
Nodes in this network that have a propensity to present the CAS phenomenon
will present it if, additionally, $p_{i}\in[1,3]$; the CAS pattern is then
described by a period-2 stable orbit (criterion 2). This interval can be
calculated by solving $|2-p_{i}|\leq 1$. In (c) we show the probability
density function of the trajectory of a node that presents the CAS phenomenon.
The density is centered at the position of the period-2 orbit of the CAS
pattern, and for most of the time Eq. (20) is satisfied. The filled circles
are fittings assuming that the probability density is given by a Gaussian
distribution. Therefore, there is a high probability that $\epsilon_{i}$ in
Eq. (20) is small. In (d) we show a plot of the trajectories of two nodes that
have the same degree, equal to 80. We chose nodes which present no time-lag
between their trajectories and the trajectory of the pattern. If there were a
time-lag, the points in (d) would not be aligned only along the diagonal
(identity) line, but would also appear off-diagonal.
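Since Eq. (19) itself is not reproduced in this excerpt, the sketch below assumes the form implied by replacing the local mean field with its expected value $C$ in the network equation, $\Xi^{(n+1)}=[2\Xi^{(n)}+p(C-\Xi^{(n)})]$ mod 1; its multiplier $2-p$ reproduces the stability interval $p\in[1,3]$ quoted above:

```python
import numpy as np
import matplotlib.pyplot as plt

# Bifurcation sketch of the assumed CAS pattern map
#   Xi_{n+1} = [2*Xi_n + p*(C - Xi_n)] mod 1,  with C = 0.5.
# A period-m orbit has multiplier (2 - p)^m, so stability requires
# |2 - p| <= 1, i.e. p in [1, 3], matching the interval in the text.
C = 0.5
for p in np.linspace(0.1, 4.0, 400):
    xi = 0.3
    for _ in range(500):                       # discard the transient
        xi = (2.0 * xi + p * (C - xi)) % 1.0
    orbit = []
    for _ in range(32):                        # record the asymptotic orbit
        xi = (2.0 * xi + p * (C - xi)) % 1.0
        orbit.append(xi)
    plt.plot([p] * len(orbit), orbit, ',k')
plt.xlabel('$p$'); plt.ylabel(r'CAS pattern $\Xi$')
plt.show()
```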
Figure 2: (a) Expected value of the local mean field of the node $i$ against
the node degree $k_{i}$. The error bar indicates the variance ($\mu^{2}_{i}$)
of $\overline{x}_{i}$. (b) A bifurcation diagram of the CAS pattern [Eq. (19)]
considering $C_{i}=0.5$. (c) Probability density function of the trajectory of
a node with degree $k_{i}$=80 (therefore, $p_{i}=\sigma k_{i}=1.3$,
$\sigma=1.3/80$). (d) A return plot considering two nodes ($i$ and $j$) with
the same degree $k_{i}=k_{j}=$80.
### I.5 CAS in the Kuramoto network
An illustration of this phenomenon in a network composed of nodes having
heterogeneous dynamical descriptions and a nonlinear coupling function is
presented for a random network of $N$=1000 Kuramoto oscillators. We rewrite
the Kuramoto network model in terms of the local mean field,
$\overline{\theta}_{i}=\frac{1}{k_{i}}\sum_{j=1}^{N}A_{ij}\theta_{j}$. Using
the coordinate transformation
$\frac{1}{k_{i}}\sum_{j=1}^{N}A_{ij}e^{\mathbb{j}(\theta_{j}-\theta_{i})}=\tilde{r}_{i}e^{\mathbb{j}(\overline{\theta}_{i}-\theta_{i})}$,
the dynamics of node $i$ is described by
$\dot{\theta_{i}}=\omega_{i}+p_{i}\tilde{r}_{i}\sin(\overline{\theta}_{i}-\theta_{i}),$
(30)
where $\omega_{i}$ is the natural frequency of node $i$, taken from a
Gaussian distribution centered at zero and with a standard deviation of 4. If
$\tilde{r}_{i}$=1, all nodes coupled to node $i$ are completely synchronous
with it. If $\tilde{r}_{i}$=0, there is no synchronization between the nodes
that are coupled to node $i$. Since the phase is an unbounded variable,
the CAS phenomenon should be verified through the existence of an
approximately constant local mean field in the frequency variable
$\dot{\theta_{i}}$. If $\overline{\dot{\theta}_{i}}(t)\cong C_{i}$, which means
that $\overline{\theta_{i}}=\overline{\dot{\theta}_{i}}t\cong C_{i}t$, then Eq.
(30) describes a periodic orbit (the CAS pattern), regardless of the values of
$\omega_{i}$, $p_{i}$, and $\tilde{r}_{i}$, since it is an autonomous two-
dimensional system; chaos cannot exist. Therefore, criterion 2 is always
satisfied in a network of Kuramoto oscillators. We have numerically verified
that criterion 1 is satisfied for this network for
$\sigma\leq\sigma^{CAS}(N=1000)$, where $\sigma^{CAS}(N=1000)\cong 0.075$.
Complete synchronization is achieved in this network for
$\sigma\geq\sigma^{CS}=1.25$. So, the CAS phenomenon is observed for coupling
strengths more than 16 times smaller than the one that produces complete
synchronization.
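A minimal sketch of the CAS pattern of Eq. (30) under the ansatz $\overline{\theta}_{i}\cong C_{i}t$ is given below; all parameter values are illustrative placeholders:

```python
import numpy as np

# Euler integration of the CAS pattern equation implied by Eq. (30) with
# thetabar_i(t) ~= C_i * t:
#   dTheta/dt = omega + p*r * sin(C*t - Theta).
# Viewed as an autonomous 2D system in (Theta, C*t), the flow cannot be
# chaotic; the trajectory settles onto a periodic CAS pattern.
# All parameter values below are illustrative.
omega, pr, C = -5.05, 0.1, 0.02
dt, n_steps = 1e-3, 200_000

theta = 0.0
rates = np.empty(n_steps)
for n in range(n_steps):
    t = n * dt
    rates[n] = omega + pr * np.sin(C * t - theta)
    theta += dt * rates[n]

print("long-time mean of dTheta/dt:", rates[n_steps // 2:].mean())
```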
For the following results, we choose $\sigma=0.001$. Since the natural
frequencies have a distribution centered at zero, it is expected that, for
nodes with higher degrees, the local mean field is close to zero (see Fig.
3(a)). In (b), we show the variance of the local mean field of the nodes with
degree $k_{i}$. The fitting produces $\mu^{2}_{i}\propto k_{i}^{-1.055}$
(criterion 1). In (c), we show the relationship between the value of
$p_{i}\tilde{r}_{i}$ and the value of the degree $k_{i}$. In order to
calculate the CAS pattern of a node with degree $k_{i}$, we need to use the
value of $p_{i}\tilde{r}_{i}$ (which is obtained from this figure) and the
measured $C_{i}$ as an input in Eq. (30). We pick two arbitrary nodes, $i$ and
$j$, with degrees $k_{i}=96$ and $k_{j}=56$, respectively, with natural
frequencies $\omega_{i}\approxeq-5.0547$ and $\omega_{j}\approxeq-5.2080$. In
(d), we show that phase synchronization is verified between these two nodes,
with $p/q=\omega_{i}/\omega_{j}$. We also show the phase difference
$\delta\phi_{i}=\theta_{i}-\Xi_{\theta_{i}}$ between the phase of the
trajectory of node $i$ with degree $k_{i}=96$ and the phase of its CAS
pattern, for a time interval corresponding to approximately 2500/$P$ cycles,
where the period of the cycles in node $i$ is calculated by
$P=\frac{2\pi}{5.0547}$. Phase synchronization between nodes $i$ and $j$ is a
consequence of the fact that the phase difference between the nodes and their
CAS patterns is bounded.
Figure 3: Results for $\sigma=0.001$. (a) Expected value of the local mean
field $\overline{\dot{\theta}_{i}}$ of a node with degree $k_{i}$. (b) The
variance $\mu^{2}_{i}$ of the local mean field. (c) Relationship between the
value of $p_{i}\tilde{r}_{i}$ and $k_{i}$. (d) Phase difference
$\Delta\phi_{ij}=\theta_{i}-p/q\theta_{j}$ between two nodes, one with degree
$k_{i}=96$ and the other with degree $k_{j}=56$; the phase difference
$\delta\phi_{i}=\theta_{i}-\Xi_{\theta_{i}}$ between the phases of the
trajectory of the node $i$ with degree $k_{i}=96$ and the phase of its CAS
pattern.
In the thermodynamic limit, when a fully connected network has an infinite
number of nodes, $C_{i}$ does not change as one changes the coupling $\sigma$,
since it only depends on the mean field of the frequency variable
($\overline{\dot{\theta}}$). As a consequence, if there is the CAS phenomenon
and phase synchronization between two nodes with a ratio of $p/q$ for a given
value of $\sigma$, changing $\sigma$ does not change the ratio $p/q$.
Therefore, phase synchronization is stable under alterations in $\sigma$. Phase
synchronization will be rational and stable whenever nodes with different
natural frequencies $\omega_{i}$ become locked to Arnold tongues jensen ;
arnold_tongue induced by the coupling term
$p_{i}\tilde{r}_{i}\sin(\overline{\theta}_{i}-\theta_{i})$.
There is a special solution of Eq. (30) that produces a bounded state in the
variable ${\theta_{i}}$ when the network is completely synchronized to an
equilibrium point. In such a case, $\overline{\theta}_{i}$ becomes constant, and
Eq. (30) has one stable equilibrium
$\theta_{i}=\arcsin{\left(\frac{\omega_{i}}{p_{i}}\right)}$, obtained when
$p_{i}>\omega_{i}$. However, the local mean field then becomes constant due to
complete synchronization and not because the nodes are “decoupled”. These
conditions do not produce the CAS phenomenon.
### I.6 Preserving the CAS pattern in different networks: a way to predict
the onset of the CAS phenomenon in larger networks
Consider two networks, $n_{1}$ and $n_{2}$, whose nodes have equal dynamical
descriptions, the network $n_{1}$ with $N_{1}$ nodes and the network $n_{2}$
with $N_{2}$ nodes ($N_{2}>N_{1}$), and two nodes, $i$ in network $n_{1}$
and $j$ in network $n_{2}$. Furthermore, assume that both nodes have
stable periodic CAS patterns (criterion 2 is satisfied), and assume that the
nodes have sufficiently large degrees such that the local mean field of node
$i$ is approximately equal to that of node $j$. Then the CAS pattern of node
$i$ will be approximately the same as that of node $j$ if
$\sigma^{CAS}(n_{1})k_{i}(n_{1})=\sigma^{CAS}(n_{2})k_{j}(n_{2}).$ (31)
$\sigma^{CAS}(n_{1})$ and $\sigma^{CAS}(n_{2})$ represent the largest coupling
strengths for which the variance of the local mean field of a node decays with
the inverse of the degree of the node (criterion 1 is satisfied) in the
respective networks, and $k_{i}(n_{1})$ and $k_{j}(n_{2})$ are the degrees
of nodes $i$ and $j$, respectively. In other words, the CAS phenomenon
occurs in a network if $\sigma\leq\sigma^{CAS}$.
Therefore, if $\sigma^{CAS}(n_{1})$ is known, $\sigma^{CAS}(n_{2})$ can be
calculated from Eq. (31). In other words, if the CAS phenomenon is observed at
node $i$ for $\sigma\leq\sigma^{CAS}(n_{1})$, the CAS phenomenon will also be
observed at node $j$ for $\sigma(n_{2})\leq\sigma^{CAS}(n_{2})$, where
$\sigma^{CAS}(n_{2})$ satisfies Eq. (31).
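A minimal sketch of this prediction, with placeholder numbers rather than measured values:

```python
# Scaling prediction from Eq. (31):
#   sigma_CAS(n1) * k_i(n1) = sigma_CAS(n2) * k_j(n2).
# The numbers below are illustrative placeholders, not measured values.

def predict_sigma_cas(sigma_cas_n1, k_i_n1, k_j_n2):
    """sigma_CAS(n2) giving node j in n2 the same CAS pattern as node i."""
    return sigma_cas_n1 * k_i_n1 / k_j_n2

# e.g. sigma_CAS = 0.075 for a degree-50 node in n1 implies, for a
# degree-500 node in the larger network n2:
print(predict_sigma_cas(0.075, 50, 500))   # -> 0.0075
```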
Acknowledgment MSB acknowledges the partial financial support of the Northern
Research Partnership. HPR acknowledges the partial financial support of NSFC
Grant 60804040.
## References
* (1) R. Cont and J. P. Bouchaud, Macroecon. Dyn. 4, 170 (2000).
* (2) I. D. Couzin and J. Krause, Adv. Study Behavior 32, 1 (2003).
* (3) D. Helbing, I. Farkas, and T. Vicsek, Nature 407, 487 (2000).
* (4) Y. Kuramoto, in International Symposium on Mathematical Problems in Theoretical Physics, Vol. 39 of Lecture Notes in Physics, edited by H. Araki (Springer Berlin / Heidelberg, ADDRESS, 1975), pp. 420–422.
* (5) J. A. Acebrón et al., Rev. Mod. Phys. 77, 137 (2005).
* (6) H. Fujisaka and T. Yamada, Progress of Theoretical Physics 69, 32 (1983).
* (7) L. M. Pecora and T. Carroll, Phys. Rev. Lett. 80, 2109 (1998).
* (8) E. Steur, I. Tyukin, and H. Nijmeijer, Physica D 238, 2119 (2009).
* (9) E. Steur and H. Nijmeijer, IEEE Trans Circuits I 58, 1358 (2011).
* (10) M. S. Baptista, F. M. M. Kakmeni, and C. Grebogi, Phys. Rev. E 82, 036203 (2010).
* (11) R. Albert and A. L. Barabási, Rev. Mod. Phys. 74, 47 (2002).
* (12) C. Zhou and J. Kurths, Chaos 16, 015104 (2006).
* (13) J. Gomez-Gardenes, Y. Moreno, and A. Arenas, Chaos 21, 016105 (2011).
* (14) A. Pikovsky, M. Rosenblum, and J. Kurths, Synchronization: A Universal Concept in Nonlinear Sciences (Cambridge University Press, ADDRESS, 2001).
* (15) R. Femat and G. Solís-Perales, Phys. Lett. A 262, 50 (1999).
* (16) M. G. Rosenblum, A. S. Pikovsky, and J. Kurths, Phys. Rev. Lett. 78, 4193 (1997).
* (17) N. F. Rulkov, M. M. Sushchik, L. S. Tsimring, and H. D. I. Abarbanel, Phys. Rev. E 51, 980 (1995).
* (18) V. Jirsa, Cognitive Neurodynamics 2, 29 (2008), 10.1007/s11571-007-9030-0.
* (19) C. A. S. Batista et al., Phys. Rev. E 76, 016218 (2007).
* (20) M. Baptista, S. Boccaletti, K. Josic, and I. Leyva, Phys. Rev. E 69, 056228 (2004).
* (21) H. D. I. Abarbanel, N. F. Rulkov, and M. M. Sushchik, Phys. Rev. E 53, 4528 (1996).
* (22) Y.-C. Hung, Y.-T. Huang, M.-C. Ho, and C.-K. Hu, Phys. Rev. E 77, 016202 (2008).
* (23) S. Guan et al., Chaos 19, 013130 (2009).
* (24) A. Hu, Z. Xu, and L. Guo, Chaos 20, 013112 (2010).
* (25) M. Ballerini et al., Proc. of the Nat. Acad. of Sci. 105, 1232 (2008).
* (26) T. Pereira, Phys. Rev. E 82, 1 (2010).
* (27) A. E. Hramov and A. A. Koronovskii, Phys. Rev. E 71, 067201 (2005).
* (28) T. Pereira, M. Baptista, and J. Kurths, Phys. Lett. A 362, 159 (2007).
* (29) R. Zillmer, R. Livi, A. Politi, and A. Torcini, Phys. Rev. E 74, 1 (2006).
* (30) S. Luccioli and A. Politi, Phys. Rev. Lett. 105, 1 (2010).
* (31) T. Vicsek et al., Phys. Rev. Lett. 75, 1226 (1995).
* (32) G. Grégoire, H. Chaté, and Y. Tu, Physica D 181, 157 (2003).
* (33) H. J. Hilhorst, Brazilian J. of Physics 39, 371 (2009).
* (34) M. H. Jensen, P. Bak, and T. Bohr, Phys. Rev. A 30, 1960 (1984).
* (35) V. I. Arnold, AMS Transl. Series 46, 213 (1965).
# Nonlinear force-free coronal magnetic stereoscopy
Iulia Chifu,1,2 Thomas Wiegelmann1, Bernd Inhester1 1 Max-Planck-Institut für
Sonnensystemforschung, Justus-von-Liebig-Weg 3, 37077 Göttingen, Germany;
<EMAIL_ADDRESS>
2 Astronomical Institute of Romanian Academy, Cutitul de Argint 5, Bucharest,
Romania
(Draft version December 20, 2016.)
###### Abstract
Getting insights into the 3D structure of the solar coronal magnetic field
have been done in the past by two completely different approaches: (1.)
Nonlinear force-free field (NLFFF) extrapolations, which use photospheric
vector magnetograms as boundary condition. (2.) Stereoscopy of coronal
magnetic loops observed in EUV coronal images from different vantage points.
Both approaches have their strength and weaknesses. Extrapolation methods are
sensitive to noise and inconsistencies in the boundary data and the accuracy
of stereoscopy is affected by the ability of identifying the same structure in
different images and by the separation angle between the view directions. As a
consequence, for the same observational data, the computed 3D coronal magnetic
field with the two methods do not necessarily coincide. In an earlier work
(Paper I) we extended our NLFFF optimization code by the inclusion of
stereoscopic constrains. The method was successfully tested with synthetic
data and within this work we apply the newly developed code to a combined
data-set from SDO/HMI, SDO/AIA and the two STEREO spacecraft. The extended
method (called S-NLFFF) contains an additional term that monitors and
minimizes the angle between the local magnetic field direction and the
orientation of the 3D coronal loops reconstructed by stereoscopy. We find that
prescribing the shape of the 3D stereoscopically reconstructed loops the
S-NLFFF method leads to a much better agreement between the modeled field and
the stereoscopically reconstructed loops. We also find an appreciable decrease
by a factor of two in the angle between the current and the magnetic field
which indicates the improved quality of the force-free solution obtained by
S-NLFFF.
Sun: corona, Sun: magnetic fields, methods: numerical
## 1 Introduction
Knowledge of the 3D structure of the solar coronal magnetic field is essential
to understand basically all physical processes in the corona. The reason is
that the magnetic field clearly dominates and structures the corona, because
the plasma $\beta$ (ratio of plasma and magnetic pressure) is very small.
Unfortunately direct measurements of the coronal magnetic field are not
routinely available and two distinct methods have been developed to
reconstruct the coronal magnetic field: 1.) extrapolations of photospheric
vector fields into the corona under the force-free assumption (see Wiegelmann
& Sakurai, 2012, for a review) and 2.) Stereoscopy of coronal images (see
Aschwanden, 2011, for a review). Neither method is perfect when applied to
observational data. Photospheric vector magnetograms contain noise and are not
necessarily force-free consistent because of the mixed plasma $\beta$ in the
lower solar atmosphere (Gary, 1990). For a stereoscopic reconstruction from
different vantage points one first has to extract loop-like structures from
EUV images, identify the same loop in both images (the association problem)
and finally perform the 3D stereoscopy (which has a large error at the loop
top for East-West loops). Consequently, the outputs of NLFFF extrapolation and
stereoscopy can be different (see De Rosa et al., 2009, for a comparison of
NLFFF models and stereoscopy).
It is therefore natural to combine photospheric measurements and stereoscopy
to obtain coronal magnetic field models which comply with both data
sets. Several such attempts have been made, although the methods developed so
far use the photospheric line-of-sight field, rather than the full vector
field, as boundary condition. First attempts were made about one and a
half decades ago by Wiegelmann & Neukirch (2002) using linear force-free
fields with SOHO/MDI magnetograms as boundary conditions. In this approach the
linear force-free parameter $\alpha$ was computed by comparing the resulting
fields with 3D loops from dynamic stereoscopy (see Aschwanden et al., 1999).
At that time, well before the launch of STEREO, images from different vantage
points were obtained using the rotation of the Sun, and it was therefore
necessary to limit the method to almost stationary structures. The method was
later extended by Carcedo et al. (2003) to compute the linear force-free
$\alpha$ also directly from coronal images from one viewpoint only. In
subsequent works, still within the limitations of linear force-free models,
projections of the magnetic field loops have been used to solve the
stereoscopic association and ambiguity problems. The method was dubbed
magnetic stereoscopy (see Wiegelmann & Inhester, 2006; Feng et al., 2007, for
details).
Linear force-free fields have their limitations (see, e.g., Wiegelmann, 2008);
in particular, the best-fit value of $\alpha$ for different loops within
one active region differs, and $\alpha$ can even change its sign.
Aschwanden et al. (2012) incorporated a forward-fitting method, which uses
analytic expressions and different values of $\alpha$ along different loops,
thereby approximating a nonlinear force-free field. The method was refined in
Aschwanden (2013a, c), and subsequent code versions allow using 2D loop
projections rather than 3D stereo loops. The method was intensively tested,
compared with extrapolations from vector magnetograms and further refined in a
number of subsequent papers (e.g. Aschwanden & Malanushenko, 2013; Aschwanden,
2013b; Aschwanden et al., 2014; Aschwanden, 2016). It was dubbed the Vertical-
Current Approximation Nonlinear Force-Free Field (VCA-NLFFF) code.
While VCA-NLFFF avoids several problems of magnetic field extrapolations from
photospheric vector magnetograms (e.g., the assumption that the boundary data
are force-free consistent is not necessary), the method uses only the line-of-
sight photospheric magnetic field and not the full vector field.
Malanushenko et al. (2012, 2014) proposed a NLFF field extrapolation method,
called Quasi-Grad-Rubin, which uses the line-of-sight component of the surface
magnetic field and the 2D shapes of the coronal loops from a single image as
constraints for the extrapolation. They tested the method with a semi-
analytic solution and also applied it to observational data.
In this work, we propose a new method which we call the Stereoscopic Nonlinear
Force-Free Field code (S-NLFFF). The method uses both photospheric vector
magnetograms (here from SDO/HMI) and stereoscopically reconstructed 3D loops
as input. Providing all these conditions necessarily over-determines the
boundary conditions, and one cannot find a solution which strictly fulfills
constraints that probably contradict each other. The advantage of our new
method is that the different constraints (force-freeness, photospheric
magnetic field vector, 3D stereo loops) are all considered as terms of one
functional, each weighted with certain Lagrangian multipliers. These free
parameters allow one to specify measurement errors (both in the photospheric
field and in the prescribed 3D loops), and the code iterates towards an
optimal solution in the sense that deviations from the boundary conditions are
allowed in regions with a substantial measurement error (photospheric field
vector) or reconstruction error (stereo loops). The method was described and
tested with synthetic data in Chifu et al. (2015) (Paper I).
The paper is outlined as follows: in section 2 we give a short description of
the methods used for the reconstruction of the 3D coronal loops and of the 3D
magnetic field, in section 3 we present the data used for the reconstructions,
in section 4 we show the 3D reconstruction, in section 5 we present the
results and in section 6 we discuss the results.
## 2 Methods
### 2.1 Multiview B-spline Stereoscopic Reconstruction (MBSR)
The 3D shape of solar loop-like structures (e.g. coronal loops, prominences,
the leading edge of coronal mass ejections) can be obtained using stereoscopic
reconstruction. Two view directions are sufficient for a 3D reconstruction
from an ideal data set. The use of more views brings more accuracy to the
reconstruction if the data are noisy. The main steps in a stereoscopic
reconstruction are: the identification of the object to be reconstructed in
all of the available views; matching the object by tie-pointing; and the
reconstruction itself (Inhester, 2006). As a final step, the stereoscopically
reconstructed points of the loop-like structure often need to be smoothed
by fitting a polynomial or a spline curve (Chifu, 2016).
The main idea of the MBSR method is the reconstruction of an entire loop-like
structure in one go. Instead of calculating pairwise reconstructions from
multiple views, which in the end need to be averaged, our code is able to
reconstruct tie-pointed curves from two or more views directly. The tie-points
do not have to be related by a common epipolar coordinate and can therefore be
used directly in more than two views. The method is designed to yield a unique
3D B-spline as an approximation to the reconstructed loop curve, the
projections of which optimally match all tie-points in all images. The local
error depends only on the projected distances of the tie-point positions to
the final spline curve (Chifu, 2016).
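As an illustration of the core geometric step, the sketch below triangulates a single tie-point from two (or more) lines of sight by linear least squares; the full MBSR B-spline fit of Chifu (2016) is not reproduced, and the spacecraft geometry is hypothetical:

```python
import numpy as np

# Least-squares triangulation of one tie-point from two or more lines of
# sight: minimize the summed squared distances to all rays. This is only
# the core geometric step; the MBSR fit of a single 3D B-spline to all
# tie-points in all images (Chifu 2016) is not reproduced. The viewpoint
# positions below are hypothetical.

def triangulate(origins, directions):
    """origins, directions: (n_views, 3); returns the closest 3D point."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)    # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

origins = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
target = np.array([0.2, 0.3, 0.1])
print(triangulate(origins, target - origins))   # recovers ~target
```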
### 2.2 Stereoscopic-Nonlinear Force-Free Field extrapolation (S-NLFFF)
The modeling of the magnetic field in the solar corona is possible under
certain assumptions. The plasma $\beta$ model of Gary (2001) shows that in the
corona the magnetic pressure dominates over the plasma pressure, and gravity
effects as well as the kinematic ram pressure of plasma flows are small
(Wiegelmann & Sakurai, 2012). In this approach, called the force-free field
assumption, the Lorentz force vanishes and the magnetic field has to fulfill
the nonlinear equation ($\mathbf{j}\times\mathbf{B}=0$) together with the
solenoidal condition ($\nabla\cdot\mathbf{B}=0$).
To model the coronal magnetic field using nonlinear force-free field
extrapolations, one needs surface observations of all three components of the
magnetic field as boundary condition. We solve the force-free equations with
the help of an optimization approach, which has originally been proposed by
Wheatland et al. (2000) and extended by Wiegelmann (2004); Wiegelmann &
Inhester (2010). Recently, the NLFFF optimization method was extended by
constraining the magnetic field to be aligned to the 3D coronal loops
stereoscopically reconstructed from EUVI images (Chifu et al., 2015).
The essential approach of the extended S-NLFFF method is to minimize a scalar
cost function ($\mathrm{L_{tot}}$) which consists of a number of terms
quantifying constraints the final solution should satisfy. The terms of the
functional are
$\displaystyle\text{L}_{\textit{1}}=\int_{V}w_{f}\frac{|(\nabla\times\mathbf{B})\times\mathbf{B}|^{2}}{B^{2}}\;d^{3}r,$
(1)
$\displaystyle\text{L}_{\textit{2}}=\int_{V}w_{f}|\nabla\cdot\mathbf{B}|^{2}\;d^{3}r,$
(2)
$\displaystyle\text{L}_{\textit{3}}=\int_{S}(\mathbf{B}-\mathbf{B}_{obs})\cdot\mathrm{diag(\sigma^{-2}_{q})}\cdot(\mathbf{B}-\mathbf{B}_{obs})\;d^{2}r,$
(3)
$\displaystyle\text{L}_{\textit{4}}=\sum_{i}\int_{\mathbf{c}_{i}}\frac{1}{\sigma^{2}_{c_{i}}}{|\mathbf{B}\times\mathbf{t}_{i}|^{2}}\;ds,$
(4) $\displaystyle\text{where}\quad\mathbf{t}_{i}=\frac{d\mathbf{c}_{i}}{ds}.$
(5)
The function to be minimized is
$\mathrm{L_{tot}}=\sum_{n}\xi_{n}L_{n},$ (6)
where $\xi_{n}$ are regularization weights. Our experience from Chifu et al.
(2015) suggests $\xi_{n}=1$ as an acceptable choice for the weights.
The computational box has an inner physical domain surrounded by a buffer zone
on the top and lateral boundaries. The force-free and divergence-free
conditions are satisfied if the first two terms (Eq. 1 and 2) are minimized to
zero. $w_{f}$ is a boundary weight function which is set to unity in the
physical domain and it decreases monotonically to zero towards the outer
buffer zone (see Wiegelmann, 2004, for more details). The third term (Eq. 3)
minimizes the differences between the observed and modeled magnetic field at
the bottom boundary, while the fourth term (Eq. 4) minimizes the angles
between the modeled magnetic field and the tangents of the stereoscopically
reconstructed loops. In Eq. 3, $\sigma_{q}(\mathbf{r})$ are estimated
measurement errors for the three field components $\textit{q}=x,y,z$ on $S$
(see Tadesse et al., 2011, for more details). In Eq. 4, $\sigma_{c_{i}}(s)$ is
a relative measure of the estimated error of the tangent direction
$\mathbf{t}_{i}(s)$ along the loop $i$. A detailed description of the NLFFF
optimization method (the L${}_{\textit{1}}$, L${}_{\textit{2}}$,
L${}_{\textit{3}}$ terms) can be found in Wheatland et al. (2000); Wiegelmann
(2004); Wiegelmann & Inhester (2010), and a description of the S-NLFFF
extension (the L${}_{\textit{4}}$ term) can be found in Chifu et al. (2015).
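For concreteness, the sketch below evaluates the volume terms L${}_{\textit{1}}$ and L${}_{\textit{2}}$ on a gridded field with centered differences and unit weight $w_{f}=1$; it is an illustration of the minimized quantities, not the actual optimization code:

```python
import numpy as np

# Evaluation of the volume terms L1 (Lorentz force) and L2 (divergence)
# on a gridded field B[3, nx, ny, nz] with grid spacing dx and unit weight
# w_f = 1, using centered differences. This only illustrates the quantities
# being minimized; the iterative update of B is not reproduced here.

def curl(B, dx=1.0):
    dBx = np.gradient(B[0], dx)   # [dBx/dx, dBx/dy, dBx/dz]
    dBy = np.gradient(B[1], dx)
    dBz = np.gradient(B[2], dx)
    return np.array([dBz[1] - dBy[2], dBx[2] - dBz[0], dBy[0] - dBx[1]])

def functional_terms(B, dx=1.0):
    J = curl(B, dx)
    lorentz = np.cross(J, B, axis=0)                        # (curl B) x B
    B2 = np.maximum((B ** 2).sum(axis=0), 1e-30)            # guard B -> 0
    L1 = ((lorentz ** 2).sum(axis=0) / B2).sum() * dx ** 3
    divB = sum(np.gradient(B[i], dx)[i] for i in range(3))  # div B
    L2 = (divB ** 2).sum() * dx ** 3
    return L1, L2

B = np.random.default_rng(2).normal(size=(3, 32, 32, 32))   # toy field
print(functional_terms(B))
```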
## 3 Observational data
One of the criteria for selecting the data set was the separation angle
between the two STEREO spacecraft. The stereoscopic reconstruction requires a
separation angle between the viewpoints larger than zero degrees and less
than 180∘. For the selected event, the separation angle with respect to the
center of the Sun was approximately 147∘ between the two STEREO spacecraft,
77∘ between STEREO A and SDO, and 70∘ between STEREO B and SDO (Fig. 1).
Figure 1: Images of the Sun with the active region AR 11087 from three
different views observed on 2010 July 15 at 08:14 UT in 171 Å wavelength. The
red rectangle marks the active region. In the left panel we display the
EUVI/STEREO B image, in the middle panel, the AIA/SDO image and in the right
panel, the EUVI/STEREO A image.
Figure 2: HMI/SDO vector magnetogram observed on 2010 July 15 at 08:14 UT.
Another selection criterion was the position of the active region on the solar
surface as seen from the SDO spacecraft. As the accuracy of the photospheric
field measurements becomes strongly reduced towards the limb, we chose ARs
close to the disk center as seen from SDO (Fig. 1, middle panel). A data set
which fulfills these criteria is the active region AR 11087 observed on 2010
July 15. We performed the 3D stereoscopic reconstruction using simultaneous
extreme ultraviolet ($\lambda$ = 171 Å) images recorded by the EUVI telescopes
onboard STEREO A and B and by the AIA telescope onboard SDO. The EUVI
telescope has a FOV of up to 1.7 R⊙ ($\backsimeq$ 1182.7 Mm) and a spatial
sampling of 1.6 arcsec pixel${}^{-1}$ (Wuelser et al., 2004). AIA onboard SDO
takes EUV images with a FOV of 1.5 R⊙ and 0.6 arcsec pixel${}^{-1}$ spatial
sampling every 12 seconds (Lemen et al., 2012). For the NLFFF extrapolation we
used vector magnetograms provided by HMI/SDO (Fig. 2).
## 4 Data reconstruction
### 4.1 Two and three view stereoscopic reconstruction
One of the most important steps in 3D stereoscopic reconstruction is the
correct identification and matching of the objects to be reconstructed (e.g.
coronal loops). In an ideal case, the objects have to be clearly visible and
therefore easily identifiable.
In many solar EUV observations the objects are not traceable in a
straightforward manner. According to Stenborg et al. (2008), the major reasons
for poor visibility in the data are the low contrast between the coronal
structures and the background and the multiscale nature of the coronal
features. Another reason is that in the EUV images we see the line-of-sight
(LOS) integration of the radiation emitted by all the loops in a particular
wavelength band. A variety of data processing procedures exists to enhance the
visibility of the loop structures (Stenborg et al., 2008). We found the best
method for our data to be the noise adaptive fuzzy equalization (NAFE) method
developed by Druckmüller (2013), which is based on histogram equalization and
unsharp masking. We applied this method to all three EUV images used in our 3D
reconstructions.
Figure 3: Projection of the 3D stereoscopically reconstructed loops
overplotted on the STEREO B (left panel), SDO (middle panel) and STEREO A
(right panel) images. The magenta loops are reconstructed using all three
spacecraft, the green loops are reconstructed using STEREO A and SDO, and the
light blue loops are reconstructed using STEREO B and SDO.
While some of the visibility problems can be resolved with image processing
techniques, other problems such as saturated pixels cannot. In the data from
STEREO A and B, patches of saturated pixels limited the identification and
matching possibilities required by the reconstruction.
The configuration of the three spacecraft does not provide images with a
visibility of the entire AR from all three vantage points simultaneously. Even
though the data captured by the spacecraft fulfill our selection criteria,
the positions of the three telescopes limit the number of loops which we can
identify, trace and reconstruct. While the SDO satellite (see Fig. 1, middle
panel) has a full view of the AR, the STEREO A (see Fig. 1, right panel) and B
(see Fig. 1, left panel) spacecraft were viewing a limited common area. In
spite of these difficulties we could identify ten loops. Three loops were
traced in all three images, three more loops in STEREO A and SDO, and four
loops in STEREO B and SDO.
In Fig. 3 we show the projection of the 3D stereoscopically reconstructed
loops together with their tie-points (the black crosses) on each of the EUV
images. In Fig. 4 we present the 3D configuration of the Sun, represented as a
gray sphere, and the direction of the three spacecraft together with the 3D
reconstructed loops. The red loops are reconstructed using simultaneously all
three spacecraft, the blue loops are reconstructed using the data from STEREO
A and SDO while the green loops are based on the data from STEREO B and SDO.
Figure 4: Solar toy model with the 3D reconstructed loops on top. The blue
segments represent the directions towards the three spacecraft.
### 4.2 S-NLFFF reconstruction
The S-NLFFF reconstruction uses as input the photospheric vector-magnetograms
provided by SDO/HMI and the 3D reconstructed loops described above. The HMI
vector-magnetograms are mapped from the Helioprojective Cartesian to the
Carrington Heliographic - Cylindrical Equal Area (CRLT/CRLN-CEA) coordinate
system (Bobra et al., 2014) in which we compute the 3D field reconstruction.
The stereoscopically reconstructed loops were first calculated in HEEQ
(Heliospheric Earth EQuatorial) coordinates and then mapped to the Carrington
Heliographic coordinate system.
Figure 5: Plot of the 3D stereoscopically reconstructed loops inside the
S-NLFFF computation box. At the bottom of the box, the radial component of the
magnetic field is displayed.
The computational box is 480$\times$272$\times$240 (pixels)${}^{3}$, which is
the equivalent of 350$\times$198$\times$175 (Mm)${}^{3}$. In Fig. 5 we show a
3D plot of the radial component of the magnetic field, color-coded at the
bottom surface, along with the 3D stereoscopically reconstructed loops above.
The NLFF field reconstructions are calculated iteratively from an initial
magnetic field until the field has relaxed to a force-free state. In order to
find out how the final solution depends on the initial field and also to
determine the impact of the loop data, we present alternative solution
strategies.
Typically, the initial field for the iteration is the potential field
$\mathbf{B_{\text{pot}}}$ determined in the entire box from the normal
component of the surface field. As an alternative, we iterate
$\mathbf{B_{\text{pot}}}$ first on a coarse 240$\times$136$\times$120 grid and
map the force-free field thus obtained from the coarse to the final
480$\times$272$\times$240 grid (the so-called multiscale approach). This
interpolated force-free field is then used as the initial field for the final
iteration. For the coarse-grid iteration, the boundary data are resampled
accordingly from the original vector-magnetogram data. To see the effect of
the loop data, we switch the loop constraint on at different stages of the
iteration.
We present here the results from five different setups:
Setup 1
Starting from $\mathbf{B_{\text{pot}}}$ we iterate the force-free solution
using the NLFFF on the final 480$\times$272$\times$240 grid without loop data.
This is the conventional approach.
Setup 2
Starting from $\mathbf{B_{\text{pot}}}$ we use S-NLFFF on the final grid,
i.e., we include the loop data from the beginning of the iterations.
Setup 3
We use the solution from Setup 1 as initial field for an iteration with
S-NLFFF.
Setup 4
We start from $\mathbf{B_{\text{pot}}}$ on the coarse grid and interpolate the
coarse-grid force-free solution as initial field
($\mathbf{B^{coarse}_{\text{NLFFF}}}$) for NLFFF on the final grid. No loop
data is used.
Setup 5
We use the interpolated coarse-grid field from Setup 4 as initial field
($\mathbf{B^{coarse}_{\text{NLFFF}}}$) for S-NLFFF.
The natural approach is to apply the S-NLFFF method on the fine grid
(Setup 2) and to evaluate the terms L${}_{\textit{1}}$..L${}_{\textit{4}}$
(Eq. 1..4) and the angles between the magnetic field and the tangents of the
3D loops. We apply the S-NLFFF method in Setups 2, 3 and 5 to see which one
provides the best solution. We run Setup 3 to see if the force-freeness is
maintained while, at the same time, the angles are minimized. Metcalf et al.
(2008) reported that the solution of the multiscale version of the NLFFF
converges to a lower L (Eq. 6) value when compared with the single-grid
solution. For this reason we considered the multiscale approach for the NLFFF
and S-NLFFF methods.
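A minimal sketch of this coarse-to-fine initialization is shown below; the `relax` placeholder stands in for the actual NLFFF/S-NLFFF iteration, which is not reproduced:

```python
import numpy as np
from scipy.ndimage import zoom

# Coarse-to-fine (multiscale) initialization, as in Setups 4 and 5:
# relax on a 240x136x120 grid, interpolate by a factor of 2 to the
# 480x272x240 grid, then relax again. `relax` is a placeholder for the
# actual NLFFF/S-NLFFF iteration, which is not reproduced here.

def relax(B):
    return B   # placeholder for the force-free relaxation loop

B_coarse = relax(np.zeros((3, 240, 136, 120)))   # coarse-grid solution
B_init = np.stack([zoom(c, 2, order=1) for c in B_coarse])
print(B_init.shape)                               # (3, 480, 272, 240)
B_fine = relax(B_init)                            # final fine-grid iteration
```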
## 5 Results
We calculated the angles ($\theta_{\textbf{Bt}_{\textit{i,j}}}$) between the
magnetic field ($\mathbf{B}_{\text{NLFFF}}$) obtained with the NLFFF
optimization method and the tangents ($\mathbf{t_{\textit{i,j}}}$, j=1…10,
i=1…100) of the 3D stereoscopically reconstructed loops (see Figs. 6 and 7).
The angles are calculated for each position $i$ along the $j^{\text{th}}$
loop. Different colors represent different loops. The misalignment angles
between $\mathbf{B}_{\text{NLFFF}}$ and $\mathbf{t_{\textit{i,j}}}$ are on
average 20∘ and reach a maximum of approximately 60∘ (see Fig. 6). The angles
in Fig. 6 are obtained using $\mathbf{B}_{\text{NLFFF}}$ from Setup 1, but the
same profile is obtained using $\mathbf{B}_{\text{NLFFF}}$ from Setup 4.
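For reference, a minimal sketch of how such misalignment angles can be computed from a loop polyline and a field-interpolation routine (here a hypothetical placeholder `B_at`):

```python
import numpy as np

# Misalignment angle theta_Bt at each sampled position along one loop.
# `B_at` is a hypothetical placeholder for interpolating the gridded field
# at a 3D loop point; the unsigned angle is used, as field polarity is
# irrelevant to the alignment.

def misalignment_angles(loop_points, B_at):
    tangents = np.gradient(loop_points, axis=0)        # t_i = dc/ds
    angles = np.empty(len(loop_points))
    for n, (r, t) in enumerate(zip(loop_points, tangents)):
        B = B_at(r)
        c = abs(B @ t) / (np.linalg.norm(B) * np.linalg.norm(t))
        angles[n] = np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))
    return angles

# Toy example: a straight loop in a uniform oblique field -> 45 deg:
loop = np.linspace([0.0, 0.0, 0.0], [10.0, 0.0, 0.0], 100)
print(misalignment_angles(loop, lambda r: np.array([1.0, 1.0, 0.0]))[:3])
```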
Figure 6: Angles between the NLFF magnetic field and the tangent of the 3D
loops obtained as a result of Setup 1. Different colors represent different
loops.
By applying the S-NLFFF method, the angles
$\theta_{\mathbf{Bt_{\textit{i,j}}}}$ between $\mathbf{B}_{\text{S-NLFFF}}$
and $\mathbf{t}_{\textit{i,j}}$ were reduced by a factor of more than 20, as
shown in Fig. 7. For the calculation of the final angles
$\theta_{\mathbf{Bt_{\textit{i,j}}}}$ in Fig. 7 we used
$\mathbf{B}_{\text{S-NLFFF}}$ from Setup 5. Nevertheless, Fig. 7 is also
representative of the angles between the 3D loop tangents and the
$\mathbf{B}_{\text{S-NLFFF}}$ obtained from Setups 2 and 3.
Figure 7: The final angles between the S-NLFFF extrapolated magnetic field and
the tangents of the 3D loops obtained as a result of Setup 5. Different colors
represent different loops.
With the S-NLFFF method we could recover a magnetic field which is closer to
the force-free condition. In Table 1 we present the values of the terms of
the functional (see the detailed description of the terms in Wiegelmann
(2004); Wiegelmann & Inhester (2010); Chifu et al. (2015)), namely the force-
free term ($L_{\textit{1}}$), the divergence of the magnetic field
($L_{\textit{2}}$), the closeness to the bottom-boundary observations
($L_{\textit{3}}$) and the closeness to the coronal observables
($L_{\textit{4}}$). The residual values of the functional terms when applying
the S-NLFFF method are lower than those obtained with the NLFFF method for
Setups 2 and 3, but slightly larger for Setup 5.
Table 1: The residual values of each of the functional terms

Configuration | No. grids | Initialization | Method | $L_{1}$ | $L_{2}$ | $L_{3}$ | $L_{4}$
---|---|---|---|---|---|---|---
Setup 1 | one | B${}_{\text{pot}}$ | NLFFF | $5.2$ | $3.2$ | $12.9$ | $-$
Setup 2 | one | B${}_{\text{pot}}$ | S-NLFFF | $4.6$ | $2.7$ | $12.2$ | $0.0011$
Setup 3 | one | B${}_{\text{NLFFF}}$ | S-NLFFF | $4.9$ | $3.0$ | $11.5$ | $0.0041$
Setup 4 | two | B${}_{\text{pot}}$ | NLFFF | $3.7$ | $2.2$ | $12.2$ | $-$
Setup 5 | two | B${}^{coarse}_{\text{NLFFF}}$ | S-NLFFF | $4.0$ | $2.3$ | $11.9$ | $0.0007$
We evaluated the angles ($\phi_{\mathbf{JB}}$) between the magnetic field and
the current for each loop, along the loop. We derived the $\phi_{\mathbf{JB}}$
angles between the current and the potential, NLFF and S-NLFF fields. To
compare the three cases, we calculated the root mean square (RMS) of the
angles $\phi_{\mathbf{JB}}$ for each loop. This is a critical test because the
current $\mathbf{J}$ is derived by differentiation from the magnetic field
$\mathbf{B}$, which amplifies the noise, especially where the field strength
is low. In Fig. 8 we show the RMS of $\phi_{\mathbf{JB}}$ for each loop. Here
we present the angles derived using the $\mathbf{B}_{\text{NLFFF}}$ obtained
as a solution of Setup 1 and the $\mathbf{B}_{\text{S-NLFFF}}$ obtained as a
solution of Setup 5. The behavior of $\phi_{\mathbf{JB}}$ in Fig. 8 is also
representative of the angles derived using the NLFFF solution of Setup 4 and
the S-NLFFF solutions of Setups 2 and 3. The current is more aligned with
the magnetic field after using the reconstructed 3D loops as a constraint in
the S-NLFFF method.
Figure 8: Root mean square of the angles between the current and the potential
field (orange rhombus), the extrapolated NLFFF (magenta squares) and the
extrapolated S-NLFFF (green triangles) for each of the 3D loops.
## 6 Discussions
De Rosa et al. (2009) compared different coronal NLFFF models with EUV coronal
loops observations. The conclusion of the study was that the misalignment
angles between the extrapolated NLFF field and the 3D stereoscopically
reconstructed loops reaches a maximum of approximately 45∘. In agreement with
the results of De Rosa et al. (2009) we derived similar angles between the
magnetic field ($\mathbf{B}_{\text{NLFFF}}$) obtained with the NLFFF
optimization method (for Setup 1 and 4) and the tangents $\mathbf{t}_{i,j}$ of
the 3D stereoscopically reconstructed loops (see Fig. 6).
In a previous paper (Chifu et al., 2015) we presented and tested the S-NLFFF
method with semi-analytic data. The results of the tests predict that the
S-NLFFF method is capable of reducing the
$\theta_{\mathbf{Bt}_{\textit{i,j}}}$ angles below 2∘. In all of the cases
studied in this paper, the S-NLFFF method was able to reduce the angles
even further (see Fig. 7).
In an ideal case, the residual values of the functional terms
L${}_{\textit{1}}$..L${}_{\textit{4}}$ (Eq. 1…4) would be zero. Since the
observational data contain errors and the magnetic field model is based on
certain assumptions, the residual values cannot be minimized exactly to zero.
The smaller the residual value L${}_{\textit{1}}$ (Eq. 1), the closer the
field is to the force-free condition. For Setups 2 and 3, S-NLFFF could
bring the magnetic field closer to a force-free solution when compared with
the reference field (Setup 1).
From the evaluation of the root mean square angles ($\phi_{\mathbf{JB}}$)
between the current and the magnetic field, we see an improvement in the
average alignment for all three S-NLFFF setups. The large values of the angle
between the force-free magnetic field and the current are probably due to the
large uncertainties in the horizontal vector field component, in particular in
regions of weak magnetic field. Even for Setup 5, for which the residual
values of the force-free terms did not improve when applying S-NLFFF, the
average angle along the loop between the field and the current became smaller.
Overall, we can say that the new method, which includes the constraints from
the corona, improves not only the agreement between modeling and observations
but also the force-freeness of the obtained magnetic field.
For most of the 3D stereoscopically reconstructed loops used as constraints on
the magnetic field, the S-NLFFF method is able to reduce the angles between
the magnetic field and the 3D loop tangents below 2∘. Nevertheless, there are
a few loops for which the angles between $\mathbf{B}_{\text{S-NLFFF}}$ and
$\mathbf{t}_{i}$ remain large after the S-NLFFF treatment. These loops have a
deviation of $\gtrsim$ 65∘ when compared with the NLFFF model field (Setups 1
and 4). When this field was used as the initial condition for S-NLFFF
(Setup 3), the average angle could be reduced by a factor of 2-10 but not
below 5∘.
In this paper we present the performance of the S-NLFFF method using ten 3D
coronal loops as constraints for modeling the coronal magnetic field. For
these ten loops we show that the S-NLFFF method obtains a good agreement
between the modeled coronal magnetic field and the coronal loop observations.
The S-NLFFF method also obtains a much better alignment between the current
and the magnetic field, which indicates that we obtain a better field in
terms of force-freeness. The residual value of the force-free integral (Eq.
1) decreases only slightly. The reason is probably that the few loops we
included improve the field in their local environment but have limited impact
on metrics which average over a much larger volume. We believe that more
loops, occupying a larger fraction of the computational box, would have a
larger impact on the force-freeness and would also improve the quality
measures over the entire box.
Data are courtesy of NASA/SDO and the AIA and HMI science teams. The authors
thank the STEREO SECCHI consortia for supplying their data. STEREO is a
project of NASA. I.C. is grateful to Hans-Peter Doerr for helpful suggestions.
This work was supported by DLR fund 50 OC 1301.
## References
* Aschwanden (2011) Aschwanden, M. J. 2011, LRSP, 8
* Aschwanden (2013a) —. 2013a, Sol. Phys., 287, 323
* Aschwanden (2013b) —. 2013b, Sol. Phys., 287, 369
* Aschwanden (2013c) —. 2013c, ApJ, 763, 115
* Aschwanden (2016) —. 2016, ApJS, 224, 25
* Aschwanden & Malanushenko (2013) Aschwanden, M. J., & Malanushenko, A. 2013, Sol. Phys., 287, 345
* Aschwanden et al. (1999) Aschwanden, M. J., Newmark, J. S., Delaboudinière, J.-P., et al. 1999, ApJ, 515, 842
* Aschwanden et al. (2014) Aschwanden, M. J., Sun, X., & Liu, Y. 2014, ApJ, 785, 34
* Aschwanden et al. (2012) Aschwanden, M. J., Wuelser, J.-P., Nitta, N. V., et al. 2012, ApJ, 756, 124
* Bobra et al. (2014) Bobra, M. G., Sun, X., Hoeksema, J. T., et al. 2014, Solar Physics, 289, 3549
* Carcedo et al. (2003) Carcedo, L., Brown, D. S., Hood, A. W., Neukirch, T., & Wiegelmann, T. 2003, Sol. Phys., 218, 29
* Chifu (2016) Chifu, I. 2016, Multi-spacecraft analysis of the solar coronal plasma, PhD thesis
* Chifu et al. (2015) Chifu, I., Inhester, B., & Wiegelmann, T. 2015, A&A, 577, A123
* De Rosa et al. (2009) De Rosa, M. L., Schrijver, C. J., Barnes, G., et al. 2009, ApJ, 696, 1780
* Druckmüller (2013) Druckmüller, M. 2013, ApJS, 207, 25
* Feng et al. (2007) Feng, L., Inhester, B., Solanki, S. K., et al. 2007, ApJ, 671, L205
* Gary (1990) Gary, G. A. 1990, Mem. Soc. Astron. Italiana, 61, 457
* Gary (2001) —. 2001, Sol. Phys., 203, 71
* Inhester (2006) Inhester, B. 2006, ArXiv Astrophysics e-print, arXiv:astro-ph/0612649
* Lemen et al. (2012) Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, Sol. Phys., 275, 17
* Malanushenko et al. (2014) Malanushenko, A., Schrijver, C. J., DeRosa, M. L., & Wheatland, M. S. 2014, ApJ, 783, 102
* Malanushenko et al. (2012) Malanushenko, A., Schrijver, C. J., DeRosa, M. L., Wheatland, M. S., & Gilchrist, S. A. 2012, ApJ, 756, 153
* Metcalf et al. (2008) Metcalf, T. R., De Rosa, M. L., Schrijver, C. J., et al. 2008, Sol. Phys., 247, 269
* Stenborg et al. (2008) Stenborg, G., Vourlidas, A., & Howard, R. A. 2008, ApJ, 674, 1201
* Tadesse et al. (2011) Tadesse, T., Wiegelmann, T., Inhester, B., & Pevtsov, A. 2011, A&A, 527, A30
* Wheatland et al. (2000) Wheatland, M. S., Sturrock, P. A., & Roumeliotis, G. 2000, ApJ, 540, 1150
* Wiegelmann (2004) Wiegelmann, T. 2004, Sol. Phys., 219, 87
* Wiegelmann (2008) —. 2008, Journal of Geophysical Research (Space Physics), 113, A03S02
* Wiegelmann & Inhester (2006) Wiegelmann, T., & Inhester, B. 2006, Sol. Phys., 236, 25
* Wiegelmann & Inhester (2010) —. 2010, A&A, 516, A107
* Wiegelmann & Neukirch (2002) Wiegelmann, T., & Neukirch, T. 2002, Sol. Phys., 208, 233
* Wiegelmann & Sakurai (2012) Wiegelmann, T., & Sakurai, T. 2012, LRSP, 9, 5
* Wuelser et al. (2004) Wuelser, J.-P., Lemen, J. R., Tarbell, T. D., et al. 2004, in Proc. SPIE, Vol. 5171, Telescopes and Instrumentation for Solar Astrophysics, ed. S. Fineschi & M. A. Gummin, 111–122
# A Green Bank Telescope search for narrowband technosignatures between 1.1 –
1.9 GHz during 12 Kepler planetary transits
Sofia Z. Sheikh SETI Institute, 339 Bernardo Avenue, Suite 200, Mountain View,
CA 94043, USA Berkeley SETI Research Center, University of California,
Berkeley, CA 94720, USA Penn State Extraterrestrial Intelligence Center, 525
Davey Laboratory, The Pennsylvania State University, University Park, PA,
16802, USA Shubham Kanodia Earth and Planets Laboratory, Carnegie Institution
for Science, 5241 Broad Branch Road, NW, Washington, DC 20015, USA Department
of Astronomy & Astrophysics, 525 Davey Laboratory, The Pennsylvania State
University, University Park, PA, 16802, USA Center for Exoplanets and
Habitable Worlds, 525 Davey Laboratory, The Pennsylvania State University,
University Park, PA, 16802, USA Penn State Extraterrestrial Intelligence
Center, 525 Davey Laboratory, The Pennsylvania State University, University
Park, PA, 16802, USA Emily Lubar Department of Astronomy, University of Texas
at Austin, Austin, TX, USA William P. Bowman Department of Astronomy &
Astrophysics, 525 Davey Laboratory, The Pennsylvania State University,
University Park, PA, 16802, USA Institute for Gravitation and the Cosmos, The
Pennsylvania State University, University Park, PA 16802, USA Caleb I. Cañas
Department of Astronomy & Astrophysics, 525 Davey Laboratory, The Pennsylvania
State University, University Park, PA, 16802, USA Center for Exoplanets and
Habitable Worlds, 525 Davey Laboratory, The Pennsylvania State University,
University Park, PA, 16802, USA Christian Gilbertson Department of Astronomy
& Astrophysics, 525 Davey Laboratory, The Pennsylvania State University,
University Park, PA, 16802, USA Center for Exoplanets and Habitable Worlds,
525 Davey Laboratory, The Pennsylvania State University, University Park, PA,
16802, USA Mariah G. MacDonald Center for Exoplanets and Habitable Worlds,
525 Davey Laboratory, The Pennsylvania State University, University Park, PA,
16802, USA Department of Physics, The College of New Jersey, 2000 Pennington
Road, Ewing, NJ 08628, USA Jason Wright Department of Astronomy &
Astrophysics, 525 Davey Laboratory, The Pennsylvania State University,
University Park, PA, 16802, USA Center for Exoplanets and Habitable Worlds,
525 Davey Laboratory, The Pennsylvania State University, University Park, PA,
16802, USA Penn State Extraterrestrial Intelligence Center, 525 Davey
Laboratory, The Pennsylvania State University, University Park, PA, 16802, USA
David MacMahon Radio Astronomy Lab, University of California, Berkeley, CA
94720, USA Berkeley SETI Research Center, University of California, Berkeley,
CA 94720, USA Steve Croft Radio Astronomy Lab, University of California,
Berkeley, CA 94720, USA Berkeley SETI Research Center, University of
California, Berkeley, CA 94720, USA SETI Institute, 339 Bernardo Avenue,
Suite 200, Mountain View, CA 94043, USA Danny Price Berkeley SETI Research
Center, University of California, Berkeley, CA 94720, USA International
Centre for Radio Astronomy Research, 1 Turner Ave, Bentley WA 6102, Australia
Andrew Siemion Berkeley SETI Research Center, University of California,
Berkeley, CA 94720, USA SETI Institute, 339 Bernardo Avenue, Suite 200,
Mountain View, CA 94043, USA Department of Physics and Astronomy, University
of Manchester, UK Institute of Space Sciences and Astronomy, University of
Malta Jamie Drew Breakthrough Initiatives, NASA Research Park, Moffett
Field, CA 94035, USA S. Pete Worden Breakthrough Initiatives, NASA Research
Park, Moffett Field, CA 94035, USA Elizabeth Trenholm University of
Greenwich, School of Computing and Mathematical Sciences, Park Row SE10 9LS
London, UK Sofia Sheikh<EMAIL_ADDRESS>
(Accepted November 24, 2022)
###### Abstract
A growing avenue for determining the prevalence of life beyond Earth is to
search for “technosignatures” from extraterrestrial intelligences/agents.
Technosignatures require significant energy to be visible across interstellar
space and thus intentional signals might be concentrated in frequency, in
time, or in space, to be found in mutually obvious places. Therefore, it could
be advantageous to search for technosignatures in parts of parameter space
that are mutually-derivable to an observer on Earth and a distant transmitter.
In this work, we used the L-band (1.1–1.9 GHz) receiver on the Robert C. Byrd
Green Bank Telescope (GBT) to perform the first technosignature search pre-
synchronized with exoplanet transits, covering 12 Kepler systems. We used the
Breakthrough Listen turboSETI pipeline to flag narrowband hits ($\sim$3 Hz)
using a maximum drift rate of $\pm$614.4 Hz/s and a signal-to-noise threshold
of 5 — the pipeline returned $\sim 3.4\times 10^{5}$ apparently-localized
features. Visual inspection by a team of citizen scientists ruled out 99.6% of
them. Further analysis found 2 signals-of-interest that warrant follow-up, but
no technosignatures. If the signals-of-interest are not re-detected in future
work, it will imply that the 12 targets in the search are not producing
transit-aligned signals from 1.1 – 1.9 GHz with transmitter powers $>$60 times
that of the former Arecibo radar. This search debuts a range of innovative
technosignature techniques: citizen science vetting of potential signals-of-
interest, a sensitivity-aware search out to extremely high drift rates, a more
flexible method of analyzing on-off cadences, and an extremely low signal-to-
noise threshold.
technosignatures — search for extraterrestrial intelligence — biosignatures —
astrobiology
Journal: The Astronomical Journal. Facilities: GBT. Software: Astropy (Astropy
Collaboration et al., 2013, 2018), Numpy (Harris et al., 2020), Matplotlib
(Hunter, 2007), blimpy (Breakthrough Listen Collaboration, 2019; Price et al.,
2019), turboSETI (Enriquez & Price, 2019).
## 1 Introduction
A vast and still-growing part of our astronomical exploration is the search
for life elsewhere in the universe. Many programs look for such life on
exoplanets through their biosignatures, surface features (e.g., Coelho et al.,
2022), or atmospheric constituents (e.g., Thompson et al., 2022) that indicate
the presence of biological activity. Another complementary strategy is to look
for the technosignatures of technologically-capable life — Extraterrestrial
Agents (ETA)s, to use the terminology of Döbler & Raab (2021) — which may be
more abundant, long-lived, highly-detectable, and unambiguous than other
previously-described biosignatures (Wright et al., 2022).
The most popular technosignature search strategy to date is radio searches for
artificial emission (as pioneered by Drake, 1961), which has grown
exponentially in recent years, making use of cutting-edge computational
techniques and newly-developed hardware (e.g., Harp et al., 2018; Ma et al.,
2022). However, even with the renewed observational energy, the search space
remains mostly unexplored (Wright et al., 2018a). This provides an opportunity
for radio observation projects, large and small, to make an impact by filling
in unexplored regions of parameter space.
One suggestion on how to best navigate this huge parameter space is to use
“Schelling Points” (Schelling, 1960; Wright, 2020), to prioritize mutually-
derivable parts of parameter space which a transmitter and receiver can both
land upon without any prior communication. This allows for more efficient
traversal of parameter space — potentially leading to a technosignature
detection much sooner — and also can be more power efficient for the
transmitter, which can focus its energy in particular directions and times.
One application of this idea is to prioritize certain places and times
synchronized with an astronomical event (Pace & Walker, 1975). In some of
these synchronized strategies, the event is external to the transmitter and
receiver’s systems, i.e., a nearby supernova or nova (Tang, 1976; Lemarchand,
1994; Davenport et al., 2022). In another application, the synchronizing event
is some feature of the transmitter or receiver’s system, observable or
predictable by both parties (e.g., Corbet, 2003).
Here, we make use of exoplanetary transits as temporal Schelling points
(Kipping & Teachey, 2016): if an ETA transmitter on an exoplanet sends a
signal towards its anti-stellar point, the signal will necessarily arrive to
any observer along the line-of-sight at the same time as the exoplanet appears
to be at mid-transit. This provides a known time and place (during an
exoplanetary transit) for the observer to look for a signal. The transmitter
may be targeting a specific system that happens to lie within its ecliptic,
and thus sends a signal once each of its exoplanetary years. Conversely, the
transmitter may be constantly targeting its exoplanet’s anti-stellar point,
sending out a transmission which sweeps across its ecliptic. In the extreme
case, a tidally-locked planet hosting a single surface-locked transmitter
could be constantly sweeping the ecliptic via the anti-stellar point, perhaps
powered by energy collected photo-voltaically from the star-facing side of the
exoplanet. Observations at this special periodic epoch offer a way to sample
the large possibilities of repetition rates associated with periodic
transmissions (Wright et al., 2018b). Completely divorced from this Schelling
Point logic, ETAs may preferentially emit high-power microwave radiation in
their ecliptics (as we do in our solar system due to space communications),
making transiting systems potentially more favourable for detecting
unintentional artificial signals.
In this work, we perform the first radio technosignature search of exoplanet-
hosting stars, where observations were pre-synchronized with their transits.
This is a complementary approach to that performed by Franz et al. (2022),
which looked back into archival data to identify serendipitous observations
during transit. We also follow the growing tradition of conducting
technosignature work and obtaining novel astronomical results using a cohort-
based, academic course centered research model (Margot et al., 2021; Tusay et
al., 2022).
In Section 2, we describe how the targets were chosen. In Section 3 we discuss
the observing plan, the observation parameters, and the data format. We cover
the methods for the search, including the software, assumptions, and chosen
search parameters in Section 4. We discuss our findings and upper limits in
Section 5. Finally, we discuss and conclude in Section 6 and Section 7
respectively.
## 2 Target Selection
We first compiled a list of 540 exoplanets that transit during a 2-day
potential observing window as a function of distance, using ephemerides from
the NASA Exoplanet Archive (Figure 1; Akeson et al., 2013). Down-sampling from
this subset of transiting exoplanets, we selected planets discovered by the
Kepler mission due to their limited range of celestial coordinates. This
enabled us to minimize our slew times during the nodding sequence, and boost
the observing duty cycle. While there have been previous searches focused on
transiting exoplanets from the Kepler mission (Harp et al., 2016; Siemion et
al., 2013b), these have not prioritized observations during planetary
transits. As explained above, we consider the transit a temporal Schelling
point and hence aim to maximize the fraction of transit that we observed. To
further improve our observing efficiency, we decided to observe alternate
transiting planets as part of the on-off-on sequence used to identify Radio
Frequency Interference (RFI), and identified pairs of exoplanets in the Kepler
field that are transiting at similar times and are particularly close on-
sky (none of these pairs are close enough on the sky to potentially cause
source confusion, i.e., the angular distance between the targets is $\gg$ the
GBT beam diameter at L band). The properties of our 12 observed targets are
listed in Table 1, and their on-sky positions are visualized in Figure 2.
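As a concrete illustration of this scheduling step, the short sketch below predicts mid-transit times from a linear ephemeris, using the epoch $t_{0}$ and period values tabulated in Table 1; the observing-window epoch is a hypothetical value chosen only for the example.

```python
# Minimal sketch: predict mid-transit times from a linear ephemeris
# (t0 in BJD - 2454000 and period in days, as in Table 1).
import numpy as np

def mid_transits_in_window(t0, period, window_start, window_end):
    """Return every mid-transit epoch falling inside the window
    (all times in the same system as t0)."""
    n_first = np.ceil((window_start - t0) / period)
    n_last = np.floor((window_end - t0) / period)
    return t0 + np.arange(n_first, n_last + 1) * period

# Example: Kepler-842b (t0 = 966.46, P = 1.22 d) over an illustrative
# 2-day window; the window epoch is hypothetical.
print(mid_transits_in_window(966.46, 1.22, 4200.0, 4202.0))
```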
Figure 1: Confirmed transiting exoplanets in the Kepler field as observable from the Green Bank Telescope (GBT) on March 25, 2018. The horizontal errorbars depict the transit duration, whereas the colour of the markers represents the stellar effective temperature. Points without visible errorbars have extremely well-defined transit midpoints. We did not include a habitability requirement in planet selection.
Figure 2: A plot of all exoplanetary systems (blue stars) observed with the GBT for this project, overlaid on the Kepler field of view, which covers 115 square degrees on sky (Mullally et al., 2016). The name of each system as it appears in Table 4 is shown. Boyajian’s star (an off-source target) is also included for reference.
Table 1: Stellar and planetary properties for the twelve transiting Kepler planets observed in this work. We use the properties from Morton et al. (2016) for all planets, except Kepler-446b which we pull from Muirhead et al. (2015).
Target | RA | Dec | Distance | Radius | Period | $t_{0}$ | $T_{eq}$ | $T_{eff}$
---|---|---|---|---|---|---|---|---
| (J2000) | (J2000) | (pc) | ($R_{\oplus}$) | (d) | (BJD - 2454000) | (K) | (K)
Kepler-446b | 18h49m0.05s | 44d55m15.96s | 120 | 1.5 $\pm$0.25 | 1.57 | 965.91 | 648 | 3464
Kepler-537b | 19h19m31.20s | 51d16m48.00s | 465 | 1.41${}^{+0.06}_{-0.04}$ | 3.25 | 1004.28 | 1181 | 5703
Kepler-723b | 18h59m19.32s | 44d39m29.30s | 965 | 12.19${}^{+1.58}_{-0.79}$ | 4.08 | 1002.64 | 1016 | 5655
Kepler-732c | 18h54m55.68s | 45d57m31.57s | 150 | 1.27${}^{+0.07}_{-0.1}$ | 0.89 | 967.54 | 893 | 3582
Kepler-738b | 19h10m16.73s | 46d34m4.30s | 992 | 2.5${}^{+0.2}_{-0.17}$ | 24.59 | 1006.70 | 530 | 5474
Kepler-842b | 19h29m16.80s | 49d38m60.00s | 552 | 1.6${}^{+0.08}_{-0.09}$ | 1.22 | 966.46 | 1252 | 4838
Kepler-992b | 18h57m43.20s | 47d38m24.00s | 268 | 1.62${}^{+0.04}_{-0.27}$ | 20.16 | 977.34 | 510 | 4975
Kepler-1039b | 19h53m16.80s | 45d18m36.00s | 324 | 1.46${}^{+0.09}_{-0.22}$ | 0.93 | 964.58 | 2080 | 4777
Kepler-1053b | 19h25m40.80s | 39d7m48.00s | 171 | 0.98${}^{+0.05}_{-0.04}$ | 2.41 | 965.43 | 953 | 4507
Kepler-1164b | 19h10m7.20s | 38d53m24.00s | 447 | 1.12${}^{+0.04}_{-0.07}$ | 3.98 | 966.62 | 936 | 5143
Kepler-1222b | 19h39m33.60s | 49d22m48.00s | 455 | 0.79 $\pm$0.06 | 1.92 | 965.47 | 1246 | 5083
Kepler-1332b | 19h24m7.20s | 43d54m36.00s | 465 | 1.37${}^{+0.09}_{-0.05}$ | 11.87 | 973.21 | 728 | 5523
## 3 Observations and Data
We pre-planned our observing sequence to ensure that we would hit each
exoplanet during its transit, using individual 5-minute observing blocks
spanning our 6 hour observing window on March 25, 2018 (starting at 11:00 UT);
we elected to use the standard Breakthrough Listen (BL) Green Bank Telescope
(GBT) integration of 5 minutes, and assumed a 2 minute overhead to account for
e.g., the slewing and settling of the telescope. Table 4 shows the log of
observations, including their relative temporal position to the target
exoplanet’s mid-transit point. We created individual GBT observing scripts for
each pair which toggled back-and-forth between the two targets until they were
replaced by the next target script. This sequence was adjusted dynamically
throughout the observing session to select targets closest to mid-transit, in
light of actual slew times and unanticipated observing overheads. For example,
we started our transit observations with Kepler-992b for scans 0010 and 0012,
the former of which spanned the transit midpoint. During these scans, we
observed Kepler-738b as our ‘off’ target. After covering the transit midpoint
for Kepler-992b, we switched to Kepler-1039b and Kepler-732c, which had the
next occurring transit midpoints. The total in-transit time for each planet in
the sample varied in duration from 0.65–3.9 hours (median: 1.8 hours), so the
5-minute scans covered approximately 5% of each transit.
We followed a similar logic through the rest of our observing window, where we
tried to observe targets for at least 2 scans each during transit;
furthermore, we prioritized observations of transit midpoints and tried to
minimize slew times. We bracketed our observing block with calibration
observations of quasar 3C 295 (scans 0006 and 0007) in the beginning, and
pulsar B0329+54 (scan 0059) at the end. Additionally, we obtained one scan
(scan 0009) of KIC 8462852, commonly known as Boyajian’s star (Boyajian et
al., 2018) before starting our transit sequence; it was a conveniently-located
off-source also targeted by BL laser technosignature searches using Lick
Observatory’s Automated Planet Finder telescope (Lipman et al., 2019).
Data were recorded using the L-band receiver (1.1–1.9 GHz) (Balser et al.,
2021) and the BL backend on the GBT, which in 2018 was capable of digitizing
up to 6 GHz of instantaneous bandwidth in 2 polarizations at 8-bit resolution
(it is now capable of digitizing up to 12 GHz of instantaneous bandwidth)
— for more information, see MacMahon et al. (2018). The raw voltage data is
then channelized using the gpuspec code to produce three spectral SIGPROC
filterbank files containing Stokes I total power/frequency/time datacubes,
each file at a different time and spectral resolution (Lebofsky et al., 2019).
In the following analyses, we make use of the high spectral resolution data
product, with a frequency resolution of $\sim 3$ Hz and a time resolution of
$\sim 18$ s.
We performed data quality checks using the calibrator observations at the
beginning and end of the observing session. The pulsar B0329+54 was easily
visible in a prepfold plot (Ransom, 2011), providing a first-order
confirmation that the system was working as expected. In addition, it had an
expected Signal-to-Noise Ratio (S/N) of 9172 given our system parameters,
observing parameters, and its characteristics in the ATNF pulsar
database (https://www.atnf.csiro.au/research/pulsar/psrcat/), and was detected
at an S/N of 5306 (57.85%). Given that pulsars in general (and this pulsar in
particular) are not perfect flux calibrators (they exhibit variability in flux
over time due to scintillation, which was distinctly present in the scan),
this is entirely within the range of expected outcomes.
We also used the first 3C 295 scan as a continuum flux calibrator “on”-source,
and the following observation of Boyajian’s Star as an “off” source, to derive
a system temperature. Assuming a flux density of 22.22 Jy at 1408 MHz (as
measured by Ott et al., 1994), and a spectral index of $\alpha=-0.7$, as found
for the calibrator’s hotspots in a recent LOFAR observation (Bonnassieux et
al., 2022), we obtain an empirical system temperature measurement of
$T_{sys}=22.66$ K. The theoretical value given in the GBT Proposer’s Guide for
the L-band receiver is $T_{sys}\approx 20$ K, which is consistent with the
experimental results.
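A minimal Y-factor-style version of this estimate is sketched below, assuming a gain implied by the effective area adopted in Section 5.4; the on/off power ratio is a hypothetical stand-in for the measured value, so the numbers only illustrate the method, not the paper's actual calibration pipeline.

```python
# Sketch of the on/off continuum calibration:
# T_sys = G * S_cal / (P_on/P_off - 1).
k_B = 1.380649e-23                        # J/K
A_eff = 5655.0                            # m^2 (see Section 5.4)
G = A_eff * 1e-26 / (2 * k_B)             # K/Jy, ~2.05 for the GBT

def calibrator_flux(nu_mhz, s_ref=22.22, nu_ref=1408.0, alpha=-0.7):
    """3C 295 flux density (Jy), scaled by a power-law spectral index."""
    return s_ref * (nu_mhz / nu_ref) ** alpha

y = 3.0                                   # hypothetical on/off power ratio
T_sys = G * calibrator_flux(1408.0) / (y - 1)
print(f"T_sys ~ {T_sys:.1f} K")           # ~22.8 K for this mock ratio
```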
## 4 Search Methods and Data Reduction
BL has a well-established data pipeline for performing narrowband Search for
Extraterrestrial Intelligence (SETI) searches on high frequency resolution
filterbank files. This pipeline consists of import, plotting, and other
utility functions in the blimpy package (Breakthrough Listen Collaboration,
2019; Price et al., 2019), and a narrowband search code turboSETI (Enriquez &
Price, 2019) based on the de-dispersion algorithm of Taylor (1974) and adapted
for de-Doppler searches. These pipelines have frequently been used for SETI
searches in the recent literature (e.g., Siemion et al., 2013a; Smith et al.,
2021).
For this work, we ran turboSETI’s narrowband hit-finder with a maximum drift
rate of $\pm$ 614.4 Hz/s and a S/N threshold of 5. Both of these parameters
are unusual choices, for the reasons presented below.
A typical drift rate for radio SETI searches is $\sim\pm$10 Hz/s, within a
factor of a few (e.g., Sheikh et al., 2020; Franz et al., 2022). However,
Sheikh et al. (2019) showed that known exoplanetary systems could produce
drift rates up to $\pm$ 200 nHz (i.e., $\pm 200\times\nu$ Hz/s, where $\nu$ is
the observing frequency in GHz). The largest drift rates would be expected
from exoplanets with the largest radial accelerations relative to the receiver
on Earth, e.g., transiting exoplanets that are viewed edge-on. This work marks
the first time that a SETI survey has followed the 200 nHz recommendation, by
using a maximum drift rate of $\pm$614.4 Hz/s; this drift rate is sufficient
to capture drift rates of $\pm$200 nHz even at 1.9 GHz, the highest frequency
in these observations.
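The arithmetic behind this choice is easy to verify: the 200 nHz recommendation scales linearly with observing frequency, and 614.4 Hz/s is the full-resolution one-to-one drift rate (0.15 Hz/s, per Table 3) doubled twelve times, matching the scrunching scheme described below.

```python
# Check that +/-614.4 Hz/s covers the Sheikh et al. (2019) recommendation
# of +/-200 nHz (i.e., 200 * nu Hz/s with nu in GHz) across the band.
required = 200 * 1.9        # Hz/s at the 1.9 GHz band top -> 380
assert 614.4 >= required
print(0.15 * 2**12)         # 614.4: one-to-one rate doubled 12 times
```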
It should be noted that turboSETI, in its current configuration, does not
automatically account for the sensitivity loss sustained when incoherently-
dechirping past the “one-to-one point” $\nu_{\mathrm{1-to-1}}$, where a signal
drifts one frequency bin in every time bin (Sheikh et al., 2019). Here, we
implement the first of the two partial solutions to this problem described by
Margot et al. (2021). We search the original high spectral-resolution
filterbank file with a drift rate range from 0 to $\pm\nu_{\mathrm{1-to-1}}$
Hz/s, in steps of $\sim 0.01$ Hz/s. Then, we “scrunch” the file in frequency
by a factor of 2, halving the number of bins by summing every other bin with
its following neighbor. We then search again, using a new drift rate range of
$\pm\nu_{\mathrm{1-to-1}}$ to $\pm 2\times\nu_{\mathrm{1-to-1}}$ Hz/s (the new
one-to-one point in the new file), with a correspondingly doubled drift rate
step. We repeat this process until we have covered the entire desired drift
rate range, which requires a series of 12 scrunch-and-search iterations. It
should be noted that each frequency-scrunch, though it maximizes the S/N
within its range of drift rates, still causes an unavoidable $\sqrt{2}$ loss
in sensitivity due to the original dimensions of the incoherent sum that
produced the data product. The sensitivity losses are discussed further in
Section 5.4, and it should be noted that exact signal positions and drift
rates within the band may also cause irregularity in sensitivity with this
method.
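A sketch of this scrunch-and-search loop is given below; `run_hit_search` is a placeholder for the actual hit finder (turboSETI in this work), and the native drift-rate step of $\delta\nu/t_{\mathrm{obs}}\approx 0.01$ Hz/s doubles along with the channel width.

```python
import numpy as np

def freq_scrunch(spec):
    """Halve the channel count of a (time, frequency) dynamic spectrum
    by summing each bin with its following neighbour."""
    nt, nf = spec.shape
    return spec[:, : nf - nf % 2].reshape(nt, -1, 2).sum(axis=2)

def run_hit_search(spec, drift_min, drift_max):
    pass  # placeholder for the narrowband hit finder

def stepped_drift_search(spec, d_nu=3.0, d_t=18.0, max_drift=614.4):
    one_to_one = d_nu / d_t        # drift crossing one channel per time bin
    lo = 0.0
    while lo < max_drift:
        hi = min(one_to_one, max_drift)
        run_hit_search(spec, drift_min=lo, drift_max=hi)
        spec = freq_scrunch(spec)  # channels twice as wide, so the
        one_to_one *= 2            # one-to-one point (and step) double
        lo = hi
```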
Recent SETI searches have often chosen an S/N of 10 for their hit thresholds,
including Price et al. (2020) and Margot et al. (2021). In narrowband radio
signal detection these limits are primarily dictated by the filtering
resources of the search, rather than any inherent statistical significance —
the environment is so contaminated by RFI that false alarm rates from other
common astronomical distributions, e.g., white noise or Gaussian statistics,
do not apply. In this work, we instead decided upon a lower S/N of 5, which
allows us to double the sensitivity of the search. It should be noted that
this causes an immense number of hits to pass the turboSETI filtering step, in
addition to the shorter BAB cadence described in the following section: we
managed this step with citizen science (Section 5.2), but also note its
difficulties (Section 6).
## 5 Results
We ran turboSETI’s hit-finding algorithm on every observation in Table 4. We
ignored the region from 1.20–1.34 GHz, which corresponds to the L-band “notch
filter”, which applies to the most RFI–contaminated region of the spectrum at
the GBT. This generated a total of $\sim$2.53 million hits. We then used
turboSETI’s event-finding capability to compare hits in each Kepler
observation (and Boyajian’s star) to hits in each of the observations directly
preceding and following it; this is the equivalent of an off-on-off or BAB
cadence. In rare cases where the same target was observed twice in a row to
realign the observations with the transit timeline (e.g., scans 0018 and 0019,
both targeting Kepler-738b), we used the next closest scan in the
preceding/following direction that was on a different target. This event-
finding process resulted in 338473 unique events. The frequency distribution
of the hits and events are shown in Figure 3.
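The logic of the event filter can be summarized in a few lines: an on-source hit is promoted to an event only if no hit in the adjacent off-source scans falls inside its drift-extrapolated frequency window. The sketch below uses illustrative hit records and an illustrative tolerance; turboSETI's actual bookkeeping is more involved.

```python
def is_event(on_hit, off_hits, t_scan=300.0, guard_hz=1.0):
    """Keep an on-source hit only if no off-source hit lies within the
    frequency range the signal could drift through between scans."""
    f0, drift = on_hit["freq_hz"], on_hit["drift_hz_s"]
    lo = f0 - abs(drift) * t_scan - guard_hz
    hi = f0 + abs(drift) * t_scan + guard_hz
    return not any(lo <= h["freq_hz"] <= hi for h in off_hits)

on = {"freq_hz": 1.5e9, "drift_hz_s": 0.2}
offs = [{"freq_hz": 1.5e9 + 30.0}]
print(is_event(on, offs))  # False: the off-scan hit falls in the window
```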
Figure 3: A histogram showing the frequency distribution of the hits (lighter)
and events (darker) in a logarithmic (left) and a linear (right) scale. The
notch filter is responsible for the absence of hits between 1200–1340 MHz.
Three distinct frequency regions contain more than 100000 hits each: 1165–1185
MHz, 1375–1385 MHz, and 1570–1580 MHz. These three regions are discussed in
further detail in Section 5.1.
### 5.1 RFI-Heavy Bands
We find that three frequency ranges — 1165–1185 MHz, 1375–1385 MHz, and
1570–1580 MHz — contain the majority of the RFI in our observations: 5% of the
band (excluding notch filter) accounts for 56% of the hits. These ranges are
consistent with the interference-heavy regions discussed by Price et al.
(2020). Here, we briefly discuss each of these frequency ranges in turn, and
attempt to associate them with Federal Communications Commission (FCC)
frequency allocations
(https://transition.fcc.gov/oet/spectrum/table/fcctable.pdf).
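The tally behind these percentages is straightforward; the sketch below classifies hit frequencies into the three ranges, using mock uniformly-drawn frequencies in place of the real turboSETI hit table.

```python
import numpy as np

rfi_bands_mhz = [(1165, 1185), (1375, 1385), (1570, 1580)]
hits_mhz = np.random.default_rng(0).uniform(1100, 1900, 10_000)  # mock
in_band = np.zeros(hits_mhz.size, dtype=bool)
for lo, hi in rfi_bands_mhz:
    in_band |= (hits_mhz >= lo) & (hits_mhz <= hi)
print(f"{in_band.mean():.1%} of mock hits fall in these narrow ranges")
```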
Figure 4: a) An example of an event in the 1165–1185 MHz region, consistent
with GPS L5. This event is RFI, as it appears in multiple different targets in
consecutive observations. The dashed red line represents turboSETI’s best-fit
drift rate for the detected hit. b) An example of an event in the 1375–1385
MHz region. These events occur for only a fraction of the observation (here,
about 100 seconds) within a single scan — this makes it difficult to determine
whether they are impulsive RFI or whether they are true transient signals
localized on the sky. However, because we observe the same morphology of
signal in multiple targets, and because of the match to the GPS L3 downlink,
we can assign these events as RFI. c) An example of an event in the 1570–1580
MHz region. These events are similar to the example shown in subfigure b in
that they are degenerate between transient and localized signals; this is a
common challenge for single-dish technosignature searches. The same signals
were identified in multiple targets, however, indicating that they are indeed
RFI — likely the GPS L1 downlink.
1165–1185 MHz: There is an FCC frequency allocation for aeronautical
radionavigation and radionavigation-satellite (including space-to-Earth)
transmissions between 1164–1215 MHz, covering this observed interference
region in Figure 3. The distinct peak from 1165–1185 MHz is consistent with
the GPS L5 downlink (https://www.nist.gov/pml/time-and-frequency-division/popular-links/time-frequency-z/time-and-frequency-z-g). An example of
an event in this region that passed turboSETI’s thresholds is shown in Figure
4a.
1375–1385 MHz: This narrow region of the spectrum is occupied by the GPS L3
downlink band, which provides communications links for the U.S. Nuclear
Detonation Detection System. An example of an event in this region that passed
turboSETI’s thresholds is shown in Figure 4b.
1570–1580 MHz: Once again, this interferer falls within an FCC allocation
dedicated to aeronautical radionavigation and radionavigation-satellite
(including space-to-Earth) transmissions; in this case, however, the
allocation is much wider (1559 MHz–1610 MHz) than the region where we see the
majority of interference. An example of an event in this region that passed
turboSETI’s thresholds is shown in Figure 4c. Given the frequency range, and
the presence of other GPS downlinks within the dataset, these hits can likely
be attributed to the GPS L1 downlink centered at 1575 MHz.
Finally, we detected a series of swooping signals (changing from high drift
rate, to low drift rate, to high again) between 1616–1626.5 MHz that also
passed turboSETI’s event filtering. These signals account for the event spike
above 1600 MHz in the right panel of Figure 3. An example is shown in Figure
4d. We attribute these signals to the Iridium satellite constellation’s L-band
downlink (https://apollosat.com/iridium-satellite-frequency-bands), and note
that these signals have passed turboSETI event filters in multiple GBT L-band
SETI observations (e.g., Enriquez et al., 2017; Tusay et al., 2022).
### 5.2 Filtering By Citizen Scientists
At this stage, with $>340,000$ events, most radio technosignature campaigns
would make a change to the event thresholds to get them to a number suitable
for visual inspection by a single researcher. For example, the S/N could be
raised from 5 to 10 (leaving only $92,000$ events), or even 25 (leaving only
$26,000$ events), or the drift rate could be further reduced. However, making
these changes would lower our sensitivity. For this campaign, we decided
instead to apply the power of crowdsourcing via a limited citizen science
project.
We created .pdf files containing 1000 plots each, making them accessible via
cloud services such as Box and Google Drive, and developed an hour-long
lecture and interactive quiz to train volunteers. Six undergraduate volunteers
from the Penn State Pulsar Search Collaboratory (PSPSC) club (an affiliate of
the Pulsar Search Collaboratory project described by Blumer et al. (2020) and
others) looked through output plots such as those shown in Figure 4, and
flagged any signal that appeared in only the on-source middle panel. Due to
the COVID–19 pandemic, the PSPSC members only completed approximately 20% of
the sample in the Fall 2020 semester — additional volunteers were recruited
from the lead author’s professional network to assist with signal filtering.
In the end, 13 citizen scientists (named in the Acknowledgements) flagged
approximately 0.4% ($\sim 1600$ signals) of the dataset for further analysis,
reducing the number of interesting events to a few thousand. It should be
noted that this number is approximate: some signals were identified multiple
times at different drift rates (see Section 6), and clusters of plots with
repeating signals (such as those in Figure 4a) were grouped as a single
phenomenon. Due to the size of the dataset compared to the number of
volunteers, we did not have the resources to send the same data to multiple
recipients, as is done in projects such as
Zooniverse (https://www.zooniverse.org/). To ensure quality in flagging, the
majority of the work was done in collaborative “hack sessions” with multiple
participants on the same call, so plots-of-interest or borderline cases could
be discussed and viewed by multiple participants simultaneously. The citizen
scientists’ effort meant that the filtered dataset was now at a manageable
size for visual inspection by the authors, all while maintaining the extremely
sensitive S/N 5 threshold.
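A minimal sketch of the plot-batching step is shown below, assuming matplotlib's PdfPages; `plot_waterfall` and the event list stand in for the real waterfall-plotting routine and turboSETI output.

```python
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

def plot_waterfall(event):
    fig, ax = plt.subplots()
    ax.set_title(str(event))        # stand-in for the real waterfall plot
    return fig

def write_batches(events, per_file=1000, prefix="events"):
    """Write reviewable PDFs containing `per_file` plots each."""
    for i in range(0, len(events), per_file):
        with PdfPages(f"{prefix}_{i // per_file:03d}.pdf") as pdf:
            for event in events[i : i + per_file]:
                fig = plot_waterfall(event)
                pdf.savefig(fig)
                plt.close(fig)
```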
### 5.3 Filtering Events with Context
Each of these plots, in isolation, had the characteristics of a localized
signal on the sky (only present in the on-source observation). However,
following the technosignature verification framework of Sheikh et al. (2021),
these signals should also be analyzed in the context of similar, nearby
signals. We grouped the remaining few thousand plots by frequency and
morphology. Then, we checked to see if identical signals appear in two
different targets (indicating that they are not local to either) or if we see
a near-identical signal within a set that is definitely interference (the way
that blc1 was eventually disproved by Sheikh et al., 2021). With this
strategy, we reduced the pool of signals-of-interest to 479. The frequencies,
drift rates, and S/Ns of these 479 signals are shown in the context of the
overall hits and events in Figures 5, 6, and 7 respectively.
Figure 5: A histogram showing the frequency distribution of the hits (light
gray), events (light purple), and signals-of-interest (purple). The signals-
of-interest are distributed relatively evenly throughout the spectrum. Note
the gap from 1.20–1.34 GHz due to the GBT L-band receiver’s notch filter.
Figure 6: A histogram showing the drift rate distribution of the hits (light
gray), events (light orange), and signals-of-interest (orange). The signals-
of-interest are found only at low absolute drift rates. Figure 7: A histogram
showing the S/N distribution of the hits (light gray), events (light green),
and signals-of-interest (green). The signals-of-interest are primarily faint.
There are also 5 hits with S/N $>10000$ which are not shown on this plot for
readability.
It is, however, possible that the 479 signals-of-interest have disqualifying
features elsewhere in the observing session; for example, a sub-threshold
repetition of the signal on a non-adjacent scan would not be picked up in the
previous filtering step. Therefore, as a final verification, we plotted each
of the remaining 479 plots in the context of the entire session of
observations (all 52 observations in Table 4), to produce stacked waterfall
plots with extreme aspect ratios. We eliminate a signal-of-interest from our
list if it is a continuation or repetition of signals in other targets at any
time during the morning of observations: this is a similar strategy to that
applied in Section 5.2 but now across the largest possible time baseline.
This process eliminated all but 2 signals-of-interest: one in a scan of
Kepler-1332b and one in a scan of Kepler-842b. Neither of these signals are
actually narrowband, but they were bright enough to be detected by turboSETI
and we did not want to restrict ourselves to only the signal morphologies we
expected, if we serendipitously found a signal with a different morphology.
Interestingly, both the detection in Kepler-1332b (during scan 0031) and the
detection in Kepler-842b (during scan 0056) were during their respective
transit midpoints. The waterfall plots are shown in Figures 8 and 9, and the
signal properties are summarized in Table 2.
Figure 8: The signal-of-interest detected in Kepler-1332b.
Figure 9: A portion of the signal-of-interest detected in Kepler-842b. This signal-of-interest spans 1040–1438 MHz, so this is not the full signal, but rather a narrow-bandwidth example of one of the “hits” on this signal generated by turboSETI.
Table 2: Properties of the two signals-of-interest reported in this study.
Target | Scan | MJD of Scan | Frequency | Transit Phase | Signal Type
---|---|---|---|---|---
Kepler-1332b | 0031 | 58202.59350 | 1749.4209 MHz | Transit Midpoint | Pulse
Kepler-842b | 0056 | 58202.69226 | 1040–1438 MHz | Transit Midpoint | Broadband Pulse
We are close to the limit of what we can deduce from this dataset for these
two signals-of-interest. With the information we have, it is impossible to
exclude these signals as being due to RFI. Therefore, we recommend a follow-up
observation of these two systems in L-band, during transit, with a different
instrument. We are planning to do this with the newly-refurbished Allen
Telescope Array (ATA) (e.g., Farah et al., 2021). These signals-of-interest do
not merit the scrutiny of blc1 (Sheikh et al., 2021) because they only appear
in a single 18 second integration; while this may be expected for synchronized
transmitters which only send a signal at mid-transit, it also makes the
standards-of-evidence higher (no proof of signal localized on sky, no drift
rate measurement, etc.), requiring reobservation as the next step before more
detailed analyses.
### 5.4 Upper Limits
We calculate technosignature upper limits following the method of Price et al.
(2020). The minimum detectable flux density, $F_{\mathrm{min}}$ for a
narrowband signal is given by:
$F_{\mathrm{min}}=\textrm{S/N}_{\mathrm{min}}\frac{2k_{B}T_{\mathrm{sys}}}{A_{\mathrm{eff}}}\sqrt{\frac{\delta\nu}{n_{\mathrm{pol}}t_{\mathrm{obs}}}}$
(1)
Here, $\textrm{S/N}_{\mathrm{min}}$ is the signal-to-noise threshold, $k_{B}$
is the Boltzmann constant, $T_{\mathrm{sys}}$ is the system temperature,
$A_{\mathrm{eff}}$ is the effective collecting area of the telescope,
$\delta\nu$ is the frequency channel width, and $n_{\mathrm{pol}}$ is the
number of polarizations. We make the same assumption as Price et al. (2020):
that the transmitting signal is the same width as the frequency channel
$\delta\nu$, so no additional constant factor is needed to downweight
$F_{\mathrm{min}}$.
With an $\textrm{S/N}_{\mathrm{min}}$ of 5, a $T_{\mathrm{sys}}$ of 20 K, an
$A_{\mathrm{eff}}$ of 5655 m$^{2}$ (equivalent to using an effective diameter of
100 m for the GBT, with an L-band aperture efficiency of 0.72), a $\delta\nu$
of 3 Hz, an $n_{\mathrm{pol}}$ of 2 and a $t_{\mathrm{obs}}$ of 300 seconds,
we calculate an $F_{\mathrm{min}}$ of $3.45\times 10^{-26}$ W/m$^{2}$, equivalent to
1.15 Jy. This is consistent with other BL searches at L-band with the GBT
(e.g., Enriquez et al., 2017; Price et al., 2020), but with extra sensitivity
due to the low signal-to-noise threshold.
The population of Kepler exoplanets is relatively distant, with all of the
targets in this study falling between 150–992 pc. The Equivalent Isotropic
Radiated Power (EIRP) is calculated as:
$EIRP_{\mathrm{min}}=4\pi d^{2}F_{\mathrm{min}}$ (2)
where $d$ is the distance to the furthest target in the survey. Using 992 pc,
we get a survey-wide EIRP$_{\mathrm{min}}$ of 406 TW, or $\sim 20\times$ the EIRP of the
Arecibo radar. The two targets that showed signals-of-interest, Kepler-1332b
and Kepler-842b, are located closer, at 465 pc and 552 pc respectively. This
leads to target-specific EIRPs of 89 TW and 125 TW, or $4.5$–$6.3\times$ the
EIRP of Arecibo.
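These numbers can be reproduced directly from Equations 1 and 2:

```python
import numpy as np

k_B = 1.380649e-23                         # J/K
snr_min, T_sys, A_eff = 5.0, 20.0, 5655.0  # -, K, m^2
d_nu, n_pol, t_obs = 3.0, 2.0, 300.0       # Hz, -, s

F_min = snr_min * 2 * k_B * T_sys / A_eff * np.sqrt(d_nu / (n_pol * t_obs))
print(F_min)                   # ~3.45e-26 W/m^2
print(F_min / d_nu / 1e-26)    # ~1.15 Jy (1 Jy = 1e-26 W m^-2 Hz^-1)

d = 992 * 3.0857e16            # furthest target (m)
print(4 * np.pi * d**2 * F_min / 1e12)   # ~406 TW survey-wide EIRP_min
```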
Finally, one can also express an upper limit in the form of a fraction of the
8-dimensional “haystack”, as described by Wright et al. (2018a). Using the
same minimum and maximum axis bounds as described in that work, we calculate a
total haystack fraction of $4.42\times 10^{-20}$ covered by this search,
comparable to Project Phoenix searches with Parkes and Arecibo (e.g., Backus
et al., 2002).
## 6 Discussion
This search represents the lowest S/N ever chosen for a Breakthrough Listen
affiliated project on the GBT. However, caution is advised if following our
procedure. We found that lowering the S/N to 5 made turboSETI’s temporal on-
off filtering much less effective. The lower S/N magnified the likelihood of
e.g., a bright pixel in the “on” target causing a hit to be flagged as an
event. It may be more effective to recoup sensitivity in other parts of the
signal processing chain, rather than as a threshold in turboSETI. This may be
especially relevant for high drift rate signals (using methods such as boxcar
convolution, e.g., Adámek & Armour, 2020).
In addition, vetting $\sim 3.4\times 10^{5}$ events involved a large amount of
work, even considering the citizen science approach. Using a single on-source
with two off-sources produces a degeneracy between intermittent RFI and sky-
localized signals in a single-dish telescope, compounding the issue. We
recommend that future searches invest in a better understanding of their RFI
environment and try to algorithmically filter further before performing visual
classification.
One approach is to better characterize the frequencies, drift rates, and
morphologies of common interference sources, and only down-select signals that
are not particularly consistent with those properties. Another approach is to
employ spatial filtering with a multi-beam receiver (such as the GBT’s
K-band Focal Plane Array) or an interferometer (such as the Allen Telescope
Array (ATA)).
$\dot{\nu}_{\mathrm{max}}$ (Hz/s) | Sensitivity Loss Factor | Number of Hits in Bin [% of total] | Number of Events in Bin [% of total]
---|---|---|---
0.15 | $1$ | 1838435 [53.1%] | 228741 [54.7%]
0.3 | $1.4$ | 1216882 [35.1%] | 129808 [31.0%]
0.6 | $2$ | 376107 [10.9%] | 46390 [11.1%]
1.2 | $2.8$ | 21999 [0.64%] | 7569 [1.81%]
2.4 | $5.7$ | 5549 [0.16%] | 3232 [0.77%]
4.8 | $8$ | 2607 [0.08%] | 1514 [0.36%]
9.6 | $11.3$ | 816 [0.02%] | 441 [0.11%]
19.2 | $16$ | 251 [$<0.01$%] | 134 [0.03%]
38.4 | $22.6$ | 139 [$<0.01$%] | 68 [0.02%]
76.8 | $32$ | 172 [$<0.01$%] | 90 [0.02%]
153.6 | $45.3$ | 125 [$<0.01$%] | 71 [0.02%]
307.2 | $64$ | 83 [$<0.01$%] | 54 [0.01%]
614.4 | $90.5$ | 0 [$<0.01$%] | 0 [$<0.01$%]
Table 3: Maximum drift rates (absolute value), the corresponding multiplier
for reduction in sensitivity, and the number of hits affected by that
reduction. The reason for the sensitivity reduction is described in Section 4.
99% of hits and 97% of events that we detected suffer a sensitivity loss of 2
or less. We can also consider the implied population of high-drift hits that
we did not detect, due to the sensitivity loss. However, the number of high-
drift hits is so many orders of magnitude below the number of low-drift hits that we
do not expect the number of “missed” hits to meaningfully affect the total
number. This implies that the sensitivity loss at high-drift rates should not
greatly affect the conclusions of our survey. Note that this logic assumes
that high-drift hits are not preferentially likely to be caused by an ETA.
Also, as Margot et al. (2021) has mentioned, the turboSETI pipeline does not
optimize for signals with high drift rates by default, such that signals
suffer from drift-smearing across frequency bins, lowering the sensitivity.
However, we can recover as much sensitivity as is physically possible (in an
incoherent data product) by performing stepped frequency scrunching. The
remaining sensitivity loss factors for this data, and how many signals were
affected by those factors, are shown in Table 3. While we recommend that other
turboSETI users also perform frequency scrunching if their searches cover
applicably high drift rates, we note that each scrunched data product must be
searched separately. This means that bright-enough hits will be detected
redundantly multiple times, at multiple drift rates, as the hits from each
scrunched data product are produced independently. This provides a strong
motivation to integrate drift-rate-stepped scrunching into turboSETI in a more
robust way.
Another way to address high-drift signals is to coherently correct the raw
voltage data to an accelerating reference frame or fiducial drift rate (e.g.,
$\pm$100 Hz/s) before the creation of the reduced and channelized data
products (as done in pulsar de-dispersion, e.g., Sobey et al., 2022). For
signals at the chosen drift rate, there will be no sensitivity loss, and
signals that are nearby in drift rate space can be searched for incoherently
(with the same sensitivity loss considerations as discussed above), broadening
the applicability of the technique. However, note that for lower loss
tolerances at any given drift rate, the problem gets computationally heavier —
to minimize loss, it is necessary to dedisperse and save the entire array at
every drift rate of interest.
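A minimal sketch of this coherent correction, for complex baseband voltages and a chosen fiducial drift rate, is shown below; the sample rate and drift are illustrative values.

```python
import numpy as np

def dechirp(voltages, t, drift_hz_s):
    """Multiply by a conjugate chirp so a tone drifting at `drift_hz_s`
    becomes a constant-frequency tone, with no incoherent S/N loss."""
    return voltages * np.exp(-1j * np.pi * drift_hz_s * t**2)

fs = 1.0e4                               # sample rate (Hz), illustrative
t = np.arange(2**16) / fs
x = np.exp(1j * np.pi * 100.0 * t**2)    # tone drifting at +100 Hz/s
spec = np.abs(np.fft.fft(dechirp(x, t, 100.0)))
print(np.argmax(spec))                   # 0: all power lands in one bin
```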
This search, synchronized with planetary transits, is part of a class of
search strategies aligned in time with particular astronomical events, as
discussed in Section 1. For any such search strategy, we expect that the
transmitter will likely not be transmitting continuously, but instead will
only transmit at particular times, potentially with short durations.
Therefore, searches for these kinds of signals benefit from observing with
arrays that can do simultaneous off- and on-sources with multiple beams,
instead of single-dish single-pixel instruments like the GBT at L-band.
The short transmission times also make synchronization strategies more energy-
efficient for the transmitter, similar to the argument made by Gray (2020).
Assume that the energy required to transmit for a full exoplanetary orbit is
$E$. For the exoplanets in our sample, transmitting only during transit (as
seen by a single receiver, in this case Earth), costs only 1–10% of $E$. If
the transmitter instead only signalled during the 5 minutes around the mid-
transit point (corresponding to the length of one of the observations in this
campaign), the energy cost would be 0.1–0.01% of $E$. Another strategy to
improve energy efficiency could be to use a transmitter with a very large
effective area, leading to an extremely narrow beam. In this strategy, the
tiny beam size would cause a short flash right at mid-transit as seen from
Earth. Having such a tiny angular beam size only works well if the observer
knows exactly when to expect the signal, so it pairs well with transit
synchronization. To an order-of-magnitude, using exoplanetary properties
consistent with our 12 exoplanet sample, we can imagine a continuously-
emitting transmitter with an effective area equivalent to an exoplanet’s
projected area. Before, we assumed the effective area would be the same as
that of the GBT, so the new effective area is now a billion times larger. This
factor propagates, leading to such a small beamsize that a transmitted signal
would be visible for a few milliseconds at mid-transit, and could be sent with
a transmitter six orders-of-magnitude less powerful than Arecibo was. This
paper does not account for an optimization this extreme, but it could be an
interesting avenue for more specialized searches in the future.
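The few-millisecond figure follows from beam-width scaling, as the rough estimate below shows (round illustrative values throughout): the beam solid angle shrinks with effective area, so the angular width shrinks as the square root of the area ratio, and the flash length is the time the sweeping beam takes to cross the observer.

```python
import numpy as np

lam = 0.2                            # wavelength (m), ~1.5 GHz
theta_gbt = lam / 100.0              # GBT-like beam width, ~2e-3 rad
theta_tx = theta_gbt / np.sqrt(1e9)  # 1e9x larger area -> ~3e4x narrower beam
P = 2 * 86400.0                      # orbital period (s), ~2 days
flash = theta_tx / (2 * np.pi / P)   # beam sweep rate is 2*pi/P rad/s
print(f"flash ~ {flash * 1e3:.1f} ms")   # ~2 ms
```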
## 7 Conclusion
In this work, we describe the first radio technosignature search that pre-
planned observations to synchronize with exoplanets during their transits, in
a survey of a dozen exoplanets in the Kepler field. Using 6 hours of L-band
(1.1–1.9 GHz) data taken with the BL backend on the GBT, we performed a SETI
search using the narrowband signal search code turboSETI. We chose a maximum
drift rate of $\pm$614.4 Hz/s — the first modern radio technosignature project
to encompass such extreme drift rates — in order to account for the full range
of drifts that could be produced from known exoplanetary systems. We also
chose a low S/N of 5. With these parameters, the algorithms flagged $\sim
2.53\times 10^{6}$ hits, which were then temporally filtered with an on-off
method into $3.4\times 10^{5}$ events. Many of these events could be
attributed to GPS satellite downlinks.
Thirteen citizen scientists volunteered their time to assist the science team
with the further filtering of the turboSETI events. From this process, the
list of signals-of-interest was reduced to a few thousand signals that
appeared to be either transient or sky-localized. We further removed signals
that appeared in multiple targets, or that were identical to signals proven to
be RFI, reducing the pool further to 479 signals-of-interest. Upon
investigating these 479 signals in the context of the entire observing
session, we determined that only two remained as signals-of-interest: one at
1749 MHz in Kepler-1332b, and the other from 1040–1438 MHz in Kepler-842b.
These signals do not rise to the level of even “candidate” technosignature
signals because there is no proof that they are spatially isolated (and they
are consistent with anthropogenic RFI), and so do not warrant follow-up with
the rigor described in Sheikh et al. (2021). Reobservation of these targets
during transits, with a multibeam instrument such as the ATA, will conclude
the experiment and, if nothing is found, complete the null result we report
here. We hope that the “new ground” that we have broken in radio
technosignature parameter space will be extended by more synchronized SETI
searches in the future, across many more instruments and teams.
ETA: Extraterrestrial Agents
GBT: Green Bank Telescope
BL: Breakthrough Listen
RFI: Radio Frequency Interference
S/N: Signal-to-Noise Ratio
SETI: Search for Extraterrestrial Intelligence
FCC: Federal Communications Commission
GPS: Global Positioning System
PSPSC: Penn State Pulsar Search Collaboratory
EIRP: Equivalent Isotropic Radiated Power
ATA: Allen Telescope Array
This work would not have been possible without the following citizen
scientists: Killian Cook, Anish Doshi, Gus Eberlein, Rhett Gentile, Jordan
Hanner, Shara Hussain, Matthew LaFountain, Yika Luo, Cole Penkunas, Livia
Seifert, Nate Smith, Valeria Ventura, and James Wu. This material is based
upon work supported by the Green Bank Observatory which is a major facility
funded by the National Science Foundation operated by Associated Universities,
Inc. We acknowledge Ron Maddalena for GBT observing and scientific assistance.
S.Z.S. acknowledges that this material is based upon work supported by the
National Science Foundation MPS-Ascend Postdoctoral Research Fellowship under
Grant No. 2138147. CIC acknowledges support by NASA Headquarters under the
NASA Earth and Space Science Fellowship Program through grant 80NSSC18K1114
and the Pennsylvania State University’s Bunton-Waller program. C.G.
acknowledges the support of the Pennsylvania State University, the Penn State
Eberly College of Science and Department of Astronomy & Astrophysics, the
Center for Exoplanets and Habitable Worlds and the Center for Astrostatistics.
This paper is a result of the class project for the 2020 graduate course in
SETI at Penn State. We acknowledge Alan Reyes for participation in the class
project. The Pennsylvania State University campuses are located on the
original homelands of the Erie, Haudenosaunee (Seneca, Cayuga, Onondaga,
Oneida, Mohawk, and Tuscarora), Lenape (Delaware Nation, Delaware Tribe,
Stockbridge-Munsee), Shawnee (Absentee, Eastern, and Oklahoma), Susquehannock,
and Wahzhazhe (Osage) Nations. As a land grant institution, we acknowledge and
honor the traditional caretakers of these lands and strive to understand and
model their responsible stewardship. We also acknowledge the longer history of
these lands and our place in that history. Breakthrough Listen is managed by
the Breakthrough Initiatives, sponsored by the Breakthrough Prize Foundation.
The Center for Exoplanets and Habitable Worlds and the Penn State
Extraterrestrial Intelligence Center are supported by the Penn State and the
Eberly College of Science. This research made use of
Astropy (http://www.astropy.org), a community-developed core Python package
for Astronomy (Astropy Collaboration et al., 2013, 2018). This research has
made use of NASA’s Astrophysics Data System Bibliographic Services.
## References
* Adámek & Armour (2020) Adámek, K., & Armour, W. 2020, The Astrophysical Journal Supplement Series, 247, 56
* Akeson et al. (2013) Akeson, R. L., Chen, X., Ciardi, D., et al. 2013, Publications of the Astronomical Society of the Pacific, 125, 989, doi: 10.26133/NEA1
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
* Backus et al. (2002) Backus, P., et al. 2002, in Single-Dish Radio Astronomy: Techniques and Applications, Vol. 278, 525–527
* Balser et al. (2021) Balser, D., Braatz, J., Frayer, D., et al. 2021, Green Bank Telescope Observer’s Guide, Green Bank Observatory
* Blumer et al. (2020) Blumer, H., McLaughlin, M. A., Stewart, J., et al. 2020, American Journal of Physics, 88, 31
* Bonnassieux et al. (2022) Bonnassieux, E., Sweijen, F., Brienza, M., et al. 2022, Astronomy & Astrophysics, 658, A10
* Boyajian et al. (2018) Boyajian, T. S., Alonso, R., Ammerman, A., et al. 2018, The Astrophysical Journal, 853, L8, doi: 10.3847/2041-8213/aaa405
* Breakthrough Listen Collaboration (2019) Breakthrough Listen Collaboration. 2019, Astrophysics Source Code Library, ascl
* Coelho et al. (2022) Coelho, L. F., Madden, J., Kaltenegger, L., et al. 2022, Astrobiology, 22, 313
* Corbet (2003) Corbet, R. H. 2003, Astrobiology, 3, 305
* Davenport et al. (2022) Davenport, J. R. A., Cabrales, B., Sheikh, S., et al. 2022, arXiv e-prints, arXiv:2206.04092. https://arxiv.org/abs/2206.04092
* Döbler & Raab (2021) Döbler, N. A., & Raab, M. 2021, Acta Astronautica, 189, 699, doi: 10.1016/j.actaastro.2021.09.032
* Drake (1961) Drake, F. D. 1961, Physics Today, 14, 40, doi: 10.1063/1.3057500
* Enriquez & Price (2019) Enriquez, E., & Price, D. 2019, Astrophysics Source Code Library, ascl
* Enriquez et al. (2017) Enriquez, J. E., Siemion, A., Foster, G., et al. 2017, The Astrophysical Journal, 849, 104
* Farah et al. (2021) Farah, W., Pollak, A., Siemion, A., et al. 2021, The Astronomer’s Telegram, 14676, 1
* Franz et al. (2022) Franz, N., Croft, S., Siemion, A. P., et al. 2022, The Astronomical Journal, 163, 104
* Gray (2020) Gray, R. H. 2020, International Journal of Astrobiology, 19, 299
* Harp et al. (2018) Harp, G., Ackermann, R., Astorga, A., et al. 2018, The Astrophysical Journal, 869, 66
* Harp et al. (2016) Harp, G. R., Richards, J., Tarter, J. C., et al. 2016, The Astronomical Journal, 152, 181, doi: 10.3847/0004-6256/152/6/181
* Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2
* Hunter (2007) Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
* Kipping & Teachey (2016) Kipping, D. M., & Teachey, A. 2016, Monthly Notices of the Royal Astronomical Society, 459, 1233, doi: 10.1093/mnras/stw672
* Lebofsky et al. (2019) Lebofsky, M., Croft, S., Siemion, A. P., et al. 2019, Publications of the Astronomical Society of the Pacific, 131, 124505, doi: 10.1088/1538-3873/ab3e82
* Lemarchand (1994) Lemarchand, G. A. 1994, Astrophysics and space science, 214, 209
* Lipman et al. (2019) Lipman, D., Isaacson, H., Siemion, A. P. V., et al. 2019, Publications of the Astronomical Society of the Pacific, 131, 034202, doi: 10.1088/1538-3873/aafe86
* Ma et al. (2022) Ma, P., Ng, C., Rizk, L., et al. 2022
* MacMahon et al. (2018) MacMahon, D. H. E., Price, D. C., Lebofsky, M., et al. 2018, Publications of the Astronomical Society of the Pacific, 130, doi: 10.1088/1538-3873/aa80d2
* Margot et al. (2021) Margot, J.-L., Pinchuk, P., Geil, R., et al. 2021, The Astronomical Journal, 161, 55
* Morton et al. (2016) Morton, T. D., Bryson, S. T., Coughlin, J. L., et al. 2016, The Astrophysical Journal, 822, 86, doi: 10.3847/0004-637X/822/2/86
* Muirhead et al. (2015) Muirhead, P. S., Mann, A. W., Vanderburg, A., et al. 2015, The Astrophysical Journal, 801, 18, doi: 10.1088/0004-637X/801/1/18
* Mullally et al. (2016) Mullally, F., Barclay, T., & Barentsen, G. 2016, K2fov: Field of view software for NASA’s K2 mission, Astrophysics Source Code Library, record ascl:1601.009. http://ascl.net/1601.009
* Ott et al. (1994) Ott, M., Witzel, A., Quirrenbach, A., et al. 1994, Astronomy and Astrophysics, 284, 331
* Pace & Walker (1975) Pace, G. W., & Walker, J. C. 1975, Nature, 254, 400
* Price et al. (2019) Price, D., Enriquez, J., Chen, Y., & Siebert, M. 2019, The Journal of Open Source Software, 4, 1554, doi: 10.21105/joss.01554
* Price et al. (2020) Price, D. C., Enriquez, J. E., Brzycki, B., et al. 2020, The Astronomical Journal, 159, 86
  * Ransom (2011) Ransom, S. 2011, Astrophysics Source Code Library, ascl
* Schelling (1960) Schelling, T. 1960, The strategy of conflict (Harvard University Press)
* Sheikh et al. (2020) Sheikh, S. Z., Siemion, A., Enriquez, J. E., et al. 2020, The Astronomical Journal, 160, 29
* Sheikh et al. (2019) Sheikh, S. Z., Wright, J. T., Siemion, A., & Enriquez, J. E. 2019, The Astrophysical Journal, 884, 14
* Sheikh et al. (2021) Sheikh, S. Z., Smith, S., Price, D. C., et al. 2021, Nature Astronomy, 5, 1153
* Siemion et al. (2013a) Siemion, A. P., Demorest, P., Korpela, E., et al. 2013a, Astrophysical Journal, 767, doi: 10.1088/0004-637X/767/1/94
* Siemion et al. (2013b) Siemion, A. P. V., Demorest, P., Korpela, E., et al. 2013b, The Astrophysical Journal, 767, 94, doi: 10.1088/0004-637X/767/1/94
* Smith et al. (2021) Smith, S., Price, D. C., Sheikh, S. Z., et al. 2021, Nature Astronomy, 5, 1148
* Sobey et al. (2022) Sobey, C., Bassa, C., O’Sullivan, S., et al. 2022, Astronomy & Astrophysics, 661, A87
* Tang (1976) Tang, T. 1976, Journal of the British Interplanetary Society, 29, 469
* Taylor (1974) Taylor, J. H. 1974, Astronomy & Astrophysics Supplements, 15, 367
* Thompson et al. (2022) Thompson, M. A., Krissansen-Totton, J., Wogan, N., Telus, M., & Fortney, J. J. 2022, Proceedings of the National Academy of Sciences, 119, e2117933119
* Tusay et al. (2022) Tusay, N., Huston, M. J., Dedrick, C. M., et al. 2022, The Astronomical Journal, 164, 116
* Wright (2020) Wright, J. T. 2020, International Journal of Astrobiology, 19, 446
* Wright et al. (2022) Wright, J. T., Haqq-Misra, J., Frank, A., et al. 2022, The Astrophysical Journal Letters, 927, L30
* Wright et al. (2018a) Wright, J. T., Kanodia, S., & Lubar, E. 2018a, The Astronomical Journal, 156, 260
* Wright et al. (2018b) —. 2018b, The Astronomical Journal, 156, 260, doi: 10.3847/1538-3881/aae099
## Appendix A Observation Log
Table 4: The observation log from this project’s Green Bank Telescope observations. The scan ID number is shown in the first column, the target is shown in the second column, the transit synchronization is shown in the third column, and any additional notes are shown in the fourth column. Scan IDs are mostly consecutive, but missing ID numbers indicate scans which did not take data. Targets prefixed with a “K” indicate a “Kepler” system, with the letter suffix indicating which planet in the system was predicted to be transiting during our observing window. The relationship of the observation time to the transit time is indicated by letters: I-M indicates an observation which occurred between a transit’s ingress and midpoint, M indicates an observation which contained the transit midpoint, and M-E indicates an observation which occurred between a transit’s midpoint and egress (blank entries occurred outside of transit). Scan 0051 is missing a node of data (about 1/8th of the bandwidth) due to faulty data collection, encompassing frequencies from 1.25–1.30 GHz (entirely within the notch filter).
Scan ID | Target | Relation to Transit | Notes | Scan ID | Target | Relation to Transit | Notes
---|---|---|---|---|---|---|---
0006 | 3C295 | non-planet obs. | Quasar | 0033 | K723b | I-M |
0007 | 3C295 | non-planet obs. | Quasar | 0034 | K1164b | M-E |
0009 | Boyajian’s Star | non-planet obs. | Off-source | 0035 | K723b | M-E |
0010 | K992b | M | | 0036 | K537b | M |
0011 | K738b | I-M | | 0037 | K723b | I-M |
0012 | K992b | M-E | | 0038 | K537b | M-E |
0013 | K738b | I-M | | 0039 | K723b | I-M |
0014 | K1039b | M | | 0040 | K537b | M-E |
0015 | K1039b | M-E | | 0041 | K723b | I-M |
0016 | K732c | M | | 0042 | K723b | M |
0017 | K1039b | M-E | | 0043 | K1332b | |
0018 | K738b | M | | 0044 | K723b | M-E |
0019 | K738b | M-E | | 0045 | K1332b | |
0020 | K732c | M-E | | 0046 | K723b | M-E |
0021 | K1164b | I-M | | 0048 | K723b | M-E |
0022 | K732c | | | 0049 | K446b | I-M |
0023 | K1164b | I-M | | 0050 | K723b | M-E |
0024 | K732c | | | 0051 | K446b | I-M | missing node
0025 | K1164b | I-M | | 0052 | K446b | M |
0026 | K1053b | M | | 0053 | K1222b | I-M |
0027 | K738b | M-E | | 0054 | K1222b | M |
0028 | K1164b | M | | 0055 | K842b | I-M |
0029 | K1053b | M-E | | 0056 | K842b | M |
0030 | K738b | M-E | | 0057 | K723b | M-E |
0031 | K1332b | M | | 0058 | K723b | M-E |
0032 | K1164b | M-E | | 0059 | B0329+54 | non-planet obs. | Pulsar
## Appendix B Mid-resolution plots for the two signals-of-interest
Figure 10: The signal-of-interest in Kepler-1332b data shown in the high
frequency-resolution data (left) and the mid-resolution data (right). The
regions of data shown are related as indicated by the white-dashed rectangles,
with the right plot zoomed-out in frequency and zoomed-in in time. The signal-
of-interest is actually composed of two short ($<$1 second) pulses. Figure 11:
The signal-of-interest in Kepler-842b data shown in the high frequency-
resolution data (left) and the mid-resolution data (right). The regions of
data shown are related as indicated by the white-dashed rectangles, with the
right plot zoomed-out in frequency and zoomed-in in time. The signal-of-
interest has a distinctive, periodic structure in frequency and is present
across the bandwidth. This behaviour is characteristic of RFI.
# The SHiP experiment and the RPC technology
G. De Lellis (on behalf of the SHiP Collaboration)
###### Abstract
The discovery of the Higgs boson has fully confirmed the Standard Model of
particles and fields. Nevertheless, there are still fundamental phenomena,
like the existence of dark matter and the baryon asymmetry of the Universe,
which deserve an explanation that could come from the discovery of new
particles. The SHiP experiment at CERN, meant to search for very weakly
coupled particles in the few-GeV mass domain, has recently been proposed. The
existence of such particles, foreseen in Beyond Standard Model theories, is
largely unexplored.
A beam dump facility using high intensity 400 GeV protons is a copious source
of such unknown particles in the GeV mass range. The beam dump is also a
copious source of neutrinos and in particular it is an ideal source of tau
neutrinos, the least known particle in the Standard Model. We report the
physics potential of such an experiment and describe the use of the RPC
technology therein. An ancillary measurement of the charm cross-section will
be carried out in July 2018, with RPCs used as the muon detector. We also
describe the design and construction of these new chambers.
## 1 The SHiP experiment
The discovery of the Higgs boson is certainly a big triumph of the Standard
Model. In particular, given its mass, it could well be that the Standard Model
is an effective theory working all the way up to the Planck scale.
Nevertheless, there are several phenomena deserving an explanation that the
Standard Model is unable to provide: the existence of dark matter and its
nature, the baryonic asymmetry of the Universe and neutrino masses. It is
therefore clear that new physics is there and presumably several new particles
are still to be discovered.
Searches for new physics with accelerators are being carried out at the LHC,
especially suited to look for high mass particles with ordinary couplings to
matter. Complementary searches for very weakly coupled and therefore long-
lived particles require a beam dump facility. Such a facility is made of a
high density proton target, followed by a hadron stopper and a muon shield.
Apart from residual muons, the only remaining particles are electron, muon and
tau neutrinos on top of hidden, long-lived particles produced either in proton
interactions or in secondary particle decays.
A new experiment, Search for Hidden Particles (SHiP), has been proposed [1],
designed to operate at a beam dump facility to be built at CERN and to search
for weakly coupled particles in the few GeV mass range. The physics case for
such an experiment is widely discussed in Ref. [2]. In five years, the
facility will integrate $2\times 10^{20}$ 400 GeV protons, produced by the SPS
accelerator complex, impinging on a 12 interaction length ($\lambda_{int}$)
target made of Molybdenum and Tungsten, followed by a 30 $\lambda_{int}$ iron
hadron absorber. Hidden particles in the GeV mass range would be produced
mostly by the decay of charmed hadrons produced in proton interactions.
$D_{s}$ mesons, copiously produced among charmed hadrons, are a source of tau
neutrinos through their fully leptonic decay. Therefore, the SHiP facility is
ideal also to study the physics of tau neutrinos, the least known particle in
the Standard Model.
Figure 1: The beam dump facility and the SHiP detector.
Figure 1 shows the SHiP facility to be placed in the North Area: downstream of
the target, the hadron absorber filters out all hadrons, therefore only muons
and neutrinos are left. An active muon shield is designed with two sections
with opposite polarity to maximize the muon flux reduction: it reduces the
muon flux from $\sim 10^{10}$ down to $\sim 10^{5}$ muons per spill. $4\times
10^{13}$ protons are extracted in each spill, designed to be 1s long to reduce
the detector occupancy [3]. A first successful test of the SPS cycle with a 1s
long spill was performed in April 2015.
The neutrino detector is located downstream of the muon shield, followed by
the decay vessel and the detector for hidden particles. The Collaboration will
prepare a document for the European Strategy by the end of 2018 and a
Comprehensive Design Report by 2019, in the framework of the Physics Beyond
Colliders working group, launched in 2016 at CERN. The construction and
installation of the detector will start in 2021 and last until the end of the
third LHC long shutdown such that the data taking is assumed to start in 2026.
The neutrino detector is made of a magnetised region, followed by a muon
identification system, as shown in Figure 2. The magnetised region will host
both the neutrino target and a particle spectrometer. The neutrino target is
based on the emulsion cloud chamber technology employed by the OPERA
experiment [4], with a compact emulsion spectrometer, made of a sequence of
very low density material and emulsion films to measure the charge and
momentum of hadrons in magnetic field. This detector is suitable for the
measurement of momenta up to 12 GeV$/c$. Indeed, this feature would make it possible to
discriminate between tau neutrinos and anti-neutrinos also in the hadronic
decay channels of the tau lepton. The emulsion target is complemented by high
resolution tracking chambers to provide the time stamp to the event, connect
muon tracks from the target to the muon system and measure the charge and
momentum for particles with momenta above 10 GeV$/c$. The muon system is based
on 23 iron slabs, 5 cm thick each, alternated with 24 RPCs providing the
tracking within the slabs. The muon system will also act as upstream veto
tagger for background processes to the hidden particle search, which motivates
the high sampling choice. Nevertheless, the muon system configuration is still
under optimisation.
Figure 2: The neutrino detector upstream of the decay vessel in different
views.
The emulsion target will also act as the target of dark matter as well as of
any very weakly interacting particle produced at the accelerator, when its
mass is in the GeV range. The ongoing optimisation of this detector concerns
the target material, the sampling frequency of the emulsion cloud chamber and
the timing performances of the target tracker that would enable the separation
between neutrinos and heavy particles based on the time of flight measurement.
The detector for hidden particles is located in the downstream part of a 60 m
long evacuated decay vessel, with a conical shape and an elliptical transverse
section at the very downstream end of $5\times 10$ m2, the longer axis being
vertical. The hidden particles are supposed to decay within the vessel. The
requirement to have less than 1 neutrino interaction in the vessel over five
years sets the pressure to about $10^{-3}$ mbar. A magnetic spectrometer is
located downstream of the vessel: it is made of straw tubes with a material
budget of 0.5% $X_{0}$ per station, achieving a position resolution of 120
$\mu$m per straw, with 8 hits per station on average. This gives a momentum
resolution of about 1%. The vessel would be surrounded by a liquid
scintillator layer to tag particles coming from outside. Downstream of the
spectrometer, a hadronic and an electromagnetic calorimeter and a muon filter
are used to identify particles. A timing detector complements the apparatus to
reject vertices from chance coincidences.
## 2 Search for hidden particles and physics with the neutrino detector
Extensions of the Standard Model in the low mass region foresee the existence
of particles as singlets with respect to the Standard Model gauge group. These
particles couple to different singlet composite operators (so-called Portals)
of the Standard Model. The SHiP detector has the potentiality to discover very
weakly interacting and long lived particles in a wide unexplored range of
their masses and couplings, within these Portals. As an example, we report in
the left plot of Figure 3 the sensitivity to heavy neutral leptons, when only
the muon coupling $U_{\mu}$ is considered [5]. For an overview of the
sensitivity to different portals and corresponding particles, we refer to [1,
2].
Figure 3: Left: SHiP sensitivity to heavy neutral leptons [5]. Right:
Improvement of the accuracy on $s^{+}$ with SHiP (red) compared to the present
status (blue) in the $0.02<x<0.35$ range.
The observation of tau neutrinos was confirmed by the DONUT experiment only in
2008 when 9 candidate events were reported [6]. The OPERA experiment [4] has
detected ten tau neutrinos [7, 8, 9, 10, 11, 12], leading to the discovery of
tau neutrino appearance from muon neutrino oscillations [11, 12]. The only
leptonic decay observed by OPERA [9] shows negative charge as expected from a
$\nu_{\tau}$ interaction. Therefore, so far there is no direct evidence for
tau anti-neutrinos. The SHiP facility is a $\nu_{\tau}$ factory, with
$6.6\times 10^{15}$ tau neutrinos produced in primary proton collisions,
equally divided in neutrinos and anti-neutrinos. Given the neutrino target
mass of about 10 tons, one expects more than 10000 interactions of tau
neutrinos and anti-neutrinos.
Charmed hadrons are produced in neutrino and anti-neutrino charged-current
interactions at the level of about 5% [13]. Experiments based on calorimetric
technology identify charmed hadrons only in their muonic decay channel, when
two opposite sign muons are produced in the final state. A cut of 5 GeV is
applied to muons in order to suppress the background due to punch-through
pions. The nuclear emulsion technology, instead, identifies topologically the
charmed hadron by detecting its decay vertex. Energy cuts are therefore much
looser, thus providing a better sensitivity to the charm quark mass. Moreover,
a large statistical gain is provided by the use of hadronic decay modes [13].
Indeed, SHiP will integrate about $10^{5}$ charm candidates, more than one
order of magnitude larger than the present statistics, with a large ($\sim
30$%) contribution from anti-neutrinos. Charm production in neutrino
scattering is extremely sensitive to the strange quark content of the nucleon,
especially with anti-neutrinos where the $s$-quark is dominant. SHiP will
improve significantly the uncertainty on the strange quark distribution in the
nucleon as shown in the right plot of Figure 3 in terms of
$s^{+}=s(x)+\bar{s}(x)$ in the $0.02<x<0.35$ range.
## 3 RPC technology in SHiP
The RPC technology is proposed to be used for two different applications in
SHiP: one is a tracking detector for the muon system of the neutrino detector,
also acting as a veto for the background of hidden particle decays. Prototypes
of these chambers are being constructed for the measurement of the charm
cross-section, where RPC will instrument the muon system. The other
application of the RPC technology in SHiP is the timing detector in the
downstream apparatus for the detection of hidden particles. We describe both
applications here.
Figure 4: Left: Setup of the charm measurement experiment including the
downstream muon filter based on the RPC technology. Right: structure of the 32
mm thick RPC chamber.
### 3.1 Muon system for the charm cross-section measurement
The prediction of the tau neutrino yield is affected by a large uncertainty:
indeed, simulation studies of proton interactions in heavy and thick targets
show that the charmed hadron yield is increased by a factor of 2.3 from the
cascade production [14]. Charmed hadrons are produced either directly from
interactions of the primary protons or from subsequent interactions of the
particles produced in the hadronic cascade showers, including the protons
after a primary elastic collision. The only available measurement of the
associated charm production per nucleon $\sigma_{c\bar{c}}=18.1\pm 1.7$
$\mu$barn [15] was indeed obtained with a thin target where the secondary
production is negligible.
The SHiP Collaboration has proposed the SHiP-charm project [16], aiming at
measuring the associated charm production by employing the SPS 400 GeV/c
proton beam. This proposal includes a study of the cascade effect to be
carried out using ECC techniques, i.e. slabs consisting of a replica of the
SHiP experiment target [1] interleaved with emulsion films. The detector is
hybrid, combining the emulsion technique with electronic detectors to provide
the charge and momentum measurement of charmed hadron decay daughters and the
muon identification. This setup shown in the left part of Figure 4 allows a
full kinematical reconstruction of the event. An optimisation run was approved
at CERN for July 2018 while the full measurement is planned after the long
shutdown LS2 of the CERN accelerator complex, with $5\times 10^{7}$ protons on
target and a charm yield of about 2500 fully reconstructed interactions.
The RPC chambers were designed to operate in avalanche mode, with a time
resolution of about 1 ns. Two orthogonal sets of strips are used for 2D
measurements with an expected position resolution of about 3 mm in both
directions. Their structure is shown in the right part of Figure 4. The 2D
measurement is necessary to cope with the large occupancy of the detector:
indeed, in each event there are on average several tens of particles entering
the first two RPC detectors of the muon system, only one or two being a muon
particle. The bakelite electrodes were produced at Puricelli s.r.l. in Italy.
Each RPC is using 118 horizontal and 184 vertical strips with a pitch of
10.625 mm produced at KODEL in Korea, with a total size of $2100\times 1350$
mm2 and an active area of $1900\times 1200$ mm2. Each RPC is equipped with 20
front-end cards (FEERIC, developed by the ALICE Collaboration), 12 connected to
vertical and 8 to horizontal strips. 5 readout boards are used for each
chamber. In total 5 chambers were built and are being tested at CERN before
their installation at the H4 beamline at CERN in July 2018.
### 3.2 Timing detector
One of the SHiP timing detector prototypes is based on timing Resistive Plate
Chamber (tRPC) technology. The prototype uses a novel concept, i.e. the RPC
sensitive volume. With this approach, the gas volume and the High Voltage (HV)
insulation are confined inside a permanent sealed plastic box, decoupling it
from the pick up electrodes located on the top and on the bottom of the
sensitive volume. The main advantages of this sensitive volume are:
versatility, the same volume can be coupled to different readout electrodes;
ease of construction and low cost on the $1\div 2$ m2 scale with a complete
tightness of the plastic box allowing an operation with low gas flux.
The sensitive volume of the SHiP tRPC prototype houses a multi-gap RPC
structure [17] with six gas gaps defined by seven 1 mm thick float glass
electrodes of about $1550\times 1250$ mm2 separated by 0.3 mm nylon mono-
filaments. The HV electrodes are made up of a resistive layer applied to the
surface of the outermost glasses with airbrush techniques. The structure is
permanently sealed inside a PMMA gas tight box with a 1 mm lid thickness
equipped with feed-throughs for gas and HV connections.
The RPC chamber is composed of two identical sensitive modules, read out by a
pick-up electrode, located between the modules, made from FR4 Printed Circuit
Board with a thickness of 1.5 mm and equipped with $1600\times 30$ mm2 copper
strips. The set is enclosed in an aluminium case to guarantee the
electromagnetic insulation from the environment and enough mechanical
rigidity. The chamber was operated with pure C2H2F4 and open gas flow. Both
sides of each strip are directly connected to low-jitter high-gain/bandwidth
Front-End Electronics [18], whose digital output is connected to an FPGA-based
multi-hit TDC [19]. The time of each strip is calculated as the average of the
times at its two ends.
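A brief sketch of the standard two-ended readout relations behind this choice (our notation; the position formula is not spelled out in the text): the average cancels the signal propagation delay along the strip, while the time difference encodes the hit position,
$t_{\mathrm{strip}}=\frac{t_{1}+t_{2}}{2},\qquad x\simeq\frac{v}{2}\,(t_{1}-t_{2}),$
where $t_{1},t_{2}$ are the times measured at the two ends, $x$ is the hit position with respect to the strip centre and $v$ is the signal propagation speed along the strip.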
Three fast plastic scintillators (BC420, $2\times 3\times 8$ cm3) readout on
both sides by Hamamatsu H6533 photomultipliers are used to trigger on cosmic
muons and to provide a time reference, in order to evaluate the response of
the prototype. The three scintillators are aligned with one of the strips, two
above and one below the chamber. The left plot of Figure 5 shows the time
distribution of the difference between one of the reference scintillators and
the prototype after the walk correction. The time precision of the chamber,
after subtracting the contribution of the scintillator ($\sigma\sim 107$ ps),
is $\sigma\sim 105$ ps. The right plot of Figure 5 shows the time
accuracy and the chamber efficiency as a function of the HV/gap: a plateau in
the efficiency is reached above 2550 V/gap as well as an accuracy of about 105
ps. At the plateau, the dark counting rate of the detector is about 2.2
kHz/m2. These measurements were carried out at the University of Coimbra in
Portugal and further tests will be performed with the final gas mixture 90%
C2H2F4 and 10% SF6. Nevertheless, the performance already measured makes the
tRPC a good candidate to instrument the 50 m2 SHiP timing detector.
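As a consistency check (assuming the usual quadrature subtraction, which the text leaves implicit), the quoted numbers correspond to a measured width of
$\sigma_{\mathrm{meas}}=\sqrt{\sigma_{\mathrm{tRPC}}^{2}+\sigma_{\mathrm{scint}}^{2}}\approx\sqrt{105^{2}+107^{2}}\ \mathrm{ps}\approx 150\ \mathrm{ps}$
for the time-difference distribution between the chamber and one reference scintillator.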
Figure 5: Left: Distribution of the time difference between one of the
reference scintillators and the prototype chamber after the walk correction.
Right: Time accuracy and efficiency as a function of the HV/gap.
## References
* [1] M. Anelli et al., _A facility to Search for Hidden Particles (SHiP) at the CERN SPS_ , arXiv:1504.04956.
* [2] S. Alekhin et al., _A facility to search for hidden particles at the CERN SPS: the SHiP physics case_ , Rep. Prog. Phys. 79 (2016) 124201.
* [3] A. Akmete et al., _The active muon shield in the SHiP experiment_ , JINST 12 (2017) no.05, P05011.
* [4] N. Agafonova et al., _The OPERA experiment in the CERN to Gran Sasso neutrino beam_ , JINST 4 (2009) P04018.
* [5] K. Bondarenko et al., _Phenomenology of GeV-scale Heavy Neutral Leptons_ , arXiv:1805.08567v2.
* [6] K. Kodama et al., _Final tau-neutrino results from the DONuT experiment_ , Phys. Rev. D78 (2008) 052002.
* [7] N. Agafonova et al., _Observation of a first $\nu_{\tau}$ candidate in the OPERA experiment in the CNGS beam_, Phys. Lett. B691 (2010) 138.
* [8] N. Agafonova et al., _New results on $\nu_{\mu}\rightarrow\nu_{\tau}$ appearance with the OPERA experiment in the CNGS beam_, JHEP 1311 (2013) 036.
* [9] N. Agafonova et al., _Evidence for $\nu_{\mu}\rightarrow\nu_{\tau}$ appearance in the CNGS neutrino beam with the OPERA experiment_, Phys. Rev. D89 (2014) 051102.
* [10] N. Agafonova et al., _Observation of tau neutrino appearance in the CNGS beam with the OPERA experiment_ , PTEP 2014 (2014) 101C01.
* [11] N. Agafonova et al., _Discovery of $\tau$ neutrino appearance in the CNGS Neutrino Beam with the OPERA experiment_, Phys. Rev. Lett. 115 (2015) 121802.
* [12] N. Agafonova et al., _Final results of the OPERA experiment on $\nu_{\tau}$ appearance in the CNGS beam_, Phys. Rev. Lett. 120 (2018) 211801.
* [13] G. De Lellis et al., _Charm physics with neutrinos_ , Physics Reports 399 (2004) 227.
* [14] H. Dijkstra and T. Ruf, http://cds.cern.ch/record/2115534/files/SHiP-NOTE-2015-009.pdf.
* [15] C. Lourenco, H.K. Wohri, _Heavy flavour hadro-production from fixed-target to collider energies_ , Physics Reports 433 (2006) 127.
* [16] A. Akmete et al., _Measurement of associated charm production induced by 400 GeV/c protons_ , CERN-SPSC-2017-033, SPSC-EOI-017 (2017).
* [17] E. Cerron Zeballos et al., _A New type of resistive plate chamber: The Multigap RPC_ , NIMA 374 (1996) 132–135.
* [18] D. Belver et al., _Performance of the Low-Jitter High-Gain/Bandwidth Front-End Electronics of the HADES tRPC Wall_ , IEEE Transactions on Nuclear Science 57 (2010) 2848-2856.
* [19] A Neiser et al., _TRB3: a 264 channel high precision TDC platform and its applications_ , JINST 8 (2013) C12043.
|
# A note on cyclic vectors in Dirichlet-type spaces in the unit ball of
${\mathbb{C}}^{n}$
Dimitrios Vavitsas<EMAIL_ADDRESS>Institute of
Mathematics, Faculty of Mathematics and Computer Science, Jagiellonian
University, Łojasiewicza 6, 30-348 Kraków, Poland
###### Abstract.
We characterize model polynomials that are cyclic in Dirichlet-type spaces in
the unit ball of $\mathbb{C}^{n},$ and we give a sufficient capacity condition
in order to identify non-cyclic vectors.
###### Key words and phrases:
Dirichlet-type spaces, cyclic vectors, anisotropic capacities
###### 1991 Mathematics Subject Classification:
31C25, 32A37, 47A15
Partially supported by NCN grant SONATA BIS no. 2017/26/E/ST1/00723 of the
National Science Centre, Poland
## 1\. Introduction
Studying Dirichlet-type spaces in the unit ball of ${\mathbb{C}}^{n}$ we can
draw conclusions for classical Hilbert spaces of holomorphic functions such as
the Hardy, Bergman and Dirichlet spaces. General introduction to this theory
can be found in [18], [22].
The purpose of this note is to characterize model polynomials and to study
special families of functions that are cyclic for the shift operators on these
spaces. Moreover, we give a sufficient capacity condition in order to identify
non-cyclic functions. Norm comparisons, sharp decay of norms for special
subspaces, capacity conditions studied in [3], [4], [6], [21] are the main
motivation for this work. The cyclicity of a function $f$ in a space of
holomorphic functions is connected also with the problem of approximating
$1/f$, see [19], [20] for the study of this subject.
Full characterization of polynomials in more than two variables looks like a
hard problem either in the unit ball or the polydisc. The cyclicity problem of
polynomials for the bidisk was solved in [5] and shortly after extended in
[13]. The corresponding problem in the setting of the unit ball of
${\mathbb{C}}^{2}$ was solved in [14].
### 1.1. Dirichlet-type spaces in the unit ball
Denote the unit ball by
${\mathbb{B}_{n}}=\\{z\in{\mathbb{C}}^{n}:||z||<1\\},$
and its boundary, the unit sphere by
$\mathbb{S}_{n}=\\{z\in{\mathbb{C}}^{n}:||z||=1\\},$
where $||z||=\sqrt{|z_{1}|^{2}+...+|z_{n}|^{2}}$ is the associated norm of the
usual _Euclidean inner product_ $\langle
z,w\rangle=z_{1}\bar{w}_{1}+...+z_{n}\bar{w}_{n}.$ Denote the _class of
holomorphic functions_ in ${\mathbb{B}_{n}}$ by
$\textrm{Hol}({\mathbb{B}_{n}}).$ Any function
$f\in\textrm{Hol}({\mathbb{B}_{n}})$ has a power series expansion
(1)
$f(z)=\sum_{k=0}^{\infty}a_{k}z^{k}=\sum_{k_{1}=0}^{\infty}...\sum_{k_{n}=0}^{\infty}a_{k_{1},...,k_{n}}z_{1}^{k_{1}}\cdots
z_{n}^{k_{n}},\quad z\in{\mathbb{B}_{n}},$
where $k=(k_{1},...,k_{n})$ is an $n$-tuple index of non-negative integers,
$k!=k_{1}!\cdots k_{n}!$ and $z^{k}=z_{1}^{k_{1}}\cdots z_{n}^{k_{n}}.$ The
power series in (1) exists, converges normally in ${\mathbb{B}_{n}}$ and is
unique, since the unit ball is a connected Reinhardt domain containing the
origin, i.e. $(z_{1},...,z_{n})\in{\mathbb{B}_{n}}$ implies
$(e^{i\theta_{1}}z_{1},...,e^{i\theta_{n}}z_{n})\in{\mathbb{B}_{n}}$ for
arbitrary real $\theta_{1},...,\theta_{n}$ (see [11]).
To simplify the notation we may write (1) as follows:
(2)
$f(z)=\sum_{m=0}^{\infty}\sum_{|k|=m}a_{k}z^{k}=\sum_{|k|=0}^{\infty}a_{k}z^{k},\quad
z\in{\mathbb{B}_{n}},$
where $|k|=k_{1}+...+k_{n}.$
Let $f\in\mathrm{Hol}({\mathbb{B}_{n}})$. We say that $f$ belongs to the
_Dirichlet-type space_ $D_{\alpha}({\mathbb{B}_{n}}),$ where
$\alpha\in\mathbb{R}$ is a fixed parameter, if
(3)
$||f||^{2}_{\alpha}:=\sum_{|k|=0}^{\infty}(n+|k|)^{\alpha}\frac{(n-1)!k!}{(n-1+|k|)!}|a_{k}|^{2}<\infty.$
General introduction to the theory of Dirichlet-type spaces in the unit ball
of ${\mathbb{C}}^{n}$ can be found in [1], [2], [15], [16], [19], [21], [22].
One variable Dirichlet-type spaces are discussed in the textbook [12]. The
weights in the norm in (3) are chosen in such a way that
$D_{0}({\mathbb{B}_{n}})$ and $D_{-1}({\mathbb{B}_{n}})$ coincide with the
Hardy and Bergman spaces of the ball, respectively. The Dirichlet space having
Möbius invariant norm corresponds to the parameter choice $\alpha=n.$
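For instance, for $\alpha=0$ the weights in (3) are exactly the monomial moments recalled below among the standard tools, so that
$||f||_{0}^{2}=\sum_{|k|=0}^{\infty}|a_{k}|^{2}\int_{\mathbb{S}_{n}}|\zeta^{k}|^{2}d\sigma(\zeta),$
which is the Hardy space norm of the ball.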
By the definition, $D_{\alpha}({\mathbb{B}_{n}})\subset
D_{\beta}({\mathbb{B}_{n}}),$ when $\alpha\geq\beta.$ Polynomials are dense in
the spaces $D_{\alpha}({\mathbb{B}_{n}}),$ $\alpha\in{\mathbb{R}},$ and
$z_{i}\cdot f\in D_{\alpha}({\mathbb{B}_{n}}),$ $i=1,...,n$ whenever $f\in
D_{\alpha}({\mathbb{B}_{n}}).$
A _multiplier_ in $D_{\alpha}({\mathbb{B}_{n}})$ is a holomorphic function
$\phi:{\mathbb{B}_{n}}\rightarrow{\mathbb{C}}$ that satisfies $\phi\cdot f\in
D_{\alpha}({\mathbb{B}_{n}})$ for all $f\in D_{\alpha}({\mathbb{B}_{n}}).$
Polynomials, as well as holomorphic functions in a neighbourhood of the closed
unit ball, are multipliers in every space $D_{\alpha}({\mathbb{B}_{n}})$.
### 1.2. Shift operators and cyclic vectors
Consider the bounded linear operators
$S_{1},...,S_{n}:D_{\alpha}({\mathbb{B}_{n}})\rightarrow
D_{\alpha}({\mathbb{B}_{n}})$ defined by $S_{i}:f\mapsto z_{i}\cdot f.$ We say
that $f\in D_{\alpha}({\mathbb{B}_{n}})$ is a _cyclic vector_ if the closed
invariant subspace generated by $f,$ i.e.
$[f]:=\mathrm{clos}\,\mathrm{span}\\{z_{1}^{k_{1}}\cdots
z_{n}^{k_{n}}f:k_{1},...,k_{n}=0,1,...\\}$
coincides with $D_{\alpha}({\mathbb{B}_{n}})$ (the closure is taken with
respect to the $D_{\alpha}({\mathbb{B}_{n}})$ norm). An equivalent definition
is that $f$ is cyclic if and only if $1\in[f].$
Since $D_{\alpha}({\mathbb{B}_{n}})$ enjoys the _bounded point evaluation
property_, a function that is cyclic cannot vanish inside the unit ball. Thus,
we focus on functions non-vanishing in the domain. Also, non-zero constant
functions are cyclic in every space $D_{\alpha}({\mathbb{B}_{n}}).$ More
information regarding cyclic vectors in Dirichlet-type spaces over the disk,
the polydisc and the unit ball can be found in [3], [4], [5], [6], [8], [12],
[13], [14], [20], [21].
Just as in the settings of the bidisk and the unit ball of two variables, the
cyclicity of a function $f\in D_{\alpha}({\mathbb{B}_{n}})$ is inextricably
linked with its zero set
$\mathcal{Z}(f)=\\{z\in{\mathbb{C}}^{n}:f(z)=0\\}.$
The zeros of a function lying on the sphere are called the _boundary zeros_.
### 1.3. Plan of the paper
Section 2 studies Dirichlet-type spaces. In particular, we give a crucial
relation among them. Using fractional radial derivatives and the Cauchy
formula of functions lying in the ball algebra $A({\mathbb{B}_{n}})$ which
contains functions that are continuous on the closed unit ball and holomorphic
in its interior, we give an equivalent norm of Dirichlet-type spaces for a
wide range of parameters $\alpha.$
Section 3 studies diagonal subspaces. In particular, we extend a result from
[21]. It makes sense to define functions $f\in\mathrm{Hol}({\mathbb{B}_{n}})$
using functions $\tilde{f}\in\mathrm{Hol}({\mathbb{D}}(\mu))$ for a proper
$\mu>0.$ Geometrically speaking, we are looking at a disk embedded in the ball
but not in a coordinate plane. Thus, we may switch the problem of cyclicity
from the ball to spaces of holomorphic functions of one variable that are well
known. Then we use optimal approximants in order to identify cyclicity.
Moreover, we prove cyclicity for model polynomials for proper parameters. In
the setting of the unit ball of two variables, see [21], the model polynomials
are the following: $1-z_{1}$ which vanishes in the closed unit ball on a
singleton, i.e. $\mathcal{Z}(1-z_{1})\cap\mathbb{S}_{2}=\\{(1,0)\\},$ and
$1-2z_{1}z_{2}$ which vanishes along an analytic curve, i.e.
$\mathcal{Z}(1-2z_{1}z_{2})\cap\mathbb{S}_{2}=\\{(e^{i\theta}/\sqrt{2},e^{-i\theta}/\sqrt{2}):\theta\in{\mathbb{R}}\\}.$
In our case, the corresponding candidates are the following:
$p(z)=1-m^{m/2}z_{1}\cdots z_{m},\quad 1\leq m\leq n.$
They vanish in the closed unit ball along the following analytic sets:
$\mathcal{Z}(p)\cap\mathbb{S}_{n}=\\{1/\sqrt{m}(e^{i\theta_{1}},..,e^{i\theta_{m-1}},e^{-i(\theta_{1}+...+\theta_{m-1})},0,..,0):\theta_{i}\in\mathbb{R}\\}.$
These polynomials are also studied with respect to the Drury-Arveson space in
[19].
In two variables, $1-z_{1}$ is cyclic in $D_{\alpha}(\mathbb{B}_{2})$
precisely when $\alpha\leq 2,$ and $1-2z_{1}z_{2}$ is cyclic in
$D_{\alpha}(\mathbb{B}_{2})$ precisely when $\alpha\leq 3/2.$ Here, there are
more than two fixed parameters. The characterization of cyclicity of these two
polynomials was crucial in [14].
Section 4 studies the radial dilation of a polynomial. Using the equivalent
norm of Section 2, we identify cyclicity for the model polynomials via the
powerful radial dilation method. In particular, we show that if
$p/p_{r}\rightarrow 1$ weakly, where $p_{r}(z)=p(rz)$ is a radial dilation of
$p,$ then $p$ is cyclic, (see [13] for the bidisk settings and [14] for the
unit ball in two variables). This method is quite interesting since it can be
applied to an arbitrary polynomial. Note that in [13], [14] the radial
dilation method is one of the main tools for solving the cyclicity problem for
polynomials. The main result of this section verifies the arguments made about
polynomials in Section 3.
Section 5 studies non-cyclic vectors. We use the notion of Riesz
$\alpha$-capacity in order to identify non-cyclic functions. Moreover, we
study Cauchy transforms of Borel measures supported on zero sets of the radial
limits of a given function $f\in D_{\alpha}({\mathbb{B}_{n}})$ and we give
asymptotic expansions of their norms. Then employing a standard scheme due to
Brown and Shields, see [8], we prove the main result. Note that this
sufficient capacity condition for non-cyclicity in Dirichlet-type spaces in
the unit ball of two variables was proved by A. Sola in [21].
## Standard tools
Let us give some standard tools which will be useful in the sequel.
The binomial series:
$\frac{1}{(1-x)^{\alpha}}=\sum_{k=0}^{\infty}\frac{\Gamma(k+\alpha)}{\Gamma(\alpha)k!}x^{k},$
where $x$ is a complex number with $|x|<1$ and $\alpha$ is a non-negative real number.
The asymptotic behaviour of the $\Gamma$-function is the following:
$\Gamma(k+\alpha)\asymp(k-1)!k^{\alpha},$ where the symbol $\asymp$ denotes
that the ratio of the two quantities either tends to a constant as $k$ tends
to infinity or is bounded above and below by positive constants.
The multinomial formula:
$(x_{1}+...+x_{n})^{k}=\sum_{|j|=k}\frac{k!}{j!}x_{1}^{j_{1}}\cdots
x_{n}^{j_{n}},$
where $j=(j_{1},...,j_{n})$ is an $n$-tuple index of non-negative integers and
$x_{i}$ are complex numbers.
The Stirling formula, which describes the asymptotic behaviour of the
factorial:
$k!\asymp k^{1/2}k^{k}/e^{k}.$
Denote the normalized area measure on ${\mathbb{C}}^{n}={\mathbb{R}}^{2n}$ by
$du(z)$ and the normalized rotation-invariant positive Borel measure on
$\mathbb{S}_{n}$ by $d\sigma(\zeta),$ (see [18], [22]). The measures $du(z)$
and $d\sigma(\zeta)$ are related by the formula
$\int_{{\mathbb{C}}^{n}}f(z)du(z)=2n\int_{0}^{\infty}\int_{\mathbb{S}_{n}}\epsilon^{2n-1}f(\epsilon\zeta)d\sigma(\zeta)d\epsilon.$
The holomorphic monomials are orthogonal to each other in $L^{2}(\sigma),$
that is, if $k$ and $l$ are multi-indices such that $k\neq l,$ then
$\int_{\mathbb{S}_{n}}\zeta^{k}\bar{\zeta}^{l}d\sigma(\zeta)=0.$
Moreover,
$\int_{\mathbb{S}_{n}}|\zeta^{k}|^{2}d\sigma(\zeta)=\frac{(n-1)!k!}{(n-1+|k|)!}\quad\text{and}\quad\int_{{\mathbb{B}_{n}}}|z^{k}|^{2}du(z)=\frac{n!k!}{(n+|k|)!}.$
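For instance, with $n=2$ and $k=(1,1)$ these formulas give
$\int_{\mathbb{S}_{2}}|\zeta_{1}\zeta_{2}|^{2}d\sigma(\zeta)=\frac{1!\,1!\,1!}{3!}=\frac{1}{6}\quad\text{and}\quad\int_{{\mathbb{B}_{2}}}|z_{1}z_{2}|^{2}du(z)=\frac{2!\,1!\,1!}{4!}=\frac{1}{12}.$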
## 2\. Relation among Dirichlet-type spaces and equivalent norms
We study the structure of Dirichlet-type spaces. Note that
$R(f)(z)=z_{1}\partial_{z_{1}}f(z)+...+z_{n}\partial_{z_{n}}f(z)$
is the _radial derivative_ of a function $f.$ The radial derivative plays a
key role in the function theory of the unit ball. A crucial relation among
these spaces is the following.
###### Proposition 1.
Let $f\in\mathrm{Hol}({\mathbb{B}_{n}})$ and $\alpha\in{\mathbb{R}}$ be fixed.
Then
$f\in D_{\alpha}({\mathbb{B}_{n}})\quad\text{if and only if}\quad(n\,\mathrm{Id}+R)^{q}(f)=\sum_{i=0}^{q}\binom{q}{i}n^{i}R^{q-i}(f)\in
D_{\upsilon}({\mathbb{B}_{n}}),$
where $\alpha=2q+\upsilon,$ $q\in\mathbb{N},$ and $R^{q-i}$ denotes the
$(q-i)$-th iterate of the operator $R.$
###### Proof.
Indeed, it is enough to check that
$||nf+R(f)||^{2}_{\alpha-2}=\sum_{|k|=0}^{\infty}(n+|k|)^{\alpha-2}\frac{(n-1)!k!}{(n-1+|k|)!}(n+|k|)^{2}|a_{k}|^{2}=||f||_{\alpha}^{2},$
and to iterate this identity $q$ times.
∎
We continue by giving an equivalent characterization of Dirichlet-type norms.
In Dirichlet-type spaces in the unit ball, one of the integral representations
of the norm is achieved in a limited range of parameters.
###### Lemma 2 (see [16]).
If $\alpha\in(-1,1)$, then $||f||^{2}_{\alpha}$ is equivalent to
$|f|^{2}_{\alpha}:=\int_{{\mathbb{B}_{n}}}\frac{||\nabla(f)(z)||^{2}-|R(f)(z)|^{2}}{(1-||z||^{2})^{\alpha}}du(z).$
Above, $\nabla(f)(z)=(\partial_{z_{1}}f(z),...,\partial_{z_{n}}f(z))$ denotes
the _holomorphic gradient_ of a holomorphic function $f.$ Note that
Proposition 1 allows us to use Lemma 2 whenever $\upsilon\in(-1,1).$ Let
$\gamma,t\in{\mathbb{R}}$ be such that neither $n+\gamma$ nor $n+\gamma+t$ is
a negative integer. If $f=\sum_{|k|=0}^{\infty}a_{k}z^{k}$ is the homogeneous
expansion of a function $f\in\textrm{Hol}({\mathbb{B}_{n}}),$ then we may
define an invertible continuous linear operator with respect to the topology
of uniform convergence on compact subsets of ${\mathbb{B}_{n}},$ denoted by
$R^{\gamma,t}:\textrm{Hol}({\mathbb{B}_{n}})\rightarrow\textrm{Hol}({\mathbb{B}_{n}})$
and having expression
$R^{\gamma,t}f(z)=\sum_{|k|=0}^{\infty}C(\gamma,t,k)a_{k}z^{k},\quad
z\in{\mathbb{B}_{n}},$
where
(4)
$C(\gamma,t,k)=\frac{\Gamma(n+1+\gamma)\Gamma(n+1+|k|+\gamma+t)}{\Gamma(n+1+\gamma+t)\Gamma(n+1+|k|+\gamma)}\asymp|k|^{t}.$
See [22] for more information regarding these fractional radial derivatives.
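The asymptotics in (4) follow from the behaviour of the $\Gamma$-function recalled among the standard tools: the factors not involving $k$ are constants, while
$\frac{\Gamma(n+1+|k|+\gamma+t)}{\Gamma(n+1+|k|+\gamma)}\asymp(n+1+|k|+\gamma)^{t}\asymp|k|^{t}.$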
###### Lemma 3.
Let $t\in{\mathbb{R}}$ be such that $n-1+t\geq 0.$ If $f\in
A({\mathbb{B}_{n}}),$ then
$R^{-1,t}f(z)=\int_{\mathbb{S}_{n}}\frac{f(\zeta)}{(1-\langle
z,\zeta\rangle)^{n+t}}d\sigma(\zeta),\quad z\in{\mathbb{B}_{n}}.$
###### Proof.
The continuous linear operator $R^{\gamma,t},$ see [22], satisfies
$R^{\gamma,t}\Big{(}\frac{1}{(1-\langle
z,w\rangle)^{n+1+\gamma}}\Big{)}=\frac{1}{(1-\langle
z,w\rangle)^{n+1+\gamma+t}}$
for all $w\in{\mathbb{B}_{n}}.$ Next, define $f_{\epsilon}$ for
$\epsilon\in(0,1)$ by
$f_{\epsilon}(z)=\int_{\mathbb{S}_{n}}\frac{f(\zeta)}{(1-\langle
z,\epsilon\zeta\rangle)^{n}}d\sigma(\zeta),\quad z\in{\mathbb{B}_{n}}.$
The Cauchy formula holds for $f\in A({\mathbb{B}_{n}})$ and hence
$f=\lim_{\epsilon\rightarrow 1^{-}}f_{\epsilon}.$ It follows that
$\displaystyle R^{-1,t}f(z)$
$\displaystyle=R^{-1,t}\Big{(}\lim_{\epsilon\rightarrow
1^{-}}\int_{\mathbb{S}_{n}}\frac{f(\zeta)}{(1-\langle
z,\epsilon\zeta\rangle)^{n}}d\sigma(\zeta)\Big{)}$
$\displaystyle=\lim_{\epsilon\rightarrow
1^{-}}R^{-1,t}\Big{(}\int_{\mathbb{S}_{n}}\frac{f(\zeta)}{(1-\langle
z,\epsilon\zeta\rangle)^{n}}d\sigma(\zeta)\Big{)}$
$\displaystyle=\lim_{\epsilon\rightarrow
1^{-}}\int_{\mathbb{S}_{n}}f(\zeta)R^{-1,t}\Big{(}\frac{1}{(1-\langle
z,\epsilon\zeta\rangle)^{n}}\Big{)}d\sigma(\zeta)$
$\displaystyle=\lim_{\epsilon\rightarrow
1^{-}}\int_{\mathbb{S}_{n}}\frac{f(\zeta)}{(1-\langle
z,\epsilon\zeta\rangle)^{n+t}}d\sigma(\zeta)$
$\displaystyle=\int_{\mathbb{S}_{n}}\frac{f(\zeta)}{(1-\langle
z,\zeta\rangle)^{n+t}}d\sigma(\zeta)$
and the assertion follows. ∎
###### Theorem 4.
Let $\alpha\in{\mathbb{R}}$ be such that $n-1+\alpha/2\geq 0$ and $f\in
A({\mathbb{B}_{n}}).$ Then $f\in D_{\alpha}({\mathbb{B}_{n}})$ if and only if
$\int_{{\mathbb{B}_{n}}}(1-||z||^{2})\Big{|}\int_{\mathbb{S}_{n}}\frac{f(\zeta)\bar{\zeta}_{p}}{(1-\langle
z,\zeta\rangle)^{n+\alpha/2+1}}d\sigma(\zeta)\Big{|}^{2}du(z)<\infty$
and
$\int_{{\mathbb{B}_{n}}}\Big{|}\int_{\mathbb{S}_{n}}\frac{(\overline{z_{p}\zeta_{q}-z_{q}\zeta_{p}})f(\zeta)}{(1-\langle
z,\zeta\rangle)^{n+\alpha/2+1}}d\sigma(\zeta)\Big{|}^{2}du(z)<\infty,$
where $p,q=1,...,n.$
###### Proof.
Choose $t$ so that $\alpha=2t.$ Note that $n,t$ are fixed and hence
$||f||^{2}_{\alpha}\asymp\sum_{|k|=0}^{\infty}\frac{(n-1)!k!}{(n-1+|k|)!}||k|^{t}a_{k}|^{2}.$
Thus, (4) implies that $||R^{-1,t}f||_{0}\asymp||f||_{\alpha}.$ One can apply
then the integral representation of Dirichlet-type norms to
$R^{-1,t}f\in\textrm{Hol}({\mathbb{B}_{n}}),$ i.e. $||R^{-1,t}f||_{0}$ is
equivalent to $|R^{-1,t}f|_{0}.$ According to Lemma 3 we get that
$\partial_{z_{p}}(R^{-1,t}f)(z)=\int_{\mathbb{S}_{n}}\frac{f(\zeta)\bar{\zeta}_{p}}{(1-\langle
z,\zeta\rangle)^{n+t+1}}d\sigma(\zeta),\quad z\in{\mathbb{B}_{n}},$
where $p=1,...,n.$ By Lagrange's identity, the term $||\nabla(f)||^{2}-|R(f)|^{2}$ expands as follows:
$||\nabla(f)||^{2}-|R(f)|^{2}=(1-||z||^{2})||\nabla(f)||^{2}+\sum_{p<q}|\bar{z}_{p}\partial_{z_{q}}f-\bar{z}_{q}\partial_{z_{p}}f|^{2}.$
The assertion follows by Lemma 2. ∎
## 3\. Diagonal subspaces
In [3], a method of construction of optimal approximants via determinants in
Dirichlet-type spaces in the unit disk is provided. Similarly, we may define
optimal approximants in several variables (see [19]).
Fix $N\in\mathbb{N}.$ We define the space of polynomials
$p\in{\mathbb{C}}[z_{1},...,z_{n}]$ with degree at most $nN$ as follows:
$P_{N}^{n}:=\\{p(z)=\sum_{k_{1}=0}^{N}...\sum_{k_{n}=0}^{N}a_{k_{1},...,k_{n}}z_{1}^{k_{1}}\cdots
z_{n}^{k_{n}}\\}.$
###### Remark 5.
Let $(X,||\cdot||)$ be a normed space and fix $x\in X,$ $C\subset X.$ The
distance between $x$ and the set $C$ is the following:
$\mathrm{dist}_{X}(x,C):=\inf\\{||x-c||:c\in C\\}.$
It is well known that if $X$ is a Hilbert space and $C\subset X$ a convex
closed subset, then for any $x\in X,$ there exists a unique $y\in C$ such that
$||x-y||=\mathrm{dist}_{X}(x,C).$ Let $f\in D_{\alpha}({\mathbb{B}_{n}})$ be
non-zero. Since $f\cdot P_{N}^{n}$ is a finite-dimensional, hence closed,
convex subset of $D_{\alpha}({\mathbb{B}_{n}}),$ we deduce that for any
$N\in\mathbb{N},$ there exists exactly one $p_{N}\in P_{N}^{n}$ satisfying
$||p_{N}f-1||_{\alpha}=\mathrm{dist}_{D_{\alpha}({\mathbb{B}_{n}})}(1,f\cdot
P_{N}^{n}).$
Let $f\in D_{\alpha}({\mathbb{B}_{n}}).$ We say that a polynomial $p_{N}\in
P^{n}_{N}$ is an optimal approximant of order $N$ to $1/f$ if $p_{N}$
minimizes $||pf-1||_{\alpha}$ among all polynomials $p\in P_{N}^{n}.$ We call
$||p_{N}f-1||_{\alpha}$ the optimal norm of order $N$ associated with $f.$
Let $M=(M_{1},...,M_{n})$ be a multi-index, where $M_{i}$ are non-negative
integers, and $m\in\\{1,...,n\\}.$ Setting
$\mu(m):=\frac{(M_{1}+...+M_{m})^{M_{1}+...+M_{m}}}{M_{1}^{M_{1}}\cdots
M_{m}^{M_{m}}},$
we see that
(5) $\mu(m)^{1/2}|z_{1}|^{M_{1}}\cdots|z_{m}|^{M_{m}}\leq 1,\quad
z\in{\mathbb{B}_{n}}.$
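A short proof of (5), left implicit in the text: write $x_{i}=|z_{i}|^{2}$ and $S=M_{1}+...+M_{m}.$ The weighted arithmetic-geometric mean inequality gives
$\prod_{i=1}^{m}\Big{(}\frac{Sx_{i}}{M_{i}}\Big{)}^{M_{i}/S}\leq\sum_{i=1}^{m}\frac{M_{i}}{S}\cdot\frac{Sx_{i}}{M_{i}}=\sum_{i=1}^{m}x_{i}\leq 1,\quad z\in{\mathbb{B}_{n}}$
(factors with $M_{i}=0$ are omitted), hence $\prod_{i=1}^{m}x_{i}^{M_{i}}\leq\prod_{i=1}^{m}(M_{i}/S)^{M_{i}}=\mu(m)^{-1},$ which is (5) after taking square roots.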
Using (5) we may construct polynomials that vanish in the closed unit ball
along analytic subsets of the unit sphere.
###### Remark 6.
Let $\tilde{f}\in\mathrm{Hol}({\mathbb{D}}(\mu(m)^{-1/4})),$ where
${\mathbb{D}}(\mu)=\\{z\in{\mathbb{C}}:|z|<\mu\\},\quad\mu>0.$
According to (5) we define the following function:
$f(z)=f(z_{1},...,z_{n})=\tilde{f}(\mu(m)^{1/4}z_{1}^{M_{1}}\cdots
z_{m}^{M_{m}}),\quad z\in{\mathbb{B}_{n}}.$
Then $f\in\mathrm{Hol}({\mathbb{B}_{n}})$ and it depends on $m$ variables.
Note that we may replace the variables $z_{1},...,z_{m}$ by any other $m$
variables. For convenience, we choose the $m$ first variables. The power $1/4$
will be convenient in the sequel.
Thus, the question that arises is whether we may define closed subspaces of
$D_{\alpha}({\mathbb{B}_{n}})$ passing through one-variable functions. We shall
shall see that these subspaces are called diagonal subspaces due to the nature
of the power series expansion of their elements.
Instead of the classical one variable Dirichlet-type spaces of the unit disk,
we may consider spaces $d_{\beta},$ $\beta\in{\mathbb{R}},$ consisting of
holomorphic functions $\tilde{f}\in\mathrm{Hol}({\mathbb{D}}(\mu^{-1/4})).$
Moreover, such functions with power series expansion
$\tilde{f}(z)=\sum_{l=0}^{\infty}a_{l}z^{l}$ are said to belong to $d_{\beta}$
if
$||\tilde{f}||^{2}_{d_{\beta}}:=\sum_{l=0}^{\infty}\mu^{-l/2}(l+1)^{\beta}|a_{l}|^{2}<\infty.$
There is a natural identification between the function theory of $d_{\beta}$
and that of the one-variable Dirichlet-type spaces $D_{\beta}({\mathbb{D}})$
of the unit disk, and one verifies that the results in [3] remain valid for
$d_{\beta}.$
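One explicit way to realize this identification (a sketch in our notation, with the convention $||g||^{2}_{D_{\beta}({\mathbb{D}})}=\sum_{l=0}^{\infty}(l+1)^{\beta}|b_{l}|^{2}$ for $g(w)=\sum_{l=0}^{\infty}b_{l}w^{l}$): given $\tilde{f}(z)=\sum_{l=0}^{\infty}a_{l}z^{l}\in d_{\beta},$ set $g(w):=\tilde{f}(\mu^{-1/4}w)$ for $w\in{\mathbb{D}}.$ Then
$||g||^{2}_{D_{\beta}({\mathbb{D}})}=\sum_{l=0}^{\infty}(l+1)^{\beta}\mu^{-l/2}|a_{l}|^{2}=||\tilde{f}||^{2}_{d_{\beta}},$
so $\tilde{f}\mapsto g$ is an isometry of $d_{\beta}$ onto $D_{\beta}({\mathbb{D}}).$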
We are ready to define diagonal closed subspaces. Set
$\beta(\alpha):=\alpha-n+\frac{m+1}{2}.$
Let $\alpha,$ $M,$ $m$ be as above. The diagonal closed subspace of
$D_{\alpha}({\mathbb{B}_{n}})$ is the following:
$J_{\alpha,M,m}:=\\{f\in D_{\alpha}({\mathbb{B}_{n}}):\exists\tilde{f}\in
d_{\beta(\alpha)},f(z)=\tilde{f}(\mu(m)^{1/4}z_{1}^{M_{1}}\cdots
z_{m}^{M_{m}})\\}.$
The holomorphic function $\tilde{f}$ is unique by the identity principle, and
hence the definition is unambiguous. Any function
$f\in J_{\alpha,M,m}$ has an expansion of the form
$f(z)=\sum_{l=0}^{\infty}a_{l}(z_{1}^{M_{1}}\cdots z_{m}^{M_{m}})^{l}.$
The relation of norms between one variable and diagonal subspaces follows.
###### Proposition 7.
If $f\in J_{\alpha,M,m},$ then
$||f||_{\alpha}\asymp||\tilde{f}||_{d_{\beta(\alpha)}}.$
###### Proof.
If $f\in J_{\alpha,M,m},$ then
$||f||^{2}_{\alpha}\asymp\sum_{l=0}^{\infty}(l+1)^{\alpha}\frac{(M_{1}l)!\cdots(M_{m}l)!}{(n-1+(M_{1}+\cdots+M_{m})l)!}|a_{l}|^{2}.$
By Stirling’s formula, we obtain
$||f||^{2}_{\alpha}\asymp\sum_{l=0}^{\infty}(l+1)^{\alpha-n+m/2+1/2}\mu(m)^{-l}|a_{l}|^{2}.$
On the other hand, define the function
$f^{\prime}(z)=\sum_{l=0}^{\infty}\mu(m)^{-l/4}a_{l}z^{l}.$ Then
$f^{\prime}(\mu(m)^{1/4}z_{1}^{M_{1}}\cdots z_{m}^{M_{m}})=f(z_{1},...,z_{n})$
and
$||f^{\prime}||^{2}_{d_{\beta(\alpha)}}\asymp\sum_{l=0}^{\infty}(l+1)^{\alpha-n+m/2+1/2}\mu(m)^{-l}|a_{l}|^{2}.$
The assertion follows since $f^{\prime}$ coincides with $\tilde{f}.$ ∎
The corresponding Lemma 3.4 of [4] in our case is the following.
###### Lemma 8.
Let $f\in J_{\alpha,M,m},$ where $\alpha,M,m$ are as above. Let $r_{N}\in
P_{N}^{n}$ with expansion
$r_{N}(z)=\sum_{k_{1}=0}^{N}\cdots\sum_{k_{n}=0}^{N}a_{k_{1},...,k_{n}}z_{1}^{k_{1}}...z_{n}^{k_{n}},$
and consider its projection onto $J_{\alpha,M,m}$
$\pi(r_{N})(z)=\sum_{\\{l:M_{1}l,...,M_{m}l\leq
N\\}}a_{M_{1}l,...,M_{m}l,0,...,0}z_{1}^{M_{1}l}\cdots z_{m}^{M_{m}l}.$
Then
$||r_{N}f-1||_{\alpha}\geq||\pi(r_{N})f-1||_{\alpha}.$
Moreover, just as in Proposition 7, there is a relation of optimal
approximants between one variable and diagonal subspaces.
###### Proposition 9.
If $f\in J_{\alpha,M,m},$ then
$\mathrm{dist}_{D_{\alpha}({\mathbb{B}_{n}})}(1,f\cdot
P_{N}^{n})\asymp\mathrm{dist}_{d_{\beta}(\alpha)}(1,\tilde{f}\cdot
P^{1}_{N}).$
###### Proof.
Let $r_{N},$ $\pi(r_{N})$ be as in Lemma 8. Then $\pi(r_{N})f-1\in
J_{\alpha,M,m}.$ It follows that
$\displaystyle||r_{N}f-1||_{\alpha}\geq||\pi(r_{N})f-1||_{\alpha}\asymp||\tilde{\pi}(r_{N})\tilde{f}-1||_{d_{\beta(\alpha)}}\geq\mathrm{dist}_{d_{\beta(\alpha)}}(1,\tilde{f}\cdot
P^{1}_{N}),$
since $\tilde{\pi}(r_{N})\in P^{1}_{N}.$ On the other hand, let
$\mathrm{dist}_{d_{\beta(\alpha)}}(1,\tilde{f}\cdot
P^{1}_{N})=||q_{N}\tilde{f}-1||_{d_{\beta(\alpha)}},\quad
q_{N}(z)=\sum_{l=0}^{N}a_{l}z^{l}.$
Then, the polynomial
$q^{\prime}_{N}(z_{1},...,z_{n})=\sum_{l=0}^{N}\mu(m)^{-l/4}a_{l}z_{1}^{M_{1}l}\cdots
z_{m}^{M_{m}l}$
satisfies $q_{N}^{\prime}\in J_{\alpha,M,m}\cap P_{N}^{n}$ and
$q_{N}^{\prime}f-1\in J_{\alpha,M,m}.$ Thus,
$\displaystyle||q_{N}\tilde{f}-1||_{d_{\beta(\alpha)}}=||\tilde{q}_{N}^{\prime}\tilde{f}-1||_{d_{\beta(\alpha)}}\asymp||q_{N}^{\prime}f-1||_{\alpha}\geq\mathrm{dist}_{D_{\alpha}({\mathbb{B}_{n}})}(1,f\cdot
P_{N}^{n})$
and the assertion follows. ∎
Define the function $\phi_{\beta}:[0,\infty)\rightarrow[0,\infty)$ by
$\phi_{\beta}(t)=\begin{dcases}t^{1-\beta},&\beta<1\\\
\log^{+}(t),&\beta=1\end{dcases},$
where $\log^{+}(t):=\max\\{\log t,0\\}.$ We have the following.
###### Theorem 10.
Let $\alpha\in{\mathbb{R}}$ be such that $\beta(\alpha)\leq 1.$ Let $f\in
J_{\alpha,M,m}$ be as above and suppose the corresponding $\tilde{f}$ has no
zeros inside its domain, has at least one zero on the boundary, and admits an
analytic continuation to a strictly bigger domain. Then $f$ is cyclic in
$D_{\alpha}({\mathbb{B}_{n}})$ whenever $\alpha\leq\frac{2n-m+1}{2}$ and
$\mathrm{dist}^{2}_{D_{\alpha}({\mathbb{B}_{n}})}(1,f\cdot
P_{N}^{n})\asymp\phi_{\beta(\alpha)}(N+1)^{-1}.$
###### Proof.
It is an immediate consequence of the identification between
$D_{\beta}({\mathbb{D}})$ and $d_{\beta}$ and of the previous lemmas and
propositions. ∎
If we focus on polynomials, then the following is true.
###### Theorem 11.
Consider the polynomial $p(z)=1-m^{m/2}z_{1}\cdots z_{m},$ where $1\leq m\leq
n.$ Then $p$ is cyclic in $D_{\alpha}({\mathbb{B}_{n}})$ whenever
$\alpha\leq\frac{2n+1-m}{2}.$
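The threshold is exactly the condition $\beta(\alpha)\leq 1$ of Theorem 10:
$\beta(\alpha)=\alpha-n+\frac{m+1}{2}\leq 1\iff\alpha\leq\frac{2n+1-m}{2}.$
For $n=2$ this recovers the two-variable thresholds recalled in Section 1.3: $\alpha\leq 2$ for $m=1$ and $\alpha\leq 3/2$ for $m=2.$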
Note that Theorem 11 is not a characterization. We shall study the case
$\alpha>\frac{2n+1-m}{2}.$
## 4\. Cyclicity for model polynomials via radial dilation
The _radial dilation_ of a function
$f:{\mathbb{B}_{n}}\rightarrow{\mathbb{C}}$ is defined for $r\in(0,1)$ by
$f_{r}(z)=f(rz).$ To prove Theorem 11, it is enough to prove the following
lemma.
###### Lemma 12.
Consider the polynomial $p(z)=1-m^{m/2}z_{1}\cdots z_{m},$ where $1\leq m\leq
n.$ Then $||p/p_{r}||_{\alpha}$ remains bounded as $r\rightarrow 1^{-}$ whenever
$\alpha\leq\frac{2n+1-m}{2}.$
We follow the arguments of [14], [13]. Indeed, if Lemma 12 holds, then
$\phi_{r}\cdot p\rightarrow 1$ weakly, where $\phi_{r}:=1/p_{r}.$ This is a
consequence of a crucial property of Dirichlet-type spaces: if
$\\{f_{n}\\}\subset D_{\alpha}({\mathbb{B}_{n}}),$ then $f_{n}\rightarrow 0$
weakly if and only if $f_{n}\rightarrow 0$ pointwise and
$\sup_{n}\\{||f_{n}||_{\alpha}\\}<\infty.$ Since each $\phi_{r}$ extends
holomorphically past the closed unit ball, the $\phi_{r}$ are multipliers, and
hence $\phi_{r}\cdot p\in[p].$ Finally, $1$ is the weak limit of $\phi_{r}\cdot
p$ and $[p]$ is closed and convex or, equivalently, weakly closed. It is clear
that $1\in[p],$ and hence, $p$ is cyclic.
Moreover, it is enough to prove that $||p/p_{r}||_{\alpha}$ remains bounded, as
$r\rightarrow 1^{-},$ for $\alpha_{0}=\frac{2n+1-m}{2}.$ Then the case
$\alpha<\alpha_{0}$ follows since the inclusion
$D_{\alpha_{0}}({\mathbb{B}_{n}})\hookrightarrow D_{\alpha}({\mathbb{B}_{n}})$
is a compact linear map and weak convergence in
$D_{\alpha_{0}}({\mathbb{B}_{n}})$ gives weak convergence in
$D_{\alpha}({\mathbb{B}_{n}}).$
###### Proof of Lemma 12.
By Theorem 4 it is enough to show the following:
$I_{p}:=\int_{{\mathbb{B}_{n}}}(1-||z||^{2})\Big{|}\int_{\mathbb{S}_{n}}\frac{(1-\lambda\zeta_{1}\cdots\zeta_{m})\bar{\zeta}_{p}}{(1-r^{m}\lambda\zeta_{1}\cdots\zeta_{m})(1-\langle
z,\zeta\rangle)^{\beta}}d\sigma(\zeta)\Big{|}^{2}du(z)$
and
$I_{p,q}:=\int_{{\mathbb{B}_{n}}}\Big{|}\int_{\mathbb{S}_{n}}\frac{(\overline{z_{p}\zeta_{q}-z_{q}\zeta_{p}})(1-\lambda\zeta_{1}\cdots\zeta_{m})}{(1-r^{m}\lambda\zeta_{1}\cdots\zeta_{m})(1-\langle
z,\zeta\rangle)^{\beta}}d\sigma(\zeta)\Big{|}^{2}du(z)$
are finite, as $r\rightarrow 1^{-},$ where $\beta=n+t+1,$
$t=\frac{2n+1-m}{4},$ and $\lambda=m^{m/2}.$
Denote
$S_{p}:=\int_{\mathbb{S}_{n}}\frac{(1-\lambda\zeta_{1}\cdots\zeta_{m})\bar{\zeta}_{p}}{(1-r^{m}\lambda\zeta_{1}\cdots\zeta_{m})(1-\langle
z,\zeta\rangle)^{\beta}}d\sigma(\zeta),$
where the last integral is equal to
$\frac{1}{2\pi}\int_{\mathbb{S}_{n}}\int_{0}^{2\pi}\frac{(1-\lambda
e^{im\theta}\zeta_{1}\cdots\zeta_{m})e^{-i\theta}\bar{\zeta}_{p}}{(1-r^{m}\lambda
e^{im\theta}\zeta_{1}\cdots\zeta_{m})(1-e^{-i\theta}\langle
z,\zeta\rangle)^{\beta}}d\theta d\sigma(\zeta).$
Let $z,\zeta$ be fixed. Then
$\int_{0}^{2\pi}\frac{e^{-i\theta}}{(1-e^{-i\theta}\langle
z,\zeta\rangle)^{\beta}}d\theta=0.$
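Indeed, expanding the binomial series, every term carries a non-trivial character of $\theta$:
$\int_{0}^{2\pi}\frac{e^{-i\theta}}{(1-e^{-i\theta}\langle z,\zeta\rangle)^{\beta}}d\theta=\sum_{k=0}^{\infty}\frac{\Gamma(k+\beta)}{\Gamma(\beta)k!}\langle z,\zeta\rangle^{k}\int_{0}^{2\pi}e^{-i(k+1)\theta}d\theta=0.$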
Thus, replacing $p(e^{i\theta}\zeta)/p(re^{i\theta}\zeta)$ by
$p(e^{i\theta}\zeta)/p(re^{i\theta}\zeta)-1$ we obtain
$S_{p}=\frac{\lambda(r^{m}-1)}{2\pi}\int_{\mathbb{S}_{n}}\int_{0}^{2\pi}\frac{\bar{\zeta}_{p}\zeta_{1}\cdots\zeta_{m}e^{i(m-1)\theta}}{(1-r^{m}\lambda
e^{im\theta}\zeta_{1}\cdots\zeta_{m})(1-e^{-i\theta}\langle
z,\zeta\rangle)^{\beta}}d\theta d\sigma(\zeta).$
Next, expand the binomials
$\displaystyle\int_{0}^{2\pi}$
$\displaystyle\frac{e^{i(m-1)\theta}}{(1-r^{m}\lambda
e^{im\theta}\zeta_{1}\cdots\zeta_{m})(1-e^{-i\theta}\langle
z,\zeta\rangle)^{\beta}}d\theta$
$\displaystyle=\sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\frac{\Gamma(k+\beta)}{\Gamma(\beta)k!}(r^{m}\lambda\zeta_{1}\cdots\zeta_{m})^{l}\langle
z,\zeta\rangle^{k}\int_{0}^{2\pi}e^{i(m(l+1)-k-1)\theta}d\theta$
$\displaystyle=2\pi\sum_{k=0}^{\infty}\frac{\Gamma(m(k+1)-1+\beta)}{\Gamma(\beta)(m(k+1)-1)!}(r^{m}\lambda\zeta_{1}\cdots\zeta_{m})^{k}\langle
z,\zeta\rangle^{m(k+1)-1}$
$\displaystyle=2\pi\sum_{k=0}^{\infty}\sum_{|j|=m(k+1)-1}\frac{\Gamma(m(k+1)-1+\beta)}{\Gamma(\beta)j!}(r^{m}\lambda\zeta_{1}\cdots\zeta_{m})^{k}z^{j}\bar{\zeta}^{j}.$
Therefore,
$S_{p}=\lambda(r^{m}-1)\sum_{k=0}^{\infty}\sum_{|j|=m(k+1)-1}\frac{\Gamma(m(k+1)-1+\beta)}{\Gamma(\beta)j!}(r^{m}\lambda)^{k}z^{j}c(k),$
where
$c(k)=\int_{\mathbb{S}_{n}}\zeta^{\alpha(k)}\bar{\zeta}^{b(k)}d\sigma(\zeta),$
$\alpha(k)=(\underbrace{k+1,\dots,k+1}_{m\ \text{components}},0,\dots,0)$ and
$b(k)=(j_{1},...,j_{p-1},j_{p}+1,j_{p+1},...,j_{n}).$ Here, $1\leq p\leq m.$
Since the holomorphic monomials are orthogonal to each other in
$L^{2}(\sigma)$ we get that
$|S_{p}|\asymp(1-r^{m})\Big{|}z^{\prime}_{p}\sum_{k=0}^{\infty}(k+1)^{\beta-n}(r^{m}\lambda
z_{1}\cdots z_{m})^{k}\Big{|},$
where $z^{\prime}_{p}=z_{1}\cdots z_{p-1}z_{p+1}\cdots z_{m}.$ Hence we obtain
$I_{p}\asymp(1-r^{m})^{2}\sum_{k=0}^{\infty}(k+1)^{2(\beta-n)}(r^{m}\lambda)^{2k}\int_{{\mathbb{B}_{n}}}(1-||z||^{2})|z^{\prime}_{p}|^{2}|z_{1}\cdots
z_{m}|^{2k}du(z),$
where we have again used the orthogonality of the holomorphic monomials in
$L^{2}(\sigma).$ To handle the integral above we use polar coordinates:
$\displaystyle\int_{{\mathbb{B}_{n}}}$
$\displaystyle(1-||z||^{2})|z^{\prime}_{p}|^{2}|z_{1}\cdots z_{m}|^{2k}du(z)$
$\displaystyle\asymp\int_{0}^{1}\int_{\mathbb{S}_{n}}\epsilon^{2n-1}(1-\epsilon^{2})\epsilon^{2km+2m-2}|\zeta_{p}^{\prime}|^{2}|\zeta_{1}\cdots\zeta_{m}|^{2k}d\sigma(\zeta)d\epsilon$
$\displaystyle\asymp\frac{[(k+1)!]^{m-1}k!}{(n+m(k+1)-2)!}\cdot\frac{1}{(k+1)^{2}}.$
If we recall that $\beta=n+t+1,$ $t=\frac{2n+1-m}{4}$ and
$\lambda^{2k}=m^{mk},$ then applying the Stirling formula repeatedly
we see that
$I_{p}\asymp(1-r^{m})^{2}\sum_{k=0}^{\infty}(k+1)r^{2mk}.$
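Summing the series makes the boundedness explicit, a step left implicit in the text:
$\sum_{k=0}^{\infty}(k+1)r^{2mk}=\frac{1}{(1-r^{2m})^{2}},\quad\text{so}\quad I_{p}\asymp\Big{(}\frac{1-r^{m}}{1-r^{2m}}\Big{)}^{2}=\frac{1}{(1+r^{m})^{2}},$
which stays bounded as $r\rightarrow 1^{-};$ the same summation will close the estimate for $I_{p,q}$ below.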
This proves the assertion made about $I_{p}.$
It remains to estimate the following term:
$I_{p,q}=\int_{{\mathbb{B}_{n}}}\Big{|}\int_{\mathbb{S}_{n}}\frac{(\overline{z_{p}\zeta_{q}-z_{q}\zeta_{p}})(1-\lambda\zeta_{1}\cdots\zeta_{m})}{(1-r^{m}\lambda\zeta_{1}\cdots\zeta_{m})(1-\langle
z,\zeta\rangle)^{\beta}}d\sigma(\zeta)\Big{|}^{2}du(z).$
We shall show that $I_{p,q}\asymp I_{p}.$ Denote again the inner integral by
$S_{p,q},$ which it is convenient to expand as
$S_{p,q}=\bar{z}_{p}S_{q}-\bar{z}_{q}S_{p}.$ Recall that
$z_{p}^{\prime}=z_{1}\cdots z_{p-1}z_{p+1}\cdots z_{m}.$ Similar calculations
to the one above lead to
$|S_{p,q}|\asymp(1-r^{m})|\bar{z}_{p}z_{q}^{\prime}-\bar{z}_{q}z_{p}^{\prime}|\Big{|}\sum_{k=0}^{\infty}(k+1)^{\beta-n}(r^{m}\lambda
z_{1}\cdots z_{m})^{k}\Big{|}.$
Moreover, the orthogonality of the holomorphic monomials in $L^{2}(\sigma)$
gives the following estimation:
$I_{p,q}\asymp(1-r^{m})^{2}\sum_{k=0}^{\infty}(k+1)^{2\beta-2n}(r^{m}\lambda)^{2k}\int_{{\mathbb{B}_{n}}}|\bar{z}_{p}z_{q}^{\prime}-\bar{z}_{q}z_{p}^{\prime}|^{2}|z_{1}\cdots
z_{m}|^{2k}du(z).$
It is easy to see that
$|\bar{z}_{p}z_{q}^{\prime}-\bar{z}_{q}z_{p}^{\prime}|^{2}=|z_{p}|^{2}|z^{\prime}_{q}|^{2}+|z_{q}|^{2}|z^{\prime}_{p}|^{2}-2|z_{1}\cdots
z_{m}|^{2}.$ Let us estimate the integral
$\int_{{\mathbb{B}_{n}}}(|z_{p}|^{2}|z_{q}^{\prime}|^{2}-|z_{1}\cdots
z_{m}|^{2})|z_{1}\cdots z_{m}|^{2k}du(z).$
Passing through polar coordinates we get, for $p\neq q,$ that
$\displaystyle\int_{{\mathbb{B}_{n}}}|z_{p}|^{2}|z_{q}^{\prime}|^{2}$
$\displaystyle|z_{1}\cdots z_{m}|^{2k}du(z)$ $\displaystyle\asymp
2n(n-1)!\frac{[(k+1)!]^{m-1}k!}{(mk+n+m-1)!}\frac{k+2}{2km+2n+2m},$
and
$\displaystyle\int_{{\mathbb{B}_{n}}}|z_{1}\cdots z_{m}|^{2(k+1)}$
$\displaystyle du(z)$
$\displaystyle=2n(n-1)!\frac{[(k+1)!]^{m-1}k!}{(mk+n+m-1)!}\frac{k+1}{2km+2n+2m}.$
Hence we obtain
$\displaystyle\int_{{\mathbb{B}_{n}}}(|z_{p}|^{2}|z_{q}^{\prime}|^{2}-|z_{1}\cdots
z_{m}|^{2})$ $\displaystyle|z_{1}\cdots z_{m}|^{2k}du(z)$
$\displaystyle\asymp\frac{[(k+1)!]^{m-1}k!}{(mk+n+m-2)!(k+1)^{2}}.$
Again, applying the Stirling formula to the estimates above, we obtain
$I_{p,q}\asymp(1-r^{m})^{2}\sum_{k=0}^{\infty}(k+1)r^{2mk}.$
This proves the assertion made about $I_{p,q}.$ ∎
## 5\. Sufficient conditions for non-cyclicity via Cauchy transforms and
$\alpha$-capacities
We consider the Cauchy transform of a complex Borel measure $\mu$ on the unit
sphere, defined by
$C_{[\mu]}(z)=\int_{\mathbb{S}_{n}}\frac{1}{(1-\langle
z,\bar{\zeta}\rangle)^{n}}d\mu(\zeta),\quad z\in{\mathbb{B}_{n}}.$
Note that this definition differs from the classical one.
Let $f\in D_{\alpha}({\mathbb{B}_{n}})$ and put a measure $\mu$ on
$\mathcal{Z}(f^{*})$: the zero set in the sphere of the radial limits of $f$.
The results in [21] about Cauchy transforms and non-cyclicity are valid in our
settings. We deduce that $[f]\neq D_{\alpha}({\mathbb{B}_{n}})$, and hence
non-cyclicity, whenever $C_{[\mu]}\in D_{-\alpha}({\mathbb{B}_{n}}).$ Thus, it
is important to compute the Dirichlet-type norm of the Cauchy transform.
Let $\mu$ be a Borel measure on $\mathbb{S}_{n}$ and set
$\mu^{*}(j)=\int_{\mathbb{S}_{n}}\zeta^{j}d\mu(\zeta),$
$\bar{\mu}^{*}(j)=\int_{\mathbb{S}_{n}}\bar{\zeta}^{j}d\mu(\zeta).$ We have
the following.
###### Lemma 13.
Let $\mu$ be a Borel measure on $\mathbb{S}_{n}.$ Then
$||C_{[\mu]}||_{-\alpha}^{2}\asymp\sum_{k=0}^{\infty}\sum_{|j|=k}\frac{(k+1)^{n-1-\alpha}k!}{j!}|\bar{\mu}^{*}(j)|^{2}.$
###### Proof.
Our Cauchy integral of $\mu$ on ${\mathbb{B}_{n}}$ has the following expansion
$C_{[\mu]}(z)=\sum_{k=0}^{\infty}\sum_{|j|=k}\frac{\Gamma(k+n)}{\Gamma(n)j!}\bar{\mu}^{*}(j)z^{j}.$
Therefore, one can compute the norm of $C_{[\mu]}$ in the space
$D_{-\alpha}({\mathbb{B}_{n}}).$ The assertion follows. ∎
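To sketch the computation behind Lemma 13: inserting the coefficients of $C_{[\mu]}$ into the norm (3) with parameter $-\alpha,$ and using $\Gamma(k+n)=(n-1+k)!\asymp k!(k+1)^{n-1},$ we get
$||C_{[\mu]}||^{2}_{-\alpha}=\sum_{k=0}^{\infty}(n+k)^{-\alpha}\sum_{|j|=k}\frac{(n-1)!j!}{(n-1+k)!}\Big{(}\frac{\Gamma(k+n)}{\Gamma(n)j!}\Big{)}^{2}|\bar{\mu}^{*}(j)|^{2}\asymp\sum_{k=0}^{\infty}\sum_{|j|=k}\frac{(k+1)^{n-1-\alpha}k!}{j!}|\bar{\mu}^{*}(j)|^{2}.$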
The following lemma is crucial in the sequel. It is probably known, but we
were not able to locate it in the literature, and hence we include its proof.
###### Lemma 14.
Let $j_{1},...,j_{n},k$ be non-negative integers satisfying
$j_{1}+...+j_{n}=nk.$ Then
$j_{1}!\cdots j_{n}!\geq(k!)^{n}.$
###### Proof.
The $\Gamma$-function is logarithmically convex, and hence, we may apply the
Jensen inequality to it:
$\log\Gamma\Big{(}\frac{x_{1}}{n}+...+\frac{x_{n}}{n}\Big{)}\leq\frac{\log\Gamma(x_{1})}{n}+...+\frac{\log\Gamma(x_{n})}{n}.$
Set $x_{i}:=j_{i}+1,$ $i=1,...,n.$ Since $j_{1}+...+j_{n}=nk,$ the left-hand
side becomes $\log\Gamma(k+1)=\log k!,$ and exponentiating gives
$(k!)^{n}\leq j_{1}!\cdots j_{n}!.$ ∎
We may identify non-cyclicity for model polynomials via Cauchy transforms.
###### Lemma 15.
Consider the polynomial $p(z)=1-m^{m/2}z_{1}\cdots z_{m},$ where $1\leq m\leq
n.$ Then $p$ is not cyclic in $D_{\alpha}({\mathbb{B}_{n}})$ whenever
$\alpha>\frac{2n+1-m}{2}.$
###### Proof.
Recall that the model polynomials vanish in the closed unit ball along
analytic sets of the form:
$\mathcal{Z}(p)\cap\mathbb{S}_{n}=\\{1/\sqrt{m}(e^{i\theta_{1}},..,e^{i\theta_{m-1}},e^{-i(\theta_{1}+...+\theta_{m-1})},0,..,0):\theta_{i}\in\mathbb{R}\\}.$
It is easy to see that for a proper measure $\mu$, $\mu^{*}(j)$ is non-zero
when $mj_{m}=k$ and $\mu^{*}(j)\asymp m^{-k/2}.$ By Stirling’s formula and
Lemma 14 we get that
$\displaystyle||C_{[\mu]}||_{-\alpha}^{2}$ $\displaystyle\leq
C\sum_{k=0}^{\infty}\frac{(mk+1)^{n-1-\alpha}(mk)!}{(k!)^{m}m^{mk}}\asymp\sum_{k=0}^{\infty}(k+1)^{1/2(2n-m-1)-\alpha}.$
Thus, $p$ is not cyclic in $D_{\alpha}({\mathbb{B}_{n}})$ for
$\alpha>\frac{2n+1-m}{2}.$ ∎
We consider Riesz $\alpha$-capacity for a fixed parameter $\alpha\in(0,n]$
with respect to the _anisotropic distance_ in $\mathbb{S}_{n}$ given by
$d(\zeta,\eta)=|1-\langle\zeta,\eta\rangle|^{1/2}$
and the non-negative _kernel_ $K_{\alpha}:(0,\infty)\rightarrow[0,\infty)$
given by
$K_{\alpha}(t)=\begin{dcases}t^{\alpha-n},&\alpha\in(0,n)\\\
\log(e/t),&\alpha=n\end{dcases}.$
Note that we may extend the definition of $K_{\alpha}$ to $0$ by defining
$K_{\alpha}(0):=\lim_{t\rightarrow 0^{+}}K_{\alpha}(t).$
Let $\mu$ be any Borel probability measure supported on some Borel set
$E\subset\mathbb{S}_{n}.$ Then the Riesz $\alpha$-energy of $\mu$ is given by
$I_{\alpha}[\mu]=\iint_{\mathbb{S}_{n}}K_{\alpha}(|1-\langle\zeta,\eta\rangle|)d\mu(\zeta)d\mu(\eta)$
and the Riesz $\alpha$-capacity of $E$ by
$\mathrm{cap}_{\alpha}(E)=\inf\\{I_{\alpha}[\mu]:\mu\in\mathcal{P}(E)\\}^{-1},$
where $\mathcal{P}(E)$ is the set of all Borel probability measures supported
on $E.$ Note that if $\text{cap}_{\alpha}(E)>0,$ then there exist at least one
probability measure supported on $E$ having finite Riesz $\alpha$-energy.
Moreover, any $f\in D_{\alpha}({\mathbb{B}_{n}})$ has finite radial limits
$f^{*}$ on $\mathbb{S}_{n},$ except possibly on a set $E$ having
$\text{cap}_{\alpha}(E)=0.$ Theory regarding the above standard
construction in potential theory can be found in [1], [9], [12], [17].
The relation between non-cyclicity of a function and the Riesz
$\alpha$-capacity of the zeros of its radial limits follows.
###### Theorem 16.
Fix $\alpha\in(0,n]$ and let $f\in D_{\alpha}({\mathbb{B}_{n}}).$ If
$\mathrm{cap}_{\alpha}(\mathcal{Z}(f^{*}))>0,$ then $f$ is not cyclic in
$D_{\alpha}({\mathbb{B}_{n}}).$
###### Proof.
Let $\mu$ be a probability measure supported in $\mathcal{Z}(f^{*}),$ with
finite Riesz $n$-energy. If $r\in(0,1),$ then
$\displaystyle\log\frac{e}{|1-r\langle\zeta,\eta\rangle|}$
$\displaystyle=1+\textrm{Re}\Big{(}\log\frac{1}{1-r\langle\zeta,\eta\rangle}\Big{)}$
$\displaystyle=1+\textrm{Re}\sum_{k=1}^{\infty}\sum_{|j|=k}\frac{r^{k}k!}{kj!}\zeta^{j}\overline{\eta}^{j}.$
Note that $\mu$ is a probability measure and hence
$\displaystyle\iint_{\mathbb{S}_{n}}\log\frac{e}{|1-r\langle\zeta,\eta\rangle|}d\mu(\zeta)d\mu(\eta)=1+\sum_{k=1}^{\infty}\sum_{|j|=k}\frac{r^{k}k!}{kj!}|\mu^{*}(j)|^{2}.$
Since $|1-w|/|1-rw|\leq 2$ for $r\in(0,1)$ and $w\in\overline{{\mathbb{D}}},$
the dominated convergence theorem and Lemma 13 give
$||C_{[\mu]}||^{2}_{-n}\asymp\sum_{k=1}^{\infty}\sum_{|j|=k}\frac{k!}{kj!}|\mu^{*}(j)|^{2}<\infty.$
The assertion follows.
We continue by taking a probability measure $\mu,$ supported in
$\mathcal{Z}(f^{*}),$ with finite Riesz $\alpha$-energy, where
$\alpha\in(0,n).$ If $r\in(0,1),$ then
$\displaystyle\frac{1}{(1-r\langle\zeta,\eta\rangle)^{n-\alpha}}$
$\displaystyle=\sum_{k=0}^{\infty}\sum_{|j|=k}\frac{\Gamma(k+n-\alpha)k!r^{k}}{k!\Gamma(n-\alpha)j!}\zeta^{j}\overline{\eta}^{j}.$
Arguments similar to those above show that
$\displaystyle I_{\alpha}[\mu]$
$\displaystyle\geq\Big{|}\iint_{\mathbb{S}_{n}}\textrm{Re}\Big{(}\frac{1}{(1-r\langle\zeta,\eta\rangle)^{n-\alpha}}\Big{)}d\mu(\zeta)d\mu(\eta)\Big{|}$
$\displaystyle=\Big{|}\sum_{k=0}^{\infty}\sum_{|j|=k}\frac{\Gamma(k+n-\alpha)k!r^{k}}{k!\Gamma(n-\alpha)j!}\iint_{\mathbb{S}_{n}}\zeta^{j}\overline{\eta}^{j}d\mu(\zeta)d\mu(\eta)\Big{|}$
$\displaystyle\asymp\sum_{k=0}^{\infty}\sum_{|j|=k}\frac{(k+1)^{n-1-\alpha}k!}{j!}r^{k}|\mu^{*}(j)|^{2}.$
Again, letting $r\rightarrow 1^{-}$ and applying Lemma 13, we obtain that
$C_{[\mu]}\in D_{-\alpha}({\mathbb{B}_{n}}).$ The assertion follows. ∎
###### Remark 17.
According to [14] one can expect that the cyclicity problem of polynomials in
the unit ball of ${\mathbb{C}}^{n}$ depends on the real dimension of their
zero set restricted on the unit sphere:
$\text{dim}_{{\mathbb{R}}}(\mathcal{Z}(p)\cap\mathbb{S}_{n}).$
Let us point out the nature of the boundary zeros of a polynomial non-
vanishing in the ball; see [14] for the two-dimensional case, where the Curve
Selection Lemma of [10] was used.
Let $p\in{\mathbb{C}}[z_{1},...,z_{n}]$ be a polynomial non-vanishing in the
ball. Looking at $\mathcal{Z}(p)\cap\mathbb{S}_{n}$ as at a semi-algebraic
set, we conclude that it is the disjoint union of a finite number of Nash
manifolds $M_{i},$ each Nash diffeomorphic to an open hypercube
$(0,1)^{\textrm{dim}(M_{i})}.$ Note that Nash diffeomorphisms over real
closed fields satisfy some additional properties (see [7], Proposition
2.9.10).
One can then expect that the characterization of cyclicity and the nature of
the boundary zeros of the model polynomials, as well as the unitary
invariance of the Dirichlet norm and the sufficient capacity condition, will
be crucial in the characterization of cyclic polynomials in arbitrary
dimension.
Acknowledgments. I would like to thank Ł. Kosiński for the helpful
conversations during the preparation of the present work. I would like to
thank also the anonymous referee for numerous remarks that substantially
improved the shape of the paper.
## References
* [1] P. Ahern and W. Cohn, Exceptional sets for Hardy Sobolev functions, $p>1,$ Indiana Univ. Math. J. 38 (1989), 417–453.
* [2] F. Beatrous and J. Burbea, On multipliers for Hardy-Sobolev spaces, Proc. Amer. Math. Soc. 136 (2008), 2125–2133.
* [3] C. Bénéteau, A.A. Condori, C. Liaw, D. Seco, and A.A. Sola, Cyclicity in Dirichlet-type spaces and extremal polynomials, J. Anal. Math. 126 (2015), 259–286.
* [4] C. Bénéteau, A.A. Condori, C. Liaw, D. Seco, and A.A. Sola, Cyclicity in Dirichlet-type spaces and extremal polynomials II: functions on the bidisk, Pacific J. Math. 276 (2015), 35–58.
* [5] C. Bénéteau, G. Knese, Ł. Kosiński, C. Liaw, D. Seco, and A. Sola, Cyclic polynomials in two variables, Trans. Amer. Math. Soc. 368 (2016), 8737–8754.
* [6] L. Bergqvist, A note on cyclic polynomials in polydiscs, Anal. Math. Phys. 8 (2018), 197–211.
* [7] J. Bochnak, M. Coste, M-F. Roy, Real Algebraic Geometry, Ergebnisse der Mathematik und ihrer Grenzgebiete, Folge 3, 36, Springer-Verlag Berlin, Heidelberg, 1998.
* [8] L. Brown and A. Shields, Cyclic vectors in the Dirichlet space, Trans. Amer. Math. Soc. 285 (1984), 269–304.
* [9] W.S. Cohn and I.E. Verbitsky, Nonlinear potential theory on the ball, with applications to exceptional and boundary interpolation sets, Michigan Math. J. 42 (1995), 79–97.
* [10] Z. Denkowska and M.P. Denkowski, A long and winding road to definable sets, Journal of Singularities, 13 (2015), 57–86.
* [11] L. Hörmander, An introduction to complex analysis in several variables, 3rd ed., North-Holland Mathematical library 7, Elsevier Science Publishers B.V., Amsterdam, 1990.
* [12] O. El-Fallah, K. Kellay, J. Mashreghi, and T. Ransford, A primer on the Dirichlet space, Cambridge Tracts in Mathematics 203, Cambridge University Press, 2014.
* [13] G. Knese, Ł. Kosiński, T.J. Ransford, and A.A. Sola, Cyclic polynomials in anisotropic Dirichlet spaces, J. Anal. Math. 138 (2019), 23–47.
* [14] Ł. Kosiński, D. Vavitsas, Cyclic polynomials in Dirichlet-type spaces in the unit ball of ${\mathbb{C}}^{2}$, (2022), to appear in Constructive Approximation Journal.
* [15] S. Li, Some new characterizations of Dirichlet type spaces on the unit ball of ${\mathbb{C}}^{n},$ J. Math. Anal. Appl. 324 (2006), 1073–1083.
* [16] M. Michalska, On Dirichlet type spaces in the unit ball of ${\mathbb{C}}^{n},$ Ann. Univ. Mariae Curie-Skłodowska Sect. A 65 (2011), 75–86.
* [17] D. Pestana and J.M. Rodríguez, Capacity distortion by inner functions in the unit ball of ${\mathbb{C}}^{n},$ Michigan Math. J. 44 (1997), 125–137.
* [18] W. Rudin, Function theory in the unit ball of ${\mathbb{C}}^{n},$ Grundlehren der mathematischen Wissenschaften 241, Springer, New York, 1980.
* [19] M. Sargent, A.A. Sola, Optimal approximants and orthogonal polynomials in several variables, Canad. J. Math. 74 (2022), 428–456.
* [20] M. Sargent, A.A. Sola, Optimal approximants and orthogonal polynomials in several variables II: Families of polynomials in the unit ball, Proc. Amer. Math. Soc. 149 (2021), 5321–5330.
* [21] A. Sola, A note on Dirichlet-type spaces and cyclic vectors in the unit ball of ${\mathbb{C}}^{2}$, Arch. Math. 104 (2015), 247–257.
* [22] K. Zhu, Spaces of holomorphic functions in the unit ball, Graduate Texts in Mathematics 226, Springer, New York, 2005.
# Recent Progress in Low Energy Neutrino Scattering Physics and Its
Implications for the Standard and Beyond the Standard Model Physics
V. Pandey<EMAIL_ADDRESS>Fermi National Accelerator Laboratory, Batavia,
Illinois 60510, USA
###### Abstract
Neutrinos continue to provide a testing ground for the structure of the
standard model of particle physics as well as hints towards the physics beyond
the standard model. Neutrinos of energies spanning over several orders of
magnitude, originating in many terrestrial and astrophysical processes, have
been detected via various decay and interaction mechanisms. At MeV scales,
there has been one elusive process, until a few years ago, known as coherent
elastic neutrino-nucleus scattering (CEvNS) that was theoretically predicted
over five decades ago but was never observed experimentally. The recent
experimental observation of the CEvNS process by the COHERENT collaboration at
a stopped pion neutrino source has inspired physicists across many subfields.
This new way of detecting neutrinos has vital implications for nuclear
physics, high-energy physics, astrophysics, and beyond. CEvNS, being a low-
energy process, provides a natural window to study light, weakly-coupled, new
physics in the neutrino sector. Leveraging the orders-of-magnitude higher
CEvNS cross section, new physics can be searched for with relatively small detectors.
In this review, we intend to provide the current status of low energy neutrino
scattering physics and its implications for the standard and beyond the
standard model physics. We discuss low energy sources of neutrinos with a
focus on neutrinos from the stopped pions. Stopped pion sources cover energies
in the tens of MeV and are almost optimal for studying CEvNS. Several
worldwide experimental programs have been or are being set up to detect CEvNS
and new physics signals in the near future with complementary detection
technologies and physics goals. We discuss the general formalism of
calculating the tree-level CEvNS cross section and the estimated theoretical
uncertainties on the CEvNS cross section stemming from different sources. We
also discuss the inelastic scattering of tens of MeV neutrinos that have
implications for supernova detection in future neutrino experiments. The
stopped-pion facilities are also a near-ideal tens of MeV neutrino source to
study inelastic neutrino-nucleus cross sections. We discuss how the CEvNS
experiments can be used as a testing ground for the Standard Model (SM) weak
physics as well as in searching for the Beyond the Standard Model (BSM)
physics signals. Any deviation from the SM predicted event rate either with a
change in the total event rate or with a change in the shape of the recoil
spectrum, could indicate new contributions to the interaction cross-section.
The SM implications include the study of weak nuclear form factor and weak
mixing angle. The BSM studies include non-standard interactions, neutrino
electromagnetic properties, and sterile neutrino searches. Stopped pion
facilities are also a copious source of neutral and charged mesons that allow
the study of several dark sector physics scenarios such as vector portal
models, leptophobic dark matter, and axion-like particles.
††preprint: FERMILAB-PUB-23-245-ND
###### Contents
1. I Introduction
2. II Low Energy Neutrino Sources
1. II.1 Stopped Pion Source
3. III Coherent Elastic Neutrino Scattering off Nuclei
1. III.1 Tree-level Cross Section
2. III.2 Uncertainty on the Cross Section
3. III.3 Input from Parity Violating Electron Scattering
4. IV Inelastic Neutrino Scattering off Nuclei
5. V Experimental Landscape
6. VI Implications for the Standard Model Physics
1. VI.1 Weak Nuclear Form Factor
2. VI.2 Weak Mixing Angle
7. VII Implications for Beyond the Standard Model Physics
1. VII.1 Non-standard Interactions of Neutrinos
2. VII.2 Neutrino Electromagnetic Properties
3. VII.3 Sterile Neutrino Oscillations
4. VII.4 Accelerator Produced Light Dark Matter
8. VIII Summary
## I Introduction
Neutrinos, often referred to as elusive or ghostly elementary particles, are
fascinating. From their postulation by Wolfgang Pauli in 1930 as a mere
theoretical idea of an undetectable particle, to their present status as the
most abundant matter particles in the Universe, neutrinos have played a
prominent role in our understanding of the nature of the Universe. In recent years,
neutrinos have not only provided a testing ground for the structure of the
Standard Model (SM) of particle physics but also continue to provide us hints
towards the physics beyond the SM. One of the most prominent of these is the
discovery of neutrino mixing and oscillations, which implies that neutrinos
can no longer be considered massless particles as described in the Standard
Model. The SM provides the framework describing how neutrinos interact with
leptons and quarks through weak interactions, but it does not answer
fundamental questions about neutrinos. What is the origin of neutrino mass,
and why are neutrino masses orders of magnitude smaller than those of the
other SM particles? Are neutrinos Dirac or Majorana particles? Are there only
three neutrino flavors, mirroring the charged leptons, or are there more, as
some experimental anomalies suggest? Neutrinos continue to provide
both a testing ground for the SM and direct evidence for physics beyond the SM
Huber:2022lpm ; Balantekin:2022jrq ; deGouvea:2022gut ; Acharya:2023swl .
Neutrinos originate via various mechanisms in many terrestrial and
astrophysical processes, covering energies from the sub-eV scale up to the
EeV scale. Neutrinos have been detected from a variety of astrophysical
(e.g., solar, supernova) and terrestrial (e.g., reactor and accelerator)
sources Davis:1968cp ; Kamiokande-II:1987idp ; Bionta:1987qt ; IceCube:2018cha
using a variety of interaction processes ranging from inverse beta decay to
scattering off quarks, nucleons, and nuclei. At MeV scale energies, which are
the focus of this article, neutrinos have been detected via several distinct
interaction channels including neutrino-electron elastic scattering, neutral
and charged current inelastic interactions on nucleons and nuclei
Formaggio:2012cpf , and the inverse beta decay process. Among them, one
process remained elusive until a few years ago: Coherent Elastic
Neutrino-Nucleus Scattering (CEvNS), first postulated nearly five decades ago.
CEvNS was suggested soon after the experimental discovery of the weak neutral
current in neutrino interactions Stodolsky:1966zz ; Freedman:1973yd ;
Kopeliovich:1974mv . In his 1974 article, Freedman suggested that “if there is
a weak neutral current, then the elastic process $\nu+A\rightarrow\nu+A$
should have a sharp coherent forward peak just as the $e+A\rightarrow e+A$
does” Freedman:1973yd . Freedman went ahead and declared that the experimental
detection of CEvNS would be an “act of hubris” due to the associated “grave
experimental difficulties”. This difficulty persists despite the fact that
the CEvNS cross section is comparatively large, thanks to the $\sim N^{2}$
enhancement it receives: the only experimental signature of the coherent
elastic process is the small kinetic energy $T$ of the recoiling nucleus.
The maximum recoil energy is limited by the kinematics of the elastic
scattering
$T_{\rm max}=\frac{E_{\nu}}{1+M_{A}/(2E_{\nu})}$ (1)
where $E_{\nu}$ is the incoming neutrino energy and $M_{A}$ is the mass of the
target nuclei. For tens of MeV incident neutrino energies, where CEvNS cross
section is supposed to dominate, and for medium-sized nuclei, the recoil
energy amounts to several tens of keV, making it experimentally challenging to
detect.
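To make the kinematics concrete, the following minimal Python sketch evaluates Eq. (1) for a few representative targets; the nuclear masses are approximated as $A$ times the atomic mass unit, and the target list is purely illustrative.

```python
AMU_MEV = 931.494  # atomic mass unit in MeV (approximate nuclear mass = A * amu)

def t_max(e_nu, a):
    """Maximum recoil kinetic energy (MeV), Eq. (1), for mass number a."""
    m_a = a * AMU_MEV
    return e_nu / (1.0 + m_a / (2.0 * e_nu))

for name, a in [("Na", 23), ("Ar", 40), ("Ge", 73), ("I", 127)]:
    # 30 MeV is a typical stopped-pion neutrino energy
    print(f"{name:>2} (A={a:3d}): T_max = {t_max(30.0, a) * 1e3:5.1f} keV")
```

For 40Ar at 30 MeV this gives a maximum recoil of roughly 48 keV, consistent with the "several tens of keV" quoted above.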
More than four decades after Freedman's prediction, the CEvNS signal was
finally detected by the COHERENT collaboration in 2017 COHERENT:2017ipa .
Reaching the necessary keV-scale detection threshold benefited from recent
developments in detector technologies, primarily driven by dark sector
searches that also rely on tiny nuclear recoils. Typically, the recoil energy
is collected in the form
of scintillation photons or ionized charge, depending on the detector
technology. The COHERENT collaboration announced the detection of the first
CEvNS signal using a stopped–pion neutrino source at the Spallation Neutron
Source (SNS) at Oak Ridge National Laboratory with a CsI[Na] scintillating
crystal detector, establishing the signal at the $6.7\sigma$ confidence level
COHERENT:2017ipa ; COHERENT:2018imc . In the following years, the COHERENT
collaboration presented another CEvNS measurement with a single-phase
liquid argon detector COHERENT:2019iyj ; COHERENT:2020iec , and a follow-up
CsI[Na] COHERENT:2021xmm measurement with a larger exposure.
This new way of detecting neutrinos has wide implications for broader
communities spanning nuclear physics, particle physics, astrophysics, and
beyond. Leveraging the orders-of-magnitude higher CEvNS cross section, one could
do groundbreaking searches with relatively small detectors as opposed to the
typically large detector size needed for most neutrino experiments. CEvNS,
being a low-energy process, provides a natural window to study light, weakly-
coupled, new physics in the neutrino sector Barranco:2005yy ; Scholberg:2005qs
; Barranco:2007tz ; Dutta:2015nlo ; Lindner:2016wff ; Coloma:2017ncl ;
Farzan:2017xzy ; Billard:2018jnl ; AristizabalSierra:2018eqm ; Brdar:2018qqj ;
Abdullah:2018ykz ; AristizabalSierra:2019zmy ; Miranda:2019skf ; Bell:2019egg
; AristizabalSierra:2019ufd ; Cadeddu:2019eta ; Coloma:2019mbs ; Canas:2019fjw
; Dutta:2019eml ; Denton:2020hop ; Skiba:2020msb ; Cadeddu:2020nbr ;
Canas:2018rng .
The remainder of this article is organized as follows. In Sec. II, we discuss
low-energy sources of neutrinos with a focus on neutrinos from the stopped
pion sources. In Sec. III, we lay out the general formalism of calculating the
tree-level CEvNS cross section and discuss the estimated theoretical
uncertainties on the CEvNS cross section stemming from different sources. Tens
of MeV neutrinos also scatter via inelastic neutrino-nucleus scattering, we
discuss those in Sec. IV. These processes have implications for supernova
detection in future neutrino experiments. The observable final-state particles
of these inelastic scattering have typical energies of the same order as the
incident neutrino energies. CEvNS experiments are, in principle, sensitive to
inelastic processes as well if they have the dynamic range. In Sec. V, we
briefly review current and proposed CEvNS experimental facilities. In Sec. VI,
we discuss how the CEvNS experiments can be used as a testing ground for the
SM weak physics. We continue to discuss the implications of CEvNS physics for
the global efforts of the BSM physics searches in Sec. VII. CEvNS provides a
natural window to study light, weakly-coupled, beyond the standard model
physics in the neutrino sector. Finally, we summarize in Sec. VIII.
## II Low Energy Neutrino Sources
The coherent elastic process dominates only at tens of MeV neutrino energies.
Tens of MeV neutrinos come from several sources including nuclear reactors,
accelerator-produced decay-at-rest sources, and astrophysical sources such as
supernova and solar neutrinos. Neutrinos from reactors have been detected
using the inverse beta decay reaction,
$\bar{\nu}_{e}+p\rightarrow e^{+}+n$, by observing both the outgoing positron
and coincident neutron. Nuclear reactors are copious sources of electron
antineutrinos and have therefore long been the sources of choice for CEvNS
searches. However, since typical reactor neutrino energies are of the order
of a few MeV, the scattering signal is a sub-keV nuclear recoil that is hard
to detect even for many sensitive detection technologies, even though the
coherence condition for the recoil is largely preserved. Furthermore, the
total CEvNS cross section grows with incident neutrino energy; higher
energies are therefore beneficial up to the point at which CEvNS ceases to
strongly dominate over inelastic scattering. Neutrinos from stopped-pion
sources cover energies at the tens-of-MeV scale and are almost optimal for
studying CEvNS, hitting a sweet spot where the CEvNS rate is high and the
recoil energies are detectable above threshold. So far, CEvNS has been
observed only at decay-at-rest sources; most of the discussion in this paper
therefore focuses on neutrinos from a pion decay-at-rest source. We comment
on other low-energy neutrino sources where appropriate.
### II.1 Stopped Pion Source
Figure 1: Standard neutrino energy distribution of pion decay at rest
neutrinos, with a 29.8 MeV monoenergetic $\nu_{\mu}$, while the energies of
the $\nu_{e}$s and $\bar{\nu}_{\mu}$s range up to $m_{\mu}/2$.
An intense beam of protons, accelerated to hundreds of MeV to GeV energies,
is directed onto a target, producing a copious number of secondary hadrons.
Protons with energies $>$ 300 MeV produce large numbers of pions; these pions
lose energy in dense material, stop, and then decay at rest. Negative pions
are often captured by nuclei. To produce clean stopped-pion neutrinos: (a)
the optimum proton energies are of the order of 1 GeV or less Alonso:2010fs ,
which suppresses the decay-in-flight component heavily used in typical
accelerator-based short- and long-baseline neutrino facilities; (b) the
target should be dense, to allow the pions to stop and decay at rest.
The dominant neutrino production from stopped pions is from the weak-
interaction two-body prompt decay
$\pi^{+}\rightarrow\mu^{+}+\nu_{\mu}~{}~{}(\text{decay time:}~{}\tau\sim
26~\text{ns})$ (2)
followed by a three-body delayed decay of muons
$\mu^{+}\rightarrow e^{+}+\nu_{e}+\bar{\nu}_{\mu}~{}~{}(\text{decay
time:}~{}\tau\sim 2.2~{}\mu{\text{s}})$ (3)
producing a well-known spectrum shape. The spectral functions are given by
$\Phi_{\nu_{\mu}}(E_{\nu})=\frac{2m_{\pi}}{m_{\pi}^{2}-m_{\mu}^{2}}\delta\left(1-\frac{2E_{\nu}m_{\pi}}{m_{\pi}^{2}-m_{\mu}^{2}}\right)$
(4)
$\Phi_{\nu_{e}}(E_{\nu})=\frac{192}{m_{\mu}}\left(\frac{E_{\nu}}{m_{\mu}}\right)^{2}\left(\frac{1}{2}-\frac{E_{\nu}}{m_{\mu}}\right)$
(5)
$\Phi_{\bar{\nu}_{\mu}}(E_{\nu})=\frac{64}{m_{\mu}}\left(\frac{E_{\nu}}{m_{\mu}}\right)^{2}\left(\frac{3}{4}-\frac{E_{\nu}}{m_{\mu}}\right)$ (6)
For a pion decay at rest source, $E_{\nu}^{\text{max}}=m_{\mu}/2$ where
$m_{\mu}=105.65$ MeV is the muon mass. The well-known energy spectrum is shown
in Fig. 1 with a 29.8 MeV monoenergetic $\nu_{\mu}$ while the energies of
$\nu_{e}$s and $\bar{\nu}_{\mu}$s range up to $m_{\mu}/2$. Fig. 2 shows the
standard timing distribution with a prompt $\nu_{\mu}$ and delayed $\nu_{e}$
and $\bar{\nu}_{\mu}$ signal. The pulsed time structure gives a strong handle
on suppressing the background.
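The spectral shapes of Eqs. (4)-(6) are simple enough to evaluate directly. The sketch below (illustrative only; the grid choice and function names are our own) locates the monoenergetic $\nu_{\mu}$ line and checks that the continuous $\nu_{e}$ and $\bar{\nu}_{\mu}$ spectra are normalized to unity on $[0,m_{\mu}/2]$.

```python
import numpy as np

M_PI, M_MU = 139.57, 105.65                    # pion and muon masses, MeV
E_LINE = (M_PI**2 - M_MU**2) / (2.0 * M_PI)    # monoenergetic nu_mu, ~29.8 MeV

def phi_nu_e(e):
    """nu_e spectral shape of Eq. (5), nonzero for 0 < E < m_mu/2."""
    e = np.asarray(e, dtype=float)
    return np.where(e < M_MU / 2,
                    (192.0 / M_MU) * (e / M_MU)**2 * (0.5 - e / M_MU), 0.0)

def phi_nubar_mu(e):
    """nubar_mu spectral shape of Eq. (6), nonzero for 0 < E < m_mu/2."""
    e = np.asarray(e, dtype=float)
    return np.where(e < M_MU / 2,
                    (64.0 / M_MU) * (e / M_MU)**2 * (0.75 - e / M_MU), 0.0)

e = np.linspace(0.0, M_MU / 2, 2001)
de = e[1] - e[0]
print(f"nu_mu line at {E_LINE:.1f} MeV")
print("nu_e integral     ~", phi_nu_e(e).sum() * de)      # ~1
print("nubar_mu integral ~", phi_nubar_mu(e).sum() * de)  # ~1
```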
Radiative corrections to these fluxes, from both pion and muon decay, are
small; they are evaluated in Ref. Tomalak:2021lif by comparing the tree-level
neutrino energy spectra with the $\mathcal{O}(\alpha)$ contributions.
Radiative effects modify the expected neutrino fluxes around the peak region
by 3-4 permille.
Figure 2: Standard timing distribution of pion decay at rest neutrinos with a
prompt $\nu_{\mu}$ and delayed $\nu_{e}$ and $\bar{\nu}_{\mu}$ signal.
## III Coherent Elastic Neutrino Scattering off Nuclei
The coherent elastic neutrino-nucleus scattering process occurs when a
neutrino scatters off an entire nucleus, exchanging a $Z^{0}$ boson,
transferring some of its momenta to the nucleus as a whole, but creating no
internal excitations of the nucleus or ejected particles. It’s elastic in the
sense that no new particles are created in the scattering and the residual
nucleus stays in its ground state. For neutrinos carrying a few tens of MeV
energies and scattering off medium-sized nuclei, a dominant fraction of
interactions are expected to be of coherent type.
Figure 3: Diagrammatic representation of the CEvNS process where a single
$Z^{0}$ boson is exchanged between the neutrino and the target nucleus. The
nucleus stays in its ground state and a keV scale nuclear recoil energy is
deposited in the detector.
### III.1 Tree-level Cross Section
A neutrino with four momentum $k_{i}=(E_{i},\vec{k}_{i})$ scatters off the
nucleus, which is initially at rest in the lab frame with
$p_{A}=(M_{A},\vec{0})$, exchanging a $Z^{0}$ boson. The neutrino scatters
off, carrying away four momentum $k_{f}=(E_{f},\vec{k}_{f})$ while the nucleus
remains in its ground state and receives a small recoil energy $T$, so that
$p^{\prime}_{A}=(M_{A}+T,\vec{p}^{\prime}_{A})$ with
$|\vec{p}^{\prime}_{A}|=\sqrt{(M_{A}+T)^{2}-M_{A}^{2}}$ and $T=q^{2}/2M_{A}$.
Here, $M_{A}$ is the rest mass of the nucleus, $q=|\vec{q}|$ is the absolute
value of the three–momentum transfer which is of the order of keV for neutrino
energies of tens of MeV, $Q^{2}\approx q^{2}=|\vec{k}_{f}-\vec{k}_{i}|^{2}$,
and the velocity-dependent factor in the denominator of Eq. (7) below is the
relative velocity of the interacting particles. The process is schematically shown in
Fig. 3.
The initial elementary expression for the cross section reads
$\displaystyle\mathrm{d}^{6}\sigma$
$\displaystyle=\frac{1}{\left|\vec{v}_{i}-\vec{v}_{A}\right|}\frac{m_{i}}{E_{i}}\frac{m_{f}}{E_{f}}\frac{\mathrm{d}^{3}\vec{k}_{f}}{(2\pi)^{3}}\frac{M_{A}}{M_{A}+T}\frac{\mathrm{d}^{3}\vec{p}^{\prime}_{A}}{(2\pi)^{3}}$
(7)
$\displaystyle\times(2\pi)^{4}\overline{\sum}_{fi}\left|\mathcal{M}\right|^{2}\delta^{(4)}(k_{i}+p_{A}-k_{f}-p^{\prime}_{A}).$
This expression can be integrated to yield the expression for the cross
section differential in neutrino scattering angle $\theta_{f}$:
$\displaystyle\frac{\mathrm{d}\sigma}{\mathrm{d}\cos{\theta_{f}}}$
$\displaystyle=\frac{m_{i}}{E_{i}}\frac{m_{f}}{E_{f}}\frac{M_{A}}{M_{A}+T}\frac{E_{f}^{2}}{2\pi}f_{rec}^{-1}\overline{\sum}_{fi}\left|\mathcal{M}\right|^{2}.$
(8)
The recoil factor reads
$f_{rec}=\frac{E_{i}}{E_{f}}\frac{M_{A}}{M_{A}+T}.$ (9)
Working out the Feynman amplitude one gets
$\overline{\sum}_{fi}\left|\mathcal{M}\right|^{2}=\frac{G_{F}^{2}}{2}L_{\mu\nu}W^{\mu\nu},$
(10)
with the nuclear tensor $W^{\mu\nu}$ reading
$W^{\mu\nu}=\overline{\sum}_{fi}(\mathcal{J}^{\mu}_{nuc})^{\dagger}\mathcal{J}^{\nu}_{nuc}.$
(11)
The summation symbols in these expressions denote averaging over initial and
summing over final polarizations, respectively. The nuclear tensor depends on
the nuclear current transition amplitudes:
$\mathcal{J}^{\mu}_{nuc}=\langle\Phi_{\textrm{0}}|\widehat{J}^{\mu}(\vec{q})|\Phi_{\textrm{0}}\rangle.$
(12)
Under the assumption that the nuclei of interest are spherically symmetric
with $J^{\pi}=0^{+}$ and taking the z–axis to be along the direction of
$\vec{q}$, one only needs to take into account the zeroth and third component
of the nuclear current’s vector part, which are furthermore connected through
vector current conservation (CVC):
$q^{\mu}\widehat{J}_{\mu}(\vec{q})=0.\\\ $ (13)
Performing the necessary algebra, one arrives at the final expression
$\frac{\mathrm{d}\sigma}{\mathrm{d}\cos{\theta_{f}}}=\frac{G_{F}^{2}}{2\pi}\frac{E_{f}^{3}}{E_{i}}\left[\frac{Q^{4}}{q^{4}}(1+\cos{\theta_{f}})|\mathcal{J}^{V}_{0}|^{2}\right]$
(14)
where $\mathcal{J}^{V}_{0}$ is the transition amplitude induced by the nuclear
current. One can then safely approximate $\frac{Q^{4}}{q^{4}}\approx 1$ and
express the differential cross section as a function of the neutrino
scattering angle $\theta_{f}$ as:
$\frac{\mathrm{d}\sigma}{\mathrm{d}\cos{\theta_{f}}}=\frac{G_{F}^{2}}{2\pi}\frac{E_{f}^{3}}{E_{i}}(1+\cos{\theta_{f}})\frac{Q_{W}^{2}}{4}F_{W}^{2}(Q^{2})$
(15)
where $G_{F}$ is the Fermi coupling constant, and $Q_{W}$ the tree-level weak
nuclear charge:
$Q^{2}_{W}=[g_{p}^{V}Z+g_{n}^{V}N]^{2}=[(1-4\sin^{2}\theta_{\text{W}})Z-N]^{2}$
(16)
with coupling constants $g_{n}^{V}=-1$ and
$g_{p}^{V}=(1-4\sin^{2}\theta_{\text{W}})$. $N$ and $Z$ are the nucleus’
neutron and proton numbers, and $\theta_{W}$ is the weak mixing angle, whose
value at low momentum transfer is $\sin^{2}{\theta_{W}}=0.23857$
Ishikawa:2018rlv .
Here we have introduced the elastic form factor, $F_{W}^{2}(Q^{2})$, which we
will discuss later in this subsection. In elastic scattering, the entire
nuclear dynamics is encoded in this form factor. Equivalently one can express
the differential cross section as a function of the nuclear recoil $T$, which
reads:
$\frac{\mathrm{d}\sigma}{\mathrm{d}T}=\frac{G^{2}_{F}}{\pi}M_{A}\left(1-\frac{T}{E_{i}}-\frac{M_{A}T}{2E^{2}_{i}}\right)~{}\frac{Q^{2}_{W}}{4}~{}F_{W}^{2}(Q^{2}).$
(17)
In Eq. (15) and (17), we have expressed the CEvNS kinematic distribution both
in neutrino scattering angle, $\theta_{f}$, and in nuclear recoil energy $T$.
$T=q^{2}/(2M)=E_{\nu}-E_{\nu}^{\prime}$ is the nuclear recoil energy (taking
values in $[0,2E_{\nu}^{2}/(M+2E_{\nu})]$). Terms of order $T/E_{\nu}\lesssim
2E_{\nu}/M_{A}$ are usually neglected since they will be negligible for
neutrino energies $E_{\nu}\lesssim 50$ MeV accessible at the stopped pion
sources. The cross section represents the truly “coherent” contribution, in
the sense that the nuclear structure physics that enter the definition of weak
form factor $F_{\text{W}}$, indeed scale with $Z$ and $N$.
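As an illustration, the sketch below integrates Eq. (17) numerically for 40Ar at a typical stopped-pion energy, assuming $F_{W}(Q^{2})\approx 1$ (the low-momentum-transfer limit); a realistic estimate would fold in one of the form factors discussed in Sec. III.2.

```python
import numpy as np

GF = 1.1663787e-11    # Fermi constant, MeV^-2
HBARC2 = 3.8938e-22   # (hbar c)^2 in cm^2 MeV^2, converts MeV^-2 -> cm^2
SIN2TW = 0.23857      # low-Q^2 weak mixing angle quoted above
AMU = 931.494         # atomic mass unit, MeV

def dsigma_dT(T, e_nu, Z, N, fw=1.0):
    """Tree-level CEvNS dsigma/dT of Eq. (17), in cm^2/MeV (T, e_nu in MeV)."""
    m_a = (Z + N) * AMU
    qw2 = ((1.0 - 4.0 * SIN2TW) * Z - N) ** 2
    kin = 1.0 - T / e_nu - m_a * T / (2.0 * e_nu**2)
    return np.where(kin > 0.0,
                    GF**2 / np.pi * m_a * kin * qw2 / 4.0 * fw**2 * HBARC2,
                    0.0)

# Integrated cross section on 40Ar at E_nu = 30 MeV, with F_W ~ 1
e_nu, Z, N = 30.0, 18, 22
t_max = 2.0 * e_nu**2 / ((Z + N) * AMU + 2.0 * e_nu)
t = np.linspace(0.0, t_max, 2000)
print("sigma ~", dsigma_dT(t, e_nu, Z, N).sum() * (t[1] - t[0]), "cm^2")  # ~1e-39
```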
In most experiments, the only signal of a CEvNS event is a nuclear recoil
energy deposition. In principle, future experiments with more advanced
detector technologies may be able to detect both nuclear recoil and angular
distribution simultaneously. Such capabilities are already being explored in
some dark-matter experiments and will significantly enhance the physics
capabilities of future CEvNS experiments Abdullah:2020iiv . The cross section
can also be expressed in terms of the direction of the recoil, converting the
recoil to an angular spectrum. This is referred to in the literature as the
Directional Recoil Spectrum (DRS) Abdullah:2020iiv where the angles are those
of the scattered nucleus measured with respect to the incident neutrino
direction and can be written as
$\frac{\text{d}^{2}R}{\text{d}\Omega\text{d}T}=\frac{1}{2\pi}\left.\frac{\text{d}\sigma}{\text{d}T}\right|_{E_{\nu}=\varepsilon}\,\frac{\varepsilon^{2}}{E_{\nu}^{\text{min}}}\left.\frac{\text{d}\Phi}{\text{d}E_{\nu}}\right|_{E_{\nu}=\varepsilon}\,,$
(18)
where $\text{d}\Phi/\text{d}E_{\nu}$ is the differential neutrino flux,
$E_{\nu}^{\text{min}}=\sqrt{MT/2}$, and
$\frac{1}{\varepsilon}=\frac{\cos\theta}{E_{\nu}^{\text{min}}}-\frac{1}{M}\,.$
(19)
To switch variables directly between $T$ and $\Omega$ one can use the
following relation and the associated Jacobian:
$T=\frac{2ME_{\nu}^{2}\cos^{2}\theta}{(E_{\nu}+M)^{2}-E_{\nu}^{2}\cos^{2}\theta}\,.$
(20)
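A minimal sketch of the conversion in Eq. (20), useful for mapping a recoil-angle acceptance onto recoil energies; the function name and inputs are our own.

```python
import numpy as np

def recoil_energy(theta, e_nu, m_a):
    """Nuclear recoil energy T (MeV) at recoil angle theta (rad), Eq. (20)."""
    c2 = np.cos(theta) ** 2
    return 2.0 * m_a * e_nu**2 * c2 / ((e_nu + m_a) ** 2 - e_nu**2 * c2)

m_ar = 40 * 931.494  # approximate 40Ar mass, MeV
print(recoil_energy(0.0, 30.0, m_ar) * 1e3, "keV")  # forward recoil = T_max
```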
The directional and energy double differential cross section can be written by
noting that the scattering has azimuthal symmetry about the incoming neutrino
direction. Integrating over outgoing nuclear recoil energy gives
$\frac{\text{d}\sigma}{\text{d}\Omega}=\frac{G_{F}^{2}}{16\pi^{2}}Q_{\text{W}}^{2}E_{\nu}(1+\cos\theta)\big{[}F_{\text{W}}(q^{2})\big{]}^{2}\,,$
(21)
where the angle is defined as
$\text{d}\Omega=2\pi\,\text{d}\cos\theta$ (22)
and $\theta$ is the scattering angle between the direction of the incoming and
outgoing neutrino.
The scattering process’ cross section is proportional to the squared magnitude
of the transition amplitude induced by the nuclear current. Since the relevant
ground state to ground state transition for spherically symmetrical nuclei is
$0^{+}\rightarrow 0^{+}$, only the vector part of the current will contribute.
The amplitude can be expressed as
$\displaystyle\mathcal{J}^{V}_{0}$
$\displaystyle=\langle\Phi_{0}|\widehat{J}_{0}^{V}(\vec{q})|\Phi_{0}\rangle$
(23) $\displaystyle=\int\mathrm{d}\vec{r}\,
e^{i\vec{q}\cdot\vec{r}}\langle\Phi_{0}|\widehat{J}_{0}^{V}(\vec{r})|\Phi_{0}\rangle$
$\displaystyle=\frac{1}{2}\left[\left(1-4\sin^{2}{\theta_{W}}\right)f_{p}(\vec{q})F_{p}(Q^{2})\right.$
$\displaystyle-\left.f_{n}(\vec{q})F_{n}(Q^{2})\right],$
where we have inserted the impulse approximation (IA) expression for the
nuclear current, as a sum of single–body operators:
$\widehat{J}_{0}^{V}(\vec{r})=\sum_{i}F^{Z}(Q^{2},i)\delta^{(3)}(\vec{r}-\vec{r}_{i}),$
(24)
with
$\displaystyle F^{Z}(Q^{2},i)$
$\displaystyle=\left(\frac{1}{2}-\sin^{2}{\theta_{W}}\right)(F_{p}-F_{n})\tau_{3}(i)$
(25) $\displaystyle-\sin^{2}{\theta_{W}}(F_{p}+F_{n}),$
where we used the convention $\tau_{3}(i)=+1$ for protons and $-1$ for neutrons.
Furthermore, $f_{p}(\vec{q})$ and $f_{n}(\vec{q})$ are the Fourier transforms
of the proton and neutron densities, respectively. $F_{p}$ and $F_{n}$ are
proton and neutron form factors, for which we adopt the standard Galster
parametrization. Note that using a more sophisticated parametrization of the
form factor, other than Galster, will not affect the results at the energies
relevant to this work. The overall structure of the transition amplitude
consists of products of the weak charge with two factors: the nuclear form
factor, determined by the spatial distribution of the nucleons in the nucleus,
as well as the nucleon form factor. We arrive at the expression:
$\displaystyle F_{W}(Q^{2})$
$\displaystyle=\frac{1}{Q_{W}}\left[\left(1-4\sin^{2}{\theta_{W}}\right)f_{p}(\vec{q})F_{p}(Q^{2})\right.$
(26)
$\displaystyle\left.-f_{n}(\vec{q})F_{n}(Q^{2})\right]=\frac{2}{Q_{W}}\mathcal{J}^{V}_{0},$
such that the form factor becomes 1 in the static limit. Note that in writing
down the functional dependence, we can make use of the non–relativistic
approximation $Q\approx|\vec{q}|$, valid in the energy regime considered.
### III.2 Uncertainty on the Cross Section
At tree level, the theoretical uncertainty on the CEvNS cross section is
driven by the uncertainty on the weak form factor of the nucleus. In deriving
the CEvNS cross section in the previous section, a number of subtleties were
ignored, including subleading kinematic effects, axial-vector contributions,
and radiative corrections. In this subsection, we first discuss the
uncertainty on the tree-level cross section driven by the weak form factor
and then briefly discuss the other, subleading, uncertainties.
The CEvNS cross section is proportional to the weak form factor of Eq. (26).
In general, the form factor can be reasonably approximated by several
different functional forms. The simplest approach is to write the neutron and
proton form factors in Eq. (26) as Fourier transforms of the neutron and
proton densities, treating the nucleus as spherically symmetric:
$F_{n}(Q^{2})=\frac{4\pi}{N}\int dr~{}r^{2}~{}\frac{\sin(Qr)}{Qr}~{}\rho_{n}(r)$ (27)
$F_{p}(Q^{2})=\frac{4\pi}{Z}\int dr~{}r^{2}~{}\frac{\sin(Qr)}{Qr}~{}\rho_{p}(r)$ (28)
where $\rho_{n}(r)$ and $\rho_{p}(r)$ are the neutron and proton density
distributions, normalized to the neutron and proton numbers. In the limit
$q\rightarrow 0$ the nuclear form factors equal 1. The small coefficient
multiplying the proton form factor in Eq. (26) makes the weak form factor,
and hence CEvNS, mainly sensitive to the neutron density distribution.
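For a given density profile, Eqs. (27) and (28) can be evaluated by direct numerical quadrature. The sketch below uses a two-parameter Fermi (Woods-Saxon) profile whose parameters are illustrative assumptions, not fitted values.

```python
import numpy as np

def fermi_density(r, c=3.53, a=0.54):
    """Two-parameter Fermi (Woods-Saxon) profile; c, a in fm are illustrative."""
    return 1.0 / (1.0 + np.exp((r - c) / a))

def form_factor(q, rho=fermi_density, r_max=15.0, n_pts=3000):
    """F(Q) of Eq. (27), normalized so that F(0) = 1; q in fm^-1, r in fm."""
    r = np.linspace(1e-6, r_max, n_pts)
    dr = r[1] - r[0]
    weight = 4.0 * np.pi * r**2 * rho(r) * dr   # radial volume element
    j0 = np.sinc(q * r / np.pi)                 # sin(qr)/(qr)
    return np.sum(j0 * weight) / np.sum(weight) # normalization fixes N

print(form_factor(0.0), form_factor(0.5))  # 1.0, and a value below 1
```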
The charge density of a nucleus is strongly dominated by the protons and has
been extensively studied with impressive precision in elastic electron
scattering experiments, starting in the late 1950s Hofstadter:1956qs , followed
by subsequent refinements over the decades DeVries:1987atn ; Fricke:1995zz ;
Angeli:2013epw . On the other hand, the neutron density distributions are hard
to determine, and various efforts using hadronic probes were plagued by
uncontrolled model–dependent uncertainties associated with the strong
interaction Thiel:2019tkm . Electroweak processes such as parity-violating
electron scattering (PVES) Donnelly:1989qs and CEvNS have long been
considered clean and model-independent probes for extracting ground-state
neutron densities. Both of these, though long considered experimentally
challenging, have become a reality in recent years.
Phenomenological form factors, such as Helm Helm:1956 and Klein-Nystrand
KN:1999 , are widely used in the CEvNS community where density distributions
are represented by analytical expressions. The empirical value of proton rms
radius, extracted from elastic electron scattering data, is often used to
evaluate the proton form factor, and the same parameterization (or a variation
of that) is assumed for the neutron form factor.
In the Helm approach Helm:1956 , the nucleonic density distribution is
described as a convolution of a uniform density with radius $R_{0}$ and a
Gaussian profile characterized by the folding width $s$, accounting for the
surface thickness and the form factor is expressed as:
$F_{\text{Helm}}(q^{2})=\frac{3j_{1}(qR_{0})}{qR_{0}}e^{-q^{2}s^{2}/2}$ (29)
where $j_{1}(x)=\sin(x)/x^{2}-\cos(x)/x$ is a spherical Bessel function of the
first kind. $R_{0}$ is an effective nuclear radius given as:
$R_{0}^{2}=(1.23A^{1/3}-0.6)^{2}+\frac{7}{3}\pi^{2}r_{0}^{2}-5s^{2}$ with
$r_{0}$ = 0.52 fm and $s$ = 0.9 fm, fitted Duda:2006uk ; Lewin:1995rx to muon
spectroscopy and electron scattering data compiled in Fricke:1995zz .
The Klein–Nystrand (KN) form factor, adopted by the COHERENT Collaboration, is
obtained from the convolution of a short-range Yukawa potential with range
$a_{k}$ = 0.7 fm over a Woods–Saxon distribution approximated as a hard sphere
with radius $R_{A}=1.23A^{1/3}$ fm KN:1999 . The resulting form factor is
expressed as:
Figure 4: Relative differences in the 40Ar weak form factor predictions of
Payne et al. Payne:2019wvy , Yang et al. Yang:2019pbx , Hoferichter et al.
Hoferichter:2020osn , Helm Helm:1956 , Klein–Nystrand KN:1999 and the adapted
Klein–Nystrand AristizabalSierra:2019zmy ; Papoulias:2019xaw , all with
respect to HF calculations of Van Dessel et al. VanDessel:2020epd . Figure
adapted from Ref. VanDessel:2020epd .
$F_{\text{KN}}(q^{2})=\frac{3j_{1}(qR_{A})}{qR_{A}}\left[\frac{1}{1+q^{2}a_{k}^{2}}\right].$
(30)
An adapted version of the KN form factor, the (ad.) KN form factor, is also
often used, in which $R_{A}$ is defined as
$R_{A}=\sqrt{\frac{5}{3}r_{0}^{2}-10a_{k}^{2}}$, utilizing the measured
proton rms radius $r_{0}$ of the nucleus AristizabalSierra:2019zmy ;
Papoulias:2019xaw . The measured proton rms radius of 40Ar is $r_{0}=3.427$
fm Angeli:2013epw .
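The two phenomenological parametrizations are straightforward to code up; the sketch below implements Eqs. (29) and (30) with the parameter values quoted above ($s=0.9$ fm, $r_{0}=0.52$ fm, $a_{k}=0.7$ fm).

```python
import numpy as np

def j1(x):
    """Spherical Bessel function of the first kind, j_1(x)."""
    return np.sin(x) / x**2 - np.cos(x) / x

def helm_ff(q, A, s=0.9, r0=0.52):
    """Helm form factor, Eq. (29); q in fm^-1."""
    R0 = np.sqrt((1.23 * A ** (1 / 3) - 0.6) ** 2
                 + (7.0 / 3.0) * np.pi**2 * r0**2 - 5.0 * s**2)
    return 3.0 * j1(q * R0) / (q * R0) * np.exp(-(q * s) ** 2 / 2.0)

def kn_ff(q, A, a_k=0.7):
    """Klein-Nystrand form factor, Eq. (30), with R_A = 1.23 A^(1/3) fm."""
    R_A = 1.23 * A ** (1 / 3)
    return 3.0 * j1(q * R_A) / (q * R_A) / (1.0 + (q * a_k) ** 2)

q = 0.5  # fm^-1, roughly the highest momentum transfer for stopped-pion CEvNS
print("40Ar at q = 0.5/fm:", helm_ff(q, 40), kn_ff(q, 40))
```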
More involved nuclear-structure calculations, which provide a more accurate
picture of the nuclear ground state, have been reported in recent years: the
first-principles coupled-cluster calculations of Payne et al. Payne:2019wvy ;
the shell-model calculations of Hoferichter et al. Hoferichter:2020osn ,
where form factors are calculated using a large-scale nuclear shell model;
the relativistic mean-field method of Yang et al. Yang:2019pbx , where form
factor predictions are informed by properties of finite nuclei and
neutron-star matter; and the Hartree–Fock approach of Van Dessel et al.
VanDessel:2020epd , where form factors are computed in a mean field built
from a Skyrme potential. In order to quantify the differences between form
factors, and between the resulting CEvNS cross sections, due to the different
underlying nuclear-structure details, we consider quantities that emphasize
the relative differences between the results of different calculations,
arbitrarily using Hartree–Fock (HF) as the reference calculation, as follows:
Figure 5: Relative differences in the 40Ar CEvNS cross section predictions of
Payne et al. Payne:2019wvy , Yang et al. Yang:2019pbx , Hoferichter et al.
Hoferichter:2020osn , Helm Helm:1956 , Klein–Nystrand KN:1999 and the adapted
Klein–Nystrand AristizabalSierra:2019zmy ; Papoulias:2019xaw , all with
respect to HF calculations of Van Dessel et al. VanDessel:2020epd . Figure
adapted from Ref. VanDessel:2020epd .
$|\Delta
F_{\text{W}}^{i}(q)|~{}=~{}\frac{|F_{\text{W}}^{i}(q)-F_{\text{W}}^{\text{HF}}(q)|}{|F_{\text{W}}^{\text{HF}}(q)|},$
(31)
$|\Delta\sigma_{\text{W}}^{i}(E_{\nu})|~{}=~{}\frac{|\sigma_{\text{W}}^{i}(E_{\nu})-\sigma_{\text{W}}^{\text{HF}}(E_{\nu})|}{|\sigma_{\text{W}}^{\text{HF}}(E_{\nu})|},$
(32)
where $i$ refers to calculations from different approaches as discussed above.
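As an illustration of Eq. (31), the snippet below (reusing helm_ff and kn_ff from the sketch above) computes the relative difference between the KN and Helm predictions, with the Helm form factor standing in as an arbitrary reference, since the HF calculation requires a full nuclear-structure code.

```python
import numpy as np

q = np.linspace(0.05, 0.5, 10)  # fm^-1, the low-q region shown in Fig. 4
delta_f = np.abs(kn_ff(q, 40) - helm_ff(q, 40)) / np.abs(helm_ff(q, 40))
print(np.round(100.0 * delta_f, 2), "% relative difference, Eq. (31)")
```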
The relative differences are shown in Fig. 4 and Fig. 5. We show only the
low–momentum part of the weak form factor to a maximum value of $q$ = 0.5 fm-1
($\sim$ 100 MeV) that corresponds to a maximum incoming neutrino energy of E
$\sim$ 50 MeV. The relative differences are shown on a linear scale. At
smaller energies, the momentum transfer is low and hence the differences
between form factors are also small. For higher energies, the available
momentum transfer increases and therefore, the differences between the form
factors become more prevalent. The differences in model predictions amount to
$<7.5\%$ over the entire momentum transfer range. The differences rise rapidly
at the higher end of the $q$ range. This translates into relative differences
in CEvNS cross sections, $\Delta\sigma(E)$, of $<5\%$ over the whole energy
range $E\lesssim 55$ MeV relevant for neutrinos from pion decay at rest.
In writing down the CEvNS cross section, Eq. (17), only the vector operators
were considered. In principle, the axial-vector operator adds a contribution
that is not coherently enhanced; including it modifies the cross section to
the form
$\frac{\text{d}\sigma}{\text{d}T}=\frac{G_{F}^{2}M}{4\pi}\bigg{(}1-\frac{MT}{2E_{\nu}^{2}}-\frac{T}{E_{\nu}}\bigg{)}Q_{\text{W}}^{2}\big{[}F_{\text{W}}(q^{2})\big{]}^{2}+\frac{G_{F}^{2}M}{4\pi}\bigg{(}1+\frac{MT}{2E_{\nu}^{2}}-\frac{T}{E_{\nu}}\bigg{)}F_{A}(q^{2})\,,$
(33)
with an axial-vector form factor $F_{A}(q^{2})$ Hoferichter:2020osn . The
axial-vector form factor depends on the axial charges and radii of the
nucleon. This contribution vanishes for spin-zero nuclei such as 40Ar.
The CEvNS cross section expression of Eq. (17) holds at tree level, in which
case $Q_{\text{W}}$ is flavor universal and applies to both neutrino and
electron scattering. Once radiative corrections are included, process- and
flavor-dependent contributions arise, such that separate weak charges
need to be defined. Electrons and muons running in loops introduce a non-
trivial dependence on the momentum transfer due to their relatively light
masses. These break the flavor universality because of mass-dependent
electromagnetic radiative corrections. For CEvNS, the corresponding radiative
corrections have been studied in Ref. Tomalak:2020zfh . At next-to-leading
order (NLO) in the electromagnetic coupling constant $\alpha$, photon-mediated
scattering takes place and the cross section inherits a flavor-dependent
contribution entering with a charge form factor of the nucleus.
$\frac{\mathrm{d}\sigma_{\nu_{\ell}}}{\mathrm{d}T}=\frac{\mathrm{G}_{\mathrm{F}}^{2}M_{\mathrm{A}}}{4\pi}\left(1-\frac{T}{E_{\nu}}-\frac{M_{\mathrm{A}}T}{2E_{\nu}^{2}}\right)\left(\mathrm{F}_{\mathrm{W}}\left(Q^{2}\right)+\frac{\alpha}{\pi}[\delta^{\nu_{\ell}}+\delta^{\text{QCD}}]\mathrm{F}_{\mathrm{ch}}(Q^{2})\right)^{2},$
(34)
The expression depends on the weak, $\mathrm{F}_{\mathrm{W}}$, and charge,
$\mathrm{F}_{\mathrm{ch}}$, nuclear form factors. The charge form factor
enters multiplied by $\delta^{\nu_{\ell}}$ and $\delta^{\text{QCD}}$ which are
radiative corrections. The corrections induced by hadronic and/or quark loops,
proportional to $\delta^{\text{QCD}}$, are flavor independent, whereas the
corrections from charged leptons, proportional to $\delta^{\nu_{\ell}}$,
depend on the neutrino flavor $\ell$.
A detailed estimate of the total theoretical uncertainty on the CEvNS cross
section on the 40Ar nucleus was given in Ref. Tomalak:2020zfh and is shown in
Tab. 1. The estimated error budget accounts for uncertainties stemming from a
variety of sources at the nuclear, nucleon, and quark levels. At higher
energies, the main source of uncertainty for the CEvNS cross section comes
from nuclear physics; in fact, it can be traced back to the uncertainty on
the neutron distribution inside the nucleus.
Incident Neutrino Energy | Nuclear Level Uncertainty on ${}^{40}\mathrm{Ar}$ | Total Estimated Theoretical Uncertainty on ${}^{40}\mathrm{Ar}$
---|---|---
10 (MeV) | 0.04% | 0.58%
30 (MeV) | 1.5% | 1.65%
50 (MeV) | 4.0% | 4.05%
Table 1: Estimated theoretical error budget for the CEvNS cross section on a
${}^{40}\mathrm{Ar}$ target. Table adapted from Ref. Tomalak:2020zfh .
### III.3 Input from Parity Violating Electron Scattering
CEvNS and Parity Violating Electron Scattering (PVES) are intimately connected
to each other. From the formal point of view, both processes are described in
first order perturbation theory via the exchange of an electroweak gauge boson
between a lepton and a nucleus. While in CEvNS the lepton is a neutrino and a
$Z^{0}$ boson is exchanged, in PVES the lepton is an electron, but measuring
the asymmetry allows one to select the interference between the $\gamma$ and
$Z^{0}$ exchange. As a result, both the CEvNS cross section and the PVES
asymmetry depend on the weak form factor $F_{W}(Q^{2})$, which is mostly
determined by the neutron distribution within the nucleus. The latter builds
an even stronger anchor between CEvNS and PVES.
The key experimental observable in the elastic scattering of longitudinally
polarized electrons from the unpolarized spin-0 nucleus is the parity-
violating asymmetry ${A}_{\mathrm{PV}}$. The parity-violating asymmetry arises
from the interference of $\gamma$-mediated and $Z$-mediated scattering
diagrams. The asymmetry ${A}_{\mathrm{PV}}$ is determined from the fractional
difference in cross sections between the scattering of positive and negative
helicity electrons
$A_{pv}=\frac{d\sigma/d\Omega_{+}-d\sigma/d\Omega_{-}}{d\sigma/d\Omega_{+}+d\sigma/d\Omega_{-}}$
(35)
where $\pm$ refers to the polarization of the electron. In the Born
approximation at low momentum transfer, $\mathrm{A}_{\mathrm{PV}}$ is
proportional to the ratio of the weak to the charge form factors of the
nucleus
$A_{pv}=\frac{G_{F}q^{2}|Q_{W}|}{4\pi\alpha\sqrt{2}Z}\frac{F_{W}(q^{2})}{F_{ch}(q^{2})}.$
(36)
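The Born-approximation asymmetry of Eq. (36) is easily evaluated; the sketch below sets $F_{W}/F_{\text{ch}}=1$ for illustration (a real extraction uses the measured charge form factor and accounts for Coulomb distortions) and evaluates 208Pb at the PREX kinematics quoted in the next paragraph.

```python
import numpy as np

GF = 1.1663787e-11     # Fermi constant, MeV^-2
ALPHA = 1.0 / 137.036  # fine-structure constant
SIN2TW = 0.23857

def a_pv(q2, Z, N, fw_over_fch=1.0):
    """Born-approximation parity-violating asymmetry, Eq. (36); q2 in MeV^2."""
    qw = abs((1.0 - 4.0 * SIN2TW) * Z - N)
    return GF * q2 * qw / (4.0 * np.pi * ALPHA * np.sqrt(2.0) * Z) * fw_over_fch

# 208Pb at <Q^2> ~ 8800 MeV^2 (PREX kinematics); form factor ratio set to 1
print(a_pv(8800.0, 82, 126) * 1e6, "ppm")  # O(1) ppm, as in Table 2
```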
For a given nucleus, if $\mathrm{F}_{\text{ch}}(Q^{2})$ is already known from
the elastic electron scattering experiment, one can extract
$\mathrm{F}_{\text{W}}(Q^{2})$ from the measured $\mathrm{A}_{\mathrm{PV}}$ at
the momentum transfer of the experiment after accounting for radiative
corrections and Coulomb distortion effects not considered in the Born
approximation Horowitz:1999fk . Coulomb distortions can be theoretically
calculated by solving the Dirac equation for an electron moving in a nuclear
potential Yennie:1954zz ; Yennie:1965zz ; Kim:1996ua ; Kim:2001sq and are
relatively well understood Horowitz:1998vv .
The PREX experiment at the Jefferson Lab (JLab) has recently provided the
first model-independent determination of the weak-charge form factor of 208Pb,
measuring $\mathrm{F}_{\text{W}}(\langle Q^{2}\rangle)=0.204\pm 0.028$ at the average
momentum transfer of the experiment $\langle Q^{2}\rangle\approx
8800~{}\mathrm{MeV}^{2}$ Abrahamyan:2012gp ; Horowitz:2012tj . The follow-up
PREX-II experiment is underway to improve the precision of that measurement.
Another PVES experiment at JLab, CREX, is planned to measure the weak-charge
form factor of 48Ca Kumar:2020ejz . In practice, however, both PREX-II and
CREX will measure the weak form factor at a single value of the momentum
transfer and are not expected to perform measurements at several values of
the momentum transfer. Future facilities such as the MESA facility
in Mainz, envisioned to start operations in a few years, will also be suited
for high-precision parity-violating experiments Becker:2018ggl . Tab. 2
summarizes current and near-future PVES experiments. It is worth noting that
CEvNS can be used to probe the weak form factor only at low momentum transfers
where the process remains coherent, but accesses a continuum of four-momentum
transfers. In contrast, PVES experiments are usually carried out at a single
value of the momentum transfer at a time. A combination of measurements from
these two independent and complementary scattering techniques is ideal since
systematic uncertainties are largely uncorrelated. This will then provide an
empirical extraction of a nucleus’ weak form factor in a clean and model-
independent fashion.
Experiment | Target | $q^{2}$ (GeV${}^{2}$) | $A_{pv}$ (ppm) | $\pm\delta R_{n}$ (%)
---|---|---|---|---
PREX at JLab | 208Pb | 0.00616 | $0.550\pm 0.018$ | 1.3
CREX at JLab | 48Ca | 0.0297 | | 0.7
Qweak at JLab | 27Al | 0.0236 | $2.16\pm 0.19$ | 4
MREX at MESA | 208Pb | 0.0073 | | 0.52
Table 2: Parity violating elastic electron scattering experiments.
In principle, parity-violating electron scattering experiments offer the least
model-dependent and most precise approach to experimentally probing the
neutron distribution. Any result that will come from the PVES program with the
goal of pinning down the neutron-skin thickness will help improve our
understanding of the weak form factor and hence influence CEvNS. However,
CEvNS has also been proposed as an alternative and attractive opportunity in
the future to constrain the neutron distribution and the neutron radius in
nuclei Amanik:2009zz ; Patton:2012jr ; Cadeddu:2017etk , provided that enough
statistics can be reached.
The main difference lies in the choice of the nuclear target, which is
determined by practical considerations. In the case of PVES, the targets need
to be stable (or almost stable) neutron-rich nuclei, such as 208Pb and 48Ca,
that do not present low-lying excited states that would contribute to the
background noise. In the case of CEvNS, isotopes of sodium, argon, germanium,
cesium, and iodine will be used, as their low cost allows one to build large
detectors with these materials. Because various electroweak observables
correlate with each other Yang:2019pbx , theoretical calculations will help to
further connect the various nuclear targets and the two endeavors of CEvNS and
PVES. For example, we can expect that constraints experimentally determined on
the neutron-skin thickness of one nuclear target will affect the prediction of
the weak form factor of another target. CEvNS experiments also prefer detector
materials with low scintillation or ionization thresholds in order to
efficiently measure low-energy nuclear recoils. Quite the contrary is needed
for the target material in parity-violating electron scattering experiments:
in this case, the higher the excited state of the nucleus, the lower the
contamination of the elastic asymmetries by inelastic contributions from the
excited state. In addition, due to the high intensity of the electron beam, a
high melting temperature of the target material is also desirable.
## IV Inelastic Neutrino Scattering off Nuclei
Figure 6: Diagrammatic representation of the inelastic neutrino-nucleus
scattering where a single $W^{+}$ (CC) or $Z^{0}$ (NC) boson is exchanged
between neutrino and target nucleus, exciting the nucleus into low-lying
nuclear states, followed by nuclear de-excitation products, $X$ (gamma or a
nucleon), with energies of the same order as the incident neutrino energies.
CEvNS experiments at stopped–pion sources are also sensitive to inelastic
neutrino-nucleus interactions. For neutrino energies less than about $\sim$100
MeV, the CEvNS interaction channel dominates the neutrino-nucleus cross
section over inelastic charged-current (CC) and neutral-current (NC) neutrino-
nucleus interactions. In the inelastic NC or CC scattering, shown in Fig. 6,
the neutrino excites the target nucleus to a low-lying nuclear state, followed
by nuclear de-excitation products such as gamma rays or ejected nucleons. The
interaction cross sections for these processes lack the $N^{2}$ enhancement
associated with CEvNS and, therefore, tend to be at least one order of
magnitude smaller than that of the CEvNS process, as shown in Fig. 7. The
observable final-state particles of these inelastic scattering processes have
typical energies of the same order as the incident neutrino energies.
Figure 7: CEvNS cross section compared with CC and NC inelastic scattering
cross section on argon.
The inelastic neutrino-nucleus scattering process is schematically shown in
Fig. 6. A neutrino with four-momentum $k_{i}=(E_{i},\vec{k}_{i})$ scatters off
the nucleus, which is initially at rest in the lab frame, exchanging a $W^{+}$
(CC) or a $Z^{0}$ (NC) boson. The nucleus receives four momentum
$Q=(\omega,\vec{q})$, where $\omega=E_{i}-E_{f}$ and
$\vec{q}=\vec{k}_{i}-\vec{k}_{f}$, while the scattered lepton carries away
four momentum $k_{f}=(E_{f},\vec{k}_{f})$. For an inclusive process, the
hadronic part of the final states is integrated out. The inelastic neutrino-
nucleus differential cross section of this process can be written as
$\displaystyle\frac{\mathrm{d}^{3}\sigma}{\mathrm{d}\omega\mathrm{d}\Omega}=$
$\displaystyle\sigma_{W}E_{f}k_{f}\zeta^{2}(Z^{\prime},E_{f})$ (37)
$\displaystyle\times\left(v^{\mathcal{M}}R^{\mathcal{M}}+v^{\mathcal{L}}R^{\mathcal{L}}+v^{\mathcal{ML}}R^{\mathcal{ML}}\right.$
$\displaystyle+\left.v^{T}R^{T}+hv^{TT}R^{TT}\right),$
with the Mott-like cross section prefactor $\sigma_{W}$ defined as
$\sigma_{W}^{CC}=\left(\frac{G_{F}\cos{\theta_{c}}}{2\pi}\right)^{2},~{}\sigma_{W}^{NC}=\left(\frac{G_{F}}{2\pi}\right)^{2},$
where $G_{F}$ is the Fermi constant and $\theta_{c}$ is the Cabibbo angle.
The factor $\zeta^{2}(Z^{\prime},E_{f})$ is introduced in order to take into
account the distortion of the scattered lepton wave function in the Coulomb
field of the final nucleus with $Z^{\prime}$ protons, in the case of CC
interaction VanDessel:2019obk ; Pandey:2014tza . In the NC case
$\zeta^{2}(Z,E_{f})$ equals $1$. The influence of the lepton helicity on the
cross section is encoded in $h$ which is $+$ for neutrinos and $-$ for
antineutrinos.
The $v$–factors are leptonic functions that are entirely determined by lepton
kinematics. The $R$–factors are the nuclear response functions that depend on
the energy and momentum transfer ($\omega$, $q$) and contain all the nuclear
information involved in this process. The indices $L$ and $T$ correspond to
longitudinal and transverse contributions, relative to the direction of the
momentum transfer. The leptonic coefficients $v^{\mathcal{M}}$,
$v^{\mathcal{L}}$, $v^{\mathcal{M\mathcal{L}}}$, $v^{T}$, and $v^{TT}$ are
given as
Figure 8: Charged current neutrino-argon differential cross section for
neutrino energy 30 and 50 MeV shown as a function of the energy transferred to
the nucleus ($\omega$).
$\displaystyle
v^{\mathcal{M}}=\left[1+\frac{\kappa_{f}}{\varepsilon_{f}}\cos\theta\right],$
(38) $\displaystyle
v^{\mathcal{L}}=\left[1+\frac{\kappa_{f}}{\varepsilon_{f}}\cos\theta-\frac{2\varepsilon_{i}\varepsilon_{f}}{|\vec{q}|^{2}}{\left(\frac{\kappa_{f}}{\varepsilon_{f}}\right)}^{2}\sin^{2}\theta\right],$
(39) $\displaystyle
v^{\mathcal{M}\mathcal{L}}=\left[\frac{\omega}{|\vec{q}|}\left(1+\frac{\kappa_{f}}{\varepsilon_{f}}\cos\theta\right)+\frac{m_{l}^{2}}{\varepsilon_{f}|\vec{q}|}\right],$
(40) $\displaystyle
v^{T}=\left[1-\frac{\kappa_{f}}{\varepsilon_{f}}\cos\theta+\frac{\varepsilon_{i}\varepsilon_{f}}{|\vec{q}|^{2}}{\left(\frac{\kappa_{f}}{\varepsilon_{f}}\right)}^{2}\sin^{2}\theta\right],$
(41) $\displaystyle
v^{TT}=\left[\frac{\varepsilon_{i}+\varepsilon_{f}}{|\vec{q}|}\left(1-\frac{\kappa_{f}}{\varepsilon_{f}}\cos\theta\right)-\frac{m_{l}^{2}}{\varepsilon_{f}|\vec{q}|}\right],$
(42)
and response functions $R^{\mathcal{M}}$, $R^{\mathcal{L}}$,
$R^{\mathcal{ML}}$, $R^{T}$, and $R^{TT}$ are defined as
$\displaystyle R^{\mathcal{M}}=|\langle
J_{f}||\widehat{\mathcal{M}}_{J}(|\vec{q}|)||J_{i}\rangle|^{2},$ (43)
$\displaystyle R^{\mathcal{L}}=|\langle
J_{f}||\widehat{\mathcal{L}}_{J}(|\vec{q}|)||J_{i}\rangle|^{2},$ (44)
$\displaystyle R^{\mathcal{ML}}=~{}\mathcal{R}\left[\langle
J_{f}||\widehat{\mathcal{L}}_{J}(|\vec{q}|)||J_{i}\rangle\langle
J_{f}||\widehat{\mathcal{M}}_{J}(|\vec{q}|)||J_{i}\rangle^{\ast}\right],$ (45)
$\displaystyle R^{T}=\left[|\langle
J_{f}||\widehat{\mathcal{J}}_{J}^{mag}(|\vec{q}|)||J_{i}\rangle|^{2}+|\langle
J_{f}||\widehat{\mathcal{J}}_{J}^{el}(|\vec{q}|)||J_{i}\rangle|^{2}\right],$
$\displaystyle R^{TT}=~{}\mathcal{R}\left[\langle
J_{f}||\widehat{\mathcal{J}}_{J}^{mag}(|\vec{q}|)||J_{i}\rangle\langle
J_{f}||\widehat{\mathcal{J}}_{J}^{el}(|\vec{q}|)||J_{i}\rangle^{\ast}\right].$
Here $\widehat{\mathcal{M}}_{J}$, $\widehat{\mathcal{L}}_{J}$,
$\widehat{\mathcal{J}}_{J}^{mag}$ and $\widehat{\mathcal{J}}_{J}^{el}$ are the
Coulomb, longitudinal, transverse magnetic, and transverse electric operators,
respectively OConnell:1972edu ; Walecka:1995 .
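Since the leptonic coefficients of Eqs. (38)-(42) are fixed entirely by lepton kinematics, they can be evaluated without any nuclear input. The sketch below does so for an illustrative CC event; the kinematic inputs are assumptions chosen only for demonstration.

```python
import numpy as np

def lepton_factors(eps_i, eps_f, m_l, theta):
    """Leptonic coefficients of Eqs. (38)-(42); energies and masses in MeV."""
    k_f = np.sqrt(eps_f**2 - m_l**2)               # outgoing lepton momentum
    q2 = eps_i**2 + k_f**2 - 2.0 * eps_i * k_f * np.cos(theta)
    q = np.sqrt(q2)                                # |three-momentum transfer|
    r, c, s2 = k_f / eps_f, np.cos(theta), np.sin(theta) ** 2
    vM = 1.0 + r * c
    vL = 1.0 + r * c - 2.0 * eps_i * eps_f / q2 * r**2 * s2
    vML = (eps_i - eps_f) / q * (1.0 + r * c) + m_l**2 / (eps_f * q)
    vT = 1.0 - r * c + eps_i * eps_f / q2 * r**2 * s2
    vTT = (eps_i + eps_f) / q * (1.0 - r * c) - m_l**2 / (eps_f * q)
    return vM, vL, vML, vT, vTT

# Illustrative nu_e CC event: E_nu = 40 MeV, electron at 25 MeV and 30 degrees
print(lepton_factors(40.0, 25.0, 0.511, np.radians(30.0)))
```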
Figure 9: Neutral current neutrino-argon differential cross section for
neutrino energy 30 and 50 MeV shown as a function of the energy transferred to
the nucleus ($\omega$).
The nuclear responses are functions of the transition amplitude,
${J}_{\mu}^{nucl}(\omega,q)$, between the initial $|\Phi_{\textrm{0}}\rangle$
and final $|\Phi_{\textrm{f}}\rangle$ state:
${J}_{\mu}^{nucl}(\omega,q)=\langle\Phi_{\textrm{f}}|\hat{J}_{\mu}(q)|\Phi_{\textrm{0}}\rangle,$
(48)
where the nuclear current, $\hat{J}_{\mu}({q})$, is the Fourier transform of
the nuclear current operator in coordinate space:
$\hat{J}_{\mu}(q)=\int\mathrm{d}{x}e^{i{x}\cdot{q}}\hat{J}_{\mu}({x}).$ (49)
Nuclear responses are computed within a nuclear model. Fig. 8 and 9 show
inelastic CC and NC cross-section on 40Ar as a function of $\omega$ for
incoming neutrino energy of 30 and 50 MeV, calculated within a microscopic
many-body nuclear theory approach of Refs. Jachowicz:2002rr ; Pandey:2014tza ;
Pandey:2016jju ; VanDessel:2019atx . With the stopped-pion flux, only
$\nu_{e}$ CC interactions are accessible, given that $\nu_{\mu}$ and
$\bar{\nu}_{\mu}$ are below the CC threshold of $\sim$ 110 MeV, needed to
create a muon. NC interactions, in contrast, are available for all neutrino
types, $\nu_{e}$, $\nu_{\mu}$, and $\bar{\nu}_{\mu}$, at the stopped-pion facility. The
experimental requirements for CEvNS and inelastic signals are quite different.
Larger masses are needed for inelastics, as well as the dynamic range to
record MeV-scale energy depositions, while very low thresholds are not
required.
Reaction Channel | Experiment | Measurement ($10^{-42}$ cm2)
---|---|---
12C($\nu_{e},e^{-}$)12Ng.s. | KARMEN | $9.1\pm 0.5{\rm(stat)}\pm 0.8{\rm(sys)}$
| E225 | $10.5\pm 1.0{\rm(stat)}\pm 1.0{\rm(sys)}$
| LSND | $8.9\pm 0.3{\rm(stat)}\pm 0.9{\rm(sys)}$
12C($\nu_{e},e^{-}$)12N∗ | KARMEN | $5.1\pm 0.6{\rm(stat)}\pm 0.5{\rm(sys)}$
| E225 | $3.6\pm 2.0{\rm(tot)}$
| LSND | $4.3\pm 0.4{\rm(stat)}\pm 0.6{\rm(sys)}$
12C($\nu_{\mu},\nu_{\mu}$)12C∗ | KARMEN | $3.2\pm 0.5{\rm(stat)}\pm 0.4{\rm(sys)}$
12C($\nu,\nu$)12C∗ | KARMEN | $10.5\pm 1.0{\rm(stat)}\pm 0.9{\rm(sys)}$
56Fe($\nu_{e},e^{-}$) 56Co | KARMEN | $256\pm 108{\rm(stat)}\pm 43{\rm(sys)}$
127I($\nu_{e},e^{-}$)127Xe | LSND | $284\pm 91{\rm(stat)}\pm 25{\rm(sys)}$
127I($\nu_{e},e^{-}$)X | COHERENT | $920^{+2.1}_{-1.8}$
natPb($\nu_{e},Xn$) | COHERENT | – –
Table 3: Flux-averaged cross sections measured at stopped pion facilities on
various nuclei. Experimental data gathered from the LAMPF Willis:1980pj ,
KARMEN KARMEN:1998xmo ; KARMEN:1991vkr ; Maschuw:1998qh ; Zeitnitz:1994kz ,
E225 Krakauer:1991rf , LSND LSND:2001fbw ; LSND:2002oco ; Distel:2002ch , and
COHERENT COHERENT:2023ffx ; COHERENT:2022eoh experiments. Table adapted from
the Ref. Formaggio:2012cpf .
The detection of a burst of tens of MeV neutrinos from a galactic core-
collapse supernova is one of the primary physics goals of the future DUNE
experiment DUNE:2020zfm ; DUNE:2023rtr , as stated in DUNE's TDR
DUNE:2020lwj ; DUNE:2020ypp ; DUNE:2020mra ; DUNE:2020txw : “Detect and
measure the $\nu_{e}$ flux from a core-collapse supernova within our galaxy,
should one occur during the lifetime of the DUNE experiment. Such a
measurement would provide a wealth of unique information about the early
stages of core collapse, and could even signal the birth of a black hole”.
Detecting a supernova will provide unique insight into the properties of
neutrinos, as well as into the astrophysics of core-collapse supernovae.
DUNE’s capabilities
of supernova neutrino detection in the relevant tens-of-MeV neutrino energy
range as well as the physics to be learned from a DUNE supernova burst
detection will be limited by the lack of knowledge of the inelastic neutrino-
argon cross section. The inelastic neutrino-argon cross sections in this
energy range have never been measured. In the absence of experimental data,
the uncertainties in the theoretical calculations are not quantified at all.
The theory predictions, in fact, differ by orders of magnitude, see e.g. Fig.
6 in Ref. DUNE:2023rtr . In order to reconstruct the energy of the incoming
neutrinos from a supernova, the energy of all final state particles needs to
be known. These will include nuclear de-excitation products such as
$\gamma$-rays and potential nuclear fragments (neutrons, protons, deuterons,
etc). In a recent analysis performed by the DUNE collaboration, Ref.
DUNE:2023rtr , reports that the total inelastic neutrino-argon cross section
needs to be known at about 5% (in the absence of any external constraints) for
a measurement of the integrated neutrino luminosity with less than 10% bias
with DUNE.
The well-understood stopped-pion neutrino spectrum is a near-ideal tens-of-MeV
neutrino source Scholberg:2012id which can provide a unique opportunity to
measure neutrino-argon cross sections in this energy regime. Inelastic
interactions of neutrinos with nuclei are still poorly understood: theory is
sparse and experiments have large error bars. The few existing measurements,
none at better than the 10% uncertainty level, are summarized in Table 3
Formaggio:2012cpf ; no measurement on the argon nucleus has been performed to
date. Because inelastic neutrino interactions carry large uncertainties, it
will be crucial to measure inelastic electron scattering cross sections at
energies below 50 MeV and use those data to calibrate theoretical models for
the neutrino scattering process. Theoretical understanding of these processes
is also relatively poor, due to the strong dependence of the interaction rates
on the specific initial- and final-state nuclear wavefunctions. The inelastic
neutrino-argon cross sections shown in Figs. 7, 8 and 9 have never been
measured before. CEvNS experiments at decay-at-rest sources, such as the
COHERENT COHERENT:2020iec and Coherent CAPTAIN-Mills CCM experiments, are
well suited to make those measurements. The technical challenge is that the
experiment has to have the dynamic range to detect keV-energy recoils (the
signal for CEvNS) and MeV-energy nuclear de-excitation and nuclear fragment
products (the signal for inelastic scattering) in the same detector. The
COHERENT experiment has recently demonstrated its capability of measuring
inelastic cross sections by performing two measurements,
127I($\nu_{e},e^{-}$)X and natPb($\nu_{e},Xn$)
COHERENT:2023ffx ; COHERENT:2022eoh .
## V Experimental Landscape
Experiment | Nuclear Target | Detector Technology | Mass (kg) | Distance from source (m) | Dates
---|---|---|---|---|---
COHERENT | CsI[Na] | Scintillating crystal | 14 | 19.6 | 2015-2019
(ORNL) | Pb, Fe | Liquid scintillator | 1,000 | 19.0 | 2015-
| NaI[Tl] | Scintillating crystal | 185 | 21.0 | 2016-
| LAr | Noble scintillator | 24 | 27.5 | 2017-
| D2O | Cherenkov | 600 | 22.0 | 2022-
| Ge | HPGe PPC | 18 | 21.0 | 2022-
| NaI[Tl] | Scintillating crystal | 3,388 | 24.0 | 2022-
CCM | LAr | Noble scintillator | 10,000 | 23.0 | 2019 -
(LANL) | | | | |
Table 4: Current CEvNS experiments at stopped-pion sources.
Several experimental programs have been or are being set up to detect CEvNS
and BSM signals in the near future using stopped-pion neutrino sources as well
as reactor sources. It all started with the COHERENT collaboration reporting
the first detection of the CEvNS process in 2017. The measurement was
performed with a 14.6-kg CsI[Na] detector over 308 live-days of exposure; the
collaboration identified nuclear recoil events from CEvNS viewed by a single
photomultiplier tube (PMT), well in excess of the expected background events
for this exposure. A likelihood analysis considering the signal and background
shapes in time and photoelectrons (PE) yielded a result of 134 $\pm$ 22 CEvNS
events, with the uncertainty being primarily statistical; the SM prediction
for this analysis is 178 $\pm$ 43 CEvNS events. The observed event rate was
consistent with the SM prediction within uncertainties COHERENT:2017ipa . This
led to a flurry of proposals and experiments worldwide with complementary
detection technologies and physics goals. In Table 4, we list currently
running CEvNS experiments at stopped pion sources. There are several proposed
experiments at existing and planned facilities that are not included in the
table but we discuss them below. For the sake of completeness, we also list
CEvNS experiments at reactors in Table 5. These include CONNIE CONNIE , MINER
MINER , $\nu$GEN vGEN , NUCLEUS NUCLEUS , RICOCHET RICOCHET , TEXONO TEXONO ,
NEON NEON and vIOLETA vIOLETA experiments. The current theme of reactor
experiments is the observation of neutrino-nucleus elastic scattering in the
kinematic regime where complete quantum-mechanical coherence is expected.
SNS at ORNL: The Spallation Neutron Source at the Oak Ridge National
Laboratory has the most ambitious CEvNS-based experimental program
Barbeau:2021exu ; Asaadi:2022ojm . The SNS is consistently running at 1 GeV
proton energy and 1.4 MW beam power. By 2024, after the next round of
upgrades, it will be running with 1.3 GeV proton energy and 2 MW beam power.
The SNS First Target Station (FTS) beamline consists of a linear $H^{-}$ ion
accelerator,
an accumulator ring, and a proton target. The proton target employs liquid
mercury contained inside a double-walled stainless steel vessel
Henderson:2014paa ; Haines:2014kna . The SNS generates 400-nanosecond bursts
of protons on target at 60 Hz frequency allowing for a highly effective
suppression of backgrounds and simultaneous measurement of neutrino signal and
backgrounds. A second target station (STS) with a solid tungsten target is
planned for the SNS. For this stage, the total beam power will be increased to
2.8 MW and the proton beam will be split between two targets with 45 Hz to the
first target and 15 Hz to the second, creating even more favorable conditions
to suppress steady-state backgrounds. The COHERENT collaboration continues to
pursue several additional detector technologies for CEvNS, spanning a range of
$N$ values, as well as detectors addressing additional physics goals.
Lujan at LANL: The Lujan Center at the Los Alamos National Laboratory is a
prolific source of neutrinos from decays of stopped pions and muons created by
an 800 MeV proton beam. The 800 MeV protons are delivered at a rate of 20 Hz
in a 280 ns triangular pulse from the LANSCE beamline and interact in a thick
tungsten target, copiously producing charged and neutral mesons. A 10-ton
liquid argon scintillation detector, Coherent CAPTAIN-Mills (CCM), is
currently operating. The CCM detector is an upright cylindrical cryostat 2.58
m in diameter and 2.25 m high. A ton-scale mass and a keV-range energy
threshold allow
CCM detector to possess leading sensitivity to potential dark-sector physics
signals CCM ; CCM:2021yzc ; CCM:2021lhc .
Experiment | Detector Technology | Location | Source
---|---|---|---
CONNIE | Si CCDs | Brazil | Reactor
CONUS | HPGe | Germany | Reactor
MINER | Ge/Si cryogenic | USA | Reactor
NUCLEUS | Cryogenic CaWO4, Al2O3 calorimeter array | Europe | Reactor
$\nu$GEN | Ge PPC | Russia | Reactor
RED-100 | LXe dual phase | Russia | Reactor
Ricochet | Ge, Zn | France | Reactor
TEXONO | p-PCGe | Taiwan | Reactor
NCC-1701 | p-PCGe | Germany | Reactor
Table 5: A list of reactor-based CEvNS experiments.
JSNS2 at JPARC: The Japan Spallation Neutron Source at J-PARC features a 1 MW
beam of 3 GeV protons incident on a mercury target, creating an intense
neutrino flux from stopped-pion and stopped-muon decays. The JSNS2 (J-PARC
Sterile Neutrino Search at J-PARC Spallation Neutron Source) experiment aims
to search for neutrino oscillations and to offer the ultimate test of the LSND
anomaly with a 17-ton fiducial-volume Gd-doped liquid scintillator detector; a
new detector is being planned to study not only CEvNS but also potential
low-mass dark-matter signals Ajimura:2017fld ; Ajimura:2020qni .
ESS: The European Spallation Source (ESS), sited in Sweden, will combine the
world’s most powerful superconducting proton linac with an advanced hydrogen
moderator, generating the most intense neutron beams for multi-disciplinary
science. It will also generate the largest pulsed neutrino flux suitable for
the detection of CEvNS. The ESS aims to achieve the power of 5 MW and proton
energy of 2 GeV. Several detector technologies sensitive to keV-energy nuclear
recoils are being considered; these include a cryogenic undoped CsI
scintillator array, silicon Charge Coupled Devices (CCDs), and high-pressure
gaseous xenon detectors Baxter:2019mcx .
PIP2-BD at FNAL: The Proton Improvement Project II (PIP-II) is the first phase
of a major transformation of the accelerator complex underway at Fermilab to
prepare the lab to host the Deep Underground Neutrino Experiment (DUNE). The
completion of the PIP-II superconducting LINAC at Fermilab as a proton driver
for DUNE/LBNF in the late 2020s creates an attractive opportunity to build
such a dedicated beam dump facility at Fermilab; this will require the
addition of an accumulator ring to bunch the PIP-II beam current into short
proton pulses. A unique feature of this Fermilab beam dump facility is that it
can be optimized from the ground up for HEP. Thus, relative to spallation
neutron facilities dedicated to neutron physics and optimized for neutron
production operating at a similar proton beam power, a HEP-dedicated beam dump
facility would allow for better sensitivity to various physics goals. The
facility could also accommodate multiple, 100-ton-scale HEP experiments
located at different distances from the beam dump and at different angles with
respect to the beam Toups:2022yxs .
## VI Implications for the Standard Model Physics
Since the uncertainty on the SM predicted CEvNS cross sections is relatively
small, CEvNS cross section measurement allows testing of SM weak physics. The
experiments measure the number of events $N$ generated by neutrinos of a given
flavor $\alpha$ and collected by a detector with a finite threshold:
$\frac{dN}{dT}=N_{t}\sum_{\alpha=\nu_{e},\nu_{\mu},\bar{\nu}_{\mu}}\int^{E_{\nu}^{\text{max}}}_{E_{\nu}^{\text{min}}}\Phi_{\alpha}(E_{\nu})~{}\frac{d\sigma}{dT}~{}dE_{\nu}$
(50)
where $N_{t}$ is a normalization constant that depends on the number of
protons on target, the neutrino yield per proton, the mass of the detector,
detection efficiency and the distance of the detector from the source. Any
deviation from the SM predicted event rate of Eq. (50), either with a change
in the total event rate or with a change in the shape of the recoil spectrum,
could indicate new contributions to the interaction cross-section. More
generally, what can be probed is the weak nuclear form factor of a nucleus and
weak mixing angle (see, Eq. (17)). There are many important results that can
be extracted from the CEvNS measurements, recent work has considered CEvNS as
a percent-level probe of SM physics Scholberg:2005qs ; Miranda:2019skf ;
Cadeddu:2019eta ; Canas:2018rng ; Papoulias:2019xaw ; Baxter:2019mcx ;
Huang:2019ene ; Bernabeu:2002nw ; Bernabeu:2002pd ; Papavassiliou:2005cs ;
Cadeddu:2018dux . Note that for a given stopped pion production facility,
experiments have control over choosing the baseline (the distance from the
source to the detector), the angular placement of the detector with respect to
the beam axis, and the nuclear target employed in the detector. These can
be exploited to increase the sensitivity of the primary physics goal of the
experiment. An additional advantage of the stopped pion source is that one
could exploit both timing and energy data. The timing profile, See Fig. 2,
allows the separation of the prompt neutrino flavor from the delayed neutrino
flavor.
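To make the evaluation of Eq. (50) concrete, the following sketch folds the tree-level SM CEvNS cross section with the standard normalized stopped-pion spectra: a monoenergetic 29.8 MeV prompt $\nu_{\mu}$ line plus the delayed Michel-shaped $\nu_{e}$ and $\bar{\nu}_{\mu}$ spectra. The CsI-like target numbers are assumptions for illustration, and $N_{t}$ is set to 1, so only the relative spectrum is meaningful; form factors and efficiencies are omitted.

```python
import numpy as np

# Schematic evaluation of Eq. (50): tree-level SM CEvNS recoil spectrum for
# a CsI-like target at a stopped-pion source (form factor and efficiency
# omitted; N_t set to 1, so the output is a relative spectrum only).
GF = 1.1663787e-11      # Fermi constant [MeV^-2]
HBARC2 = 3.8938e-22     # (hbar c)^2 [MeV^2 cm^2]
SW2 = 0.2386            # low-energy weak mixing angle (illustrative value)
MMU = 105.658           # muon mass [MeV]
EPROMPT = 29.8          # prompt nu_mu energy, (m_pi^2 - m_mu^2)/(2 m_pi)

def dsigma_dT(Enu, T, Z=55, N=78, M=123.8e3):
    """SM CEvNS differential cross section [cm^2/MeV]."""
    qw = Z * (0.5 - 2.0 * SW2) + N * (-0.5)            # weak nuclear charge
    x = GF**2 * M / (4.0 * np.pi) * qw**2 * (1.0 - M * T / (2.0 * Enu**2))
    return np.clip(x, 0.0, None) * HBARC2

# Normalized delayed (Michel) spectra from mu+ decay at rest, 0 < E < m_mu/2.
def phi_nue(E):     return (192.0 / MMU) * (E / MMU)**2 * (0.5 - E / MMU)
def phi_nubarmu(E): return (64.0 / MMU) * (E / MMU)**2 * (0.75 - E / MMU)

def dN_dT(T, M=123.8e3, Nt=1.0):
    """Eq. (50): flavor sum of flux times cross section, integrated in E."""
    Emin = np.sqrt(M * T / 2.0)                        # kinematic threshold
    delayed = 0.0
    if Emin < MMU / 2.0:
        E = np.linspace(Emin, MMU / 2.0, 400)
        delayed = np.sum((phi_nue(E) + phi_nubarmu(E)) * dsigma_dT(E, T)) \
                  * (E[1] - E[0])
    prompt = dsigma_dT(EPROMPT, T) if Emin < EPROMPT else 0.0
    return Nt * (prompt + delayed)

for T_keV in (5, 10, 20, 30):
    print(f"T = {T_keV:2d} keV : dN/dT ~ {dN_dT(T_keV * 1e-3):.3e}")
```

Note how the prompt line stops contributing above its kinematic endpoint $T_{max}=2E_{\nu}^{2}/M$, which is one handle for separating the flavors even without timing information.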
### VI.1 Weak Nuclear Form Factor
The modest loss of coherence at stopped-pion energies can be valuable for the
understanding of the nuclear structure, given that one can probe the form
factor for a given nucleus as a function of $Q$. A precise measurement of the
CEvNS cross section can be used to extract the weak form factor via Eq. (17),
provided one measures the recoil spectrum shape. The observed recoil energy
$T$ can be used to determine $Q$; therefore, the observed CEvNS recoil energy
spectrum allows one to map the effect of the weak form factor of the nucleus
at low momentum transfer.
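The sketch below illustrates this mapping, using the non-relativistic relation $Q=\sqrt{2MT}$ together with a Helm parametrization as a simple stand-in for the weak form factor; the 40Ar parameters are illustrative assumptions.

```python
import numpy as np

# Mapping the observed recoil energy T to the momentum transfer Q and to the
# Helm form factor it probes. The 40Ar numbers are illustrative assumptions.
HBARC = 197.327   # MeV fm
M_AR = 37.2e3     # approximate 40Ar mass [MeV]

def Q_of_T(T, M=M_AR):
    """Momentum transfer [MeV] for nuclear recoil energy T [MeV] (T << M)."""
    return np.sqrt(2.0 * M * T)

def helm(Q, A=40, s=0.9):
    """Helm form factor with R = 1.2 A^(1/3) fm and surface thickness s [fm]."""
    R0 = np.sqrt((1.2 * A ** (1.0 / 3.0)) ** 2 - 5.0 * s**2)
    x = Q * R0 / HBARC
    j1 = np.sin(x) / x**2 - np.cos(x) / x          # spherical Bessel j1(x)
    return 3.0 * j1 / x * np.exp(-((Q * s / HBARC) ** 2) / 2.0)

for T_keV in (5, 20, 50, 100):
    Q = Q_of_T(T_keV * 1e-3)
    print(f"T = {T_keV:3d} keV -> Q = {Q:5.1f} MeV, F_Helm = {helm(Q):.3f}")
```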
As discussed in Sec. III, the weak nuclear charge is strongly dominated by its
neutron content. The observation of CEvNS can, therefore, further provide
important nuclear structure information through the determination of the weak
form factor, which constrains the neutron density distribution, at least at
low momentum transfers where the process remains coherent
AristizabalSierra:2019zmy ; Payne:2019wvy ; Hoferichter:2020osn ; Yang:2019pbx
; VanDessel:2020epd ; Patton:2012jr ; Cadeddu:2017etk ; Co:2020gwl ;
Ciuffoli:2018qem ; Papoulias:2019lfi . Furthermore, since proton density
distributions are generally well understood, a measure of the mean radius of
the neutron distribution (the “neutron radius”) enables the determination of
the “neutron skin” of a nucleus — the difference between the larger neutron
radius and the proton radius. These measurements complement PVES experiments
not only due to additional data but also due to different energy ranges and
nuclear targets, which could be used to calibrate nuclear-structure
calculations. Furthermore, improved measurements of the neutron skin would
have important consequences for the equation of state of neutron-rich
matter, which plays an essential role in understanding the structure and
evolution of neutron stars Fattoyev:2017jql ; Reed:2021nqk ; Lattimer:2012xj ;
Hebeler:2013nza ; Hagen:2015yea . With more ambitious precision measurements,
axial-vector contributions to the weak nuclear response can also be
determined, in principle, for nuclei with non-zero spin.
However, arguably one of the most intricate aspects of nuclear-structure input
concerns searches for physics beyond the SM. In principle, CEvNS cross
sections provide constraints on the combination of nuclear responses and the
BSM effects. Therefore, external independent experimental information on the
neutron responses, such as from PVES experiments, would be vital. In fact,
in order to derive BSM constraints beyond the level at which current nuclear-
structure calculations constrain the neutron distribution, a combined analysis
of multiple targets and momentum transfers is required to distinguish between
nuclear structure and potential BSM contributions Abdullah:2022zue .
### VI.2 Weak Mixing Angle
In quantum field theory, the weak mixing angle, $\theta_{W}$, depends on the
energy scale at which it is measured. There exists an experimental anomaly
with respect to the SM predictions for neutrino-nucleon scattering at the
$Q\sim$ GeV/c scale NuTeV:2001whx . Since CEvNS cross section and, therefore,
the event rate depends on the weak mixing angle, Eq. (17) and Eq. (16), the
measured CEvNS event counts can be used to infer the weak mixing angle. A
change in $\theta_{W}$ thus rescales the expected event rate.
Furthermore, measurements on multiple nuclear targets will further enhance the
sensitivity for extracting the weak mixing angle from the weak nuclear charge
Scholberg:2005qs ; Miranda:2019skf ; Canas:2018rng ; Papoulias:2019xaw ;
Baxter:2019mcx ; Huang:2019ene . As CEvNS measurements become more precise in
the near future, one could extract the weak mixing angle at $Q$ values of a
few tens of MeV/c, which would be competitive with other methods for
determining $\theta_{W}$ at low $Q$ from parity-violating electron-proton
scattering Qweak:2018tjf , Moller scattering MOLLER:2014iki and atomic parity
violation Roberts:2014bka .
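A back-of-the-envelope sketch of this sensitivity, using the tree-level weak charge of Eq. (16) and ignoring form factor effects (the targets and the size of the shift are illustrative):

```python
# How a shift in sin^2(theta_W) rescales the CEvNS rate through the weak
# nuclear charge of Eq. (16): rate ~ Q_W^2 with Q_W = Z(1/2 - 2 sW2) - N/2.
def qw(Z, N, sw2):
    return Z * (0.5 - 2.0 * sw2) - 0.5 * N

SW2 = 0.2386  # low-energy value, illustrative
for Z, N, name in [(55, 78, "Cs"), (53, 74, "I"), (18, 22, "Ar")]:
    r0 = qw(Z, N, SW2) ** 2
    r1 = qw(Z, N, 1.01 * SW2) ** 2   # 1% upward shift in sin^2(theta_W)
    print(f"{name}: 1% shift in sW2 -> {100 * (r1 / r0 - 1):+.2f}% rate change")
```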
## VII Implications for Beyond the Standard Model Physics
CEvNS, being a low-energy process, provides a natural window to study light,
weakly-coupled, beyond the standard model physics in the neutrino sector.
Several extensions of the SM can be explored at low energy Barranco:2005yy ;
Scholberg:2005qs ; Barranco:2007tz ; Lindner:2016wff ; Coloma:2017ncl ;
Farzan:2017xzy ; AristizabalSierra:2018eqm ; Brdar:2018qqj ; Abdullah:2018ykz
; AristizabalSierra:2019zmy ; Miranda:2019skf ; Bell:2019egg ;
AristizabalSierra:2019ufd ; Cadeddu:2019eta ; Coloma:2019mbs ; Canas:2019fjw ;
Dutta:2019eml ; Denton:2020hop ; Skiba:2020msb ; Cadeddu:2020nbr ;
Abdullah:2020iiv ; Papoulias:2019xaw ; Baxter:2019mcx ;
AristizabalSierra:2017joc ; Giunti:2019xpr ; Liao:2017uzy ; Dent:2017mpr ;
Dent:2019ueq ; Miranda:2020syh ; Suliga:2020jfa ; Dutta:2020che ;
Dutta:2019nbn . Since CEvNS cross section is orders of magnitude higher at low
energies, the BSM searches can be done with relatively small detectors instead
of the typical large neutrino detectors.
Any significant deviation between the SM theory prediction and the
experimental event rate of Eq. (50) will indicate the presence of new physics
and can be expressed through conventional parametrizations of the relevant
low-energy properties. The deviation from the SM predictions can either be
reflected as a
change in the total event rate or a change in the shape of the recoil
spectrum. For a given stopped pion production facility, experiments have
control over choosing the baseline (the distance from the source to the
detector), the angular placement of the detector with respect to the beam
axis, and the nuclear target employed in the detector. These can be exploited
to increase the sensitivity to a particular BSM scenario. An additional
advantage of the stopped pion source is that one could exploit the timing
structure of the neutrino background, see Fig. 2, for a given BSM signal.
### VII.1 Non-standard Interactions of Neutrinos
Figure 10: A diagrammatic illustration of the neutral current non-standard
neutrino interactions process where $f$ refers to SM fermions.
In the context of neutrino physics, the term Non-Standard Interactions (NSI)
usually refers to the inclusion of four-fermion operators leading to
modifications of the Wilson coefficients already present in the SM
Farzan:2017xzy ; Wolfenstein:1977ue ; Proceedings:2019qno . NSIs appear in
several appealing SM extensions and provide a general effective field theory
(EFT) framework to parameterize new physics in the neutrino sector. NSIs can
be charged-current or neutral-current interactions with the matter particles
$e$, $u$, and/or $d$. Charged-current NSI modifies both the production and
detection of neutrinos and also induces charged-lepton flavor violation, while
NC NSI does not modify production or detection processes in which charged
leptons are involved.
In a model-independent approach, a useful parametrization of the possible
effects of new physics at low energies is through the addition of higher-
dimensional operators to the SM Lagrangian that respect the SM gauge group.
The allowed set of operators includes four-fermion operators affecting
neutrino production, propagation, and detection processes. For example,
operators of the form
$\mathcal{L}_{NSI}^{CC}~{}=~{}2\sqrt{2}G_{F}~{}\varepsilon_{\alpha\beta}^{ff^{\prime},P}~{}\bar{\nu}_{\alpha}\gamma^{\mu}(1-\gamma_{5})l_{\beta}~{}\bar{f^{\prime}}\gamma_{\mu}f$
(51)
would induce non-standard charged-current (CC) production and detection
processes for neutrinos of flavor $\alpha$, while operators such as
$\mathcal{L}_{NSI}^{NC}~{}=~{}-2\sqrt{2}G_{F}~{}\varepsilon_{\alpha\beta}^{fP}~{}\bar{\nu}_{\alpha}\gamma^{\mu}(1-\gamma_{5})\nu_{\beta}~{}\bar{f}\gamma_{\mu}f$
(52)
would lead to flavor-changing neutral-current (NC) interactions of neutrinos
with other fermions (if $\alpha\neq\beta$), or to a modified NC interaction
rate with respect to the SM expectation (if $\alpha=\beta$). Here, $f$ and
$f^{\prime}$ refer to SM fermions. The parameters $\varepsilon$ describe the
size of non-standard interactions relative to standard charged or neutral
current weak interactions.
In the SM, the weak charge of a nucleus only depends on the SM vector
couplings to protons and neutrons, Eq. (16), and is independent of the
neutrino flavor. In the presence of NC NSI, this effective charge gets
modified by the new operators introduced as
$\displaystyle Q^{2}_{W,NSI}$ $\displaystyle=$
$\displaystyle[(g_{p}^{V}+2\varepsilon_{\alpha\alpha}^{uV}+\varepsilon_{\alpha\alpha}^{dV})Z+(g_{n}^{V}+\varepsilon_{\alpha\alpha}^{uV}+2\varepsilon_{\alpha\alpha}^{dV})N]^{2}$
(53)
$\displaystyle+\sum_{\beta\neq\alpha}[(2\varepsilon_{\alpha\beta}^{uV}+\varepsilon_{\alpha\beta}^{dV})Z+(\varepsilon_{\alpha\beta}^{uV}+2\varepsilon_{\alpha\beta}^{dV})N]^{2}$
Any deviation from the SM predicted rate (beyond the form factor uncertainty)
signals non-standard interactions of neutrinos. Vector couplings are
characterized by the spin-independent combination while axial couplings are
characterized by the orthogonal spin-dependent combination. Since typically in
the CEvNS process, the axial contribution is negligible in comparison to the
vector contribution (due to spin cancellation), we assume that spin-dependent
axial NSI contributions are small. The effect of non-zero values of epsilons,
which can be either positive or negative, can be either an enhancement or
suppression of the CEvNS rate. Of course, some combinations of NSI parameter
values for a given $N$ and $Z$ can result in the SM CEvNS rate. Therefore, a
combination of CEvNS measurements on targets with different $N$ and $Z$ values
can break any such accidental degeneracies (as well as cancel flux-related
uncertainties). A stopped-pion source contains both electron and muon flavor;
hence a CEvNS measurement gives direct access to all coupling coefficients
except $\varepsilon_{\tau\tau}$. Furthermore, because at the SNS neutrino
flavors can be separated by timing, see Fig. 2, one can in principle probe
electron and muon NSIs separately Coloma:2017ncl ; Liao:2017uzy ; Dent:2017mpr
.
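The following sketch evaluates the flavor-diagonal part of Eq. (53) for a CsI-like target, scanning a single up-quark coupling to exhibit the accidental degeneracy in which a large non-zero NSI coupling reproduces the SM rate; all values are illustrative.

```python
import numpy as np

# Eq. (53) restricted to a flavor-diagonal up-quark coupling on a CsI-like
# target: the scan shows the accidental degeneracy near eps ~ +0.4 where a
# large non-zero NSI coupling reproduces the SM rate. Values illustrative.
SW2 = 0.2386
gpV, gnV = 0.5 - 2.0 * SW2, -0.5

def qw2_nsi(Z, N, e_uV=0.0, e_dV=0.0):
    """Flavor-diagonal part of Eq. (53)."""
    return (Z * (gpV + 2 * e_uV + e_dV) + N * (gnV + e_uV + 2 * e_dV)) ** 2

Z, N = 55, 78
sm = qw2_nsi(Z, N)
for eps in np.linspace(-0.1, 0.5, 7):
    print(f"eps_ee^uV = {eps:+.2f} -> rate ratio = {qw2_nsi(Z, N, eps) / sm:.2f}")
```

A second target with different $N$ and $Z$ shifts the degenerate coupling, which is exactly why the multi-target program described above can break such degeneracies.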
The flavor-changing NSI in neutrino oscillation experiments leads to the
appearance of new degeneracies involving standard oscillation parameters and
NSI operators. These can affect DUNE’s capability of extracting the CP-
violation phase. Adding data from CEvNS experiments to global oscillation fits
breaks the degeneracies involving flavor-diagonal NSI, since CEvNS experiments
can directly measure the neutrino-nucleus interactions for both electron and
muon neutrinos. Thus, constraining NSI parameters with CEvNS experiments can
significantly improve the extraction of the CP-violation phase in DUNE
Coloma:2016gei ; Coloma:2017egw .
### VII.2 Neutrino Electromagnetic Properties
In the SM, neutrino electromagnetic interactions are negligible. BSM processes
can induce significant electromagnetic effects, which can be probed in CEvNS
experiments. The fact that neutrinos oscillate and therefore have mass
provides the best motivation for the existence of non-trivial neutrino
electromagnetic properties such as a neutrino magnetic moment or a neutrino
charge radius.
The SM, minimally extended to allow massive neutrinos, predicts a very small
neutrino magnetic moment ParticleDataGroup:2020ssz
$\mu_{\nu}=3.2\times 10^{-19}\mu_{B}\left(\frac{m_{\nu}}{{\rm eV}}\right).$
(54)
Within the minimum neutrino mass range of $10^{-4}$ to $1$ eV, the SM electron
neutrino magnetic moment ranges over [$3\times 10^{-21},3\times 10^{-19}$]
$\mu_{B}$ for the Dirac case and [$5\times 10^{-25},8\times 10^{-23}$]
$\mu_{B}$ for the Majorana case Balantekin:2013sda . The differential cross
section in the presence of a neutrino magnetic moment adds incoherently to the
Standard Model cross section due to the required spin-flip:
$\left(\frac{d\sigma}{dT}\right)_{\mathrm{tot}}=\left(\frac{d\sigma}{dT}\right)_{\mathrm{SM}}+\left(\frac{d\sigma}{dT}\right)_{\mathrm{EM}}$
(55)
where the EM contribution has a characteristic $1/T$ dependence, while its
strength is controlled by the size of the neutrino magnetic moment
Vogel:1989iv
$\left(\frac{d\sigma}{dT}\right)_{\mathrm{EM}}=\frac{\pi\alpha^{2}\mu_{\nu}^{2}\,Z^{2}}{m_{e}^{2}}\left(\frac{1-T/E_{\nu}}{T}+\frac{T}{4E_{\nu}^{2}}\right)F_{\text{ch}}^{2}(q^{2})$ (56)
where $m_{e}$ is the mass of the electron and $\alpha$ is the fine structure
constant. $F_{\text{ch}}$, normalized as $F_{\text{ch}}(0)=1$, is the charge
form factor of the nucleus that is known with high precision for many nuclei.
In the presence of a neutrino magnetic moment, the neutrino interaction cross
section is modified, and the low-energy cross section is most sensitive to
these small changes. Any measured magnetic moment larger than the SM
expectation of Eq. (54) would be a signature of BSM physics. Larger neutrino
magnetic moments would also hint that neutrinos are Majorana particles
Bell:2006wi . The current best limits on the neutrino magnetic moment have
been set by solar and astrophysical neutrinos ParticleDataGroup:2020ssz ;
Borexino:2017fbd . The strongest magnetic moment limits on $\nu_{\mu}$ are
from the LSND experiment LSND:2001akn ; Kosmas:2015sqa . The experimental
signature would be an enhancement of the rate at low recoil energy, both for
scattering on electrons and on nuclei Vogel:1989iv ; Kosmas:2015sqa ;
Dodd:1991ni . Therefore, to measure the neutrino magnetic moment, the detector
needs to have a low energy threshold and good energy resolution.
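A minimal numerical comparison of Eq. (56) with the SM rate, assuming a 40Ar target, $E_{\nu}=30$ MeV, $\mu_{\nu}=10^{-10}\,\mu_{B}$ and a charge form factor of unity at these low momentum transfers (all inputs illustrative), makes the low-$T$ enhancement explicit:

```python
import numpy as np

# Ratio of the magnetic-moment term, Eq. (56), to the SM CEvNS cross section,
# showing the 1/T enhancement at low recoil energy. 40Ar target, E_nu = 30 MeV
# and mu_nu = 1e-10 mu_B assumed; F_ch = 1 at these low momentum transfers.
GF, ALPHA, ME = 1.1663787e-11, 1.0 / 137.036, 0.511   # MeV-based units
Z, N, M, ENU, MU = 18, 22, 37.2e3, 30.0, 1e-10        # MU in units of mu_B

def dsig_sm(T):
    qw = Z * (0.5 - 2.0 * 0.2386) + N * (-0.5)
    return GF**2 * M / (4.0 * np.pi) * qw**2 * (1.0 - M * T / (2.0 * ENU**2))

def dsig_em(T):
    return (np.pi * ALPHA**2 * MU**2 * Z**2 / ME**2
            * ((1.0 - T / ENU) / T + T / (4.0 * ENU**2)))

for T_keV in (0.5, 1.0, 5.0, 20.0):
    T = T_keV * 1e-3                  # natural units cancel in the ratio
    print(f"T = {T_keV:4.1f} keV : EM/SM = {dsig_em(T) / dsig_sm(T):.4f}")
```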
The impact of the neutrino charge radius, being a helicity-preserving
quantity, is taken as a shift on the weak mixing angle according to
$\sin^{2}\theta_{W}\rightarrow\sin^{2}\theta_{W}+\frac{\sqrt{2}\pi\alpha}{3G_{F}}\langle
r_{\nu_{\alpha}}^{2}\rangle\,.$ (57)
The neutrino charge radius has a small flavor-dependent effect on the CEvNS
cross section. It can be measured in CEvNS experiments at stopped-pion
sources, where the muon- and electron-neutrino CEvNS event rates can be
separated using the timing structure, see Fig. 2. The effect is expected to be
at the percent level Papavassiliou:2005cs ; Cadeddu:2018dux .
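For orientation, the prefactor of Eq. (57) can be evaluated directly; with an assumed SM-scale charge radius of $\sim 4\times 10^{-33}$ cm$^{2}$ for $\nu_{e}$ (an illustrative value, not a measurement), the induced shift of $\sin^{2}\theta_{W}$ indeed comes out at the percent level:

```python
import numpy as np

# Evaluating the prefactor of Eq. (57): G_F is converted to cm^2 via
# hbar*c = 1.9733e-14 GeV cm. The charge radius below is an assumed,
# SM-scale illustrative value for nu_e, not a measured quantity.
ALPHA = 1.0 / 137.036
GF_CM2 = 1.1663787e-5 * (1.9733e-14) ** 2   # G_F [cm^2]
r2 = 4.0e-33                                # <r_nu^2> [cm^2], assumed

shift = np.sqrt(2.0) * np.pi * ALPHA / (3.0 * GF_CM2) * r2
print(f"delta sin^2(theta_W) = {shift:.4f}")  # ~0.01, i.e. percent level
```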
### VII.3 Sterile Neutrino Oscillations
One possible extension of the SM is the existence of sterile neutrinos, new
neutrino states that do not interact via the SM weak interactions. These new
gauge singlet fermions can be included as a minimal extension of the SM. There
are several experimental anomalies, called short-baseline neutrino anomalies,
that hint at the existence of sterile neutrinos. There are a number of
experiments underway testing short-baseline anomalies. CEvNS is also an
excellent tool for the search for sterile neutrino oscillations by setting up
multiple identical detectors at different baselines from the neutrino
production point. Flavor-blind neutral currents can be used to probe the
disappearance of active neutrinos. The signal would be both a distortion of
recoil spectra and an overall rate suppression.
Oscillation probabilities are simplest for monoenergetic sources, and decay at
rest provides a high-intensity monoenergetic source of $\nu_{\mu}$s, thus a
natural candidate for carrying out high-statistics searches for
$\nu_{\mu}$ disappearance. In this context, CEvNS has been recognized as being
advantageous due to its relatively large cross section Anderson:2012pn ;
Formaggio:2011jt ; Kosmas:2017zbh . Working at lower energies allows for
shorter baselines with equivalent L/E and consequently higher fluxes as
compared to, e.g., decay-in-flight experiments. In particular, the
sensitivity to sterile neutrinos is maximized when deploying multiple
detectors at different distance baselines in the range $\sim 20$–$40$ m
Anderson:2012pn . This configuration can probe parameter space that is
consistent with the $\sim$ eV mass-scale hinted at by LSND and MiniBooNE,
thereby providing an independent test of sterile neutrino parameter space
Anderson:2012pn ; Blanco:2019vyp .
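A minimal sketch of the corresponding short-baseline disappearance probability for the monoenergetic 29.8 MeV prompt $\nu_{\mu}$ line, at an illustrative parameter point near the hinted eV mass scale:

```python
import numpy as np

# Short-baseline nu_mu disappearance probability for the monoenergetic
# 29.8 MeV prompt line: P = sin^2(2 theta) sin^2(1.267 dm2 L / E), with
# L in meters and E in MeV. The parameter point is illustrative only.
def p_dis(L_m, E_mev=29.8, dm2_ev2=1.0, sin2_2theta=0.1):
    return sin2_2theta * np.sin(1.267 * dm2_ev2 * L_m / E_mev) ** 2

for L in (20.0, 25.0, 30.0, 35.0, 40.0):   # baselines in meters
    print(f"L = {L:4.1f} m : P(nu_mu disappearance) = {p_dis(L):.4f}")
```

The oscillation maximum for these parameters falls near $L\sim 37$ m, inside the $\sim 20$–$40$ m range quoted above, which is what makes multiple detectors at staggered baselines so effective.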
### VII.4 Accelerator Produced Light Dark Matter
Figure 11: A diagrammatic representation of the Dark Matter particle, $\chi$,
scattering off a nucleus, $N$, mediated via a dark photon $A^{\prime}$.
The stopped-pion facilities produce a copious amount of neutral and charged
mesons as well as photons. These can decay into dark sector particles that can
either scatter off or decay in the detector material via kinetic mixing with
the SM particles Dutta:2019nbn ; deNiverville:2015mwa ; deNiverville:2016rqh .
Dark matter candidate particle masses can be probed in the few to few hundred
MeV range at these facilities. For example, within a dark photon model, the
dark photon $A^{\prime}$ undergoes kinetic mixing with the SM photon and can
be described by the Lagrangian:
$\mathcal{L}\supset g_{D}A^{\prime}_{\mu}\bar{\chi}\gamma^{\mu}\chi+e\epsilon
Q_{q}A^{\prime}_{\mu}\bar{q}\gamma^{\mu}q$ (58)
where $g_{D}$ is the dark coupling constant, $\epsilon$ is the mixing
parameter, and $Q_{q}$ is the quark electric charge. The dark photon can be
produced
at the stopped pion facilities via the processes of pion capture, pion decay,
and the photons emerging from the cascades:
$\pi^{-}+p\rightarrow n+A^{\prime}$ (59)
$\pi^{+}+n\rightarrow p+A^{\prime}$ (60)
$\pi^{0}\rightarrow\gamma+A^{\prime}$ (61)
The dark photons then decay into dark matter candidates:
$A^{\prime}\rightarrow\chi\bar{\chi}$. The signature in the detector is either
elastic scattering with characteristic keV-energy nuclear recoil signatures or
inelastic scattering with characteristic MeV-energy nuclear de-excitation
gammas Dutta:2022tav ; Dutta:2023fij . The process is schematically shown in
Fig. 11.
The time structure of the beam is especially valuable for separating SM
neutrino scattering recoils from the DM-induced recoils. Since the DM signal
is expected to be prompt, the delayed neutrino signal can provide a powerful
constraint on the neutrino background in the prompt window. The DM signal will
also have a distinctive dependence on the direction with respect to the
source, whereas CEvNS signals will be isotropic. For dark sector searches,
event rate scaling, recoil spectrum shape, timing, and direction with respect
to the source are all helpful.
Recent results from the COHERENT and Coherent CAPTAIN-Mills experiments have
demonstrated how detectors capable of measuring coherent elastic neutrino-
nucleus scattering (CEvNS) can also be used to set limits on vector portal and
leptophobic DM at proton beam dumps. They also provide excellent opportunities
to search for axion-like particles (ALPs) Dent:2019ueq .
## VIII Summary
Neutrinos continue to provide a testing ground for the structure of the
standard model and hints toward the physics beyond the standard model.
Neutrinos of energies spanning over several orders of magnitude have been
detected via various mechanisms ranging from inverse beta decay to scattering
off quarks, nucleons, and nuclei. At MeV scales, there has been one elusive
process, until a few years ago, known as coherent elastic neutrino-nucleus
scattering that was theoretically predicted for over five decades ago but was
never observed experimentally. The recent experimental observation of CEvNS by
the COHERENT collaboration at a stopped pion neutrino source has inspired
physicists across many subfields. This new way of detecting neutrinos has
wider implications for border communities that span nuclear physics, particle
physics, astrophysics, and beyond. Leveraging orders of magnitude higher CEvNS
cross section, new physics can be searched with relatively small detectors.
CEvNS, being a low-energy process, provides a natural window to study light,
weakly-coupled, new physics in the neutrino sector.
Neutrinos from stopped-pion sources cover energies in the tens of MeV and are
almost optimal for studying CEvNS, finding a sweet spot where the CEvNS rate
is high enough and recoil energies are more easily detectable above the
threshold. So far, CEvNS has been observed only at decay-at-rest sources. In
addition, the pulsed time structure of the beam source provides a strong
handle for suppressing the background for new physics searches. Several
worldwide experimental programs have been or are being set up to detect CEvNS
and BSM signals in the near future at stopped-pion neutrino sources (as well
as with reactor sources, where the CEvNS process is yet to be detected) with
complementary detection technologies and physics goals, making this an
exciting emerging avenue.
Disentangling new physics signals in these experiments requires a precise
understanding of the CEvNS SM scattering rate. At tree level, the theoretical
uncertainty on the CEvNS cross section is driven by the uncertainty on the
weak form factor of the nucleus. The charge density of a nucleus is strongly
dominated by protons and has been extensively studied with impressive
precision in elastic electron scattering experiments, while the neutron
density, to which CEvNS is most sensitive, drives the overall uncertainty on
the CEvNS rate. For non-zero spin nuclei, the axial-vector part adds an
additional contribution that is often not included in CEvNS estimates. The
CEvNS process also receives radiative corrections of a few percent from
electrons and muons running in loops, which introduce a non-trivial dependence
on the momentum transfer owing to their relatively light masses.
Parity-violating electron scattering experiments offer complementary input and
provide a precise approach to experimentally probing weak form factors and
neutron distributions. The choice of nuclear targets in PVES and CEvNS
experiments has so far not been the same, however, since the two programs are
driven by different physics motivations and have different technical needs.
CEvNS experiments at stopped pion sources are also sensitive to tens of MeV
inelastic CC and NC neutrino-nucleus scattering processes. These processes
have implications for supernova detection in future neutrino experiments. The
interaction cross sections for these processes lack the $N^{2}$ enhancement
associated with CEvNS and, therefore, tend to be at least one order of
magnitude smaller than the CEvNS rate. The detectable final-state particles of
these inelastic scattering processes have typical energies of the same order
as the incident neutrino energies. The experimental requirements for CEvNS and
inelastic signals are quite different: larger masses are needed for
inelastics, as well as the dynamic range to record MeV-scale energy
depositions, while very low thresholds are not required. DUNE’s capabilities
of supernova neutrino detection in the relevant tens-of-MeV neutrino energy
range as well as the physics to be learned from a DUNE supernova burst
detection will be limited by the lack of knowledge of the inelastic neutrino-
argon cross section. The well-understood stopped-pion neutrino spectrum is a
near-ideal tens of MeV neutrino source which provides a unique opportunity to
study inelastic neutrino-nucleus cross sections at tens of MeVs.
Since the uncertainty on the SM predicted CEvNS cross sections is relatively
small, CEvNS cross section measurements allow testing of SM weak physics and
probing of new physics signals. For a given stopped pion production facility,
experiments can in principle choose the baseline, the angular placement of the
detector with respect to the beam axis, and the nuclear target employed in the
detector to optimize the sensitivity of the primary physics goal of the
experiment. An additional advantage of the stopped pion source is that one
could exploit both timing and energy data. Any deviation from the SM predicted
event rate, either with a change in the total event rate or with a change in
the shape of the recoil spectrum, could indicate new contributions to the
interaction cross-section. In particular, the weak nuclear form factor of a
nucleus and weak mixing angle can be probed. CEvNS, being a low-energy
process, provides a natural window to study light, weakly-coupled, beyond the
standard model physics in the neutrino sector. Several extensions of the SM
can be explored at low energy. Since the CEvNS cross section is orders of
magnitude higher at low energies, BSM searches can be done with relatively
small detectors. In particular, NSIs, neutrino electromagnetic properties in
terms of the neutrino magnetic moment and neutrino charge radius, and sterile
neutrinos can be studied. Stopped-pion facilities are also a copious source of
neutral and charged mesons as well as photons, which allows probing several
dark-sector physics scenarios such as vector portal models, leptophobic dark
matter, and axion-like particle searches.
## Acknowledgements
V.P. thanks Oleksandr Tomalak, Pedro Machado and Ryan Plestid for discussions
on Ref. Tomalak:2020zfh ; Nils Van Dessel, Heather Ray and Natalie Jachowicz
for discussion on Ref. VanDessel:2020epd ; Bhaskar Dutta, Wei-Chih Huang and
Jayden Newstead for discussion on Ref. Dutta:2022tav ; colleagues from the
Coherent CAPTAIN-Mills experiment for various discussions on the experimental
scope of the CEvNS experiments - all of which have motivated the content of
this review. This manuscript has been authored by Fermi Research Alliance, LLC
under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy,
Office of Science, Office of High Energy Physics.
## References
* (1) P. Huber, K. Scholberg, E. Worcester, J. Asaadi, A. B. Balantekin, N. Bowden, P. Coloma, P. B. Denton, A. de Gouvêa and L. Fields, et al. “Snowmass Neutrino Frontier Report,” arXiv:2211.08641 [hep-ex].
* (2) A. B. Balantekin, S. Gardiner, K. Mahn, T. Mohayai, J. Newby, V. Pandey, J. Zettlemoyer, J. Asaadi, M. Betancourt and D. A. Harris, et al. “Snowmass Neutrino Frontier: Neutrino Interaction Cross Sections (NF06) Topical Group Report,” arXiv:2209.06872 [hep-ex].
* (3) A. de Gouvêa, I. Mocioiu, S. Pastore, L. E. Strigari, L. Alvarez-Ruso, A. M. Ankowski, A. B. Balantekin, V. Brdar, M. Cadeddu and S. Carey, et al. “Theory of Neutrino Physics – Snowmass TF11 (aka NF08) Topical Group Report,” arXiv:2209.07983 [hep-ph].
* (4) B. Acharya, C. Adams, A. A. Aleksandrova, K. Alfonso, P. An, S. Baeßler, A. B. Balantekin, P. S. Barbeau, F. Bellini and V. Bellini, et al. “Fundamental Symmetries, Neutrons, and Neutrinos (FSNN): Whitepaper for the 2023 NSAC Long Range Plan,” arXiv:2304.03451 [nucl-ex].
* (5) R. Davis, Jr., D. S. Harmer and K. C. Hoffman, “Search for neutrinos from the sun,” Phys. Rev. Lett. 20, 1205-1209 (1968).
* (6) K. Hirata et al. [Kamiokande-II], “Observation of a Neutrino Burst from the Supernova SN 1987a,” Phys. Rev. Lett. 58, 1490-1493 (1987).
* (7) R. M. Bionta, G. Blewitt, C. B. Bratton, D. Casper, A. Ciocio, R. Claus, B. Cortez, M. Crouch, S. T. Dye and S. Errede, et al. “Observation of a Neutrino Burst in Coincidence with Supernova SN 1987a in the Large Magellanic Cloud,” Phys. Rev. Lett. 58, 1494 (1987).
* (8) M. G. Aartsen et al. [IceCube], “Neutrino emission from the direction of the blazar TXS 0506+056 prior to the IceCube-170922A alert,” Science 361, no.6398, 147-151 (2018).
* (9) J. A. Formaggio and G. P. Zeller, “From eV to EeV: Neutrino Cross Sections Across Energy Scales,” Rev. Mod. Phys. 84, 1307-1341 (2012).
* (10) L. Stodolsky, “Application of Nuclear Coherence Properties to Elementary-Particle Reactions,” Phys. Rev. 144, 1145-1153 (1966).
* (11) D. Z. Freedman, “Coherent Neutrino Nucleus Scattering as a Probe of the Weak Neutral Current,” Phys. Rev. D 9, 1389-1392 (1974).
* (12) V. B. Kopeliovich and L. L. Frankfurt, “Isotopic and chiral structure of neutral current,” JETP Lett. 19, 145-147 (1974).
* (13) D. Akimov et al. [COHERENT], “Observation of Coherent Elastic Neutrino-Nucleus Scattering,” Science 357, no.6356, 1123-1126 (2017).
* (14) D. Akimov et al. [COHERENT], “COHERENT Collaboration data release from the first observation of coherent elastic neutrino-nucleus scattering,” arXiv:1804.09459 [nucl-ex].
* (15) D. Akimov et al. [COHERENT], “First Constraint on Coherent Elastic Neutrino-Nucleus Scattering in Argon,” Phys. Rev. D 100, no.11, 115020 (2019).
* (16) D. Akimov et al. [COHERENT], “First Measurement of Coherent Elastic Neutrino-Nucleus Scattering on Argon,” Phys. Rev. Lett. 126, no.1, 012002 (2021).
* (17) D. Akimov et al. [COHERENT], “Measurement of the Coherent Elastic Neutrino-Nucleus Scattering Cross Section on CsI by COHERENT,” Phys. Rev. Lett. 129, no.8, 081801 (2022).
* (18) J. Barranco, O. G. Miranda and T. I. Rashba, “Probing new physics with coherent neutrino scattering off nuclei,” JHEP 12, 021 (2005).
* (19) K. Scholberg, “Prospects for measuring coherent neutrino-nucleus elastic scattering at a stopped-pion neutrino source,” Phys. Rev. D 73, 033005 (2006).
* (20) J. Barranco, O. G. Miranda and T. I. Rashba, “Low energy neutrino experiments sensitivity to physics beyond the Standard Model,” Phys. Rev. D 76, 073008 (2007).
* (21) B. Dutta, Y. Gao, R. Mahapatra, N. Mirabolfathi, L. E. Strigari and J. W. Walker, “Sensitivity to oscillation with a sterile fourth generation neutrino from ultra-low threshold neutrino-nucleus coherent scattering,” Phys. Rev. D 94, no.9, 093002 (2016).
* (22) M. Lindner, W. Rodejohann and X. J. Xu, “Coherent Neutrino-Nucleus Scattering and new Neutrino Interactions,” JHEP 03, 097 (2017).
* (23) P. Coloma, M. C. Gonzalez-Garcia, M. Maltoni and T. Schwetz, “COHERENT Enlightenment of the Neutrino Dark Side,” Phys. Rev. D 96, no.11, 115007 (2017).
* (24) Y. Farzan and M. Tortola, “Neutrino oscillations and Non-Standard Interactions,” Front. in Phys. 6, 10 (2018).
* (25) J. Billard, J. Johnston and B. J. Kavanagh, “Prospects for exploring New Physics in Coherent Elastic Neutrino-Nucleus Scattering,” JCAP 11, 016 (2018).
* (26) D. Aristizabal Sierra, V. De Romeri and N. Rojas, “COHERENT analysis of neutrino generalized interactions,” Phys. Rev. D 98, 075018 (2018).
* (27) V. Brdar, W. Rodejohann and X. J. Xu, “Producing a new Fermion in Coherent Elastic Neutrino-Nucleus Scattering: from Neutrino Mass to Dark Matter,” JHEP 12, 024 (2018).
* (28) M. Abdullah, J. B. Dent, B. Dutta, G. L. Kane, S. Liao and L. E. Strigari, “Coherent elastic neutrino nucleus scattering as a probe of a Z’ through kinetic and mass mixing effects,” Phys. Rev. D 98, no.1, 015005 (2018).
* (29) D. Aristizabal Sierra, J. Liao and D. Marfatia, “Impact of form factor uncertainties on interpretations of coherent elastic neutrino-nucleus scattering data,” JHEP 06, 141 (2019).
* (30) O. G. Miranda, G. Sanchez Garcia and O. Sanders, “Coherent elastic neutrino-nucleus scattering as a precision test for the Standard Model and beyond: the COHERENT proposal case,” Adv. High Energy Phys. 2019, 3902819 (2019).
* (31) N. F. Bell, J. B. Dent, J. L. Newstead, S. Sabharwal and T. J. Weiler, “Migdal effect and photon bremsstrahlung in effective field theories of dark matter direct detection and coherent elastic neutrino-nucleus scattering,” Phys. Rev. D 101, no.1, 015012 (2020).
* (32) D. Aristizabal Sierra, V. De Romeri and N. Rojas, “CP violating effects in coherent elastic neutrino-nucleus scattering processes,” JHEP 09, 069 (2019).
* (33) M. Cadeddu, F. Dordei, C. Giunti, Y. F. Li and Y. Y. Zhang, “Neutrino, electroweak, and nuclear physics from COHERENT elastic neutrino-nucleus scattering with refined quenching factor,” Phys. Rev. D 101, no.3, 033004 (2020).
* (34) P. Coloma, I. Esteban, M. C. Gonzalez-Garcia and M. Maltoni, “Improved global fit to Non-Standard neutrino Interactions using COHERENT energy and timing data,” JHEP 02, 023 (2020).
* (35) B. C. Canas, E. A. Garces, O. G. Miranda, A. Parada and G. Sanchez Garcia, “Interplay between nonstandard and nuclear constraints in coherent elastic neutrino-nucleus scattering experiments,” Phys. Rev. D 101, no.3, 035012 (2020).
* (36) B. Dutta, S. Liao, S. Sinha and L. E. Strigari, “Searching for Beyond the Standard Model Physics with COHERENT Energy and Timing Data,” Phys. Rev. Lett. 123, no.6, 061801 (2019).
* (37) P. B. Denton and J. Gehrlein, “A Statistical Analysis of the COHERENT Data and Applications to New Physics,” JHEP 04, 266 (2021).
* (38) W. Skiba and Q. Xia, “Electroweak constraints from the COHERENT experiment,” JHEP 10, 102 (2022).
* (39) M. Cadeddu, N. Cargioli, F. Dordei, C. Giunti, Y. F. Li, E. Picciau and Y. Y. Zhang, “Constraints on light vector mediators through coherent elastic neutrino nucleus scattering data from COHERENT,” JHEP 01, 116 (2021).
* (40) B. C. Cañas, E. A. Garcés, O. G. Miranda and A. Parada, “Future perspectives for a weak mixing angle measurement in coherent elastic neutrino nucleus scattering experiments,” Phys. Lett. B 784, 159-162 (2018).
* (41) J. Alonso, F. T. Avignone, W. A. Barletta, R. Barlow, H. T. Baumgartner, A. Bernstein, E. Blucher, L. Bugel, L. Calabretta and L. Camilleri, et al. “Expression of Interest for a Novel Search for CP Violation in the Neutrino Sector: DAEdALUS,” arXiv:1006.0260 [physics.ins-det].
* (42) O. Tomalak, “Radiative (anti)neutrino energy spectra from muon, pion, and kaon decays,” Phys. Lett. B 829, 137108 (2022).
* (43) T. Ishikawa, N. Nakazawa and Y. Yasui, “Numerical calculation of the full two-loop electroweak corrections to muon (g-2),” Phys. Rev. D 99, no.7, 073004 (2019).
* (44) M. Abdullah, D. Aristizabal Sierra, B. Dutta and L. E. Strigari, “Coherent Elastic Neutrino-Nucleus Scattering with directional detectors,” Phys. Rev. D 102, no.1, 015009 (2020).
* (45) R. Hofstadter, “Electron scattering and nuclear structure,” Rev. Mod. Phys. 28, 214-254 (1956).
* (46) H. De Vries, C. W. De Jager and C. De Vries, “Nuclear charge and magnetization density distribution parameters from elastic electron scattering,” Atom. Data Nucl. Data Tabl. 36, 495-536 (1987).
* (47) G. Fricke, C. Bernhardt, K. Heilig, L. A. Schaller, L. Schellenberg, E. B. Shera and C. W. de Jager, “Nuclear Ground State Charge Radii from Electromagnetic Interactions,” Atom. Data Nucl. Data Tabl. 60, 177-285 (1995).
* (48) I. Angeli and K. P. Marinova, “Table of experimental nuclear ground state charge radii: An update,” Atom. Data Nucl. Data Tabl. 99, no.1, 69-95 (2013).
* (49) M. Thiel, C. Sfienti, J. Piekarewicz, C. J. Horowitz and M. Vanderhaeghen, “Neutron skins of atomic nuclei: per aspera ad astra,” J. Phys. G 46, no.9, 093003 (2019).
* (50) T. W. Donnelly, J. Dubach and I. Sick, “Isospin Dependences in Parity Violating Electron Scattering,” Nucl. Phys. A 503, 589-631 (1989).
* (51) R. H. Helm, “Inelastic and Elastic Scattering of 187-Mev Electrons from Selected Even-Even Nuclei,” Phys. Rev. 104, 1466-1475 (1956).
* (52) S. Klein and J. Nystrand, “Exclusive vector meson production in relativistic heavy ion collisions,” Phys. Rev. C 60, 014903 (1999).
* (53) G. Duda, A. Kemper and P. Gondolo, “Model Independent Form Factors for Spin Independent Neutralino-Nucleon Scattering from Elastic Electron Scattering Data,” JCAP 04, 012 (2007).
* (54) J. D. Lewin and P. F. Smith, “Review of mathematics, numerical factors, and corrections for dark matter experiments based on elastic nuclear recoil,” Astropart. Phys. 6, 87-112 (1996).
* (55) D. K. Papoulias, T. S. Kosmas and Y. Kuno, “Recent probes of standard and non-standard neutrino physics with nuclei,” Front. in Phys. 7, 191 (2019).
* (56) C. G. Payne, S. Bacca, G. Hagen, W. Jiang and T. Papenbrock, “Coherent elastic neutrino-nucleus scattering on 40Ar from first principles,” Phys. Rev. C 100, no.6, 061304 (2019).
* (57) M. Hoferichter, J. Menéndez and A. Schwenk, “Coherent elastic neutrino-nucleus scattering: EFT analysis and nuclear responses,” Phys. Rev. D 102, no.7, 074018 (2020).
* (58) J. Yang, J. A. Hernandez and J. Piekarewicz, “Electroweak probes of ground state densities,” Phys. Rev. C 100, no.5, 054301 (2019).
* (59) N. Van Dessel, V. Pandey, H. Ray and N. Jachowicz, “Cross sections for coherent elastic and inelastic neutrino-nucleus scattering,” Universe 9, 207 (2023).
* (60) O. Tomalak, P. Machado, V. Pandey and R. Plestid, “Flavor-dependent radiative corrections in coherent elastic neutrino-nucleus scattering,” JHEP 02, 097 (2021).
* (61) C. J. Horowitz, S. J. Pollock, P. A. Souder and R. Michaels, “Parity violating measurements of neutron densities,” Phys. Rev. C 63, 025501 (2001).
* (62) D. R. Yennie, D. G. Ravenhall and R. N. Wilson, “Phase-Shift Calculation of High-Energy Electron Scattering,” Phys. Rev. 95, 500-512 (1954).
* (63) D. R. Yennie, F. L. Boos and D. G. Ravenhall, “Analytic Distorted-Wave Approximation for High-Energy Electron Scattering Calculations,” Phys. Rev. 137, B882-B903 (1965).
* (64) K. S. Kim, L. E. Wright, Y. Jin and D. W. Kosik, “Approximate treatment of electron Coulomb distortion in quasielastic (e,e’) reactions,” Phys. Rev. C 54, 2515 (1996).
* (65) K. S. Kim, L. E. Wright and D. A. Resler, “Coulomb distortion effects for electron or positron induced (e, e-prime) reactions in the quasielastic region,” Phys. Rev. C 64, 044607 (2001).
* (66) C. J. Horowitz, “Parity violating elastic electron scattering and Coulomb distortions,” Phys. Rev. C 57, 3430-3436 (1998).
* (67) S. Abrahamyan, Z. Ahmed, H. Albataineh, K. Aniol, D. S. Armstrong, W. Armstrong, T. Averett, B. Babineau, A. Barbieri and V. Bellini et al., “Measurement of the Neutron Radius of 208Pb Through Parity-Violation in Electron Scattering,” Phys. Rev. Lett. 108, 112502 (2012).
* (68) C. J. Horowitz, Z. Ahmed, C. M. Jen, A. Rakhman, P. A. Souder, M. M. Dalton, N. Liyanage, K. D. Paschke, K. Saenboonruang and R. Silwal, et al., “Weak charge form factor and radius of 208Pb through parity violation in electron scattering,” Phys. Rev. C 85, 032501 (2012).
* (69) K. S. Kumar [PREX and CREX], “Electroweak probe of neutron skins of nuclei,” Annals Phys. 412, 168012 (2020).
* (70) D. Becker, R. Bucoveanu, C. Grzesik, K. Imai, R. Kempf, K. Imai, M. Molitor, A. Tyukin, M. Zimmermann and D. Armstrong et al., “The P2 experiment,” Eur. Phys. J. A 54, no.11, 208 (2018).
* (71) P. S. Amanik and G. C. McLaughlin, “Nuclear neutron form factor from neutrino nucleus coherent elastic scattering,” J. Phys. G 36, 015105 (2009).
* (72) K. Patton, J. Engel, G. C. McLaughlin and N. Schunck, “Neutrino-nucleus coherent scattering as a probe of neutron density distributions,” Phys. Rev. C 86, 024612 (2012).
* (73) M. Cadeddu, C. Giunti, Y. F. Li and Y. Y. Zhang, “Average CsI neutron density distribution from COHERENT data,” Phys. Rev. Lett. 120, no.7, 072501 (2018).
* (74) N. Van Dessel, A. Nikolakopoulos and N. Jachowicz,“Lepton kinematics in low energy neutrino-Argon interactions,” Phys. Rev. C 101, no.4, 045502 (2020).
* (75) V. Pandey, N. Jachowicz, T. Van Cuyck, J. Ryckebusch and M. Martini, “Low-energy excitations and quasielastic contribution to electron-nucleus and neutrino-nucleus scattering in the continuum random-phase approximation,” Phys. Rev. C 92, no.2, 024606 (2015).
* (76) J. S. O’Connell, T. W. Donnelly and J. D. Walecka, “Semileptonic weak interactions with $C^{12}$,” Phys. Rev. C 6, 719-733 (1972).
* (77) J. D. Walecka, “Theoretical Nuclear and Subnuclear Physics,” Oxford University Press, New York (1995).
* (78) N. Jachowicz, K. Heyde, J. Ryckebusch and S. Rombouts, “Continuum random phase approximation approach to charged current neutrino nucleus scattering,” Phys. Rev. C 65, 025501 (2002).
* (79) V. Pandey, N. Jachowicz, M. Martini, R. González-Jiménez, J. Ryckebusch, T. Van Cuyck and N. Van Dessel, “Impact of low-energy nuclear excitations on neutrino-nucleus scattering at MiniBooNE and T2K kinematics,” Phys. Rev. C 94, no.5, 054609 (2016).
* (80) N. Van Dessel, N. Jachowicz and A. Nikolakopoulos, “Forbidden transitions in neutral and charged current interactions between low-energy neutrinos and Argon,” Phys. Rev. C 100, no.5, 055503 (2019).
* (81) B. Abi et al. [DUNE], “Supernova neutrino burst detection with the Deep Underground Neutrino Experiment,” Eur. Phys. J. C 81, no.5, 423 (2021).
* (82) A. Abed Abud et al. [DUNE], “Impact of cross-section uncertainties on supernova neutrino spectral parameter fitting in the Deep Underground Neutrino Experiment,” arXiv:2303.17007 [hep-ex].
* (83) B. Abi et al. [DUNE], “Deep Underground Neutrino Experiment (DUNE), Far Detector Technical Design Report, Volume I Introduction to DUNE,” JINST 15, no.08, T08008 (2020).
* (84) B. Abi et al. [DUNE], “Deep Underground Neutrino Experiment (DUNE), Far Detector Technical Design Report, Volume II: DUNE Physics,” [arXiv:2002.03005 [hep-ex]].
* (85) B. Abi et al. [DUNE], “Deep Underground Neutrino Experiment (DUNE), Far Detector Technical Design Report, Volume III: DUNE Far Detector Technical Coordination,” JINST 15, no.08, T08009 (2020).
* (86) B. Abi et al. [DUNE], “Deep Underground Neutrino Experiment (DUNE), Far Detector Technical Design Report, Volume IV: Far Detector Single-phase Technology,” JINST 15, no.08, T08010 (2020).
* (87) K. Scholberg, “Supernova Neutrino Detection,” Ann. Rev. Nucl. Part. Sci. 62, 81-103 (2012).
* (88) A. A. Aguilar-Arevalo et al. [CCM], “First dark matter search results from Coherent CAPTAIN-Mills,” Phys. Rev. D 106, no.1, 012001 (2022).
* (89) S. E. Willis, V. W. Hughes, P. Nemethy, R. L. Burman, D. R. Cochran, J. S. Frank, R. P. Redwine, J. Duclos, H. Kaspar and C. K. Hargrove et al., “A Neutrino Experiment to Test the Nature of Muon Number Conservation,” Phys. Rev. Lett. 44, 522 (1980) [erratum: Phys. Rev. Lett. 44, 903 (1980); erratum: Phys. Rev. Lett. 45, 1370 (1980)].
* (90) B. Armbruster et al. [KARMEN], “Measurement of the weak neutral current excitation C-12(nu(mu) nu’(mu))C*-12(1+,1,15.1-MeV) at E(nu(mu)) = 29.8-MeV,” Phys. Lett. B 423, 15-20 (1998).
* (91) G. Drexlin et al. [KARMEN], “First observation of the neutral current nuclear excitation C-12 (nu, nu-prime) C-12* (1+, 1),” Phys. Lett. B 267, 321-324 (1991).
* (92) R. Maschuw [KARMEN], “Neutrino spectroscopy with KARMEN,” Prog. Part. Nucl. Phys. 40, 183-192 (1998).
* (93) B. Zeitnitz [KARMEN], “KARMEN: Neutrino physics at ISIS,” Prog. Part. Nucl. Phys. 32, 351-373 (1994).
* (94) D. A. Krakauer, R. L. Talaga, R. C. Allen, H. H. Chen, R. Hausammann, W. P. Lee, H. J. Mahler, X. Q. Lu, K. C. Wang and T. J. Bowles et al., “Experimental study of neutrino absorption on carbon,” Phys. Rev. C 45, 2450-2463 (1992).
* (95) L. B. Auerbach et al. [LSND], “Measurements of charged current reactions of nu(e) on 12-C,” Phys. Rev. C 64, 065501 (2001).
* (96) L. B. Auerbach et al. [LSND], “Measurements of charged current reactions of muon neutrinos on C-12,” Phys. Rev. C 66, 015501 (2002).
* (97) J. R. Distel, B. T. Cleveland, K. Lande, C. K. Lee, P. S. Wildenhain, G. E. Allen and R. L. Burman, “Measurement of the cross-section for the reaction I-127 (nu(e), e-) Xe-127(bound states) with neutrinos from the decay of stopped muons,” Phys. Rev. C 68, 054613 (2003).
* (98) P. An et al. [COHERENT], “Measurement of the inclusive electron-neutrino charged-current cross section on 127I with the COHERENT NaI$\nu$E detector,” arXiv:2305.19594 [nucl-ex].
* (99) P. An et al. [COHERENT], “Measurement of natPb($\nu_{e}$,X$n$) production with a stopped-pion neutrino source,” arXiv:2212.11295 [hep-ex].
* (100) A. Aguilar-Arevalo et al. [CONNIE], “Results of the Engineering Run of the Coherent Neutrino Nucleus Interaction Experiment (CONNIE),” JINST 11, no.07, P07024 (2016).
* (101) G. Agnolet et al. [MINER], “Background Studies for the MINER Coherent Neutrino Scattering Reactor Experiment,” Nucl. Instrum. Meth. A 853, 53-60 (2017).
* (102) V. Belov, V. Brudanin, V. Egorov, D. Filosofov, M. Fomina, Y. Gurov, L. Korotkova, A. Lubashevskiy, D. Medvedev and R. Pritula et al., “The $\nu$GeN experiment at the Kalinin Nuclear Power Plant,” JINST 10, no.12, P12011 (2015).
* (103) R. Strauss, J. Rothe, G. Angloher, A. Bento, A. Gütlein, D. Hauff, H. Kluck, M. Mancuso, L. Oberauer and F. Petricca et al., “The $\nu$-cleus experiment: A gram-scale fiducial-volume cryogenic detector for the first detection of coherent neutrino-nucleus scattering,” Eur. Phys. J. C 77, 506 (2017).
* (104) J. Billard, R. Carr, J. Dawson, E. Figueroa-Feliciano, J. A. Formaggio, J. Gascon, S. T. Heine, M. De Jesus, J. Johnston and T. Lasserre et al., “Coherent Neutrino Scattering with Low Temperature Bolometers at Chooz Reactor Complex,” J. Phys. G 44, no.10, 105101 (2017).
* (105) H. T. Wong, “Neutrino-nucleus coherent scattering and dark matter searches with sub-keV germanium detector,” Nucl. Phys. A 844, 229C-233C (2010).
* (106) J. J. Choi et al. [NEON], “Exploring coherent elastic neutrino-nucleus scattering using reactor electron antineutrinos in the NEON experiment,” Eur. Phys. J. C 83, no.3, 226 (2023).
* (107) C. Awe et al. [CHANDLER, CONNIE, CONUS, Daya Bay, JUNO, MTAS, NEOS, NuLat, PROSPECT, RENO, Ricochet, ROADSTR Near-Field Working Group, SoLid, Stereo, Valencia-Nantes TAGS, vIOLETA and WATCHMAN], “High Energy Physics Opportunities Using Reactor Antineutrinos,” arXiv:2203.07214 [hep-ex].
* (108) P. S. Barbeau, Y. Efremenko and K. Scholberg, “COHERENT at the Spallation Neutron Source,” arXiv:2111.07033 [hep-ex].
* (109) J. Asaadi, P. S. Barbeau, B. Bodur, A. Bross, E. Conley, Y. Efremenko, M. Febbraro, A. Galindo-Uribarri, S. Gardiner and D. Gonzalez-Diaz et al., “Physics Opportunities in the ORNL Spallation Neutron Source Second Target Station Era,” arXiv:2209.02883 [hep-ex].
* (110) S. Henderson, “The Spallation Neutron Source accelerator system design,” Nucl. Instrum. Meth. A 763, 610-673 (2014).
* (111) J. R. Haines, T. J. McManamy, T. A. Gabriel, R. E. Battle, K. K. Chipley, J. A. Crabtree, L. L. Jacobs, D. C. Lousteau, M. J. Rennich and B. W. Riemer, “Spallation neutron source target station design, development, and commissioning,” Nucl. Instrum. Meth. A 764, 94-115 (2014).
* (112) A. A. Aguilar-Arevalo et al. [CCM], “First Leptophobic Dark Matter Search from the Coherent–CAPTAIN-Mills Liquid Argon Detector,” Phys. Rev. Lett. 129, no.2, 021801 (2022).
* (113) A. A. Aguilar-Arevalo et al. [CCM], “Axion-Like Particles at Coherent CAPTAIN-Mills,” arXiv:2112.09979 [hep-ph].
* (114) S. Ajimura, M. K. Cheoun, J. H. Choi, H. Furuta, M. Harada, S. Hasegawa, Y. Hino, T. Hiraiwa, E. Iwai and S. Iwata et al., “Technical Design Report (TDR): Searching for a Sterile Neutrino at J-PARC MLF (E56, JSNS2),” arXiv:1705.08629 [physics.ins-det].
* (115) S. Ajimura, M. Botran, J. H. Choi, J. W. Choi, M. K. Cheoun, T. Dodo, H. Furuta, J. Goh, K. Haga and M. Harada, et al. “Proposal: JSNS2-II,” arXiv:2012.10807 [hep-ex].
* (116) D. Baxter, J. I. Collar, P. Coloma, C. E. Dahl, I. Esteban, P. Ferrario, J. J. Gomez-Cadenas, M. C. Gonzalez-Garcia, A. R. L. Kavner and C. M. Lewis et al., “Coherent Elastic Neutrino-Nucleus Scattering at the European Spallation Source,” JHEP 02, 123 (2020).
* (117) M. Toups, R. G. Van de Water, B. Batell, S. J. Brice, P. deNiverville, B. Dutta, J. Eldred, T. Hapitas, R. Harnik and A. Karthikeyan et al., “PIP2-BD: GeV Proton Beam Dump at Fermilab’s PIP-II Linac,” arXiv:2203.08079 [hep-ex].
* (118) X. R. Huang and L. W. Chen, “Neutron Skin in CsI and Low-Energy Effective Weak Mixing Angle from COHERENT Data,” Phys. Rev. D 100, no.7, 071301 (2019).
* (119) J. Bernabeu, J. Papavassiliou and J. Vidal, “On the observability of the neutrino charge radius,” Phys. Rev. Lett. 89, 101802 (2002) [erratum: Phys. Rev. Lett. 89, 229902 (2002)].
* (120) J. Bernabeu, J. Papavassiliou and J. Vidal, “The Neutrino charge radius is a physical observable,” Nucl. Phys. B 680, 450-478 (2004).
* (121) J. Papavassiliou, J. Bernabeu and M. Passera, “Neutrino-nuclear coherent scattering and the effective neutrino charge radius,” PoS HEP2005, 192 (2006).
* (122) M. Cadeddu, C. Giunti, K. A. Kouzakov, Y. F. Li, Y. Y. Zhang and A. I. Studenikin, “Neutrino Charge Radii From Coherent Elastic Neutrino-nucleus Scattering,” Phys. Rev. D 98, no.11, 113010 (2018) [erratum: Phys. Rev. D 101, no.5, 059902 (2020)].
* (123) G. Co’, M. Anguiano and A. M. Lallena, “Nuclear structure uncertainties in coherent elastic neutrino-nucleus scattering,” JCAP 04, 044 (2020).
* (124) E. Ciuffoli, J. Evslin, Q. Fu and J. Tang, “Extracting nuclear form factors with coherent neutrino scattering,” Phys. Rev. D 97, no.11, 113003 (2018).
* (125) D. K. Papoulias, T. S. Kosmas, R. Sahu, V. K. B. Kota and M. Hota, “Constraining nuclear physics parameters with current and future COHERENT data,” Phys. Lett. B 800, 135133 (2020).
* (126) F. J. Fattoyev, J. Piekarewicz and C. J. Horowitz, “Neutron Skins and Neutron Stars in the Multimessenger Era,” Phys. Rev. Lett. 120 (2018) no.17, 172702.
* (127) B. T. Reed, F. J. Fattoyev, C. J. Horowitz and J. Piekarewicz, “Implications of PREX-2 on the Equation of State of Neutron-Rich Matter,” Phys. Rev. Lett. 126 (2021) no.17, 172503.
* (128) J. M. Lattimer and Y. Lim, “Constraining the Symmetry Parameters of the Nuclear Interaction,” Astrophys. J. 771 (2013), 51.
* (129) K. Hebeler, J. M. Lattimer, C. J. Pethick and A. Schwenk, “Equation of state and neutron star properties constrained by nuclear physics and observation,” Astrophys. J. 773 (2013), 11.
* (130) G. Hagen, A. Ekström, C. Forssén, G. R. Jansen, W. Nazarewicz, T. Papenbrock, K. A. Wendt, S. Bacca, N. Barnea and B. Carlsson et al., “Neutron and weak-charge distributions of the 48Ca nucleus,” Nature Phys. 12 (2015) no.2, 186-190.
* (131) M. Abdullah, H. Abele, D. Akimov, G. Angloher, D. Aristizabal Sierra, C. Augier, A. B. Balantekin, L. Balogh, P. S. Barbeau and L. Baudis et al., “Coherent elastic neutrino-nucleus scattering: Terrestrial and astrophysical applications,” arXiv:2203.07361 [hep-ph].
* (132) G. P. Zeller et al. [NuTeV], “A Precise Determination of Electroweak Parameters in Neutrino Nucleon Scattering,” Phys. Rev. Lett. 88, 091802 (2002) [erratum: Phys. Rev. Lett. 90, 239902 (2003)].
* (133) D. Androić et al. [Qweak], “Precision measurement of the weak charge of the proton,” Nature 557, no.7704, 207-211 (2018).
* (134) J. Benesch et al. [MOLLER], “The MOLLER Experiment: An Ultra-Precise Measurement of the Weak Mixing Angle Using M\oller Scattering,” arXiv:1411.4088 [nucl-ex].
* (135) B. M. Roberts, V. A. Dzuba and V. V. Flambaum, “Parity and Time-Reversal Violation in Atomic Systems,” Ann. Rev. Nucl. Part. Sci. 65, 63-86 (2015).
* (136) D. Aristizabal Sierra, N. Rojas and M. H. G. Tytgat, “Neutrino non-standard interactions and dark matter searches with multi-ton scale detectors,” JHEP 03, 197 (2018).
* (137) C. Giunti, “General COHERENT constraints on neutrino nonstandard interactions,” Phys. Rev. D 101, no.3, 035039 (2020).
* (138) J. Liao and D. Marfatia, “COHERENT constraints on nonstandard neutrino interactions,” Phys. Lett. B 775, 54-57 (2017).
* (139) J. B. Dent, B. Dutta, S. Liao, J. L. Newstead, L. E. Strigari and J. W. Walker, “Accelerator and reactor complementarity in coherent neutrino-nucleus scattering,” Phys. Rev. D 97, no.3, 035009 (2018).
* (140) J. B. Dent, B. Dutta, D. Kim, S. Liao, R. Mahapatra, K. Sinha and A. Thompson, “New Directions for Axion Searches via Scattering at Reactor Neutrino Experiments,” Phys. Rev. Lett. 124, no.21, 211804 (2020).
* (141) O. G. Miranda, D. K. Papoulias, O. Sanders, M. Tórtola and J. W. F. Valle, “Future CEvNS experiments as probes of lepton unitarity and light-sterile neutrinos,” Phys. Rev. D 102, 113014 (2020).
* (142) A. M. Suliga and I. Tamborra, “Astrophysical constraints on nonstandard coherent neutrino-nucleus scattering,” Phys. Rev. D 103 (2021) no.8, 083002.
* (143) B. Dutta, R. F. Lang, S. Liao, S. Sinha, L. Strigari and A. Thompson, “A global analysis strategy to resolve neutrino NSI degeneracies with scattering and oscillation data,” JHEP 09 (2020), 106.
* (144) B. Dutta, D. Kim, S. Liao, J. C. Park, S. Shin and L. E. Strigari, “Dark matter signals from timing spectra at neutrino experiments,” Phys. Rev. Lett. 124, no.12, 121802 (2020).
* (145) L. Wolfenstein, “Neutrino Oscillations in Matter,” Phys. Rev. D 17, 2369-2374 (1978).
* (146) P. S. Bhupal Dev, K. S. Babu, P. B. Denton, P. A. N. Machado, C. A. Argüelles, J. L. Barrow, S. S. Chatterjee, M. C. Chen, A. de Gouvêa and B. Dutta et al., “Neutrino Non-Standard Interactions: A Status Report,” SciPost Phys. Proc. 2, 001 (2019).
* (147) P. Coloma and T. Schwetz, “Generalized mass ordering degeneracy in neutrino oscillation experiments,” Phys. Rev. D 94, no.5, 055005 (2016) [erratum: Phys. Rev. D 95, no.7, 079903 (2017)].
* (148) P. Coloma, P. B. Denton, M. C. Gonzalez-Garcia, M. Maltoni and T. Schwetz, “Curtailing the Dark Side in Non-Standard Neutrino Interactions,” JHEP 04, 116 (2017).
* (149) P. A. Zyla et al. [Particle Data Group], “Review of Particle Physics,” PTEP 2020, no.8, 083C01 (2020).
* (150) A. B. Balantekin and N. Vassh,“Magnetic moments of active and sterile neutrinos,” Phys. Rev. D 89, no.7, 073013 (2014).
* (151) P. Vogel and J. Engel, “Neutrino Electromagnetic Form-Factors,” Phys. Rev. D 39, 3378 (1989).
* (152) N. F. Bell, M. Gorchtein, M. J. Ramsey-Musolf, P. Vogel and P. Wang, “Model independent bounds on magnetic moments of Majorana neutrinos,” Phys. Lett. B 642, 377-383 (2006).
* (153) M. Agostini et al. [Borexino], “Limiting neutrino magnetic moments with Borexino Phase-II solar neutrino data,” Phys. Rev. D 96, no.9, 091103 (2017).
* (154) L. B. Auerbach et al. [LSND], “Measurement of electron - neutrino - electron elastic scattering,” Phys. Rev. D 63, 112001 (2001).
* (155) T. S. Kosmas, O. G. Miranda, D. K. Papoulias, M. Tortola and J. W. F. Valle, “Probing neutrino magnetic moments at the Spallation Neutron Source facility,” Phys. Rev. D 92, no.1, 013011 (2015).
* (156) A. C. Dodd, E. Papageorgiu and S. Ranfone, “The Effect of a neutrino magnetic moment on nuclear excitation processes,” Phys. Lett. B 266, 434-438 (1991).
* (157) A. J. Anderson, J. M. Conrad, E. Figueroa-Feliciano, C. Ignarra, G. Karagiorgi, K. Scholberg, M. H. Shaevitz and J. Spitz, “Measuring Active-to-Sterile Neutrino Oscillations with Neutral Current Coherent Neutrino-Nucleus Scattering,” Phys. Rev. D 86, 013004 (2012).
* (158) J. A. Formaggio, E. Figueroa-Feliciano and A. J. Anderson, “Sterile Neutrinos, Coherent Scattering and Oscillometry Measurements with Low-temperature Bolometers,” Phys. Rev. D 85, 013009 (2012).
* (159) T. S. Kosmas, D. K. Papoulias, M. Tortola and J. W. F. Valle, “Probing light sterile neutrino signatures at reactor and Spallation Neutron Source neutrino experiments,” Phys. Rev. D 96, no.6, 063013 (2017).
* (160) C. Blanco, D. Hooper and P. Machado, “Constraining Sterile Neutrino Interpretations of the LSND and MiniBooNE Anomalies with Coherent Neutrino Scattering Experiments,” Phys. Rev. D 101, no.7, 075051 (2020).
* (161) P. deNiverville, M. Pospelov and A. Ritz, “Light new physics in coherent neutrino-nucleus scattering experiments,” Phys. Rev. D 92, no.9, 095005 (2015).
* (162) P. deNiverville, C. Y. Chen, M. Pospelov and A. Ritz, “Light dark matter in neutrino beams: production modelling and scattering signatures at MiniBooNE, T2K and SHiP,” Phys. Rev. D 95, no.3, 035006 (2017).
* (163) B. Dutta, W. C. Huang, J. L. Newstead and V. Pandey, “Inelastic nuclear scattering from neutrinos and dark matter,” Phys. Rev. D 106, no.11, 113006 (2022).
* (164) B. Dutta, W. C. Huang and J. L. Newstead, “Probing the dark sector with nuclear transition photons,” arXiv:2302.10250 [hep-ph].
|
# Spatial characterization of the magnetic field profile of a probe tip used
in magnetic resonance force microscopy
E. Nazaretski, E. A. Akhadov, and I. Martin (Los Alamos National Laboratory, Los Alamos, NM 87545), D. V. Pelekhov and P. C. Hammel (Department of Physics, Ohio State University, Columbus, OH 43210), and R. Movshovich (Los Alamos National Laboratory, Los Alamos, NM 87545)
###### Abstract
We have developed an experimental approach to characterize the spatial
distribution of the magnetic field produced by cantilever tips used in
magnetic resonance force microscopy (MRFM). We performed MRFM measurements on
a well characterized diphenyl-picrylhydrazyl (DPPH) film and mapped the 3D
field profile produced by a $Nd_{2}Fe_{14}B$ probe tip. Using our technique,
field profiles of arbitrarily shaped probe magnets can be imaged.
Magnetic resonance force microscopy has attracted a lot of interest in recent
years due to its high force sensitivity and excellent spatial resolution
of magnetic properties. MRFM has been used in studies of electron and nuclear
spin systems culminating in the detection of the force signal originating from
a single electron spin Rugar 2004 . Recent experiments on nuclear spins of
${}^{19}F$ in $CaF_{2}$ samples demonstrated a spatial resolution of 90 nm
Mamin 2007 , orders of magnitude better than conventional magnetic resonance
imaging techniques. In the long term, MRFM is envisioned as a possible route to
achieve imaging of individual molecules. Experiments on ferromagnetic systems
showed the potential for spatially resolved ferromagnetic resonance in
continuous and microfabricated samples Nazaretski 2007 ; Mewes 2006 . In MRFM
experiments, the force F exerted on the cantilever is a convolution of the sample’s
magnetization and the gradient of the magnetic field produced by the probe
tip. To perform correct imaging, quantitative knowledge of the spatial
distribution of the tip field is required. At present, the most common way to
characterize magnetic tips is to use cantilever magnetometry Rossel 1996 ;
Stipe 2001 . It provides information about the magnetic moment m of the tip;
however, it is also sensitive to the relative orientation of m with respect to
the external magnetic field and to the direction of the cantilever’s oscillations.
Moreover, the detailed spatial field profile of the magnetic tip cannot be
inferred. An alternative approach utilizes the spectroscopic nature of MRFM and
has been demonstrated in previous studies Mamin 2007 ; Chao 2004 ; Wago 1998 ;
Bruland 1998 ; Hammel 2003 . In these experiments the strength of the probe
field has been determined from the position of the onset in the MRFM spectra
as a function of the probe-sample separation $z$. Based on this information,
the point dipole approximation has been used to model the magnetic tip. The
situation becomes more complicated if the shape of the tip is irregular or m
is tilted with respect to the $\hat{z}$ direction. Under these circumstances
the one-dimensional approach is insufficient, and does not reveal the spatial
field profile of the probe tip. In this letter we propose a method for
detailed mapping of the tip magnetic field, free of any assumptions about the
tip shape, size, or composition.
In MRFM experiments the magnetic tip of a cantilever is used to generate the
inhomogeneous magnetic field causing local excitation of the spin resonance in
a small volume of the sample known as the sensitive slice. The resonance condition
is written as follows
$|{\bf H}_{tot}(r)|=\frac{\omega_{RF}}{\gamma},$ (1)
where $\gamma$ is the gyromagnetic ratio. The total field ${\bf H}_{tot}(r)$
can be expressed as
${\bf H}_{tot}(r)={\bf H}_{ext}+{\bf H}_{tip}(r),$ (2)
where ${\bf H}_{ext}$ is the externally applied magnetic field and ${\bf
H}_{tip}(r)$ is the field of the probe tip. The width $\Delta z$ of the sensitive
slice is determined by the ratio of the resonance linewidth $\Delta H_{res}$
and the strength of the gradient field $\nabla H_{tip}$ produced by the probe
tip, $\Delta z=\frac{\Delta H_{res}}{|\nabla H_{tip}|}$ Suter 2002 . Three-dimensional
images of electron spin densities can be reconstructed by
performing lateral and vertical scanning of the sensitive slice across the
sample Wago 1998 ; Chao 2004 .
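To make these relations concrete, the following minimal Python sketch locates the sensitive slice on the tip axis and evaluates $\Delta z$. It assumes a point-dipole tip with on-axis field $H_{tip}(z)=2m/z^{3}$ (anticipating the dipole model below); the RF frequency, external field, and tip moment are values quoted in this paper, while the linewidth is an illustrative assumption.

```python
# Minimal sketch (CGS units).  Only f_rf, H_ext, and m are quoted in the
# text; the DPPH linewidth dH_res is an illustrative assumption.
gamma = 2.8e6                  # DPPH electron gyromagnetic ratio, Hz/G
f_rf = 9.35e9                  # RF frequency, Hz
H_res = f_rf / gamma           # total field satisfying Eq. (1): ~3.34 kG
m = 7.5e-9                     # estimated tip moment, emu
H_ext = 3.038e3                # external field, G
dH_res = 3.0                   # assumed resonance linewidth, G

# Point-dipole tip on its axis: H_tip(z) = 2*m/z**3, |dH_tip/dz| = 6*m/z**4.
H_tip = H_res - H_ext                  # tip field needed at the slice, Eq. (2)
z = (2.0 * m / H_tip) ** (1.0 / 3.0)   # slice position below the tip, cm
grad = 6.0 * m / z**4                  # field gradient at the slice, G/cm
dz = dH_res / grad                     # sensitive-slice width

print(f"slice at z = {z * 1e4:.2f} um, width = {dz * 1e7:.1f} nm")
```

With these numbers the slice sits a few micrometers below the tip and is roughly ten nanometers wide, which illustrates why the gradient of the tip field sets the attainable resolution.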
The concept behind our method for detailed characterization of the tip field
profile is illustrated in Fig. 1. It requires a thin-film sample with sharp
edges. When the sensitive slice touches the sample edge, a leading edge signal
is detected. At this location, the sample edge is a tangent line to the
sensitive slice for a reasonable magnetic tip. Thus, scanning in 3D and
recording the locations corresponding to the leading edge enables full
reconstruction of the sensitive slice. If desired, it can then be
parameterized using dipolar, quadrupolar, etc. moments.
To illustrate this procedure, we report on MRFM measurements on a well
characterized DPPH film, while laterally scanning the cantilever over its
edge. We used a commercially available Veeco $Si_{3}N_{4}$ cantilever with the
resonance frequency of $\approx$ 8 kHz and the spring constant $k$ of
$\approx$ 0.01 N/m Veeco . The original tip was removed by focused ion milling
and a small magnetic particle of $Nd_{2}Fe_{14}B$ available from Magnequench
Inc. Magnequench has been glued to the end of a cantilever with Stycast 1266
epoxy in the presence of an aligning magnetic field. Subsequently, the tip was
magnetized in a field of 80 kOe. The MRFM tip has a spherical shape
with the diameter of $\approx$ 2.4 $\mu$m and its SEM images are shown in
panels (1) and (2) in Fig. 2. The saturation magnetization of $Nd_{2}Fe_{14}B$
particles has been measured in a SQUID magnetometer, and is equal to $4\pi
M_{s}$ = 13 kG Nazaretski 2006a . Based on the SEM image we estimate the probe
moment to be (7.5$\pm$0.4)$\times 10^{-9}$ emu, in agreement with the value of
(6.9$\pm$0.5)$\times 10^{-9}$ emu measured by cantilever magnetometry. The
cantilever is mounted on top of a double scanning stage of a low temperature
MRFM system Nazaretski 2006 ; Attocube . For data acquisition, the
temperature was stabilized at 10 K and the amplitude modulation scheme has
been implemented to couple to the in-resonance spins. The DPPH powder DPPH
was dissolved in acetone and deposited on a 100 $\mu$m thick silicon wafer in
a spin-coater at 3000 rpm. To protect the film, 20 nm of Ti was deposited on
top of DPPH. An approximately 2$\times$1.6 mm$^{2}$ piece was cleaved from the wafer and
glued to the strip-line resonator of the microscope. The structure of the film
and sharpness of edges were inspected in SEM and are shown in Fig. 2. The film
was found to be continuous, and its thickness varied between 400 and 600 nm.
Fig. 3 shows the typical MRFM spectrum recorded in a DPPH film. When the tip
is located above the film, the strongest tip field experienced by the sample
is situated directly under the probe magnet (assuming ${\bf m}$ $\parallel$
${\bf H}_{ext}$). The field value in the MRFM spectrum where the sensitive
slice just touches the DPPH film is called the leading edge Suter 2002 , and
is indicated by arrows in Fig. 3.
The large positive peak at $\approx$ 3.34 kOe corresponds to the bulk-like
resonance. It originates from the large region of the sample where the tip
field is small, but due to the large number of spins the MRFM signal is
significant. The field difference between the bulk-like resonance and the
position of the leading edge provides a direct measure of the probe field
strength.
Fig. 1 shows the schematic of the characterization experiment. We fixed the
probe-sample separation $z$, and approached different edges of the DPPH film
while tracking the leading edge. The left panel of Fig. 4 shows the field
evolution of the leading edge for two values of $z$ and three different
directions of lateral scanning over the film edge. The almost identical shape
of the curves indicates that m is approximately parallel to the direction of
${\bf H}_{ext}$. To a first approximation, our tip can be modeled as a
magnetic dipole. The field profile produced on the surface of the sample can
be written as follows Jackson 1975 :
$\displaystyle H(R,\theta,\varphi)=\frac{4\pi M_{s}r_{0}^{3}}{3}\left\\{\frac{-3z\sin\theta\,(x\sin\varphi+y\cos\varphi)}{R^{5}}+\frac{3z^{2}\cos\theta}{R^{5}}-\frac{\cos\theta}{R^{3}}\right\\},$ (3)
where $4\pi M_{s}$ is the saturation magnetization of $Nd_{2}Fe_{14}B$,
$r_{0}$ is the radius of the tip, $R$ is the distance from the tip center to
the point where the field is determined, and $\theta$ and $\varphi$ are the angles which describe the
spatial orientation of m (see Fig. 1).
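As a sanity check, Eq. 3 is simple enough to evaluate directly. The hedged Python sketch below uses the parameter values quoted in the text ($4\pi M_{s}$ = 13 kG, $r_{0}=1.2$ $\mu$m); for an aligned moment on the tip axis it reduces to the familiar dipole limit $2m/z^{3}$ with $m=\frac{4\pi M_{s}}{3}r_{0}^{3}\approx 7.5\times 10^{-9}$ emu, consistent with the moment estimated above.

```python
import math

def tip_field(x, y, z, four_pi_Ms=13.0e3, r0=1.2e-4, theta=0.0, phi=0.0):
    """Evaluate Eq. (3) at a point (x, y, z) relative to the tip center.
    CGS units: four_pi_Ms in G, lengths in cm; returns the field in G."""
    R = math.sqrt(x * x + y * y + z * z)
    pref = four_pi_Ms * r0**3 / 3.0      # equals the dipole moment m, in emu
    return pref * (
        -3.0 * z * math.sin(theta) * (x * math.sin(phi) + y * math.cos(phi)) / R**5
        + 3.0 * z**2 * math.cos(theta) / R**5
        - math.cos(theta) / R**3
    )

# Aligned moment on the axis: reduces to 2*m/z**3 (~555 G at z = 3 um).
print(tip_field(0.0, 0.0, 3.0e-4))
# A 20-degree tilt changes the lateral profile, cf. the dotted lines in Fig. 5(a).
print(tip_field(1.0e-4, 0.0, 3.0e-4, theta=math.radians(20), phi=math.radians(20)))
```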
The right panel of Fig. 4 shows the z-component of the probe field on the
sample’s surface as a function of $z$. The solid line is a fit using Eq. 3,
assuming parallel orientation of m and ${\bf H}_{ext}$. Fig. 5(a) shows the
comparison between the lateral field profile of the tip simulated according to
Eq. 3, and the actual data points taken from the left panel of Fig. 4. Good
agreement between the observed and expected behavior suggests that, indeed,
our probe tip can be approximated as a dipole, and its magnetization is
aligned along the direction of ${\bf H}_{ext}$. In the case of any significant
misalignment, the tip field profile would change substantially, as shown in
Fig. 5(a). For both simulations shown in Fig. 4 and 5, we had to offset the
probe-sample separation by 1.42 $\pm$ 0.03 $\mu$m ($z$ is the only free
parameter in the fit), which suggests that, due to the short-range probe-sample
interaction, the cantilever snaps to the sample at distances smaller than 1.42
$\mu$m Berger 1999 ; Dorofeyev 1999 . The presence of an offset may indicate
the reduced magnetic moment of the tip. However, our cantilever magnetometry
measurements of the tip moment agree well with the expected value, as
mentioned earlier in the paper. Moreover, in Fig. 5(b) we show the calculated
spatial field profile of 2 $\mu$m, 2.2 $\mu$m and 2.4 $\mu$m diameter tips.
The fit for the 2.4 $\mu$m diameter tip provides the best agreement with the
data points. Another argument in support of our tip model pertains to the
magnitude of the MRFM force exerted on a cantilever in a particular sensitive
slice. In Fig. 3 we take the measured MRFM force at $H_{ext}$ = 3.038 kOe and
compare it to our estimates. The calculations yield a force value of
$\approx$ 6.9$\times 10^{-13}$ N, in good agreement with the measured value of
5.7$\times 10^{-13}$ N. Thus, the dipolar approximation and our assumptions for the
tip moment were adequate for the present experiment. Importantly, the same
technique could be applied to map the field profile of a more irregularly shaped tip.
In summary, we have studied the evolution of locally excited electron-spin
resonance in a DPPH film. By tracking the position of the leading edge in MRFM
spectra for different heights and directions of approach to the sample, we
have determined the spatial field profile of the cantilever tip. Measuring the
MRFM signal onset over a large range of positions with adequate sensitivity
makes it possible to deconvolve the spatial field profile produced by arbitrarily
shaped magnetic tips used in magnetic resonance force microscopy.
This work was supported by the US Department of Energy and was performed, in
part, at the Center for Integrated Nanotechnologies at Los Alamos and Sandia
National Laboratories. Personnel at Ohio State University were supported by the
US Department of Energy through grant DE-FG02-03ER46054.
## References
* (1) D. Rugar, R. Budakian, H. J. Mamin, and W. Chui, Nature 430, 329 (2004)
* (2) H.J. Mamin, M. Poggio, C. L. Degen, D. Rugar, Nature Nanotech. 2, 301 (2007)
* (3) E. Nazaretski, D. V. Pelekhov, I. Martin, M. Zalalutdinov, J. W. Baldwin, T. Mewes, B. Houston, P. C. Hammel, and R. Movshovich, Appl. Phys. Lett. 90 234105 (2007)
* (4) T. Mewes, J. Kim, D. V. Pelekhov, G. N. Kakazei, P. E. Wigen, S. Batra, and P. C. Hammel, Phys. Rev. B 74, 144424 (2006)
* (5) S. Chao, W. M. Dougherty, J. L. Garbini and J. A. Sidles, Rev. Sci. Inst. 75, 1175 (2004)
* (6) K. Wago, D. Botkin, C. S. Yannoni, and D. Rugar, Appl. Phys. Lett. 72, 2757 (1998)
* (7) K. J. Bruland, W. M. Dougherty, J. L. Garbini, J. A. Sidles, and S. H. Chao, Appl. Phys. Lett. 73, 3159 (1998)
* (8) P. C. Hammel, D. V. Pelekhov, P. E. Wigen, T. R. Gosnell, M. M. Mizdor, and M. L. Roukes, Proceedings of IEEE, 91 789 (2003)
* (9) C. Rossel, P. Bauer, D. Zech, J. Hofer, M. Willemin, and H. Keller, J. Appl. Phys., 79 8166 (1996)
* (10) B. C. Stipe, H. J. Mamin, T. D. Stowe, T. W. Kenny, and D. Rugar, Phys. Rev. Lett. 86, 2874 (2001)
* (11) B. C. Stipe, H. J. Mamin, C. S. Yannoni, T. D. Stowe, T. W. Kenny, and D. Rugar, Phys. Rev. Lett. 87, 277602 (2001)
* (12) Veeco Probes, type MLCT-NO, cantilever C
* (13) http://www.magnequench.com/
A Staveley Sensors piezotube is mounted on top of an Attocube 3D positioner ANPxyz100/LIN/LT/HV equipped with optical position readout.
* (15) E. Nazaretski, J. D. Thompson, M. Zalalutdinov, J. W. Baldwin, B. Houston, T. Mewes, D. V. Pelekhov, P. Wigen, P. C. Hammel, and R. Movshovich, J. Appl. Phys. 101, 074905 (2007)
* (16) E. Nazaretski, T. Mewes, D. Pelekhov, P. C. Hammel, and R. Movshovich, AIP Conf. Proc. 850, 1641 (2006)
* (17) Sigma Chemical Co. (St. Louis, USA)
* (18) A. Suter, D. Pelekhov, M. Roukes, and P. C. Hammel, J. Magn. Res., 154, 210 (2002)
* (19) J. D. Jackson, Classical Electrodynamics, 3rd edition, Wiley, New York, 1999
* (20) M. Saint Jean, S. Hudlet, C. Guthmann, and J. Berger, J. Appl. Phys. 86, 5245 (1999)
* (21) I. Dorofeyev, H. Fuchs, G. Wenning, and B. Gotsmann, Phys. Rev. Lett. 83, 2402 (1999)
Figure Captions
FIG.1 Schematic of the tip characterization technique. Detection of the
leading edge signal indicates that the sample edge is tangent to the sensitive
slice. 3D scanning can thus be used to fully reconstruct the shape of the
sensitive slice.
FIG.2 Panels (1) and (2): SEM images of the probe magnet. Panel (3) shows the
edge of the DPPH film and panel (4) is the top view showing fine structures on
the surface of the film.
FIG.3 Amplitude and phase of the MRFM signal recorded at $T$ = 10 K,
$\omega_{RF}$ = 9.35 GHz, $z$ = 0.73 $\mu$m. The position of the leading edge
is indicated by arrows.
FIG.4 Left panel: field evolution of the leading edge as a function of lateral
position over the DPPH film edge. The upper and lower set of curves correspond
to $z$ = 2.35 $\mu$m and $z$ = 0.53 $\mu$m respectively. Circles represent the
approach of the sample from side ’1’, squares from side ’2’, and triangles from
side ’3’ of the sample as shown in Fig. 1. Right panel: the $z$-component of
the tip field as a function of the probe-sample separation (left Y-axis) and
the corresponding field gradient (right Y-axis). Solid curve is the fit to Eq.
3.
FIG.5 (a) Lateral field profile of the tip for approaches of sides ’1’ and ’3’
of the sample, as shown in Fig. 1. Data points are taken from the left panel
in Fig. 4. ’0’ on the X-axis corresponds to the edge of the film. Upper and
lower data points correspond to $z$ = 0.53 $\mu$m and $z$ = 2.35 $\mu$m
respectively. Solid curve is fitted to the data using Eq. 3. Dotted and dashed
lines show the expected field profile of the tip where $\theta$ = $\varphi$ =
20∘ and $\theta$ = -20∘, $\varphi$ = 20∘ respectively. (b) expected field
profile for the tip with $r_{0}$=1.2 $\mu$m, z-offset=1.4 $\mu$m (solid line),
$r_{0}$=1.1 $\mu$m, z-offset=1.12 $\mu$m (dotted line) and $r_{0}$=1.0 $\mu$m,
z-offset=0.85 $\mu$m (dashed line).
# An Approximation Algorithm for the Two-Edge-Connected Subgraph Problem via Triangle-free Two-Edge-Cover
This work was partially supported by the joint project of Kyoto University and Toyota Motor Corporation, titled “Advanced Mathematical Science for Mobility Society”, and by JSPS KAKENHI Grant Numbers JP20K11692 and JP22H05001.
Yusuke Kobayashi (Research Institute for Mathematical Sciences, Kyoto University. E-mail: {yusuke<EMAIL_ADDRESS>and Takashi Noguchi (Research Institute for Mathematical Sciences, Kyoto University)
###### Abstract
The $2$-Edge-Connected Spanning Subgraph problem (2-ECSS) is one of the most
fundamental and well-studied problems in the context of network design. In the
problem, we are given an undirected graph $G$, and the objective is to find a
$2$-edge-connected spanning subgraph $H$ of $G$ with the minimum number of
edges. For this problem, a lot of approximation algorithms have been proposed
in the literature. In particular, very recently, Garg, Grandoni, and Ameli
gave an approximation algorithm for 2-ECSS with factor $1.326$, which was the
best approximation ratio. In this paper, we give a
$(1.3+\varepsilon)$-approximation algorithm for 2-ECSS, where $\varepsilon$ is
an arbitrary positive fixed constant, which improves the previously known best
approximation ratio. In our algorithm, we compute a minimum triangle-free
$2$-edge-cover in $G$ with the aid of the algorithm for finding a maximum
triangle-free $2$-matching given by Hartvigsen. Then, with the obtained
triangle-free $2$-edge-cover, we apply the arguments by Garg, Grandoni, and
Ameli.
## 1 Introduction
In the field of survivable network design, a basic problem is to construct a
network with minimum cost that satisfies a certain connectivity constraint. A
seminal result by Jain [13] provides a $2$-approximation algorithm for a wide
class of survivable network design problems. For specific problems among them,
a lot of better approximation algorithms have been investigated in the
literature.
In this paper, we study the $2$-Edge-Connected Spanning Subgraph problem
(2-ECSS), which is one of the most fundamental and well-studied problems in
this context. In 2-ECSS, we are given an undirected graph $G=(V,E)$, and the
objective is to find a $2$-edge-connected spanning subgraph $H$ of $G$ with
the minimum number of edges. It was shown in [4, 5] that 2-ECSS does not admit
a PTAS unless ${\rm P}={\rm NP}$. Khuller and Vishkin [14] gave a
$3/2$-approximation algorithm for this problem, which was the starting point
of the study of approximation algorithms for 2-ECSS. Cheriyan, Sebő, and
Szigeti [1] improved this ratio to $17/12$, and later Hunkenschröder, Vempala,
and Vetta [20, 12] gave a $4/3$-approximation algorithm. By a completely
different approach, Sebő and Vygen [19] achieved the same approximation ratio.
Very recently, Garg, Grandoni, and Ameli [8] improved this ratio to $1.326$ by
introducing powerful reduction steps and developing the techniques in [12].
The contribution of this paper is to present a
$(1.3+\varepsilon)$-approximation algorithm for 2-ECSS for any
$\varepsilon>0$, which improves the previously best approximation ratio.
###### Theorem 1.
For any constant $\varepsilon>0$, there is a deterministic polynomial-time
$(1.3+\varepsilon)$-approximation algorithm for 2-ECSS.
Our algorithm and its analysis are heavily dependent on the well-developed
arguments by Garg, Grandoni, and Ameli [8]. In our algorithm, we first apply
the reduction steps given in [8]. Then, instead of a minimum $2$-edge-cover,
we compute a minimum _triangle-free_ $2$-edge-cover in the graph, which is the
key ingredient in our algorithm. We show that this can be done in polynomial
time with the aid of the algorithm for finding a maximum triangle-free
$2$-matching given by Hartvigsen [10] (see Theorem 4). Finally, we convert the
obtained triangle-free $2$-edge-cover into a spanning $2$-edge-connected
subgraph by using the arguments in [8].
Our main technical contribution is to point out the utility of Hartvigsen’s
algorithm [10] in the arguments by Garg, Grandoni, and Ameli [8]. It should be
noted that Hartvigsen’s algorithm has not received much attention in this
context.
#### Related Work
A natural extension of 2-ECSS is the $k$-Edge-Connected Spanning Subgraph
problem ($k$-ECSS), which is to find a $k$-edge-connected spanning subgraph of
the input graph with the minimum number of edges. For $k$-ECSS, several
approximation algorithms have been proposed, in which approximation factors
depend on $k$ [2, 7, 6]. We can also consider the weighted variant of 2-ECSS,
in which the objective is to find a $2$-edge-connected spanning subgraph with
the minimum total weight in a given edge-weighted graph. The result of Jain
[13] leads to a $2$-approximation algorithm for the weighted 2-ECSS, and it is
still the best known approximation ratio. For the case when all the edge
weights are $0$ or $1$, which is called the _forest augmentation problem_ ,
Grandoni, Ameli, and Traub [9] recently gave a $1.9973$-approximation
algorithm. See references in [8, 9] for more related work on survivable
network design problems.
It is well-known that a $2$-matching of maximum size can be found in
polynomial-time by using a matching algorithm; see e.g., [18, Section 30]. As
a variant of this problem, the problem of finding a maximum $2$-matching that
contains no cycle of length at most $k$, which is called the _$C_{\leq k}$
-free $2$-matching problem_, has been actively studied. Hartvigsen [10] gave a
polynomial-time algorithm for the $C_{\leq 3}$-free $2$-matching problem (also
called the _triangle-free $2$-matching problem_), and Papadimitriou showed the
NP-hardness for $k\geq 5$ (see [3]). The polynomial solvability of the
$C_{\leq 4}$-free $2$-matching problem has been open for more than 40 years.
The edge weighted variant of the $C_{\leq 3}$-free $2$-matching problem is
also a big open problem in this area, and some positive results are known for
special cases [11, 15, 17, 16]. See references in [16] for more related work
on the $C_{\leq k}$-free $2$-matching problem.
## 2 Preliminary
Throughout the paper, we only consider simple undirected graphs, i.e., every
graph has neither self-loops nor parallel edges. (It is shown in [12] that
this assumption is not essential when we consider $2$-ECSS.) A graph $G=(V,E)$
is said to be _$2$-edge-connected_ if $G\setminus\\{e\\}$ is connected for
any $e\in E$, and it is called _$2$-vertex-connected_ if $G\setminus\\{v\\}$
is connected for any $v\in V$ and $|V|\geq 3$. For a subgraph $H$ of $G$, its
vertex set and edge set are denoted by $V(H)$ and $E(H)$, respectively. A
subgraph $H$ of $G=(V,E)$ is _spanning_ if $V(H)=V(G)$. In the $2$-Edge-
Connected Spanning Subgraph problem ($2$-ECSS), we are given a graph $G=(V,E)$
and the objective is to find a $2$-edge-connected spanning subgraph $H$ of $G$
with the minimum number of edges (if one exists).
In this paper, a spanning subgraph $H$ is often identified with its edge set
$E(H)$. Let $H$ be a spanning subgraph (or an edge set) of $G$. A connected
component of $H$ which is 2-edge-connected is called a _2EC component of $H$_.
A 2EC component of $H$ is called an _$i$ -cycle 2EC component_ if it is a
cycle of length $i$. In particular, a $3$-cycle 2EC component is called a
_triangle 2EC component_. A maximal $2$-edge-connected subgraph $B$ of $H$ is
called a _block_ of $H$ if $|V(B)|\geq 3$ and $B$ is not a 2EC component. An
edge $e\in E(H)$ is called a _bridge_ of $H$ if $H\setminus\\{e\\}$ has more
connected components than $H$. A block $B$ of $H$ is called a _leaf block_ if
$H$ has exactly one bridge incident to $B$, and an _inner block_ otherwise.
Let $G=(V,E)$ be a graph. For an edge set $F\subseteq E$ and a vertex $v\in
V$, let $d_{F}(v)$ denote the number of edges in $F$ that are incident to $v$.
An edge set $F\subseteq E$ is called a _$2$-matching_ if $d_{F}(v)\leq 2$ for
any $v\in V$, and it is called a _$2$-edge-cover_ if $d_{F}(v)\geq 2$ for any
$v\in V$. (Such edge sets are sometimes called _simple_ $2$-matchings and
_simple_ $2$-edge-covers in the literature.)
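As a minimal illustration of these two degree conditions (notation ours, not from [8]), the Python snippet below represents each edge as a frozenset $\\{u,v\\}$; this convention is reused in the sketches later in this section.

```python
from collections import Counter

def deg(F):
    """d_F(v) for every vertex touched by the edge set F."""
    return Counter(v for e in F for v in e)

def is_two_matching(V, F):
    d = deg(F)
    return all(d[v] <= 2 for v in V)

def is_two_edge_cover(V, F):
    d = deg(F)
    return all(d[v] >= 2 for v in V)

# Example: a 4-cycle on {0, 1, 2, 3} is both a 2-matching and a 2-edge-cover.
V = {0, 1, 2, 3}
C4 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
assert is_two_matching(V, C4) and is_two_edge_cover(V, C4)
```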
## 3 Algorithm in Previous Work
Since our algorithm is based on the well-developed $1.326$-approximation
algorithm given by Garg, Grandoni, and Ameli [8], we describe some of their
results in this section.
### 3.1 Reduction to Structured Graphs
In the algorithm by Garg, Grandoni, and Ameli [8], they first reduce the
problem to the case when the input graph has some additional conditions, where
such a graph is called a $(5/4,\varepsilon)$-structured graph. In what follows
in this paper, let $\varepsilon>0$ be a sufficiently small positive fixed
constant, which will appear in the approximation factor. In particular, we
suppose that $0\leq\varepsilon\leq 1/24$, which is used in the argument in
[8]. We say that a graph $G=(V,E)$ is _$(5/4,\varepsilon)$ -structured_ if it
is $2$-vertex-connected, it contains at least ${2}/{\varepsilon}$ vertices,
and it does not contain the following structures:
* •
($5/4$-contractible subgraph) a $2$-edge-connected subgraph $C$ of $G$ such
that every $2$-edge-connected spanning subgraph of $G$ contains at least
$\frac{4}{5}|E(C)|$ edges with both endpoints in $V(C)$;
* •
(irrelevant edge) an edge $uv\in E$ such that $G\setminus\\{u,v\\}$ is not
connected;
* •
(non-isolating $2$-vertex-cut) a vertex set $\\{u,v\\}\subseteq V$ of $G$ such
that $G\setminus\\{u,v\\}$ has at least three connected components or has
exactly two connected components, both of which contain at least two
vertices.
The following lemma shows that it suffices to consider
$(5/4,\varepsilon)$-structured graphs when we design approximation algorithms.
###### Lemma 2 (Garg, Grandoni, and Ameli [8, Lemma 2.2]).
For $\alpha\geq\frac{5}{4}$, if there exists a deterministic polynomial-time
$\alpha$-approximation algorithm for 2-ECSS on $(5/4,\varepsilon)$-structured
graphs, then there exists a deterministic polynomial-time
$(\alpha+2\varepsilon)$-approximation algorithm for 2-ECSS.
### 3.2 Semi-Canonical Two-Edge-Cover
A $2$-edge-cover $H$ of $G$ (which is identified with a spanning subgraph) is
called _semi-canonical_ if it satisfies the following conditions.
1. (1)
Each 2EC component of $H$ is a cycle or contains at least $7$ edges.
2. (2)
Each leaf block contains at least $6$ edges and each inner block contains at
least $4$ edges.
3. (3)
There is no pair of edge sets $F\subseteq H$ and $F^{\prime}\subseteq
E\setminus H$ such that $|F|=|F^{\prime}|\leq 3$, $(H\setminus F)\cup
F^{\prime}$ is a $2$-edge-cover with fewer connected components than $H$, and
$F$ contains an edge in some triangle 2EC component of $H$.
4. (4)
There is no pair of edge sets $F\subseteq H$ and $F^{\prime}\subseteq
E\setminus H$ such that $|F|=|F^{\prime}|=2$, $(H\setminus F)\cup F^{\prime}$
is a $2$-edge-cover with fewer connected components than $H$, both edges in
$F^{\prime}$ connect two $4$-cycle 2EC components, say $C_{1}$ and $C_{2}$,
and $F$ is contained in $C_{1}\cup C_{2}$. In other words, by removing $2$
edges and adding $2$ edges, we cannot merge two $4$-cycle 2EC components into
a cycle of length $8$.
###### Lemma 3 (Garg, Grandoni, and Ameli [8, Lemma 2.6]).
Suppose we are given a semi-canonical $2$-edge-cover $H$ of a
$(5/4,\varepsilon)$-structured graph $G$ with $b|H|$ bridges and $t|H|$ edges
belonging to triangle 2EC components of $H$. Then, in polynomial time, we can
compute a $2$-edge-connected spanning subgraph $S$ of size at most
$(\frac{13}{10}+\frac{1}{30}t-\frac{1}{20}b)|H|$.
###### Remark 1.
In the original statement of [8, Lemma 2.6], $H$ is assumed to satisfy a
stronger condition than semi-canonical, called canonical. A $2$-edge-cover $H$
is said to be _canonical_ if it satisfies (1) and (2) in the definition of
semi-canonical $2$-edge-covers, and also the following condition: there is no
pair of edge sets $F\subseteq H$ and $F^{\prime}\subseteq E\setminus H$ such
that $|F|=|F^{\prime}|\leq 3$ and $(H\setminus F)\cup F^{\prime}$ is a
$2$-edge-cover with fewer connected components than $H$. However, one can see
that the condition “canonical” can be relaxed to “semi-canonical” by following
the proof of [8, Lemma 2.6]; see the proofs of Lemmas D.3, D.4, and D.11 in
[8].
## 4 Algorithm via Triangle-Free Two-Edge-Cover
The idea of our algorithm is quite simple: we construct a semi-canonical
$2$-edge-cover $H$ with no triangle 2EC components and then apply Lemma 3. We
say that an edge set $F\subseteq E$ is _triangle-free_ if there is no triangle
2EC components of $F$. Note that a triangle-free edge set $F$ may contain a
cycle of length three that is contained in a larger connected component. In
order to construct a semi-canonical triangle-free $2$-edge-cover, we use a
polynomial-time algorithm for finding a triangle-free $2$-matching given by
Hartvigsen [10].
###### Theorem 4 (Hartvigsen [10, Theorem 3.2 and Proposition 3.4]).
For a graph $G$, we can find a triangle-free $2$-matching in $G$ with maximum
cardinality in polynomial time.
In Section 4.1, we give an algorithm for finding a minimum triangle-free
$2$-edge-cover with the aid of Theorem 4. Then, we transform it into a semi-
canonical triangle-free $2$-edge-cover in Section 4.2. Using the obtained
$2$-edge-cover, we give a proof of Theorem 1 in Section 4.3.
### 4.1 Minimum Triangle-Free Two-Edge-Cover
As with the relationship between $2$-matchings and $2$-edge-covers (see e.g.
[18, Section 30.14]), triangle-free $2$-matchings and triangle-free $2$-edge-
covers are closely related to each other, which can be stated as the following
two lemmas.
###### Lemma 5.
Let $G=(V,E)$ be a connected graph such that the minimum degree is at least
two and $|V|\geq 4$. Given a triangle-free $2$-matching $M$ in $G$, in
polynomial time, we can compute a triangle-free $2$-edge-cover $C$ of $G$ with
size at most $2|V|-|M|$.
###### Proof.
Starting with $F=M$, we perform the following update repeatedly while $F$ is
not a $2$-edge-cover:
> Choose a vertex $v\in V$ with $d_{F}(v)<2$ and an edge $vw\in E\setminus F$
> incident to $v$.
>
> 1. (i)
>
> If $F\cup\\{vw\\}$ is triangle-free, then add $vw$ to $F$.
>
> 2. (ii)
>
> Otherwise, $F\cup\\{vw\\}$ contains a triangle 2EC component with vertex set
> $\\{u,v,w\\}$ for some $u\in V$. In this case, choose an edge $e$ connecting
> $\\{u,v,w\\}$ and $V\setminus\\{u,v,w\\}$, and add both $vw$ and $e$ to $F$.
>
>
If $F$ becomes a $2$-edge-cover, then the procedure terminates by returning
$C=F$. It is obvious that this procedure terminates in polynomial steps and
returns a triangle-free $2$-edge-cover.
We now analyze the size of the output $C$. For an edge set $F\subseteq E$,
define $g(F)=\sum_{v\in V}\max\\{2-d_{F}(v),0\\}$. Then, in each iteration of
the procedure, we observe the following: in case (i), one edge is added to $F$
and $g(F)$ decreases by at least one; in case (ii), two edges are added to $F$
and $g(F)$ decreases by at least two, because $d_{F}(v)=d_{F}(w)=1$ before the
update. With this observation, we see that $|C|-|M|\leq g(M)-g(C)=\sum_{v\in
V}(2-d_{M}(v))$, where we note that $M$ is a $2$-matching and $C$ is a
$2$-edge-cover. Therefore, it holds that
$|C|\leq|M|+\sum_{v\in V}(2-d_{M}(v))=|M|+(2|V|-2|M|)=2|V|-|M|,$
which completes the proof. ∎
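The following Python sketch implements the augmentation procedure from this proof. The helper names are ours; edges (including the input edge set $E$) are sets of frozensets, as in the earlier snippet, and termination and the size bound are exactly those argued above.

```python
from collections import defaultdict

def adj_map(F):
    """Adjacency map of the edge set F (edges are frozensets {u, v})."""
    adj = defaultdict(set)
    for e in F:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    return adj

def component_of(v, adj):
    """Vertex set of the connected component containing v."""
    comp, stack = set(), [v]
    while stack:
        x = stack.pop()
        if x not in comp:
            comp.add(x)
            stack.extend(adj[x])
    return comp

def is_triangle_component(comp, adj):
    """A connected component is a triangle 2EC component iff it is a 3-cycle."""
    return len(comp) == 3 and all(len(adj[x]) == 2 for x in comp)

def grow_to_two_edge_cover(V, E, M):
    """Lemma 5 procedure: grow a triangle-free 2-matching M of a connected
    graph G = (V, E) with minimum degree >= 2 and |V| >= 4 into a
    triangle-free 2-edge-cover of size at most 2|V| - |M|."""
    F = set(M)
    while True:
        adj = adj_map(F)
        v = next((u for u in V if len(adj[u]) < 2), None)
        if v is None:
            return F                               # F is now a 2-edge-cover
        vw = next(e for e in E - F if v in e)      # exists since deg_G(v) >= 2
        F.add(vw)
        adj = adj_map(F)
        comp = component_of(v, adj)
        if is_triangle_component(comp, adj):       # case (ii): a triangle arose
            F.add(next(e for e in E - F if len(e & comp) == 1))
```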
###### Lemma 6.
Given a triangle-free $2$-edge-cover $C$ in a graph $G=(V,E)$, in polynomial
time, we can compute a triangle-free $2$-matching $M$ of $G$ with size at
least $2|V|-|C|$.
###### Proof.
Starting with $F=C$, we perform the following update repeatedly while $F$ is
not a $2$-matching:
> Choose a vertex $v\in V$ with $d_{F}(v)>2$ and an edge $vw\in F$ incident to
> $v$.
>
> 1. (i)
>
> If $F\setminus\\{vw\\}$ is triangle-free, then remove $vw$ from $F$.
>
> 2. (ii)
>
> If $F\setminus\\{vw\\}$ contains a triangle 2EC component whose vertex set
> is $\\{v,v_{1},v_{2}\\}$ for some $v_{1},v_{2}\in V$, then remove $vv_{1}$
> from $F$.
>
> 3. (iii)
>
> If neither of the above holds, then $F\setminus\\{vw\\}$ contains a triangle
> 2EC component whose vertex set is $\\{w,w_{1},w_{2}\\}$ for some
> $w_{1},w_{2}\in V$. In this case, remove $ww_{1}$ from $F$.
>
>
If $F$ becomes a $2$-matching, then the procedure terminates by returning
$M=F$. It is obvious that this procedure terminates in polynomial steps and
returns a triangle-free $2$-matching.
We now analyze the size of the output $M$. For an edge set $F\subseteq E$,
define $g(F)=\sum_{v\in V}\max\\{d_{F}(v)-2,0\\}$. Then, in each iteration of
the procedure, we observe that one edge is removed from $F$ and $g(F)$
decreases by at least one, where we note that $d_{F}(w)=3$ before the update
in case (iii). With this observation, we see that $|C|-|M|\leq
g(C)-g(M)=\sum_{v\in V}(d_{C}(v)-2)$, where we note that $C$ is a $2$-edge-
cover and $M$ is a $2$-matching. Therefore, it holds that
$|M|\geq|C|-\sum_{v\in V}(d_{C}(v)-2)=|C|-(2|C|-2|V|)=2|V|-|C|,$
which completes the proof. ∎
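Symmetrically, the pruning procedure of this proof admits a short sketch reusing `adj_map`, `component_of`, and `is_triangle_component` from the previous snippet; since a new triangle 2EC component created by deleting $vw$ must contain $v$ or $w$, checking those two vertices covers cases (ii) and (iii).

```python
def shrink_to_two_matching(V, E, C):
    """Lemma 6 procedure: prune a triangle-free 2-edge-cover C of G = (V, E)
    down to a triangle-free 2-matching of size at least 2|V| - |C|."""
    F = set(C)
    while True:
        adj = adj_map(F)
        v = next((u for u in V if len(adj[u]) > 2), None)
        if v is None:
            return F                              # F is now a 2-matching
        vw = next(e for e in F if v in e)
        w = next(iter(vw - {v}))
        radj = adj_map(F - {vw})
        for x in (v, w):                          # cases (ii) and (iii)
            comp = component_of(x, radj)
            if is_triangle_component(comp, radj):
                x1 = next(iter(radj[x]))          # a triangle neighbour of x
                F.remove(frozenset((x, x1)))      # remove x-x1 instead of v-w
                break
        else:
            F.remove(vw)                          # case (i): removal is safe
```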
By using these lemmas and Theorem 4, we can compute a triangle-free $2$-edge-
cover with minimum cardinality in polynomial time.
###### Proposition 7.
For a graph $G=(V,E)$, we can compute a triangle-free $2$-edge-cover of $G$
with minimum cardinality in polynomial time (if one exists).
###### Proof.
It suffices to consider the case when $G$ is a connected graph such that the
minimum degree is at least two and $|V|\geq 4$. Let $M$ be a triangle-free
$2$-matching in $G$ with maximum cardinality, which can be computed in
polynomial time by Theorem 4. Then, by Lemma 5, we can construct a triangle-
free $2$-edge-cover $C$ of $G$ with size at most $2|V|-|M|$.
We now show that $G$ has no triangle-free $2$-edge-cover $C^{\prime}$ with
$|C^{\prime}|<2|V|-|M|$. Assume to the contrary that there exists a triangle-
free $2$-edge-cover $C^{\prime}$ of size smaller than $2|V|-|M|$. Then, by
Lemma 6, we can construct a triangle-free $2$-matching $M^{\prime}$ of $G$
with size at least $2|V|-|C^{\prime}|$. Since $|M^{\prime}|\geq
2|V|-|C^{\prime}|>2|V|-(2|V|-|M|)=|M|$, this contradicts that $M$ is a
triangle-free $2$-matching with maximum cardinality. Therefore, $G$ has no
triangle-free $2$-edge-cover of size smaller than $2|V|-|M|$, which implies
that $C$ is a triangle-free $2$-edge-cover with minimum cardinality. ∎
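Combining Theorem 4 with Lemmas 5 and 6 as in this proof, the whole of Proposition 7 reduces to a thin wrapper. The maximum triangle-free $2$-matching solver is passed in below as an assumed black-box callable, since Hartvigsen's algorithm is far too involved to sketch here.

```python
def min_triangle_free_two_edge_cover(V, E, max_tf_two_matching):
    """Proposition 7 sketch: a minimum triangle-free 2-edge-cover has size
    2|V| - |M*| for a maximum triangle-free 2-matching M*.  The solver
    `max_tf_two_matching` (e.g. Hartvigsen's algorithm, Theorem 4) is an
    assumed black-box callable returning a set of frozenset edges."""
    M = max_tf_two_matching(V, E)
    C = grow_to_two_edge_cover(V, E, M)      # Lemma 5
    assert len(C) <= 2 * len(V) - len(M)     # bound established in the proof
    return C
```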
### 4.2 Semi-Canonical Triangle-Free Two-Edge-Cover
We show the following lemma saying that a triangle-free $2$-edge-cover can be
transformed into a semi-canonical triangle-free $2$-edge-cover without
increasing the size. Although the proof is almost the same as that of [8,
Lemma 2.4], we describe it for completeness.
###### Lemma 8.
Given a triangle-free $2$-edge-cover $H$ of a $(5/4,\varepsilon)$-structured
graph $G=(V,E)$, in polynomial time, we can compute a triangle-free $2$-edge-
cover $H^{\prime}$ of no larger size which is semi-canonical.
###### Proof.
Recall that an edge set is identified with the corresponding spanning subgraph
of $G$. Starting with $H^{\prime}=H$, while $H^{\prime}$ is not semi-canonical
we apply one of the following operations in this order of priority. We note
that $H^{\prime}$ is always triangle-free during the procedure, and hence it
always satisfies condition (3) in the definition of semi-canonical $2$-edge-
cover.
1. (a)
If there exists an edge $e\in H^{\prime}$ such that
$H^{\prime}\setminus\\{e\\}$ is a triangle-free $2$-edge-cover, then remove
$e$ from $H^{\prime}$.
2. (b)
If $H^{\prime}$ does not satisfy condition (4), then we merge two $4$-cycle
2EC components into a cycle of length $8$ by removing $2$ edges and adding $2$
edges. Note that the obtained edge set is a triangle-free $2$-edge-cover that
has fewer connected components.
3. (c)
Suppose that condition (1) does not hold, i.e., there exists a 2EC component
$C$ of $H^{\prime}$ with fewer than $7$ edges that is not a cycle. Since $C$
is $2$-edge-connected and not a cycle, we obtain $|E(C)|\geq|V(C)|+1$. If
$|V(C)|=4$, then $C$ contains at least $5$ edges and contains a cycle of
length $4$, which contradicts the assumption that operation (a) can no longer
be applied. Therefore, $|V(C)|=5$
and $|E(C)|=6$. Since operation (a) is not applied, $C$ is either a bowtie
(i.e., two triangles that share a common vertex) or a $K_{2,3}$; see figures
in the proof of [8, Lemma 2.4].
1. (c1)
Suppose that $C$ is a bowtie that has two triangles $\\{v_{1},v_{2},u\\}$ and
$\\{v_{3},v_{4},u\\}$. If $G$ contains an edge between $\\{v_{1},v_{2}\\}$ and
$\\{v_{3},v_{4}\\}$, then we can replace $C$ with a cycle of length $5$, which
decreases the size of $H^{\prime}$. Otherwise, by the $2$-vertex-connectivity
of $G$, there exists an edge $zw\in E\setminus H^{\prime}$ such that $z\in
V\setminus V(C)$ and $w\in\\{v_{1},v_{2},v_{3},v_{4}\\}$. In this case, we
replace $H^{\prime}$ with $(H^{\prime}\setminus\\{uw\\})\cup\\{zw\\}$. Then,
the obtained edge set is a triangle-free $2$-edge-cover with the same size,
which has fewer connected components.
2. (c2)
Suppose that $C$ is a $K_{2,3}$ with two sides $\\{v_{1},v_{2}\\}$ and
$\\{w_{1},w_{2},w_{3}\\}$. If every $w_{i}$ has degree exactly $2$, then every
feasible $2$-edge-connected spanning subgraph contains all the edges of $C$,
and hence $C$ is a $\frac{5}{4}$-contractible subgraph, which contradicts the
assumption that $G$ is $(5/4,\varepsilon)$-structured. If $G$ contains an edge
$w_{i}w_{j}$ for distinct $i,j\in\\{1,2,3\\}$, then we can replace $C$ with a
cycle of length $5$, which decreases the size of $H^{\prime}$. Otherwise,
since some $w_{i}$ has degree at least $3$, there exists an edge $w_{i}u\in
E\setminus H^{\prime}$ such that $i\in\\{1,2,3\\}$ and $u\in V\setminus V(C)$.
In this case, we replace $H^{\prime}$ with
$(H^{\prime}\setminus\\{v_{1}w_{i}\\})\cup\\{w_{i}u\\}$. Then, the obtained
edge set is a triangle-free $2$-edge-cover with the same size, which has fewer
connected components.
4. (d)
Suppose that the first half of condition (2) does not hold, i.e., there exists
a leaf block $B$ that has at most $5$ edges. Let $v_{1}$ be the only vertex in
$B$ such that all the edges connecting $V(B)$ and $V\setminus V(B)$ are
incident to $v_{1}$. Since operation (a) is not applied, we see that $B$ is a
cycle of length at most $5$. Let $v_{1},\dots,v_{\ell}$ be the vertices of $B$
that appear along the cycle in this order. We consider the following cases
separately; see figures in the proof of [8, Lemma 2.4].
1. (d1)
Suppose that there exists an edge $zw\in E\setminus H^{\prime}$ such that
$z\in V\setminus V(B)$ and $w\in\\{v_{2},v_{\ell}\\}$. In this case, we
replace $H^{\prime}$ with $(H^{\prime}\setminus\\{v_{1}w\\})\cup\\{zw\\}$.
2. (d2)
Suppose that $v_{2}$ and $v_{\ell}$ are adjacent only to vertices in $V(B)$ in
$G$, which implies that $\ell\in\\{4,5\\}$. If $v_{2}v_{\ell}\not\in E$, then
every feasible 2EC spanning subgraph contains four edges (incident to $v_{2}$
and $v_{\ell}$) with both endpoints in $V(B)$, and hence $B$ is a
$\frac{5}{4}$-contractible subgraph, which contradicts the assumption that $G$
is $(5/4,\varepsilon)$-structured. Thus, $v_{2}v_{\ell}\in E$. Since there
exists an edge connecting $V\setminus V(B)$ and $V(B)\setminus\\{v_{1}\\}$ by
the $2$-vertex-connectivity of $G$, without loss of generality, we may assume
that $G$ has an edge $v_{3}z$ with $z\in V\setminus V(B)$. In this case, we
replace $H^{\prime}$ with
$(H^{\prime}\setminus\\{v_{1}v_{\ell},v_{2}v_{3}\\})\cup\\{v_{3}z,v_{2}v_{\ell}\\}$.
In both cases, the obtained edge set is a triangle-free $2$-edge-cover with
the same size. Furthermore, we see that either (i) the obtained edge set has
fewer connected components or (ii) it has the same number of connected
components and fewer bridges.
5. (e)
Suppose that the latter half of condition (2) does not hold, i.e., there
exists an inner block $B$ that has at most $3$ edges. Then, $B$ is a triangle.
Let $\\{v_{1},v_{2},v_{3}\\}$ be the vertex set of $B$. If there are at least
two bridge edges incident to distinct vertices in $V(B)$, say $wv_{1}$ and
$zv_{2}$, then edge $v_{1}v_{2}$ has to be removed by operation (a), which is
a contradiction. Therefore, all the bridge edges in $H^{\prime}$ incident to
$B$ are incident to the same vertex $v\in V(B)$. In this case, we apply the
same operation as (d).
We can easily see that each operation above can be done in polynomial time. We
also see that each operation decreases the lexicographical ordering of
$(|H^{\prime}|,{\rm cc}(H^{\prime}),{\rm br}(H^{\prime}))$, where ${\rm
cc}(H^{\prime})$ is the number of connected components in $H^{\prime}$ and
${\rm br}(H^{\prime})$ is the number of bridges in $H^{\prime}$. This shows
that the procedure terminates in polynomial steps. After the procedure,
$H^{\prime}$ is a semi-canonical triangle-free $2$-edge-cover with
$|H^{\prime}|\leq|H|$, which completes the proof. ∎
### 4.3 Proof of Theorem 1
By Lemma 2, in order to prove Theorem 1, it suffices to give a
$\frac{13}{10}$-approximation algorithm for 2-ECSS in
$(5/4,\varepsilon)$-structured graphs for a sufficiently small fixed
$\varepsilon>0$. Let $G=(V,E)$ be a $(5/4,\varepsilon)$-structured graph. By
Proposition 7, we can compute a minimum-size triangle-free $2$-edge-cover $H$
of $G$ in polynomial-time. Note that the optimal value ${\sf OPT}$ of 2-ECSS
in $G$ is at least $|H|$, because every feasible solution for 2-ECSS is a
triangle-free $2$-edge-cover. By Lemma 8, $H$ can be transformed into a semi-
canonical triangle-free $2$-edge-cover $H^{\prime}$ with
$|H^{\prime}|\leq|H|$. Since $H^{\prime}$ is triangle-free, by applying Lemma
3 with $H^{\prime}$, we obtain a $2$-edge-connected spanning subgraph $S$ of
size at most $(\frac{13}{10}-\frac{1}{20}b)|H^{\prime}|$, where $H^{\prime}$
has $b|H^{\prime}|$ bridges. Therefore, we obtain
$|S|\leq\left(\frac{13}{10}-\frac{1}{20}b\right)|H^{\prime}|\leq\frac{13}{10}|H|\leq\frac{13}{10}{\sf
OPT},$
which shows that $S$ is a $\frac{13}{10}$-approximate solution for 2-ECSS in
$G$. This completes the proof of Theorem 1. ∎
## 5 Concluding Remarks
In this paper, we have presented a $(1.3+\varepsilon)$-approximation algorithm
for 2-ECSS, which achieves the currently best approximation ratio. We remark
that our algorithm is far from practical, because we
utilize Hartvigsen’s algorithm [10], which is quite complicated. Therefore, it
will be interesting to design a simple and easy-to-understand approximation
algorithm with (almost) the same approximation ratio as ours. Another possible
direction of future research is to further improve the approximation ratio by
improving Lemma 3.
## References
* [1] Joseph Cheriyan, András Sebő, and Zoltán Szigeti. Improving on the 1.5-approximation of a smallest 2-edge connected spanning subgraph. SIAM Journal on Discrete Mathematics, 14(2):170–180, 2001.
* [2] Joseph Cheriyan and Ramakrishna Thurimella. Approximating minimum-size $k$-connected spanning subgraphs via matching. SIAM Journal on Computing, 30(2):528–560, 2000.
* [3] Gérard Cornuéjols and William Pulleyblank. A matching problem with side conditions. Discrete Mathematics, 29(2):135–159, 1980.
* [4] Artur Czumaj and Andrzej Lingas. On approximability of the minimum-cost $k$-connected spanning subgraph problem. In Proceedings of the 10th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 1999), pages 281–290, 1999.
* [5] Cristina G Fernandes. A better approximation ratio for the minimum size $k$-edge-connected spanning subgraph problem. Journal of Algorithms, 28(1):105–124, 1998.
* [6] Harold N. Gabow and Suzanne R. Gallagher. Iterated rounding algorithms for the smallest $k$-edge connected spanning subgraph. SIAM Journal on Computing, 41(1):61–103, 2012.
* [7] Harold N. Gabow, Michel X. Goemans, Éva Tardos, and David P. Williamson. Approximating the smallest $k$-edge connected spanning subgraph by LP-rounding. Networks, 53(4):345–357, 2009.
* [8] Mohit Garg, Fabrizio Grandoni, and Afrouz Jabal Ameli. Improved approximation for two-edge-connectivity. In Proceedings of the 34th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2023), pages 2368–2410, 2023.
* [9] Fabrizio Grandoni, Afrouz Jabal Ameli, and Vera Traub. Breaching the 2-approximation barrier for the forest augmentation problem. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2022), page 1598–1611, 2022.
* [10] David Hartvigsen. Extensions of Matching Theory. PhD thesis, Carnegie Mellon University, 1984. Available at https://david-hartvigsen.net.
* [11] David Hartvigsen and Yanjun Li. Polyhedron of triangle-free simple $2$-matchings in subcubic graphs. Mathematical Programming, 138:43–82, 2013.
* [12] Christoph Hunkenschröder, Santosh Vempala, and Adrian Vetta. A 4/3-approximation algorithm for the minimum 2-edge connected subgraph problem. ACM Transactions on Algorithms, 15(4):1–28, 2019.
* [13] Kamal Jain. A factor 2 approximation algorithm for the generalized Steiner network problem. Combinatorica, 21:39–60, 2001.
* [14] Samir Khuller and Uzi Vishkin. Biconnectivity approximations and graph carvings. Journal of the ACM, 41(2):214–235, 1994.
* [15] Yusuke Kobayashi. A simple algorithm for finding a maximum triangle-free $2$-matching in subcubic graphs. Discrete Optimization, 7:197–202, 2010.
* [16] Yusuke Kobayashi. Weighted triangle-free 2-matching problem with edge-disjoint forbidden triangles. Mathematical Programming, 192(1):675–702, 2022.
* [17] Katarzyna Paluch and Mateusz Wasylkiewicz. A simple combinatorial algorithm for restricted 2-matchings in subcubic graphs - via half-edges. Information Processing Letters, 171:106146, 2021.
* [18] Alexander Schrijver. Combinatorial Optimization: Polyhedra and Efficiency, volume 24 of Algorithms and Combinatorics. Springer-Verlag, Berlin, 2003.
* [19] András Sebő and Jens Vygen. Shorter tours by nicer ears: 7/5-approximation for the graph-tsp, 3/2 for the path version, and 4/3 for two-edge-connected subgraphs. Combinatorica, 34(5):597–629, 2014.
* [20] Santosh Vempala and Adrian Vetta. Factor 4/3 approximations for minimum 2-connected subgraphs. In Proceedings of the Third International Workshop on Approximation Algorithms for Combinatorial Optimization (APPROX 2000), pages 262–273, 2000.
|
# Closed formula for the transport of micro- and nano-particles across model porous media
Paolo Malgaretti <EMAIL_ADDRESS> Helmholtz Institute Erlangen-Nürnberg for Renewable Energy (IEK-11), Forschungszentrum Jülich, Erlangen, Germany
Jens Harting, Helmholtz Institute Erlangen-Nürnberg for Renewable Energy (IEK-11), Forschungszentrum Jülich, Erlangen, Germany; Department of Applied Physics, Eindhoven University of Technology, Eindhoven, The Netherlands
###### Abstract
In the last decade the Fick-Jacobs approximation has been exploited to capture
the transport across constrictions. Here, we review the derivation of the
Fick-Jacobs equation with particular emphasis on its linear response regime.
We show that for fore-aft symmetric channels the flux of non-interacting
systems is fully captured by its linear response regime. For this case we
derive a very simple formula that captures the correct trends and that can be
exploited as a simple tool to design experiments or simulations. Finally, we
show that higher order corrections in the flux may appear for non-symmetric
channels.
## I Introduction
It is common to experience the long queues that form when a constriction occurs on a
highway Lighthill and Whitham (1955); Wang et al. (2013). Such an (unlucky)
phenomenon is clearly the result of the “local” confinement: due to the
constriction, vehicles slow down hence reducing the local “mass” flux as
compared to the clear part of the highway. Such a local reduction of the mass
flow causes the onset of the annoying queues that we experience every now and then. This phenomenon does not occur only on highways. It becomes a
major issue close to emergency exits in the case of panic Vermuyten et al.
(2016). The very same dynamics occurs also at smaller scales and for simpler
systems. For example, it is common experience that it is difficult to extract
pills from a container if the opening is too small. Here, pills tend to “clog”, i.e., to form stable structures close to the opening of the container that prevent pills from going out. Very similar dynamics occur in silos
containing crops Jeong et al. (2018), in erosion Jäger et al. (2018), in
suspensions of hard and soft particles Marin et al. (2018); Kusters et al.
(2014); Bielinski et al. (2021), in herds of sheep Garcimartín et al. (2015),
in the onset of panic in ants Altshuler et al. (2005), and even humans
Zuriguel et al. (2020).
The effect of confinement does not have to be unpleasant, as it is for traffic
jams, or inconvenient, as it is for the clogging of silos. On the contrary, tuning the
shape of the confining media can be an intriguing and novel way to control the
dynamics of the confined system. For example, microfluidic devices exploit
variations of the section of the micro channels they are made of to control
the dynamics of fluid and to induce the formation of droplets Squires and
Quake (2005); Dressaire and Sauret (2017); Douféne et al. (2019); Convery and
Gadegaard (2019). Similarly, Tunable Resistive Pulse Sensing (TRPS) techniques
exploit micro- and nano-pores to analyze small particles ranging from a few tens of nanometers up to the micrometric scale Weatherall and Willmott (2015). In particular, TRPS has been used to directly detect antibody-antigen binding Saleh and Sohn (2003), to measure the electrophoretic mobility of colloidal particles Ito
et al. (2004), to perform single-molecule detection Heins et al. (2005) and to
measure the zeta-potential of nanometric particles Arjmandi et al. (2012).
Alternatively, chromatography techniques have been developed to separate
micro- or nano-particles depending on both their size as well as their surface
properties Robards and Ryan (2022); Reithinger and Arlt (2011); Michaud et al.
(2021); Seidel-Morgenstern et al. (2008). Finally, at even smaller scales,
nanopores have been designed to sequence DNA molecules Soni et al. (2010).
Transport in confinement is not relevant only for particle detection/analysis.
Indeed, the flow of fluids across a porous medium is crucial in diverse
scenarios. For example, oil recovery industries have put much effort into
developing techniques to maximize the extraction of oil from the rock matrix
it is embedded in Carvalho (2015); Foroozesh and Kumar (2020). Similarly,
understanding the dependence of the flow of water on the porosity of the soil
is crucial in environmental sciences Farhadian and Nikvar-Hassani (2019).
Moreover, diverse technologies related to the energy transition such as blue-
energy Boon and Roij (2011), hydrogen technology P. Preuster (2017); Solymosi
et al. (2022), electrolyzers and fuel cells Suter et al. (2021); Du et al.
(2022), or $CO_{2}$ segregation Hepburn et al. (2019) rely on the transport of
(charged) chemical species across nanoporous materials.
Finally, several biological systems are controlled by the transport of
confined complex fluids. For example, neuronal transmission relies on the
transport of neuro-receptors among neurons and to their specific binding sites
Alberts et al. (2007). Moreover, cell regulation relies on the proper tuning
of the concentrations of electrolytes inside the cell. Such a regulation
occurs via dedicated pores and channels whose shape makes them very sensitive
to specific ions Pethig (1986); Dubyak (2004); Calero et al. (2011); Peyser et
al. (2014); Lee et al. (2017). Similarly, RNA is transported across the nuclear
membrane Melnikov et al. (2017); Bacchin (2018); Berezhkovskii et al. (2019).
Moreover, the lymphatic and circulatory systems in mammals rely on the
transport of quite heterogeneous suspensions composed of a variety of
components, spanning from the nanometric size of ions up to the micrometric
size of red blood cells, across varying-section elastic pipes Kusters et al.
(2014); Nipper and Dixon (2011); Wiig and Swartz (2012); Yoganathan et al.
(1988). Finally, the survival of plants relies, at large scales, on the proper
circulation of liquid (sap) along the trunk Jensen et al. (2016) and at short
scales on the cytoplasmic streaming within the cells Shimmen and Yokota
(2004).
All the above-mentioned systems depend on the dynamics under
confinement. Therefore, understanding the dynamics and transport properties of
confined complex systems such as ions, molecules, polymers, colloidal
particles, and suspensions is of primary importance for the understanding of a
wide spectrum of phenomena and for the development of technological
applications. Even more, identifying the relevant parameters controlling key
features, like transport or phase transitions, will open a new route for
controlling the dynamics of confined systems upon tuning the geometry of the
confining media.
Up to now, there has been no systematic study of the dependence of the
dynamics of confined systems upon changing the shape of the confining walls.
The main reason is the large effort that such a study requires. Indeed,
experimentally tuning the shape of a pore is a tremendous task since, if
possible at all, it requires synthesizing a new item from scratch every time.
On the theoretical side, studying the dynamics and the transport of confined
systems is a tremendous task since it requires capturing several length, time
and energy scales. In fact, the length scales range from the nanometric scale,
typical for ions and for van der Waals interactions to the micrometric scale
of colloids, polymers and macromolecules up to the millimeters/centimeters
scale of microfluidic devices. Concerning time scales, the spectrum spans the
diffusion time of small particles and ions over their size $\sim\mu\text{sec}$
up to the long time scales typical of transport $\sim\text{sec}$. Concerning
energy scales, they range from thermal energy $k_{B}T$ ($\sim
10^{-21}\text{J}$) up to van der Waals and electrostatic interactions whose
magnitude can be of several $k_{B}T$. On top of these “direct”
interactions also the effective interactions induced by the confinement should
be accounted for. For example, squeezing a deformable object, like a polymer
or a vesicle, through a constriction can require quite an amount of energy
that can easily reach the order of $100-1000\,k_{B}T$. Given such a
complexity, one typically would rely on numerical techniques such as molecular
dynamics. However, the wide range of interactions (van der Waals, electrostatic, ...) jointly with the wide range of time and length scales imposes
to put forward numerical approaches capable of properly resolving the smallest
length, time and energy scales. At the same time, such an approach should also
resolve the large length, time and energy scales. Accordingly, the numerical
route becomes quite demanding from the perspective of the computational time.
Since both experimental and numerical routes are quite expensive, an
approximated analytical route based on some controllable expansions may become
appealing. Intriguingly, it is possible to obtain simple analytical models
that capture some features of the dynamics of confined systems. The key idea
is to “project” the dynamics of the system onto some relevant coordinate (in
chemistry sometimes called “reaction coordinate”) and then to study the
dynamics of these few (typically one) degrees of freedom. For example, in the
case of polymer translocation across pores, the most important observable is
the time the polymer takes to cross from one side to the other of the pore.
Therefore, the relevant degree of freedom is the position of the center of
mass of the polymer whereas the degrees of freedom associated with the
position of the monomers can be integrated out.
In this contribution, we briefly review the derivation of the Fick-Jacobs
approximation Zwanzig (1992); Reguera and Rubi (2001); Kalinay and Percus
(2005a); Kalinay and Percus (2005b); Kalinay and Percus (2008); Martens et al.
(2011); Chacón-Acosta et al. (2013); Malgaretti et al. (2013) and its use in
studying transport across corrugated pores and channels. The Fick-Jacobs
approximation has been shown to be applicable to the transport of ions
Malgaretti et al. (2014, 2015); Malgaretti et al. (2016a); Chinappi and
Malgaretti (2018); Malgaretti et al. (2019), colloids Reguera et al. (2006,
2012); Marini Bettolo Marconi et al. (2015); Malgaretti et al. (2016b);
Puertas et al. (2018), rods Malgaretti and Harting (2021), polymers Bianco and
Malgaretti (2016); Malgaretti and Oshanin (2019); Bodrenko et al. (2019), and
more recently even active systems Malgaretti and Stark (2017); Kalinay (2022);
Antunes et al. (2022), chemical reactors Ledesma-Durán et al. (2016) and
pattern-forming systems Chacón-Acosta et al. (2020).
In the following we re-derive the Fick-Jacobs approximation with particular
emphasis on the regime in which the current is proportional to the applied
force. In such a regime, it is possible to derive a closed formula that
accounts for the dependence of the flux on the geometry of the channel.
Interestingly, our derivation naturally highlights a few relations between the
underlying Smoluchowski equation and linear response theory. Even though
this work is motivated by the transport in confined pores and channels, the
results we derive are valid for all $1D$ systems (independently of the
physical origin of the effective potential) in the dilute regime (for which
mutual interactions can be neglected) and whose dynamics is governed by the
Smoluchowski equation (i.e. in the overdamped regime).
Figure 1: Cartoon of the varying-section channel.
## II Model
In the following we are interested in the transport of a single colloidal
particle confined in an axially symmetric channel characterized by its half
section (see Fig. 1 for a sketch of the system)
$\displaystyle h(x)=h_{0}+h_{1}\cos\left(2\pi\frac{x}{L}\right)\,.$ (1)
The time evolution of the probability density is governed by the Smoluchowski
equation
$\displaystyle\dot{\rho}(\mathbf{r},t)=\nabla\cdot\left[D\nabla\rho(\mathbf{r},t)+D\beta\rho(\mathbf{r},t)\nabla
W(\mathbf{r})\right]\,,$ (2)
where $D$ is the diffusion coefficient, $\beta^{-1}=k_{B}T$ is the inverse
thermal energy, $k_{B}$ the Boltzmann constant, $T$ the absolute temperature
and
$\displaystyle W(\mathbf{r})=\begin{cases}\phi(\mathbf{r})&|r|<h(x)\\\
\infty&\text{else}\end{cases}$ (3)
is the effective potential responsible for both confining the particle within
the channel and for additional soft interactions, $\phi(\mathbf{r})$, with the
channel walls. For smoothly-varying channel cross-sections,
$\partial_{x}h(x)\ll 1$, it is possible to factorize the probability density
Zwanzig (1992); Reguera and Rubi (2001); Kalinay and Percus (2008); Martens et
al. (2011); Chacón-Acosta et al. (2013); Malgaretti et al. (2013)
$\displaystyle\rho(\mathbf{r},t)=p(x,t)\dfrac{e^{-\beta
W(\mathbf{r})}}{e^{-\beta A(x)}}\,,$ (4)
where
$\displaystyle A(x)=-k_{B}T\ln\left[\frac{1}{\pi h_{0}^{2}}\int_{0}^{\infty}e^{-\beta W(\mathbf{r})}2\pi r\,dr\right]$ (5)
is the local free energy Malgaretti et al. (2016c). Moreover, integrating
along the radial direction leads to
$\displaystyle\dot{p}(x,t)=\partial_{x}\left[D\partial_{x}p(x,t)+D\beta
p(x,t)\partial_{x}A(x)\right]\,.$ (6)
Such a procedure is called Fick-Jacobs approximation Zwanzig (1992); Reguera
and Rubi (2001); Malgaretti et al. (2013). Its regime of validity has been
assessed by several groups Reguera et al. (2006); Berezhkovskii et al. (2007);
Burada et al. (2007); Berezhkovskii et al. (2015); Kalinay and Percus (2005a);
Kalinay and Percus (2005b); Kalinay and Percus (2006); Martens et al. (2011);
Pineda et al. (2012); García-Chung et al. (2015). In particular, it has been
shown that the quantitative reliability of the Fick-Jacobs approximation can
be enhanced by introducing a position dependent diffusion coefficient Reguera
et al. (2006); Berezhkovskii et al. (2007); Burada et al. (2007);
Berezhkovskii et al. (2015); Kalinay and Percus (2005a); Kalinay and Percus
(2005b); Kalinay and Percus (2006); Martens et al. (2011); Pineda et al.
(2012); García-Chung et al. (2015), $D(x)$, hence leading to the set of
equations
$\displaystyle\dot{p}(x,t)$ $\displaystyle=-\partial_{x}J(x,t)$ (7)
$\displaystyle\frac{J}{D(x)}$ $\displaystyle=-\partial_{x}p(x)-\beta
p(x)\partial_{x}A(x)\,.$ (8)
Eq. (8) is completed with the boundary conditions
$\displaystyle p(-L)$ $\displaystyle=p(L)$ (9)
$\displaystyle\int_{-L}^{L}p(x)dx$ $\displaystyle=1.$ (10)
We decompose the effective force $-\partial_{x}A(x)$ into the net force
$\displaystyle f=-\frac{1}{2L}\int_{-L}^{L}\partial_{x}A(x)dx=-\frac{\Delta
A}{2L}$ (11)
and
$\displaystyle A_{eq}(x)=A(x)+fx.$ (12)
$f$ accounts for the net force responsible for the flux, and $A_{eq}(x)$
accounts for all the other conservative forces that will not give rise to any
flux. In the following, we expand both the flux, $J$, and the density, $p$,
about the equilibrium case:
$\displaystyle J=$ $\displaystyle J_{0}+J_{1}+J_{2}+...$ (13) $\displaystyle
p(x)=$ $\displaystyle p_{0}(x)+p_{1}(x)+p_{2}(x)+...$ (14)
Note that due to Eq. (10) at zeroth order we have
$\displaystyle\int_{-L}^{L}p_{0}(x)dx$ $\displaystyle=1\,.$ (15)
This implies
$\displaystyle\int_{-L}^{L}p_{n}(x)dx$ $\displaystyle=0\,\,\,\,\forall n\neq
0$ (16)
Accordingly, at order zero we have
$\displaystyle p_{0}(x)$ $\displaystyle=\tilde{p}e^{-\beta A_{eq}(x)}$ (17)
$\displaystyle J_{0}$ $\displaystyle=0$ (18) $\displaystyle\tilde{p}$
$\displaystyle=\frac{1}{\int_{-L}^{L}e^{-\beta A_{eq}(x)}dx}\,.$ (19)
At the generic $n$-th order we have
$\displaystyle\frac{J_{n}}{D(x)}=-\partial_{x}p_{n}(x)-\beta
p_{n}(x)\partial_{x}A_{eq}(x)+\beta p_{n-1}(x)f\,,$ (20)
the solution of which reads
$\displaystyle p_{n}(x)=e^{-\beta
A_{eq}(x)}\left[\int\limits_{-L}^{x}\left[\beta
p_{n-1}(y)f-\frac{J_{n}}{D(y)}\right]e^{\beta A_{eq}(y)}dy+\Pi_{n}\right]\,.$
(21)
Here, $J_{n}$ and $\Pi_{n}$ are integration constants. Imposing the periodic
boundary conditions, $p_{n}(-L)=p_{n}(L)$, and recalling that
$A_{eq}(-L)=A_{eq}(L)$ leads to
$\displaystyle\int_{-L}^{L}\left(\frac{J_{n}}{D(y)}-\beta
p_{n-1}(y)f\right)e^{\beta A_{eq}(y)}dy=0\,,$ (22)
with
$\displaystyle J_{n}=\beta f\dfrac{\int_{-L}^{L}p_{n-1}(y)e^{\beta
A_{eq}(y)}dy}{\int_{-L}^{L}\dfrac{e^{\beta A_{eq}(y)}}{D(y)}dy}=\beta
f\tilde{p}\dfrac{\int_{-L}^{L}\frac{p_{n-1}(y)}{p_{0}(y)}dy}{\int_{-L}^{L}\dfrac{e^{\beta
A_{eq}(y)}}{D(y)}dy}\,.$ (23)
In the last step we used Eq. (17). Finally, $\Pi_{n}$ is determined by
imposing Eqs. (15), (16)
$\displaystyle\Pi_{n}=-\tilde{p}\int_{-L}^{L}e^{-\beta
A_{eq}(x)}\int\limits_{-L}^{x}\left[\beta
p_{n-1}(y)f-\frac{J_{n}}{D(y)}\right]e^{\beta A_{eq}(y)}dydx\,.$ (24)
At leading order in the force, Eqs. (21) and (23) read
$\displaystyle p_{1}(x)=$ $\displaystyle\,e^{-\beta A_{eq}(x)}\left[\beta
f\tilde{p}(x+L)-J_{1}\int_{-L}^{x}\frac{e^{\beta
A_{eq}(y)}}{D(y)}dy\right]\,,$ (25) $\displaystyle J_{1}=$
$\displaystyle\,\dfrac{2\beta fL}{\int_{-L}^{L}e^{-\beta
A_{eq}(x)}dx\int_{-L}^{L}\frac{e^{\beta A_{eq}(x)}}{D(x)}dx}\,.$ (26)
Interestingly, from Eq. (26), it is possible to identify a force-independent
channel permeability
$\displaystyle\chi=\dfrac{2\beta L}{\int_{-L}^{L}e^{-\beta
A_{eq}(x)}dx\int_{-L}^{L}\frac{e^{\beta A_{eq}(x)}}{D(x)}dx}\,.$ (27)
As expected, Eq. (27) agrees with the derivation of the effective diffusion
coefficient for a particle at equilibrium and in the presence of entropic
barriers Lifson and Jackson (1962); Reimann et al. (2001). This is in agreement with linear response theory, within which the transport
coefficients that determine the flux under external forces can be determined
from equilibrium properties.
Some general remarks can be derived in the case of fore-aft symmetric
channels, for which $A_{eq}(x)=A_{eq}(-x)$, and diffusivities, $D(x)=D(-x)$.
For such cases, the magnitude of the flux should depend solely on the
magnitude of the force and not on its sign. This implies that
$\displaystyle J_{2n}=0,\quad\forall n>0\,.$ (28)
In order to proceed, we recall that, for fore-aft symmetric $f(x)$ and $g(x)$,
the following equality holds:
$\displaystyle\int_{-L}^{L}g(x)\int_{-L}^{x}f(y)dydx=\frac{1}{2}\int_{-L}^{L}f(x)dx\int_{-L}^{L}g(x)dx\,$
(29)
Substituting the condition of Eq. (28) into Eq. (23) and using the last expression leads to
$\displaystyle\Pi_{n}=0,\quad\forall n>0\,,$ (30)
and, substituting again into Eq. (23), eventually leads to
$\displaystyle J_{n}=0,\quad\forall n>1\,.$ (31)
Interestingly, we note that even though $\Pi_{n>0}=0$ and $J_{n>1}=0$ the
density profile is still sensitive to higher order corrections in the force,
i.e. in general $p_{n}\neq 0$. According to this analysis, Eq. (26) is not
just the linear contributions to the flux rather it provides the exact
expressions at every order in the external force. The outcome of this analysis
is indeed intuitive since it states that for non-interacting systems confined
within fore-aft symmetric channels non-linear effects are absent. The same
results are indeed valid for any $1D$ problem with such a symmetry.
In contrast, if neither the potential, $A(x)$, nor the diffusion profile,
$D(x)$, have a defined parity, then the left-right symmetry is broken, Eq.
(28) does not hold anymore, and indeed a diode effect may set in for sufficiently
large external forces. We can assess the dependence of the diode effect on the
geometry of the channel by calculating
$\displaystyle J_{2}=\beta f\dfrac{\int_{-L}^{L}\left[\int_{-L}^{x}\left(\beta\tilde{p}f-\frac{J_{1}}{D(y)}e^{\beta A_{eq}(y)}\right)dy+\Pi_{1}\right]dx}{\int_{-L}^{L}\dfrac{e^{\beta A_{eq}(y)}}{D(y)}dy}\,.$ (32)
Using
$\displaystyle\Gamma(x)=\int_{-L}^{x}\dfrac{e^{\beta A_{eq}(y)}}{D(y)}dy$ (33)
and the definition of $J_{1}$ we obtain
$\displaystyle J_{2}=\frac{\beta f}{\Gamma(L)}\int_{-L}^{L}\left[\beta\tilde{p}f(x+L)-2\beta\tilde{p}fL\dfrac{\Gamma(x)}{\Gamma(L)}+\Pi_{1}\right]dx\,.$ (34)
Finally, using the definition of $\Pi_{1}$ we obtain
$\displaystyle J_{2}=$ $\displaystyle\frac{(\beta
fL)^{2}\tilde{p}}{\Gamma(L)}\frac{1}{L}\int\limits_{-L}^{L}\left[\left(\frac{x}{L}+1\right)-2\dfrac{\Gamma(x)}{\Gamma(L)}\right]\left[1-e^{-\beta
A_{eq}(x)}\right]dx\,.$ (35)
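As a concrete illustration of Eqs. (26), (33) and (35), the following sketch evaluates $J_{1}$ and $J_{2}$ numerically for a generic one-dimensional free-energy profile. The profile, the force amplitude, and all parameter values are illustrative assumptions (not taken from the text); for a fore-aft symmetric profile the computed $J_{2}$ vanishes, consistent with the derivation above.

```python
import numpy as np

# Illustrative sketch (not the authors' code): evaluate J_1 (Eq. 26) and the
# second-order correction J_2 (Eq. 35). Units: L = beta = D_0 = 1.
L, beta, f, D0 = 1.0, 1.0, 0.5, 1.0
x = np.linspace(-L, L, 20001)
dx = x[1] - x[0]

def trapz(y):
    # trapezoidal rule on the uniform grid x
    return float(np.sum(0.5 * (y[1:] + y[:-1])) * dx)

# Assumed free-energy profile in units of k_B T: cos term is fore-aft
# symmetric, sin term breaks the symmetry; set c2 = 0 to recover J_2 = 0.
c1, c2 = 0.8, 0.4
A = c1 * np.cos(2 * np.pi * x / L) + c2 * np.sin(np.pi * x / L)

g = np.exp(beta * A) / D0                                      # integrand of Gamma, Eq. (33)
Gamma = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * dx)))
p_tilde = 1.0 / trapz(np.exp(-beta * A))                       # Eq. (19)

J1 = 2 * beta * f * L / (trapz(np.exp(-beta * A)) * Gamma[-1])  # Eq. (26)
bracket = (x / L + 1.0) - 2.0 * Gamma / Gamma[-1]
J2 = (beta * f * L) ** 2 * p_tilde / (Gamma[-1] * L) \
     * trapz(bracket * (1.0 - np.exp(-beta * A)))               # Eq. (35)
print(f"J1 = {J1:.6f}, J2 = {J2:.6f}")
```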
### II.1 Transport across free energy barriers
In the case of transport of point-like particles across $3D$ varying-section
channels with axial symmetry the effective potential reads
$\displaystyle A^{(id)}_{eq}(x)=-2k_{B}T\ln\left[\frac{h(x)}{h_{0}}\right]\,,$
(36)
where $h(x)$ is the local half-section of the channel and $h_{0}$ its average
value (see Fig. 1). Accordingly, Eq. (26) reads
$\displaystyle J_{id}=\dfrac{2\beta
fL}{\int_{-L}^{L}\frac{h^{2}(x)}{h_{0}^{2}}dx\int_{-L}^{L}\frac{h_{0}^{2}}{h^{2}(x)D(x)}dx}\,.$
(37)
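For instance, a minimal numerical sketch of Eq. (37) for the cosine channel of Eq. (1) reads as follows; the values of $h_{0}$, $h_{1}$ and $f$ are illustrative assumptions, and the diffusion coefficient is taken constant here.

```python
import numpy as np

# Sketch of Eq. (37) for the channel h(x) = h0 + h1*cos(2*pi*x/L) of Eq. (1);
# parameter values are illustrative assumptions, with D(x) = D0 constant.
L, beta, f, D0 = 1.0, 1.0, 1.0, 1.0
h0, h1 = 0.1, 0.05

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(-L, L, 20001)
h = h0 + h1 * np.cos(2 * np.pi * x / L)

I_entropic = trapz((h / h0) ** 2, x)           # \int exp(-beta A_eq) dx, via Eq. (36)
I_kinetic = trapz((h0 / h) ** 2 / D0, x)       # \int exp(+beta A_eq) / D dx
J_id = 2 * beta * f * L / (I_entropic * I_kinetic)   # Eq. (37)
print(f"J_id = {J_id:.6f}")
```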
In the case of micro- or nano-particles that undergo solely excluded volume
interactions with the channel walls, the effective channel half-section
becomes $h(x)-R$ where $R$ is the particle size and we obtain
$\displaystyle
A^{(pcl)}_{eq}(x)=-2k_{B}T\ln\left[\frac{h(x)-R}{h_{0}}\right]\,,$ (38)
which leads to
$\displaystyle J_{pcl}=\dfrac{2\beta
fL}{\int_{-L}^{L}\frac{(h(x)-R)^{2}}{h_{0}^{2}}dx\int_{-L}^{L}\frac{h_{0}^{2}}{(h(x)-R)^{2}D(x)}dx}\,.$
(39)
We recall that $R<h_{0}-h_{1}$ for the particle to be able to cross the
channel. Finally, several groups have shown that the Fick-Jacobs approximation
can be improved by assuming a position-dependent diffusion coefficient Zwanzig
(1992); Reguera and Rubi (2001); Kalinay and Percus (2006, 2008); Martens et
al. (2011); Pineda et al. (2012); Berezhkovskii et al. (2015); García-Chung et
al. (2015). Nowadays, there is general agreement that the approximate formula for the diffusion coefficient reads (or is in practice equivalent to) Reguera and Rubi (2001)
$\displaystyle D(x)=\dfrac{D_{0}}{\sqrt{1+(\partial_{x}h(x))^{2}}}\,.$ (40)
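Combining Eq. (39) with the position-dependent diffusivity of Eq. (40) is straightforward numerically; the sketch below does so for the cosine channel, with illustrative values of $h_{0}$, $h_{1}$ and the particle radius $R$ (respecting $R<h_{0}-h_{1}$).

```python
import numpy as np

# Sketch of Eq. (39) combined with the position-dependent diffusion
# coefficient of Eq. (40); all parameter values are illustrative assumptions.
L, beta, f, D0 = 1.0, 1.0, 1.0, 1.0
h0, h1, R = 0.1, 0.05, 0.02          # note: R < h0 - h1 must hold

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(-L, L, 20001)
h = h0 + h1 * np.cos(2 * np.pi * x / L)
heff = h - R                          # effective half-section for a finite particle
Dx = D0 / np.sqrt(1.0 + np.gradient(h, x) ** 2)   # Eq. (40)

I_entropic = trapz((heff / h0) ** 2, x)
I_kinetic = trapz((h0 / heff) ** 2 / Dx, x)
J_pcl = 2 * beta * f * L / (I_entropic * I_kinetic)   # Eq. (39) with D(x)
print(f"J_pcl = {J_pcl:.6f}")
```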
Figure 2: Transport across porous media. Upper left: permeability, $\chi$, as obtained from Eq. (46) (solid lines), from Eq. (26) with a constant diffusion coefficient (dashed lines), and from Eq. (26) with the diffusion coefficient given by Eq. (40) (dashed-dotted lines), normalized by the one across a constant-section channel, $\chi_{o}=D\beta/4L$, as a function of the geometry of the channel, $\Delta S=\ln\frac{h_{0}+h_{1}}{h_{0}-h_{1}}=\ln\frac{h_{max}}{h_{min}}$, for different values of the particle radius. Upper right: ratio of $\tilde{\chi}$ over $\chi$ normalized by $\chi$ for the data sets shown in the left panel. Bottom left: permeability, $\chi$, normalized by the one across a constant-section channel, $\chi_{o}=D\beta/4L$, as a function of the radius of the particle, $R$, normalized by the average channel width, $h_{0}$, for different channel geometries captured by $\Delta S$. Bottom right: ratio of $\tilde{\chi}$ over $\chi$ normalized by $\chi$ for the data sets shown in the left panel.
### II.2 Piece-wise linear potential and homogeneous diffusion coefficient
In order to get analytical insight, it can be useful to approximate the effective potential $A_{eq}(x)$ by
$\displaystyle A_{eq}(x)=-\frac{\Delta A_{eq}}{L}|x|\,,$ (41)
where
$\displaystyle\Delta A_{eq}=A^{max}_{eq}-A^{min}_{eq}$ (42)
is the difference between the maximum and minimum values of $A_{eq}$. Moreover, if we assume that the diffusion coefficient is homogeneous,
$\displaystyle D(x)=D_{0}$ (43)
we get
$\displaystyle\int_{-L}^{L}e^{\beta A_{eq}(x)}dx=$
$\displaystyle\frac{2L}{\beta\Delta A_{eq}}\left(1-e^{-\beta\Delta
A_{eq}}\right)$ (44) $\displaystyle\int_{-L}^{L}e^{-\beta A_{eq}(x)}dx=$
$\displaystyle\frac{2L}{\beta\Delta A_{eq}}\left(e^{\beta\Delta
A_{eq}}-1\right)$ (45)
and finally by substituting the last expressions into Eq. (27) we obtain an
approximated expression for the permeability
$\displaystyle\tilde{\chi}=\frac{D\beta}{4L}\dfrac{\left(\beta\Delta
A_{eq}\right)^{2}}{\cosh(\beta\Delta A_{eq})-1}\,.$ (46)
Interestingly, Eq. (46) shows that $\tilde{\chi}$ is an even function of $\Delta A_{eq}$. This implies that the transport is insensitive to flipping the sign of the free energy barrier $\Delta A_{eq}$. Finally, Eq. (46) shows that $\tilde{\chi}$ decays exponentially with $\beta\Delta A_{eq}$.
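To gauge how well the piece-wise linear approximation tracks the full expression, one can compare Eq. (46) against Eq. (27) for the entropic potential of Eq. (36); the sketch below does this with illustrative channel parameters.

```python
import numpy as np

# Compare the closed-form permeability of Eq. (46) with the full integral
# expression of Eq. (27), using the entropic potential of Eq. (36) for the
# cosine channel; parameter values are illustrative assumptions.
L, beta, D0 = 1.0, 1.0, 1.0
h0, h1 = 0.1, 0.05

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(-L, L, 20001)
A = -2.0 / beta * np.log((h0 + h1 * np.cos(2 * np.pi * x / L)) / h0)   # Eq. (36)

chi = 2 * beta * L / (trapz(np.exp(-beta * A), x)
                      * trapz(np.exp(beta * A) / D0, x))               # Eq. (27)

dA = A.max() - A.min()                                                 # Eq. (42)
chi_tilde = (D0 * beta / (4 * L)) * (beta * dA) ** 2 \
            / (np.cosh(beta * dA) - 1.0)                               # Eq. (46)
print(f"chi = {chi:.6f}, chi_tilde = {chi_tilde:.6f}")
```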
## III Discussion
The reliability of the Fick-Jacobs approximation, namely Eq. (26), has been addressed for point-like particles, and it has shown good quantitative
agreement for forces up to $\beta fL\simeq 10$ Burada et al. (2007). However,
Eq. (26) still requires to numerically compute integrals, whereas Eq. (46)
provides a direct (yet approximated) dependence of $\tilde{\chi}$ on $\Delta
A$. Therefore, it is important to address the reliability of Eq. (46) as
compared to the full solution Eq. (26). Indeed, all the panels of Fig. 2 show
that the permeability calculated with the piece-wise linear model, Eq. (46),
shows some discrepancies as compared to the full expression given in Eq. (26).
In particular, as shown in Fig. 2 for the case under consideration
($h_{0}/L=0.1$) the corrections due to the inhomogeneous diffusion (dashed-
dotted lines) are indistinguishable from those with constant diffusion
coefficient (dashed lines) and hence they do not improve the approximation. On
the other hand Fig. 2 shows that the simple formula in Eq. (46) is sufficient
to properly capture the trends and indeed can be used to estimate the
transport of colloidal particles across porous media. Concerning the magnitude of $\chi$, the bottom panels of Fig. 2 show that the channel permeability decreases upon increasing the particle size. Interestingly, the decrease is almost linear for larger corrugations of the channel (larger
values of $\Delta S$) whereas for smaller values of the corrugation it
plateaus at smaller values of $R$.
Figure 3: Dependence of the approximated channel permeability, $\tilde{\chi}$,
(as defined in Eq. (46)) normalized by that of a constant section channel,
$\chi_{o}$, as a function of the amplitude of the dimensionless free energy
barrier $\beta\Delta A$ which encodes the physical properties of the confined
system.
Finally, we discuss the dependence of $\tilde{\chi}$ on $\beta\Delta A$ as per Eq. (46). As shown in Fig. 3, $\tilde{\chi}$ has a maximum for $\beta\Delta
A=0$ and then it decays exponentially for larger values of $\beta\Delta A$.
Interestingly, $\tilde{\chi}$ attains values close to unity up to $\beta\Delta
A\simeq 5$, i.e., for a free energy barrier much larger than the thermal
energy.
The fact that Eq. (46) depends solely on $\Delta A$ allows one to estimate the
transport also in situations in which the particles may have some soft
interactions with the walls, like electrostatic interactions. In that case the
free energy barrier will depend not only on the size of the particle and on
the geometry of the channel but also on the charge of both the particle and
the walls of the channels Malgaretti et al. (2015); Malgaretti et al. (2016c).
Moreover, Eq. (46) also allows one to predict the transport of soft or deformable
objects, like proteins or polymers Bianco and Malgaretti (2016); Malgaretti
and Oshanin (2019); Carusela et al. (2021).
## IV Conclusions
We have derived closed formulas for the transport within linear response
theory as well as for higher order corrections. In particular, we have shown
that for the case of non-interacting systems confined in fore-aft symmetric
channels the higher order corrections in both the flux and in the density are
identically zero. Hence, for fore-aft symmetric channels, the full expression
for the flux is indeed the one obtained within the linear response regime.
Accordingly, the channel permeability derived within linear response, Eq.
(27), is related to the well known expression of the effective diffusion
coefficient reported in the literature Lifson and Jackson (1962); Reimann et
al. (2001). Moreover, we have shown that, within the linear response, the
formula for the permeability $\chi$, Eq. (27), can be further simplified by approximating the local free energy by a piece-wise linear potential (Eq. (41)) to obtain Eq. (46), whose overall drop is determined by the difference between the maximum and the minimum of the free energy along the channel. We have shown that
such an approximation provides the correct trends and it is reliable within
$\simeq\pm 50\%$ as shown in the right panels of Fig. 2. This feature is
crucial since Eq. (46) can be easily computed and it is valid for all soft interactions between the particle and the channel walls.
## Acknowledgments
We thank I. Pagonabarraga and J. Schmid for insightful discussions and
acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German
Research Foundation) – Project-ID 416229255 – SFB 1411 and Project-ID
431791331 – SFB 1452.
## References
* Lighthill and Whitham (1955) M. J. Lighthill and G. B. Whitham, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 229, 317 (1955).
* Wang et al. (2013) C. Wang, M. A. Quddus, and S. G. Ison, Safety Science 57, 264 (2013), ISSN 0925-7535.
* Vermuyten et al. (2016) H. Vermuyten, J. Belian, L. De Boeck, G. Reniers, and T. Wauters, Safety Science 87, 167 (2016), ISSN 0925-7535.
* Jeong et al. (2018) H. Y. Jeong, S.-C. Jun, J.-Y. Cheon, and M. Park, Geosciences Journal 22, 667 (2018).
* Jäger et al. (2018) R. Jäger, M. Mendoza, and H. J. Herrmann, Phys. Rev. Fluids 3, 074302 (2018).
* Marin et al. (2018) A. Marin, H. Lhuissier, M. Rossi, and C. J. Kähler, Phys. Rev. E 97, 021102 (2018).
* Kusters et al. (2014) R. Kusters, T. van der Heijden, B. Kaoui, J. Harting, and C. Storm, Phys. Rev. E 90, 033006 (2014).
* Bielinski et al. (2021) C. Bielinski, O. Aouane, J. Harting, and B. Kaoui, Phys. Rev. E 104, 065101 (2021).
* Garcimartín et al. (2015) A. Garcimartín, J. M. Pastor, L. M. Ferrer, J. J. Ramos, C. Martín-Gómez, and I. Zuriguel, Phys. Rev. E 91, 022808 (2015).
* Altshuler et al. (2005) E. Altshuler, O. Ramos, Y. Nuñez, J. Fernandez, A. Batista-Leyva, and C. Noda, The American Naturalist 166, 643 (2005).
* Zuriguel et al. (2020) I. Zuriguel, I. Echevería, D. Maza, R. C. H. anésar Martín-Gómez, and A. Garcimarín, Safety Science 121, 394 (2020), ISSN 0925-7535, URL http://www.sciencedirect.com/science/article/pii/S0925753519310203.
* Squires and Quake (2005) T. M. Squires and S. R. Quake, Rev. Mod. Phys. 77, 977 (2005).
* Dressaire and Sauret (2017) E. Dressaire and A. Sauret, Soft Matter 13, 37 (2017).
* Douféne et al. (2019) K. Douféne, C. Tourné-Péteilh, P. Etienne, and A. Aubert-Pouëssel, Langmuir 35, 12597 (2019).
* Convery and Gadegaard (2019) N. Convery and N. Gadegaard, Micro and Nano Engineering 2, 76 (2019), ISSN 2590-0072, URL http://www.sciencedirect.com/science/article/pii/S2590007219300036.
* Weatherall and Willmott (2015) E. Weatherall and G. R. Willmott, Analyst 140, 3318 (2015).
* Saleh and Sohn (2003) O. A. Saleh and L. L. Sohn, Proc. Natl. Acad. Sci. U. S. A. 100, 820 (2003).
* Ito et al. (2004) T. Ito, L. Sun, M. A. Bevan, and R. M. Crooks, Langmuir 20, 6940 (2004).
* Heins et al. (2005) E. A. Heins, Z. S. Siwy, L. A. Baker, and R. C. Martin, Nano Lett. 5, 1824 (2005).
* Arjmandi et al. (2012) N. Arjmandi, W. Van Roy, L. L., and G. Borghs, Anal. Chem. 84, 8490 (2012).
* Robards and Ryan (2022) K. Robards and D. Ryan, _Principles and Practice of Modern Chromatographic Methods_ (Elsevier, Amsterdam, 2022).
* Reithinger and Arlt (2011) M. Reithinger and W. Arlt, Chem. Ing. Tech. 83, 83 (2011).
* Michaud et al. (2021) V. Michaud, J. Pracht, F. Schilfarth, C. Damm, B. Platzer, P. Haines, C. Harreiß, D. M. Guldi, E. Spiecker, and W. Peukert, Nanoscale 13, 13116 (2021).
* Seidel-Morgenstern et al. (2008) A. Seidel-Morgenstern, L. C. Keßler, and M. Kaspereit, Chemical Engineering & Technology 31, 826 (2008).
* Soni et al. (2010) G. V. Soni, A. Singer, Z. Yu, Y. Sun, B. McNally, and A. Meller, Review of Scientific Instruments 81, 014301 (2010).
* Carvalho (2015) M. S. Carvalho, Offshore Technology Conference p. 6 (2015).
* Foroozesh and Kumar (2020) J. Foroozesh and S. Kumar, Journal of Molecular Liquids 316, 113876 (2020), ISSN 0167-7322, URL http://www.sciencedirect.com/science/article/pii/S0167732220324478.
* Farhadian and Nikvar-Hassani (2019) H. Farhadian and A. Nikvar-Hassani, Bulletin of Engineering Geology and the Environment 78, 3833 (2019), ISSN 1435-9537.
* Boon and Roij (2011) N. Boon and R. V. Roij, Molecular Physics 109, 1229 (2011).
* P. Preuster (2017) P. W. P. Preuster, C. Papp, Acc. Chem. Res 50, 74 (2017).
* Solymosi et al. (2022) T. Solymosi, M. Geißelbrecht, S. Mayer, M. Auer, P. Leicht, M. Terlinden, P. Malgaretti, A. Bösmann, P. Preuster, J. Harting, et al., Science Advances 8, eade3262 (2022).
* Suter et al. (2021) T. A. M. Suter, K. Smith, J. Hack, L. Rasha, Z. Rana, G. M. A. Angel, P. R. Shearing, T. S. Miller, and D. J. L. Brett, Advanced Energy Materials 11, 2101025 (2021).
* Du et al. (2022) N. Du, C. Roy, R. Peach, M. Turnbull, S. Thiele, and C. Bock, Chemical Reviews 122, 11830 (2022).
* Hepburn et al. (2019) C. Hepburn, E. Adlen, J. Beddington, E. A. Carter, S. Fuss, N. Mac Dowell, J. C. Minx, P. Smith, and C. K. Williams, Nature 575, 87 (2019).
* Alberts et al. (2007) B. Alberts, A. Johnson, J. Lewis, M. Raff, K. Roberts, and P. Walter, _Molecular Biology of the Cell_ (Garland Science, Oxford, 2007).
* Pethig (1986) R. Pethig, in _Modern Bioelectrochemistry_ , edited by F. Gutmann and H. Keyzer (Springer US, Boston, MA, 1986), pp. 199–239, ISBN 978-1-4613-2105-7, URL https://doi.org/10.1007/978-1-4613-2105-7_7.
* Dubyak (2004) G. R. Dubyak, Advances in Physiology Education 28, 143 (2004).
* Calero et al. (2011) C. Calero, J. Faraudo, and M. Aguilella-Arzo, Phys. Rev. E 83, 021908 (2011).
* Peyser et al. (2014) A. Peyser, D. Gillespie, R. Roth, and W. Nonner, Biophysical Journal 107, 1841 (2014).
* Lee et al. (2017) H. Lee, D. Segets, S. Süß, W. Peukert, S.-C. Chen, and D. Y. Pui, Journal of Membrane Science 524, 682 (2017), ISSN 0376-7388.
* Melnikov et al. (2017) D. V. Melnikov, Z. K. Hulings, and M. E. Gracheva, Physical Review E 95, 063105 (2017).
* Bacchin (2018) P. Bacchin, Membranes 8, 10 (2018).
* Berezhkovskii et al. (2019) A. M. Berezhkovskii, L. Dagdug, and S. M. Bezrukov, J. Chem. Phys. 151, 054113 (2019).
* Nipper and Dixon (2011) M. Nipper and J. Dixon, Cardiovasc. Eng. Technol. 2, 296 (2011).
* Wiig and Swartz (2012) H. Wiig and M. Swartz, Physiol. Rev. 92, 1005 (2012).
* Yoganathan et al. (1988) A. P. Yoganathan, E. G. Cape, H.-W. Sung, F. P. Williams, and A. Jimoh, Journal of the American College of Cardiology 12, 1344 (1988), ISSN 0735-1097, eprint https://www.onlinejacc.org/content/12/5/1344.full.pdf, URL https://www.onlinejacc.org/content/12/5/1344.
* Jensen et al. (2016) K. H. Jensen, K. Berg-Sørensen, H. Bruus, N. M. Holbrook, J. Liesche, A. Schulz, M. A. Zwieniecki, and T. Bohr, Rev. Mod. Phys. 88, 035007 (2016).
* Shimmen and Yokota (2004) T. Shimmen and E. Yokota, Current Opinion in Cell Biology p. 68 (2004).
* Zwanzig (1992) R. Zwanzig, J. Phys. Chem. 96, 3926 (1992).
* Reguera and Rubi (2001) D. Reguera and J. M. Rubi, Phys. Rev. E 64, 061106 (2001).
* Kalinay and Percus (2005a) P. Kalinay and J. K. P. Percus, J. Chem. Phys. 122, 204701 (2005a).
* Kalinay and Percus (2005b) P. Kalinay and J. K. Percus, Phys. Rev. E 72, 061203 (2005b).
* Kalinay and Percus (2008) P. Kalinay and J. K. Percus, Phys. Rev. E 78, 021103 (2008).
* Martens et al. (2011) S. Martens, G. Schmid, L. Schimansky-Geier, and P. Hänggi, Phys. Rev. E 83, 051135 (2011).
* Chacón-Acosta et al. (2013) G. Chacón-Acosta, I. Pineda, and L. Dagdug, J. Chem. Phys. 139, 214115 (2013).
* Malgaretti et al. (2013) P. Malgaretti, I. Pagonabarraga, and J. Rubi, Frontiers in Physics 1, 21 (2013).
* Malgaretti et al. (2014) P. Malgaretti, I. Pagonabarraga, and J. M. Rubi, Phys. Rev. Lett 113, 128301 (2014).
* Malgaretti et al. (2015) P. Malgaretti, I. Pagonabarraga, and J. M. Rubi, Macromol. Symposia 357, 178 (2015).
* Malgaretti et al. (2016a) P. Malgaretti, I. Pagonabarraga, and J. Miguel Rubi, J. Chem. Phys. 144, 034901 (2016a).
* Chinappi and Malgaretti (2018) M. Chinappi and P. Malgaretti, Soft Matter 14, 9083 (2018).
* Malgaretti et al. (2019) P. Malgaretti, M. Janssen, I. Pagonabarraga, and J. M. Rubi, J. Chem. Phys. 151, 084902 (2019).
* Reguera et al. (2006) D. Reguera, G. Schmid, P. S. Burada, J. M. Rubi, P. Reimann, and P. Hänggi, Phys. Rev. Lett. 96, 130603 (2006).
* Reguera et al. (2012) D. Reguera, A. Luque, P. S. Burada, G. Schmid, J. M. Rubi, and P. Hänggi, Phys. Rev. Lett. 108, 020604 (2012).
* Marini Bettolo Marconi et al. (2015) U. Marini Bettolo Marconi, P. Malgaretti, and I. Pagonabarraga, J. Chem. Phys. 143, 184501 (2015).
* Malgaretti et al. (2016b) P. Malgaretti, I. Pagonabarraga, and J. Rubi, Entropy 18, 394 (2016b).
* Puertas et al. (2018) A. Puertas, P. Malgaretti, and I. Pagonabarraga, J. Chem. Phys. 149, 174908 (2018).
* Malgaretti and Harting (2021) P. Malgaretti and J. Harting, Soft Matter 17, 2062 (2021).
* Bianco and Malgaretti (2016) V. Bianco and P. Malgaretti, J. Chem. Phys. 145, 114904 (2016).
* Malgaretti and Oshanin (2019) P. Malgaretti and G. Oshanin, Polymers 11, 251 (2019).
* Bodrenko et al. (2019) I. V. Bodrenko, S. Salis, S. Acosta-Gutierrez, and M. Ceccarelli, J. Chem. Phys. 150, 211102 (2019).
* Malgaretti and Stark (2017) P. Malgaretti and H. Stark, The Journal of Chemical Physics 146, 174901 (2017).
* Kalinay (2022) P. Kalinay, Phys. Rev. E 106, 044126 (2022).
* Antunes et al. (2022) G. C. Antunes, P. Malgaretti, J. Harting, and S. Dietrich, Phys. Rev. Lett. 129, 188003 (2022).
* Ledesma-Durán et al. (2016) A. Ledesma-Durán, S. I. Hernández-Hernández, and I. Santamaría-Holek, The Journal of Physical Chemistry C 120, 7810 (2016).
* Chacón-Acosta et al. (2020) G. Chacón-Acosta, M. Núñez-López, and I. Pineda, J. Chem. Phys. 152, 024101 (2020).
* Malgaretti et al. (2016c) P. Malgaretti, I. Pagonabarraga, and J. Miguel Rubi, J. Chem. Phys. 144, 034901 (2016c).
* Berezhkovskii et al. (2007) A. M. Berezhkovskii, M. A. Pustovoit, and S. M. Bezrukov, J. Chem. Phys. 126, 134706 (2007).
* Burada et al. (2007) P. S. Burada, G. Schmid, D. Reguera, J. M. Rubi, and P. Hänggi, Phys. Rev. E 75, 051111 (2007).
* Berezhkovskii et al. (2015) A. M. Berezhkovskii, L. Dagdug, and S. M. Bezrukov, J. Chem. Phys. 143, 164102 (2015).
* Kalinay and Percus (2006) P. Kalinay and J. K. Percus, Phys. Rev. E 74, 041203 (2006).
* Pineda et al. (2012) I. Pineda, J. Alvarez-Ramirez, and L. Dagdug, J. Chem. Phys. 137, 174103 (2012).
* García-Chung et al. (2015) A. A. García-Chung, G. Chacón-Acosta, and L. Dagdug, J. Chem. Phys. 142, 064105 (2015).
* Lifson and Jackson (1962) S. Lifson and J. L. Jackson, J. Chem. Phys. 36, 2410 (1962).
* Reimann et al. (2001) P. Reimann, C. Van den Broeck, H. Linke, P. Hänggi, J. M. Rubi, and A. Pérez-Madrid, Phys. Rev. Lett. 87, 010602 (2001).
* Carusela et al. (2021) M. F. Carusela, P. Malgaretti, and J. M. Rubi, Phys. Rev. E 103, 062102 (2021).
# All Byzantine Agreement Problems are Expensive
Pierre Civit, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Seth Gilbert, NUS Singapore, Singapore; Rachid Guerraoui, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Jovan Komatovic, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Anton Paramonov, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Manuel Vidigueira, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
###### Abstract
Byzantine agreement, arguably the most fundamental problem in distributed
computing, operates among $n$ processes, out of which $t<n$ can exhibit
arbitrary failures. The problem states that all correct (non-faulty) processes
must eventually decide (termination) the same value (agreement) from a set of
admissible values defined by the proposals of the processes (validity).
Depending on the exact version of the validity property, Byzantine agreement
comes in different forms, from Byzantine broadcast to strong and weak
consensus, to modern variants of the problem introduced in today’s blockchain
systems. Regardless of the specific flavor of the agreement problem, its
communication cost is a fundamental metric whose improvement has been the
focus of decades of research. The Dolev-Reischuk bound, one of the most
celebrated results in distributed computing, proved 40 years ago that, at
least for Byzantine broadcast, no deterministic solution can do better than
$\Omega(t^{2})$ exchanged messages in the worst case. Since then, it remained
unknown whether the quadratic lower bound extends to seemingly weaker variants
of Byzantine agreement. This paper answers the question in the affirmative,
closing this long-standing open problem. Namely, we prove that _any_ non-
trivial agreement problem requires $\Omega(t^{2})$ messages to be exchanged in
the worst case. To prove the general lower bound, we determine the weakest
Byzantine agreement problem and show, via a novel indistinguishability
argument, that it incurs $\Omega(t^{2})$ exchanged messages.
## 1\. Introduction
Byzantine agreement (LSP82, ) is a foundational problem of distributed
computing. Its importance stems from the fact that Byzantine agreement lies at
the heart of state machine replication (CL02, ; adya2002farsite, ;
abd2005fault, ; kotla2004high, ; veronese2011efficient, ; amir2006scaling, ;
kotla2007zyzzyva, ; malkhi2019flexible, ; momose2021multi, ), distributed key
generation (AbrahamJMMST21, ; ShresthaBKN21, ; Kokoris-KogiasM20, ;
DasYXMK022, ), secure multi-party computation (DBLP:conf/tcc/DeligiosHL21, ;
DBLP:conf/eurocrypt/FitziGMR02, ; DBLP:conf/crypto/GennaroIKR02, ), as well as
various distributed services (galil1987cryptographic, ; gilbert2010rambo, ).
Recent years have witnessed a renewed interest in Byzantine agreement due to
the emergence of blockchain systems (abraham2016solida, ; chen2016algorand, ;
abraham2016solidus, ; luu2015scp, ; correia2019byzantine, ; CGL18, ;
buchman2016tendermint, ). Formally, the agreement problem is defined in a
distributed system of $n$ processes; up to $t<n$ processes can be _faulty_ ,
whereas the rest are _correct_. Correct processes behave according to the
prescribed deterministic protocol; faulty processes can deviate arbitrarily
from it. Byzantine agreement exposes the following interface:
* •
input $\mathsf{propose}(v\in\mathcal{V}_{I})$: a process proposes a value $v$
from a (potentially infinite) set $\mathcal{V}_{I}$.
* •
output $\mathsf{decide}(v^{\prime}\in\mathcal{V}_{O})$: a process decides a
value $v^{\prime}$ from a (potentially infinite) set $\mathcal{V}_{O}$.
Byzantine agreement ensures the following properties:
* •
_Termination:_ Every correct process eventually decides.
* •
_Agreement:_ No two correct processes decide different values.
To preclude a trivial solution in which processes agree on a predetermined
value, Byzantine agreement requires an additional property – _validity_ – that
specifies which decisions are admissible.
The exact definition of the validity property yields a specific agreement
problem. For example, Byzantine broadcast (Wan2020, ; Wan2023a, ;
abraham2021good, ; Nayak2020a, ) ensures _Sender Validity_ , i.e., if the
predetermined sender is correct, then its proposed value must be decided by a
correct process. Weak consensus (yin2019hotstuff, ; lewis2022quadratic, ;
civit2022byzantine, ; BKM19, ) guarantees only _Weak Validity_ , i.e., if all
processes are correct and they all propose the same value, that value is the
sole admissible decision. Other notable Byzantine agreement problems include
(1) strong consensus (LSP82, ; civit2022byzantine, ; CGL18, ), ensuring that,
if all correct processes propose the same value, that value must be decided,
(2) interactive consistency (LSP82, ; fischer1981lower, ; ben2003resilient, ),
where correct processes agree on the proposals of all $n$ processes, and (3)
agreement problems employed in today’s blockchain systems (Cachin2001, ;
BKM19, ; yin2019hotstuff, ), which require the decided value to satisfy a
globally verifiable condition (e.g., the value is a transaction correctly
signed by the issuing client).
##### The worst-case communication cost of Byzantine agreement
Motivated by practical implications, one of the most studied aspects of
Byzantine agreement is its communication cost. Since the inception of
Byzantine agreement, research has been focused on minimizing the number of
exchanged bits of information (dolev1985bounds, ; validity_podc, ;
lewis2022quadratic, ; wan2023amortized, ; civit2022byzantine, ;
everyBitCounts, ; DBLP:journals/iandc/CoanW92, ; berman1992bit, ; Chen2021, ;
Nayak2020a, ; Abraham2023a, ). However, there are intrinsic limits. The
seminal Dolev-Reischuk bound (dolev1985bounds, ) proves that Byzantine
broadcast cannot be solved unless $\Omega(t^{2})$ messages are exchanged in
the worst case. (This naturally applies to any problem to which Byzantine
broadcast can be reduced with $o(t^{2})$ messages.) The result of
(dolev1985bounds, ) is shown for any Byzantine broadcast algorithm that
operates in _synchrony_ , where the message delays are known. Inherently, this
lower bound applies to weaker network models as well. Concretely, it extends
to _partial synchrony_ (DLS88, ), in which the communication is asynchronous
(with arbitrary message delays) until some unknown point in time, after which
it becomes synchronous. (Byzantine agreement is known to be unsolvable in full
asynchrony (fischer1985impossibility, ).)
While the Dolev-Reischuk bound answers the question of what the necessary
message cost is for Byzantine broadcast, it is not general, i.e., it does not
hold for _any_ specific non-trivial agreement problem. (An agreement problem
is trivial if there exists an always-admissible value that can be decided
immediately, i.e., without any communication.) For instance, the Dolev-
Reischuk bound does not apply to weak consensus. Thus, whether all non-trivial
agreement problems require a quadratic number of messages remains unknown. In
this paper, we answer this long-standing question in the affirmative.
###### Theorem 1.
No (non-trivial) Byzantine agreement problem can be solved with fewer than
$\Omega(t^{2})$ exchanged messages in the worst case even in synchrony.
To prove our general lower bound, we study binary
($\mathcal{V}_{I}=\mathcal{V}_{O}=\\{0,1\\}$) weak consensus in synchrony.
Namely, we first prove an $\Omega(t^{2})$ lower bound on the number of
exchanged messages for weak consensus. Then, to generalize the bound, we prove
that weak consensus is the weakest agreement problem by presenting a reduction
from it to any (solvable and non-trivial) agreement problem. As a byproduct,
the reduction allows us to define the entire landscape of solvable (and
unsolvable) agreement problems, thus unifying all previous results on the
solvability of Byzantine agreement. (We believe this result to be important in
its own right.)
##### The fundamental challenge of weak consensus
Recall that the _Weak Validity_ property of weak consensus guarantees only
that, if all processes are correct and they all propose the same value, that
value must be decided. This is a very weak requirement: picking $1$ as the
decision is always allowed except in a _single_ execution $\mathcal{E}$ where
all processes are correct and they all propose $0$. Hence, any weak consensus
algorithm needs only to distinguish _two_ scenarios: either the execution is
(1) $\mathcal{E}$, deciding $0$, or (2) non-$\mathcal{E}$, deciding $1$. This
observation was the starting point for our conjecture that weak consensus is
the _weakest_ (non-trivial) agreement problem (which we prove in this paper),
implying that any lower bound for weak consensus also applies to all other
agreement problems.
To illustrate the difficulty of proving a quadratic lower bound for weak
consensus, we briefly discuss the common point in the classical proof
techniques exploited for similar results (namely, (dolev1985bounds, ) and
(validity_podc, )) and explain why those techniques cannot be easily adapted
to weak consensus in synchrony. The crux of those proof techniques consists in
showing that, unless $\Omega(t^{2})$ messages are exchanged, there necessarily
exists an execution $\mathcal{E}_{1}$ in which some correct process $p$
decides $1$ without receiving any message. The second step of the proof
consists of constructing another execution $\mathcal{E}_{0}$ in which (1) $p$
is correct and receives no messages, and (2) some correct process $q\neq p$
decides $0$. As $p$ cannot distinguish $\mathcal{E}_{0}$ from
$\mathcal{E}_{1}$, $p$ decides $1$ in $\mathcal{E}_{0}$, thus violating
_Agreement_. Unfortunately, while elegant, this approach cannot be directly
adapted to weak consensus in synchrony as both $\mathcal{E}_{0}$ and
$\mathcal{E}_{1}$ inevitably contain detectable faults. Therefore, nothing
prevents a weak consensus algorithm from deciding $1$ in _both_
$\mathcal{E}_{0}$ and $\mathcal{E}_{1}$, making the aforementioned reasoning
inapplicable. Intuitively, the main difficulty in proving a quadratic lower
bound for weak consensus is that _any_ detectable misbehavior immediately
allows an algorithm to choose a predetermined “default” value.
##### Technical overview.
To prove an $\Omega(t^{2})$ lower bound for weak consensus in the Byzantine
failure model, we show that the bound holds even with only _omission_
failures. An omission-faulty process can only misbehave by failing to receive
or send some messages, but not by behaving maliciously. (In contrast to
Byzantine processes, it is reasonable to make claims about the behavior of
omission-faulty processes as they are still _honest_ , i.e., they never act
malevolently.) Our proof utilizes in a novel way the standard concept of
_isolation_ (dolev1985bounds, ; validity_podc, ; AbrahamStern22, ;
Abraham2023revisited, ; Abraham2019c, ; hadzilacos1991message, ), in which a
small subset of omission-faulty processes starts (from some round onward)
“dropping” all messages received from outside the set. Concretely, we obtain
our bound through a sequence of four critical observations about what happens
when _multiple_ groups of processes are isolated. Suppose that there are three
groups: group $A$, which is fully correct and sends $o(t^{2})$ messages, and
groups $B$ and $C$, which are (separately) isolated from rounds $k_{B}$ and
$k_{C}$, respectively. We observe that:
1. (1)
In any execution in which group $B$ (resp., $C$) is isolated, correct
processes from $A$ and a majority of processes from $B$ (resp., $C$) must
decide the _same_ bit; otherwise, we could design an execution which violates
the properties of weak consensus.
2. (2)
If both $B$ and $C$ are isolated from round $1$, group $A$ must decide some
“default” bit _independently_ of their proposals, i.e., group $A$ either
always decides 0 or always decides 1 whenever $B$ and $C$ are isolated from
round $1$.
3. (3)
At some round $R$ in the execution, $A$ must stop deciding the default bit
_even if there are faults afterward_ (e.g., even if $B$ and $C$ are isolated).
For example, if the default bit is $1$, but all processes propose $0$ and act
correctly until the end, then, by an interpolation argument, all correct
processes must at some round $R$ direct their strategy towards deciding $0$
(otherwise, they would violate _Weak Validity_).
4. (4)
Isolating $B$ and $C$ at the same round (e.g., $k_{C}=k_{B}=R$) or one round
apart (e.g., $k_{B}=k_{C}+1=R$) is indistinguishable for processes in $B$ or
$C$. Thus, we can create a situation where processes in $C$ decide the default
bit $1$, while processes in $B$ choose $0$. In this situation, processes in
$A$ necessarily violate the statement of the first observation: if they decide
$1$, they disagree with $B$; if they decide $0$, they disagree with $C$.
To generalize our lower bound, we then show that weak consensus is reducible
at $0$ message cost to any solvable and non-trivial agreement problem in
synchrony. This reduction is possible because, for any Byzantine agreement
problem that is non-trivial and synchronously solvable, its specific validity
property must follow a certain structure. Concretely, we define a simple
combinatorial condition – the _containment condition_ – which we prove to be a
necessary condition for synchronously solvable non-trivial agreement problems.
Interestingly, the containment condition is also _sufficient_ , enabling us to
devise the general solvability theorem for Byzantine agreement problems.
##### Roadmap.
We state the system model and preliminaries in § 2. In § 3, we prove the
$\Omega(t^{2})$ lower bound on exchanged messages for weak consensus. A
generalization of the bound to all (solvable) non-trivial agreement problems
is provided in § 4. In § 5, we present the general solvability theorem for
Byzantine agreement problems. We provide an overview of related work in § 6,
and conclude the paper in § 7. The optional appendix contains omitted proofs.
## 2\. System Model & Preliminaries
##### Processes & adversary.
We consider a static system $\Pi=\\{p_{1},...,p_{n}\\}$ of $n$ processes,
where each process acts as a deterministic state machine. Moreover, we
consider a _static adversary_ which can corrupt up to $t<n$ processes before
each run of the system.111 Note that a lower bound proven for a static
adversary trivially applies to a stronger adaptive adversary which can corrupt
processes during (and not only before) a run of the system. A corrupted
process can behave arbitrarily; a non-corrupted process behaves according to
its state machine. We say that a corrupted process is _faulty_ , whereas a
non-corrupted process is _correct_.
##### Synchronous environment.
Computation unfolds in synchronous rounds. In each round
$1,2,...\in\mathbb{N}$, each process (1) performs (deterministic) local
computations, (2) sends (possibly different) messages to (a subset of) the
other processes, and (3) receives the messages sent to it in the round. We
assume authenticated channels: the receiver of a message is aware of the
sender’s identity.
##### Executions.
Each execution of any algorithm is uniquely identified by (1) the sets of
correct and faulty processes, and (2) the messages faulty processes send (or
do not send) in each round. Given any algorithm $\mathcal{A}$,
$\mathit{execs}(\mathcal{A})$ denotes the set of all $\mathcal{A}$’s
executions with no more than $t$ faulty processes. Lastly,
$\mathit{Correct}_{\mathcal{A}}(\mathcal{E})$ denotes the set of correct
processes in any execution $\mathcal{E}\in\mathit{execs}(\mathcal{A})$.
##### Message complexity.
Let $\mathcal{A}$ be any algorithm and let $\mathcal{E}$ be any execution of
$\mathcal{A}$. The message complexity of $\mathcal{E}$ is the number of
messages sent by correct processes throughout the entire execution
$\mathcal{E}$. (Note that all messages count towards the message complexity of
$\mathcal{E}$, even those sent after all correct processes have already
decided.) The _message complexity_ of $\mathcal{A}$ is then defined as
$\max_{\mathcal{E}\in\mathit{execs}(\mathcal{A})}\bigg{\\{}\text{the message
complexity of }\mathcal{E}\bigg{\\}}.$
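As a toy illustration of this counting convention, the following sketch (an assumption-laden harness, not the formal model above) runs a deterministic round-based protocol and tallies only the messages sent by correct processes.

```python
from typing import Callable, Dict, List, Set, Tuple

# Toy synchronous-round harness illustrating how message complexity is
# counted: only messages sent by *correct* processes contribute. The process
# behavior and message format here are illustrative assumptions.
Msg = Tuple[int, str]  # (sender id, payload)

def run(n: int, correct: Set[int], rounds: int,
        step: Callable[[int, int, List[Msg]], Dict[int, str]]) -> int:
    """step(pid, rnd, inbox) -> {receiver: payload}; returns the number of
    messages sent by correct processes over the whole run."""
    inboxes: Dict[int, List[Msg]] = {p: [] for p in range(n)}
    sent_by_correct = 0
    for rnd in range(1, rounds + 1):
        outgoing: Dict[int, List[Msg]] = {p: [] for p in range(n)}
        for p in range(n):
            for q, payload in step(p, rnd, inboxes[p]).items():
                outgoing[q].append((p, payload))
                if p in correct:
                    sent_by_correct += 1  # messages from faulty senders do not count
        inboxes = outgoing                 # synchronous delivery at round end
    return sent_by_correct

# Example: everyone broadcasts once in round 1 -> n*(n-1) counted messages.
n = 5
cost = run(n, correct=set(range(n)), rounds=2,
           step=lambda p, rnd, inbox: {q: "hi" for q in range(n)
                                       if q != p and rnd == 1})
print(cost)  # 20
```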
## 3\. Lower Bound on Message Complexity of Weak Consensus
To prove our general lower bound, we first show a quadratic lower bound for
weak consensus:
###### Theorem 1.
Any weak consensus algorithm has $\Omega(t^{2})$ message complexity.
In order to prove Theorem 1, we show a strictly stronger lower bound for the
omission failure model in which processes can only fail by “dropping” some
messages they send or receive, but not by behaving maliciously.
##### Omission failures.
In (only) this section, we consider _omission failures_. A static adversary
corrupts up to $t<n$ processes before each execution. A corrupted process can
commit:
* •
_send-omission_ faults, by not sending some messages it is supposed to send;
or
* •
_receive-omission_ faults, by not receiving some messages it is supposed to
receive.
Note that a faulty process cannot misbehave in an arbitrary manner, i.e., it
acts according to its state machine at all times. Moreover, corrupted
processes are unaware that they are corrupted, i.e., they do not know if or
when they omitted some messages. Corrupted processes are said to be _faulty_ ,
whereas non-corrupted processes are said to be _correct_.
Two executions are said to be _indistinguishable_ to a (correct or faulty)
process if and only if (1) the process has the same proposal in both
executions and (2) the process receives identical messages in each round of
both executions. Note that, given two executions indistinguishable to some
process, the process’s actions in each round of both executions are
_identical_ due to the process’s determinism. Concretely, if two $k$-round-
long ($k\in\mathbb{N}$) executions are indistinguishable to a process $p_{i}$,
then (1) $p_{i}$’s internal states at the start of the $(k+1)$-st round of
both executions are identical, and (2) the sets of all messages sent
(including those that are omitted) in the $(k+1)$-st round of both executions
are identical. We relegate a precise definition of the omission failure model
to Appendix A.
##### Notation & remarks.
Given any set of processes $G$, let $\bar{G}=\Pi\setminus{G}$. If a faulty
process omits sending (resp., omits receiving) some message $m$, we say that
the process _send-omits_ (resp., _receive-omits_) $m$. Note that, in the
omission failure model, it is reasonable to make claims about the behaviors of
faulty processes as they always behave according to their state machine.
Finally, observe that any weak consensus algorithm provides guarantees _only_
to correct processes, i.e., it is possible for faulty processes to not
terminate or to disagree (among themselves or with correct processes).
##### Proof of Theorem 1.
As previously mentioned, we prove a quadratic lower bound for weak consensus
by showing that the problem requires at least $\frac{t^{2}}{32}$ messages even
with omission failures:
###### Lemma 2.
Any omission-resilient weak consensus algorithm has at least
$\frac{t^{2}}{32}$ message complexity.
We prove Lemma 2 by contradiction. Fix any $n$ and $t$ such that
$t\in[8,n-1]$. (Without loss of generality, we consider $t$ divisible by $8$.)
Fix any weak consensus algorithm $\mathcal{A}$ which (1) tolerates $t$
omission failures and works among $n$ processes, and (2) whose message
complexity is less than $\frac{t^{2}}{32}$. This implies that correct
processes send fewer than $\frac{t^{2}}{32}$ messages in _every_ execution of
$\mathcal{A}$. Table 1 introduces notation we rely on throughout the proof.
_Notation_ | _Definition_
---|---
$(A,B,C)$ | Any partition of $\Pi$ such that (1) $|B|=\frac{t}{4}$, and (2) $|C|=\frac{t}{4}$ (naturally, $|A|=n-\frac{t}{2}$).
$\mathcal{E}_{0}$ | The infinite execution of $\mathcal{A}$ in which (1) all processes propose $0$, and (2) all processes are correct.
$\mathcal{E}_{0}^{B(k)},k\in\mathbb{N}$ | The infinite execution of $\mathcal{A}$ in which (1) all processes propose $0$, (2) processes from $A\cup C$ are correct, and (3) group $B$ is isolated from round $k$.
$\mathcal{E}_{0}^{C(k)},k\in\mathbb{N}$ | The infinite execution of $\mathcal{A}$ in which (1) all processes propose $0$, (2) processes from $A\cup B$ are correct, and (3) group $C$ is isolated from round $k$.
$\mathcal{E}_{1}^{C(1)}$ | The infinite execution of $\mathcal{A}$ in which (1) all processes propose $1$, (2) processes from $A\cup B$ are correct, and (3) group $C$ is isolated from round $1$.
Table 1. Notation table for the lower bound for weak consensus.
(The concept of group isolation is described in Definition 3.)
First, let us introduce the concept of _isolation_ , which we use extensively
throughout the proof.
###### Definition 3 (Isolation).
A group $G\subsetneq\Pi$ of $|G|\leq t$ processes is _isolated from some round
$k\in\mathbb{N}$_ in an execution $\mathcal{E}$ of $\mathcal{A}$ if and only
if, for every process $p_{G}\in G$, the following holds:
* •
$p_{G}$ is faulty in $\mathcal{E}$; and
* •
$p_{G}$ does not send-omit any message in $\mathcal{E}$; and
* •
for every message $m$ sent by any process $p_{m}$ to $p_{G}$ in any round
$k^{\prime}\in\mathbb{N}$ of $\mathcal{E}$, $p_{G}$ receive-omits $m$ in
$\mathcal{E}$ if and only if (1) $p_{m}\in\bar{G}$, and (2) $k^{\prime}\geq
k$.
Intuitively, a group $G$ is isolated from some round $k$ if and only if no
process $p_{G}\in G$ receives any message from outside of $G$ in any round
$k^{\prime}\geq k$, i.e., $p_{G}$ only receives messages sent by processes in
$G$ from round $k$ onward; other than these receive-omission faults, $p_{G}$
commits no other faults. Figure 1 illustrates the concept of isolation.
Figure 1. Illustration of Definition 3. The colors represent the local
behaviors of processes. Execution $\mathcal{E}_{0}$ has no faults. Execution
$\mathcal{E}_{0}^{G(R)}$ proceeds identically to $\mathcal{E}_{0}$, sending
the same messages (green color) up until round $R$ (inclusive). However, group
$G$ is _isolated_ from round $R$, causing it to drop all messages from group
$\overline{G}$ from then on. This (potentially) changes $G$’s sending behavior
from round $R+1$ onward (red color). By propagation, group $\overline{G}$ is
then (potentially) affected by $G$’s new sending behavior (red color), causing
$\overline{G}$ to deviate from $\mathcal{E}_{0}$ in the messages it sends from
round $R+2$ onward (blue color).
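The receive-omission pattern that isolation imposes reduces to a one-line rule. The following Python sketch (function name ours) transcribes Definition 3:

```python
def receive_omits(receiver: int, G: frozenset, sender: int,
                  round_no: int, k: int) -> bool:
    """Definition 3, transcribed: a process of the group G isolated from
    round k receive-omits a message iff the sender lies outside G and
    the message is sent in round k or later; G commits no other faults."""
    assert receiver in G
    return (sender not in G) and (round_no >= k)

G = frozenset({2, 3})
print(receive_omits(2, G, sender=1, round_no=5, k=4))  # True: outsider, round >= k
print(receive_omits(2, G, sender=3, round_no=5, k=4))  # False: sender inside G
print(receive_omits(2, G, sender=1, round_no=3, k=4))  # False: before round k
```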
Let $(X,Y,Z)$ be any partition of $\Pi$ such that $|Y|=\frac{t}{4}$ and
$|Z|\leq\frac{t}{4}$. The following lemma proves that in any infinite
execution $\mathcal{E}$ of $\mathcal{A}$ in which processes from $X$ are
correct and processes from $Y\cup Z$ are faulty, more than half of processes
from $Y$ decide the same bit as (all) processes from $X$. If this was not the
case, we could construct an execution that demonstrates that $\mathcal{A}$ is
not a correct weak consensus algorithm. We formally prove the lemma in
Appendix A.
###### Lemma 4.
Let $(X,Y,Z)$ be any partition of $\Pi$ such that (1) $|Y|=\frac{t}{4}$, and
(2) $|Z|\leq\frac{t}{4}$ (naturally, $|X|=n-|Y|-|Z|$). Moreover, let
$\mathcal{E}$ be any infinite execution of $\mathcal{A}$ such that:
* •
processes from $X$ are correct in $\mathcal{E}$, whereas processes from $Y\cup
Z$ are faulty in $\mathcal{E}$; and
* •
all processes from $X$ decide the same bit $b_{X}$ (to satisfy _Termination_
and _Agreement_); and
* •
group $Y$ is isolated from some round $k\in\mathbb{N}$ in $\mathcal{E}$.
Then, there exists a set $Y^{\prime}\subseteq Y$ of
$|Y^{\prime}|>\frac{|Y|}{2}$ processes such that all processes in $Y^{\prime}$
decide $b_{X}$ in $\mathcal{E}$.
Proof Sketch. For every process $p\in Y$, let $\mathcal{M}_{X\to p}$ denote
the set of all messages which are (1) sent by any process $p^{\prime}\in X$ in
$\mathcal{E}$, and (2) receive-omitted by $p$ in $\mathcal{E}$; as $p\in Y$
and group $Y$ is isolated from round $k$ in $\mathcal{E}$, every message
$m\in\mathcal{M}_{X\to p}$ is sent in some round $k^{\prime}\geq k$. For every
set $Y^{\prime\prime}\subseteq Y$, let $\mathcal{M}_{X\to
Y^{\prime\prime}}=\bigcup\limits_{p\in Y^{\prime\prime}}\mathcal{M}_{X\to p}$.
As correct processes (i.e., processes from group $X$) send fewer than
$\frac{t^{2}}{32}$ messages in $\mathcal{E}$, $|\mathcal{M}_{X\to
Y}|<\frac{t^{2}}{32}$. Therefore, there does not exist a set $Y^{*}\subseteq
Y$ of $|Y^{*}|\geq\frac{|Y|}{2}=\frac{t}{8}$ processes such that, for every
process $p_{Y^{*}}\in Y^{*}$, $|\mathcal{M}_{X\to
p_{Y^{*}}}|\geq\frac{t}{2}$: such a set would account for at least
$\frac{t}{8}\cdot\frac{t}{2}=\frac{t^{2}}{16}>\frac{t^{2}}{32}$
receive-omitted messages. This implies that there exists a set
$Y^{\prime}\subseteq Y$ of $|Y^{\prime}|>\frac{|Y|}{2}$ processes such that,
for every process $p_{Y^{\prime}}\in Y^{\prime}$, $|\mathcal{M}_{X\to
p_{Y^{\prime}}}|<\frac{t}{2}$.
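The counting step above is a plain averaging argument; a throwaway Python check (the concrete value of $t$ is an arbitrary choice of ours) makes the arithmetic explicit:

```python
t = 64                   # any t divisible by 8
Y_size = t // 4          # |Y| = t/4
budget = t**2 // 32      # group X sends fewer messages than this
# If at least |Y|/2 processes of Y each receive-omitted >= t/2 messages
# from X, the omitted messages alone would exceed the budget:
needed = (Y_size // 2) * (t // 2)
print(needed, ">", budget, "->", needed > budget)  # 256 > 128 -> True
```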
Fix any process $p_{Y^{\prime}}\in Y^{\prime}$. By contradiction, suppose that
$p_{Y^{\prime}}$ does not decide $b_{X}$ in $\mathcal{E}$. Let $\mathcal{S}$
denote the set of all processes whose messages $p_{Y^{\prime}}$ receive-omits
in (any round $k^{\prime}\geq k$ of) $\mathcal{E}$; note that
$|\mathcal{S}\cap X|<\frac{t}{2}$ (since $|\mathcal{M}_{X\to
p_{Y^{\prime}}}|<\frac{t}{2}$) and $\mathcal{S}\subsetneq X\cup Z$. Let us
construct another infinite execution $\mathcal{E}^{\prime}$ of $\mathcal{A}$
following the (sequentially-executed) steps below:
1. (1)
Processes in $\mathcal{S}\cup Y\cup Z\setminus{\\{p_{Y^{\prime}}\\}}$ are
faulty in $\mathcal{E}^{\prime}$, whereas all other processes are correct.
2. (2)
Then, we set $\mathcal{E}^{\prime}\leftarrow\mathcal{E}$: every process (at
first) behaves in the same manner as in $\mathcal{E}$.
3. (3)
For every message $m$ such that $p_{Y^{\prime}}$ receive-omits $m$ in
$\mathcal{E}$, $m$ is send-omitted in $\mathcal{E}^{\prime}$. That is, the
sender of $m$ is responsible for $p_{Y^{\prime}}$ not receiving $m$ in
$\mathcal{E}^{\prime}$.
Observe that $p_{Y^{\prime}}$ is indeed correct in $\mathcal{E}^{\prime}$ as
(1) $p_{Y^{\prime}}$ does not commit any send-omission faults (since
$p_{Y^{\prime}}$ does not commit those faults in $\mathcal{E}$), and (2)
$p_{Y^{\prime}}$ does not commit any receive-omission faults (since every
message which is receive-omitted in $\mathcal{E}$ is send-omitted in
$\mathcal{E}^{\prime}$). Moreover, there are $|\mathcal{S}\cup Y\cup
Z\setminus{\\{p_{Y^{\prime}}\\}}|=|(\mathcal{S}\cap X)\cup Y\cup
Z\setminus{\\{p_{Y^{\prime}}\\}}|<\frac{t}{2}+\frac{t}{4}+\frac{t}{4}-1<t$
faulty processes in $\mathcal{E}^{\prime}$. Furthermore, there exists a
process $p_{X}\in X$ which is correct in $\mathcal{E}^{\prime}$ as
$|\mathcal{S}\cap X|<\frac{t}{2}$, $|X|\geq n-\frac{t}{2}$ and $n>t$. Finally,
neither $p_{Y^{\prime}}$ nor $p_{X}$ can distinguish $\mathcal{E}^{\prime}$
from $\mathcal{E}$ as their behaviors in $\mathcal{E}^{\prime}$ and
$\mathcal{E}$ are identical. (Recall that process $p_{Y^{\prime}}$ is unaware
of the receive-omission failures it commits in $\mathcal{E}$; hence, the fact
that $p_{Y^{\prime}}$ commits no receive-omission failures in
$\mathcal{E}^{\prime}$ does not allow $p_{Y^{\prime}}$ to distinguish
$\mathcal{E}^{\prime}$ from $\mathcal{E}$.) Therefore, either _Termination_ (if
$p_{Y^{\prime}}$ does not decide) or _Agreement_ (if $p_{Y^{\prime}}$ decides
$1-b_{X}$) is violated in $\mathcal{E}^{\prime}$, which contradicts the fact
that $\mathcal{A}$ is a correct weak consensus algorithm. $\square$
Next, we define _mergeable_ executions.
###### Definition 5 (Mergeable executions).
Any two infinite executions $\mathcal{E}_{0}^{B(k_{1})}$
($k_{1}\in\mathbb{N}$) and $\mathcal{E}_{b}^{C(k_{2})}$ ($b\in\\{0,1\\}$,
$k_{2}\in\mathbb{N}$) are _mergeable_ if and only if:
* •
$k_{1}=k_{2}=1$; or
* •
$|k_{1}-k_{2}|\leq 1$ and $b=0$.
In brief, executions $\mathcal{E}_{0}^{B(k_{1})}$ and
$\mathcal{E}_{b}^{C(k_{2})}$ (which are defined in Table 1) are mergeable if
(1) group $B$ (resp., $C$) is isolated from round $1$ in
$\mathcal{E}_{0}^{B(k_{1})}$ (resp., $\mathcal{E}_{b}^{C(k_{2})}$), or (2)
$b=0$ and groups $B$ and $C$ are isolated at most one round apart in their
respective executions. Note that all processes from group $A$ are correct in
any two mergeable executions. The following lemma proves that processes from
group $A$ decide identically in any two mergeable executions, and it
represents a crucial intermediate result in proving our lower bound. We
formally prove the lemma in Appendix A. An illustration of its application can
be seen in Figure 2.
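Before turning to the lemma, note that Definition 5 is mechanical enough to transcribe directly; a Python sketch (function name ours):

```python
def mergeable(k1: int, k2: int, b: int) -> bool:
    """Definition 5: E_0^{B(k1)} and E_b^{C(k2)} are mergeable iff both
    groups are isolated from round 1, or b = 0 and the isolation rounds
    differ by at most one."""
    return (k1 == 1 and k2 == 1) or (abs(k1 - k2) <= 1 and b == 0)

print(mergeable(1, 1, 1))  # True: both isolated from round 1
print(mergeable(5, 4, 0))  # True: b = 0, isolation rounds one apart
print(mergeable(5, 3, 0))  # False: isolation rounds two apart
```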
###### Lemma 6.
Let $\mathcal{E}_{0}^{B(k_{1})}$ ($k_{1}\in\mathbb{N}$) and
$\mathcal{E}_{b}^{C(k_{2})}$ ($b\in\\{0,1\\},k_{2}\in\mathbb{N}$) be any two
mergeable executions. Let any process from group $A$ decide $b_{1}$ (resp.,
$b_{2}$) in $\mathcal{E}_{0}^{B(k_{1})}$ (resp.,
$\mathcal{E}_{b}^{C(k_{2})}$). Then, $b_{1}=b_{2}$.
Proof Sketch. For $\mathcal{A}$ to satisfy _Termination_ and _Agreement_ , all
processes from group $A$ decide $b_{1}$ (resp., $b_{2}$) in
$\mathcal{E}_{0}^{B(k_{1})}$ (resp., $\mathcal{E}_{b}^{C(k_{2})}$). Given the
partition $(A\cup C,B,\emptyset)$ of $\Pi$ and the execution
$\mathcal{E}_{0}^{B(k_{1})}$, Lemma 4 proves that there exists a set
$B^{\prime}\subseteq B$ of more than $\frac{|B|}{2}$ processes such that every
process $p_{B^{\prime}}\in B^{\prime}$ decides $b_{1}$ in
$\mathcal{E}_{0}^{B(k_{1})}$. Similarly, given the partition $(A\cup
B,C,\emptyset)$ of $\Pi$ and the execution $\mathcal{E}_{b}^{C(k_{2})}$, Lemma
4 proves that there exists a set $C^{\prime}\subseteq C$ of more than
$\frac{|C|}{2}$ processes such that every process $p_{C^{\prime}}\in
C^{\prime}$ decides $b_{2}$ in $\mathcal{E}_{b}^{C(k_{2})}$.
We now construct another infinite execution $\mathcal{E}$ of $\mathcal{A}$:
1. (1)
Processes from group $A$ are correct, whereas processes from $B\cup C$ are
faulty.
2. (2)
All processes from $A\cup B$ propose $0$, whereas all processes from group $C$
propose $b$.
3. (3)
Every process $p_{B}\in B$ (resp., $p_{C}\in C$) behaves in the same manner as
in $\mathcal{E}^{B(k_{1})}_{0}$ (resp., $\mathcal{E}^{C(k_{2})}_{b}$). Let us
elaborate on why this step of the construction is valid:
* •
Suppose that $k_{1}=k_{2}=1$. Due to the construction of $\mathcal{E}$, every
process $p_{B}\in B$ (resp., $p_{C}\in C$) receives messages only from other
processes in the same group $B$ (resp., $C$) in $\mathcal{E}$. As (1) all
messages received by $p_{B}\in B$ (resp., $p_{C}\in C$) in $\mathcal{E}$ are
sent in $\mathcal{E}_{0}^{B(1)}$ (resp., $\mathcal{E}_{b}^{C(1)}$), and (2)
for every process $p_{B}^{\prime}\in B$ (resp., $p_{C}^{\prime}\in C$), the
set of messages sent by $p_{B}^{\prime}$ (resp., $p_{C}^{\prime}$) in
$\mathcal{E}$ is identical to the set of messages sent by $p_{B}^{\prime}$
(resp., $p_{C}^{\prime}$) in $\mathcal{E}_{0}^{B(1)}$ (resp.,
$\mathcal{E}_{b}^{C(1)}$), the construction step is indeed valid in this case.
* •
Suppose that $|k_{1}-k_{2}|\leq 1$ and $b=0$. As the behavior of each process
from group $B$ (resp., $C$) in $\mathcal{E}$ is identical to its behavior in
$\mathcal{E}_{0}^{B(k_{1})}$ (resp., $\mathcal{E}_{0}^{C(k_{2})}$), the set of
messages received by any process $p_{B}\in B$ (resp., $p_{C}\in C$) in
$\mathcal{E}$ is identical to the set of messages received by $p_{B}\in B$
(resp., $p_{C}\in C$) in $\mathcal{E}_{0}^{B(k_{1})}$ (resp.,
$\mathcal{E}_{0}^{C(k_{2})}$). To prove the validity of the construction step
in this scenario, we show that, for each message received by any process
$p_{B}\in B$ (resp., $p_{C}\in C$) in $\mathcal{E}$, that message is sent in
$\mathcal{E}$.
Without loss of generality, we fix any message $m$ received by any process
$p_{B}\in B$ in $\mathcal{E}$. We denote the sender of $m$ by $p_{m}$. Note
that $m$ is sent by $p_{m}$ in $\mathcal{E}_{0}^{B(k_{1})}$ as $m$ is received
in $\mathcal{E}_{0}^{B(k_{1})}$. If $m$ is received before round
$R=\min(k_{1},k_{2})$, $m$ is sent in $\mathcal{E}$ as, for any process
$p\in\Pi$, $p$’s behavior until (and excluding) round $R$ is identical in
$\mathcal{E}$ and $\mathcal{E}_{0}^{B(k_{1})}$. If $m$ is received in or after
round $R$, we distinguish two possibilities:
* –
Let $m$ be received before round $k_{1}$. (This is possible only if
$k_{1}>k_{2}$.) Hence, $m$ is received in round $R$. In this case, $m$ is sent
in $\mathcal{E}$ as the set of messages $p_{m}$ sends in $\mathcal{E}$ is
identical to the set of messages $p_{m}$ sends in $\mathcal{E}_{0}^{B(k_{1})}$
(since the internal state of process $p_{m}$ at the beginning of round $R$ is
identical in $\mathcal{E}$ and $\mathcal{E}_{0}^{B(k_{1})}$).
* –
Let $m$ be received in or after round $k_{1}$. In this case, $p_{m}\in B$ (as
group $B$ is isolated from round $k_{1}$ in $\mathcal{E}_{0}^{B(k_{1})}$).
Therefore, $m$ is sent in $\mathcal{E}$ as the behavior of every process from
group $B$ in $\mathcal{E}$ is identical to its behavior in
$\mathcal{E}_{0}^{B(k_{1})}$.
Note that this step of the construction ensures that group $B$ (resp., $C$) is
isolated from round $k_{1}$ (resp., $k_{2}$) in $\mathcal{E}$.
As no process $p_{B^{\prime}}\in B^{\prime}$ (resp., $p_{C^{\prime}}\in
C^{\prime}$) distinguishes $\mathcal{E}$ from $\mathcal{E}_{0}^{B(k_{1})}$
(resp., $\mathcal{E}_{b}^{C(k_{2})}$), all processes from $B^{\prime}$ (resp.,
$C^{\prime}$) decide $b_{1}$ (resp., $b_{2}$) in $\mathcal{E}$. Let $b_{A}$ be
the decision of processes from group $A$ in $\mathcal{E}$; such a decision
must exist as $\mathcal{A}$ satisfies _Termination_ and _Agreement_. Given the
partition $(A,B,C)$ of $\Pi$ and the newly constructed execution
$\mathcal{E}$, Lemma 4 proves that $b_{1}=b_{A}$. Similarly, given the
partition $(A,C,B)$ of $\Pi$ and the execution $\mathcal{E}$, Lemma 4 shows
that $b_{2}=b_{A}$. As $b_{1}=b_{A}$ and $b_{A}=b_{2}$, $b_{1}=b_{2}$, which
concludes the proof. $\square$
Lemma 6 implies that all processes from group $A$ decide identical values in
executions $\mathcal{E}_{0}^{B(1)}$ and $\mathcal{E}_{1}^{C(1)}$ as these two
executions are mergeable (see Definition 5). Without loss of generality, the
rest of the proof assumes that all processes from group $A$ decide $1$ in
$\mathcal{E}_{0}^{B(1)}$ (and $\mathcal{E}_{1}^{C(1)}$). Intuitively, the
value $1$ acts as the “default” value for processes in $A$ if they detect
faults early. In the following lemma, we prove that there exists a round
$R\in\mathbb{N}$ such that processes from group $A$ decide $1$ in
$\mathcal{E}_{0}^{B(R)}$ and $0$ in $\mathcal{E}_{0}^{B(R+1)}$. This expresses
the idea that $A$ must, at some critical round (i.e., $R+1$), abandon its
initial strategy of always deciding the “default” value.
###### Lemma 7.
There exists a round $R\in\mathbb{N}$ such that (1) all processes from group
$A$ decide $1$ in $\mathcal{E}_{0}^{B(R)}$, and (2) all processes from group
$A$ decide $0$ in $\mathcal{E}_{0}^{B(R+1)}$.
###### Proof.
Let $R_{\mathit{max}}\in\mathbb{N}$ denote the round before which all
processes decide $0$ in $\mathcal{E}_{0}$, which is the fully correct
execution with all processes proposing $0$ (see Table 1); such a round must
exist for $\mathcal{A}$ to satisfy _Termination_ and _Weak Validity_. Hence,
all processes from group $A$ decide $0$ in
$\mathcal{E}_{0}^{B(R_{\mathit{max}})}$: until round $R_{\mathit{max}}$, this
execution is indistinguishable from $\mathcal{E}_{0}$ to every process, and
all decisions are made before that round. By our assumption, all processes
from group $A$ decide $1$ in $\mathcal{E}_{0}^{B(1)}$. As the decision of
group $A$ thus flips from $1$ to $0$ as the isolation round $k$ increases from
$1$ to $R_{\mathit{max}}$, there exists a round $R\in[1,R_{\mathit{max}})$
which satisfies the statement of the lemma. ∎
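The existence step at the end of this proof is a discrete intermediate-value argument: the decision starts at $1$ (for $k=1$) and ends at $0$ (for $k=R_{\mathit{max}}$), so it must flip between some consecutive rounds. A sketch (the decision function in the usage line is a hypothetical stand-in for group $A$'s decision in $\mathcal{E}_{0}^{B(k)}$):

```python
from typing import Callable

def flip_round(decision: Callable[[int], int], R_max: int) -> int:
    """Return some R in [1, R_max) with decision(R) = 1 and
    decision(R + 1) = 0; such an R exists whenever decision(1) = 1
    and decision(R_max) = 0."""
    for R in range(1, R_max):
        if decision(R) == 1 and decision(R + 1) == 0:
            return R
    raise AssertionError("unreachable when decision(1)=1 and decision(R_max)=0")

print(flip_round(lambda k: 1 if k < 5 else 0, R_max=9))  # 4
```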
Finally, we are ready to derive the contradiction by proving that
$\mathcal{A}$ exchanges at least $\frac{t^{2}}{32}$ messages in some execution.
###### Lemma 8.
The message complexity of $\mathcal{A}$ is at least $\frac{t^{2}}{32}$.
###### Proof.
According to Lemma 7, there exists a round $R\in\mathbb{N}$ such that (1)
processes from group $A$ decide $1$ in $\mathcal{E}_{0}^{B(R)}$, and (2)
processes from group $A$ decide $0$ in $\mathcal{E}_{0}^{B(R+1)}$. By
Definition 5, executions $\mathcal{E}_{0}^{B(R)}$ and $\mathcal{E}_{0}^{C(R)}$
are mergeable. As processes from group $A$ decide $1$ in
$\mathcal{E}_{0}^{B(R)}$, Lemma 6 implies that processes from group $A$ decide
$1$ in $\mathcal{E}_{0}^{C(R)}$. Moreover, executions
$\mathcal{E}_{0}^{B(R+1)}$ and $\mathcal{E}_{0}^{C(R)}$ are mergeable
according to Definition 5. Thus, by Lemma 6, all processes from group $A$
decide $1$ in $\mathcal{E}_{0}^{B(R+1)}$ (as they do so in
$\mathcal{E}_{0}^{C(R)}$). This is a contradiction with the fact that
processes from group $A$ decide $0$ in $\mathcal{E}_{0}^{B(R+1)}$. Hence, the
assumption of $\mathcal{A}$’s message complexity being less than
$\frac{t^{2}}{32}$ must be wrong. ∎
Figure 2. Illustration of Lemma 6 used in the proof of Lemma 8. The arrows
denoting messages are not exhaustive. As in Figure 1, the colors represent the
local behaviors of processes. This picture illustrates why group $A$ is forced
to decide the same value in executions $\mathcal{E}_{0}^{B(R+1)}$ and
$\mathcal{E}_{0}^{C(R)}$. Consider the “merged” execution
$\mathcal{E}_{0}^{B(R+1),C(R)}$ where $B$ and $C$ are isolated from rounds
$R+1$ and $R$, respectively. If $A$ decides differently in
$\mathcal{E}_{0}^{B(R+1)}$ (row 1) and $\mathcal{E}_{0}^{C(R)}$ (row 5), then
majorities of $B$ and $C$ decide differently in
$\mathcal{E}_{0}^{B(R+1),C(R)}$ (rows 2 and 4) due to indistinguishability.
Group $A$ in $\mathcal{E}_{0}^{B(R+1),C(R)}$ (row 3) then disagrees with
either a majority of $B$ (row 2) or a majority of $C$ (row 4), contradicting
Lemma 4.
## 4\. Generalization of the Lower Bound
In this section, we extend the quadratic lower bound proven for weak consensus
(see § 3) to all non-trivial Byzantine agreement problems, i.e., those without
an always-admissible decision:
###### Theorem 1.
Any algorithm that solves any non-trivial Byzantine agreement problem has
$\Omega(t^{2})$ message complexity.
To prove the general lower bound (Theorem 1), we show that weak consensus is
the weakest non-trivial agreement problem. Namely, we present a zero-message
reduction from weak consensus to any (solvable) non-trivial agreement problem.
### 4.1. Validity Properties
To capture any specific Byzantine agreement problem, we require a generic
definition of the validity property. For that purpose, we reuse the formalism
(and nomenclature) of (validity_podc, ). In brief, a validity property maps
the proposals of correct processes into a set of admissible decisions.
Let a _process-proposal_ pair be a pair $(p_{i},v)$, where $p_{i}\in\Pi$ is a
process and $v\in\mathcal{V}_{I}$ is a proposal. Given any process-proposal
pair $\mathit{pp}=(p_{i},v)$, we denote by $\mathsf{proposal}(\mathit{pp})=v$
the proposal associated with the pair. An _input configuration_ is a tuple
$\big{[}\mathit{pp}_{1},\mathit{pp}_{2},...,\mathit{pp}_{x}\big{]}$ such that
(1) $n-t\leq x\leq n$, and (2) every process-proposal pair is associated with
a distinct process. In a nutshell, an input configuration is an assignment of
proposals to (all) correct processes. For instance,
$\big{[}(p_{1},v_{1}),(p_{4},v_{4}),(p_{5},v_{5})\big{]}$ is an input
configuration according to which (1) only processes $p_{1}$, $p_{4}$ and
$p_{5}$ are correct, and (2) $p_{1}$ proposes $v_{1}$, $p_{4}$ proposes
$v_{4}$ and $p_{5}$ proposes $v_{5}$.
The set of all input configurations is denoted by $\mathcal{I}$. Moreover,
$\mathcal{I}_{n}\subsetneq\mathcal{I}$ denotes the set of all input
configurations with exactly $n$ process-proposal pairs. Given any input
configuration $c\in\mathcal{I}$, $c[i]$ denotes the process-proposal pair
associated with the process $p_{i}$; if such a process-proposal pair does not
exist, $c[i]=\bot$. Moreover, $\pi(c)=\\{p_{i}\in\Pi\,|\,c[i]\neq\bot\\}$
denotes the set of all correct processes according to any input configuration
$c\in\mathcal{I}$.
##### Execution - input configuration correspondence.
Let $\mathcal{E}$ be any execution of any algorithm $\mathcal{A}$ which
exposes the $\mathsf{propose}(\cdot)/\mathsf{decide}(\cdot)$ interface, and
let $c\in\mathcal{I}$ be any input configuration. We say that $\mathcal{E}$
_corresponds to_ $c$ (in short, $\mathsf{input\\_conf}(\mathcal{E})=c$) if and
only if:
* •
$\pi(c)=\mathit{Correct}_{\mathcal{A}}(\mathcal{E})$, i.e., the set of
processes which are correct in $\mathcal{E}$ is identical to the set of
processes which are correct according to $c$; and
* •
for every process $p_{i}\in\pi(c)$, $p_{i}$’s proposal in $\mathcal{E}$ is
$\mathsf{proposal}(c[i])$.
##### Satisfying validity.
A validity property $\mathit{val}$ is a function $\mathit{val}:\mathcal{I}\to
2^{\mathcal{V}_{O}}$ such that $\mathit{val}(c)\neq\emptyset$, for every input
configuration $c\in\mathcal{I}$. We say that any algorithm $\mathcal{A}$ which
exposes the $\mathsf{propose}(\cdot)/\mathsf{decide}(\cdot)$ interface
_satisfies_ a validity property $\mathit{val}$ if and only if, in any
execution $\mathcal{E}\in\mathit{execs}(\mathcal{A})$, no correct process
decides any value
$v^{\prime}\notin\mathit{val}\big{(}\mathsf{input\\_conf}(\mathcal{E})\big{)}$.
Intuitively, an algorithm satisfies a validity property if correct processes
only decide admissible values.
##### The defining property of Byzantine agreement.
Observe that an exact definition of validity uniquely defines a _specific_
agreement problem. Indeed, any validity property encodes information about (1)
$n$, the total number of processes, (2) $t$, the upper bound on the number of
failures, (3) $\mathcal{V}_{I}$, the set of proposals, and (4)
$\mathcal{V}_{O}$, the set of decisions. We refer to a specific agreement
problem with a validity property $\mathit{val}$ as the
“$\mathit{val}$-agreement” problem. Lastly, we recall that the
$\mathit{val}$-agreement problem, for some validity property $\mathit{val}$,
is _trivial_ if and only if there exists an always-admissible value, i.e.,
$\exists
v^{\prime}\in\mathcal{V}_{O}:v^{\prime}\in\bigcap\limits_{c\in\mathcal{I}}\mathit{val}(c).$
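For a finite problem, triviality is directly checkable. A sketch (all names ours), exercised on binary weak consensus, whose _Weak Validity_ admits no always-admissible value:

```python
def is_trivial(configs, decisions, val) -> bool:
    """A val-agreement problem is trivial iff some decision value is
    admissible according to *every* input configuration."""
    return any(all(v in val(c) for c in configs) for v in decisions)

# Weak Validity on two correct processes: the admissible set is {b} when
# both propose b, and {0, 1} otherwise.
configs = [(0, 0), (0, 1), (1, 0), (1, 1)]
weak_val = lambda c: {c[0]} if c[0] == c[1] else {0, 1}
print(is_trivial(configs, (0, 1), weak_val))  # False: weak consensus is non-trivial
```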
### 4.2. Weak Consensus: The Weakest Non-Trivial Byzantine Agreement Problem
In this subsection, we prove that any solution to any non-trivial agreement
problem yields, at no additional communication cost, a solution to weak
consensus:
###### Lemma 2.
There exists a zero-message reduction from weak consensus to any solvable non-
trivial Byzantine agreement problem.
Before presenting the reduction, we introduce the _containment relation_.
##### Containment relation.
We define the containment relation (“$\sqsupseteq$”) between input
configurations:
$\forall c_{1},c_{2}\in\mathcal{I}:c_{1}\sqsupseteq
c_{2}\iff(\pi(c_{1})\supseteq\pi(c_{2}))\land(\forall
p_{i}\in\pi(c_{2}):c_{1}[i]=c_{2}[i]).$
Intuitively, $c_{1}$ contains $c_{2}$ if and only if (1) each process in
$c_{2}$ belongs to $c_{1}$, and (2) for each process in $c_{2}$, its proposals
in $c_{1}$ and $c_{2}$ are identical. For example, when $n=3$ and $t=1$,
$\big{[}(p_{1},v_{1}),(p_{2},v_{2}),(p_{3},v_{3})\big{]}$ contains
$\big{[}(p_{1},v_{1}),(p_{3},v_{3})\big{]}$, but it does not contain
$\big{[}(p_{1},v_{1}),(p_{3},v_{3}^{\prime}\neq v_{3})\big{]}$. Note that the
containment relation is reflexive (for every $c\in\mathcal{I}$, $c\sqsupseteq
c$). For any input configuration $c\in\mathcal{I}$, we define its _containment
set_ $\mathit{Cnt}(c)$ as the set of all input configurations which $c$
contains:
$\mathit{Cnt}(c)=\\{c^{\prime}\in\mathcal{I}\,|\,c\sqsupseteq c^{\prime}\\}.$
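The containment relation and the containment set translate directly into code once an input configuration is represented as a mapping from (correct) process indices to proposals. A sketch, mirroring the $n=3$, $t=1$ example above:

```python
from typing import Dict

Config = Dict[int, str]  # process index -> proposal; absent => faulty

def contains(c1: Config, c2: Config) -> bool:
    """c1 ⊒ c2: every process of c2 appears in c1 with the same proposal."""
    return all(i in c1 and c1[i] == c2[i] for i in c2)

def cnt(c: Config, all_configs) -> list:
    """Cnt(c): all input configurations that c contains."""
    return [c2 for c2 in all_configs if contains(c, c2)]

c = {1: "v1", 2: "v2", 3: "v3"}
print(contains(c, {1: "v1", 3: "v3"}))        # True
print(contains(c, {1: "v1", 3: "v3-prime"}))  # False: proposal differs
print(contains(c, c))                         # True: the relation is reflexive
```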
The following lemma proves that, in any execution that corresponds to some
input configuration $c$, if any agreement algorithm decides some value
$v^{\prime}$, then $v^{\prime}$ must be admissible according to all input
configurations $c$ contains. Otherwise, the same scenario can correspond to
_another_ input configuration for which $v^{\prime}$ is not admissible, thus
violating the considered validity property. A formal proof of the following
lemma is relegated to Appendix B.
###### Lemma 3.
Let $\mathcal{A}$ be any algorithm that solves the $\mathit{val}$-agreement
problem, for any validity property $\mathit{val}$. Let $\mathcal{E}$ be any
(potentially infinite) execution of $\mathcal{A}$, and let
$c=\mathsf{input\\_conf}(\mathcal{E})$, for some input configuration $c$. If a
correct process decides a value $v^{\prime}\in\mathcal{V}_{O}$ in
$\mathcal{E}$, then
$v^{\prime}\in\bigcap\limits_{c^{\prime}\in\mathit{Cnt}(c)}\mathit{val}(c^{\prime})$.
##### Reduction.
We fix any solvable non-trivial agreement problem $\mathcal{P}$, and any
algorithm $\mathcal{A}$ which solves $\mathcal{P}$. Let $\mathit{val}$ denote
the specific validity property of $\mathcal{P}$. Moreover, we fix the
following notation:
_Notation_ | _Definition & commentary_
---|---
$c_{0}\in\mathcal{I}_{n}$ | Any input configuration (of $\mathcal{P}$) according to which all processes are correct ($\pi(c_{0})=\Pi$).
$\mathcal{E}_{0}\in\mathit{execs}(\mathcal{A})$ | The infinite execution of $\mathcal{A}$ such that $\mathsf{input\\_conf}(\mathcal{E}_{0})=c_{0}$.
$v_{0}^{\prime}\in\mathcal{V}_{O}$ | The value decided in $\mathcal{E}_{0}$. Note that such a value exists as $\mathcal{A}$ satisfies _Termination_ and _Agreement_.
$c_{1}^{*}\in\mathcal{I}$ | Any input configuration (of $\mathcal{P}$) such that $v_{0}^{\prime}\notin\mathit{val}(c_{1}^{*})$. Note that such an input configuration exists as $\mathcal{P}$ is non-trivial.
$c_{1}\in\mathcal{I}_{n}$ | Any input configuration (of $\mathcal{P}$) such that (1) $c_{1}\sqsupseteq c_{1}^{*}$, and (2) all processes are correct according to $c_{1}$ ($\pi(c_{1})=\Pi$). Note that such an input configuration exists as the containment relation is reflexive.
$\mathcal{E}_{1}\in\mathit{execs}(\mathcal{A})$ | The infinite execution of $\mathcal{A}$ such that $\mathsf{input\\_conf}(\mathcal{E}_{1})=c_{1}$.
$v_{1}^{\prime}\in\mathcal{V}_{O}$ | The value decided in $\mathcal{E}_{1}$. Note that such a value exists as $\mathcal{A}$ satisfies _Termination_ and _Agreement_. Crucially, as $c_{1}\sqsupseteq c_{1}^{*}$ and $v_{0}^{\prime}\notin\mathit{val}(c_{1}^{*})$, Lemma 3 proves that $v_{1}^{\prime}\neq v_{0}^{\prime}$.
Table 2. Notation table for the reduction.
The reduction from weak consensus to $\mathcal{P}$ is presented in Algorithm
1. Our crucial observation is that $\mathcal{A}$, the fixed algorithm solving
$\mathcal{P}$, decides _different_ values in $\mathcal{E}_{0}$ and
$\mathcal{E}_{1}$: by Lemma 3, the value $v_{1}^{\prime}$ decided in
$\mathcal{E}_{1}$ is admissible according to $c_{1}^{*}$ (as $c_{1}\sqsupseteq
c_{1}^{*}$), which implies that $v_{1}^{\prime}\neq v_{0}^{\prime}$. We
utilize the aforementioned fact to distinguish (1) the fully correct execution
$\mathcal{E}_{0}^{w}$ of weak consensus where all processes propose $0$, and
(2) the fully correct execution $\mathcal{E}_{1}^{w}$ of weak consensus where
all processes propose $1$. Namely, our reduction works as follows: If a
correct process $p_{i}$ proposes $0$ (resp., $1$) to weak consensus, $p_{i}$
proposes its proposal from the input configuration $c_{0}$ (resp., $c_{1}$) to
the underlying algorithm $\mathcal{A}$. Moreover, if $p_{i}$ decides
$v_{0}^{\prime}$ from $\mathcal{A}$, $p_{i}$ decides $0$ from weak consensus;
otherwise, $p_{i}$ decides $1$ from weak consensus. Thus, if all processes are
correct and propose $0$ (resp., $1$) to weak consensus, $\mathcal{A}$
_necessarily_ decides $v_{0}^{\prime}$ (resp., $v_{1}^{\prime}\neq
v_{0}^{\prime}$), which then implies that all correct processes decide $0$
(resp., $1$) from weak consensus, thus satisfying _Weak Validity_. The
correctness of our reduction is proven in Appendix C.
Algorithm 1 Reduction from weak consensus to $\mathcal{P}$: Pseudocode for
process $p_{i}$
1:Uses:
2: $\mathcal{A}$, an algorithm solving the non-trivial agreement problem
$\mathcal{P}$
3:upon $\mathsf{propose}(b\in\\{0,1\\})$:
4: if $b=0$:
5: invoke
$\mathcal{A}.\mathsf{propose}\big{(}\mathsf{proposal}(c_{0}[i])\big{)}$
6: else:
7: invoke
$\mathcal{A}.\mathsf{propose}\big{(}\mathsf{proposal}(c_{1}[i])\big{)}$
8:upon $\mathcal{A}.\mathsf{decide}(\mathit{decision}\in\mathcal{V}_{O})$:
9: if $\mathit{decision}=v_{0}^{\prime}$:
10: trigger $\mathsf{decide}(0)$
11: else:
12: trigger $\mathsf{decide}(1)$
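For concreteness, a Python rendering of Algorithm 1 follows. It is a sketch under stated assumptions: `A` stands for the fixed algorithm solving $\mathcal{P}$ and is assumed to expose `propose(v)` and an `on_decide` hook, while `c0`, `c1` and `v0_prime` are the objects fixed in Table 2; all interface names are ours.

```python
class WeakConsensusReduction:
    """Algorithm 1 as a Python object (sketch)."""

    def __init__(self, A, i, c0, c1, v0_prime, decide_callback):
        self.A, self.i = A, i                 # underlying algorithm, own index
        self.c0, self.c1, self.v0_prime = c0, c1, v0_prime
        self.decide = decide_callback
        self.A.on_decide = self.on_A_decide

    def propose(self, b: int) -> None:
        # Lines 3-7: map the weak-consensus bit to a proposal for A.
        self.A.propose(self.c0[self.i] if b == 0 else self.c1[self.i])

    def on_A_decide(self, decision) -> None:
        # Lines 8-12: map A's decision back to a weak-consensus bit.
        self.decide(0 if decision == self.v0_prime else 1)
```

Note that the wrapper itself sends no messages: all communication belongs to $\mathcal{A}$, which is precisely what makes the reduction zero-message.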
Importantly, our reduction proves the general quadratic lower bound (Theorem
1). Indeed, if there were a sub-quadratic algorithm $\mathcal{A}$ which solves
any non-trivial Byzantine agreement problem, the introduced reduction would
yield a sub-quadratic weak consensus algorithm, thus contradicting the
quadratic lower bound for weak consensus (proven in § 3).
### 4.3. On the Lower Bound for the Blockchain-Specific Agreement Problem
At the heart of today’s blockchain systems lies an agreement problem that
requires the decided value to satisfy a globally verifiable condition.
Concretely, modern blockchain systems satisfy the following validity property:
* •
_External Validity_ (Cachin2001, ): If a correct process decides a value
$v^{\prime}$, then $\mathsf{valid}(v^{\prime})=\mathit{true}$, where
$\mathsf{valid}(\cdot)$ is a globally verifiable predicate.
This subsection underlines that the general quadratic lower bound (Theorem 1)
extends to all “reasonable” agreement problems with _External Validity_.
_External Validity_ emerged as the validity property of blockchain systems
because stronger notions of validity have limited applicability in this
setting. For example, consider _Strong Validity_ which guarantees only that,
if all correct processes propose the same value, that value must be decided.
Whenever correct processes do not propose the same value, _Strong Validity_
provides no guarantees, e.g., a value proposed by a faulty process can be
decided. In a blockchain setting, it will rarely be the case that all correct
validators (i.e., processes that operate the blockchain) construct and propose
an identical block with the clients’ pending transactions. Hence, the chain
could be comprised of “faulty blocks”, thus allowing faulty validators to
commit invalid (e.g., incorrectly signed) transactions. _External Validity_
eliminates this problem by allowing only valid blocks to be committed.
As mentioned in (validity_podc, ), the formalism we use for defining validity
properties (see § 4.1) is not suitable for expressing _External Validity_.
Namely, the formalism would technically classify _External Validity_ as a
trivial validity property since any fixed valid value is admissible according
to _every_ input configuration. However, in practice, the agreement problem
with _External Validity_ does not allow for a trivial solution in the
blockchain setting. For example, the fact that some transaction $\mathit{tx}$,
which is _correctly signed_ by some client, is valid does not mean that
validators can always decide $\mathit{tx}$. Indeed, for a validator to decide
$\mathit{tx}$, it needs to first _learn_ about $\mathit{tx}$ (otherwise,
cryptographic hardness assumptions on signatures would break). Therefore,
validators cannot decide $\mathit{tx}$ “on their own”, which precludes a
trivial solution to agreement problems with _External Validity_.
Nonetheless, our quadratic lower bound applies to _any_ algorithm
$\mathcal{A}$ which solves Byzantine agreement with _External Validity_ as
long as the algorithm has two fully correct executions with different
decisions. Indeed, if $\mathcal{A}$ has two fully correct infinite executions
$\mathcal{E}_{0}$ and $\mathcal{E}_{1}$ that decide different values,
Algorithm 1 (see § 4.2) solves weak consensus using $\mathcal{A}$ by employing
$c_{0}=\mathsf{input\\_conf}(\mathcal{E}_{0})$ (line 5 of Algorithm 1) and
$c_{1}=\mathsf{input\\_conf}(\mathcal{E}_{1})$ (line 7 of Algorithm 1). To the
best of our knowledge, every known agreement algorithm with _External
Validity_ (e.g., (yin2019hotstuff, ; BKM19, ; CGL18, ; lewis2022quadratic, ))
has different fully correct executions in which different values are decided.
Concretely, it is ensured that, if all processes are correct and they all
propose the same value, that value will be decided. (In other words, all
these agreement algorithms satisfy both _External Validity_ and _Weak
Validity_.)
###### Corollary 4.
Let $\mathcal{A}$ be any algorithm that solves Byzantine agreement with
_External Validity_. Moreover, let there exist two executions
$\mathcal{E}_{0}$ and $\mathcal{E}_{1}$ of $\mathcal{A}$ such that (1) all
processes are correct in both $\mathcal{E}_{0}$ and $\mathcal{E}_{1}$, (2)
some value $v_{0}^{\prime}$ is decided in $\mathcal{E}_{0}$, and (3) some
value $v_{1}^{\prime}\neq v_{0}^{\prime}$ is decided in $\mathcal{E}_{1}$.
Then, $\mathcal{A}$ has at least $\frac{t^{2}}{32}$ message complexity.
## 5\. Solvability of Byzantine Agreement Problems
In this section, we observe that a deeper study of the containment relation
(introduced in § 4.2) enables us to deduce which Byzantine agreement problems
are solvable in synchrony. Concretely, we introduce the general solvability
theorem, which unifies all previous results on the synchronous solvability of
Byzantine agreement problems (e.g., (LSP82, ; FLM85, ; lynch1996distributed, ;
dolev1983authenticated, ; abraham2022authenticated, ; fitzi2003efficient, )).
### 5.1. Authenticated & Unauthenticated Algorithms
When it comes to the solvability of Byzantine agreement problems in synchrony,
authentication makes a significant difference. For instance,
(dolev1983authenticated, ) proved that authenticated Byzantine broadcast can
tolerate any number $t<n$ of corrupted processes, whereas (LSP82, ) showed
that unauthenticated Byzantine broadcast cannot be solved unless $n>3t$. We
thus distinguish two types of algorithms:
* •
_Authenticated algorithms_ , which allow processes to sign their messages in a
way that prevents their signature from being forged by any other process
(Canetti04, ).
* •
_Unauthenticated algorithms_ , which do not provide any mechanism for
signatures. (Note that the receiver of a message knows the identity of its
sender.)
A Byzantine agreement problem $\mathcal{P}$ is _authenticated-solvable_
(resp., _unauthenticated-solvable_) if and only if there exists an
authenticated (resp., unauthenticated) algorithm which solves $\mathcal{P}$.
(Recall that the exact specification of $\mathcal{P}$, concretely
$\mathcal{P}$’s validity property, encodes the resilience of $\mathcal{P}$.)
##### Remark about unauthenticated algorithms.
This section assumes that unauthenticated algorithms face an adversary that is
able to simulate other processes. In other words, we do not assume the
resource-restricted paradigm (Garay2020RRC, ), where the adversary’s
capability to simulate other processes can be restricted assuming a per-
process bounded rate of cryptographic puzzle-solving capability with no bound
on the number of corruptions and without any setup (i.e., without any
authentication mechanism) (Andrychowicz2015, ; Katz2014, ).
### 5.2. General Solvability Theorem
Before presenting our solvability theorem, we define its key component – the
_containment condition_.
###### Definition 1 (Containment condition).
A non-trivial agreement problem $\mathcal{P}$ with some validity property
$\mathit{val}$ satisfies the _containment condition_ ($\mathcal{CC}$, in
short) if and only if there exists a Turing-computable function
$\Gamma:\mathcal{I}\to\mathcal{V}_{O}$ such that:
$\forall
c\in\mathcal{I}:\Gamma(c)\in\bigcap\limits_{c^{\prime}\in\mathit{Cnt}(c)}\mathit{val}(c^{\prime}).$
Intuitively, a non-trivial agreement problem satisfies $\mathcal{CC}$ if and
only if there exists a finite procedure which, for every input configuration
$c\in\mathcal{I}$, returns a value that is admissible according to _all_ input
configurations which $c$ contains.
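For small, finite problems, $\mathcal{CC}$ can be checked by brute force. The sketch below (all names ours) does exactly that and already anticipates the result of § 5.3: binary strong consensus satisfies $\mathcal{CC}$ with $n=3$, $t=1$, but not with $n=2$, $t=1$.

```python
from itertools import combinations, product

def satisfies_cc(processes, proposals, decisions, val, t) -> bool:
    """Brute-force check of Definition 1 for a *finite* toy problem
    (sketch). A configuration is a frozenset of (process, proposal)
    pairs over between n - t and n distinct processes; `val` maps a
    configuration to its set of admissible decisions. For finite I and
    V_O, CC holds iff every configuration c admits a decision that is
    admissible under val(c') for every c' that c contains."""
    n = len(processes)
    configs = [frozenset(zip(group, props))
               for size in range(n - t, n + 1)
               for group in combinations(processes, size)
               for props in product(proposals, repeat=size)]
    def cnt(c):  # for pair-sets, containment is plain set inclusion
        return [c2 for c2 in configs if c2 <= c]
    return all(any(all(v in val(c2) for c2 in cnt(c)) for v in decisions)
               for c in configs)

# Strong Validity with binary proposals: if all correct processes propose
# the same bit, only that bit is admissible; otherwise both bits are.
def strong_validity(c):
    vals = {v for (_, v) in c}
    return vals if len(vals) == 1 else {0, 1}

print(satisfies_cc((1, 2), (0, 1), (0, 1), strong_validity, t=1))     # False: n = 2t
print(satisfies_cc((1, 2, 3), (0, 1), (0, 1), strong_validity, t=1))  # True: n > 2t
```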
We are now ready to introduce the general solvability theorem:
###### Theorem 2 (General solvability theorem).
A non-trivial Byzantine agreement problem $\mathcal{P}$ is:
* •
authenticated-solvable if and only if $\mathcal{P}$ satisfies $\mathcal{CC}$;
and
* •
unauthenticated-solvable if and only if (1) $\mathcal{P}$ satisfies
$\mathcal{CC}$, and (2) $n>3t$.
To prove the general solvability theorem (Theorem 2), we show the following
three results:
* •
_Necessity of $\mathcal{CC}$:_ If a non-trivial Byzantine agreement problem
$\mathcal{P}$ is authenticated- or unauthenticated-solvable, then
$\mathcal{P}$ satisfies $\mathcal{CC}$.
* •
_Sufficiency of $\mathcal{CC}$:_ If a non-trivial Byzantine agreement problem
$\mathcal{P}$ satisfies $\mathcal{CC}$ (resp., satisfies $\mathcal{CC}$ and
$n>3t$), then $\mathcal{P}$ is authenticated-solvable (resp., unauthenticated-
solvable).
* •
_Unauthenticated triviality when $n\leq 3t$:_ If a Byzantine agreement problem
$\mathcal{P}$ is unauthenticated-solvable with $n\leq 3t$, then $\mathcal{P}$
is trivial.
#### 5.2.1. Necessity of $\mathcal{CC}$
The necessity of $\mathcal{CC}$ for solvable non-trivial agreement problems
follows directly from Lemma 3:
###### Lemma 3.
If a non-trivial Byzantine agreement problem $\mathcal{P}$ is authenticated-
or unauthenticated-solvable, then $\mathcal{P}$ satisfies $\mathcal{CC}$.
###### Proof.
Let $\mathcal{P}$ be any authenticated- or unauthenticated-solvable non-
trivial Byzantine agreement problem. Let $\mathit{val}$ denote the validity
property of $\mathcal{P}$. As $\mathcal{P}$ is solvable, there exists an
(authenticated or unauthenticated) algorithm $\mathcal{A}$ which solves it.
Let us fix any input configuration $c\in\mathcal{I}$. Consider any infinite
execution $\mathcal{E}\in\mathit{execs}(\mathcal{A})$ such that
$c=\mathsf{input\\_conf}(\mathcal{E})$. As $\mathcal{E}$ is infinite, some
correct process decides (in finitely many rounds) some value
$v^{\prime}\in\mathcal{V}_{O}$ (to satisfy _Termination_). Due to Lemma 3,
$v^{\prime}\in\bigcap\limits_{c^{\prime}\in\mathit{Cnt}(c)}\mathit{val}(c^{\prime})$.
Thus, $\Gamma(c)$ is defined (as $\Gamma(c)=v^{\prime}$) and is Turing-
computable ($\mathcal{A}$ computes it in $\mathcal{E}$). Hence, $\mathcal{P}$
satisfies $\mathcal{CC}$. ∎
#### 5.2.2. Sufficiency of $\mathcal{CC}$
Let us start by recalling interactive consistency, a specific Byzantine
agreement problem. In interactive consistency, each process proposes its
value, and processes decide vectors of $n$ elements, one for each process
(i.e., $\mathcal{V}_{O}=\mathcal{I}_{n}$). Besides _Termination_ and
_Agreement_ , interactive consistency requires the following validity property
to hold:
* •
_IC-Validity_ : Let $V$ denote the vector decided by a correct process. If a
correct process $p_{i}$ proposed a value $v$, then $V[i]=v$.
Using our formalism, _IC-Validity_ can be expressed as $\text{\emph{IC-
Validity}}(c)=\\{c^{\prime}\in\mathcal{I}_{n}\,|\,c^{\prime}\sqsupseteq c\\}$.
Importantly, interactive consistency is authenticated-solvable for any $n$ and
any $t\in[1,n-1]$ (dolev1983authenticated, ). On the other hand, interactive
consistency is unauthenticated-solvable if $n>3t$ (LSP82, ; FLM85, ).
To prove the sufficiency of $\mathcal{CC}$, we prove that any non-trivial
Byzantine agreement problem that satisfies $\mathcal{CC}$ can be reduced to
interactive consistency at no resilience penalty.
###### Lemma 4.
If a non-trivial Byzantine agreement problem $\mathcal{P}$ satisfies
$\mathcal{CC}$ (resp., satisfies $\mathcal{CC}$ and $n>3t$), then
$\mathcal{P}$ is authenticated-solvable (resp., unauthenticated-solvable).
###### Proof.
To prove the lemma, we design a reduction from $\mathcal{P}$ to interactive
consistency (Algorithm 2). Our reduction is comprised of two steps: (1) When a
correct process proposes to $\mathcal{P}$ (line 3), the process forwards its
proposal to the underlying interactive consistency algorithm (line 4). (2)
Once a correct process decides a vector $\mathit{vec}$ of $n$ proposals from
interactive consistency (line 5), the process decides $\Gamma(\mathit{vec})$
from $\mathcal{P}$ (line 6). _Termination_ and _Agreement_ of the reduction
algorithm follow directly from _Termination_ and _Agreement_ of interactive
consistency, respectively. Finally, let us prove that the reduction algorithm
satisfies the specific validity property $\mathit{val}$ of $\mathcal{P}$.
Consider any specific execution $\mathcal{E}$ of the reduction algorithm such
that $\mathsf{input\\_conf}(\mathcal{E})=c$, for some input configuration
$c\in\mathcal{I}$. Let $\mathit{vec}\in\mathcal{I}_{n}$ denote the vector
which a correct process decides from the underlying interactive consistency
algorithm (line 5). _IC-Validity_ ensures that $\mathit{vec}\sqsupseteq c$ as,
for every correct process $p_{i}$, $\mathit{vec}[i]=c[i]$. As $\mathcal{P}$
satisfies $\mathcal{CC}$ and $c\in\mathit{Cnt}(\mathit{vec})$,
$\Gamma(\mathit{vec})\in\mathit{val}(c)$, which proves that the reduction
algorithm satisfies $\mathit{val}$.
As interactive consistency is authenticated-solvable for any $n$ and any
$t\in[1,n-1]$ (dolev1983authenticated, ), a non-trivial Byzantine agreement
problem $\mathcal{P}$ which satisfies $\mathcal{CC}$ is authenticated-
solvable. Similarly, as interactive consistency is unauthenticated-solvable if
$n>3t$ (LSP82, ; FLM85, ), a non-trivial Byzantine agreement problem
$\mathcal{P}$ which satisfies $\mathcal{CC}$ with $n>3t$ is unauthenticated-
solvable. ∎
Algorithm 2 Reduction from $\mathcal{P}$ to interactive consistency:
Pseudocode for process $p_{i}$
1:Uses:
2: Interactive consistency, instance $\mathcal{IC}$
3:upon $\mathsf{propose}(v\in\mathcal{V}_{I})$:
4: invoke $\mathcal{IC}.\mathsf{propose}(v)$
5:upon $\mathcal{IC}.\mathsf{decide}(\mathit{vec}\in\mathcal{I}_{n})$:
$\triangleright$ processes decide input configurations with $n$ process-
proposal pairs
6: trigger $\mathsf{decide}\big{(}\Gamma(\mathit{vec})\big{)}$
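A Python rendering of Algorithm 2, again as a sketch under stated assumptions: `ic` stands for the interactive consistency instance and is assumed to expose `propose(v)` and an `on_decide` hook, and `gamma` is the Turing-computable $\Gamma$ from Definition 1; all interface names are ours.

```python
class AgreementFromIC:
    """Algorithm 2 as a Python object (sketch): solve P on top of an
    interactive consistency instance by post-processing with Γ."""

    def __init__(self, ic, gamma, decide_callback):
        self.ic, self.gamma = ic, gamma
        self.decide = decide_callback
        self.ic.on_decide = self.on_ic_decide

    def propose(self, v) -> None:
        self.ic.propose(v)            # line 4: forward the proposal

    def on_ic_decide(self, vec) -> None:
        self.decide(self.gamma(vec))  # line 6: decide Γ(vec)
```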
#### 5.2.3. Unauthenticated triviality when $n\leq 3t$
We prove that any agreement problem that is unauthenticated-solvable with
$n\leq 3t$ is trivial by contradiction. Namely, if there existed a non-trivial
agreement problem $\mathcal{P}$ that is unauthenticated-solvable with $n\leq
3t$, the reduction from weak consensus to $\mathcal{P}$ presented in Algorithm
1 would yield an unauthenticated weak consensus algorithm with $n\leq 3t$,
which is known to be impossible (FLM85, ).
###### Lemma 5.
If a Byzantine agreement problem $\mathcal{P}$ is unauthenticated-solvable
with $n\leq 3t$, then $\mathcal{P}$ is trivial.
###### Proof.
By contradiction, let $\mathcal{P}$ be non-trivial. As weak consensus can be
reduced to any solvable non-trivial agreement problem at no resilience penalty
(see Algorithm 1), weak consensus is unauthenticated-solvable with $n\leq 3t$.
This is a contradiction with the fact that weak consensus is unauthenticated-
solvable only if $n>3t$ (FLM85, ). ∎
### 5.3. General Solvability Theorem: Application to Strong Consensus
Here, we show how the general solvability theorem (Theorem 2) can be applied
with the example of strong consensus. (Recall that strong consensus satisfies
_Strong Validity_ : if all correct processes propose the same value, that
value must be decided.) Namely, it is known that strong consensus is
authenticated-solvable only if $n>2t$ (abraham2022authenticated, ). The
general solvability theorem enables us to obtain another proof of this claim.
###### Theorem 6 (Proven in (abraham2022authenticated, )).
Strong consensus is authenticated-solvable only if $n>2t$.
###### Proof.
To prove the theorem, we show that strong consensus does not satisfy
$\mathcal{CC}$ with $n\leq 2t$. Without loss of generality, let $n=2t$ and let
$\mathcal{V}_{I}=\mathcal{V}_{O}=\\{0,1\\}$. Consider the input configuration
$c\in\mathcal{I}_{n}$ such that (1) for every $i\in[1,t]$,
$\mathsf{proposal}(c[i])=0$, and (2) for every $i\in[t+1,n]$,
$\mathsf{proposal}(c[i])=1$. That is, the proposal of the first $t$ processes
is $0$, whereas the proposal of the other processes is $1$. Note that both $0$
and $1$ are admissible according to $c$. Importantly, $c$ contains
$c_{0}\in\mathcal{I}_{t}$, where $\pi(c_{0})=\\{p_{1},p_{2},...,p_{t}\\}$ and
$\mathsf{proposal}(c_{0}[i])=0$, for every $i\in[1,t]$. Similarly, $c$
contains $c_{1}\in\mathcal{I}_{t}$, where
$\pi(c_{1})=\\{p_{t+1},p_{t+2},...,p_{n}\\}$ and
$\mathsf{proposal}(c_{1}[i])=1$, for every $i\in[t+1,n]$. According to $c_{0}$
(resp., $c_{1}$), only $0$ (resp., $1$) is admissible. Hence, strong consensus
with $n\leq 2t$ does not satisfy $\mathcal{CC}$ as $c$ contains two input
configurations (namely, $c_{0}$ and $c_{1}$) which do not have a common
admissible value. ∎
## 6\. Related Work
##### Reductions and equivalences between Byzantine agreement problems.
Interactive consistency can be reduced to $n$ (parallel) instances of
Byzantine broadcast (Nayak2020a, ). In the honest-majority setting
($t<\frac{n}{2}$), Byzantine broadcast and strong consensus are
computationally equivalent (lynch1996distributed, ; AW04, ). Moreover,
Byzantine broadcast can be reduced to strong consensus with only $O(n)$
additional exchanged messages (lynch1996distributed, ; AW04, ). Furthermore,
it is known that weak consensus is reducible to strong consensus (and, thus,
to Byzantine broadcast) (lynch1996distributed, ; AW04, ).
##### Deterministic Byzantine agreement in synchrony.
In their seminal paper, (dolev1985bounds, ) established a quadratic lower
bound on message complexity of deterministic Byzantine broadcast (and,
consequently, strong consensus). It is shown that, in the authenticated
setting (with idealized digital signatures (Canetti04, )), deterministic
Byzantine broadcast algorithms must exchange $\Omega(nt)$ signatures and
$\Omega(n+t^{2})$ messages. Similarly, their proof shows that, in the
unauthenticated setting, there exists an execution with $\Omega(nt)$ exchanged
messages. The $\Omega(nt)$ bound on exchanged signatures is proven to be tight
when $t<\frac{n}{2}-O(1)$ and $t\in\Theta(n)$ (Momose2021, ). Additionally,
(berman1992bit, ) proved that the $\Omega(nt)$ bound on message complexity in
the unauthenticated setting is tight when $t\in\Theta(n)$. The
$\Omega(n+t^{2})$ bound on message complexity in the authenticated setting has
recently been proven to be tight (Chlebus23, ). A quadratic lower bound on the
message complexity of binary crusader broadcast, a problem in which
disagreements are sometimes allowed, has also been shown in (AbrahamStern22,
). Lower bounds on other relevant metrics, such as resilience, network
connectivity, or latency, have also been established (FLM85, ;
dolev1983authenticated, ; dolev2013early, ).
By employing threshold signatures (Shoup00, ), which extend beyond the
idealized authenticated model, the word complexity of $O(n(f+1))$, where
$f\leq t<\frac{n}{2}$ represents the actual number of failures and a word
contains a constant number of values and signatures, can be achieved for
Byzantine agreement with _External Validity_ (spiegelman2020search, ) and
Byzantine broadcast (cohen2023make, ; strong, ) by utilizing the algorithm of
(Momose2021, ). Additionally, an amortized cost of $O(n)$ is attainable in
multi-shot Byzantine broadcast (wan2023amortized, ). Amortization is similarly
possible with long inputs (Chen2021, ; Nayak2020a, ). In the dishonest-
majority setting (with $t\geq\frac{n}{2}$), the most efficient broadcast
constructions are based on the deterministic broadcast protocol of
(dolev1985bounds, ) with a cubic message complexity.
##### Randomized Byzantine agreement in synchrony.
Even with randomization, no Byzantine broadcast algorithm can achieve sub-
quadratic expected message complexity against a strongly rushing adaptive
adversary equipped with after-the-fact message removal capabilities
(Abraham2019c, ). However, designing randomized synchronous Byzantine
agreement algorithms with sub-quadratic expected message complexity is
possible against a weaker adversary. In certain models, such as those with a
static adversary (Boyle2021, ; King2011a, ) or with (only) private channels
(King2011, ), algorithms with a sub-linear number of messages (or bits) sent
per correct process can be designed (Gelles, ; Gelles23, ; King2011a, ;
King2009, ; King2011, ; Boyle2021, ; Boyle2018b, ).
When the adversary is adaptive (without after-the-fact message removal
capabilities) and computationally bounded, there exist Byzantine agreement
algorithms (Chen2019, ; Abraham2019c, ; RambaudBootstrapping, ) which achieve
both sub-quadratic (but unbalanced) communication and constant latency in
expectation by relying on a verifiable random function (VRF)
(DBLP:conf/focs/MicaliRV99, ). It has been shown that, in the idealized
authenticated setting (Canetti04, ) (which is strictly weaker than bare or
bulletin-board PKI (Canetti00, ; Boyle2021, ; RambaudBootstrapping, )), in the
presence of a rushing adaptive adversary, no randomized protocol can achieve a
sub-quadratic expected communication complexity in the synchronous multi-cast
model, where a sent message is necessarily sent to all the processes
(RambaudBootstrapping, ).
A VRF setup and a sub-quadratic binary strong consensus algorithm were shown
to yield an $O(n\ell+n\mathit{poly}(\kappa))$ bit complexity, where $\ell$ is
the proposal size and $\kappa$ is the security parameter, for solving strong
consensus with long inputs (Bhangale2022, ). State-of-the-art algorithms for
interactive consistency with long inputs (of size $\ell$) yield the bit
complexity of $O(n^{2}\ell+n^{2}\kappa^{3})$ (Bhangale2022, ) or
$O(n^{2}\ell+n^{3}\kappa)$ (Abraham2023a, ).
In the dishonest-majority setting, (Blum2023, ) establishes new lower bounds
on the expected message complexity for Byzantine broadcast: no (randomized)
algorithm can achieve sub-quadratic message complexity with only $O(1)$
correct processes. The algorithm of (Chan2020, ) achieves the bit complexity
of $O(n^{2}\kappa^{2})$ for binary Byzantine broadcast. (Tsimos2022, ) proves
that an $\tilde{O}(n^{2}\mathit{poly}(\kappa))$ bit complexity can be achieved
for binary interactive consistency. Randomization additionally helps in
circumventing the Dolev-Strong lower bound (dolev1983authenticated, ) which
states that $t+1$ rounds are necessary in the worst case to deterministically
solve Byzantine broadcast (dolev1983authenticated, ). While using
randomization for circumventing the Dolev-Strong lower bound is well-
established for the honest-majority setting (KatzKoo2009, ;
abraham2019synchronous, ; Abraham2019c, ; RambaudBootstrapping, ), recent
findings have proven that the same approach can be utilized even in the
presence of a dishonest majority (Wan2020, ; Wan2020a, ; Chan2020, ).
##### Byzantine agreement in partial synchrony and asynchrony.
The worst-case complexity of all Byzantine agreement problems in partial
synchrony was studied in (validity_podc, ) where it was proven that any
specific Byzantine agreement problem requires $\Theta(n^{2})$ exchanged
messages (after the network has stabilized) in the worst case. Prior to
(validity_podc, ), it was shown that there exist deterministic algorithms,
building on top of threshold signatures and HotStuff (YMR19, ), which achieve
$O(n^{2})$ word complexity for strong consensus (lewis2022quadratic, ;
civit2022byzantine, ). Recently, (everyBitCounts, ) proved that vector
consensus, a Byzantine agreement problem in which processes agree on the
proposals of $n-t$ processes, can be solved with $O(n^{2.5})$ or $O(n^{2})$
words (when employing STARK proofs (Ben-Sasson_stark, )). In the randomized
paradigm, there exist VRF-based sub-quadratic Byzantine agreement protocols
(Chen2019, ; Abraham2019c, ; RambaudBootstrapping, ; Sheng22, ). Moreover, it
is possible to achieve $O(n\ell+n\mathit{poly}(\kappa))$ bit complexity for
strong consensus with long inputs of size $\ell$ (Bhangale2022, ).
Furthermore, reaching the communication complexity of $O(n\ell+n^{2}\kappa)$
for validated asynchronous Byzantine agreement was proven to be possible:
(LL0W20, ) and (Nayak2020a, ) achieve the aforementioned bound by extending
the VABA protocol of (abraham2019asymptotically, ). With some additional
assumptions (e.g., private setup or delayed adversary), it is possible to
design a sub-quadratic asynchronous Byzantine agreement algorithm (Blum2020, ;
CKS20, ). A generic transformation proposed in (Bhangale2022, ) produces, on
top of any asynchronous sub-quadratic Byzantine agreement algorithm, an
asynchronous solution with $O(n\ell+n\mathit{poly}(\kappa))$ bit complexity.
## 7\. Concluding Remarks
We study in this paper the necessary worst-case communication cost for all
Byzantine agreement problems. We show that any (deterministic) solution to any
solvable non-trivial Byzantine agreement problem exchanges $\Omega(t^{2})$
messages in the worst case. We prove the general lower bound in two steps: (1)
we show that weak consensus (yin2019hotstuff, ; lewis2022quadratic, ;
civit2022byzantine, ; BKM19, ) requires $\Omega(t^{2})$ exchanged messages
even in synchrony; (2) we design a reduction from weak consensus to any
solvable non-trivial Byzantine agreement problem, thus generalizing the
$\Omega(t^{2})$ bound. Interestingly, our reduction allows us to determine a
general result about the synchronous solvability of Byzantine agreement, thus
demarcating the entire landscape of solvable (and unsolvable) variants of the
problem.
We plan on extending our results to the randomized setting. Concretely, the
goal is to study the cost of solving randomized Byzantine agreement against an
adaptive adversary, with after-the-fact message removal capabilities
(Abraham2019c, ) or the ability to access the internal states of all processes
(DBLP:conf/soda/HuangPZ23, ). It would also be interesting to extend our work
to problems which do not require agreement (e.g., approximate (AbrahamAD04, ;
MendesH13, ; GhineaLW22, ; ghinea2023multidimensional, ) or $k$-set
(BouzidIR16, ; Delporte-Gallet20, ; Delporte-Gallet22, ; lynch1996distributed,
) agreement). Finally, improving the known upper bounds on the cost of solving
Byzantine agreement problems constitutes another important research direction.
## References
* (1) Abd-El-Malek, M., Ganger, G. R., Goodson, G. R., Reiter, M. K., and Wylie, J. J. Fault-Scalable Byzantine Fault-Tolerant Services. ACM SIGOPS Operating Systems Review 39, 5 (2005), 59–74.
* (2) Abraham, I., Amit, Y., and Dolev, D. Optimal Resilience Asynchronous Approximate Agreement. In Principles of Distributed Systems, 8th International Conference, OPODIS 2004, Grenoble, France, December 15-17, 2004, Revised Selected Papers (2004), T. Higashino, Ed., vol. 3544 of Lecture Notes in Computer Science, Springer, pp. 229–239.
* (3) Abraham, I., Chan, T. H., Dolev, D., Nayak, K., Pass, R., Ren, L., and Shi, E. Communication complexity of byzantine agreement, revisited. Proceedings of the Annual ACM Symposium on Principles of Distributed Computing (2019), 317–326.
* (4) Abraham, I., Chan, T. H., Dolev, D., Nayak, K., Pass, R., Ren, L., and Shi, E. Communication complexity of byzantine agreement, revisited. Distributed Comput. 36, 1 (2023), 3–28.
* (5) Abraham, I., Devadas, S., Dolev, D., Nayak, K., and Ren, L. Synchronous Byzantine Agreement with Expected $O(1)$ Rounds, Expected $O(n^{2})$ Communication, and Optimal Resilience. In Financial Cryptography and Data Security - 23rd International Conference (2019), vol. 11598 LNCS, pp. 320–334.
* (6) Abraham, I., Dolev, D., Kagan, A., and Stern, G. Authenticated Consensus in Synchronous Systems with Mixed Faults. Cryptology ePrint Archive (2022).
* (7) Abraham, I., Jovanovic, P., Maller, M., Meiklejohn, S., Stern, G., and Tomescu, A. Reaching Consensus for Asynchronous Distributed Key Generation. In PODC ’21: ACM Symposium on Principles of Distributed Computing, Virtual Event, Italy, July 26-30, 2021 (2021), A. Miller, K. Censor-Hillel, and J. H. Korhonen, Eds., ACM, pp. 363–373.
* (8) Abraham, I., Malkhi, D., Nayak, K., Ren, L., and Spiegelman, A. Solida: A Blockchain Protocol Based on Reconfigurable Byzantine Consensus. arXiv preprint arXiv:1612.02916 (2016).
* (9) Abraham, I., Malkhi, D., Nayak, K., Ren, L., and Spiegelman, A. Solidus: An Incentive-compatible Cryptocurrency Based on Permissionless Byzantine Consensus. CoRR, abs/1612.02916 (2016).
* (10) Abraham, I., Malkhi, D., and Spiegelman, A. Asymptotically Optimal Validated Asynchronous Byzantine Agreement. In Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing (2019), pp. 337–346.
* (11) Abraham, I., Nayak, K., Ren, L., and Xiang, Z. Good-case Latency of Byzantine Broadcast: A Complete Categorization. In Proceedings of the 2021 ACM Symposium on Principles of Distributed Computing (2021), pp. 331–341.
* (12) Abraham, I., Nayak, K., and Shrestha, N. Communication and Round Efficient Parallel Broadcast Protocols. In IACR Cryptol. ePrint Arch. (2023), pp. 1–22.
* (13) Abraham, I., and Stern, G. New dolev-reischuk lower bounds meet blockchain eclipse attacks. In 26th International Conference on Principles of Distributed Systems, OPODIS 2022, December 13-15, 2022, Brussels, Belgium (2022), E. Hillel, R. Palmieri, and E. Rivière, Eds., vol. 253 of LIPIcs, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, pp. 16:1–16:18.
* (14) Adya, A., Bolosky, W., Castro, M., Cermak, G., Chaiken, R., Douceur, J., Howell, J., Lorch, J., Theimer, M., and Wattenhofer, R. FARSITE: Federated, Available, and Reliable Storage for an Incompletely Trusted Environment. In 5th Symposium on Operating Systems Design and Implementation (OSDI 02) (2002).
* (15) Amir, Y., Danilov, C., Kirsch, J., Lane, J., Dolev, D., Nita-Rotaru, C., Olsen, J., and Zage, D. Scaling Byzantine Fault-Tolerant Replication to Wide Area Networks. In International Conference on Dependable Systems and Networks (DSN’06) (2006), IEEE, pp. 105–114.
* (16) Andrychowicz, M., and Dziembowski, S. Pow-based distributed cryptography with no trusted setup. In Advances in Cryptology - CRYPTO 2015 - 35th Annual Cryptology Conference, Santa Barbara, CA, USA, August 16-20, 2015, Proceedings, Part II (2015), R. Gennaro and M. Robshaw, Eds., vol. 9216 of Lecture Notes in Computer Science, Springer, pp. 379–399.
* (17) Attiya, H., and Welch, J. L. Distributed computing - fundamentals, simulations, and advanced topics (2. ed.). Wiley series on parallel and distributed computing. Wiley, 2004.
* (18) Ben-Or, M., and El-Yaniv, R. Resilient-optimal interactive consistency in constant time. Distributed Computing 16, 4 (2003), 249–262.
* (19) Ben-Sasson, E., Bentov, I., Horesh, Y., and Riabzev, M. Scalable, transparent, and post-quantum secure computational integrity. IACR Cryptol. ePrint Arch. (2018), 46.
* (20) Berman, P., Garay, J. A., and Perry, K. J. Bit Optimal Distributed Consensus. In Computer science: research and applications. Springer, 1992, pp. 313–321.
* (21) Bhangale, A., Liu-Zhang, C. D., Loss, J., and Nayak, K. Efficient Adaptively-Secure Byzantine Agreement for Long Messages. In Advances in Cryptology - ASIACRYPT - 28th International Conference on the Theory and Application of Cryptology and Information Security (Taipei, Taiwan, 2022), vol. 13791 LNCS, pp. 504–525.
* (22) Blum, E., Boyle, E., Cohen, R., and Liu-Zhang, C.-D. Communication Lower Bounds for Cryptographic Broadcast Protocols. In 37th International Symposium on Distributed Computing (DISC) (L'Aquila, Italy, 2023), pp. 10:1–10:19.
* (23) Blum, E., Katz, J., Liu-Zhang, C., and Loss, J. Asynchronous byzantine agreement with subquadratic communication. In Theory of Cryptography - 18th International Conference, TCC 2020, Durham, NC, USA, November 16-19, 2020, Proceedings, Part I (2020), R. Pass and K. Pietrzak, Eds., vol. 12550 of Lecture Notes in Computer Science, Springer, pp. 353–380.
* (24) Bouzid, Z., Imbs, D., and Raynal, M. A necessary condition for Byzantine k-set agreement. Inf. Process. Lett. 116, 12 (2016), 757–759.
* (25) Boyle, E., Cohen, R., and Goel, A. Breaking the $O(\sqrt{n})$-bit barrier: Byzantine agreement with polylog bits per party. Proceedings of the Annual ACM Symposium on Principles of Distributed Computing (2021), 319–330.
* (26) Boyle, E., Jain, A., Prabhakaran, M., and Yu, C. H. The bottleneck complexity of secure multiparty computation. In 45th International Colloquium on Automata, Languages, and Programming, (ICALP) (Prague, Czech Republic, 2018), vol. 107, pp. 1–16.
* (27) Buchman, E. Tendermint: Byzantine Fault Tolerance in the Age of Blockchains. PhD thesis, University of Guelph, 2016.
* (28) Buchman, E., Kwon, J., and Milosevic, Z. The latest gossip on BFT consensus. Tech. Rep. 1807.04938, arXiv, 2019.
* (29) Cachin, C., Kursawe, K., Petzold, F., and Shoup, V. Secure and Efficient Asynchronous Broadcast Protocols. In Advances in Cryptology - CRYPTO 2001, 21st Annual International Cryptology Conference, Santa Barbara, California, USA, August 19-23, 2001, Proceedings (2001), J. Kilian, Ed., vol. 2139 of Lecture Notes in Computer Science, Springer, pp. 524–541.
* (30) Canetti, R. Universally composable signature, certification, and authentication. In 17th IEEE Computer Security Foundations Workshop, (CSFW-17 2004), 28-30 June 2004, Pacific Grove, CA, USA (2004), IEEE Computer Society, p. 219.
* (31) Canetti, R., Goldreich, O., Goldwasser, S., and Micali, S. Resettable zero-knowledge (extended abstract). In Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, May 21-23, 2000, Portland, OR, USA (2000), F. F. Yao and E. M. Luks, Eds., ACM, pp. 235–244.
* (32) Castro, M., and Liskov, B. Practical Byzantine Fault Tolerance and Proactive Recovery. ACM Transactions on Computer Systems 20, 4 (2002).
* (33) Chan, T. H., Pass, R., and Shi, E. Sublinear-Round Byzantine Agreement Under Corrupt Majority. In Public-Key Cryptography (Edinburgh, UK, 2020), vol. 12111 LNCS, pp. 246–265.
* (34) Chen, J. Optimal error-free multi-valued byzantine agreement, 2021.
* (35) Chen, J., and Micali, S. Algorand: A secure and efficient distributed ledger. Theoretical Computer Science 777 (2019), 155–183.
* (36) Chlebus, B. S., Kowalski, D. R., and Olkowski, J. Deterministic fault-tolerant distributed computing in linear time and communication. In Proceedings of the 2023 ACM Symposium on Principles of Distributed Computing, PODC 2023, Orlando, FL, USA, June 19-23, 2023 (2023), R. Oshman, A. Nolin, M. M. Halldórsson, and A. Balliu, Eds., ACM, pp. 344–354.
* (37) Civit, P., Dzulfikar, M. A., Gilbert, S., Gramoli, V., Guerraoui, R., Komatovic, J., and Vidigueira, M. Byzantine Consensus is $\Theta(n^{2})$: The Dolev-Reischuk Bound is Tight even in Partial Synchrony! In 36th International Symposium on Distributed Computing (DISC 2022) (Dagstuhl, Germany, 2022), C. Scheideler, Ed., vol. 246 of Leibniz International Proceedings in Informatics (LIPIcs), Schloss Dagstuhl – Leibniz-Zentrum für Informatik, pp. 14:1–14:21.
* (38) Civit, P., Gilbert, S., Guerraoui, R., Komatovic, J., Monti, M., and Vidigueira, M. Every Bit Counts in Consensus. In 37th International Symposium on Distributed Computing, DISC 2023, October 10-12, 2023, L’Aquila, Italy (2023), R. Oshman, Ed., vol. 281 of LIPIcs, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, pp. 13:1–13:26.
* (39) Civit, P., Gilbert, S., Guerraoui, R., Komatovic, J., and Vidigueira, M. On the Validity of Consensus. In Proceedings of the 2023 ACM Symposium on Principles of Distributed Computing, PODC 2023, Orlando, FL, USA, June 19-23, 2023 (2023), R. Oshman, A. Nolin, M. M. Halldórsson, and A. Balliu, Eds., ACM, pp. 332–343.
* (40) Civit, P., Gilbert, S., Guerraoui, R., Komatovic, J., and Vidigueira, M. Strong byzantine agreement with adaptive word complexity. arXiv preprint arXiv:2308.03524 (2023).
* (41) Coan, B. A., and Welch, J. L. Modular Construction of a Byzantine Agreement Protocol with Optimal Message Bit Complexity. Inf. Comput. 97, 1 (1992), 61–85.
* (42) Cohen, S., Keidar, I., and Spiegelman, A. Not a coincidence: Sub-quadratic asynchronous byzantine agreement WHP. In 34th International Symposium on Distributed Computing, DISC 2020, October 12-16, 2020, Virtual Conference (2020), H. Attiya, Ed., vol. 179 of LIPIcs, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, pp. 25:1–25:17.
* (43) Cohen, S., Keidar, I., and Spiegelman, A. Make every word count: Adaptive byzantine agreement with fewer words. In 26th International Conference on Principles of Distributed Systems (OPODIS 2022) (2023), Schloss Dagstuhl-Leibniz-Zentrum für Informatik.
* (44) Correia, M. From Byzantine Consensus to Blockchain Consensus. In Essentials of Blockchain Technology. Chapman and Hall/CRC, 2019, pp. 41–80.
* (45) Crain, T., Gramoli, V., Larrea, M., and Raynal, M. DBFT: Efficient Leaderless Byzantine Consensus and its Applications to Blockchains. In Proceedings of the 17th IEEE International Symposium on Network Computing and Applications (NCA’18) (2018), IEEE.
* (46) Das, S., Yurek, T., Xiang, Z., Miller, A., Kokoris-Kogias, L., and Ren, L. Practical Asynchronous Distributed Key Generation. In 43rd IEEE Symposium on Security and Privacy, SP 2022, San Francisco, CA, USA, May 22-26, 2022 (2022), IEEE, pp. 2518–2534.
* (47) Deligios, G., Hirt, M., and Liu-Zhang, C. Round-Efficient Byzantine Agreement and Multi-party Computation with Asynchronous Fallback. In Theory of Cryptography - 19th International Conference, TCC 2021, Raleigh, NC, USA, November 8-11, 2021, Proceedings, Part I (2021), K. Nissim and B. Waters, Eds., vol. 13042 of Lecture Notes in Computer Science, Springer, pp. 623–653.
* (48) Delporte-Gallet, C., Fauconnier, H., Raynal, M., and Safir, M. Optimal algorithms for synchronous byzantine k-set agreement. In Stabilization, Safety, and Security of Distributed Systems - 24th International Symposium, SSS 2022, Clermont-Ferrand, France, November 15-17, 2022, Proceedings (2022), S. Devismes, F. Petit, K. Altisen, G. A. D. Luna, and A. F. Anta, Eds., vol. 13751 of Lecture Notes in Computer Science, Springer, pp. 178–192.
* (49) Delporte-Gallet, C., Fauconnier, H., and Safir, M. Byzantine k-Set Agreement. In Networked Systems - 8th International Conference, NETYS 2020, Marrakech, Morocco, June 3-5, 2020, Proceedings (2020), C. Georgiou and R. Majumdar, Eds., vol. 12129 of Lecture Notes in Computer Science, Springer, pp. 183–191.
* (50) Dolev, D., and Lenzen, C. Early-deciding consensus is expensive. In Proceedings of the 2013 ACM symposium on Principles of distributed computing (2013), pp. 270–279.
* (51) Dolev, D., and Reischuk, R. Bounds on Information Exchange for Byzantine Agreement. Journal of the ACM (JACM) 32, 1 (1985), 191–204.
* (52) Dolev, D., and Strong, H. R. Authenticated Algorithms for Byzantine Agreement. SIAM Journal on Computing 12, 4 (1983), 656–666.
* (53) Dwork, C., Lynch, N., and Stockmeyer, L. Consensus in the Presence of Partial Synchrony. Journal of the ACM (JACM) 35, 2 (1988), 288–323.
* (54) Fischer, M. J., and Lynch, N. A. A lower bound for the time to assure interactive consistency. Inf. Process. Lett. 14, 4 (1982), 183–186.
* (55) Fischer, M. J., Lynch, N. A., and Merritt, M. Easy Impossibility Proofs for Distributed Consensus Problems. In Proceedings of the Fourth Annual ACM Symposium on Principles of Distributed Computing, Minaki, Ontario, Canada, August 5-7, 1985 (1985), M. A. Malcolm and H. R. Strong, Eds., ACM, pp. 59–70.
* (56) Fischer, M. J., Lynch, N. A., and Paterson, M. S. Impossibility of Distributed Consensus with One Faulty Process. Journal of the ACM (JACM) 32, 2 (1985), 374–382.
* (57) Fitzi, M., and Garay, J. A. Efficient Player-Optimal Protocols for Strong and Differential Consensus. In Proceedings of the twenty-second annual symposium on Principles of distributed computing (2003), pp. 211–220.
* (58) Fitzi, M., Gisin, N., Maurer, U. M., and von Rotz, O. Unconditional Byzantine Agreement and Multi-party Computation Secure against Dishonest Minorities from Scratch. In Advances in Cryptology - EUROCRYPT 2002, International Conference on the Theory and Applications of Cryptographic Techniques, Amsterdam, The Netherlands, April 28 - May 2, 2002, Proceedings (2002), L. R. Knudsen, Ed., vol. 2332 of Lecture Notes in Computer Science, Springer, pp. 482–501.
* (59) Galil, Z., Haber, S., and Yung, M. Cryptographic Computation: Secure Fault-Tolerant Protocols and the Public-Key Model. In Conference on the Theory and Application of Cryptographic Techniques (1987), Springer, pp. 135–155.
* (60) Garay, J. A., Kiayias, A., Ostrovsky, R. M., Panagiotakos, G., and Zikas, V. Resource-restricted cryptography: Revisiting MPC bounds in the proof-of-work era. In Advances in Cryptology - EUROCRYPT 2020 - 39th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Zagreb, Croatia, May 10-14, 2020, Proceedings, Part II (2020), A. Canteaut and Y. Ishai, Eds., vol. 12106 of Lecture Notes in Computer Science, Springer, pp. 129–158.
* (61) Gelles, Y., and Komargodski, I. Brief Announcement: Scalable Agreement Protocols with Optimal Optimistic Efficiency. 37th International Symposium on Distributed Computing (DISC) (2023), 42:1–42:6.
* (62) Gelles, Y., and Komargodski, I. Optimal Load-Balanced Scalable Distributed Agreement. In IACR Cryptol. ePrint Arch. (2023).
* (63) Gennaro, R., Ishai, Y., Kushilevitz, E., and Rabin, T. On 2-Round Secure Multiparty Computation. In Advances in Cryptology - CRYPTO 2002, 22nd Annual International Cryptology Conference, Santa Barbara, California, USA, August 18-22, 2002, Proceedings (2002), M. Yung, Ed., vol. 2442 of Lecture Notes in Computer Science, Springer, pp. 178–193.
* (64) Ghinea, D., Liu-Zhang, C., and Wattenhofer, R. Optimal Synchronous Approximate Agreement with Asynchronous Fallback. In PODC ’22: ACM Symposium on Principles of Distributed Computing, Salerno, Italy, July 25 - 29, 2022 (2022), A. Milani and P. Woelfel, Eds., ACM, pp. 70–80.
* (65) Ghinea, D., Liu-Zhang, C.-D., and Wattenhofer, R. Multidimensional Approximate Agreement with Asynchronous Fallback. Cryptology ePrint Archive (2023).
* (66) Gilad, Y., Hemo, R., Micali, S., Vlachos, G., and Zeldovich, N. Algorand: Scaling Byzantine Agreements for Cryptocurrencies. In Proceedings of the 26th Symposium on Operating Systems Principles (New York, NY, USA, 2017), SOSP ’17, Association for Computing Machinery, p. 51–68.
* (67) Gilbert, S., Lynch, N. A., and Shvartsman, A. A. Rambo: A Robust, Reconfigurable Atomic Memory Service for Dynamic Networks. Distributed Computing 23, 4 (2010), 225–272.
* (68) Hadzilacos, V., and Halpern, J. Y. Message-optimal protocols for byzantine agreement. In Proceedings of the tenth annual ACM symposium on Principles of distributed computing (1991), pp. 309–323.
* (69) Huang, S., Pettie, S., and Zhu, L. Byzantine Agreement with Optimal Resilience via Statistical Fraud Detection. In Proceedings of the 2023 ACM-SIAM Symposium on Discrete Algorithms, SODA 2023, Florence, Italy, January 22-25, 2023 (2023), N. Bansal and V. Nagarajan, Eds., SIAM, pp. 4335–4353.
* (70) Katz, J., and Koo, C. On expected constant-round protocols for byzantine agreement. J. Comput. Syst. Sci. 75, 2 (2009), 91–112.
* (71) Katz, J., Miller, A., and Shi, E. Pseudonymous secure computation from time-lock puzzles. IACR Cryptol. ePrint Arch. (2014), 857.
* (72) King, V., Lonargan, S., Saia, J., and Trehan, A. Load balanced scalable byzantine agreement through quorum building, with full information. Distributed Computing and Networking - 12th International Conference (ICDCN) 6522 LNCS (2011), 203–214.
* (73) King, V., and Saia, J. From almost everywhere to everywhere: Byzantine agreement with $\tilde{O}(n^{3/2})$ bits. In Distributed Computing, 23rd International Symposium (DISC) (2009), vol. 5805 LNCS, pp. 464–478.
* (74) King, V., and Saia, J. Breaking the $O(n^{2})$ bit barrier: Scalable byzantine agreement with an adaptive adversary. Journal of the ACM 58, 4 (2011), 1–24.
* (75) Kokoris-Kogias, E., Malkhi, D., and Spiegelman, A. Asynchronous Distributed Key Generation for Computationally-Secure Randomness, Consensus, and Threshold Signatures. In CCS ’20: 2020 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, USA, November 9-13, 2020 (2020), J. Ligatti, X. Ou, J. Katz, and G. Vigna, Eds., ACM, pp. 1751–1767.
* (76) Kotla, R., Alvisi, L., Dahlin, M., Clement, A., and Wong, E. Zyzzyva: Speculative Byzantine Fault Tolerance. In Proceedings of twenty-first ACM SIGOPS symposium on Operating systems principles (2007), pp. 45–58.
* (77) Kotla, R., and Dahlin, M. High Throughput Byzantine Fault Tolerance. In International Conference on Dependable Systems and Networks, 2004 (2004), IEEE, pp. 575–584.
* (78) Lamport, L., Shostak, R., and Pease, M. The Byzantine Generals Problem. ACM Transactions on Programming Languages and Systems 4, 3 (1982), 382–401.
* (79) Lewis-Pye, A. Quadratic worst-case message complexity for State Machine Replication in the partial synchrony model. arXiv preprint arXiv:2201.01107 (2022).
* (80) Lu, Y., Lu, Z., Tang, Q., and Wang, G. Dumbo-MVBA: Optimal Multi-Valued Validated Asynchronous Byzantine Agreement, Revisited. In PODC ’20: ACM Symposium on Principles of Distributed Computing, Virtual Event, Italy, August 3-7, 2020 (2020), Y. Emek and C. Cachin, Eds., ACM, pp. 129–138.
* (81) Luu, L., Narayanan, V., Baweja, K., Zheng, C., Gilbert, S., and Saxena, P. SCP: A Computationally-Scalable Byzantine Consensus Protocol For Blockchains. Cryptology ePrint Archive (2015).
* (82) Lynch, N. A. Distributed Algorithms. Elsevier, 1996.
* (83) Malkhi, D., Nayak, K., and Ren, L. Flexible Byzantine Fault Tolerance. In Proceedings of the 2019 ACM SIGSAC conference on computer and communications security (2019), pp. 1041–1053.
* (84) Mendes, H., and Herlihy, M. Multidimensional Approximate Agreement in Byzantine Asynchronous Systems. In Symposium on Theory of Computing Conference, STOC’13, Palo Alto, CA, USA, June 1-4, 2013 (2013), D. Boneh, T. Roughgarden, and J. Feigenbaum, Eds., ACM, pp. 391–400.
* (85) Micali, S., Rabin, M. O., and Vadhan, S. P. Verifiable Random Functions. In 40th Annual Symposium on Foundations of Computer Science, FOCS ’99, 17-18 October, 1999, New York, NY, USA (1999), IEEE Computer Society, pp. 120–130.
* (86) Momose, A., and Ren, L. Multi-Threshold Byzantine Fault Tolerance. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security (2021), pp. 1686–1699.
* (87) Momose, A., and Ren, L. Optimal Communication Complexity of Authenticated Byzantine Agreement. In 35th International Symposium on Distributed Computing, DISC 2021, October 4-8, 2021, Freiburg, Germany (Virtual Conference) (2021), S. Gilbert, Ed., vol. 209 of LIPIcs, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, pp. 32:1–32:16.
* (88) Nayak, K., Ren, L., Shi, E., Vaidya, N. H., and Xiang, Z. Improved extension protocols for byzantine broadcast and agreement. 34th International Symposium on Distributed Computing (DISC) 2020 179 (2020), 1–29.
* (89) Rambaud, M. Bootstrapping Message-Linear-Constant-Round Consensus from a Bare PKI Setup, and Separation Bounds from the Idealized Message-Authentication Model.
* (90) Sheng, P., Wang, G., Nayak, K., Kannan, S., and Viswanath, P. Player-replaceability and forensic support are two sides of the same (crypto) coin. IACR Cryptol. ePrint Arch. (2022), 1513.
* (91) Shoup, V. Practical threshold signatures. In Advances in Cryptology - EUROCRYPT 2000, International Conference on the Theory and Application of Cryptographic Techniques, Bruges, Belgium, May 14-18, 2000, Proceeding (2000), B. Preneel, Ed., vol. 1807 of Lecture Notes in Computer Science, Springer, pp. 207–220.
* (92) Shrestha, N., Bhat, A., Kate, A., and Nayak, K. Synchronous Distributed Key Generation without Broadcasts. IACR Cryptol. ePrint Arch. (2021), 1635.
* (93) Spiegelman, A. In search for an optimal authenticated byzantine agreement. arXiv preprint arXiv:2002.06993 (2020).
* (94) Tsimos, G., Loss, J., and Papamanthou, C. Gossiping for Communication-Efficient Broadcast. Advances in Cryptology - CRYPTO - 42nd Annual International Cryptology Conference 13509 LNCS (2022), 439–469.
* (95) Veronese, G. S., Correia, M., Bessani, A. N., Lung, L. C., and Verissimo, P. Efficient Byzantine Fault-Tolerance. IEEE Transactions on Computers 62, 1 (2011), 16–30.
* (96) Wan, J., Momose, A., Ren, L., Shi, E., and Xiang, Z. On the Amortized Communication Complexity of Byzantine Broadcast. Proceedings of the Annual ACM Symposium on Principles of Distributed Computing (2023), 253–261.
* (97) Wan, J., Momose, A., Ren, L., Shi, E., and Xiang, Z. On the Amortized Communication Complexity of Byzantine Broadcast. In Proceedings of the 2023 ACM Symposium on Principles of Distributed Computing (2023), pp. 253–261.
* (99) Wan, J., Xiao, H., Shi, E., and Devadas, S. Expected constant round byzantine broadcast under dishonest majority. Theory of Cryptography - 18th International Conference (TCC) 12550 LNCS (2020), 381–411.
* (100) Yin, M., Malkhi, D., Reiter, M. K., Golan-Gueta, G., and Abraham, I. HotStuff: BFT consensus with linearity and responsiveness. In Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing (2019), pp. 347–356.
## Appendix A Proofs of Lemmas 4 and 6
In this section, we formally prove lemmas 4 and 6. Recall that, in § 3, we fix
a weak consensus algorithm $\mathcal{A}$ which tolerates up to $t<n$ omission
failures. First, we introduce the computational model (§ A.1). Then, we show
preliminary lemmas (§ A.2) required for proving lemmas 4 and 6 (§ A.3).
### A.1. Computational Model
Without loss of generality, we assume that each process sends at most one
message to any specific process in a single round. That is, $\mathcal{A}$
instructs no process $p_{i}\in\Pi$ to send two (or more) messages to any
specific process $p_{j}\in\Pi$ in a single round. Moreover, we assume that no
process sends messages to itself.
#### A.1.1. Messages.
Let $\mathcal{M}$ denote the set of messages. Each message $m\in\mathcal{M}$
encodes the following:
* •
the sender of $m$ (denoted by $m.\mathsf{sender}\in\Pi$); and
* •
the receiver of $m$ (denoted by $m.\mathsf{receiver}\in\Pi$); and
* •
the round (denoted by $m.\mathsf{round}\in\mathbb{N}$).
Due to our assumption that only one message is sent in a round from any
specific process to any other specific process, no message $m$ is sent more
than once in any execution of $\mathcal{A}$.
#### A.1.2. States.
Let $\mathcal{S}$ denote the set of states. Each state $s\in\mathcal{S}$
encodes the following:
* •
the process associated with $s$ (denoted by $s.\mathsf{process}\in\Pi$); and
* •
the round associated with $s$ (denoted by $s.\mathsf{round}\in\mathbb{N}$);
and
* •
the proposal-bit associated with $s$ (denoted by
$s.\mathsf{proposal}\in\\{0,1\\}$); and
* •
the decision-bit associated with $s$ (denoted by
$s.\mathsf{decision}\in\\{\bot,0,1\\}$).
Intuitively, a state $s\in\mathcal{S}$, where (1) $s.\mathsf{process}=p_{i}$,
(2) $s.\mathsf{round}=k$, (3) $s.\mathsf{proposal}=b$, and (4)
$s.\mathsf{decision}=b^{\prime}$, denotes the state of process $p_{i}$ at the
start of round $k$ with $p_{i}$’s proposal being $b$ and $p_{i}$’s decision
being $b^{\prime}$ (if $b^{\prime}=\bot$, $p_{i}$ has not yet decided by the
start of round $k$).
For each process $p_{i}$, there are two _initial states_ $0_{i}\in\mathcal{S}$
and $1_{i}\in\mathcal{S}$ associated with $p_{i}$ such that (1)
$0_{i}.\mathsf{process}=1_{i}.\mathsf{process}=p_{i}$, (2)
$0_{i}.\mathsf{proposal}=0$, and (3) $1_{i}.\mathsf{proposal}=1$. Each process
$p_{i}$ starts round $1$ in state $0_{i}$ or state $1_{i}$.
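For concreteness, these two record types can be transcribed directly into code. The following is a minimal Python sketch of the model, not part of the paper's formalism: the class names and the encoding of $\bot$ as `None` are our choices.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Message:
    sender: int    # index i of the sending process p_i
    receiver: int  # index j of the receiving process p_j
    round: int     # round in which the message is sent

@dataclass(frozen=True)
class State:
    process: int             # index of the process this state belongs to
    round: int               # round at whose start this state holds
    proposal: int            # proposal bit, 0 or 1
    decision: Optional[int]  # None encodes the undecided value ⊥

def initial_state(i: int, b: int) -> State:
    """The initial state 0_i (for b = 0) or 1_i (for b = 1) of process p_i."""
    return State(process=i, round=1, proposal=b, decision=None)
```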
#### A.1.3. State-Transition Function.
Algorithm $\mathcal{A}$ maps (1) the state of a process at the start of a
round, and (2) messages the process received in the round into (a) a new state
of the process at the start of the next round, and (b) messages the process
sends in the next round. Formally, given (1) a state $s\in\mathcal{S}$, and
(2) a set of messages $M^{R}\subsetneq\mathcal{M}$ such that, for every
message $m\in M^{R}$, $m.\mathsf{receiver}=s.\mathsf{process}$ and
$m.\mathsf{round}=s.\mathsf{round}$,
$\mathcal{A}(s,M^{R})=(s^{\prime},M^{S})$, where
* •
$s^{\prime}\in\mathcal{S}$ is a state such that:
* –
$s^{\prime}.\mathsf{process}=s.\mathsf{process}$,
* –
$s^{\prime}.\mathsf{round}=s.\mathsf{round}+1$,
* –
$s^{\prime}.\mathsf{proposal}=s.\mathsf{proposal}$,
* –
if $s.\mathsf{decision}\neq\bot$,
$s^{\prime}.\mathsf{decision}=s.\mathsf{decision}$; and
* •
$M^{S}\subsetneq\mathcal{M}$ is a set of messages such that, for every message
$m\in M^{S}$, the following holds:
* –
$m.\mathsf{sender}=s^{\prime}.\mathsf{process}=s.\mathsf{process}$,
* –
$m.\mathsf{round}=s^{\prime}.\mathsf{round}=s.\mathsf{round}+1$,
* –
$m.\mathsf{receiver}\neq m.\mathsf{sender}$,
* –
there is no message $m^{\prime}\in M^{S}$ such that (1) $m^{\prime}\neq m$,
and (2) $m.\mathsf{receiver}=m^{\prime}.\mathsf{receiver}$.
The messages each process $p_{i}$ sends in the first round depend solely on
$p_{i}$’s initial state:
* •
If $p_{i}$’s state at the start of the first round is $0_{i}$, then
$\mathcal{M}_{i}^{0}$ denotes the messages $p_{i}$ sends in the first round.
For every message $m\in\mathcal{M}_{i}^{0}$, the following holds: (1)
$m.\mathsf{sender}=p_{i}$, (2) $m.\mathsf{round}=1$, (3)
$m.\mathsf{receiver}\neq p_{i}$, and (4) there is no message
$m^{\prime}\in\mathcal{M}_{i}^{0}$ such that (a) $m^{\prime}\neq m$, and (b)
$m^{\prime}.\mathsf{receiver}=m.\mathsf{receiver}$.
* •
If $p_{i}$’s state at the start of the first round is $1_{i}$, then
$\mathcal{M}_{i}^{1}$ denotes the messages $p_{i}$ sends in the first round.
For every message $m\in\mathcal{M}_{i}^{1}$, the following holds: (1)
$m.\mathsf{sender}=p_{i}$, (2) $m.\mathsf{round}=1$, (3)
$m.\mathsf{receiver}\neq p_{i}$, and (4) there is no message
$m^{\prime}\in\mathcal{M}_{i}^{1}$ such that (a) $m^{\prime}\neq m$, and (b)
$m^{\prime}.\mathsf{receiver}=m.\mathsf{receiver}$.
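The constraints on $\mathcal{A}$ listed above can be enforced mechanically. Below is a hedged sketch that wraps an arbitrary transition function, reusing the `Message`/`State` records from the previous sketch; treating $\mathcal{A}$ as a Python callable and the helper name `check_transition` are our illustration only.

```python
from typing import Callable, Set, Tuple

Transition = Callable[[State, Set[Message]], Tuple[State, Set[Message]]]

def check_transition(A: Transition, s: State, M_R: Set[Message]) -> Tuple[State, Set[Message]]:
    # The inputs are messages received in round s.round, addressed to s.process.
    assert all(m.receiver == s.process and m.round == s.round for m in M_R)
    s2, M_S = A(s, M_R)
    assert s2.process == s.process and s2.round == s.round + 1
    assert s2.proposal == s.proposal              # the proposal bit never changes
    if s.decision is not None:                    # a decision, once made, is kept
        assert s2.decision == s.decision
    for m in M_S:                                 # messages sent in the next round
        assert m.sender == s.process and m.round == s.round + 1
        assert m.receiver != m.sender             # no messages to oneself
    receivers = [m.receiver for m in M_S]
    assert len(receivers) == len(set(receivers))  # at most one message per receiver
    return s2, M_S
```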
#### A.1.4. Fragments.
A tuple
$\mathcal{FR}=\Bigl{(}s,M^{S},M^{\mathit{SO}},M^{R},M^{\mathit{RO}}\Bigl{)}$,
where $s\in\mathcal{S}$ and
$M^{S},M^{\mathit{SO}},M^{R},M^{\mathit{RO}}\subsetneq\mathcal{M}$, is a _$k$
-round fragment_, for some $k\in\mathbb{N}\cup\\{+\infty\\}$, of a process
$p_{i}$ if and only if:
1. (1)
$s.\mathsf{process}=p_{i}$; and
2. (2)
$s.\mathsf{round}=k$; and
3. (3)
for every message $m\in M^{S}\cup M^{\mathit{SO}}\cup M^{R}\cup
M^{\mathit{RO}}$, $m.\mathsf{round}=k$; and
4. (4)
$M^{S}\cap M^{\mathit{SO}}=\emptyset$; and
5. (5)
$M^{R}\cap M^{\mathit{RO}}=\emptyset$; and
6. (6)
for every message $m\in M^{S}\cup M^{\mathit{SO}}$, $m.\mathsf{sender}=p_{i}$;
and
7. (7)
for every message $m\in M^{R}\cup M^{\mathit{RO}}$,
$m.\mathsf{receiver}=p_{i}$; and
8. (8)
there is no message $m\in M^{S}\cup M^{\mathit{SO}}\cup M^{R}\cup
M^{\mathit{RO}}$ such that $m.\mathsf{sender}=m.\mathsf{receiver}=p_{i}$; and
9. (9)
there are no two messages $m,m^{\prime}\in M^{S}\cup M^{\mathit{SO}}$ such
that $m.\mathsf{receiver}=m^{\prime}.\mathsf{receiver}$; and
10. (10)
there are no two messages $m,m^{\prime}\in M^{R}\cup M^{\mathit{RO}}$ such
that $m.\mathsf{sender}=m^{\prime}.\mathsf{sender}$.
Intuitively, a $k$-round fragment of a process describes what happens at a
process from the perspective of an omniscient external observer in the $k$-th
round.
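Since a fragment is simply a tuple subject to ten syntactic conditions, membership can be checked mechanically. The sketch below (again reusing the `Message`/`State` records from § A.1.2; the tuple encoding is our assumption) mirrors conditions (1)–(10) one by one.

```python
from typing import Set

def is_fragment(frag, p_i: int, k: int) -> bool:
    """Check conditions (1)-(10) for a k-round fragment (s, MS, MSO, MR, MRO) of p_i."""
    s, MS, MSO, MR, MRO = frag
    out: Set[Message] = MS | MSO  # messages sent or send-omitted in round k
    inc: Set[Message] = MR | MRO  # messages received or receive-omitted in round k
    return (s.process == p_i                                             # (1)
        and s.round == k                                                 # (2)
        and all(m.round == k for m in out | inc)                         # (3)
        and not (MS & MSO)                                               # (4)
        and not (MR & MRO)                                               # (5)
        and all(m.sender == p_i for m in out)                            # (6)
        and all(m.receiver == p_i for m in inc)                          # (7)
        and all(not (m.sender == m.receiver == p_i) for m in out | inc)  # (8)
        and len({m.receiver for m in out}) == len(out)                   # (9)
        and len({m.sender for m in inc}) == len(inc))                    # (10)
```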
##### Intermediate results on fragments.
We now present a few simple results.
###### Lemma 1.
Consider any $k$-round ($k\in\mathbb{N}\cup\\{+\infty\\}$) fragment
$\mathcal{FR}=\Bigl{(}s,M^{S},M^{\mathit{SO}},M^{R},M^{\mathit{RO}}\Bigl{)}$
of any process $p_{i}$, and any tuple
$\mathcal{FR}^{\prime}=\Bigl{(}s,M^{S},M^{\mathit{SO}},M^{R},X\subsetneq\mathcal{M}\Bigl{)}$
for which the following holds:
1. (i)
for every message $m\in X$, $m.\mathsf{round}=k$; and
2. (ii)
$M^{R}\cap X=\emptyset$; and
3. (iii)
for every $m\in X$, $m.\mathsf{receiver}=p_{i}$; and
4. (iv)
there is no message $m\in X$ such that $m.\mathsf{sender}=m.\mathsf{receiver}=p_{i}$;
and
5. (v)
there are no two messages $m,m^{\prime}\in M^{R}\cup X$ such that
$m.\mathsf{sender}=m^{\prime}.\mathsf{sender}$.
Then, $\mathcal{FR}^{\prime}$ is a $k$-round fragment of $p_{i}$.
###### Proof.
To prove that $\mathcal{FR}^{\prime}$ is a $k$-round fragment of $p_{i}$, we
prove that all ten conditions hold for $\mathcal{FR}^{\prime}$. By the
statement of the lemma, $\mathcal{FR}$ is a $k$-round fragment of $p_{i}$.
Conditions (1), (2), (4), (6), and (9) hold for $\mathcal{FR}^{\prime}$ as the
first four elements of the tuple $\mathcal{FR}^{\prime}$ are identical to the
first four elements of $\mathcal{FR}$. Conditions (3), (5), (7), (8), and (10)
hold due to conditions (i), (ii), (iii), (iv), and (v), respectively. ∎
###### Lemma 2.
Consider any $k$-round ($k\in\mathbb{N}\cup\\{+\infty\\}$) fragment
$\mathcal{FR}=\Bigl{(}s,M^{S},M^{\mathit{SO}},M^{R},M^{\mathit{RO}}\Bigl{)}$
of any process $p_{i}$, and any tuple
$\mathcal{FR}^{\prime}=\Bigl{(}s,X\subsetneq\mathcal{M},Y\subsetneq\mathcal{M},M^{R},M^{\mathit{RO}}\Bigl{)}$
for which the following holds:
1. (i)
for every message $m\in X\cup Y$, $m.\mathsf{round}=k$; and
2. (ii)
$X\cap Y=\emptyset$; and
3. (iii)
for every message $m\in X\cup Y$, $m.\mathsf{sender}=p_{i}$; and
4. (iv)
there is no message $m\in X\cup Y$ such that
$m.\mathsf{sender}=m.\mathsf{receiver}=p_{i}$; and
5. (v)
there are no two messages $m,m^{\prime}\in X\cup Y$ such that
$m.\mathsf{receiver}=m^{\prime}.\mathsf{receiver}$.
Then, $\mathcal{FR}^{\prime}$ is a $k$-round fragment of $p_{i}$.
###### Proof.
Due to the statement of the lemma, $\mathcal{FR}$ is a $k$-round fragment of
$p_{i}$. Therefore, conditions (1), (2), (5), (7), and (10) hold directly for
$\mathcal{FR}^{\prime}$ as the first, fourth, and fifth elements of
$\mathcal{FR}^{\prime}$ are identical to the first, fourth and fifth elements
of $\mathcal{FR}$. Conditions (3), (4), (6), (8), and (9) hold due to
conditions (i), (ii), (iii), (iv), and (v), respectively. ∎
#### A.1.5. Behaviors.
In this subsection, we define behaviors of processes. A tuple
$\mathcal{B}=\Bigl{\langle}\mathcal{FR}^{1}=\Bigl{(}s^{1},M^{S(1)},M^{\mathit{SO}(1)},M^{\mathit{R}(1)},M^{\mathit{RO}(1)}\Bigl{)},...,\mathcal{FR}^{k}=\Bigl{(}s^{k},M^{S(k)},M^{\mathit{SO}(k)},M^{\mathit{R}(k)},M^{\mathit{RO}(k)}\Bigl{)}\Bigl{\rangle}$
is a _$k$ -round behavior_, for some $k\in\mathbb{N}\cup\\{+\infty\\}$, of a
process $p_{i}$ if and only if:
1. (1)
for every $j\in[1,k]$, $\mathcal{FR}^{j}$ is a $j$-round fragment of $p_{i}$;
and
2. (2)
$s^{1}=0_{i}$ or $s^{1}=1_{i}$; and
3. (3)
if $s^{1}=0_{i}$, then $M^{S(1)}\cup M^{\mathit{SO}(1)}=\mathcal{M}_{i}^{0}$;
and
4. (4)
if $s^{1}=1_{i}$, then $M^{S(1)}\cup M^{\mathit{SO}(1)}=\mathcal{M}_{i}^{1}$;
and
5. (5)
$s^{1}.\mathsf{proposal}=s^{2}.\mathsf{proposal}=...=s^{k}.\mathsf{proposal}$;
and
6. (6)
if there exists $j\in[1,k]$ such that $s^{j}.\mathsf{decision}\neq\bot$, then
there exists $j^{*}\in[1,j]$ such that (1) for every
$j^{\prime}\in[1,j^{*}-1]$, $s^{j^{\prime}}.\mathsf{decision}=\bot$, and (2)
$s^{j^{*}}.\mathsf{decision}=s^{j^{*}+1}.\mathsf{decision}=...=s^{k}.\mathsf{decision}$;
and
7. (7)
for every $j\in[1,k-1]$, $\mathcal{A}(s^{j},M^{R(j)})=(s^{j+1},M^{S(j+1)}\cup
M^{\mathit{SO}(j+1)})$.
If $k=+\infty$, we say that $\mathcal{B}$ is an _infinite behavior_ of
$p_{i}$. Intuitively, a process’s behavior describes the states and sets of
sent and received messages (including those that are omitted) of that process.
##### Intermediate results on behaviors.
We first introduce a few functions concerned with behaviors (see the
_Functions_ table) before proving two intermediate results (lemmas 3 and 4).
Functions on the $k$-round behavior $\mathcal{B}$ defined above
1:function $\mathsf{state}(\mathcal{B},j\in[1,k])$:
2: return $s^{j}$ $\triangleright$ returns the state at the start of round $j$
3:function $\mathsf{sent}(\mathcal{B},j\in[1,k])$:
4: return $M^{S(j)}$ $\triangleright$ returns the messages (successfully) sent in round $j$
5:function $\mathsf{send\\_omitted}(\mathcal{B},j\in[1,k])$:
6: return $M^{\mathit{SO}(j)}$ $\triangleright$ returns the messages send-omitted in round $j$
7:function $\mathsf{received}(\mathcal{B},j\in[1,k])$:
8: return $M^{R(j)}$ $\triangleright$ returns the messages received in round $j$
9:function $\mathsf{receive\\_omitted}(\mathcal{B},j\in[1,k])$:
10: return $M^{\mathit{RO}(j)}$ $\triangleright$ returns the messages receive-omitted in round $j$
11:function $\mathsf{all\\_sent}(\mathcal{B})$:
12: return $\bigcup\limits_{j\in[1,k]}M^{\mathit{S}(j)}$ $\triangleright$ returns all (successfully) sent messages
13:function $\mathsf{all\\_send\\_omitted}(\mathcal{B})$:
14: return $\bigcup\limits_{j\in[1,k]}M^{\mathit{SO}(j)}$ $\triangleright$ returns all send-omitted messages
15:function $\mathsf{all\\_receive\\_omitted}(\mathcal{B})$:
16: return $\bigcup\limits_{j\in[1,k]}M^{\mathit{RO}(j)}$ $\triangleright$ returns all receive-omitted messages
###### Lemma 3.
Consider any $k$-round ($k\in\mathbb{N}\cup\\{+\infty\\}$) behavior
$\mathcal{B}=\Bigl{\langle}\mathcal{F}^{1},\cdots,\mathcal{F}^{k}\Bigl{\rangle}$
of any process $p_{i}$, and any tuple
$\mathcal{B}^{\prime}=\Bigl{\langle}\mathcal{FR}^{1},\cdots,\mathcal{FR}^{k}\Bigl{\rangle}$.
For every $j\in[1,k]$,
$\mathcal{F}^{j}=\Bigl{(}s^{j},M^{S(j)},M^{\mathit{SO}(j)},M^{\mathit{R}(j)},M^{\mathit{RO}(j)}\Bigl{)}$.
Moreover, for every $j\in[1,k]$,
$\mathcal{FR}^{j}=\Bigl{(}s^{j},M^{S(j)},M^{\mathit{SO}(j)},M^{R(j)},X^{j}\subsetneq\mathcal{M}\Bigl{)}$
and the following holds:
1. (i)
for every message $m\in X^{j}$, $m.\mathsf{round}=j$; and
2. (ii)
$M^{R(j)}\cap X^{j}=\emptyset$; and
3. (iii)
for every message $m\in X^{j}$, $m.\mathsf{receiver}=p_{i}$; and
4. (iv)
there is no message $m\in X^{j}$ such that
$m.\mathsf{sender}=m.\mathsf{receiver}=p_{i}$; and
5. (v)
there are no two messages $m,m^{\prime}\in M^{R(j)}\cup X^{j}$ such that
$m.\mathsf{sender}=m^{\prime}.\mathsf{sender}$.
Then, $\mathcal{B}^{\prime}$ is a $k$-round behavior of $p_{i}$.
###### Proof.
Since $\mathcal{B}$ is a behavior of $p_{i}$,
$\mathcal{F}^{1},\cdots,\mathcal{F}^{k}$ are fragments of $p_{i}$. Thus, for
every $j\in[1,k]$, $\mathcal{FR}^{j}$ is a $j$-round fragment of $p_{i}$ due
to conditions (i) to (v) and Lemma 1, which implies that condition (1) holds
for $\mathcal{B}^{\prime}$. Conditions (3) and (4) hold for
$\mathcal{B}^{\prime}$ as, for every $j\in[1,k]$, the second and third
elements of $\mathcal{FR}^{j}$ are identical to the second and third elements
of $\mathcal{F}^{j}$. Similarly, conditions (2), (5) and (6) hold for
$\mathcal{B}^{\prime}$ as, for every $j\in[1,k]$, the first element of
$\mathcal{FR}^{j}$ is identical to the state from $\mathcal{F}^{j}$. Finally,
condition (7) holds for $\mathcal{B}^{\prime}$: first, for every $j\in[1,k]$,
the first four elements of $\mathcal{FR}^{j}$ are identical to the first four
elements of $\mathcal{F}^{j}$; second, condition (7) holds for $\mathcal{B}$.
∎
###### Lemma 4.
Consider any $k$-round ($k\in\mathbb{N}\cup\\{+\infty\\}$) behavior
$\mathcal{B}=\Bigl{\langle}\mathcal{F}^{1},\cdots,\mathcal{F}^{k}\Bigl{\rangle}$
of any process $p_{i}$ and a tuple
$\mathcal{B}^{\prime}=\Bigl{\langle}\mathcal{FR}^{1},\cdots,\mathcal{FR}^{k}\Bigl{\rangle}$.
For every $j\in[1,k]$,
$\mathcal{F}^{j}=\Bigl{(}s^{j},M^{S(j)},M^{\mathit{SO}(j)},M^{\mathit{R}(j)},M^{\mathit{RO}(j)}\Bigl{)}$.
Moreover, for every $j\in[1,k]$,
$\mathcal{FR}^{j}=\Bigl{(}s^{j},X^{j}\subsetneq\mathcal{M},Y^{j}\subsetneq\mathcal{M},M^{\mathit{R}(j)},M^{\mathit{RO}(j)}\Bigl{)}$
such that (1) $X^{j}\cup Y^{j}=M^{S(j)}\cup M^{\mathit{SO}(j)}$, and (2)
$X^{j}\cap Y^{j}=\emptyset$. Then, $\mathcal{B}^{\prime}$ is a $k$-round
behavior of $p_{i}$.
###### Proof.
Since $\mathcal{B}$ is a behavior of $p_{i}$,
$\mathcal{F}^{1},\cdots,\mathcal{F}^{k}$ are fragments of $p_{i}$. Thus, for
every $j\in[1,k]$, $\mathcal{FR}^{j}$ is a $j$-round fragment of $p_{i}$ due
to Lemma 2, which implies that condition (1) holds for $\mathcal{B}^{\prime}$.
Conditions (3) and (4) hold for $\mathcal{B}^{\prime}$ as (1) both conditions
hold for $\mathcal{B}$, and (2) for every $j\in[1,k]$, $X^{j}\cup
Y^{j}=M^{S(j)}\cup M^{\mathit{SO}(j)}$. Similarly, conditions (2), (5) and (6)
hold for $\mathcal{B}^{\prime}$ as, for every $j\in[1,k]$, the first element
of $\mathcal{FR}^{j}$ is identical to the state from $\mathcal{F}^{j}$.
Finally, condition (7) holds for $\mathcal{B}^{\prime}$: first, for every
$j\in[1,k]$, $X^{j}\cup Y^{j}=M^{S(j)}\cup M^{\mathit{SO}(j)}$ and the first
and the fourth elements of $\mathcal{FR}^{j}$ are identical to the first and
the fourth elements of $\mathcal{F}^{j}$; second, condition (7) holds for
$\mathcal{B}$. ∎
#### A.1.6. Executions.
A _$k$ -round execution_ $\mathcal{E}$, for some
$k\in\mathbb{N}\cup\\{+\infty\\}$, is a tuple
$\mathcal{E}=[\mathcal{F}\subsetneq\Pi,\mathcal{B}_{1},...,\mathcal{B}_{n}]$
such that the following guarantees hold:
* •
_Faulty processes:_ $\mathcal{F}$ is a set of $|\mathcal{F}|\leq t$ processes.
* •
_Composition:_ For every $j\in[1,n]$, $\mathcal{B}_{j}$ is a $k$-round
behavior of process $p_{j}$.
* •
_Send-validity:_ If there exists a message $m$, where
$p_{s}=m.\mathsf{sender}$, $p_{r}=m.\mathsf{receiver}$ and
$j=m.\mathsf{round}$, such that $m\in\mathsf{sent}(\mathcal{B}_{s},j)$, then
the following holds: $m\in\mathsf{received}(\mathcal{B}_{r},j)$ or
$m\in\mathsf{receive\\_omitted}(\mathcal{B}_{r},j)$. That is, if a message is
(successfully) sent, the message is either received or receive-omitted in the
same round.
* •
_Receive-validity:_ If there exists a message $m$, where
$p_{s}=m.\mathsf{sender}$, $p_{r}=m.\mathsf{receiver}$ and
$j=m.\mathsf{round}$, such that
$m\in\mathsf{received}(\mathcal{B}_{r},j)\cup\mathsf{receive\\_omitted}(\mathcal{B}_{r},j)$,
then $m\in\mathsf{sent}(\mathcal{B}_{s},j)$. That is, if a message is received
or receive-omitted, the message is (successfully) sent in the same round.
* •
_Omission-validity:_ If there exists a process $p_{i}$ and $j\in[1,k]$ such
that (1) $\mathsf{send\\_omitted}(\mathcal{B}_{i},j)\neq\emptyset$, or (2)
$\mathsf{receive\\_omitted}(\mathcal{B}_{i},j)\neq\emptyset$, then
$p_{i}\in\mathcal{F}$. That is, if a process commits an omission fault, the
process belongs to $\mathcal{F}$.
If $k=+\infty$, we say that $\mathcal{E}$ is an _infinite execution_.
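For finite $k$, the four guarantees can be checked in the same mechanical style. The sketch below is our illustration only: the composition guarantee is assumed to be verified per behavior (e.g., with `is_fragment` above), and the indexing convention `behaviors[i][j-1]` for the $j$-round fragment of $p_{i}$ is ours.

```python
from typing import List, Set, Tuple

def check_execution(F: Set[int], behaviors: List[List[Tuple]], t: int) -> None:
    """Assert the faulty-processes, send-, receive- and omission-validity guarantees."""
    assert len(F) <= t                                   # at most t faulty processes
    for i, beh in enumerate(behaviors):
        for (s, MS, MSO, MR, MRO) in beh:
            for m in MS:                                 # send-validity
                _, _, _, MR_r, MRO_r = behaviors[m.receiver][m.round - 1]
                assert m in MR_r or m in MRO_r
            for m in MR | MRO:                           # receive-validity
                _, MS_s, _, _, _ = behaviors[m.sender][m.round - 1]
                assert m in MS_s
            if MSO or MRO:                               # omission-validity
                assert i in F
```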
### A.2. Preliminary Lemmas
We start by defining the $\mathsf{swap\\_omission}$ procedure (Algorithm 4).
Algorithm 4 Procedure $\mathsf{swap\\_omission}$
1:procedure $\mathsf{swap\\_omission}(\mathsf{Execution}\text{ }\mathcal{E}=[\mathcal{F},\mathcal{B}_{1},...,\mathcal{B}_{n}],p_{i}\in\Pi)$:
2: let $M\leftarrow\mathsf{all\\_receive\\_omitted}(\mathcal{B}_{i})$ $\triangleright$ $M$ contains all messages which are receive-omitted by $p_{i}$
3: let $\mathcal{F}^{\prime}\leftarrow\emptyset$ $\triangleright$ new set of faulty processes
4: for each $p_{z}\in\Pi$:
5: let $\mathcal{B}_{z}=\langle\mathcal{FR}_{z}^{1},...,\mathcal{FR}_{z}^{k}\rangle$, for some $k\in\mathbb{N}\cup\\{+\infty\\}$
6: for each $j\in[1,k]$:
7: let $\mathit{sent}_{z}\leftarrow\\{m\in\mathcal{M}\,|\,m\in M\land m.\mathsf{round}=j\land m.\mathsf{sender}=p_{z}\\}$ $\triangleright$ messages from $M$ sent by $p_{z}$
8: let $\mathcal{FR}_{z}^{j}=(s^{j},M^{S(j)},M^{\mathit{SO}(j)},M^{R(j)},M^{\mathit{RO}(j)})$ $\triangleright$ old fragment
9: let $\mathcal{FR}^{j}\leftarrow(s^{j},M^{S(j)}\setminus\mathit{sent}_{z},M^{\mathit{SO}(j)}\cup\mathit{sent}_{z},M^{R(j)},M^{\mathit{RO}(j)}\setminus{M})$ $\triangleright$ new fragment
10: if $(M^{\mathit{SO}(j)}\cup\mathit{sent}_{z})\cup(M^{\mathit{RO}(j)}\setminus{M})\neq\emptyset$: $\triangleright$ check for an omission fault
11: $\mathcal{F}^{\prime}\leftarrow\mathcal{F}^{\prime}\cup\\{p_{z}\\}$ $\triangleright$ $p_{z}$ is faulty
12: let $\mathcal{B}_{z}^{\prime}\leftarrow\langle\mathcal{FR}^{1},...,\mathcal{FR}^{k}\rangle$
13: return $[\mathcal{F}^{\prime},\mathcal{B}_{1}^{\prime},...,\mathcal{B}_{n}^{\prime}]$
Intuitively, $\mathsf{swap\\_omission}(\mathcal{E},p_{i})$, for some execution
$\mathcal{E}$ and process $p_{i}$, constructs an execution
$\mathcal{E}^{\prime}$ in which receive-omission faults of process $p_{i}$ are
“swapped” for send-omission faults of other processes. The following lemma
proves that, if some preconditions are true, $\mathcal{E}^{\prime}$ is indeed
an execution and it satisfies certain properties.
###### Lemma 5.
Let $\mathcal{E}=[\mathcal{F},\mathcal{B}_{1},...,\mathcal{B}_{n}]$ be any
$k$-round ($k\in\mathbb{N}\cup\\{+\infty\\}$) execution. Moreover, let
$[\mathcal{F}^{\prime},\mathcal{B}_{1}^{\prime},...,\mathcal{B}_{n}^{\prime}]\leftarrow\mathsf{swap\\_omission}(\mathcal{E},p_{i})$,
for some process $p_{i}$. Let the following hold:
* •
$|\mathcal{F}^{\prime}|\leq t$; and
* •
$\mathsf{all\\_send\\_omitted}(\mathcal{B}_{i})=\emptyset$; and
* •
there exists a process $p_{h}\in\Pi\setminus{\mathcal{F}}$ such that
$p_{h}\neq p_{i}$ and
$\mathsf{all\\_sent}(\mathcal{B}_{h})\cap\mathsf{all\\_receive\\_omitted}(\mathcal{B}_{i})=\emptyset$.
Then, (1)
$[\mathcal{F}^{\prime},\mathcal{B}_{1}^{\prime},...,\mathcal{B}_{n}^{\prime}]$
is a $k$-round execution, (2) $\mathcal{E}$ and
$[\mathcal{F}^{\prime},\mathcal{B}_{1}^{\prime},...,\mathcal{B}_{n}^{\prime}]$
are indistinguishable to every process $p_{z}\in\Pi$, (3)
$p_{i}\notin\mathcal{F}^{\prime}$, and (4) $p_{h}\notin\mathcal{F}^{\prime}$.
###### Proof.
To prove the lemma, we first prove that all guarantees that an execution needs
to satisfy (see § A.1.6) are indeed satisfied for the tuple
$[\mathcal{F}^{\prime},\mathcal{B}_{1}^{\prime},...,\mathcal{B}_{n}^{\prime}]$.
* •
_Faulty processes:_ Follows from the precondition of the lemma.
* •
_Composition:_ As $\mathcal{E}$ is a $k$-round execution, $\mathcal{B}_{i}$ is
a $k$-round behavior of every process $p_{i}$. Therefore, for every process
$p_{i}$, $\mathcal{B}^{\prime}_{i}$ is a $k$-round behavior of $p_{i}$ due to
lemmas 3 and 4.
* •
_Send-validity:_ Consider any message $m$, where $p_{s}=m.\mathsf{sender}$,
$p_{r}=m.\mathsf{receiver}$ and $j=m.\mathsf{round}$, such that $m$ is sent in
$\mathcal{B}^{\prime}_{s}$. Note that $m\in\mathsf{sent}(\mathcal{B}_{s},j)$
(as no new sent messages are added to $\mathcal{B}_{s}^{\prime}$ at line 9).
Therefore, $m\in\mathsf{sent}(\mathcal{B}_{s}^{\prime},j)$ and
$m\in\mathsf{received}(\mathcal{B}_{r},j)\cup\mathsf{receive\\_omitted}(\mathcal{B}_{r},j)$
(due to the send-validity property of $\mathcal{E}$). As $m$ is sent in
$\mathcal{B}^{\prime}_{s}$ (i.e., $m\in M^{S(j)}$ at process $p_{s}$; line 9),
$m\notin M$. Thus, $m$ is not excluded from $M^{\mathit{RO}(j)}$ at process
$p_{r}$ (line 9), which implies $m\in
M^{R(j)}\cup(M^{\mathit{RO}(j)}\setminus{M})$ at process $p_{r}$. Thus, send-
validity holds.
* •
_Receive-validity:_ Consider any message $m$, where $p_{s}=m.\mathsf{sender}$,
$p_{r}=m.\mathsf{receiver}$ and $j=m.\mathsf{round}$, such that $m$ is
received or receive-omitted in $\mathcal{B}_{r}^{\prime}$. As $m$ is received
or receive-omitted in $\mathcal{B}_{r}^{\prime}$, $m$ is received or receive-
omitted in $\mathcal{B}_{r}$ (as no new received or receive-omitted messages
are added to $\mathcal{B}_{r}^{\prime}$ at line 9). Moreover,
$m\in\mathsf{received}(\mathcal{B}_{r},j)\cup\mathsf{receive\\_omitted}(\mathcal{B}_{r},j)$
(as $\mathcal{B}_{r}$ is $k$-round behavior of $p_{r}$), which then implies
that $m\in\mathsf{sent}(\mathcal{B}_{s},j)$ (as $\mathcal{E}$ satisfies
receive-validity). Furthermore, $m\notin M$; otherwise, $m$ would not be
received nor receive-omitted in $\mathcal{B}_{r}^{\prime}$. Therefore, $m$ is
not excluded from $M^{S(j)}$ at process $p_{s}$ (line 9), which proves that
receive-validity is satisfied.
* •
_Omission-validity:_ Follows directly from the check at line 10.
As all guarantees are satisfied,
$[\mathcal{F}^{\prime},\mathcal{B}_{1}^{\prime},...,\mathcal{B}_{n}^{\prime}]$
is a $k$-round execution, which proves the first statement of the lemma.
Second, we prove the indistinguishability statement for every process
$p_{z}\in\Pi$. The $\mathsf{swap\\_omission}$ procedure (Algorithm 4) ensures
that
$\mathsf{received}(\mathcal{B}_{z}^{\prime},j)=\mathsf{received}(\mathcal{B}_{z},j)$
(line 9), for every round $j\in[1,k]$. Moreover, for every round $j\in[1,k]$,
$\mathsf{state}(\mathcal{B}_{z}^{\prime},j)=\mathsf{state}(\mathcal{B}_{z},j)$
and
$\mathsf{sent}(\mathcal{B}_{z}^{\prime},j)\cup\mathsf{send\\_omitted}(\mathcal{B}_{z}^{\prime},j)=\mathsf{sent}(\mathcal{B}_{z},j)\cup\mathsf{send\\_omitted}(\mathcal{B}_{z},j)$
(line 9). Therefore, the second statement of the lemma holds.
Third, we prove that $p_{i}\notin\mathcal{F}^{\prime}$. As no process sends
messages to itself, $\mathit{sent}_{i}=\emptyset$ (line 7) in every round
$j\in[1,k]$. Hence,
$\mathsf{all\\_send\\_omitted}(\mathcal{B}^{\prime}_{i})=\emptyset$ (line 9).
Moreover,
$M=\mathsf{all\\_receive\\_omitted}(\mathcal{B}_{i})=\bigcup\limits_{j\in[1,k]}M^{\mathit{RO}(j)}$
(line 2). Therefore,
$\mathsf{all\\_receive\\_omitted}(\mathcal{B}^{\prime}_{i})=\emptyset$, which
implies that the third statement of the lemma holds.
Finally, we prove that $p_{h}\notin\mathcal{F}^{\prime}$. As
$p_{h}\notin\mathcal{F}$,
$\mathsf{all\\_send\\_omitted}(\mathcal{B}_{h})=\mathsf{all\\_receive\\_omitted}(\mathcal{B}_{h})=\emptyset$.
As $\mathsf{all\\_receive\\_omitted}(\mathcal{B}_{h})=\emptyset$,
$\mathsf{all\\_receive\\_omitted}(\mathcal{B}_{h}^{\prime})=\emptyset$ (line
9). Moreover, $\mathit{sent}_{h}=\emptyset$ (line 7) in every round
$j\in[1,k]$. Therefore,
$\mathsf{all\\_send\\_omitted}(\mathcal{B}_{h}^{\prime})=\emptyset$ (line 9).
Hence, $p_{h}\notin\mathcal{F}^{\prime}$, which concludes the proof of the
lemma. ∎
Algorithm 5 defines the $\mathsf{merge}$ procedure which constructs a new
execution from two mergeable ones; recall that mergeable executions are
defined by Definition 5. The following lemma proves that the result of the
$\mathsf{merge}$ procedure (Algorithm 5) is an execution that is
indistinguishable from the original one and satisfies some important
properties.
Algorithm 5 Procedure $\mathsf{merge}$
1:procedure $\mathsf{merge}(\mathsf{Execution}\text{ }\mathcal{E}_{0}^{B(k_{1})}=[B,\mathcal{B}_{1},...,\mathcal{B}_{n}],\mathsf{Execution}\text{ }\mathcal{E}_{b}^{C(k_{2})}=[C,\mathcal{B}_{1}^{\prime},...,\mathcal{B}^{\prime}_{n}])$:
2: assert ($\mathcal{E}_{0}^{B(k_{1})}$ and $\mathcal{E}_{b}^{C(k_{2})}$ are mergeable executions)
3: let $\mathit{sent}\leftarrow\bigcup\limits_{p_{i}\in A\cup B}\mathcal{M}_{i}^{0}\cup\bigcup\limits_{p_{i}\in C}\mathcal{M}_{i}^{b}$ $\triangleright$ messages sent in the first round
4: let $s_{i}\leftarrow 0_{i}$, for every process $p_{i}\in A\cup B$ $\triangleright$ the initial state of processes in $A\cup B$
5: let $\mathit{sent}_{i}\leftarrow\mathcal{M}_{i}^{0}$, for every process $p_{i}\in A\cup B$ $\triangleright$ the initial messages sent by processes in $A\cup B$
6: let $s_{i}\leftarrow b_{i}$, for every process $p_{i}\in C$ $\triangleright$ the initial state of processes in $C$
7: let $\mathit{sent}_{i}\leftarrow\mathcal{M}_{i}^{b}$, for every process $p_{i}\in C$ $\triangleright$ the initial messages sent by processes in $C$
8: for each $j\geq 1$:
9: for each $p_{i}\in\Pi$:
10: let $\mathit{to}_{i}\leftarrow\\{m\,|\,m\in\mathit{sent}\land m.\mathsf{receiver}=p_{i}\\}$ $\triangleright$ messages sent in round $j$ to $p_{i}$
11: let $\mathit{received}_{i}\leftarrow\emptyset$
12: if $p_{i}\in A$:
13: let $\mathcal{FR}_{i}^{j}=(s_{i},\mathit{sent}_{i},\emptyset,\mathit{to}_{i},\emptyset)$
14: $\mathit{received}_{i}\leftarrow\mathit{to}_{i}$
15: else:
16: if $p_{i}\in B$: $\mathit{received}_{i}\leftarrow\mathsf{received}(\mathcal{B}_{i},j)$ $\triangleright$ receive messages from $\mathcal{B}_{i}$
17: else: $\mathit{received}_{i}\leftarrow\mathsf{received}(\mathcal{B}_{i}^{\prime},j)$ $\triangleright$ receive messages from $\mathcal{B}^{\prime}_{i}$
18: let $\mathcal{FR}_{i}^{j}=(s_{i},\mathit{sent}_{i},\emptyset,\mathit{received}_{i},\mathit{to}_{i}\setminus{\mathit{received}_{i}})$
19: $(s_{i},\mathit{sent}_{i})\leftarrow\mathcal{A}(s_{i},\mathit{received}_{i})$ $\triangleright$ compute new state and newly sent messages
20: $\mathit{sent}\leftarrow\bigcup\limits_{p_{i}\in\Pi}\mathit{sent}_{i}$ $\triangleright$ update sent messages
21: for each $p_{i}\in\Pi$:
22: let $\mathcal{B}_{i}^{*}=\langle\mathcal{FR}_{i}^{1},\mathcal{FR}_{i}^{2},...\rangle$
23: return $[B\cup C,\mathcal{B}_{1}^{*},...,\mathcal{B}_{n}^{*}]$
###### Lemma 6.
Let executions $\mathcal{E}_{0}^{B(k_{1})}$ ($k_{1}\in\mathbb{N}$) and
$\mathcal{E}_{b}^{C(k_{2})}$ ($b\in\\{0,1\\}$, $k_{2}\in\mathbb{N}$) be
mergeable. Then, (1)
$\mathcal{E}^{*}=\mathsf{merge}(\mathcal{E}_{0}^{B(k_{1})},\mathcal{E}_{b}^{C(k_{2})})$
is an infinite execution, (2) $\mathcal{E}^{*}$ is indistinguishable from
$\mathcal{E}_{0}^{B(k_{1})}$ (resp., $\mathcal{E}_{b}^{C(k_{2})}$) to every
process $p_{B}\in B$ (resp., $p_{C}\in C$), and (3) group $B$ (resp., $C$) is
isolated from round $k_{1}$ (resp., $k_{2}$) in $\mathcal{E}^{*}$.
###### Proof.
Let $\mathcal{E}^{*}=[B\cup C,\mathcal{B}_{1}^{*},...,\mathcal{B}_{n}^{*}]$.
Let $\mathcal{E}_{0}^{B(k_{1})}=[B,\mathcal{B}_{1},...,\mathcal{B}_{n}]$ and
$\mathcal{E}_{b}^{C(k_{2})}=[C,\mathcal{B}_{1}^{\prime},...,\mathcal{B}_{n}^{\prime}]$.
To prove the lemma, we first prove that all guarantees from § A.1.6 are
satisfied by $\mathcal{E}^{*}$:
* •
_Faulty processes:_ As $\mathcal{F}^{*}=B\cup C$,
$|\mathcal{F}^{*}|=\frac{t}{4}+\frac{t}{4}\leq t$.
* •
_Composition:_ For each process $p_{i}\in\Pi$, we construct
$\mathcal{B}_{i}^{*}$ by following the valid transitions of the algorithm
$\mathcal{A}$ (line 19). Thus, for each process $p_{i}\in\Pi$,
$\mathcal{B}_{i}^{*}$ is an infinite behavior of $p_{i}$.
* •
_Send-validity:_ Consider any message $m$, where $p_{s}=m.\mathsf{sender}$,
$p_{r}=m.\mathsf{receiver}$ and $j=m.\mathsf{round}$, such that $m$ is sent in
$\mathcal{B}_{s}^{*}$. As $m.\mathsf{round}=j$, $m\in\mathit{sent}_{s}$ in the
$j$-th iteration of the for loop at line 8. Therefore, $m\in\mathit{to}_{r}$
(line 10) in the $j$-th iteration of the for loop at line 8. Hence,
$m\in\mathsf{received}(\mathcal{B}_{r}^{*},j)\cup\mathsf{receive\\_omitted}(\mathcal{B}_{r}^{*},j)$
(line 13 or line 18).
# Beurling-integers with lacunarity
Imre Z. Ruzsa Alfréd Rényi Institute of Mathematics
Budapest, Pf. 127
H-1364 Hungary<EMAIL_ADDRESS>
###### Abstract.
We present examples of multiplicative semigroups of positive reals (Beurling’s
generalized integers) with gaps bounded from below.
###### Key words and phrases:
Beurling, generalized integers
###### 2020 Mathematics Subject Classification:
11P32, 11N99
Supported by NKFI grants K-129335, K-119528, KKP-133819.
## 1\. Introduction
Let $G=\\{g_{1},g_{2},\ldots\\}$ be a sequence of real numbers, $1<g_{1}\leq
g_{2}\leq\ldots$ (generators) and $B=\\{b_{0},b_{1},\ldots\\}$,
$b_{0}=1<b_{1}\leq b_{2}\leq\ldots$ the sequence of products formed by
elements of $G$. If $G$ is the set of primes, $B$ will be the set of positive
integers. The name honours Beurling, who was the first to study analogs of the
prime-number theorem for such systems.
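For experimenting with such systems, the sequence $B$ can be enumerated from a finite list of generators with a heap. The sketch below is only our illustration (the function name, the finite truncation of $G$, and the floating-point arithmetic are assumptions); it generates all products up to a bound and reports the smallest gap.

```python
import heapq

def beurling(generators, limit):
    """All products of powers of the generators that are <= limit, in increasing order."""
    n = len(generators)
    start = (0,) * n                       # exponent vector of b_0 = 1
    heap, seen, B = [(1.0, start)], {start}, []
    while heap:
        b, e = heapq.heappop(heap)         # next Beurling integer in order
        B.append(b)
        for i, g in enumerate(generators):
            e2 = e[:i] + (e[i] + 1,) + e[i + 1:]
            if b * g <= limit and e2 not in seen:
                seen.add(e2)
                heapq.heappush(heap, (b * g, e2))
    return B

B = beurling([2.0, 3.0, 5.0], 200.0)         # G = {2, 3, 5}: B is the 5-smooth numbers
print(min(y - x for x, y in zip(B, B[1:])))  # smallest gap b_{i+1} - b_i, here 1.0
```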
If $G$ is a set of multiplicatively independent integers, $B$ will be a subset
of positive integers, hence $b_{i+1}-b_{i}\geq 1$. If furthermore $G$ contains
all but finitely many primes, then $b_{i+1}-b_{i}$ will also be bounded from
above. Lagarias [3] proved that there is no other example consisting of
integers, and asked whether there is another example made of reals.
I conjecture that such a set does not exist.
As a first step towards this conjecture, we show that a certain simple attempt
to construct such a set must fail, namely we cannot omit a small set of primes
and replace them by non-integers.
###### Theorem 1.
Let $P$ be a set of primes such that
(1.1) $\sum_{p\notin P}1/p<\infty$
and $\alpha\in{\mathbb{R}}\setminus{\mathbb{Z}}$, $\alpha>1$. With
$G=P\cup\\{\alpha\\}$ we have
$\liminf b_{i+1}-b_{i}=0.$
On the other hand, we can add extra elements to a very thin set of primes.
###### Theorem 2.
Let $P$ be a set of primes such that
$\sum_{p\in P}\frac{1}{\sqrt{p}}<\infty.$
There exist numbers $\alpha\in{\mathbb{R}}\setminus{\mathbb{Z}}$, $\alpha>1$
such that for $G=P\cup\\{\alpha\\}$ we have $b_{i+1}-b_{i}\geq 1$. The set of
such numbers $\alpha$ has positive measure.
The above theorem was stated so as to form a contrast to Theorem 1, but in fact
there is nothing special about the primes.
###### Theorem 3.
Let $G^{\prime}$ be a set of reals such that
(1.2) $\sum_{g\in G^{\prime}}\frac{1}{\sqrt{g}}<\infty.$
Let $B^{\prime}$ be the sequence of products formed by elements of
$G^{\prime}$. Assume that $b_{i+1}^{\prime}-b_{i}^{\prime}\geq\delta>0$ for
all $i$. There exist numbers $\alpha\in{\mathbb{R}}\setminus{\mathbb{Z}}$,
$\alpha>1$ such that for $G=G^{\prime}\cup\\{\alpha\\}$ we have
$b_{i+1}-b_{i}\geq\delta$. The set of such numbers $\alpha$ has positive
measure.
Unfortunately we cannot say much about sets of primes which are neither almost
full nor very thin. The metric approach of Theorem 3 cannot be substantially
improved. We illustrate this by the example of squares, where condition (1.2)
“just” fails.
###### Theorem 4.
Let $G^{\prime}=\\{p^{2}\\}$ be the set of prime squares,
$B^{\prime}=\\{n^{2}\\}$ the set of squares.
There exist infinitely many numbers
$\alpha\in{\mathbb{R}}\setminus{\mathbb{Z}}$, $\alpha>1$ such that for
$G=G^{\prime}\cup\\{\alpha\\}$ we have $b_{i+1}-b_{i}\geq 1$. The set of such
numbers $\alpha$ has measure 0.
Call a set of Beurling-integers _maximal lacunary_, if $\inf
b_{i+1}-b_{i}>0$, but the inclusion of any further number in $G$ spoils this property.
###### Problem.
How thin can a maximal lacunary Beurling-set be? Is $B(x)=O(\sqrt{x})$
possible?
$x^{1/2+\varepsilon}$ is possible, as the following easy example shows.
###### Theorem 5.
Let $1<c<2$, $G=\\{p^{c}\\}$ be the set of $c$-th powers of primes,
$B=\\{n^{c}\\}$. This set is maximal lacunary.
The densest example of a lacunary $B$ we could construct which is different
from the integers is as follows.
###### Theorem 6.
There exists a set $G$ of irrational numbers such that
$G(x)=\left|{\\{g\in G,g\leq x\\}}\right|>cx/\log x$
and $b_{i+1}-b_{i}\geq 1$.
## 2\. Proof of Theorem 1
Let $E$ be the set of primes missing from $P$, and $R$ the set of integers
composed only of primes from $P$. We show that for every $\delta>0$ there are
integers $x,y\in R$ such that
$|\alpha^{m}x-y|<\delta.$
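This claim is easy to probe numerically before proving it. The following sketch is our illustration only: the finite excluded set $E$ and the choice $\alpha=3/2$ are arbitrary assumptions, and the search merely exhibits pairs $x,y\in R$ with $\alpha^{m}x$ close to, but different from, $y$.

```python
from fractions import Fraction

E = {7, 11}                    # assumed finite set of excluded primes
alpha = Fraction(3, 2)         # a rational alpha = a/b > 1 that is not an integer

def in_R(n: int) -> bool:      # n is composed only of primes outside E
    return all(n % p != 0 for p in E)

best = None
for m in range(1, 11):
    w = alpha ** m             # exact rational arithmetic
    for x in range(1, 3000):
        if not in_R(x):
            continue
        y = round(w * x)       # nearest integer candidate for alpha^m * x
        d = abs(w * x - y)
        if y >= 1 and d > 0 and in_R(y):
            if best is None or d < best[0]:
                best = (d, m, x, y)
print(best)                    # the distance |alpha^m x - y| shrinks as m grows
```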
Case 1. $\alpha$ is rational, say $\alpha=a/b$. We want to find $x,y\in R$
with
$\left|{a^{m}x-b^{m}y}\right|<\delta b^{m}.$
Fix $m$ so that $\delta b^{m}>2$. Let $d=2$ if $ab$ is odd, $d=1$ otherwise.
Fix odd numbers $u,v$ with $a^{m}u-b^{m}v=d$. We will find $x,y$ in the form
$x=u+2zb^{m},\ y=v+2za^{m}.$
With such a choice we have $a^{m}x-b^{m}y=d<\delta b^{m}$. We need that $x,y$
be free of primes from $E$. We shall estimate the number of integers $z<Z$
with this property.
For a prime $p\in E$, the divisibility $p|u+2zb^{m}$ excludes at most one
residue class modulo $p$. (Exactly one if $p\nmid b$ and none if $p|b$, since
the assumption $a^{m}u-b^{m}v=d$ excludes $p|(u,b)$.) For $p=2$ this
divisibility cannot hold. Similarly the divisibility $p|v+2za^{m}$ may exclude
a residue class, hence at least $p-2$ remain.
Write
$\eta=\prod_{p\in E,p>2}\left({1-\frac{2}{p}}\right)$
and select $T$ so that
$\sum_{p\in E,p>T}1/p<\eta/3.$
Let $q=\prod_{p\in E,2<p\leq T}p$. In each interval of length $q$ there are at
least
$\prod_{p\in E,2<p\leq T}(p-2)\geq\eta q$
integers that avoid the excluded residue classes for every $p\leq T$. Up to
$Z$ this is at least $\eta Z$ integers if $q|Z$.
Any prime divisor $p>T$ must be less than
$\max(u+2Zb^{m},v+2Za^{m})<cZ,$
and excludes 2 residue classes, which means at most $2(1+Z/p)$ integers. There
remain at least
$\eta Z-\sum_{p\in E,T<p<cZ}2(1+Z/p)>\eta Z/3-2\pi(cZ)>0$
for large $Z$.
Case 2. $\alpha$ is irrational. Consider two consecutive convergents of the
continued fraction expansion of $\alpha$, say
$\frac{a_{k}}{r_{k}}<\alpha<\frac{a_{k+1}}{r_{k+1}}.$
Any mediant
$\mu=\frac{xa_{k}+ya_{k+1}}{xr_{k}+yr_{k+1}},\ x,y>0$
satisfies
$\frac{a_{k}}{r_{k}}<\mu<\frac{a_{k+1}}{r_{k+1}},$
hence
$|\alpha-\mu|<\frac{a_{k+1}}{r_{k+1}}-\frac{a_{k}}{r_{k}}=\frac{1}{r_{k}r_{k+1}}.$
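The determinant identity $a_{k+1}r_{k}-a_{k}r_{k+1}=1$ behind this bound is easy to test numerically. Below is a small sketch, assuming $\alpha=\sqrt{2}$ and the consecutive convergents $7/5<\sqrt{2}<17/12$ (chosen for illustration, not taken from the text).

```python
from fractions import Fraction
import math

ak, rk, ak1, rk1 = 7, 5, 17, 12              # convergents of sqrt(2)
assert ak1 * rk - ak * rk1 == 1              # determinant identity

for x, y in [(1, 1), (2, 3), (5, 2)]:        # arbitrary positive weights
    mu = Fraction(x * ak + y * ak1, x * rk + y * rk1)   # a mediant
    assert Fraction(ak, rk) < mu < Fraction(ak1, rk1)
    # hence |alpha - mu| < 1/(rk * rk1)
    assert abs(math.sqrt(2) - mu) < 1 / (rk * rk1)
```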
We try to find $x,y$ so that the numerator and denominator of $\mu$ be free of
primes from $E$ and
(2.1) $\left|{\alpha({xr_{k}+yr_{k+1}})-({xa_{k}+ya_{k+1}})}\right|<\frac{xr_{k}+yr_{k+1}}{r_{k}r_{k+1}}<\delta.$
For the last inequality to hold we require
(2.2) $x<X=\delta r_{k+1}/2,\ y<Y=\delta r_{k}/2.$
First we fix the parity of $x$ and $y$ to make the numerator and denominator
odd. If $a_{k}r_{k}$ is odd, we set $2|y$, $2\nmid x$. If $a_{k+1}r_{k+1}$ is
odd, we set $2\nmid y$, $2|x$. If neither happens, then we set $2\nmid y$ and
$2\nmid x$. The fact that $a_{k+1}r_{k}-a_{k}r_{k+1}=1$ ensures that this
works.
Given $y$, for a prime $p>2$ the divisibility $p|{xa_{k}+ya_{k+1}}$ means a
single residue class modulo $p$ if $p\nmid a_{k}$. It is impossible if
$p|a_{k}$ and $p\nmid y$, and it always holds if $p|(y,a_{k})$. Similarly, the
divisibility $p|{xr_{k}+yr_{k+1}}$ means a single residue class modulo $p$ if
$p\nmid r_{k}$, it is impossible if $p|r_{k}$ and $p\nmid y$, and it always
holds if $p|(y,r_{k})$. That is, at most two residue classes are excluded
modulo $p$ unless $p|(y,a_{k})$ or $p|(y,r_{k})$. As we have little control
over prime divisors of $a_{k}$ and $r_{k}$, we will require that $y$ be free
of prime divisors from $E$ up to a limit.
Write
$\eta=\frac{1}{2}\prod_{p\in E,p>2}\left({1-\frac{2}{p}}\right),\
\eta^{\prime}=\frac{1}{2}\prod_{p\in E,p>2}\left({1-\frac{1}{p}}\right)$
and select $T$ so that
$\sum_{p\in E,p>T}1/p<\delta\eta\eta^{\prime}/10.$
Let $q=2\prod_{p\in E,2<p\leq T}p$.
In each interval of length $q$ there are at least
$\prod_{p\in E,2<p\leq T}(p-1)\geq\eta^{\prime}q$
integers free of prime divisors $p\in E$, $p\leq T$. Up to $Y$ this is at least
$\eta^{\prime}Y-q$ integers. For each such $y$, in each interval of length $q$
there are at least
$\prod_{p\in E,2<p\leq T}(p-2)\geq\eta q$
integers that avoid the excluded residue classes for every $p\leq T$. Up to
$X$ this is at least $\eta X-q$ integers. This leaves us with at least
$(\eta X-q)(\eta^{\prime}Y-q)>\delta^{2}\eta\eta^{\prime}r_{k}r_{k+1}/5$
possible pairs $(x,y)$.
Consider prime divisors $p>T$. The integers, which should not be divisible by
these primes, are all less than
$Xa_{k}+Ya_{k+1}<(\delta/2)(a_{k}r_{k+1}+r_{k}a_{k+1})<U=2\delta
r_{k}r_{k+1};$
hence this is also a bound for $p$. The numbers $xa_{k}+ya_{k+1}$ are all
distinct by the coprimality of $a_{k}$ and $a_{k+1}$, and so are the numbers
$xr_{k}+yr_{k+1}$, but we cannot exclude that the two kinds overlap. Hence an
upper estimate for the number of pairs $x,y$ with an illegal divisibility is
$2(U/p+1)$. Summing this for all $p<U$ we obtain
$\sum_{p\in E,T<p<U}2(U/p+1)<2U\sum_{p\in E,p>T}1/p+2\pi(U)<\delta^{2}\eta\eta^{\prime}r_{k}r_{k+1}/5$
if $r_{k}$ is large enough.
## 3. Proof of Theorems 3 and 2
We need to find numbers $\alpha$ such that
$\left|{\alpha^{k}m-\alpha^{j}n}\right|\geq\delta$ for all $m,n\in B^{\prime}$
and integers $0\leq j<k$. Since for $j\leq k$ we have
$\left|{\alpha^{k}m-\alpha^{j}n}\right|=\alpha^{j}\left|{\alpha^{k-j}m-n}\right|\geq\left|{\alpha^{k-j}m-n}\right|,$
it is sufficient to consider the case $j=0$.
We will show that the measure of such $\alpha$ in the interval
$[e^{t},e^{2t}]$ is positive for sufficiently large $t$.
The event we want to avoid is $\left|{\alpha^{k}m-n}\right|<\delta$, which can
be rewritten as
$\alpha^{k}\frac{m}{n}\in\left({1-\frac{\delta}{n},1+\frac{\delta}{n}}\right).$
Note that $\left|{\alpha^{k}m-n}\right|<\delta$ implies $n>\alpha m-\delta$,
whence $n>2\delta$ and $n>\alpha m/2>m$, assuming that $\alpha>3\delta$ which
holds for $t>\log(3\delta)$.
We take logarithms to infer, with the notation $\beta=\log\alpha$, that
$k\beta+\log m-\log n\in(-2\delta/n,\delta/n),$
that is,
$\beta\in\frac{\log n-\log
m}{k}+\left({\frac{-2\delta}{kn},\frac{\delta}{kn}}\right).$
To estimate the measure of such numbers $\beta$ we add $3\delta/(kn)$ for all
triplets $m,n,k$ such that the above interval intersects the interval
$[t,2t]$. If $t>4\delta$, this intersection implies
$\frac{\log n-\log m}{k}\in(t/2,3t).$
Hence
$\frac{\log n-\log m}{3t}<k<2\frac{\log n-\log m}{t}.$
The ratio of the upper and lower bounds is 6, hence the sum of $1/k$ in this
interval is less than $c=1+\log 6$. Consequently the sum of $3\delta/(kn)$ for
all triplets $m,n,k$ is at most the sum of $3c\delta/n$ for all possible pairs
$m,n$. These pairs satisfy $n>\alpha m/2>e^{t}m/2$, so
$\sum_{m,n}\frac{1}{n}<2e^{-t/2}\sum_{m,n\in
B^{\prime}}\frac{1}{\sqrt{mn}}=2e^{-t/2}\left({\sum_{m\in
B^{\prime}}\frac{1}{\sqrt{m}}}\right)^{2}.$
This series is convergent, indeed
$\sum_{m\in B^{\prime}}\frac{1}{\sqrt{m}}=\prod_{g\in
G^{\prime}}\left({1+\frac{1}{\sqrt{g}-1}}\right)<\infty$
by assumption (1.2).
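The Euler-product identity can be checked numerically. A toy sketch, assuming the illustrative generators $G^{\prime}=\{4,9\}$ (chosen so both sides are easy to compute; these are not the paper's generators):

```python
import math

G = [4, 9]                                   # assumed toy generators
# left side: sum of 1/sqrt(m) over products m = 4**i * 9**j (truncated),
# including the empty product m = 1
lhs = sum((1 / 2) ** i * (1 / 3) ** j for i in range(60) for j in range(60))
# right side: product of 1 + 1/(sqrt(g) - 1) over the generators
rhs = math.prod(1 + 1 / (math.sqrt(g) - 1) for g in G)
print(lhs, rhs)                              # both equal (2)(3/2) = 3
assert abs(lhs - rhs) < 1e-9
```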
The estimate we found for the measure of bad $\beta$ is
$6c\delta e^{-t/2}\left({\sum_{m\in
B^{\prime}}\frac{1}{\sqrt{m}}}\right)^{2},$
which is less than $t$, the length of the interval, for large enough $t$.
## 4. Proof of Theorem 4
Let $q>1$ be a squarefree integer, $a,b$ positive integers and
$\alpha=\bigl{(}a\sqrt{q}+b\bigr{)}^{2}.$
We show that for these numbers $B$ has the lacunarity property.
The elements of $B$ are numbers of the form $\alpha^{k}m^{2}$, and we need to
show that
$\left|{\alpha^{k}m^{2}-\alpha^{j}n^{2}}\right|\geq 1.$
Since for $j\leq k$ we have
$\left|{\alpha^{k}m^{2}-\alpha^{j}n^{2}}\right|=\alpha^{j}\left|{\alpha^{k-j}m^{2}-n^{2}}\right|\geq\left|{\alpha^{k-j}m^{2}-n^{2}}\right|,$
it is sufficient to consider the case $j=0$.
Put $\beta=\bigl{(}a\sqrt{q}+b\bigr{)}^{k}$. This number is of the form
$\beta=u\sqrt{q}+v$
with positive integers $u,v$. Now we have
$\alpha^{k}m^{2}-n^{2}=(\beta m)^{2}-n^{2}=\bigl{(}vm+n+um\sqrt{q}\bigr{)}\bigl{(}vm-n+um\sqrt{q}\bigr{)}$
$=\frac{vm+n+um\sqrt{q}}{vm-n-um\sqrt{q}}\left({(vm-n)^{2}-(um)^{2}q}\right).$
The numerator exceeds the absolute value of the denominator, and the second
factor is a nonzero integer, so the absolute value of the expression is $>1$.
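A quick numeric sanity check of this gap bound, assuming the concrete choice $q=2$, $a=b=1$, i.e. $\alpha=(\sqrt{2}+1)^{2}$ (one admissible instance, not the general case):

```python
import math

alpha = (math.sqrt(2) + 1) ** 2              # assumed example: q=2, a=b=1

for k in range(1, 6):
    for m in range(1, 50):
        t = alpha ** k * m * m               # an element alpha^k m^2 of B
        n = round(math.sqrt(t))
        for nn in (n - 1, n, n + 1):         # nearest squares n^2
            if nn > 0:
                # |alpha^k m^2 - n^2| > 1, up to floating-point slack
                assert abs(t - nn * nn) > 1 - 1e-6
```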
Now we show that any such $\alpha$ must have $\sqrt{\alpha}$ badly
approximable. Assume that $\sqrt{\alpha}$ is well approximable, that is, for
every $\varepsilon>0$ there are integers $a,b$ such that
$\left|{\sqrt{\alpha}-\frac{a}{b}}\right|<\frac{\varepsilon}{b^{2}}.$
Clearly $a<2\sqrt{\alpha}b$ and then
$\left|{\alpha b^{2}-a^{2}}\right|=\left|{\sqrt{\alpha}b-a}\right|(\sqrt{\alpha}b+a)<3\varepsilon\sqrt{\alpha},$
so $\alpha b^{2}$ comes arbitrarily close to a square, contradicting the gap
condition. Badly approximable numbers have measure 0 by a theorem of
Khintchine [2].
## 5. Proof of Theorem 5
Note first that $B=\{n^{c}\}$ itself is lacunary, since $(n+1)^{c}-n^{c}$ is
increasing in $n$ and already $2^{c}-1>1$. Now try to include a number
$\alpha$. Take integers $a,b$ such that
$\left|{\alpha^{1/c}-\frac{a}{b}}\right|<\frac{1}{b^{2}};$
infinitely many such pairs exist (for instance the convergents of
$\alpha^{1/c}$). From the mean value theorem we see that
$\frac{\alpha b^{c}-a^{c}}{\alpha^{1/c}b-a}=cz^{c-1}$
with some $z$ between $\alpha^{1/c}b$ and $a$, so $z=O(b)$. Hence
$\alpha b^{c}-a^{c}=O(b^{c-2}),$
which, since $c<2$, can be made arbitrarily small.
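A numeric sketch of this estimate, assuming $c=3/2$ and the illustrative target $\alpha=3$ (so $\alpha^{1/c}=3^{2/3}$); the approximations $a/b$ are generated here as continued-fraction convergents, not taken from the text:

```python
import math

c = 1.5
alpha = 3.0                                  # assumed example
x = alpha ** (1 / c)                         # alpha^(1/c), irrational

# standard continued-fraction convergents a/b of x
a0, b0 = 1, 0
a1, b1 = int(x), 1
t = x
for _ in range(8):
    t = 1 / (t - int(t))
    a0, b0, a1, b1 = a1, b1, int(t) * a1 + a0, int(t) * b1 + b0
    # |alpha * b^c - a^c| = O(b^(c-2)) shrinks as b grows
    print(b1, abs(alpha * b1 ** c - a1 ** c))
```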
## 6. Proof of Theorem 6
We give two examples, one with quadratic irrationals and the other with
transcendental numbers. Both arise from a subset of primes through a
transformation.
Example 1: quadratic.
Take those odd primes that split in ${\mathbb{Q}}[\sqrt{2}]$. They are the
primes $p\equiv\pm 1\pmod{8}$ (about half of the primes). For such a prime
there are positive integers $x,y$ such that
$\pm p=x^{2}-2y^{2}=(x-y\sqrt{2})(x+y\sqrt{2}).$
Put $f(p)=\min(x+y\sqrt{2})$ over all such representations. This satisfies
$f(p)<C\sqrt{p}$ with some constant $C$. This can be seen by comparing the
minimal representation with the one obtained by $x^{\prime}=|x-2y|$,
$y^{\prime}=|y-x|$ which corresponds to a multiplication by the unit
$1-\sqrt{2}$ of ${\mathbb{Q}}[\sqrt{2}]$. (It is not difficult to calculate
the best value of $C$, but it is not important for this argument.) Extend $f$
multiplicatively to all integers composed exclusively of primes $p\equiv\pm
1\pmod{8}$. For every such integer $n$ we have
$f(n)=x+y\sqrt{2},\ x,y>0,\ x^{2}-2y^{2}=\pm n.$
Put $g(n)=f(n)^{2}$. Our generators will be the numbers $g(p)$ for $p$ prime,
$B$ will be the set of values of $g(n)$ for the above described special $n$.
As $g(p)<C^{2}p$ and half of the primes are used, $G(x)>cx/\log x$ holds for
large $x$ with $c=1/(2C^{2})$.
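A brute-force sketch of $f$ for a few split primes, assuming a simple search over small representations $\pm p=x^{2}-2y^{2}$ (the search ranges are generous illustrative choices):

```python
import math

def f(p):
    """Minimal x + y*sqrt(2) with x, y > 0 and |x^2 - 2 y^2| = p."""
    best = None
    for y in range(1, 60):
        for x in range(1, 90):
            if abs(x * x - 2 * y * y) == p:
                v = x + y * math.sqrt(2)
                if best is None or v < best:
                    best = v
    return best

for p in [7, 17, 23, 31, 41]:                # primes = +-1 (mod 8)
    # the ratio f(p)/sqrt(p) stays bounded, as claimed above
    print(p, round(f(p), 3), round(f(p) / math.sqrt(p), 3))
```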
Now we show that $|g(m)-g(n)|>1$ for $m\neq n$. Let
$f(m)=u+v\sqrt{2},\ f(n)=x+y\sqrt{2}.$
We have
$f(m)^{2}-f(n)^{2}=\left({(u+x)+(v+y)\sqrt{2}}\right)\left({(u-x)+(v-y)\sqrt{2}}\right)$
$=\frac{(u+x)+(v+y)\sqrt{2}}{(u-x)-(v-y)\sqrt{2}}\left({(u-x)^{2}-2(v-y)^{2}}\right).$
The numerator exceeds the absolute value of the denominator, and the second
factor is a nonzero integer, so the absolute value of the expression is $>1$.
The similarity to the proof of Theorem 4 hints that the two arguments could be
combined, and the above example can be extended by including squares of
integers. However, this does not substantially increase the size of $B(x)$ and
$G(x)$.
Example 2: transcendental.
Consider primes $p\equiv 1\pmod{4}$. Write $p=a^{2}+b^{2}$ with $0<a<b$ and
let
$\rho(p)=ia+b=\sqrt{p}e^{ih(p)},\ 0<h(p)<\pi/2.$
Here $\rho(p)$ is one of the Gaussian primes in the decomposition of $p$ in
the ring of Gaussian integers. Extend $\rho$ multiplicatively to the product
of
such primes, that is, odd integers that can be written as a sum of two
squares.
Since together with a Gaussian prime its conjugate is never selected, the
numbers $\rho(n)$ for $n\neq 1$, and $\rho(m)/\rho(n)$ for $m\neq n$ will
never be real. Indeed, $\rho(m)/\rho(n)$ is a product of our selected primes
with (positive and negative) exponents, and its conjugate can be obtained by
taking the conjugate primes; by the uniqueness of prime factorization these
are different numbers.
Given a prime $p$ let $f(p)=h(p)+2k\pi$ with the integer $k$ chosen so that
$\log p<f(p)<\log p+2\pi$. Extend $f$ additively. We will always have
$e^{if(n)}=\frac{\rho(n)}{\sqrt{n}}.$
Finally we put
$g(p)=e^{f(p)}<e^{2\pi}p.$
These numbers form the set $G$, and (since again half of the primes are used)
$G(x)>cx/\log x,\ c=e^{-2\pi}/2$
for large $x$. $B$ is the set of values of the multiplicative extension of
$g$. Since $g(n)$ is one of the values of
$\left({\rho(n)/\sqrt{n}}\right)^{-i}$, it is transcendental by the
Gelfond–Schneider theorem, see for instance [1].
We show the lacunarity property. For $m\neq n$ consider the triangle in the
integer lattice with vertices $0,\rho(m),\rho(n)$. Since it is a
nondegenerate triangle, its area is at least 1/2; on the other hand, it is
exactly
$\frac{1}{2}\sqrt{mn}\left|{\sin\left({f(m)-f(n)}\right)}\right|.$
We infer that
$\left|{\sin\left({f(m)-f(n)}\right)}\right|\geq\frac{1}{\sqrt{mn}}.$
Finally
$g(m)-g(n)=e^{f(m)}-e^{f(n)}=e^{\frac{f(m)+f(n)}{2}}\left({e^{\frac{f(m)-f(n)}{2}}-e^{\frac{f(n)-f(m)}{2}}}\right).$
The first factor is
$\sqrt{g(m)g(n)}\geq\sqrt{mn}.$
To estimate the second factor, note that
$\left|{e^{x}-e^{-x}}\right|>2|x|>|\sin(2x)|,$
so it exceeds $\left|{\sin\left({f(m)-f(n)}\right)}\right|$ which was shown to
exceed $1/\sqrt{mn}$.
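A numeric sketch of this construction, assuming $\rho$ and $f$ are computed by brute force for a few primes $p\equiv 1\pmod{4}$ (the decomposition search and the branch choice for $k$ below are illustrative implementations of the definitions above):

```python
import cmath, math

def rho(p):
    """b + i*a from the decomposition p = a^2 + b^2 with 0 < a < b."""
    for a in range(1, math.isqrt(p) + 1):
        b = math.isqrt(p - a * a)
        if b * b == p - a * a and a < b:
            return complex(b, a)

def f(p):
    h = cmath.phase(rho(p))                  # h(p) in (0, pi/2)
    k = math.ceil((math.log(p) - h) / (2 * math.pi))
    return h + 2 * math.pi * k               # log p < f(p) < log p + 2 pi

for p, q in [(5, 13), (13, 17), (5, 29)]:
    # the triangle 0, rho(p), rho(q) has area >= 1/2, giving the sine bound
    s = abs(math.sin(f(p) - f(q)))
    print(p, q, s, 1 / math.sqrt(p * q))
    assert s >= 1 / math.sqrt(p * q) - 1e-12
```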
Acknowledgement. This work was inspired by conversations with Szilárd G.
Révész.
## References
* [1] E. B. Burger and R. Tubbs, _Making transcendence transparent_ , Springer, 2004\.
* [2] A. Ya. Khintchine, _Continued fractions_ , Noordhoff, Groningen, 1963, English translation by P. Wynn.
* [3] Jeffrey Lagarias, _Beurling generalized integers with the Delone property_ , Forum Mathematicum 11 (1997).
# The Seventeenth Data Release of the Sloan Digital Sky Surveys: Complete
Release of MaNGA, MaStar and APOGEE-2 Data
Abdurro’uf11affiliation: Academia Sinica Institute of Astronomy and
Astrophysics, 11F of AS/NTU, Astronomy-Mathematics Building, No.1, Sec. 4,
Roosevelt Rd, Taipei, 10617, Taiwan , Katherine Accetta22affiliation:
Department of Astrophysical Sciences, Princeton University, Princeton, NJ
08544, USA , Conny Aerts33affiliation: Institute of Astronomy, KU Leuven,
Celestijnenlaan 200D, B-3001 Leuven, Belgium , Víctor Silva
Aguirre44affiliation: Stellar Astrophysics Centre, Department of Physics and
Astronomy, Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C, Denmark ,
Romina Ahumada55affiliation: Instituto de Astronomía, Universidad Católica del
Norte, Av. Angamos 0610, Antofagasta, Chile , Nikhil Ajgaonkar66affiliation:
Department of Physics and Astronomy, University of Kentucky, 505 Rose St.,
Lexington, KY, 40506-0055, USA , N. Filiz Ak77affiliation: Department of
Astronomy and Space Sciences, Erciyes University, 38039 Kayseri, Turkey ,
Shadab Alam88affiliation: Institute for Astronomy, University of Edinburgh,
Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ, UK , Carlos Allende
Prieto99affiliation: Instituto de Astrofísica de Canarias (IAC), C/ Via Láctea
s/n, E-38205 La Laguna, Tenerife, Spain 1010affiliation: Universidad de La
Laguna (ULL), Departamento de Astrofísica, E-38206 La Laguna, Tenerife Spain ,
Andrés Almeida1111affiliation: Department of Astronomy, University of
Virginia, Charlottesville, VA 22904-4325, USA , Friedrich
Anders1212affiliation: Leibniz-Institut fur Astrophysik Potsdam (AIP), An der
Sternwarte 16, D-14482 Potsdam, Germany 1313affiliation: Institut de Ciències
del Cosmos, Universitat de Barcelona (IEEC-UB), Carrer Martí i Franquès 1,
E-08028 Barcelona, Spain , Scott F. Anderson1414affiliation: Department of
Astronomy, University of Washington, Box 351580, Seattle, WA 98195, USA ,
Brett H. Andrews1515affiliation: PITT PACC, Department of Physics and
Astronomy, University of Pittsburgh, Pittsburgh, PA 15260, USA , Borja
Anguiano1111affiliation: Department of Astronomy, University of Virginia,
Charlottesville, VA 22904-4325, USA , Erik Aquino-Ortíz1616affiliation:
Instituto de Astronomía, Universidad Nacional Autónoma de México, A.P. 70-264,
04510, Mexico, D.F., México , Alfonso Aragón-Salamanca1717affiliation: School
of Physics and Astronomy, University of Nottingham, University Park,
Nottingham, NG7 2RD, UK , Maria Argudo-Fernández1818affiliation: Instituto de
Física, Pontificia Universidad Católica de Valparaíso, Casilla 4059,
Valparaíso, Chile , Metin Ata1919affiliation: Kavli Institute for the Physics
and Mathematics of the Universe (WPI), University of Tokyo, Kashiwa 277-8583,
Japan , Marie Aubert2020affiliation: Aix Marseille Université, CNRS/IN2P3,
CPPM, Marseille, France , Vladimir Avila-Reese1616affiliation: Instituto de
Astronomía, Universidad Nacional Autónoma de México, A.P. 70-264, 04510,
Mexico, D.F., México , Carles Badenes1515affiliation: PITT PACC, Department of
Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15260, USA ,
Rodolfo H. Barbá2121affiliation: Departamento de Astronomía, Universidad de La
Serena, Av. Juan Cisternas 1200 Norte, La Serena, Chile , Kat
Barger2222affiliation: Department of Physics & Astronomy, Texas Christian
University, Fort Worth, TX 76129, USA , Jorge K. Barrera-
Ballesteros1616affiliation: Instituto de Astronomía, Universidad Nacional
Autónoma de México, A.P. 70-264, 04510, Mexico, D.F., México , Rachael L.
Beaton2323affiliation: The Observatories of the Carnegie Institution for
Science, 813 Santa Barbara Street, Pasadena, CA 91101, USA , Timothy C.
Beers2424affiliation: Department of Physics and JINA Center for the Evolution
of the Elements, University of Notre Dame, Notre Dame, IN 46556, USA ,
Francesco Belfiore2525affiliation: INAF - Osservatorio Astrofisico di Arcetri,
Largo E. Fermi 5, 50125 Firenze, Italy , Chad F. Bender2626affiliation:
Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson,
AZ 85721-0065, USA , Mariangela Bernardi2727affiliation: Department of Physics
and Astronomy, University of Pennsylvania, Philadelphia, PA 19104, USA ,
Matthew A. Bershady2828affiliation: Department of Astronomy, University of
Wisconsin-Madison, 475 N. Charter St., Madison WI 53703, USA 2929affiliation:
South African Astronomical Observatory, P.O. Box 9, Observatory 7935, Cape
Town, South Africa 3030affiliation: Department of Astronomy, University of
Cape Town, Private Bag X3, Rondebosch 7701, South Africa , Florian
Beutler88affiliation: Institute for Astronomy, University of Edinburgh, Royal
Observatory, Blackford Hill, Edinburgh EH9 3HJ, UK , Christian Moni
Bidin55affiliation: Instituto de Astronomía, Universidad Católica del Norte,
Av. Angamos 0610, Antofagasta, Chile , Jonathan C. Bird3131affiliation:
Department of Physics and Astronomy, Vanderbilt University, VU Station 1807,
Nashville, TN 37235, USA , Dmitry Bizyaev3232affiliation: Apache Point
Observatory, P.O. Box 59, Sunspot, NM 88349, USA 3333affiliation: Sternberg
Astronomical Institute, Moscow State University, Moscow, 119992, Russia ,
Guillermo A. Blanc2323affiliation: The Observatories of the Carnegie
Institution for Science, 813 Santa Barbara Street, Pasadena, CA 91101, USA ,
Michael R. Blanton3434affiliation: Center for Cosmology and Particle Physics,
Department of Physics, 726 Broadway, Room 1005, New York University, New York,
NY 10003, USA , Nicholas Fraser Boardman3535affiliation: Department of Physics
and Astronomy, University of Utah, 115 S. 1400 E., Salt Lake City, UT 84112,
USA 3636affiliation: School of Physics and Astronomy, University of St
Andrews, North Haugh, St Andrews KY16 9SS, UK , Adam S. Bolton3737affiliation:
NSF’s National Optical-Infrared Astronomy Research Laboratory, 950 North
Cherry Avenue, Tucson, AZ 85719, USA , Médéric Boquien3838affiliation: Centro
de Astronomía (CITEVA), Universidad de Antofagasta, Avenida Angamos 601,
Antofagasta 1270300, Chile , Jura Borissova3939affiliation: Instituto de
Física y Astronomía, Universidad de Valparaíso, Av. Gran Bretaña 1111, Playa
Ancha, Casilla 5030, Chile. 4040affiliation: Millennium Institute of
Astrophysics, MAS, Nuncio Monsenor Sotero Sanz 100, Of. 104, Providencia,
Santiago, Chile , Jo Bovy4141affiliation: David A. Dunlap Department of
Astronomy & Astrophysics, University of Toronto, 50 St. George Street,
Toronto, ON, M5S 3H4, Canada 4242affiliation: Dunlap Institute for Astronomy
and Astrophysics, University of Toronto, 50 St. George Street, Toronto,
Ontario M5S 3H4, Canada , W.N. Brandt4343affiliation: Department of Astronomy
& Astrophysics, Eberly College of Science, The Pennsylvania State University,
525 Davey Laboratory, University Park, PA 16802, USA 4444affiliation:
Institute for Gravitation and the Cosmos, The Pennsylvania State University,
University Park, PA 16802, USA 4545affiliation: Department of Physics, Eberly
College of Science, The Pennsylvania State University, 104 Davey Laboratory,
University Park, PA 16802, USA , Jordan Brown4646affiliation: Department of
Biological and Physical Sciences, South Carolina State University, P.O. Box
7024, Orangeburg, SC 29117, USA , Joel R. Brownstein3535affiliation:
Department of Physics and Astronomy, University of Utah, 115 S. 1400 E., Salt
Lake City, UT 84112, USA , Marcella Brusa4747affiliation: Dipartimento di
Fisica e Astronomia ”Augusto Righi”, Università di Bologna, via Gobetti 93/2,
40129 Bologna, Italy 4848affiliation: INAF - Osservatorio di Astrofisica e
Scienza dello Spazio di Bologna, via Gobetti 93/3, 40129 Bologna, Italy ,
Johannes Buchner4949affiliation: Max-Planck-Institut für extraterrestrische
Physik, Gießenbachstraße 1, 85748 Garching, Germany , Kevin
Bundy5050affiliation: UCO/Lick Observatory, University of California, Santa
Cruz, 1156 High St. Santa Cruz, CA 95064, USA , Joseph N.
Burchett5151affiliation: Department of Astronomy, New Mexico State University,
Las Cruces, NM 88003, USA , Martin Bureau5252affiliation: Sub-department of
Astrophysics, Department of Physics, University of Oxford, Denys Wilkinson
Building, Keble Road, Oxford OX1 3RH, UK , Adam Burgasser5353affiliation:
Center for Astrophysics and Space Science, University of California San Diego,
La Jolla, CA 92093, USA , Tuesday K. Cabang4646affiliation: Department of
Biological and Physical Sciences, South Carolina State University, P.O. Box
7024, Orangeburg, SC 29117, USA , Stephanie Campbell3636affiliation: School of
Physics and Astronomy, University of St Andrews, North Haugh, St Andrews KY16
9SS, UK , Michele Cappellari5252affiliation: Sub-department of Astrophysics,
Department of Physics, University of Oxford, Denys Wilkinson Building, Keble
Road, Oxford OX1 3RH, UK , Joleen K. Carlberg5454affiliation: Space Telescope
Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA , Fábio
Carneiro Wanderley5555affiliation: Observatório Nacional, Rio de Janeiro,
Brasil , Ricardo Carrera5656affiliation: Astronomical Observatory of Padova,
National Institute of Astrophysics, Vicolo Osservatorio 5 - 35122 - Padova,
Italy , Jennifer Cash4646affiliation: Department of Biological and Physical
Sciences, South Carolina State University, P.O. Box 7024, Orangeburg, SC
29117, USA , Yan-Ping Chen5757affiliation: NYU Abu Dhabi, PO Box 129188, Abu
Dhabi, UAE , Wei-Huai Chen11affiliation: Academia Sinica Institute of
Astronomy and Astrophysics, 11F of AS/NTU, Astronomy-Mathematics Building,
No.1, Sec. 4, Roosevelt Rd, Taipei, 10617, Taiwan 5858affiliation: Department
of Physics, National Taiwan University, Taipei 10617, Taiwan , Brian
Cherinka5454affiliation: Space Telescope Science Institute, 3700 San Martin
Drive, Baltimore, MD 21218, USA , Cristina Chiappini1212affiliation: Leibniz-
Institut fur Astrophysik Potsdam (AIP), An der Sternwarte 16, D-14482 Potsdam,
Germany , Peter Doohyun Choi5959affiliation: Department of Astronomy and Space
Science, Sejong University, 209, Neungdong-ro, Gwangjin-gu, Seoul, South Korea
, S. Drew Chojnowski5151affiliation: Department of Astronomy, New Mexico State
University, Las Cruces, NM 88003, USA , Haeun Chung2626affiliation: Steward
Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ
85721-0065, USA , Nicolas Clerc6060affiliation: IRAP Institut de Recherche en
Astrophysique et Planétologie, Université de Toulouse, CNRS, UPS, CNES,
Toulouse, France , Roger E. Cohen5454affiliation: Space Telescope Science
Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA , Julia M.
Comerford6161affiliation: Center for Astrophysics and Space Astronomy,
Department of Astrophysical and Planetary Sciences, University of Colorado,
389 UCB, Boulder, CO 80309-0389, USA , Johan Comparat4949affiliation: Max-
Planck-Institut für extraterrestrische Physik, Gießenbachstraße 1, 85748
Garching, Germany , Luiz da Costa6262affiliation: Laboratório
Interinstitucional de e-Astronomia, 77 Rua General José Cristino, Rio de
Janeiro, 20921-400, Brasil , Kevin Covey6363affiliation: Department of Physics
and Astronomy, Western Washington University, 516 High Street, Bellingham, WA
98225, USA , Jeffrey D. Crane2323affiliation: The Observatories of the
Carnegie Institution for Science, 813 Santa Barbara Street, Pasadena, CA
91101, USA , Irene Cruz-Gonzalez1616affiliation: Instituto de Astronomía,
Universidad Nacional Autónoma de México, A.P. 70-264, 04510, Mexico, D.F.,
México , Connor Culhane6363affiliation: Department of Physics and Astronomy,
Western Washington University, 516 High Street, Bellingham, WA 98225, USA ,
Katia Cunha5555affiliation: Observatório Nacional, Rio de Janeiro, Brasil
2626affiliation: Steward Observatory, University of Arizona, 933 North Cherry
Avenue, Tucson, AZ 85721-0065, USA , Y. Sophia Dai (戴昱)6464affiliation:
National Astronomical Observatories of China, Chinese Academy of Sciences, 20A
Datun Road, Chaoyang District, Beijing 100012, China , Guillermo
Damke6565affiliation: Instituto de Investigación Multidisciplinario en Ciencia
y Tecnología, Universidad de La Serena. Avenida Raúl Bitrán S/N, La Serena,
Chile 6666affiliation: AURA Observatory in Chile, Avda. Juan Cisternas 1500,
La Serena, Chile , Jeremy Darling6161affiliation: Center for Astrophysics and
Space Astronomy, Department of Astrophysical and Planetary Sciences,
University of Colorado, 389 UCB, Boulder, CO 80309-0389, USA , James W.
Davidson Jr.1111affiliation: Department of Astronomy, University of Virginia,
Charlottesville, VA 22904-4325, USA , Roger Davies5252affiliation: Sub-
department of Astrophysics, Department of Physics, University of Oxford, Denys
Wilkinson Building, Keble Road, Oxford OX1 3RH, UK , Kyle
Dawson3535affiliation: Department of Physics and Astronomy, University of
Utah, 115 S. 1400 E., Salt Lake City, UT 84112, USA , Nathan De
Lee6767affiliation: Department of Physics, Geology, and Engineering Tech,
Northern Kentucky University, Highland Heights, KY 41099, USA , Aleksandar M.
Diamond-Stanic6868affiliation: Department of Physics and Astronomy, Bates
College, 44 Campus Avenue, Lewiston ME 04240, USA , Mariana Cano-
Díaz1616affiliation: Instituto de Astronomía, Universidad Nacional Autónoma de
México, A.P. 70-264, 04510, Mexico, D.F., México , Helena Domínguez
Sánchez6969affiliation: Institute of Space Sciences (ICE, CSIC), Carrer de Can
Magrans S/N, Campus UAB, Barcelona, E-08193, Spain , John
Donor2222affiliation: Department of Physics & Astronomy, Texas Christian
University, Fort Worth, TX 76129, USA , Chris Duckworth3636affiliation: School
of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews
KY16 9SS, UK , Tom Dwelly4949affiliation: Max-Planck-Institut für
extraterrestrische Physik, Gießenbachstraße 1, 85748 Garching, Germany ,
Daniel J. Eisenstein7070affiliation: Harvard-Smithsonian Center for
Astrophysics, 60 Garden St., MS 20, Cambridge, MA 02138, USA , Yvonne P.
Elsworth7171affiliation: School of Physics and Astronomy, University of
Birmingham, Edgbaston, Birmingham B15 2TT, UK , Eric Emsellem7272affiliation:
European Southern Observatory, Karl-Schwarzschild-Str. 2, 85748 Garching,
Germany 7373affiliation: Univ Lyon, Univ Lyon1, ENS de Lyon, CNRS, Centre de
Recherche Astrophysique de Lyon UMR5574, F-69230 Saint-Genis-Laval France ,
Mike Eracleous4343affiliation: Department of Astronomy & Astrophysics, Eberly
College of Science, The Pennsylvania State University, 525 Davey Laboratory,
University Park, PA 16802, USA , Stephanie Escoffier2020affiliation: Aix
Marseille Université, CNRS/IN2P3, CPPM, Marseille, France , Xiaohui
Fan2626affiliation: Steward Observatory, University of Arizona, 933 North
Cherry Avenue, Tucson, AZ 85721-0065, USA , Emily Farr1414affiliation:
Department of Astronomy, University of Washington, Box 351580, Seattle, WA
98195, USA , Shuai Feng7474affiliation: College of Physics, Hebei Normal
University, Shijiazhuang 050024, China , José G. Fernández-
Trincado7575affiliation: Instituto de Astronomía y Ciencias Planetarias,
Universidad de Atacama, Copayapu 485, Copiapó, Chile 55affiliation: Instituto
de Astronomía, Universidad Católica del Norte, Av. Angamos 0610, Antofagasta,
Chile , Diane Feuillet7676affiliation: Max-Planck-Institut für Astronomie,
Königstuhl 17, D-69117 Heidelberg, Germany 7777affiliation: Lund Observatory,
Department of Astronomy and Theoretical Physics, Lund University, Box 43,
SE-22100 Lund, Sweden , Andreas Filipp7878affiliation: Max-Planck-Institut für
Astrophysik, Karl-Schwarzschild-Str. 1, D-85748 Garching, Germany , Sean P
Fillingham1414affiliation: Department of Astronomy, University of Washington,
Box 351580, Seattle, WA 98195, USA , Peter M. Frinchaboy2222affiliation:
Department of Physics & Astronomy, Texas Christian University, Fort Worth, TX
76129, USA , Sebastien Fromenteau7979affiliation: Instituto de Ciencias Físicas
(ICF), Universidad Nacional Autónoma de México, Av. Universidad s/n, Col.
Chamilpa, Cuernavaca, Morelos, 62210, México , Lluís Galbany6969affiliation:
Institute of Space Sciences (ICE, CSIC), Carrer de Can Magrans S/N, Campus
UAB, Barcelona, E-08193, Spain , Rafael A. García8080affiliation: AIM, CEA,
CNRS, Université Paris-Saclay, Université Paris Diderot, Sorbonne Paris Cité,
F-91191 Gif-sur-Yvette, France , D. A. García-Hernández99affiliation:
Instituto de Astrofísica de Canarias (IAC), C/ Via Láctea s/n, E-38205 La
Laguna, Tenerife, Spain 1010affiliation: Universidad de La Laguna (ULL),
Departamento de Astrofísica, E-38206 La Laguna, Tenerife Spain , Junqiang
Ge6464affiliation: National Astronomical Observatories of China, Chinese
Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing 100012, China
, Doug Geisler8181affiliation: Departmento de Astronomía, Universidad de
Concepción, Casilla 160-C, Concepción, Chile 6565affiliation: Instituto de
Investigación Multidisciplinario en Ciencia y Tecnología, Universidad de La
Serena. Avenida Raúl Bitrán S/N, La Serena, Chile 8282affiliation:
Departamento de Física y Astronomía, Facultad de Ciencias, Universidad de La
Serena. Av. Juan Cisternas 1200, La Serena, Chile , Joseph
Gelfand3434affiliation: Center for Cosmology and Particle Physics, Department
of Physics, 726 Broadway, Room 1005, New York University, New York, NY 10003,
USA , Tobias Géron5252affiliation: Sub-department of Astrophysics, Department
of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford
OX1 3RH, UK , Benjamin J. Gibson3535affiliation: Department of Physics and
Astronomy, University of Utah, 115 S. 1400 E., Salt Lake City, UT 84112, USA ,
Julian Goddy8383affiliation: Departments of Physics and Astronomy, Haverford
College, 370 Lancaster Ave, Haverford, PA 19041, USA , Diego Godoy-
Rivera8484affiliation: Department of Astronomy and Center for Cosmology and
AstroParticle Physics, The Ohio State University, 140 W. 18th Ave, Columbus,
OH, 43210, USA , Kathleen Grabowski3232affiliation: Apache Point Observatory,
P.O. Box 59, Sunspot, NM 88349, USA , Paul J. Green7070affiliation: Harvard-
Smithsonian Center for Astrophysics, 60 Garden St., MS 20, Cambridge, MA
02138, USA , Michael Greener1717affiliation: School of Physics and Astronomy,
University of Nottingham, University Park, Nottingham, NG7 2RD, UK , Catherine
J. Grier2626affiliation: Steward Observatory, University of Arizona, 933 North
Cherry Avenue, Tucson, AZ 85721-0065, USA , Emily Griffith8484affiliation:
Department of Astronomy and Center for Cosmology and AstroParticle Physics,
The Ohio State University, 140 W. 18th Ave, Columbus, OH, 43210, USA , Hong
Guo8585affiliation: Shanghai Astronomical Observatory, Chinese Academy of
Sciences, 80 Nandan Road, Shanghai 200030, China , Julien Guy8686affiliation:
Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720,
USA , Massinissa Hadjara8787affiliation: Departamento de Astronomía,
Universidad de Chile, Camino El Observatorio 1515, Las Condes, Chile
8888affiliation: Chinese Academy of Sciences South America Center for
Astronomy, National Astronomical Observatories, CAS, Beijing 100101, China ,
Paul Harding8989affiliation: Department of Astronomy, Case Western Reserve
University, Cleveland, OH 44106, USA , Sten Hasselquist3535affiliation:
Department of Physics and Astronomy, University of Utah, 115 S. 1400 E., Salt
Lake City, UT 84112, USA 9090affiliation: NSF Astronomy and Astrophysics
Postdoctoral Fellow , Christian R. Hayes1414affiliation: Department of
Astronomy, University of Washington, Box 351580, Seattle, WA 98195, USA , Fred
Hearty4343affiliation: Department of Astronomy & Astrophysics, Eberly College
of Science, The Pennsylvania State University, 525 Davey Laboratory,
University Park, PA 16802, USA , Jesús Hernández9191affiliation: Universidad
Nacional Autónoma de México, Instituto de Astronomía, AP 106, Ensenada 22800,
BC, Mexico , Lewis Hill9292affiliation: Institute of Cosmology & Gravitation,
University of Portsmouth, Dennis Sciama Building, Portsmouth, PO1 3FX, UK ,
David W. Hogg3434affiliation: Center for Cosmology and Particle Physics,
Department of Physics, 726 Broadway, Room 1005, New York University, New York,
NY 10003, USA , Jon A. Holtzman5151affiliation: Department of Astronomy, New
Mexico State University, Las Cruces, NM 88003, USA , Danny
Horta9393affiliation: Astrophysics Research Institute, Liverpool John Moores
University, IC2, Liverpool Science Park, 146 Brownlow Hill, Liverpool L3 5RF,
UK , Bau-Ching Hsieh11affiliation: Academia Sinica Institute of Astronomy and
Astrophysics, 11F of AS/NTU, Astronomy-Mathematics Building, No.1, Sec. 4,
Roosevelt Rd, Taipei, 10617, Taiwan , Chin-Hao Hsu11affiliation: Academia
Sinica Institute of Astronomy and Astrophysics, 11F of AS/NTU, Astronomy-
Mathematics Building, No.1, Sec. 4, Roosevelt Rd, Taipei, 10617, Taiwan , Yun-
Hsin Hsu11affiliation: Academia Sinica Institute of Astronomy and
Astrophysics, 11F of AS/NTU, Astronomy-Mathematics Building, No.1, Sec. 4,
Roosevelt Rd, Taipei, 10617, Taiwan 9494affiliation: Institute of Astronomy,
National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu
30013, Taiwan , Daniel Huber9595affiliation: Institute for Astronomy,
University of Hawai’i, 2680 Woodlawn Drive, Honolulu, HI 96822, USA , Marc
Huertas-Company99affiliation: Instituto de Astrofísica de Canarias (IAC), C/
Via Láctea s/n, E-38205 La Laguna, Tenerife, Spain 9696affiliation: LERMA, UMR
8112, PSL University, University of Paris, 75014, Paris, France , Brian
Hutchinson9797affiliation: Computer Science Department, Western Washington
University, 516 High Street, Bellingham, WA 98225, USA 9898affiliation:
Computing & Analytics Division, Pacific Northwest National Laboratory, Richland, WA, USA , Ho Seong
Hwang9999affiliation: Korea Astronomy and Space Science Institute, 776
Daedeokdae-ro, Yuseong-gu, Daejeon 305-348, Republic of Korea
100100affiliation: Astronomy Program, Department of Physics and Astronomy,
Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of
Korea , Héctor J. Ibarra-Medel101101affiliation: Department of Astronomy,
University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA , Jacob Ider
Chitham4949affiliation: Max-Planck-Institut für extraterrestrische Physik,
Gießenbachstraße 1, 85748 Garching, Germany , Gabriele S. Ilha6262affiliation:
Laboratório Interinstitucional de e-Astronomia, 77 Rua General José Cristino,
Rio de Janeiro, 20921-400, Brasil 102102affiliation: Departamento de Física,
Centro de Ciências Naturais e Exatas, Universidade Federal de Santa Maria,
97105-900, Santa Maria, RS, Brazil , Julie Imig5151affiliation: Department of
Astronomy, New Mexico State University, Las Cruces, NM 88003, USA , Will
Jaekle6868affiliation: Department of Physics and Astronomy, Bates College, 44
Campus Avenue, Lewiston ME 04240, USA , Tharindu Jayasinghe8484affiliation:
Department of Astronomy and Center for Cosmology and AstroParticle Physics,
The Ohio State University, 140 W. 18th Ave, Columbus, OH, 43210, USA , Xihan
Ji66affiliation: Department of Physics and Astronomy, University of Kentucky,
505 Rose St., Lexington, KY, 40506-0055, USA , Jennifer A.
Johnson8484affiliation: Department of Astronomy and Center for Cosmology and
AstroParticle Physics, The Ohio State University, 140 W. 18th Ave, Columbus,
OH, 43210, USA , Amy Jones5454affiliation: Space Telescope Science Institute,
3700 San Martin Drive, Baltimore, MD 21218, USA , Henrik
Jönsson103103affiliation: Materials Science and Applied Mathematics, Malmö
University, SE-205 06 Malmö, Sweden , Ivan Katkov5757affiliation: NYU Abu
Dhabi, PO Box 129188, Abu Dhabi, UAE 3333affiliation: Sternberg Astronomical
Institute, Moscow State University, Moscow, 119992, Russia , Arman
Khalatyan1212affiliation: Leibniz-Institut fur Astrophysik Potsdam (AIP), An
der Sternwarte 16, D-14482 Potsdam, Germany , Karen Kinemuchi3232affiliation:
Apache Point Observatory, P.O. Box 59, Sunspot, NM 88349, USA , Shobhit
Kisku9393affiliation: Astrophysics Research Institute, Liverpool John Moores
University, IC2, Liverpool Science Park, 146 Brownlow Hill, Liverpool L3 5RF,
UK , Johan H. Knapen99affiliation: Instituto de Astrofísica de Canarias (IAC),
C/ Via Láctea s/n, E-38205 La Laguna, Tenerife, Spain 1010affiliation:
Universidad de La Laguna (ULL), Departamento de Astrofísica, E-38206 La
Laguna, Tenerife Spain , Jean-Paul Kneib104104affiliation: Institute of
Physics, Laboratory of Astrophysics, Ecole Polytechnique Fédérale de Lausanne
(EPFL), Observatoire de Sauverny, 1290 Versoix, Switzerland , Juna A.
Kollmeier2323affiliation: The Observatories of the Carnegie Institution for
Science, 813 Santa Barbara Street, Pasadena, CA 91101, USA , Miranda
Kong105105affiliation: Bryn Mawr College, 101 North Merion Ave, Bryn Mawr, PA
19010, USA , Marina Kounkel3131affiliation: Department of Physics and
Astronomy, Vanderbilt University, VU Station 1807, Nashville, TN 37235, USA
6363affiliation: Department of Physics and Astronomy, Western Washington
University, 516 High Street, Bellingham, WA 98225, USA , Kathryn
Kreckel106106affiliation: Astronomisches Rechen-Institut, Zentrum für
Astronomie der Universität Heidelberg, Mönchhofstraße 12-14, D-69120
Heidelberg, Germany , Dhanesh Krishnarao2828affiliation: Department of
Astronomy, University of Wisconsin-Madison, 475 N. Charter St., Madison WI
53703, USA , Ivan Lacerna7575affiliation: Instituto de Astronomía y Ciencias
Planetarias, Universidad de Atacama, Copayapu 485, Copiapó, Chile
4040affiliation: Millennium Institute of Astrophysics, MAS, Nuncio Monsenor
Sotero Sanz 100, Of. 104, Providencia, Santiago, Chile , Richard R.
Lane107107affiliation: Centro de Investigación en Astronomía, Universidad
Bernardo O’Higgins, Avenida Viel 1497, Santiago, Chile. , Rachel
Langgin105105affiliation: Bryn Mawr College, 101 North Merion Ave, Bryn Mawr,
PA 19010, USA , Ramon Lavender4646affiliation: Department of Biological and
Physical Sciences, South Carolina State University, P.O. Box 7024, Orangeburg,
SC 29117, USA , David R. Law5454affiliation: Space Telescope Science
Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA , Daniel
Lazarz66affiliation: Department of Physics and Astronomy, University of
Kentucky, 505 Rose St., Lexington, KY, 40506-0055, USA , Henry W.
Leung4141affiliation: David A. Dunlap Department of Astronomy & Astrophysics,
University of Toronto, 50 St. George Street, Toronto, ON, M5S 3H4, Canada ,
Ho-Hin Leung3636affiliation: School of Physics and Astronomy, University of St
Andrews, North Haugh, St Andrews KY16 9SS, UK , Hannah M.
Lewis1111affiliation: Department of Astronomy, University of Virginia,
Charlottesville, VA 22904-4325, USA , Cheng Li108108affiliation: Department of
Astronomy, Tsinghua University, Beijing 100084, China , Ran Li6464affiliation:
National Astronomical Observatories of China, Chinese Academy of Sciences, 20A
Datun Road, Chaoyang District, Beijing 100012, China , Jianhui
Lian3535affiliation: Department of Physics and Astronomy, University of Utah,
115 S. 1400 E., Salt Lake City, UT 84112, USA , Fu-Heng
Liang108108affiliation: Department of Astronomy, Tsinghua University, Beijing
100084, China 5252affiliation: Sub-department of Astrophysics, Department of
Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford
OX1 3RH, UK , Lihwai Lin (林俐暉)11affiliation: Academia Sinica Institute of
Astronomy and Astrophysics, 11F of AS/NTU, Astronomy-Mathematics Building,
No.1, Sec. 4, Roosevelt Rd, Taipei, 10617, Taiwan , Yen-Ting Lin11affiliation:
Academia Sinica Institute of Astronomy and Astrophysics, 11F of AS/NTU,
Astronomy-Mathematics Building, No.1, Sec. 4, Roosevelt Rd, Taipei, 10617,
Taiwan , Sicheng Lin3434affiliation: Center for Cosmology and Particle
Physics, Department of Physics, 726 Broadway, Room 1005, New York University,
New York, NY 10003, USA , Chris Lintott5252affiliation: Sub-department of
Astrophysics, Department of Physics, University of Oxford, Denys Wilkinson
Building, Keble Road, Oxford OX1 3RH, UK , Dan Long3232affiliation: Apache
Point Observatory, P.O. Box 59, Sunspot, NM 88349, USA , Penélope Longa-
Peña3838affiliation: Centro de Astronomía (CITEVA), Universidad de
Antofagasta, Avenida Angamos 601, Antofagasta 1270300, Chile , Carlos López-
Cobá11affiliation: Academia Sinica Institute of Astronomy and Astrophysics,
11F of AS/NTU, Astronomy-Mathematics Building, No.1, Sec. 4, Roosevelt Rd,
Taipei, 10617, Taiwan , Shengdong Lu108108affiliation: Department of
Astronomy, Tsinghua University, Beijing 100084, China , Britt F.
Lundgren109109affiliation: Department of Physics and Astronomy, University of
North Carolina Asheville, One University Heights, Asheville, NC 28804, USA ,
Yuanze Luo110110affiliation: Center for Astrophysical Sciences, Department of
Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street,
Baltimore, MD 21218, USA , J. Ted Mackereth111111affiliation: Canadian
Institute for Theoretical Astrophysics, University of Toronto, 60 St. George
Street, Toronto, ON, M5S 3H8, Canada 4242affiliation: Dunlap Institute for
Astronomy and Astrophysics, University of Toronto, 50 St. George Street,
Toronto, Ontario M5S 3H4, Canada 4141affiliation: David A. Dunlap Department
of Astronomy & Astrophysics, University of Toronto, 50 St. George Street,
Toronto, ON, M5S 3H4, Canada , Axel de la Macorra112112affiliation: Instituto
de Física Universidad Nacional Autónoma de México, Cd. de México 04510, México
, Suvrath Mahadevan4343affiliation: Department of Astronomy & Astrophysics,
Eberly College of Science, The Pennsylvania State University, 525 Davey
Laboratory, University Park, PA 16802, USA , Steven R.
Majewski1111affiliation: Department of Astronomy, University of Virginia,
Charlottesville, VA 22904-4325, USA , Arturo Manchado99affiliation: Instituto
de Astrofísica de Canarias (IAC), C/ Via Láctea s/n, E-38205 La Laguna,
Tenerife, Spain 1010affiliation: Universidad de La Laguna (ULL), Departamento
de Astrofísica, E-38206 La Laguna, Tenerife Spain 113113affiliation: CSIC,
Spain , Travis Mandeville1414affiliation: Department of Astronomy, University
of Washington, Box 351580, Seattle, WA 98195, USA , Claudia
Maraston9292affiliation: Institute of Cosmology & Gravitation, University of
Portsmouth, Dennis Sciama Building, Portsmouth, PO1 3FX, UK , Berta Margalef-
Bentabol2727affiliation: Department of Physics and Astronomy, University of
Pennsylvania, Philadelphia, PA 19104, USA , Thomas Masseron99affiliation:
Instituto de Astrofísica de Canarias (IAC), C/ Via Láctea s/n, E-38205 La
Laguna, Tenerife, Spain 1010affiliation: Universidad de La Laguna (ULL),
Departamento de Astrofísica, E-38206 La Laguna, Tenerife Spain , Karen L.
Masters8383affiliation: Departments of Physics and Astronomy, Haverford
College, 370 Lancaster Ave, Haverford, PA 19041, USA 114114affiliation: SDSS-
IV Spokesperson , Savita Mathur99affiliation: Instituto de Astrofísica de
Canarias (IAC), C/ Via Láctea s/n, E-38205 La Laguna, Tenerife, Spain
1010affiliation: Universidad de La Laguna (ULL), Departamento de Astrofísica,
E-38206 La Laguna, Tenerife Spain , Richard M. McDermid115115affiliation:
Department of Physics and Astronomy, Macquarie University, Sydney NSW 2109,
Australia 116116affiliation: ARC Centre of Excellence for All Sky Astrophysics
in 3 Dimensions (ASTRO 3D), Australia , Myles Mckay1414affiliation: Department
of Astronomy, University of Washington, Box 351580, Seattle, WA 98195, USA ,
Andrea Merloni4949affiliation: Max-Planck-Institut für extraterrestrische
Physik, Gießenbachstraße 1, 85748 Garching, Germany , Michael
Merrifield1717affiliation: School of Physics and Astronomy, University of
Nottingham, University Park, Nottingham, NG7 2RD, UK , Szabolcs
Meszaros117117affiliation: ELTE Eötvös Loránd University, Gothard
Astrophysical Observatory, 9700 Szombathely, Szent Imre H. st. 112, Hungary
118118affiliation: MTA-ELTE Lendület Milky Way Research Group, Hungary
119119affiliation: MTA-ELTE Exoplanet Research Group, Hungary , Andrea
Miglio4747affiliation: Dipartimento di Fisica e Astronomia ”Augusto Righi”,
Università di Bologna, via Gobetti 93/2, 40129 Bologna, Italy , Francesco Di
Mille120120affiliation: Las Campanas Observatory, Colina El Pino Casilla 601
La Serena, Chile , Dante Minniti121121affiliation: Departamento de Ciencias
Físicas, Universidad Andres Bello, Av. Republica 220, Santiago, Chile
153153affiliation: Vatican Observatory, V00120 Vatican City State, Italy ,
Rebecca Minsley6868affiliation: Department of Physics and Astronomy, Bates
College, 44 Campus Avenue, Lewiston ME 04240, USA , Antonela
Monachesi6565affiliation: Instituto de Investigación Multidisciplinario en
Ciencia y Tecnología, Universidad de La Serena. Avenida Raúl Bitrán S/N, La
Serena, Chile , Jeongin Moon5959affiliation: Department of Astronomy and Space
Science, Sejong University, 209, Neungdong-ro, Gwangjin-gu, Seoul, South Korea
, Benoit Mosser122122affiliation: LESIA, Observatoire de Paris, Université
PSL, CNRS, Sorbonne Université, Université de Paris, 5 place Jules Janssen,
92195 Meudon, France , John Mulchaey2323affiliation: The Observatories of the
Carnegie Institution for Science, 813 Santa Barbara Street, Pasadena, CA
91101, USA , Demitri Muna8484affiliation: Department of Astronomy and Center
for Cosmology and AstroParticle Physics, The Ohio State University, 140 W.
18th Ave, Columbus, OH, 43210, USA , Ricardo R. Muñoz8787affiliation:
Departamento de Astronomía, Universidad de Chile, Camino El Observatorio 1515,
Las Condes, Chile , Adam D. Myers123123affiliation: Department of Physics and
Astronomy, University of Wyoming, Laramie, WY 82071, USA , Natalie
Myers2222affiliation: Department of Physics & Astronomy, Texas Christian
University, Fort Worth, TX 76129, USA , Seshadri Nadathur124124affiliation:
Department of Physics & Astronomy, University College London, Gower Street,
London, WC1E 6BT, UK , Preethi Nair125125affiliation: Department of Physics
and Astronomy, University of Alabama, Tuscaloosa, AL 35487, USA , Kirpal
Nandra4949affiliation: Max-Planck-Institut für extraterrestrische Physik,
Gießenbachstraße 1, 85748 Garching, Germany , Justus Neumann9292affiliation:
Institute of Cosmology & Gravitation, University of Portsmouth, Dennis Sciama
Building, Portsmouth, PO1 3FX, UK , Jeffrey A. Newman1515affiliation: PITT
PACC, Department of Physics and Astronomy, University of Pittsburgh,
Pittsburgh, PA 15260, USA , David L. Nidever126126affiliation: Department of
Physics, Montana State University, P.O. Box 173840, Bozeman, MT 59717-3840,
USA , Farnik Nikakhtar2727affiliation: Department of Physics and Astronomy,
University of Pennsylvania, Philadelphia, PA 19104, USA , Christian
Nitschelm3838affiliation: Centro de Astronomía (CITEVA), Universidad de
Antofagasta, Avenida Angamos 601, Antofagasta 1270300, Chile , Julia E.
O’Connell2222affiliation: Department of Physics & Astronomy, Texas Christian
University, Fort Worth, TX 76129, USA 8181affiliation: Departmento de
Astronomía, Universidad de Concepción, Casilla 160-C, Concepción, Chile , Luis
Garma-Oehmichen1616affiliation: Instituto de Astronomía, Universidad Nacional
Autónoma de México, A.P. 70-264, 04510, Mexico, D.F., México , Gabriel Luan
Souza de Oliveira102102affiliation: Departamento de Física, Centro de Ciências
Naturais e Exatas, Universidade Federal de Santa Maria, 97105-900, Santa
Maria, RS, Brazil 6262affiliation: Laboratório Interinstitucional de
e-Astronomia, 77 Rua General José Cristino, Rio de Janeiro, 20921-400, Brasil
, Richard Olney6363affiliation: Department of Physics and Astronomy, Western
Washington University, 516 High Street, Bellingham, WA 98225, USA , Daniel
Oravetz3232affiliation: Apache Point Observatory, P.O. Box 59, Sunspot, NM
88349, USA , Mario Ortigoza-Urdaneta7575affiliation: Instituto de Astronomía y
Ciencias Planetarias, Universidad de Atacama, Copayapu 485, Copiapó, Chile ,
Yeisson Osorio99affiliation: Instituto de Astrofísica de Canarias (IAC), C/
Via Láctea s/n, E-38205 La Laguna, Tenerife, Spain , Justin
Otter110110affiliation: Center for Astrophysical Sciences, Department of
Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street,
Baltimore, MD 21218, USA , Zachary J. Pace2828affiliation: Department of
Astronomy, University of Wisconsin-Madison, 475 N. Charter St., Madison WI
53703, USA , Nelson Padilla127127affiliation: Instituto de Astrofísica,
Pontificia Universidad Católica de Chile, Av. Vicuna Mackenna 4860, 782-0436
Macul, Santiago, Chile , Kaike Pan3232affiliation: Apache Point Observatory,
P.O. Box 59, Sunspot, NM 88349, USA , Hsi-An Pan7676affiliation: Max-Planck-
Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany , Taniya
Parikh4949affiliation: Max-Planck-Institut für extraterrestrische Physik,
Gießenbachstraße 1, 85748 Garching, Germany , James Parker3232affiliation:
Apache Point Observatory, P.O. Box 59, Sunspot, NM 88349, USA , Sebastien
Peirani128128affiliation: Institut d’Astrophysique de Paris, UMR 7095, SU-
CNRS, 98bis bd Arago, 75014 Paris, France , Karla Peña Ramírez3838affiliation:
Centro de Astronomía (CITEVA), Universidad de Antofagasta, Avenida Angamos
601, Antofagasta 1270300, Chile , Samantha Penny9292affiliation: Institute of
Cosmology & Gravitation, University of Portsmouth, Dennis Sciama Building,
Portsmouth, PO1 3FX, UK , Will J. Percival129129affiliation: Waterloo Centre
for Astrophysics, University of Waterloo, Waterloo, ON N2L 3G1, Canada
130130affiliation: Department of Physics and Astronomy, University of
Waterloo, Waterloo, ON N2L 3G1, Canada 131131affiliation: Perimeter Institute
for Theoretical Physics, Waterloo, ON N2L 2Y5, Canada , Ismael Perez-
Fournon99affiliation: Instituto de Astrofísica de Canarias (IAC), C/ Via
Láctea s/n, E-38205 La Laguna, Tenerife, Spain 1010affiliation: Universidad de
La Laguna (ULL), Departamento de Astrofísica, E-38206 La Laguna, Tenerife
Spain , Marc Pinsonneault8484affiliation: Department of Astronomy and Center
for Cosmology and AstroParticle Physics, The Ohio State University, 140 W.
18th Ave, Columbus, OH, 43210, USA , Frédérick Poidevin99affiliation:
Instituto de Astrofísica de Canarias (IAC), C/ Via Láctea s/n, E-38205 La
Laguna, Tenerife, Spain 1010affiliation: Universidad de La Laguna (ULL),
Departamento de Astrofísica, E-38206 La Laguna, Tenerife Spain , Vijith Jacob
Poovelil3535affiliation: Department of Physics and Astronomy, University of
Utah, 115 S. 1400 E., Salt Lake City, UT 84112, USA , Adrian M. Price-
Whelan132132affiliation: Center for Computational Astrophysics, Flatiron
Institute, 162 Fifth Avenue, New York, NY, 10010 , Anna Bárbara de Andrade
Queiroz1212affiliation: Leibniz-Institut fur Astrophysik Potsdam (AIP), An der
Sternwarte 16, D-14482 Potsdam, Germany , M. Jordan Raddick110110affiliation:
Center for Astrophysical Sciences, Department of Physics and Astronomy, Johns
Hopkins University, 3400 North Charles Street, Baltimore, MD 21218, USA , Amy
Ray2222affiliation: Department of Physics & Astronomy, Texas Christian
University, Fort Worth, TX 76129, USA , Sandro Barboza
Rembold102102affiliation: Departamento de Física, Centro de Ciências Naturais
e Exatas, Universidade Federal de Santa Maria, 97105-900, Santa Maria, RS,
Brazil 6262affiliation: Laboratório Interinstitucional de e-Astronomia, 77 Rua
General José Cristino, Rio de Janeiro, 20921-400, Brasil , Nicole
Riddle2222affiliation: Department of Physics & Astronomy, Texas Christian
University, Fort Worth, TX 76129, USA , Rogemar A. Riffel6262affiliation:
Laboratório Interinstitucional de e-Astronomia, 77 Rua General José Cristino,
Rio de Janeiro, 20921-400, Brasil 102102affiliation: Departamento de Física,
Centro de Ciências Naturais e Exatas, Universidade Federal de Santa Maria,
97105-900, Santa Maria, RS, Brazil , Rogério Riffel133133affiliation:
Departamento de Astronomia, Instituto de Física, Universidade Federal do Rio
Grande do Sul. Av. Bento Goncalves 9500, 91501-970, Porto Alegre, RS, Brazil
6262affiliation: Laboratório Interinstitucional de e-Astronomia, 77 Rua
General José Cristino, Rio de Janeiro, 20921-400, Brasil , Hans-Walter
Rix7676affiliation: Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117
Heidelberg, Germany , Annie C. Robin134134affiliation: Institut UTINAM, CNRS,
OSU THETA Franche-Comté Bourgogne, Univ. Bourgogne Franche-Comté, 25000
Besançon, France , Aldo Rodríguez-Puebla1616affiliation: Instituto de
Astronomía, Universidad Nacional Autónoma de México, A.P. 70-264, 04510,
Mexico, D.F., México , Alexandre Roman-Lopes2121affiliation: Departamento de
Astronomía, Universidad de La Serena, Av. Juan Cisternas 1200 Norte, La
Serena, Chile , Carlos Román-Zúñiga9191affiliation: Universidad Nacional
Autónoma de México, Instituto de Astronomía, AP 106, Ensenada 22800, BC,
Mexico , Benjamin Rose2424affiliation: Department of Physics and JINA Center
for the Evolution of the Elements, University of Notre Dame, Notre Dame, IN
46556, USA , Ashley J. Ross135135affiliation: Department of Physics and Center
for Cosmology and AstroParticle Physics, The Ohio State University, Columbus,
OH 43210, USA , Graziano Rossi5959affiliation: Department of Astronomy and
Space Science, Sejong University, 209, Neungdong-ro, Gwangjin-gu, Seoul, South
Korea , Kate H. R. Rubin136136affiliation: Department of Astronomy, San Diego
State University, San Diego, CA 92182, USA 5353affiliation: Center for
Astrophysics and Space Science, University of California San Diego, La Jolla,
CA 92093, USA , Mara Salvato4949affiliation: Max-Planck-Institut für
extraterrestrische Physik, Gießenbachstraße 1, 85748 Garching, Germany ,
Sebastián F. Sánchez1616affiliation: Instituto de Astronomía, Universidad
Nacional Autónoma de México, A.P. 70-264, 04510, Mexico, D.F., México , José
R. Sánchez-Gallego1414affiliation: Department of Astronomy, University of
Washington, Box 351580, Seattle, WA 98195, USA , Robyn
Sanderson2727affiliation: Department of Physics and Astronomy, University of
Pennsylvania, Philadelphia, PA 19104, USA 132132affiliation: Center for
Computational Astrophysics, Flatiron Institute, 162 Fifth Avenue, New York,
NY, 10010 , Felipe Antonio Santana Rojas8787affiliation: Departamento de
Astronomía, Universidad de Chile, Camino El Observatorio 1515, Las Condes,
Chile , Edgar Sarceno6868affiliation: Department of Physics and Astronomy,
Bates College, 44 Campus Avenue, Lewiston ME 04240, USA , Regina
Sarmiento99affiliation: Instituto de Astrofísica de Canarias (IAC), C/ Via
Láctea s/n, E-38205 La Laguna, Tenerife, Spain 1010affiliation: Universidad de
La Laguna (ULL), Departamento de Astrofísica, E-38206 La Laguna, Tenerife
Spain , Conor Sayres1414affiliation: Department of Astronomy, University of
Washington, Box 351580, Seattle, WA 98195, USA , Elizaveta
Sazonova110110affiliation: Center for Astrophysical Sciences, Department of
Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street,
Baltimore, MD 21218, USA , Adam L. Schaefer7878affiliation: Max-Planck-
Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85748 Garching, Germany
, David J Schlegel8686affiliation: Lawrence Berkeley National Laboratory, 1
Cyclotron Road, Berkeley, CA 94720, USA , Donald P. Schneider4343affiliation:
Department of Astronomy & Astrophysics, Eberly College of Science, The
Pennsylvania State University, 525 Davey Laboratory, University Park, PA
16802, USA 4444affiliation: Institute for Gravitation and the Cosmos, The
Pennsylvania State University, University Park, PA 16802, USA , Ricardo
Schiavon9393affiliation: Astrophysics Research Institute, Liverpool John
Moores University, IC2, Liverpool Science Park, 146 Brownlow Hill, Liverpool
L3 5RF, UK , Mathias Schultheis137137affiliation: Observatoire de la Côte
d’Azur, Laboratoire Lagrange, 06304 Nice Cedex 4, France , Axel
Schwope1212affiliation: Leibniz-Institut fur Astrophysik Potsdam (AIP), An der
Sternwarte 16, D-14482 Potsdam, Germany , Aldo Serenelli6969affiliation:
Institute of Space Sciences (ICE, CSIC), Carrer de Can Magrans S/N, Campus
UAB, Barcelona, E-08193, Spain 138138affiliation: Institut d’Estudis Espacials
de Catalunya, C. Gran Capita 2-4, Barcelona, Spain , Javier
Serna1616affiliation: Instituto de Astronomía, Universidad Nacional Autónoma
de México, A.P. 70-264, 04510, Mexico, D.F., México , Zhengyi
Shao8585affiliation: Shanghai Astronomical Observatory, Chinese Academy of
Sciences, 80 Nandan Road, Shanghai 200030, China , Griffin
Shapiro139139affiliation: Middlebury College, Middlebury, Vermont 05753, USA ,
Anubhav Sharma8383affiliation: Departments of Physics and Astronomy, Haverford
College, 370 Lancaster Ave, Haverford, PA 19041, USA , Yue
Shen101101affiliation: Department of Astronomy, University of Illinois at
Urbana-Champaign, Urbana, IL 61801, USA , Matthew Shetrone5050affiliation:
UCO/Lick Observatory, University of California, Santa Cruz, 1156 High St.
Santa Cruz, CA 95064, USA , Yiping Shu7878affiliation: Max-Planck-Institut für
Astrophysik, Karl-Schwarzschild-Str. 1, D-85748 Garching, Germany , Joshua D.
Simon2323affiliation: The Observatories of the Carnegie Institution for
Science, 813 Santa Barbara Street, Pasadena, CA 91101, USA , M. F.
Skrutskie1111affiliation: Department of Astronomy, University of Virginia,
Charlottesville, VA 22904-4325, USA , Rebecca Smethurst5252affiliation: Sub-
department of Astrophysics, Department of Physics, University of Oxford, Denys
Wilkinson Building, Keble Road, Oxford OX1 3RH, UK , Verne
Smith3737affiliation: NSF’s National Optical-Infrared Astronomy Research
Laboratory, 950 North Cherry Avenue, Tucson, AZ 85719, USA , Jennifer
Sobeck1414affiliation: Department of Astronomy, University of Washington, Box
351580, Seattle, WA 98195, USA , Taylor Spoo2222affiliation: Department of
Physics & Astronomy, Texas Christian University, Fort Worth, TX 76129, USA ,
Dani Sprague9797affiliation: Computer Science Department, Western Washington
University, 516 High Street, Bellingham, WA 98225, USA , David V.
Stark8383affiliation: Departments of Physics and Astronomy, Haverford College,
370 Lancaster Ave, Haverford, PA 19041, USA , Keivan G.
Stassun3131affiliation: Department of Physics and Astronomy, Vanderbilt
University, VU Station 1807, Nashville, TN 37235, USA , Matthias
Steinmetz1212affiliation: Leibniz-Institut fur Astrophysik Potsdam (AIP), An
der Sternwarte 16, D-14482 Potsdam, Germany , Dennis Stello140140affiliation:
Sydney Institute for Astronomy, School of Physics, University of Sydney, NSW
2006, Australia 141141affiliation: School of Physics, UNSW Sydney, NSW 2052,
Australia , Alexander Stone-Martinez5151affiliation: Department of Astronomy,
New Mexico State University, Las Cruces, NM 88003, USA , Thaisa Storchi-
Bergmann133133affiliation: Departamento de Astronomia, Instituto de Física,
Universidade Federal do Rio Grande do Sul. Av. Bento Goncalves 9500,
91501-970, Porto Alegre, RS, Brazil 6262affiliation: Laboratório
Interinstitucional de e-Astronomia, 77 Rua General José Cristino, Rio de
Janeiro, 20921-400, Brasil , Guy S. Stringfellow6161affiliation: Center for
Astrophysics and Space Astronomy, Department of Astrophysical and Planetary
Sciences, University of Colorado, 389 UCB, Boulder, CO 80309-0389, USA ,
Amelia Stutz8181affiliation: Departmento de Astronomía, Universidad de
Concepción, Casilla 160-C, Concepción, Chile , Yung-Chau Su11affiliation:
Academia Sinica Institute of Astronomy and Astrophysics, 11F of AS/NTU,
Astronomy-Mathematics Building, No.1, Sec. 4, Roosevelt Rd, Taipei, 10617,
Taiwan 5858affiliation: Department of Physics, National Taiwan University,
Taipei 10617, Taiwan , Manuchehr Taghizadeh-Popp110110affiliation: Center for
Astrophysical Sciences, Department of Physics and Astronomy, Johns Hopkins
University, 3400 North Charles Street, Baltimore, MD 21218, USA , Michael S.
Talbot3535affiliation: Department of Physics and Astronomy, University of
Utah, 115 S. 1400 E., Salt Lake City, UT 84112, USA , Jamie
Tayar9595affiliation: Institute for Astronomy, University of Hawai’i, 2680
Woodlawn Drive, Honolulu, HI 96822, USA 142142affiliation: Hubble Fellow ,
Eduardo Telles5555affiliation: Observatório Nacional, Rio de Janeiro, Brasil ,
Johanna Teske143143affiliation: Carnegie Institution for Science, Earth and
Planets Laboratory, 5241 Broad Branch Road NW, Washington, DC 20015, USA , Ani
Thakar110110affiliation: Center for Astrophysical Sciences, Department of
Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street,
Baltimore, MD 21218, USA , Christopher Theissen5353affiliation: Center for
Astrophysics and Space Science, University of California San Diego, La Jolla,
CA 92093, USA , Daniel Thomas9292affiliation: Institute of Cosmology &
Gravitation, University of Portsmouth, Dennis Sciama Building, Portsmouth, PO1
3FX, UK , Andrew Tkachenko33affiliation: Institute of Astronomy, KU Leuven,
Celestijnenlaan 200D, B-3001 Leuven, Belgium , Rita Tojeiro3636affiliation:
School of Physics and Astronomy, University of St Andrews, North Haugh, St
Andrews KY16 9SS, UK , Hector Hernandez Toledo1616affiliation: Instituto de
Astronomía, Universidad Nacional Autónoma de México, A.P. 70-264, 04510,
Mexico, D.F., México , Nicholas W. Troup144144affiliation: Department of
Physics, Salisbury University, 1101 Camden Ave., Salisbury, MD 21804, USA ,
Jonathan R. Trump145145affiliation: Department of Physics, University of
Connecticut, 2152 Hillside Road, Unit 3046, Storrs, CT 06269, USA , James
Trussler146146affiliation: Cavendish Laboratory, University of Cambridge, 19
J. J. Thomson Avenue, Cambridge CB3 0HE, UK 147147affiliation: Kavli Institute
for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA,
United Kingdom , Jacqueline Turner8383affiliation: Departments of Physics and
Astronomy, Haverford College, 370 Lancaster Ave, Haverford, PA 19041, USA ,
Sarah Tuttle1414affiliation: Department of Astronomy, University of
Washington, Box 351580, Seattle, WA 98195, USA , Eduardo Unda-
Sanzana3838affiliation: Centro de Astronomía (CITEVA), Universidad de
Antofagasta, Avenida Angamos 601, Antofagasta 1270300, Chile , José Antonio
Vázquez-Mata1616affiliation: Instituto de Astronomía, Universidad Nacional
Autónoma de México, A.P. 70-264, 04510, Mexico, D.F., México
148148affiliation: Departamento de Física, Facultad de Ciencias, Universidad
Nacional Autónoma de México, Ciudad Universitaria, CDMX, 04510, México ,
Marica Valentini1212affiliation: Leibniz-Institut fur Astrophysik Potsdam
(AIP), An der Sternwarte 16, D-14482 Potsdam, Germany , Octavio
Valenzuela1616affiliation: Instituto de Astronomía, Universidad Nacional
Autónoma de México, A.P. 70-264, 04510, Mexico, D.F., México , Jaime Vargas-
González149149affiliation: Centre for Astrophysics Research, School of
Physics, Astronomy and Mathematics, University of Hertfordshire, College Lane,
Hatfield AL10 9AB, UK , Mariana Vargas-Magaña112112affiliation: Instituto de
Física Universidad Nacional Autónoma de México, Cd. de México 04510, México ,
Pablo Vera Alfaro2121affiliation: Departamento de Astronomía, Universidad de
La Serena, Av. Juan Cisternas 1200 Norte, La Serena, Chile , Sandro
Villanova8181affiliation: Departmento de Astronomía, Universidad de
Concepción, Casilla 160-C, Concepción, Chile , Fiorenzo
Vincenzo8484affiliation: Department of Astronomy and Center for Cosmology and
AstroParticle Physics, The Ohio State University, 140 W. 18th Ave, Columbus,
OH, 43210, USA , David Wake109109affiliation: Department of Physics and
Astronomy, University of North Carolina Asheville, One University Heights,
Asheville, NC 28804, USA , Jack T. Warfield1111affiliation: Department of
Astronomy, University of Virginia, Charlottesville, VA 22904-4325, USA ,
Jessica Diane Washington150150affiliation: Wellesley College Address: 106
Central St, Wellesley, MA 02481, USA , Benjamin Alan Weaver3737affiliation:
NSF’s National Optical-Infrared Astronomy Research Laboratory, 950 North
Cherry Avenue, Tucson, AZ 85719, USA , Anne-Marie Weijmans3636affiliation:
School of Physics and Astronomy, University of St Andrews, North Haugh, St
Andrews KY16 9SS, UK , David H. Weinberg8484affiliation: Department of
Astronomy and Center for Cosmology and AstroParticle Physics, The Ohio State
University, 140 W. 18th Ave, Columbus, OH, 43210, USA , Achim
Weiss7878affiliation: Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-
Str. 1, D-85748 Garching, Germany , Kyle B. Westfall5050affiliation: UCO/Lick
Observatory, University of California, Santa Cruz, 1156 High St. Santa Cruz,
CA 95064, USA , Vivienne Wild3636affiliation: School of Physics and Astronomy,
University of St Andrews, North Haugh, St Andrews KY16 9SS, UK , Matthew C.
Wilde1414affiliation: Department of Astronomy, University of Washington, Box
351580, Seattle, WA 98195, USA , John C. Wilson1111affiliation: Department of
Astronomy, University of Virginia, Charlottesville, VA 22904-4325, USA ,
Robert F. Wilson1111affiliation: Department of Astronomy, University of
Virginia, Charlottesville, VA 22904-4325, USA , Mikayla Wilson2222affiliation:
Department of Physics & Astronomy, Texas Christian University, Fort Worth, TX
76129, USA , Julien Wolf4949affiliation: Max-Planck-Institut für
extraterrestrische Physik, Gießenbachstraße 1, 85748 Garching, Germany
151151affiliation: Exzellenzcluster ORIGINS, Boltzmannstr. 2, D-85748
Garching, Germany , W. M. Wood-Vasey1515affiliation: PITT PACC, Department of
Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15260, USA ,
Renbin Yan (严人斌)152152affiliation: Department of Physics, The Chinese
University of Hong Kong, Shatin, N.T., Hong Kong SAR, China 66affiliation:
Department of Physics and Astronomy, University of Kentucky, 505 Rose St.,
Lexington, KY, 40506-0055, USA , Olga Zamora99affiliation: Instituto de
Astrofísica de Canarias (IAC), C/ Via Láctea s/n, E-38205 La Laguna, Tenerife,
Spain , Gail Zasowski3535affiliation: Department of Physics and Astronomy,
University of Utah, 115 S. 1400 E., Salt Lake City, UT 84112, USA , Kai
Zhang8686affiliation: Lawrence Berkeley National Laboratory, 1 Cyclotron Road,
Berkeley, CA 94720, USA , Cheng Zhao104104affiliation: Institute of Physics,
Laboratory of Astrophysics, Ecole Polytechnique Fédérale de Lausanne (EPFL),
Observatoire de Sauverny, 1290 Versoix, Switzerland , Zheng
Zheng3535affiliation: Department of Physics and Astronomy, University of Utah,
115 S. 1400 E., Salt Lake City, UT 84112, USA , Zheng Zheng6464affiliation:
National Astronomical Observatories of China, Chinese Academy of Sciences, 20A
Datun Road, Chaoyang District, Beijing 100012, China , Kai Zhu6464affiliation:
National Astronomical Observatories of China, Chinese Academy of Sciences, 20A
Datun Road, Chaoyang District, Beijing 100012, China
###### Abstract
This paper documents the seventeenth data release (DR17) from the Sloan
Digital Sky Surveys; the fifth and final release from the fourth phase (SDSS-
IV). DR17 contains the complete release of the Mapping Nearby Galaxies at
Apache Point Observatory (MaNGA) survey, which reached its goal of surveying
over 10,000 nearby galaxies. The complete release of the MaNGA Stellar Library
(MaStar) accompanies this data, providing observations of almost 30,000 stars
through the MaNGA instrument during bright time. DR17 also contains the
complete release of the Apache Point Observatory Galactic Evolution Experiment
2 (APOGEE-2) survey, which publicly releases infrared spectra of over 650,000
stars. The main sample from the Extended Baryon Oscillation Spectroscopic
Survey (eBOSS), as well as the sub-survey Time Domain Spectroscopic Survey
(TDSS) data were fully released in DR16. New single-fiber optical spectroscopy
released in DR17 is from the SPectroscopic IDentification of ERosita Survey
(SPIDERS) sub-survey and the eBOSS-RM program. Along with the primary data
sets, DR17 includes 25 new or updated Value Added Catalogs (VACs). This paper
concludes the release of SDSS-IV survey data. SDSS continues into its fifth
phase with observations already underway for the Milky Way Mapper (MWM), Local
Volume Mapper (LVM) and Black Hole Mapper (BHM) surveys.
###### Subject headings:
Atlases — Catalogs — Surveys
## 1\. Introduction
The Sloan Digital Sky Surveys (SDSS) have been almost continuously observing
the skies for over 20 years, since the project began with a first phase in
1998 (SDSS-I; York et al. 2000). SDSS has now completed four phases of
operations (with a fifth ongoing; see §8). Since 2017, SDSS has had a dual-hemisphere view of the sky, observing from both Las Campanas Observatory (LCO), using the du Pont Telescope, and Apache Point Observatory (APO), using the Sloan Foundation 2.5-m Telescope (Gunn et al., 2006). This paper describes data taken during the fourth phase of SDSS (SDSS-IV; Blanton et al. 2017), which consisted of three main surveys: the Extended Baryon Oscillation
Spectroscopic Survey (eBOSS; Dawson et al. 2016), Mapping Nearby Galaxies at
APO (MaNGA; Bundy et al. 2015), and the APO Galactic Evolution Experiment 2
(APOGEE-2; Majewski et al. 2017). Within eBOSS, SDSS-IV also conducted two
smaller programs: the SPectroscopic IDentification of ERosita Sources
(SPIDERS; Clerc et al. 2016; Dwelly et al. 2017) and the Time Domain
Spectroscopic Survey (TDSS; Morganson et al. 2015), and continued the SDSS
Reverberation Mapping (SDSS-RM) program to measure black hole masses out to
redshifts $z\sim 1$–2 using single fiber spectra. Finally, the use of dual
observing modes with the MaNGA and APOGEE instruments (Drory et al. 2015;
Wilson et al. 2019) facilitated the development of the MaNGA Stellar Library
(MaStar; Yan et al. 2019), which observed stars using the MaNGA fiber bundles
during APOGEE-led bright time observing.
This suite of SDSS-IV programs was developed to map the Universe on a range of
scales, from stars in the Milky Way and nearby satellites in APOGEE-2, to
nearby galaxies in MaNGA, and out to cosmological scales with eBOSS. SPIDERS
provided follow-up observations of X-ray emitting sources, especially from
eROSITA (Merloni et al. 2012; Predehl et al. 2014), and TDSS and SDSS-RM
provided a spectroscopic view of the variable sky.
The final year’s schedule for SDSS-IV was substantially altered due to the
COVID-19 pandemic. Originally, the SDSS-IV observations were scheduled to end
at APO on the night of June 30, 2020 and at LCO on the night of September 8,
2020. Closures in response to COVID-19 altered this plan. APO closed on the
morning of March 24, 2020 and the 2.5-m Sloan Foundation Telescope reopened
for science observations the night of June 2, 2020. The summer shutdown
ordinarily scheduled in July and August was delayed and instead SDSS-IV
observations continued through the morning of August 24, 2020. LCO closed on
the morning of March 17, 2020 and the du Pont Telescope reopened for science
observations the night of October 20, 2020. The du Pont Telescope was used
almost continuously for SDSS-IV through the morning of January 21, 2021. These
changes led to different sky coverages than were originally planned for SDSS-
IV but still allowed it to achieve or exceed all of its original goals.
This paper documents the seventeenth data release (DR17) from SDSS overall,
and is the fifth and final annual release from SDSS-IV (following DR13:
Albareti et al. 2017; DR14: Abolfathi et al. 2018, DR15: Aguado et al. 2019
and DR16: Ahumada et al. 2020). With this release SDSS-IV has completed the
goals set out in Blanton et al. (2017).
A complete overview of the scope of DR17 is provided in §2, and information on
how to access the data can be found in §3. We have separate sections on MaNGA
(§5), MaStar (§6) and APOGEE-2 (§4), and while there is no new main eBOSS
survey or TDSS data in this release, we document releases from SPIDERS and the
eBOSS-RM program, as well as eBOSS-related value added catalogs (VACs), in §7.
We conclude with a summary of the current status of SDSS-V now in active
operations along with describing plans for future data releases (§8).
## 2\. Scope of DR17
SDSS data releases have always been cumulative, and DR17 follows that
tradition, meaning that the most up-to-date reduction of data in all previous
data releases are included in DR17. The exact data products and catalogs of
previous releases also remain accessible on our servers. However, we
emphatically advise users to access any SDSS data from the most recent SDSS
data release, because data may have been reprocessed using updated data
reduction pipelines, and catalogs may have been updated with new entries
and/or improved analysis methods. Changes between the processing methods used
in DR17 compared to previous data releases are documented on the DR17 version
of the SDSS website https://www.sdss.org/dr17 as well as in this article.
This data release itself includes over 46 million new files totalling over 222
TB. Although many of these files replace previous versions, the total volume
of all SDSS files including all previous versions now exceeds 623 TB on the
Science Archive Server (SAS). The growth of the volume of data on the SAS
since DR8 (which was the first data release of SDSS-III) is shown in Figure 1.
Figure 1.— The growth in data volume hosted by the SDSS Science Archive Server (SAS) since DR8. For a more detailed breakdown of data volume see https://sdss.org/dr17/data_access/volume

Table 1. SDSS-IV spectroscopic data in all releases (DR13–DR17)

Target Category | DR13 | DR14 | DR15 | DR16 | DR17
---|---|---|---|---|---
APOGEE-2 | | | | |
Main Red Star Sample | 109376 | 184148 | 184148 | 281575 | 372458
AllStar Entries | 164562 | 277371 | 277371 | 473307 | 733901
APOGEE-2S Main Red Star Sample | - | - | - | 56480 | 96547
APOGEE-2S AllStar Entries | - | - | - | 102200 | 204193
APOGEE-2S Contributed AllStar Entries | - | - | - | 37409 | 92152
NMSU 1-meter AllStar Entries | 894 | 1018 | 1018 | 1071 | 1175
Telluric AllStar Entries | 17293 | 27127 | 27127 | 34016 | 45803
MaNGA | | | | |
All Cubes | 1390 | 2812 | 4824 | 4824 | 11273
Main galaxy sample: | | | | |
PRIMARY_v1_2 | 600 | 1278 | 2126 | 2126 | 4621
SECONDARY_v1_2 | 473 | 947 | 1665 | 1665 | 3724
COLOR-ENHANCED_v1_2 | 216 | 447 | 710 | 710 | 1514
Other targets^3 | 31 | 121 | 324 | 324 | 1420
MaStar (MaNGA Stellar Library) | | | | |
All Cubes | 0 | 0 | 3321 | 3321 | 24130
eBOSS | | | | |
LRG samples | 32968 | 138777 | 138777 | 298762 | 298762
ELG samples | 14459 | 35094 | 35094 | 269889 | 269889
Main QSO sample | 33928 | 188277 | 188277 | 434820 | 434820
Variability selected QSOs | 22756 | 87270 | 87270 | 185816 | 186625
Other QSO samples | 24840 | 43502 | 43502 | 70785 | 73574
TDSS targets | 17927 | 57675 | 57675 | 131552 | 131552
SPIDERS targets | 3133 | 16394 | 16394 | 36300 | 41969
Reverberation Mapping | 8491^1 | 8491^1 | 8491^1 | 8491^1 | 8491^1
Standard Stars/White Dwarfs | 53584 | 63880 | 63880 | 84605 | 85105

^1 The number of RM targets remains the same, but the number of visits increases.
^3 Data cubes not in any of the 3 main galaxy samples, including both ancillary program targets and non-galaxy data cubes.
Table 1 shows the growth of SDSS-IV data separated by survey and target types
across our five annual data releases. These numbers are a mixture of counts of
unique spectra and unique objects, and while correct to the best of our
ability, can be subject to change based on which quality control flags are
implemented. We also summarize this information below:
1.
APOGEE-2 includes 879,437 new infrared spectra (the number of spectra is tallied as the number of new entries in the AllVisit file; Table 1 conveys the numbers of unique targets, which come from the AllStar file). These data come from observations taken from MJD 58302 to MJD 59160 (i.e., from July 2, 2018 to November 7, 2020) for APOGEE-2 North (APOGEE-2N) at APO and from MJD 58358 to MJD 59234 (August 29, 2018 to January 20, 2021) for APOGEE-2 South (APOGEE-2S) at LCO. The new spectra comprise both observations of 260,594 new targets and additional epochs on targets included in previous DRs. The
majority of the targets are in the Milky Way galaxy, but DR17 also contains
observations of stars in the Large and Small Magellanic Clouds and eight dwarf
spheroidal satellites as well as integrated light observations of both M33 and
M31. Notably, DR17 contains 408,118 new spectra taken with the APOGEE-S
spectrograph at LCO; this brings the total APOGEE-2S observations to 671,379
spectra of 204,193 unique stars. DR17 also includes all previously released
APOGEE and APOGEE-2 spectra for a cumulative total of 2,659,178 individual
spectra, all of which have been re-reduced with the latest version of the
APOGEE data reduction and analysis pipeline (J. Holtzman et al. in prep.). In
addition to the reduced spectra, element abundances and stellar parameters are
included in this data release. APOGEE-2 is also releasing a number of VACs,
which are listed in Table 2.
2.
MaNGA and MaStar are releasing all scientific data products from the now-completed surveys. These include a substantial number of new galaxy and star observations, respectively, along with updated products for all observations
previously released in DR15 and before. These updated data products include
modifications to achieve sub-percent accuracy in the spectral line-spread
function, revised flux calibration, and Data Analysis Pipeline (DAP) products
that now use stellar templates constructed from the MaStar observations to
model the MaNGA galaxy stellar continuum throughout the full optical and near-
infrared (NIR) wavelength range. MaNGA reached its target goal of observing more than 10,000 nearby galaxies, as well as a small number of non-galaxy targets, while bright-time observations enabled MaStar to collect spectra for almost 30,000 stars through the MaNGA instrument. MaNGA is also releasing a
number of VACs (Table 2).
3.
There is no change in the main survey eBOSS data released since DR16, when a
total of 1.4 million eBOSS spectra were released, completing its main survey
goals. However, a number of Value Added Catalogs (VACs) useful for
cosmological and other applications are released in DR17. The TDSS survey also
released its complete dataset in DR16. However, ongoing eBOSS-like
observations of X-ray sources under the SPIDERS program and continued
monitoring of quasars under the reverberation mapping program (SDSS-RM) are
released in DR17.
4.
DR17 also includes data from all previous SDSS data releases. All MaNGA, BOSS, eBOSS, APOGEE, and APOGEE-2 spectra that were previously released have been reprocessed with the latest reduction and analysis pipelines. eBOSS main survey data were last released in DR16 (Ahumada et al., 2020), and SDSS-III MARVELS spectra were finalized in DR12 (Alam et al., 2015). SDSS Legacy spectra were released in their final form in DR8 (Aihara et al., 2011), and the
SEGUE-1 and SEGUE-2 surveys had their final reductions released with DR9 (Ahn
et al., 2012). The SDSS imaging had its most recent release in DR13 (Albareti
et al., 2017), when it was recalibrated for eBOSS imaging purposes.
A numerical overview of the total content of DR17 is given in Table 1. An
overview of the value-added catalogs that are new or updated in DR17 can be
found in Table 2; adding these to the VACs previously released in SDSS, the
total number of VACs in SDSS as of DR17 is now 63 (DR17 updates 14 existing
VACs and introduces 11 new ones). DR17 also contains the VACs that were first
published in the mini-data release DR16+ on 20 June 2020. DR16+ did not
contain any new spectra, and consisted of VACs only. Most of the VACs in DR16+
were based on the final eBOSS DR16 spectra, and these include large scale
structure and quasar catalogs. In addition, DR16+ contained three VACs based on the DR15 MaNGA sample. The DR16+ VACs can be found in Table 2, and are
described in more detail in the sections listed there.
Table 2. New or Updated Value Added Catalogs (DR16+ where noted, otherwise new or updated for DR17)

Name (see Section for Acronym definitions) | Section | Reference(s)
---|---|---
APOGEE-2
Open Cluster Chemical Abundances and Mapping catalog (OCCAM) | §4.4.1 | Frinchaboy et al. (2013); Donor et al. (2018, 2020); N. Myers et al. (in prep.)
Red-Clump (RC) Catalog | §4.4.1 | Bovy et al. (2014)
APOGEE-Joker | §4.4.1 | A. Price-Whelan et al. (in prep.)
Double lined spectroscopic binaries in APOGEE spectra | §4.4.1 | Kounkel et al. (2021)
StarHorse for APOGEE DR17 + Gaia EDR3 | §4.4.2 | Queiroz et al. (2020)
AstroNN | §4.4.2 | Leung & Bovy (2019a, b); Mackereth et al. (2019a)
APOGEE Net: a unified spectral model | §4.4.3 | Olney et al. (2020); Sprague et al. (2022)
APOGEE on FIRE Simulation Mocks | §4.4.4 | Sanderson et al. (2020), Nikakhtar et al. (2021)
MaNGA
NSA Images (DR16+) | §5.5.1 | Blanton et al. (2011); Wake et al. (2017)
SWIFT VAC (DR16+) | §5.5.1 | Molina et al. (2020)
Galaxy Zoo: 3D | §5.5.2 | Masters et al. (2021)
Updated Galaxy Zoo Morphologies (SDSS, UKIDSS and DESI) | §5.5.2 | Hart et al. (2016); Walmsley et al. (2022)
Visual Morphologies from SDSS + DESI images (DR16+) | §5.5.2 | Vázquez-Mata et al. (2021)
PyMorph DR17 photometric catalog | §5.5.2 | Domínguez Sánchez et al. (2022)
Morphology Deep Learning DR17 catalog | §5.5.2 | Domínguez Sánchez et al. (2022)
PCA VAC (DR17) | §5.5.3 | Pace et al. (2019a, b).
Firefly Stellar Populations | §5.5.3 | Goddard et al. (2017), Neumann et al. (in prep.)
Pipe3D | §5.5.3 | Sánchez et al. (2016, 2018)
HI-MaNGA DR3 | §5.5.4 | Masters et al. (2019); Stark et al. (2021)
The MaNGA AGN Catalog | §5.5.5 | Comerford et al. (2020)
Galaxy Environment for MaNGA (GEMA) | §5.5.6 | Argudo-Fernández et al. (2015)
Spectroscopic Redshifts for DR17 | §5.5.7 | Talbot et al. (2018), M. Talbot et al. (in prep.)
Strong Gravitational Lens Candidate Catalog | §5.5.8 | M. Talbot et al. (in prep.)
MaStar
Photometry Crossmatch | §6.4 | R. Yan et al. (in prep.)
Stellar Parameters | §6.5 | R. Yan et al. (in prep.)
eBOSS
ELG (DR16+) | §7.1.1 | Raichoor et al. (2017, 2021)
LRG (DR16+) | §7.1.1 | Prakash et al. (2016); Ross et al. (2020)
QSO (DR16+) | §7.1.1 | Myers et al. (2015); Ross et al. (2020)
DR16 Large-scale structure multi-tracer EZmock catalogs | §7.1.2 | Zhao et al. (2021)
DR16Q catalog (DR16+) | §7.1.3 | Lyke et al. (2020)
Ly$\alpha$ catalog (DR16+) | §7.1.4 | du Mas des Bourboux et al. (2020)
Strong Gravitational Lens Catalog (DR16+) | §7.2.1 | Talbot et al. (2021)
ELG-LAE Strong Lens Catalog | §7.2.2 | Shu et al. (2016)
Cosmic Web Environmental Densities from MCPM | §7.2.3 | Burchett et al. (2020)
## 3\. Data Access
There are various ways to access the SDSS DR17 data products, and an overview
of all these methods is available on the SDSS website
https://www.sdss.org/dr17/data_access/, and in Table 3. In general, the best
way to access a data product will depend on the particular data product and
what the data product will be used for. We give an overview of all different
access methods below, and also refer to tutorials and examples on data access
available on this website: https://www.sdss.org/dr17/tutorials/.
Table 3. Summary of Methods for Accessing SDSS Data
Name | Brief Description
---|---
SAS | Science Archive Server - direct access to reduced images and spectra, and downloadable catalog files
SAW | Science Archive Webservers - for visualisation of images and 1D spectra
CAS | Catalog Archive Server - for optimized access to searchable catalog data from a database management system
SkyServer | web app providing visual browsing and synchronous query access to the CAS
Explore | a visual browsing tool in SkyServer to examine individual objects
Quicklook | a more succinct version of the Explore tool in SkyServer
CasJobs | batch (asynchronous) query access to the CAS
SciServer | science platform for server-side analysis. Includes browser-based and Jupyter notebook access to SkyServer, CasJobs and Marvin
Marvin | a webapp and python package to access MaNGA data
SpecDash | a SciServer tool to visualize 1D spectra with standalone and Jupyter notebook access
Voyages | an immersive introduction to data and access tools for K-12 education purposes
For those users interested in the reduced images and spectra of the SDSS, we
recommend that they access these data products through the SDSS Science
Archive Server (SAS, https://data.sdss.org/sas/). These data products were all
derived through the official SDSS data reduction pipelines, which are also
publicly available through SVN or GitHub
(https://www.sdss.org/dr17/software/). The SAS also contains the VACs that
science team members have contributed to the data releases (see Table 2), as
well as raw and intermediate data products. All files available through the
SAS have a data model that explains their content
(https://data.sdss.org/datamodel/). Data products can be downloaded from the
SAS either directly through browsing, or by using methods such as wget, rsync
and Globus Online (see https://www.sdss.org/dr17/data_access/bulk, for more
details and examples). For large data downloads, we recommend the use of
Globus Online. Since SDSS data releases are cumulative, in that data products
released in earlier data releases are included in DR17, and will have been
processed by the latest available pipelines, we reiterate that users should
always use the latest data release, as pipelines have often been updated to
improve their output and fix previously known bugs.
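For scripted downloads, the minimal Python sketch below retrieves a single reduced file from the SAS over HTTPS. The directory layout and file name follow the DR17 MaNGA conventions but are given here as assumptions; the data model pages provide the authoritative paths.

```python
# Sketch: fetch one reduced MaNGA data cube from the SAS over HTTPS.
# The path below (DRP version v3_1_1, plate 8485, IFU 1901) is an assumed
# example; check https://data.sdss.org/datamodel/ for real locations.
import pathlib
import urllib.request

url = ("https://data.sdss.org/sas/dr17/manga/spectro/redux/"
       "v3_1_1/8485/stack/manga-8485-1901-LOGCUBE.fits.gz")
dest = pathlib.Path(url.rsplit("/", 1)[-1])

if not dest.exists():                      # skip if already downloaded
    urllib.request.urlretrieve(url, dest)
print(dest, dest.stat().st_size, "bytes")
```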
The Science Archive Webservers (SAW) provide visualisations of most of the
reduced images and data products available on the SAS. The SAW offers the
option to display spectra with their model fits, and to search spectra based
on a variety of parameters (e.g. observing program, redshift, coordinates).
These searches can be saved as permalinks, so that they can be consulted again
in the future and be shared with collaborators. All SAW webapps are available
from https://dr17.sdss.org/, and allow for displaying and searching of images
(SDSS-I/II), optical single-fiber spectra (SDSS-I/II, SEGUE, BOSS and eBOSS),
infrared spectra (APOGEE-1 and APOGEE-2), and MaStar stellar library spectra.
Images and spectra can be downloaded through the SAW, and previous data
releases are available back to DR8. The SAW also offers direct links to
SkyServer Explore pages (see below).
The MaNGA integral-field data are not incorporated in the SAW due to their more complex data structure, and can instead be accessed through Marvin
(https://dr17.sdss.org/marvin/; Cherinka et al. 2019). Marvin offers not only
visualisation options through its web interface, but also allows the user to
query the data and analyze data products remotely through a suite of Python
tools. Marvin also offers access to various MaNGA value added catalogs, as
described in §5.5. Marvin’s Python tools are available through pip-install,
and installation instructions as well as tutorials and examples are available
here: https://sdss-marvin.readthedocs.io/en/stable/. No installation is
required to use Marvin’s Python tools in SciServer, as described later in this
section and in §5.3.
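To give a flavor of those tools, the sketch below opens the standard Marvin demo galaxy remotely. The calls follow the Marvin documentation but should be treated as a sketch; plate-IFU 8485-1901 is simply an example target.

```python
# Sketch of remote MaNGA access with Marvin (pip install sdss-marvin).
# Method and attribute names follow the Marvin docs; verify against
# https://sdss-marvin.readthedocs.io for the current API.
from marvin import config
from marvin.tools.cube import Cube

config.setRelease('DR17')            # work against DR17 data products
cube = Cube(plateifu='8485-1901')    # fetched remotely if not found locally
print(cube.header['EXPTIME'])        # FITS header access
maps = cube.getMaps()                # associated DAP maps
print(maps)
```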
Catalogs of derived data products are available on the SAS, but can be
accessed more directly through the Catalog Archive Server (CAS, Thakar et al.,
2008). These include photometric and spectroscopic properties, as well as some
value added catalogs. The SkyServer webapp (https://skyserver.sdss.org) allows
for visual inspection of objects using e.g. the QuickLook and Explore tools,
and is also suitable for synchronous SQL queries in the browser. Tutorials and
examples explaining the SQL syntax and how to query in SkyServer are available
at http://skyserver.sdss.org/en/help/docs/docshome.aspx. For DR17, the
SkyServer underwent a significant upgrade, which includes a completely
redesigned user interface as well as migration of the back end to a platform
independent, modular architecture. Although SkyServer is optimal for smaller
queries that can run in the browser, for larger ones we recommend using
CASJobs (https://skyserver.sdss.org/casjobs). CASJobs allows for asynchronous
queries in batch mode, and offers the user free storage space for query
results in a personal database (MyDB) for server-side analysis that minimizes
data movement (Li & Thakar, 2008).
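For orientation, the sketch below runs a small synchronous query from Python; the HTTP endpoint and its parameters are assumptions based on the public SkyServer web services, and the query uses standard CAS tables.

```python
# Sketch: a small synchronous SQL query against the DR17 CAS through the
# SkyServer HTTP search endpoint (endpoint path assumed; see SkyServer help).
import requests

sql = """
SELECT TOP 5 specObjID, ra, dec, z, class
FROM SpecObj
WHERE class = 'QSO' AND zWarning = 0
"""
resp = requests.get(
    "https://skyserver.sdss.org/dr17/SkyServerWS/SearchTools/SqlSearch",
    params={"cmd": sql, "format": "json"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```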
SkyServer and CASJobs are now part of the SciServer science platform
(Taghizadeh-Popp et al., 2020, https://www.sciserver.org), which is accessible
with free registration on a single-sign-on portal, and offers server-side
analysis with Jupyter notebooks in both interactive and batch mode, via
SciServer Compute. SciServer is fully integrated with the CAS, and users will
be able to access the data and store their notebooks in their personal account
(shared with CASJobs). SciServer offers data and resource sharing via its
Groups functionality that greatly facilitates its use in the classroom, to
organize classes with student, teacher and teaching assistant privileges.
Several SciServer Jupyter notebooks with use cases of SDSS data are available
through the SDSS education webpages (https://www.sdss.org/education/), some of
which have been used by SDSS members in college-level based courses as an
introduction to working with astronomical data. SciServer has featured prominently in the “SDSS in the Classroom” workshops at AAS meetings.
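A comparable server-side query through the SciServer Python client might look like the sketch below; the context name for DR17 and the exact client behavior are assumptions to check against the SciServer documentation.

```python
# Sketch: run the same kind of query in batch through SciServer's CasJobs
# client (pip install sciserver); requires a free SciServer account.
from SciServer import Authentication, CasJobs

Authentication.login("your_username", "your_password")   # placeholder credentials
sql = "SELECT TOP 5 plate, mjd, fiberID, z FROM SpecObj WHERE class = 'GALAXY'"
df = CasJobs.executeQuery(sql=sql, context="DR17")        # assumed context name
print(df)
```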
Users can now analyze the MaNGA DR17 data in SciServer, using the Marvin suite
of Python tools. SciServer integration enables users to use the access and
analysis capabilities of Marvin without having a local installation. In the
SciServer Compute system222https://www.sciserver.org/about/compute/, the MaNGA
dataset is available as an attachable MaNGA Data Volume, with the Marvin
toolkit available as a loadable Marvin Compute Image. Once loaded, the Marvin
package along with a set of Marvin Jupyter example notebooks and tutorials are
available on the compute platform.
With DR17, we are also releasing in SciServer a new feature called SpecDash
(Taghizadeh-Popp, 2021) to interactively analyze and visualize one-dimensional
optical spectra from the SDSS Legacy and eBOSS surveys, and soon from APOGEE as well. SpecDash is available both as a stand-alone website (https://specdash.idies.jhu.edu/) and as a Jupyter notebook widget in SciServer.
Users can load and compare multiple spectra at the same time, smooth them with
several kernels, overlay error bars, spectral masks and lines, and show
individual exposure frames, sky background and model spectra. For analysis and
modeling, spectral regions can be interactively selected for fitting the
continuum or spectral lines with several predefined models. All spectra and
models shown in SpecDash can be downloaded, shared, and then uploaded again
for subsequent analysis and reproducibility. Although the web-based version
shares the same functionality as the Jupyter widget version, the latter has
the advantage that users can use the SpecDash python library to
programmatically load any kind of 1-D spectra, and analyze or model them using
their own models and kernels.
All tools and data access points described above are designed to serve a wide
range of users from undergraduate level to expert users with significant
programming experience. In addition, Voyages (https://voyages.sdss.org/)
provides an introduction to astronomical concepts and the SDSS data for less
experienced users, and can also be used by teachers in a classroom setting.
The Voyages activities were specifically developed around pointers to K-12 US
science standards, and a Spanish language version of the site is available at
https://voyages.sdss.org/es/.
## 4\. APOGEE-2 : Full Release
The central goal of APOGEE is to map the chemodynamics of all structural
components of the Milky Way Galaxy via near-twin, multiplexed NIR high-
resolution spectrographs operating simultaneously in both hemispheres
(APOGEE-N and APOGEE-S spectrographs respectively; both described in Wilson et
al., 2019). DR17 constitutes the sixth release of data from APOGEE, which has
run in two phases (APOGEE-1 and APOGEE-2) spanning both SDSS-III and SDSS-IV.
As part of SDSS-III, the APOGEE-1 survey operated for approximately 3 years
from August 2011 to July 2014 using the 2.5-m Sloan Foundation Telescope at
APO. At the start of SDSS-IV, APOGEE-2 continued its operations in the
Northern Hemisphere by initiating a $\sim$6-year survey (APOGEE-2N). Thanks to
unanticipated on-sky efficiency, APOGEE-2N operations concluded in November
2020 with an effective $\sim$7.5 years of bright time observations, with many
programs expanded from their original 6-year baseline. In April 2017,
operations began with the newly built APOGEE-S spectrograph and associated
fiber plugplate infrastructure on the 2.5-m Irénée du Pont Telescope at LCO;
APOGEE-2S observations concluded in January 2021. A full overview of the
APOGEE-1 scientific portfolio and operations was given in Majewski et al.
(2017) and a parallel overview for APOGEE-2 is forthcoming (S. Majewski et
al., in prep.).
The APOGEE data in DR17 encompass all SDSS-III APOGEE-1 and SDSS-IV APOGEE-2
observations acquired with both instruments from the start of operations at
APO in SDSS-III (September 2011) through the conclusion of SDSS-IV operations
at APO and LCO (in November 2020 and January 2021, respectively). Compared to
the previous APOGEE data release (DR16), DR17 contains roughly two additional
years of observations in both hemispheres; this doubles the number of targets
observed from APOGEE-2S (see Table 1).
DR17 contains APOGEE data and information for 657,135 unique targets, with
372,458 of these (57%) as part of the main red star sample that uses a simple
selection function based on de-reddened colors and magnitudes (for more
details see Zasowski et al., 2013, 2017). The primary data products are: (1)
reduced visit and visit-combined spectra, (2) radial velocity measurements,
(3) atmospheric parameters (eight in total), and (4) individual element
abundances (up to 20 species). Approximately 2.6 million individual visit
spectra are included in DR17; 399,505 sources have three or more visits (54%)
and 35,009 sources (5%) have ten or more visits.
The final APOGEE survey map is shown in Figure 2, where each circle represents
a single “field” that is color-coded by survey phase: APOGEE-1 (cyan),
APOGEE-2N (blue), or APOGEE-2S (red). The difference in field of view between APOGEE-N and APOGEE-S is indicated by the symbol size, with each APOGEE-S field spanning 2.8 deg^2 and each APOGEE-N field spanning 7 deg^2 (for the instrument
descriptions, see Wilson et al., 2019). Those fields with any new data in DR17 are encircled in black; new data can come either from fields observed for the first time or from fields receiving additional epochs. The irregular high Galactic
latitude coverage is largely due to piggyback “co-observing” with MaNGA during
dark time. Notably, these cooperative operations resulted in observations of
an additional 162,817 targets, or 22% of the total DR17 targets ($\sim$30% of
targets in APOGEE-2), a number of targets comparable to that observed in all of APOGEE-1.
Figure 2.— The DR17 final APOGEE sky coverage shown in Galactic coordinates with fields color-coded by the survey phase in which the field was observed: APOGEE-1 (cyan), APOGEE-2N (blue), and APOGEE-2S (red). The fiber plugplates used with the APOGEE-N spectrograph have a 7 square degree field of view, while those used with the APOGEE-S spectrograph have a 2.8 square degree field of view. Those fields with any new observations in DR17 are highlighted with a black outline.

Figure 3.— A sky map in Galactic coordinates showing the number of stars per APOGEE field. The disk is targeted with a more or less systematic grid of pointings within $|b|<15\deg$. For $\ell<30\deg$ there is denser coverage of the bulge and inner Galaxy. The circle sizes reflect the different fields of view of APOGEE-N and APOGEE-S. The dense coverage at the North Galactic Cap is due to co-observing with the MaNGA survey, which contributed 22% of the targets in DR17.
A different visualization of the final field plan is given in Figure 3, where
now each field is color-coded by the number of unique stars targeted in each
field. APOGEE plates have 300 fibers, but APOGEE targeting uses a “cohorting”
strategy by which exposure is accumulated over many visits for the faintest
targets in a field while brighter targets are swapped in and out over time
(for a schematic see Zasowski et al., 2013, Figure 1 therein). Moreover, some
fields were included in multiple programs, like those in the _Kepler_
footprint, and as many as 1600 unique targets were accommodated in a single 7
deg^2 APOGEE-2N field over the full span of the APOGEE-1 and APOGEE-2 observing
programs.
Extensive descriptions of the target selection and strategy are found in
Zasowski et al. (2013) for APOGEE-1 and in Zasowski et al. (2017) for
APOGEE-2. Details about the final target selection schemes used for APOGEE-2N
and APOGEE-2S, which evolved over time, are presented in Beaton et al. (2021)
and Santana et al. (2021), respectively.
### 4.1. DR17 Sample Highlights
DR17 represents the culmination of the APOGEE-2 program (and, indeed, all of
APOGEE) and presents a number of large, focused subsamples that are worth
noting briefly. DR17 contains over 18,000 targets in the TESS Northern
Continuous Viewing Zone (CVZ) and over 35,000 targets in the TESS Southern CVZ
(Ricker et al., 2016). In DR17, there are over 35,000 targets which are part
of 13 of the _Kepler_ K2 Campaigns and over 20,000 in the primary _Kepler_
field. In total, over 100,000 targets are also found in high-cadence, space-
based photometry programs. Among all scientific targeting programs, there are
more than 13,000 targets that have more than 18 individual epochs, spanning
all parts of the Galaxy.
DR17 includes extensive APOGEE coverage for numerous star clusters, including
29 open clusters, 35 globular clusters, and 18 young clusters. However,
detailed membership characterization identifies at least one possible member
in as many as 126 open clusters and 48 globular clusters, after accounting for
targets in Contributed and Ancillary Science programs (N. Myers et al., in
prep, R. Schiavon et al., in prep.). Thus, some observations exist in DR17 for
approximately 200 star clusters spanning a range of ages and physical
properties.
In addition, DR17 contains measurements of resolved stars from ten dwarf
satellite galaxies of the Milky Way (including the dwarf spheroidal systems
Boötes I, Sextans, Carina, Fornax, Sculptor, Sagittarius, Draco, and Ursa
Minor, as well as the Large and Small Magellanic Clouds); 14,000 of the over
20,000 targets toward dwarf satellites are in the Magellanic System. DR17 also contains integrated light observations of star clusters in
Fornax, M31, and M33 and of the central regions of M31 and of its
highest surface brightness dwarf satellites.
### 4.2. APOGEE DR17 Data Products
The basic procedure for processing and analysis of APOGEE data is similar to
that from previous data releases (Abolfathi et al., 2018; Holtzman et al.,
2018; Jönsson et al., 2020), but a few notable differences are highlighted
here. More details are presented in J. Holtzman et al. (in prep.).
#### 4.2.1 Spectral Reduction and Radial Velocity Determinations
Nidever et al. (2015) describes the original reduction procedure for APOGEE
data, and the various APOGEE Data Release papers present updates (Abolfathi et
al., 2018; Holtzman et al., 2018; Jönsson et al., 2020, J. Holtzman et al. in
prep.). For DR17, at the visit reduction level, a small change was made to the
criteria by which pixels are flagged as being potentially affected by poor sky
subtraction.
The routines for combination of the individual visit spectra were rewritten
for DR17 to incorporate a new radial velocity analysis, called Doppler
(Nidever et al., 2021). Doppler performs a least squares fit to a set of visit
spectra, solving simultaneously for basic stellar parameters ($T_{\rm eff}$, $\log g$, and [M/H]) and the radial velocity for each
visit. The fitting is accomplished by using a series of Cannon (Ness et al.,
2015; Casey et al., 2016) models to generate spectra for arbitrary choices of
stellar parameters across the Hertzsprung-Russell diagram (from 3500 K to
20,000 K in $T_{\rm eff}$); the Cannon models were trained on a grid of
spectra produced using Synspec (e.g., Hubeny & Lanz, 2017; Hubeny et al.,
2021) with Kurucz model atmospheres (Kurucz, 1979; Castelli & Kurucz, 2003;
Munari et al., 2005). The primary outputs of Doppler are the radial velocities; while the stellar parameters from Doppler are stored, they are not
adopted as the final values (see ASPCAP, §4.2.2 below). The Doppler routine
produces slightly better results for radial velocities in most cases, as
judged by scatter across repeated visits of stars. Details will be given in J.
Holtzman et al. (in prep.), but, for example, for $\sim$85,000 stars that have more than 3 visits, VSCATTER $<$ 1 km/s, TEFF $<$ 6000 K, and no additional data since DR16, the median VSCATTER is reduced from 128 m/s to 96 m/s.
In addition to the new methodology, the radial velocities for faint stars were
improved. This was accomplished by making an initial combination of the visit
spectra using only the barycentric correction. This initial combination
provided a combined spectrum from which a radial velocity was determined. The
radial velocity for each individual visit was then determined separately, but
was required to be within 50 km/s of the original estimate. This yielded a
higher fraction of successful radial velocities for faint stars, as judged by
looking at targets in nearby dwarf spheroidal galaxies.
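The following sketch is purely illustrative of this two-pass scheme and is not the Doppler code: a coarse RV is measured from a straight combination of synthetic visits, and each visit is then re-fit on a grid restricted to within 50 km/s of that first-pass value.

```python
# Illustrative two-pass RV scheme (not the actual Doppler pipeline).
import numpy as np

C_KMS = 299792.458

def ccf_rv(wave, flux, twave, tflux, v_grid):
    """Return the trial velocity (km/s) that maximizes the cross-correlation."""
    ccf = [np.nansum(flux * np.interp(wave, twave * (1 + v / C_KMS), tflux))
           for v in v_grid]
    return v_grid[int(np.argmax(ccf))]

# Synthetic template and noisy "visits": one absorption line shifted by a true RV.
twave = np.linspace(15100.0, 15200.0, 4000)      # H-band-like region, Angstroms
tflux = 1.0 - 0.5 * np.exp(-0.5 * ((twave - 15150.0) / 0.5) ** 2)
true_rv, wave = 37.0, twave.copy()
line = 15150.0 * (1 + true_rv / C_KMS)
visits = [1.0 - 0.5 * np.exp(-0.5 * ((wave - line) / 0.5) ** 2)
          + 0.05 * np.random.default_rng(s).normal(size=wave.size)
          for s in range(3)]

# First pass: coarse RV from a straight (barycentric-only style) combination.
v0 = ccf_rv(wave, np.mean(visits, axis=0), twave, tflux,
            np.arange(-500.0, 500.0, 1.0))

# Second pass: per-visit RVs, constrained to +/-50 km/s around the estimate.
v_grid = np.arange(v0 - 50.0, v0 + 50.0, 0.25)
print(v0, [ccf_rv(wave, f, twave, tflux, v_grid) for f in visits])
```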
#### 4.2.2 Atmospheric Parameter and Element Abundance Derivations
Stellar parameters and abundances are determined using the APOGEE Stellar
Parameters and Chemical Abundance Pipeline (ASPCAP, García Pérez et al. 2016)
that relies on the FERRE optimization code (Allende Prieto et al.,
2006; see https://github.com/sdss/apogee).
The basic methodology of ASPCAP remained the same for DR17 as in previous
releases, but new synthetic spectral grids were created. These took advantage
of new, non-local thermodynamic equilibrium (NLTE) population calculations by
Osorio et al. (2020) for four elements: Na, Mg, K, and Ca; as discussed in Osorio et al. (2020), the H-band abundance differences between LTE and NLTE
were always less than 0.1 dex. Adopting these calculations, however, required
the adoption of a different spectral synthesis code from that used in the last
several APOGEE data releases: for DR17, the Synspec code (e.g., Hubeny & Lanz,
2017; Hubeny et al., 2021) was adopted for the primary analysis instead of the
Turbospectrum code (Alvarez & Plez, 1998; Plez, 2012) used in previous
releases. This was not a straightforward choice because, while Synspec allows
the NLTE levels to be used, it calculates the synthetic spectra under the
assumption of plane parallel geometry, which becomes less valid for the
largest giant stars. On the other hand, Turbospectrum can use spherical
geometry, but does not allow NLTE populations to be specified.
DR17 uses multiple sub-grids to span from $T_{\rm eff}$ = 3000 K (M dwarf) to $T_{\rm eff}$ = 20,000 K (BA), with $\log g$ ranging from 0 to 5 (3 to 5 for the BA grid). The full details of these grids and the
reliability of the parameters as a function of stellar type are provided in J.
Holtzman et al. (in prep.). Modifications to the linelists used for the
syntheses are described in Smith et al. (2021), which is an augmentation to
prior linelist work for APOGEE (Shetrone et al., 2015; Hasselquist et al.,
2016; Cunha et al., 2017).
The ASPCAP results from the new Synspec grid are the primary APOGEE DR17
results and the majority of users will likely be satisfied with the results in
this catalog; only this primary catalog will be loaded into the CAS. However,
unlike prior releases, DR17 also includes supplemental analyses constructed
using alternate libraries that have different underlying physical assumptions.
The different analyses in DR17 are provided in separate summary files and
include:
1.
the primary library, using Synspec and including NLTE calculations for Na, Mg, K, and Ca (with files on the SAS under dr17/synspec_rev1; this is a revised version of the dr17/synspec directories, correcting a minor problem with the LSF convolution for a subset of stars observed at LCO; since the Value Added Catalogs were constructed with the original dr17/synspec, it has been retained for completeness);
2.
one created using Synspec, but assuming LTE for all elements (files under dr17/synspec_lte);
3.
another created using Turbospectrum 20 (files under dr17/turbo20), using spherical geometry for $\log g \leq 3$;
4.
one created with Turbospectrum, but with plane-parallel geometry for all stars (files under dr17/turbo20_pp).
All of the libraries use the same underlying MARCS stellar atmospheres for stars with $T_{\rm eff} < 8000$ K, computed with spherical geometry for $\log g \leq 3$. A full description of these spectral grids will be presented in J. Holtzman et al. (in prep.) and a focused discussion of the differences between the libraries and their physical implications will be presented in Y. Osorio et al. (in prep.). In summary, however, the differences are subtle in most cases. We encourage those using the APOGEE DR17 results to clearly specify the catalog version that they are using in their analyses (the library version appears in the name of the summary file, as well as in the ASPCAP_ID tag provided for each source in these files).
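As a usage note, the sketch below opens one of these summary catalogs with astropy and reads the tag that records the library version; the file name follows the dr17/synspec_rev1 layout described above but is an assumption to be verified on the SAS.

```python
# Sketch: open a DR17 allStar summary catalog and check which library version
# produced each entry. The file name is an assumption based on the
# dr17/synspec_rev1 layout; see the SAS and data model for the real path.
from astropy.io import fits

with fits.open("allStar-dr17-synspec_rev1.fits") as hdul:
    stars = hdul[1].data
    print(len(stars), "catalog entries")
    # Per the text above, the library version is encoded in the summary file
    # name and in the ASPCAP_ID tag of each source.
    print(stars["ASPCAP_ID"][0])
```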
For DR17, we present 20 elemental abundances: C, C I, N, O, Na, Mg, Al, Si, S,
K, Ca, Ti, Ti II, V, Cr, Mn, Fe, Co, Ni, and Ce. In DR16, we attempted to
measure the abundances of Ge, Rb, and Yb, but given the poor results for
extremely weak lines, we did not attempt these in DR17. While we attempted
measurements of P, Cu, Nd, and 13C in DR17, these were judged to be
unsuccessful. Overall, the spectral windows used to measure the abundances
were largely unchanged, but several additional windows were added for Cerium,
such that the results for Ce appear to be significantly improved over those in
DR16.
As in DR16, both the raw spectroscopic stellar parameters as well as
calibrated parameters and abundances are provided. Calibrated effective
temperatures are determined by a comparison to photometric effective
temperatures, as determined from the relations of González Hernández & Bonifacio (2009), using stars with low reddening. Calibrated surface
gravities are provided by comparison to a set of surface gravities from
asteroseismology (Serenelli et al., 2017, M. Pinsonneault et al. in prep.) and
isochrones (Berger et al., 2020). For DR17, the surface gravity calibration
was applied using a neural network, unlike previous data releases where
separate calibrations were derived and applied for different groups (red
giants, red clump, and main sequence) of stars. The new approach eliminates
small discontinuities that were previously apparent, and is described in more
detail in J. Holtzman et al. (in prep.). For the elemental abundances,
calibration just consists of a zeropoint offset (separately for dwarfs and
giants), determined by setting the median abundance [X/M] of solar metallicity
stars in the solar neighborhood with thin disk kinematics such that [X/M]=0.
Additional details on the ASPCAP changes are described in J. Holtzman et al.
(in prep.).
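A minimal sketch of this zeropoint step is given below; the selection cuts are illustrative placeholders rather than the DR17 criteria, and in the DR17 scheme dwarfs and giants would each receive their own offset.

```python
# Sketch of the abundance zeropoint: shift [X/M] so that the median for
# solar-metallicity, solar-neighborhood, thin-disk stars is zero. The cuts
# below are illustrative placeholders rather than the DR17 selection.
import numpy as np

def zeropoint_offset(x_m, m_h, in_solar_neighborhood, has_thin_disk_kinematics):
    solar = (np.abs(m_h) < 0.05) & in_solar_neighborhood & has_thin_disk_kinematics
    return np.nanmedian(x_m[solar])

# Demo with fake data carrying a +0.03 dex offset; the recovered offset is
# subtracted from the raw abundances to calibrate them.
rng = np.random.default_rng(0)
x_m = rng.normal(0.03, 0.05, 1000)
m_h = rng.normal(0.0, 0.2, 1000)
near = rng.random(1000) < 0.5
thin = rng.random(1000) < 0.7
print(zeropoint_offset(x_m, m_h, near, thin))   # close to 0.03
```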
#### 4.2.3 Additional data
Several other modifications were made for DR17.
1.
The summary data files for APOGEE that are available on the Science Archive Server now include data from the Gaia Early Data Release 3 (EDR3) for the APOGEE targets (Gaia Collaboration et al., 2021, 2016). Positional matches were performed by the APOGEE team. More specifically, the following data are included:
• Gaia EDR3 identifiers (Gaia Collaboration et al., 2021),
• Gaia EDR3 parallaxes and proper motions (Lindegren et al., 2021),
• Gaia EDR3 photometry (Riello et al., 2021),
• Gaia EDR3 RVs (Seabroke et al., 2021),
• distances and uncertainties following Bailer-Jones et al. (2021).
2.
Likely membership for a set of open clusters, globular clusters, and dwarf spheroidal galaxies, as determined from position, radial velocity, proper motion, and distance, is provided in a MEMBERS column; a small usage sketch follows this list. More specifically, initial memberships were computed based on position and literature RVs, and these are then used to determine proper motion and distance criteria. Literature RVs were taken from:
• APOGEE-based mean RVs for the well-sampled “calibration clusters” in Holtzman et al. (2018),
• mean RVs for globular clusters from Harris (2010; the 2010 update to the Harris 1996 catalog), and
• mean RVs for dwarf spheroidal galaxies from McConnachie (2012).
Users interested in the properties of the clusters or satellite galaxies are encouraged to perform more detailed membership characterization and to compute membership probabilities (e.g., Masseron et al., 2019; Mészáros et al., 2020; Hasselquist et al., 2021, Schiavon et al., in prep., Shetrone et al., in prep.).
3.
Some spectroscopic binary identification is provided through bits in the STARFLAG and ASPCAPFLAG bitmasks. A more comprehensive analysis of spectroscopic binaries is provided in a VAC (see §4.4.1 below).
We encourage those utilizing these data in our summary catalogs to cite the
original references as given above.
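As illustrated in the sketch below, the MEMBERS column and the Gaia EDR3 columns can be used directly from the summary catalog. The file name and the Gaia column names are assumptions to be checked against the DR17 data model, and M13 is only an example cluster label.

```python
# Sketch: select likely members of one cluster from the DR17 summary catalog
# and pull their Gaia EDR3 proper motions. Column names (GAIAEDR3_*) and the
# file name are assumptions; verify both against the DR17 data model.
from astropy.io import fits
import numpy as np

with fits.open("allStar-dr17-synspec_rev1.fits") as hdul:  # assumed file name
    d = hdul[1].data

is_m13 = np.char.find(d["MEMBERS"].astype(str), "M13") >= 0
print(is_m13.sum(), "stars flagged as likely M13 members")
print(d["GAIAEDR3_PMRA"][is_m13][:5], d["GAIAEDR3_PMDEC"][is_m13][:5])
```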
### 4.3. Data Quality
The overall quality of the DR17 results for radial velocities, stellar
parameters, and chemical abundances is similar to that of previous APOGEE data
releases (a full evaluation will be provided in J. Holtzman et al., in prep.). The web documentation contains the details of the data model; moreover, the documentation describes how data were flagged, including a brief list of changes relative to prior releases. As in DR16, uncertainties for stellar
parameters and abundances are estimated by analyzing the scatter in repeat
observations of a set of targets.
Users should be aware that deriving consistent abundances across a wide range
of parameter space is challenging, so some systematic features and trends
arise. Users should be careful when comparing abundances of stars with
significantly different stellar parameters. Also, the quality of the abundance
measurements varies between different elements, across parameter space, and
with signal-to-noise.
Some regions of parameter space present larger challenges than others. In
particular, it is challenging to model the spectra of the coolest stars and,
while abundances are derived for the coolest stars in DR17, there seem to be
significant systematic issues for dwarfs with $T_{\rm eff} < 3500$ K, such that although we provide calibrated results in the PARAM array, we do not populate the “named tags.” Separately, for warm/hot stars ($T_{\rm eff} > 7000$ K), information on many abundances is lacking in the spectra, and
uncertainties in the model grids at these temperatures may lead to systematic
issues with the DR17 stellar parameters.
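A sketch of a conservative quality cut reflecting these caveats follows; the bit position assumed for STAR_BAD is an assumption to be checked in the DR17 bitmask documentation.

```python
# Sketch: keep stars away from the problematic cool-dwarf and hot-star regimes
# and drop sources with the overall bad-star bit set. The STAR_BAD bit
# position is an assumption; look it up in the DR17 bitmask documentation.
import numpy as np

ASPCAPFLAG_STAR_BAD = 1 << 23   # assumed bit position for STAR_BAD

def good_params(teff, aspcapflag):
    ok_range = (teff > 3500.0) & (teff < 7000.0)
    ok_flag = (aspcapflag & ASPCAPFLAG_STAR_BAD) == 0
    return ok_range & ok_flag

print(good_params(np.array([4800.0]), np.array([0])))  # [ True]
```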
As a demonstration of the quality and scientific potential of the data, Figure 4 shows a set of [Mg/Fe] versus [Fe/H] diagrams for different three-dimensional spatial zones within the disk of the Milky Way, restricted to giant stars with $1 < \log g < 2.5$ to minimize potential systematics or sampling bias. Spectrophotometric distances to individual stars are determined from Value Added Catalogs (in this visualization, from the DistMass VAC, to be released in 2022, which uses a neural net at the parameter level to determine spectroscopic distances) and are then used with stellar positions to determine the Galactocentric radius ($R_{G}$) and height above the plane ($z$) for each individual star; this highlights the scientific potential enabled by the analyses in the Value Added Catalogs. The color coding indicates the orbital eccentricity based on calculations from GalPy (Bovy, 2015) using Gaia EDR3 proper motions (Gaia Collaboration et al., 2021) and APOGEE DR17 radial velocities. Figure 4 is a merging of similar visualizations previously presented in Hayden et al. (2015) and Mackereth et al. (2019b), such that the spatial zones of the former are merged with the dynamical inference of the latter. The stars of the solar neighborhood (middle panel, $7<R_{G}<9$) show two distinct chemical sequences, commonly referred to as the low- and high-[$\alpha$/Fe] sequences, that are also somewhat dynamically distinct (apparent in the color-coding by orbital eccentricity). The inner Galaxy, however, is dominated by high-eccentricity stars (bulge-like orbits) on the high-[$\alpha$/Fe] sequence, just as the outer Galaxy is dominated by low-eccentricity stars (near-circular orbits) on the low-[$\alpha$/Fe] sequence, with some slight dependence on $z$. The relative contributions of the low-eccentricity versus high-eccentricity and low-[$\alpha$/Fe] versus high-[$\alpha$/Fe] sequences shift throughout the Galaxy. These spatial, chemical, and dynamical groupings provide evidence for various disk-formation and disk-evolution scenarios (e.g., as discussed in Hayden et al., 2015; Mackereth et al., 2019b, among others) that add complexity and nuance to the canonical schemes.
Figure 4.— A series of [Mg/Fe] vs [Fe/H] plots from APOGEE DR17 for different
zones in the Milky Way. Distances from the DistMass VAC are used to determine
Galactocentric radius ($R_{G}$) and height above the plane ($z$). Points are
color-coded by orbital eccentricities as computed with GalPy (Bovy, 2015)
using Gaia EDR3 proper motions and APOGEE radial velocities.
### 4.4. APOGEE Value Added Catalogs
There are a large number of APOGEE-associated VACs in DR17. In what follows we
provide brief descriptions of each VAC along with references where the reader
can find more detail. Broadly speaking, APOGEE VACs can be split into those characterising special subsamples, like binary stars, open clusters, and photometric variables, and those which calculate stellar or orbital parameters for all (or most) APOGEE target stars (e.g., StarHorse, APOGEEnet, and others). We
also document the release of a mock catalog of APOGEE based on a
hydrodynamical simulation.
#### 4.4.1 VACs Describing Categories of Objects in APOGEE
The first set of APOGEE VACs describe special categories of objects in APOGEE
data and in most cases provide additional information/characteristics for
these objects. They are:
1.
Open Cluster Chemical Abundances and Mapping catalog (OCCAM): The goal of
OCCAM is to leverage the APOGEE survey to create a large, uniform catalog of
open cluster chemical abundances and use these clusters to study Galactic
chemical evolution. The catalog contains average chemical abundances for each
cluster and membership probability estimates for APOGEE stars in the cluster
area. We combine proper motion (PM) and radial velocity (RV) measurements from
Gaia EDR3 (Gaia Collaboration et al., 2021) with RV and metallicity
measurements from APOGEE to establish cluster membership probabilities for
each star observed by APOGEE. The VAC includes 26,699 stars in the areas of
153 cataloged disk clusters. Detailed descriptions of the OCCAM survey,
including targeting and the methodology for membership determinations, are
presented in Frinchaboy et al. (2013), Donor et al. (2018), and Donor et al.
(2020). This third catalog from the OCCAM survey includes 44 new open
clusters, including many in the Southern hemisphere and those targeted
specifically in Galactocentric radius ($R_{GC}$) ranges with little coverage in the DR16
catalog (specific targeting described in Beaton et al. 2021; Santana et al.
2021). Average RV, PM, and abundances for reliable ASPCAP elements are
provided for each cluster, along with the visual quality determination.
Membership probabilities based individually upon PM, RV, and [Fe/H] are provided for each star; stars are considered 3$\sigma$ members if they have probability $>0.01$ in all three membership dimensions (however, some stars near the main-sequence turn-off may “fail” the [Fe/H] cut due to evolutionary diffusion effects; Souto et al., 2018, 2019). A code sketch of this membership rule appears after this list. The results and caveats from this VAC will be discussed thoroughly in N. Myers et al. (in prep.).
2.
APOGEE Red-Clump (RC) Catalog: DR17 contains an updated version of the APOGEE
red-clump (APOGEE-RC) catalog. This catalog is created in the same way as the
previous DR14 and DR16 versions of the catalog, with a more stringent $\log g$
cut compared to the original version of the catalog (Bovy et al., 2014). The
catalog contains 50,837 unique stars, about 30% more than in DR16. The
catalog is created using a spectrophotometric technique first presented in
Bovy et al. (2014) that results in a rather pure sample of red-clump stars
(e.g., minimal contamination from red-giant-branch, secondary-red-clump, and
asymptotic-giant-branch stars that have similar CMD and H-R positions). Bovy
et al. estimated a purity of $\sim$95%. The narrowness of the RC locus in
color-metallicity-luminosity space allows distances to the stars to be
assigned with an accuracy of 5%-10%, which exceeds the precision of
spectrophotometric distances in other parts of the H-R diagram. We recommend
users adopt the most recent catalog (DR17) for their analyses; additional
discussion on how to use the catalog is given in Bovy et al. (2014). While the
overall datamodel is similar to previous versions of the catalog, the proper
motions are from Gaia EDR3 (Gaia Collaboration et al., 2021; Lindegren et al., 2021).
3.
APOGEE-Joker: The APOGEE-Joker VAC contains posterior samples for binary-star
orbital parameters (Keplerian orbital elements) for 358,350 sources with three
or more APOGEE visit spectra that pass a set of quality cuts as described in
A. Price-Whelan et al. (in prep.). The posterior samples are generated using
The Joker, a custom Monte Carlo sampler designed to handle the multi-modal
likelihood functions that arise when inferring orbital parameters with
sparsely sampled or noisy radial velocity time-series data (Price-Whelan et al., 2017). This VAC deprecates the previous iterations of the catalog (Price-Whelan et al., 2018, 2020).
For 2,819 stars, the orbital parameters are well constrained, and the returned
samples are effectively unimodal in period. For these cases, we use the
sample(s) returned from The Joker to initialize standard MCMC sampling of the
Keplerian parameters using the time-series optimized MCMC code known as
exoplanet (https://docs.exoplanet.codes/en/latest/; Foreman-Mackey et al., 2021) and provide these MCMC samples. For all stars, we provide a catalog
containing metadata about the samplings, such as the maximum a posteriori
(MAP) parameter values and sample statistics for the MAP sample. A. Price-Whelan et al. (in prep.) describes the data analysis procedure in more detail,
and defines and analyzes a catalog of $\gtrsim$40,000 binary star systems
selected using the raw orbital parameter samples released in this VAC.
4.
Double-lined spectroscopic binaries in APOGEE spectra: Generally, each APOGEE fiber captures the spectrum of a single star. Sometimes, however, there may be multiple stars of comparable brightness with sky separations smaller than the fiber radius, whose individual spectra are captured by the same recorded spectrum. Most often, these stars are double-lined spectroscopic binaries or higher-order multiples (SBs), but on occasion they may also be chance line-of-sight alignments of random field stars (most often observed towards the Galactic center). Through analyzing the cross-correlation function (CCF) of the APOGEE spectra, Kounkel et al. (2021) have developed a routine to automatically identify these SBs using Gaussian deconvolution of the CCFs (Kounkel, 2021; https://github.com/mkounkel/apogeesb2), and to measure RVs
of the individual stars. The catalog of these sources and the sub-component
RVs are presented here as a VAC. For the subset of sources that had a
sufficient number of measurements to fully characterize the motion of both
stars, the orbit is also constructed.
The data obtained through April/May 2020 were processed with the DR16 version
of the APOGEE radial velocity pipeline and this processing was made available
internally to the collaboration as an intermediate data release. All of the
SBs identified in this internal data release have undergone rigorous visual
vetting to ensure that every component that can be detected is included and
that spurious detections have been removed. However, the final DR17 radial
velocity pipeline is distinct from that used for DR16 (summarized above; J.
Holtzman et al. in prep.) and the reductions are sufficiently different that
they introduce minor discrepancies within the catalog. In comparison to DR16,
the DR17 pipeline limits the span of the CCF for some stars to a velocity
range around the mean radial velocity to ensure a more stable overall set of
RV measurements; on the other hand the DR16 pipeline itself may fail on a
larger number of individual visit spectra and thus not produce a full set of
outputs. For the sources that have both good parameters and a complete CCF
coverage for both DR16 and DR17, the widely resolved components of SBs are
generally consistent with one another; close companions that have only small
RV separations are not always identified in both datasets. For this reason,
SBs that could be identified in both the DR16 and DR17 reductions are kept as
separate entries in the catalog. Visual vetting was limited only to the data processed with the DR16 pipeline (i.e., data through April/May 2020); the full
automatic deconvolutions of the DR17 CCFs are presented as-is.
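The OCCAM membership rule referenced above can be summarized in a few lines; this is an illustrative sketch with placeholder array names, not the VAC datamodel.

import numpy as np

def is_3sigma_member(prob_pm, prob_rv, prob_feh, threshold=0.01):
    # A star is a member only if it clears the threshold in all three
    # dimensions: proper motion, radial velocity, and [Fe/H].
    return (prob_pm > threshold) & (prob_rv > threshold) & (prob_feh > threshold)

members = is_3sigma_member(np.array([0.50, 0.005]),
                           np.array([0.90, 0.900]),
                           np.array([0.20, 0.800]))  # -> [True, False]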
#### 4.4.2 VACs of Distances and other parameters
VACs providing distances and other properties (mostly related to orbital
parameters) are released (or re-released):
1.
StarHorse distances, extinctions, and stellar parameters for APOGEE DR17 +
Gaia EDR3: We combine high-resolution spectroscopic data from APOGEE DR17 with
broad-band photometric data from 2MASS, unWISE and PanSTARRS-1, as well as
parallaxes from Gaia EDR3. Using the Bayesian isochrone-fitting code StarHorse
(Santiago et al., 2016; Queiroz et al., 2018), we derive distances,
extinctions, and astrophysical parameters. We achieve typical distance uncertainties of $\sim$5%, while extinction uncertainties in the V band amount to $\sim$0.05 mag for stars with available PanSTARRS-1 photometry and $\sim$0.17 mag for stars with only infrared photometry. The estimated StarHorse
parameters are robust to changes in the Galactic priors assumed and
corrections for Gaia parallax zero-point offset. This work represents an
update of DR16-based results presented in Queiroz et al. (2020).
2.
APOGEE-astroNN: The APOGEE-astroNN value-added catalog holds the results from
applying the astroNN deep-learning code to APOGEE spectra to determine stellar
parameters, individual stellar abundances (Leung & Bovy, 2019a), distances
(Leung & Bovy, 2019b), and ages (Mackereth et al., 2019a). For DR17, we have
re-trained all neural networks using the latest data, i.e., APOGEE DR17
results for the abundances, Gaia EDR3 parallax measurements, and an
intermediate APOKASC data set with stellar ages (v6.6.1, March 2020 using DR16
ASPCAP). Additionally, we augmented the APOKASC age data with low-metallicity
asteroseismic ages from Montalbán et al. (2021) to improve the accuracy of
ages at low metallicities; the Montalbán et al. (2021) analysis is similar to
that of APOKASC, but performed by an independent team. As in DR16, we correct
for systematic differences between spectra taken at LCO and APO by applying
the median difference between stars observed at both observatories. In
addition to abundances, distances, and ages, properties of the orbits in the
Milky Way (and their uncertainties) for all stars are computed using the fast
method of Mackereth & Bovy (2018) assuming the MWPotential2014 gravitational
potential from Bovy (2015). Typical uncertainties in the parameters are 35 K
in $T_{\rm eff}$, 0.1 dex in $\log g$, 0.05 dex in elemental abundances, 5% in distance, and 30% in age. Orbital properties
such as the eccentricity, maximum height above the mid-plane, radial, and
vertical action are typically precise to 4 to 8 %.
#### 4.4.3 APOGEE Net: a unified spectral model
A number of different pipelines are available for extracting spectral
parameters from the APOGEE spectra. These pipelines generally manage to
achieve optimal performance for red giants and, increasingly, G & K dwarfs,
which compose the bulk of the stars in the catalog. However, the APOGEE2
catalog contains a number of parameter spaces that are often not well
characterized by the primary pipelines. Such parameter spaces include pre-main
sequence stars and low-mass stars, whose measured parameters show systematic $T_{\rm eff}$ & $\log g$ deviations making them inconsistent with the isochrones and the main sequence. OBA stars are also less well constrained; in prior data releases many were classified as F dwarfs (due to grid-edge effects) and had their $T_{\rm eff}$ underestimated in the formal results. By
using data-driven techniques, we attempt to fill in those gaps to construct a
unified model of APOGEE spectra. In the past, we have developed a neural
network, APOGEE Net (Olney et al., 2020), which was shown to perform well to
extract $T_{\rm eff}$, $\log g$, & [Fe/H] on all stars with $T_{\rm
eff}<$6,500 K, including pre-main sequence stars. We now expand these efforts
to also characterize hotter stars with 6,500$<T_{\rm eff}<$50,000 K. APOGEE
NET II is described in Sprague et al. (2022).
#### 4.4.4 APOGEE FIRE VAC
Mock catalogs, made by simulating observations of sophisticated galaxy simulations, provide unique opportunities for observational projects, in particular the ability to test for or constrain the impact of selection functions, field plans, and algorithms on scientific inferences. One of the
most realistic galaxy simulations to date is the Latte simulation suite, which
uses FIRE-2 (Hopkins et al., 2018) to produce galaxies in Milky Way-mass halos
in a cosmological framework (Wetzel et al., 2016). Sanderson et al. (2020)
translated three of the simulations into realistic mock catalogs (using three
solar locations, resulting in nine catalogs), known as the Ananke simulations (for data access see https://fire.northwestern.edu/ananke/#dm). Ananke contains key Gaia measurables for the star particles in the simulations, including
radial velocity, proper motion, parallax, and photometry in the Gaia bands as
well as chemistry (10 chemical elements are tracked in the simulation), and
other stellar properties. Because the input physics and the global structure
of the model galaxy are known, these mock catalogs provide an experimental
laboratory to make connections between the resolved stellar populations and
global galaxy studies.
In this VAC, Ananke is expanded to permit APOGEE-style sampling of the mock catalogs. For all observed quantities both the intrinsic (i.e., error-free) and the observed values are reported; the observed values are the intrinsic values convolved with an error model derived from observational data for similar object types. As described in Nikakhtar et al. (2021), Ananke mock catalogs now contain: (i) 2MASS ($JHK_{s}$) photometry and reddening, (ii)
abundance uncertainties following APOGEE DR16 performance (following Poovelil
et al., 2020; Jönsson et al., 2020), and (iii) a column that applies a basic
survey map (Zasowski et al., 2013, 2017; Beaton et al., 2021; Santana et al.,
2021). The full mock-catalogs are released such that users can impose their
own selection function to construct a mock APOGEE survey in the simulation.
Mock-surveys can then be used to test the performance of methods and
algorithms to recover the true underlying galactic physics as demonstrated in
Nikakhtar et al. (2021).
## 5\. MaNGA: Full Release of Final Sample
The MaNGA survey (Bundy et al., 2015) uses a custom-built set of hexagonal
integral field unit (IFU) fiber bundles (Drory et al., 2015) to feed
spectroscopic fibers into the BOSS spectrograph (Smee et al., 2013). Over its
operational lifetime, MaNGA has successfully met its goal of obtaining
integral field spectroscopy for $\sim$ 10,000 nearby galaxies (Law et al.,
2015; Yan et al., 2016a) at redshift $z\sim 0.03$ with a nearly flat
distribution in stellar mass (Wake et al., 2017).
DR17 contains all MaNGA observations taken throughout SDSS-IV, and more than
doubles the sample size of fully reduced galaxy data products previously
released in DR15 (Aguado et al., 2019). These data products include raw data,
intermediate reductions such as flux-calibrated spectra from individual
exposures, and final calibrated data cubes and row-stacked spectra (RSS)
produced using the MaNGA Data Reduction Pipeline (DRP; Law et al., 2016,
2021a; Yan et al., 2016b).
DR17 includes DRP data products (see §5.1) for 11,273 MaNGA cubes distributed
amongst 674 plates. 10,296 of these data cubes are for “traditional” MaNGA
type galaxies, and 977 represent data cubes associated with non-standard
ancillary programs (targeting a variety of objects including globular
clusters, faint galaxies and intracluster light in the Coma cluster,
background reference sky, and also tiling of the large nearby galaxies M31 and
IC342; see §5.4 for more details). Of the 10,296 galaxy cubes, 10,145 have the
highest data quality with no warning flags indicating significant issues with
the data reduction process. These 10,145 data cubes correspond to 10,010 unique targets (as identified via their MANGAID), with a small number of repeat observations taken for cross-calibration purposes (each repeat has its own plate-ifu code, so MANGAID should be used to identify unique galaxies). As in
previous releases, DR17 also includes the release of derived spectroscopic
products (e.g., stellar kinematics, emission-line diagnostic maps, etc.) from
the MaNGA Data Analysis Pipeline (DAP; Belfiore et al., 2019; Westfall et al.,
2019); see §5.2. Additionally, DR17 contains the final data release for the
MaNGA Stellar Library (MaStar; Yan et al., 2019, and §6), which includes
calibrated 1D spectra for 28,124 unique stars spanning a wide range of stellar
types.
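As a concrete illustration of this MANGAID bookkeeping, a minimal sketch of selecting unique, science-quality galaxies from a DRP summary ("drpall") table follows; the file name, HDU, and the CRITICAL bit position are assumptions to verify against the DR17 datamodel.

from astropy.table import Table
import numpy as np

drpall = Table.read("drpall-v3_1_1.fits", hdu=1)  # DR17 drpall (name/HDU assumed)
science = drpall[(drpall["drp3qual"] & 2**30) == 0]  # drop CRITICAL cubes (bit assumed)
_, first = np.unique(science["mangaid"], return_index=True)
unique_galaxies = science[np.sort(first)]  # one row per MANGAID
print(len(science), len(unique_galaxies))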
We illustrate the sky footprint of MaNGA galaxies released in DR17 in Figure
5, along with colored boxes indicating the locations of a selection of other
galaxy surveys, namely the HI surveys Apertif (K. Hess et al., in prep.) and ALFALFA (the Arecibo Legacy Fast ALFA survey; Haynes et al. 2018; see also §5.5.4 for more HI followup); IR surveys like the Herschel-ATLAS (H-ATLAS; Smith et al. 2017) and the UKIRT Infrared Deep Sky Survey (UKIDSS; Lawrence et al. 2007); and other optical surveys, like the Galaxy and Mass Assembly Survey (GAMA; Liske et al. 2015), whose footprint includes most of the SAMI IFU observations (Croom et al. 2021; in total, 74 galaxies are observed by both MaNGA and SAMI), and Hyper Suprime-Cam (HSC; Aihara et al. 2019). In some cases the prioritization of which MaNGA plates to observe was driven by the availability of these ancillary data (e.g., note how observed plates fill in parts of the UKIDSS footprint). MaNGA plates in an earlier projected footprint of Apertif were also prioritized, but changes in Apertif observation plans have significantly reduced the final overlap.
Figure 5.— DR17 final MaNGA survey area; blue tiles indicate observed fields
(plates), grey tiles indicate potential fields from which the MaNGA final
sample was drawn. Colored boxes indicate the regions observed by a variety of
other surveys as described in the text.
### 5.1. MaNGA Data Reduction Pipeline and Products
The MaNGA DRP has evolved substantially throughout the survey across a variety
of both public (DR) and internal (“MaNGA Product Launch”, or MPL) data
releases. A summary of these various DRP versions and the number of unique
galaxies in each is given by Law et al. (2021a, see their Table 1). These
authors also provide a detailed description of the differences in the DRP for
DR17 compared to previous releases. (Strictly speaking, Law et al. 2021a describe the team-internal data release MPL-10, but these data are practically identical to the final public data release DR17, which is the team-internal release MPL-11, in everything except the total number of galaxies.) In brief, changes in the DR17 data products compared to DR15 include:
1.
Updated spectral line-spread function (LSF): Many stages of the pipeline have been rewritten to further improve the accuracy of the LSF estimate, which is now good to better than 1%. As demonstrated by Law et al. (2021a) by comparison against observations with higher-resolution spectrographs, this allows MaNGA emission-line velocity dispersions to be reliable down to 20 km s$^{-1}$ at signal-to-noise ratio (SNR) above 50, well below the 70 km s$^{-1}$ instrumental resolution (see the quadrature sketch after this list).
2.
Multiple pipeline changes have affected the overall MaNGA survey flux
calibration. The most significant changes included adoption of a different
extinction model for the calibration standard stars and correction for a few-percent scale error in lab measurements of the MaNGA fiber bundle metrology
using on-sky self calibrations (see Law et al., 2021a, their Appendix A).
3.
New data quality flags have been defined to better identify potential reduction problems. These include a new UNUSUAL data quality bit to identify cubes that differ from ordinary data quality but are still useful for many analyses (e.g., cubes that may be missing a fraction of the field of view due to hardware problems). These are distinct from the previously defined CRITICAL data quality bit, which indicates data with significant problems that should preclude them from most scientific analyses ($<1$% of the total sample).
4.
Introduction of a new processing step to detect and subtract bright electronic
artifacts (dubbed the “blowtorch”) arising from a persistent electronic
artifact within the charge-coupled devices (CCDs) in one of the red cameras
during the final year of survey operations (see Law et al., 2021a, their
Appendix B).
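The quadrature sketch referenced in item 1: subtracting the instrumental resolution from an observed line width in quadrature shows why percent-level LSF accuracy matters near the resolution limit (illustrative values, not DRP outputs).

import numpy as np

sigma_obs = 75.0   # measured emission-line width, km/s (illustrative)
sigma_inst = 70.0  # instrumental resolution, km/s
sigma_intrinsic = np.sqrt(sigma_obs**2 - sigma_inst**2)  # ~27 km/s
# A 1% error in sigma_inst shifts sigma_intrinsic by ~1.8 km/s (~7%) here,
# which is why sub-percent LSF accuracy matters for low dispersions.
print(f"{sigma_intrinsic:.1f} km/s")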
### 5.2. MaNGA Data Analysis Pipeline and Products
In this section we describe two specific changes to the DAP analysis between
MaNGA data released in DR15 and DR17. The first is a change in the stellar
continuum templates used for the emission line measurements; this change only
affects emission line measurements and does not affect stellar kinematic
measurements. The second is the addition of new spectral index measurements
more appropriate for stacking analyses and coaddition of spaxels; the
previously existing spectral index measurements are not affected by this
addition.
The MaNGA Data Analysis Pipeline (DAP) as a whole is discussed extensively in
the DR15 paper (Aguado et al., 2019) and in Westfall et al. (2019), Belfiore
et al. (2019), and Law et al. (2021a). The last provides a summary of other
improvements made to the DAP since DR15.
The SDSS data release website (https://www.sdss.org/) provides information on
data access and changes to the DAP data models in DR17 for its major output
products. Further information can be found in the documentation of the code
base (https://sdss-mangadap.readthedocs.io/en/latest/).
#### 5.2.1 Stellar Continuum Templates
In DR17, we use different spectral templates to model the galaxy continuum for
emission line measurements than we use for stellar kinematics measurements. In
DR15, we used the same templates in both cases, but as discussed by Law et al.
(2021a), these template sets diverged starting with our ninth internal data
set (MPL-9; between DR15 and DR17). For the emission line measurements, the
new templates are based on the MaStar survey, allowing us to take advantage of
the full MaNGA spectral range (3600-10000 Å) and, e.g., model the [S
III]$\lambda\lambda$9071,9533Å doublet and some of the blue Paschen lines. For
the stellar kinematics measurements, we have continued to use the same
templates used in DR15, the MILES-HC library, taking advantage of its modestly
higher spectral resolution than MaStar. Since MILES only spans from 3575 to 7400 Å, MaNGA stellar kinematics do not include, e.g., contributions from the calcium near-infrared triplet near 8600 Å.
In DR17, we provide DAP emission line measurements based on two different
continuum template sets, both based on the MaStar Survey (Yan et al., 2019,
and §6), and referred to as MASTARSSP and MASTARHC2. There are four different
analysis approaches, indicated by DAPTYPE. Three use MASTARSSP, with three
different spatial binning approaches, and the fourth uses MASTARHC2.
The template set referred to as the MASTARSSP library by the DAP is a subset of the simple-stellar-population (SSP) models provided by Maraston et al. (2020). Largely to decrease execution time, we down-selected templates from the larger library provided by Maraston et al. (2020) to only those spectra with a Salpeter Initial Mass Function (IMF) and the following grid in SSP age and metallicity, for a total of 54 spectra (see the quick check after this list):
1.
Age/[1 Gyr] = 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 9, 14
2.
$\log(Z/Z_{\odot})$ = -1.35, -1.0, -0.7, -0.33, 0.0, 0.35.
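A quick bookkeeping check of the grid size quoted above:

# 9 ages x 6 metallicities = 54 SSP template spectra.
ages_gyr = [0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 9, 14]
log_z_solar = [-1.35, -1.0, -0.7, -0.33, 0.0, 0.35]
grid = [(age, z) for age in ages_gyr for z in log_z_solar]
assert len(grid) == 54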
Extensive testing was done to check differences in stellar-continuum fits
based on this choice; small differences that were found are well within the
limits described by Belfiore et al. (2019). Section 5.3 of Law et al. (2021b) shows further analysis, including a direct comparison of results for the BPT emission-line diagnostic plots when using either the MASTARHC2 or MASTARSSP templates, showing that the choice of templates has a limited effect on that analysis.
Importantly, note that the DAP places no constraints on how these templates
can be combined (e.g., unlike methods which use the Penalized PiXel-Fitting,
or pPXF; Cappellari & Emsellem 2004; Cappellari 2017, implementation of
regularized weights), and the weight applied to each template is not used to
construct luminosity-weighted ages or metallicities for the fitted spectra.
The use of the SSP models, as opposed to spectra of single stars, is meant
only to impose a physically relevant prior on the best-fitting continua, even
if minimally so compared to more sophisticated stellar-population modeling.
The template set referred to as the MASTARHC2 library by the DAP (the second of two library versions based on hierarchical clustering, HC, of MaStar spectra; MASTARHC1 is also available from the DAP code repository, but it was only used in the processing for MPL-9) is a set of 65 hierarchically clustered templates based on $\sim$2800 MaStar spectra from MPL-10. Only one
of the four DAPTYPEs provided in DR17 uses these templates; however, we note
that the results based on these templates are the primary data sets used by
Law et al. (2021b, a) to improve the DRP (see above).
# TreeFlow: probabilistic programming and automatic differentiation for
phylogenetics
Christiaan Swanepoel,∗,1,2 Mathieu Fourment,3 Xiang Ji,4
Hassan Nasif,5 Marc A Suchard,6,7,8
Frederick A Matsen IV,5,9,10 Alexei Drummond1,11
1Centre for Computational Evolution, The University of Auckland, Auckland, New
Zealand;
2School of Computer Science, The University of Auckland, Auckland, New
Zealand, 1010;
3Australian Institute for Microbiology and Infection, University of Technology
Sydney, Ultimo NSW, Australia;
4Department of Mathematics, Tulane University, New Orleans, USA;
5Public Health Sciences Division, Fred Hutchinson Cancer Research Center,
Seattle, Washington, USA;
6Department of Human Genetics, University of California, Los Angeles, USA;
7Department of Computational Medicine, University of California, Los Angeles,
USA;
8Department of Biostatistics, University of California, Los Angeles, USA;
9Department of Statistics, University of Washington, Seattle, USA;
10Department of Genome Sciences, University of Washington, Seattle, USA;
11School of Biological Sciences, The University of Auckland, Auckland, New
Zealand, 1010;
## Abstract
Probabilistic programming frameworks are powerful tools for statistical
modelling and inference. They are not immediately generalisable to
phylogenetic problems due to the particular computational properties of the
phylogenetic tree object. TreeFlow is a software library for probabilistic
programming and automatic differentiation with phylogenetic trees. It
implements inference algorithms for the node times of a phylogenetic tree and model parameters given a tree topology. We demonstrate how TreeFlow can be used to
quickly implement and assess new models. We also show that it provides
reasonable performance for gradient-based inference algorithms compared to
specialized computational libraries for phylogenetics.
## Introduction
Traditionally, phylogenetic analyses have been performed by specialized
software [51, 19, 30]. A number of software packages exist that implement a
broad but predefined collection of models and associated specialized inference
methods. Typically, inference is handled by carefully crafted but
computationally costly stochastic optimisation or Markov chain Monte Carlo
(MCMC) methods [42, 28]. In contrast, in other realms of statistical analysis,
probabilistic programming software libraries have entered into widespread use.
These allow the specification of almost any model as a probabilistic program,
and inference is provided automatically with a generic inference method.
Exploiting the power of probabilistic programming in phylogenetic analyses
could significantly accelerate research by making the process of developing
new models and implementing inference faster and more flexible.
Probabilistic programming tools, such as BUGS [39], Stan [11], PyMC3 [49],
Pyro [6], and TensorFlow Probability [12] allow users to specify probabilistic
models by describing the generative process with code as a probabilistic
program. Advancements in automatic inference methods have allowed these tools to perform efficient inference on almost any model. Some notable examples,
including automatic differentiation variational inference [36] (ADVI) and
Hamiltonian Monte Carlo [20] (HMC), use local gradient information from the
model’s probability density function to efficiently navigate the parameter
space.
One key technology that enables these gradient-based automatic inference
algorithms is automatic differentiation. Automatic differentiation refers to
methods which efficiently calculate machine-precision gradients of functions,
specified by computer programs, without extra analytical derivation or
excessive computational overhead compared to the function evaluation.
Automatic differentiation frameworks that have statistical inference packages
built on top of them include Theano [5], Pytorch [44], JAX [10] and TensorFlow
[1]. Some of these extend to non-trivial computational constructs such as
control flow and recursion [63].
The structure of the phylogenetic tree object is a major barrier to
implementing probabilistic programming for phylogenetics. It is not clear how
the association between its discrete and continuous quantities (the topology
and branch lengths respectively) should be represented and handled in
inference. Also, the combinatorial explosion of the size of the discrete part of the phylogenetic state space presents a major challenge to any inference
algorithm. Generic random search methods for discrete variables, as in the
naive implementation of MCMC sampling, do not scale appropriately to allow
inference on modern datasets with thousands of taxa.
Efforts have already been made to apply probabilistic programming to
phylogenetic methods [24, 48, 15]. It has been shown that universal
probabilistic programming languages, a particularly expressive extension of
the class of traditional probabilistic programming languages, can be used to
express generative processes for trees, and use them to generate Sequential
Monte Carlo inference schemes for complex speciation models [48]. Another
tool, LinguaPhylo, provides a domain specific modelling language for
phylogenetics and has the capability to generate a specification for an MCMC
sampler for inference [15]. Both of these approaches potentially lack the inherent scalability provided by the automatic inference methods that accompany general-purpose probabilistic modelling tools.
Applying scalable inference methods to phylogenetics is a major challenge.
Probabilistic path Hamiltonian Monte Carlo has been developed to use gradient
information to sample phylogenetic posteriors across multiple tree topologies
[13], though moves between tree topologies are not totally informed by
gradient information and are similar to the random-walk proposal distributions
available to standard MCMC routines. Hamiltonian Monte Carlo proposals for
continuous parameters within tree topologies have been paired with standard
MCMC moves between topologies for estimating local mutation rates and
divergence times using scalable algorithms for gradient calculation [31, 23].
Another approach has used Stan to apply automatic differentiation variational
inference to phylogenetic inference within a fixed rooted tree topology [24].
Finally, variational inference has been applied to phylogenetic tree inference
using a clade-based distribution on unrooted tree topologies [65]. This
approach has been extended to a more expressive family of approximating
distributions [64], and applied to rooted phylogenies [66].
One core computation that presents a challenge to automatic differentiation is
the phylogenetic likelihood [22]. This computes the probability function of a
collection of sequences given a phylogenetic tree, integrating out the
character states at ancestral nodes. This is most efficiently performed
through dynamic programming, which requires sequential control flow or
recursion, and thus can be non-trivial to implement in a functional automatic
differentiation framework such as TensorFlow. Additionally, since the
computational cost scales linearly with the number of sequences, naively
computing the likelihood’s gradient with respect to each branch of the tree
yields a quadratic computational cost [32].
## Description
TreeFlow is a library for probabilistic programming in Python. It is built on
TensorFlow, a computational framework for machine learning. TreeFlow leverages
TensorFlow’s capabilities for accelerated numerical computation and automatic
differentiation. It uses the existing probabilistic modelling infrastructure
provided by TensorFlow Probability, which implements standard statistical
distributions and inferential machinery [12]. TreeFlow provides a phylogenetic
tree representation for TensorFlow and associated input and output methods, a
range of widely used phylogenetic distributions and functions, and tools for
applying modern statistical inference methods to phylogenetic models.
TensorFlow’s core computational object is the Tensor, a multi-dimensional
array with a uniform data type. Phylogenetic trees are not immediately at home
in a Tensor-centric universe, as they are a complex data structure, often
defined recursively, with both continuous and discrete components. TreeFlow
represents trees as a structure of Tensors: floating-point Tensors representing the times and branch lengths, and integer Tensors representing the topology. The topological Tensors include indices of node parents and children, and node orderings for pre- and post-order traversals. TensorFlow has extensive
support for ‘nested’ structures of Tensors, including using them as arguments
to compiled functions and defining distributions over complex objects. This
means computations and models involving phylogenetic trees can be expressed
naturally.
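As a minimal illustration of this structure-of-Tensors idea (with field names and index conventions that are ours, not TreeFlow's documented API), a tree can be packed into a NamedTuple of Tensors and manipulated with vectorized gathers:

from typing import NamedTuple
import tensorflow as tf

class TensorTree(NamedTuple):
    node_heights: tf.Tensor       # float, [n_nodes]; zero for contemporaneous tips
    parent_indices: tf.Tensor     # int, [n_nodes - 1]; parent of each non-root node
    postorder_indices: tf.Tensor  # int; internal nodes, children before parents

# Four taxa (nodes 0-3) and three internal nodes (4-6), with node 6 the root.
tree = TensorTree(
    node_heights=tf.constant([0.0, 0.0, 0.0, 0.0, 0.5, 1.0, 2.0]),
    parent_indices=tf.constant([4, 4, 5, 6, 5, 6]),
    postorder_indices=tf.constant([4, 5, 6]),
)
# Branch lengths fall out of a vectorized gather over the structure.
branch_lengths = (tf.gather(tree.node_heights, tree.parent_indices)
                  - tree.node_heights[:-1])

Because such a structure is just nested Tensors, it can be passed through compiled-function boundaries and used to define distributions over trees, as described above.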
A range of phylogenetic distributions and models are implemented in TreeFlow.
These are primarily generative models for phylogenetic trees and models of
molecular sequence evolution. Generative models of phylogenetic trees, such as
Kingman’s coalescent [37] and Birth-Death-Sampling speciation processes [50],
can be used to infer parameters related to population dynamics from genetic
sequence data. TreeFlow implements models of nucleotide sequence evolution
such as the Jukes-Cantor [34], HKY85 [27], and General Time Reversible (GTR)
[54] models. It includes a standard approach for dealing with heterogeneity in
mutation rate across sites based on marginalizing over a discretized site
rate distribution [59]. The probabilistic programming framework, however,
allows for the use of any appropriate distribution as the base site rate
distribution rather than just the standard single-parameter Gamma
distribution. For example, it is straightforward to replace the base Gamma
distribution with a Weibull distribution, which has a quantile function that
is much easier to compute [24]. Thanks to TensorFlow’s vectorized arithmetic
operations, it is also natural to model variations in mutation rate over
lineages by specifying parameters for multiple rates (possibly with a
hierarchical prior) and multiplying by the branch lengths of the phylogenetic
(time) tree. Models which can be naturally expressed this way include the log-normal random local clock [16] and auto-correlated relaxed clock models [55].
A computation that requires special treatment is the phylogenetic likelihood,
the probability of a sequence alignment given a phylogeny and model of
sequence evolution. This typically involves integrating out character states
at unsampled ancestral nodes using a dynamic programming computation known as
Felsenstein’s pruning algorithm [22]. The postorder tree traversal and dynamic
data structure are not obviously compatible with TensorFlow’s immutable data
structures and focus on vectorized computations. Additionally, naive
implementations result in gradient computations with problematic scaling. The
computational cost of computing the derivatives of this likelihood with
respect to all the branches of the phylogenetic tree could grow quadratically
with respect to the number of taxa, and would prohibit gradient-based
inference on large datasets [32]. These issues are overcome in TreeFlow by
implementing the dynamic programming data structure with TensorFlow’s
TensorArray construct [63]. The TensorArray is a data structure representing a
collection of tensors which allows efficient implementation of the sequential
computation. The write-once property enforced on its constituent tensors
ensures that gradient computations have appropriate scaling, as evidenced by
benchmarks (see Figure 8).
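A simplified, eager-mode sketch of the pruning recursion built on a TensorArray appears below. The tensor layout, argument names, and root-last postorder convention are assumptions made for illustration; TreeFlow's actual implementation is structured for graph compilation.

import tensorflow as tf

def pruning_log_likelihood(tip_partials, postorder, children,
                           transition_probs, frequencies):
    # tip_partials: [n_tips, n_sites, 4] one-hot leaf partial likelihoods
    # postorder: internal node indices, children before parents, root last
    # children: [n_nodes, 2] child indices (rows for tips unused)
    # transition_probs: [n_nodes - 1, 4, 4]; P(t) for the branch above node i
    n_tips = tip_partials.shape[0]
    n_nodes = 2 * n_tips - 1
    partials = tf.TensorArray(tip_partials.dtype, size=n_nodes,
                              clear_after_read=False)
    for i in range(n_tips):
        partials = partials.write(i, tip_partials[i])
    for node in postorder:
        left, right = children[node][0], children[node][1]
        # Propagate each child's partials through its branch, then combine.
        left_msg = tf.linalg.matvec(transition_probs[left], partials.read(left))
        right_msg = tf.linalg.matvec(transition_probs[right], partials.read(right))
        partials = partials.write(node, left_msg * right_msg)
    root_partials = partials.read(n_nodes - 1)  # [n_sites, 4]
    site_likelihoods = tf.linalg.matvec(root_partials, frequencies)
    return tf.reduce_sum(tf.math.log(site_likelihoods))

Because each TensorArray element is written exactly once, reverse-mode differentiation through this loop retains near-linear scaling in the number of taxa, avoiding the quadratic blow-up discussed above.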
Another useful tool for phylogenetic inference implemented in TreeFlow is the
node height ratio transform [35]. This has been used to infer times on
phylogenetic trees using maximum likelihood in the PAML software package [60].
The ratio transform parametrizes the internal node heights of a tree as the
ratio between a node’s height and its parent’s height. The heights can be
computed from the ratios in a pre-order tree traversal. This transformation
has a triangular Jacobian matrix, which means computing the determinant
required for change of variable of a probability density can be computed in
linear time with respect to the number of internal node heights [24]. In
combination with a log transformation of the root height and a logit
transformation of the ratios, a multivariate distribution that takes real
values can be transformed into a distribution on internal node heights of
rooted phylogenies. This has been applied to Bayesian phylogenetic inference
in the context of automatic differentiation variational inference [24] and
Hamiltonian Monte Carlo [31]. The ratio transform is implemented using a
TensorArray-based computation as a TensorFlow Probability Bijector which
provides a convenient interface for transforming real-valued distributions
into phylogenetic tree distributions.
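A NumPy sketch of the ratio-to-heights map and its log-Jacobian (the index convention, internal nodes ordered so that parents precede children with node 0 as the root, is an assumption for illustration):

import numpy as np

def heights_from_ratios(root_height, ratios, parent_indices):
    # ratios[i - 1] in (0, 1) for non-root node i; parent_indices[i] is the
    # (earlier-indexed) parent of node i, with the root entry unused.
    heights = np.empty(len(ratios) + 1)
    heights[0] = root_height
    for i in range(1, len(heights)):  # parents are visited before children
        heights[i] = ratios[i - 1] * heights[parent_indices[i]]
    return heights

def log_det_jacobian(heights, parent_indices):
    # dh_i/dr_i = h_parent(i); the Jacobian is triangular, so the
    # log-determinant is a linear-time sum over non-root nodes.
    return np.sum(np.log(heights[parent_indices[1:]]))

h = heights_from_ratios(2.0, np.array([0.5, 0.6]), np.array([0, 0, 1]))
# h == [2.0, 1.0, 0.6]; log|J| = log(2.0) + log(1.0)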
TensorFlow Probability distributions can be composed into a probabilistic
graphical model using TensorFlow Probability’s joint distribution
functionality [45]. The code to specify a joint distribution provides a
concise representation of the model used in a data analysis. The ability to
implement phylogenetic models in this framework means that automatic inference
algorithms implemented in TensorFlow can be leveraged. The discrete topology
element of phylogenetic trees is an obstacle in the usage of these algorithms,
which are typically restricted to continuous latent variables. Often, the
phylogenetic tree topology is not the latent variable of interest, and is not
a significant source of uncertainty [61]. This can be the case when divergence
times or other substitution or speciation model parameters are the focus. In
this scenario, useful results can be obtained by performing inference with a
fixed tree topology, such as one obtained from fast maximum likelihood
methods. This is the approach taken by the NextStrain platform [26], which
uses the scalability afforded by a fixed tree topology to allow large-scale
rapid phylodynamic analysis of pathogen molecular sequence data.
One form of statistical inference for which the gradient computation is
essential is variational Bayesian inference [33]. The goal of Bayesian
inference is to characterise a posterior distribution which represents
uncertainty over model parameters. Variational Bayesian inference achieves
this by optimizing an approximation to the posterior distribution which has
more convenient analytical properties. Typically, the optimisation routine
used in variational inference can scale to a larger number of model parameters
than the random-walk sampling methods used by MCMC methods.
One concrete implementation of variational inference is automatic
differentiation variational inference (ADVI) [36]. ADVI can perform inference
on a wide range of probabilistic graphical models composed of continuous
variables. It automatically constructs an approximation to the posterior by
transforming a convenient base distribution to respect the domains of the
model’s component distributions. It then optimizes this approximation using
stochastic gradient methods [47, 8]. Typically, independent Normal distributions are used as the base distribution for computational convenience. This is known as the mean-field approximation; for posterior distributions that have significant correlation structure, skew, multi-modality, or heavy tails, it can introduce error in parameter estimation [7]. Possible solutions to
this approximation error include highly flexible variational approximations
with large numbers of parameters [46] or structured approximations that are
informed by the form of the model [3].
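For readers unfamiliar with ADVI mechanics, the following toy TensorFlow Probability example fits a mean-field surrogate for a single positive parameter; this is stock TFP usage rather than TreeFlow's interface, and the optimizer plumbing can vary across TF/TFP versions.

import tensorflow as tf
import tensorflow_probability as tfp
tfd, tfb = tfp.distributions, tfp.bijectors

# Toy model: a positive rate with a log-normal prior and exponential data.
model = tfd.JointDistributionNamed(dict(
    rate=tfd.LogNormal(0.0, 1.0),
    obs=lambda rate: tfd.Sample(tfd.Exponential(rate), 20),
))
observed = model.sample(seed=1)["obs"]

# Mean-field surrogate: a trainable Normal pushed through exp to stay positive.
loc = tf.Variable(0.0)
scale = tfp.util.TransformedVariable(1.0, tfb.Softplus())
surrogate = tfd.TransformedDistribution(tfd.Normal(loc, scale), tfb.Exp())

losses = tfp.vi.fit_surrogate_posterior(
    target_log_prob_fn=lambda rate: model.log_prob(rate=rate, obs=observed),
    surrogate_posterior=surrogate,
    optimizer=tf.optimizers.Adam(0.1),
    num_steps=200,
)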
TreeFlow implements ADVI using TensorFlow Probability’s bijector framework to
transform a base distribution and leverages the stochastic gradient optimizers
already implemented in TensorFlow. Tree variables are estimated by fixing the
topology. The base distribution for the divergence times on the tree is
transformed into a distribution on ratios using a logit transformation, and
then into a valid set of divergence times using the ratio transformation
described above [60]. ADVI opens the door to using TensorFlow’s neural network
framework to implement deep-learning-based variants such as variational
inference with normalizing flows [46], which transform the base distribution
through invertible trainable neural network layers to better approximate
complex posterior distributions.
As well as a library for probabilistic programming with TensorFlow
Probability, TreeFlow provides command line interfaces for fixed-topology
inference. These allow inference on standard phylogenetic models such as those
performed by specialized software. For inputs, they take a nucleotide sequence
alignment in the FASTA format, a tree topology in Newick format, and a model
specification in a specially structured YAML file. These allow specification of
the models described above for speciation and nucleotide substitution, as well
as parameter priors from a range of standard probability distributions. This
specification differs from the files used by other phylogenetic inference
software [17, 29] in that it does not include any description of inferential
machinery and simply provides a terse description of the model. Command line
interfaces are provided for both automatic differentiation variational
inference and maximum a posteriori inference.
## Biological examples
We used TreeFlow to perform fixed-topology phylogenetic analyses of two
datasets. The first is an alignment of 62 mitochondrial DNA sequences from
carnivores [52]. The second is a dataset of 980 influenza sequences [57]. In
both datasets maximum likelihood unrooted topologies are estimated using RAxML
[51]. These topologies are rooted with Least Squares Dating [56].
In the carnivores dataset, we demonstrate the flexibility of probabilistic
programming with TreeFlow by investigating variation in the ratio of
transitions to transversions in the nucleotide substitution process across
lineages in the tree. Early maximum likelihood analyses of mitochondrial DNA
in primates detected variation in this ratio, but without a biological basis,
it was attributed to a saturation effect [62]. A later simulation-based
investigation showed that this was a reasonable explanation, which exposed the
limitations of nucleotide substitution models for estimating the length of
branches deep in the tree [21].
Figure 1: Carnivores base model posterior marginal parameter estimates
This problem could be approached by means of Bayesian model comparison; a lack
of preference for a model allowing between-lineage variation of the ratio
could indicate that the substitution model lacks the power to separate
variation ‘signal’ from saturation ‘noise’. Firstly, we construct a standard
phylogenetic model with a HKY substitution model with a single transition-
transversion ratio parameter (kappa). We perform inference using ADVI, and
also using MCMC as implemented in BEAST 2 [9]. Since this model is of the form
of a standard phylogenetic analysis, it could be fit using TreeFlow’s command
line interface. Figure 1 compares the marginal parameter estimates obtained
from TreeFlow and BEAST 2. The discrepancies in distribution, apparent in the
estimates of the frequencies and tree height, can be attributed to the
approximation error introduced by ADVI. Most importantly, the estimate of the
parameter of interest, kappa, appears reasonable.
site_category_count = 4
pattern_counts = alignment.get_weights_tensor()
subst_model = HKY()

def build_sequence_dist(tree, kappa, frequencies, site_gamma_shape):
    unrooted_tree = tree.get_unrooted_tree()
    site_rate_distribution = DiscretizedDistribution(
        category_count=site_category_count,
        distribution=Gamma(
            concentration=site_gamma_shape,
            rate=site_gamma_shape
        ),
    )
    transition_probs_tree = get_transition_probabilities_tree(
        unrooted_tree,
        subst_model,
        rate_categories=site_rate_distribution.normalised_support,
        frequencies=frequencies,
        # For per-lineage kappa variation, replace the line above with:
        # frequencies=tf.broadcast_to(
        #     tf.expand_dims(frequencies, -2),
        #     kappa.shape + (4,)
        # ),
        kappa=kappa
    )
    return SampleWeighted(
        DiscreteParameterMixture(
            site_rate_distribution,
            LeafCTMC(
                transition_probs_tree,
                tf.expand_dims(frequencies, -2),
            ),
        ),
        sample_shape=alignment.site_count,
        weights=pattern_counts
    )

model = JointDistributionNamed(dict(
    birth_rate=LogNormal(c(1.0), c(1.5)),
    tree=lambda birth_rate: Yule(tree.taxon_count, birth_rate),
    kappa=LogNormal(c(0.0), c(2.0)),
    # For per-lineage kappa variation, replace the line above with:
    # kappa=Sample(
    #     LogNormal(c(0.0), c(2.0)),
    #     tree.branch_lengths.shape
    # ),
    site_gamma_shape=LogNormal(c(0.0), c(1.0)),
    frequencies=Dirichlet(c([2.0, 2.0, 2.0, 2.0])),
    sequences=build_sequence_dist
))
Figure 2: Code to specify models for the carnivores analysis. Commented-out alternative lines mark the changes (from the preceding line) that add kappa variation across lineages.
We then altered this model to estimate a separate ratio for every lineage on
the tree; the implementation of this was simple and scalable as a result of
TensorFlow’s vectorization and broadcasting functionality. We compare the
models using estimates of the marginal likelihood [40]. The marginal
likelihood, or evidence (the integral of the likelihood of the data over model parameters), is typically analytically intractable and challenging to compute
using MCMC methods [58]. The closed-form approximation to the posterior
distribution provided by variational inference means we can easily estimate
the marginal likelihood using importance sampling, a utility provided by
TreeFlow. This corrects for some of the posterior approximation error
described above.
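Schematically, the estimator uses the variational approximation q as an importance-sampling proposal; the sketch below is generic and not TreeFlow's utility function.

import numpy as np
from scipy.special import logsumexp

def importance_log_marginal(log_joint, q_sample, q_log_prob, n=1000):
    # log p(data) = log E_q[p(data, theta) / q(theta)], estimated by
    # drawing n samples from the variational approximation q.
    thetas = [q_sample() for _ in range(n)]
    log_weights = np.array([log_joint(t) - q_log_prob(t) for t in thetas])
    return logsumexp(log_weights) - np.log(n)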
Figure 3: Lineage age (number of expected substitutions per site before the present) vs. estimated transition/transversion ratio (kappa) for the carnivores dataset.
Figure 4: Node age estimates for the base model vs. the per-lineage kappa variation model.
The estimated per-lineage kappa parameters are shown in Figure 3. Kappa
estimates decline with the age of the lineage, which agrees with the results
of the previous studies. The marginal likelihood for the model with
transition-transversion ratio variation across lineages was higher than for
the base model. This means that, under the other components of this model, the
data supports variation in the ratio. This does not necessarily imply that
transition-transversion ratio in the true generative process of the data
varies in the same way, but could be a useful consideration in designing more
sophisticated models of nucleotide substitution. The growth in uncertainty in
kappa estimates deeper in the tree supports previous conclusions that
nucleotide substitution models are unable to effectively estimate the number
of substitutions on older branches [21]. Figure 4 shows that the kappa
variation model shortens older branches, leading to a substantially reduced
overall tree height estimate, with proportionally similar uncertainty in node
height estimates. This is not a proper dating analysis as it does not consider
uncertainty in the mutation rate and is not time-calibrated, but it is clear
that this model could significantly affect time estimates.
clock:
  strict:
    clock_rate:
      lognormal:
        loc: -2.0
        scale: 2.0
site:
  discrete_gamma:
    category_count: 4
    site_gamma_shape:
      lognormal:
        loc: 0.0
        scale: 1.0
substitution:
  gtr_rel:
    frequencies:
      dirichlet:
        concentration:
          - 2.0
          - 2.0
          - 2.0
          - 2.0
    rate_ac:
      gamma:
        concentration: 0.05
        rate: 0.05
    rate_ag:
      gamma:
        concentration: 0.05
        rate: 0.05
    rate_at:
      gamma:
        concentration: 0.05
        rate: 0.05
    rate_cg:
      gamma:
        concentration: 0.05
        rate: 0.05
    rate_gt:
      gamma:
        concentration: 0.05
        rate: 0.05
tree:
  coalescent:
    pop_size:
      lognormal:
        loc: 1.0
        scale: 1.5
Figure 5: YAML model definition for the analysis of the influenza dataset
Figure 6: Influenza parameter marginal posterior distributions.
Figure 7: Influenza node height posterior distribution statistics.
In the analysis of the 980-taxon influenza dataset, we demonstrate the
scalability of variational inference. We performed variational inference using
TreeFlow’s command line interface. The model used for inference included a
coalescent prior with a constant population size on the tree, a strict
molecular clock, a discretized Gamma model of site rate variation and a GTR
substitution model. The YAML model definition code for this model, including
parameter priors, is shown in Figure 5. This parameterization of the GTR
substitution model (here named gtr_rel), with independent priors on five of
the relative rates and one held fixed to 1, is used to allow comparison of
parameter estimates with BEAST 2. It is also possible to use a six-rate GTR
parameterisation with a Dirichlet prior in TreeFlow.
Figure 6 shows the marginal parameter estimates obtained from this dataset,
compared to those obtained from a fixed-topology MCMC analysis using BEAST 2.
The posterior distributions of most parameters are approximated with high
fidelity by the mean field approximation. The uncertainty in the tree height,
coalescent effective population size, and clock rate are slightly
underestimated. This is a result of using an approximation that ignores the
correlations between these parameters that are present in the posterior.
Figure 7 compares the divergence time estimates. TreeFlow’s mean field
variational approximation assumes the posterior distribution on the times of
the tree are independent Normal distributions transformed through the node
height ratio transformation to respect the time constraints of the tree.
However, the true posterior is almost certainly not of this form, so the
approximation introduces error. In general, the mean of the posterior
distribution is well approximated, in particular for the oldest seven nodes of
the tree. The posterior means of other divergence times are generally close
but not identical to the true posterior. The error is more apparent in the
posterior uncertainty; the standard deviation estimates produced by
variational inference often differ substantially from the true posterior.
While mean-field variational inference seems effective for estimating the parameters of the phylogenetic model, the estimates of divergence times appear
less reliable. Variational approximations that better capture the form of the
true posterior would improve the quality of these estimates.
BEAST 2 MCMC sampling was run for 30,000,000 iterations. Convergence was
checked using effective sample size, which was computed for all parameters
using ArviZ [38]. Multiple runs were performed to tune MCMC operator weights
to improve ESSs. Generally, the minimum acceptable ESS is 100, though 200 is
preferred [14]. The analysis resulted in a minimum effective sample size of
125. The wall clock runtime for the BEAST 2 analysis was 5 hours and 57
minutes. The TreeFlow analysis converged in 20,000 iterations, which took 3
hours and 3 minutes. This shows that variational inference has favourable
scaling for large phylogenetic datasets despite the more expensive gradient
computations required, and that TreeFlow’s slower phylogenetic likelihood
implementation performs well enough to be useful in real-world inference
scenarios.
## Benchmarks
The phylogenetic likelihood is the computation that dominates the
computational cost of model-based phylogenetic inference. We benchmark the
performance of our TensorFlow-based likelihood implementation against
specialized libraries. Since gradient computations are of equal importance to
likelihood evaluations in modern automatic inference regimes, we also
benchmark computation of derivatives of the phylogenetic likelihood, with
respect to both the continuous elements of the tree and the parameters of the
substitution model. A clear difference emerges in the implementation of
derivatives; TreeFlow’s are based on automatic differentiation while bespoke
libraries need analytically-derived gradients. Therefore, we do not
necessarily expect our implementation to be as fast as bespoke software, but
it does not rely on analytical derivation of gradient expressions for every
model and therefore automatically supports a wider range of models.
We compare the performance of TreeFlow’s likelihood implementation against
BEAGLE [4]. BEAGLE is a library for high-performance computation of phylogenetic
likelihoods. Recent versions of BEAGLE implement analytical gradients with
respect to tree branch lengths [32]. We use BEAGLE via the bito software
package [41], which provides a Python interface to BEAGLE and also numerically
computes derivatives with respect to substitution model parameters using a
finite differences approximation.
We also compare TreeFlow with a simple likelihood implementation [53] based on
another automatic differentiation framework, JAX [10]. In contrast to
TensorFlow’s function mode, which the benchmarked TreeFlow implementation
uses, JAX uses an eager execution model and is compatible with native Python
control flow.
Benchmarks are performed on simulated data. Simulation allows the generation
of a large number of data replicates with appropriate properties. Since we
want to investigate the scaling of likelihood and gradient calculations with
respect to the number of sequences, we simulate data for a range of sequence
counts. Sequence counts are selected as increasing powers of 2 to better
display asymptotic properties of the implementations. We simulate data with
sequence counts ranging from 32 to 2048. Trees are simulated under a
coalescent model for a given number of taxa, and then nucleotide sequences of
length 1000 are simulated under a fixed rate of evolution and a HKY
substitution model. Tree and sequence simulations are performed using BEAST 2
[9].
We benchmark 4 distinct computations on these datasets. Firstly, for each
replicate we compute the phylogenetic likelihood under a very simple model of
sequence evolution. This uses a Jukes-Cantor model of nucleotide substitution
and no model of site rate variation. Secondly, we calculate derivatives with
respect to branch lengths under this simple model. Thirdly, we compute the
likelihood under a more sophisticated substitution model, with a discretized
Weibull distribution with 4 rate categories to model site rate variation and a
GTR model of nucleotide substitution. We selected the Weibull distribution for
site rate variation since it is implemented in bito. Finally, we compute
derivatives with respect to both branch lengths and the parameters of the
substitution model (the 6 GTR relative substitution rates, 4 base nucleotide
frequencies, and the Weibull site rate shape parameter). Each computation is
performed 100 times on 10 simulated datasets.
Figure 8: Times from phylogenetic likelihood benchmark (100 evaluations for 1000 sites).

| Model | Computation | Method | Mean time (512 taxa) | log-log slope |
| --- | --- | --- | --- | --- |
| JC | Likelihood | TreeFlow | 6.23 | 1.13 |
| JC | Likelihood | bito/BEAGLE | 0.16 | 1.27 |
| JC | Likelihood | JAX | 168.79 | 1.15 |
| JC | Gradients | TreeFlow | 14.55 | 1.23 |
| JC | Gradients | bito/BEAGLE | 0.87 | 1.33 |
| JC | Gradients | JAX | 644.22 | 1.19 |
| GTR/Weibull | Likelihood | TreeFlow | 10.73 | 1.12 |
| GTR/Weibull | Likelihood | bito/BEAGLE | 0.58 | 1.40 |
| GTR/Weibull | Likelihood | JAX | 908.24 | 1.58 |
| GTR/Weibull | Gradients | TreeFlow | 36.14 | 1.34 |
| GTR/Weibull | Gradients | bito/BEAGLE | 14.74 | 1.41 |
| GTR/Weibull | Gradients | JAX | 2801.35 | 1.58 |
Table 1: Results of phylogenetic likelihood benchmark. Times for gradients for
the GTR/Weibull are highlighted as they are the most relevant computation for
gradient-based inference on real data.
Figure 8 shows the results of the benchmarks with a log scale on both axes.
For likelihood computation with both models, and for branch gradient
computation with the simple model, bito/BEAGLE is at least an order of
magnitude faster than TreeFlow. This is expected, as BEAGLE is a highly
optimized special-purpose library written in native code. The performance gap
narrows considerably when computing the gradients of the more complex
substitution model. bito
performs at least 2 likelihood evaluations for each additional parameter when
calculating the gradient with respect to substitution model parameters, while
the overhead for substitution model parameters with automatic differentiation
is minimal. We expect TreeFlow to surpass bito/BEAGLE for substitution models
with even more parameters (e.g. amino acid substitution models [2]), or those
where the number of parameters grows with the number of sequences, as in the
example below.
For typical real-world phylogenetic analyses, a model with many parameters,
such as the GTR/Weibull model in our benchmarks, is used. For modern Bayesian
inference methods such as variational inference and Hamiltonian Monte Carlo,
the gradient is the primary computation performed at each iteration. We
observe that TreeFlow’s times for this combination of model and computation
are within an order of magnitude of bito/BEAGLE’s, around 2-3 times slower.
Therefore, for applied analyses, the automatic differentiation-based
computations of TreeFlow present a reasonable alternative to specialized
software while simultaneously offering greater flexibility.
We also observe that the runtimes for TreeFlow are roughly an order of
magnitude less than those of the JAX-based implementation. This indicates that
the control flow constructs and execution model of TensorFlow are a good
choice for implementing tree-based computations compared to the eager
execution model of JAX.
Table 1 shows the coefficients obtained from fitting a linear model to the
benchmark times where the predictor is the log-transformed number of sequences
and the target is the log-transformed runtime. The slope estimates from these
fits provide a rough empirical estimate of the polynomial degree of the
computational scaling. The slope parameters for all TreeFlow computations are
well below 2 and indicate a roughly linear scaling with the size of the data.
For the JC benchmark, the only gradients computed are with respect to branch
lengths, so the whole computation for bito/BEAGLE is the analytical linear-
time branch gradient. TreeFlow’s scaling is certainly not worse in this case,
indicating that the TensorArray-based likelihood implementation enables
linear-time branch gradients.
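For reference, the slope estimate can be reproduced with an ordinary least-squares fit in log-log space; the runtimes below are illustrative placeholders, not the benchmark data:

```python
import numpy as np

taxa = np.array([32, 64, 128, 256, 512, 1024, 2048])
runtime = np.array([0.4, 0.9, 2.0, 4.6, 10.7, 24.0, 55.0])  # placeholder values

# The slope of log(runtime) against log(taxa) is a rough empirical estimate
# of the polynomial degree of the computational scaling.
slope, intercept = np.polyfit(np.log(taxa), np.log(runtime), deg=1)
print(f"empirical scaling exponent: {slope:.2f}")
```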
## Discussion
The combination of flexibility and scalability of libraries like TreeFlow has
the potential to be useful for rapid analysis of large modern genetic
datasets, such as those assembled for tracking infectious diseases [26]. For
this purpose, TreeFlow’s true power would be unlocked with the implementation
of improved modelling and inference tools for evolutionary rates and
population dynamics.
For principled dating analyses, uncertainty in evolutionary rate parameters
must be considered. Posterior distributions involving these parameters may
have significant correlation structure which would be ignored by variational
inference using a mean-field posterior approximation. These parameters
typically receive special treatment in MCMC sampling routines [16, 67]. As
observed in our analysis on the influenza dataset, the mean-field
approximation fails to accurately capture the posterior distribution on all
internal divergence time estimates. Implementing structured posterior
approximations for these models could substantially improve the reliability of
parameter estimates obtained using variational inference while scaling to
extremely large datasets.
Additionally, implementing flexible models for population dynamics, such as
nonparametric coalescent models [18, 43, 25], could provide valuable insights
from large datasets. These models would result in complex posterior
distributions, which would need to be accounted for in a variational inference
scheme. If these coalescent models could be made to work effectively in
TreeFlow, the probabilistic programming paradigm would allow for rapid
computational experimentation with novel models of time-varying population
parameters.
Finally, the most significant functionality for phylogenetic inference missing
from TreeFlow is inference of the tree topology. TreeFlow’s tree
representation could allow for the topology to become an estimated parameter,
such as in existing work on variational Bayesian phylogenetic inference [65].
Efficiently implementing the computations required for these algorithms in the
TensorFlow framework would certainly be a major computational challenge.
## Conclusion
TreeFlow is a software library that provides tools for statistical modelling
and inference on phylogenetic problems. Probabilistic programming provides
flexible model definition involving phylogenetic trees and molecular
sequences, and automatic differentiation enables the application of modern
scalable inference algorithms. TreeFlow’s automatic differentiation-based
implementations of core gradient computations provide reasonable performance
compared to specialized phylogenetic libraries. These building blocks have the
potential to be used in valuable phylogenetic analyses of large and complex
genetic datasets.
## Software Availability
TreeFlow is an open-source Python package. Instructions for installing
TreeFlow, and examples of using it as a library and command line application
can be found at https://github.com/christiaanjs/treeflow.
## References
* [1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. In 12th USENIX symposium on operating systems design and implementation (OSDI 16), pages 265–283, 2016.
* [2] Jun Adachi and Masami Hasegawa. Model of amino acid substitution in proteins encoded by mitochondrial DNA. Journal of Molecular Evolution, 42(4):459–468, 1996.
* [3] Luca Ambrogioni, Kate Lin, Emily Fertig, Sharad Vikram, Max Hinne, Dave Moore, and Marcel van Gerven. Automatic structured variational inference. In International Conference on Artificial Intelligence and Statistics, pages 676–684. PMLR, 2021.
* [4] Daniel L Ayres, Michael P Cummings, Guy Baele, Aaron E Darling, Paul O Lewis, David L Swofford, John P Huelsenbeck, Philippe Lemey, Andrew Rambaut, and Marc A Suchard. Beagle 3: Improved performance, scaling, and usability for a high-performance computing library for statistical phylogenetics. Systematic Biology, 68(6):1052–1061, 2019.
* [5] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a cpu and gpu math expression compiler. In Proceedings of the 9th Python in Science Conference, pages 1–7, 2010.
* [6] Eli Bingham, Jonathan P Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul Szerlip, Paul Horsfall, and Noah D Goodman. Pyro: Deep universal probabilistic programming. The Journal of Machine Learning Research, 20(1):973–978, 2019.
* [7] David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017.
* [8] Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT’2010, pages 177–186. Springer, 2010.
* [9] Remco Bouckaert, Timothy G Vaughan, Joëlle Barido-Sottani, Sebastián Duchêne, Mathieu Fourment, Alexandra Gavryushkina, Joseph Heled, Graham Jones, Denise Kühnert, Nicola De Maio, et al. Beast 2.5: An advanced software platform for bayesian evolutionary analysis. PLoS computational biology, 15(4):e1006650, 2019.
* [10] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018.
* [11] Bob Carpenter, Andrew Gelman, Matthew D Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. Stan: A probabilistic programming language. Journal of Statistical Software, 76(1), 2017.
* [12] Joshua V Dillon, Ian Langmore, Dustin Tran, Eugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt Hoffman, and Rif A Saurous. Tensorflow distributions. arXiv preprint arXiv:1711.10604, 2017.
* [13] Vu Dinh, Arman Bilge, Cheng Zhang, and Frederick A Matsen IV. Probabilistic path hamiltonian monte carlo. In International Conference on Machine Learning, pages 1009–1018. PMLR, 2017.
* [14] Alexei J Drummond and Remco R Bouckaert. Bayesian evolutionary analysis with BEAST. Cambridge University Press, 2015.
* [15] Alexei J Drummond, Kylie Chen, Fábio K Mendes, and Dong Xie. Linguaphylo: a probabilistic model specification language for reproducible phylogenetic analyses. bioRxiv, 2022.
* [16] Alexei J Drummond, Simon YW Ho, Matthew J Phillips, and Andrew Rambaut. Relaxed phylogenetics and dating with confidence. PLoS Biol, 4(5):e88, 2006.
* [17] Alexei J Drummond and Andrew Rambaut. Beast: Bayesian evolutionary analysis by sampling trees. BMC Evolutionary Biology, 7(1):1–8, 2007.
* [18] Alexei J Drummond, Andrew Rambaut, BETH Shapiro, and Oliver G Pybus. Bayesian coalescent inference of past population dynamics from molecular sequences. Molecular Biology and Evolution, 22(5):1185–1192, 2005.
* [19] Alexei J Drummond, Marc A Suchard, Dong Xie, and Andrew Rambaut. Bayesian phylogenetics with beauti and the beast 1.7. Molecular Biology and Evolution, 29(8):1969–1973, 2012.
* [20] Simon Duane, Anthony D Kennedy, Brian J Pendleton, and Duncan Roweth. Hybrid monte carlo. Physics Letters B, 195(2):216–222, 1987.
* [21] Sebastián Duchêne, Simon YW Ho, and Edward C Holmes. Declining transition/transversion ratios through time reveal limitations to the accuracy of nucleotide substitution models. BMC Evolutionary Biology, 15(1):1–10, 2015.
* [22] Joseph Felsenstein. Evolutionary trees from DNA sequences: a maximum likelihood approach. Journal of Molecular Evolution, 17(6):368–376, 1981.
* [23] Alexander A Fisher, Xiang Ji, Akihiko Nishimura, Philippe Lemey, and Marc A Suchard. Shrinkage-based random local clocks with scalable inference. arXiv preprint arXiv:2105.07119, 2021.
* [24] Mathieu Fourment and Aaron E Darling. Evaluating probabilistic programming and fast variational bayesian inference in phylogenetics. PeerJ, 7:e8272, 2019.
* [25] Mandev S Gill, Philippe Lemey, Nuno R Faria, Andrew Rambaut, Beth Shapiro, and Marc A Suchard. Improving bayesian population dynamics inference: a coalescent-based model for multiple loci. Molecular Biology and Evolution, 30(3):713–724, 2013.
* [26] James Hadfield, Colin Megill, Sidney M Bell, John Huddleston, Barney Potter, Charlton Callender, Pavel Sagulenko, Trevor Bedford, and Richard A Neher. Nextstrain: real-time tracking of pathogen evolution. Bioinformatics, 34(23):4121–4123, 2018.
* [27] Masami Hasegawa, Hirohisa Kishino, and Taka-aki Yano. Dating of the human-ape splitting by a molecular clock of mitochondrial DNA. Journal of Molecular Evolution, 22(2):160–174, 1985.
* [28] W. K. Hastings. Monte carlo sampling methods using markov chains and their applications. Biometrika, 57(1):97–109, 04 1970.
* [29] Sebastian Höhna, Michael J Landis, Tracy A Heath, Bastien Boussau, Nicolas Lartillot, Brian R Moore, John P Huelsenbeck, and Fredrik Ronquist. Revbayes: Bayesian phylogenetic inference using graphical models and an interactive model-specification language. Systematic Biology, 65(4):726–736, 2016.
* [30] John P Huelsenbeck and Fredrik Ronquist. Mrbayes: Bayesian inference of phylogenetic trees. Bioinformatics, 17(8):754–755, 2001.
* [31] Xiang Ji, Alexander A Fisher, Shuo Su, Jeffrey L Thorne, Barney Potter, Philippe Lemey, Guy Baele, and Marc A Suchard. Scalable bayesian divergence time estimation with ratio transformations. arXiv preprint arXiv:2110.13298, 2021.
* [32] Xiang Ji, Zhenyu Zhang, Andrew Holbrook, Akihiko Nishimura, Guy Baele, Andrew Rambaut, Philippe Lemey, and Marc A Suchard. Gradients do grow on trees: a linear-time o (n)-dimensional gradient for statistical phylogenetics. Molecular Biology and Evolution, 37(10):3047–3060, 2020.
* [33] Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
* [34] Thomas H Jukes, Charles R Cantor, et al. Evolution of protein molecules. Mammalian protein metabolism, 3:21–132, 1969.
* [35] Hirohisa Kishino, Jeffrey L Thorne, and William J Bruno. Performance of a divergence time estimation method under a probabilistic model of rate evolution. Molecular Biology and Evolution, 18(3):352–361, 2001.
* [36] Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M Blei. Automatic differentiation variational inference. The Journal of Machine Learning Research, 18(1):430–474, 2017.
* [37] Mary K Kuhner, Jon Yamato, and Joseph Felsenstein. Estimating effective population size and mutation rate from sequence data using metropolis-hastings sampling. Genetics, 140(4):1421–1430, 1995.
* [38] Ravin Kumar, Colin Carroll, Ari Hartikainen, and Osvaldo Martin. Arviz a unified library for exploratory analysis of bayesian models in python. Journal of Open Source Software, 4(33):1143, 2019.
* [39] David J Lunn, Andrew Thomas, Nicky Best, and David Spiegelhalter. Winbugs-a bayesian modelling framework: concepts, structure, and extensibility. Statistics and Computing, 10(4):325–337, 2000.
* [40] David JC MacKay et al. Information theory, inference and learning algorithms. Cambridge University Press, 2003.
* [41] Erick Matsen, Dave H Rich, Ognian Milanov, Mathieu Fourment, Seong-Hwan Jun, Hassan Nassif, Anna Kooperberg, Sho Kiami, Tanvi Ganapathy, Lucy Yang, et al. bito.
* [42] Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087–1092, 1953.
* [43] Vladimir N Minin, Erik W Bloomquist, and Marc A Suchard. Smooth skyride through a rough skyline: Bayesian coalescent-based inference of population dynamics. Molecular Biology and Evolution, 25(7):1459–1471, 2008.
* [44] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019.
* [45] Dan Piponi, Dave Moore, and Joshua V Dillon. Joint distributions for tensorflow probability. arXiv preprint arXiv:2001.11819, 2020.
* [46] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Francis Bach and David Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1530–1538, Lille, France, 07–09 Jul 2015. PMLR.
* [47] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
* [48] Fredrik Ronquist, Jan Kudlicka, Viktor Senderov, Johannes Borgström, Nicolas Lartillot, Daniel Lundén, Lawrence Murray, Thomas B Schön, and David Broman. Universal probabilistic programming offers a powerful approach to statistical phylogenetics. Communications Biology, 4(1):1–10, 2021.
* [49] John Salvatier, Thomas V Wiecki, and Christopher Fonnesbeck. Probabilistic programming in python using pymc3. PeerJ Computer Science, 2:e55, 2016.
* [50] Tanja Stadler. On incomplete sampling under birth–death models and connections to the sampling-based coalescent. Journal of theoretical biology, 261(1):58–66, 2009.
* [51] Alexandros Stamatakis. Raxml version 8: a tool for phylogenetic analysis and post-analysis of large phylogenies. Bioinformatics, 30(9):1312–1313, 2014.
* [52] Marc A Suchard and Andrew Rambaut. Many-core algorithms for statistical phylogenetics. Bioinformatics, 25(11):1370–1376, 2009.
* [53] Christiaan Swanepoel. phylojax.
* [54] Simon Tavaré et al. Some probabilistic and statistical problems in the analysis of DNA sequences. Lectures on Mathematics in the Life Sciences, 17(2):57–86, 1986.
* [55] Jeffrey L Thorne, Hirohisa Kishino, and Ian S Painter. Estimating the rate of evolution of the rate of molecular evolution. Molecular Biology and Evolution, 15(12):1647–1657, 1998.
* [56] Thu-Hien To, Matthieu Jung, Samantha Lycett, and Olivier Gascuel. Fast dating using least-squares criteria and algorithms. Systematic Biology, 65(1):82–97, 2016.
* [57] Timothy G Vaughan, Denise Kühnert, Alex Popinga, David Welch, and Alexei J Drummond. Efficient bayesian inference under the structured coalescent. Bioinformatics, 30(16):2272–2279, 2014.
* [58] Wangang Xie, Paul O Lewis, Yu Fan, Lynn Kuo, and Ming-Hui Chen. Improving marginal likelihood estimation for bayesian phylogenetic model selection. Systematic Biology, 60(2):150–160, 2011.
* [59] Ziheng Yang. Maximum likelihood phylogenetic estimation from DNA sequences with variable rates over sites: approximate methods. Journal of Molecular Evolution, 39(3):306–314, 1994.
* [60] Ziheng Yang. Paml 4: phylogenetic analysis by maximum likelihood. Molecular Biology and Evolution, 24(8):1586–1591, 2007.
* [61] Ziheng Yang, Rasmus Nielsen, Nick Goldman, and Anne-Mette Krabbe Pedersen. Codon-substitution models for heterogeneous selection pressure at amino acid sites. Genetics, 155(1):431–449, 2000.
* [62] Ziheng Yang and Anne D Yoder. Estimation of the transition/transversion rate bias and species sampling. Journal of Molecular Evolution, 48(3):274–283, 1999.
* [63] Yuan Yu, Martín Abadi, Paul Barham, Eugene Brevdo, Mike Burrows, Andy Davis, Jeff Dean, Sanjay Ghemawat, Tim Harley, Peter Hawkins, et al. Dynamic control flow in large-scale machine learning. In Proceedings of the Thirteenth EuroSys Conference, pages 1–15, 2018.
* [64] Cheng Zhang. Improved variational bayesian phylogenetic inference with normalizing flows. Advances in Neural Information Processing Systems, 33:18760–18771, 2020.
* [65] Cheng Zhang and Frederick A Matsen IV. Variational bayesian phylogenetic inference. In International Conference on Learning Representations, 2018.
* [66] Cheng Zhang and Frederick A Matsen IV. A variational approach to bayesian phylogenetic inference. arXiv preprint arXiv:2204.07747, 2022.
* [67] Rong Zhang and Alexei Drummond. Improving the performance of bayesian phylogenetic inference under relaxed clock models. BMC Evolutionary Biology, 20:1–28, 2020.
# Axially-deformed solution of the Skyrme-Hartree-Fock-Bogoliubov equations
using the transformed harmonic oscillator basis (IV) hfbtho (v4.0):
A new version of the program
P. Marević, N. Schunck, E. M. Ney, R. Navarro Pérez, M. Verriere, J. O’Neal

Nuclear and Chemical Science Division, Lawrence Livermore National Laboratory,
Livermore, CA 94551, USA; Department of Physics, Faculty of Science,
University of Zagreb, HR-10000 Zagreb, Croatia; Department of Physics and
Astronomy, CB 3255, University of North Carolina, Chapel Hill, North Carolina
27599-3255, USA; Department of Physics, San Diego State University, 5500
Campanile Drive, San Diego, California 92182-1233, USA; Mathematics and
Computer Science Division, Argonne National Laboratory, Lemont, IL 60439, USA
###### Abstract
We describe the new version 4.0 of the code hfbtho that solves the nuclear
Hartree-Fock-Bogoliubov problem by using the deformed harmonic oscillator
basis in cylindrical coordinates. In the new version, we have implemented the
restoration of rotational, particle number, and reflection symmetry for even-
even nuclei. The restoration of rotational symmetry does not require using
bases closed under rotation. Furthermore, we added the SeaLL1 functional and
improved the calculation of the Coulomb potential. Finally, we refactored the
code to facilitate maintenance and future developments.
###### keywords:
Nuclear many-body problem; Density functional theory; Energy density
functional theory; Self-consistent mean field; Hartree-Fock-Bogoliubov theory;
Finite-temperature Hartree-Fock-Bogoliubov theory; Skyrme interaction; Gogny
force; Pairing correlations; Pairing regularization; Collective inertia;
Harmonic oscillator; Transformed harmonic oscillator; Restoration of
symmetries; Angular momentum projection; Particle number projection; Shared
memory parallelism; Distributed memory parallelism.
††journal: Computer Physics Communications
PROGRAM SUMMARY
Program title: hfbtho v4.0
CPC Library link to program files: *
Licensing provisions: GPLv3
Programming language: Fortran 2003
Journal reference of previous version: R. N. Pérez, N. Schunck, R.-D. Lasseri,
C. Zhang and J. Sarich, Comput. Phys. Commun. 220 (2017) 363
Does the new version supersede the previous version: Yes
Reasons for the new version: This version adds new capabilities to restore
broken symmetries and determine the corresponding quantum numbers of even-even
nuclei.
Summary of revisions:
1. Angular momentum projection for even-even nuclei in a deformed basis;
2. Particle number projection for even-even nuclei in the quasiparticle basis;
3. Implementation of the SeaLL1 functional;
4. Expansion of the Coulomb potential onto Gaussians;
5. MPI-parallelization of a single hfbtho execution;
6. Code refactoring.
Nature of problem: hfbtho is a physics computer code that is used to model the
structure of the nucleus. It is an implementation of the energy density
functional (EDF) approach to atomic nuclei, where the energy of the nucleus is
obtained by integration over space of some phenomenological energy density,
which is itself a functional of the neutron and proton intrinsic densities. In
the present version of hfbtho, the energy density is derived either from the
zero-range Skyrme or the finite-range Gogny effective two-body interaction
between nucleons. Nuclear superfluidity is treated at the Hartree-Fock-
Bogoliubov (HFB) approximation. Constraints on the nuclear shape allow probing
the potential energy surface of the nucleus as needed, e.g., for the
description of shape isomers or fission. A local scale transformation of the
single-particle basis in which the HFB solutions are expanded provides a tool
to properly compute the structure of weakly-bound nuclei. Restoration of the
rotational, particle number, and reflection symmetry for even-even nuclei
enables recovering the quantum numbers that are lost at the HFB approximation.
Solution method: The program uses the axial harmonic oscillator (HO) or the
transformed harmonic oscillator (THO) single-particle basis to expand
quasiparticle wave functions. It iteratively diagonalizes the HFB Hamiltonian
based on generalized Skyrme-like energy densities and zero-range pairing
interactions or the finite-range Gogny force until a self-consistent solution
is found. Lagrange parameters are used to impose constraints on HFB solutions,
and their value is updated at each iteration from an approximation of the
quasiparticle random phase approximation (QRPA) matrix. Symmetry restoration
is implemented through standard projection techniques. Previous versions of
the program were presented in [1-3].
Additional comments including restrictions and unusual features:
Axial and time-reversal symmetries are assumed in HFB calculations;
$y$-simplex symmetry and even particle numbers are assumed in angular momentum
projection.
## References
* [1] M. V. Stoitsov, J. Dobaczewski, W. Nazarewicz, P. Ring, Axially deformed solution of the Skyrme-Hartree-Fock-Bogolyubov equations using the transformed harmonic oscillator basis. The program hfbtho (v1.66p), Comput. Phys. Commun. 167 (1) (2005) 43.
* [2] M. Stoitsov, N. Schunck, M. Kortelainen, N. Michel, H. Nam, E. Olsen, J. Sarich, S. Wild, Axially deformed solution of the Skyrme-Hartree-Fock-Bogolyubov equations using the transformed harmonic oscillator basis (II) hfbtho v2.00d: A new version of the program, Comput. Phys. Commun. 184 (6) (2013) 1592.
* [3] R. N. Perez, N. Schunck, R.-D. Lasseri, C. Zhang, J. Sarich, Axially deformed solution of the Skyrme–Hartree–Fock–Bogolyubov equations using the transformed harmonic oscillator basis (III) hfbtho (v3.00): A new version of the program, Comput. Phys. Commun. 220 (2017) 363.
## 1 Introduction
Over the past decades, the nuclear energy density functional (EDF) framework
has become a tool of choice for describing the properties of nuclear structure
and reactions across the entire nuclide chart [1, 2, 3, 4]. It closely
resembles density functional theory (DFT), a method widely used in condensed
matter physics and quantum chemistry, insofar as it employs the mean-field
approximation to map a complex many-body problem onto a computationally
feasible one-body problem. In nuclear physics, the EDF framework is typically
realized at two distinct levels. The single-reference energy density
functional (SR-EDF) method introduces relatively simple functionals of nucleon
densities and currents, describing the nuclear ground states in terms of
symmetry-breaking mean-field wave functions. Most of the EDF-based computer
programs available on the market correspond to different flavors of the SR-EDF
method; see, e.g., [5, 6, 7, 8, 9, 10] for some selected examples. However, a
more advanced description requires the inclusion of collective correlations
related to the restoration of broken symmetries and quantum shape
fluctuations. This is the basic tenet of the multi-reference energy density
functional (MR-EDF) method.
The previous versions of the hfbtho program are largely implementations of the
SR-EDF formalism in the axial harmonic oscillator (HO) basis or the
transformed harmonic oscillator (THO) basis [11, 12, 5]. The core of the
program is a solver for the self-consistent Hartree-Fock-Bogoliubov (HFB)
equation. While the initial release [11] was restricted to even-even nuclei
with Skyrme EDFs and contact pairing interactions, more recent versions
expanded the theoretical framework significantly: to describe parity-breaking
shapes, nuclei with an odd number of particles, and nuclei at finite temperature
[12]; to solve the HFB equation for the finite-range Gogny potentials, compute
the collective mass tensor and zero-point energy corrections, regularize the
pairing interaction, and compute properties of fission fragments [5].
Among the publicly available codes, MR-EDF capabilities include the
restoration of particle number symmetry in the canonical basis in hfbtho (all
versions) and the restoration of rotational, isospin, particle-number, and
reflection symmetries of HFB states in hfodd 3.06h [13]. Note that hfodd
projects either on total particle number $A$ or total isospin projection
$T_{z}$ but not separately on the number of protons $Z$ and neutrons $N$.
Compared to previous versions of hfbtho, the present release contains a
substantially expanded MR-EDF toolkit for symmetry restoration that is tailored for
large-scale applications of the MR-EDF framework. Specifically, the version
4.0 of hfbtho implements the restoration of rotational, particle number, and
reflection symmetry for even-even nuclei. These restorations can be performed
either independently (e.g., either the rotational and reflection symmetries
only or the particle number symmetry only), or they can be combined in the
joint restoration of all three types of quantum numbers (angular momentum,
particle number, and parity). In addition, our implementation of the angular
momentum restoration bypasses the need to use rotationally-invariant, closed
bases. Symmetry restoration can now be performed in the deformed (stretched)
HO basis typically employed in large-scale calculations of potential energy
surfaces.
In Section 2, we review the modifications introduced in this version of the
program. In Section 3, we give several numerical benchmarks for the new
capabilities. Finally, in Section 4, we discuss the new options available in
the input file and explain how to run the code.
## 2 Modifications introduced in version 4.0
In this section, we present the new features added to the code between
versions 3.00 and 4.0.
### 2.1 Restoration of Broken Symmetries
A module for restoration of broken symmetries is the main new feature of
version 4.0. In the following, we describe the underlying theoretical
framework in detail.
#### 2.1.1 General Framework
The HFB states break several symmetries of the nuclear Hamiltonian and
consequently do not carry the associated good quantum numbers. Since its first
published version, the hfbtho program has implemented the particle number
restoration in the canonical basis for even-even nuclei. The current version
includes a new module for the simultaneous restoration of rotational, particle
number, and reflection symmetry of the HFB states for even-even nuclei [1, 14,
15].
The main ingredients of symmetry-restoring calculations are kernels of the form
$\mathcal{O}_{\bm{q}\bm{q}}^{JMK;NZ;p}=\braket{\Phi_{\bm{q}}}{\hat{O}\hat{P}^{J}_{MK}\hat{P}^{N}\hat{P}^{Z}\hat{P}^{p}}{\Phi_{\bm{q}}}.$
(1)
Here, $\ket{\Phi_{\bm{q}}}$ is an HFB state at point $\bm{q}$ in the
collective space defined by the set of active constraints on the HFB solution,
while $\hat{O}$ is either the identity operator for the norm overlap kernel,
$\mathcal{O}_{\bm{q}\bm{q}}^{JMK;NZ;p}\equiv\mathcal{N}_{\bm{q}\bm{q}}^{JMK;NZ;p}$,
or the Hamiltonian operator for the Hamiltonian kernel,
$\mathcal{O}_{\bm{q}\bm{q}}^{JMK;NZ;p}\equiv\mathcal{H}_{\bm{q}\bm{q}}^{JMK;NZ;p}$.
The operator that projects an HFB state onto a state with good values of
angular momentum $J$ reads
$\hat{P}^{J}_{MK}=\frac{2J+1}{16\pi^{2}}\int
d\Omega\;D^{J*}_{MK}(\alpha,\beta,\gamma)\hat{R}(\alpha,\beta,\gamma),$ (2)
where $\alpha$, $\beta$, and $\gamma$ are the usual Euler angles,
$\int\,d\Omega\equiv\int_{0}^{2\pi}\,d\alpha\int_{0}^{\pi}\,d\beta\sin\beta\int_{0}^{4\pi}\,d\gamma$,
and $D^{J}_{MK}(\alpha,\beta,\gamma)$ is the Wigner $D$-matrix [16]. The
coordinate-space rotation operator reads
$\hat{R}(\alpha,\beta,\gamma)=e^{-i\alpha\hat{J}_{z}}e^{-i\beta\hat{J}_{y}}e^{-i\gamma\hat{J}_{z}}.$
(3)
Note that the conservation of number parity [17] allows reducing the
integration interval over $\gamma$ to $[0,2\pi]$. This has no practical
consequence in hfbtho since integrals over Euler angles $\alpha$ and $\gamma$
are trivial and can be carried out analytically due to the axial symmetry. In
addition, the current version of hfbtho computes kernels (1) for the identity
and the Hamiltonian operator only. For such scalar operators, only the $M=K=0$
components of the total angular momentum do not vanish identically.
Furthermore, the operator that projects an HFB state onto a state with a good
number of particles reads
$\hat{P}^{X}=\frac{1}{2\pi}\int_{0}^{2\pi}d\varphi\,e^{i(\hat{X}-X_{0})\varphi},$
(4)
where $X=N\,(Z)$ is a label referring to neutrons (protons),
$X_{0}=N_{0}\,(Z_{0})$ is the desired number of neutrons (protons), and
$\hat{X}=\hat{N}\,(\hat{Z})$ is the neutron (proton) number operator. In
practice, the integration interval over the gauge angle $\varphi$ can be
reduced to $[0,\pi]$ using the property of a good number parity of an HFB
state. The resulting integral is further discretized and particle number
projection is performed using the Fomenko expansion [18]
$\hat{P}^{X}=\frac{1}{N_{\varphi}}\sum_{l_{\tau}=1}^{N_{\varphi}}e^{i(\hat{X}-X_{0})\varphi_{l_{\tau}}},\quad\varphi_{l_{\tau}}=\frac{\pi}{N_{\varphi}}l_{\tau},$
(5)
where $\tau=n\,(p)$ for neutrons (protons) and $N_{\varphi}$ is the
corresponding number of gauge angle points which may in principle be different
for neutrons and protons.
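As a sketch of how this discretized projector is applied, the double gauge-angle sum below yields the particle-number decomposition weight of Eq. (12) further down; `overlap` is a placeholder for the rotated-kernel evaluation described in the following subsections:

```python
import numpy as np

def pnp_weight(overlap, N0, Z0, n_phi=9):
    """Fomenko-discretized gauge-angle average giving |c^{NZ}|^2.

    `overlap(phi_n, phi_p)` is a placeholder returning the rotated norm
    overlap O_q(0, phi_n, phi_p); the points are phi_l = pi * l / n_phi.
    """
    phis = np.pi * np.arange(1, n_phi + 1) / n_phi
    acc = 0.0 + 0.0j
    for pn in phis:
        for pp in phis:
            acc += np.exp(-1j * (N0 * pn + Z0 * pp)) * overlap(pn, pp)
    # The imaginary part cancels by symmetry; keep the real part.
    return (acc / n_phi ** 2).real
```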
Finally, the operator that projects an HFB state onto a state with good parity
reads
$\hat{P}^{p}=\frac{1}{2}\Big{(}1+p\hat{\Pi}\Big{)},$ (6)
where $p=+1\,(-1)$ for positive (negative) parity and $\hat{\Pi}$ is the
standard parity operator [19].
Combining the expressions for projection operators and assuming the same
number of gauge angle points for neutrons and protons, the kernels (1) can be
written as
$\displaystyle\begin{split}\mathcal{O}_{\bm{q}\bm{q}}^{J;NZ;p}&=\frac{2J+1}{2}\int_{0}^{\pi}d\beta\,\sin\beta\,d^{J*}_{00}(\beta)\\\
&\times\frac{1}{N_{\varphi}^{2}}\sum_{l_{n}=1}^{N_{\varphi}}\sum_{l_{p}=1}^{N_{\varphi}}e^{-iN_{0}\varphi_{l_{n}}}e^{-iZ_{0}\varphi_{l_{p}}}\\\
&\times\frac{1}{2}\Big{[}\mathcal{O}_{\bm{q}\bm{q}}(\beta,\varphi_{l_{n}},\varphi_{l_{p}})+p\mathcal{O}_{\bm{q}\bm{q}}^{\Pi}(\beta,\varphi_{l_{n}},\varphi_{l_{p}})\Big{]},\end{split}$
(7)
with the rotated kernels
$\displaystyle\mathcal{O}_{\bm{q}\bm{q}}(\beta,\varphi_{l_{n}},\varphi_{l_{p}})$
$\displaystyle\equiv\braket{\Phi_{\bm{q}}}{\hat{O}e^{-i\beta\hat{J}_{y}}e^{i\varphi_{l_{n}}\hat{N}}e^{i\varphi_{l_{p}}\hat{Z}}}{\Phi_{\bm{q}}},$
(8a)
$\displaystyle\mathcal{O}_{\bm{q}\bm{q}}^{\Pi}(\beta,\varphi_{l_{n}},\varphi_{l_{p}})$
$\displaystyle\equiv\braket{\Phi_{\bm{q}}}{\hat{O}e^{-i\beta\hat{J}_{y}}e^{i\varphi_{l_{n}}\hat{N}}e^{i\varphi_{l_{p}}\hat{Z}}\hat{\Pi}}{\Phi_{\bm{q}}}.$
(8b)
The expression for kernels can be further simplified by using the symmetries
of an HFB state. In particular, the anti-linear $y$-time-simplex operator
$\hat{S}_{y}^{T}=\hat{\Pi}\hat{T}e^{-i\pi\hat{J}_{y}}$ fixes a phase through a
symmetry transformation [20, 21, 15]
$\hat{S}_{y}^{T}\ket{\Phi_{\bm{q}}}=\ket{\Phi_{\bm{q}}}.$ (9)
Using the time-reversal symmetry, we then obtain the following relation for
the rotated kernels
$\displaystyle\mathcal{O}_{\bm{q}\bm{q}}^{\Pi}(\beta,\varphi_{l_{n}},\varphi_{l_{p}})=\mathcal{O}_{\bm{q}\bm{q}}(\pi-\beta,\varphi_{l_{n}},\varphi_{l_{p}}).$
(10)
This greatly facilitates calculations because only the rotated kernels
$\mathcal{O}_{\bm{q}\bm{q}}(\beta,\varphi_{l_{n}},\varphi_{l_{p}})$ need to be
evaluated explicitly. Moreover, since only diagonal kernels are considered in
this version of the code, the second subscript $\bm{q}$ can be dropped.
Therefore, the rotated kernels will simply be denoted as
$\mathcal{O}_{\bm{q}}(\beta,\varphi_{l_{n}},\varphi_{l_{p}})$.
The symmetry-restoring framework enables us to expand an HFB state
$\ket{\Phi_{\bm{q}}}$ into a basis of states with good quantum numbers
(angular momentum, particle number, parity) and to extract their respective
coefficients [17]. For example, in the case of the particle number
decomposition, we can write
$\ket{\Phi_{\bm{q}}}=\sum_{N}\sum_{Z}c_{\bm{q}}^{NZ}\ket{NZ},$ (11)
and the coefficients satisfy
$\big{|}c_{\bm{q}}^{NZ}\big{|}^{2}=\frac{1}{N_{\varphi}^{2}}\sum_{l_{n}=1}^{N_{\varphi}}\sum_{l_{p}=1}^{N_{\varphi}}e^{-iN_{0}\varphi_{l_{n}}}e^{-iZ_{0}\varphi_{l_{p}}}\mathcal{O}_{\bm{q}}(0,\varphi_{l_{n}},\varphi_{l_{p}}),$
(12)
with $\sum_{N}\sum_{Z}|c_{\bm{q}}^{NZ}|^{2}\\!=\\!1$. Similarly, a
decomposition onto states with good angular momenta and parity implies that
the coefficients satisfy
$\displaystyle\begin{split}\big{|}c_{\bm{q}}^{J;p}\big{|}^{2}&=\frac{2J+1}{2}\int_{0}^{\pi}d\beta\,\sin\beta\,d^{J*}_{00}(\beta)\\\
&\times\frac{1}{2}\Big{[}\mathcal{O}_{\bm{q}}(\beta,0,0)+p\mathcal{O}_{\bm{q}}(\pi-\beta,0,0)\Big{]},\end{split}$
(13)
with $\sum_{J}\sum_{p}|c_{\bm{q}}^{J;p}|^{2}\\!=\\!1$. Note that only
collective states obeying the natural spin-parity selection rule,
$p=(-1)^{J}$, are accessible within the present model. The coefficients of the
simultaneous expansion onto states with good angular momentum, particle
number, and parity are given by Eq. (7), i.e.,
$|c_{\bm{q}}^{J;NZ;p}|^{2}=\mathcal{O}_{\bm{q}\bm{q}}^{J;NZ;p}$. They satisfy
the sum rule $\sum_{J}\sum_{p}\sum_{N,Z}|c_{\bm{q}}^{J;NZ;p}|^{2}=1$. Finally,
the energy of a symmetry-restored state is calculated as
$E_{\bm{q}}^{J;NZ;p}=\frac{\mathcal{H}_{\bm{q}}^{J;NZ;p}}{\mathcal{N}_{\bm{q}}^{J;NZ;p}}.$
(14)
#### 2.1.2 Bases Not Closed Under Rotation
Numerous implementations of the symmetry-restoring framework (see Refs. [3, 4,
22] and references therein for some recent results) relied on the expansion of
HFB states in spherical HO bases that are closed under rotation. However, such
an approach becomes computationally intractable when describing extremely
heavy or deformed configurations like those appearing in studies of nuclear
fission or the structure of superheavy nuclei. In these cases, numerical
convergence can typically be achieved only by expanding HFB states in deformed
HO bases with incomplete oscillator shells. However, such bases are not closed
under rotation, and the conventional symmetry-restoring framework is
consequently inapplicable. (Alternatively, symmetry restoration can also be
performed with HFB states obtained in a coordinate-space representation [2].
To avoid the large computational cost associated with spatial rotations of HFB
states during the angular momentum projection, the relevant kernels are often
computed in the canonical basis. This can lead to similar difficulties as
using incomplete HO bases; see [23, 24, 25] for a discussion.)
The elegant solution to this hurdle was proposed almost three decades ago by
L. Robledo [26], who reformulated Wick’s theorem [27, 28] to encompass bases
not closed under rotation. The first implementations of the modified symmetry-
restoring framework were reported only very recently [29, 30]. Version 4.0 of
hfbtho is the first one to contain this capability. In particular, for the
case of bases not closed under rotation, the rotated norm overlap kernel for
particle type $\tau=n,p$ reads
$\mathcal{N}^{(\tau)}_{\bm{q}}(\bm{x}^{(\tau)})=\sqrt{\det\big{[}A_{\bm{q}}^{(\tau)}(\bm{x^{(\tau)}})\big{]}\det\big{[}R(\bm{x^{(\tau)}})\big{]}},$
(15)
where $\bm{x}^{(\tau)}\equiv\\{\beta,\varphi_{l_{\tau}}\\}$,
$R(\bm{x}^{(\tau)})$ is the total rotation matrix, and the
$A_{\bm{q}}^{(\tau)}(\bm{x}^{(\tau)})$ matrix reads
$A_{\bm{q}}^{(\tau)}(\bm{x}^{(\tau)})=U_{\bm{q}}^{(\tau)T}\big{[}R^{T}(\bm{x}^{(\tau)})\big{]}^{-1}U_{\bm{q}}^{(\tau)*}\\!+V_{\bm{q}}^{(\tau)T}R(\bm{x}^{(\tau)})V_{\bm{q}}^{(\tau)*}.$
(16)
Here, the Bogoliubov matrices $U_{\bm{q}}^{(\tau)}$, $V_{\bm{q}}^{(\tau)}$
correspond to the HFB solution $\ket{\Phi_{\bm{q}}}$ for particle $\tau$.
Since the HFB transformation does not mix protons and neutrons, the full
rotated norm overlap kernel is separable in isospin
$\mathcal{N}_{\bm{q}}(\beta,\varphi_{l_{n}},\varphi_{l_{p}})=\mathcal{N}_{\bm{q}}^{(\tau=n)}(\beta,\varphi_{l_{n}})\times\mathcal{N}_{\bm{q}}^{(\tau=p)}(\beta,\varphi_{l_{p}}).$
(17)
Moreover, in the case of a basis closed under rotation we have
$|\det[R(\bm{x}^{(\tau)})]|=1$, and the expression (15) reduces to the
conventional Onishi formula [31].
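In matrix form, Eqs. (15)-(16) translate almost directly into code. The sketch below evaluates the overlap for one particle species and leaves the branch of the complex square root unresolved; the sign ambiguity is fixed in practice as discussed at the end of Section 2.1.3:

```python
import numpy as np

def rotated_norm_overlap(U, V, R):
    """Rotated norm overlap, Eqs. (15)-(16), for one particle species.

    U, V are the Bogoliubov matrices and R the total rotation matrix in
    the single-particle basis; for bases not closed under rotation,
    det(R) differs from unity and enters the overlap explicitly.
    """
    A = U.T @ np.linalg.inv(R.T) @ U.conj() + V.T @ R @ V.conj()
    # np.emath.sqrt returns a complex result for negative real arguments,
    # leaving the branch (sign) choice to the caller.
    return np.emath.sqrt(np.linalg.det(A) * np.linalg.det(R))
```

The full kernel is then the product of the neutron and proton overlaps, as in Eq. (17).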
Furthermore, the rotated density and pairing tensors for particle type $\tau$
read
$\displaystyle\rho^{(\tau)}_{\bm{q}}(\bm{x}^{\tau})$
$\displaystyle=R({\bm{x}^{(\tau)}})V_{\bm{q}}^{(\tau)*}\Big{[}A^{(\tau)}_{\bm{q}}(\bm{x}^{(\tau)})\Big{]}^{-1}V_{\bm{q}}^{(\tau)T},$
(18a) $\displaystyle\kappa^{(\tau)}_{\bm{q}}(\bm{x}^{(\tau)})$
$\displaystyle=R(\bm{x}^{(\tau)})V_{\bm{q}}^{(\tau)*}\Big{[}A^{(\tau)}_{\bm{q}}(\bm{x}^{(\tau)})\Big{]}^{-1}U_{\bm{q}}^{(\tau)T},$
(18b) $\displaystyle\kappa^{*(\tau)}_{\bm{q}}(\bm{x}^{(\tau)})$
$\displaystyle=-R^{*}(\bm{x}^{(\tau)})U_{\bm{q}}^{(\tau)*}\Big{[}A^{(\tau)}_{\bm{q}}(\bm{x}^{(\tau)})\Big{]}^{-1}V_{\bm{q}}^{(\tau)T}.$
(18c)
The rotated Hamiltonian kernel
$\mathcal{H}_{\bm{q}}(\beta,\varphi_{l_{n}},\varphi_{l_{p}})$ is a functional
of the rotated density and pairing tensors; see Section 2.1.6 and Refs. [1, 2]
for more details.
#### 2.1.3 Structure of Matrices in the $y$-Simplex Basis
The rotation by an angle $\beta$ about the $y$-axis of the reference frame
breaks the axial symmetry of HFB solutions. Computations can thus be
facilitated by using a non-axially-symmetric, computationally-efficient
representation of the Bogoliubov matrices $U_{\bm{q}}^{(\tau)}$ and
$V_{\bm{q}}^{(\tau)}$. This is achieved by introducing the $y$-simplex basis.
##### The $y$-simplex Basis
The HO basis states $\ket{\alpha}$ are characterized by the set of quantum
numbers
$\\{\alpha\\}=\\{n_{z}^{\alpha},n_{\perp}^{\alpha},\Lambda^{\\!\alpha},\Sigma^{\alpha}\\}$,
where $n_{z}^{\alpha}$ and $n_{\perp}^{\alpha}$ represent the number of quanta
(nodes) in the $z-$ and the $r_{\perp}-$ direction, respectively, while
$\Lambda^{\\!\alpha}$ and
$\Sigma^{\alpha}(\equiv\ket{\uparrow},\ket{\downarrow})$ denote the components
of the orbital angular momentum and of the spin along the $z-$axis. Starting
from these initial basis states, it is straightforward to show that the linear
combinations
$\displaystyle\begin{split}\ket{n_{z}^{\alpha}n_{\perp}^{\alpha}\Lambda^{\\!\alpha};+}&=\frac{1}{\sqrt{2}}\Big{[}i\ket{n_{z}^{\alpha}n_{\perp}^{\alpha}\Lambda^{\\!\alpha}\\!\uparrow}+\ket{n_{z}^{\alpha}n_{\perp}^{\alpha}\\!-\\!\Lambda^{\\!\alpha}\\!\downarrow}\Big{]},\\\
\ket{n_{z}^{\alpha}n_{\perp}^{\alpha}\Lambda^{\\!\alpha};-}&=\frac{1}{\sqrt{2}}\Big{[}\ket{n_{z}^{\alpha}n_{\perp}^{\alpha}\Lambda^{\\!\alpha}\\!\uparrow}+i\ket{n_{z}^{\alpha}n_{\perp}^{\alpha}\\!-\\!\Lambda^{\\!\alpha}\\!\downarrow}\Big{]},\end{split}$
(19)
are eigenstates of the $y$-simplex operator $\hat{R}_{y}$ with eigenvalues of
$+i$ and $-i$, respectively. The $y$-simplex operator $\hat{R}_{y}$ is defined
as a rotation around the $y$-axis by an angle $\pi$, followed by the parity
transformation $\hat{\Pi}$
$\hat{R}_{y}=\hat{\Pi}\exp(-i\pi\hat{J}_{y}).$ (20)
The $y$-simplex basis can be used to reduce the computational cost by
exploiting symmetries of the problem at hand.
##### Bogoliubov Matrices
In the $y$-simplex basis, the Bogoliubov matrices acquire the block structure
$U^{(\tau)}_{\bm{q}}=\Bigg{(}\begin{array}[]{cc}u^{(\tau)}_{\bm{q}}&0\\\
0&u_{\bm{q}}^{(\tau)*}\end{array}\Bigg{)},\quad\quad
V^{(\tau)}_{\bm{q}}=\Bigg{(}\begin{array}[]{cc}0&-v_{\bm{q}}^{(\tau)*}\\\
v^{(\tau)}_{\bm{q}}&0\end{array}\Bigg{)}.$ (21)
In this expression, the basis states are organized in two blocks: the first
block comprises all states with an eigenvalue $+i$, while the second block
comprises all states with an eigenvalue $-i$. The transformation between the
components $k$ of Bogoliubov matrices in the $y$-simplex basis and the HO
basis reads
$\displaystyle
u_{\bm{q},k}^{(\tau)[n_{z}^{\alpha},n_{\perp}^{\alpha},\Omega^{\alpha}-\frac{1}{2}]}$
$\displaystyle=(+1)U_{\bm{q},k}^{(\tau)[n_{z}^{\alpha},n_{\perp}^{\alpha},\Omega^{\alpha}-\frac{1}{2},\Sigma^{\alpha}=+\frac{1}{2}]},$
(22a) $\displaystyle
u_{\bm{q},k}^{(\tau)[n_{z}^{\alpha},n_{\perp}^{\alpha},-\Omega^{\alpha}-\frac{1}{2}]}$
$\displaystyle=(+i)U_{\bm{q},k}^{(\tau)[n_{z}^{\alpha},n_{\perp}^{\alpha},\Omega^{\alpha}+\frac{1}{2},\Sigma^{\alpha}=-\frac{1}{2}]},$
(22b) $\displaystyle
v_{\bm{q},k}^{(\tau)[n_{z}^{\alpha},n_{\perp}^{\alpha},\Omega^{\alpha}-\frac{1}{2}]}$
$\displaystyle=(-1)V_{\bm{q},k}^{(\tau)[n_{z}^{\alpha},n_{\perp}^{\alpha},\Omega^{\alpha}-\frac{1}{2},\Sigma^{\alpha}=+\frac{1}{2}]},$
(22c) $\displaystyle
v_{\bm{q},k}^{(\tau)[n_{z}^{\alpha},n_{\perp}^{\alpha},-\Omega^{\alpha}-\frac{1}{2}]}$
$\displaystyle=(-i)V_{\bm{q},k}^{(\tau)[n_{z}^{\alpha},n_{\perp}^{\alpha},\Omega^{\alpha}+\frac{1}{2},\Sigma^{\alpha}=-\frac{1}{2}]}.$
(22d)
Using these expressions, one can construct $U^{(\tau)}_{\bm{q}}$ and
$V^{(\tau)}_{\bm{q}}$ matrices in the $y$-simplex basis from the HFB solutions
expressed in the HO basis.
##### Rotation Matrix
The total rotation operator corresponds to the combination of a spatial
rotation for an angle $\beta$ and a gauge space rotation for an angle
$\varphi_{l_{\tau}}$. In the $y$-simplex basis, the rotation matrix acquires
the following block structure
$R(\bm{x}^{(\tau)})=e^{i\varphi_{l_{\tau}}}\Bigg{(}\begin{array}[]{cc}r(\beta)&0\\\
0&r^{*}(\beta)\end{array}\Bigg{)},$ (23)
where the matrix elements $r_{\alpha\gamma}(\beta)$ of the $r(\beta)$ matrix
read
$\displaystyle\begin{split}r_{\alpha\gamma}(\beta)&=\frac{1}{2}\cos\Big{(}\frac{\beta}{2}\Big{)}\braket{n_{z}^{\alpha}n_{\perp}^{\alpha}\Lambda^{\\!\alpha}}{e^{-i\beta\hat{L}_{y}}}{n_{z}^{\gamma}n_{\perp}^{\gamma}\Lambda^{\\!\gamma}}\\\
&+\frac{1}{2}\cos\Big{(}\frac{\beta}{2}\Big{)}\braket{n_{z}^{\alpha}n_{\perp}^{\alpha}\\!-\\!\Lambda^{\\!\alpha}}{e^{-i\beta\hat{L}_{y}}}{n_{z}^{\gamma}n_{\perp}^{\gamma}\\!-\\!\Lambda^{\\!\gamma}}\\\
&+\frac{i}{2}\sin\Big{(}\frac{\beta}{2}\Big{)}\braket{n_{z}^{\alpha}n_{\perp}^{\alpha}\Lambda^{\\!\alpha}}{e^{-i\beta\hat{L}_{y}}}{n_{z}^{\gamma}n_{\perp}^{\gamma}\\!-\\!\Lambda^{\\!\gamma}}\\\
&+\frac{i}{2}\sin\Big{(}\frac{\beta}{2}\Big{)}\braket{n_{z}^{\alpha}n_{\perp}^{\alpha}\\!-\\!\Lambda^{\\!\alpha}}{e^{-i\beta\hat{L}_{y}}}{n_{z}^{\gamma}n_{\perp}^{\gamma}\Lambda^{\\!\gamma}}.\end{split}$
(24)
Matrix elements of the $e^{-i\beta\hat{L}_{y}}$ operator are evaluated using
the prescription of Ref. [32].
##### Calculation of Overlaps
Using the block structure of the Bogoliubov matrices and of the total rotation
matrix, we can recast the $A^{(\tau)}_{\bm{q}}(\bm{x}^{(\tau)})$ matrix in the
$y$-simplex basis as
$A_{\bm{q}}^{(\tau)}(\bm{x}^{(\tau)})=\Bigg{(}\begin{array}[]{cc}{a_{\bm{q}}^{(\tau)++}}(\bm{x}^{(\tau)})&0\\\
0&{a_{\bm{q}}^{(\tau)--}}(\bm{x}^{(\tau)})\end{array}\Bigg{)},$ (25)
where
$\displaystyle{a_{\bm{q}}^{(\tau)++}}(\bm{x}^{(\tau)})$
$\displaystyle=e^{-i\varphi_{l_{\tau}}}a_{U_{\bm{q}}}^{(\tau)}(\beta)+e^{i\varphi_{l_{\tau}}}a_{V_{\bm{q}}}^{(\tau)}(\beta),$
(26a) $\displaystyle{a_{\bm{q}}^{(\tau)--}}(\bm{x}^{(\tau)})$
$\displaystyle=e^{-i\varphi_{l_{\tau}}}\Big{[}a_{U_{\bm{q}}}^{(\tau)}(\beta)\Big{]}^{*}+e^{i\varphi_{l_{\tau}}}\Big{[}a_{V_{\bm{q}}}^{(\tau)}(\beta)\Big{]}^{*},$
(26b)
and
$\displaystyle a_{U_{\bm{q}}}^{(\tau)}(\beta)$
$\displaystyle=\big{[}u_{\bm{q}}^{(\tau)}\big{]}^{T}\big{[}r^{T}(\beta)\big{]}^{-1}u_{\bm{q}}^{(\tau)*},$
(27a) $\displaystyle a_{V_{\bm{q}}}^{(\tau)}(\beta)$
$\displaystyle=\big{[}v_{\bm{q}}^{(\tau)}\big{]}^{T}r^{*}(\beta)v_{\bm{q}}^{(\tau)*}.$
(27b)
The rotated norm overlap kernel then reads
$\mathcal{N}_{\bm{q}}^{(\tau)}(\bm{x}^{(\tau)})=\sqrt{\det\Bigg{[}\Bigg{(}\begin{array}[]{cc}{n_{\bm{q}}^{(\tau)++}}(\bm{x}^{(\tau)})&0\\\
0&{n_{\bm{q}}^{(\tau)--}}(\bm{x}^{(\tau)})\end{array}\Bigg{)}\Bigg{]}},$ (28)
with
$\displaystyle{n_{\bm{q}}^{(\tau)++}}(\bm{x}^{(\tau)})$
$\displaystyle=e^{i\varphi_{l_{\tau}}}{a_{\bm{q}}^{(\tau)++}}(\bm{x}^{(\tau)})r(\beta),$
(29a) $\displaystyle{n_{\bm{q}}^{(\tau)--}}(\bm{x}^{(\tau)})$
$\displaystyle=e^{i\varphi_{l_{\tau}}}{a_{\bm{q}}^{(\tau)--}}(\bm{x}^{(\tau)})r^{*}(\beta).$
(29b)
Since the two $y$-simplex blocks yield identical overlaps, the sign of the
total overlap is fixed by the sign of any of them.
##### Rotated Density and Pairing Tensors
In the $y$-simplex basis, the density matrix acquires a diagonal block
structure
$\rho_{\bm{q}}^{(\tau)}(\bm{x}^{(\tau)})=\Bigg{(}\begin{array}[]{cc}\rho_{{\bm{q}}}^{(\tau)++}(\bm{x}^{(\tau)})&0\\\
0&\rho_{{\bm{q}}}^{(\tau)--}(\bm{x}^{(\tau)})\end{array}\Bigg{)},$ (30)
where
$\displaystyle\rho_{{\bm{q}}}^{(\tau)++}(\bm{x}^{(\tau)})$
$\displaystyle=e^{i\varphi_{l_{\tau}}}r(\beta)v_{\bm{q}}^{(\tau)}\Big{[}{a_{\bm{q}}^{(\tau)--}}(\bm{x}^{(\tau)})\Big{]}^{-1}v_{\bm{q}}^{(\tau)\dagger},$
(31a) $\displaystyle\rho_{\bm{q}}^{(\tau)--}(\bm{x}^{(\tau)})$
$\displaystyle=e^{i\varphi_{l_{\tau}}}r^{*}(\beta)v_{\bm{q}}^{(\tau)*}\Big{[}{a_{\bm{q}}^{(\tau)++}}(\bm{x}^{(\tau)})\Big{]}^{-1}v_{\bm{q}}^{(\tau)T}.$
(31b)
On the other hand, the pairing tensor acquires an off-diagonal block structure
$\kappa^{(\tau)}_{\bm{q}}(\bm{x}^{(\tau)})=\Bigg{(}\begin{array}[]{cc}0&\kappa_{\bm{q}}^{(\tau)+-}(\bm{x}^{(\tau)})\\\
\kappa_{\bm{q}}^{(\tau)-+}(\bm{x}^{(\tau)})&0\end{array}\Bigg{)},$ (32)
where
$\displaystyle\kappa_{\bm{q}}^{(\tau)+-}(\bm{x}^{(\tau)})$
$\displaystyle=-e^{i\varphi_{l_{\tau}}}r(\beta)v_{\bm{q}}^{(\tau)}\Big{[}{a_{\bm{q}}^{(\tau)--}}(\bm{x}^{(\tau)})\Big{]}^{-1}u_{\bm{q}}^{(\tau)\dagger},$
(33a) $\displaystyle\kappa_{\bm{q}}^{(\tau)-+}(\bm{x}^{(\tau)})$
$\displaystyle=e^{i\varphi_{l_{\tau}}}r^{*}(\beta)v_{\bm{q}}^{(\tau)*}\Big{[}{a_{\bm{q}}^{(\tau)++}}(\bm{x}^{(\tau)})\Big{]}^{-1}u_{\bm{q}}^{(\tau)T}.$
(33b)
Similarly,
$\kappa^{*(\tau)}_{\bm{q}}(\bm{x}^{(\tau)})=\Bigg{(}\begin{array}[]{cc}0&\kappa_{\bm{q}}^{*(\tau)+-}(\bm{x}^{(\tau)})\\\
\kappa_{\bm{q}}^{*(\tau)-+}(\bm{x}^{(\tau)})&0\end{array}\Bigg{)},$ (34)
with
$\displaystyle\kappa_{\bm{q}}^{*(\tau)+-}(\bm{x}^{(\tau)})$
$\displaystyle=-e^{-i\varphi_{l_{\tau}}}r^{*}(\beta)u_{\bm{q}}^{(\tau)*}\Big{[}{a_{\bm{q}}^{(\tau)++}}(\bm{x}^{(\tau)})\Big{]}^{-1}v_{\bm{q}}^{(\tau)T},$
(35a) $\displaystyle\kappa_{\bm{q}}^{*(\tau)-+}(\bm{x}^{(\tau)})$
$\displaystyle=e^{-i\varphi_{l_{\tau}}}r(\beta)u_{\bm{q}}^{(\tau)}\Big{[}{a_{\bm{q}}^{(\tau)--}}(\bm{x}^{(\tau)})\Big{]}^{-1}v_{\bm{q}}^{(\tau)\dagger}.$
(35b)
#### 2.1.4 Making Use of the Symmetries
The expansion in the $y$-simplex basis enables us to reduce the computational
cost by making all matrices block-diagonal. The computational cost can further
be reduced by exploiting the symmetries in rotational angle $\beta$ and gauge
angle $\varphi_{l_{\tau}}$:
1. For reflection-symmetric configurations ($q_{30}=0$), all quantities are
symmetric around $\beta=\pi/2$. Consequently, the projection interval can be
reduced to $\beta\\!\in\\![0,\pi/2]$. This feature is automatically
implemented for all reflection-symmetric configurations.
2. The projection interval in gauge angle $\varphi_{l_{\tau}}$ can always be
reduced to $\varphi_{l_{\tau}}\\!\in\\![0,\pi]$ due to the number-parity
symmetry of an HFB state. In addition, using symmetries of the two simplex
blocks, we have
$\displaystyle\mathcal{N}^{(\tau)}_{\bm{q}}(\beta,\pi-\varphi_{l_{\tau}})$
$\displaystyle=\Big{[}\mathcal{N}^{(\tau)}_{\bm{q}}(\beta,\varphi_{l_{\tau}})\Big{]}^{*},$
(36a) $\displaystyle\rho_{\bm{q}}^{(\tau)++}(\beta,\pi-\varphi_{l_{\tau}})$
$\displaystyle=\Big{[}\rho_{\bm{q}}^{(\tau)--}(\beta,\varphi_{l_{\tau}})\Big{]}^{*},$
(36b) $\displaystyle\rho_{\bm{q}}^{(\tau)--}(\beta,\pi-\varphi_{l_{\tau}})$
$\displaystyle=\Big{[}\rho_{\bm{q}}^{(\tau)++}(\beta,\varphi_{l_{\tau}})\Big{]}^{*},$
(36c) $\displaystyle\kappa_{\bm{q}}^{(\tau)+-}(\beta,\pi-\varphi_{l_{\tau}})$
$\displaystyle=-[\kappa_{\bm{q}}^{(\tau)-+}(\beta,\varphi_{l_{\tau}})]^{*},$
(36d) $\displaystyle\kappa^{(\tau)-+}_{\bm{q}}(\beta,\pi-\varphi_{l_{\tau}})$
$\displaystyle=-\Big{[}\kappa^{(\tau)+-}_{\bm{q}}(\beta,\varphi_{l_{\tau}})\Big{]}^{*},$
(36e) $\displaystyle\kappa_{\bm{q}}^{*(\tau)+-}(\beta,\pi-\varphi_{l_{\tau}})$
$\displaystyle=-[\kappa_{\bm{q}}^{*(\tau)-+}(\beta,\varphi_{l_{\tau}})]^{*},$
(36f) $\displaystyle\kappa^{*(\tau)-+}_{\bm{q}}(\beta,\pi-\varphi_{l_{\tau}})$
$\displaystyle=-\Big{[}\kappa^{*(\tau)+-}_{\bm{q}}(\beta,\varphi_{l_{\tau}})\Big{]}^{*}.$
(36g)
Consequently, only quantities within the interval
$\varphi_{l_{\tau}}\in[0,\pi/2]$ are explicitly calculated.
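As an illustration of how relation (36a) halves the number of explicit kernel evaluations over the gauge angle, consider the following sketch; `kernel(phi)` is again a placeholder for the explicit evaluation:

```python
import numpy as np

def norm_kernel_on_fomenko_points(kernel, n_phi=8):
    """Evaluate N_q(beta, phi_l) explicitly only for phi_l <= pi/2 and
    recover the remaining points from Eq. (36a),
    N(beta, pi - phi) = N(beta, phi)^*."""
    phis = np.pi * np.arange(1, n_phi + 1) / n_phi
    vals = np.empty(n_phi, dtype=complex)
    for l in range(n_phi):
        mirror = n_phi - 2 - l  # index such that phis[mirror] = pi - phis[l]
        if phis[l] <= np.pi / 2 or mirror < 0:
            vals[l] = kernel(phis[l])        # explicit evaluation
        else:
            vals[l] = np.conj(vals[mirror])  # recovered by symmetry
    return phis, vals
```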
#### 2.1.5 Densities in the Coordinate-Space Representation
The expressions (18a) - (18c) for the rotated (transition) density and
pairing tensors are written in the configuration space, that is, the
quantities $U_{\bm{q}}^{(\tau)}$, $V_{\bm{q}}^{(\tau)}$, etc., are matrices.
When using Skyrme EDFs, the coordinate-space representation is especially
useful.
##### General Expressions
In the coordinate-space representation, the full one-body density matrix for
particle type $\tau$ can be written as
$\displaystyle\begin{split}\rho_{\bm{q}}^{(\tau)}(\bm{r}\sigma,\bm{r^{\prime}}\sigma^{\prime})&=\frac{1}{2}\rho_{\bm{q}}^{(\tau)}(\bm{r},\bm{r^{\prime}})\delta_{\sigma\sigma^{\prime}}\\\
&+\frac{1}{2}\sum_{\mu}\braket{\sigma}{\hat{\sigma}_{\mu}}{\sigma^{\prime}}s_{\bm{q},\mu}^{(\tau)}(\bm{r},\bm{r^{\prime}}),\end{split}$
(37)
where $\rho_{\bm{q}}^{(\tau)}(\bm{r},\bm{r^{\prime}})$ is the non-local one-
body particle density
$\rho_{\bm{q}}^{(\tau)}(\bm{r},\bm{r^{\prime}})=\sum_{\sigma}\rho_{\bm{q}}^{(\tau)}(\bm{r}\sigma,\bm{r^{\prime}}\sigma)$
(38)
and $s_{\bm{q},\mu}^{(\tau)}(\bm{r},\bm{r^{\prime}})$ is the $\mu$ component
of the non-local one-body spin density
$s_{\bm{q},\mu}^{(\tau)}(\bm{r},\bm{r^{\prime}})=\sum_{\sigma\sigma^{\prime}}\rho_{\bm{q}}^{(\tau)}(\bm{r}\sigma,\bm{r^{\prime}}\sigma^{\prime})\braket{\sigma^{\prime}}{\sigma_{\mu}}{\sigma}.$
(39)
These non-local densities can be used to generate an auxiliary set of local
densities that will appear in the expression for the energy density
functional. In particular, the local particle density
$\rho_{\bm{q}}^{{(\tau)}}({\bm{r}})$, the local spin density
$\bm{s}_{\bm{q}}^{(\tau)}(\bm{r})$, the kinetic energy density
$\tau_{\bm{q}}^{(\tau)}(\bm{r})$, the spin kinetic energy density
$\bm{T}_{\bm{q}}^{(\tau)}(\bm{r})$, the current density
$\bm{j}_{\bm{q}}^{(\tau)}(\bm{r})$, and the spin current density
$\mathsf{J}_{\bm{q}}^{(\tau)}(\bm{r})$ read
$\displaystyle\rho_{\bm{q}}^{(\tau)}(\bm{r})$
$\displaystyle=\rho_{\bm{q}}^{(\tau)}(\bm{r},\bm{r}),$ (40a)
$\displaystyle\bm{s}_{\bm{q}}^{(\tau)}(\bm{r})$
$\displaystyle=\bm{s}_{\bm{q}}^{(\tau)}(\bm{r},\bm{r}),$ (40b)
$\displaystyle\tau_{\bm{q}}^{(\tau)}(\bm{r})$
$\displaystyle=\nabla\cdot\nabla^{\prime}\rho_{\bm{q}}^{(\tau)}(\bm{r},\bm{r^{\prime}})\rvert_{\bm{r^{\prime}}=\bm{r}},$
(40c) $\displaystyle T_{\bm{q},\mu}^{(\tau)}(\bm{r})$
$\displaystyle=\nabla\cdot\nabla^{\prime}s_{\bm{q},\mu}^{(\tau)}(\bm{r},\bm{r^{\prime}})\rvert_{\bm{r^{\prime}}=\bm{r}},$
(40d) $\displaystyle\bm{j}_{\bm{q}}^{(\tau)}(\bm{r})$
$\displaystyle=\frac{1}{2i}(\nabla-\nabla^{\prime})\rho_{\bm{q}}^{(\tau)}(\bm{r},\bm{r^{\prime}})\rvert_{\bm{r^{\prime}}=\bm{r}},$
(40e) $\displaystyle J_{\bm{q},\mu\nu}^{(\tau)}(\bm{r})$
$\displaystyle=\frac{1}{2i}(\nabla_{\mu}-\nabla^{\prime}_{\mu})s_{\bm{q},\nu}^{(\tau)}(\bm{r},\bm{r^{\prime}})\rvert_{\bm{r^{\prime}}=\bm{r}}.$
(40f)
Furthermore, the non-local pairing densities for particle type $\tau$ are
defined through the corresponding pairing tensors as
$\displaystyle\tilde{\rho}_{\bm{q}}^{(\tau)}(\bm{r}\sigma,\bm{r^{\prime}}\sigma^{\prime})$
$\displaystyle=(-2\sigma^{\prime})\kappa_{\bm{q}}^{(\tau)}(\bm{r}\sigma,\bm{r^{\prime}}\\!-\\!\sigma^{\prime}),$
(41a)
$\displaystyle\tilde{\rho}_{\bm{q}}^{*(\tau)}(\bm{r}\sigma,\bm{r^{\prime}}\sigma^{\prime})$
$\displaystyle=(-2\sigma^{\prime})\kappa_{\bm{q}}^{*(\tau)}(\bm{r}\sigma,\bm{r^{\prime}}\\!-\\!\sigma^{\prime}).$
(41b)
They can be equivalently expanded as
$\displaystyle\begin{split}\tilde{\rho}_{\bm{q}}^{(\tau)}(\bm{r}\sigma,\bm{r^{\prime}}\sigma^{\prime})&=\frac{1}{2}\tilde{\rho}_{\bm{q}}^{(\tau)}(\bm{r},\bm{r^{\prime}})\delta_{\sigma\sigma^{\prime}}\\\
&+\frac{1}{2}\sum_{\mu}\braket{\sigma}{\hat{\sigma}_{\mu}}{\sigma^{\prime}}\tilde{s}_{\bm{q},\mu}^{(\tau)}(\bm{r},\bm{r^{\prime}}).\end{split}$
(42)
However, only local pairing densities will be considered in the pairing term
of the energy density functional
$\displaystyle\tilde{\rho}_{\bm{q}}^{(\tau)}(\bm{r})$
$\displaystyle=\tilde{\rho}_{\bm{q}}^{(\tau)}(\bm{r},\bm{r}),$ (43a)
$\displaystyle\tilde{\rho}_{\bm{q}}^{*(\tau)}(\bm{r})$
$\displaystyle=\tilde{\rho}_{\bm{q}}^{*(\tau)}(\bm{r},\bm{r}).$ (43b)
Formally, equations (40a) - (40f) and (43a) - (43b) look identical regardless
of whether
$\rho_{\bm{q}}^{(\tau)}(\bm{r}\sigma,\bm{r^{\prime}}\sigma^{\prime})$ is the
diagonal one-body density matrix,
$\rho_{\bm{q}}^{(\tau)}(\bm{r}\sigma,\bm{r}^{\prime}\sigma^{\prime})\equiv\frac{\braket{\Phi_{\bm{q}}}{c^{\dagger}(\bm{r}^{\prime}\sigma^{\prime}\tau)c(\bm{r}\sigma\tau)}{\Phi_{\bm{q}}}}{\braket{\Phi_{\bm{q}}}{\Phi_{\bm{q}}}}$
(44)
or the rotated (transition) one-body density,
$\rho_{\bm{q}}^{(\tau)}(\bm{r}\sigma,\bm{r^{\prime}}\sigma^{\prime};\eta)\equiv\frac{\braket{\Phi_{\bm{q}}}{c^{\dagger}(\bm{r}^{\prime}\sigma^{\prime}\tau)c(\bm{r}\sigma\tau)\mathcal{R}[\eta]}{\Phi_{\bm{q}}}}{\braket{\Phi_{\bm{q}}}{\mathcal{R}[\eta]}{\Phi_{\bm{q}}}},$
(45)
where $c^{\dagger}(\bm{r}^{\prime}\sigma^{\prime}\tau)$ and
$c(\bm{r}\sigma\tau)$ are the creation and the annihilation operator for
particle $\tau$ corresponding to the single-particle basis of choice,
$\mathcal{R}$ is the transformation (rotation) operator related to the
symmetry being restored, and $\eta$ denotes a set of real numbers
parametrizing the elements of the symmetry group(s) related to the
transformation $\mathcal{R}$ (that is, in our case,
$\eta\equiv\bm{x}^{(\tau)}$). The main difference is that, for the diagonal
one-body density matrix, all local densities are real-valued if axial symmetry
is enforced. On the other hand, the densities stemming from the latter matrix are
generally complex-valued [33]. For completeness, we give the explicit
expressions for the densities and currents (40a) - (40f) and (43a) - (43b) in
A.
##### Time-Odd Densities and Symmetry Restoration
Within the HFB theory, the local densities $\rho_{\bm{q}}^{(\tau)}$,
$\tau_{\bm{q}}^{(\tau)}$, and $\mathsf{J}_{\bm{q}}^{(\tau)}$ are even, while
$\bm{s}_{\bm{q}}^{(\tau)}$, $\bm{T}_{\bm{q}}^{(\tau)}$, and
$\bm{j}_{\bm{q}}^{(\tau)}$ are odd under the time-reversal transformation
[34]. When the HFB state $\ket{\Phi_{\bm{q}}}$ in (44) is time-even, as is the
case for even-even nuclei at the SR-EDF level, the
$\rho_{\bm{q}}^{(\tau)}(\bm{r}\sigma,\bm{r}^{\prime}\sigma^{\prime})$ matrix
is time-even as well. Consequently, one can show that in such cases
$\bm{s}_{\bm{q}}^{(\tau)}(\bm{r})=\bm{T}_{\bm{q}}^{(\tau)}(\bm{r})=\bm{j}_{\bm{q}}^{(\tau)}(\bm{r})=0$
and the corresponding energy contributions vanish identically. Furthermore,
blocking calculations for odd nuclei in hfbtho are implemented in the equal
filling approximation [35], which enforces the conservation of time-reversal
symmetry. Therefore, the time-odd densities do not contribute in this case
either.
However, the situation is generally different for transition densities of Eq.
(45), such as the gauge- and Euler-rotated densities appearing at the MR-EDF
level [33]. Most importantly, the transition densities are generally not
Hermitian. Consequently, even if the HFB state is time-even, the time-odd
densities and the corresponding energy contributions may not vanish
identically. In the particular case of particle number projection (PNP), one
can show that the one-body density matrix is symmetric in the oscillator basis
and that, as a result, the spin density transforms under time reversal as
$\hat{T}\bm{s}_{\bm{q},\mu}^{(\tau)}(\bm{r},\bm{r}^{\prime})\\!=\\!-\bm{s}_{\bm{q},\mu}^{(\tau)}(\bm{r},\bm{r}^{\prime})$.
This property ensures that the spin density vanishes identically when the
reference state is time-even. However, this result is specific to the case of
PNP alone. For the angular momentum projection (AMP) or the combined PNP and
AMP, all time-odd densities are generally non-zero and contribute to the
projected energy (or any other observable).
#### 2.1.6 Rotated Energy Density Functional
##### Rotated Hamiltonian Kernel
The rotated Hamiltonian kernel is a functional of the rotated density and
rotated pairing tensors. It corresponds to a spatial integral of the rotated
energy density functional
$\mathcal{H}_{\bm{q}}(\bm{x})[\rho,\kappa,\kappa^{*}]=\int
d^{3}\bm{r}\,\mathcal{E}_{\bm{q}}(\bm{r};\bm{x})[\rho,\kappa,\kappa^{*}],$
(46)
where $\bm{x}\equiv\\{\bm{x^{(\tau=n)}},\bm{x^{(\tau=p)}}\\}$. Version 4.0 of
hfbtho implements the restoration of symmetries for Skyrme-based EDFs only.
The total EDF can be decomposed into the particle-hole (Skyrme) part and the
particle-particle (pairing) part
$\mathcal{E}_{\bm{q}}(\bm{r};\bm{x})=\mathcal{E}_{\bm{q}}^{\text{Sky}}(\bm{r};\bm{x})+\mathcal{E}_{\bm{q}}^{\text{pair}}(\bm{r};\bm{x}),$
(47)
where
$\mathcal{E}_{\bm{q}}^{\text{Sky}}(\bm{r};\bm{x})=\mathcal{E}_{\bm{q}}^{\text{kin}}(\bm{r};\bm{x})+\mathcal{E}_{\bm{q}}^{\text{Cou}}(\bm{r};\bm{x})+\mathcal{E}_{\bm{q}}^{\text{pot}}(\bm{r};\bm{x}).$
(48)
Note that functional dependencies on the rotated density and pairing tensors
were dropped for compactness on each side of Eqs. (47) and (48). The kinetic
term simply reads
$\displaystyle\mathcal{E}_{\bm{q}}^{\text{kin}}(\bm{r};\bm{x})$
$\displaystyle=\sum_{\tau=n,p}\frac{\hbar^{2}}{2m}\tau_{\bm{q}}^{(\tau)}(\bm{r};\bm{x}).$
(49a)
The Coulomb term can be decomposed into the direct and the exchange part,
$\mathcal{E}_{\bm{q}}^{\text{Cou}}(\bm{r};\bm{x})=\mathcal{E}_{\bm{q}}^{\text{Cou},\text{dir}}(\bm{r};\bm{x})+\mathcal{E}_{\bm{q}}^{\text{Cou},\text{exc}}(\bm{r};\bm{x})$.
The direct contribution is calculated as
$\mathcal{E}_{\bm{q}}^{\text{Cou},\text{dir}}(\bm{r};\bm{x})=\frac{e^{2}}{2}\int\,d^{3}\bm{r^{\prime}}\frac{\rho_{\bm{q}}^{(p)}(\bm{r};\bm{x})\rho_{\bm{q}}^{(p)}(\bm{r^{\prime}})}{|\bm{r}-\bm{r^{\prime}}|},$
(50)
while the exchange contribution is calculated in the local Slater
approximation
$\mathcal{E}_{\bm{q}}^{\text{Cou},\text{exc}}(\bm{r};\bm{x})=-\frac{3e^{2}}{4}\left(\frac{3}{\pi}\right)^{1/3}\Big{[}\rho_{\bm{q}}^{(p)}(\bm{r};\bm{x})\Big{]}^{4/3}.$
(51)
Note that the pairing contribution of the Coulomb interaction has been omitted
and the Coulomb potential is computed with the non-rotated density to save
computational time. The resulting error is less than 100 keV on the $J=10$
state of Table 2.
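As a quick numerical illustration of Eq. (51), the Slater term is a pointwise function of the proton density. The following Python sketch (illustrative only, not the code's internal routine) assumes the standard value $e^{2}\approx 1.44$ MeV fm and a few hypothetical mesh values of the density in fm${}^{-3}$.

import numpy as np

# Sketch of the Slater approximation to the Coulomb exchange energy
# density, Eq. (51); rho_p in fm^-3, e2 = e^2 in MeV fm (assumed value).
def coulomb_exchange_slater(rho_p, e2=1.43997):
    return -0.75 * e2 * (3.0 / np.pi) ** (1.0 / 3.0) * rho_p ** (4.0 / 3.0)

print(coulomb_exchange_slater(np.array([0.02, 0.05, 0.08])))  # MeV fm^-3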
Furthermore, the Skyrme pseudopotential term can also be decomposed into two
contributions
$\mathcal{E}_{\bm{q}}^{\text{pot}}(\bm{r};\bm{x})=\sum_{t=0,1}\Big{[}\mathcal{E}_{\bm{q},t}^{\text{pot},\text{even}}(\bm{r};\bm{x})+\mathcal{E}_{\bm{q},t}^{\text{pot},\text{odd}}(\bm{r};\bm{x})\Big{]},$
(52)
where the former is built from time-even densities and currents only, while
the latter is built from time-odd densities and currents only. Of course, both
contributions are themselves time-even by construction. Furthermore, the
summation over $t$ in Eq. (52) reflects the coupling of neutron and proton
densities and currents into the isoscalar ($t=0$) and the isovector ($t=1$)
channel, i.e.
$\displaystyle\begin{split}\rho_{\bm{q},0}(\bm{r};\bm{x})&=\rho_{\bm{q}}^{(n)}(\bm{r};\bm{x})+\rho_{\bm{q}}^{(p)}(\bm{r};\bm{x}),\\\
\rho_{\bm{q},1}(\bm{r};\bm{x})&=\rho_{\bm{q}}^{(n)}(\bm{r};\bm{x})-\rho_{\bm{q}}^{(p)}(\bm{r};\bm{x}),\end{split}$
(53)
and equivalently for other densities and currents. The time-even contribution
to the EDF then reads
$\displaystyle\begin{split}\mathcal{E}_{\bm{q},t}^{\text{pot},\text{even}}(\bm{r};\bm{x})&=C_{\bm{q},t}^{\rho\rho}(\bm{r};\bm{x})\rho_{\bm{q},t}^{2}(\bm{r};\bm{x})\\\
&+C_{t}^{\rho\Delta\rho}\rho_{\bm{q},t}(\bm{r};\bm{x})\Delta\rho_{\bm{q},t}(\bm{r};\bm{x})\\\
&+C_{t}^{\rho\tau}\rho_{\bm{q},t}(\bm{r};\bm{x})\tau_{\bm{q},t}(\bm{r};\bm{x})\\\
&+C_{t}^{\rho\nabla
J}\rho_{\bm{q},t}(\bm{r};\bm{x})\nabla\cdot\mathsf{\bm{J}}_{\bm{q},t}(\bm{r};\bm{x})\\\
&+C_{t}^{JJ}\sum_{\mu\nu}J_{\bm{q},t,\mu\nu}(\bm{r};\bm{x})J_{\bm{q},t,\mu\nu}(\bm{r};\bm{x}),\end{split}$
(54)
and the time-odd contribution reads
$\displaystyle\begin{split}\mathcal{E}_{\bm{q},t}^{\text{pot},\text{odd}}(\bm{r};\bm{x})&=C_{\bm{q},t}^{ss}(\bm{r};\bm{x})\bm{s}_{\bm{q},t}^{2}(\bm{r};\bm{x})\\\
&+C_{t}^{s\Delta
s}\bm{s}_{\bm{q},t}(\bm{r};\bm{x})\Delta\bm{s}_{\bm{q},t}(\bm{r};\bm{x})\\\
&+C_{t}^{sj}\bm{j}^{2}_{\bm{q},t}(\bm{r};\bm{x})\\\ &+C_{t}^{s\nabla
j}\bm{s}_{\bm{q},t}(\bm{r};\bm{x})\cdot\Big{(}\nabla\times\bm{j}_{\bm{q},t}(\bm{r};\bm{x})\Big{)}\\\
&+C_{t}^{sT}\bm{s}_{\bm{q},t}(\bm{r};\bm{x})\cdot\bm{T}_{\bm{q},t}(\bm{r};\bm{x}).\end{split}$
(55)
Note that the coupling constants $C_{\bm{q},t}^{\rho\rho}(\bm{r};\bm{x})$ and
$C_{\bm{q},t}^{ss}(\bm{r};\bm{x})$ are density-dependent. Furthermore, the
last terms in Eqs. (54) and (55) represent tensor contributions and are set to
zero by construction in a number of Skyrme EDFs. The full expressions for the coupling constants $C_{t}$ in terms of the $(t,x)$ parameters of the Skyrme EDF are given in Appendix B.
Finally, the pairing term reads
$\mathcal{E}_{\bm{q}}^{\text{pair}}(\bm{r};\bm{x})=\sum_{\tau=n,p}C_{\bm{q}}^{\text{pair}(\tau)}(\bm{r},\bm{x})\tilde{\rho}_{\bm{q}}^{(\tau)}(\bm{r};\bm{x})\tilde{\rho}_{\bm{q}}^{*(\tau)}(\bm{r};\bm{x}),$
(56)
with
$C_{\bm{q}}^{\text{pair}(\tau)}(\bm{r},\bm{x})=\frac{V^{(\tau)}_{0}}{4}\left[1-V^{(\tau)}_{1}\left(\frac{\rho_{\bm{q}}(\bm{r};\bm{x})}{\rho_{c}}\right)\right],$
(57)
where $V^{(\tau)}_{0}$ is the pairing strength for particle $\tau$,
$V^{(\tau)}_{1}$ controls the nature of pairing between the pure volume
($V^{(\tau)}_{1}=0$) and the pure surface ($V^{(\tau)}_{1}=1$) interaction,
and $\rho_{c}=0.16$ fm${}^{-3}$ is the saturation density of nuclear matter.
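For illustration, Eq. (57) reduces to a one-line function of the local density. The Python sketch below is a hypothetical transcription (not the code's internal routine), evaluated with the volume pairing strength quoted later in Section 3.1.

import numpy as np

# Sketch of the density-dependent pairing coupling, Eq. (57);
# V1 = 0 gives pure volume pairing and V1 = 1 pure surface pairing.
def pairing_coupling(rho, v0, v1, rho_c=0.16):
    return 0.25 * v0 * (1.0 - v1 * rho / rho_c)

# Volume pairing (V1 = 0) with V0 = -190 MeV: a constant coupling.
print(pairing_coupling(np.array([0.04, 0.08, 0.16]), v0=-190.0, v1=0.0))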
##### Rotated Hamiltonian Kernel of Density-Dependent Terms
Nearly all parameterizations of Skyrme and Gogny EDFs include a density-
dependent two-body term. This term has a strongly repulsive character and was
originally introduced to reproduce the saturation property of the nuclear
interaction. However, since it is not linked to a genuine Hamiltonian
operator, its contribution to the rotated Hamiltonian kernel is ambiguous. In
fact, this contribution can be determined only by introducing an additional
prescription [36, 37]. The choice of prescription will influence the
calculated projected energies and can therefore be considered as yet another
parameter of a density-dependent EDF.
A common choice is the mixed density prescription
$\rho^{(\tau)}_{\bm{q},\text{mix}}(\bm{r};\beta,\varphi_{l_{\tau}})=\frac{\braket{\Phi_{\bm{q}}}{\hat{\rho}^{(\tau)}(\bm{r})e^{-i\beta\hat{J}_{y}}e^{i\varphi_{l_{\tau}}\hat{\tau}}}{\Phi_{\bm{q}}}}{\braket{\Phi_{\bm{q}}}{\Phi_{\bm{q}}}},$
(58)
where $\hat{\rho}^{(\tau)}(\bm{r})$ is the one-body density operator for
particle type $\tau$ at point $\bm{r}$. This prescription is motivated by the
expression for the Hamiltonian kernel of density-independent interactions
based on the generalized Wick theorem. Moreover, it is the only prescription
on the market satisfying all the consistency requirements [36]. Most
importantly, even though the mixed density (58) is generally complex, the
resulting projected energies are always real and invariant under symmetry
transformations. Nevertheless, if a density-dependent term contains a non-
integer power of density, the corresponding energy contribution is generally
ill-defined. This issue is essentially insurmountable and can be circumvented
only by using density-dependent terms with integer powers of density or a
different density prescription. A possible alternative is the projected
density prescription
$\\!\rho^{(\tau)}_{\bm{q},\text{proj}}(\bm{r};\beta)=\frac{\braket{\Phi_{\bm{q}}}{\hat{\rho}^{(\tau)}(\bm{r})e^{-i\beta\hat{J}_{y}}\hat{P}^{X}}{\Phi_{\bm{q}}}}{\braket{\Phi_{\bm{q}}}{e^{-i\beta\hat{J}_{y}}\hat{P}^{X}}{\Phi_{\bm{q}}}},\\!$
(59)
which is real by construction. Unfortunately, it yields non-physical results
when used in restoration of spatial symmetries, such as the rotational or
reflection symmetry [37]. Nevertheless, a hybrid approach is possible in which
the mixed density prescription is used when restoring spatial symmetries,
while the projected density prescription is used when restoring the particle
number symmetry. Such an approach has been routinely employed in MR-EDF
calculations with Gogny EDFs by the Madrid group [4].
The Skyrme EDFs included in the current implementation contain two density-
dependent terms: (i) the volume term proportional to $\rho^{\alpha}(\bm{r})$,
where $\alpha$ can be either integer or non-integer depending on the EDF, and
(ii) the Coulomb exchange term proportional to $[\rho^{(p)}(\bm{r})]^{4/3}$.
In addition, the pairing interaction is proportional to $\rho(\bm{r})$, except
in the case of the pure volume pairing. Version 4.0 of hfbtho implements
the mixed density prescription in restoration of the rotational, reflection,
and particle number symmetry. However, the code enables choosing the projected
density prescription in particle number projection for the volume term with
non-integer $\alpha$ and the Coulomb exchange term.
### 2.2 HFBTHO Library
The source code has been largely refactored to facilitate maintenance and future developments. This refactoring included modularizing the code base, removing obsolescent Fortran statements, and generalizing the use of Fortran 2003 constructs. In each module, variables, functions, and subroutines are now explicitly declared as either private or public. Furthermore, arguments passed to each function and subroutine have the intent(in/out/inout) attribute. The
internal structure of the code has also been reorganized in order to produce
an hfbtho library.
Compiling the program generates the following three objects:
* 1.
A Fortran executable called hfbtho_main. The call sequence of the program has
been modified to provide more flexibility while maintaining backward
compatibility; refer to Sec. 5.2 for a short description.
* 2.
A static library libhfbtho.a. This library provides, among others, the routine
Main_Program() with the following call sequence
Subroutine Main_Program(
filename_hfbtho,filename_unedf, &
my_comm_world,my_comm_team, &
my_n_teams,my_team_color, &
toggle_output,filename_output, &
filename_dat,filename_binary)
This routine will execute a full hfbtho calculation, possibly across different
MPI ranks. Its arguments are the following:
* (a)
filename_hfbtho: the name of the input data file containing the Namelists.
Default: hfbtho_NAMELIST.dat;
* (b)
filename_unedf: the name of the input data file containing the parameters of
the EDF.
Default: hfbtho_UNEDF.dat;
* (c)
my_comm_world: the MPI world communicator, typically MPI_COMM_WORLD. When
compiling the code without MPI support (USE_MPI = 0), this argument is
inactive;
* (d)
my_comm_team: the MPI communicator used to break the MPI processes into teams,
each of which handles a given hfbtho calculation. Currently, distributed
parallelism through MPI is only used when restoring broken symmetries. Without
MPI support, this argument is inactive;
* (e)
my_n_teams: the number of teams in the calculation. Without MPI support, this
argument is inactive;
* (f)
my_team_color: the team "color" of the MPI process, i.e., the unique ID number
of the team to which the process has been assigned. Without MPI support, this
argument is inactive;
* (g)
toggle_output: if equal to 0, then no ASCII output is recorded on file; if
equal to 1, the two files filename_output and filename_dat described below are
written on disk;
* (h)
filename_output: the name of the ASCII output file where the results of the
calculation are written.
Default: hfbtho.out;
* (i)
filename_dat: the name of the ASCII output file where extended results of the
calculations are written. Extended results include the self-consistent loop,
observables, quasiparticle energies, equivalent single-particle energies, and
Nilsson labels.
Default: thoout.dat;
* (j)
filename_binary: the name of the binary file where the code will store the
data needed to restart the iterations.
Default: hfbtho_output.hel.
* 3.
A Python3 binding. The precise name of the binding will depend on the user’s
system, the Python version, and the Fortran compiler. Assuming the binding is
(re)named hfbtho_library.so, it can be used directly from a Python environment
and provides access to the Main_Program() routine. For example:
from hfbtho_library import Main_Program
or
import hfbtho_library
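A hypothetical serial run through the binding could then look as follows. The exact argument passing (in particular for strings and communicators) depends on the f2py-generated interface on the user's system, so this is only a sketch; the file names are the defaults listed above, and the MPI-related arguments are set to dummy values since they are inactive without MPI support.

from hfbtho_library import Main_Program

# Sketch of a serial hfbtho run; the argument order is assumed to follow
# the Fortran call sequence of Main_Program() given above.
Main_Program(
    "hfbtho_NAMELIST.dat",  # filename_hfbtho
    "hfbtho_UNEDF.dat",     # filename_unedf
    0,                      # my_comm_world (inactive without MPI)
    0,                      # my_comm_team (inactive without MPI)
    1,                      # my_n_teams (inactive without MPI)
    0,                      # my_team_color (inactive without MPI)
    1,                      # toggle_output: write the ASCII output files
    "hfbtho.out",           # filename_output
    "thoout.dat",           # filename_dat
    "hfbtho_output.hel")    # filename_binary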
### 2.3 Other changes
##### SeaLL1 Functional
The SeaLL1 EDF [38] is now available in the code. As a reminder, this
functional reads
$\displaystyle\begin{split}\mathcal{E}_{\mathrm{SeaLL1}}(\bm{r})=&\frac{\hbar^{2}}{2m}\Big{(}\tau^{(n)}(\bm{r})+\tau^{(p)}(\bm{r})\Big{)}\\\
+&\sum_{j=0}^{2}\Big{(}a_{j}\rho_{0}^{5/3}(\bm{r})+b_{j}\rho_{0}^{2}(\bm{r})+c_{j}\rho_{0}^{7/3}(\bm{r})\Big{)}~{}\beta^{2j}\\\
+&\eta_{s}\sum_{\tau=n,p}\frac{\hbar^{2}}{2m}|\nabla\rho^{(\tau)}(\bm{r})|^{2}+W_{0}~{}\bm{J}_{0}(\bm{r})\\!\cdot\\!\nabla\rho_{0}(\bm{r})\\\
+&\frac{e^{2}}{2}\int
d^{3}\bm{r}^{\prime}\frac{\rho^{(p)}(\bm{r})\rho^{(p)}(\bm{r}^{\prime})}{|\bm{r}-\bm{r}^{\prime}|}-\frac{3e^{2}}{4}\left(\frac{3}{\pi}\right)^{1/3}\Big{[}\rho^{(p)}(\bm{r})\Big{]}^{4/3}\\\
+&\sum_{\tau=n,p}g_{\mathrm{eff}}^{(\tau)}(\bm{r})|\tilde{\rho}^{(\tau)}(\bm{r})|^{2}.\end{split}$
(60)
The quantity $g_{\mathrm{eff}}^{(\tau)}(\bm{r})$ is the renormalized pairing
strength which is obtained after regularizing a volume pairing interaction of
the form $g^{(\tau)}(\bm{r})=g^{(\tau)}$ [39, 40]; see [5] for details about
the implementation of the regularization procedure. The SeaLL1 EDF is fully characterized by $11$ parameters ($\\{a_{j},b_{j},c_{j}\\}_{j=0,1,2},\eta_{s},W_{0}$) in the particle-hole channel and $2$ parameters in the particle-particle (pairing) channel ($g^{(n)}$ and $g^{(p)}$, with $g^{(n)}=g^{(p)}=g_{0}$ for SeaLL1). Note that, like the UNEDFn
functionals, SeaLL1 specifies both the particle-hole and the pairing channel.
Figure 1: Particle number projection in the quasiparticle basis for the
$\braket{\hat{Q}_{20}}=1$ b configuration in 50Cr. (a): The PNP energy as a
function of the number of gauge angles $N_{\varphi}$. The dashed horizontal
line denotes the fully converged solution ($N_{\varphi}=99$). (b): The
decomposition of an HFB state onto different numbers of neutrons and protons
for $N_{\varphi}=15$.
##### Exact Coulomb
In previous versions of hfbtho, the direct (Hartree) term of the Coulomb potential was calculated using the substitution method [41], the exchange (Fock) term was calculated in the Slater approximation, and the pairing term was neglected. As discussed extensively in [12], the substitution method can be numerically unstable because of aliasing errors. In the current version, we have leveraged the capability to compute mean-field and pairing energies from finite-range two-body Gaussian potentials, introduced in version 3.00, to implement an "exact" calculation of the direct, exchange, and pairing terms of the Coulomb potential. In particular, we follow the technique implemented in [42] and discussed in [43], exploiting the identity
$\displaystyle\begin{split}\frac{1}{r}&=\frac{2}{\sqrt{\pi}}\int_{0}^{+\infty}d\alpha\,e^{-\alpha^{2}r^{2}}\\\
&=\frac{2}{L\sqrt{\pi}}\int_{0}^{1}d\xi\,(1-\xi^{2})^{-3/2}\exp\left(-\frac{\xi^{2}r^{2}}{L^{2}(1-\xi^{2})}\right),\end{split}$
(61)
where we used the change of variable $\alpha=\frac{\xi}{L}(1-\xi^{2})^{-1/2}$
and $L$ stands for the larger of the two oscillator lengths,
$L=\max(b_{z},b_{\perp})$. The second integral can be efficiently computed
with Gauss-Legendre quadrature. If $\omega_{i}$ and $\xi_{i}$ are the weights
and the nodes of Gauss-Legendre quadrature, then we can write
$\frac{1}{r}=\sum_{i=1}^{N_{c}}A_{i}e^{-a_{i}r^{2}},$ (62)
with $A_{i}=\frac{2\omega_{i}}{L\sqrt{\pi}}(1-\xi_{i}^{2})^{-3/2}$ and
$a_{i}=\frac{\xi_{i}^{2}}{L^{2}(1-\xi_{i}^{2})}$.
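To make the construction concrete, the following Python sketch builds the coefficients $A_{i}$ and $a_{i}$ from Gauss-Legendre nodes mapped to $[0,1]$ and checks the expansion of Eq. (62) against $1/r$; the oscillator length $L$ below is an arbitrary illustrative value.

import numpy as np

# Sketch of the Gaussian expansion of 1/r, Eq. (62): Gauss-Legendre
# nodes x_i and weights w_i on [-1, 1] are mapped to the interval [0, 1].
def coulomb_gaussians(n_leg, L):
    x, w = np.polynomial.legendre.leggauss(n_leg)
    xi = 0.5 * (x + 1.0)   # nodes on [0, 1]
    om = 0.5 * w           # rescaled weights
    A = 2.0 * om / (L * np.sqrt(np.pi)) * (1.0 - xi**2) ** (-1.5)
    a = xi**2 / (L**2 * (1.0 - xi**2))
    return A, a

A, a = coulomb_gaussians(14, L=2.0)
for r in (0.5, 1.0, 2.0, 5.0):   # r in fm
    print(r, 1.0 / r, np.sum(A * np.exp(-a * r**2)))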
##### Overwrite Mode
The new version of the code provides an option to use the information
contained in the binary hfbtho_output.hel file to overwrite some of the user-
defined inputs. This option is activated by setting the energy functional to
READ (instead of the usual SLY4, SKM*, etc.). In this case, the code will
overwrite (i) all the parameters of the EDF, (ii) the pairing cut-off, (iii)
the activation/deactivation of non-standard terms such as the center-of-mass
correction, tensor terms, or pairing regularization, (iv) the parameters of
the oscillator basis such as the maximal number of shells and oscillator
lengths. The code will then redefine the full HO basis to be consistent with
the one on file.
##### Bugfix of Blocking Calculations
In all versions of hfbtho since 2.00d [12], there was a bug in the calculation of blocked states when the "automatic" mode is activated. In this mode, the code determines and computes all possible blocking configurations within a $2$ MeV energy window around the Fermi level; see Section 4.2 of [12] for details. In practice, the code loops over all $N$ candidate configurations. Occasionally, one of these configurations may diverge, e.g., when the particle number condition cannot be enforced. When this happened for a configuration $1\leq k<N$, the code would simply exit the loop without trying to compute the remaining configurations $k<k^{\prime}\leq N$. Consequently, the results of
the converged calculations were correct but some potentially valid
configurations were not computed. In calculations near the ground state of
stable nuclei, this situation occurs very rarely; in calculations of very
neutron-rich or very deformed nuclei, it may happen more frequently. This bug
is fixed in the current version of the code.
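Schematically, the fix amounts to skipping a failing candidate instead of leaving the loop; the following Python pseudocode merely mirrors the behavior described above and is not the actual Fortran source.

# Schematic of the blocking-loop fix: a divergent candidate is now
# skipped ("continue") where the old code left the loop ("break").
def run_blocking_candidates(candidates, solve):
    results = {}
    for k, configuration in enumerate(candidates):
        try:
            results[k] = solve(configuration)
        except RuntimeError:  # e.g., particle number condition fails
            continue          # old behavior: break
    return results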
## 3 Benchmarks and Accuracy
### 3.1 Particle Number Projection
As the first illustrative example, we perform the particle number projection
for a range of quadrupole-deformed configurations in 50Cr. Well-converged
solutions are obtained by expanding the HFB states in a spherical HO basis
with $N_{0}=8$ shells and the oscillator length $b_{0}=1.7621858$ fm. The SIII
parametrization of the Skyrme EDF [44] is used, alongside a volume
($V_{1}^{(\tau)}=0.0$) contact pairing interaction [39] with a $60$ MeV
quasiparticle cutoff and pairing strengths
$V_{0}^{(n)}\\!=\\!V_{0}^{(p)}\\!=\\!-190.0$ MeV. In addition, we employ the
mixed density prescription.
#### 3.1.1 Convergence and Particle Number Decomposition
We start by testing the convergence of PNP energies
[$E_{\mathbf{q}}^{\text{PNP}}\\!\equiv\\!E_{\mathbf{q}}^{NZ}$, Eq. (14)] and
decomposing an HFB state onto different numbers of neutrons and protons
[$|c_{\mathbf{q}}^{NZ}|^{2}$, Eq. (12)]. The quadrupole moment of the
reference HFB state is constrained to $\braket{\hat{Q}_{20}}\\!=\\!1$ b, the
dipole and the octupole moment are constrained to zero, while higher multipole
moments are determined self-consistently. Figure 1(a) shows the corresponding
PNP energy as a function of the number of gauge angles $N_{\varphi}$. An
excellent agreement with the fully converged solution (represented by the
dashed horizontal line and computed for $N_{\varphi}=99$) is obtained for
$N_{\varphi}=15$. The convergence pattern will generally vary for different HFB states, but no more than $N_{\varphi}\\!=\\!15$ gauge angles should be needed for most practical purposes.
Furthermore, Fig. 1(b) shows the decomposition of the same HFB state onto
different numbers of neutrons and protons. A pronounced maximum is found at
the correct number of particles, $|c^{N=26,Z=24}_{\bm{q}}|^{2}=0.2278$. Around
this point, the distribution drops sharply in all directions. For example, the configuration with two fewer protons has a coefficient roughly half as large, $|c^{N=26,Z=22}_{\bm{q}}|^{2}=0.1197$, while the configuration with four fewer protons has only $|c^{N=26,Z=20}_{\bm{q}}|^{2}=0.0201$. Note that, for
this particular configuration, the pairing gaps are $\Delta_{n}=1.0901$ MeV
and $\Delta_{p}=1.1773$ MeV for neutrons and protons, respectively.
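The structure of such a decomposition can be illustrated with a toy model. The Python sketch below applies the equidistant gauge-angle discretization to a BCS-like state with hypothetical pair occupations; the full HFB expressions behind Eqs. (12) and (14) are more involved, but the quadrature logic is the same.

import numpy as np

# Toy particle number decomposition: weight of the N-particle component
# of a BCS-like state, computed with n_phi equidistant gauge angles.
def pn_weight(v2, N, n_phi=15):
    u2 = 1.0 - v2
    phis = 2.0 * np.pi * np.arange(n_phi) / n_phi
    # <Phi| exp(i*phi*N_op) |Phi> = prod_k (u_k^2 + v_k^2 exp(2i*phi))
    overlaps = [np.prod(u2 + v2 * np.exp(2j * phi)) for phi in phis]
    return np.real(np.mean([np.exp(-1j * phi * N) * o
                            for phi, o in zip(phis, overlaps)]))

v2 = np.array([0.95, 0.90, 0.60, 0.40, 0.10, 0.05])  # hypothetical occupations
for N in (4, 6, 8):  # even particle numbers
    print(N, pn_weight(v2, N))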
#### 3.1.2 PNP in Canonical and Quasiparticle Bases
Particle number projection in the canonical basis has been available in the hfbtho program since its initial release. In addition, the new version of the program implements particle number projection in the quasiparticle basis. The two PNP methods are distinct and can under
certain circumstances yield different results. Most notably, a difference will
arise if the underlying HFB calculations enforce a cutoff in the quasiparticle
space. The introduction of such a cutoff is a common way to render the
energies convergent for zero-range pairing interactions and is therefore an
integral part of Skyrme-EDF calculations with hfbtho [11].
Figure 2: The difference between the PNP energies obtained in the
quasiparticle and in the canonical basis, $\Delta
E_{\mathbf{q}}^{\text{PNP}}=E_{\mathbf{q},\text{qps}}^{\text{PNP}}-E_{\mathbf{q},\text{can}}^{\text{PNP}}$,
for three different values of a quasiparticle cutoff: $40$ MeV, $60$ MeV, and
$6000$ MeV (an infinite cutoff). The difference in the corresponding HFB
energies, $\Delta
E_{\mathbf{q}}^{\text{HFB}}=E_{\mathbf{q},\text{qps}}^{\text{HFB}}-E_{\mathbf{q},\text{can}}^{\text{HFB}}$,
is also shown.
To compare the two methods, Fig. 2 shows the difference between the PNP
energies obtained in the quasiparticle and in the canonical basis, $\Delta
E_{\mathbf{q}}^{\text{PNP}}=E_{\mathbf{q},\text{qps}}^{\text{PNP}}-E_{\mathbf{q},\text{can}}^{\text{PNP}}$,
for three different values of a quasiparticle cutoff. We consider a range of
quadrupole deformations in 50Cr,
$\braket{\hat{Q}_{20}}\in[-2.0~{}\mathrm{b},4.0~{}\mathrm{b}]$, and keep the
other parameters fixed. For a relatively low cutoff ($E_{\text{cut}}=40$ MeV),
the difference is $\Delta E_{\mathbf{q}}^{\text{PNP}}\leq 0.5$ MeV. For a
cutoff value typically used in realistic calculations ($E_{\text{cut}}=60$
MeV), the difference reduces to $\Delta E_{\mathbf{q}}^{\text{PNP}}\leq 0.2$
MeV. Finally, in the limit of an infinite cutoff ($E_{\text{cut}}=6000$ MeV)
the difference between the two methods vanishes.
In addition, Fig. 2 shows the difference between the HFB energies obtained in
the quasiparticle and in the canonical basis, $\Delta
E_{\mathbf{q}}^{\text{HFB}}=E_{\mathbf{q},\text{qps}}^{\text{HFB}}-E_{\mathbf{q},\text{can}}^{\text{HFB}}$,
for the three cutoff values. The HFB curves largely follow the corresponding
PNP curves, corroborating the fact that the discrepancy in projected energies
stems from the initial difference in HFB states. Finally, an instructive limit
to consider is the case of a collapsing pairing interaction, which is a common
feature of PNP models that perform variation before projection [14]. Note that
the collapse of pairing happens around $\braket{\hat{Q}_{20}}=2.5$ b in our
calculation. Regardless of the cutoff, the two PNP methods then yield the same
energy that also coincides with the HFB energy.
#### 3.1.3 The Choice of Density Prescription
As discussed in Sec. 2.1.6, the new implementation of PNP enables the choice
of density prescription for the parts of an EDF that depend on non-integer
powers of density. In order to quantify the consequences of this choice, Fig.
3 shows the difference between the PNP energies obtained with the mixed and
the projected density prescription. We consider three Skyrme EDFs whose volume
terms depend on different powers of density $\alpha$: SIII ($\alpha=1$) [44],
SLy4 ($\alpha=\frac{1}{6}$) [45], and SkO ($\alpha=\frac{1}{4}$) [46]. For all
three EDFs, the Coulomb exchange term depends on the $4/3$-th power of the
proton density.
Figure 3: The difference between the PNP energies obtained with the mixed and
the projected density prescription. We consider three Skyrme EDFs whose volume
terms depend on different powers of density $\alpha$.
For SIII, the entire difference between the two prescriptions lies in the
Coulomb exchange term. In 50Cr, this difference amounts to about $0.1\%$ of
the term, or about $0.01$ MeV, and is therefore not visible in Fig. 3. On the
other hand, for SLy4 and SkO an additional difference in the volume term comes
into play. The difference in this term amounts to about $0.1\%$ as well, but
it translates to a sizeable absolute difference of $2-3$ MeV. Again, the two
prescriptions yield the same result in the limit of a collapsing pairing
interaction (around $\braket{\hat{Q}_{20}}=2.5$ b). We note that the
difference from density prescriptions does not scale with nuclear mass and
that it remains of comparable magnitude even in the heaviest nuclei.
Unfortunately, to the best of our knowledge, there are no published
comparisons of PNP energies obtained with different density prescriptions.
However, Ref. [47] contains the comparison between the PNP dynamic moments of
inertia obtained with the mixed and the projected density prescription, using
a Gogny EDF and the Lipkin-Nogami approximation. The reported difference is
sizeable and generally of the order of a few percent.
#### 3.1.4 Benchmarking Against HFODD
To further verify our implementation, we tested the PNP results of hfbtho
against results obtained with hfodd. Since the latest release of the code [13]
cannot project on both protons and neutrons and does not give a full breakdown
of the projected energy, we use for our benchmark a recent, still unpublished,
modification of the hfodd solver based on version 2.73 [6]. In this version,
PNP is implemented in the canonical basis and the results must thus be tested
against the original hfbtho implementation [11]. As demonstrated in Section
3.1.2, this implementation of PNP (in the canonical basis) gives the same
results as the new implementation (in the quasiparticle basis) for infinite
cutoffs.
Table 1: The breakdown of the PNP energy (in MeV) of the $\braket{\hat{Q}_{20}}=1$ b configuration in 50Cr, obtained with the hfbtho and hfodd solvers. A spherical HO basis with $N_{0}=12$ shells and the SIII EDF were used; see text for more details on the parameters of the calculation.

 | hfbtho | hfodd
---|---|---
$E_{\rm kin}^{(n)}$ | 466.236124 | 466.236123
$E_{\rm kin}^{(p)}$ | 415.937244 | 415.937243
$E^{\rho\rho}$ | -1701.776220 | -1701.776217
$E^{\rho\tau}$ | 201.410935 | 201.410934
$E^{\rho\Delta\rho}$ | 126.141959 | 126.141958
$E^{\rho\nabla J}$ | -39.203075 | -39.203075
$E_{\rm pair}^{(n)}$ | -0.333798 | -0.333798
$E_{\rm pair}^{(p)}$ | -0.981203 | -0.981203
$E_{\rm PNP}$ | -532.568034 | -532.568034
Table 1 contains a breakdown of the PNP energy of the
$\braket{\hat{Q}_{20}}\\!=\\!1$ b configuration in 50Cr, obtained with the
hfbtho and hfodd solvers. The calculation parameters are the same as those
described at the beginning of this section, except that (i) $N_{0}=12$ HO
shells are used, (ii) a surface-volume pairing interaction is used, and (iii)
the Coulomb interaction is entirely neglected. In both hfbtho and hfodd
calculations, $N_{\varphi}=15$ gauge angles were used for both neutrons and
protons. The hfodd results correspond to a Gauss quadrature characterized by
$\texttt{NXHERM}=\texttt{NYHERM}=\texttt{NZHERM}=30$ points. The largest
difference, for the density-dependent volume term, does not exceed $3$ eV.
### 3.2 Angular Momentum Projection
Next, we perform the illustrative angular momentum projection calculations,
using the same parameters as described at the beginning of Section 3.1.
Figure 4: Angular momentum projection in the spherical HO basis for the
$\braket{\hat{Q}_{20}}=1$ b configuration in 50Cr. (a): The AMP energy of the
$J^{p}=0^{+},2^{+},4^{+}$, and $6^{+}$ state as a function of the number of
rotational angles $N_{\beta}$. The dashed horizontal line denotes the fully
converged solution ($N_{\beta}=100$). (b): The decomposition of an HFB state
onto different angular momenta for $N_{\beta}=10$. The inset shows the
corresponding overlaps for neutrons and protons.
#### 3.2.1 Convergence of Angular Momentum Decomposition
To start with, we test the convergence of AMP energies
[$E_{\mathbf{q}}^{\text{AMP}}\\!\equiv\\!E_{\mathbf{q}}^{J;p}$, Eq. (14)] and
decompose an HFB state onto different values of angular momenta
[$|c_{\mathbf{q}}^{J;p}|^{2}$, Eq. (13)]. As before, the quadrupole moment of
the reference HFB state is constrained to $\braket{\hat{Q}_{20}}\\!=\\!1$ b,
the dipole and the octupole moment are constrained to zero, while higher
multipole moments are determined self-consistently. Fig. 4(a) shows the AMP
energies for $J^{p}=0^{+},2^{+},4^{+}$, and $6^{+}$ as a function of the
number of rotational angles $N_{\beta}$. Note that the considered
configuration is reflection-symmetric and thus only positive-parity states can
be obtained. In turn, the projection interval is reduced to
$\beta\\!\in\\![0,\pi/2]$. As expected, the convergence is faster for lower
values of $J$. For all $J$, an excellent agreement with the fully converged
solution (represented by the dashed horizontal lines and computed for
$N_{\beta}\\!=\\!100$) is obtained already for $N_{\beta}\\!=\\!10$. The
convergence pattern will generally depend on the properties of the HFB state
(e.g., the magnitude of the quadrupole deformation or whether the parity is
broken), as well as on the value of $J$. Consequently, in practical
applications, one should verify the convergence of AMP with respect to
$N_{\beta}$.
Furthermore, Fig. 4(b) shows the decomposition of the same HFB state onto
different values of angular momentum. The maximum is found for $J=2$,
$|c_{\mathbf{q}}^{J;+}|^{2}=0.4649$, while the coefficients for $J\geq 8$
components are negligible. The inset shows the corresponding overlaps for both
neutrons and protons [$\mathcal{N}_{\bm{q}}^{(\tau)}(\beta,0)$, Eq. (15)]. The
overlaps for the two types of particles are very similar: they are real and
decrease monotonically from $\mathcal{N}_{\bm{q}}^{(\tau)}(0,0)=1$ to their
respective minimal values at $\beta=\pi/2$. Since the quadrupole deformation
is rather moderate, the overlaps at $\beta=\pi/2$ are still sizeable. Note
that the overlaps for $\beta\\!\in[\pi/2,\pi]$ can be obtained by a reflection
around the $\beta=\pi/2$ vertical axis; see Section 2.1.4.
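For axially symmetric, $K=0$ configurations the decomposition reduces to a one-dimensional quadrature over $\beta$ with $d^{J}_{00}(\beta)=P_{J}(\cos\beta)$. The following Python sketch (using scipy) performs it for a hypothetical model overlap that mimics the monotonic decrease seen in the inset of Fig. 4(b); it merely stands in for the actual kernels of Eqs. (13) and (15).

import numpy as np
from scipy.special import eval_legendre

# Toy angular momentum decomposition for an axial, K = 0 state:
# |c^J|^2 = (2J+1)/2 * int_0^pi sin(b) d^J_00(b) N(b) db,
# with d^J_00(b) = P_J(cos b) and N(b) a model overlap.
def amp_weights(overlap, j_max, n_beta=30):
    beta = np.pi * (np.arange(n_beta) + 0.5) / n_beta   # midpoint rule
    dbeta = np.pi / n_beta
    return {J: 0.5 * (2 * J + 1) * dbeta
               * np.sum(np.sin(beta) * eval_legendre(J, np.cos(beta))
                        * overlap(beta))
            for J in range(0, j_max + 1, 2)}

overlap = lambda b: np.exp(-3.0 * np.sin(b) ** 2)  # hypothetical N(beta)
print(amp_weights(overlap, j_max=8))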
#### 3.2.2 Benchmarking Against HFODD
In full analogy with the case of PNP discussed in Section 3.1.4, we can
benchmark the AMP results obtained with hfbtho against the results obtained
with hfodd. The main restriction in this case is that hfodd requires the usage
of a spherical HO basis. Once again, we consider the
$\braket{\hat{Q}_{20}}\\!=\\!1$ b configuration in 50Cr. The calculation
parameters are the same as those described at the beginning of Section 3.1,
except that (i) the Coulomb interaction is entirely neglected, (ii) all the
higher multipole moments up to the eighth order are constrained to zero, and
(iii) in order to additionally probe the contribution from the tensor term of
the functional, we used the SLy5 parametrization of the Skyrme EDF [45]. In
this case, the parameterization of the pairing interaction yields pairing
gaps that are much smaller than the experimental ones. However, since our goal
is simply to compare the two codes against one another, this discrepancy is
irrelevant. All the AMP calculations were performed with $N_{\beta}=30$
rotational angles $\beta\\!\in\\![0,\pi]$.
We compared our results to those generated with the latest release of hfodd,
where the AMP is implemented in the Hartree-Fock basis [13]. Because the two
codes employ different bases, the obtained HFB energies slightly differ and
agree within $2.2$ keV. For the projected energies, the difference does not
exceed $12$ keV for the range of angular momentum $J\in[0,10]$. Although this
test is already very encouraging, we can go one step further and test
separately each contribution to the projected energy. To this end, we use the
same unpublished version of hfodd built on top of the version 2.73 that was
employed for the PNP benchmark. In that version of the code, the AMP is
implemented in the HO basis so a closer comparison is possible. As expected,
we find that the HFB energies agree within $1$ eV: $E_{\rm HFB}=-531.370615$
MeV.
Table 2: The breakdown of the AMP energy (in MeV) of the
$\braket{\hat{Q}_{20}}=1$ b configuration in 50Cr, obtained with the hfbtho
and hfodd solvers. Energies for $J=0$ (top) and $J=8$ (bottom) are shown. A
spherical HO basis with $N_{0}=8$ shells and the SLy5 EDF were used; see text
for more details on the parameters of the calculation.
$J=0$ | hfbtho | hfodd
---|---|---
$E_{\rm kin}^{(n)}$ | 475.811944 | 475.811932
$E_{\rm kin}^{(p)}$ | 418.693797 | 418.693807
$E^{\rho\rho}$ | -1797.938577 | -1797.938577
$E^{\rho\tau}$ | 269.775424 | 269.775424
$E^{\rho\Delta\rho}$ | 149.166859 | 149.166858
$E^{\rho\nabla J}$ | -42.039341 | -42.039339
$E^{JJ}$ | 1.213084 | 1.213084
$E^{ss}$ | 0.251440 | 0.251439
$E^{sj}$ | 0.287586 | 0.287585
$E^{s\Delta s}$ | 0.111281 | 0.111280
$E^{s\nabla J}$ | 0.137866 | 0.137865
$E^{sT}$ | 0.009186 | 0.009186
$E_{\rm pair}^{(n)}$ | -2.848138 | -2.848137
$E_{\rm pair}^{(p)}$ | -4.507887 | -4.507885
$E_{\rm AMP}$ | -532.307952 | -532.307950

$J=8$ | hfbtho | hfodd
---|---|---
$E_{\rm kin}^{(n)}$ | 467.384564 | 467.384572
$E_{\rm kin}^{(p)}$ | 437.860544 | 437.860226
$E^{\rho\rho}$ | -1812.483313 | -1812.482960
$E^{\rho\tau}$ | 275.246980 | 275.246855
$E^{\rho\Delta\rho}$ | 148.724958 | 148.724962
$E^{\rho\nabla J}$ | -40.088099 | -40.088112
$E^{JJ}$ | 0.997760 | 0.997763
$E^{ss}$ | -1.279415 | -1.279386
$E^{sj}$ | -1.763059 | -1.763017
$E^{s\Delta s}$ | -0.559418 | -0.559406
$E^{s\nabla J}$ | -0.449841 | -0.449832
$E^{sT}$ | -0.070601 | -0.070600
$E_{\rm pair}^{(n)}$ | -1.159525 | -1.159544
$E_{\rm pair}^{(p)}$ | -2.563745 | -2.563772
$E_{\rm AMP}$ | -527.963805 | -527.963895
Table 2 contains the breakdown of the AMP energy for angular momentum $J=0$
and $J=8$; see Eqs. (54) - (55) for the definition of each term. For the $J=0$
state, the differences between the two codes do not exceed $10$ eV, with most
terms agreeing within $2$ eV. Not surprisingly, the differences increase a
little for the $J=8$ case. However, they are still of the order of a few dozen to a few hundred eV, and overall less than $1$ keV. Considering the
remaining differences between the two codes – hfodd works with the Cartesian
basis and implements the full 3D rotation of wave functions while hfbtho works
with the cylindrical basis and implements only the rotation in the Euler angle
$\beta$ – this benchmark is quite conclusive.
#### 3.2.3 AMP in a Deformed Basis
One of the main advantages of the present implementation of AMP is that it can
be performed in bases that are not closed under rotation. Such deformed (or
stretched) bases are often used in calculations of potential energy surfaces
because they provide a computationally efficient way to obtain precise
representations of arbitrarily deformed HFB configurations. The main downside
of using a deformed basis is the need to carefully study the convergence of
calculations as a function of the basis deformation; see [48] for a discussion
of the impact of basis truncation on HFB observables. In this section, we
demonstrate that the convergence pattern of AMP calculations is generally
different from that of the underlying HFB calculations.
Fig. 5 shows the HFB energy and the AMP ($J^{p}=0^{+}$) energy in 50Cr as a
function of the axial quadrupole moment $\braket{\hat{Q}_{20}}$ and obtained
with three different HO bases: the spherical ($\beta_{2}\\!=\\!0.0)$ basis,
the prolate-deformed ($\beta_{2}=0.1$) basis, and the oblate-deformed
($\beta_{2}\\!=\\!-0.1$) basis. $N_{0}\\!=\\!8$ HO shells were used in all
three cases. For configurations with moderate prolate deformation, the $0^{+}$
energies obey
$E_{J=0}(\beta_{2}\\!=\\!-0.1)\\!<\\!E_{J=0}(\beta_{2}=0.0)\\!<\\!E_{J=0}(\beta_{2}=0.1)$.
The differences in HFB energies are much smaller, but they obey the exact
opposite rule: $E_{\rm HFB}(\beta_{2}\\!=\\!-0.1)\\!>\\!E_{\rm
HFB}(\beta_{2}\\!=\\!0.0)\\!>\\!E_{\rm HFB}(\beta_{2}\\!=\\!0.1)$.
Interestingly, the pattern is reversed for configurations with moderate oblate
deformation. For them, the prolate-deformed basis gives the lowest $0^{+}$
energy and the oblate-deformed basis gives the highest $0^{+}$ energy. In
addition, the pattern is further modified as the deformation increases: for
configurations with $\braket{\hat{Q}_{20}}\gtrapprox 5.4$ b the HFB and the
$0^{+}$ energy follow the same ordering and the lowest energies are obtained
with the prolate-deformed basis.
Figure 5: Total HFB and $J^{p}=0^{+}$ energy of 50Cr as a function of the
constraint on the axial quadrupole moment $\braket{\hat{Q}_{20}}$. Blue curves
with squares show results obtained with a spherical basis; red curves with
circles show results obtained with a prolate-deformed basis of
$\beta_{2}=0.1$; green curves with triangles show results obtained with an
oblate-deformed basis of $\beta_{2}=-0.1$. Plain symbols correspond to AMP
results and open symbols to HFB ones; see text for additional details.
The observed difference in patterns may have two main origins:
* 1.
Numerical Precision. For a prolate-deformed basis, the number of basis states
along the $z$-axis of the reference frame, which coincides with the elongation
axis of the HFB configuration, is larger than the number of states along the
perpendicular axis. Consequently, the prolate-deformed HFB configuration is
numerically well described. However, the elongation axis of the rotated HFB
configuration is no longer aligned with the $z$-axis of the reference frame.
In fact, for $\beta\\!=\\!\pi/2$ it is aligned with the axis perpendicular to
it – where the number of basis states is lower. Rotated prolate-deformed
configurations are thus described less precisely in a prolate-deformed basis.
Moreover, the weight of each rotated configuration is
$\sin\beta\,d_{00}^{J}(\beta)$. For $J\\!=\\!0$, $d_{00}^{0}(\beta)=1$, and
the weight is simply $\sin\beta$. Consequently, the $\beta\\!\approx\\!\pi/2$
configurations, which are numerically less precise, have larger weights than
the $\beta\approx 0$ configurations, which are numerically more precise. For
$J>0$, the function $\sin\beta\,d_{00}^{J}(\beta)$ is not monotonic and this simple analysis no longer holds.
* 2.
The Effect of the Rotation Matrix. The rotation matrix [Eq. (24)] enters the
calculation of overlaps [Eq. (15)]. Furthermore, the overlaps enter the
calculation of the norm overlap kernel $\mathcal{N}_{\bm{q}}^{J;p}$ and the
Hamiltonian kernel $\mathcal{H}_{\bm{q}}^{J;p}$, both of which are needed to
calculate the AMP energy [Eq. (14)]. However, the properties of the rotation
matrix depend on the basis deformation. For example, the determinant of the
rotation matrix equals $1$ in the spherical basis and decreases rapidly as
the basis deformation increases. Without actually performing the calculations,
it is not clear how the deformation of the basis impacts the rotation matrix,
the subsequent kernels and, eventually, the AMP energy.
Figure 6: The convergence of the HFB energy (bottom) and the AMP $0^{+}$
energy (top) as a function of the basis deformation $\beta_{2}$ for three
configurations along the fission path of 240Pu:
$(Q_{20},Q_{30})=(90\,\mathrm{b},0\,\mathrm{b}^{3/2})$,
$(Q_{20},Q_{30})=(140\,\mathrm{b},12\,\mathrm{b}^{3/2})$, and
$(Q_{20},Q_{30})=(240\,\mathrm{b},25\,\mathrm{b}^{3/2})$. All curves are
normalized relative to their respective minima over the interval
$\beta_{2}\in[0,0.9]$; see text for additional details.
To get a better idea of the convergence pattern of AMP calculations as a
function of the basis deformation, Fig. 6 shows a semi-realistic example of
the fission path of 240Pu. We considered three different configurations along
the path: $(Q_{20},Q_{30})=(90\,\mathrm{b},0\,\mathrm{b}^{3/2})$,
$(Q_{20},Q_{30})=(140\,\mathrm{b},12\,\mathrm{b}^{3/2})$, and
$(Q_{20},Q_{30})=(240\,\mathrm{b},25\,\mathrm{b}^{3/2})$. For each
configuration, we computed the HFB solution in a basis characterized by
$N_{0}^{\rm max}=24$ HO shells and $\beta_{2}\\!=\\!0.0,0.1,...,0.9$
deformation. In addition, the basis was truncated and only the lowest $N_{\rm
states}=1100$ states were retained. The spherical-equivalent oscillator length
$b_{0}$ was not adjusted and was instead fixed at $b_{0}=2.288$ fm. In other
words, the oscillator lengths $b_{z}$ and $b_{\perp}$ vary as a function of
$\beta_{2}$ in such a way that the product $b_{z}b^{2}_{\perp}=b_{0}^{3}$ is
constant.
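The constraint $b_{z}b_{\perp}^{2}=b_{0}^{3}$ fixes both lengths once an elongation is chosen. The sketch below parametrizes this with a hypothetical ratio $q=b_{z}/b_{\perp}$; the internal mapping $\beta_{2}\rightarrow(b_{z},b_{\perp})$ used by the code is not reproduced here.

# Oscillator lengths at fixed spherical-equivalent length b0, using only
# the constraint b_z * b_perp**2 = b0**3 quoted above; q = b_z / b_perp
# is a hypothetical elongation parameter.
def oscillator_lengths(b0, q):
    b_perp = b0 * q ** (-1.0 / 3.0)
    b_z = b0 * q ** (2.0 / 3.0)
    return b_z, b_perp

print(oscillator_lengths(2.288, 1.3))  # b0 of the 240Pu example above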
The HFB convergence pattern (bottom panel) should be familiar to practitioners: very deformed configurations require (very) deformed bases. In
our example, the lowest HFB energy is found for $\beta_{2}\\!=\\!0.6$
($\braket{\hat{Q}_{20}}\\!=\\!90$ b and $\braket{\hat{Q}_{20}}\\!=\\!140$ b)
and for $\beta_{2}=0.8$ ($\braket{\hat{Q}_{20}}=240$ b). Note that, in
principle, one should also adjust the oscillator frequency as a function of
the deformation; see discussion in [48]. For very deformed configurations, the
convergence pattern of the $0^{+}$ energy is qualitatively similar to the HFB
pattern in the sense that the minimum is obtained for non-zero $\beta_{2}$
values. However, these values are significantly smaller than in the HFB case.
In fact, for the least-deformed configuration (which approximately corresponds
to the fission isomer), the lowest $0^{+}$ energy is obtained for a nearly
spherical basis. These results suggest that large-scale applications of AMP in
a deformed basis should be accompanied by a careful study of the numerical
convergence.
#### 3.2.4 Limitations of the Model
The user should be aware of a number of limitations of the novel symmetry
restoration module, related to both the underlying physics and the numerical
implementation:
* 1.
Projection of the Eigenstates. Some HFB configurations are already eigenstates
of an operator related to the symmetry being restored. For example, the
spherical configuration is an eigenstate of the angular momentum operator with
the eigenvalue $J=0$. Similarly, configurations with vanishing odd multipole
moments are eigenstates of the parity operator with the eigenvalue $p=+1$.
Projecting these configurations onto other eigenvalues ($J\\!=\\!1,2,...$ for
the former and $p\\!=\\!-1$ for the latter) will yield non-physical results.
In practice, one should be cautious because numerical issues can occur already
for configurations that are sufficiently close to being eigenstates.
* 2.
Invertibility of the Rotation Matrix. The inverse and the determinant of the
rotation matrix enter our calculations explicitly. However, as the size and
the deformation of the basis increase, the determinant drops rapidly and the
matrix can become numerically non-invertible for some rotational angles close
to $\beta=\pi/2$. These angles are then disregarded in AMP, under the
assumption that the corresponding overlaps are negligible. This assumption is
justified for very deformed configurations, but it can break down for
configurations with moderate or small deformations. Consequently, caution is
advised when calculating moderately deformed configurations with deformed
bases. In particular, the description of near-spherical configurations with
deformed bases is imprecise and should therefore be avoided.
* 3.
Spuriosity of Projected Energies. The Hamiltonian kernel is formally not well-
defined for EDFs that are density-dependent or omit parts of the interaction.
In the worst case scenario, this can lead to sizeable finite steps and even
divergences in projected energies. Such spuriosities were abundantly reported
in PNP [49, 50, 51, 52], while AMP in even-even nuclei seems to remain issue-
free [22]. In many practical implementations, however, the scale of these
spuriosities is smaller than the errors due to the various numerical
limitations. Nevertheless, as the quest for spuriosity-free EDFs is under way,
the user should remain aware of this formal limitation.
### 3.3 Exact Coulomb
We tested our implementation of the "exact" Coulomb calculation by comparing
results obtained with the new version of hfbtho and with the Gogny code used
in [53, 54]. In the latter, all contributions of the Coulomb interaction
(direct, exchange, and pairing) are computed exactly thanks to the properties
of the spherical HO basis.
For numerical comparison, we consider the 208Pb nucleus and use the D1S Gogny
EDF. Furthermore, we disregard the two-body center-of-mass correction and
neglect the Coulomb contribution to pairing. Calculations were performed in a spherical HO basis with $N_{0}=12$ shells and the oscillator length $b_{0}=2.5$ fm, and were converged up to $10^{-12}$. Fig. 7 shows the
absolute error $\varepsilon=|E^{X}_{{\text{{\sc hfbtho}}}}-E^{X}_{\rm Gogny}|$
as a function of the number of Gauss-Legendre quadrature points $N_{\rm Leg}$.
Here, $X$ stands for either the direct or the exchange contribution to the
Coulomb energy, and the subscripts "hfbtho" and "Gogny" refer to the hfbtho
4.0 and the spherical Gogny code, respectively.
Figure 7: The absolute error (in MeV) of the Gaussian expansion of the Coulomb
potential as a function of Gauss-Legendre quadrature points, i.e., the number
of Gaussians approximating $1/r$; see Eq. (62).
For $N_{\rm Gauss}\\!=\\!60$ points in both the Gauss-Hermite and Gauss-
Laguerre integrations (the full lines), the expansion of the Coulomb potential
onto Gaussians converges nicely to the exact value. In particular, at
$N_{\rm{Leg}}=14$, the difference is $20$ meV and $1$ meV for the direct and
the exchange term, respectively. If the number of quadrature points is reduced
to $N_{\rm Gauss}=40$ (the dashed lines), we observe a saturation of
convergence at about $1$ eV (direct) and $80$ meV (exchange) at $N_{\rm
Leg}=14$. For comparison, we also show the results of the "standard" prescription, in which the direct term is computed with the substitution method in a box of size $L=50$ fm with $80$ Gauss-Legendre quadrature points (see discussion in [12]) and the exchange term is computed in the Slater approximation.
## 4 Input data file
The input data file format remains similar to version 3.00 and only contains
one additional namelist.
### 4.1 Sample input file
&HFBTHO_GENERAL
number_of_shells = 10,
oscillator_length = -1.0,
basis_deformation = 0.0,
proton_number = 24, neutron_number = 26,
type_of_calculation = 1 /
&HFBTHO_INITIAL
beta2_deformation = 0.0,
beta3_deformation = 0.0,
beta4_deformation = 0.0 /
&HFBTHO_ITERATIONS
number_iterations = 100, accuracy = 1.E-5,
restart_file = -1 /
&HFBTHO_FUNCTIONAL
functional = 'SLY4',
add_initial_pairing = F,
type_of_coulomb = 2 /
&HFBTHO_PAIRING
user_pairing = F,
vpair_n = -300.0, vpair_p = -300.0,
pairing_cutoff = 60.0,
pairing_feature = 0.5 /
&HFBTHO_CONSTRAINTS
lambda_values = 1, 2, 3, 4, 5, 6, 7, 8,
lambda_active = 0, 0, 0, 0, 0, 0, 0, 0,
expectation_values = 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0 /
&HFBTHO_BLOCKING
proton_blocking = 0, 0, 0, 0, 0,
neutron_blocking = 0, 0, 0, 0, 0 /
&HFBTHO_PROJECTION
switch_to_THO = 0,
projection_is_on = 0, gauge_points = 1,
delta_Z = 0, delta_N = 0 /
&HFBTHO_TEMPERATURE
set_temperature = F, temperature = 0.0 /
&HFBTHO_FEATURES
collective_inertia = F,
fission_fragments = F,
pairing_regularization = F,
localization_functions = F /
&HFBTHO_NECK
set_neck_constrain = F, neck_value = 0.5 /
&HFBTHO_DEBUG
number_Gauss = 40, number_Laguerre = 40,
number_Legendre = 80,
compatibility_HFODD = F,
number_states = 500,
force_parity = T, print_time = 0 /
&HFBTHO_RESTORATION
PNP_is_on = 0, number_of_gauge_points = 1,
delta_neutrons = 0, delta_protons = 0,
AMP_is_on = 0,
number_of_rotational_angles = 1,
maximal_angular_momentum = 0 /
### 4.2 Description of input data
We now define the new or updated inputs introduced in version 4.0.
Keyword: HFBTHO_FUNCTIONAL
$\bullet$ type_of_coulomb = 2: Integer switch that defines the treatment of
the Coulomb potential. In previous versions, this switch could only take
values $0$ (no Coulomb), $1$ (direct contribution only) or $2$ (direct and
exchange contribution with the Slater approximation). In the current version,
the following new options are also available:
1. -1:
direct Coulomb only by sum of $N_{c}$ Gaussians;
2. -2:
direct Coulomb by the substitution method, exchange Coulomb by sum of $N_{c}$
Gaussians;
3. -3:
direct Coulomb by sum of $N_{c}$ Gaussians, exchange Coulomb with the Slater
approximation;
4. -4:
direct and exchange Coulomb by sum of $N_{c}$ Gaussians;
5. -5:
direct, exchange, and pairing Coulomb by sum of $N_{c}$ Gaussians.
Here, $N_{c}$ is the number of Gaussians in (62). It is stored in the UNEDF
module variable n_g_coul and is preset at n_g_coul=9 in the file
hfbtho_unedf.f90. There is no option to change this number directly in the
input file. Default: 2.
Keyword: HFBTHO_RESTORATION
$\bullet$ PNP_is_on = 0: Integer switch that activates the particle number
projection in the quasiparticle basis. When set to $1$ the mixed density
prescription is used and when set to $2$ the projected density prescription is
used (see Sections 2.1.6 and 3.1.3). This option is different from the old
projection_is_on switch in the HFBTHO_PROJECTION namelist, which activates PNP
with the mixed density prescription in the canonical basis. For an infinite
quasiparticle cutoff, the two mixed density prescription options should give
the same result. This option is incompatible with: finite-temperature, THO
basis, and blocking calculations. Default: 0;
$\bullet$ number_of_gauge_points = 1: Number of gauge angles $N_{\varphi}$ for
particle number projection. The same number $N_{\varphi}$ is used for protons
and neutrons. Default: 1;
$\bullet$ delta_neutrons = 0: Value of the shift in neutron number $\delta N$.
In the case of PNP, one can project on all even neutron numbers in the
interval $[N_{0}-\delta N,N_{0}+\delta N]$, where $N_{0}$ is the number of
neutrons of the considered nucleus (even only for PNP). Default: 0;
$\bullet$ delta_protons = 0: Value of the shift in proton number $\delta Z$.
In the case of PNP, one can project on all even proton numbers in the interval
$[Z_{0}-\delta Z,Z_{0}+\delta Z]$, where $Z_{0}$ is the number of protons of
the considered nucleus (even only for PNP). Default: 0;
$\bullet$ AMP_is_on = 0: Logical switch that activates (if equal to 1) the
restoration of angular momentum $J$ and parity $p$. This option can be
combined with PNP to carry out a simultaneous projection on $N$, $Z$, $J$, and
$p$. It is incompatible with: finite-temperature, THO basis, and blocking
calculations. Default: 0;
$\bullet$ number_of_rotational_angles = 1: Number of rotational angles
$N_{\beta}$ used for AMP. Internally, the code will readjust $N_{\beta}$ if
reflection symmetry is enforced. In such a case, the program will compute
either $N_{\beta}/2$ ($N_{\beta}$ even) or $(N_{\beta}+1)/2$ ($N_{\beta}$ odd)
rotational angles (see Section 2.1.4). Default: 1;
$\bullet$ maximal_angular_momentum = 0: Maximum value of the angular momentum $J_{\rm max}$. In the case of AMP, the code projects on all even values of $J$ in $[0,J_{\rm max}]$ when parity is conserved, or on all values of $J$ in $[0,J_{\rm max}]$ otherwise. Default: 0. An illustrative namelist combining these options is shown below.
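For illustration, a namelist activating simultaneous PNP (mixed density prescription) and AMP, with convergence parameters of the order of those found sufficient in Section 3, could read

&HFBTHO_RESTORATION
 PNP_is_on = 1, number_of_gauge_points = 15,
 delta_neutrons = 0, delta_protons = 0,
 AMP_is_on = 1,
 number_of_rotational_angles = 30,
 maximal_angular_momentum = 8 /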
## 5 Program hfbtho
### 5.1 Structure of the code
Compared with version 3.00, we have substantially increased the modularization of the source code: the number of modules increased from 18 to 25. The code is organized as follows:
* 1.
hfbtho_bessel.f90: defines the modified Bessel functions of order 0 and 1;
* 2.
hfbtho_canonical.f90: defines the canonical basis of the HFB theory;
* 3.
hfbtho_collective.f90: computes the ATDHF and GCM collective inertia tensor
and zero-point energy correction in the perturbative cranking approximation;
see [5] and references therein;
* 4.
hfbtho_elliptic_integrals.f90: defines complete elliptic integral of the
second kind used for the Coulomb potential;
* 5.
hfbtho_fission.f90: computes the charge, mass, and axial multipole moments of
fission fragments and the value of the Gaussian neck operator;
* 6.
hfbtho_gauss.f90: defines the quadrature meshes: Gauss-Hermite, Gauss-
Laguerre, and Gauss-Legendre;
* 7.
hfbtho_gogny.f90: computes the matrix elements of the Gogny force as well as
the corresponding mean field and pairing field;
* 8.
hfbtho_io.f90: contains a collection of routines handling inputs and outputs;
* 9.
hfbtho_large_scale.f90: contains a collection of routines for mass table, drip
lines, or potential energy surface calculations, as well as for the
parallelization of single HFB calculations;
* 10.
hfbtho_library.f90: provides the definition of the main routine Main_Program()
that launches complete hfbtho calculations: stand-alone, mass tables, drip
lines, or potential energy surfaces;
* 11.
hfbtho_lipkin.f90: calculates the Lipkin-Nogami correction, including the
$\lambda_{2}$ parameters, densities, and energies;
* 12.
hfbtho_localization.f90: computes spatial localization functions;
* 13.
hfbtho_main.f90: calls the Main_Program() routine;
* 14.
hfbtho_math.f90: contains a collection of general-use mathematical routines;
* 15.
hfbtho_multipole_moments.f90: computes the expectation value and matrix
elements of axial multipole moments;
* 16.
hfbtho_pnp.f90: implements particle number projection in the canonical basis;
* 17.
hfbtho_projections.f90: implements the angular momentum, particle number, and
parity projection in the quasiparticle basis;
* 18.
hfbtho_read_functional.f90: contains a collection of routines to read the
parameters of the EDF from a file;
* 19.
hfbtho_solver.f90: solves the self-consistent iterations of the HFB theory;
* 20.
hfbtho_storage.f90: contains an interface to the QRPA pnFAM code; see [55] and
references therein;
* 21.
hfbtho_tho.f90: defines the transformed harmonic oscillator basis; see [11]
and references therein;
* 22.
hfbtho_unedf.f90: defines parameterizations of the Skyrme and Gogny
functionals, and computes density-dependent coupling constants and fields of
generalized Skyrme energy functionals;
* 23.
hfbtho_utilities.f90: defines the integer and real types used throughout the
code, as well as various numerical constants;
* 24.
hfbtho_variables.f90: contains the list of global variables used throughout the
code;
* 25.
hfbtho_version.f90: version number (currently git commit number of the
previous commit) and history of previous versions.
The programming language of most of the code is now Fortran 2003. The code
hfbtho requires an implementation of the BLAS and LAPACK libraries to function
correctly. Shared memory parallelism is available via OpenMP pragmas.
This version comes with a built-in Doxygen documentation. To benefit from this
feature, the user should install the doxygen software available at
www.doxygen.org. The documentation is built by typing
make doc
By default, Doxygen generates only an on-line HTML documentation. The main
page is located in the source directory at ./src/doc/html/index.html. A PDF
documentation can also be generated by going into ./doc/latex and typing
make
---
The PDF file is named refman.pdf.
### 5.2 Running the code
The program ships with a Makefile that is preset for a number of Fortran
compilers. The user should choose the compiler and set the path for the BLAS
and LAPACK libraries. In version 4.0 of the code, we have simplified the call
sequence of hfbtho. Assuming an executable named hfbtho_main and a Linux
system, execution is started by typing
./hfbtho_main [input_file_name]
---
where [input_file_name] is an optional name of the hfbtho input file that
contains all the Namelists. If none is given, the code will attempt to read
the file with the generic name hfbtho_NAMELIST.dat in the current directory.
The code will also automatically generate two ASCII output files: a compact
one called hfbtho.out and a more extended one called thoout.dat. Finally, the
code generates a binary file named hfbtho_output.hel that is used to restart
calculations.
HFB calculations are greatly accelerated when OpenMP multi-threading is
activated. However, the user should keep in mind that this requires setting
additional environment variables. In Linux/Unix machines, the default stack
size is not large enough to run the code and must be increased. This can be
achieved by instructions such as
ulimit -s unlimited
---
export OMP_STACKSIZE=32M
The value of ulimit defines the amount of stack size for the main OpenMP
thread. OpenMP supports control over the stack size limit of all additional
threads via the environment variable OMP_STACKSIZE. The value given above
should be sufficient for all applications. Note that this value does not
affect the stack size of the main thread set by ulimit. For completeness, note
that the GNU OpenMP run-time (libgomp) recognizes the non-standard environment
variable GOMP_STACKSIZE. If set, it overrides the value of OMP_STACKSIZE.
Finally, the Intel OpenMP run-time library also recognizes the non-standard
environment variable KMP_STACKSIZE. If set, it overrides the value of both
OMP_STACKSIZE and GOMP_STACKSIZE.
## Acknowledgments
Support for this work was partly provided through Scientific Discovery through
Advanced Computing (SciDAC) program funded by U.S. Department of Energy,
Office of Science, Advanced Scientific Computing Research and Nuclear Physics.
It was partly performed under the auspices of the US Department of Energy by
the Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344
(code release number: LLNL-CODE-826901, document release number: LLNL-
JRNL-827553). This work has been supported in part by the QuantiXLie Centre of
Excellence, a project cofinanced by the Croatian Government and European Union
through the European Regional Development Fund - The Competitiveness and
Cohesion Operational Programme (KK.01.1.1.01.0004). Computing support for this
work came from the Lawrence Livermore National Laboratory (LLNL) Institutional
Computing Grand Challenge program.
## Appendix A Densities and Currents in the Coordinate-Space Representation
Taking into account the block structure of the density matrix in the
$y$-simplex basis [cf. Eq. (30)], we can write
$\displaystyle\rho^{(\tau)}(\bm{r}\sigma,\bm{r}^{\prime}\sigma^{\prime})=\sum_{\alpha\gamma}\rho_{\alpha\gamma}^{(\tau)++}\Phi_{\gamma}^{s=+i\,*}(\bm{r}^{\prime}\sigma^{\prime})\Phi_{\alpha}^{s=+i}(\bm{r}\sigma)+\sum_{\alpha\gamma}\rho_{\alpha\gamma}^{(\tau)--}\Phi_{\gamma}^{s=-i\,*}(\bm{r}^{\prime}\sigma^{\prime})\Phi_{\alpha}^{s=-i}(\bm{r}\sigma),$
(63)
where the sums run over HO basis states $\alpha$ and $\gamma$, while
$\Phi_{\alpha}^{s=+i}(\bm{r}\sigma)$ and $\Phi_{\alpha}^{s=-i}(\bm{r}\sigma)$
are the coordinate-space representations of the eigenstates of the $y$-simplex
operator [cf. Eqs. (19) and (20)]
$\displaystyle\Phi_{\alpha}^{s=+i}(\bm{r}\sigma)=\frac{1}{\sqrt{4\pi}}\psi_{n_{z}^{\alpha}}(z)\psi_{n_{\perp}^{\alpha}}^{|\Lambda^{\alpha}|}(r_{\perp})\Big[ie^{i\Lambda^{\alpha}\phi}\chi_{+\frac{1}{2}}(\sigma)+e^{-i\Lambda^{\alpha}\phi}\chi_{-\frac{1}{2}}(\sigma)\Big],$
(64a)
$\displaystyle\Phi_{\alpha}^{s=-i}(\bm{r}\sigma)=\frac{1}{\sqrt{4\pi}}\psi_{n_{z}^{\alpha}}(z)\psi_{n_{\perp}^{\alpha}}^{|\Lambda^{\alpha}|}(r_{\perp})\Big[e^{i\Lambda^{\alpha}\phi}\chi_{+\frac{1}{2}}(\sigma)+ie^{-i\Lambda^{\alpha}\phi}\chi_{-\frac{1}{2}}(\sigma)\Big].$
(64b)
Components of the HO eigenfunctions $\psi_{n_{z}^{\alpha}}(z)$ and
$\psi_{n_{\perp}^{\alpha}}^{|\Lambda^{\alpha}|}(r_{\perp})$ are defined in
[11] and $\chi_{\pm\frac{1}{2}}(\sigma)$ are the eigenstates of the
$z$-component of the spin operator. Note that in Eq. (63) the dependence on
$\bm{x}^{(\tau)}$ and $\bm{q}$ was dropped for compactness in both
$\rho_{\bm{q}}^{(\tau)}(\bm{r}\sigma,\bm{r^{\prime}}\sigma^{\prime};\bm{x}^{(\tau)})$
on the left and $\rho_{\bm{q},\alpha\gamma}^{(\tau)++}(\bm{x}^{(\tau)})$,
$\rho_{\bm{q},\alpha\gamma}^{(\tau)--}(\bm{x}^{(\tau)})$ on the right.
The auxiliary local densities (40a)-(40f) can then be calculated from Eq. (63)
as
$\displaystyle\rho^{(\tau)}(\bm{r})=\sum_{\alpha\gamma}\rho_{\alpha\gamma,+}^{(\tau)}\mathcal{F}^{1}_{\alpha\gamma}(r_{\perp},z)\cos\big[(\Lambda^{\alpha}-\Lambda^{\gamma})\phi\big],$ (65a)
$\displaystyle s_{r_{\perp}}^{(\tau)}(\bm{r})=-\sum_{\alpha\gamma}\rho_{\alpha\gamma,-}^{(\tau)}\mathcal{F}^{1}_{\alpha\gamma}(r_{\perp},z)\sin\big[(\Lambda^{\alpha}+\Lambda^{\gamma}+1)\phi\big],$ (65b)
$\displaystyle s_{\phi}^{(\tau)}(\bm{r})=-\sum_{\alpha\gamma}\rho_{\alpha\gamma,-}^{(\tau)}\mathcal{F}^{1}_{\alpha\gamma}(r_{\perp},z)\cos\big[(\Lambda^{\alpha}+\Lambda^{\gamma}+1)\phi\big],$ (65c)
$\displaystyle s_{z}^{(\tau)}(\bm{r})=i\sum_{\alpha\gamma}\rho_{\alpha\gamma,+}^{(\tau)}\mathcal{F}^{1}_{\alpha\gamma}(r_{\perp},z)\sin\big[(\Lambda^{\alpha}-\Lambda^{\gamma})\phi\big],$ (65d)
$\displaystyle\tau^{(\tau)}(\bm{r})=\sum_{\alpha\gamma}\rho_{\alpha\gamma,+}^{(\tau)}\mathcal{F}^{2}_{\alpha\gamma}(r_{\perp},z)\cos\big[(\Lambda^{\alpha}-\Lambda^{\gamma})\phi\big],$ (65e)
$\displaystyle T_{r_{\perp}}^{(\tau)}(\bm{r})=-\sum_{\alpha\gamma}\rho_{\alpha\gamma,-}^{(\tau)}\mathcal{F}^{3}_{\alpha\gamma}(r_{\perp},z)\sin\big[(\Lambda^{\alpha}+\Lambda^{\gamma}+1)\phi\big],$ (65f)
$\displaystyle T_{\phi}^{(\tau)}(\bm{r})=-\sum_{\alpha\gamma}\rho_{\alpha\gamma,-}^{(\tau)}\mathcal{F}^{3}_{\alpha\gamma}(r_{\perp},z)\cos\big[(\Lambda^{\alpha}+\Lambda^{\gamma}+1)\phi\big],$ (65g)
$\displaystyle T_{z}^{(\tau)}(\bm{r})=i\sum_{\alpha\gamma}\rho_{\alpha\gamma,+}^{(\tau)}\mathcal{F}^{2}_{\alpha\gamma}(r_{\perp},z)\sin\big[(\Lambda^{\alpha}-\Lambda^{\gamma})\phi\big],$ (65h)
$\displaystyle j_{r_{\perp}}^{(\tau)}(\bm{r})=\frac{1}{2i}\sum_{\alpha\gamma}\rho_{\alpha\gamma,+}^{(\tau)}\mathcal{F}^{4}_{\alpha\gamma}(r_{\perp},z)\cos\big[(\Lambda^{\alpha}-\Lambda^{\gamma})\phi\big],$ (65i)
$\displaystyle j_{\phi}^{(\tau)}(\bm{r})=\frac{1}{2i}\sum_{\alpha\gamma}\rho_{\alpha\gamma,+}^{(\tau)}\mathcal{F}^{5}_{\alpha\gamma}(r_{\perp},z)\sin\big[(\Lambda^{\gamma}-\Lambda^{\alpha})\phi\big],$ (65j)
$\displaystyle j_{z}^{(\tau)}(\bm{r})=\frac{1}{2i}\sum_{\alpha\gamma}\rho_{\alpha\gamma,+}^{(\tau)}\mathcal{F}^{6}_{\alpha\gamma}(r_{\perp},z)\cos\big[(\Lambda^{\alpha}-\Lambda^{\gamma})\phi\big],$ (65k)
$\displaystyle J_{r_{\perp}r_{\perp}}^{(\tau)}(\bm{r})=i\sum_{\alpha\gamma}\rho_{\alpha\gamma,-}^{(\tau)}\mathcal{F}^{4}_{\alpha\gamma}(r_{\perp},z)\sin\big[(\Lambda^{\alpha}+\Lambda^{\gamma}+1)\phi\big],$ (65l)
$\displaystyle J_{r_{\perp}\phi}^{(\tau)}(\bm{r})=i\sum_{\alpha\gamma}\rho_{\alpha\gamma,-}^{(\tau)}\mathcal{F}^{4}_{\alpha\gamma}(r_{\perp},z)\cos\big[(\Lambda^{\alpha}+\Lambda^{\gamma}+1)\phi\big],$ (65m)
$\displaystyle J_{r_{\perp}z}^{(\tau)}(\bm{r})=\sum_{\alpha\gamma}\rho_{\alpha\gamma,+}^{(\tau)}\mathcal{F}^{4}_{\alpha\gamma}(r_{\perp},z)\sin\big[(\Lambda^{\alpha}-\Lambda^{\gamma})\phi\big],$ (65n)
$\displaystyle J_{\phi r_{\perp}}^{(\tau)}(\bm{r})=i\sum_{\alpha\gamma}\rho_{\alpha\gamma,-}^{(\tau)}\mathcal{F}^{7}_{\alpha\gamma}(r_{\perp},z)\cos\big[(\Lambda^{\alpha}+\Lambda^{\gamma}+1)\phi\big],$ (65o)
$\displaystyle J_{\phi\phi}^{(\tau)}(\bm{r})=-i\sum_{\alpha\gamma}\rho_{\alpha\gamma,-}^{(\tau)}\mathcal{F}^{7}_{\alpha\gamma}(r_{\perp},z)\sin\big[(\Lambda^{\alpha}+\Lambda^{\gamma}+1)\phi\big],$ (65p)
$\displaystyle J_{\phi z}^{(\tau)}(\bm{r})=\sum_{\alpha\gamma}\rho_{\alpha\gamma,+}^{(\tau)}\mathcal{F}^{5}_{\alpha\gamma}(r_{\perp},z)\cos\big[(\Lambda^{\alpha}-\Lambda^{\gamma})\phi\big],$ (65q)
$\displaystyle J_{zr_{\perp}}^{(\tau)}(\bm{r})=i\sum_{\alpha\gamma}\rho_{\alpha\gamma,-}^{(\tau)}\mathcal{F}^{6}_{\alpha\gamma}(r_{\perp},z)\sin\big[(\Lambda^{\alpha}+\Lambda^{\gamma}+1)\phi\big],$ (65r)
$\displaystyle J_{z\phi}^{(\tau)}(\bm{r})=i\sum_{\alpha\gamma}\rho_{\alpha\gamma,-}^{(\tau)}\mathcal{F}^{6}_{\alpha\gamma}(r_{\perp},z)\cos\big[(\Lambda^{\alpha}+\Lambda^{\gamma}+1)\phi\big],$ (65s)
$\displaystyle J_{zz}^{(\tau)}(\bm{r})=\sum_{\alpha\gamma}\rho_{\alpha\gamma,+}^{(\tau)}\mathcal{F}^{6}_{\alpha\gamma}(r_{\perp},z)\sin\big[(\Lambda^{\alpha}-\Lambda^{\gamma})\phi\big].$ (65t)
Here, we have introduced a shorthand notation for density matrices
$\displaystyle\rho_{\alpha\gamma,+}^{(\tau)}=\frac{1}{2\pi}\Big(\rho_{\alpha\gamma}^{(\tau)++}+\rho_{\alpha\gamma}^{(\tau)--}\Big),$ (66a)
$\displaystyle\rho_{\alpha\gamma,-}^{(\tau)}=\frac{1}{2\pi}\Big(\rho_{\alpha\gamma}^{(\tau)++}-\rho_{\alpha\gamma}^{(\tau)--}\Big),$ (66b)
as well as for the coordinate-dependent factors
$\displaystyle\mathcal{F}^{1}_{\alpha\gamma}(r_{\perp},z)=\psi_{n_{z}^{\alpha}}(z)\psi_{n_{\perp}^{\alpha}}^{|\Lambda^{\alpha}|}(r_{\perp})\psi_{n_{z}^{\gamma}}(z)\psi_{n_{\perp}^{\gamma}}^{|\Lambda^{\gamma}|}(r_{\perp}),$ (67a)
$\displaystyle\mathcal{F}^{2}_{\alpha\gamma}(r_{\perp},z)=\psi_{n_{z}^{\alpha}}(z)\Big(\partial_{r_{\perp}}\psi_{n_{\perp}^{\alpha}}^{|\Lambda^{\alpha}|}(r_{\perp})\Big)\psi_{n_{z}^{\gamma}}(z)\Big(\partial_{r_{\perp}}\psi_{n_{\perp}^{\gamma}}^{|\Lambda^{\gamma}|}(r_{\perp})\Big)+\frac{\Lambda^{\alpha}\Lambda^{\gamma}}{r^{2}_{\perp}}\mathcal{F}^{1}_{\alpha\gamma}(r_{\perp},z)+\Big(\partial_{z}\psi_{n_{z}^{\alpha}}(z)\Big)\psi_{n_{\perp}^{\alpha}}^{|\Lambda^{\alpha}|}(r_{\perp})\Big(\partial_{z}\psi_{n_{z}^{\gamma}}(z)\Big)\psi_{n_{\perp}^{\gamma}}^{|\Lambda^{\gamma}|}(r_{\perp}),$ (67b)
$\displaystyle\mathcal{F}^{3}_{\alpha\gamma}(r_{\perp},z)=\psi_{n_{z}^{\alpha}}(z)\Big(\partial_{r_{\perp}}\psi_{n_{\perp}^{\alpha}}^{|\Lambda^{\alpha}|}(r_{\perp})\Big)\psi_{n_{z}^{\gamma}}(z)\Big(\partial_{r_{\perp}}\psi_{n_{\perp}^{\gamma}}^{|\Lambda^{\gamma}|}(r_{\perp})\Big)-\frac{\Lambda^{\alpha}\Lambda^{\gamma}}{r^{2}_{\perp}}\mathcal{F}^{1}_{\alpha\gamma}(r_{\perp},z)+\Big(\partial_{z}\psi_{n_{z}^{\alpha}}(z)\Big)\psi_{n_{\perp}^{\alpha}}^{|\Lambda^{\alpha}|}(r_{\perp})\Big(\partial_{z}\psi_{n_{z}^{\gamma}}(z)\Big)\psi_{n_{\perp}^{\gamma}}^{|\Lambda^{\gamma}|}(r_{\perp}),$ (67c)
$\displaystyle\mathcal{F}^{4}_{\alpha\gamma}(r_{\perp},z)=\psi_{n_{z}^{\alpha}}(z)\Big(\partial_{r_{\perp}}\psi_{n_{\perp}^{\alpha}}^{|\Lambda^{\alpha}|}(r_{\perp})\Big)\psi_{n_{z}^{\gamma}}(z)\psi_{n_{\perp}^{\gamma}}^{|\Lambda^{\gamma}|}(r_{\perp})-\psi_{n_{z}^{\alpha}}(z)\psi_{n_{\perp}^{\alpha}}^{|\Lambda^{\alpha}|}(r_{\perp})\psi_{n_{z}^{\gamma}}(z)\Big(\partial_{r_{\perp}}\psi_{n_{\perp}^{\gamma}}^{|\Lambda^{\gamma}|}(r_{\perp})\Big),$ (67d)
$\displaystyle\mathcal{F}^{5}_{\alpha\gamma}(r_{\perp},z)=\frac{\Lambda^{\alpha}+\Lambda^{\gamma}}{r_{\perp}}\mathcal{F}^{1}_{\alpha\gamma}(r_{\perp},z),$ (67e)
$\displaystyle\mathcal{F}^{6}_{\alpha\gamma}(r_{\perp},z)=\Big(\partial_{z}\psi_{n_{z}^{\alpha}}(z)\Big)\psi_{n_{\perp}^{\alpha}}^{|\Lambda^{\alpha}|}(r_{\perp})\psi_{n_{z}^{\gamma}}(z)\psi_{n_{\perp}^{\gamma}}^{|\Lambda^{\gamma}|}(r_{\perp})-\psi_{n_{z}^{\alpha}}(z)\psi_{n_{\perp}^{\alpha}}^{|\Lambda^{\alpha}|}(r_{\perp})\Big(\partial_{z}\psi_{n_{z}^{\gamma}}(z)\Big)\psi_{n_{\perp}^{\gamma}}^{|\Lambda^{\gamma}|}(r_{\perp}),$ (67f)
$\displaystyle\mathcal{F}^{7}_{\alpha\gamma}(r_{\perp},z)=\frac{\Lambda^{\alpha}-\Lambda^{\gamma}}{r_{\perp}}\mathcal{F}^{1}_{\alpha\gamma}(r_{\perp},z).$ (67g)
Furthermore, the local pairing densities read
$\displaystyle\tilde{\rho}^{(\tau)}(\bm{r})=\sum_{\alpha\gamma}\kappa^{(\tau)}_{\alpha\gamma,-}\mathcal{F}^{1}_{\alpha\gamma}(r_{\perp},z)\cos\big[(\Lambda^{\alpha}-\Lambda^{\gamma})\phi\big],$ (68a)
$\displaystyle\tilde{\rho}^{*(\tau)}(\bm{r})=\sum_{\alpha\gamma}\kappa^{*(\tau)}_{\alpha\gamma,-}\mathcal{F}^{1}_{\alpha\gamma}(r_{\perp},z)\cos\big[(\Lambda^{\alpha}-\Lambda^{\gamma})\phi\big],$ (68b)
with an equivalent shorthand notation
$\displaystyle\kappa_{\alpha\gamma,-}^{(\tau)}=\frac{1}{2\pi}\Big(\kappa_{\alpha\gamma}^{(\tau)+-}-\kappa_{\alpha\gamma}^{(\tau)-+}\Big),$ (69a)
$\displaystyle\kappa_{\alpha\gamma,-}^{*(\tau)}=\frac{1}{2\pi}\Big(\kappa_{\alpha\gamma}^{*(\tau)+-}-\kappa_{\alpha\gamma}^{*(\tau)-+}\Big).$ (69b)
## Appendix B Coupling Constants of the Skyrme EDF
The time-even and time-odd contributions to the Skyrme EDF [cf. Eqs. (54) and
(55), respectively] contain a total of twenty coupling constants in the
isoscalar ($t=0$) and isovector ($t=1$) channels. Four of these constants
are density-dependent and can further be decomposed as
$\displaystyle C_{\bm{q},t}^{\rho\rho}(\bm{r};\bm{x})$
$\displaystyle=C_{t,0}^{\rho\rho}+C_{t,D}^{\rho\rho}\rho_{\bm{q}}^{\alpha}(\bm{r};\bm{x}),$
(70a) $\displaystyle C_{\bm{q},t}^{ss}(\bm{r};\bm{x})$
$\displaystyle=C_{t,0}^{ss}+C_{t,D}^{ss}\rho_{\bm{q}}^{\alpha}(\bm{r};\bm{x}).$
(70b)
Here, the real number $\alpha$ can be considered a parameter of the EDF. The
resulting twenty-four density-independent coupling constants can then be
expressed in terms of the $(t,x)$ parameters of the Skyrme EDF. In the time-
even channel, the coupling constants read
$\displaystyle C_{0,0}^{\rho\rho}$ $\displaystyle=+\frac{3}{8}t_{0},$ (71a)
$\displaystyle C_{0,D}^{\rho\rho}$ $\displaystyle=+\frac{1}{16}t_{3},$ (71b)
$\displaystyle C_{1,0}^{\rho\rho}$
$\displaystyle=-\frac{1}{4}t_{0}\Big{(}\frac{1}{2}+x_{0}\Big{)},$ (71c)
$\displaystyle C_{1,D}^{\rho\rho}$
$\displaystyle=-\frac{1}{24}t_{3}\Big{(}\frac{1}{2}+x_{3}\Big{)},$ (71d)
$\displaystyle C_{0}^{\rho\Delta\rho}$
$\displaystyle=-\frac{9}{64}t_{1}+\frac{1}{16}t_{2}\Big{(}\frac{5}{4}+x_{2}\Big{)},$
(71e) $\displaystyle C_{1}^{\rho\Delta\rho}$
$\displaystyle=+\frac{3}{32}t_{1}\Big{(}\frac{1}{2}+x_{1}\Big{)}+\frac{1}{32}t_{2}\Big{(}\frac{1}{2}+x_{2}\Big{)},$
(71f) $\displaystyle C_{0}^{\rho\tau}$
$\displaystyle=+\frac{3}{16}t_{1}+\frac{1}{4}t_{2}\Big{(}\frac{5}{4}+x_{2}\Big{)},$
(71g) $\displaystyle C_{1}^{\rho\tau}$
$\displaystyle=-\frac{1}{8}t_{1}\Big{(}\frac{1}{2}+x_{1}\Big{)}+\frac{1}{8}t_{2}\Big{(}\frac{1}{2}+x_{2}\Big{)},$
(71h) $\displaystyle C_{0}^{\rho\nabla J}$
$\displaystyle=-b_{4}-\frac{1}{2}b_{4}^{\prime},$ (71i) $\displaystyle
C_{1}^{\rho\nabla J}$ $\displaystyle=-\frac{1}{2}b_{4}^{\prime},$ (71j)
$\displaystyle C_{0}^{JJ}$
$\displaystyle=+\frac{1}{8}t_{1}\Big{(}\frac{1}{2}-x_{1}\Big{)}-\frac{1}{8}t_{2}\Big{(}\frac{1}{2}+x_{2}\Big{)},$
(71k) $\displaystyle C_{1}^{JJ}$
$\displaystyle=-\frac{1}{16}\Big{(}t_{2}-t_{1}\Big{)},$ (71l)
where $b_{4}$ and $b_{4}^{\prime}$ are the parameters of the spin-orbit force
and we took $t_{e}=t_{o}=0$ for the tensor terms [1]. In the time-odd channel,
the coupling constants read
$\displaystyle C_{0,0}^{ss}$
$\displaystyle=-\frac{1}{4}t_{0}\Big{(}\frac{1}{2}-x_{0}\Big{)},$ (72a)
$\displaystyle C_{0,D}^{ss}$
$\displaystyle=-\frac{1}{24}t_{3}\Big{(}\frac{1}{2}-x_{3}\Big{)},$ (72b)
$\displaystyle C_{1,0}^{ss}$ $\displaystyle=-\frac{1}{8}t_{0},$ (72c)
$\displaystyle C_{1,D}^{ss}$ $\displaystyle=-\frac{1}{48}t_{3},$ (72d)
$\displaystyle C_{0}^{s\Delta s}$
$\displaystyle=+\frac{3}{32}t_{1}\Big{(}\frac{1}{2}-x_{1}\Big{)}+\frac{1}{32}t_{2}\Big{(}\frac{1}{2}+x_{2}\Big{)},$
(72e) $\displaystyle C_{1}^{s\Delta s}$
$\displaystyle=+\frac{3}{64}t_{1}+\frac{1}{64}t_{2},$ (72f) $\displaystyle
C_{0}^{sj}$ $\displaystyle=-C_{0}^{\rho\tau},$ (72g) $\displaystyle
C_{1}^{sj}$ $\displaystyle=-C_{1}^{\rho\tau},$ (72h) $\displaystyle
C_{0}^{s\nabla j}$ $\displaystyle=+C_{0}^{\rho\nabla J},$ (72i) $\displaystyle
C_{1}^{s\nabla j}$ $\displaystyle=+C_{1}^{\rho\nabla J},$ (72j) $\displaystyle
C_{0}^{sT}$ $\displaystyle=-C_{0}^{JJ},$ (72k) $\displaystyle C_{1}^{sT}$
$\displaystyle=-C_{1}^{JJ}.$ (72l)
Note that relations (72g) - (72l) are imposed by the local gauge invariance of
an EDF [1].
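Since Eqs. (71)-(72) are purely algebraic, they are straightforward to
transcribe. The following Python helper (ours, for illustration; in the code
itself the coupling constants are handled by hfbtho_unedf.f90) returns the
twenty-four density-independent coupling constants from the $(t,x)$ parameters
and the spin-orbit parameters $b_{4}$ and $b_{4}^{\prime}$.

```python
def skyrme_couplings(t0, t1, t2, t3, x0, x1, x2, x3, b4, b4p):
    """Transcription of Eqs. (71) and (72); b4p stands for b4'."""
    C = {}
    # time-even channel, Eq. (71)
    C["rho_rho_00"] = +3.0 / 8.0 * t0
    C["rho_rho_0D"] = +1.0 / 16.0 * t3
    C["rho_rho_10"] = -1.0 / 4.0 * t0 * (0.5 + x0)
    C["rho_rho_1D"] = -1.0 / 24.0 * t3 * (0.5 + x3)
    C["rho_lap_rho_0"] = -9.0 / 64.0 * t1 + 1.0 / 16.0 * t2 * (1.25 + x2)
    C["rho_lap_rho_1"] = +3.0 / 32.0 * t1 * (0.5 + x1) + 1.0 / 32.0 * t2 * (0.5 + x2)
    C["rho_tau_0"] = +3.0 / 16.0 * t1 + 1.0 / 4.0 * t2 * (1.25 + x2)
    C["rho_tau_1"] = -1.0 / 8.0 * t1 * (0.5 + x1) + 1.0 / 8.0 * t2 * (0.5 + x2)
    C["rho_nabla_J_0"] = -b4 - 0.5 * b4p
    C["rho_nabla_J_1"] = -0.5 * b4p
    C["JJ_0"] = +1.0 / 8.0 * t1 * (0.5 - x1) - 1.0 / 8.0 * t2 * (0.5 + x2)
    C["JJ_1"] = -1.0 / 16.0 * (t2 - t1)
    # time-odd channel, Eq. (72)
    C["ss_00"] = -1.0 / 4.0 * t0 * (0.5 - x0)
    C["ss_0D"] = -1.0 / 24.0 * t3 * (0.5 - x3)
    C["ss_10"] = -1.0 / 8.0 * t0
    C["ss_1D"] = -1.0 / 48.0 * t3
    C["s_lap_s_0"] = +3.0 / 32.0 * t1 * (0.5 - x1) + 1.0 / 32.0 * t2 * (0.5 + x2)
    C["s_lap_s_1"] = +3.0 / 64.0 * t1 + 1.0 / 64.0 * t2
    # relations imposed by local gauge invariance, Eqs. (72g)-(72l)
    C["sj_0"] = -C["rho_tau_0"]
    C["sj_1"] = -C["rho_tau_1"]
    C["s_nabla_j_0"] = +C["rho_nabla_J_0"]
    C["s_nabla_j_1"] = +C["rho_nabla_J_1"]
    C["sT_0"] = -C["JJ_0"]
    C["sT_1"] = -C["JJ_1"]
    return C
```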
## References
* [1] N. Schunck, Energy Density Functional Methods for Atomic Nuclei., IOP Expanding Physics, IOP Publishing, Bristol, UK, 2019.
* [2] M. Bender, P.-H. Heenen, P.-G. Reinhard, Self-consistent mean-field models for nuclear structure, Rev. Mod. Phys. 75 (1) (2003) 121.
* [3] T. Nikšić, D. Vretenar, P. Ring, Relativistic nuclear energy density functionals: Mean-field and beyond, Prog. Part. Nucl. Phys. 66 (3) (2011) 519.
* [4] L. M. Robledo, T. R. Rodríguez, R. R. Rodríguez-Guzmán, Mean field and beyond description of nuclear structure with the Gogny force: A review, J. Phys. G: Nucl. Part. Phys. 46 (1) (2019) 013001.
* [5] R. N. Perez, N. Schunck, R.-D. Lasseri, C. Zhang, J. Sarich, Axially deformed solution of the Skyrme–Hartree–Fock–Bogolyubov equations using the transformed harmonic oscillator basis (III) HFBTHO (v3.00): A new version of the program, Comput. Phys. Commun. 220 (2017) 363.
* [6] N. Schunck, J. Dobaczewski, W. Satuła, P. Bączyk, J. Dudek, Y. Gao, M. Konieczka, K. Sato, Y. Shi, X. B. Wang, T. R. Werner, Solution of the Skyrme-Hartree-Fock-Bogolyubov equations in the Cartesian deformed harmonic-oscillator basis. (VIII) HFODD (v2.73y): A new version of the program, Comput. Phys. Commun. 216 (2017) 145.
* [7] W. Ryssens, V. Hellemans, M. Bender, P.-H. Heenen, Solution of the Skyrme-HF+BCS equation on a 3D mesh, II: A new version of the Ev8 code, Comput. Phys. Commun. 187 (2015) 175.
* [8] T. Nikšić, N. Paar, D. Vretenar, P. Ring, DIRHB - A relativistic self-consistent mean-field framework for atomic nuclei, Comput. Phys. Commun. 185 (6) (2014) 1808.
* [9] B. G. Carlsson, J. Dobaczewski, J. Toivanen, P. Veselý, Solution of self-consistent equations for the N3LO nuclear energy density functional in spherical symmetry. The program HOSPHE (v1.02), Comput. Phys. Commun. 181 (9) (2010) 1641.
* [10] K. Bennaceur, J. Dobaczewski, Coordinate-space solution of the Skyrme-Hartree-Fock- Bogolyubov equations within spherical symmetry. The program HFBRAD (v1.00), Comput. Phys. Commun. 168 (2) (2005) 96.
* [11] M. V. Stoitsov, J. Dobaczewski, W. Nazarewicz, P. Ring, Axially deformed solution of the Skyrme-Hartree-Fock-Bogolyubov equations using the transformed harmonic oscillator basis. The program HFBTHO (v1.66p), Comput. Phys. Commun. 167 (1) (2005) 43.
* [12] M. Stoitsov, N. Schunck, M. Kortelainen, N. Michel, H. Nam, E. Olsen, J. Sarich, S. Wild, Axially deformed solution of the Skyrme-Hartree-Fock-Bogoliubov equations using the transformed harmonic oscillator basis (II) HFBTHO v2.00d: A new version of the program, Comput. Phys. Commun. 184 (6) (2013) 1592.
* [13] J. Dobaczewski, P. Bączyk, P. Becker, M. Bender, K. Bennaceur, J. Bonnard, Y. Gao, A. Idini, M. Konieczka, M. Kortelainen, L. Próchniak, A. M. Romero, W. Satuła, Y. Shi, T. R. Werner, L. F. Yu, Solution of universal nonrelativistic nuclear DFT equations in the Cartesian deformed harmonic-oscillator basis. (IX) HFODD (v3.06h): A new version of the program, J. Phys. G: Nucl. Part. Phys. 48 (10) (2021) 102001.
* [14] J. A. Sheikh, J. Dobaczewski, P. Ring, L. M. Robledo, C. Yannouleas, Symmetry restoration in mean-field approaches, J. Phys. G: Nucl. Part. Phys. 48 (12) (2021) 123001.
* [15] B. Bally, M. Bender, Projection on particle number and angular momentum: Example of triaxial Bogoliubov quasiparticle states, Phys. Rev. C 103 (2) (2021) 024315.
* [16] D. Varshalovich, A. Moskalev, V. Khersonskii, Quantum Theory of Angular Momentum, World Scientific, Singapore, 1988.
* [17] P. Ring, P. Schuck, The Nuclear Many-Body Problem, Texts and Monographs in Physics, Springer, 2004.
* [18] V. N. Fomenko, Projection in the occupation-number space and the canonical transformation, J. Phys. A: Gen. Phys. 3 (1) (1970) 8.
* [19] J. L. Egido, L. M. Robledo, Parity-projected calculations on octupole deformed nuclei, Nucl. Phys. A 524 (1) (1991) 65.
* [20] J. Dobaczewski, J. Dudek, S. G. Rohoziński, T. R. Werner, Point symmetries in the Hartree-Fock approach. I. Densities, shapes, and currents, Phys. Rev. C 62 (1) (2000) 014310.
* [21] J. Dobaczewski, J. Dudek, S. G. Rohoziński, T. R. Werner, Point symmetries in the Hartree-Fock approach. II. Symmetry-breaking schemes, Phys. Rev. C 62 (1) (2000) 014311.
* [22] J. L. Egido, State-of-the-art of beyond mean field theories with nuclear density functionals, Phys. Scr. 91 (7) (2016) 073003.
* [23] B. Avez, M. Bender, Evaluation of overlaps between arbitrary fermionic quasiparticle vacua, Phys. Rev. C 85 (3) (2012) 034325.
* [24] A. Valor, P. H. Heenen, P. Bonche, Configuration mixing of mean-field wave functions projected on angular momentum and particle number: Application to 24Mg, Nucl. Phys. A 671 (1) (2000) 145.
* [25] D. Baye, P.-H. Heenen, Angular momentum projection on a mesh of cranked Hartree-Fock wave functions, Phys. Rev. C 29 (3) (1984) 1056.
* [26] L. M. Robledo, Practical formulation of the extended Wick’s theorem and the Onishi formula, Phys. Rev. C 50 (6) (1994) 2874.
* [27] R. Balian, E. Brezin, Nonunitary bogoliubov transformations and extension of Wick’s theorem, Nuovo Cim. B 64 (1) (1969) 37.
* [28] K. Hara, S. Iwasaki, On the quantum number projection, Nucl. Phys. A 332 (1) (1979) 61.
* [29] P. Marević, N. Schunck, Fission of ${}^{240}\mathrm{Pu}$ with Symmetry-Restored Density Functional Theory, Phys. Rev. Lett. 125 (10) (2020) 102504.
* [30] P. Marević, N. Schunck, J. Randrup, R. Vogt, Angular momentum of fission fragments from microscopic theory, Phys. Rev. C 104 (2) (2021) L021601.
* [31] N. Onishi, S. Yoshida, Generator coordinate method applied to nuclei in the transition region, Nucl. Phys. 80 (2) (1966) 367.
* [32] R. G. Nazmitdinov, L. M. Robledo, P. Ring, J. L. Egido, Representation of three-dimensional rotations in oscillator basis sets, Nucl. Phys. A 596 (1) (1996) 53.
* [33] S. G. Rohoziński, J. Dobaczewski, W. Nazarewicz, Self-consistent symmetries in the proton-neutron Hartree-Fock-Bogoliubov approach, Phys. Rev. C 81 (1) (2010) 014313.
* [34] Y. M. Engel, D. M. Brink, K. Goeke, S. J. Krieger, D. Vautherin, Time-dependent Hartree-Fock theory with Skyrme’s interaction, Nucl. Phys. A 249 (2) (1975) 215.
* [35] S. Perez-Martin, L. Robledo, Microscopic justification of the equal filling approximation, Phys. Rev. C 78 (1) (2008) 014304. doi:10.1103/PhysRevC.78.014304.
* [36] L. M. Robledo, Particle number restoration: Its implementation and impact in nuclear structure calculations, Int. J. Mod. Phys. E 16 (02) (2007) 337–351.
* [37] L. M. Robledo, Remarks on the use of projected densities in the density-dependent part of Skyrme or Gogny functionals, J. Phys. G: Nucl. Part. Phys. 37 (6) (2010) 064020.
* [38] A. Bulgac, M. M. Forbes, S. Jin, R. N. Perez, N. Schunck, Minimal nuclear energy density functional, Phys. Rev. C 97 (4) (2018) 044313.
* [39] J. Dobaczewski, W. Nazarewicz, M. V. Stoitsov, Contact pairing interaction for the Hartree-Fock-Bogoliubov calculations, in: The Nuclear Many-Body Problem 2001, no. 53 in Nato Science Series II, Springer Netherlands, 2002, p. 181.
* [40] A. Bulgac, Local density approximation for systems with pairing correlations, Phys. Rev. C 65 (5) (2002) 051305(R).
* [41] M. Girod, B. Grammaticos, Triaxial Hartree-Fock-Bogolyubov calculations with D1 effective interaction, Phys. Rev. C 27 (5) (1983) 2317.
* [42] J. Dobaczewski, W. Satuła, B. G. Carlsson, J. Engel, P. Olbratowski, P. Powałowski, M. Sadziak, J. Sarich, N. Schunck, A. Staszczak, M. Stoitsov, M. Zalewski, H. Zduńczuk, Solution of the Skyrme-Hartree-Fock-Bogolyubov equations in the Cartesian deformed harmonic-oscillator basis. (VI) HFODD (v.240h): A new version of the program, Comput. Phys. Commun. 180 (11) (2009) 2361.
* [43] J. Dobaczewski, W. Nazarewicz, T. R. Werner, J. F. Berger, C. R. Chinn, J. Dechargé, Mean-field description of ground-state properties of drip-line nuclei: Pairing and continuum effects, Phys. Rev. C 53 (6) (1996) 2809. doi:10.1103/PhysRevC.53.2809.
* [44] M. Beiner, H. Flocard, N. Van Giai, P. Quentin, Nuclear ground-state properties and self-consistent calculations with the skyrme interaction:(I). Spherical description, Nucl. Phys. A 238 (1) (1975) 29.
* [45] E. Chabanat, P. Bonche, P. Haensel, J. Meyer, R. Schaeffer, A Skyrme parametrization from subnuclear to neutron star densities Part II. Nuclei far from stabilities, Nucl. Phys. A 635 (1) (1998) 231.
* [46] P.-G. Reinhard, D. J. Dean, W. Nazarewicz, J. Dobaczewski, J. A. Maruhn, M. R. Strayer, Shape coexistence and the effective nucleon-nucleon interaction, Phys. Rev. C 60 (1) (1999) 014316.
* [47] A. Valor, J. L. Egido, L. M. Robledo, A new approach to approximate symmetry restoration with density dependent forces: The superdeformed band in 192Hg, Phys. Lett. B 392 (3) (1997) 249.
* [48] N. Schunck, Density Functional Theory Approach to Nuclear Fission, Acta Phys. Pol. B 44 (2013) 263.
* [49] M. Anguiano, J. L. Egido, L. M. Robledo, Particle number projection with effective forces, Nucl. Phys. A 696 (3–4) (2001) 467.
* [50] J. Dobaczewski, M. V. Stoitsov, W. Nazarewicz, P.-G. Reinhard, Particle-number projection and the density functional theory, Phys. Rev. C 76 (5) (2007) 054315\.
* [51] T. Duguet, M. Bender, K. Bennaceur, D. Lacroix, T. Lesinski, Particle-number restoration within the energy density functional formalism: Nonviability of terms depending on noninteger powers of the density matrices, Phys. Rev. C 79 (4) (2009) 044320.
* [52] M. Bender, T. Duguet, D. Lacroix, Particle-number restoration within the energy density functional formalism, Phys. Rev. C 79 (4) (2009) 044319.
* [53] N. Schunck, J. L. Egido, Continuum and symmetry-conserving effects in drip-line nuclei using finite-range forces, Phys. Rev. C 77 (1) (2008) 011301(R).
* [54] N. Schunck, J. L. Egido, Nuclear halos and drip lines in symmetry-conserving continuum Hartree-Fock-Bogoliubov theory, Phys. Rev. C 78 (6) (2008) 064305.
* [55] E. M. Ney, J. Engel, T. Li, N. Schunck, Global description of $\beta^{-}$ decay with the axially deformed Skyrme finite-amplitude method: Extension to odd-mass and odd-odd nuclei, Phys. Rev. C 102 (3) (2020) 034326.
Ronald R. Coifman, Department of Mathematics, Program in Applied Mathematics,
Yale University, New Haven, CT 06510, USA, e-mail: <EMAIL_ADDRESS>
Jacques Peyrière, Institut de Mathématiques d'Orsay, CNRS, Université Paris-
Saclay, 91405 Orsay, France, e-mail: jacques.peyriere@universite-paris-
saclay.fr
# On Complex Analytic tools, and the Holomorphic Rotation methods
Ronald R. Coifman, Jacques Peyrière, and Guido Weiss
## 1 Introduction
This paper in honor of Guido Weiss was written posthumously, jointly with him,
as we had all of his initial notes and ideas related to the program described
below.
Our task here is to recount ideas, explorations, and visions that Guido, his
collaborators, and his students developed over the last 60 years, and to point
out the connection between the original views of the interplay between complex
and real analysis, as envisioned by Zygmund and his students Calderón, Guido
Weiss, Eli Stein, and many others seventy years ago, and the current
approaches introducing nonlinear multi-layered analysis for the organization
and processing of complicated oscillatory functions.
It was Zygmund's view that harmonic analysis provides the infrastructure
linking most areas of analysis, from complex analysis to partial differential
equations, to probability, number theory, and geometry.
In particular he pushed forward the idea that the remarkable tools of complex
analysis (contour integration, conformal mappings, factorization), which were
used to provide miraculous proofs in real analysis, should be deciphered and
converted into real-variable tools. Together with Calderón, they bucked the
trend for abstraction prevalent at the time, and formed a school pushing
forward this interplay between real and complex analysis. A principal bridge
was provided by real-variable methods, multiscale analysis, Littlewood-Paley
theory, and related Calderón representation formulas. Our aim here is to
elaborate on the "magic" of complex analysis and indicate potential
applications in higher dimensions. An old idea of Calderón and Zygmund, the
so-called "rotation method", enabled the reduction of the study of $L^{p}$
estimates for multidimensional singular integrals to a superposition, over all
directions, of Hilbert transforms, thereby allowing the use of
one-complex-variable methods. A related idea was the invention of systems of
harmonic functions satisfying generalized Cauchy-Riemann equations, such as
the Riesz systems, exploiting their special properties CW .
Our goal is to extend these ideas, enabling remarkable nonlinear complex
analytic tools for the adapted analysis of functions of one variable to apply
in higher dimensions.
Guido has been pushing the idea that factorization theorems like Blaschke
products are a key to a variety of nonlinear analytic methods CW1 . Our goal
here is to demonstrate this point, deriving remarkable approximation theorems
in one variable, and opening doors to higher-dimensional applications in which
each harmonic function is the average of special holomorphic functions on
planes, constant in orthogonal directions.
We start by describing recent developments in nonlinear complex analysis,
exploiting the tools of factorization and composition. In particular we will
sketch methods extending conventional Fourier analysis, exploiting both phases
and amplitudes of holomorphic functions. The "miracles of nonlinear complex
analysis", such as factorization and composition of functions, lead to new
versions of holomorphic wavelets and relate them to multiscale dynamical
systems.
Our story interlaces the role of the phase of signals with their
analytic/geometric properties. The Blaschke factors are a key ingredient in
building analytic tools, starting with the Malmquist-Takenaka orthonormal
bases of the Hardy space $\mathsf{H}^{2}({\mathbb{T}})$, continuing with
"best" adapted bases obtained through phase unwinding, and describing
relations to compositions of Blaschke products and their dynamics (on the disc
and upper half plane). Specifically, we construct multiscale orthonormal
holomorphic wavelet bases, generalizing scaled holomorphic orthogonal bases to
dynamical systems obtained by composing Blaschke products.
We also remark that the phase of a Blaschke product is a one-layer neural net
(with $\arctan$ as an activation sigmoid) and that the composition is a "deep
neural net" whose "depth" is the number of compositions. Our results provide a
wealth of related libraries of orthogonal bases.
We sketch these ideas in various "vignette" subsections and refer to CP for
more details on analytic methods related to the Blaschke-based nonlinear phase
unwinding decompositions coifman ; CSW ; nahon . We also consider orthogonal
decompositions of invariant subspaces of Hardy spaces. In particular we
construct a multiscale decomposition, described below, of the Hardy space of
the upper half-plane.
Such a decomposition can be carried to the unit disk by conformal mapping. A
somewhat different multiscale decomposition of the space
$\mathsf{H}^{2}({\mathbb{T}})$ has been constructed by using Malmquist-
Takenaka bases associated with Blaschke products whose zeroes are
$\displaystyle(1-2^{-n})\mathrm{e}^{2\mathrm{i}\pi j/2^{n}}$, where $n\geq 1$
and $0\leq j<2^{n}$ feichtinger . Here we provide a variety of multiscale
decompositions by considering iterations of Blaschke products.
In the next chapter we will show how, with the help of an extended Radon
transform, we can introduce a method of rotations enabling us to lift the
one-dimensional tools to higher dimensions. In particular, the various
orthogonal bases of holomorphic functions in one dimension give rise to
orthogonal bases of harmonic functions in the higher-dimensional upper half
space.
## 2 Preliminaries and notation
For $p\geq 1$, $\mathsf{H}^{p}({\mathbb{T}})$ stands for the space of analytic
functions $f$ on the unit disk ${\mathbb{D}}$ such that
$\sup_{0<r<1}\int_{0}^{2\pi}|f(r\mathrm{e}^{\mathrm{i}\theta})|^{p}\frac{\mathrm{d}\theta}{2\pi}<+\infty.$
Such functions have boundary values almost everywhere, and the Hardy space
$\mathsf{H}^{p}({\mathbb{T}})$ can be identified with the set of $L^{p}$
functions on the torus ${\mathbb{T}}=\partial{\mathbb{D}}$ whose Fourier
coefficients of negative order vanish. We will alternate between analysis on
the disk and the parallel theory for analytic functions on the upper half
plane ${\mathbb{H}}=\{x+\mathrm{i}y : y>0\}$. The space of analytic
functions $f$ on ${\mathbb{H}}$ such that
$\sup_{y>0}\|f(\cdot+\mathrm{i}y)\|_{L^{p}({\mathbb{R}})}<+\infty$
is denoted by $\mathsf{H}^{p}({\mathbb{R}})$. These functions have boundary
values in $L^{p}({\mathbb{R}})$ when $p\geq 1$. The space
$\mathsf{H}^{p}({\mathbb{R}})$ is identified with the space of $L^{p}$
functions whose Fourier transform vanishes on the negative half-line
$(-\infty,0)$.
## 3 Analysis on the upper half plane
We present some known results CP , without proof. In this section one simply
writes $\mathsf{H}^{2}$ instead of $\mathsf{H}^{2}({\mathbb{R}})$.
### Malmquist-Takenaka bases
Let $(a_{j})_{j\geq 0}$ be a sequence (finite or not) of complex numbers with
positive imaginary parts such that
$\displaystyle\sum_{j\geq 0}\frac{\Im a_{j}}{1+|a_{j}|^{2}}<+\infty.$ (1)
The corresponding Blaschke product is
${\mathsf{B}}(x)=\prod_{j\geq
0}\frac{\left|1+a_{j}^{2}\right|}{1+a_{j}^{2}}\,\frac{x-a_{j}}{x-\overline{a}_{j}},$
where $0/0$, which appears if $a_{j}=\mathrm{i}$, should be understood as 1.
The factors $\displaystyle\frac{\left|1+a_{j}^{2}\right|}{1+a_{j}^{2}}$ ensure
the convergence of this product when there are infinitely many zeroes. But in
some situations it is more convenient to use other convergence factors, as we
shall see below.
Whether the series (1) is convergent or not, one defines (for $n\geq 0$) the
functions
$\phi_{n}(x)=\sqrt{\frac{\Im a_{n}}{\pi}}\left(\prod_{0\leq
j<n}\frac{x-a_{j}}{x-\overline{a}_{j}}\right)\,\frac{1}{x-\overline{a}_{n}}.$
Then these functions form an orthonormal system in $\mathsf{H}^{2}$. If the
series (1) diverges, they form a Malmquist-Takenaka orthonormal basis of
$\mathsf{H}^{2}$; otherwise they form a basis of the orthogonal complement of
${\mathsf{B}}\,\mathsf{H}^{2}$ in $\mathsf{H}^{2}$.
We remark that roughly a hundred years ago these bases were constructed
takenaka ; malmquist through a Gram-Schmidt orthogonalization of the list of
rational functions with poles in the lower half plane.
Observe that for a rational function with a pole of order $M$ at $a$, the
corresponding $M$ basis functions have the form
$\phi_{n}(x)=\mathrm{e}^{\mathrm{i}n\theta(x)}\,\frac{1}{x-\overline{a}}\qquad(n=1,\dots,M),$
up to normalization. These are localized "Fourier-like" basis functions around
the real part of $a$, scaled by the imaginary part.
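As a concrete illustration, the following Python sketch (ours, with
arbitrarily chosen zeros) builds the first few $\phi_{n}$ and checks their
orthonormality by quadrature on a truncated line.

```python
import numpy as np

def mt_basis(a, x):
    """Malmquist-Takenaka functions phi_n on the line, for zeros a_j with Im a_j > 0."""
    prod = np.ones_like(x, dtype=complex)
    phis = []
    for an in a:
        phis.append(np.sqrt(an.imag / np.pi) * prod / (x - np.conj(an)))
        prod = prod * (x - an) / (x - np.conj(an))  # multiply in the next Blaschke factor
    return phis

# crude orthonormality check; the tails decay like 1/x^2, so truncating
# the line at +-1000 costs about 1e-3 on the diagonal entries
x = np.linspace(-1000.0, 1000.0, 1_000_001)
phis = mt_basis([1j, 1.0 + 2.0j, -2.0 + 0.5j], x)
gram = np.array([[np.trapz(p * np.conj(q), x) for q in phis] for p in phis])
print(np.round(gram.real, 3))  # approximately the 3x3 identity matrix
```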
### Example of a multiscale Wavelet decomposition
The infinite Blaschke products
$G_{n}(x)=\prod_{j\leq
n}\frac{j-\mathrm{i}}{j+\mathrm{i}}\,\frac{x-j-\mathrm{i}}{x-j+\mathrm{i}}\text{\quad
and\quad}G(x)=\prod_{j\in{\mathbb{Z}}}\frac{j-\mathrm{i}}{j+\mathrm{i}}\,\frac{x-j-\mathrm{i}}{x-j+\mathrm{i}}$
can be expressed in terms of known functions:
$G_{n}(x)=\frac{\Gamma(-\mathrm{i}-n)}{\Gamma(\mathrm{i}-n)}\,\frac{\Gamma(x-n+\mathrm{i})}{\Gamma(x-n-\mathrm{i})}\text{\quad
and\quad}G(x)=\frac{\sin\pi(\mathrm{i}-x)}{\sin\pi(\mathrm{i}+x)}.$ (2)
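Identity (2) for $G$ can be checked numerically by truncating the product;
since every factor is unimodular on the real line, the computation is stable.
A minimal sketch (the sample point and truncation are our illustrative
choices):

```python
import numpy as np

x = 0.37                          # any real, non-integer point
j = np.arange(-200_000, 200_001)
prod = np.prod(((j - 1j) / (j + 1j)) * ((x - j - 1j) / (x - j + 1j)))
closed = np.sin(np.pi * (1j - x)) / np.sin(np.pi * (1j + x))
print(abs(prod - closed))         # small; the truncation error decays like 1/J
```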
### An orthonormal system
Consider the function
$\phi(x)=\displaystyle\frac{\Gamma(x-1+\mathrm{i})}{\sqrt{\pi}\Gamma(x-\mathrm{i})}$.
It is easily checked that
$\phi(x-n)=\frac{\Gamma(\mathrm{i}-n)}{\Gamma(-\mathrm{i}-n)}\,\frac{G_{n}(x)}{\sqrt{\pi}\bigl{(}x-(n+1)+\mathrm{i}\bigr{)}}.$
Set $\phi_{n}(x)=\phi(x-n)$. For fixed $m$, the functions $\phi_{n}/G_{m}$,
for $n\geq m$, form a Malmquist-Takenaka basis of $(G/G_{m})\mathsf{H}^{2}$.
In other terms, the functions $\phi_{n}$, for $n\geq m$, form an orthonormal
basis of $G_{m}\mathsf{H}^{2}\ominus G\mathsf{H}^{2}$. This means that the
functions $\phi_{n}$ (for $n\in{\mathbb{Z}}$) form a Malmquist-Takenaka basis
of the orthogonal complement of $G\mathsf{H}^{2}$ in $\mathsf{H}^{2}$.
#### Multiscale decomposition
As $|1-G(2^{n}x)|\leq C2^{n}$, all the products
$\displaystyle{\mathscr{B}}_{n}(x)=\prod_{j<n}G(2^{j}x)$ converge, and
$\displaystyle\lim_{n\to-\infty}{\mathscr{B}}_{n}=1$ uniformly.
Let ${\mathscr{B}}={\mathscr{B}}_{0}$. Obviously,
${\mathscr{B}}_{n}(x)={\mathscr{B}}(2^{n}x)$. Consider the following subspaces
of $\mathsf{H}^{2}$: ${\mathsf{E}}_{n}={\mathscr{B}}_{n}\mathsf{H}^{2}$. This
is a decreasing sequence. The space
$\displaystyle{\mathsf{E}}_{+\infty}=\bigcap_{n\in{\mathbb{Z}}}{\mathsf{E}}_{n}$
is equal to $\{0\}$, since a nonzero function in this space would have too
many zeros, and the space
$\displaystyle{\mathsf{E}}_{-\infty}=\mathrm{closure~{}of}\bigcup_{n\in{\mathbb{Z}}}{\mathsf{E}}_{n}$
is equal to $\mathsf{H}^{2}$, since ${\mathscr{B}}_{n}$ converges uniformly to
1 as $n$ goes to $-\infty$.
For all $n$ and $j$, let
$\phi_{n,j}(x)=2^{n/2}\phi(2^{n}x-j){\mathscr{B}}(2^{n}x).$
Then, for all $n$, $(\phi_{n,j})_{j\in{\mathbb{Z}}}$ is an orthonormal basis
of ${\mathsf{E}}_{n}\ominus{\mathsf{E}}_{n+1}$. We conclude that
$(\phi_{n,j})_{n,j\in{\mathbb{Z}}}$ is an orthonormal basis of
$\mathsf{H}^{2}$.
## 4 Adapted MT bases, "phase unwinding"
We now find a "best" adapted Malmquist-Takenaka basis to analyze, or unwind,
the oscillations of a given function.
The idea is to peel off the oscillation of a function by dividing by the
Blaschke product defined by the zeroes of the function. This procedure is
iterated to yield an expansion in an orthogonal collection of functions, or
Blaschke products, which of course are naturally embedded in a MT basis once
the zeroes are ordered.
### The unwinding series.
There is a natural way to iterate the Blaschke factorization; it is inspired
by the power series expansion of a holomorphic function on the disk. If $G$
has no zeroes inside $\mathbb{D}$, its Blaschke factorization is the trivial
one $G=1\cdot G$; however, the function $G(z)-G(0)$ certainly has at least one
root inside the unit disk $\mathbb{D}$ and will therefore yield some
nontrivial Blaschke factorization $G(z)-G(0)={\mathsf{B}}_{1}G_{1}$. We write
$\displaystyle F(z)={\mathsf{B}}(z)\cdot
G(z)={\mathsf{B}}(z)\cdot\bigl(G(0)+(G(z)-G(0))\bigr)={\mathsf{B}}(z)\cdot\bigl(G(0)+{\mathsf{B}}_{1}(z)G_{1}(z)\bigr)=G(0){\mathsf{B}}(z)+{\mathsf{B}}(z){\mathsf{B}}_{1}(z)G_{1}(z).$
An iterative application gives rise to the unwinding series
$F=a_{1}{\mathsf{B}}_{1}+a_{2}{\mathsf{B}}_{1}{\mathsf{B}}_{2}+a_{3}{\mathsf{B}}_{1}{\mathsf{B}}_{2}{\mathsf{B}}_{3}+a_{4}{\mathsf{B}}_{1}{\mathsf{B}}_{2}{\mathsf{B}}_{3}{\mathsf{B}}_{4}+\dots$
This orthogonal expansion first appeared in the PhD thesis of Michel Nahon
nahon and, independently, in work of T. Qian qtao ; qw . Detailed
approximations in smoothness spaces were derived by S. Steinerberger in
coifman . Given a general function $F$ it is not numerically feasible to
actually compute the roots of the function; a crucial insight in nahon is that
this is not necessary: one can numerically obtain the Blaschke product in a
stable way by using a method that was first mentioned in a paper of Guido and
Mary Weiss ww and has been investigated with respect to stability by Nahon
nahon . Using the boundedness of the Hilbert transform one can easily prove
convergence in $L^{p}$, $1<p<\infty$.
### The fast algorithm of Guido and Mary Weiss ww
Our starting point is the theorem that any Hardy function can be decomposed as
$F={\mathsf{B}}\cdot G,$
where ${\mathsf{B}}$ is a Blaschke product, that is, a function of the form
${\mathsf{B}}(z)=z^{m}\prod_{i\in
I}{\frac{\overline{a_{i}}}{|a_{i}|}\frac{z-a_{i}}{1-\overline{a_{i}}z}},$
where $m\in\mathbb{N}_{0}$ and $a_{1},a_{2},\dots\in\mathbb{D}$ are zeroes
inside the unit disk $\mathbb{D}$, and $G$ has no roots in $\mathbb{D}$. For
$|z|=1$ we have $|{\mathsf{B}}(z)|=1$, which motivates the analogy
${\mathsf{B}}\sim\text{frequency}\qquad\text{and}\qquad G\sim\text{amplitude}$
for the function restricted to the boundary. However, the function $G$ need
not be constant: it can be any function that never vanishes inside the unit
disk. If $F$ has roots inside the unit disk, then the Blaschke factorization
$F={\mathsf{B}}\cdot G$ is going to be nontrivial (meaning
${\mathsf{B}}\not\equiv 1$ and $G\not\equiv F$), and $G$ should be 'simpler'
than $F$ because the winding number around the origin decreases.
In fact, since $|F|=|G|$ on the boundary and $\ln(G)$ is analytic in the disk,
we have formally that
$G=\exp\bigl(\ln|F|+\mathrm{i}(\ln|F|)^{\sim}\bigr)=\exp\bigl(\mathscr{H}(\ln|F|)\bigr)$,
where $(\cdot)^{\sim}$ denotes the harmonic conjugate and
$\mathscr{H}u=u+\mathrm{i}\tilde{u}$ is the analytic completion mapping onto
the Hardy space, and ${\mathsf{B}}=F/G$. $G$ can be computed easily using the
FFT nahon .
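A minimal numerical sketch of this computation on the unit circle (the test
function, grid size, and zeros below are our illustrative choices; the FFT
performs the analytic completion):

```python
import numpy as np

N = 2048
theta = 2 * np.pi * np.arange(N) / N
z = np.exp(1j * theta)

F = (z - 0.5) * (z + 0.3)            # toy Hardy function with two zeros in D

# outer part: G = exp(u + i*u~), u = log|F|, computed with one FFT
u = np.log(np.abs(F))
U = np.fft.fft(u)
U[1:N // 2] *= 2.0                   # analytic completion: keep the mean,
U[N // 2:] = 0.0                     # double positive and kill negative frequencies
G = np.exp(np.fft.ifft(U))

B = F / G                            # Blaschke part, |B| = 1 on the circle
B_exact = ((z - 0.5) / (1 - 0.5 * z)) * ((z + 0.3) / (1 + 0.3 * z))
print(np.max(np.abs(np.abs(B) - 1)), np.max(np.abs(B - B_exact)))  # both tiny
```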
### A remarkable unwinding
The following is an explicit unwinding of a singular inner function in the
upper half plane, illustrating the exponentially fast approximation of
$\exp\frac{2\mathrm{i}\pi}{x}$:
$\exp\frac{2\mathrm{i}\pi}{x}=\mathrm{e}^{-2\pi}+\bigl{(}1-\mathrm{e}^{-4\pi}\bigr{)}\sum_{n\geq
0}(-1)^{n}\mathrm{e}^{-2n\pi}B(x)^{n+1},$
where $B$ is a Blaschke product whose zeros are
$\\{1/(j+\mathrm{i})\\}_{j\in{\mathbb{Z}}}$.
## 5 Geometric function theory: the role of compositions of Blaschke
products.
#### Iteration of Blaschke products
We claim that by building Blaschke products through composition we open up
rich dynamical structures and libraries of corresponding Malmquist-Takenaka
bases.
We are interested in iteration of finite Blaschke products
${\mathsf{B}}(z)=\mathrm{e}^{\mathrm{i}\theta}z^{\nu}\prod_{j=1}^{\mu}\frac{z+a_{j}}{1+\overline{a}_{j}z},$
where $\mu$ and $\nu$ are nonnegative integers and the $a_{j}$ are complex
numbers of modulus less than 1.
It is well known that ${\mathbb{T}}$ and ${\mathbb{D}}$ are globally invariant
under ${\mathsf{B}}$, as is the complement of $\overline{\mathbb{D}}$ in the
Riemann sphere.
A careful discussion can be found in CP . Here is the main result.
###### Theorem 5.1
Let ${\mathsf{B}}$ be a finite Blaschke product with a fixed point $\alpha$
inside the unit disk. Then there exists a sequence
$\alpha,a_{1},a_{2},\dots,a_{j},\dots$ of complex numbers in the unit disk and
an increasing sequence $(\nu_{j})_{j\geq 1}$ of positive integers such that
$a_{1},a_{2},\dots,a_{\nu_{n}}$ are the zeros, counted according to their
multiplicity, of ${\mathsf{B}}_{n}$ (the nth iterate of ${\mathsf{B}}$).
Moreover $\displaystyle\sum_{j\geq 1}(1-|a_{j}|)=+\infty$. Also,
${\mathsf{B}}_{n}$ converges towards $\alpha$ unformly on compact subsets of
the open unit disk.
### Dynamic Multiscale analysis through composition of Blaschke products
Each Blaschke product ${\mathsf{B}}$ defines an invariant subspace
${\mathsf{B}}{\mathsf{H}}^{p}$ of ${\mathsf{H}}^{p}$. The projection onto this
subspace is given by the kernel
$\displaystyle\frac{{\mathsf{B}}(z)\overline{{\mathsf{B}}(w)}}{z-w}$. This
projection is continuous for $1<p<+\infty$.
Let $F$ be a Blaschke product of degree at least 2 with a fixed point inside
the unit disk. Its iterates define a hierarchy of nested invariant subspaces
${\mathsf{E}}_{n}=F_{n}{\mathsf{H}}^{2}$.
Due to Theorem 5.1, $\displaystyle\bigcap_{n\geq 1}{\mathsf{E}}_{n}=\{0\}$.
The Takenaka construction provides orthonormal bases of
${\mathsf{E}}_{n}\ominus{\mathsf{E}}_{n+1}$. But this is not canonical, as it
depends on an ordering of the zeros of $F_{n+1}/F_{n}$.
Figure 1 shows the 1st, 3rd, and 5th iterates of
$F(z)=z(z-2^{-1})/(1-2^{-1}z)$. Figure 2 displays the phase for the fourth
iterate of $F(z)=z^{2}(z-2^{-2})/(1-2^{-2}z)$. The upper pictures display the
phases modulo $2\pi$ (values in the interval $(-\pi,\pi]$) of these Blaschke
products, while the lower pictures display minus the logarithms of their
absolute values. The coordinates $(x,y)$ correspond to the point
$\mathrm{e}^{-y+\mathrm{i}x}$. On these figures it is easy to locate the
zeros, especially by looking at the phase, which has an abrupt jump there.
### Remarks on Iteration of Blaschke products as a ”Deep Neural Net”
In the upper half plane, let $(a_{j})_{j\geq 1}$ be a finite sequence of
complex numbers with positive imaginary parts. The corresponding Blaschke
product on the line is
${\mathsf{B}}(x)=\prod_{j\geq 1}\frac{x-a_{j}}{x-\overline{a}_{j}}.$
We can write ${\mathsf{B}}(x)=\exp\bigl(\mathrm{i}\theta(x)\bigr)$, where
$\theta(x)=2\sum_{j\geq 1}\sigma\bigl((x-\alpha_{j})/\beta_{j}\bigr)$
with $a_{j}=\alpha_{j}+\mathrm{i}\beta_{j}$, and $\sigma(x)=\arctan x+\pi/2$
is a sigmoid.
This is precisely the form of a single layer in a neural net; each unit has a
weight and bias determined by $a_{j}$. We obtain the various layers of a deep
net through the composition of each layer with a preceding layer. In our
preceding examples we took a single short layer, given by a simple Blaschke
term with two zeroes in the first layer, that we iterated to obtain an
orthonormal Malmquist-Takenaka basis (we could have composed different
elementary products at each layer), demonstrating the versatility of the
method to generate highly complex functional representations.
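The single-layer formula is easy to verify numerically; the following sketch
(with arbitrarily chosen zeros) checks that the Blaschke product equals
$\exp(\mathrm{i}\theta(x))$:

```python
import numpy as np

a = np.array([1.0 + 0.5j, -2.0 + 1.0j, 0.3 + 2.0j])  # arbitrary zeros, Im a_j > 0
alpha, beta = a.real, a.imag

x = np.linspace(-10.0, 10.0, 2001)
B = np.prod((x[:, None] - a) / (x[:, None] - np.conj(a)), axis=1)

# one "layer": theta(x) = 2 * sum_j sigma((x - alpha_j)/beta_j), sigma = arctan + pi/2
sigma = lambda t: np.arctan(t) + np.pi / 2
theta = 2.0 * sigma((x[:, None] - alpha) / beta).sum(axis=1)

print(np.max(np.abs(B - np.exp(1j * theta))))  # ~ 1e-15
```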
As an example, let $F(z)$ be obtained from the function $G$ of (2) in the
section on the wavelet construction by the conformal change of variable
$F(z)=G(w)=\frac{\sin(\pi(\mathrm{i}-w))}{\sin(\pi(\mathrm{i}+w))}\text{\quad
with\quad}w=\frac{\mathrm{i}(1-z)}{1+z}.$
We can view the phase of $F$ as a neural layer which, when composed with
itself, results in a phase which is a two-layer neural net, represented
graphically in Fig. 3, where each end of a color droplet corresponds to one
zero, or unit, of the two-layer net.
We refer to Daubechies et al. ReluDNN for a description of a similar iteration
for piecewise affine functions, in which simple affine functions play the role
of a Blaschke product.
## 6 Higher dimensions, $\theta$-holomorphy
Our goal is to explore methodologies to use the remarkable analytic
approximation theorems described above to enable a deeper understanding of
real analysis in higher dimensions. We know that Blaschke factorizations do
not exist there; nevertheless there are remarkable bases that can be lifted.
We start by observing that
$Z_{\theta}=(x\cdot\theta+\mathrm{i}y)=t+\mathrm{i}y$ is harmonic in $(x,y)$
(in 3 dimensions), and so is $Z_{\theta}^{k}$. This is a harmonic homogeneous
polynomial of degree $k$ in $(x_{1},x_{2},y)$ that is constant in the
direction perpendicular to $\theta$; here we identified $\theta$ with
$(\cos\theta,\sin\theta)$. It is well known CW that
$\frac{1}{2\pi}\int_{0}^{2\pi}\mathrm{e}^{-\mathrm{i}{l}\theta}{Z_{\theta}^{k}}\,\mathrm{d}\theta=Y_{l}^{k}(x_{1},x_{2},y)\quad(\left|l\right|\leq{k})$
is the standard orthogonal basis of spherical Harmonics in 3 dimensions.
As a consequence we see that any harmonic function $U(x,y)$ is a superposition
over $\theta$ of holomorphic functions in $Z_{\theta}$; more explicitly, a
power series in $Z_{\theta}$ with coefficients depending on $\theta$:
$U(x,y)=\frac{1}{2\pi}\int_{0}^{2\pi}F_{\theta}(Z_{\theta})\,\mathrm{d}\theta=\frac{1}{2\pi}\int_{0}^{2\pi}\sum_{k\geq
0}a_{k}(\theta)Z_{\theta}^{k}\,\mathrm{d}\theta,$
where $a_{k}(\theta)$ is a trigonometric polynomial of degree $k$.
As another example, taking
$F_{\theta}(Z_{\theta})=\mathrm{e}^{2\mathrm{i}\pi
rZ_{\theta}}\,\mathrm{e}^{-\mathrm{i}k\theta},$
we get, up to a multiplicative constant, the harmonic function
$J_{k}\left(2\pi
r\sqrt{x_{1}^{2}+x_{2}^{2}}\right)\mathrm{e}^{-\mathrm{i}k\phi}\,\mathrm{e}^{-2\pi
yr}.$
## 7 Radon and Fourier in the upper half space
This relationship, in which holomorphic functions on planes generate all
harmonic functions, can most easily be explored through Fourier analysis. We
define the Radon transform and relate it to the Fourier transform, leading to
the $\theta$-holomorphic version.
Let
${\mathsf{R}}_{\theta}{f}(t)=\int_{x\in\theta^{\perp}}f(t\theta+x)\,\mathrm{d}x.$
(3)
Obviously ${\mathsf{R}}_{-\theta}{f}(t)={\mathsf{R}}_{\theta}{f}(-t)$. The
formula $\widehat{{\mathsf{R}}_{\theta}{f}}(u)=\hat{f}(u\theta)$ for Fourier
transforms is well known.
For $f\in L^{1}({\mathbb{R}}^{n})$, consider its harmonic extension $u$ to
${\mathbb{R}}_{+}^{n+1}$. For $x\in{\mathbb{R}}^{n}$ and $y>0$ we have
$\displaystyle u(x,y)$ $\displaystyle=$ $\displaystyle
f\star{\mathsf{P}}_{y}(x)=\int\mathrm{e}^{2\mathrm{i}\pi
x\cdot\xi}\mathrm{e}^{-2\pi|\xi|y}\hat{f}(\xi)\,\mathrm{d}\xi$
$\displaystyle=$
$\displaystyle\int_{S_{n-1}}\left(\int_{0}^{\infty}\mathrm{e}^{2\mathrm{i}\pi
r(x\cdot\theta+\mathrm{i}y)}\hat{f}(r\theta)r^{n-1}\mathrm{d}r\right)\,\mathrm{d}\theta$
$\displaystyle=$
$\displaystyle\int_{S_{n-1}}F_{\theta}(x\cdot\theta+\mathrm{i}y)\,\mathrm{d}\theta,$
where
$F_{\theta}(z)=\int_{0}^{\infty}\mathrm{e}^{2\mathrm{i}\pi
rz}\hat{f}(r\theta)r^{n-1}\mathrm{d}r=\int_{0}^{\infty}\mathrm{e}^{2\mathrm{i}\pi
rz}\widehat{{\mathsf{R}}_{\theta}{f}}(r)\,r^{n-1}\mathrm{d}r.$ (4)
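To make formula (4) and the representation
$u(x,y)=\int_{S_{n-1}}F_{\theta}(x\cdot\theta+\mathrm{i}y)\,\mathrm{d}\theta$
concrete, here is a small Python check for $n=2$ with a Gaussian, whose
Fourier transform is known in closed form (grids and the test point are our
illustrative choices):

```python
import numpy as np

# f(x) = exp(-pi |x|^2) on R^2, so fhat(xi) = exp(-pi |xi|^2)
x1, x2, y = 0.3, -0.2, 0.5

# theta-holomorphic pieces, Eq. (4): F_theta(z) = int_0^inf e^{2 i pi r z} fhat(r theta) r dr
r = np.linspace(0.0, 6.0, 6001)
def F_theta(z):
    return np.trapz(np.exp(2j * np.pi * r * z) * np.exp(-np.pi * r**2) * r, r)

thetas = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
u_rep = sum(F_theta(x1 * np.cos(t) + x2 * np.sin(t) + 1j * y) for t in thetas).real
u_rep *= 2.0 * np.pi / len(thetas)   # u(x, y) = int_{S^1} F_theta(x.theta + iy) dtheta

# direct Poisson extension, for comparison
xi = np.linspace(-6.0, 6.0, 1201)
X1, X2 = np.meshgrid(xi, xi)
R = np.sqrt(X1**2 + X2**2)
integrand = np.exp(2j * np.pi * (x1 * X1 + x2 * X2) - 2.0 * np.pi * R * y - np.pi * R**2)
u_dir = np.trapz(np.trapz(integrand, xi, axis=1), xi).real
print(u_rep, u_dir)                  # agree to quadrature accuracy
```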
When $n=2$, we have
$\widehat{F_{\theta}}(t)=\widehat{{\mathsf{R}}_{\theta}{f}}(t)\,t{\large\bf
1}_{(0,+\infty)}(t)=\frac{1}{2\mathrm{i}\pi}\widehat{{\mathsf{D}}{\mathsf{R}}_{\theta}{f}}(t)\,{\large\bf
1}_{(0,+\infty)}(t).$
So, for $\Im z>0$,
$F_{\theta}(z)=-\frac{1}{4\pi^{2}}\int_{-\infty}^{\infty}\frac{\mathrm{d}({\mathsf{R}}_{\theta}{f}(t))}{t-z}=-\frac{1}{4\pi^{2}}\int_{-\infty}^{+\infty}\frac{{\mathsf{R}}_{\theta}{f}(t)}{(t-z)^{2}}\,\mathrm{d}t.$
For general $n$ we get
$F_{\theta}(z)=\frac{(n-1)!}{(2\mathrm{i}\pi)^{n}}\int_{-\infty}^{+\infty}\frac{{\mathsf{R}}_{\theta}{f}(t)}{(t-z)^{n}}\,\mathrm{d}t.$
### Some isometries
We describe some computations in the case when $n=2$ and mention the case of
higher dimension at the end of this section.
In view of (4)
$\widehat{F_{\theta}(\cdot+\mathrm{i}y)}(r)=\hat{f}(r\theta)\,\mathrm{e}^{-2\pi
ry}r{\large\bf 1}_{(0,\infty)}(r).$ (5)
Hence, the Plancherel identity yields
$\displaystyle\int_{0}^{\infty}\mathrm{d}y\int_{-\infty}^{+\infty}|F_{\theta}(t+\mathrm{i}y)|^{2}\mathrm{d}t$
$\displaystyle=$
$\displaystyle\int_{0}^{\infty}\int_{0}^{\infty}|\hat{f}(r\theta)|^{2}r^{2}\mathrm{e}^{-4\pi
ry}\mathrm{d}r\mathrm{d}y$ $\displaystyle=$
$\displaystyle\frac{1}{4\pi}\int_{0}^{\infty}|\hat{f}(r\theta)|^{2}r\mathrm{d}r$
Let
$\displaystyle\|F\|_{B}^{2}=\int_{0}^{\infty}\int_{-\infty}^{+\infty}|F(t+\mathrm{i}y)|^{2}\,\mathrm{d}t\,\mathrm{d}y$
(this is the norm of the Bergman space on the upper half plane). Then
$4\pi\int_{0}^{2\pi}\|F_{\theta}\|_{B}^{2}\,\mathrm{d}\theta=\iint_{(0,+\infty)\times(0,2\pi)}\left|\hat{f}(r\theta)\right|^{2}r\,\mathrm{d}r\,\mathrm{d}\theta=\|f\|_{L^{2}({\mathbb{R}}^{2})}^{2}.$
(6)
We have
$\frac{\partial u(x,y)}{\partial
y}=-2\pi\int_{{\mathbb{R}}^{2}}\mathrm{e}^{2\mathrm{i}\pi\xi\cdot
x}|\xi|\mathrm{e}^{-2\pi|\xi|y}\hat{f}(\xi)\,\mathrm{d}\xi.$
$\displaystyle\iint_{{\mathbb{R}}_{+}^{3}}\left|\frac{\partial
u(x,y)}{\partial y}\right|^{2}\mathrm{d}x\mathrm{d}y$ $\displaystyle=$
$\displaystyle(2\pi)^{2}\int_{{\mathbb{R}}^{2}}\left(\int_{0}^{\infty}\mathrm{e}^{-4\pi|\xi|y}\mathrm{d}y\right)|\hat{f}(\xi)|^{2}|\xi|^{2}\mathrm{d}\xi$
(7) $\displaystyle=$
$\displaystyle\pi\int_{{\mathbb{R}}^{2}}|\hat{f}(\xi)|^{2}|\xi|\,\mathrm{d}\xi.$
Equation (5) yields
$\displaystyle\int_{-\infty}^{+\infty}|F_{\theta}(t)|^{2}\,\mathrm{d}t=\int_{0}^{\infty}|\hat{f}(r\theta)|^{2}r^{2}\mathrm{d}r$,
and
$\int_{0}^{2\pi}\|F_{\theta}\|_{\mathsf{H}^{2}({\mathbb{R}})}^{2}\mathrm{d}\theta=\int_{{\mathbb{R}}^{2}}|\hat{f}(\xi)|^{2}|\xi|\mathrm{d}\xi.$
(8)
Formulas (7) and (8) together give
$\int_{0}^{2\pi}\|F_{\theta}\|_{\mathsf{H}^{2}({\mathbb{R}})}^{2}\mathrm{d}\theta=\frac{1}{\pi}\iint_{{\mathbb{R}}_{+}^{3}}\left|\frac{\partial
u(x,y)}{\partial y}\right|^{2}\mathrm{d}x\mathrm{d}y.$ (9)
In higher dimensions, formulas (6) and (9) become
$\int_{S_{n-1}}\mathrm{d}\theta\iint_{{\mathbb{R}}_{+}^{2}}|F_{\theta}(t+\mathrm{i}y)|^{2}y^{n-2}\,\mathrm{d}t\,\mathrm{d}y=\frac{1}{(4\pi)^{n-1}}\|f\|_{L^{2}({\mathbb{R}}^{n})}^{2}$
and
$\iint_{{\mathbb{R}}_{+}^{n+1}}\left|\frac{\partial^{k}u(x,y)}{\partial
y^{k}}\right|^{2}y^{2k-n}\,\mathrm{d}x\,\mathrm{d}y=\frac{(4\pi)^{n-1}\,\Gamma(2k-n+1)}{2^{2k}}\int_{S_{n-1}}\|F_{\theta}\|_{H^{2}(\mathbb{R})}^{2}\,\mathrm{d}\theta.$
(10)
### Remarks: "lifted analysis" of harmonic functions
One of our goals is to enable the application of some of the one-dimensional
analytic approximation tools in higher dimensions. We refer to Michel Nahon's
thesis nahon , in which he decomposes a function in the plane as a sum of
functions whose Fourier transforms live in thin wedges, as a tool to extract
features (gradients of phase) from an image of a fingerprint. This illustrates
potential variants of our current approach.
We envision a function in two variables represented as a superposition of
$F_{\theta}(t+\mathrm{i}y)$, each of which is approximated to error
$\epsilon$, leading to a harmonic-function approximation of error $\epsilon$
in the Dirichlet space. Similar estimates with a different mix of Hilbert
spaces can easily be derived as in coifman , leading to faster rates of
convergence (when more regularity is present).
Another obvious application is the representation of a Calderón-Zygmund
operator given by a Fourier multiplier homogeneous of degree $0$,
$\Omega(\theta)$, simply by averaging $\Omega(\theta)F_{\theta}(t+\mathrm{i}y)$.
The representation of these operators is a version of the rotation method (no
parity required on $\Omega$). It also provides a local representation method
for generalized conjugate functions, or C-Z operators, just by using the local
spherical harmonics version of the $\theta$-holomorphic representation. In
particular, Riesz transforms correspond to
$\Omega(\theta)=(\cos\theta,\sin\theta)$.
### A natural ortho-basis in the Dirichlet space
We now use identity (9) to transfer an orthonormal basis of the Hardy space
$H^{2}$ to an orthonormal system in the Dirichlet space in
${\mathbb{R}}_{+}^{3}$.
Start from the basis
$\frac{\mathrm{i}}{\sqrt{\pi}}\left(\frac{z-i}{z+i}\right)^{n}\frac{1}{z+i}$
of $H^{2}$ (this corresponds to the Fourier basis in the disc, mapped to the
upper half plane Hardy space). We consider the generating function
$F(z)=\frac{\mathrm{i}}{\sqrt{2\pi^{3}}}\sum_{n\geq
0}(z-i)^{n}t^{n}/(z+i)^{n+1}$
and compute
$G(x,y)=\displaystyle\int_{0}^{2\pi}F(Z_{\theta})\,\mathrm{d}\theta$. We get
$G(x,y)=\sqrt{2/\pi}/\sqrt{(\rho^{2}+2y+1)t^{2}-2(1-\rho^{2})t+\rho^{2}-2y+1},$
where $\rho=\sqrt{x_{1}^{2}+x_{2}^{2}+(y+1)^{2}}$.
This also can be written as
$G(x,y)=\frac{\sqrt{2/\pi}}{\sqrt{\rho^{2}+2y+1}\,\sqrt{a^{2}t^{2}-2bat+1}},$
if one sets $a=\sqrt{\frac{\rho^{2}-2y+1}{\rho^{2}+2y+1}}$ and
$b=\frac{\rho^{2}-1}{\sqrt{(\rho^{2}+1)^{2}-4y^{2}}}.$
It follows that the functions
$\displaystyle\frac{\sqrt{2/\pi}\,a^{n}P_{n}(b)}{\sqrt{\rho^{2}+2y+1}}$, where
the $P_{n}$ are the Legendre polynomials, form an orthonormal system in the
Dirichlet space.
To get an orthonormal basis of the Dirichlet space in 3 dimensions, it
suffices to take
$\displaystyle\frac{\sqrt{2/\pi}2a^{n}P_{n}(b)\,{\mathrm{e}}^{\mathrm{i}k\theta}}{\sqrt{\rho^{2}+2y+1}}$,
with $k\in\mathbb{Z}$ and $n\geq 0$.
Of course such computations can be done in higher dimensions: isometry (10)
allows one to transfer orthonormal bases of $H^{2}$ to orthonormal systems in
a suitable Dirichlet space.
### Concluding remarks and potential applications
As we all know, complex methods, such as interpolation of operators, the remarkable proofs by Calderón of the boundedness in $L^{2}$ of commutators with the Hilbert transform, or the Cauchy integral on Lipschitz curves, are powerful tools. Over the years the goal has been to convert them into real-variable methods. In parallel, the quest for higher-dimensional complex tools continues; see the examples in [6], in which various systems generalizing holomorphic functions to higher dimensions are studied. The point here is that the infinite-dimensional $\theta$-holomorphic functions generate all of these systems through the choice of appropriate multipliers (as described for the Riesz system).
Our goal here was to describe recent nonlinear analytic tools in the classical setting and to transfer them to the higher-dimensional real setting. Together with Guido Weiss we had observed [6] that all harmonic functions in higher dimensions are combinations of holomorphic functions on subplanes which are constant in normal directions. The recent developments in one dimension, as well as the isometries described here and the corresponding efficient approximation methods, open the door to applications in higher dimensions, such as image denoising. See [5] for the impact of unwinding on precision Doppler analysis in one dimension, which we expect to carry over to two or three dimensions.
Observe also that, for simplicity, we restricted our attention to two dimensions in cylindrical coordinates. We could have defined more generally power series in the variable $Z_{\epsilon}=(x\cdot\epsilon)$, where $\epsilon$, satisfying $(\epsilon\cdot\epsilon)=0$ with $|\Re{\epsilon}|=|\Im{\epsilon}|=1$, represents a point on the complex quadric, or equivalently a two-dimensional plane spanned by $\Re{\epsilon}$ and $\Im{\epsilon}$.
Clearly we can extend the preceding discussion to this setting, where $Z_{\epsilon}=(x\cdot\epsilon)$ is the point $t+\mathrm{i}s$ in the complex plane with coordinate $t\Re{\epsilon}+\mathrm{i}s\Im{\epsilon}$.
## References
* (1) Brolin, H., Invariant sets under iteration of rational functions, Arkiv för Matematik 6 (1965), 103–144.
* (2) Coifman, R. R., and Peyrière, J., Phase Unwinding, or invariant subspace decompositions of Hardy Spaces. Journal of Fourier Analysis and Applications 25 (2019), 684–695.
* (3) Coifman, R. R., and Weiss, G., A kernel associated with certain multiply connected domains, and its applications to factorization theorems. Studia Mathematica (1966).
* (4) Coifman, R. R., and Steinerberger, S., Nonlinear phase unwinding of functions. J. Fourier Anal. Appl. (2016), 1–32.
* (5) Coifman, R. R., Steinerberger, S., and Wu, H. T., Carrier frequencies, holomorphy and unwinding. arXiv preprint arXiv:1606.06475, 2016 - arxiv.org.
* (6) Coifman, R. R., and Weiss, G., Analyse Harmonique Non-Commutative sur Certains Espaces Homogènes, Lecture Notes in Mathematics 242, Springer-Verlag.
* (7) Daubechies, I., DeVore, R., Foucart, S., Hanin, B., and Petrova, G., Nonlinear Approximation and (Deep) ReLU Networks. arXiv:1905.02199v1 [cs.LG], 5 May 2019.
* (8) Feichtinger, H.G. and Pap, M., Hyperbolic wavelets and multiresolution in the Hardy space of the upper half plane, Blaschke Products and Their Applications, (2013), Springer.
* (9) Malmquist, F., Sur la détermination d'une classe de fonctions analytiques par leurs valeurs dans un ensemble donné de points, C.R. 6ième Cong. Math. Scand. (Kopenhagen, 1925), Copenhagen (1926), Gjellerups, 253–259.
* (10) Mi, W., Qian, T., and Wan, F., A Fast Adaptive Model Reduction Method Based on Takenaka-Malmquist Systems, Systems & Control Letters 61 (2012), no. 1, 223–230.
* (11) Nahon, M., Dissertation, Yale University (2000).
* (12) Qian, T., Ho, I. T., Leong, I. T., and Wang, Y. B., Adaptive decomposition of functions into pieces of non-negative instantaneous frequencies, International Journal of Wavelets, Multiresolution and Information Processing 8 (2010), no. 5, 813–833.
* (13) Takenaka, S., On the orthogonal functions and a new formula of interpolation, Jpn. J. Math. II (1925), 129–145.
* (14) Weiss, G., and Weiss, M., A derivation of the main results of the theory of $H^{p}$-spaces. Rev. Un. Mat. Argentina 20 (1962), 63–71.
Figure 1: The argument and the absolute value of $F(z)$, $F^{(3)}(z)$, and $F^{(5)}(z)$, with $F(z)=\frac{z(z-2^{-1})}{1-2^{-1}z}$ and $z=\exp(-y+\mathrm{i}x)$.
Figure 2: The multiscale view of the argument of $F^{(4)}(z)$, with $F(z)=\frac{z^{2}(z-2^{-2})}{1-2^{-2}z}$ and $z=\exp(-y+\mathrm{i}x)$.
Figure 3: Two iterations of $F(z)=G(w)=\frac{\sin(\pi(\mathrm{i}-w))}{\sin(\pi(\mathrm{i}+w))}$ with $w=\frac{\mathrm{i}(1-z)}{1+z}$.
# Investigation of the Thermal Structure in the Atmospheric Boundary Layer
During Evening Transition and the Impact of Aerosols on Radiative Cooling
Suryadev Pratap Singh, Mohammad Rafiuddin, Subham Banerjee, and Sreenivas K R
Engineering Mechanics Unit, Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur, Bengaluru 560064, India
###### Abstract
The evening transition is crucial in various phenomena including boundary
layer stability, temperature inversion, radiation fog, vertical mixing, and
pollution dispersion. We have explored this transition using data from eighty
days of observations across two fog seasons at the Kempegowda International
Airport, Bengaluru (KIAB). Through field experiments and simulations
integrating aerosol interaction in a radiation-conduction model, we elucidate
the impact of aerosols on longwave cooling of the Atmospheric Boundary Layer
(ABL).
Field observations indicate that under calm and clear-sky conditions, the
evening transition typically results in a distinct vertical thermal structure
called the Lifted Temperature Minimum (LTM). We observe that the prevailing
profile near the surface post-sunset is the LTM-profile. Additionally, the
occurrence of LTM is observed to increase with decreases in downward and
upward longwave flux, soil sensible heat flux, wind speed, and turbulent
kinetic energy measured at two meters above ground level (AGL). In such
scenarios, the intensity of LTM-profiles is primarily governed by aerosol-
induced longwave heating rate (LHR) within the surface layer. Furthermore, the
presence of clouds leads to increased downward flux, causing the disappearance
of LTM, whereas shallow fog can enhance LTM intensity, as observed in both
field observations and simulations.
Usually, prevailing radiation models underestimate the aerosol-induced longwave heating rate (LHR) by an order of magnitude compared to actual field observations. We attribute this difference to aerosol-induced radiation divergence. We show that the impact of aerosol-induced LHR extends hundreds of meters into the inversion layer, affecting temperature profiles and potentially influencing processes such as fog formation. As the fog layer develops, LHR strengthens at its upper boundary; however, we highlight the difficulty of detecting this cooling using remote instruments such as microwave radiometers.
Keywords: Aerosols, radiation divergence, fog, longwave cooling, lifted temperature minimum
## 1 Introduction
The evening transition in the Atmospheric Boundary Layer (ABL) holds practical
significance, as highlighted in various studies [27, 18, 2]. Post-sunset, this
transition leads to the formation of the stable Nocturnal Boundary Layer
(NBL), influencing several meteorological phenomena including inversion layer
growth, fog occurrence, and the impact of stable layers on vertical mixing and
pollution dispersion. The cooling rate and moisture accumulation within a few
meters above ground level in the NBL are crucial for determining the onset,
progression, and dissipation of radiation fog [28]. Additionally, cooling
rates affect the occurrence and strength of temperature inversions,
complicating pollution dispersal [33]. Surface-heat flux, a key factor in dew
formation at night, is regulated by near-surface temperature and relative
humidity [45]. Therefore, comprehending and analyzing the evening transition
and its detailed characteristics through vertical temperature and radiative
cooling rate profiles holds practical importance.
Atmospheric turbulence subsides around sunset and a stable inversion layer
forms. According to conventional explanations found in textbooks [70], the
process starts with radiative cooling of the ground and subsequent cooling of
the air layers above it, with the lowest temperature occurring at the ground surface.
However, an intriguing observation by Ramdas and Atmanathan [60] under calm
and clear sky conditions is that the ground does not attain the lowest
temperature locally. Instead, a local minimum temperature, known as the Lifted
Temperature Minimum (LTM), appears a few decimeters above the ground surface.
The height at which this minimum occurs is referred to as LTM height, and the
difference between the surface temperature and the LTM is termed LTM
intensity.
Despite the robust observations of the LTM in numerous field experiments
worldwide, it took considerable time to provide an explanation for its
occurrence. Ramdas and Atmanathan [60] and Ramanathan and Ramdas [59]
initially hypothesized the role of radiation in LTM formation, but this was
met with skepticism due to several reasons: (a) it contradicted the prevailing
belief that the ground cools faster than the surrounding air layers after
sunset, (b) the challenge of maintaining LTM against convective instability
for extended periods, and (c) alternative explanations such as drainage flow
or measurement errors [24]. However, over time, Lake [39], Ramdas and
Atmanathan [61], Oke [53], Mukund et al. [46, 47], Blay-Carreras et al. [3],
and Jensen et al. [35] conducted meticulous field experiments on various types of
soils in different regions of the world. These experiments confirmed the
robust and widespread occurrence of LTM across diverse terrains, including
snow, bare soil, grassland [39, 53, 3], concrete surfaces [46, 47], and
mountainous terrain [35], with varying LTM intensities. Moreover, they refuted
the role of drainage flow in LTM development through precise measurements of
local winds. By manipulating surface properties in field experiments, Mukund
et al. [47] demonstrated that LTM intensity is strongly influenced by surface
emissivity and thermal properties. Additionally, their controlled laboratory
experiments, which avoided drainage flow, conclusively showed that LTM
intensity decreases with lower aerosol concentrations. They observed
significant radiative cooling in the air layer adjacent to the ground,
highlighting the importance of aerosol-induced radiation divergence in LTM
development. This phenomenon underscores the complexity of nocturnal
temperature profiles and emphasizes the significance of radiative cooling in
modeling the nocturnal atmospheric boundary layer.
Apart from the field experiments, there have been efforts to develop
mathematical and numerical models to understand the origin and parametric
dependence of the LTM on other factors. For instance, Varghese et al. [72]
modified the radiation model by Chou et al. [8], incorporating energy transfer
from radiation, conduction, and forced convection into a mathematical model.
However, Edwards [15] and Ponnulakshmi et al. [54, 55] identified erroneous
assumptions in Varghese et al. [72]’s treatment of ground reflection,
particularly regarding the downward longwave (LW) flux, leading to spurious
cooling. Subsequent corrections, as proposed by Ponnulakshmi et al. [54, 55],
failed to replicate an LTM profile. Mukund et al. [46] suggested the inclusion
of aerosol-radiation interaction to explain the LTM phenomenon, a proposition
supported by conclusive experimental evidence from Mukund et al. [47].
Building on the findings of Mukund et al. [47], we incorporated aerosol-
radiation interaction into the corrected model of Edwards [15], Ponnulakshmi
et al. [54, 55] to simulate the thermal structure in the nocturnal boundary
layer. Further details of the models are provided in Section 3.
Despite the significant role of radiation divergence in the Nocturnal Boundary
Layer (NBL) under low-wind conditions [26, 42, 23, 14] and its importance in
numerical weather prediction (NWP) [66, 65, 29], current radiation
parameterizations in NWP models fail to accurately simulate the evolution of
radiative fluxes [67, 68]. Additionally, radiative cooling rates reported in many field experiments lack robustness because the measurement uncertainties are comparable to the reported longwave (LW) cooling rates themselves [21, 40, 20, 52, 50, 76, 51, 70]. However, several studies [31, 32, 14, 68] have conducted careful measurements and reported significant radiative cooling within a few tens of meters of the ground during the night.
In addition to measurements, numerous researchers have endeavored to simulate
radiative cooling using various methods [22, 78, 23, 17, 66, 68]. However,
these modeling approaches have encountered challenges such as limited vertical resolution, unrealistic assumptions, or adjustments to parameterizations to capture observed radiative cooling [58, 65]. Ha and Mahrt [29] noted discrepancies between estimated radiative and turbulent flux divergence and observed cooling in the Stable Boundary Layer (SBL). While Steeneveld et al. [66] found reasonable agreement between modeled radiative cooling rates and CASES-99 observations, the parameterization coefficients had to be adjusted to obtain that agreement. Through meticulous observations,
Steeneveld et al. [68] and Sun et al. [71] reported high radiative cooling
rates, particularly during evening transitions and under clear sky conditions.
They also highlighted that commonly used longwave radiation models
underestimate observed cooling during the evening transition by an order of
magnitude, whereas a physical model [11] showed better agreement without
aerosols. However, this physical model’s applicability is limited due to
assumptions like logarithmic temperature and humidity profiles, stationarity,
and other parameterizations based on Monin-Obukhov theory [11, 43], rendering
it less suitable for use in NWP models [68].
The observed deficiencies in radiative parameterization have been highlighted
in both observational and numerical investigations [74, 62, 65]. While
Zdunkowski et al. [79], Coantic and Seguin [11], André and Mahrt [1], Mukund
et al. [46] have speculated on the potential significance of aerosols in
longwave radiative modeling, laboratory experiments by Mukund et al. [47]
demonstrated the necessity of considering aerosols to explain temperature
profiles near the ground. Despite the dominance of LW radiation divergence
over turbulent flux divergence in low-wind conditions, as observed in our
study [26, 42, 23, 14], aerosols have not been incorporated into LW radiation
modeling to elucidate observed radiative cooling in the nocturnal boundary
layer.
In summary, the preceding discussion underscores the necessity of conducting
field experiments alongside numerical simulations, incorporating aerosol-
radiation interaction, to elucidate the influence of aerosols on radiative
processes in the nocturnal boundary layer. Additionally, it aims to determine
the height within the atmosphere, where aerosols impact thermal structure. In
this pursuit, we present field observations of evening transitions and nights
spanning an extensive eighty-day period across two fog seasons. We integrate
aerosol-radiation interaction into the corrected band model [55]. Through
simulations and field observations, we explore the impact of aerosol-induced
cooling on evening transition and LTM in calm and clear sky conditions.
This paper is organized as follows: field experiments, instrumentation details, and observational data are presented in Section 2. The longwave (LW) radiation model and the integration of the aerosol model into it are elaborated in Section 3. LTM observations and analysis, LHR during evening transitions, and the influence of fog/cloud on LTM are presented in Section 4. Limitations of current radiation models and observations regarding LTM and LHR are discussed in Section 5, and finally, we conclude this work in Section 6.
## 2 Field campaign and observational data
Figure 1: Observation site and mounted instruments. (a) Field experiments have
been conducted in the airfield of KIAB (Red star) (obtained by
$\mathrm{Google-Earth^{TM}}$). (b) Aerial view of KIAB, observation site
(dropped pin), and (c) Instruments installed on the concrete base.
The field campaign has been conducted in the airfield of Kempegowda
International Airport, Bengaluru (KIAB), located at 13.20°N, 77.70°E and
$\sim$900 m above mean sea level (Figure 1a). The observation site is
$\sim$175 m north of the north-runway (09L/27R; pin dropped in Figure 1b). All
instruments (except soil temperature profiler and sensible heat flux sensors)
are mounted on a concrete base (9 m $\times$ 3 m) to maintain their alignment
and orientation during the campaign, whereas soil sensors are installed into
the soil, 0.5 m away from the concrete base. Although sensors are installed
either directly on the concrete base or in close proximity to it, measurements
may still differ from those taken on soil. Nevertheless, the thermophysical
and radiative properties of soil are comparable to those of concrete, and we
anticipate that the results will not vary significantly [47]. For safety
purposes, the observation site is enclosed by a thin metal wire fence
extending up to approximately $0.5$ m above ground level around its perimeter.
The flat grassland surrounding the observation site, with grass approximately
$\sim 10$ cm in height maintained by the airport authority, offers an
unobstructed view for remote-sensing instruments to scan with minimal
disruptions. The description of geographical details such as terrain, soil
properties, vegetation, and climate around KIAB, which influence its
thermodynamic and dynamic parameters (e.g., wind speed and direction,
temperature and moisture in atmosphere and soil, etc.) can be found in a
recent article by Kutty et al. [38].
As the observation site is located in the tropical region, convective systems
of local to large scale are commonly observed throughout the year. In the KIAB
region, it is difficult to get enough days that are completely free from
clouds. Hence, for analysis, we have selected days when reported cloud cover
in the Meteorological Aerodrome Report (METAR) at KIAB is $<2$ octas during
the analysis period, between 10:00 UTC (15:30 IST) and 18:30 UTC (23:59 IST), which we call clear-sky days. Since the field campaign at the site is an ongoing project, we report data and analysis for a total of 80 clear-sky days from
2021-22 and 2022-23, which consists of two winter seasons (December and
January) as well as two spring seasons (February and March). During these days,
calm and clear sky conditions prevailed. In the winter season, the ABL is
found to be stable with frequent occurrence of dense fog in the morning hours.
Because of the prevailing easterly wind in both the seasons, meteorological
conditions at the observation site get modulated by large-scale systems
developed in the Bay of Bengal (BoB), which are observed most days of the
year.
Several instruments have been deployed at the observation site to measure
different parameters in the atmosphere and the soil (see Table 1). The probe locations range from 0.5 m inside the soil
to 10 km into the atmosphere (See Figure 1c). The category of instruments,
measured quantities, range, resolution, and sampling interval are presented in Table 1. Windcube, an active lidar-based
remote sensing device, provides wind profiling in the hemispherical volume of
a radius of 3 km using three different modes of scan: Doppler Beam Swinging
(DBS), Range Height Indicator (RHI), and Plan Position Indicator (PPI). Wind
data quality is ensured based on the carrier-to-noise ratio (CNR), which
depends on the concentration and size distribution of aerosols, dust
particles, clouds, and fog droplets in the atmosphere. Since emitted radiation
from the windcube cannot penetrate a thick cloud/fog layer, CNR
drops significantly above that layer, which leads to noisy wind data, but it
detects the presence of fog/cloud above KIAB.
Temperature and humidity profiles play a key role in modulating the radiation
budget in the atmosphere. Humidity and Temperature profiling (HATPRO)
radiometer, a passive remote sensing device, continuously retrieves
temperature and moisture profiles up to a height of 10 km using 14 channels of
microwave (MW) radiation. However, moisture profiling is coarse in vertical
resolution [4]. An Infrared (IR) sensor integrated with HATPRO retrieves the
cloud/fog base, its thickness, and liquid water mixing ratio with poor
accuracy. In case of heavy rain, the optical window (radome sheet cover) of HATPRO gets wet and introduces noise in the observed vertical profiles, but a high-temperature blower integrated with HATPRO dries the optical cover just after rain, and the data quality is restored. Liquid-nitrogen (LN2) calibration of HATPRO was
performed periodically to ensure quality data of the temperature and humidity
profiles.
An automatic weather station (AWS) integrated with HATPRO gives temperature,
pressure, relative humidity (RH), rain rate, wind speed, and direction at 2 m
above ground level (AGL). Observations from AWS and vertical profiles from the
HATPRO are used to retrieve many thermodynamic parameters such as water vapor
mixing ratio, total precipitable water, liquid water mixing ratio, liquid
water path, dew point temperature profile, different stability indices such as
convective available potential energy (CAPE) and convective inhibition energy
(CINE). Although HATPRO misses and misplaces the liquid water mixing ratio in
the vertical direction and gives poor profiling of the liquid water mixing
ratio, the windcube detects the cloud and fog base height based on the sudden
change in CNR. The occurrence of cloud/fog is also well detected through a
sharp change in the incoming longwave radiation from the 4-component radiation
sensor (Discussed in Section 4).
Two humidity sensors and two radiation sensors, integrated with two internal
Pt100 temperature sensors, are installed at 1.14 m and 1.93 m heights on a 2 m
vertical mast. Note that the height of the mast is limited to 2 m due to
operational constraints at the airport. A 1.5 W heater, integrated with each
radiation sensor, is kept on to avoid condensation on the optical window of
sensors. However, radiation sensors are not integrated with ventilated units,
which can introduce errors up to $\pm$ 15 W m-2 whenever natural ventilation
is not sufficient. Additionally, twenty temperature sensors are mounted on the
same mast, enabling high vertical resolution measurements from 4.5 cm to 2 m
above ground level (AGL). These sensors are calibrated in an isothermal bath,
and the maximum relative errors among the sensors are $<0.1$ K. The high
vertical resolution close to the surface is intended to capture the Lifted
Temperature Minimum (LTM) height and its intensity. To obtain a temperature
profile in the soil and measure sensible heat flux (SHF) at the soil surface,
a soil temperature profiler and two heat flux sensors are installed near the
concrete base. All sensors, including those on the mast and within the soil,
are connected to a data logger (Keysight DAQ970A) to record the measurements
continuously. Windcube, HATPRO, and the data logger are connected through
three mini-computers with uninterrupted internet connections, facilitating
continuous remote monitoring. To ensure a continuous and stable power supply,
all computers and other accessories are installed in two IP65 electrical
enclosures (each measuring 0.5 m × 1.0 m × 1.0 m) located near the concrete
base and connected to the reliable airport’s electricity supply.
Table 1: Details of instruments deployed at the observation site (KIAB) and description of different meteorological variables.

Categories | Instruments | Measured quantities | Range (RA), sampling interval (SA), and resolution (RE)
---|---|---|---
Soil sensors | Soil temperature profiler (STP01, Hukseflux) | Soil temperature at 0.02, 0.05, 0.1, 0.2, and 0.5 m depth | RA: -30 to 70°C, absolute uncertainty: $\pm$0.7 K, relative uncertainty: $\pm$0.05 K, SA: 5 s
 | Soil heat flux plate (HFP01, Hukseflux) | Heat flux at 0.05 m depth | RA: -2000 to 2000 W m$^{-2}$, uncertainty: $\pm$3%, SA: 5 s
Surface layer within 2 m AGL | 20 thermistors (TE Connectivity NTC Discrete MBD, 10 k$\Omega$) | Temperature | RA: -40 to 125°C, uncertainty: $\pm$0.2°C between 0–70°C, SA: 5 s
 | 2 humidity sensors (HIH-5030/5031 series, Honeywell) | RH | RA: 0 to 100%, uncertainty: $\pm$3% from 11–89%, otherwise $\pm$7%, SA: 5 s
Weather sensors at 2 m AGL | Multi-component weather sensor (Vaisala WXT530 series) | Air temperature | RA: -52 to 60°C, uncertainty: $\pm$0.3°C, SA: 1 s
 | | RH | RA: 0–100%, uncertainty: $\pm$3% at 0–90% and $\pm$5% above 90%, RE: 0.1%, SA: 1 s
 | | Barometric pressure | RA: 600 to 1100 hPa, uncertainty: $\pm$0.5 hPa, RE: 0.1 hPa, SA: 1 s
 | | Precipitation | RA: 0 to 200 mm/h, uncertainty: $\pm$5%, SA: 10 s
 | | Wind speed | RA: 0 to 60 m/s, accuracy: $\pm$3% at 10 m/s, RE: 0.1 m/s, SA: 1 s
 | | Wind direction | RA: 0 to 360°, accuracy: $\pm$3° at 10 m/s, RE: 1°, SA: 1 s
Radiative fluxes | 4-component net radiometer (NR01, Hukseflux) | Radiative flux (upward LW and SW, downward LW and SW) | Calibration uncertainty, solar: $<1.8\%$; calibration uncertainty, longwave: $<7\%$; SA: 5 s
Atmospheric profiles | Wind lidar (Windcube 100S, Leosphere) | Wind speed and direction in a hemisphere of 3 km radius | RA: -30 to 30 m/s in the radial direction, accuracy: $\pm$0.5%, RE: 0.1°, range resolution: 50 m, SA: 20 s to 3 min (based on mode of scan)
 | Humidity and temperature profiler (HATPRO, Radiometer Physics, a Rohde & Schwarz company) | Profiles of temperature, RH, water vapor and liquid water mixing ratio, ABL height, cloud base height, and stability | 93 vertical levels from 10 m AGL to 10 km with 25 m to 300 m resolution; RMS accuracy of water vapor mixing ratio: $\pm$0.3 g m$^{-3}$; temperature accuracy: $\pm$0.25 K within 500 m; boundary layer ($<2$ km) mixing ratio accuracy: $\pm$0.03 g m$^{-3}$; SA: 60 s
To study the effect of fog on LTM and radiation divergence, fog data is taken
from the METAR, an airport weather monitoring report used for aviation
purposes [44]. Since the METAR station is located $\sim$1 km east of our
observation site and METAR reporting is half-hourly, there exists a chance of
temporal offset in the reporting of fog by METAR and the corresponding
response of sensors at our observation site. In this paper, 5-minute averaging
has been performed on all data except METAR to avoid spurious observations.
Unless stated otherwise, all heights are reported relative to the local ground
level.
## 3 Radiation model with aerosols
The radiation model used in this study is the modified version of the band
model used by Varghese et al. [72], which itself is adapted from the band
model developed and improved by Chou et al. [8], Chou and Suarez [7], Chou et
al. [9]. Later, Edwards [15], Ponnulakshmi et al. [54, 55] pointed out the
erroneous assumption of the Planckian nature of downward longwave (LW)
radiation in the reflected radiation term used in the model by Varghese et al.
[72] (which results in a spurious source of cooling near the surface, having a
nonphysical length scale). We take the model with the corrections suggested by
Ponnulakshmi et al. [54, 55]. In the modified version of the model, downward
and upward radiative flux divergence at height $z$ is given by
$\frac{dF^{\downarrow}_{ji}}{dz}=-A^{i}_{j}[c_{i}^{j}\pi
B_{j}(T)-F_{ji}^{\downarrow}]$ (1)
$\frac{dF^{\uparrow}_{ji}}{dz}=A^{i}_{j}[c_{i}^{j}\pi
B_{j}(T)-F_{ji}^{\uparrow}]$ (2)
The top boundary condition comes from the fact that there is no incoming
longwave radiative flux at the top of the atmosphere, and fluxes for the
bottom boundary are given by reflected and emitted components of longwave
radiation from the ground.
$F_{ji}^{\downarrow}(\infty)=0$ (3)
$F_{ji}^{\uparrow}(0)=c_{i}^{j}[\epsilon_{s}\pi B_{j}(T_{s})]+(1-\epsilon_{s})F_{ji}^{\downarrow}(0)$ (4)
where $F_{ji}$ is the radiative fluxes for sub-band $i$ of band $j$.
$A_{j}^{i}$ is given by
$A_{j}^{i}=dk_{j}^{i}\bigg{(}\frac{P}{P_{r}}\bigg{)}^{m}f_{j}(T,T_{r})\rho_{w}$ (5)
The values of $c_{i}^{j}$, $d$, $k_{j}^{i}$, $P_{r}$, $m$, and $T_{r}$, as well as the pressure and temperature scaling, have been taken from Chou and Suarez [7] and Chou et al. [9]. $B$ is the Planck function of radiation. $T_{s}$ is the temperature of the surface, which has emissivity $\epsilon_{s}$.
### 3.1 Inclusion of aerosol-radiation interactions
The vertical profile of the aerosols close to the ground plays a key role via
a change in radiative flux divergence [46, 47]. From laboratory experiments,
Mukund et al. [47] showed that the LTM intensity decreases with decrease in
aerosol concentrations in the test section, and it disappears when the aerosol
concentrations in the test section are reduced significantly by filtering
aerosols or by blocking radiation interaction in the test section with a thick
opaque sheet. Noting the experimental observations of Mukund et al. [47], we
have developed an extension to the corrected version of the model Ponnulakshmi
et al. [54, 55] by including aerosol-radiation effects. The extent to which a
spherical aerosol particle ($s$) of radius $r$ and refractive index
$n+k\textit{i}$, interacts with radiation at wavenumber $\nu$ is given by the
extinction cross-section $\sigma_{ext}^{s}(\nu,r)$ [69, 34]. Effect of the
hygroscopic growth of aerosols on radiation has been accounted for via a change in
refractive index when RH changes [30]. For simplicity, the shape of aerosols
is considered to be spherical, and independent scattering dominates for
typical aerosol concentrations in the atmosphere [41]. Under these conditions,
extinction efficiency ($\sigma_{ext}^{s}(\nu,z)$) at wavenumber $\nu$ for an
aerosol species $s$ having distribution $N_{s}(r,z)$ at height $z$ is given by
$\sigma_{ext}^{s}(\nu,z)=\sum_{r}\sigma_{ext}^{s}(\nu,r)N_{s}(r,z)$ (6)
where $N_{s}(r,z)$ follow log-normal distribution at height $z$ [30] and is
given by
$\frac{dN_{s}(r,z)}{dr}=\frac{N_{s}(z)}{\sqrt{2\pi}r\log\sigma_{i}\ln{10}}\exp\bigg{[}-\frac{1}{2}\bigg{(}\frac{\log r-\log r_{modN,s}}{\log\sigma_{i}}\bigg{)}^{2}\bigg{]}$ (7)
where $N_{s}(z)$ is the total number density of aerosol species $s$ at height $z$; $\sigma_{i}$ and $r_{modN,s}$ are the distribution parameters.
$\sigma_{ext}^{s}(\nu,r)$ is calculated from the standard BHMIE code [5] and
$\sigma_{ext}^{s}(\nu,z)$ is summed for all aerosol species ($s$) and sub-band
interval $i$ to get total extinction efficiency in band $j$ at height $z$
$\sigma_{ext}^{j}(z)=\sum_{\nu=\nu_{ij}}\sum_{s=1}^{S}\sigma_{ext}^{s}(\nu,z)$ (8)
The diffuse transmission function for the aerosol in band $j$, at level $z$ is
given by Liou [41]
$\tau_{j}^{aer}(z)=\exp\bigg{(}-d\int_{0}^{z}\sigma_{ext}^{j}(z^{\prime})dz^{\prime}\bigg{)}$ (9)
When water vapor and aerosols are both present, the transmission function for
a band $j$ is given by:
$\tau_{j}(eff)=\tau_{j}^{wv}\tau_{j}^{aer}$ (10)
where $\tau_{j}^{wv}$ is given by
$\tau_{j}^{wv}=\sum_{i=1}^{m_{j}}c_{i}^{j}\tau_{j}^{i}$ (11)
where $\tau_{j}^{i}$ is given by
$\tau_{j}^{i}=\exp\bigg{[}-d\int_{0}^{z}k_{j}^{i}\bigg{(}\frac{P}{P_{r}}\bigg{)}^{m}f_{j}(T,T_{r})\rho_{w}dz^{\prime}\bigg{]}$ (12)
From Eqs. (9, 10, 11, 12), $\tau_{j}^{i}(eff)$ is given by
$\tau_{j}^{i}(eff)=\exp\Bigg{[}-d\int_{0}^{z}\bigg{(}\sigma_{ext}^{j}(z^{\prime})+k_{j}^{i}\bigg{(}\frac{P}{P_{r}}\bigg{)}^{m}f_{j}(T,T_{r})\rho_{w}\bigg{)}dz^{\prime}\Bigg{]}$ (13)
To account for the role of aerosols, $A_{j}^{i}$ is updated in the model by
using Equation (13) as
$A_{j}^{i}=d\bigg{[}\sigma_{ext}^{j}(z)+k_{j}^{i}\bigg{(}\frac{P}{P_{r}}\bigg{)}^{m}f_{j}(T,T_{r})\rho_{w}\bigg{]}$
(14)
Inserting the updated expression of $A_{j}^{i}$ from Equation (14), we get the
updated radiation model, which accounts for aerosol-radiation interactions. In this updated model, Equations (1), (2), (4), and (11) remain the same, used along with the updated Equation (14).
From Equation (1) and (2), total radiative flux divergence at level z is given
by
$\frac{dF}{dz}=\sum_{i}\sum_{j}\bigg{(}\frac{dF_{ij}^{\uparrow}}{dz}-\frac{dF_{ij}^{\downarrow}}{dz}\bigg{)}$
(15)
Equation (15) becomes the source term in the 1-dimensional radiation-
conduction equation which is given by:
$\frac{\partial T(t,z)}{\partial t}=\alpha\frac{\partial^{2}T(t,z)}{\partial z^{2}}-\frac{1}{\rho C_{p}}\frac{dF}{dz}$ (16)
where $z\in(0,H)$ and $t>0$. $H$ is the height of the atmosphere, and $\alpha$
is the molecular thermal diffusivity of air. Equation (16) is solved using the
Thomas Algorithm [56] to get the temperature and radiative flux evolution in
the atmosphere in the presence of water vapor and aerosols.
### 3.2 Aerosol concentrations and profile
Due to the unavailability of aerosol data at the observation site,
representative aerosol properties have been taken from the OPAC (Optical
Properties of Aerosols and Clouds) database to account for the role of
aerosols in the model [30]. Although this database contains diverse aerosol
profiles for different atmospheric conditions, the considered radiation model
has been integrated with urban aerosols (which consists of insoluble; INSO,
water-soluble; WASO and soot; SOOT particles) because flight operations and
other construction-work close to the observation site emit a massive amount of
aerosol particles in the atmosphere. Note that only WASO particles show
hygroscopic growth as a function of RH [30], which has been accounted for in
all simulations. The parameters for the size distribution and refractive index
for the different aerosol components at different RH have been taken from the
same database where all components follow log-normal distribution as shown in
Equation (7).
Hess et al. [30] have considered a roughly uniform concentration of aerosols
(scale height of 8 km) in the ABL, which has not been observed in Indian
tropical regions [12, 13, 6]. Devara and Raj [12] and Devara et al. [13] have
reported a decrease in the aerosol concentration with an increase in height
(measured above 40 m AGL) from long-term lidar observations at a tropical
site. Chate and Pranesha [6] has shown uniform aerosol concentration within 1
m AGL in the range of $10^{4}$–$10^{5}$ cm-3, which is 10-100 times higher
compared to the concentration measured above 40 m. Hence, we use the aerosol
concentration profile, based on the measurements by Devara and Raj [12],
Devara et al. [13], Chate and Pranesha [6], and we use the properties of
aerosol species as provided by the OPAC database [30].
Mukund et al. [46] fitted the Rouse profile to the aerosols profile from
Devara and Raj [12] and arrived at the functional profile as in Equation (17)
for the variation of aerosol number density with height. Taking into account
the observations of Chate and Pranesha [6] and Mukund et al. [47]
calculations, we use different equations for aerosol concentrations above and below 1 m (Equations 17, 18, and 19):
$N_{i}(z)=N_{i0}\bigg{(}\frac{z}{Z}\bigg{)}^{-p},\quad z>1\,\mathrm{m}$ (17)
$N_{i}(z)=N_{c}\exp(-z+1),\quad z\leq 1\,\mathrm{m}$ (18)
$N_{c}=N_{1m^{+}}(1.0+10^{-6})$ (19)
where $p=0.74$, $Z=50$ m, $N_{i0}$ is the total concentration at the reference height $Z$, and $N_{1m^{+}}$ is the concentration just above 1 m [46, 30].
Figure 2: Comparison of upward, downward, and net flux with the radiation model by Varghese et al. [72] (this model produces the correct result at $\epsilon_{s}=1$) and FASCODE [10] for $\epsilon_{s}=1$. In these models, the midlatitude summer atmosphere with water vapor absorption has been used to calculate fluxes.
Figure 3: Vertical profile of aerosol concentration (accumulation and coarse
mode only): concentration of water-soluble (WASO) and soot particles (SOOT)
are significantly higher than that of insoluble particles (INSO), and
concentrations are roughly uniform below 1 m AGL.
Since direct measurements of aerosol concentration profile and its properties
are not available at the observation site, we have tested 1 to 7 times the concentration reported in Hess et al. [30] in the present work to account for
the uncertainties in spatial and temporal variability of aerosol particle
concentrations and properties [30, 73, 19, 75]. If we take the same aerosol
concentrations as reported in Hess et al. [30], the steady-state simulated LTM
intensity is less than 0.2 K, which is weaker compared to our field
observations as well as other field observations [47]. Instead, if we consider
six times (6x) higher concentration of aerosols compared to the reported
concentration in Hess et al. [30] (Figure 3), the LTM and LHR obtained from
the simulations are in good agreement with our field observations. Note that
aerosol particles of diameter less than 0.1 $\mu$m (ultra-fine particles) are
not relevant to optical interactions [64], hence aerosol concentration
presented here consists of accumulation and coarse mode only. With this
profile, the total aerosol concentration at 1 m AGL is 3.3$\times 10^{5}$ cm$^{-3}$, whereas it is 2.15$\times 10^{4}$ cm$^{-3}$ at 40 m AGL, which is of the same order of magnitude as reported in many field observations [75]. Hence, we will
use aerosol concentration profiles shown in Figure 3 for all further analysis.
In Section 5, we will discuss the implications of higher concentrations of
aerosols close to the ground.
### 3.3 Validation of the model
Although the radiation model by Varghese et al. [72] produces a spurious
cooling near the ground due to incorrect handling of the reflected term, such
spurious cooling does not occur for $\epsilon_{s}=1$ because the reflected
component of downward radiative flux vanishes (Equation 4). Hence, the present
model has been validated against the model by Varghese et al. [72] and the
line-by-line Fast Atmospheric Signature Code (FASCODE) by Clough et al. [10]
for $\epsilon_{s}=1$ for the Mid Latitude Summer (MLS) standard atmosphere
with water vapor line absorption only. Figure 2 shows the comparison of
upward, downward, and net flux with the radiation model by Varghese et al.
[72] and FASCODE. At the top of the atmosphere, the offset in upward fluxes in the present model goes up to 7 W m$^{-2}$ compared to FASCODE, and a similar offset in upward fluxes has also been observed between the model by Varghese et al. [72] and FASCODE [72]. Near the ground, the relative offset of fluxes in the present model is less than 2 W m$^{-2}$ compared to the model by Varghese et al. [72] and FASCODE.
### 3.4 Initialization of radiation model
Vertical profile of temperature and water vapor mixing ratio with respect to
pressure and height and aerosols profiles are required to initialize the
model. To get the temperature and humidity profile from the surface to 50 km
AGL, data from three different sources have been concatenated according to the
height: surface to 2 m data from the mast observations, 2 m to 10 km data from
MW radiometer measurements, and 10 km to 50 km from spatially interpolated
ERA5 reanalysis dataset [16]. Since the true vertical resolution of the retrieved water vapor profile is coarse [4], we have performed a sensitivity analysis with a $\pm$20% change in mixing ratio at all heights ($\pm$1.2 g/kg change at 10 m AGL); the resulting change in temperature is less than 0.2 K at any height. This is one order of magnitude smaller than the observed LTM intensity (Section 4), and variation of the mixing ratio alone in this range will not produce an LTM-type profile. Hence, we will use the observed mixing ratio profiles from
the MW radiometer. To resolve LTM and radiation divergence, the vertical
resolution of the present model is kept at 0.4 mm within 1 m AGL, and later,
it gradually coarsens to 18 m at 50 km AGL, giving a total of 32,771
vertical grid points in the model. All simulations have been performed with
and without aerosol profiles for all the considered days at this resolution.
For the bottom boundary condition, we have used observed/estimated surface
temperature ($T_{s}$). We do not have a direct measurement of $T_{s}$ for the initial 44 days of the analysis, and hence we have estimated it using the radiosity equation (20) whenever required. The estimated surface temperature has been validated against the observed surface temperature (from a sensor mounted later, in January 2023), which is available for the last 36 days of the analysis. It is to be noted that with a surface emissivity of $\epsilon_{s}=0.95$, the mean and standard deviation of the absolute error between estimated and observed $T_{s}$ from local sunset to sunrise are 0.42 K and 0.3 K, respectively. This discrepancy is 5–6 times smaller than the observed LTM intensity derived from direct $T_{s}$ observations (see supplementary figure S1).
$F^{\uparrow}_{s}=\epsilon_{s}\sigma
T_{s}^{4}+(1-\epsilon_{s})F^{\downarrow}_{s}$ (20)
where $F^{\uparrow}_{s}$ and $F^{\downarrow}_{s}$ are the upward and downward LW fluxes at the surface, respectively.
Each day's simulation begins with a concatenated temperature and water vapor mixing ratio profile, initiated two hours before local sunset as model spin-up. After 5–6 hours post local sunset, mist forms, reducing visibility to less
than 5 km for most days (Figure 4a). This can lead to sensor wetting and
measurement errors when relative humidity (RH) exceeds 80%. Therefore,
analysis and simulation are primarily restricted to within 4 hours after local
sunset when RH remains below 75%. However, when discussing fog/cloud effects
on LTM (Subsection 4.4), mast-temperature observations during fog are
approached with caution. Significant changes in CNR at 50 m AGL occur with
mist or fog presence, attributed to variations in micron-sized particle (water
droplet) concentration or size distribution not considered in the current
model. As aerosol concentrations remain constant over time in simulations,
analysis is limited to 4 hours post local sunset to mitigate observed mist or
fog effects at the site and condensation-induced measurement errors.
## 4 Results
### 4.1 LTM observations
A temperature profile within 2 m AGL is considered an LTM profile if the
minimum temperature between 10 cm to 55 cm AGL is lower than the surface
temperature and the temperature between 1.6 m and 2 m by at least 0.3 K. Note
that in the absence of direct observations for ground surface temperature, we
have used the temperature measured at 4.5 cm as the surface proxy to quantify LTM occurrence. The threshold of 0.3 K is chosen to avoid spurious detections of LTM, whereas the thresholds for the upper and lower limits of LTM height are taken from Blay-Carreras et al. [3] and Oke [53].
#### 4.1.1 LTM occurrence and characteristics
Although it has been speculated that LTM would not appear in cloudy conditions
or that it disappears when cloud/fog appears [47, 3], the behavior of LTM due to changes in the downward LW flux in foggy conditions has not been reported
quantitatively through observations. From the simultaneous measurement of LW
flux during LTM occurrences and fog events from our field experiments, we show
that downward LW flux changes substantially with the appearance of fog (Figure
4b). In this plot, downward LW flux obtained from the radiation sensor mounted
at 1.14 m AGL has been overlapped with LTM (orange marker) and fog occurrence
(red marker) to observe LTM behavior in different atmospheric conditions.
Since all considered days show similar behavior (eighty days covering two fog seasons), data from only five days are shown for brevity. It can be observed that downward LW flux shows a distinctive diurnal variation (except when fog appears), indicating that the sky is free from convective systems. Also, the precipitation sensor has not recorded any precipitation (not
shown). However, mild fluctuations in the downward LW flux during the local
afternoon on some days indicate the presence of fair-weather cumulus clouds
associated with local convection.
Figure 4: Observations of LTM over a few days (a) Change in CNR at 50 m AGL in
the presence of mist and fog indicates a change in concentration or
distribution of micron-size water droplets in the atmosphere. (b) LTM appears
during evening transitions and is maintained for hours before fog occurs. LTM is not observed during daytime hours. Local conditions are based on METAR notifications; the reported minimum visibility on 17th, 18th, 19th, 20th, and 21st December 2021 is 96 m, 800 m, 193 m, 48 m, and 48 m, respectively.
We note that LTM is absent during the daytime but emerges as the evening
transition progresses, persisting for several hours on clear nights or
intermittently if local conditions are unfavorable. When fog develops in the
early morning, LTM disappears, which is also accompanied by a sharp increase
in downward longwave (LW) flux (except on 18th December 2021). It should be
noted that temperature from the mast sensors and, hence, LTM occurrence might
not be accurate because of sensor wetting, but a similar observation, i.e., the disappearance of LTM in the presence of fog/cloud, has been reported in other
field experiments [47, 3]. We will further look at this aspect from idealized
simulations in Section 4.4. From METAR observations, fog reported on $18^{th}$
December 2021 was mild, having a minimum visibility of 804 m (possibly an
optically thin layer of fog). Because of the thin layer of the fog, downward
LW flux as well as divergence does not change sharply, and LTM persists during
this fog event. When sunlight causes the dissipation of fog in the morning,
downward LW flux, as well as radiation divergence, returns to its diurnal
cycle. However, because of solar heating, local convection close to the
surface dominates and does not allow sustenance of LTM. Hence, LTM is not
observed during daytime in spite of the sky being cloud-free.
Based on eighty days of observations spanning two winter and two spring
seasons, we find that LTM characteristics remain consistent across seasons.
During winter, LTM intensity averages $2.3\pm 0.7$ K, while in spring it is
$2.0\pm 0.5$ K (mean and one standard deviation). Likewise, LTM height
measures $0.30\pm 0.10$ m in winter and $0.32\pm 0.12$ m in spring. Overall,
across the eighty-day analysis period, LTM intensity averages 2.2$\pm$0.6 K,
with a height of 0.31$\pm$0.11 m. These findings align well with previous
studies (Mukund et al. [47], Blay-Carreras et al. [3]). Our observations
indicate slightly higher LTM intensity compared to Blay-Carreras et al. [3]
and lower than Mukund et al. [47]. Minor differences in LTM characteristics
across different field experiments may stem from variations in favorable
conditions such as calm, clear skies, ground properties, and aerosol
characteristics.
#### 4.1.2 Parameters that control LTM
Calm and clear sky conditions are favorable for LTM occurrence, which implies
that wind speed, TKE, and downward and upward LW flux are important parameters
that control the LTM characteristics [39, 47, 3]. Further, apart from aerosol characteristics near the ground, SHF at the soil surface controls the surface temperature evolution and hence the LTM intensity. Histograms of these meteorological factors with and without LTM occurrence, from local sunset to the next 4 hours, are shown in Figure 5. Unfortunately, since the wind sensor
was not functioning during the winter and spring seasons of 2022-23, we have
used the initial 44 days of wind data from the 2021-22 season for this analysis (Figure 5a
and 5b).
Low wind speed and small fluctuations introduce minimal disturbances, so that an LTM profile can be sustained. Wind speeds measured 2 m above ground level are $<$ 6 m s$^{-1}$, indicating calm conditions during the observation period (Figure 5a). We observe that LTM occurrence strongly depends on the wind speed. Most ($>$ 80%) of the LTM occurrences are when the wind speed is less than 2 m s$^{-1}$. Moreover, LTM does not appear or sustain itself if the wind speed is more than 3 m s$^{-1}$. The drop in frequency of LTM occurrence with the increase in wind speed rules out
the role of advection or drainage flow in LTM development, which is also
reported in other field experiments [47, 3].
Figure 5: Histogram of different meteorological factors with and without LTM.
(a) Wind speed and (b) TKE at 2 m AGL; (c) downward and (d) upward LW flux at
1.14 m AGL; (e) Surface sensible heat flux (SHF) at 0.05 m below ground.
Apart from mean wind characteristics, turbulence is another key parameter in
LTM development [48, 47, 3, 35]. Since turbulent kinetic energy (TKE) is a
quantitative measure of the intensity of turbulence, we calculate it from
horizontal wind measured at 2 m AGL using Equation 21.
$TKE=\frac{1}{2}\big{(}\overline{u^{\prime 2}}+\overline{v^{\prime 2}}\big{)}$
(21)
where $u$ and $v$ are the easterly and northerly components of wind, respectively, sampled at 1-second intervals. Velocity fluctuations are calculated as $u^{\prime}=u-\overline{u}$ and $v^{\prime}=v-\overline{v}$, and all averaging is done over 5 minutes; a minimal sketch of this computation follows.
Figure 5b shows the TKE variation with and without LTM. Although TKE reaches up to 0.5 m$^{2}$ s$^{-2}$, TKE $>0.2$ m$^{2}$ s$^{-2}$ is observed for less than 8% of the observations, which indicates that the boundary layer near the ground is not highly turbulent. Further, note that most ($\sim$95%) of the LTM occurrences are when the TKE is $<0.1$ m$^{2}$ s$^{-2}$, and LTM is not observed when TKE is $>0.3$ m$^{2}$ s$^{-2}$. This signifies the requirement of low-turbulence conditions within a few meters of the ground for the occurrence of LTM. Since low wind speed and TKE provide favorable conditions for LTM occurrence but do not interact directly with LTM evolution, we observe that LTM intensity is poorly correlated with wind speed and TKE.
Figures 5c and 5d show histograms of observed radiative fluxes with and
without LTM. Unlike wind-speed and TKE, we note that LTM appears for all
observed values of LW radiative fluxes. However, the frequency of LTM
occurrence increases with the decrease in upward and downward LW fluxes. It
indicates that LTM occurrence depends not only on a cloud-free sky but also on the state of the local atmosphere, which can modulate the incoming LW fluxes. These factors include the vertical distribution of water vapor and the thermal state of the atmosphere, which can change the incoming LW radiation depending on how it is distributed vertically. Since LTM intensity depends on
radiation divergence, not on radiative fluxes (Subsection 4.2), we observe
that LTM intensity and LW fluxes are weakly anti-correlated ($r<-0.2$).
Sensible heat flux (SHF) and surface temperature exhibit a strong correlation
and are pivotal in determining the occurrence and characteristics of the LTM.
Figure 5e illustrates that the occurrence of LTM decreases with an increase in
SHF. When LTM is observed, assuming other fluxes remain constant, a reduction
in SHF diminishes the surface cooling rate, potentially resulting in a
relatively higher surface temperature, thus intensifying LTM. We find a
moderate negative correlation between SHF and LTM intensity ($r=-0.36$,
p-values $<$ .001). Similar relationships between surface SHF and radiative
cooling have been documented by Gentine et al. [25]. In summary, LTM
occurrence tends to increase with decreasing wind speed, turbulent kinetic energy, upward and downward longwave fluxes, and sensible heat flux.
### 4.2 LTM in 1-D model and comparison with field observations
Figure 6: Mean relative temperature profiles (line plots) from observations
(green) and simulations with aerosols (blue) and without aerosols (orange).
Shading represents the ($\pm 1\sigma$) variability in the profiles for each case.
(a) day-to-day variability in 4-hourly mean profiles (from local sunset to
next 4-hours) observed over 80 days spanning two fog-seasons and (b) temporal
variability of the vertical temperature profile observed during the 4 hours
period after the local sunset for the same dataset. The green, dotted lines in
observations represent the transition from mast-data (up to 2 m) to microwave
radiometer data beyond 10 m. Note here vertical axis has hybrid-scale, a
logarithmic scale above 1 m AGL, and a linear scale from the surface to 1 m
AGL.
Mukund et al. [47] demonstrated through laboratory experiments the necessity
of aerosols for reproducing the observed temperature profiles and radiative cooling.
However, both laboratory and field experiments conducted by Mukund et al.
[47], were limited to a height within two meters close to the ground. For
various atmospheric phenomena, such as fog occurrence, temperature and
humidity profiles extending several hundred meters above ground level are
crucial. In this context, utilizing our dataset, we investigate the
significance of aerosols in determining vertical temperature profiles within
the nocturnal atmospheric boundary layer. For all eighty days of observations, numerical simulations were conducted with and without aerosols for a four-hour period from sunset. Initial and boundary conditions are derived from
observations as discussed in Subsection 3.4. Given that ground-level
temperatures vary diurnally and daily, we present relative temperature
profiles with respect to the ground surface. Results from this analysis are
depicted in Figure 6. It is evident that there exists a substantial disparity
between observed and simulated profiles without aerosols, both in profile
shape and mean temperatures ($\sim$ 2–5°C). Incorporating aerosols into the
simulations largely mitigates these temperature differences, resulting in
simulated vertical profiles resembling LTM as observed in this field study.
Also, the impact of including aerosols in radiative processes extends several
hundred meters into the boundary layer. However, a significant offset in temperature (more than 2°C) still occurs above 20 m; this might be due to large-scale temperature advection (see supplementary figure S2) and day-to-day variations, as discussed below.
In Figure 6, we present two types of averaging both for simulations and field
observations. These plots elucidate two types of variabilities in the
temperature profile. We have considered observations and simulated data at
five-minute time intervals, starting from local sunset to the next four hours on each of the eighty days. For this part of the discussion, concentrate
on observed profiles (green color) and simulated profiles with aerosols (blue
color). The mean temperature profiles in Figures 6a and 6b, are represented by
solid green-lines (observation) and dashed blue-lines for simulations. Day-to-
Day variability in the temperature profiles for eighty days is depicted in
Figure 6a. For this plot, temperature measurements for each day within the
4-hour window following local sunset are averaged to obtain a single mean
temperature profile for that day. This process is repeated for eighty-days in
the dataset, resulting in a collection of eighty-mean temperature profiles.
Now the average of these eighty profiles is the solid green-line (for
observation) and dashed blue-line (for simulation with aerosols) in Figure 6a.
Green-shaded region is obtained by calculating variability ($\pm 1\sigma$) of
observed temperature at each height from the mean value of observation at that
height. Shaded region indicates day-to-day variability of temperature in
observation, resulting from the prevailing atmospheric conditions. Similarly
blue-shaded region indicates day-to-day variability of simulated temperature
profile for given input temperature initialization and observed boundary
condition.
We can also investigate the temporal variability of the temperature profiles
(see Figure 6b). For this analysis, we consider the 48 temperature profiles
(4 h $\times$ 12 profiles/h) at five-minute intervals on each day, for eighty
days. Averaging across the eighty days at each corresponding time yields a set
of 48 mean temperature profiles. The average of these 48 profiles is the same
as the one obtained previously and is plotted as the solid green line
(observation) and dashed blue line (simulation with aerosols). As above, for
the set of 48 temperature profiles, calculating the variability
($\pm 1\sigma$) of temperature at each height about the mean value at that
height gives the shaded green (observation) and shaded blue (simulation with
aerosols) regions. It is evident that, in the surface layer up to 20–30 m, the
day-to-day variability (Figure 6a) is many times greater than the temporal
variability (Figure 6b). This indicates that the time needed to establish the
observed temperature profile on a given day is very short, and that most of
the variation originates from the day-to-day variation in atmospheric
conditions, including temperature, water vapor in the atmosphere, the diurnal
history of solar insolation, and probably even the aerosol distribution.
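To make the two averaging schemes concrete, here is a minimal NumPy sketch.
The array shape, variable names, and synthetic values are illustrative
assumptions, not the analysis code used in this study; it only shows that the
day-to-day and temporal schemes share the same overall mean while producing
different $\pm 1\sigma$ spreads.

```python
import numpy as np

# Hypothetical array of temperatures: 80 days x 48 five-minute profiles
# (4 h after sunset) x n_z heights. Values here are synthetic placeholders.
n_days, n_times, n_z = 80, 48, 100
T = np.random.randn(n_days, n_times, n_z)

# Day-to-day variability (Figure 6a style): average over time first,
# giving one mean profile per day, then spread across the 80 daily means.
daily_means = T.mean(axis=1)                 # shape (80, n_z)
mean_profile_a = daily_means.mean(axis=0)    # overall mean profile
sigma_day = daily_means.std(axis=0)          # +/- 1 sigma shading per height

# Temporal variability (Figure 6b style): average over days first,
# giving one mean profile per time step, then spread across the 48 means.
time_means = T.mean(axis=0)                  # shape (48, n_z)
mean_profile_b = time_means.mean(axis=0)     # identical to mean_profile_a
sigma_time = time_means.std(axis=0)          # +/- 1 sigma shading per height

assert np.allclose(mean_profile_a, mean_profile_b)
```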
Figure 7: Mean radiative flux and its divergence with and without aerosols,
from local sunset to the next 4 hours, averaged over the daily mean profiles
of the 80-day analysis. (a) Net LW radiative flux with and without aerosols
from simulations. Flux measurements from the field experiments are also
marked. (b) Vertical variation of radiation, conduction, and net divergence
without aerosols and (c) with aerosols. Here, absolute divergence smaller than
0.01 W m$^{-3}$ is shown as $\pm 10^{-2}$ W m$^{-3}$. The spread at any height
is one standard deviation over the 4 hours from sunset (shading around the
line plot).
Scattering, absorption, and emission of radiation by aerosols change the net
radiative flux and net radiation divergence, leading to different temperature
profiles in the atmosphere [41]. The 4-hour (from sunset) mean net radiative
flux with/without aerosols, as well as the net LW flux obtained from the
radiation sensors, are presented in Figure 7a (green markers). The shaded
region shows the one-standard-deviation spread over the 4 hours. The maximum
spread in both simulations and observations is less than 6 W m$^{-2}$ at
1.14 m AGL, which signifies that simulations and observations follow similar
variability over the evolution. Downward LW fluxes obtained from the two
radiation sensors show an offset of 4 W m$^{-2}$, a possible relative
uncertainty in the radiative flux measurement. However, the net radiative flux
from simulations is 100 W m$^{-2}$ higher than the observed one if aerosols
are not accounted for, and this difference reduces to 40–50 W m$^{-2}$ if
aerosols are accounted for in the model. These offsets in downward LW flux
could be further reduced if other major greenhouse gases such as carbon
dioxide (CO$_2$) and ozone (O$_3$) were accounted for in the radiation model.
Radiation, conduction, and net divergence with/without aerosols are shown in
Figures 7b,c (Equation 16). When aerosols are not accounted for in the model,
radiation divergence induces heating within a few decimeters above the
surface, while conduction divergence causes cooling. In this region, the
cooling induced by conduction divergence dominates over the warming induced by
radiation divergence and produces net cooling close to the surface. Beyond a
few decimeters above the surface, conduction divergence weakens, and radiation
divergence dominates over the remaining column of the atmosphere. Overall, in
the absence of aerosols, radiation and conduction divergence together lead to
net divergence (cooling) of the whole column, but the net divergence decreases
monotonically with height and does not cause any preferential cooling, which
is required for the development of LTM.
In the presence of aerosols, radiation divergence is substantial near the
surface and dominates over conduction divergence over the whole column
(Figure 7c). Heating induced by conduction divergence decreases rapidly away
from the surface, while cooling caused by radiation divergence dominates
(Equation 16). This leads to a locally enhanced net divergence between 0.5 and
2 m above the surface and makes the net divergence profile non-monotonic with
height. This enhanced net divergence causes preferential cooling, which leads
to LTM development. The interplay of these flux-divergence terms at the
surface is further complicated by the presence of a penetrative convection
system (see Kaushal et al. [36]) driven by radiative cooling. We also observe
that aerosol-induced radiation divergence (net cooling) beyond a few
decimeters from the surface is higher than under no-aerosol conditions,
extending for a few hundred meters. Although the net divergence (cooling)
decreases with height, it can induce a significant temperature change over the
night in the ABL.
Figure 8: Temporal variation in mean LHR from local sunset to the next 4
hours, up to 1 km. (a) Without aerosols: aerosol-radiation interactions are
not accounted for in the model. (b) With aerosols: aerosol-radiation
interactions are accounted for. (c) Observations: LHR derived from observed
temperature profiles using Equation 16. The derived LHR from observations
agrees better with the aerosol-accounted LHR than with the without-aerosol
LHR.
### 4.3 Longwave heating rate (LHR) after sunset
The temporal variation in the day-to-day mean LHR over 80 days, from local
sunset to the next 4 hours, is shown in Figure 8. Since the net heating rate
and the conduction heating rate can be directly estimated from the temperature
profile, LHR profiles have been derived using Equation (16) under the
assumption of negligible horizontal advection and vertical mixing. When
aerosols are not accounted for, the mean LHR near the surface is weaker than
the LHR from the field observations and the aerosol-accounted simulations
(Figure 8). Simulations without aerosols show positive LHR and, hence,
radiative warming within a few centimeters of the surface. Above the warming
region, radiative cooling of $\approx$2 K/h is observed near the ground. A
comparable radiative cooling rate has also been reported without aerosols by
Ha and Mahrt [29] and Steeneveld et al. [66]. In contrast, observations and
aerosol-accounted models show intense cooling within 1 m AGL, where the LHR
falls below -10 K/h. Similar LHR values of several K/h around sunset have also
been reported by Steeneveld et al. [68], but all LW radiation models
systematically underestimate radiative cooling by one order of magnitude,
which could be attributed to the absence of aerosol-radiation interaction in
the models. Further, neglecting aerosols in the models can lead to an offset
of $\approx 4$ K in the modeled temperature (Figure 6), which can affect the
onset, growth, and intensity of fog. Here, we show that LHR profiles from
simulations and field experiments are of the same order only if
aerosol-radiation interactions are allowed in the model.
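The following is a minimal sketch of how LHR can be backed out of a
temperature field under the stated assumptions. It presumes that Equation (16)
reduces to net heating = radiative heating + conductive heating when advection
and vertical mixing are negligible; the grid, the placeholder temperatures,
and the constant molecular diffusivity `kappa` are illustrative assumptions,
not values from this study.

```python
import numpy as np

# Hypothetical inputs: temperature T(z, t) on a vertical grid z [m] at
# 5-minute steps after sunset; values below are synthetic placeholders.
z = np.linspace(0.01, 100.0, 400)            # heights, m
t = np.arange(0, 4 * 3600 + 1, 300)          # seconds after sunset
T = 290.0 + np.random.randn(z.size, t.size)  # placeholder for observed T

kappa = 2.0e-5  # molecular thermal diffusivity of air [m^2/s], assumed

net_rate = np.gradient(T, t, axis=1)          # dT/dt, K/s
cond_rate = kappa * np.gradient(
    np.gradient(T, z, axis=0), z, axis=0)     # kappa * d2T/dz2, K/s

# With advection and vertical mixing neglected, the longwave heating
# rate is the residual of the heat budget:
lhr_K_per_h = (net_rate - cond_rate) * 3600.0
```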
The temporal and vertical evolution of the LHR profiles from aerosol-accounted
simulations is in good agreement with the profiles derived from field
observations (Figure 8). Around local sunset, radiative cooling of more than
1 K h$^{-1}$ is observed up to 10 m AGL in the aerosol-accounted models as
well as in the field observations, which causes intense cooling close to the
ground. Further, the LHR decreases sharply with height, but a weak negative
LHR can be observed up to a few hundred meters. We believe that the
fluctuations in the derived LHR are due to the advection of temperature and
humidity profiles, which is not accounted for in the present model. Although
the LHR spread in the observations is large compared to the simulations, the
mean LHR from aerosol-accounted simulations is in better agreement with the
observed mean LHR than that from simulations without aerosols. Moreover, it is
clear from the derived LHR and the aerosol-accounted simulations that the LHR
can be lower than -5 K h$^{-1}$ very close to the ground, which might play an
important role in land-atmosphere coupling.
### 4.4 Radiation divergence and LTM in the presence of fog/cloud
Based on past field observations, it has been speculated that LTM disappears
if a cloud passes over it [60, 53, 47, 3]. We also observe that LTM disappears
in most of the fog events but persists in one fog event (Figure 4, on 18
December). Hence, to investigate the role of different thicknesses as well as
different base heights of fog on LTM, we have considered four idealized
fog/cloud layers in the simulations. These layers are located between the
surface and 10 m (T1), the surface and 90 m (T2), 10 m and 100 m (T3), and
300 m and 390 m (T4). Although T1 and T2 both represent fog touching the
ground, T1 is shallower than T2. T3 and T4 represent fog/cloud layers of the
same thickness but different base heights. Further, the size distribution and
concentration of the fog are not site-specific and are taken from the OPAC
database [30]. For the sake of simplicity, the concentration and size
distribution of the fog do not change within a fog layer and change sharply
across its boundaries. In this set of analyses, the aerosol profile considered
is the same as earlier (Figure 3); in addition, fog droplets are modeled with
a modified gamma distribution having a total concentration of 15 cm$^{-3}$.
For each fog configuration, we have simulated the growth and dissipation of
the LTM profiles, the radiative flux, and its divergence in the presence of
the fog layer. In all of these simulations, the initial 2 hours are run
without the fog layer to allow the LTM to develop. After 2 hours of
simulation, the fog layer is activated, and the evolution of parameters such
as temperature, radiative flux, and divergence under the combined effect of
aerosols and fog is observed for the next 2 hours of simulation.
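A minimal sketch of this fog-layer schedule is shown below; the function and
dictionary names are hypothetical and only encode the timing and layer
geometry described above, not the actual model interface.

```python
# Idealized fog-layer schedule used in these runs (illustrative names).
FOG_CASES = {           # (base height, top height) in metres
    "T1": (0.0, 10.0),
    "T2": (0.0, 90.0),
    "T3": (10.0, 100.0),
    "T4": (300.0, 390.0),
}

def fog_active(t_seconds: float, case: str, z: float) -> bool:
    """Fog droplets (15 cm^-3, modified gamma distribution) are present
    only after the first 2 h of simulation and only inside the layer,
    with a sharp cut-off at the layer boundaries."""
    if t_seconds < 2 * 3600:
        return False                  # let the LTM develop first
    base, top = FOG_CASES[case]
    return base <= z <= top
```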
Figure 9: Vertical profiles of (a) relative temperature and (b) radiative
divergence in the presence of different fog layers.
Figure 9(a) shows the temperature profiles relative to the surface temperature
in the presence of the different fog/cloud layers as well as without them.
Without any fog/cloud, we observe an LTM of 2 K, as usually observed in our
field experiments. However, the LTM intensity and height increase in the
presence of shallow fog touching the ground (Case T1) due to enhanced
radiation divergence near the ground (Figure 9(b)). This is due to the
increased interaction of radiation with fog droplets under the clear sky
above. A similar case, the maintenance of LTM within fog, has been observed
during one fog event (Figure 4). At the fog top and bottom, we notice a sharp
change in the temperature profiles and radiative divergence, which is due to
the idealized boundary conditions, i.e., the sharp change in fog concentration
at the boundary. Above the fog top, temperature and radiative divergence tend
towards the fog-free condition. With an increase in fog thickness (Case T2),
the optical thickness of the fog becomes so large that it behaves like an
opaque sheet for the air layer near the ground. Hence, the radiative flux
evolves practically independently above the fog layer and near the ground, and
LTM does not appear. We have observed an increase in downward LW radiation in
the presence of fog in the simulations (not shown here), which has also been
observed in the field experiments (Figure 4). Hence, we observe an inversion
profile rather than an LTM profile. Further, with an increase in the cloud
base height at the same thickness (Cases T3 and T4), we observe that the
radiation divergence weakens near the ground, which results in an inversion
profile of higher intensity.
We observe a significant temperature drop at the fog top. Many field
observations and simulations have reported intense fog-top cooling [63, 49,
37, 77], but we note that the present model is one-dimensional and does not
account for gravitational settling, downdrafts, or mixing due to turbulent
convection (because of negative buoyancy). With such limitations, the
temperature drop indicated here within the fog layer or at the fog top/bottom
can be significant. Further, vertical mixing induced by fog-top cooling might
lead to temperature convergence within the fog layer [57]. Moreover, the
energy released during phase change in foggy conditions offsets the cooling
caused by radiation divergence. As the fog-top height increases, the cooling
caused by radiation divergence extends across the fog layer, which results in
reduced observable cooling within the fog layer. If the fog-top height is
substantial and the fog duration is brief, the temperature drop due to fog-top
cooling is not significantly noticeable (see supplementary figure S3).
However, our interest here is to test the radiative effect of the fog layer,
especially the impact of fog on radiation divergence and on the sustenance of
LTM. Therefore, fog microphysics and the related dynamics are not discussed
here. We observe that enhanced radiation divergence at the fog top is
responsible for the intense cooling there.
## 5 Discussions
LTM characteristics vary significantly across different field experiments [53,
47, 3]. In the present field experiments, the observed mean LTM height is
close to that observed by Mukund et al. [47], whereas the observed intensity
is lower. Mukund et al. [47], who studied LTM at one location with modified
surface properties, have shown that LTM height does not depend significantly
on the surface type. We expect that it depends strongly on other parameters,
especially the aerosol concentration profile, which has high spatial and
temporal variability [75]. An extensive field experiment with simultaneous
profiling of aerosols and LTM would unravel the direct correlation between the
aerosol vertical distribution and the variation in LTM height reported in
field observations [53, 47, 3].
The results obtained from the one-dimensional conduction-radiation model,
which incorporates radiation interactions with a representative aerosol
profile, closely agree with the observed temperature profiles after the
evening transition. This agreement also extends to the predicted and measured
radiation divergence during the evening transition, both in the present study
and in other field observations [68]. However, considerable variability in the
aerosol vertical distribution, particle concentrations, sizes, and chemical
composition, and in their intricate interactions with radiation, is reported
in [19]. In this survey, the total aerosol concentration at a few sites was
found to exceed $10^{5}$ cm$^{-3}$, but most of these observations were
performed a few meters above the ground, and aerosols within 1–2 m of the
surface have not been investigated. The aerosol concentration within 1 m can
be 10–100 times higher than the concentration measured a few meters above the
ground [13, 6]. Overall, in light of the significant uncertainties associated
with the spatial and temporal variability of aerosol particle concentrations
and properties, we use the aerosol concentration, profile, and properties
presented in Section 3.2. Detailed profiling of aerosols at the observation
site might further reduce the offsets in LTM height and in the temperature
profiles at different vertical levels away from the surface. Despite the
uncertainties in the aerosol profile, we demonstrate that the simulated LTM is
in good agreement with the field observations only when aerosols are included.
This is also consistent with the laboratory-scale LTM observations of Mukund
et al. [47], in which LTM intensity decreases with decreasing aerosol
concentration in the test section. Moreover, the evolution of radiative flux,
divergence, and temperature is in better agreement with the field observations
when aerosols are accounted for in the model. Therefore, the results presented
here clearly emphasise the need to include aerosol-induced radiative cooling
in the current radiation models used in the stable, nocturnal ABL.
The model employed in this study is a one-dimensional conduction-radiation
model that considers aerosols. However, it does not incorporate horizontal
advection, vertical convection/mixing, or other dynamical processes. Hence,
the model has limitations in accounting for changes in the temperature profile
due to horizontal advection and vertical mixing. Although we show from the
current field observations that radiation divergence plays a crucial role in
LTM development and decay, better agreement between simulations and field
observations could be achieved if other dynamical processes, such as vertical
mixing in the unstable layer near the ground, were also accounted for in the
model [43].
A basic calculation shows that a radiation divergence of 0.5 W m$^{-3}$ can
lead to a cooling rate of 1.8 K/h, which can influence many meteorological
phenomena after sunset, such as the near-surface temperature inversion, fog,
pollution dispersal, drainage flow, and the nocturnal jet in the NBL. Since
the occurrence, dissipation, and intensity of fog and drainage flow are
sensitive to small temperature changes, the prediction of these phenomena can
be improved if aerosol-radiation interaction in the LW region is accounted for
in current forecasting models.
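This figure follows from the heat budget
$\partial T/\partial t = (\nabla\cdot F)/(\rho c_p)$; a back-of-the-envelope
check, assuming a volumetric heat capacity of air of roughly $10^3$ J
m$^{-3}$ K$^{-1}$ near the surface (an assumed value, with $\rho \sim 1$ kg
m$^{-3}$ and $c_p \sim 1005$ J kg$^{-1}$ K$^{-1}$), reproduces it:

```python
# Back-of-the-envelope check of the quoted cooling rate.
div_F = 0.5                      # radiative flux divergence, W m^-3
rho_cp = 1000.0                  # volumetric heat capacity, J m^-3 K^-1 (assumed)
cooling_rate = div_F / rho_cp * 3600.0   # convert K/s to K per hour
print(f"{cooling_rate:.1f} K/h")         # -> 1.8 K/h
```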
## 6 Conclusions
During the evening transition and later in the night, radiative cooling
strongly modulates the thermal structure of the ABL near the ground, which has
significant implications in micrometeorology and agriculture. Most radiation
models are unable to reproduce it satisfactorily [68, 65], which might be due
to the missing aerosol-radiation interaction in the LW region [79, 11, 1, 46].
In this paper, we have presented results that elucidate the role of
aerosol-induced radiative cooling in the thermal structure of the ABL during
and after the evening transition, through extensive field observations and
numerical simulations.
From our field experiments, we have demonstrated that the Lifted Temperature
Minimum (LTM) generally occurs in the nocturnal boundary layer under calm and
clear-sky conditions, typically appearing during the evening transition and
intermittently disappearing during the night. The persistence of LTM depends
on specific conditions, and it is not observed during the daytime, when solar
heating and convection dominate.
Our field observations reveal that LTM occurrence is strongly influenced by
factors such as the mean wind speed, turbulent kinetic energy, downward and
upward longwave fluxes, and sensible heat flux. The probability of LTM
occurrence increases as these parameters decrease. Notably, the disappearance
of LTM with increasing wind speed suggests that near-surface advection is not
its primary cause. Further, LTM is observed in both seasons (winter and
spring), with no significant change in its characteristics across the seasons.
The observed LTM intensity and height are 2.2$\pm$0.6 K and 0.31$\pm$0.11 m
(mean and one standard deviation), respectively. These values are in a similar
range to those reported by Mukund et al. [47] and Blay-Carreras et al. [3].
Simulations using a one-dimensional conduction-radiation model that accounts
for aerosols show that LTM cannot form in the absence of aerosols, even under
favorable conditions. Without aerosols, the net divergence of longwave
radiation decreases monotonically with height, leading to a typical
temperature-inversion profile. In contrast, the presence of aerosols results
in a non-monotonic net divergence, causing preferential cooling near the
ground and leading to LTM development. The simulated LTM height and intensity
match the field observations when aerosols are considered. Further analysis
with idealized fog-layer simulations reveals that the presence of fog
modulates the downward longwave flux and the radiation divergence depending on
the fog base height and thickness. In the case of shallow fog near the ground,
LTM strengthens with an increase in radiation divergence. Both radiation
divergence and LTM intensity decrease with increasing fog thickness, and if
the fog becomes optically thick, LTM disappears completely and we observe an
inversion profile. This behavior aligns with the field observations,
supporting the model's treatment of radiative transfer.
We have also investigated the reasons for the underestimation of the longwave
heating rate (LHR) by radiation models during evening transitions, as reported
by Steeneveld et al. [68]. Our findings, based on simulations with and without
aerosols, demonstrate that the LHR from simulations and field experiments
agree better when aerosols are considered in the model. Aerosol-induced LHR is
not limited to LTM occurrence during evening transitions; its effects extend
several hundred meters above ground level and can influence various
meteorological phenomena, such as the development of the nocturnal boundary
layer, temperature inversion, mist, fog, and pollution dispersion, ultimately
affecting the stability of the boundary layer. Incorporating aerosol-radiation
interactions in longwave radiation models will lead to improved forecasts of
these phenomena.
Although we have shown, from a one-dimensional model, the role of aerosols in
radiation divergence as well as in LTM occurrence under calm and clear-sky
conditions, a parameterized version needs to be developed and tested in NWP
models. Further, the current study does not account for dynamical modeling,
which can be important near the ground. Although penetrative convection formed
near the ground due to radiative cooling has been studied by Kaushal et al.
[36], the maintenance of LTM against convective instability and turbulent flux
divergence in its unstable layer needs to be investigated in the future. Other
LTM characteristics, such as its transient, appearing-and-disappearing
behavior during the night, are currently under investigation from observations
and simulations.
## 7 Acknowledgements
We thank the Department of Science and Technology, Government of India, for
co-funding this project through the Technical Research Centre (TRC) program at
JNCASR, Bengaluru, India. Additionally, we extend our thanks to Bangalore
International Airport Limited (BIAL), Bengaluru, India, for co-funding this
project and for granting access to the runway area at Kempegowda International
Airport Bengaluru (KIAB) to establish our observation site, as well as
providing other logistical support. Furthermore, we acknowledge the National
Supercomputing Mission (NSM) program at JNCASR for providing the computational
facility necessary for this research project.
## 8 Conflict of interest
We herewith declare that we do not have any conflict of interest in the work
reported in this paper.
## References
* André and Mahrt [1982] JC André and L Mahrt. The nocturnal surface inversion and influence of clear-air radiative cooling. _Journal of Atmospheric Sciences_ , 39(4):864–878, 1982.
* Angevine et al. [2020] Wayne M Angevine, John M Edwards, Marie Lothon, Margaret A LeMone, and Simon R Osborne. Transition periods in the diurnally-varying atmospheric boundary layer over land. _Boundary-Layer Meteorology_ , 177:205–223, 2020.
* Blay-Carreras et al. [2015] Estel Blay-Carreras, ER Pardyjak, D Pino, SW Hoch, J Cuxart, Daniel Martínez, and Joachim Reuder. Lifted temperature minimum during the atmospheric evening transition. _Atmospheric Chemistry and Physics_ , 15(12):6981–6991, 2015.
* Blumberg et al. [2015] WG Blumberg, DD Turner, U Löhnert, and S Castleberry. Ground-based temperature and humidity profiling using spectral infrared and microwave observations. part ii: Actual retrieval performance in clear-sky and cloudy conditions. _Journal of Applied Meteorology and Climatology_ , 54(11):2305–2319, 2015.
* Bohren and Huffman [2008] Craig F Bohren and Donald R Huffman. _Absorption and scattering of light by small particles_. John Wiley & Sons, 2008.
* Chate and Pranesha [2004] DM Chate and TS Pranesha. Field measurements of sub-micron aerosol concentration during cold season in india. _Current Science_ , pages 1610–1613, 2004.
* Chou and Suarez [1994] Ming-Dah Chou and Max J Suarez. An efficient thermal infrared radiation parameterization for use in general circulation models. 1994.
* Chou et al. [1993] Ming-Dah Chou, William L Ridgway, and Michael MH Yan. One-parameter scaling and exponential-sum fitting for water vapor and co2 infrared transmission functions. _Journal of the atmospheric sciences_ , 50(14):2294–2303, 1993.
* Chou et al. [2001] Ming-Dah Chou, Max J Suarez, Xin-Zhong Liang, Michael M-H Yan, and Charles Cote. A thermal infrared radiation parameterization for atmospheric studies. Technical report, 2001.
* Clough et al. [1992] Shepard A Clough, Michael J Iacono, and Jean-Luc Moncet. Line-by-line calculations of atmospheric fluxes and cooling rates: Application to water vapor. _Journal of Geophysical Research: Atmospheres_ , 97(D14):15761–15785, 1992.
* Coantic and Seguin [1971] M Coantic and Bernard Seguin. On the interaction of turbulent and radiative transfers in the surface layer. _Boundary-Layer Meteorology_ , 1:245–263, 1971.
* Devara and Raj [1993] PCS Devara and P Ernest Raj. Lidar measurements of aerosols in the tropical atmosphere. _Advances in atmospheric sciences_ , 10:365–378, 1993.
* Devara et al. [1995] PCS Devara, P Ernest Raj, S Sharma, and G Pandithurai. Real-time monitoring of atmospheric aerosols using a computer-controlled lidar. _Atmospheric Environment_ , 29(16):2205–2215, 1995.
* Drüe and Heinemann [2007] Clemens Drüe and Günther Heinemann. Characteristics of intermittent turbulence in the upper stable boundary layer over greenland. _Boundary-layer meteorology_ , 124:361–381, 2007.
* Edwards [2009] JM Edwards. Radiative processes in the stable boundary layer: Part i. radiative aspects. _Boundary-layer meteorology_ , 131:105–126, 2009.
* ERA5 [2017] ERA5. Copernicus Climate Change Service (C3S) (2017): ERA5: Fifth generation of ECMWF atmospheric reanalyses of the global climate. Copernicus Climate Change Service Climate Data Store (CDS). https://cds.climate.copernicus.eu/cdsapp#!/home, 2017. [Online; accessed 30-August-2022].
* Estournel and Guedalia [1985] Claude Estournel and Daniel Guedalia. Influence of geostrophic wind on atmospheric nocturnal cooling. _Journal of the atmospheric sciences_ , 42(23):2695–2698, 1985.
* Fernando et al. [2013] Harindra JS Fernando, Brett Verhoef, Silvana Di Sabatino, Laura S Leo, and Seoyeon Park. The phoenix evening transition flow experiment (transflex). _Boundary-layer meteorology_ , 147:443–468, 2013.
* Forster et al. [2021] Piers Forster, Trude Storelvmo, Kyle Armour, William Collins, Jean-Louis Dufresne, David Frame, Dan Lunt, Thorsten Mauritsen, Matthew Palmer, Masahiro Watanabe, et al. The earth’s energy budget, climate feedbacks, and climate sensitivity. 2021.
* Fuggle and Oke [1976] RF Fuggle and TR Oke. Long-wave radiative flux divergence and nocturnal cooling of the urban atmosphere: I: Above roof-level. _Boundary-Layer Meteorology_ , 10(2):113–120, 1976.
* Funk [1960] JP Funk. Measured radiative flux divergence near the ground at night. _Quarterly Journal of the Royal Meteorological Society_ , 86(369):382–389, 1960.
* Funk [1961] JP Funk. A numerical method for the computation of the radiative flux divergence near the ground. _Journal of the Atmospheric Sciences_ , 18(3):388–392, 1961.
* Garratt and Brost [1981] JR Garratt and RA Brost. Radiative cooling effects within and above the nocturnal boundary layer. _Journal of Atmospheric Sciences_ , 38(12):2730–2746, 1981.
* Geiger [1957] Rudolf Geiger. The climate near the ground, 494 pp, 1957.
* Gentine et al. [2018] Pierre Gentine, Gert-Jan Steeneveld, Bert G Heusinkveld, and Albert AM Holtslag. Coupling between radiative flux divergence and turbulence near the surface. _Quarterly Journal of the Royal Meteorological Society_ , 144(717):2491–2507, 2018.
* Gopalakrishnan et al. [1998] SG Gopalakrishnan, Maithili Sharan, RT McNider, and MP Singh. Study of radiative and turbulent processes in the stable boundary layer under weak wind conditions. _Journal of the atmospheric sciences_ , 55(6):954–960, 1998.
* Grant [1997] ALM Grant. An observational study of the evening transition boundary-layer. _Quarterly Journal of the Royal Meteorological Society_ , 123(539):657–677, 1997.
* Gultepe [2008] Ismail Gultepe. Fog and boundary layer clouds: fog visibility and forecasting. 2008.
* Ha and Mahrt [2003] Kyung-Ja Ha and Larry Mahrt. Radiative and turbulent fluxes in the nocturnal boundary layer. _Tellus A_ , 55(4):317–327, 2003.
* Hess et al. [1998] Michael Hess, Peter Koepke, and Ingrid Schult. Optical properties of aerosols and clouds: The software package opac. _Bulletin of the American meteorological society_ , 79(5):831–844, 1998.
* Hoch [2005] Sebastian Wilhelm Hoch. _Radiative flux divergence in the surface boundary layer: A study based on observations at Summit, Greenland_. PhD thesis, ETH Zurich, 2005.
* Hoch et al. [2007] SW Hoch, P Calanca, R Philipona, and A Ohmura. Year-round observation of longwave radiative flux divergence in greenland. _Journal of Applied Meteorology and Climatology_ , 46(9):1469–1479, 2007.
* Hou and Wu [2016] Pei Hou and Shiliang Wu. Long-term changes in extreme air pollution meteorology and the implications for air quality. _Scientific reports_ , 6(1):23792, 2016.
* Jacobson [1999] Mark Z Jacobson. _Fundamentals of atmospheric modeling_. Cambridge university press, 1999.
* Jensen et al. [2016] Derek D Jensen, Daniel F Nadeau, Sebastian W Hoch, and Eric R Pardyjak. Observations of near-surface heat-flux and temperature profiles through the early evening transition over contrasting surfaces. _Boundary-layer meteorology_ , 159:567–587, 2016.
* Kaushal et al. [2024] Shaurya Kaushal, D. K. Singh, and K. R. Sreenivas. Radiatively driven penetrative convection in the nocturnal boundary layer. _(Under Revision)_, 2024.
* Koračin et al. [2014] Darko Koračin, Clive E Dorman, John M Lewis, James G Hudson, Eric M Wilcox, and Alicia Torregrosa. Marine fog: A review. _Atmospheric research_ , 143:142–175, 2014.
* Kutty et al. [2019] Saumya G Kutty, G Agnihotri, AP Dimri, and I Gultepe. Fog occurrence and associated meteorological factors over kempegowda international airport, india. _Pure and Applied Geophysics_ , 176:2179–2190, 2019.
* Lake [1956] JV Lake. The temperature profile above bare soil on clear nights. _Quarterly Journal of the Royal Meteorological Society_ , 82(352):187–197, 1956.
* Lieske and Stroschein [1967] Bruce Jerome Lieske and LA Stroschein. Measurements of radiative flux divergence in the arctic. _Archiv für Meteorologie, Geophysik und Bioklimatologie, Serie B_ , 15:67–81, 1967.
* Liou [2002] Kuo-Nan Liou. _An introduction to atmospheric radiation_ , volume 84. Elsevier, 2002.
* Mahrt [1985] L Mahrt. Vertical structure and turbulence in the very stable boundary layer. _Journal of Atmospheric Sciences_ , 42(22):2333–2349, 1985.
* Mahrt [2014] Larry Mahrt. Stably stratified atmospheric boundary layers. _Annual Review of Fluid Mechanics_ , 46:23–45, 2014.
* Mesonet [2019] Iowa Environmental Mesonet. ASOS-AWOS-METAR data download. _URL: http://mesonet.agron.iastate.edu/request/download.phtml_, 2019.
* Monteith [1957] JL Monteith. Dew. _Quarterly Journal of the Royal Meteorological Society_ , 83(357):322–341, 1957.
* Mukund et al. [2010] V Mukund, VK Ponnulakshmi, DK Singh, G Subramanian, and KR Sreenivas. Hyper-cooling in the nocturnal boundary layer: the ramdas paradox. _Physica Scripta_ , 2010(T142):014041, 2010.
* Mukund et al. [2014] V Mukund, DK Singh, VK Ponnulakshmi, Ganesh Subramanian, and KR Sreenivas. Field and laboratory experiments on aerosol-induced cooling in the nocturnal boundary layer. _Quarterly Journal of the Royal Meteorological Society_ , 140(678):151–169, 2014.
* Narasimha [1994] R Narasimha. The dynamics of the ramdas layer. _Current Science_ , 66(1):16–28, 1994.
* Nishikawa et al. [2004] Toru Nishikawa, Shigenao Maruyama, and Seigo Sakai. Radiative heat transfer and hydrostatic stability in nocturnal fog. _Boundary-layer meteorology_ , 113:273–286, 2004.
* Nkemdirim [1978] Lawrence C Nkemdirim. A comparison of radiative and actual nocturnal cooling rates over grass and snow. _Journal of Applied Meteorology (1962-1982)_ , pages 1643–1646, 1978.
* Nkemdirim [1988] Lawrence C Nkemdirim. Nighttime surface-layer temperature tendencies with and without chinooks. _Journal of Applied Meteorology and Climatology_ , 27(4):482–489, 1988.
* Nunez and Oke [1976] M Nunez and TR Oke. Long-wave radiative flux divergence and nocturnal cooling of the urban atmosphere: Ii: Within an urban canyon. _Boundary-Layer Meteorology_ , 10(2):121–135, 1976.
* Oke [1970] TR Oke. The temperature profile near the ground on calm clear nights. _Quarterly Journal of the Royal Meteorological Society_ , 96(407):14–23, 1970.
* Ponnulakshmi et al. [2012] VK Ponnulakshmi, V Mukund, DK Singh, KR Sreenivas, and Ganesh Subramanian. Hypercooling in the nocturnal boundary layer: Broadband emissivity schemes. _Journal of the atmospheric sciences_ , 69(9):2892–2905, 2012.
* Ponnulakshmi et al. [2013] VK Ponnulakshmi, DK Singh, V Mukund, KR Sreenivas, and Ganesh Subramanian. Hypercooling in the atmospheric boundary layer: beyond broadband emissivity schemes. _Journal of the atmospheric sciences_ , 70(1):278–283, 2013.
* Press [2007] William H Press. _Numerical recipes 3rd edition: The art of scientific computing_. Cambridge university press, 2007.
* Price [2011] Jeremy Price. Radiation fog. part i: observations of stability and drop size distributions. _Boundary-layer meteorology_ , 139:167–191, 2011.
* Räisänen [1996] Petri Räisänen. The effect of vertical resolution on clear-sky radiation calculations: tests with two schemes. _Tellus A_ , 48(3):403–423, 1996.
* Ramanathan and Ramdas [1935] KR Ramanathan and LA Ramdas. Derivation of ångstrom’s formula for atmospheric radiation and some general considerations regarding nocturnal cooling of air-layers near the ground. In _Proceedings of the Indian Academy of Sciences-Section A_ , volume 1, pages 822–829. Springer India, 1935.
* Ramdas and Atmanathan [1932] LA Ramdas and S Atmanathan. The vertical distribution of air temperature near the ground at night. _Beit. Geophys_ , 37:116–117, 1932.
* Ramdas and Atmanathan [1957] LA Ramdas and S Atmanathan. Über das nächtliche temperaturminimum über nackten boden in poona. _Meteorol. Rundsch_ , 10:1–11, 1957.
* Rinke et al. [2012] Annette Rinke, Yongfeng Ma, Lingen Bian, Yufei Xin, Klaus Dethloff, P Ola G Persson, Christof Lüpkes, and Cunde Xiao. Evaluation of atmospheric boundary layer–surface process relationships in a regional climate model along an east antarctic traverse. _Journal of Geophysical Research: Atmospheres_ , 117(D9), 2012.
* Roach et al. [1976] WT Roach, R Brown, SJ Caughey, JA Garland, and CJ Readings. The physics of radiation fog: I–a field study. _Quarterly Journal of the Royal Meteorological Society_ , 102(432):313–333, 1976.
* Seinfeld et al. [2016] John H Seinfeld, Christopher Bretherton, Kenneth S Carslaw, Hugh Coe, Paul J DeMott, Edward J Dunlea, Graham Feingold, Steven Ghan, Alex B Guenther, Ralph Kahn, et al. Improving our fundamental understanding of the role of aerosol- cloud interactions in the climate system. _Proceedings of the National Academy of Sciences_ , 113(21):5781–5790, 2016.
* Steeneveld [2014] Gert-Jan Steeneveld. Current challenges in understanding and forecasting stable boundary layers over land and ice. _Frontiers in Environmental Science_ , 2:41, 2014.
* Steeneveld et al. [2006] GJ Steeneveld, BJH Van de Wiel, and AAM Holtslag. Modeling the evolution of the atmospheric boundary layer coupled to the land surface for three contrasting nights in cases-99. _Journal of the atmospheric sciences_ , 63(3):920–935, 2006.
* Steeneveld et al. [2008] GJ Steeneveld, Thorsten Mauritsen, EIF De Bruijn, J Vilà-Guerau de Arellano, Gunilla Svensson, and AAM Holtslag. Evaluation of limited-area models for the representation of the diurnal cycle and contrasting nights in cases-99. _Journal of Applied Meteorology and Climatology_ , 47(3):869–887, 2008.
* Steeneveld et al. [2010] GJ Steeneveld, MJJ Wokke, CD Groot Zwaaftink, S Pijlman, BG Heusinkveld, AFG Jacobs, and AAM Holtslag. Observations of the radiation divergence in the surface layer and its implication for its parameterization in numerical weather prediction models. _Journal of Geophysical Research: Atmospheres_ , 115(D6), 2010.
* Stephens [1984] Graeme L Stephens. The parameterization of radiation for numerical weather prediction and climate models. _Monthly weather review_ , 112(4):826–867, 1984.
* Stull [1988] Roland B Stull. _An introduction to boundary layer meteorology_ , volume 13. Springer Science & Business Media, 1988.
* Sun et al. [2003] Jielun Sun, Sean P Burns, Anthony C Delany, Steven P Oncley, Thomas W Horst, and Donald H Lenschow. Heat balance in the nocturnal boundary layer during cases-99. _Journal of Applied Meteorology_ , 42(11):1649–1666, 2003.
* Varghese et al. [2003] Saji Varghese, AS Vasudeva Murthy, and Roddam Narasimha. A fast, accurate method of computing near-surface longwave fluxes and cooling rates in the atmosphere. _Journal of the atmospheric sciences_ , 60(23):2869–2886, 2003.
* Wang et al. [2011] M Wang, S Ghan, R Easter, M Ovchinnikov, Xiaohong Liu, E Kassianov, Y Qian, WI Gustafson Jr, VE Larson, DP Schanen, et al. The multi-scale aerosol-climate model pnnl-mmf: Model description and evaluation. _Geoscientific Model Development_ , 4(1):137–168, 2011.
* Wild et al. [2001] Martin Wild, Atsumu Ohmura, Hans Gilgen, Jean-Jacques Morcrette, and Anthony Slingo. Evaluation of downward longwave radiation in general circulation models. _Journal of climate_ , 14(15):3227–3239, 2001\.
* Wu and Boor [2021] Tianren Wu and Brandon E Boor. Urban aerosol size distributions: a global perspective. _Atmospheric Chemistry and Physics_ , 21(11):8883–8914, 2021.
* Xing-Sheng et al. [1983] Li Xing-Sheng, JE Gaynor, and JC Kaimal. A study of multiple stable layers in the nocturnal lower atmosphere. _Boundary-layer meteorology_ , 26(2):157–168, 1983.
* Yang and Gao [2020] Yue Yang and Shanhong Gao. The impact of turbulent diffusion driven by fog-top cooling on sea fog development. _Journal of Geophysical Research: Atmospheres_ , 125(4):e2019JD031562, 2020.
* Zdunkowski and Johnson [1965] Wilford G Zdunkowski and Frank G Johnson. Infrared flux divergence calculations with newly constructed radiation tables. _Journal of Applied Meteorology and Climatology_ , 4(3):371–377, 1965.
* Zdunkowski et al. [1976] Wilford G Zdunkowski, Ronald M Welch, and Jan Paegle. One-dimensional numerical simulation of the effects of air pollution on the planetary boundary layer. _Journal of Atmospheric Sciences_ , 33(12):2399–2414, 1976.
# SimBle: Generating privacy preserving real-world BLE traces with ground truth
††thanks: This work has been partially funded by the ANR MITIK project, French National Research Agency (ANR), PRC AAPG2019.
Abhishek Kumar Mishra12, Aline Carneiro Viana2, Nadjib Achir23, 1 Ecole Polytechnique, Palaiseau, France; 2 Inria, Palaiseau, France; {abhishek.mishra, aline.viana<EMAIL_ADDRESS>3 Université Sorbonne Paris Nord, Paris, France; <EMAIL_ADDRESS>
###### Abstract
Bluetooth has become critical as many IoT devices are arriving on the market.
Most of the current literature on Bluetooth simulation concentrates on the
performance of the network protocols and completely neglects the
privacy-protection recommendations introduced in the BLE standard. Yet privacy
protection is one of the main issues handled by the Bluetooth standard. For
instance, the current standard forces devices to change the identifier they
embed within public and private packets, a mechanism known as MAC address
randomization. Although randomizing MAC addresses is intended to preserve
device privacy, recent literature shows that many challenges remain. One of
them is the correlation between public packets and their emitters.
Unfortunately, existing evaluation tools such as NS-3 are not designed to
reproduce this essential functionality of the Bluetooth standard. This makes
it impossible to test solutions against different device-fingerprinting
strategies, as there is a lack of ground truth for large-scale scenarios in
which the majority of current BLE devices implement MAC address randomization.
In this paper, we first introduce a standard-compliant MAC address
randomization solution in the NS-3 framework, capable of emulating any real
BLE device in simulation and of generating real-world Bluetooth traces. In
addition, since the simulation run-time for trace collection grows
exponentially with the number of devices, we introduce an optimization to
linearize public-packet sniffing, which makes large-scale trace collection
practically feasible. Then, we use the generated traces and the associated
ground truth for a case study evaluating a generic MAC address association
strategy from the literature [1]. Our case study reveals that close to $90\%$
of randomized addresses can be correctly linked even in highly dense and
mobile scenarios. This calls for the privacy-related provisions of the BLE
standard to be revisited, and we provide privacy recommendations based on our
case study. Finally, we discuss the consequences that realistic randomized
traces bring to different scientific research domains and how our proposed
solution helps in overcoming new challenges.
###### Index Terms:
Bluetooth, IoT devices, BLE (Bluetooth Low Energy), Simulation, Privacy, MAC
address randomization, MAC address association, Datasets
## I Introduction
The Internet of Things (IoT) is expected to connect billions of low-end
devices to the Internet, thereby drastically increasing communication without
a human source or destination. The share of products and businesses that use
IoT technologies has increased to about 25 percent, and the number of
connected devices is projected to reach 43 billion by 2023 [2]. Bluetooth has
been a significant backbone for most of these connected devices and
applications [3]. Sniffing Bluetooth traffic has not been straightforward
because of the manufacturer-dependent adaptive channel-hopping behavior and
the shared 2.4 GHz spectrum of Bluetooth devices. Various approaches have
predicted hop changes, allowing the user to be traced [4]. Nevertheless, these
hopping challenges mostly concern the private data packets exchanged in
Bluetooth. Public packets such as beacons and keep-alive messages, which are
emitted on three channels, are much easier to sniff accurately. These beacons
reveal the sender's device identity in the form of a MAC address.
Devices that perform MAC randomization can hide their identity to some extent.
Bluetooth Classic (BT) does not randomize addresses and has already been shown
to be de-anonymizable [5]. Even MAC address randomization in BLE has been
claimed to be defeated, both specifically for Apple devices [6] and for
generic devices [1]. The authors of [1] claim 100% device association for a
small set of devices by sniffing public packets in a controlled environment
(inside a Faraday cage), as seen in Figure 1. The addresses shown in Figure 1
are the LAPs (Lower Address Parts) of the anonymized MAC addresses seen by [1]
in the trace. There is a need to evaluate the performance of [1] for a large
population of devices in real-world scenarios: if the results of Figure 1 hold
in realistic environments, immense threats to user privacy are posed in BLE.
Figure 1: Perfect association of MAC addresses achieved by [1] by sniffing
public packets in a controlled environment, for BLE with MAC randomization.
Each color represents a device broadcasting with anonymized addresses.
Amidst rising privacy-intrusion findings in Bluetooth, there has been an
absence of frameworks to test these suggestions under scalable, real-world
conditions. Current BLE simulators mostly focus on throughput, latency, and
signal-to-noise ratio (SNR) rather than on the security and privacy aspects of
the standard. They are also unable to incorporate real-world device parameters
into the simulation framework. Without these advancements, it is impossible to
generate a realistic BLE trace that considers integral factors like MAC
address randomization, because the implementation of address randomization is
dependent on the device manufacturer. The lack of controlled simulated traces
presently prevents the retrieval of ground truth in large-scale scenarios.
Ground truth here refers to the knowledge of which set of randomized MAC
addresses was emitted by a particular device. It is needed to successfully
evaluate device-fingerprinting solutions and to propose adjustments to the
standard that guarantee the user's privacy.
To the best of our knowledge, none of the currently available BLE simulators
support or consider privacy aspects, specifically MAC address randomization.
The current state-of-the-art open-source tool for simulating wireless
communications in general, NS-3 (https://www.nsnam.org/), has very weak
support for the BLE standard compared to the much more advanced WiFi stack it
possesses. In fact, the official release of NS-3 still lacks BLE support.
Open-source implementations of the BLE stack without MAC randomization have
been released based on the NS-3 framework [7, 8]. There has also been an
implementation of BLE in the OMNeT++ framework
(http://cc.oulu.fi/~kmikhayl/BLE.html). We rigorously tested and chose [8] as
the base BLE stack (BLE 4.1) of our proposed simulator: firstly, it is
currently the most accurate, efficient, and well-organized implementation;
secondly, it is in the NS-3 framework, which gives users the freedom to
perform BLE experiments coexisting with the latest WiFi standards.
Most BLE trace collection targets public packets and is done passively through
sniffers. Private packets are mostly encrypted, and capturing them is illegal
in many countries. Expensive hardware like Ubertooth One [9] is required to
sniff on data channels, and, as stated earlier, channel hopping in BLE data
packets makes capturing them even harder. Unfortunately, current simulation
tools are not meant for generating sniffed public BLE traces: simulation time
explodes for a large number of devices because the number of simulation events
grows when handling the inter-node public packets, whereas full processing of
broadcast packets is needed only at the sniffer. SimBle addresses this issue
and proposes optimized sniffing in Section III-A2, which eliminates the
exponential run-time while generating the exact same trace.
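The optimization itself is detailed in Section III-A2; conceptually, a
broadcast packet needs full processing only at sniffer nodes, so ordinary
receivers can drop it early and the event count scales with the number of
sniffers rather than with all receivers. The following Python sketch of that
idea uses hypothetical names; it is not the NS-3 implementation.

```python
# Conceptual sketch (hypothetical names): only sniffer nodes fully
# process a broadcast packet; ordinary nodes discard it early, so no
# further simulation events are queued for them.
def on_advertisement_received(node, packet):
    if not node.is_sniffer:
        return  # early drop at non-sniffer receivers
    node.trace_log.append((packet.timestamp, packet.mac, packet.rssi))
```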
In this paper, we first study and present the privacy guidelines across the
released Bluetooth standards. Then, we develop and introduce the simulator
SimBle, which incorporates standard-compliant MAC address randomization and is
capable of emulating any BLE device. This is made possible as SimBle
introduces the notion of device class, which differentiates various kinds of
devices, such as phones, smartwatches, and headsets, based on the frequency of
transmitted beacons.
The four major contributions of this paper are:
1.
Study of the different privacy features present in the BLE standard that are
necessary to introduce in simulation.
2.
Architecture and implementation of a new BLE simulation stack, SimBle, in
NS-3, which considers user privacy and distinguishes the devices spawned in
it.
3.
Case study of the only generic MAC address association algorithm present in
the literature, made possible for scalable scenarios by generating ground
truth using our solution.
4.
Release of an open-source simulator along with tools and methods to generate
realistic Bluetooth traces with associated ground truth.
The rest of this paper is organized as follows. Section II gives an overview
of the different privacy measures recommended by the BLE standard. We present
our BLE simulation stack, SimBle, in Sections III and IV. Section V validates
the functionality of SimBle. In Section VI, we perform a case study of the
generic MAC address association strategy available in the literature using the
simulated ground truth; we show the strategy's effectiveness and then discuss
possible amendments to the BLE standard that this case study forces us to
consider. Finally, Section VII discusses the impact of privacy-preserving BLE
provisions on other research domains and how real-world traces from SimBle
would address major challenges; we also conclude our work and look into future
directions.
## II Background
This section discusses how BLE handles MAC-level addressing. We look into the
different addressing modes supported by BLE, with a particular interest in
private addresses, as they are fundamental to preserving user privacy.
Afterward, we present a study of the privacy provisions currently proposed by
the standard. Finally, we identify the factors that must be taken into account
when designing a simulator that respects user privacy.
### II-A BLE MAC addressing
Bluetooth has existed for quite some time now, but it is the Bluetooth Low
Energy (BLE) variant [10] that is used by the majority of IoT devices. When a
particular BLE device communicates, it keeps sending advertising packets on
three public channels specified by the standard. These packets include a
link-layer MAC address, which acts as an identifier of the device [[11],
p. 69]. To prevent the user from leaking this identifier to the world, recent
BLE standards force all devices to regularly update their publicly advertised
MAC addresses. Various addressing modes have been specified in the standard
[[12], p. 2988], which are briefly described next.
In BLE, we identify a device using a device address and an address type
[[12], p. 2988]. This means that when we compare two device addresses, the
same 48-bit address does not guarantee the same device, because the two
addresses could have different types. The address type is either a public
device address or a random device address, both of which are 48 bits long. A
device has the freedom to use one or both types of device addresses.
Public device addresses are traditional MAC addresses created in accordance
with the Universal Addresses section of the IEEE 802-2014 standard [13]. They
are more prevalent, but it is the random device address that is
privacy-preserving.
A random device address can be either static or private. A static address is a
48-bit randomly generated address meeting specific standard requirements.
Private addresses, in turn, are either resolvable or non-resolvable [[12],
p. 2991]. These subtypes are identified by the two most significant bits of
the random device address, as shown in Table I.
Address [47:46] | Address Sub-Type
---|---
0b00 | Non-resolvable private address
0b01 | Resolvable private address
0b10 | Reserved for future use
0b11 | Static device address
TABLE I: Sub-types of random device addresses
A BLE device's Identity Address is either its public device address or its
random static device address. When a device uses resolvable private addresses,
it must also possess an Identity Address.
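A minimal sketch of this classification rule follows, assuming the address is
held as a 48-bit integer (the function name is ours, not from the standard):

```python
def random_address_subtype(addr48: int) -> str:
    """Classify a 48-bit BLE random device address by its two most
    significant bits, following Table I."""
    msb2 = (addr48 >> 46) & 0b11
    return {
        0b00: "non-resolvable private",
        0b01: "resolvable private",
        0b10: "reserved",
        0b11: "static",
    }[msb2]

# Example: an address whose bits [47:46] are 0b01 is a resolvable
# private address.
assert random_address_subtype(0b01 << 46) == "resolvable private"
```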
### II-B BLE privacy provisions
The key to the privacy provided by the BLE link layer is the use of private
addresses, described in the previous subsection [[12], p. 3201]. This again
reflects the importance of the MAC address randomization introduced by SimBle.
BLE recommends that devices generate a resolvable private address. The link
layer corresponding to the host sets a timer and generates a new resolvable
private address when the timer expires. Moreover, once the Link Layer is
reset, a new resolvable private address is generated, and the timer is allowed
to start with an arbitrary value in the allowed range. To maintain the
efficiency of connection establishment, the standard recommends setting the
timer to 15 minutes.
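The following is a minimal sketch of this rotation behavior. It is
illustrative only: a real resolvable private address is derived from the
device's Identity Resolving Key rather than drawn at random, and the class and
method names are ours.

```python
import random
import secrets

RPA_TIMEOUT_S = 15 * 60  # timer value recommended by the standard


class LinkLayer:
    """Sketch of resolvable-private-address (RPA) rotation."""

    def __init__(self):
        self.reset()

    def reset(self):
        # On a link-layer reset, a fresh RPA is generated and the timer
        # may start anywhere in the allowed range.
        self._new_rpa()
        self.timer = random.uniform(0, RPA_TIMEOUT_S)

    def _new_rpa(self):
        addr = secrets.randbits(48)
        # Force bits [47:46] to 0b01: resolvable private address.
        self.address = (addr & ~(0b11 << 46)) | (0b01 << 46)

    def tick(self, dt):
        # Advance simulated time; rotate the address on timer expiry.
        self.timer -= dt
        if self.timer <= 0:
            self._new_rpa()
            self.timer = RPA_TIMEOUT_S
```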
BLE [14, 12] does not allow a private device to use its Identity Address in
any advertising packet. The Host can instruct the Controller to advertise,
scan, or initiate a connection using a resolvable private address after
enabling the resolving list.
The state machine for the BLE link layer consists of various states [[12],
p. 2985], and a device can be found in any of them. For instance, the
advertising, scanning, and initiating states have different guidelines in the
standard. In the advertising state, the link layer is allowed to perform
device filtering based on the device address of the peer device, to minimize
the number of devices to which it responds. This can be done according to a
local white list, which contains a set of records comprising both the device
address and the device address type (public or random) [[12], p. 3202]. If the
device is in the scanning or initiating state, it is recommended to use
private addresses; the scanning device should use a resolvable or
non-resolvable private address as its device address. Whenever a scanning
device receives an advertising packet that contains a resolvable private
address as the advertiser's device address, then, after address resolution,
the scanner's filter policy decides whether to respond with a scan request.
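A minimal sketch of this white-list check, with hypothetical names (a real
controller would resolve resolvable private addresses before the lookup):

```python
# The white list is a set of (address, address_type) records, as
# described above; address_type is 'public' or 'random'.
WhiteList = set


def should_respond(white_list: WhiteList, peer_addr: int,
                   peer_type: str) -> bool:
    """Return True if the peer is present in the local white list."""
    return (peer_addr, peer_type) in white_list
```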
Having reviewed the BLE standard's privacy-related recommendations, especially
those of the latest release, BLE 5.2, we proceed in what follows to
incorporate the key elements into the simulator. The simulator should not only
include resolvable private addresses, which are integral to BLE privacy, but
also bring together the other aspects related to MAC address randomization.
The proposed simulation stack, SimBle, is thus designed in such a manner that
adding further privacy-specific features in the future is relatively
straightforward.
## III SimBle: Design & Architecture
This section provides a solution to the problem of emulating devices that
follow the network and device privacy provisions of BLE. This step is key to
generating realistic traces with associated ground truth: if we succeed in
building a device-specific, privacy-preserving simulation, we can easily
produce traces that resemble real scenarios. This has profound implications,
as it enables us to practically evaluate any MAC-address-based
device-fingerprinting or privacy-intrusion solution suggested in the
literature.
In the following, we introduce our BLE simulation stack, which we call SimBle.
We first look at the different design aspects of SimBle and then present its
architecture.
### III-A Design considerations
The first aspect that we should take into consideration is device
heterogeneity. Indeed, BLE gives vendors the flexibility to implement privacy
features while respecting specific guidelines released by the standard.
Therefore, different mobile-phone manufacturers, such as Apple and Samsung,
could have different implementation parameters related to randomization, and
even a single vendor could have a range of devices supporting various BLE
releases. Hence, device distinction is an essential feature for BLE
simulation, which is currently absent from available simulators.
The second aspect that we have to consider is privacy provisions. As we saw in
the previous section, the central component of BLE privacy provisioning is the
MAC address randomization procedure. If devices violate these recommendations
and, for example, advertise their Identity Address, then device and, thus,
network privacy is compromised, leading to traceability. SimBle needs to
introduce these provisions, specifically MAC address randomization, in its
framework.
Finally, the last aspect is the flexibility to generate realistic traces.
Indeed, one of the significant demands of the research community is the
availability of BLE traces that replicate different real-world scenarios in
terms of mobility, crowd density, and the kinds of devices present in the zone
where the trace is collected. Trace collection is impractical for a large
population using active means, such as installing specific applications on
user devices; even passive methods, such as the use of sniffers, would require
massive deployment and user consent. That is why SimBle also aims to include a
framework for releasing a ready-to-use utility for trace generation in various
user-specified scenarios. We show a case study of a MAC address association
algorithm in Section VI using traces and the associated ground truth from this
framework.
In the following subsections, we detail how these design choices are
implemented in SimBle.
#### III-A1 Device heterogeneity
As discussed in the previous section, different vendors have freedom, within
some bounds, in how they implement the BLE stack in a device. For example,
Apple picks from a range of values to decide how frequently a device changes
its randomized MAC address. We need to distinguish each device introduced in
SimBle so that the simulation can replicate its behavior in terms of privacy
features. In the following, we define the device's type through two points:
the device's class and the supported standard version.
1. (a)
Notion of Device Class: We identify a property that classifies devices into
groups with similar behavior irrespective of the manufacturer. This property
is the frequency of transmitting beacons, which is characteristic of a
device, with a maximum variation of 10 ms [14, p. 2751]. The base value of
the beacon transmission period lies in [20 ms; 10.24 s]. Based on this
property, we classify BLE devices into the following device classes:
* •
Frequent Emitters: For this class, the inter-beacon interval is drawn from a
normal distribution with mean 50 ms and standard deviation 10 ms. This
represents a highly active device like earbuds. We expect these kinds of
devices to also swap their randomized MAC address quickly.
* •
Moderate Emitters: These are devices with a moderate frequency of
advertisements. Their inter-beacon intervals are drawn from a normal
distribution with mean 300 ms and standard deviation 25 ms. In our
experimentation, most smartphones, especially iPhones, fall into this
category.
* •
Semi-Moderate Emitters: These are devices that are still active in
transmitting regular beacons on broadcast channels. Their inter-beacon
intervals follow a normal distribution with mean 500 ms and standard
deviation 25 ms. This class again mainly includes phones.
* •
Low Emitters: These are devices that are least active in sending out
advertisements. Their inter-beacon transmission intervals are drawn from a
normal distribution with mean 2 s and standard deviation 500 ms. Smartwatches
generally fall into this category.
A user instantiating a node in SimBle can choose any of the stated device
classes. If the user enables beacons, nodes automatically set their behavior
to that of the specified class. However, we give the user the flexibility to
specify the exact beacon frequency of a device if it is known beforehand
through experimentation (a minimal sampling sketch is given after this list).
2. (b)
BLE standards: The frequency of changing a randomized MAC address depends on
the standard. In the currently most prevalent release in terms of the number
of devices, BLE 4.1, devices change their MAC addresses every 15
minutes [11]. In recent releases like BLE 5.2, devices are allowed to change
their address more often. Therefore, it is crucial to specify a BLE node's
standard before using its privacy features in the simulation. SimBle gives
the user the option to state the standard they want to run on top of the
declared node, which controls the associated privacy features.
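To make the class definitions above concrete, the following minimal Python
sketch samples an inter-beacon interval for a given device class and clamps
it to the standard's base range. This is an illustration only: SimBle itself
is implemented on top of NS-3 in C++, and all names below are hypothetical.

```python
import random

# Illustrative (mean, std) pairs in seconds, taken from the class
# definitions above.
DEVICE_CLASSES = {
    "frequent":      (0.050, 0.010),
    "moderate":      (0.300, 0.025),
    "semi_moderate": (0.500, 0.025),
    "low":           (2.000, 0.500),
}

def sample_beacon_interval(device_class: str) -> float:
    """Draw one inter-beacon interval (in seconds) for the given class."""
    mean, std = DEVICE_CLASSES[device_class]
    interval = random.gauss(mean, std)
    # Clamp to the base beacon period range of [20 ms, 10.24 s].
    return min(max(interval, 0.020), 10.24)
```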
#### III-A2 Realistic trace generation
One of the major motivations of this paper is to address the issue of
generating realistic Bluetooth traces. We identify the following components
that SimBle must handle in order to emulate real-world trace collection:
1. 1.
Privacy features: As already stated, SimBle not only introduces BLE network
and device privacy features like MAC address randomization but also
identifies the key parameters that are necessary to obtain real-world traces.
These parameters are swapDelay and randInterval (detailed in Section IV), the
device class, and the BLE release version. Setting correct device-specific
values for them enables SimBle to emulate the privacy features of any
vendor's device.
2. 2.
Passive sniffing: Trace collection using active methods like user
participation is not practical for BLE, since it requires recruiting
volunteers and installing specific applications on user devices. There has
been rapid growth in contact tracing and trajectory reconstruction using BLE
recently, and the research community requires more real-world traces
collected through passive sniffing.
The capture of BLE packets must fall under the principle of "legal capture"
in different countries. Capturing private packets is mostly not permitted and
requires special authorization. Therefore, BLE passive sniffing generally
refers to listening on public channels. SimBle introduces a framework that
lets the user deploy an arbitrary number of sniffers and nodes in a sniffing
zone. On top of that, different mobility models can be installed on BLE nodes
of varying density, which enables recreating realistic environments. Hence,
we can emulate real-world BLE sniffing.
3. 3.
Ground truth: Introducing privacy in BLE simulation automatically answers the
search for ground truth in randomized-address traces. Ground truth here
refers to knowledge of the history of randomized MAC addresses emitted by a
device. We need this to evaluate MAC association algorithms or, more
generally, device-fingerprinting methods, which are increasingly being
proposed [1] [6] [5]. SimBle generates a ground-truth trace by matching each
device's generated private addresses to its Node ID, which acts as a unique
identifier for the device during simulation time.
#### III-A3 Optimizing trace generation
As discussed earlier, passive sniffing is the most practical method for BLE
trace collection. We identify a major issue in generating real-world traces
inside a simulation: as the number of nodes increases, the number of
simulation events due to processing inter-node packets increases
quadratically. This has a significant impact on the time and resources needed
for simulation, even though, in the case of public packet capture, we are
only interested in node-sniffer interactions.
SimBle addresses this problem by giving the user the option to set a
simulation flag that induces filtered and optimized handling of broadcast
packets at nodes (a minimal sketch follows). This reduces the simulation
duration significantly and thus makes trace collection feasible. We discuss
this further and look at the obtained performance gains in Section V.
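As an illustration of this design choice, the node-side filter could look
like the sketch below (hypothetical names; SimBle implements this inside the
NS-3 C++ stack):

```python
def handle_broadcast_packet(node, packet, optimized_sniffing: bool) -> None:
    # With the optimization flag set, only sniffer nodes process broadcast
    # packets; ordinary nodes drop them immediately, avoiding the quadratic
    # growth in simulation events described above.
    if optimized_sniffing and not node.is_sniffer:
        return
    node.process(packet)
```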
### III-B Architecture
Having settled the design, we briefly look into the architecture of a BLE
node inside SimBle, shown in Figure 2. As discussed earlier in Section I, we
use the base BLE stack of [8]. All components of the NetDevice except the
PrivacyManager were defined in the base stack. The Application and the Packet
socket interface are NS-3-wide entities, not specific to BLE. We created a
new component, PrivacyManager, that takes care of all BLE privacy features.
A node in SimBle carries the same meaning as in NS-3: it is a physical entity
with a unique integer ID that contains NetDevices and Applications.
In this paper, the Node can be thought of as the equivalent of a
device/hardware in the real world. For illustration, we show a single
instance of Application and NetDevice in Figure 2, but there can be multiple
instances in principle. A NetDevice is an integral object of a node,
representing a physical interface on it; here, we are interested in the
Bluetooth interface. The NetDevice communicates with the Application through
interfaces; the Packet socket interface connects the application interfaces
to the NetDevice. An IPv4/IPv6 stack could also be installed by the user on
the node in parallel. Let us briefly review the roles of the other components
of the NetDevice, which were already present in the base BLE stack [8].
Figure 2: Architecture of a node in SimBle
BroadbandManager adds links to the list of links that can be associated with
a NetDevice. A link here refers to a BLE association between two nodes. It
also checks whether there are new packets in the NetDevice queue and forwards
them to the right LinkManager's queue.
LinkManager is the entity associated with a particular BroadbandManager. It
sets up a link to a specific receiver with the role (Master/Slave) expected
at the end of the setup process. LinkManager also manages the TransmitWindow,
which is the next time the device can send a packet over the associated link.
LinkController is mainly responsible for monitoring and handling
retransmissions and state changes in the link. It checks whether an ACK was
received for a sent packet and fires a list of callbacks to other NetDevice
objects if the link changes. Lastly, the PHY mainly handles link bandwidth,
bit rates, transmission power, and bit errors.
We introduce a new module in SimBle, the PrivacyManager, which takes care of
all privacy-related aspects of a device. In the forthcoming section, we
discuss how MAC address randomization is managed by the PrivacyManager.
## IV SimBle: Privacy provisions
Hereafter, we describe the implementation of the PrivacyManager and,
specifically, BLE MAC address randomization. All the introduced algorithms
follow the BLE standard guidelines [12].
Figure 3: PrivacyManager in SimBle
An overview of the PrivacyManager is illustrated in Figure 3. Main in the
figure represents the base class of the PrivacyManager from which member
functions are called. We can observe in the figure that the function UPDATE
is called on device startup. UPDATE generates new resolvable private
addresses for the calling node using the function GENERATE, and recursively
calls itself after the expiration of the time associated with the current
private address. On packet reception, or when checking the existence of a
link to a destination, CHECKVALIDATION is called. On every call, it invokes
RESOLVE with a particular private address. RESOLVE in turn returns the
validity status and the identity address of the device that generated the
private address. In the following, we describe the functions of the
PrivacyManager in detail.
### IV-A Key generation and distribution
The PrivacyManager focuses on supporting resolvable private addresses, the
center of all privacy provisions in the current BLE release [12] (cf. Section
II-B). For a node to generate a resolvable private address, it must have
either the Local Identity Resolving Key (IRK) or the Peer Identity Resolving
Key. This 128-bit key is proof of possession of a particular private address.
In real devices, IRKs are exchanged through specific control messages. In
SimBle, we generate the IRK randomly at each node when it is initialized in
the simulation. The delay caused by the key exchange on real hardware is
emulated by swapDelay, which we describe in the next section. Simultaneously,
the node also generates an Identity Address, which is a unique identifier of
the device.
In this paper, the Node and the NetDevice essentially mean the same thing in
terms of BLE-associated parameters. This is because the remaining modules
inside the node (i.e., the socket and the application modules) do not depend
on the BLE standard itself.
Finally, before links are created in SimBle and an application is installed
on top of the declared nodes, each node updates a list in its respective
NetDevice. This list contains the (IRK : Identity Address) pairs of each of
the fellow BLE nodes instantiated in the simulator.
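A minimal sketch of this initialization step follows (illustrative Python;
the class name, the random identity address, and the helper are assumptions,
not SimBle's actual API):

```python
import os

class BleNode:
    def __init__(self, node_id: int):
        self.node_id = node_id
        self.irk = os.urandom(16)              # random 128-bit Identity Resolving Key
        self.identity_address = os.urandom(6)  # unique identifier (random bytes here)
        self.irk_id_pair_list = []             # (IRK : Identity Address) pairs of peers

def distribute_keys(nodes):
    # Before link creation, every node stores the pairs of all fellow nodes.
    for node in nodes:
        node.irk_id_pair_list = [
            (peer.irk, peer.identity_address) for peer in nodes if peer is not node
        ]
```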
### IV-B Generation of Randomized MAC
The format of a resolvable private address is shown in Figure 4. The
resolvable private address is generated from the IRK and a 24-bit number
known as prand. The address can be divided into two blocks of 24 bits each.
The first block consists of the 24-bit hash computed in [Alg. 1 line 7].
SimBle incorporates AES (Advanced Encryption Standard) support, as
recommended by the standard [12], for encrypting the plaintext data into a
ciphered block [15] [16] in the process of randomized MAC address generation.
Figure 4: Format of a Resolvable Private Address
The second block consists of prand. In the case of a resolvable private
address, the two most significant bits of prand are set to 1 and 0, as shown
in Figure 4. The random part of prand must contain at least one bit set to 0
and at least one bit set to 1. We describe the generation of the resolvable
private address by the PrivacyManager in detail in [Alg. 1].
Algorithm 1 SimBle’s Resolvable Private Address generation
1:procedure Generate($IRK$) $\triangleright$ Input variable
$\triangleright$ Prepare encryption inputs
2: $prand\leftarrow genPrand()$
3: $padding\leftarrow genPaddingBits(104)$
4: $plaintext\leftarrow Concatenate(padding,prand)$
$\triangleright$ AES encryption
5: $aesobj\leftarrow AES(IRK)$
6: $ciphertext\leftarrow aesobj.getEncrypt(plaintext)$
$\triangleright$ Getting MAC address
7: $prunedcipher\leftarrow getLeastSigBits(ciphertext,24)$
8: $macstr\leftarrow Concatenate(prunedcipher,prand)$
9: $macaddr\leftarrow toMacHex(macstr)$
10: return $macaddr$ $\triangleright$ Returns a Resolvable Private Address
11:end procedure
12:procedure Update($randInterval,swapDelay,IRK$) $\triangleright$ Input
variables
13: $roundIndex=getCurrentRoundIndex()$
14: $macDevice=\textsc{Generate}(IRK)$
$\triangleright$ Check if this call is just after device initialization
15: if $roundIndex==1$ then
$\triangleright$ Calculate time offset for recursive callback
16: $nextUpOffset\leftarrow getURV(0,randInterval)+swapDelay$
17: else
18: $nextUpOffset\leftarrow randInterval+swapDelay$
19: end if
$\triangleright$ Schedule a callback after offset expires
20: $incRoundIndex()$
21: Schedule(Update, nextUpOffset)
22:end procedure
Each node in SimBle has an instance of the PrivacyManager, as illustrated
earlier in Figure 3. [Alg. 1] performs two major functions. GENERATE [Alg. 1
line 1] takes the IRK as input and generates a resolvable private address for
the node, while UPDATE [Alg. 1 line 12] takes care of the calls necessary to
update a device's MAC address according to the user-specified BLE standard
and device class that we are trying to emulate.
Whenever GENERATE is called, we generate a 24-bit value whose two most
significant bits are 10. The rest of the bits are random, and we use this
value as prand, the trailing half of a resolvable private address [Alg. 1
line 2]. The generated prand is then padded with 104 null bits such that the
most significant byte of prand becomes the most significant byte of the
plaintext [Alg. 1 line 4]. We call this value plaintext, as it is given as
input to the encryption. Then, we create an instance of the AES algorithm
initialized with the IRK of the current node [Alg. 1 line 5]. The AES
instance encrypts the plaintext to generate 128 bits of ciphertext [Alg. 1
line 6]. We take the 24 least significant bits of the ciphertext [Alg. 1 line
7] and concatenate them with the earlier generated prand to obtain a string
of 48 bits [Alg. 1 line 8]. The generated string is finally formatted in the
IEEE 802 MAC address format to produce a resolvable private address [Alg. 1
line 9].
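The following Python sketch mirrors GENERATE under the conventions described
above (prand in the most significant bytes of the plaintext, the 24 least
significant bits of the ciphertext as the hash). It uses the `cryptography`
package for AES-128 and is an illustration, not SimBle's C++ implementation:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def generate(irk: bytes) -> bytes:
    """Return a 48-bit resolvable private address as 6 bytes: hash || prand."""
    # 24-bit prand with its two most significant bits set to 1 and 0; the
    # standard additionally requires the random part to contain at least
    # one 0 bit and one 1 bit (check omitted here for brevity).
    prand = bytearray(os.urandom(3))
    prand[0] = (prand[0] & 0x3F) | 0x80
    # Pad with 104 null bits; prand occupies the most significant bytes.
    plaintext = bytes(prand) + bytes(13)
    # AES-128 of the single 16-byte block, keyed with the node's IRK.
    encryptor = Cipher(algorithms.AES(irk), modes.ECB()).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()
    # 24-bit hash: the least significant bits of the ciphertext [Alg. 1 line 7].
    addr_hash = ciphertext[-3:]
    return addr_hash + bytes(prand)
```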
Once the randomized MAC address is generated, the next step is to change it
dynamically while respecting the standard. This is done by the UPDATE
function of the PrivacyManager, which takes three arguments. One of them is
the IRK, the identity resolving key of the node, which we have already
discussed. The other two arguments are device-dependent, and users are free
to allocate specific values to them. They are as follows:
* •
randInterval: This is the time after which a specific device generates a new
resolvable private address. In the BLE 4.1 standard [11], the most prevalent
Bluetooth standard in current mobile devices, this interval is fixed to 15
minutes. However, in the most recent release, BLE 5.2 [12], vendors are free
to randomize the MAC address before the 15-minute mark. The standard
nevertheless recommends not updating the address too frequently, as this
might affect the performance of paired devices due to the increased number of
control messages that need to be exchanged after generating a new address.
SimBle takes the BLE standard and device class as input from the user at node
initialization to calculate the respective randInterval value.
* •
swapDelay: This parameter is introduced to emulate the behavior of devices in
practice. We see from experiments that devices take some time to generate a
new randomized address and advertise it. This delay is caused by the
resources used in address generation and in updating the current MAC-level
state. swapDelay can be device-specific; we empirically choose its value to
be 10 times the beacon transmission interval, after measuring this delay in
experiments on a large set of BLE devices broadcasting beacons (a short
sketch of both parameter choices follows this list).
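A minimal sketch of how these two parameters could be derived (illustrative
values; the BLE 5.2 draw below follows the uniform (3, 15) minute range used
later in Section VI-C and is an assumption, not a vendor rule):

```python
import random

def pick_rand_interval(ble_release: str) -> float:
    """Return the address randomization interval in seconds."""
    if ble_release == "4.1":
        return 15 * 60.0  # fixed 15-minute interval [11]
    # BLE 5.2: vendor-chosen value below the 15-minute mark (assumed
    # uniform here, as in the evaluation of Section VI-C).
    return random.uniform(3 * 60.0, 15 * 60.0)

def pick_swap_delay(beacon_interval: float) -> float:
    """Empirical choice from the text: 10 times the beacon interval."""
    return 10.0 * beacon_interval
```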
On receiving the input arguments, UPDATE first checks the iteration index of
this call and stores it as roundIndex [Alg. 1 line 13]. For calls to UPDATE,
roundIndex has a value greater than or equal to 1. It distinguishes the two
states in which a node can generate a new address: the first state
(roundIndex = 1) is when a node obtains a new address just after spawning
inside the simulation, while the second state (roundIndex $>$ 1) is when the
node requests an address after the expiration of the old one. GENERATE is
called from UPDATE to assign the device a new resolvable private address
[Alg. 1 line 14].
After assigning the randomized address, UPDATE calculates the duration for
which this address will be valid. If the device has called UPDATE for the
first round, we calculate this duration by drawing a random value from a
uniform distribution over [0, randInterval] and adding the swapDelay to it
[Alg. 1 line 16]. We do this to respect the standard guidelines for setting
the address expiration timers, as discussed in Section II-B. If the device
has already changed its MAC address since spawning, we instead assign the
offset to be the sum of randInterval and swapDelay [Alg. 1 line 18].
Finally, we increase the roundIndex and schedule a recursive callback to
UPDATE after the expiration of the offset just calculated [Alg. 1 line 21],
so that new resolvable private addresses keep being generated during the
simulation time.
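The offset logic of UPDATE [Alg. 1 lines 15-19] then reduces to the following
sketch:

```python
import random

def next_update_offset(round_index: int, rand_interval: float,
                       swap_delay: float) -> float:
    """Time until the next address update, per [Alg. 1 lines 15-19]."""
    if round_index == 1:
        # First address after spawning: a uniform draw over [0, randInterval]
        # desynchronizes the expiration timers across devices.
        return random.uniform(0.0, rand_interval) + swap_delay
    # Subsequent addresses expire after a full randInterval.
    return rand_interval + swap_delay
```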
### IV-C Resolution of Randomized MAC
Generating a MAC address is not sufficient for a BLE device: the receiving
node must be able to "resolve", i.e., associate, the private address with the
sending device's identity. A resolvable private address may be resolved if
the sending device's IRK is available to the receiver. If the address is
resolved, the receiving device can associate it with the peer device.
To support this privacy-preserving feature, we need to answer two major
questions inside a device: how do we resolve a private address of a device,
and where do we need to check the validity of the private address in the
packet being handled inside SimBle?
The first question is answered by RESOLVE [Alg. 2 line 1], while
CHECKVALIDATION [Alg. 2 line 20] answers the second.
As briefly stated earlier, RESOLVE returns a tuple (resolved, resIDAdd).
Here, resolved states whether the resolution attempt of the privateAddress
was successful. If the private address is resolved, resIDAdd contains the
Identity Address of the node that created the private address; otherwise, it
is an empty string in the returned pair.
Whenever a node receives a resolvable private address, the corresponding
PrivacyManager calls RESOLVE with privateAddress and irkIAddPairList as
input. While privateAddress is the sending device's randomized MAC address,
irkIAddPairList is the locally maintained list of (IRK, Identity Address)
pairs at the resolving node, as described in Section IV-A.
RESOLVE first extracts the hash and prand parts of the private address [Alg.
2 lines 2-3], as described earlier in Figure 4. We pad the extracted
senderPrand with 104 null bits such that the most significant byte of
senderPrand becomes the most significant byte of plaintext, the byte array
resulting from the padding.
Algorithm 2 SimBle’s Resolvable Private Address resolution
1:procedure Resolve($privateAddress,irkIAddPairList$) $\triangleright$ Input
variables
$\triangleright$ Extract hash and random part of privateAddress
2: $senderHash\leftarrow extractHash(privateAddress)$
3: $senderPrand\leftarrow extractPrand(privateAddress)$
4: $padding\leftarrow genPaddingBits(104)$
5: $plaintext\leftarrow Concatenate(padding,senderPrand)$
6: $resolved\leftarrow FALSE$
7: $resIDAdd\leftarrow NULLSTR$
$\triangleright$ Check if Sender hash is valid
8: for $IRK,IDAdd\quad in\quad irkIAddPairList$ do
9: $aesobj\leftarrow AES(IRK)$
10: $ciphertext\leftarrow aesobj.getEncrypt(plaintext)$
11: $localHash\leftarrow getLeastSigBits(ciphertext,24)$
12: $resolved\leftarrow isEqual(localHash,senderHash)$
13:
14: if $resolved==TRUE$ then
15: $resIDAdd\leftarrow IDAdd$
16: end if
17: end for
$\triangleright$ Return resolved status & Identity Address
18: return ($PAIR(resolved,resIDAdd)$)
19:end procedure
20:procedure CheckValidation
$\triangleright$ Call RESOLVE to validate private address if any of the
function calls below is triggered in SimBle
21: if
22: $\textbf{BroadbandManager:}LinkExists(),\newline
GetLinkManager(),GetLink()$
23: $\textbf{LinkController:}CheckReceivedAckPacket()$
then
24: $\textsc{Resolve}(privateAddress,irkIAddPairList)$
25: end if
26:end procedure
Before considering a privateAddress to be resolved, the handling node checks
the validity of the address. A valid private address is one that can be
resolved using one of the IRKs in the list available at the resolving node.
For this verification, we take an (IRK : Identity Address) pair from the
irkIAddPairList and create an instance of the AES algorithm initialized with
the IRK of the current pair [Alg. 2 line 9]. The AES instance encrypts the
plaintext to generate 128 bits of ciphertext [Alg. 2 line 10]. We take the 24
least significant bits of the ciphertext to generate the localHash [Alg. 2
line 11]. If the value of localHash matches the earlier extracted senderHash
[Alg. 2 line 2] for any of the iterations, RESOLVE successfully returns the
(TRUE, Identity Address) pair. Otherwise, the resolution is considered a
failure and RESOLVE returns the (FALSE, " ") pair.
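Reusing the conventions of the GENERATE sketch above, RESOLVE can be
illustrated as follows (again a hedged Python sketch under the same assumed
byte layout, not the NS-3 implementation):

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def resolve(private_address: bytes, irk_id_pair_list):
    """Return (resolved, identity_address) for a 6-byte private address."""
    sender_hash, sender_prand = private_address[:3], private_address[3:]
    plaintext = sender_prand + bytes(13)  # pad with 104 null bits
    for irk, identity_address in irk_id_pair_list:
        encryptor = Cipher(algorithms.AES(irk), modes.ECB()).encryptor()
        ciphertext = encryptor.update(plaintext) + encryptor.finalize()
        local_hash = ciphertext[-3:]  # 24 least significant bits [Alg. 2 line 11]
        if local_hash == sender_hash:
            return True, identity_address
    return False, ""
```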
Having described how a private address is resolved, we look into the SimBle
framework to identify the modules that need address resolution. Two modules
need to call the PrivacyManager's RESOLVE procedure through CHECKVALIDATION
[Alg. 2 line 22]: BroadbandManager and LinkController. Whenever
BroadbandManager receives a packet from the NetDevice, RESOLVE is called in
two cases: first, when it checks for or tries to fetch a link; second, when
it requests the LinkManager for the destination node. We do this to ensure
that the identity address resolved from the destination address matches the
identity address of the existing link. Finally, CHECKVALIDATION also checks
whether the sender address of a packet correctly received by the
LinkController can be resolved using one of the IRKs stored at the receiver
[Alg. 2 line 23].
## V Validation
To validate SimBle, it is fundamental to evaluate the functionality of the
introduced PrivacyManager; therefore, both resolvable private address
generation and resolution must be validated. Specifically, we must show that
the generated randomized addresses are very close to what real-world devices
advertise. We also have to show that BLE data communication continues
flawlessly between paired devices even when they change their advertised MAC
addresses. In this case, we assume that the devices have exchanged each
other's IRKs during initialization. All the MAC addresses shown in the paper
are hashed using SHA-256 and truncated to the first 8 bytes for illustration
purposes.
### V-A Validating private address generation
To assess whether SimBle can emulate a real-world trace, we first collect
traces obtained from real experimentation. Then, we compare the real traces,
obtained by capturing public packets from actual devices, to traces generated
by initializing devices with similar behavior inside the simulator. This
comparison aims to show that SimBle can emulate the same behavior in terms of
randomized MAC advertisements and the transmission of public packets.
#### V-A1 Experimental setup
As a sniffer, we use the Bluetooth chipset of a Raspberry Pi 4B to capture
Bluetooth public packets. Capture is done in a controlled environment inside
a Faraday cage. We place two devices, an Apple iPad Pro 3 and an iPad Mini 2,
in the cage, emitting public packets for 40 minutes using BLE 4.1; these
packets are captured by the Raspberry Pi. In the collected traces, we are
mainly interested in the captured timestamps and the LAP (lower address part)
of the advertised beacons. The LAP refers to the least significant 24 bits of
a BLE MAC address. Even though we collect traces in non-public environments,
we still present hashed values to protect the devices' privacy.
For the devices inside the simulator, we set the BLE standard at
initialization to release 4.1, which fixes the MAC address regeneration
interval to 15 minutes. Afterward, we install a broadcast application on top
of the spawned nodes. We set the beacon transmission frequency in the
application to the mean device broadcast interval observed in the real-world
sniffer capture, which we found to be 2 seconds. Moreover, we place a sniffer
at the center of a 10-meter square area in which the initialized emitting
devices are statically present. The sniffer captures on the three public BLE
channels. The area is kept small to avoid transmission errors due to the
distance between the devices and the sniffer, since such errors are absent in
the Faraday cage experiment described earlier. The simulation parameters are
listed in Table II.
Parameter | Value
---|---
Simulation area | 10 m × 10 m
Packet size | 20 bytes
Simulation duration | 2410 seconds
Packet sending duration | 2400 seconds
Path loss model | Nakagami
Number of nodes | N
Mobility model (nodes) | static
Number of sniffers | M
Mobility model (sniffer) | static
Beacon interval | 2 seconds
Connection interval | 6.25 ms
Swap delay | 10 × beacon interval
BLE standard | BLE 4.1
TABLE II: Simulation parameters for SimBle validation
(a) Real-World
(b) SimBle
Figure 5: Observed public packet addresses in real-world vs SimBle by two
devices. Each color represents a device broadcasting anonymized addresses.
#### V-A2 Observations
The first observation relates to the changing of MAC addresses. For the real
experiments, we turn on the Bluetooth of the two iPad devices at the start of
sniffing, since otherwise the first MAC address change would occur at a
random time and the trace would be hard to use for validation. As we can see
in Figure 5(a), the randomized MAC addresses change every 15 minutes over the
capture duration. Like the real iPad devices, the iPads emulated inside the
simulation change their MAC addresses every 15 minutes, as shown in Figure
5(b).
Figure 6: Real-world vs SimBle in inter public packet times
After validating the role of the PrivacyManager in private address
generation, we check whether the rest of the BLE stack can emulate the chosen
real device. We do this by looking at the inter-packet times of public
packets observed at the sniffer inside SimBle and in the real world, keeping
the same experimental setup and generated traces. We observe in Figure 6
that, for both devices, the real-world and SimBle inter-packet intervals at
the sniffer have a mean value of 2 seconds. A deviation of 20 milliseconds is
expected, as the sniffers capture on one of the three public BLE channels at
random and may miss some public packets on the other channels. A public
packet in Bluetooth is broadcast on all three public channels within a time
frame of 20 milliseconds. This validates the overall handling of public
packets in SimBle.
Figure 7: Sent and received data packets by two paired BLE devices inside
SimBle
### V-B Validating private address resolution
To validate the resolution of private addresses in SimBle, we consider a
simple scenario where a transmitter and a receiver node are paired inside the
simulator. This allows us to inspect the global trace obtained from the send
and receive logs and deduce whether data communication was continuous in
spite of the sender and receiver changing their MAC addresses.
As we can see in Figure 7, the sender changes its private address at around
13 minutes. However, the receiver BLE application continues to process and
receive packets, as it can resolve the new private address to the sender's
Identity Address, being in possession of its IRK. Similarly, at around 32
minutes, we observe that the receiver changes its private address. The change
is communicated to the sender through beacons, and the sender in turn
resolves and verifies the receiver's private address; it can thus be seen
sending its data to the receiver seamlessly. This experiment ensures that
SimBle's [Alg. 2] is functional in handling BLE MAC randomization.
### V-C Validating optimized trace-collection
We discussed in Section III-A3 the need to optimize the trace-collection
procedure to obtain traces in a reasonable time. We validate the improvement
brought by SimBle in terms of run-time by increasing the density of devices
up to 1 device per square meter around a sniffer, for a simulation duration
of 30 seconds. The density is varied by increasing the number of devices up
to 100 in 100 square meters around the sniffer. As we can observe in Figure
8, optimized sniffing yields a performance gain in simulation run-time of up
to a factor of 100. In conclusion, since testing BLE privacy provisions
generally requires simulating considerably longer durations, as most MAC
addresses change around every 15 minutes, SimBle's optimized sniffing makes
it possible to generate traces in a reasonable amount of time.
Figure 8: Performance gain in run-time with optimized sniffing inside
simulation
## VI Case Study
MAC address association refers to defeating the anonymization techniques used
by devices and being able to track a particular device. Recently, many
strategies have been suggested to achieve this goal of associating the
different private addresses advertised publicly by the same device [1][17]
[18] [6]. For instance, [17] [18] show that manufacturers like Apple and
Microsoft leak partial identifiers in the data field of public packets, which
can easily be exploited. In [6], the authors reverse engineer the continuity
protocol messages of Apple devices. They show that fingerprinting a device,
as well as behaviorally profiling users, is possible using the contents of
public BLE messages. They also demonstrate that predictable frame sequence
numbers leave open the possibility of tracking Apple devices across space and
time.
As mentioned in Section I, [5] also discusses a de-anonymization strategy.
The authors of [5] state that their solution focuses on Bluetooth Classic
(BT), not BLE, because of the absence of MAC address randomization there.
Besides, the proposed strategy requires specific sniffing devices and targets
only private packets. We therefore believe this approach cannot be considered
fully generic and scalable.
Contrary to the above BLE strategies [17][6][18], which target specific
devices like Apple's, [1] proposes a method that associates the MAC addresses
of a device based on its emitted public packets. This makes [1] independent
of the device vendor and generic for any BLE device, as it relies only on
beacons, whatever the application used. The authors identify devices across
time using an identifier that discriminates a subset of devices at a given
time, that is, a weak identifier, and achieve close to $100\%$ accuracy in
controlled environments, as shown in Figure 1. Therefore, we decided to
implement and study the performance of [1] using SimBle, since to the best of
our knowledge it is the only generic BLE MAC address association strategy
currently available in the literature. We evaluate it using the traces and
the ground truth generated by SimBle.
### VI-A Algorithm Overview
The association strategy proposed in [1] can be summarized in the following
three steps:
1. 1.
Identifying the MAC conflicts across time: When looking at passively sniffed
traces of public BLE packets across time, it is very probable that two or
more devices change their randomized MAC addresses around the same time.
These events are identified as conflicts by [1] and, over the entire sniffing
duration, appear as conflict clusters. The authors also define dswap as the
time that separates consecutive, distinct private addresses of a particular
device. For each address change seen in the trace, there is a set of
appearing and disappearing MAC addresses within the interval dswap. They are
associated using Linear Assignment [19], where the weights of possible
associations are chosen as distances between weak identifiers, described
next.
2. 2.
Finding a weak identifier: A device constant can serve as a weak identifier
if it is accessible to the sniffer and splits the device population into a
few groups that are distributed as uniformly as possible. [1] chooses the
fixed part of the time between advertising packets in BLE as the weak
identifier and calls it the characteristic time.
3. 3.
Resolving MAC conflicts: Union-Find [20] is used to break the conflict
clusters into groups of appearing and disappearing MACs. Finally, all
conflicts seen in the observed trace are resolved by using the absolute
difference between characteristic times as association weights for the
Linear Assignment (a minimal sketch of this assignment step follows the
list).
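As an illustration of the final step, the following sketch (assuming the
characteristic times have already been estimated from the trace) resolves one
conflict with SciPy's linear sum assignment:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def resolve_conflict(disappearing_ct, appearing_ct):
    """Match disappearing to appearing MACs by characteristic time."""
    # Association weight: absolute difference between the weak identifiers.
    cost = np.abs(np.subtract.outer(np.asarray(disappearing_ct),
                                    np.asarray(appearing_ct)))
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))
```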
### VI-B Study of the association strategy
We identify three aspects to which the effectiveness of the association
strategy [1] is most sensitive:
1. 1.
Conflict size and the chosen dswap: As the number of devices in the sniffing
zone grows, the number of devices that change their private addresses around
the same time also increases. As seen in Section VI-A, the weak identifier is
used to resolve conflicts. We define the number of devices in a single
conflict as the conflict size. Increasing conflict sizes in the conflict
cluster have two major consequences in [1]. First, weak identifiers become
less effective in resolving conflicts during Linear Assignment, because a
large number of devices causes more possible associations to have similar
weights. Second, we identify the strategy [1] to be quadratic in run-time;
using Linear Assignment to resolve a huge set of conflicting MAC addresses is
thus practically infeasible for device-tracking purposes. We see dswap as a
critical parameter in [1]. It cannot be chosen arbitrarily large, as this
results in very large conflict clusters containing MAC addresses that likely
do not belong to a single conflict. On the contrary, a relatively small value
leads to the exclusion of actual conflicts. For the evaluation of the
association strategy, we set dswap to 10 times the characteristic time, as
recommended to be optimal by [1].
2. 2.
Device diversity in the population: The effectiveness of the association also
depends on the diversity of devices in the sniffed trace, because the
characteristic times of devices vary more with diversity, making it easier
for the Linear Assignment to disambiguate conflict pairs with similar
weights. [1] also uses the vendor information in public packets as an
identifier while resolving conflicts: filtering out possible associations
whose advertised packets indicate different vendors increases the chance of
correct MAC address association.
3. 3.
Mobility observed in the trace: The characteristic time used as a weak
identifier is calculated from the sequence of packet timestamps observed in
the trace. If there is a high degree of mobility around the sniffer, devices
keep entering and leaving the sniffing zone. This introduces errors in the
weights chosen by [1] for possible association pairs during conflict
resolution; hence, the accuracy of MAC address association naturally
decreases.
### VI-C Evaluation
In the following, we evaluate the accuracy of MAC address association and the
growth of the conflict cluster size for various realistic scenarios. In
scenario 1, we choose BLE 4.1, since it is the most prevalent BLE release in
devices today, and a single device class, smartphones, which largely fall
into the device class of moderate emitters, as stated in Section III-A1. The
randomization interval in BLE 4.1 is set to 15 minutes. For scenario 2, we
choose BLE 4.1 and multiple device classes, emulating an environment where
smartphones, smartwatches, earbuds, etc. coexist. Finally, in scenario 3, we
consider BLE 5.2 and multiple device classes, emulating a diverse range of
devices supporting the latest release, BLE 5.2. We choose this BLE standard
because, unlike BLE 4.1, it lets vendors keep the private address generation
interval below 15 minutes, although the standard advises against values
smaller than 15 minutes, as they could affect performance through increased
connection times. We deliberately draw the randomization interval from a
uniform distribution over (3, 15) minutes, to observe how [1] performs as
more and more vendors quicken private address generation. We evaluate all the
scenarios for the following mobility-profiles:
1. 1.
Static-Confined: Here the devices are static and are always present in the
sniffing zone.
2. 2.
Mobile-Free: In this profile, devices are mobile and are free to leave and
enter the sniffing zone. We try to mimic human mobility by using a
random-walk mobility model with a speed of 1.5 $m/s$ and a direction change
every 2 $s$.
We generate all the traces and associated ground truth by simulating several
BLE devices and a sniffer for 40 minutes using SimBle. We prefer one long run
over multiple short runs, as it gives detailed insight into how conflicts
evolve with time. It is essential to note how accurately the strategy of
Section VI-A resolves the MAC addresses of a single device over the capture
duration. For the Static-Confined mobility-profile, we place a sniffer at the
center of a square of 100 square meters and vary the number of BLE
devices/nodes up to 100. We choose this area to make sure that the nodes are
always within the sniffer's range: as shown in Table II, we use the Nakagami
path loss model and consider the successful BLE transmission range to be
around 20 meters. In the case of the Mobile-Free mobility-profile, we
deliberately take a square of 2500 square meters and place the sniffer in the
middle of it. The BLE nodes perform a random walk in that area and thus move
in and out of the sniffing range.
(a) Scenario 1
(b) Scenario 1
(c) Scenario 2
(d) Scenario 2
(e) Scenario 3
(f) Scenario 3
Figure 9: Accuracy of MAC address associations and average conflict size
observed by MAC association strategy[1] on SimBle generated traces for Static-
Confined and Mobile-Free mobility-profiles, described in Section VI-C
### VI-D Results and Analysis
1. 1.
Scenario 1: First, we observe how well the algorithm [1] can defeat MAC
randomization and correctly associate private addresses for BLE 4.1 with
moderate emitters. MAC addresses change every 15 minutes in BLE 4.1. For
average conflict sizes below 10, we expect the algorithm of Section VI-A to
perform well both in run-time and in accuracy. We observe in Figure 9(a) that
the association accuracy is above $98\%$ for the Static-Confined
mobility-profile. Even in the case of Mobile-Free nodes, a minimum accuracy
of around $91\%$ is seen for 100 devices. Average conflict sizes increase
with the number of devices, as expected in Figure 9(b), but they stay well
below the bound of 10. Hence, the accuracy of MAC address association is very
high for both mobility-profiles.
2. 2.
Scenario 2: We just saw how accurately MAC addresses of moderate emitters,
which are generally mobile phones, are associated. We now present a more
realistic scenario, where we allow all device classes (Section III-A1). This
favors MAC association, as described in Section VI-B. We again stick to the
privacy behavior of BLE 4.1, as it is the most prevalent standard in current
devices. As expected, we observe an increase in accuracy for both
mobility-profiles in Figure 9(c). While MAC addresses of Static-Confined
nodes are associated with accuracy close to $100\%$, the minimum association
accuracy for Mobile-Free devices also increases, to $93\%$. The observed
conflict sizes remain small for up to 100 devices, as seen in Figure 9(d).
3. 3.
Scenario 3: Finally, we use multiple device classes with the privacy behavior
of BLE 5.2, which allows vendors to change the device's private address
before the 15-minute mark (Section VI-C). We expect the conflict sizes to
rise and, hence, the accuracy to decrease for a large number of devices.
Indeed, we see a relative decrease in accuracy in Figure 9(e) when compared
to Figure 9(c). For 100 devices, the accuracy of MAC address association
decreases to around $89\%$ for both mobility-profiles. Conflict sizes
increase to a maximum value of 13, as seen in Figure 9(f), but this is still
not large enough to degrade the efficiency of the association strategy [1].
The results of the case study show that the MAC address randomization
currently proposed by the BLE standard is not enough to safeguard
user-privacy. The association strategy [1] can successfully defeat the
randomization procedure and correctly fingerprint close to $90\%$ of the
devices, even in highly dense and mobile scenarios. An adversary could set up
multiple sniffers strategically and easily track a particular user device.
The high accuracy of MAC address association in the initial case study made
us look into methods to avoid device-traceability. We reduced the
randomization interval of the device population to 3 minutes: devices
changing their private addresses quickly should lead to larger conflict sizes
and, hence, a lower association accuracy for [1]. Using the Mobile-Free
mobility-profile, we varied the number of devices inside SimBle up to 100 for
this smaller randomization interval, with devices belonging to multiple
device classes. We observe in Figure 10 that the accuracy indeed decreases,
to a minimum of around $78\%$, with the conflict size growing to 97.
Figure 10: Accuracy of MAC address associations and average conflict size
observed by MAC association strategy[1] on SimBle generated traces for Mobile-
Free mobility-profile with Randomization interval of 3 minutes
With a single device class, [1] might achieve lower accuracy, but $78\%$
correct associations are still a threat to user-privacy. Hence, lowering the
randomization interval is not the only measure the BLE standard should
consider.
Based on the case study, we summarize the following recommendations to lower
the accuracy of successful MAC address association:
1. 1.
The recommended randomization interval should be lowered. This might lead to
increased connection times; optimizing the IRK exchange and the resolving
list at the receiver could allow BLE devices to change addresses frequently
without compromising performance.
2. 2.
The parameter exploited by [1] in Section VI-A is the characteristic time,
which acts as a weak identifier. This parameter is unique to a device and
varies across the device population, which makes device identification
easier. We suggest that the standard recommend vendors adopt similar
characteristic times.
## VII Final remarks and future steps
MAC address randomization is indispensable for protecting user-privacy in
BLE, as seen in Section II. If devices keep advertising their true MAC
address, i.e., their Identity Address, they can easily be tracked by
coordinated passive sniffing. Widespread usage of resolvable private
addresses can protect the privacy of users to some extent.
On the other hand, vendor-dependent MAC address randomization has made the
retrieval of realistic BLE traces more and more challenging. The lack of
ground truth in randomized traces and the impracticality of large-scale
passive trace collection make it almost impossible to test solutions based on
trajectory reconstruction or user identification [21] [22] [23] [24] [25]
[26] [27].
All existing and future works based on device identification using MAC
addresses in BLE must be revisited in light of BLE privacy provisions like
private addresses. SimBle answers this issue: researchers can now generate
large-scale traces with the devices of their interest and use them to
validate their work. Sniffers can be deployed accordingly to emulate
real-world passive trace collection for BLE.
Works that perform BLE MAC address association or device fingerprinting are
threats to the privacy provisions of BLE [1][17] [18] [6], as these
strategies lead to the tracking of users. Only SimBle allows the community to
compare the effectiveness of any two of these solutions, because such
comparisons require identical conditions. Experiments and test-beds not only
struggle to reproduce identical conditions but also do not scale. Moreover,
as discussed earlier, finding ground truth for experimentally obtained traces
is practically impossible for large-scale testing.
SimBle is the first BLE simulation stack capable of generating
privacy-preserving traces. It introduces resolvable private addresses, which
are the core of BLE device and network privacy provisions. We showed that it
is capable of emulating the behavior of any real BLE device/hardware; users
only have to choose the device class appropriate to the device they want to
test. SimBle also resolves the lack of ground truth for scalable scenarios
introduced by MAC address randomization, as it provides the associated ground
truth with every generated trace.
We presented a case study of the only generic MAC address association
strategy for BLE available in the literature, using SimBle with realistic
device and mobility scenarios. The case study revealed the user-privacy
trade-off that persists even with MAC address randomization, as close to
$90\%$ of private addresses could be associated correctly in the worst case.
This enforces the need to revise the recommendations currently proposed in
the standard.
Regarding future work, key distribution could be done using control messages
rather than pre-installation at the nodes. The BLE stack could be enriched
with different device pairing modes. Also, as one of the aims of SimBle is to
emulate any real device, more vendor-specific information could be added to
facilitate usability. Finally, we aim to evaluate and compare more BLE
privacy-related works in the future using SimBle.
## References
* [1] L. Jouans, A. C. Viana, N. Achir, and A. Fladenmuller, “Associating the randomized bluetooth mac addresses of a device,” in IEEE Annual Consumer Communications & Networking Conference, CCNC 2021, Las Vegas, NV, USA, January 9-12, 2021, pp. 1–6, 2021.
* [2] Fredrik Dahlqvist, Mark Patel, Alexander Rajko, and Jonathan Shulman, Growing opportunities in the Internet of Things.
* [3] K. Chang, “Bluetooth: a viable solution for iot? [industry perspectives],” IEEE Wireless Communications, vol. 21, no. 6, pp. 6–7, 2014.
* [4] W. Albazrqaoe, J. Huang, and G. Xing, “Practical bluetooth traffic sniffing: Systems and privacy implications,” in Proceedings of the 14th Annual International Conference on Mobile Systems, Applications, and Services, MobiSys ’16, (New York, NY, USA), p. 333–345, Association for Computing Machinery, 2016.
* [5] M. Cominelli, F. Gringoli, P. Patras, M. Lind, and G. Noubir, “Even black cats cannot stay hidden in the dark: Full-band de-anonymization of bluetooth classic devices,” in 2020 IEEE Symposium on Security and Privacy (SP), pp. 534–548, 2020.
* [6] J. Martin, D. Alpuche, K. Bodeman, L. Brown, E. Fenske, L. Foppe, T. Mayberry, E. Rye, B. Sipes, and S. Teplov, “Handoff all your privacy–a review of apple’s bluetooth low energy continuity protocol,” PoPETs, vol. 2019, no. 4, pp. 34–53, 2019.
* [7] Kartik Patel, http://kartikpatel.in/ns-3-dev-git/.
* [8] Stijn Geysen, https://gitlab.com/Stijng/ns3-ble-module/-/tree/master/ble.
* [9] Ubertooth One, https://greatscottgadgets.com/ubertoothone/.
* [10] B. SIG, Specification of the Bluetooth System, Core v4.0. 2010.
* [11] B. SIG, Specification of the Bluetooth System, Core v4.1. 2013-03-12.
* [12] B. SIG, Specification of the Bluetooth System, Core v5.2. 2019-12-31.
* [13] IEEE Std 802-2014, IEEE Standard for Local and Metropolitan Area Networks: Overview and Architecture.
* [14] B. SIG, Specification of the Bluetooth System, Core v5.1. 2019-01-21.
* [15] F. I. P. S. P. 197, “Announcing the advanced encryption standard (aes),” vol. 21, p. 51, 2001.
* [16] Jason Lee, Encryptions.
* [17] J. K. Becker, D. Li, and D. Starobinski, “Tracking anonymized bluetooth devices,” Proceedings on Privacy Enhancing Technologies, vol. 2019, no. 3, pp. 50–65, 2019.
* [18] G. Celosia and M. Cunche, “Saving private addresses: an analysis of privacy issues in the bluetooth-low-energy advertising mechanism,” in MOBIQUITOUS, pp. 444–453, 2019.
* [19] S. Martello and P. Toth, “Linear assignment problems,” in North-Holland Mathematics Studies, vol. 132, pp. 259–282, Elsevier, 1987.
* [20] G. C. Harfst and E. M. Reingold, “A potential-based amortized analysis of the union-find data structure,” ACM SIGACT News, vol. 31, no. 3, pp. 86–95, 2000.
* [21] G. Aceto, D. Ciuonzo, A. Montieri, V. Persico, and A. Pescapé, “Mirage: Mobile-app traffic capture and ground-truth creation,” in 2019 4th International Conference on Computing, Communications and Security (ICCCS), pp. 1–8, 2019.
* [22] G. Michau, A. Nantes, A. Bhaskar, E. Chung, P. Abry, and P. Borgnat, “Bluetooth data in an urban context: Retrieving vehicle trajectories,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 9, pp. 2377–2386, 2017.
* [23] Y. Xu, D. He, P. Chao, J. Kim, W. Hua, and X. Zhou, “Route reconstruction using low-quality bluetooth readings,” in Proceedings of the 28th International Conference on Advances in Geographic Information Systems, pp. 179–182, 2020.
* [24] A. Bhaskar, M. Qu, and E. Chung, “Bluetooth vehicle trajectory by fusing bluetooth and loops: Motorway travel time statistics,” IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 1, pp. 113–122, 2014.
* [25] A. Alghamdi, T. Nadeem, and M. Cetin, “Bluemap: A pervasive bluetooth-based vehicle trajectory reconstruction system,” in 2018 IEEE Global Communications Conference (GLOBECOM), pp. 1–7, IEEE, 2018.
* [26] A. Alhamoud, A. A. Nair, C. Gottron, D. Böhnstedt, and R. Steinmetz, “Presence detection, identification and tracking in smart homes utilizing bluetooth enabled smartphones,” in 39th Annual IEEE Conference on Local Computer Networks Workshops, pp. 784–789, IEEE, 2014.
* [27] W. Shao, T. Nguyen, K. Qin, M. Youssef, and F. D. Salim, “Bledoorguard: a device-free person identification framework using bluetooth signals for door access,” IEEE Internet of Things Journal, vol. 5, no. 6, pp. 5227–5239, 2018.
# Model discovery in the sparse sampling regime
Gert-Jan Both, Georges Tod & Remy Kusters
Université de Paris, INSERM U1284
Center for Research and Interdisciplinarity (CRI)
F-75006 Paris, France
<EMAIL_ADDRESS>
###### Abstract
To improve the physical understanding and the predictions of complex
dynamical systems, such as ocean dynamics and weather prediction, it is of
paramount interest to identify interpretable models from coarsely and
off-grid sampled observations. In this work we investigate how deep learning
can improve model discovery of partial differential equations when the
spacing between sensors is large and the samples are not placed on a grid. We
show how leveraging physics-informed neural network interpolation and
automatic differentiation allows a better fit of the data and its
spatiotemporal derivatives, compared to more classical spline interpolation
and numerical differentiation techniques. As a result, deep-learning-based
model discovery allows recovering the underlying equations, even when sensors
are placed further apart than the data's characteristic length scale and in
the presence of high noise levels. We illustrate our claims on both synthetic
and experimental data sets where combinations of physical processes such as
(non)linear advection, reaction, and diffusion are correctly identified.
## 1 Introduction
Mathematical models are central in modelling complex dynamical processes such
as climate change, the spread of an epidemic, or the design of aircraft. To
derive such models, conservation laws, physical principles, and
phenomenological behaviors are key. However, some systems are too complex to
model with a purely bottom-up approach Bolton & Zanna (2019); Sanchez-Pi et
al. (2020).
When observational data is available, automated model discovery tools become
increasingly useful to derive partial differential equations directly from
the data. The classical method for data-driven model discovery is to apply
sparse regression on a set of pre-selected features, the so-called library.
In the case of partial differential equations, this library is constructed
from a set of (higher-order) spatial derivatives. Model discovery is thus
effectively a two-step process: first construct the library, then apply
sparse regression. Numerically differentiating the data to construct the
library using finite differences is extremely sensitive to noise; in practice,
usually splines are fitted first and then differentiated. Splines model the
data as piece-wise polynomials, but this expansion breaks down when the
spacing between two sensors is large. These methods, which we refer to as
classical methods, thus fundamentally limit model discovery to densely sampled
data sets: even when no noise is present, the error incurred by the numerical
differentiation corrupts the library and renders the sparse regression
algorithm useless. The limits of classical interpolation methods have long
been known and are often cited as a reason to use neural networks instead.
Automatic differentiation can then be used to calculate the derivatives Baydin
et al. (2017), resulting in much more accurate derivatives. Previous works
Long et al. (2018); Both et al. (2021); Both & Kusters (2020) have shown that
using neural networks to create a surrogate of the data allows model discovery
in noisy and small data sets.
In this paper we systematically study how sample spacing influences model
discovery and compare neural-network based interpolation with classical
methods. Our focus is the influence of the differentiation method used to
construct the (higher-order) derivatives and its impact on model discovery, in
particular when the spacing between two sensors $\Delta x$ is larger than the
underlying equations' characteristic length scale $l_{c}$. As the NN-based
model discovery method, we use DeepMoD, which can combine NN-based
interpolation with any sparse regression method Both et al. (2021); Both &
Kusters (2020). By using an identical sparse regression algorithm for both the
classical method and DeepMoD, we can isolate the effect of interpolation on
the library and the discovered equation. Our results show that NN-based
interpolators, in contrast to classical methods, can recover the underlying
equation when $\Delta x>l_{c}$. Furthermore, we show that NN-based
interpolation can succeed even when $\Delta x\gg l_{c}$ by either randomly
sampling or displacing the sampling grid over time. We corroborate our
findings with experimental data sets of the 2D advection-diffusion equation
and the 1D cable equation. In both cases, DeepMoD, discovers the underlying
equation in this sparsely sampled regime, contrarily to classical methods. Our
findings solidify the case for deep learning methods by showing that they
succeed in a regime where classical methods fundamentally fail.
## 2 Related works
#### Sensor placement
There exists a vast literature on determining optimal sensor placement for
control theory or signal reconstruction based on a library of features,
emphasizing the importance of sampling in the limit of sparse data Brunton et
al. (2013); Manohar et al. (2018); Wang et al. (2019). While many of these
sampling strategies have been developed to either reconstruct multi-scale
data-sets Champion et al. (2019b), flow-fields Brunton et al. (2015); Loiseau
et al. (2017) or other physical properties of a system Schaeffer et al.
(2018), research on the exact role of spatial and temporal sensor density or
distribution for model discovery has received limited attention.
#### Sparse regression-based model discovery
Using sparse regression to discover differential equations was popularized by
algorithms such as SINDY Brunton et al. (2016) and PDE-find Rudy et al.
(2017b) and has received considerable interest for both ODEs Mangan et al.
(2017); Messenger & Bortz (2020) as well as for PDEs Rudy et al. (2017a); Long
et al. (2018); Vaddireddy et al. (2020). These approaches have since been
expanded to automated hyper-parameter tuning Champion et al. (2019a); Maddu et
al. (2019), a Bayesian approach to model discovery using Sparse Bayesian
Learning Yuan et al. (2019), and model discovery for parametric differential
equations Rudy et al. (2019).
#### Deep learning-based model discovery
With the advent of Physics Informed Neural Networks Raissi et al. (2017a; b),
neural networks have become one of the prime approaches to create a surrogate
of the data and perform sparse regression either on the network's prediction
Schaeffer (2017); Berg & Nyström (2019) or within the loss function of the
neural network Both et al. (2021); Both & Kusters (2020). Alternatively,
Neural ODEs were also used to discover the unknown governing equation
Rackauckas et al. (2020) from physical data-sets. A different optimisation
strategy, based on the method of alternating directions, is considered in Chen et
al. (2020), and graph-based approaches have been developed recently Seo & Liu
(2019); Sanchez-Gonzalez et al. (2018). Finally, Cranmer et al. (2020);
Greydanus et al. (2019) directly encode symmetries in neural networks using
respectively the Hamiltonian and Lagrangian framework.
## 3 Methods
#### Sparse regression
A popular approach to discover a PDE from a spatio-temporal data set is to
apply sparse regression on a library of candidate terms $\Theta$, e.g. solve,
$u_{t}=f(1,u,u_{x},...)=\Theta\cdot\xi,$ (1)
to obtain the coefficient vector $\xi$. Here $u_{t}$ is the temporal
derivative and each column in $\Theta$ is a candidate term for the underlying
equation, typically a combination of polynomial and spatial derivative
functions (e.g. $u$, $u_{x}$, $uu_{x}$). To promote the sparsity of this
solution, an $l_{1}$ regularization is added to the problem, leading to the
so-called Lasso regression:
$\xi^{*}=\min_{\xi}\left\lVert
u_{t}-\Theta\cdot\xi\right\rVert^{2}+\lambda\sum_{i}|\xi_{i}|.$ (2)
Here $\lambda$ controls the strength of the regularization, and hence the
resulting level of sparsity. In this paper we use the Lasso as a sparse
regression algorithm, with $\lambda$ determined by 5-fold cross-validation.
The resulting coefficients are normalized by the $l_{2}$ norm of the feature
vector, $\hat{\xi}_{i}=\xi_{i}\cdot||\Theta_{i}||_{2}/||u_{t}||_{2}$ and
thresholded. The exact value of the threshold can significantly influence the
resulting equation. We use exactly the same Lasso and threshold for both
DeepMoD and the classical methods so as to eliminate the influence of the
exact variable selection algorithm used. Our goal is to compare how the
feature library $\Theta$ and temporal derivative $u_{t}$ as generated by
either DeepMoD or a classical method differ, and its resulting effect on model
discovery.
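As a concrete illustration, the sketch below implements this selection step with scikit-learn's LassoCV; the function name, the array shapes, and the threshold value are illustrative assumptions rather than the exact pipeline used in the experiments.

```python
# A minimal sketch of the sparse regression step (Eqs. 1-2):
# cross-validated Lasso, coefficient normalization, and thresholding.
import numpy as np
from sklearn.linear_model import LassoCV

def sparse_fit(theta, u_t, threshold=0.1):
    """theta: (n_samples, n_terms) candidate library; u_t: (n_samples,)."""
    lasso = LassoCV(cv=5, fit_intercept=False).fit(theta, u_t)
    xi = lasso.coef_
    # Normalize by the l2 norm of each feature column and of u_t,
    # then threshold to select the active terms.
    xi_hat = xi * np.linalg.norm(theta, axis=0) / np.linalg.norm(u_t)
    mask = np.abs(xi_hat) > threshold
    return xi * mask, mask
```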
#### Numerical differentiation
The features of the library $\Theta$ consist of (higher-order) derivatives,
which must be computed from the observed data. Numerical
differentiation is typically performed either by finite differences or by
fitting a spline on the data and subsequently differentiating this spline.
Finite difference methods directly operate on the observed data to calculate
the derivative. In this paper, we use a standard second order accurate central
difference scheme. Finite differences are computationally cheap and easy to
scale to higher dimensions, but suffer from sensitivity to noise and require
samples to be closely spaced for accurate results; the truncation error of the
scheme given above scales with the grid sampling, $h$, as
$\mathcal{O}\left(h^{2}\right)$. In the sparse regime where $\Delta x\to
l_{c}$, higher order schemes will not further improve this method.
Furthermore, FD requires samples on the edges of the domain to be discarded,
and for small data-sets and higher order schemes this can become a significant
fraction of the total data.
A more accurate and widely used alternative is to fit a spline to the data and
differentiate it. When fitting using splines, the data is approximated by a
piece-wise polynomial with enforced continuity at the edges. Splines yield
more accurate results in practice, but do not scale easily to higher
dimensions, especially when using splines of higher order. This hinders model
discovery, which requires these higher orders due to the derivatives in the
feature library: with a fifth-order spline approximating the data, the 3rd
order derivative is effectively approximated by only a second order polynomial.
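The sketch below contrasts the two classical differentiation routes on a single frame; the grid, the test signal, and the smoothing factor are illustrative (Appendix A uses $s=0.01$ and fifth-order splines).

```python
# Classical differentiation of one frame u(x): central finite differences
# versus a smoothing spline that is differentiated analytically.
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(-3, 4, 20)    # coarse sensor grid
u = np.exp(-x**2)             # stand-in for one frame of observed data

# 1) Second-order central differences (one-sided at the domain edges).
u_x_fd = np.gradient(u, x)

# 2) Fifth-order smoothing spline; note the 3rd derivative of a 5th-order
#    piece-wise polynomial is only piece-wise quadratic.
spline = UnivariateSpline(x, u, k=5, s=0.01)
u_x_spl = spline.derivative(1)(x)
u_xxx_spl = spline.derivative(3)(x)
```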
#### DeepMoD
DeepMoD Both et al. (2021); Both & Kusters (2020) (github.com/PhIMaL/DeePyMoD) is a
neural network-based model discovery algorithm. It uses a neural network to
learn both a noiseless surrogate of the data $\hat{u}$ and a coefficient
vector $\xi$ by minimizing,
$\mathcal{L}=\frac{1}{N}\sum_{i=1}^{N}\left(u_{i}-\hat{u}_{i}\right)^{2}+\frac{1}{N}\sum_{i=1}^{N}\left((\hat{u}_{t})_{i}-\Theta_{i}(\xi\cdot
g)\right)^{2}.$ (3)
Here $\Theta$ and $\hat{u}_{t}$ are calculated by automatic differentiation of
the neural network output $\hat{u}$. $g$ is a mask which sets the active
terms, i.e. the terms that feature in the differential equation. The first
term learns the data mapping $(x,t)\to\hat{u}$, while the second term
constrains the network to solutions of the partial differential equation given
by $\hat{u}_{t},\Theta$ and $\xi\cdot g$. During training, the coefficients
$\xi$ are determined by solving the least squares problem corresponding to the
second part of eq. 3. The mask $g$ is updated separately by a sparse
regression algorithm. The mask $g$ thus selects which terms are in the
equation, while $\xi$ are the coefficients of these active terms. The value of
the threshold can impact the discovered equation. To remove this factor from
our comparison, we use exactly the same method to find the sparse coefficient
vector $\xi^{*}$ for DeepMoD and the classical methods. More details on
DeepMoD can be found in Both et al. (2021); Both & Kusters (2020).
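For concreteness, the sketch below condenses the loss of Eq. (3) into a few lines of PyTorch; the network interface, the example library, and the in-loop least-squares solve are simplified assumptions and do not reproduce the actual DeePyMoD API.

```python
# A condensed sketch of the DeepMoD loss (Eq. 3) for a 1D PDE.
import torch

def deepmod_loss(net, x, t, u, mask):
    """net: (N,2) -> (N,1) surrogate; mask: boolean tensor over library terms."""
    x.requires_grad_(True); t.requires_grad_(True)
    u_hat = net(torch.cat([x, t], dim=1))
    ones = torch.ones_like(u_hat)
    # Derivatives via automatic differentiation, independent of sample spacing.
    u_t = torch.autograd.grad(u_hat, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u_hat, x, ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    # Example library: [1, u, u_x, u_xx, u*u_x]
    theta = torch.cat([ones, u_hat, u_x, u_xx, u_hat * u_x], dim=1)
    # Coefficients of the active terms from a least-squares solve.
    xi = torch.linalg.lstsq(theta[:, mask], u_t).solution
    mse = torch.mean((u - u_hat) ** 2)                   # data term
    reg = torch.mean((u_t - theta[:, mask] @ xi) ** 2)   # PDE constraint
    return mse + reg
```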
We emphasize two key differences with classical methods: 1) DeepMoD uses
automatic differentiation to calculate $\Theta$, so the accuracy of the
derivatives is not fundamentally linked to the sample spacing as with
numerical differentiation. 2) By including the regression term within the loss
function, we regularise the solution of the neural network $\hat{u}$ with the
learned solution of the PDE. The result of fitting a spline is based solely on
the data, whereas with DeepMoD it is also influenced by the constraint of the
underlying equation (i.e. $\xi$ and $g$). We show in the next section that
these two differences allow model discovery in extremely sparse and noisy data
sets, whereas classical methods fail.
## 4 Results
### 4.1 Synthetic data - Burgers equation
We consider a synthetic data set of the Burgers equation $u_{t}=\nu
u_{xx}-uu_{x}$, with a delta peak initial condition $u(x,t=0)=A\delta(x)$ and
domain $t\in[0.1,1.1],x\in[-3,4]$. This problem can be solved analytically
(see Appendix A) to yield a solution dependent on a dimensionless coordinate
$z=x/\sqrt{4\nu t}$. We recognize the denominator as a time-dependent length
scale: a Burgers data set sampled with spacing $\Delta x$ thus has a
time-dependent dimensionless spacing $\Delta z(t)$. We are interested in the
smallest characteristic length scale, which for this data set is
$l_{c}=\sqrt{4\nu t_{0}}$, where $t_{0}=0.1$ is the initial time of the data
set. We set $A=1$ and $\nu=0.25$, giving $l_{c}=\sqrt{0.1}\approx 0.3$.
Splines do not scale effectively beyond a single dimension, making it hard to
fairly compare across both the spatial and temporal dimensions. We thus study
the effect of spacing only along the spatial axis and minimize the effect of
discretization along the temporal axis by densely sampling 100 frames, i.e.
$\Delta t=0.01$. Along the spatial axis we vary the number of samples between
4 and 40, equivalent to $0.5<\frac{\Delta x}{l_{c}}<5$. We study the relative
error $\epsilon$ as the sum of the relative errors for all the derivatives,
normalized over every frame,
$\epsilon=\sum_{i=1}^{3}\left\langle\frac{\left\lVert\partial_{x}^{i}u_{j}-\partial_{x}^{i}\hat{u}_{j}\right\rVert_{2}}{\left\lVert\partial_{x}^{i}u_{j}\right\rVert_{2}}\right\rangle_{j}$
(4)
where $i$ runs over the derivative orders and $j$ over the frames. The derivatives
are normalised every frame by the $l_{2}$ norm of the ground truth to ensure
$\epsilon$ is independent of the magnitude of $u$. $\epsilon$ takes into
account neither the nature of the noise (e.g. whether it is correlated or
non-Gaussian), nor whether the correct equation is discovered. However, taken
together with a success metric (i.e. whether the right equation was discovered),
it does serve as a useful proxy for the quality of the interpolation.
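A sketch of this metric, assuming the true and estimated derivatives are stored as arrays of shape (order, frames, points):

```python
# Library error of Eq. (4): per-frame relative l2 errors, averaged over
# frames and summed over the first three derivative orders.
import numpy as np

def relative_error(du, du_hat):
    num = np.linalg.norm(du - du_hat, axis=-1)   # (order, frames)
    den = np.linalg.norm(du, axis=-1)
    return np.sum(np.mean(num / den, axis=1))
```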
Figure 1b) shows $\epsilon$ as a function of the relative spacing $\Delta
x/l_{c}$ and whether the correct equation was discovered. The error when using
splines (yellow) increases with $\Delta x$ and, as expected, we are unable to
discover the correct equation for $\Delta x>0.8l_{c}$ (dots indicate that the
correct equation is discovered and triangles indicate that it is not).
Considering the NN-based DeepMoD method, sampled on a grid (green), we observe
that it is able to accurately interpolate and discover the correct equation up
to $\Delta x\approx 1.2l_{c}$. The reason for this is that NN-based
interpolation constructs a surrogate of the data, informed by both the spatial
and the temporal dynamics of the data set, while classical interpolation is
intrinsically limited to a single time frame.
In figure 1c) we consider the same graph with $20\%$ white noise on the data.
Despite smoothing, the spline is unable to construct an accurate library and
fails to discover the correct equation in every case. DeepMoD stands in stark
contrast, discovering the correct equation with comparable relative error as
in the $0\%$ noise case.
Figure 1: a) The three sampling strategies considered. b) and c) Error in the
function library (Eq. 4) as a function of the distance between the sensors
$\Delta x$, normalized with $l_{c}=\sqrt{4\nu t_{0}}$, for b) noise-less data
and c) 20$\%$ additive noise. The yellow symbols correspond to a spline
interpolation and the green, blue and red correspond to the NN-based model
discovery with various sampling strategies. The circles indicate that model
discovery was successful while the triangles indicate that the incorrect model
was discovered. The horizontal dashed line indicates the smallest
characteristic length-scale in the problem: $\Delta x/l_{c}=1$.
#### Off-grid sampling
Whereas higher-order splines are constrained to interpolating along a single
dimension, DeepMoD uses a neural network to interpolate along both the spatial
and temporal axis. This releases us from the constraint of on-grid sampling,
and we exploit this by constructing an alternative sampling method. We observe
that for a given number of samples $n$, DeepMoD is able to interpolate much
more accurately if these samples are randomly drawn from the sampling domain.
We show in figure 1b and c (red) that the relative error $\epsilon$ in the
sparse regime can be as much as two orders of magnitude lower compared to the
grid-sampled results at the same number of samples. We hypothesize that this
is due to the spatio-temporal interpolation of the network. By interpolating
along both axes, each sample effectively covers its surrounding area, and by
randomly sampling we cover more of the spatial sampling domain. Contrarily,
sampling on a grid leaves large areas uncovered; we are effectively sampling
at a much lower resolution than when using random sampling.
To test whether or not this improvement is intrinsically linked to the
randomness of sampling, we also construct an alternative sampling method
called shifted-grid sampling. Given a sampling grid with sensor distance
$\Delta x$, shifted-grid sampling translates this grid a distance $\Delta a$
every frame, leading to an effective sample distance of $\Delta a\ll\Delta x$.
This strategy, like random sampling, varies the sensor position over
time, but does so in a deterministic and grid-based way. We show this sampling
strategy in figure 1a, while panels b and c confirm our hypothesis: shifted-grid
sampling (blue) performs similarly to random sampling. Shifted-grid
sampling relies on a densely sampled temporal axis 'compensating' for the
sparsely sampled spatial axis. This makes off-grid sampling beneficial when
either the time or space axis, but not both, can be sampled with a high
resolution. In the experimental section we show that if both axes are sparsely
sampled, we do not see a strong improvement over grid sampling.
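The sketch below generates sample coordinates for the three strategies of Figure 1a; the domain bounds, sensor count, and shift size are illustrative.

```python
# Grid, random, and shifted-grid sampling of (x, t) coordinates.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.1, 1.1, 0.01)          # 100 densely sampled frames
x_lo, x_hi, n_x = -3.0, 4.0, 5         # 5 sensors per frame
dx = (x_hi - x_lo) / n_x
da = 0.1 * dx                          # per-frame shift, da << dx

t_rep = np.repeat(t, n_x)
grid = np.stack([np.tile(x_lo + dx * np.arange(n_x), len(t)), t_rep], axis=1)
random_xt = np.stack([rng.uniform(x_lo, x_hi, len(t_rep)), t_rep], axis=1)
# Shifted grid: translate the grid by da every frame, wrapping in the domain.
shift = dx * np.arange(n_x)[None, :] + da * np.arange(len(t))[:, None]
shifted = np.stack([(x_lo + shift % (x_hi - x_lo)).ravel(), t_rep], axis=1)
```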
### 4.2 Experimental data - 2D Advection-Diffusion
In an electrophoresis experiment, a charged dye is pipetted in a gel over
which a spatially uniform electric field is applied (see Figure 2a)). The dye
passively diffuses in the gel and is advected by the applied electric field,
giving rise to an advection-diffusion equation with advection in one
direction: $u_{t}=D(u_{xx}+u_{yy})+vu_{y}$. Both et al. (2021) showed that
_DeepMoD_ could discover the correct underlying equation from the full data-
set (size 120 x 150 pixels and 25 frames). Here, we study how much we can
sub-sample this data and still discover the advection-diffusion equation.
Figure 2: a) Experimental setup of the gel electrophoresis. b) Three time
frames of the density with a spatial resolution of 20x25. c) and d) Success
diagram for the experimental data indicating correct model discovery (Yellow
indicates the correct AD equation $u_{t}=D(u_{xx}+u_{yy})+vu_{y}$ is
discovered) as function of the spatial and temporal resolution for c) grid
sampling and d) random sampling. e) Obtained mask and coefficients ($D=0.025$
and $v=(0,0.2)$) for the artificial data-set as a function of the noise
level (11x11 spatial resolution). Hereby, yellow indicates the terms selected
by the algorithm and the red dashed box the terms that are expected in the
PDE. f) Success diagrams for various levels of additive noise, comparing the
result of DeepMoD with a grid and random sampling strategy and the classical
LassoCV algorithm on a Finite Difference (FD)-based library (after SVD
filtering of the different frames).
In figure 2 c) and d) we perform grid-based as well as random sub-sampling
of the data. The neural network-based method discovers the advection-diffusion
equation on as few as 6 x 8 spatial sampling points with 13 time-points, or
with 20 x 25 pixels on only 3 time-points. The minimum number of required
samples is similar for grid and random sampling, confirming that when both
axes are poorly sampled, there is no benefit to sample randomly.
The smallest characteristic length scale in the problem is the width of the
dye at $t=t_{0}$, which we estimate as $l_{c}\approx 10$ pixels. For the data
presented in figure 2c) and 2d), at a resolution below $10\times 13$ sensors
classical approaches would inherently fail, even if no noise was present in
the data set. This is indeed what we observe: using a finite difference-based
library we were unable to recover the advection-diffusion equation, even after
denoising with SVD (See Appendix A for details).
The use of a neural network and random sampling leads to non-deterministic
behaviour: the neural network training dynamics depend on its initialization
and two randomly sampled datasets of similar size might not lead to similar
results. In practice this leads to a ’soft’ decision boundary, where a
fraction of a set of runs with different initialization and datasets fail. We
discuss and study this issue in appendix B.
#### Noiseless synthetic data set
To further confirm our results from the previous section, we simulate the
experiment by solving the 2D advection-diffusion equation with a Gaussian
initial condition and experimentally determined parameters ($D=0.025$ and
$v=(0,0.2)$). Figure 2e) shows the selected terms and their magnitude as
functions of the applied noise levels for a highly subsampled data-set (grid
sampling, spatial resolution of 11x11 and temporal resolution 14). The correct
AD equation is recovered up to noise levels of 100$\%$ (See figure 2e),
confirming the noise robustness of the NN-based model discovery. In panel f)
we compare the deep-learning based model discovery using grid and random
sampling with classical methods for various noise levels and sensor spacing
with a fixed temporal resolution of 81 frames (data for the FD was
pre-processed with an SVD filter, see Appendix A for further details). We confirm that,
similarly to the Burgers example of the previous section, the correct
underlying PDE is discovered even below the smallest characteristic
length-scale in the problem (indicated by a red dashed line in figure 2f).
This figure confirms our three main conclusions: 1) In the noiseless limit,
classical approaches perform only slightly worse than NN-based model
discovery for grid sampling. 2) Increasing the noise level dramatically
impacts classical model discovery while barely impacting NN-based model
discovery and 3) random sampling over space considerably improves performance,
allowing model discovery with roughly 4-8 times fewer sample points for this
particular data-set (depending on the noise level).
### 4.3 Experimental data - Cable equation
Applying a constant voltage to an RC-circuit with longitudinal resistance (see
figure 3a) results in a time-dependent voltage increase throughout the circuit
due to the charging of the capacitors. This rise is modeled by the cable
equation, which is essentially a reaction-diffusion equation
$u_{t}=u_{xx}/(R_{l}C)-u/(R_{m}C)$, with $C$ the capacitance, $R_{l}$ the
longitudinal resistance and $R_{m}$ the parallel resistance of the circuit.
The discrete nature of the experiment automatically gives $\Delta x=O(l_{c})$.
We consider an extreme case where we only have seven sensors throughout the
circuit (i.e. spatial axis), but take 2500 samples along the time axis. Figure
3b shows the measured voltage at these seven elements. Initially, all the
capacitors are uncharged and we observe a sharp voltage increase at the first
element. As the capacitors charge, this further propagates through the
circuit, charging the capacitors and resulting in the curves shown in the
figure. We apply both a classical approach with the library generated with
splines and DeepMoD to a varying number of elements. Figure 3 c and d show
that DeepMoD discovers the cable equation with as few as seven elements,
whereas classical methods are unable to find the cable equation at any number
of elements.
Figure 3: a) Schematic overview of the electronic setup to generate the cable
equation. b) The voltage drop $u$ as function of time for various positions
along the circuit for a circuit with 7 elements. The mask obtained for c)
NN-based model discovery and d) cross-validated Lasso with a spline-based
library (yellow indicates the term was recovered by the algorithm). The red boxes
indicate the two expected terms in the equation.
## 5 Discussion and future work
In this paper we showed how a deep learning approach allows the discovery of
partial differential equations from coarsely and off-grid sampled observations
in time and space. The correct equations are discovered even when the sensor
spacing is larger than the data set’s characteristic length scale, a regime
inaccessible to numerical differentiation procedures. We have
also shown that the presence of noise quickly deteriorates the performance of
classical methods, whereas the neural network based method is much less
affected. However, in the limit of very sparse data, model discovery can be
sensitive to the exact positioning of the sensors, hence sensitive to where
exactly on the grid the samples are drawn. Future work could investigate the
upper limit of the characteristic length scale above which the approach
consistently starts failing and how it relates to the spectrum of the data. We
will also focus in the future on including more structure in the interpolation
for better convergence and initialization robustness, for example using
Gaussian Processes.
## Acknowledgments
This work received support from the CRI Research Fellowship. We thank the
Bettencourt Schueller Foundation long term partnership and NVidia for
supplying the GPU under the Academic Grant program. We would also like to
thank the authors and contributors of Numpy (Harris et al. (2020)), Scipy
(Virtanen et al. (2020)), Scikit-learn (Pedregosa et al. (2011)), Matplotlib
(Hunter (2007)), Ipython (Perez & Granger (2007)), and Pytorch (Paszke et al.
(2019)) for making our work possible through their open-source software. The
authors declare no competing interest.
## References
* Baydin et al. (2017) Atılım Günes Baydin, Barak A Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind. Automatic differentiation in machine learning: a survey. _The Journal of Machine Learning Research_ , 18(1):5595–5637, 2017.
* Berg & Nyström (2019) Jens Berg and Kaj Nyström. Data-driven discovery of PDEs in complex datasets. _Journal of Computational Physics_ , 384:239–252, May 2019\. ISSN 00219991. doi: 10.1016/j.jcp.2019.01.036. URL http://arxiv.org/abs/1808.10788. arXiv: 1808.10788.
* Bolton & Zanna (2019) Thomas Bolton and Laure Zanna. Applications of deep learning to ocean data inference and subgrid parameterization. _Journal of Advances in Modeling Earth Systems_ , 11(1):376–399, 2019.
* Both & Kusters (2020) Gert-Jan Both and Remy Kusters. Sparsely constrained neural networks for model discovery of pdes. _arXiv preprint arXiv:2011.04336_ , 2020.
* Both et al. (2021) Gert-Jan Both, Subham Choudhury, Pierre Sens, and Remy Kusters. Deepmod: Deep learning for model discovery in noisy data. _Journal of Computational Physics_ , 428:109985, 2021.
* Brunton et al. (2013) Bingni W Brunton, Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Optimal sensor placement and enhanced sparsity for classification. _arXiv preprint arXiv:1310.4217_ , 2013.
* Brunton et al. (2015) Steven L Brunton, Joshua L Proctor, Jonathan H Tu, and J Nathan Kutz. Compressed sensing and dynamic mode decomposition. _Journal of computational dynamics_ , 2(2):165, 2015.
* Brunton et al. (2016) Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. _Proceedings of the National Academy of Sciences_ , 113(15):3932–3937, April 2016. ISSN 0027-8424, 1091-6490. doi: 10.1073/pnas.1517384113. URL http://www.pnas.org/lookup/doi/10.1073/pnas.1517384113.
* Champion et al. (2019a) Kathleen Champion, Bethany Lusch, J. Nathan Kutz, and Steven L. Brunton. Data-driven discovery of coordinates and governing equations. _arXiv:1904.02107 [stat]_ , March 2019a. URL http://arxiv.org/abs/1904.02107. arXiv: 1904.02107.
* Champion et al. (2019b) Kathleen P Champion, Steven L Brunton, and J Nathan Kutz. Discovery of nonlinear multiscale systems: Sampling strategies and embeddings. _SIAM Journal on Applied Dynamical Systems_ , 18(1):312–333, 2019b.
* Chen et al. (2020) Zhao Chen, Yang Liu, and Hao Sun. Deep learning of physical laws from scarce data. _arXiv:2005.03448 [physics, stat]_ , May 2020. URL http://arxiv.org/abs/2005.03448. arXiv: 2005.03448.
* Cranmer et al. (2020) Miles Cranmer, Sam Greydanus, Stephan Hoyer, Peter Battaglia, David Spergel, and Shirley Ho. Lagrangian Neural Networks. _arXiv:2003.04630 [physics, stat]_ , March 2020. URL http://arxiv.org/abs/2003.04630. arXiv: 2003.04630.
* Greydanus et al. (2019) Sam Greydanus, Misko Dzamba, and Jason Yosinski. Hamiltonian Neural Networks. _arXiv:1906.01563 [cs]_ , September 2019. URL http://arxiv.org/abs/1906.01563. arXiv: 1906.01563.
* Harris et al. (2020) Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. Array programming with NumPy. _Nature_ , 585(7825):357–362, September 2020\. ISSN 0028-0836, 1476-4687. doi: 10.1038/s41586-020-2649-2. URL http://www.nature.com/articles/s41586-020-2649-2.
* Hunter (2007) John D. Hunter. Matplotlib: A 2D Graphics Environment. _Computing in Science Engineering_ , 9(3):90–95, May 2007. ISSN 1558-366X. doi: 10.1109/MCSE.2007.55. Conference Name: Computing in Science Engineering.
* Loiseau et al. (2017) Jean-Christophe Loiseau, Bernd R Noack, and Steven L Brunton. Sparse reduced-order modeling: sensor-based dynamics to full-state estimation. _arXiv preprint arXiv:1706.03531_ , 2017.
* Long et al. (2018) Zichao Long, Yiping Lu, Xianzhong Ma, and Bin Dong. Pde-net: Learning pdes from data. In _International Conference on Machine Learning_ , pp. 3208–3216, 2018.
* Maddu et al. (2019) Suryanarayana Maddu, Bevan L. Cheeseman, Ivo F. Sbalzarini, and Christian L. Müller. Stability selection enables robust learning of partial differential equations from limited noisy data. _arXiv:1907.07810 [physics]_ , July 2019. URL http://arxiv.org/abs/1907.07810. arXiv: 1907.07810.
* Mangan et al. (2017) Niall M Mangan, J Nathan Kutz, Steven L Brunton, and Joshua L Proctor. Model selection for dynamical systems via sparse regression and information criteria. _Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences_ , 473(2204):20170009, 2017.
* Manohar et al. (2018) Krithika Manohar, Bingni W Brunton, J Nathan Kutz, and Steven L Brunton. Data-driven sparse sensor placement for reconstruction: Demonstrating the benefits of exploiting known patterns. _IEEE Control Systems Magazine_ , 38(3):63–86, 2018.
* Messenger & Bortz (2020) Daniel A Messenger and David M Bortz. Weak sindy for partial differential equations. _arXiv preprint arXiv:2007.02848_ , 2020.
* Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An Imperative Style, High-Performance Deep Learning Library. _arXiv:1912.01703 [cs, stat]_ , December 2019. URL http://arxiv.org/abs/1912.01703. arXiv: 1912.01703.
* Pedregosa et al. (2011) Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. Scikit-learn: Machine learning in python. _the Journal of machine Learning research_ , 12:2825–2830, 2011.
* Perez & Granger (2007) F. Perez and B. E. Granger. IPython: A System for Interactive Scientific Computing. _Computing in Science Engineering_ , 9(3):21–29, May 2007. ISSN 1558-366X. doi: 10.1109/MCSE.2007.53.
* Rackauckas et al. (2020) Christopher Rackauckas, Yingbo Ma, Julius Martensen, Collin Warner, Kirill Zubov, Rohit Supekar, Dominic Skinner, and Ali Ramadhan. Universal Differential Equations for Scientific Machine Learning. _arXiv:2001.04385 [cs, math, q-bio, stat]_ , January 2020. URL http://arxiv.org/abs/2001.04385. arXiv: 2001.04385.
* Raissi et al. (2017a) Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations. _arXiv:1711.10561 [cs, math, stat]_ , November 2017a. URL http://arxiv.org/abs/1711.10561. arXiv: 1711.10561.
* Raissi et al. (2017b) Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations. _arXiv:1711.10566 [cs, math, stat]_ , November 2017b. URL http://arxiv.org/abs/1711.10566. arXiv: 1711.10566.
* Rudy et al. (2017a) Samuel H Rudy, Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Data-driven discovery of partial differential equations. _Science Advances_ , 3(4):e1602614, 2017a.
* Rudy et al. (2017b) Samuel H. Rudy, Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. Data-driven discovery of partial differential equations. _Science Advances_ , 3(4):e1602614, April 2017b. ISSN 2375-2548. doi: 10.1126/sciadv.1602614. URL http://advances.sciencemag.org/lookup/doi/10.1126/sciadv.1602614.
* Rudy et al. (2019) Samuel H. Rudy, J. Nathan Kutz, and Steven L. Brunton. Deep learning of dynamics and signal-noise decomposition with time-stepping constraints. _Journal of Computational Physics_ , 396:483–506, November 2019. ISSN 00219991. doi: 10.1016/j.jcp.2019.06.056. URL http://arxiv.org/abs/1808.02578. arXiv: 1808.02578.
* Sanchez-Gonzalez et al. (2018) Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin Riedmiller, Raia Hadsell, and Peter Battaglia. Graph networks as learnable physics engines for inference and control. _arXiv:1806.01242 [cs, stat]_ , June 2018. URL http://arxiv.org/abs/1806.01242. arXiv: 1806.01242.
* Sanchez-Pi et al. (2020) Nayat Sanchez-Pi, Luis Marti, André Abreu, Olivier Bernard, Colomban de Vargas, Damien Eveillard, Alejandro Maass, Pablo A Marquet, Jacques Sainte-Marie, Julien Salomon, et al. Artificial intelligence, machine learning and modeling for understanding the oceans and climate change. In _NeurIPS 2020 Workshop-Tackling Climate Change with Machine Learning_ , 2020.
* Schaeffer (2017) Hayden Schaeffer. Learning partial differential equations via data discovery and sparse optimization. _Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences_ , 473(2197):20160446, January 2017. ISSN 1364-5021, 1471-2946. doi: 10.1098/rspa.2016.0446. URL https://royalsocietypublishing.org/doi/10.1098/rspa.2016.0446.
* Schaeffer et al. (2018) Hayden Schaeffer, Giang Tran, and Rachel Ward. Extracting sparse high-dimensional dynamics from limited data. _SIAM Journal on Applied Mathematics_ , 78(6):3279–3295, 2018.
* Seo & Liu (2019) Sungyong Seo and Yan Liu. Differentiable Physics-informed Graph Networks. _arXiv:1902.02950 [cs, stat]_ , February 2019. URL http://arxiv.org/abs/1902.02950. arXiv: 1902.02950.
* Vaddireddy et al. (2020) Harsha Vaddireddy, Adil Rasheed, Anne E Staples, and Omer San. Feature engineering and symbolic regression methods for detecting hidden physics from sparse sensor observation data. _Physics of Fluids_ , 32(1):015113, 2020.
* Virtanen et al. (2020) Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, C. J. Carey, İlhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1 0 Contributors. SciPy 1.0–Fundamental Algorithms for Scientific Computing in Python. _Nature Methods_ , 17(3):261–272, March 2020\. ISSN 1548-7091, 1548-7105. doi: 10.1038/s41592-019-0686-2. URL http://arxiv.org/abs/1907.10121. arXiv: 1907.10121.
* Wang et al. (2019) Zhi Wang, Han-Xiong Li, and Chunlin Chen. Reinforcement learning-based optimal sensor placement for spatiotemporal modeling. _IEEE transactions on cybernetics_ , 50(6):2861–2871, 2019.
* Yuan et al. (2019) Ye Yuan, Junlin Li, Liang Li, Frank Jiang, Xiuchuan Tang, Fumin Zhang, Sheng Liu, Jorge Goncalves, Henning U. Voss, Xiuting Li, Jürgen Kurths, and Han Ding. Machine Discovery of Partial Differential Equations from Spatiotemporal Data. _arXiv:1909.06730 [physics, stat]_ , September 2019. URL http://arxiv.org/abs/1909.06730. arXiv: 1909.06730.
## Appendix A Reproducibility
### A.1 Hyperparameters
#### DeepMoD:
In this paper we use the neural network-based model discovery tool DeepMoD
(github.com/PhIMaL/DeePyMoD). Every experiment uses a neural network with
tanh-activation functions and 4 layers of 30 neurons with random
initialization, and an Adam optimizer with default learning rate $10^{-3}$ and
$\beta=(0.9,0.9)$. The sparsity scheduler has a patience of 500 epochs and a
periodicity of 50 epochs Both & Kusters (2020). We use a cross-validated,
thresholded Lasso sparsity selection with a threshold of 0.2 and otherwise
default parameters from the Sklearn implementation Pedregosa et al. (2011).
#### Spline interpolation:
For the spline interpolation in both the Burgers and the cable equation
experiments, we use 5th order splines with a smoothing parameter of $s=0.01$
in the case of noisy data.
#### Finite difference and SVD filter:
To construct the function library of the 2D Advection diffusion equation we
use a second-order accurate central difference scheme. For the 2D advection-
diffusion data, the data was denoised by decomposing it with the SVD
(Harris et al. (2020)) and selecting the 3 largest modes from the signal.
#### Noise on synthetic data:
We add white noise to the data with a strength relative to the standard
deviation of the data, i.e. $50\%$ noise corresponds to $0.5\cdot\sigma$.
### A.2 Data preparation
#### Burgers equation:
Using the Cole-Hopf transform, the Burgers equation described in the main text
reduces to the heat equation and can be solved exactly for a delta peak
initial condition to give,
$u(x,t)=\sqrt{\frac{\nu}{t\pi}}\left(\frac{(e^{R}-1)e^{-z^{2}}}{1+\frac{\left(e^{R}-1\right)}{2}\text{erfc}(z)}\right).$
(5)
where $R=A/2\nu$ and $z=x/\sqrt{4\nu t}$, a dimensionless coordinate. The
characteristic length-scale is thus the smallest one in the system; for our
case study $l_{c}=\sqrt{4\nu t}|_{t=t_{0}}$. We use a function library
containing all combinations of up to third order spatial derivatives and second
order polynomials in $u$, for a total of 12 terms, i.e.,
$\Theta=\left[1,u_{x},u_{xx},u_{xxx},u,uu_{x},uu_{xx},uu_{xxx},u^{2},u^{2}u_{x},u^{2}u_{xx},u^{2}u_{xxx}\right].$
(6)
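A sketch of this solution, with the parameters of the main text ($A=1$, $\nu=0.25$) and the additive-noise convention of Section A.1; the grid sizes are illustrative.

```python
# Analytic Burgers solution of Eq. (5) on a space-time grid, plus noise.
import numpy as np
from scipy.special import erfc

def burgers(x, t, A=1.0, nu=0.25):
    R = A / (2 * nu)
    z = x / np.sqrt(4 * nu * t)
    return np.sqrt(nu / (np.pi * t)) * (
        np.expm1(R) * np.exp(-z**2) / (1 + 0.5 * np.expm1(R) * erfc(z)))

x, t = np.meshgrid(np.linspace(-3, 4, 40), np.linspace(0.1, 1.1, 100))
u = burgers(x, t)
u_noisy = u + 0.2 * np.std(u) * np.random.randn(*u.shape)  # 20% white noise
```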
#### Cable equation:
We measured the passive voltage drop across an RC-circuit coupled to a
longitudinal resistance (see Fig. 3A). This voltage drop across the circuit
typically serves to model the passive voltage transmission through a dendrite,
and is described by the so-called cable equation,
$u_{t}=\frac{1}{R_{l}C}u_{xx}-\frac{1}{R_{m}C}u.$ (7)
Here $C$ is the capacitance, $R_{l}$ the longitudinal resistance and $R_{m}$
the membrane resistance. This equation can be discretized by an electric
circuit, consisting of a serial set of $n$ longitudinal resistors, $r_{l}$,
membrane resistors, $r_{m}$, and capacitors, $c_{m}$. Using Ohm’s and
Kirchhoff’s laws, the discretized version of an array of these elements reads,
$\frac{du_{i}}{dt}=\frac{(u_{i-1}-2u_{i}+u_{i+1})}{c_{m}r_{l}}-\frac{u_{i}}{c_{m}r_{m}}.$
(8)
We use a breadboard with structures imitating GMMs, using only standard
electronics hardware ($r_{m}=10k\Omega$, $r_{l}=270\Omega$ and $c_{m}=680mF$).
We applied a voltage profile across the electronics structure using an
arbitrary wave generator (AWG) (mhinstek MHS-2300A) and used a dual channel
oscilloscope (Voltcraft DSO-1062D) to measure the voltage at various positions
along the circuitry. These positions along the circuitry are the spatial
dimension of the cable equation. We varied the number of elements between 5
and 13, mimicking various spatial discretizations. At every sensor, we
collected 2500 data-points. We trained the model discovery algorithm on a
function library up to first order polynomials and second order derivatives.
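A sketch integrating the discretized cable equation (Eq. 8) with the component values quoted above; the boundary handling (driven left end, no-flux right end) and the time span are assumptions.

```python
# Forward simulation of the discretized cable equation (Eq. 8).
import numpy as np
from scipy.integrate import solve_ivp

n, r_l, r_m, c_m, V = 7, 270.0, 10e3, 680e-3, 5.0  # 7 elements, input voltage V

def rhs(t, u):
    left = np.concatenate(([V], u[:-1]))       # element 0 sees the input voltage
    right = np.concatenate((u[1:], [u[-1]]))   # no-flux right boundary
    return (left - 2 * u + right) / (c_m * r_l) - u / (c_m * r_m)

sol = solve_ivp(rhs, (0.0, 1000.0), np.zeros(n),
                t_eval=np.linspace(0.0, 1000.0, 2500))  # 2500 samples per sensor
```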
### A.3 2D Advection Diffusion
#### Experiment:
We consider the 2D advection-diffusion process described by,
$u_{t}=-\nabla\cdot\left(-D\nabla u+\vec{v}u\right).$ (9)
Here $\vec{v}$ is the velocity vector describing the advection and $D$ is the
diffusion coefficient. We measure a time-series of images from an
electrophoresis experiment, tracking the advection-diffusion of a charged
purple loading dye under the influence of a spatially uniform electric field.
We capture a set of 25 images with a resolution of 120x150 pixels and show the
resultant 2D density field for three separate time-frames (in arbitrary units)
in Fig. 2a, by subtracting the reference image (no dye present). The dye
displays a diffusive and advective motion with constant velocity $\vec{v}$,
which is related to the strength of the applied electric field. We use this
data-set to assess the impact of temporal as well as spatial sensor density on
the model discovery task. We used a cross-validated thresholded Lasso sparsity
selection with a threshold of 0.05 and a function library containing all
combinations of up to third order spatial derivative and second order
polynomials in $u$, for a total of 10 terms,
${\Theta}=\left[1,u_{x},u_{y},u_{xx},u_{yy},u_{xy},u_{xxx},u_{yyy},u_{xxy},u_{xyy}\right].$
(10)
## Appendix B Sensitivity to the random sampling
In this Appendix we discuss the sensitivity of the deep learning based
approach, DeepMoD, w.r.t. the set of random samples selected, in particular in
the limit $\Delta x/l_{c}>1$. To show the impact of the random set of
samples drawn, we perform 10 runs of DeepMoD with otherwise identical
parameters (1000 samples drawn, $10\%$ white noise, and otherwise identical
parameters as discussed in Appendix A). In Fig. 4a we show the outcome for
$\Delta x/l_{c}=2>1$, indicating that in 7 of the 10 cases the correct equation
is discovered and in 3 of the 10 cases it is not. In Fig. 4b) we
repeat this as a function of $\Delta x/l_{c}$ and show that the larger the
average distance between the samples becomes, the more pronounced the
discrepancy between discovered models becomes. We have also tested the impact
of the initialization of the neural network on the outcome, with identical sets
of samples and parameters, but this had little impact on the obtained PDE.
Figure 4: a) Coefficients obtained for the Burgers equation with 10$\%$ white
noise for 10 separate runs with 10 sets of randomly sampled data-sets. b)
Fraction of correctly discovered equations over 10 runs (with 10$\%$ white
noise and 1000 samples per run) as function of the average distance between
the samples, $\Delta x$, relative to the smallest characteristic length-scale
$l_{c}$.
11institutetext: Technical University Munich, Munich, Germany 22institutetext:
Helmholtz AI, Neuherberg, Germany 33institutetext: Institute for Computational
Biology, HelmholtzZentrum Munich, Germany 44institutetext: Munich School of
Data Science (MuDS), Munich, Germany 55institutetext: ContextVision AB,
Stockholm, Sweden
# Structure-Preserving Multi-Domain Stain Color Augmentation using Style-
Transfer with Disentangled Representations
Sophia J. Wagner 112244 Nadieh Khalili 55 Raghav Sharma 33 Melanie Boxberg
1144 Carsten Marr 33 Walter de Back 55 Tingying Peng 112244
###### Abstract
In digital pathology, different staining procedures and scanners cause
substantial color variations in whole-slide images (WSIs), especially across
different laboratories. These color shifts result in a poor generalization of
deep learning-based methods from the training domain to external pathology
data. To increase test performance, stain normalization techniques are used to
reduce the variance between training and test domain. Alternatively, color
augmentation can be applied during training leading to a more robust model
without the extra step of color normalization at test time. We propose a novel
color augmentation technique, HistAuGAN, that can simulate a wide variety of
realistic histology stain colors, thus making neural networks stain-invariant
when applied during training. Based on a generative adversarial network (GAN)
for image-to-image translation, our model disentangles the content of the
image, i.e., the morphological tissue structure, from the stain color
attributes. It can be trained on multiple domains and, therefore, learns to
cover different stain colors as well as other domain-specific variations
introduced in the slide preparation and imaging process. We demonstrate that
HistAuGAN outperforms conventional color augmentation techniques on a
classification task on the publicly available dataset Camelyon17 and show that
it is able to mitigate present batch effects. Code and model weights are
available at https://github.com/sophiajw/HistAuGAN.
###### Keywords:
color augmentation, style-transfer, disentangled representations.
## 1 Introduction
Modern cancer diagnosis relies on the expert analysis of tumor specimens and
biopsies. To highlight its structure and morphological properties,
conventionally, the tissue is stained with hematoxylin and eosin (H&E) [5].
The path from the raw tissue to the final digitized image slide, however,
consists of many different processing steps that can introduce variance, such
as tissue fixation duration, the age and composition of the H&E-staining,
or scanner settings. Therefore, histological images show a large variety of
colors, not only differing between laboratories but also within one laboratory
[3].
This variability can lead to poor generalization of algorithms that are
trained on WSIs from a single source. One strategy to account for this is
stain color normalization. Traditionally, this is either done by aligning the
color distribution of the test images to a reference tile in the training
domain [12] or by decomposing the color space of a reference tile into
hematoxylin and eosin components [10, 17]. Then, H&E components of the test
tiles can be aligned while keeping the structure intact.
Recently, the focus shifted toward the application of style-transfer methods
such as cycle-consistent generative adversarial networks, CycleGAN [19], for
stain normalization [16]. However, these models aim to match the target
distribution possibly leading to undesired changes in the morphological
structure [6]. To circumvent this, other approaches propose color space
transformations [14], structural similarity loss functions [9], or residual
learning [4].
We propose a novel histological color transfer model, HistAuGAN, based on a
GAN architecture for image-to-image translation. In contrast to previous
approaches, HistAuGAN disentangles the content of a histological image, i.e.,
the morphological tissue structure, from the stain color attributes, hence
preserving the structure while altering the color. Therefore, HistAuGAN can be
used as a stain augmentation technique during training of a task-specific
convolutional neural network (CNN). We demonstrate that this helps to render
the trained network color-invariant and makes it transferable to external
datasets without an extra normalization step at test time. Applied as an
augmentation technique, HistAuGAN significantly outperforms other color
augmentation techniques on a binary tumor-classification task. Furthermore,
clustering results suggest that HistAuGAN can capture sources of domain shifts
beyond color variations, such as noise and artifacts introduced in the
staining or digitization process, e.g., image compression or blurring.
To the best of our knowledge, HistAuGAN is the first GAN-based color
augmentation technique that generates realistic histological color variations.
## 2 Method
### 2.1 Model architecture
Figure 1: We propose HistAuGAN for structure-preserving multi-domain stain
color augmentation. (a) Histological slides from different laboratories
(domains) exhibit color variations. (b) Model architecture. Here, the domain
information flow is visualized by colored arrows. (c) At inference, HistAuGAN
can be used as an augmentation technique by sampling attribute $z_{a}$ and
domain $d$.
We build our model based on a multi-domain GAN using disentangled
representations, inspired by DRIT++ [8]. Originally designed for
image-to-image translation of natural images using a predefined style, the
architecture is applied here to histological images to disentangle the
morphological tissue structure from the visual appearance. In contrast to previous CycleGAN-based
color normalization methods that use only a single encoder, HistAuGAN is able
to separate two essential image properties from each other as visualized in
Figure 1b: the domain-invariant content encoder $E_{c}$ encodes the
histopathological structure of the tissue, e.g., size and position of the
nuclei, whereas the domain-specific attribute encoder $E_{a}$ learns the
domain-specific color appearance. The model can be trained on data from
multiple domains and thereby captures both inter-laboratory variability
between multiple domains and intra-laboratory variability within each domain
at the same time. Finally, the generator $G$ takes as input a content vector
$z_{c}$, an attribute vector $z_{a}$, and the one-hot-encoded domain vector
$d$ and outputs a simulated histological image. The objective function is
given by
$L_{total}=w_{cc}L^{cc}+w_{c}L^{c}+w_{d}L^{d}+w_{recon}L^{recon}+w_{latent}L^{latent}+w_{KL}L^{KL},$
(1)
where $L^{cc}$ is the cycle-consistency loss, $L^{c}$ and $L^{d}$ are
adversarial losses for the content and the attribute encoder, $L^{recon}$ is
an $L_{1}$-loss for image reconstruction, $L^{latent}$ is an $L_{1}$-loss for
latent space reconstruction, and $L^{KL}$ enforces the latent attribute space
to be distributed according to the standard normal distribution. Please refer
to [8] for a detailed explanation of each loss and the precise hyperparameter
setting.
Figure 2: Overview of the color variation in the dataset and the augmentation
techniques used in this paper using the framed image as example tile.
At inference, using the fixed content encoding of the input image $z_{c}$, we
can sample the attribute vector $z_{a}$ and the one-hot encoded domain vector
$d$ as visualized in Figure 1c. Hence, we can map one image to many different
structure-preserving augmentations. More specifically, we sample a random
color attribute $z_{a}$ from a normal distribution that parametrizes the stain
color variabilities in one domain. Figure 2b shows randomly sampled outcomes
of intra-domain augmentations. Additionally, we can change the one-hot-encoded
domain vector $d$ to project the input image into multiple target domains as
visualized in Figure 2c. In addition to sampling from the training domains, we
can also interpolate between these domains to obtain an even broader variety
of realistic color appearances for histopathological images. Figure 2d
demonstrates this by linearly interpolating the domain from domain RUMC to
domain UMCU according to
$d=(1-t)\cdot d_{\mathrm{RUMC}}+t\cdot d_{\mathrm{UMCU}},\quad\mathrm{for}\
t\in[0,1].$ (2)
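The sketch below summarizes this inference-time augmentation; `E_c`, `G`, and the domain handling are placeholders for the trained HistAuGAN components, not its published interface.

```python
# Structure-preserving augmentation at inference (Figure 1c):
# keep the content code fixed, resample color attribute and domain.
import torch

def augment(image, E_c, G, n_domains=5, t=None):
    z_c = E_c(image)                          # morphological structure, fixed
    z_a = torch.randn(image.size(0), 8)       # color attribute ~ N(0, I)
    d = torch.zeros(image.size(0), n_domains)
    if t is None:
        d[:, torch.randint(n_domains, (1,))] = 1.0  # random discrete domain
    else:
        d[:, 0], d[:, 1] = 1 - t, t           # interpolate two domains (Eq. 2)
    return G(z_c, z_a, d)
```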
### 2.2 Competing methods for stain color augmentation
Most existing stain color transfer methods are used for stain normalization,
i.e., to transfer the stain color of the test domain to that of the training
domain. Recently, it has been shown that simple stain color augmentations,
such as perturbing the HSV color space of the histological images, perform
better and lead to more robust models than traditional and network-based
normalization techniques [15]. Therefore, we compare our HistAuGAN to the HSV
augmentations used in [15]. Besides HSV augmentation, there is a more
elaborate augmentation technique based on the Wasserstein distance between
different domains [11], but that method is much slower than HSV and HistAuGAN
and is thus difficult to use as an on-the-fly augmentation technique.
For a quantitative evaluation of our augmentation technique, we consider the
following augmentation methods:
* •
Geometric augmentations: vertical and horizontal flipping, as well as
$90^{\circ}$, $180^{\circ}$, and $270^{\circ}$ rotations.
* •
HSV color augmentations: geometric augmentations with Gaussian blur and
contrast and brightness perturbations applied with probability 0.25 and 0.5,
respectively. We tried both light and strong color augmentations, as suggested
in [15]. Strong color augmentations can generate unrealistic color
appearances; however, applying hue and saturation jittering with factor 0.5
and probability 0.5, which results in relatively strong color perturbations as
shown in Figure 2e, performed best for us (a sketch of this pipeline follows
the list).
* •
HistAuGAN: geometric augmentations combined with our augmentation technique
applied to half of the images during training. For each image, we randomly
pick a target domain from the training domains and sample a color attribute
vector $z_{a}\in\mathbb{R}^{8}$ from the standard normal distribution.
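A torchvision sketch of the HSV pipeline above; the composition and probabilities follow the text, while the blur kernel size is an assumption and the discrete $90^{\circ}$ rotations are omitted for brevity.

```python
# HSV color augmentation baseline: flips, blur, contrast/brightness,
# and strong hue/saturation jittering (factor 0.5, probability 0.5).
import torchvision.transforms as T

hsv_augment = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomVerticalFlip(),
    T.RandomApply([T.GaussianBlur(kernel_size=5)], p=0.25),
    T.RandomApply([T.ColorJitter(brightness=0.2, contrast=0.2)], p=0.5),
    T.RandomApply([T.ColorJitter(saturation=0.5, hue=0.5)], p=0.5),
])
```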
### 2.3 Evaluation
We evaluate HistAuGAN on three different aspects, in particular, i) whether it
can remove batch effects present in histological images collected from
multiple medical laboratories, ii) how it affects the out-of-domain
generalization of a deep learning model trained for a specific down-stream
task, and iii) how HistAuGAN preserves morphological structure during
augmentation. For ii), we choose a binary classification task of classifying
WSI tiles into the classes tumor versus non-tumor as described in more detail
in Section 3.3. Question iii) is evaluated by asking a pathology expert to
check image similarity before and after augmentation. To explore how
generalizable our model is, we extend the HistAuGAN training data (lymph
nodes) by tiles from unseen tissue and tumor types, in particular, breast
tissue [13].
## 3 Results and Discussion
### 3.1 Dataset
For the quantitative evaluation of HistAuGAN, we choose the publicly available
Camelyon17 dataset [1] that provides WSIs from five different medical centers
(denoted by RUMC, CWZ, UMCU, RST, and LPON) with different scanning properties
and stain colors as shown in Figure 2a. Pixel-wise annotations are given for
50 WSIs in total, 10 from each medical center. To create the training patches,
we first threshold the images with naive RGB thresholding combined with Otsu
thresholding and then patch the tissue regions of each WSI at the highest
resolution based on a grid into tiles of size $512\times 512$ pixels. Each
tile is labeled as tumor if the ratio of pixels annotated as tumor pixels is
larger than 1%, otherwise, it is labeled as non-tumor. The tiled dataset has
an imbalanced class distribution, i.e., overall, 7% of the tiles are labeled
as tumor and the ratio of tumor tiles is in the same order of magnitude across
all medical centers.
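A sketch of this tiling and labeling procedure; the Otsu step stands in for the combined RGB/Otsu thresholding, and the tissue-coverage cutoff is an assumption.

```python
# Grid tiling of a WSI with tissue filtering and the 1% tumor-pixel rule.
import numpy as np
from skimage.filters import threshold_otsu

def tile_slide(wsi, annot, size=512):
    """wsi: (H, W, 3) RGB array; annot: (H, W) binary tumor mask."""
    gray = wsi.mean(axis=-1)
    tissue = gray < threshold_otsu(gray)   # tissue is darker than background
    tiles, labels = [], []
    for y in range(0, wsi.shape[0] - size + 1, size):
        for x in range(0, wsi.shape[1] - size + 1, size):
            if tissue[y:y + size, x:x + size].mean() < 0.5:
                continue                   # skip background tiles
            tiles.append(wsi[y:y + size, x:x + size])
            labels.append(annot[y:y + size, x:x + size].mean() > 0.01)
    return np.array(tiles), np.array(labels)
```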
### 3.2 Evaluation of batch-effect removal
Figure 3: Effect of color augmentation on batch effects in color statistics.
(a-d) UMAP embeddings of color statistics of training data, color-coded by
source domains. (e) The quantification of mixing based on mean local diversity
(mLD, higher is better) suggests HistAuGAN effectively mitigates batch
effects.
To evaluate how color augmentation mitigates batch effects, we quantify the
mixing of images from different medical centers with respect to their color
statistics. A random set of 1,000 image tiles was extracted from the WSIs
from each center and analyzed in terms of the average values of each component
after transformation to various color spaces (RGB, HSV, LAB, HED, grayscale).
To visually observe batch effects, we reduced the dimensionality to 2D using
UMAP [2] and labeled points according to their domain as shown in Figure 3a-d.
To quantify the mixing of different domains, we measured the mean over the
local diversity (mLD) for all $k$-nearest neighborhoods ($k=10$) in the 2D
projection using Shannon’s equitability which varies between 0 for non-mixed
and 1 for perfectly mixed populations (cf. Figure 3e).
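A sketch of the mLD score, assuming the 2D embedding and the per-tile domain labels are given as numpy arrays:

```python
# Mean local diversity: Shannon's equitability over k-NN neighborhoods.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mean_local_diversity(embedding, domains, k=10):
    n_domains = len(np.unique(domains))
    nn = NearestNeighbors(n_neighbors=k).fit(embedding)
    _, idx = nn.kneighbors(embedding)      # (n_tiles, k) neighbor indices
    scores = []
    for neighborhood in domains[idx]:
        _, counts = np.unique(neighborhood, return_counts=True)
        p = counts / counts.sum()
        # Entropy normalized by log(#domains): 0 = non-mixed, 1 = fully mixed.
        scores.append(-(p * np.log(p)).sum() / np.log(n_domains))
    return float(np.mean(scores))
```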
Without color augmentation, we observe a clear batch effect: tiles from
different domains form distinct clusters ($\mathrm{mLD}=0.2$, Figure 3a). HSV
augmentations improve data mixing, but domain-correlated clusters are still
visible ($\mathrm{mLD}=0.48$, Figure 3b) and single domains, e.g. LPON, are
not mixed with other domains. In contrast, HistAuGAN mixes data from multiple
domains (Figure 3c,d) with a high local diversity ($\mathrm{mLD}=0.85$). If
HistAuGAN is used to transfer colors to discrete domains, the distinct domain
clusters are retained, but each cluster contains well-mixed image samples
transferred from all domains (Figure 3c). When HistAuGAN is used to randomly
interpolate between domains, a continuous well-mixed color subspace is
obtained without any clustering structure (Figure 3d).
These results show that HistAuGAN is highly effective in removing batch
effects present in color statistics of images sampled from different medical
centers.
### 3.3 Evaluation on a down-stream classification task
Figure 4: Precision-recall AUC (left) and F1-score (right) of our binary
classification task. The bold bars depict the results on the out-of-domain
centers averaged across all runs. The most-right, pale bars denote the in-
domain test performance of the classifiers trained with geometric
augmentations.
To evaluate the effect of our proposed augmentation method, we train a CNN on
a binary tumor classification task and compare the performance on different
out-of-domain test sets based on the Camelyon17 dataset. Due to the relatively
small size of our dataset, in particular the small number of tumor tiles, we
choose a small CNN, namely, a pre-trained ResNet18 [7], and fine-tune the last
two ResNet-blocks together with the fully-connected layer on our dataset. For
training, we use weighted cross-entropy-loss to rebalance the contribution of
each class, with a learning rate of 1e-5 and an $L_{2}$-regularization of 1e-5
across all runs and for all augmentation techniques. Furthermore, we used
random erasing as regularization on all augmentation techniques [18]. Since
our dataset is highly imbalanced, we report the F1-score of the tumor class in
addition to the area under the precision-recall curve (PR-AUC).
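The sketch below reproduces this training setup; the exact layer freezing and the class-weight value (derived from the roughly 7% tumor fraction) are assumptions consistent with the text.

```python
# ResNet18 fine-tuning: last two blocks + fc, weighted CE, lr 1e-5, L2 1e-5.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)   # tumor vs. non-tumor
for name, p in model.named_parameters():
    p.requires_grad = name.startswith(("layer3", "layer4", "fc"))

class_weights = torch.tensor([1.0, 13.0])       # rebalance ~7% tumor tiles
criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-5, weight_decay=1e-5)
```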
Figure 4 shows the results of the quantitative evaluation of different
augmentation techniques on the binary tumor-classification task. For each
medical center, we trained three classifiers, one for each augmentation type,
and aggregated the results evaluated on the test domains. All experiments were
repeated three times. On both metrics, HistAuGAN shows better performance on
all of the out-of-domain test sets. As visualized in Figure 2, the appearance
of images from medical center UMCU and LPON deviates strongly from the other
centers, explaining their lower scores. In comparison to HSV color
augmentation, HistAuGAN performs better in handling the stain color
discrepancy between training and test domain and is therefore able to generate
a more robust classification model that generalizes better to out-of-domain
test sets. This can also be measured in the standard deviation of the results
across the out-of-domain test centers. For our model, the standard
deviation of the PR-AUC for the tumor class amounts to 0.08, whereas it is
higher for geometric (0.22) and color (0.14) augmentations, respectively, which
demonstrates that our model is more robust to underlying stain color
variations. The right-most group shows the in-domain test results for
geometric augmentations. It can be seen as an upper bound for any stain
normalization technique, and thus shows that HistAuGAN can even outperform
stain normalization techniques on some of the five domains.
### 3.4 Qualitative evaluation by an expert pathologist
We further check the quality of HistAuGAN by having an expert pathologist rate
the structural similarity of original and augmented WSI tiles from the training
set, i.e., the Camelyon17 dataset, and an unseen dataset of breast tissue
[13]. We define three levels of similarity: a) “High similarity”: a
pathologist would find it difficult to distinguish the original tile from the
augmented tile. b) “Moderate similarity”: some structural variations are
observed, but do not affect pathological diagnosis. c) “Low similarity”: the
augmented tiles cannot be used for diagnostic purposes. As shown in Table
6, most of the augmented images do not have a structural modification that
affects diagnosis, and over half of them can even fool an expert pathologist.
It is worth mentioning that HistAuGAN is not trained on any of the breast
cancer images but is still able to transfer its color in a
structure-preserving manner as shown in Figure 6 on a sample tile.
Figure 5: Expert evaluation.
Tissue type | High | Moderate | Low | Total
---|---|---|---|---
Lymph nodes | 10 | 7 | 3 | 20
Breast | 14 | 4 | 2 | 20
Figure 6: HistAuGAN on unseen tissue.
## 4 Conclusion
In summary, we propose a novel GAN-based technique, HistAuGAN, for color augmentation of histopathological images. Based on the disentangled representations of content and style, HistAuGAN is able to change the color appearance of a histological image while preserving its morphological structure. Moreover, HistAuGAN captures both intra-domain and inter-domain color variations. It is able to interpolate between domains and can therefore span a continuous color space covering a large variety of realistic stain colors. When applied as an augmentation technique, HistAuGAN yields a robust downstream classifier that generalizes better to out-of-domain test sets than other color augmentation techniques and therefore renders additional stain normalization steps unnecessary. Finally, HistAuGAN can mitigate batch effects present in histopathological data, which suggests that it is also able to cover domain shifts beyond color variations, such as noise and artifacts introduced by image compression. The code is publicly available at
https://github.com/sophiajw/HistAuGAN together with a model of HistAuGAN
trained on the five medical centers of the Camelyon17 dataset.
## References
* [1] Bandi, P., Geessink, O., Manson, Q., Van Dijk, M., Balkenhol, M., Hermsen, M., Ehteshami Bejnordi, B., Lee, B., Paeng, K., Zhong, A., Li, Q., Zanjani, F.G., Zinger, S., Fukuta, K., Komura, D., Ovtcharov, V., Cheng, S., Zeng, S., Thagaard, J., Dahl, A.B., Lin, H., Chen, H., Jacobsson, L., Hedlund, M., Cetin, M., Halici, E., Jackson, H., Chen, R., Both, F., Franke, J., Kusters-Vandevelde, H., Vreuls, W., Bult, P., van Ginneken, B., van der Laak, J., Litjens, G.: From detection of individual metastases to classification of lymph node status at the patient level: The CAMELYON17 challenge. IEEE Trans. Med. Imaging 38(2), 550–560 (Feb 2019)
* [2] Becht, E., McInnes, L., Healy, J., Dutertre, C.A., Kwok, I.W.H., Ng, L.G., Ginhoux, F., Newell, E.W.: Dimensionality reduction for visualizing single-cell data using UMAP. Nat. Biotechnol. (Dec 2018)
* [3] Bejnordi, B.E., Litjens, G., Timofeeva, N., Otte-Holler, I., Homeyer, A., Karssemeijer, N., van der Laak, J.A.: Stain specific standardization of Whole-Slide histopathological images (2016)
* [4] de Bel, T., Bokhorst, J.M., van der Laak, J., Litjens, G.: Residual cyclegan for robust domain transformation of histopathological tissue slides. Med. Image Anal. 70, 102004 (May 2021)
* [5] Chan, J.K.C.: The wonderful colors of the Hematoxylin–Eosin stain in diagnostic surgical pathology. Int. J. Surg. Pathol. 22(1), 12–32 (Feb 2014)
* [6] Cohen, J.P., Luck, M., Honari, S.: Distribution matching losses can hallucinate features in medical image translation. In: Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G. (eds.) Medical Image Computing and Computer Assisted Intervention - MICCAI 2018 - 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part I. Lecture Notes in Computer Science, vol. 11070, pp. 529–536. Springer (2018). https://doi.org/10.1007/978-3-030-00928-1_60
* [7] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2016)
* [8] Lee, H.Y., Tseng, H.Y., Mao, Q., Huang, J.B., Lu, Y.D., Singh, M., Yang, M.H.: DRIT++: Diverse Image-to-Image translation via disentangled representations (2020)
* [9] Liang, H., Plataniotis, K.N., Li, X.: Stain style transfer of histopathology images via Structure-Preserved generative learning. In: Machine Learning for Medical Image Reconstruction. pp. 153–162. Springer International Publishing (2020)
* [10] Macenko, M., Niethammer, M., Marron, J.S., Borland, D., Woosley, J.T., Guan, X., Schmitt, C., Thomas, N.E.: A method for normalizing histology slides for quantitative analysis. In: 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro. pp. 1107–1110. IEEE (2009)
* [11] Nadeem, S., Hollmann, T., Tannenbaum, A.: Multimarginal wasserstein barycenter for stain normalization and augmentation. Med. Image Comput. Comput. Assist. Interv. 12265, 362–371 (Oct 2020)
* [12] Reinhard, E., Adhikhmin, M., Gooch, B., Shirley, P.: Color transfer between images. IEEE Comput. Graph. Appl. 21(5), 34–41 (Jul 2001)
* [13] Roux, L.: Mitos-atypia-14 grand challenge. https://mitos-atypia-14.grand-challenge.org/, accessed: 2021-03-03
* [14] Shaban, M.T., Baur, C., Navab, N., Albarqouni, S.: StainGAN: Stain style transfer for digital histological images (2019)
* [15] Tellez, D., Litjens, G., Bándi, P., Bulten, W., Bokhorst, J.M., Ciompi, F., van der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Med. Image Anal. 58, 101544 (Dec 2019)
* [16] Tschuchnig, M.E., Oostingh, G.J., Gadermayr, M.: Generative adversarial networks in digital pathology: A survey on trends and future potential. Patterns (N Y) 1(6), 100089 (Sep 2020)
* [17] Vahadane, A., Peng, T., Sethi, A., Albarqouni, S., Wang, L., Baust, M., Steiger, K., Schlitter, A.M., Esposito, I., Navab, N.: Structure-Preserving color normalization and sparse stain separation for histological images. IEEE Trans. Med. Imaging 35(8), 1962–1971 (Aug 2016)
* [18] Zhong, Z., Zheng, L., Kang, G., Li, S., Yang, Y.: Random erasing data augmentation. Proceedings of the AAAI Conference on Artificial Intelligence 34(07), 13001–13008 (Apr 2020). https://doi.org/10.1609/aaai.v34i07.7000, https://ojs.aaai.org/index.php/AAAI/article/view/7000
* [19] Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE international conference on computer vision. pp. 2223–2232 (2017)
$\left\{\begin{aligned}&\frac{p_{a}\cdot p_{b}}{\,z_{a}-z_{b}\,}\,,&&(\mathrm{for}~a\neq b),\\&0\,,&&(\mathrm{for}~a=b).\end{aligned}\right.$ (67jf)
In Eq.(67jea), the color factor $\mathcal{C}[\alpha]$ is decomposed in the DDM basis, which contains $(N\!-\!2)!$ elements and is defined below:
$\mathcal{C}[\alpha]\,=\sum_{e_{1},\cdots,e_{N-3}}f^{a_{1}a_{\alpha(2)}e_{1}}f^{e_{1}a_{\alpha(3)}e_{2}}\cdots f^{e_{N-3}a_{\alpha(N-1)}a_{N}}\,,$ (67jg)
where the 1st and $N$-th labels are fixed in the color ordering $\alpha$, i.e., $\alpha=[1,\alpha(2),\cdots,\alpha(N\!-\!1),N]\in S_{N-2}$. The symbol $\mathrm{PT}[\alpha]$ stands for the Parke-Taylor factor and is given by
$\mathrm{PT}[\alpha]=\frac{1}{\,(z_{1}\!-\!z_{\alpha(2)})\cdots(z_{\alpha(N-1)}\!-\!z_{N})(z_{N}\!-\!z_{1})\,}\,.$ (67jh)
The above CHY formulation works for an even number $N$, whereas for an odd number $N$ the Pfaffian is trivially zero. This is consistent with the conclusions we reached earlier in this subsection. Namely, in the case of $N$-point scattering with even $N$, the leading-order double-copy holds; but for $N$-point scattering with odd $N$, the KK GAET and GRET (which are connected by the double-copy) take the trivial form of $0=0$ at the leading order of the high energy expansion. Note that the scattering equations are the same for both the KK YMS theory and the KK EMS theory. So the above LO double-copy relation (67jeb) holds under the replacement $c_{N}\tilde{a}_{n_{1}\cdots n_{N}}^{2}\longrightarrow\tilde{\beta}_{n_{1}\cdots n_{N}}$. Using Eqs.(67jd)-(67je) and the fundamental BCJ relations, we derive the extended KLT-type double-copy formula for the $N$-point LO scattering amplitude of gravitational KK Goldstone bosons:
$\widetilde{\mathcal{M}}_{0}^{\mathrm{dc}}\big[\phi_{n_{1}},\cdots,\phi_{n_{N}}\big]=-\,c_{N}\tilde{a}_{n_{1}\cdots n_{N}}^{2}\left(\frac{\,\kappa\,}{4}\right)^{\!N-2}\sum_{\{\alpha,\beta\}\in S_{N-3}}\widetilde{\mathcal{T}}_{0}\big[A^{a_{\alpha(i)}n_{\alpha(i)}}_{5}\big]\,\mathcal{K}[\alpha|\beta]\,\widetilde{\mathcal{T}}_{0}\big[A^{a_{\beta(i)}n_{\beta(i)}}_{5}\big]\,,$ (67ji)
where $\mathcal{K}[\alpha|\beta]$ is the KLT kernel. Finally, we also note that the CHY double-copy construction of the $N$-point LO scattering amplitude of the vector-type gravitational KK Goldstone bosons $\widetilde{\mathcal{M}}_{0}^{\mathrm{dc}}\big[\mathcal{V}_{n_{1}}^{\pm 1},\cdots,\mathcal{V}_{n_{N}}^{\pm 1}\big]$ can be similarly done.
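As a quick numerical illustration of the parity statement above, note that any antisymmetric matrix of odd dimension has vanishing determinant, and hence vanishing Pfaffian since $\mathrm{Pf}(A)^{2}=\det A$. The sketch below builds the antisymmetric $A$-block with entries of the form in Eq.(67jf) from random stand-in data (the Euclidean dot product is only a placeholder for $p_{a}\cdot p_{b}$):

```python
# Sketch: the antisymmetric A-block with entries p_a.p_b/(z_a - z_b) from
# Eq.(67jf) has det A = 0 (hence Pf(A) = 0) for odd dimension N.
import numpy as np

rng = np.random.default_rng(0)
N = 5                                   # odd number of external states
z = rng.normal(size=N)                  # stand-in punctures z_a
p = rng.normal(size=(N, 4))             # stand-in momenta p_a (placeholder dot)

A = np.zeros((N, N))
for a in range(N):
    for b in range(N):
        if a != b:
            A[a, b] = (p[a] @ p[b]) / (z[a] - z[b])

print(np.isclose(np.linalg.det(A), 0.0))   # True: odd antisymmetric => det = 0
```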
### 4.4 Prescription for Warped Double-Copy Construction at Leading Order
Our previous work Li:2022rel proved that for the compactified KK
gauge/gravity theories, only the toroidal compactification of flat extra
dimensions can satisfy the mass spectrum condition and directly realize
extended double-copy of massive KK gauge/gravity scattering amplitudes. It was
also proven Li:2021yfk that under the toroidal compactification of flat extra
dimensions such massive KK double-copies can be derived as the field theory
limit of the massive KLT relations of KK open/closed string amplitudes. But we
note that the compactified warped 5d gauge/gravity theories with orbifold
$S^{1}\\!/\mathbb{Z}_{2}$ (such as the RS1 model RS1 ) clearly do not meet
this criterion. This is because such warped KK gauge/gravity theories have highly nonlinear mass spectra, the mass spectrum of KK gauge bosons differs from that of the KK gravitons due to the 5d warped metric, and the coupling coefficients of the KK gauge bosons and of the KK gravitons also differ significantly. Moreover, the $S^{1}/\mathbb{Z}_{2}$ orbifold compactification spoils the KK number conservation, and the KK scattering amplitudes do not exhibit a single-pole structure in each channel. Nevertheless, as demonstrated in Sections 4.1-4.2, for tree-level KK gauge/gravity scattering amplitudes, we find proper ways (or restrictions) to evade these problems. These include: (i) the 3-point double-copied KK graviton scattering amplitudes (from those of the KK gauge bosons) exhibit exactly the same kinematic structure as the original 3-point KK graviton amplitudes even without high energy expansion, except that we need to set up a proper prescription for the correspondence between the double-copied trilinear KK gauge couplings and the original trilinear KK graviton couplings, whereas under such a prescription the double-copy works for the corresponding 3-point gravitational KK Goldstone amplitudes only at the leading order of the high energy expansion (we found previously Li:2022rel that for the gauge/gravity KK Goldstone scattering amplitudes with toroidal compactification of flat extra dimensions, the double-copy construction also works only at the leading order of the high energy expansion); (ii) at the leading-
order of high energy expansion, the numerators of 4-point scattering
amplitudes of KK gauge bosons and of KK Goldstone bosons obey the kinematic
Jacobi identities; (iii) the double-copied leading-order KK graviton scattering amplitudes (constructed from those of the KK gauge bosons) exhibit exactly the same kinematic structure as the original leading-order amplitudes of KK gravitons, except that we need to set up a proper prescription for the correspondence between the double-copied KK gauge couplings and the original KK graviton couplings.
Based upon the above observations and the double-copy analysis shown in Sections 4.1-4.2, we summarize the prescriptions for successful double-copy constructions of the 3-point and 4-point KK graviton scattering amplitudes in the warped 5d gauge/gravity theories under the orbifold compactification of $S^{1}\!/\mathbb{Z}_{2}\,$:
1).
For the present double-copy construction of $N$-point warped KK graviton scattering amplitudes from the corresponding $N$-point warped KK gauge boson scattering amplitudes, we first replace the 4d gauge coupling $g$ by the 4d gravitational coupling $\kappa$ as follows:
$g^{N-2}\;\longrightarrow\;-\left(\frac{\kappa}{4}\right)^{\!N-2}.$ (67jj)
For instance, this gives $g\rightarrow-\kappa/4$ for $N=3$ and $g^{2}\rightarrow-\kappa^{2}/16$ for $N=4$. The double-copy replacement (67jj) is similar to what we established for the compactified flat 5d gauge/gravity theories Li:2021yfk Li:2022rel . The difference is that in the present analysis, due to an overall factor of $\text{i}$ in the $N$-point gauge amplitude (for odd $N$), the corresponding double-copied gravitational amplitude has an overall factor of $-1$. Thus, the factor $(-)^{N+1}$ in Refs.Li:2021yfk Li:2022rel is replaced by $-1$ in this study.
2).
Then, for the scattering amplitudes of KK gauge bosons (KK Goldstone bosons), we apply the extended massive color-kinematics (CK) duality and make the following group factor replacements as in Eq.(67gb) for the 3-point KK amplitudes, or as in Eqs.(67gw)(67hj) for the 4-point KK amplitudes:
$\begin{array}{rlrll}f^{abc}&\longrightarrow\,\mathcal{N}[\{\epsilon_{j}\}]\,,&f^{abc}&\longrightarrow\,\widetilde{\mathcal{N}}[\epsilon_{3}]\,,&\text{(3-point amplitudes),}\\ \mathcal{C}_{j}&\longrightarrow\,\mathcal{N}_{j}^{0}\,,&\mathcal{C}_{j}&\longrightarrow\,\widetilde{\mathcal{N}}_{j}^{0}\,,&\text{(4-point elastic amplitudes),}\\ \mathcal{C}_{j}&\longrightarrow\,\mathcal{N}_{j}^{\mathrm{in}\,0}\,,&\mathcal{C}_{j}&\longrightarrow\,\widetilde{\mathcal{N}}_{j}^{\mathrm{in}\,0}\,,&\text{(4-point inelastic amplitudes).}\end{array}$ (67jn)
3).
At each KK level-$n$, we replace the mass-eigenvalue $M_{n}$ of the KK gauge bosons [determined by Eq.(20)] by the mass-eigenvalue $\mathbb{M}_{n}$ of the KK gravitons [determined by Eq.(57)], i.e., $M_{n}\rightarrow\mathbb{M}_{n}\,$.
4).
At each KK level-$n$, we further replace the involved KK gauge couplings by the corresponding KK gravitational couplings as in Eq.(67gk) for the 3-point KK amplitudes and as in Eqs.(67hd)(67hp)(67id) for the 4-point KK amplitudes:
$\displaystyle c_{3}a_{n_{1}n_{2}n_{3}}^{2}\rightarrow\alpha_{n_{1}n_{2}n_{3}},\quad c_{3}a_{n_{1}n_{2}n_{3}}\tilde{a}_{n_{1}n_{2}n_{3}}\rightarrow\tilde{\alpha}_{n_{1}n_{2}n_{3}},\quad c_{3}\tilde{a}_{n_{1}n_{2}n_{3}}^{2}\rightarrow\tilde{\beta}_{n_{1}n_{2}n_{3}},$
$\displaystyle c_{4}\tilde{a}_{nnnn}^{2}\rightarrow\tilde{\beta}_{nnnn}\,,\quad c_{4}^{\mathrm{in}}\tilde{a}_{nnmm}^{2}\rightarrow\tilde{\beta}_{nnmm}\,,\quad c_{4}^{\mathrm{in}}\,a_{0nn}^{4}\rightarrow\alpha_{0nn}^{2}\,,$
$\displaystyle c_{4}^{\mathrm{in}}a_{000}^{2}a_{nn0}^{2}\rightarrow\alpha_{000}\alpha_{nn0}\,,\quad c_{4}^{\mathrm{in}}a_{000}^{2}a_{nn0}\tilde{a}_{nn0}\rightarrow\alpha_{000}\tilde{\alpha}_{nn0}\,,$ (67jo)
where the overall coefficient $c_{3}$ or $c_{4}$ is the relevant normalization factor and its determination is given in Sections 4.1-4.2. Finally, in Section 4.3, we studied the double-copy construction for the $N$-point ($N\geqslant 4$) LO KK Goldstone boson amplitudes. Thus, for the $N$-point LO double-copy, we impose the following correspondence (replacement) between the KK gauge and gravity coupling coefficients (a toy symbolic illustration of these replacement rules is sketched below):
$\displaystyle c_{N}\,\llbracket\,\cdots\mathsf{f}_{n_{k}}\cdots\tilde{\mathsf{f}}_{n_{k'}}\cdots\,\rrbracket\,\llbracket\,\cdots\tilde{\mathsf{f}}_{n_{k}}\cdots\mathsf{f}_{n_{k'}}\cdots\,\rrbracket\;\longrightarrow\;\tilde{\alpha}_{n_{1}\cdots\,n_{N}}\,,$ (67jpa)
$\displaystyle c_{N}\,\tilde{a}_{n_{1}\cdots\,n_{N}}^{2}\;\longrightarrow\;\tilde{\beta}_{n_{1}\cdots\,n_{N}}\,.$ (67jpb)
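To make the mechanics of these replacements explicit, the following sympy sketch applies rules 1), 3), and 4) to a toy symbolic 3-point structure; all symbols and the toy amplitude are placeholders rather than the actual KK amplitudes:

```python
# Toy symbolic illustration of the double-copy replacement rules (N = 3):
# rule 1): g -> -kappa/4;  rule 3): M_n -> MM_n;  rule 4): c3*a^2 -> alpha.
# The "amplitude" below is a placeholder, not an actual KK amplitude.
import sympy as sp

g, kappa, M_n, MM_n, c3, a, alpha, s = sp.symbols(
    'g kappa M_n MM_n c3 a alpha s', positive=True)

T_gauge = g * c3 * a**2 * sp.sqrt(s - M_n**2)   # toy gauge-side structure

M_gravity = (T_gauge.subs(g, -sp.Rational(1, 4) * kappa)
                    .subs(M_n, MM_n)
                    .subs(c3 * a**2, alpha))

print(M_gravity)   # -> -alpha*kappa*sqrt(s - MM_n**2)/4
```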
## 5 Conclusions
In this work, we conducted comprehensive analyses on the structure of
scattering amplitudes of massive Kaluza-Klein (KK) states in the compactified
5-dimensional warped gauge and gravity theories. We presented systematic
formulations of the gauge theory equivalence theorem (GAET) and the
gravitational equivalence theorem (GRET) within the $R_{\xi}$ gauge and up to
loop level. Under the high energy expansion, the GAET quantitatively connects
the scattering amplitudes of longitudinal KK gauge bosons to those of the corresponding KK Goldstone bosons, whereas the GRET connects the scattering amplitudes of massive KK gravitons of helicity-0 (or helicity-1) to those of the corresponding gravitational KK Goldstone bosons. A key point of our work
is to take the GAET of KK Yang-Mills gauge theories as the truly fundamental
ET formulation, from which the GRET of the corresponding KK gravity theories
can be reconstructed by using the double-copy at the leading order (LO) of
high energy expansion. We systematically studied the double-copy construction
of 3-point and 4-point KK gauge/gravity scattering amplitudes at tree level.
We proved that under proper color-kinematics correspondence and gauge-gravity
coupling correspondence, the double-copy of 3-point physical KK gauge-
boson/graviton scattering amplitudes can be realized without high energy
expansion, and for the corresponding 3-point gauge/gravity KK Goldstone
scattering amplitudes the double-copy can be achieved at the leading order
(LO) of high energy expansion. Moreover, we demonstrated that the double-copy
can be realized for 4-point massive scattering amplitudes of KK gauge bosons
and of KK gravitons (as well as their corresponding KK Goldstone bosons) to
the leading order of high energy expansion. In addition, we can reduce the
validity of 4-point-level GAET and GRET to the validity of 3-point-level GAET
and GRET. Hence, the GAET and GRET formulations at the level of 3-point KK
scattering amplitudes are the most fundamental formulations, from which the
GAET and GRET formulations at 4-point-level of KK scattering amplitudes can be
inferred. We further derived the GRET from GAET by LO double-copy
constructions for the 3-point and 4-point KK scattering amplitudes. A more elaborate summary of our results and conclusions in each section is presented below, followed by a schematic summary in the last paragraph of this section together with Fig. 2.
In Section 2, we proved the formulations of the GAET and GRET within the
$R_{\xi}$ gauge and up to loop level. The GAET formulation was presented in
Eqs.(36)-(37) and Eq.(38), whereas the GRET formulations were given in
Eqs.(67cu)(67cy) for KK gravitons with helicity-0 (called type-I) and in Eqs.(67cf)(67ch) for KK gravitons with helicity-1 (called type-II). In
essence, the GAET and GRET reflect the geometric “Higgs” mechanism through the
KK compactifications, which hold not only for the compactified flat extra
dimensions Chivukula:2001esy ; Chivukula:2002ej Hang:2021fmp ; Hang:2022rjp ,
but also for the compactified warped extra dimensions as proved in Sections
2.2-2.3. They determine the high energy behaviors of massive KK graviton
scattering amplitudes and ensure the interlacing cancellations among contributions of different KK levels, as happens in the scattering amplitudes of KK gauge bosons and of KK gravitons.
In Section 3, we analyzed the structure of 3-point and 4-point massive
scattering amplitudes of KK gauge bosons and of KK gravitons (as well as the
corresponding scattering amplitudes of KK Goldstone bosons), including the
interconnections between the 3-point and 4-point KK amplitudes. In Section
3.1, we explicitly proved the warped GAET and GRET for the fundamental 3-point
KK gauge/gravity scattering amplitudes. We found that the nontrivial
realization of GAET is given by the 3-point KK gauge boson amplitude with two
longitudinal external states and one transverse external state (of $LLT$
type), as shown in Eq.(67df). We proved that this 3-point-level GAET (67df) holds provided the condition (67dg) is satisfied, which is directly proved in Eq.(67mi) of Appendix D. Then, we computed the 3-point KK graviton amplitude of helicities $(\pm 1,\pm 1,\pm 2)$ (we derived the most general 3-point scattering amplitude of KK gravitons in Eq.(C.2) of Appendix C) and its corresponding KK Goldstone amplitude in Eqs.(67do)-(67do) and Eqs.(67dp)-(67dp). The 3-point-level GRET (67dq) imposes the condition (67dr), which we proved in Eq.(67mj) of Appendix D. We further computed the 3-point KK graviton amplitude of helicities $(0,\,0,\,\pm 2)$ and its corresponding KK Goldstone amplitude in Eqs.(67dua)-(67dub). The 3-point-level GRET (67dv) leads to the nontrivial condition (67dw), which we proved in Eq.(67mw) of Appendix D. We also computed the mixed 3-point KK graviton amplitude of helicities $(\pm 1,\pm 1,\,0)$ and its corresponding KK Goldstone amplitude in Eq.(67dy). The GRET (67ea) imposes the condition (67eb), which was proved in Eq.(67my) of Appendix D.
In Section 3.2, we further demonstrated that the validity of the warped GAET
and GRET for 4-point KK scattering amplitudes can be effectively reduced to
the validity of these theorems for the 3-point KK scattering amplitudes. For
the 4-point elastic and inelastic scattering amplitudes of KK gauge bosons and
KK Goldstone bosons, we explicitly established the warped GAET as in
Eqs.(67eq), (67ez), and (67fh) respectively. We proved that the validity of
the 4-point-level GAET in these cases is ensured by the validity of the
fundamental 3-point-level GAET (67df). Then, we analyzed the 4-point
scattering amplitudes of KK gravitons and gravitational KK Goldstone bosons.
We explicitly established the warped GRET as in Eq.(67fq) for the 4-body
elastic scattering channel. The validity of the 4-point-level GRET (67fq)
relies on the sum rule condition (67fr), while the proof of Eq.(67fr) requires
the sum rule condition (67dx) to play the key role, where the condition (67dx)
just ensures the validity of the 3-point-level GRET (67dv) in the case of
$(n_{1},n_{2},n_{3})=(n,\,n,\,j)$. This shows that the validity of the 4-point-level GRET (67fq) is reduced to the validity of the 3-point-level GRET (67dv). We further computed the 4-point inelastic scattering amplitudes of KK gravitons and of KK Goldstone bosons as in Eqs.(67fwa)-(67fwb). Under the high energy expansion, we found that the leading-order amplitudes (67fxa)-(67fxb) are simple enough in this case; thus, we explicitly established the 4-point-level GRET (67fy) without the need of an additional sum rule condition.
In Section 4, we studied the double-copy construction of the massive
gravitational KK scattering amplitudes from the corresponding massive KK gauge
scattering amplitudes for the warped 5d gauge and gravity theories with the
orbifold compactification of $S^{1}\\!/\mathbb{Z}_{2}\hskip 0.85358pt$. This
is nontrivial and challenging since it was proved Li:2022rel that the direct
construction of double-copy for $N$-point massive gauge/gravity KK scattering
amplitudes (with $N\\!\\!\geqslant\\!4$) can directly work out only for
toroidal compactifications with flat extra dimensions. Nevertheless, we could
realize the double-copy construction for warped gauge/gravity theories with
proper restrictions and prescriptions so as to evade the previous conditions
Li:2022rel . In Section 4.1, we newly proved that the double-copy can be
constructed for the 3-point full scattering amplitudes of KK gravitons at tree
level for warped gauge/gravity theories. We set up the 3-point color-
kinematics (CK) correspondence and the gauge/gravity coupling correspondence
as in Eq.(67gb), and the KK mass replacement
$M_{n}\rightarrow\mathbb{M}_{n}$ at each KK level. With these, we first presented the general 3-point double-copy formulas (with any polarization tensors of physical KK graviton states) as in Eqs.(4.1)-(4.1).
Then, we explicitly constructed the 3-point KK graviton amplitudes with
helicities $(\pm 1,\pm 1,\pm 2)$ and $(0,\hskip 0.85358pt0,\hskip 0.85358pt\pm
2)$ in Eq.(67gh) and Eq.(67gi) respectively. We further derived their
corresponding gravitational KK Goldstone boson amplitudes in
Eqs.(67gja)-(67gjb). The required conversions of gauge-gravity coupling
constants are given in Eq.(67gk). With these we established successful double-
copy constructions of the GRET (67gpa) and (67gpb) (in warped KK gravity
theory) from the GAET (67df) (in the warped KK gauge theory) at the level of
3-point KK scattering amplitudes.
In Section 4.2, we demonstrated that the double-copy construction can be
achieved for the 4-point KK gauge/gravity scattering amplitudes at the leading order (LO) of the high energy expansion. For the 4-point elastic
scattering $(n,n)\\!\rightarrow\\!(n,n)\hskip 0.85358pt$, the LO amplitudes of
KK gauge bosons and of KK Goldstone bosons have their effective numerators
connected by the generalized gauge transformations (67gu) and obey the
kinematic Jacobi identities (67gv). The double-copy of these two LO amplitudes
gives the LO amplitudes of KK gravitons and of the corresponding gravitational
KK Goldstone bosons as in Eqs.(67gx) and (67gz). Using the coupling conversion
(67hd), we derived the double-copied LO gravitational KK amplitudes (67he), which agree with the same amplitudes (67ha) obtained by the explicit Feynman-diagram approach. In parallel, we studied the double-copy constructions for the LO
inelastic KK gauge/gravity scattering amplitudes, including the processes
$(n,n)\\!\rightarrow\\!(m,m)$ (with $n\\!\neq\\!m$),
$(n,m)\\!\rightarrow\\!(\ell,q)$ (with
$n\\!\neq\\!m\\!\neq\\!\ell\\!\neq\\!q$), and $(0,0)\\!\rightarrow\\!(n,n)$
(with $n\\!>\\!0$). We found that their effective numerators satisfy the
kinematic Jacobi identity respectively, as shown in Eqs.(67hi), (67hu)-(67hv),
and so on. For these inelastic processes, we presented the double-copied
inelastic amplitudes of KK gravitons and of gravitational KK Goldstone bosons
as in Eqs.(67hq), (4.2), and (67ia)(67id). In Section 4.3, we further
established that this LO double-copy construction can be extended to the
general $N$-point KK scattering amplitudes with $N\\!\geqslant\\!4\hskip
0.85358pt$. In Section 4.4, we summarized a set of well-defined prescriptions
for the successful tree-level double-copy constructions in the warped massive
KK gauge/gravity theories, which include the double-copy constructions for
3-point full KK gauge/gravity amplitudes and the double-copy constructions for
the 4-point leading-order KK gauge/gravity amplitudes.
Figure 2: Schematic Summary of the present analyses: Equivalence Theorem and
Double-Copy Correspondences from 3-point scattering amplitudes to 4-point
scattering amplitudes and from massive KK gauge scattering amplitudes to
massive KK gravitational scattering amplitudes at the leading order of high
energy expansion.
Finally, we present a schematic summary of the present analyses as in Fig. 2.
In this figure, we start from the horizontal bottom row in which all entries
describe the basic 3-point KK scattering amplitudes. From the left to right,
the long equality sign linking the first two entries gives the “GAET(3)” which
connects the 3-point LO longitudinal KK gauge boson amplitude
$\mathcal{T}_{0}^{(3)}[A_{L}^{n}]$ to the corresponding LO KK Goldstone boson
amplitude $\widetilde{\mathcal{T}}_{0}^{(3)}[A_{5}^{n}]$; then the long
equality sign linking the last two entries gives the “GRET(3)” which connects
the 3-point LO longitudinal KK graviton amplitude
$\mathcal{M}_{0}^{(3)}[h_{L}^{n}]$ to the corresponding LO gravitational KK
Goldstone boson amplitude $\widetilde{\mathcal{M}}_{0}^{(3)}[\phi_{n}]$; in
the middle, the horizontal arrow indicates the double-copy “DC3” which
constructs the 3-point LO longitudinal KK graviton (KK Goldstone) amplitude
from the 3-point LO longitudinal KK gauge boson (KK Goldstone) amplitude.
After this, we see that all the entries and equality signs (or arrows) in the top row play the same roles as those in the bottom row, except that all entries in the top row deal with the 4-point KK gauge/gravity scattering
amplitudes. Finally, the vertical arrows from bottom to top indicate that the 4-point-level “GAET(4)” and “GRET(4)” can be reduced to (reconstructed from) the fundamental “GAET(3)” and “GRET(3)” for the basic 3-point KK gauge (gravity) amplitudes; furthermore, we can construct the GRET(3) from the GAET(3) through the double-copy.
Acknowledgments This research was supported in part by the National NSF of
China under grants 12175136 and 11835005. YH is supported in part by the
Northwestern University Amplitudes and Insight group, the Department of
Physics and Astronomy, and Weinberg College.
Appendix
## Appendix A Kinematics of KK Particle Scattering
In this Appendix, we present the kinematics of the three- and four-particle KK scattering processes in the (3+1)d spacetime. We choose the 4d Minkowski metric tensor $\eta^{\mu\nu}=\eta_{\mu\nu}=\text{diag}(-1,1,1,1)$.
For the three-particle KK scattering, we define the 4-momenta of the external particles as follows:
$\begin{aligned}p_{1}^{\mu}&=(E_{1},\,ks_{\theta},\,0,\,kc_{\theta})\,,\qquad&&E_{1}=\sqrt{M_{1}^{2}+k^{2}\,}\,,\\ p_{2}^{\mu}&=(E_{2},\,-ks_{\theta},\,0,\,kc_{\theta})\,,\qquad&&E_{2}=\sqrt{M_{2}^{2}+k^{2}\,}\,,\\ p_{3}^{\mu}&=-(E_{3},\,0,\,0,\,2kc_{\theta})\,,\qquad&&E_{3}=\sqrt{M_{3}^{2}+4k^{2}c_{\theta}^{2}\,}\,,\end{aligned}$ (67jq)
where $k=|\vec{p}\,|$, $(s_{\theta},\,c_{\theta})=(\sin\theta,\,\cos\theta)$, $p_{j}^{2}=-M_{j}^{2}$ (with $j=1,2,3$), and all momenta are outgoing by convention.
Using the energy conservation condition $E_{1}+E_{2}=E_{3}$, we can solve for the magnitude of the 3-momentum $k=|\vec{p}\,|$ as a function of the scattering angle $\theta$:
$k=\bigg\{\frac{1}{2}\sqrt{M_{3}^{4}+4\cos^{2}\!\theta\,\big[M_{1}^{4}+M_{2}^{4}-(M_{1}^{2}+M_{2}^{2})M_{3}^{2}+2M_{1}^{2}M_{2}^{2}\cos 2\theta\,\big]\,}+\frac{\,\csc 2\theta\,}{2}\Big[M_{3}^{2}\cot 2\theta-(M_{1}^{2}+M_{2}^{2})\cot\theta\Big]\bigg\}^{\!\frac{1}{2}}\!.$ (67jr)
Alternatively, we can express $\theta$ as a function of $k$:
$\cos\theta=\frac{1}{\,2k\,}\left[2k^{2}+M_{1}^{2}+M_{2}^{2}+2\sqrt{(k^{2}+M_{1}^{2})(k^{2}+M_{2}^{2})\,}-M_{3}^{2}\right]^{\!\frac{1}{2}}\!.$ (67js)
In the high energy limit, we have $k\rightarrow\infty$ and $\theta\rightarrow 0$. Thus, we can expand $k$ and $\theta$ as follows:
$\displaystyle k=\frac{\,M_{3}^{2}-2(M_{1}^{2}+M_{2}^{2})\,}{4\sin^{2}\!\theta}+\frac{\,(M_{1}^{2}-M_{2}^{2})^{2}}{\,4\big[M_{3}^{2}-2(M_{1}^{2}+M_{2}^{2})\big]\,}+O(\theta^{2})\,,$ (67jta)
$\displaystyle\cos\theta=1-\frac{1}{\,8k^{2}\,}\big[M_{3}^{2}-2(M_{1}^{2}+M_{2}^{2})\big]+O(k^{-4})\,,$ (67jtb)
$\displaystyle\sin\theta=\frac{1}{2k}\sqrt{M_{3}^{2}-2(M_{1}^{2}+M_{2}^{2})\,}+O(k^{-4})\,.$ (67jtc)
From the above formulas, we see that real solutions for $k$ and $\theta$ require $M_{3}\geqslant\sqrt{2(M_{1}^{2}+M_{2}^{2})\,}$.
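These relations are straightforward to verify numerically; a small sketch (with illustrative mass and momentum values) checks Eq.(67js) against the energy conservation condition it was derived from:

```python
# Numerical cross-check of Eq.(67js): for chosen masses and momentum k
# (illustrative values), recover theta and verify E1 + E2 = E3.
import numpy as np

M1, M2, M3, k = 1.0, 1.5, 4.0, 10.0      # note M3 > sqrt(2*(M1^2 + M2^2))

# Eq.(67js): cos(theta) as a function of k
cos_t = np.sqrt(2*k**2 + M1**2 + M2**2
                + 2*np.sqrt((k**2 + M1**2)*(k**2 + M2**2)) - M3**2) / (2*k)

E1 = np.sqrt(M1**2 + k**2)
E2 = np.sqrt(M2**2 + k**2)
E3 = np.sqrt(M3**2 + 4*k**2*cos_t**2)    # from p3 in Eq.(67jq)
print(np.degrees(np.arccos(cos_t)))      # scattering angle in degrees (~8.9)
print(np.isclose(E1 + E2, E3))           # True: E1 + E2 = E3 holds
```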
For the four-body scattering of KK states $X_{1}X_{2}\rightarrow X_{3}X_{4}$, the 4-momenta in the center-of-mass frame are defined as follows:
$\begin{aligned}p_{1}^{\mu}&=-(E_{1},0,0,k)\,,\qquad&&p_{2}^{\mu}=-(E_{2},0,0,-k)\,,\\ p_{3}^{\mu}&=(E_{3},\,k's_{\theta},\,0,\,k'c_{\theta})\,,\qquad&&p_{4}^{\mu}=(E_{4},\,-k's_{\theta},\,0,\,-k'c_{\theta})\,,\end{aligned}$ (67ju)
where we define the following Mandelstam variables:
$s=-(p_{1}+p_{2})^{2}\,,\qquad t=-(p_{1}+p_{4})^{2}\,,\qquad u=-(p_{1}+p_{3})^{2}\,,$ (67jv)
from which we have $s+t+u=\sum_{j=1}^{4}M_{j}^{2}$. In addition, the momenta $k$ and $k'$ in Eq.(67ju) are given by
$\begin{aligned}k&=\frac{1}{\,2\sqrt{s\,}\,}\Big(\big[s-(M_{1}+M_{2})^{2}\big]\big[s-(M_{1}-M_{2})^{2}\big]\Big)^{\!\frac{1}{2}},\\ k'&=\frac{1}{\,2\sqrt{s\,}\,}\Big(\big[s-(M_{3}+M_{4})^{2}\big]\big[s-(M_{3}-M_{4})^{2}\big]\Big)^{\!\frac{1}{2}}.\end{aligned}$ (67jw)
We further define the massless Mandelstam variables as $(s_{0},t_{0},u_{0})=(s,t,u)|_{M_{j}=0}$ (with $j=1,2,3,4$), and thus we have:
$s_{0}=4k^{2}\,,\qquad t_{0}=-\frac{\,s_{0}\,}{2}(1+c_{\theta})\,,\qquad u_{0}=-\frac{\,s_{0}\,}{2}(1-c_{\theta})\,,$ (67jx)
where the sum of these Mandelstam variables obeys $s_{0}+t_{0}+u_{0}=0$.
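A short numerical sketch (with illustrative values for the elastic case $M_{j}=M$) verifies the relation $s+t+u=\sum_{j}M_{j}^{2}$ for the momentum assignments of Eq.(67ju):

```python
# Verify s + t + u = sum_j M_j^2 for the all-outgoing momenta of Eq.(67ju),
# with metric diag(-1,1,1,1); elastic case M_j = M with illustrative values.
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
def sq(p):                       # p^2 = eta_{mu nu} p^mu p^nu
    return p @ eta @ p

M, k, theta = 1.0, 3.0, 0.7
E = np.sqrt(M**2 + k**2)
st, ct = np.sin(theta), np.cos(theta)

p1 = -np.array([E, 0.0, 0.0,  k])
p2 = -np.array([E, 0.0, 0.0, -k])
p3 =  np.array([E,  k*st, 0.0,  k*ct])   # k' = k in the elastic case
p4 =  np.array([E, -k*st, 0.0, -k*ct])

s = -sq(p1 + p2); t = -sq(p1 + p4); u = -sq(p1 + p3)
print(np.isclose(s + t + u, 4 * M**2))   # True
```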
A massive KK graviton $h_{n}^{\mu\nu}$ in 4d has 5 physical helicity states, whose polarization tensors are represented by
$\varepsilon^{\mu\nu}_{\pm 2}=\epsilon_{\pm}^{\mu}\epsilon_{\pm}^{\nu}\,,\quad\varepsilon_{\pm 1}^{\mu\nu}=\frac{1}{\sqrt{2\,}\,}(\epsilon_{\pm}^{\mu}\epsilon_{L}^{\nu}+\epsilon_{L}^{\mu}\epsilon_{\pm}^{\nu})\,,\quad\varepsilon^{\mu\nu}_{L}=\frac{1}{\sqrt{6\,}\,}(\epsilon^{\mu}_{+}\epsilon^{\nu}_{-}+\epsilon^{\mu}_{-}\epsilon^{\nu}_{+}+2\,\epsilon^{\mu}_{L}\epsilon^{\nu}_{L})\,,$ (67jy)
where $\epsilon_{\pm}^{\mu}$ and $\epsilon_{L}^{\mu}$ are the spin-1 polarization vectors:
$\epsilon_{\pm}^{\mu}=\pm\frac{e^{\mp\text{i}\phi}}{\sqrt{2\,}\,}(0,\,c_{\theta}c_{\phi}\pm\text{i}\,s_{\phi},\,c_{\theta}s_{\phi}\mp\text{i}\,c_{\phi},\,-s_{\theta})\,,\quad\epsilon_{L}^{\mu}=\frac{1}{\,\mathbb{M}_{n}\,}(k,\,E_{n}s_{\theta}c_{\phi},\,E_{n}s_{\theta}s_{\phi},\,E_{n}c_{\theta})\,.$ (67jz)
In the above, the KK graviton $h_{n}^{\mu\nu}$ moves in an arbitrary direction with polar and azimuthal angles $(\theta,\,\phi)$. For instance, considering the 4-body elastic scattering at KK level-$n$, we have $E_{n_{j}}=E$ and $\mathbb{M}_{n_{j}}=M$. Then, we can define the following polarization vectors in the center-of-mass frame:
$\begin{aligned}\epsilon^{\mu}_{1,\pm}&=\frac{1}{\sqrt{2\,}\,}(0,\mp 1,\text{i},0)\,,\qquad&\epsilon^{\mu}_{1,L}&=-\frac{E}{M}(\beta,0,0,1)\,,\\ \epsilon^{\mu}_{2,\pm}&=\frac{1}{\sqrt{2\,}\,}(0,\pm 1,\text{i},0)\,,\qquad&\epsilon^{\mu}_{2,L}&=-\frac{E}{M}(\beta,0,0,-1)\,,\\ \epsilon^{\mu}_{3,\pm}&=\frac{1}{\sqrt{2\,}\,}(0,\pm c_{\theta},-\text{i},\mp s_{\theta})\,,\qquad&\epsilon^{\mu}_{3,L}&=\frac{E}{M}(\beta,s_{\theta},0,c_{\theta})\,,\\ \epsilon^{\mu}_{4,\pm}&=\frac{1}{\sqrt{2\,}\,}(0,\mp\text{i}\,c_{\theta},-\text{i},\pm s_{\theta})\,,\qquad&\epsilon^{\mu}_{4,L}&=\frac{E}{M}(\beta,-s_{\theta},0,-c_{\theta})\,,\end{aligned}$ (67ka)
where $\beta=(1-M^{2}/E^{2})^{1/2}$. The polarizations for the inelastic scattering processes can be derived in a similar way.
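As a consistency check, the sketch below builds the helicity $+2$, $+1$, and $0$ polarization tensors of Eq.(67jy) from the explicit spin-1 vectors of particle 1 in Eq.(67ka) (illustrative $E$ and $M$) and verifies transversality and tracelessness:

```python
# Sketch checking the spin-2 polarization tensors of Eq.(67jy), built from the
# spin-1 vectors of Eq.(67ka) for particle 1: each tensor should be transverse
# (p_mu eps^{mu nu} = 0) and traceless, with metric diag(-1,1,1,1).
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
E, M = 5.0, 1.0                                  # illustrative energy and mass
k = np.sqrt(E**2 - M**2)
beta = k / E

p  = np.array([E, 0, 0, k])                      # 4-momentum of particle 1
ep = np.array([0, -1, 1j, 0]) / np.sqrt(2)       # eps_{1,+}
em = np.array([0,  1, 1j, 0]) / np.sqrt(2)       # eps_{1,-}
eL = -(E / M) * np.array([beta, 0, 0, 1])        # eps_{1,L}

eps_p2 = np.outer(ep, ep)                                    # helicity +2
eps_p1 = (np.outer(ep, eL) + np.outer(eL, ep)) / np.sqrt(2)  # helicity +1
eps_0  = (np.outer(ep, em) + np.outer(em, ep)
          + 2 * np.outer(eL, eL)) / np.sqrt(6)               # helicity 0

for eps in (eps_p2, eps_p1, eps_0):
    transverse = np.allclose((eta @ p) @ eps, 0)     # p_mu eps^{mu nu} = 0
    traceless  = np.isclose(np.trace(eta @ eps), 0)  # eta_{mu nu} eps^{mu nu} = 0
    print(transverse, traceless)                     # True True (three times)
```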
## Appendix B BRST Quantization for GRET Formulation
In this Appendix, we provide more detailed derivations to support the
formulation of warped GRET in the main text of Section 2.3. This includes the
BRST quantization used in Section 2.3.1, the formulation of the warped GRET
type-I as presented in Section 2.3.3, and the formulation of the warped GRET
type-II as given in Section 2.3.2.
### B.1 BRST Quantization for Warped 5d Gravity
For the BRST quantization approach, the ghost fields are introduced in the path integral formulation. The 5d Faddeev-Popov ghost Lagrangian for the 5d GR with warped metric takes the following form:
$\hat{\mathcal{L}}_{\mathrm{FP}}\,=\,e^{3A(z)}\,\hat{\bar{c}}^{M}\widehat{\tt s}\hat{\mathcal{F}}_{M}\,,$ (67kb)
where the 5d gauge-fixing functions $\hat{\mathcal{F}}_{M}=(\hat{\mathcal{F}}_{\mu},\hat{\mathcal{F}}_{5})$ are given by Eq.(44). The BRST transformations for the 5d graviton, ghost, and anti-ghost fields take the following form:
$\begin{aligned}\widehat{\tt s}\hat{h}_{MN}&=-\partial_{M}\hat{c}_{N}-\partial_{N}\hat{c}_{M}+\hat{\kappa}\left(\hat{h}_{MN}\partial_{P}-\hat{h}_{MP}\partial_{N}-\hat{h}_{NP}\partial_{M}\right)\hat{c}^{P},\\ \widehat{\tt s}\hat{c}_{M}&=\hat{\kappa}\,\hat{c}_{N}\partial^{N}\hat{c}_{M}\,,\qquad\widehat{\tt s}\hat{\bar{c}}_{M}=-2\,\xi^{-1}\hat{\mathcal{F}}_{M}\,.\end{aligned}$ (67kc)
The BRST transformations are nilpotent and obey $\widehat{\tt s}^{2}=0\,$.
Next, we make the KK expansions for the ghost and anti-ghost fields as follows:
$\displaystyle\hat{c}^{\mu}(x,z)=\frac{1}{\sqrt{L\,}\,}\sum_{n=0}^{\infty}c^{\mu}_{n}(x)\,\mathsf{u}_{n}(z)\,,\qquad\hat{\bar{c}}^{\mu}(x,z)=\frac{1}{\sqrt{L\,}\,}\sum_{n=0}^{\infty}\bar{c}_{n}^{\mu}(x)\,\mathsf{u}_{n}(z)\,,$ (67kda)
$\displaystyle\hat{c}^{5}(x,z)=\frac{1}{\sqrt{L\,}\,}\sum_{n=1}^{\infty}c_{n}^{5}(x)\,\mathsf{v}_{n}(z)\,,\qquad\hat{\bar{c}}^{5}(x,z)=\frac{1}{\sqrt{L\,}\,}\sum_{n=1}^{\infty}\bar{c}_{n}^{5}(x)\,\mathsf{v}_{n}(z)\,.$ (67kdb)
Using the KK expansions (48), we derive the BRST transformations for the KK fields in 4d spacetime:
$\displaystyle\begin{aligned}\widehat{\tt s}h_{n}^{\mu\nu}&=-\,\partial^{\mu}c^{\nu}_{n}-\frac{1}{2}\eta^{\mu\nu}\mathbb{M}_{n}c^{5}_{n}+\frac{\kappa}{2}\,\llbracket\,\mathsf{u}_{n}\mathsf{u}_{m}\mathsf{u}_{\ell}\,\rrbracket\,(h^{\mu\nu}_{m}\partial^{\alpha}-2h^{\mu\alpha}_{m}\partial^{\nu})c_{\alpha}^{\ell}+\frac{\kappa}{2}\,\llbracket\,\mathsf{u}_{n}\mathsf{u}_{m}\mathsf{v}_{\ell}\,\rrbracket\,\mathbb{M}_{\ell}h^{\mu\nu}_{m}c^{5}_{\ell}\\&\quad+\frac{\kappa}{\,2\sqrt{2\,}\,}\,\llbracket\,\mathsf{u}_{n}\mathsf{v}_{m}\mathsf{u}_{\ell}\,\rrbracket\,\eta^{\mu\nu}\mathbb{M}_{\ell}\mathcal{V}^{\alpha}_{m}c_{\alpha}^{\ell}-\frac{\kappa}{\sqrt{2\,}\,}\,\llbracket\,\mathsf{u}_{n}\mathsf{v}_{m}\mathsf{v}_{\ell}\,\rrbracket\,\mathcal{V}^{\mu}_{m}\partial^{\nu}c^{5}_{\ell}+\frac{\kappa}{\sqrt{6\,}\,}\big(\,\llbracket\,\mathsf{u}_{n}\mathsf{w}_{m}\mathsf{u}_{\ell}\,\rrbracket\,\phi_{m}\partial^{\mu}c^{\nu}_{\ell}\\&\quad-\,\llbracket\,\mathsf{u}_{n}\mathsf{w}_{m}\mathsf{v}_{\ell}\,\rrbracket\,\eta^{\mu\nu}\mathbb{M}_{\ell}\phi_{m}c^{5}_{\ell}\big)+(\mu\leftrightarrow\nu)\,,\end{aligned}$ (67kea)
$\displaystyle\begin{aligned}\widehat{\tt s}\mathcal{V}_{n}^{\mu}&=-\sqrt{2}\,\partial^{\mu}c^{5}_{n}+\sqrt{2\,}\,\mathbb{M}_{n}c_{n}^{\mu}+\sqrt{2\,}\,\kappa\,\llbracket\,\mathsf{v}_{n}\mathsf{u}_{m}\mathsf{u}_{\ell}\,\rrbracket\,\mathbb{M}_{\ell}h^{\mu\nu}_{m}c_{\nu}^{\ell}+\kappa\,\llbracket\,\mathsf{v}_{n}\mathsf{v}_{m}\mathsf{u}_{\ell}\,\rrbracket\,\big(\mathcal{V}_{m}^{\mu}\partial^{\nu}-\mathcal{V}_{m}^{\nu}\partial^{\mu}\big)c_{\nu}^{\ell}\\&\quad-\frac{\kappa}{\sqrt{3\,}\,}\big(\,\llbracket\,\mathsf{v}_{n}\mathsf{w}_{m}\mathsf{u}_{\ell}\,\rrbracket\,\mathbb{M}_{\ell}\phi_{m}c^{\mu}_{\ell}+2\,\llbracket\,\mathsf{v}_{n}\mathsf{w}_{m}\mathsf{v}_{\ell}\,\rrbracket\,\phi_{m}\partial^{\mu}c^{5}_{\ell}\big)\,,\end{aligned}$ (67keb)
$\displaystyle\widehat{\tt s}\phi_{n}=-\sqrt{6\,}\,\mathbb{M}_{n}c^{5}_{n}+\sqrt{3\,}\,\kappa\,\llbracket\,\mathsf{w}_{n}\mathsf{v}_{m}\mathsf{u}_{\ell}\,\rrbracket\,\mathbb{M}_{\ell}\mathcal{V}_{m}^{\mu}c_{\mu}^{\ell}+\kappa\,\llbracket\,\mathsf{w}_{n}\mathsf{w}_{m}\mathsf{u}_{\ell}\,\rrbracket\,\phi_{m}\partial_{\mu}c^{\mu}_{\ell}-\kappa\,\llbracket\,\mathsf{w}_{n}\mathsf{w}_{m}\mathsf{v}_{\ell}\,\rrbracket\,\mathbb{M}_{\ell}\phi_{m}c^{5}_{\ell}\,,$ (67kec)
$\displaystyle\widehat{\tt s}c^{\mu}_{n}=\kappa\,\llbracket\,\mathsf{u}_{n}\mathsf{u}_{m}\mathsf{u}_{\ell}\,\rrbracket\,c^{\nu}_{m}\partial_{\nu}c^{\mu}_{\ell}-\kappa\,\llbracket\,\mathsf{u}_{n}\mathsf{v}_{m}\mathsf{u}_{\ell}\,\rrbracket\,\mathbb{M}_{\ell}c^{5}_{m}c^{\mu}_{\ell}\,,$ (67ked)
$\displaystyle\widehat{\tt s}c^{5}_{n}=\kappa\,\llbracket\,\mathsf{v}_{n}\mathsf{u}_{m}\mathsf{v}_{\ell}\,\rrbracket\,c^{\mu}_{m}\partial_{\mu}c^{5}_{\ell}+\kappa\,\llbracket\,\mathsf{v}_{n}\mathsf{v}_{m}\mathsf{v}_{\ell}\,\rrbracket\,\mathbb{M}_{\ell}c^{5}_{m}c^{5}_{\ell}\,,$ (67kee)
$\displaystyle\widehat{\tt s}\bar{c}^{\mu}_{n}=-\frac{2}{\,\xi_{n}\,}\mathcal{F}_{n}^{\mu}\,,\qquad\widehat{\tt s}\bar{c}^{5}_{n}=-\frac{2}{\,\xi_{n}\,}\mathcal{F}_{n}^{5}\,,$ (67kef)
where $\kappa=\hat{\kappa}/\sqrt{L}$ and the brackets $\llbracket\,\cdots\,\rrbracket$ are defined in Appendix D.
### B.2 Gravitational ET of Type-I
In Sections 2.3.2-2.3.3 of the main text, we have formulated the GRET, which quantitatively connects the scattering amplitudes of KK gravitons $h^{\mu\nu}_{n}$ with helicities $(0,\,\pm 1)$ to those of the corresponding gravitational KK Goldstone bosons $(\phi_{n},\,\mathcal{V}_{n}^{\pm 1})$ respectively. In this and the next sub-Appendix, we provide more detailed derivations to support the results in the main text.
We study the combination of gauge-fixing functions $\partial_{\mu}\mathcal{F}_{n}^{\mu}-\xi_{n}\mathbb{M}_{n}\mathcal{F}_{n}^{5}$, which eliminates the KK vector-Goldstone field $\mathcal{V}_{n}^{\mu}$. With this, we derive the following formula in the momentum space:
$-\text{i}k_{\mu}\mathcal{F}_{n}^{\mu}-\xi_{n}\mathbb{M}_{n}\mathcal{F}_{n}^{5}\,=\,-k_{\mu}k_{\nu}h_{n}^{\mu\nu}+\frac{1}{2}\big[(2-\xi_{n}^{-1})k^{2}-\xi_{n}\mathbb{M}_{n}^{2}\big]h_{n}+\sqrt{\frac{3}{2}\,}\,\xi_{n}^{2}\mathbb{M}_{n}^{2}\phi_{n}\,,$ (67kf)
where the rescaling $\phi_{n}\rightarrow\sqrt{\frac{2}{3}\,}\phi_{n}$ is made according to Eq.(63). Applying the on-shell condition $k^{2}=-\mathbb{M}_{n}^{2}$, we further recast Eq.(67kf) into the following form:
$\displaystyle\text{i}k_{\mu}\mathcal{F}_{n}^{\mu}+\xi_{n}\mathbb{M}_{n}\mathcal{F}_{n}^{5}\,=\sqrt{\frac{3}{2}\,}\,\mathbb{M}_{n}^{2}\,\widetilde{\mathbb{F}}_{n}\,,$ (67kga)
$\displaystyle\widetilde{\mathbb{F}}_{n}=\sqrt{\frac{2}{3}\,}h_{n}^{S}+\frac{1}{\sqrt{6\,}\,}\big(2+\xi_{n}-\xi_{n}^{-1}\big)h_{n}-\xi_{n}^{2}\phi_{n}=\mathbf{K}^{T}_{n}\mathbf{H}_{n}$ (67kgb)
$\displaystyle\hphantom{\widetilde{\mathbb{F}}_{n}}=\sqrt{\frac{2}{3}\,}\big(h_{n}^{S}+h_{n}\big)-\phi_{n}\,,\qquad(\text{for}~\xi_{n}=1)\,,$ (67kgc)
$\displaystyle\mathbf{K}_{n}=\Big(\sqrt{\frac{2}{3}\,}\,\varepsilon^{S}_{\mu\nu}+\frac{1}{\sqrt{6\,}\,}\big(2+\xi_{n}-\xi_{n}^{-1}\big)\eta_{\mu\nu},\;-\xi_{n}^{2}\Big)^{T},\qquad\mathbf{H}_{n}=(h_{n}^{\mu\nu},\,\phi_{n})^{T},$ (67kgd)
$\displaystyle\hphantom{\mathbf{K}_{n}}=\Big(\sqrt{\frac{2}{3}\,}\big(\varepsilon^{S}_{\mu\nu}+\eta_{\mu\nu}\big),\;-1\Big)^{T},\qquad(\text{for}~\xi_{n}=1)\,.$ (67kge)
We then derive a BRST identity involving the gauge-fixing function $\widetilde{\mathbb{F}}_{n}$ in Eq.(67kgb):
$\big\langle 0\big|T\,\widetilde{\mathbb{F}}_{n}\mathbf{H}_{m}^{T}\big|0\big\rangle(k)=-\frac{\xi_{n}}{\sqrt{6\,}\,\mathbb{M}_{n}^{2}}\big\langle 0\big|T\,\widehat{\tt s}\mathbf{H}_{m}^{T}(\text{i}k_{\mu}\bar{c}^{\mu}_{n}+\xi_{n}\mathbb{M}_{n}\bar{c}_{n}^{5})\big|0\big\rangle(k)\,,$ (67kh)
which can be further rewritten in the following form by utilizing Eq.(67kg):
$\mathbf{K}_{n}^{T}\bm{\mathcal{D}}_{nm}(k)=-\mathbf{X}_{nm}^{T}(k)\,.$ (67ki)
In the above, we adopt for simplicity the 't Hooft-Feynman gauge condition $(\xi_{n}=1)$. Thus, in Eq.(67ki), we define the following notations:
$\displaystyle\bm{\mathcal{D}}_{nm}(k)=\big\langle 0\big|T\,\mathbf{H}_{n}\mathbf{H}_{m}^{T}\big|0\big\rangle(k)\,,\qquad\mathbf{X}_{nm}(k)=\underline{\mathbf{X}}_{mj}(k)\,\SS_{jn}(k)\,,$
$\displaystyle\underline{\mathbf{X}}_{mj}(k)=\frac{1}{\,2\sqrt{6\,}M_{j}^{2}\,}\begin{pmatrix}\big\langle 0\big|T\,\widehat{\tt s}h^{\mu\nu}_{m}\big|\bar{\mathsf{c}}_{j}\big\rangle(k)\\[2.84526pt]\big\langle 0\big|T\,\widehat{\tt s}\phi_{m}\big|\bar{\mathsf{c}}_{j}\big\rangle(k)\end{pmatrix}=\frac{1}{\,2\sqrt{6\,}\,}\begin{pmatrix}(2k^{\mu}k^{\nu}/M_{j}^{2}-\eta^{\mu\nu})\,[\delta_{mj}+\Delta^{(3)}_{mj}(k^{2})]\\[2.84526pt]-\sqrt{6\,}\,[\delta_{mj}+\widetilde{\Delta}_{mj}^{(4)}(k^{2})]\end{pmatrix},$
$\displaystyle\SS_{jn}(k)=\big\langle 0\big|T\,\mathsf{c}_{j}\bar{\mathsf{c}}_{n}\big|0\big\rangle(k)\,,$ (67kj)
with $\mathsf{c}_{n}\equiv\text{i}\epsilon_{\mu}^{S}c^{\mu}_{n}+c_{n}^{5}$, $\bar{\mathsf{c}}_{n}\equiv\text{i}\epsilon_{\mu}^{S}\bar{c}^{\mu}_{n}+\bar{c}_{n}^{5}$, and $\epsilon_{\mu}^{S}=k_{\mu}/\mathbb{M}_{n}$. In the above, the external momentum is chosen as incoming. The formulas (67ki)-(67kj) just give Eqs.(67cm)-(67cn) in the text. In addition, we see that the quantities $\Delta^{(3)}_{mj}(k^{2})$ and $\widetilde{\Delta}^{(4)}_{mj}(k^{2})$ are of loop order and are generated by the non-linear terms of the BRST transformations (67kea) and (67kec) of $h_{n}^{\mu\nu}$ and $\phi_{n}$, respectively.
Next, we use the identity (66) and Eq.(67kg) to deduce an identity containing the external state $\widetilde{\mathbb{F}}_{n}(k)$:
$\displaystyle 0=\big\langle 0\,|\,\widetilde{\mathbb{F}}_{n}(k)\cdots\Phi\,|\,0\big\rangle=\mathbf{K}^{T}_{n}\bm{\mathcal{D}}_{nm}(k)\,\big\langle 0\,|\,\underline{\mathbf{H}}_{m}(k)\cdots\Phi\,|\,0\big\rangle=-\mathbf{X}_{nm}^{T}(k)\,\mathcal{M}\big[\underline{\mathbf{H}}_{m}(k),\cdots,\Phi\,\big],$ (67kk)
which leads to the following identity,
$\displaystyle\mathcal{M}\big[\underline{\widetilde{\mathbb{F}}}_{n}(k),\cdots,\Phi\,\big]=0\,,$ (67kl)
with the amputated external state $\underline{\widetilde{\mathbb{F}}}_{n}(k)$ given by
$\displaystyle\underline{\widetilde{\mathbb{F}}}_{n}=\sqrt{\frac{2}{3}\,}\big(h_{n}^{S}-\frac{1}{2}h_{n}\big)-C_{nm}\phi_{m}=h_{n}^{L}-\Omega_{n}\,,$ (67kma)
$\displaystyle\Omega_{n}=C_{nm}\,\phi_{m}+\tilde{\Delta}_{n}\,,\qquad\tilde{\Delta}_{n}=\frac{1}{\sqrt{6\,}\,}h_{n}+\tilde{v}_{n}\,,\qquad\tilde{v}_{n}=v_{\mu\nu}h_{n}^{\mu\nu}\,.$ (67kmb)
(We note that after the LSZ amputation in the general $R_{\xi}$ gauge at tree level, the coefficient of $h_{n}$ in $\underline{\widetilde{\mathbb{F}}}_{n}$ should be $-1/2$; this corrects the coefficient of $h_{n}$ in our previous works Hang:2021fmp ; Hang:2022rjp , but does not affect the conclusions therein.)
In the above, $C_{nm}$ is a multiplicative modification factor induced at loop level:
$C_{nm}(k^{2})=\left[\frac{\,\mathbf{1}+\bm{\widetilde{\Delta}}^{(4)}(k^{2})\,}{\,\mathbf{1}+\bm{\Delta}^{(3)}(k^{2})\,}\right]_{mn}=\,\delta_{nm}+O(\mathrm{loop})\,,$ (67kn)
where the matrix form is used such that $(\bm{\Delta}^{(3)})_{jj'}={\Delta}^{(3)}_{jj'}$ and $(\bm{\widetilde{\Delta}}^{(4)})_{jj'}=\widetilde{\Delta}^{(4)}_{jj'}$, with the matrix elements $({\Delta}^{(3)}_{jj'},\,\widetilde{\Delta}^{(4)}_{jj'})$ from Eq.(67kj). The above Eqs.(67km)-(67kn) just reproduce the formulas (67co)-(67cr) in the main text.
Thus, from Eq.(67kl) we deduce another general identity for the gravitational equivalence theorem (GRET):
$\mathcal{M}\big[\,\underline{\widetilde{\mathbb{F}}}_{n_{1}}(k_{1}),\cdots,\underline{\widetilde{\mathbb{F}}}_{n_{N}}(k_{N}),\Phi\,\big]=0\,.$ (67ko)
From this, we further derive the following GRET identity:
$\displaystyle\mathcal{M}\big[h_{n_{1}}^{L}(k_{1}),\cdots,h_{n_{N}}^{L}(k_{N}),\Phi\,\big]\,=\,\mathcal{M}\big[\Omega_{n_{1}}(k_{1}),\cdots,\Omega_{n_{N}}(k_{N}),\Phi\,\big]\,.$ (67kp)
We can directly prove this identity by expanding the right-hand side of Eq.(67kp) with each external state replaced by $\Omega_{n}=h_{n}^{L}-\underline{\widetilde{\mathbb{F}}}_{n}$ and further using Eq.(67ko). This just reproduces the GRET identity (67ct) in the main text.
We note that at tree level the LSZ reduction may be implemented directly. Adopting the 't Hooft-Feynman gauge $(\xi_{n}=1)$ and using the identity (66), we can derive a new identity involving the external line $\widetilde{\mathbb{F}}_{n}(k)$ as follows:
$\begin{aligned}0\,&=\big\langle 0\,|\,\widetilde{\mathbb{F}}_{n}(k)\cdots\overline{\Phi}\,|\,0\big\rangle=\sqrt{\frac{2}{3}\,}\big(\varepsilon_{\mu\nu}^{S}+\eta_{\mu\nu}\big)\big\langle 0\,|\,h_{n}^{\mu\nu}(k)\cdots\Phi\,|\,0\big\rangle-\big\langle 0\,|\,\phi_{n}(k)\cdots\Phi\,|\,0\big\rangle\\&=\sqrt{\frac{2}{3}\,}\big(\varepsilon_{\mu\nu}^{S}+\eta_{\mu\nu}\big)\mathcal{D}^{\mu\nu\alpha\beta}_{h,nm}(k)\,\mathcal{M}\big[h_{m}^{\alpha\beta}(k),\cdots,\Phi\,\big]-\mathcal{D}_{\phi,nm}(k)\,\mathcal{M}\big[\phi_{m}(k),\cdots,\Phi\,\big]\\&=\mathcal{D}_{\phi,nm}(k)\,\mathcal{M}\big[\sqrt{\frac{2}{3}\,}\big(h_{m}^{S}-\frac{1}{2}h_{m}\big)-\phi_{m},\cdots,\Phi\,\big]\equiv\overline{\mathcal{D}}_{\phi}(k)\,\mathcal{M}\big[\underline{\widetilde{\mathbb{F}}}_{n}(k),\cdots,\Phi\,\big],\end{aligned}$ (67kq)
where $\mathcal{D}_{\phi,nm}(k)=\delta_{nm}\overline{\mathcal{D}}_{\phi}(k)$ and the LSZ-amputated external state $\underline{\widetilde{\mathbb{F}}}_{n}(k)$ is given by Eq.(67km) with $C_{nm}=1$. We have also used the tree-level propagators (65a) and (65c) for $\xi_{n}=1$. In addition, we can extend the above derivation to the general $R_{\xi}$ gauge. Using the $h_{n}^{\mu\nu}$ and $\phi_{n}$ propagators in the $R_{\xi}$ gauge as given by Eqs.(67lka)-(67lkb), we derive the following identity of propagators at tree level:
$\big[\varepsilon_{\mu\nu}^{S}+\frac{1}{2}(\xi_{n}-\xi_{n}^{-1})\eta_{\mu\nu}\big]\mathcal{D}^{\mu\nu\alpha\beta}_{h,nm}(k)=\xi_{n}^{2}\mathcal{D}_{nm}(k)\big(\varepsilon^{\alpha\beta}_{S}-\frac{1}{2}\eta^{\alpha\beta}\big)\,.$ (67kr)
With the help of Eq.(67kr), we then derive the following tree-level identity with the amputated external state $\widetilde{\mathbb{F}}_{n}(k)$:
$\begin{aligned}0\,&=\big\langle 0\,|\,\widetilde{\mathbb{F}}_{n}(k)\cdots\overline{\Phi}\,|\,0\big\rangle\\&=\sqrt{\frac{2}{3}\,}\big[\varepsilon_{\mu\nu}^{S}+\frac{1}{2}(\xi_{n}-\xi_{n}^{-1})\eta_{\mu\nu}\big]\mathcal{D}^{\mu\nu\alpha\beta}_{h,nm}(k)\,\mathcal{M}\big[h_{m}^{\alpha\beta}(k),\cdots,\Phi\,\big]-\xi_{n}^{2}\mathcal{D}_{\phi,nm}(k)\,\mathcal{M}\big[\phi_{m}(k),\cdots,\Phi\,\big]\\&=\xi_{n}^{2}\mathcal{D}_{nm}(k)\,\mathcal{M}\Big[\sqrt{\frac{2}{3}\,}\big(h_{m}^{S}-\frac{1}{2}h_{m}\big)-\phi_{m},\cdots,\Phi\Big].\end{aligned}$ (67ks)
Thus, the amputated external state $\underline{\widetilde{\mathbb{F}}}_{n}(k)$ obeys the identity
$\mathcal{M}\Big[\underline{\widetilde{\mathbb{F}}}_{n}(k),\cdots,\Phi\Big]=0\,,$ (67kt)
where we have defined the following quantities at tree level:
$\displaystyle\underline{\widetilde{\mathbb{F}}}_{n}=\sqrt{\frac{2}{3}\,}\big(h_{n}^{S}-\frac{1}{2}h_{n}\big)-\phi_{n}=h_{n}^{L}-\Omega_{n}\,,$ (67kua)
$\displaystyle\Omega_{n}=\phi_{n}+\tilde{\Delta}_{n}\,,\qquad\tilde{\Delta}_{n}=\frac{1}{\sqrt{6\,}\,}h_{n}+\tilde{v}_{n}\,,\qquad\tilde{v}_{n}=v_{\mu\nu}h_{n}^{\mu\nu}\,.$ (67kub)
We can readily extend the identity (67kt) to the case with $N$ external states $\underline{\widetilde{\mathbb{F}}}_{n}$, which then reproduces the form of the GRET (67ko) in the $R_{\xi}$ gauge at tree level.
### B.3 Gravitational ET of Type-II
In Section 2.3.2 of the main text, we have formulated the KK GRET of type-II
which connects the scattering amplitude of KK gravitons $h^{\mu\nu}_{n}$ with
helicity $\pm 1$ to that of the corresponding gravitational KK vector
Goldstone bosons $\mathcal{V}_{n}^{\mu}$ with the same helicity. In this sub-
Appendix, we will provide detailed derivations to support the main text.
We start with the general Slavnov-Taylor (ST) type identity (66) for the gravitational gauge-fixing functions given in Eqs.(64a)-(64b). Then, we reexpress the gauge-fixing function (64a) in the following matrix notation:
$\displaystyle\mathcal{F}_{n}^{\mu}=-\frac{\,\text{i}\mathbb{M}_{n}\,}{\sqrt{2\,}\,}\mathbb{F}_{n}^{\mu}\,,\qquad\mathbb{F}_{n}^{\mu}=\mathbf{K}_{n}^{T}\mathbf{H}_{n}\,,$ (67kva)
$\displaystyle\mathbf{K}_{n}=\left(\frac{1}{\,\sqrt{2\,}\mathbb{M}_{n}\,}\big[2k_{\beta}\tensor{\eta}{{}_{\alpha}^{\mu}}-k^{\mu}(2-\xi_{n}^{-1})\eta_{\alpha\beta}\big],\;\text{i}\xi_{n}\right)^{T},\qquad\mathbf{H}_{n}=\big(h_{n}^{\alpha\beta},\,\mathcal{V}_{n}^{\mu}\big)^{T},$ (67kvb)
where we have also assumed the external momentum $k^{\mu}$ to be incoming. Therefore, the ST-type identity associated with the gauge-fixing function $\mathbb{F}_{n}^{\mu}$ is given by
$\big\langle 0\big|T\,\mathbb{F}_{n}^{\mu}\,\mathbf{H}_{m}^{T}\big|0\big\rangle(k)=-\frac{\text{i}\xi_{n}}{\sqrt{2\,}\mathbb{M}_{n}}\big\langle 0\big|T\,\widehat{\tt s}\mathbf{H}_{m}^{T}\,\bar{c}^{\mu}_{n}\big|0\big\rangle(k)\,.$ (67kw)
By using Eq.(67kv), we can further rewrite Eq.(67kw) as follows:
$\mathbf{K}_{n}^{T}\bm{\mathcal{D}}_{nm}(k)=-\mathbf{X}^{T}_{nm}(k)\,,$ (67kx)
where we have defined the following notations:
$\displaystyle\bm{\mathcal{D}}_{nm}(k)$
$\displaystyle=\big{\langle}0\big{|}T\hskip
0.85358pt\mathbf{H}_{n}\mathbf{H}_{m}^{T}\big{|}0\big{\rangle}(k)\hskip
0.85358pt,\qquad\mathbf{X}_{nm}(k)=\underline{\mathbf{X}}_{mj}(k)\SS_{jn}(k)\hskip
0.85358pt,$ $\displaystyle\underline{\mathbf{X}}_{mj}(k)$
$\displaystyle=\frac{\,\text{i}\hskip
0.85358pt\bar{\eta}^{\lambda\mu}}{\,\sqrt{2\,}\mathbb{M}_{j}\,}\\!\\!\begin{pmatrix}\\!\big{\langle}0\big{|}\hskip
0.85358ptT\,\widehat{\tt s}h^{\sigma\rho}_{m}\big{|}\bar{c}_{j,\lambda}\hskip
0.85358pt\big{\rangle}(k)\\\\[2.84526pt] \big{\langle}0\big{|}\hskip
0.85358ptT\,\widehat{\tt
s}\mathcal{V}_{m}^{\nu}\big{|}\bar{c}_{j,\lambda}\hskip
0.85358pt\big{\rangle}(k)\end{pmatrix}\\!=\\!\frac{1}{\sqrt{2\,}\mathbb{M}_{j}}\\!\\!\begin{pmatrix}\\!(k^{\sigma}\bar{\eta}^{\rho\mu}\\!+k^{\rho}\bar{\eta}^{\sigma\mu})[\hskip
0.85358pt\delta_{mj}\\!+\\!\Delta^{\\!(1)}_{mj}(k^{2})\hskip
0.85358pt]\\\\[2.84526pt] \text{i}\sqrt{2}M_{m}\bar{\eta}^{\mu\nu}[\hskip
0.85358pt\delta_{mj}\\!+\\!\widetilde{\Delta}^{\\!(2)}_{mj}(k^{2})\hskip
0.85358pt]\end{pmatrix}\\!,$ $\displaystyle\SS_{jn}(k)\hskip
0.85358pt\bar{\eta}^{\lambda\mu}$
$\displaystyle=\big{\langle}0\big{|}T{c_{j}^{\lambda}\bar{c}_{n}^{\hskip
0.85358pt\mu}}\big{|}0\big{\rangle}(k)\,,\quad\bar{\eta}^{\lambda\mu}=\eta^{\lambda\mu}-\frac{k^{\lambda}k^{\mu}(1-\xi_{n})}{k^{2}+\xi_{n}^{2}\mathbb{M}_{n}^{2}}\,,$
(67ky)
where $\Delta^{(1)}_{mj}(k^{2})$ and $\widetilde{\Delta}^{(2)}_{mj}(k^{2})$
are loop-level quantities. The above Eqs.(67kx)-(67ky) just give
Eqs.(67bq)-(67br) in the main text. Moreover, each loop-level quantity
incorporates distinct contributions from the non-linear terms of the BRST
transformations (67ke):
$\displaystyle\Delta^{(1)}_{mj}(k^{2})$
$\displaystyle=\big{[}\Delta_{h,h}^{(1)}(k^{2})+\Delta_{h,\mathcal{V}}^{(1)}(k^{2})+\Delta_{h,\phi}^{(1)}(k^{2})\big{]}_{mj}\,,$
(67kza) $\displaystyle\widetilde{\Delta}^{(2)}_{mj}(k^{2})$
$\displaystyle=\big{[}\widetilde{\Delta}_{\mathcal{V},h}^{(2)}(k^{2})+\widetilde{\Delta}_{\mathcal{V},\mathcal{V}}^{(2)}(k^{2})+\widetilde{\Delta}_{\mathcal{V},\phi}^{(2)}(k^{2})\big{]}_{mj}\,.$
(67kzb)
Next, we consider the identity (66) with an external state given by the
combination of gauge-fixing functions as in Eq.(67kv). With this, we can
directly amputate the external state as follows:
$\displaystyle 0$ $\displaystyle=\big{\langle}0\hskip
0.85358pt|\,\mathbb{F}_{n}^{\hskip 0.85358pt\mu}(k)\cdots{{\Phi}}\,|\hskip
0.85358pt0\big{\rangle}=\mathbf{K}^{T}_{n}\bm{\mathcal{D}}_{nm}(k)\big{\langle}0\hskip
0.85358pt|\,{\underline{\mathbf{H}}_{m}(k)}\cdots{{\Phi}}\,|\hskip
0.85358pt0\big{\rangle}$ $\displaystyle=-\mathbf{X}_{nm}^{T}(k)\hskip
0.85358pt\mathcal{M}\big{[}\underline{\mathbf{H}}_{m}(k),\cdots\\!,{{\Phi}}\hskip
0.85358pt\big{]},$ (67la)
which leads to the following identity:
$\displaystyle\mathcal{M}\big{[}\underline{\mathbb{F}}_{n}^{\hskip
0.85358pt\mu}(k),\cdots\\!,{\Phi}\hskip 0.85358pt\big{]}=0\,,$ (67lb)
where we have used the BRST identity (67kx) and $\mathcal{M}[\cdots]$ denotes
the amputated scattering amplitude. In Eq.(67lb), the amputated external
state $\underline{\mathbb{F}}_{n}^{\hskip 0.85358pt\mu}\hskip 0.85358pt$
takes the following form:
$\underline{\mathbb{F}}_{n}^{\hskip
0.85358pt\mu}(k)=\frac{\sqrt{2\,}\,}{\mathbb{M}_{n}}\hskip
0.85358ptk_{\nu}h_{n}^{\mu\nu}-{\hat{C}}^{\hskip
0.85358ptnm}_{\mathrm{mod}}\hskip 0.85358pt{\eta}^{\hskip
0.85358pt\mu\nu}\mathcal{V}_{m,\nu}\hskip 0.85358pt,$ (67lc)
with the loop-induced modification factor given by
${\hat{C}}^{\hskip
0.85358ptnm}_{\mathrm{mod}}(k^{2})=-\frac{\,\text{i}\mathbb{M}_{m}\,}{\mathbb{M}_{n}}\\!\\!\left[\\!\frac{\,{\mathbf{1}}\\!+\\!\widetilde{\bm{\Delta}}^{\\!(2)}(k^{2})\,}{\,{\mathbf{1}}\\!+\\!\bm{\Delta}^{\\!(1)}(k^{2})\,}\\!\right]_{mn}\\!=-\text{i}\hskip
0.85358pt\delta_{nm}+O(\mathrm{loop})\hskip 0.85358pt.$ (67ld)
The matrix form presented above represents loop-level quantities
$(\bm{\Delta}^{\\!(1)})_{jj^{\prime}}\\!={\Delta}^{\\!(1)}_{jj^{\prime}}$ and
$(\bm{\widetilde{\Delta}}^{\\!(2)})_{jj^{\prime}}\\!=\\!\widetilde{\Delta}^{\\!(2)}_{jj^{\prime}}\hskip
0.85358pt$. We provide the above loop-level formulation for completeness,
since our present focus in the main text is to analyze the KK scattering
amplitudes at tree level. The above Eqs.(67kz)-(67ld) just give
Eqs.(67bs)-(67bv) in the main text.
Contracting $\underline{\mathbb{F}}_{n}^{\hskip 0.85358pt\mu}(k)$ in
Eq.(67lc) with a transverse polarization vector $\epsilon^{\mu}_{\pm}$,
we obtain the formula:
$\displaystyle\underline{\mathbb{F}}_{n}(k)=h_{n}^{\pm 1}\\!-\Theta_{n}\hskip
0.85358pt,~{}~{}~{}~{}\Theta_{n}\\!={\hat{C}}_{\mathrm{mod}}^{\hskip
0.85358ptnm}\mathcal{V}_{m}^{\pm 1}\\!+v_{n}^{\pm 1}\hskip 0.85358pt,$ (67le)
where $v_{n}^{\pm 1}\\!=O(\mathbb{M}_{n}/E_{n})$. Then, we can generalize
Eq.(67lb) to incorporate $N$ externally-amputated gauge-fixing functions:
$\mathcal{M}\big{[}\underline{\mathbb{F}}_{n_{1}}\\!(k_{1}),\cdots\\!,\underline{\mathbb{F}}_{n_{\\!N}}\\!(k_{\\!N}),{\Phi}\hskip
0.85358pt\big{]}=0\,.$ (67lf)
Using this identity, we can further derive the gravitational equivalence
theorem (GRET) identity which connects the $h_{n}^{\pm 1}$ amplitude to the
amplitude of $\Theta_{n}$:
$\displaystyle\mathcal{M}\big{[}h_{n_{1}}^{\pm
1}(k_{1}),\cdots\\!,h_{n_{\\!N}}^{\pm 1}(k_{N}),{\Phi}\hskip
0.85358pt\big{]}\hskip 0.85358pt=\hskip
0.85358pt\mathcal{M}\big{[}\Theta_{n_{1}}\\!(k_{1}),\cdots\\!,\Theta_{n_{\\!N}}\\!(k_{N}),{\Phi}\hskip
0.85358pt\big{]}\hskip 0.85358pt.$ (67lg)
We can prove the above identity by computing the amplitude on its right-hand
side. For this, we expand $\Theta_{n}$ in terms of $\,\Theta_{n}\\!=h_{n}^{\pm
1}\\!-\underline{\mathbb{F}}_{n}$ for each external state of Eq.(67lg) and
thus deduce the left-hand side of Eq.(67lg) after applying the identity
Eq.(67lf) to eliminate each external state of $\underline{\mathbb{F}}_{n}$. The
above Eq.(67lg) just reproduces the GRET identity (67ce) in the main text.
## Appendix C Feynman Rules for Warped KK Gauge and Gravity Theories
In this Appendix, we present the relevant Feynman rules for the warped KK
gauge and gravity theories under the 5d compactification of
$S^{1}\\!/\mathbb{Z}_{2}\hskip 0.85358pt$, which are needed for explicitly
computing the 3-point and 4-point KK scattering amplitudes in the main text.
### C.1 Feynman Rules for Warped KK Gauge Theory
In this sub-Appendix, we present the relevant Feynman rules for the warped KK
gauge theory. The propagators for KK gauge boson and the KK Goldstone boson
take the following forms in the $R_{\xi}$ gauge:
$\displaystyle\mathcal{D}_{nm}^{\mu\nu}(p)$
$\displaystyle=-\frac{\text{i}\hskip
0.85358pt\delta_{nm}}{\,p^{2}\\!+\\!\mathbb{M}_{n}^{2}\,}\\!\left[\eta^{\mu\nu}\\!+(\xi_{n}\\!-\\!1)\frac{p^{\mu}p^{\nu}}{\,p^{2}+\xi_{n}\mathbb{M}_{n}^{2}\,}\right]\\!,$
(67lha) $\displaystyle\mathcal{D}_{nm}(p)$
$\displaystyle=-\frac{\text{i}\hskip
0.85358pt\delta_{nm}}{~{}p^{2}+\xi_{n}\mathbb{M}_{n}^{2}~{}}\,.$ (67lhb)
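As a quick consistency check of Eq.(67lha), its longitudinal part interpolates between the familiar gauges. The following sketch is our own illustration in Python/SymPy (the symbols p2, M2, xi stand for $p^{2}$, $\mathbb{M}_{n}^{2}$, $\xi_{n}$ and are not from the main text); it verifies the Feynman-'t Hooft and unitary-gauge limits:

```python
import sympy as sp

p2, M2, xi = sp.symbols('p2 M2 xi', positive=True)

# Coefficient of p^mu p^nu inside the bracket of the KK gauge propagator (67lha)
coef = (xi - 1) / (p2 + xi * M2)

print(coef.subs(xi, 1))           # 0: Feynman-'t Hooft gauge (xi_n = 1)
print(sp.limit(coef, xi, sp.oo))  # 1/M2: unitary-gauge form (xi_n -> infinity)
```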
The trilinear vertices and quartic vertices for KK gauge bosons (KK Goldstone
bosons) are derived as follows:
$\displaystyle\begin{aligned} &=-g\hskip 0.85358pta_{nm\ell}\hskip
0.85358ptf^{abc}\big{[}\hskip
0.85358pt\eta^{\mu\nu}(p_{1}\\!-p_{2})^{\alpha}\\!+\eta^{\nu\alpha}(p_{2}\\!-p_{3})^{\mu}\\\
&\hskip 73.97716pt+\eta^{\alpha\mu}(p_{3}\\!-p_{1})^{\nu}\hskip
0.85358pt\big{]}\,,\end{aligned}$ (67lia) $\displaystyle=-g\hskip
0.85358pt\tilde{a}_{nm\ell}\hskip 0.85358ptf^{abc}(p_{1}\\!-p_{2})^{\mu}\,,$
(67lib) $\displaystyle=\text{i}g\hskip 0.85358pta_{nm\ell}\hskip
0.85358ptf^{abc}\eta^{\mu\nu}(M_{n}^{2}\\!-\\!M_{m}^{2})M_{\ell}^{-1}\,,$
(67lic) $\displaystyle\begin{array}[]{ll}&=\text{i}\hskip 0.85358ptg^{2}\hskip
0.85358pta_{nm\ell q}\big{[}\hskip
0.85358ptf^{abe}f^{cde}(\eta^{\mu\alpha}\eta^{\nu\beta}\\!-\eta^{\mu\beta}\eta^{\nu\alpha})\\\
&\hskip
56.9055pt+f^{ace}f^{dbe}(\eta^{\mu\beta}\eta^{\nu\alpha}\\!-\eta^{\mu\nu}\eta^{\alpha\beta})\\\
&\hskip
56.9055pt+f^{ade}f^{bce}(\eta^{\mu\nu}\eta^{\alpha\beta}\\!-\eta^{\mu\alpha}\eta^{\nu\beta})\hskip
0.85358pt\big{]},\end{array}$ (67lig)
where the effective cubic and quartic KK coupling coefficients
$(a_{nm\ell},\,\tilde{a}_{nm\ell},\,a_{nm\ell q},$ $\tilde{a}_{nm\ell q})$ are
defined as follows:
$\displaystyle a_{nm\ell}$
$\displaystyle=\frac{1}{\,L\,}\int_{0}^{L}\text{d}z\,e^{A(z)}\,\mathsf{f}_{n}(z)\mathsf{f}_{m}(z)\mathsf{f}_{\ell}(z)\,,$
(67lja) $\displaystyle\tilde{a}_{nm\ell}$
$\displaystyle=\frac{1}{\,L\,}\int_{0}^{L}\text{d}z\,e^{A(z)}\,\tilde{\mathsf{f}}_{n}(z)\tilde{\mathsf{f}}_{m}(z)\mathsf{f}_{\ell}(z)\,,$
(67ljb) $\displaystyle a_{nm\ell q}$
$\displaystyle=\frac{1}{\,L\,}\int_{0}^{L}\text{d}z\,e^{A(z)}\,\mathsf{f}_{n}(z)\mathsf{f}_{m}(z)\mathsf{f}_{\ell}(z)\mathsf{f}_{q}(z)\,,$
(67ljc) $\displaystyle\tilde{a}_{nm\ell q}$
$\displaystyle=\frac{1}{\,L\,}\int_{0}^{L}\text{d}z\,e^{A(z)}\,\tilde{\mathsf{f}}_{n}(z)\tilde{\mathsf{f}}_{m}(z)\tilde{\mathsf{f}}_{\ell}(z)\tilde{\mathsf{f}}_{q}(z)\,.$
(67ljd)
These coupling coefficients will be used for the double-copy construction in
Sections 4.1-4.2 and for deriving the gravitational sum rules in Appendix D.
In the above Feynman rules (67li), the coupling $g$ is the 4d gauge coupling
and is connected to the 5d gauge coupling $\hat{g}_{5}$ via
$\,g=\hat{g}_{5}/\sqrt{L\,}$. For the 5d physical coordinate $y$ (instead of
the 5d conformal coordinate $z$), the 5d length is $\bar{L}\\!=\\!\pi r_{c}$
and the corresponding 4d gauge coupling is
$\bar{g}\\!=\\!\hat{g}_{5}/\sqrt{\bar{L}\,}$, which is related to the gauge
coupling $g$ via $\,\bar{g}\\!=\\!{g}\sqrt{\bar{L}/L\,}$. Thus, the cubic KK
gauge coupling coefficient $\bar{a}_{nm\ell}$ (defined in 5d physical
coordinate) is connected to the above cubic KK gauge coupling coefficient
${a}_{nm\ell}$ (defined in 5d conformal coordinate) by
$\bar{a}_{nm\ell}\\!=\\!{a}_{nm\ell}\sqrt{\bar{L}/L\,}$. Similarly, for the
quartic KK gauge coupling coefficient, we have the relation, $\bar{a}_{nm\ell
q}\\!=\\!(\bar{L}/L)\hskip 0.85358pt{a}_{nm\ell q}\hskip 0.85358pt$.
### C.2 Feynman Rules for Warped KK Gravity Theory
The propagators for KK gravitons and gravitational KK Goldstone bosons in a
general $R_{\xi}$ gauge (43)-(44) are given as follows (see Hang:2021fmp):
$\displaystyle\mathcal{D}_{nm}^{\mu\nu\alpha\beta}(p)=-\frac{\,\text{i}\hskip
0.85358pt\delta_{nm}\,}{2}\left\\{\frac{\,\eta^{\mu\alpha}\eta^{\nu\beta}\\!+\eta^{\mu\beta}\eta^{\nu\alpha}\\!-\\!\eta^{\mu\nu}\eta^{\alpha\beta}}{p^{2}\\!+\mathbb{M}_{n}^{2}}+\frac{1}{3}\\!\left[\frac{1}{\,p^{2}\\!+\mathbb{M}_{n}^{2}\,}-\frac{1}{\,p^{2}\\!+\\!(3\hskip
0.85358pt\xi_{n}\\!-\\!2)\mathbb{M}_{n}^{2}\,}\right]\right.$
$\displaystyle\times\\!\left(\\!\eta^{\mu\nu}\\!\\!-\\!\frac{\,2p^{\mu}p^{\nu}\,}{\mathbb{M}_{n}^{2}}\\!\right)\\!\\!\left(\\!\eta^{\alpha\beta}\\!\\!-\\!\frac{\,2p^{\alpha}p^{\beta}\,}{\mathbb{M}_{n}^{2}}\\!\right)\\!+\\!\frac{1}{\mathbb{M}_{n}^{2}}\\!\left[\frac{1}{\,p^{2}\\!+\\!\mathbb{M}_{n}^{2}\,}-\frac{1}{\,p^{2}\\!+\xi_{n}\mathbb{M}_{n}^{2}\,}\right]\\!\\!\left(\eta^{\mu\alpha}p^{\nu}p^{\beta}\\!+\eta^{\mu\beta}p^{\nu}p^{\alpha}\right.$
$\displaystyle\left.\left.+\,\eta^{\nu\alpha}p^{\mu}p^{\beta}\\!+\\!\eta^{\nu\beta}p^{\mu}p^{\alpha}\right)\\!+\\!\frac{~{}4\hskip
0.85358ptp^{\mu}p^{\nu}p^{\alpha}p^{\beta}~{}}{\xi_{n}\mathbb{M}_{n}^{4}}\\!\left(\frac{1}{~{}p^{2}\\!+\xi_{n}^{2}\mathbb{M}_{n}^{2}~{}}-\frac{1}{~{}p^{2}\\!+\xi_{n}\mathbb{M}_{n}^{2}~{}}\right)\\!\right\\},$
(67lka) $\displaystyle\mathcal{D}^{\mu\nu}_{nm}(p)=-\frac{\text{i}\hskip
0.85358pt\delta_{nm}}{\,p^{2}\\!+\xi_{n}\mathbb{M}_{n}^{2}\,}\\!\left[\eta^{\mu\nu}\\!\\!-\\!\frac{~{}p^{\mu}p^{\nu}(1\\!-\\!\xi_{n})~{}}{\,p^{2}\\!+\xi_{n}^{2}\mathbb{M}_{n}^{2}~{}}\right]\\!,$
(67lkb) $\displaystyle\mathcal{D}_{nm}(p)=-\frac{\text{i}\hskip
0.85358pt\delta_{nm}}{\,p^{2}+(3\xi_{n}\\!\\!-\\!2)\mathbb{M}_{n}^{2}\,}\,.$
(67lkc)
The relevant trilinear gravitational KK vertices are given by
$\displaystyle\hskip 17.07182pt=\text{i}\hskip 0.85358pt\kappa\hskip
0.85358pt\alpha_{nm\ell}\hskip
0.85358pt\Gamma^{\mu\nu\rho\sigma\alpha\beta}_{nm\ell}(p_{1},p_{2},p_{3})\hskip
0.85358pt,$ (67lla) $\displaystyle\hskip 17.64056pt=\frac{\,\text{i}\hskip
0.85358pt\kappa\,}{2}\tilde{\alpha}_{nm\ell}\hskip
0.85358pt\Gamma^{\alpha\beta\mu\nu}_{nm\ell}(p_{1},p_{2})\hskip 0.85358pt,$
(67llb) $\displaystyle\hskip 17.64056pt\begin{aligned}
&=\frac{\,\text{i}\hskip
0.85358pt\kappa\,}{2}\tilde{\beta}_{nm\ell}\Big{\\{}2(p_{1}^{\mu}p_{1}^{\nu}+p_{2}^{\mu}p_{2}^{\nu})+p_{1}^{\mu}p_{2}^{\nu}+p_{1}^{\nu}p_{2}^{\mu}-[2(p_{1}^{2}+p_{2}^{2})\\\
&\hskip
59.75078pt+3p_{1}\\!\cdot\\!p_{2}+\mathbb{M}_{n}\mathbb{M}_{\ell}+\mathbb{M}_{m}\mathbb{M}_{\ell}]\eta^{\mu\nu}\Big{\\}}\hskip
0.85358pt,\end{aligned}$ (67llc) $\displaystyle\hskip
17.64056pt\begin{aligned} &=\frac{\text{i}\hskip
0.85358pt\kappa}{\sqrt{6\,}\,}\hskip
0.85358pt\tilde{\rho}_{nm\ell}\Big{\\{}\\!\\!-\\!2\hskip
0.85358pt(p_{1}^{\mu}p_{1}^{\nu}\\!+p_{2}^{\mu}p_{2}^{\nu})\\!-\\!3\hskip
0.85358ptp_{2}^{\mu}p_{1}^{\nu}\\!+\\![2(p_{1}^{2}\\!+\\!p_{2}^{2})\\\ &\hskip
59.75078pt+3\hskip 0.85358ptp_{1}\\!\cdot\\!p_{2}]\eta^{\mu\nu}\Big{\\}}\hskip
0.85358pt,\end{aligned}$ (67lld) $\displaystyle\hskip
17.64056pt\begin{aligned} &=\text{i}\hskip
0.85358pt2\sqrt{\frac{2}{3}\,}\hskip 0.85358pt\kappa\hskip
0.85358pt\tilde{\omega}_{nm\ell}\big{(}p_{1}\\!\cdot\\!p_{2}\\!+p_{1}\\!\cdot\\!p_{3}\\!+p_{2}\\!\cdot\\!p_{3}\\!-\mathbb{M}_{n}\mathbb{M}_{m}\\\
&\hskip
79.66771pt-\mathbb{M}_{n}\mathbb{M}_{\ell}\\!-\mathbb{M}_{m}\mathbb{M}_{\ell}\big{)}\hskip
0.85358pt.\end{aligned}$ (67lle)
In the above Feynman rules (67ll), the coupling $\kappa$ is the 4d
gravitational coupling and is connected to the 5d gravitational coupling
$\hat{\kappa}_{5}$ via $\,\kappa\\!=\\!\hat{\kappa}_{5}/\sqrt{L\,}$. For the
5d physical coordinate $y$ (instead of the 5d conformal coordinate $z$), the
5d length is $\bar{L}\\!=\\!\pi r_{c}$ and the corresponding 4d gravitational
coupling is $\bar{\kappa}\\!=\\!\hat{\kappa}_{5}/\sqrt{\bar{L}\,}$, which is
related to the gravitational coupling $\kappa$ via
$\,\bar{\kappa}\\!=\\!{\kappa}\sqrt{\bar{L}/L\,}$. Thus, the cubic KK
gravitational coupling coefficient $\bar{\alpha}_{nm\ell}$ (defined in 5d
physical coordinate) is connected to the above cubic KK gravitational coupling
coefficient ${\alpha}_{nm\ell}$ (defined in 5d conformal coordinate) by
$\bar{\alpha}_{nm\ell}\\!=\\!{\alpha}_{nm\ell}\sqrt{\bar{L}/L\,}$.
In the above Eq.(67lla), the trilinear gravitational vertex function
$\Gamma^{\mu\nu\rho\sigma\alpha\beta}_{nm\ell}$ is defined as follows:
$\Gamma^{\mu\nu\rho\sigma\alpha\beta}_{nm\ell}(p_{1},p_{2},p_{3})=\frac{1}{\,8\,}\big{[}(\mathbb{M}_{n}^{2}\hskip
0.85358ptF_{1}\\!-p_{1}^{2}\hskip
0.85358ptF_{2}+F_{3})\\!+\\!(1\\!\leftrightarrow\\!2)\\!+\\!(1\\!\leftrightarrow\\!3)\big{]}\,,$
(67lm)
with the functions $(F_{1},\,F_{2},\,F_{3})$ given by
$\displaystyle F_{1}=$
$\displaystyle-\eta^{\alpha\beta}\eta^{\mu\sigma}\eta^{\nu\rho}-\eta^{\alpha\beta}\eta^{\mu\rho}\eta^{\nu\sigma}+2\eta^{\alpha\beta}\eta^{\mu\nu}\eta^{\rho\sigma}+\eta^{\alpha\mu}\eta^{\beta\sigma}\eta^{\nu\rho}+\eta^{\alpha\mu}\eta^{\beta\rho}\eta^{\nu\sigma}-\eta^{\alpha\mu}\eta^{\beta\nu}\eta^{\rho\sigma}$
$\displaystyle+\eta^{\alpha\sigma}\eta^{\beta\mu}\eta^{\nu\rho}+\eta^{\alpha\rho}\eta^{\beta\mu}\eta^{\nu\sigma}+\eta^{\alpha\nu}\eta^{\beta\sigma}\eta^{\mu\rho}+\eta^{\alpha\nu}\eta^{\beta\rho}\eta^{\mu\sigma}-\eta^{\alpha\nu}\eta^{\beta\mu}\eta^{\rho\sigma}+\eta^{\alpha\sigma}\eta^{\beta\nu}\eta^{\mu\rho}$
$\displaystyle+\eta^{\alpha\rho}\eta^{\beta\nu}\eta^{\mu\sigma}-3\eta^{\alpha\rho}\eta^{\beta\sigma}\eta^{\mu\nu}-3\eta^{\alpha\sigma}\eta^{\beta\rho}\eta^{\mu\nu}\,,$
$\displaystyle F_{2}=$
$\displaystyle~{}3\eta^{\alpha\beta}\eta^{\mu\sigma}\eta^{\nu\rho}+3\eta^{\alpha\beta}\eta^{\mu\rho}\eta^{\nu\sigma}-4\eta^{\alpha\beta}\eta^{\mu\nu}\eta^{\rho\sigma}-2\eta^{\alpha\mu}\eta^{\beta\sigma}\eta^{\nu\rho}-2\eta^{\alpha\mu}\eta^{\beta\rho}\eta^{\nu\sigma}+3\eta^{\alpha\mu}\eta^{\beta\nu}\eta^{\rho\sigma}$
$\displaystyle+3\eta^{\alpha\nu}\eta^{\beta\mu}\eta^{\rho\sigma}-2\eta^{\alpha\nu}\eta^{\beta\sigma}\eta^{\mu\rho}-2\eta^{\alpha\nu}\eta^{\beta\rho}\eta^{\mu\sigma}+4\eta^{\alpha\sigma}\eta^{\beta\rho}\eta^{\mu\nu}+4\eta^{\alpha\rho}\eta^{\beta\sigma}\eta^{\mu\nu}-2\eta^{\alpha\rho}\eta^{\beta\nu}\eta^{\mu\sigma}$
$\displaystyle-2\eta^{\alpha\rho}\eta^{\beta\mu}\eta^{\nu\sigma}-2\eta^{\alpha\sigma}\eta^{\beta\nu}\eta^{\mu\rho}-2\eta^{\alpha\sigma}\eta^{\beta\mu}\eta^{\nu\rho}\,,$
$\displaystyle F_{3}=$
$\displaystyle~{}2\eta^{\mu\nu}\eta^{\rho\sigma}p_{1}^{\alpha}p_{2}^{\beta}+\eta^{\mu\nu}\eta^{\rho\sigma}p_{1}^{\alpha}p_{3}^{\beta}+\eta^{\nu\rho}\eta^{\beta\sigma}p_{1}^{\alpha}p_{2}^{\mu}+\eta^{\beta\rho}\eta^{\nu\sigma}p_{1}^{\alpha}p_{2}^{\mu}+\eta^{\nu\rho}\eta^{\beta\sigma}p_{1}^{\alpha}p_{3}^{\mu}+\eta^{\beta\rho}\eta^{\nu\sigma}p_{1}^{\alpha}p_{3}^{\mu}$
$\displaystyle+\eta^{\mu\rho}\eta^{\beta\sigma}p_{1}^{\alpha}p_{2}^{\nu}+\eta^{\beta\rho}\eta^{\mu\sigma}p_{1}^{\alpha}p_{2}^{\nu}+\eta^{\mu\rho}\eta^{\beta\sigma}p_{1}^{\alpha}p_{3}^{\nu}+\eta^{\beta\rho}\eta^{\mu\sigma}p_{1}^{\alpha}p_{3}^{\nu}-\eta^{\beta\sigma}\eta^{\mu\nu}p_{1}^{\alpha}p_{2}^{\rho}-\eta^{\beta\sigma}\eta^{\mu\nu}p_{1}^{\alpha}p_{3}^{\rho}$
$\displaystyle+\eta^{\beta\nu}\eta^{\mu\sigma}p_{1}^{\alpha}p_{3}^{\rho}+\eta^{\beta\mu}\eta^{\nu\sigma}p_{1}^{\alpha}p_{3}^{\rho}-\eta^{\mu\sigma}\eta^{\nu\rho}p_{1}^{\alpha}p_{2}^{\beta}-\eta^{\mu\sigma}\eta^{\nu\rho}p_{1}^{\alpha}p_{3}^{\beta}-\eta^{\beta\rho}\eta^{\mu\nu}p_{1}^{\alpha}p_{2}^{\sigma}-\eta^{\beta\rho}\eta^{\mu\nu}p_{1}^{\alpha}p_{3}^{\sigma}$
$\displaystyle+\eta^{\beta\nu}\eta^{\mu\rho}p_{1}^{\alpha}p_{3}^{\sigma}+\eta^{\beta\mu}\eta^{\nu\rho}p_{1}^{\alpha}p_{3}^{\sigma}-\eta^{\mu\rho}\eta^{\nu\sigma}p_{1}^{\alpha}p_{2}^{\beta}-\eta^{\mu\rho}\eta^{\nu\sigma}p_{1}^{\alpha}p_{3}^{\beta}-\eta^{\beta\nu}\eta^{\rho\sigma}p_{1}^{\alpha}p_{2}^{\mu}-\eta^{\beta\nu}\eta^{\rho\sigma}p_{1}^{\alpha}p_{3}^{\mu}$
$\displaystyle-\eta^{\beta\mu}\eta^{\rho\sigma}p_{1}^{\alpha}p_{2}^{\nu}-\eta^{\beta\mu}\eta^{\rho\sigma}p_{1}^{\alpha}p_{3}^{\nu}+2\eta^{\mu\nu}\eta^{\rho\sigma}p_{1}^{\beta}p_{2}^{\alpha}+\eta^{\mu\nu}\eta^{\rho\sigma}p_{1}^{\beta}p_{3}^{\alpha}+\eta^{\nu\rho}\eta^{\alpha\sigma}p_{1}^{\beta}p_{2}^{\mu}+\eta^{\alpha\rho}\eta^{\nu\sigma}p_{1}^{\beta}p_{2}^{\mu}$
$\displaystyle+\eta^{\nu\rho}\eta^{\alpha\sigma}p_{1}^{\beta}p_{3}^{\mu}+\eta^{\alpha\rho}\eta^{\nu\sigma}p_{1}^{\beta}p_{3}^{\mu}+\eta^{\mu\rho}\eta^{\alpha\sigma}p_{1}^{\beta}p_{2}^{\nu}+\eta^{\alpha\rho}\eta^{\mu\sigma}p_{1}^{\beta}p_{2}^{\nu}+\eta^{\mu\rho}\eta^{\alpha\sigma}p_{1}^{\beta}p_{3}^{\nu}+\eta^{\alpha\rho}\eta^{\mu\sigma}p_{1}^{\beta}p_{3}^{\nu}$
$\displaystyle-\eta^{\alpha\sigma}\eta^{\mu\nu}p_{1}^{\beta}p_{2}^{\rho}-\eta^{\alpha\sigma}\eta^{\mu\nu}p_{1}^{\beta}p_{3}^{\rho}+\eta^{\alpha\nu}\eta^{\mu\sigma}p_{1}^{\beta}p_{3}^{\rho}+\eta^{\alpha\mu}\eta^{\nu\sigma}p_{1}^{\beta}p_{3}^{\rho}-\eta^{\mu\sigma}\eta^{\nu\rho}p_{1}^{\beta}p_{3}^{\alpha}-\eta^{\alpha\rho}\eta^{\mu\nu}p_{1}^{\beta}p_{2}^{\sigma}$
$\displaystyle-\eta^{\alpha\rho}\eta^{\mu\nu}p_{1}^{\beta}p_{3}^{\sigma}+\eta^{\alpha\nu}\eta^{\mu\rho}p_{1}^{\beta}p_{3}^{\sigma}+\eta^{\alpha\mu}\eta^{\nu\rho}p_{1}^{\beta}p_{3}^{\sigma}-\eta^{\mu\rho}\eta^{\nu\sigma}p_{1}^{\beta}p_{2}^{\alpha}-\eta^{\mu\rho}\eta^{\nu\sigma}p_{1}^{\beta}p_{3}^{\alpha}-\eta^{\alpha\nu}\eta^{\rho\sigma}p_{1}^{\beta}p_{2}^{\mu}$
$\displaystyle-\eta^{\alpha\nu}\eta^{\rho\sigma}p_{1}^{\beta}p_{3}^{\mu}-\eta^{\alpha\mu}\eta^{\rho\sigma}p_{1}^{\beta}p_{2}^{\nu}-\eta^{\alpha\mu}\eta^{\rho\sigma}p_{1}^{\beta}p_{3}^{\nu}-\eta^{\nu\rho}\eta^{\mu\sigma}p_{1}^{\beta}p_{2}^{\alpha}-\eta^{\alpha\sigma}\eta^{\beta\rho}p_{1}^{\mu}p_{2}^{\nu}-\eta^{\alpha\rho}\eta^{\beta\sigma}p_{1}^{\mu}p_{2}^{\nu}$
$\displaystyle+\eta^{\alpha\beta}\eta^{\rho\sigma}p_{1}^{\mu}p_{2}^{\nu}-\eta^{\alpha\sigma}\eta^{\beta\rho}p_{1}^{\mu}p_{3}^{\nu}-\eta^{\alpha\rho}\eta^{\beta\sigma}p_{1}^{\mu}p_{3}^{\nu}+\eta^{\alpha\beta}\eta^{\rho\sigma}p_{1}^{\mu}p_{3}^{\nu}-\eta^{\alpha\beta}\eta^{\nu\sigma}p_{1}^{\mu}p_{3}^{\rho}-\eta^{\alpha\beta}\eta^{\nu\rho}p_{1}^{\mu}p_{3}^{\sigma}$
$\displaystyle-\eta^{\alpha\nu}\eta^{\rho\sigma}p_{1}^{\mu}p_{2}^{\beta}-\eta^{\beta\nu}\eta^{\rho\sigma}p_{1}^{\mu}p_{2}^{\alpha}-\eta^{\alpha\sigma}\eta^{\beta\rho}p_{1}^{\nu}p_{2}^{\mu}-\eta^{\alpha\rho}\eta^{\beta\sigma}p_{1}^{\nu}p_{2}^{\mu}+\eta^{\alpha\beta}\eta^{\rho\sigma}p_{1}^{\nu}p_{2}^{\mu}-\eta^{\alpha\sigma}\eta^{\beta\rho}p_{1}^{\nu}p_{3}^{\mu}$
$\displaystyle-\eta^{\alpha\rho}\eta^{\beta\sigma}p_{1}^{\nu}p_{3}^{\mu}+\eta^{\alpha\beta}\eta^{\rho\sigma}p_{1}^{\nu}p_{3}^{\mu}-\eta^{\alpha\beta}\eta^{\mu\sigma}p_{1}^{\nu}p_{3}^{\rho}-\eta^{\alpha\beta}\eta^{\mu\rho}p_{1}^{\nu}p_{3}^{\sigma}-\eta^{\alpha\mu}\eta^{\rho\sigma}p_{1}^{\nu}p_{2}^{\beta}-\eta^{\beta\mu}\eta^{\rho\sigma}p_{1}^{\nu}p_{2}^{\alpha}$
$\displaystyle+\eta^{\beta\nu}\eta^{\mu\sigma}p_{1}^{\rho}p_{2}^{\alpha}+\eta^{\beta\mu}\eta^{\nu\sigma}p_{1}^{\rho}p_{2}^{\alpha}+\eta^{\alpha\nu}\eta^{\mu\sigma}p_{1}^{\rho}p_{2}^{\beta}+\eta^{\alpha\mu}\eta^{\nu\sigma}p_{1}^{\rho}p_{2}^{\beta}+\eta^{\beta\nu}\eta^{\alpha\sigma}p_{1}^{\rho}p_{2}^{\mu}+\eta^{\alpha\nu}\eta^{\beta\sigma}p_{1}^{\rho}p_{2}^{\mu}$
$\displaystyle+\eta^{\beta\nu}\eta^{\alpha\sigma}p_{1}^{\rho}p_{3}^{\mu}+\eta^{\alpha\nu}\eta^{\beta\sigma}p_{1}^{\rho}p_{3}^{\mu}+\eta^{\beta\mu}\eta^{\alpha\sigma}p_{1}^{\rho}p_{2}^{\nu}+\eta^{\alpha\mu}\eta^{\beta\sigma}p_{1}^{\rho}p_{2}^{\nu}-\eta^{\alpha\beta}\eta^{\mu\sigma}p_{1}^{\rho}p_{2}^{\nu}+\eta^{\beta\mu}\eta^{\alpha\sigma}p_{1}^{\rho}p_{3}^{\nu}$
$\displaystyle+\eta^{\alpha\mu}\eta^{\beta\sigma}p_{1}^{\rho}p_{3}^{\nu}-\eta^{\alpha\beta}\eta^{\mu\sigma}p_{1}^{\rho}p_{3}^{\nu}-\eta^{\alpha\sigma}\eta^{\mu\nu}p_{1}^{\rho}p_{2}^{\beta}-\eta^{\alpha\sigma}\eta^{\mu\nu}p_{1}^{\rho}p_{3}^{\beta}-\eta^{\beta\sigma}\eta^{\mu\nu}p_{1}^{\rho}p_{2}^{\alpha}-\eta^{\beta\sigma}\eta^{\mu\nu}p_{1}^{\rho}p_{3}^{\alpha}$
$\displaystyle-\eta^{\alpha\nu}\eta^{\beta\mu}p_{1}^{\rho}p_{2}^{\sigma}-\eta^{\alpha\mu}\eta^{\beta\nu}p_{1}^{\rho}p_{2}^{\sigma}+\eta^{\alpha\beta}\eta^{\mu\nu}p_{1}^{\rho}p_{2}^{\sigma}-\eta^{\alpha\nu}\eta^{\beta\mu}p_{1}^{\rho}p_{3}^{\sigma}-\eta^{\alpha\mu}\eta^{\beta\nu}p_{1}^{\rho}p_{3}^{\sigma}+2\eta^{\alpha\beta}\eta^{\mu\nu}p_{1}^{\rho}p_{3}^{\sigma}$
$\displaystyle-\eta^{\alpha\beta}\eta^{\nu\sigma}p_{1}^{\rho}p_{2}^{\mu}-\eta^{\alpha\beta}\eta^{\nu\sigma}p_{1}^{\rho}p_{3}^{\mu}+\eta^{\beta\nu}\eta^{\mu\rho}p_{1}^{\sigma}p_{2}^{\alpha}+\eta^{\beta\mu}\eta^{\nu\rho}p_{1}^{\sigma}p_{2}^{\alpha}+\eta^{\alpha\nu}\eta^{\mu\rho}p_{1}^{\sigma}p_{2}^{\beta}+\eta^{\alpha\mu}\eta^{\nu\rho}p_{1}^{\sigma}p_{2}^{\beta}$
$\displaystyle+\eta^{\beta\nu}\eta^{\alpha\rho}p_{1}^{\sigma}p_{2}^{\mu}+\eta^{\alpha\nu}\eta^{\beta\rho}p_{1}^{\sigma}p_{2}^{\mu}+\eta^{\beta\nu}\eta^{\alpha\rho}p_{1}^{\sigma}p_{3}^{\mu}+\eta^{\alpha\nu}\eta^{\beta\rho}p_{1}^{\sigma}p_{3}^{\mu}+\eta^{\beta\mu}\eta^{\alpha\rho}p_{1}^{\sigma}p_{2}^{\nu}+\eta^{\alpha\mu}\eta^{\beta\rho}p_{1}^{\sigma}p_{2}^{\nu}$
$\displaystyle-\eta^{\alpha\beta}\eta^{\mu\rho}p_{1}^{\sigma}p_{2}^{\nu}+\eta^{\beta\mu}\eta^{\alpha\rho}p_{1}^{\sigma}p_{3}^{\nu}+\eta^{\alpha\mu}\eta^{\beta\rho}p_{1}^{\sigma}p_{3}^{\nu}-\eta^{\alpha\beta}\eta^{\mu\rho}p_{1}^{\sigma}p_{3}^{\nu}-\eta^{\alpha\rho}\eta^{\mu\nu}p_{1}^{\sigma}p_{2}^{\beta}-\eta^{\alpha\rho}\eta^{\mu\nu}p_{1}^{\sigma}p_{3}^{\beta}$
$\displaystyle-\eta^{\beta\rho}\eta^{\mu\nu}p_{1}^{\sigma}p_{2}^{\alpha}-\eta^{\beta\rho}\eta^{\mu\nu}p_{1}^{\sigma}p_{3}^{\alpha}-\eta^{\alpha\nu}\eta^{\beta\mu}p_{1}^{\sigma}p_{2}^{\rho}-\eta^{\alpha\mu}\eta^{\beta\nu}p_{1}^{\sigma}p_{2}^{\rho}+\eta^{\alpha\beta}\eta^{\mu\nu}p_{1}^{\sigma}p_{2}^{\rho}-\eta^{\alpha\nu}\eta^{\beta\mu}p_{1}^{\sigma}p_{3}^{\rho}$
$\displaystyle-\eta^{\alpha\mu}\eta^{\beta\nu}p_{1}^{\sigma}p_{3}^{\rho}+2\eta^{\alpha\beta}\eta^{\mu\nu}p_{1}^{\sigma}p_{3}^{\rho}-\eta^{\alpha\beta}\eta^{\nu\rho}p_{1}^{\sigma}p_{2}^{\mu}-\eta^{\alpha\beta}\eta^{\nu\rho}p_{1}^{\sigma}p_{3}^{\mu}\,.$
(67ln)
In Eq.(67llb), the trilinear gravitational vertex function
$\Gamma^{\alpha\beta\mu\nu}_{nm\ell}$ is defined as follows:
$\displaystyle\Gamma^{\alpha\beta\mu\nu}_{nm\ell}(p_{1},p_{2})=$
$\displaystyle-2\eta^{\alpha\beta}p_{1}^{\mu}p_{1}^{\nu}+2\eta^{\alpha\beta}\eta^{\mu\nu}p_{1}^{2}+\eta^{\alpha\mu}p_{1}^{\beta}p_{1}^{\nu}+\eta^{\alpha\nu}p_{1}^{\beta}p_{1}^{\mu}+\eta^{\beta\mu}p_{1}^{\alpha}p_{1}^{\nu}-\eta^{\alpha\nu}\eta^{\beta\mu}p_{1}^{2}$
$\displaystyle+\eta^{\beta\nu}p_{1}^{\alpha}p_{1}^{\mu}-\eta^{\alpha\mu}\eta^{\beta\nu}p_{1}^{2}-2\eta^{\mu\nu}p_{1}^{\alpha}p_{1}^{\beta}-\eta^{\alpha\beta}p_{1}^{\nu}p_{2}^{\mu}-\eta^{\alpha\beta}p_{1}^{\mu}p_{2}^{\nu}+3\eta^{\alpha\beta}\eta^{\mu\nu}(p_{1}\\!\cdot\\!p_{2})$
$\displaystyle+\eta^{\alpha\mu}p_{1}^{\nu}p_{2}^{\beta}\\!+\eta^{\alpha\nu}p_{1}^{\mu}p_{2}^{\beta}+\eta^{\beta\mu}p_{1}^{\alpha}p_{2}^{\nu}-\eta^{\alpha\nu}\eta^{\beta\mu}(p_{1}\\!\cdot\\!p_{2})\\!+\eta^{\beta\nu}p_{1}^{\alpha}p_{2}^{\mu}-\eta^{\alpha\mu}\eta^{\beta\nu}(p_{1}\\!\cdot\\!p_{2})$
$\displaystyle-\eta^{\mu\nu}p_{1}^{\beta}p_{2}^{\alpha}-2\eta^{\mu\nu}p_{1}^{\alpha}p_{2}^{\beta}-2\eta^{\alpha\beta}p_{2}^{\mu}p_{2}^{\nu}+2\eta^{\alpha\beta}\eta^{\mu\nu}p_{2}^{2}+\eta^{\alpha\mu}p_{2}^{\beta}p_{2}^{\nu}+\eta^{\alpha\nu}p_{2}^{\beta}p_{2}^{\mu}$
$\displaystyle+\eta^{\beta\mu}p_{2}^{\alpha}p_{2}^{\nu}-\eta^{\alpha\nu}\eta^{\beta\mu}p_{2}^{2}+\eta^{\beta\nu}p_{2}^{\alpha}p_{2}^{\mu}-\eta^{\alpha\mu}\eta^{\beta\nu}p_{2}^{2}-2\eta^{\mu\nu}p_{2}^{\alpha}p_{2}^{\beta}\,.$
(67lo)
Moreover, the trilinear coupling coefficients $(\alpha_{nm\ell},\hskip
0.85358pt\tilde{\alpha}_{nm\ell},\hskip 0.85358pt\tilde{\beta}_{nm\ell},\hskip
0.85358pt\tilde{\rho}_{nm\ell})$ in Eq.(67ll) are defined as follows:
$\displaystyle\alpha_{nm\ell}$
$\displaystyle=\frac{1}{\,L\,}\int_{0}^{L}\\!\\!\text{d}z\,e^{3A(z)}\mathsf{u}_{n}(z)\mathsf{u}_{m}(z)\mathsf{u}_{\ell}(z)\hskip
0.85358pt,$ (67lpa) $\displaystyle\tilde{\alpha}_{nm\ell}$
$\displaystyle=\frac{1}{\,L\,}\int_{0}^{L}\\!\\!\text{d}z\,e^{3A(z)}\mathsf{v}_{n}(z)\mathsf{v}_{m}(z)\mathsf{u}_{\ell}(z)\hskip
0.85358pt,$ (67lpb) $\displaystyle\tilde{\beta}_{nm\ell}$
$\displaystyle=\frac{1}{\,L\,}\int_{0}^{L}\\!\\!\text{d}z\,e^{3A(z)}\mathsf{w}_{n}(z)\mathsf{w}_{m}(z)\mathsf{u}_{\ell}(z)\hskip
0.85358pt,$ (67lpc) $\displaystyle\tilde{\rho}_{nm\ell}$
$\displaystyle=\frac{1}{\,L\,}\int_{0}^{L}\\!\\!\text{d}z\,e^{3A(z)}\mathsf{v}_{n}(z)\mathsf{v}_{m}(z)\mathsf{w}_{\ell}(z)\hskip
0.85358pt,$ (67lpd) $\displaystyle\tilde{\omega}_{nm\ell}$
$\displaystyle=\frac{1}{\,L\,}\int_{0}^{L}\\!\\!\text{d}z\,e^{3A(z)}\mathsf{w}_{n}(z)\mathsf{w}_{m}(z)\mathsf{w}_{\ell}(z)\hskip
0.85358pt.$ (67lpe)
In particular, we can derive the trilinear coupling coefficients containing
zero-modes, $\hskip
0.85358pt\alpha_{000}\\!=\\!\alpha_{nn0}\\!=\\!\tilde{\alpha}_{nn0}\\!=\\!\mathsf{u}_{0}\hskip
0.85358pt$ and $\hskip
0.85358pta_{000}\\!=\\!a_{nn0}\\!=\\!\tilde{a}_{nn0}\\!=\\!\mathsf{f}_{0}\hskip
0.85358pt$, based on the normalization condition Eq.(51). We summarize these
coupling coefficients as follows:
$\displaystyle\alpha_{000}$
$\displaystyle=\frac{1}{\,L\,}\\!\int_{0}^{L}\\!\\!\text{d}z\,e^{3A(z)}\hskip
0.85358pt\mathsf{u}_{0}^{2}\hskip 0.85358pt\mathsf{u}_{0}=\mathsf{u}_{0}\,,$
(67lqa) $\displaystyle\alpha_{nn0}$
$\displaystyle=\frac{1}{\,L\,}\\!\int_{0}^{L}\\!\\!\text{d}z\,e^{3A(z)}\hskip
0.85358pt\mathsf{u}_{n}^{2}\hskip 0.85358pt\mathsf{u}_{0}=\mathsf{u}_{0}\,,$
(67lqb) $\displaystyle\tilde{\alpha}_{nn0}$
$\displaystyle=\frac{1}{\,L\,}\\!\int_{0}^{L}\\!\\!\text{d}z\,e^{3A(z)}\hskip
0.85358pt\mathsf{v}_{n}^{2}\hskip 0.85358pt\mathsf{u}_{0}=\mathsf{u}_{0}\,,$
(67lqc) $\displaystyle a_{000}$
$\displaystyle=\frac{1}{\,L\,}\\!\int_{0}^{L}\\!\\!\text{d}z\,e^{A(z)}\hskip
0.85358pt\mathsf{f}_{0}^{2}\hskip 0.85358pt\mathsf{f}_{0}=\mathsf{f}_{0}\,,$
(67lqd) $\displaystyle a_{nn0}$
$\displaystyle=\frac{1}{\,L\,}\\!\int_{0}^{L}\\!\\!\text{d}z\,e^{A(z)}\hskip
0.85358pt\mathsf{f}_{n}^{2}\hskip 0.85358pt\mathsf{f}_{0}=\mathsf{f}_{0}\,,$
(67lqe) $\displaystyle\tilde{a}_{nn0}$
$\displaystyle=\frac{1}{\,L\,}\\!\int_{0}^{L}\\!\\!\text{d}z\,e^{A(z)}\hskip
0.85358pt\tilde{\mathsf{f}}_{n}^{2}\hskip
0.85358pt\mathsf{f}_{0}=\mathsf{f}_{0}\,.$ (67lqf)
In addition, we further define the following quartic coupling coefficients:
$\displaystyle\alpha_{nm\ell q}$
$\displaystyle=\frac{1}{\,L\,}\\!\int_{0}^{L}\\!\\!\\!\text{d}z\,e^{3A(z)}\hskip
0.85358pt\mathsf{u}_{n}(z)\mathsf{u}_{m}(z)\mathsf{u}_{\ell}(z)\mathsf{u}_{q}(z)\,,$
(67lra) $\displaystyle\tilde{\beta}_{nm\ell q}$
$\displaystyle=\frac{1}{\,L\,}\\!\int_{0}^{L}\\!\\!\\!\text{d}z\,e^{3A(z)}\hskip
0.85358pt\mathsf{w}_{n}(z)\mathsf{w}_{m}(z)\mathsf{w}_{\ell}(z)\mathsf{w}_{q}(z)\,,$
(67lrb)
which will be used for the double-copy construction of Section 4.2 and for
deriving the gravitational sum rules in Appendix D.
Taking the flat space limit $k\\!\rightarrow\\!0\hskip 0.85358pt$, we see that
all the coupling coefficients will be reduced to the simple trigonometric
functions:
$\displaystyle\mathsf{f}_{0}=\mathsf{u}_{0}=\mathsf{w}_{0}=1\,,$
$\displaystyle\mathsf{f}_{n}(z)=\mathsf{u}_{n}(z)=\mathsf{w}_{n}(z)=\sqrt{2\,}\cos\frac{\,n\pi
z\,}{L}\hskip 0.85358pt,$ (67ls)
$\displaystyle\tilde{\mathsf{f}}_{n}(z)=\mathsf{v}_{n}(z)=\sqrt{2\,}\sin\frac{\,n\pi
z\,}{L},$
where $n\in\mathbb{Z}^{+}$. Thus, together with the definitions (67lq)-(67lr),
we deduce the values of these KK coupling coefficients in the flat 5d limit,
which are summarized in Table 1.
Cubic and Quartic Couplings (flat 5d) | Values
---|---
$\tilde{a}_{nn2n},\,\tilde{a}_{nm|n\pm m|},\,\tilde{\alpha}_{nn2n},\,\tilde{\rho}_{nn2n},\,\tilde{\alpha}_{nm|n\pm m|},\,\tilde{\rho}_{nm|n\pm m|}$ | $-\frac{1}{\sqrt{2\,}\,}$
$a_{nm0},\,\tilde{a}_{nm0},\,\alpha_{nm0},\,\tilde{\alpha}_{nm0},\,\tilde{\beta}_{nm0},\,\tilde{\rho}_{nm0},\,\tilde{\omega}_{nm0}$ | 0
$a_{nm\ell q},\,\tilde{a}_{nm\ell q},\,\alpha_{nm\ell q},\,\tilde{\beta}_{nm\ell q}$ | $\frac{1}{\,2\,}$
$a_{nn2n},\,a_{nm|n\pm m|},\,\alpha_{nn2n},\,\tilde{\beta}_{nn2n},\,\tilde{\omega}_{nn2n},\,\alpha_{nm|n\pm m|},\,\tilde{\beta}_{nm|n\pm m|},\,\tilde{\omega}_{nm|n\pm m|}$ | $\frac{1}{\sqrt{2\,}\,}$
$a_{nn0},\,\tilde{a}_{nn0},\,a_{nnmm},\,\alpha_{nn0},\,\tilde{\alpha}_{nn0},\,\tilde{\beta}_{nn0},\,\tilde{\rho}_{nn0},\,\tilde{\omega}_{nn0},\,\alpha_{nnmm},\,\tilde{\alpha}_{nnmm},\,\tilde{\beta}_{nnmm}$ | 1
$a_{nnnn},\,\alpha_{nnnn},\,\tilde{\alpha}_{nnnn},\,\tilde{\beta}_{nnnn}$ | $\frac{\,3\,}{2}$
Table 1: List of relevant cubic and quartic KK coupling coefficients of the
flat 5d gauge and gravity theories under $S^{1}\\!/\mathbb{Z}_{2}$
compactification. The subscripts are the relevant KK indices with
$n\\!\neq\\!m\\!\neq\\!\ell\\!\neq\\!q$ and $(n,m,\ell,q)\in\mathbb{Z}^{+}$.
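The entries of Table 1 follow from elementary trigonometric integrals of the flat-limit wavefunctions (67ls). As an illustration (ours, not part of the original derivation), the following Python sketch reproduces a representative subset of the table numerically, setting $L=\pi$ so that $M_{n}=n$ in these units; the helper names trap, f, ft, a, at are our own:

```python
import numpy as np

L = np.pi
z = np.linspace(0.0, L, 20001)

def trap(y):  # trapezoidal integral over [0, L]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z)))

def f(n):   # even flat-limit wavefunction of Eq.(67ls)
    return np.ones_like(z) if n == 0 else np.sqrt(2.0) * np.cos(n * z)

def ft(n):  # odd flat-limit wavefunction of Eq.(67ls)
    return np.sqrt(2.0) * np.sin(n * z)

def a(*ns):  # a_{nm...} of Eqs.(67lja), (67ljc) with A(z) = 0
    return trap(np.prod([f(k) for k in ns], axis=0)) / L

def at(n, m, l):  # tilde{a}_{nml} of Eq.(67ljb) with A(z) = 0
    return trap(ft(n) * ft(m) * f(l)) / L

n, m = 2, 3
print(a(n, m, n + m), at(n, m, n + m))  # +1/sqrt(2) and -1/sqrt(2)
print(a(n, n, 0), a(n, n, m, m))        # 1 and 1
print(a(n, n, n, n))                    # 3/2
```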
Finally, we derive the on-shell 3-point KK gluon scattering amplitudes as
follows:
$\displaystyle\mathcal{T}[\\{\epsilon_{i}\\}]$ $\displaystyle=g\hskip
0.85358ptf^{abc}\mathcal{N}[\\{\epsilon_{j}\\}]\hskip 0.85358pt,$ (67maa)
$\displaystyle\mathcal{N}[\\{\epsilon_{j}\\}]$ $\displaystyle=-\text{i}\hskip
0.85358pt2\hskip 0.85358ptg\hskip 0.85358pta_{nm\ell}\big{[}\hskip
0.85358pt(\epsilon_{1}\epsilon_{2})(\epsilon_{3}p_{1})+(\epsilon_{2}\epsilon_{3})(\epsilon_{1}p_{2})+(\epsilon_{3}\epsilon_{1})(\epsilon_{2}p_{3})\hskip
0.85358pt\big{]}\hskip 0.85358pt,$ (67mab)
which reproduces the formula (67fz) in the main text.
Then, we can express the 3-point KK graviton amplitude in the following form:
$\displaystyle\mathcal{M}\big{[}h^{\sigma_{1}}_{n}h^{\sigma_{2}}_{m}h^{\sigma_{3}}_{\ell}\big{]}$
$\displaystyle=\sum_{\lambda_{j},\lambda^{\prime}_{j}}\\!\\!\Big{(}\\!\prod_{j}\\!C_{\lambda_{j}\lambda^{\prime}_{j}}^{\hskip
0.85358pt\sigma_{j}}\Big{)}\mathcal{M}\big{[}\\{e_{i}\\}^{\lambda_{j}}\\!,\\{\epsilon_{i}\\}^{\lambda^{\prime}_{j}}\big{]}$
$\displaystyle\equiv\frac{~{}\kappa\hskip
0.85358pt\alpha_{nm\ell}~{}}{4}\\!\sum_{\lambda_{j},\lambda^{\prime}_{j}}\\!\\!\Big{(}\\!\prod_{j}\\!C_{\lambda_{j}\lambda^{\prime}_{j}}^{\hskip
0.85358pt\sigma_{j}}\Big{)}\hbox to0.0pt{\hskip
2.91666pt\leavevmode\hbox{\set@color$\overline{\hbox{}}$}\hss}{\leavevmode\hbox{\set@color$\mathcal{M}$}}\big{[}\\{e_{i}\\}^{\lambda_{j}}\\!,\\{\epsilon_{i}\\}^{\lambda^{\prime}_{j}}\big{]}\hskip
0.85358pt,$ (67mb)
where the helicity index $\,\sigma_{j}\\!=\\!\\{\pm 2,\pm
1,0\\}\\!\equiv\\!\\{\pm 2,\pm 1,L\\}\,$ labels the 5 helicity states of each
external massive KK graviton. The polarization tensors (including the
coefficients $C_{\lambda_{j}\lambda^{\prime}_{j}}^{\hskip
0.85358pt\sigma_{j}}$) of the external KK graviton states are defined as in
Eq.(67gf) of the main text. Then, we explicitly compute the on-shell 3-point
KK graviton scattering amplitudes as follows:
$\displaystyle\hbox to0.0pt{\hskip
2.91666pt\leavevmode\hbox{\set@color$\overline{\hbox{}}$}\hss}{\leavevmode\hbox{\set@color$\mathcal{M}$}}[\\{e_{i}\\},\\{\epsilon_{i}\\}]=\kappa\hskip
0.85358pt\alpha_{nm\ell}\left[e_{1,\mu}\epsilon_{1,\nu}e_{2,\rho}\epsilon_{2,\sigma}e_{3,\alpha}\epsilon_{3,\beta}\Gamma^{\mu\nu\rho\sigma\alpha\beta}_{nm\ell}(p_{1},p_{2},p_{3})\right]$
$\displaystyle=$ $\displaystyle\frac{~{}\kappa\hskip
0.85358pt\alpha_{nm\ell}~{}}{4}\big{[}(e_{1}\epsilon_{3})(e_{2}p_{3})(e_{3}p_{2})(\epsilon_{1}\epsilon_{2})+(e_{1}e_{3})(e_{2}p_{3})(p_{2}\epsilon_{3})(\epsilon_{1}\epsilon_{2})+(e_{1}p_{3})(e_{2}p_{3})(e_{3}\epsilon_{2})(\epsilon_{1}\epsilon_{3})$
$\displaystyle+(e_{1}\epsilon_{2})(e_{2}p_{3})(e_{3}p_{2})(\epsilon_{1}\epsilon_{3})+(e_{1}p_{3})(e_{2}p_{3})(e_{3}\epsilon_{1})(\epsilon_{2}\epsilon_{3})+(e_{1}e_{3})(e_{2}p_{3})(p_{3}\epsilon_{1})(\epsilon_{2}\epsilon_{3})$
$\displaystyle+(e_{1}\epsilon_{2})(e_{2}p_{3})(e_{3}\epsilon_{1})(p_{2}\epsilon_{3})-2(e_{1}\epsilon_{2})(e_{2}\epsilon_{1})(e_{3}p_{2})(p_{2}\epsilon_{3})-2(e_{1}e_{2})(e_{3}p_{2})(p_{2}\epsilon_{3})(\epsilon_{1}\epsilon_{2})$
$\displaystyle+(e_{1}e_{3})(e_{2}\epsilon_{1})(p_{3}\epsilon_{2})(p_{2}\epsilon_{3})+(e_{1}e_{2})(e_{3}\epsilon_{1})(p_{3}\epsilon_{2})(p_{2}\epsilon_{3})-(e_{1}p_{3})(e_{2}\epsilon_{3})(e_{3}p_{2})(\epsilon_{1}\epsilon_{2})$
$\displaystyle-(e_{1}p_{3})(e_{2}\epsilon_{1})(e_{3}p_{2})(\epsilon_{2}\epsilon_{3})-(e_{1}p_{3})(e_{2}\epsilon_{1})(e_{3}\epsilon_{2})(p_{2}\epsilon_{3})-(e_{1}p_{3})(e_{2}e_{3})(p_{2}\epsilon_{3})(\epsilon_{1}\epsilon_{2})$
$\displaystyle+(e_{1}\epsilon_{3})(e_{2}p_{3})(e_{3}\epsilon_{2})(p_{3}\epsilon_{1})-(e_{1}\epsilon_{2})(e_{2}\epsilon_{3})(e_{3}p_{2})(p_{3}\epsilon_{1})-(e_{1}e_{2})(e_{3}p_{2})(p_{3}\epsilon_{1})(\epsilon_{2}\epsilon_{3})$
$\displaystyle-(e_{1}\epsilon_{2})(e_{2}e_{3})(p_{3}\epsilon_{1})(p_{2}\epsilon_{3})-(e_{1}e_{2})(e_{3}\epsilon_{2})(p_{3}\epsilon_{1})(p_{2}\epsilon_{3})-2(e_{1}p_{3})(e_{2}\epsilon_{3})(e_{3}\epsilon_{2})(p_{3}\epsilon_{1})$
$\displaystyle-2(e_{1}p_{3})(e_{2}e_{3})(p_{3}\epsilon_{1})(\epsilon_{2}\epsilon_{3})+(e_{1}p_{3})(e_{2}\epsilon_{3})(e_{3}\epsilon_{1})(p_{3}\epsilon_{2})+(e_{1}p_{3})(e_{2}e_{3})(p_{3}\epsilon_{2})(\epsilon_{1}\epsilon_{3})$
$\displaystyle+(e_{1}e_{2})(e_{3}p_{2})(p_{3}\epsilon_{2})(\epsilon_{1}\epsilon_{3})+(e_{1}\epsilon_{3})(e_{2}\epsilon_{1})(e_{3}p_{2})(p_{3}\epsilon_{2})-2(e_{1}\epsilon_{3})(e_{2}p_{3})(e_{3}\epsilon_{1})(p_{3}\epsilon_{2})$
$\displaystyle-\\!2(e_{1}e_{3})(e_{2}p_{3})(p_{3}\epsilon_{2})(\epsilon_{1}\epsilon_{3})\\!+\\!(e_{1}\epsilon_{3})(e_{2}e_{3})(p_{3}\epsilon_{1})(p_{3}\epsilon_{2})\\!+\\!(e_{1}e_{3})(e_{2}\epsilon_{3})(p_{3}\epsilon_{1})(p_{3}\epsilon_{2})\big{]},$
(67mc)
where for simplicity we have introduced a shorthand notation,
$(ab)\\!\equiv\\!(a\cdot b)$.
## Appendix D Proving 3-Point and 4-Point Warped Identities
In this Appendix, we will prove relevant identities and sum rules for the
cubic and quartic coupling coefficients in the warped KK gauge theory and KK
gravity theory.
By imposing the following completeness conditions,
$\displaystyle\delta(z-z^{\prime})$
$\displaystyle=\sum_{j=0}^{\infty}\\!e^{A(z)}\mathsf{f}_{j}(z)\mathsf{f}_{j}(z^{\prime})=\sum_{j=0}^{\infty}\\!e^{A(z)}\tilde{\mathsf{f}}_{j}(z)\tilde{\mathsf{f}}_{j}(z^{\prime})\hskip
0.85358pt,$ (67mda) $\displaystyle\delta(z-z^{\prime})$
$\displaystyle=\sum_{j=0}^{\infty}\\!e^{3A(z)}\mathsf{u}_{j}(z)\mathsf{u}_{j}(z^{\prime})=\sum_{j=0}^{\infty}\\!e^{3A(z)}\mathsf{v}_{j}(z)\mathsf{v}_{j}(z^{\prime})=\sum_{j=0}^{\infty}\\!e^{3A(z)}\mathsf{w}_{j}(z)\mathsf{w}_{j}(z^{\prime})\hskip
0.85358pt,$ (67mdb)
we can derive the sum rules between the cubic and quartic KK gauge/gravity
coupling coefficients respectively:
$\displaystyle\sum_{j=0}^{\infty}a_{nmj}a_{\ell qj}=\,a_{nm\ell q}\hskip
0.85358pt,\qquad~{}~{}\sum_{j=0}^{\infty}\alpha_{nmj}\alpha_{\ell
qj}=\,\alpha_{nm\ell q}\hskip 0.85358pt,$ (67mea)
$\displaystyle\sum_{j=0}^{\infty}\tilde{a}_{nmj}\tilde{a}_{\ell
qj}=\,\tilde{a}_{nm\ell q}\hskip
0.85358pt,\qquad~{}~{}\sum_{j=0}^{\infty}\tilde{\beta}_{nmj}\tilde{\beta}_{\ell
qj}=\,\tilde{\beta}_{nm\ell q}\hskip 0.85358pt.$ (67meb)
For the special cases with some or all of the KK indices being equal, we can
recast the above sum rules in the following simpler forms:
$\displaystyle\sum_{j=0}^{\infty}a_{nnj}^{2}$ $\displaystyle=a_{nnnn}\hskip
0.85358pt,\qquad~{}~{}\sum_{j=0}^{\infty}a_{nnj}a_{mmj}=\sum_{j=0}^{\infty}a_{nmj}^{2}=a_{nnmm}\hskip
0.85358pt,$ (67mfa) $\displaystyle\sum_{j=0}^{\infty}\alpha_{nnj}^{2}$
$\displaystyle=\alpha_{nnnn}\hskip
0.85358pt,\qquad~{}~{}\sum_{j=0}^{\infty}\alpha_{nnj}\alpha_{mmj}=\sum_{j=0}^{\infty}\alpha_{nmj}^{2}=\alpha_{nnmm}\hskip
0.85358pt,$ (67mfb) $\displaystyle\sum_{j=0}^{\infty}\tilde{a}_{nnj}^{2}$
$\displaystyle=\tilde{a}_{nnnn}\hskip
0.85358pt,\qquad~{}~{}\sum_{j=0}^{\infty}\tilde{a}_{nnj}\tilde{a}_{mmj}=\sum_{j=0}^{\infty}\tilde{a}_{nmj}^{2}=\tilde{a}_{nnmm}\hskip
0.85358pt,$ (67mfc) $\displaystyle\sum_{j=0}^{\infty}\tilde{\beta}_{nnj}^{2}$
$\displaystyle=\tilde{\beta}_{nnnn}\hskip
0.85358pt,\qquad~{}~{}\sum_{j=0}^{\infty}\tilde{\beta}_{nnj}\tilde{\beta}_{mmj}=\sum_{j=0}^{\infty}\tilde{\beta}_{nmj}^{2}=\tilde{\beta}_{nnmm}\hskip
0.85358pt.$ (67mfd)
Many of these relations will be used in Sections 3 and 4 of the main text.
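In the flat 5d limit, these completeness-based sum rules can be verified directly, since only a few KK levels contribute to each series. Below is a minimal numerical sketch of ours (flat wavefunctions of Eq.(67ls), $L=\pi$, hence $M_{j}=j$) checking the second relation of Eq.(67mfa):

```python
import numpy as np

L = np.pi
z = np.linspace(0.0, L, 20001)
trap = lambda y: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z)))
f = lambda n: np.ones_like(z) if n == 0 else np.sqrt(2.0) * np.cos(n * z)
a = lambda *ns: trap(np.prod([f(k) for k in ns], axis=0)) / L

n, m = 1, 2
lhs = sum(a(n, m, j) ** 2 for j in range(12))  # series truncates: j = 1, 3 only
print(lhs, a(n, n, m, m))                      # both equal 1, cf. Eq.(67mfa)
```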
Next, by utilizing the equation of motion (14a) and integration by parts, we
derive the following relation:
$\displaystyle\int_{0}^{L}\\!\\!\text{d}z\,e^{A(z)}M_{n}^{2}\,\mathsf{f}_{n}\hskip
0.85358pt\mathsf{f}_{m}\mathsf{f}_{\ell}$ $\displaystyle\,=\hskip
0.85358pt-\\!\int_{0}^{L}\\!\\!\text{d}z\,\partial_{z}(e^{A(z)}\partial_{z}\mathsf{f}_{n})\hskip
0.85358pt\mathsf{f}_{m}\mathsf{f}_{\ell}$
$\displaystyle=\int_{0}^{L}\\!\\!\text{d}z\,e^{A(z)}\big{(}\mathsf{f}_{n}^{\prime}\mathsf{f}_{m}^{\prime}\mathsf{f}_{\ell}+\mathsf{f}_{n}^{\prime}\mathsf{f}_{m}\mathsf{f}_{\ell}^{\prime})\hskip
0.85358pt.$ (67mg)
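As a cross-check, the relation (67mg) can be verified numerically in the flat limit ($A(z)=0$, $L=\pi$, $M_{n}=n$), where the boundary terms of the integration by parts vanish because $\mathsf{f}_{n}^{\prime}\propto\sin(n\pi z/L)$. The sketch below is our own illustration:

```python
import numpy as np

L = np.pi
z = np.linspace(0.0, L, 20001)
trap = lambda y: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z)))
f  = lambda n: np.sqrt(2.0) * np.cos(n * z)       # flat-limit f_n for n >= 1
df = lambda n: -np.sqrt(2.0) * n * np.sin(n * z)  # d f_n / dz

n, m, l = 1, 2, 3
lhs = n ** 2 * trap(f(n) * f(m) * f(l))           # M_n^2 = n^2 for L = pi
rhs = trap(df(n) * df(m) * f(l) + df(n) * f(m) * df(l))
print(lhs, rhs)   # equal, confirming Eq.(67mg) with A(z) = 0
```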
Thus, using the relation (67mg) and the definitions given in
Eqs.(67lja)-(67ljb), we obtain the following identities, which are connected
to each other by cycling the three KK indices $(n,m,\ell)\hskip 0.85358pt$:
$\displaystyle M_{n}^{2}\,a_{nm\ell}$
$\displaystyle=M_{n}M_{m}\,\tilde{a}_{nm\ell}+M_{n}M_{\ell}\,\tilde{a}_{n\ell
m}\hskip 0.85358pt,$ $\displaystyle M_{m}^{2}\hskip 0.85358pta_{nm\ell}$
$\displaystyle=M_{n}M_{m}\,\tilde{a}_{nm\ell}+M_{m}M_{\ell}\,\tilde{a}_{m\ell
n}\hskip 0.85358pt,$ (67mh) $\displaystyle M_{\ell}^{2}\hskip
0.85358pta_{nm\ell}$ $\displaystyle=M_{n}M_{\ell}\,\tilde{a}_{n\ell
m}+M_{m}M_{\ell}\,\tilde{a}_{m\ell n}\hskip 0.85358pt.$
From these identities, we can further derive a relation connecting the
coupling coefficient $a_{nm\ell}$ to $\tilde{a}_{nm\ell}\hskip 0.85358pt$:
$\big{(}M_{n}^{2}+M_{m}^{2}\\!-M_{\ell}^{2}\big{)}a_{nm\ell}\,=\,2\hskip
0.85358ptM_{n}M_{m}\hskip 0.85358pt\tilde{a}_{nm\ell}\,,$ (67mi)
which just gives Eq.(67dg) in the main text. Similarly, we can derive the
following relation between the wavefunction couplings $\alpha_{nm\ell}$ and
$\tilde{\alpha}_{nm\ell}\hskip 0.85358pt$:
$\big{(}\mathbb{M}_{n}^{2}+\mathbb{M}_{m}^{2}\\!-\mathbb{M}_{\ell}^{2}\big{)}\alpha_{nm\ell}\,=\,2\hskip
0.85358pt\mathbb{M}_{n}\mathbb{M}_{m}\hskip
0.85358pt\tilde{\alpha}_{nm\ell}\,,$ (67mj)
which further gives Eq.(67dr) in the main text.
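Both mass identities can be checked numerically in the flat limit, where Eq.(67mi) and Eq.(67mj) coincide since the gauge and gravity wavefunctions become identical. A small sketch of ours (flat wavefunctions, $L=\pi$, $M_{n}=n$) follows:

```python
import numpy as np

L = np.pi
z = np.linspace(0.0, L, 20001)
trap = lambda y: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z)))
f  = lambda n: np.ones_like(z) if n == 0 else np.sqrt(2.0) * np.cos(n * z)
ft = lambda n: np.sqrt(2.0) * np.sin(n * z)

n, m, l = 1, 2, 3
a_nml  = trap(f(n) * f(m) * f(l)) / L    # a_{nml}
at_nml = trap(ft(n) * ft(m) * f(l)) / L  # tilde{a}_{nml}
print((n**2 + m**2 - l**2) * a_nml)      # -2*sqrt(2)
print(2 * n * m * at_nml)                # -2*sqrt(2), confirming Eq.(67mi)
```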
Moreover, by utilizing Eq.(14a) and integration by parts, we derive the
following relation containing four wavefunctions of the warped KK gauge
theory:
$\int_{0}^{L}\\!\\!\text{d}z\,e^{A(z)}M_{n}^{2}\hskip
0.85358pt\mathsf{f}_{n}\mathsf{f}_{m}\mathsf{f}_{\ell}\hskip
0.85358pt\mathsf{f}_{q}\,=\int_{0}^{L}\\!\\!\text{d}z\,e^{A(z)}\big{(}\mathsf{f}_{n}^{\prime}\mathsf{f}_{m}^{\prime}\mathsf{f}_{\ell}\hskip
0.85358pt\mathsf{f}_{q}\\!+\mathsf{f}_{n}^{\prime}\mathsf{f}_{m}\mathsf{f}_{\ell}^{\prime}\hskip
0.85358pt\mathsf{f}_{q}\\!+\mathsf{f}_{n}^{\prime}\mathsf{f}_{\ell}\hskip
0.85358pt\mathsf{f}_{m}\mathsf{f}_{q}^{\prime}\big{)}\hskip 0.85358pt.$ (67mk)
With this relation, we further derive the following identities by cycling the
four KK indices $(n,m,\ell,q)\hskip 0.85358pt$:
$\displaystyle M_{n}^{2}\,a_{nm\ell q}$ $\displaystyle\hskip 0.85358pt=\hskip
0.85358ptM_{n}M_{m}\hskip 0.85358pt\llbracket\hskip
0.85358pt\tilde{\mathsf{f}}_{n}\tilde{\mathsf{f}}_{m}\mathsf{f}_{\ell}\hskip
0.85358pt\mathsf{f}_{q}\hskip 0.85358pt\rrbracket\hskip
0.85358pt\\!+\\!M_{n}M_{\ell}\hskip 0.85358pt\hskip 0.85358pt\llbracket\hskip
0.85358pt\tilde{\mathsf{f}}_{n}\tilde{\mathsf{f}}_{\ell}\hskip
0.85358pt\mathsf{f}_{m}\mathsf{f}_{q}\hskip 0.85358pt\rrbracket\hskip
0.85358pt\\!+\\!M_{n}M_{q}\hskip 0.85358pt\llbracket\hskip
0.85358pt\tilde{\mathsf{f}}_{n}\tilde{\mathsf{f}}_{q}\mathsf{f}_{m}\mathsf{f}_{\ell}\hskip
0.85358pt\rrbracket\hskip 0.85358pt,$ $\displaystyle M_{m}^{2}\,a_{nm\ell q}$
$\displaystyle\hskip 0.85358pt=\hskip 0.85358ptM_{n}M_{m}\hskip
0.85358pt\llbracket\hskip
0.85358pt\tilde{\mathsf{f}}_{n}\tilde{\mathsf{f}}_{m}\mathsf{f}_{\ell}\hskip
0.85358pt\mathsf{f}_{q}\hskip 0.85358pt\rrbracket\hskip
0.85358pt\\!+\\!M_{m}M_{\ell}\hskip 0.85358pt\llbracket\hskip
0.85358pt\tilde{\mathsf{f}}_{m}\tilde{\mathsf{f}}_{\ell}\hskip
0.85358pt\mathsf{f}_{n}\mathsf{f}_{q}\hskip 0.85358pt\rrbracket\hskip
0.85358pt\\!+\\!M_{m}M_{q}\hskip 0.85358pt\llbracket\hskip
0.85358pt\tilde{\mathsf{f}}_{m}\tilde{\mathsf{f}}_{q}\mathsf{f}_{n}\mathsf{f}_{\ell}\hskip
0.85358pt\rrbracket\hskip 0.85358pt,$ $\displaystyle M_{\ell}^{2}\,a_{nm\ell
q}$ $\displaystyle\hskip 0.85358pt=\hskip 0.85358ptM_{n}M_{\ell}\hskip
0.85358pt\llbracket\hskip
0.85358pt\tilde{\mathsf{f}}_{n}\tilde{\mathsf{f}}_{\ell}\hskip
0.85358pt\mathsf{f}_{m}\mathsf{f}_{q}\hskip 0.85358pt\rrbracket\hskip
0.85358pt\\!+\\!M_{m}M_{\ell}\hskip 0.85358pt\llbracket\hskip
0.85358pt\tilde{\mathsf{f}}_{m}\tilde{\mathsf{f}}_{\ell}\hskip
0.85358pt\mathsf{f}_{n}\mathsf{f}_{q}\hskip 0.85358pt\rrbracket\hskip
0.85358pt\\!+\\!M_{\ell}M_{q}\hskip 0.85358pt\llbracket\hskip
0.85358pt\tilde{\mathsf{f}}_{\ell}\hskip
0.85358pt\tilde{\mathsf{f}}_{q}\mathsf{f}_{n}\mathsf{f}_{m}\hskip
0.85358pt\rrbracket\hskip 0.85358pt,$ $\displaystyle M_{q}^{2}\,a_{nm\ell q}$
$\displaystyle\hskip 0.85358pt=\hskip 0.85358ptM_{n}M_{q}\hskip
0.85358pt\llbracket\hskip
0.85358pt\tilde{\mathsf{f}}_{n}\tilde{\mathsf{f}}_{q}\mathsf{f}_{m}\mathsf{f}_{\ell}\hskip
0.85358pt\rrbracket\hskip 0.85358pt\\!+\\!M_{m}M_{q}\hskip
0.85358pt\llbracket\hskip
0.85358pt\tilde{\mathsf{f}}_{m}\tilde{\mathsf{f}}_{q}\mathsf{f}_{n}\mathsf{f}_{\ell}\hskip
0.85358pt\rrbracket\hskip 0.85358pt\\!+\\!M_{\ell}M_{q}\hskip
0.85358pt\llbracket\hskip 0.85358pt\tilde{\mathsf{f}}_{\ell}\hskip
0.85358pt\tilde{\mathsf{f}}_{q}\mathsf{f}_{n}\mathsf{f}_{m}\hskip
0.85358pt\rrbracket\hskip 0.85358pt.$ (67ml)
In the above Eq.(67ml), we have used brackets $\hskip 0.85358pt\llbracket\hskip
0.85358pt\cdots\hskip 0.85358pt\rrbracket\hskip 0.85358pt$ to denote the
integration over product of the 5d wavefunctions:
$\hskip 0.85358pt\llbracket\hskip 0.85358pt\mathsf{X}_{n_{1}}\\!\cdots\hskip
0.85358pt\mathsf{X}_{n_{N}}\hskip 0.85358pt\rrbracket\hskip
0.85358pt=\frac{1}{\,L\,}\\!\\!\int_{0}^{L}\\!\\!\text{d}z\,e^{rA(z)}\hskip
0.85358pt\mathsf{X}_{n_{1}}\\!(z)\cdots\mathsf{X}_{n_{N}}\\!(z)\hskip
0.85358pt,$ (67mm)
where $\mathsf{X}_{n}$ represents the 5d wavefunctions and the parameter $r$
is chosen as $r\\!=\\!1\,(r\\!=\\!3)$ for the warped KK gauge (gravity)
theory. Then, summing up the four identities of Eq.(67ml) and making use of
Eq.(67mi), we derive a new sum rule:
$\sum_{j=0}^{\infty}\\!M_{j}^{2}(a_{nmj}a_{\ell qj}\\!+a_{n\ell
j}a_{mqj}\\!+a_{nqj}a_{m\ell
j})=\big{(}M_{n}^{2}\\!+\\!M_{m}^{2}\\!+\\!M_{\ell}^{2}\\!+\\!M_{q}^{2}\big{)}a_{nm\ell
q}\hskip 0.85358pt.$ (67mn)
For the special case of $n\\!=\\!m\\!=\\!\ell\\!=\\!q$, the above identity
reduces to:
$\sum_{j=0}^{\infty}\\!M_{j}^{2}a_{nnj}^{2}=\frac{\,4\,}{3}M_{n}^{2}a_{nnnn}\hskip
0.85358pt,$ (67mo)
which just reproduces the identity (67eib) in the main text. As for the warped
KK gravity theory, we can derive a new sum rule in similar form:
$\sum_{j=0}^{\infty}\\!\mathbb{M}_{j}^{2}(\alpha_{nmj}\alpha_{\ell
qj}\\!+\\!\alpha_{n\ell j}\alpha_{mqj}\\!+\\!\alpha_{nqj}\alpha_{m\ell
j})=\big{(}\mathbb{M}_{n}^{2}\\!+\mathbb{M}_{m}^{2}\\!+\mathbb{M}_{\ell}^{2}\\!+\mathbb{M}_{q}^{2}\big{)}\alpha_{nm\ell
q}\hskip 0.85358pt.$ (67mp)
For the special case of $n\\!=\\!m\\!=\\!\ell\\!=\\!q$, we simplify the above
identity as follows:
$\sum_{j=0}^{\infty}\mathbb{M}_{j}^{2}\alpha_{nnj}^{2}=\frac{\,4\,}{3}\mathbb{M}_{n}^{2}\alpha_{nnnn}\hskip
0.85358pt,$ (67mq)
which reproduces the identity (67ftb) in the main text.
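Since the gauge and gravity wavefunctions coincide in the flat limit, Eq.(67mo) and Eq.(67mq) can be checked with the same numerical sketch (ours; $L=\pi$, $M_{j}=j$, and only $j=2n$ contributes to the weighted sum):

```python
import numpy as np

L = np.pi
z = np.linspace(0.0, L, 20001)
trap = lambda y: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z)))
f = lambda n: np.ones_like(z) if n == 0 else np.sqrt(2.0) * np.cos(n * z)
a = lambda *ns: trap(np.prod([f(k) for k in ns], axis=0)) / L

n = 2
lhs = sum(j**2 * a(n, n, j)**2 for j in range(3 * n))
rhs = (4.0 / 3.0) * n**2 * a(n, n, n, n)
print(lhs, rhs)   # both equal 2*n**2, verifying Eq.(67mo)
```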
Next, we prove the identity (67dw) in the main text. For this purpose, we
first derive the following relation,
$\displaystyle
L\mathbb{M}_{n}\mathbb{M}_{m}\tilde{\beta}_{nm\ell}=\int_{0}^{L}\\!\\!\text{d}z\,e^{3A(z)}\,(\mathbb{M}_{n}\mathsf{w}_{n})(\mathbb{M}_{m}\mathsf{w}_{m})\mathsf{u}_{\ell}$
$\displaystyle=\int_{0}^{L}\\!\\!\text{d}z\hskip
0.85358pt\Big{[}\partial_{z}(e^{A(z)}\mathsf{v}_{n})\Big{]}\Big{[}e^{2A(z)}(A^{\prime}\\!+\partial_{z})\mathsf{v}_{m}\Big{]}\mathsf{u}_{\ell}$
$\displaystyle=-\\!\\!\int_{0}^{L}\\!\\!\\!\text{d}z\Big{(}e^{A(z)}\mathsf{v}_{n}\Big{)}\partial_{z}\Big{[}e^{2A(z)}(A^{\prime}\\!+\partial_{z})\mathsf{v}_{m}\Big{]}\mathsf{u}_{\ell}-\\!\\!\int_{0}^{L}\\!\\!\\!\text{d}z\Big{(}e^{A(z)}\mathsf{v}_{n}\Big{)}\\!\Big{[}e^{2A}(A^{\prime}\\!+\partial_{z})\mathsf{v}_{m}\Big{]}\mathsf{u}_{\ell}^{\prime}\hskip
17.07164pt$
$\displaystyle=L\mathbb{M}_{m}^{2}\tilde{\alpha}_{nm\ell}-L\mathbb{M}_{m}\mathbb{M}_{\ell}\hskip
0.85358pt\tilde{\rho}_{n\ell m}\hskip 0.85358pt,$ (67mr)
which leads to the identity:
$\mathbb{M}_{n}\tilde{\beta}_{nm\ell}=\mathbb{M}_{m}\tilde{\alpha}_{nm\ell}-\mathbb{M}_{\ell}\hskip
0.85358pt\tilde{\rho}_{n\ell m}\,.$ (67ms)
For the right-hand side of Eq.(67ms), the second term contains the trilinear
coupling coefficient $\tilde{\rho}_{nm\ell}$ for which we can derive the
following relation:
$\displaystyle L\mathbb{M}_{n}\tilde{\rho}_{m\ell
n}=\int_{0}^{L}\\!\\!\\!\text{d}z\Big{[}\partial_{z}(e^{A(z)}\mathsf{v}_{n})\Big{]}\\!\Big{(}e^{A(z)}\mathsf{v}_{m}\Big{)}\\!\Big{(}e^{A(z)}\mathsf{v}_{\ell}\Big{)},$
$\displaystyle=-\\!\\!\int_{0}^{L}\\!\\!\\!\text{d}z\Big{(}e^{A(z)}\mathsf{v}_{n}\Big{)}\\!\Big{[}\partial_{z}\big{(}e^{A(z)}\mathsf{v}_{m}\big{)}\Big{]}\\!\Big{(}e^{A(z)}\mathsf{v}_{\ell}\Big{)}\\!-\\!\int_{0}^{L}\\!\\!\\!\text{d}z\Big{(}\\!e^{A(z)}\mathsf{v}_{n}\Big{)}\\!\Big{(}\\!e^{A(z)}\mathsf{v}_{m}\Big{)}\\!\Big{[}\partial_{z}\big{(}e^{A(z)}\mathsf{v}_{\ell}\big{)}\Big{]}\hskip
14.22636pt$ $\displaystyle=-L\hskip 0.85358pt\mathbb{M}_{m}\hskip
0.85358pt\tilde{\rho}_{n\ell m}\\!-L\hskip 0.85358pt\mathbb{M}_{\ell}\hskip
0.85358pt\tilde{\rho}_{nm\ell}\,,$ (67mt)
where we have used Eqs.(52) and (53). We can reexpress Eq.(67mt) as follows:
$\mathbb{M}_{n}\tilde{\rho}_{m\ell n}\\!+\mathbb{M}_{m}\hskip
0.85358pt\tilde{\rho}_{n\ell m}\\!+\mathbb{M}_{\ell}\hskip
0.85358pt\tilde{\rho}_{nm\ell}=0\hskip 0.85358pt.$ (67mu)
Using Eqs.(67ms) and (67mj) as well as cycling their three KK indices, we
further derive the following relations:
$\displaystyle 2\hskip
0.85358pt\mathbb{M}_{n}^{2}\mathbb{M}_{m}^{2}\tilde{\beta}_{nm\ell}$
$\displaystyle=\mathbb{M}_{m}^{2}\big{(}\mathbb{M}_{m}^{2}\\!+\\!\mathbb{M}_{n}^{2}\\!-\\!\mathbb{M}_{\ell}^{2}\big{)}\alpha_{nm\ell}-2\hskip
0.85358pt\mathbb{M}_{n}\mathbb{M}_{m}^{2}\mathbb{M}_{\ell}\hskip
0.85358pt\tilde{\rho}_{n\ell m}\hskip 0.85358pt,$ (67mva) $\displaystyle
2\hskip 0.85358pt\mathbb{M}_{n}^{2}\mathbb{M}_{m}^{2}\tilde{\beta}_{nm\ell}$
$\displaystyle=\mathbb{M}_{n}^{2}\big{(}\mathbb{M}_{m}^{2}\\!+\\!\mathbb{M}_{n}^{2}\\!-\\!\mathbb{M}_{\ell}^{2}\big{)}\alpha_{nm\ell}-2\hskip
0.85358pt\mathbb{M}_{n}^{2}\mathbb{M}_{m}\mathbb{M}_{\ell}\hskip
0.85358pt\tilde{\rho}_{m\ell n}\hskip 0.85358pt,$ (67mvb) $\displaystyle
2\hskip 0.85358pt\mathbb{M}_{m}^{2}\mathbb{M}_{\ell}^{2}\tilde{\beta}_{m\ell
n}$
$\displaystyle=\mathbb{M}_{\ell}^{2}\big{(}\mathbb{M}_{\ell}^{2}\\!+\\!\mathbb{M}_{m}^{2}\\!-\\!\mathbb{M}_{n}^{2}\big{)}\alpha_{nm\ell}-2\hskip
0.85358pt\mathbb{M}_{n}\mathbb{M}_{m}\mathbb{M}_{\ell}^{2}\tilde{\rho}_{nm\ell}\hskip
0.85358pt,$ (67mvc) $\displaystyle 2\hskip
0.85358pt\mathbb{M}_{m}^{2}\mathbb{M}_{\ell}^{2}\tilde{\beta}_{m\ell n}$
$\displaystyle=\mathbb{M}_{m}^{2}\big{(}\mathbb{M}_{\ell}^{2}\\!+\\!\mathbb{M}_{m}^{2}\\!-\\!\mathbb{M}_{n}^{2}\big{)}\alpha_{nm\ell}-2\hskip
0.85358pt\mathbb{M}_{n}\mathbb{M}_{m}^{2}\mathbb{M}_{\ell}\hskip
0.85358pt\tilde{\rho}_{n\ell m}\hskip 0.85358pt,$ (67mvd) $\displaystyle
2\hskip 0.85358pt\mathbb{M}_{\ell}^{2}\mathbb{M}_{n}^{2}\tilde{\beta}_{\ell
nm}$
$\displaystyle=\mathbb{M}_{\ell}^{2}\big{(}\mathbb{M}_{\ell}^{2}\\!+\\!\mathbb{M}_{n}^{2}\\!-\\!\mathbb{M}_{m}^{2}\big{)}\alpha_{nm\ell}-2\hskip
0.85358pt\mathbb{M}_{n}\mathbb{M}_{m}\mathbb{M}_{\ell}^{2}\hskip
0.85358pt\tilde{\rho}_{nm\ell}\hskip 0.85358pt,$ (67mve) $\displaystyle
2\hskip 0.85358pt\mathbb{M}_{\ell}^{2}\mathbb{M}_{n}^{2}\tilde{\beta}_{\ell
nm}$
$\displaystyle=\mathbb{M}_{n}^{2}\big{(}\mathbb{M}_{\ell}^{2}\\!+\\!\mathbb{M}_{n}^{2}-\mathbb{M}_{m}^{2}\big{)}\alpha_{nm\ell}-2\hskip
0.85358pt\mathbb{M}_{n}^{2}\mathbb{M}_{m}\mathbb{M}_{\ell}\hskip
0.85358pt\tilde{\rho}_{m\ell n}\hskip 0.85358pt.$ (67mvf)
With the six relations above, we compute the sum
$\mbox{$\frac{\,{3}\,}{2}$}[(a)\\!+\\!(b)]\\!+\\!\mbox{$\frac{\,{1}\,}{2}$}\big{[}(c)\\!-\\!(d)\\!+\\!(e)\\!-\\!(f)\big{]}$
and impose Eq.(67mu), with which we derive the final identity:
$\Big{[}\big{(}\mathbb{M}_{n}^{2}\\!+\\!\mathbb{M}_{m}^{2}\\!-\\!\mathbb{M}_{\ell}^{2}\big{)}^{\\!2}\\!+\\!2\hskip
0.85358pt\mathbb{M}_{n}^{2}\mathbb{M}_{m}^{2}\Big{]}\alpha_{nm\ell}\,=\,6\hskip
0.85358pt\mathbb{M}_{n}^{2}\mathbb{M}_{m}^{2}\hskip
0.85358pt\tilde{\beta}_{nm\ell}\hskip 0.85358pt.$ (67mw)
This just reproduces the identity (67dw) in the main text.
For the derivation of Eq.(67eb) in the main text, we can sum up the third and
fifth identities in Eq.(67mv) and obtain the following:
$4\hskip
0.85358pt\mathbb{M}_{n}\mathbb{M}_{m}\mathbb{M}_{\ell}^{2}\tilde{\rho}_{nm\ell}=2\hskip
0.85358pt\mathbb{M}_{\ell}^{4}\alpha_{nm\ell}-2\hskip
0.85358pt\mathbb{M}_{m}^{2}\mathbb{M}_{\ell}^{2}\tilde{\beta}_{m\ell
n}-2\hskip
0.85358pt\mathbb{M}_{n}^{2}\mathbb{M}_{\ell}^{2}\tilde{\beta}_{n\ell m}\,.$
(67mx)
With this and further using the identity (67mw), we arrive at
$\Big{[}2\hskip
0.85358pt\mathbb{M}_{\ell}^{4}\\!-\\!\big{(}\mathbb{M}_{n}^{2}\\!-\mathbb{M}_{m}^{2}\big{)}^{\\!2}\\!-\mathbb{M}_{\ell}^{2}\big{(}\mathbb{M}_{n}^{2}\\!+\mathbb{M}_{m}^{2}\big{)}\\!\Big{]}\alpha_{nm\ell}=6\hskip
0.85358pt\mathbb{M}_{n}\mathbb{M}_{m}\mathbb{M}_{\ell}^{2}\hskip
0.85358pt\tilde{\rho}_{nm\ell}\,.$ (67my)
This just reproduces Eq.(67eb) in the main text.
Then, we derive the two sum rules Eqs.(67eub) and (67euc), which are used for
computing the inelastic scattering amplitudes of longitudinal KK gauge bosons.
For Eq.(67eub), we set $(\ell,q)\\!=\\!(n,m)$ in Eq.(67mn) and derive
the sum rule:
$\sum_{j=0}^{\infty}\\!M_{j}^{2}a_{nmj}^{2}=\sum_{j=0}^{\infty}\\!\left(M_{n}^{2}\\!+\\!M_{m}^{2}\\!-\\!\mbox{$\frac{\,{1}\,}{2}$}M_{j}^{2}\right)\\!a_{nnj}a_{mmj}\,.$
(67mz)
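In the flat limit this sum rule again truncates after a few terms; the sketch below (ours, same conventions as above: $L=\pi$, $M_{j}=j$) confirms Eq.(67mz) for $(n,m)=(1,2)$, where both sides reduce to $n^{2}+m^{2}$:

```python
import numpy as np

L = np.pi
z = np.linspace(0.0, L, 20001)
trap = lambda y: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z)))
f = lambda n: np.ones_like(z) if n == 0 else np.sqrt(2.0) * np.cos(n * z)
a = lambda *ns: trap(np.prod([f(k) for k in ns], axis=0)) / L

n, m, jmax = 1, 2, 10   # flat limit: M_j = j for L = pi
lhs = sum(j**2 * a(n, m, j)**2 for j in range(jmax))
rhs = sum((n**2 + m**2 - 0.5 * j**2) * a(n, n, j) * a(m, m, j)
          for j in range(jmax))
print(lhs, rhs)   # both equal n**2 + m**2 = 5, verifying Eq.(67mz)
```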
As for the derivation of Eq.(67euc), by taking the difference between the
first two identities in Eq.(67mh), we arrive at
$\big{(}M_{n}^{2}-\\!M_{m}^{2}\big{)}a_{nmj}=M_{j}\big{(}M_{n}\tilde{a}_{njm}\\!-\\!M_{m}\tilde{a}_{mjn}\big{)}.$
(67na)
Then, squaring the above formula and summing over $j$, we derive the following:
$\displaystyle\sum_{j=1}^{\infty}\\!\big{(}M_{n}^{2}-\\!M_{m}^{2}\big{)}^{\\!2}M_{j}^{-2}a_{nmj}^{2}=\sum_{j=1}^{\infty}\big{(}M_{n}\tilde{a}_{njm}\\!-\\!M_{m}\tilde{a}_{mjn}\big{)}^{\\!2}$
$\displaystyle=\sum_{j=1}^{\infty}\\!\big{(}M_{n}^{2}\tilde{a}_{njm}^{2}\\!-\\!2M_{n}M_{m}\tilde{a}_{njm}\tilde{a}_{mjn}\\!+\\!M_{m}^{2}\tilde{a}_{mjn}^{2}\big{)}$
$\displaystyle=\sum_{j=1}^{\infty}(M_{n}^{2}\tilde{a}_{nnj}a_{mmj}\\!-2M_{n}M_{m}\tilde{a}_{nmj}a_{nmj}\\!+\\!M_{m}^{2}\tilde{a}_{mmj}a_{nnj})$
$\displaystyle=\sum_{j=1}^{\infty}\\!\left[\mbox{$\frac{\,{1}\,}{2}$}(2M_{n}^{2}\\!-\\!M_{j}^{2})a_{nnj}a_{mmj}\\!-\\!(M_{n}^{2}\\!+\\!M_{m}^{2}-M_{j}^{2})a_{nmj}^{2}\\!+\\!\mbox{$\frac{\,{1}\,}{2}$}(2M_{m}^{2}-M_{j}^{2})a_{mmj}a_{nnj}\right]$
$\displaystyle=\sum_{j=0}^{\infty}M_{j}^{2}\big{(}a_{nmj}^{2}\\!-a_{nnj}a_{mmj}\big{)}\hskip
0.85358pt,$ (67nb)
where for the third equality sign, we have imposed the following completeness
relation:
$\displaystyle\sum_{j=1}^{\infty}\\!\tilde{a}_{njm}\tilde{a}_{\ell jq}=\hskip
0.85358pt\llbracket\hskip
0.85358pt\tilde{\mathsf{f}}_{n}\mathsf{f}_{m}\tilde{\mathsf{f}}_{\ell}\mathsf{f}_{q}\hskip
0.85358pt\rrbracket\hskip 0.85358pt=\sum_{j=1}^{\infty}\\!\hskip
0.85358pt\llbracket\hskip
0.85358pt\tilde{\mathsf{f}}_{n}\tilde{\mathsf{f}}_{\ell}\mathsf{f}_{j}\hskip
0.85358pt\rrbracket\hskip 0.85358pt\\!\hskip 0.85358pt\llbracket\hskip
0.85358pt\mathsf{f}_{m}\mathsf{f}_{q}\mathsf{f}_{j}\hskip
0.85358pt\rrbracket\hskip 0.85358pt=\sum_{j=1}^{\infty}\\!\tilde{a}_{n\ell
j}a_{mqj}\hskip 0.85358pt.$ (67nc)
For the fourth equality sign of Eq.(67nb), we have used Eq.(67mi). Moreover,
substituting Eq.(67mz) into Eq.(67nb), we derive the sum rule identity:
$\sum_{j=1}^{\infty}\\!\\!\big{(}M_{n}^{2}\\!-\\!M_{m}^{2}\big{)}^{\\!2}M_{j}^{-2}a_{nmj}^{2}=\sum_{j=0}^{\infty}\\!\big{(}M_{n}^{2}\\!+\\!M_{m}^{2}\\!-\\!\mbox{$\frac{\,{3}\,}{2}$}M_{j}^{2}\big{)}a_{nnj}a_{mmj}\hskip
0.85358pt,$ (67nd)
which is Eq.(67euc) in the main text.
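A flat-limit numerical check of Eq.(67nd) (ours; $L=\pi$, $M_{j}=j$) proceeds in the same way, with both sides reducing to $M_{n}^{2}+M_{m}^{2}$:

```python
import numpy as np

L = np.pi
z = np.linspace(0.0, L, 20001)
trap = lambda y: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z)))
f = lambda n: np.ones_like(z) if n == 0 else np.sqrt(2.0) * np.cos(n * z)
a = lambda *ns: trap(np.prod([f(k) for k in ns], axis=0)) / L

n, m, jmax = 1, 2, 10   # flat limit: M_j = j for L = pi
lhs = sum((n**2 - m**2)**2 / j**2 * a(n, m, j)**2 for j in range(1, jmax))
rhs = sum((n**2 + m**2 - 1.5 * j**2) * a(n, n, j) * a(m, m, j)
          for j in range(jmax))
print(lhs, rhs)   # both equal n**2 + m**2 = 5, verifying Eq.(67nd)
```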
Next, we derive the sum rule identities in Eq.(67fg) of Sec. 3.2. We prove the
$s$-channel sum rule (67fgc) as an example; the other sum rules in Eq.(67fg)
can be readily obtained from Eq.(67fgc) by permuting the KK indices. From
Eq.(67na),
we derive the following:
$\displaystyle\big{(}M_{n}^{2}\\!-\\!M_{m}^{2}\big{)}\big{(}M_{\ell}^{2}\\!-\\!M_{q}^{2}\big{)}\\!\sum_{j=1}^{\infty}\\!M_{j}^{-2}a_{nmj}\hskip
0.85358pta_{\ell qj}$
$\displaystyle=\sum_{j=1}^{\infty}\\!\left[\\!\big{(}M_{n}^{2}\\!-\\!M_{m}^{2}\big{)}M_{j}^{-1}a_{nmj}\right]\\!\\!\left[\\!\big{(}M_{\ell}^{2}\\!-\\!M_{q}^{2}\big{)}M_{j}^{-1}a_{\ell
qj}\right]$
$\displaystyle=\sum_{j=1}^{\infty}\\!\big{(}M_{n}\tilde{a}_{njm}\\!-\\!M_{m}\tilde{a}_{mjn}\big{)}\\!\big{(}M_{\ell}\tilde{a}_{\ell
jq}\\!-\\!M_{q}\tilde{a}_{qj\ell}\big{)}$
$\displaystyle=\sum_{j=1}^{\infty}\\!\big{(}M_{n}M_{\ell}\tilde{a}_{njm}\tilde{a}_{\ell
jq}\\!-\\!M_{n}M_{q}\tilde{a}_{njm}\tilde{a}_{qj\ell}\\!-\\!M_{m}M_{\ell}\tilde{a}_{mjn}\tilde{a}_{\ell
jq}\\!+\\!M_{m}M_{q}\tilde{a}_{mjn}\tilde{a}_{qj\ell}\big{)}$
$\displaystyle=\sum_{j=1}^{\infty}\\!\\!\big{(}M_{n}M_{\ell}\tilde{a}_{n\ell
j}a_{mqj}\\!-\\!M_{n}M_{q}\tilde{a}_{nqj}a_{m\ell
j}\\!-\\!M_{m}M_{\ell}\tilde{a}_{m\ell
j}a_{nqj}\\!+\\!M_{m}M_{q}\tilde{a}_{mqj}a_{n\ell j}\big{)},$ (67ne)
where we have applied Eqs.(67mh) and (67nc) for the second and fourth equality
signs, respectively. Then, substituting Eq.(67mi) into Eq.(67ne), we finally
arrive at
$\displaystyle\big(M_{n}^{2}-M_{m}^{2}\big)\big(M_{\ell}^{2}-M_{q}^{2}\big)\sum_{j=1}^{\infty}M_{j}^{-2}a_{nmj}a_{\ell qj}$
$\displaystyle=\sum_{j=1}^{\infty}\Big[\tfrac{1}{2}\big(M_{n}^{2}+M_{\ell}^{2}-M_{j}^{2}\big)a_{n\ell j}a_{mqj}-\tfrac{1}{2}\big(M_{n}^{2}+M_{q}^{2}-M_{j}^{2}\big)a_{nqj}a_{m\ell j}-\tfrac{1}{2}\big(M_{m}^{2}+M_{\ell}^{2}-M_{j}^{2}\big)a_{m\ell j}a_{nqj}+\tfrac{1}{2}\big(M_{m}^{2}+M_{q}^{2}-M_{j}^{2}\big)a_{mqj}a_{n\ell j}\Big]$
$\displaystyle=\sum_{j=0}^{\infty}M_{j}^{2}\big(a_{nqj}a_{m\ell j}-a_{n\ell j}a_{mqj}\big).$ (67nf)
This just reproduces the sum rule identity (67fgc) given in the main text.
Finally, we derive the identity (67fv) in the main text. With Eq.(67dt) and
Eq.(67mq), we first derive the following identity:
$\sum_{j=0}^{\infty}\alpha_{nnj}\tilde{\alpha}_{nnj}=\frac{1}{3}\,\alpha_{nnnn}\,.$ (67ng)
By using Eq.(67dt) and Eq.(67mq), we compute:
$\displaystyle\sum_{j=0}^{\infty}\hat{r}_{j}^{6}\alpha_{nnj}^{2}=\sum_{j=0}^{\infty}4\,\hat{r}_{j}^{2}\big(\alpha_{nnj}-\tilde{\alpha}_{nnj}\big)^{2}=\sum_{j=0}^{\infty}4\,(4+\hat{r}_{j}^{2})\,\tilde{\alpha}_{nnj}^{2}\,,$ (67nh)
where we have used the notation $\hat{r}_{j}=\mathbb{M}_{j}/\mathbb{M}_{n}$. Then, we compute the following:
$\displaystyle\mathbb{M}_{j}^{2}\tilde{\alpha}_{nnj}$ $\displaystyle=\mathbb{M}_{j}^{2}\int_{0}^{L}\text{d}z\,e^{3A(z)}\,\mathsf{v}_{n}^{2}\mathsf{u}_{j}=\int_{0}^{L}\text{d}z\,e^{3A(z)}\,\mathsf{u}_{j}\Big[-12A^{\prime\prime}\,\mathsf{v}_{n}^{2}+12A^{\prime}\,\mathsf{u}_{n}\mathsf{v}_{n}+2\,\mathsf{v}_{n}^{2}-2\,\mathbb{M}_{n}^{2}\mathsf{u}_{n}^{2}\Big],$ (67nia)
$\displaystyle\sum_{j=0}^{\infty}\mathbb{M}_{j}^{2}\tilde{\alpha}_{nnj}^{2}$ $\displaystyle=\sum_{j=0}^{\infty}\big(-2\,\mathbb{M}_{n}^{2}\alpha_{nnj}\tilde{\alpha}_{nnj}+2\,\mathbb{M}_{n}^{2}\tilde{\alpha}_{nnj}^{2}\big)+I_{1}+I_{2}\,,$ (67nib)
where $I_{1}$ and $I_{2}$ denote the two integrals:
$\displaystyle I_{1}=-12\,\mathbb{M}_{n}\int_{0}^{L}\text{d}z\,e^{3A(z)}A^{\prime}\,\mathsf{u}_{n}\mathsf{v}_{n}^{3}=-2\,\mathbb{M}_{n}^{2}\sum_{j=0}^{\infty}\big(\tilde{\alpha}_{nnj}^{2}-3\,\alpha_{nnj}\tilde{\alpha}_{nnj}\big)\,,$ (67nja)
$\displaystyle I_{2}=-12\int_{0}^{L}\text{d}z\,e^{3A(z)}A^{\prime\prime}\,\mathsf{v}_{n}^{4}=-\frac{1}{2}\,I_{1}\,.$ (67njb)
In the above we have denoted $A^{\prime}=\partial_{z}A(z)$ and $A^{\prime\prime}=\partial_{z}^{2}A(z)$. Combining Eqs.(67ni) and (67nj), we further compute the summation of Eq.(67nh):
$\displaystyle\sum_{j=0}^{\infty}\hat{r}_{j}^{6}\alpha_{nnj}^{2}=\sum_{j=0}^{\infty}4\,\big(5\,\tilde{\alpha}_{nnj}^{2}+\alpha_{nnj}\tilde{\alpha}_{nnj}\big)=\sum_{j=0}^{\infty}\left(5\,\hat{r}_{j}^{4}-\frac{16}{3}\right)\alpha_{nnj}^{2}\,,$ (67nk)
where we have made use of Eq.(67dt) and Eq.(67mq). We can re-express Eq.(67nk)
as follows:
$\displaystyle\sum_{j=0}^{\infty}\left(\hat{r}_{j}^{6}-5\,\hat{r}_{j}^{4}+\frac{16}{3}\right)\alpha_{nnj}^{2}=0\,,$ (67nl)
which just reproduces the identity (67fv) in the main text.
# Improving Accuracy of Interpretability Measures in Hyperparameter
Optimization via Bayesian Algorithm Execution
Julia Moosbauer <EMAIL_ADDRESS>
Institute of Statistics, Munich Center for Machine Learning (MCML), Ludwig-Maximilians-Universität München

Giuseppe Casalicchio <EMAIL_ADDRESS>
Institute of Statistics, Munich Center for Machine Learning (MCML), Ludwig-Maximilians-Universität München

Marius Lindauer <EMAIL_ADDRESS>hannover.de
Institute of Artificial Intelligence, Leibniz University Hannover

Bernd Bischl <EMAIL_ADDRESS>
Institute of Statistics, Munich Center for Machine Learning (MCML), Ludwig-Maximilians-Universität München
###### Abstract
Despite all the benefits of automated hyperparameter optimization (HPO), most
modern HPO algorithms are black-boxes themselves. This makes it difficult to
understand the decision process which leads to the selected configuration,
reduces trust in HPO, and thus hinders its broad adoption. Here, we study the
combination of HPO with interpretable machine learning (IML) methods such as
partial dependence plots. Such techniques are increasingly used to explain
the marginal effect of hyperparameters on the black-box cost function or to
quantify the importance of hyperparameters. However, if such methods are
naively applied to the experimental data of the HPO process in a post-hoc
manner, the underlying sampling bias of the optimizer can distort
interpretations. We propose a modified HPO method which efficiently balances
the search for the global optimum w.r.t. predictive performance _and_ the
reliable estimation of IML explanations of an underlying black-box function by
coupling Bayesian optimization and Bayesian Algorithm Execution. On benchmark
cases of both synthetic objectives and HPO of a neural network, we demonstrate
that our method returns more reliable explanations of the underlying black-box
without a loss of optimization performance.
## 1 Introduction
The performance of machine learning (ML) models usually depends on many
decisions, such as the choice of a learning algorithm and its hyperparameter
configurations. Manually reaching these decisions is usually a tedious trial-
and-error process. Automated machine learning (AutoML), e.g., hyperparameter
optimization (HPO), can support developers and researchers in this regard. By
framing these decisions as an optimization problem and solving them using
efficient black-box optimizers such as Bayesian Optimization (BO), HPO is
demonstrably more efficient than manual tuning, and grid or random search
(Bergstra et al., 2011; Snoek et al., 2012; Turner et al., 2020; Bischl et
al., 2021). However, there is still a lack of confidence in AutoML systems and
a reluctance to trust the returned best configuration (Drozdal et al., 2020).
One reason why some practitioners today still prefer manual tuning over automated HPO is that existing systems lack the ability to convey an understanding of hyperparameter importance and of how certain hyperparameters affect model performance (Hasebrook et al., 2022), which would help them understand why a final configuration was chosen.
Desirable insights into hyperparameter effects or importance could in
principle be generated by applying methods of interpretable machine learning
(IML) to experimental data from the HPO process, specifically the final
surrogate model generated by BO based on this HPO data. However, these methods
– even though possible from a technical perspective and used before (Hutter et
al., 2014; Van Rijn & Hutter, 2018; Young et al., 2018; Head et al., 2022) –
should be used with caution in this context. The main reason is a sampling
bias caused by the desire for efficient optimization during HPO (Moosbauer et
al., 2021): Efficient optimizers typically sample more configurations in
promising regions with potentially well-performing hyperparameter
configurations, while other regions are underrepresented. This sampling bias
introduces a surrogate-model bias in under-explored regions as the surrogate
model is subject to high uncertainty in these regions. Consequently,
explanations of HPO runs, such as partial dependence plots (PDPs) (Friedman,
2001), can be misleading as they also rely on artificially created evaluations
in under-explored regions. Moosbauer et al. (2021) were the first to address this issue and proposed an approach to identify well-explored, rather small subregions in which PDPs can be estimated accurately. While this is valuable, it still does not allow accurate global estimation of hyperparameter effects.
To counteract the unintended effects of this sampling bias as early as possible, i.e., already during the HPO process, we propose a modified BO algorithm that efficiently searches for the global optimum _and_ accurate IML estimates
of the underlying black-box function at the same time. We build on the concept
of Bayesian Algorithm Execution (BAX) (Neiswanger et al., 2021) to estimate
the expected information gain (EIG) (Lindley, 1956) of configurations w.r.t.
the output of an interpretation method. We ultimately couple BO with BAX and
propose BOBAX as an efficient method that searches for accurate
interpretations without a relevant loss of optimization performance. Our
proposed method is generic as it is applicable to any BO variant (e.g.,
different acquisition functions or probabilistic surrogate models). As IML
technique we focus on PDPs (Friedman, 2001), which estimate the marginal
effect(s) of features (in our case: hyperparameters) on the output by
visualizing a marginal 1D or 2D function. PDPs constitute an established IML
technique (Lemmens & Croux, 2006; Cutler et al., 2007; Wenger & Olden, 2012;
Zhang et al., 2018), have been in use for more than 20 years to analyze ML
models, and have recently gained further interest in IML and XAI, and are also
increasingly used to analyze hyperparameter effects in HPO and AutoML (Young
et al., 2018; Zela et al., 2018; Head et al., 2022). We point out that our
technique is in principle not limited to PDPs, but can be combined with any
IML technique which can be quantitatively estimated from a surrogate model.
In a benchmark study, we demonstrate how BOBAX consistently yields more
reliable estimates for marginal effects estimated via the partial dependence
method while maintaining the same level of optimization efficiency as commonly
used methods. Finally, we demonstrate how BOBAX can give reliable insights
into hyperparameter effects of a neural network during tuning yielding state-
of-the-art performance. We believe that through our generic method, the
potential of IML methods can be unlocked in the context of HPO, thus paving
the way for more interpretability of and trust into human-centered HPO. Our
contributions include:
1. The direct optimization for an accurate estimation of IML statistics, e.g., marginal effects for single or multiple hyperparameters, as part of BO for HPO, making HPO interpretable and more trustworthy;
2. The combination of BO and Bayesian Algorithm Execution (BAX), dubbed BOBAX, where BAX is used to guide the search towards more accurate estimation of IML statistics;
3. Thorough study of different variants of BOBAX and baselines on synthetic functions; and
4. Empirical evidence that budget allocation regarding IML estimates does not come at the expense of significantly reduced optimization performance on a deep learning HPO benchmark.
## 2 Background
In this section, we formalize HPO and BO as the context of our work. We also
give an overview of Bayesian Algorithm Execution (BAX) as it serves as basis
for our work.
##### Hyperparameter Optimization
The aim of HPO is to efficiently find a well-performing configuration of a
learning algorithm. HPO is therefore commonly formalized as finding the
minimizer $\bm{\lambda}^{\ast}\in\mathop{\sf
arg\,min}\nolimits_{\bm{\lambda}\in\Lambda}c(\bm{\lambda})$ of a _black-box_
cost function $c:\Lambda\to\mathds{R}$ which maps a hyperparameter
configuration
$\mbox{$\bm{\lambda}=\left(\lambda_{1},...,\lambda_{d}\right)$}\in\Lambda$ to
the validation error of the model trained by a learning algorithm run using
$\bm{\lambda}$. The hyperparameter space
$\Lambda=\Lambda_{1}\times...\times\Lambda_{d}$ can be mixed, containing
categorical and continuous hyperparameters. Particularly in the context of
AutoML, where whole machine learning pipeline configurations are optimized
over, $\Lambda$ may even contain hierarchical dependencies between
hyperparameters (Thornton et al., 2013; Olson & Moore, 2016).
##### Bayesian Optimization
BO is a black-box optimization algorithm which has become increasingly popular
in the context of HPO (Jones et al., 1998; Snoek et al., 2012). BO
sequentially chooses configurations
$\bm{\lambda}^{(1)},...,\bm{\lambda}^{(T)}$ that are evaluated
$c_{\bm{\lambda}^{(1)}},...,c_{\bm{\lambda}^{(T)}}$ to obtain an archive
$A_{T}=\left\\{\left(\bm{\lambda}^{(i)},c_{\bm{\lambda}^{(i)}}\right)\right\\}_{i=1,...,T}$.
To choose the next configuration $\bm{\lambda}^{(T+1)}$ as efficiently as
possible, a surrogate model $\hat{c}$ is estimated on the archive $A_{T}$, and
a new point is proposed based on an acquisition function that leverages
information from the surrogate model $\hat{c}$. Typically, we choose a
probabilistic model and estimate a distribution over $c$, denoted by
$p(c~{}|~{}A_{T})$. A common choice are Gaussian processes
$c\sim\mathcal{GP}\left(\mu,k\right)$, characterized by a mean function
$\mu:\Lambda\to\mathds{R}$ and a covariance function
$k:\Lambda\times\Lambda\to\mathds{R}$. Acquisition functions usually trade off
exploration (i.e., sampling in regions with few data points and high posterior
uncertainty) and exploitation (i.e., sampling in regions with low mean).
Common examples are the expected improvement (EI) (Jones et al., 1998), the
lower confidence bound (LCB) (Jones, 2001; Srinivas et al., 2010), entropy
search (Hennig & Schuler, 2012; Hernández-Lobato et al., 2014) and knowledge
gradient (Wu et al., 2017).
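To make the interplay of archive, surrogate, and acquisition function concrete, the following minimal sketch (our own illustration; names such as `propose_next` are not from any specific library) implements one BO iteration with a GP surrogate and EI, optimizing the acquisition by scoring randomly sampled candidates:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(mu, sigma, best):
    # EI for minimization: E[max(best - c, 0)] under the GP posterior
    z = (best - mu) / np.maximum(sigma, 1e-12)
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def propose_next(archive_X, archive_y, bounds, n_candidates=1500, rng=None):
    # One BO iteration: fit a GP surrogate on the archive, score random
    # candidates with EI, and return the maximizer. bounds has shape (d, 2).
    rng = rng or np.random.default_rng()
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    gp.fit(archive_X, archive_y)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1],
                       size=(n_candidates, bounds.shape[0]))
    mu, sigma = gp.predict(cand, return_std=True)
    return cand[np.argmax(expected_improvement(mu, sigma, archive_y.min()))]
```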
##### Marginal Effects of Hyperparameters
Practitioners of HPO are often interested in whether and how individual
hyperparameters affect model performance. Not only is there a desire to gain model comprehension (Hasebrook et al., 2022); such insights can also influence decisions, for example whether to tune a hyperparameter or not (Probst et al., 2019), or how to modify hyperparameter ranges. One interpretation measure that the
community is looking at (Hutter et al., 2014; Zela et al., 2018; Young et al.,
2018; Van Rijn & Hutter, 2018; Zöller et al., 2022) is the marginal effect of
one or multiple hyperparameters $\bm{\lambda}_{S}$, $S\subset\\{1,2,...,d\\}$
on model performance, which is defined as111To keep notation simple, we denote
$c(\bm{\lambda})$ as a function of two arguments
$(\bm{\lambda}_{S},\bm{\lambda}_{R})$ to differentiate components in the index
set $S$ from those in the complement $R=\\{1,2,...,d\\}\setminus S$. The
integral shall be understood as a multiple integral of $c$ where
$\bm{\lambda}_{j}$, $j\in R$, are integrated out.
$\displaystyle
c_{S}(\bm{\lambda}_{S}):=\mathds{E}_{\bm{\lambda}_{R}}\left[c(\bm{\lambda})\right]=\int_{\Lambda_{R}}c(\bm{\lambda}_{S},\bm{\lambda}_{R})~{}\textrm{d}\mathbb{P}(\bm{\lambda}_{R}).$
(1)
In the context of HPO, $\mathds{P}$ is typically assumed to be the uniform
distribution over $\Lambda_{R}$ since we are interested in how hyperparameter
values $\bm{\lambda}_{S}$ impact model performance uniformly across the
hyperparameter space (Hutter et al., 2014; Moosbauer et al., 2021). Since
computing Eq. (1) analytically is usually not possible, the PDP method (Friedman, 2001) approximates the integral in Eq. (1) by Monte Carlo sampling.
##### Information-based Bayesian Algorithm Execution
Information-based Bayesian Algorithm Execution (BAX) extends the idea of using
entropy search for estimating global optima to estimating other properties of
a function $f:\mathcal{X}\to\mathds{R}$ (Neiswanger et al., 2021). Similar to
BO, BAX tries to sequentially choose points $\mathbf{x}^{(i)}\in\mathcal{X}$
in order to estimate the quantity of interest accurately with as few
evaluations as possible. It is assumed that the quantity of interest can be
computed as the output
$\mathcal{O}_{\mathcal{A}}:=\mathcal{O}_{\mathcal{A}}(f)$ of running an
algorithm $\mathcal{A}$ on $f$, e.g. top-k estimation on a finite set,
computing level sets or finding shortest paths.
Similarly to BO, BAX sequentially builds a probabilistic model
$p(f~{}|~{}A_{T})$, e.g., a GP, over an archive of evaluated points $A_{T}$.
Based on $p(f~{}|~{}A_{T})$, they derive the posterior distribution over the
algorithm output $p(\mathcal{O}_{\mathcal{A}}~{}|~{}A_{T})$. To build the
archive $A_{T}$ as efficiently as possible, they choose to evaluate the point
$\mathbf{x}^{(T+1)}$ which maximizes the expected information gain about the
algorithm output $\mathcal{O}_{\mathcal{A}}$
$\displaystyle\text{EIG}_{T}(\mathbf{x})$
$\displaystyle:=\mathbb{H}\left[\mathcal{O}_{\mathcal{A}}|A_{T}\right]-\mathbb{E}_{f_{\mathbf{x}}|A_{T}}\left[\mathbb{H}\left[\mathcal{O}_{\mathcal{A}}|A_{T+1}\right]\right],$
(2)
where $\mathbb{H}$ denotes the entropy, and
$A_{T+1}:=A_{T}\cup\left\\{\left(\mathbf{x},f_{\mathbf{x}}\right)\right\\}$
with $f_{\mathbf{x}}$ the (unrevealed) value of $f$ at $\mathbf{x}$.
Neiswanger et al. (2021) propose an acquisition function to approximate the
EIG in Eq. (2). In its simplest form, the algorithm output
$\mathcal{O}_{\mathcal{A}}$ in the EIG is replaced by the algorithm’s
execution path $e_{\mathcal{A}}$, i.e., the sequence of all evaluations the
algorithm $\mathcal{A}$ traverses, which thus gives full information about the
output. The expected information gain estimated based on the execution path
$e_{\mathcal{A}}$ is given by
$\displaystyle\text{EIG}_{T}^{e}(\mathbf{x})$
$\displaystyle=\mathbb{H}\left[e_{\mathcal{A}}|A_{T}\right]-\mathbb{E}_{f_{\mathbf{x}}|A_{T}}\left[\mathbb{H}\left[e_{\mathcal{A}}|A_{T+1}\right]\right]$
(3)
$\displaystyle=\mathbb{H}\left[f_{\mathbf{x}}|A_{T}\right]-\mathbb{E}_{e_{\mathcal{A}}|A_{T}}\left[\mathbb{H}\left[f_{\mathbf{x}}|A_{T},e_{\mathcal{A}}\right]\right].$
where the symmetry of the mutual information is used to obtain the latter expression. The first term
$\mathbb{H}\left[f_{\mathbf{x}}|A_{T}\right]$ is the entropy of the posterior
predictive distribution at an input $\mathbf{x}$ and can be computed in closed
form. The second term can be estimated as follows: A number of
$n_{\text{path}}$ samples $\tilde{f}\sim p(f~{}|~{}A_{T})$ is drawn from the
posterior process. The algorithm $\mathcal{A}$ is run on each of the samples
$\tilde{f}$ to produce sample execution paths $\tilde{e}_{\mathcal{A}}$,
yielding samples $\tilde{e}_{\mathcal{A}}\sim p(e_{\mathcal{A}}~{}|~{}A_{T})$,
used to estimate the second term as described by Neiswanger et al. (2021).
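As a hedged sketch of how this estimate can look in practice (assuming a GP surrogate; the helper names below are ours, and the refits keep the kernel hyperparameters fixed for speed), one may condition on posterior samples of the execution path and average the resulting entropies:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def gaussian_entropy(sigma):
    # Differential entropy of a 1-D Gaussian with standard deviation sigma
    return 0.5 * np.log(2.0 * np.pi * np.e * np.maximum(sigma, 1e-12) ** 2)

def eig_execution_path(gp, X_arch, y_arch, X_path, x, n_path=20, rng=None):
    """EIG^e(x) as in Eq. (3): H[f_x | A_T] minus the average entropy of f_x
    after conditioning on a sampled execution path at the inputs X_path."""
    rng = rng or np.random.default_rng()
    _, s0 = gp.predict(x.reshape(1, -1), return_std=True)
    # Sample n_path posterior realizations of f on the execution-path inputs
    paths = gp.sample_y(X_path, n_samples=n_path,
                        random_state=int(rng.integers(1 << 31)))
    cond_entropies = []
    for k in range(n_path):
        # Condition on the archive plus the hallucinated path values
        gp_k = GaussianProcessRegressor(kernel=gp.kernel_, optimizer=None,
                                        normalize_y=True)
        gp_k.fit(np.vstack([X_arch, X_path]),
                 np.concatenate([y_arch, paths[:, k]]))
        _, sk = gp_k.predict(x.reshape(1, -1), return_std=True)
        cond_entropies.append(gaussian_entropy(sk[0]))
    return gaussian_entropy(s0[0]) - np.mean(cond_entropies)
```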
## 3 Related Work
Interpretability in AutoML refers either to (1) the interpretation of the
resulting model returned by an AutoML system (Xanthopoulos et al., 2020;
Binder et al., 2020; Carmichael et al., 2021; Coors et al., 2021), or (2) the
interpretation of hyperparameter effects and importance (Moosbauer et al.,
2021). We focus on the latter, specifically the construction of accurate and
unbiased estimators for, e.g., hyperparameter effects in HPO.
There are HPO and AutoML frameworks that provide visualizations and
interpretability statistics as additional outputs, e.g., _Google Vizier_
(Golovin et al., 2017) and _xAutoML_ (Zöller et al., 2022) provide an
interactive dashboard visualizing the progress of the optimization and
insights via parallel coordinate plots and multi-dimensional scaling on the
optimizer footprint. Similarly, the HPO frameworks _optuna_ (Akiba et al.,
2019) or _scikit-optimize_ (Head et al., 2022) allow for quick and simple
visualization of optimization progress and results. However, such relatively
simple visualizations do not give a deeper understanding of which hyperparameters influence model performance and in what way.
In the context of HPO, practitioners are commonly interested in the marginal
effects of hyperparameters on model performance Hutter et al. (2014); Young et
al. (2018); Zela et al. (2018) or the importance of hyperparameters on model
performance (Hutter et al., 2014; Biedenkapp et al., 2017; Van Rijn & Hutter,
2018; Probst et al., 2019). The latter is often directly derived from marginal
effects of hyperparameters (Hutter et al., 2014). Established HPO frameworks
(Head et al., 2022; Akiba et al., 2019) as well as visualization toolboxes
(Zöller et al., 2022) already make implementations of these methods accessible to users; however, they neither discuss nor address the distortion of these estimates arising from the sampling bias. While all of these approaches have their
merits, none of them address the imprecision in the estimates of these
interpretive measures caused by sample bias that is present in the archive
sampled by BO, since BO tends to exploit promising regions while leaving other
regions unexplored. So far, only Moosbauer et al. (2021) explicitly proposed a
post-hoc method that is able to identify subspaces of the configuration space
in which accurate and unbiased PDPs can be computed. However, the method does
not provide more accurate global IML estimates. To our knowledge, we are the
first to propose a method that improves the sampling process of HPO to provide
more accurate global estimates of such IML methods.
## 4 BOBAX: Enhanced Estimation of Interpretability Measures for HPO
We present our main contribution, BOBAX, which efficiently searches for accurate
marginal effect estimates of hyperparameters while maintaining competitive HPO
performance.
### 4.1 Expected Information Gain for Partial Dependence
We first derive the information gained with regards to the estimate of a
marginal effect of a hyperparameter $\bm{\lambda}_{S}$ _if_ we observe
performance $c_{\bm{\lambda}^{(T+1)}}$ for a hyperparameter configuration
$\bm{\lambda}^{(T+1)}$. To this end, we quantify and analyze how a marginal
effect is estimated in the context of HPO. Two types of approximations are
performed: First, instead of estimating the marginal effect with regards to
the true, but unknown and expensive objective $c$, we estimate the marginal
effect of the surrogate model $\hat{c}$ 222Constructed by BO, usually this
will be the final surrogate model of the BO run, but this can also be applied
interactively to intermediate models, with $\hat{c}$ denoting the posterior
mean of a probabilistic model $p(c~{}|~{}A_{T})$. Secondly, we use the partial
dependence method (Friedman, 2001) for efficient estimation of marginal
effects of $\hat{c}:\Lambda\to\mathds{R}$, which estimates Eq. (1) by Monte-
Carlo sampling:
$\displaystyle\varphi_{\bm{\lambda}_{S}}=\frac{1}{n}\sum_{i=1}^{n}\hat{c}\left(\bm{\lambda}_{S},\bm{\lambda}_{R}^{(i)}\right),$
(4)
with $\bm{\lambda}_{S}$ fixed and
$\bm{\lambda}_{R}^{(i)}\overset{i.i.d.}{\sim}\mathds{P}(\bm{\lambda}_{R})$ a
Monte-Carlo sample drawn from a uniform distribution $\mathds{P}$. To bound
the computational effort to compute the PDP, Eq. (4) is evaluated for a
(typically equidistant) set of grid points
$\\{\bm{\lambda}_{S}^{(j)}\\}_{j=1,...,G}$. The PDP visualizes
$\varphi_{\bm{\lambda}_{S}}$ against $\bm{\lambda}_{S}$.
To define the expected information gain for partial dependence
$\textrm{EIG}_{\textrm{PDP}}$, we cast the partial dependence method in terms of a formal execution path (see also Algorithm 1): we iterate over all grid points and compute the mean prediction $\hat{c}^{(g,i)}$. The execution path
$e_{\mathcal{A}}$ thus corresponds to the Cartesian product
$\left(\bm{\lambda}_{S}^{(g)},\bm{\lambda}_{R}^{(i)}\right)$ for
$g\in\\{1,...,G\\}$ and $i\in\\{1,...,n\\}$ of all grid points
$\bm{\lambda}_{S}^{(g)}$ and the Monte-Carlo samples $\bm{\lambda}_{R}^{(i)}$.
Following one variant proposed by Neiswanger et al. (2021), we estimate the information gained with regard to the execution path $e_{\mathcal{A}}$ instead of with regard to the algorithm output $\mathcal{O}_{\mathcal{A}}$. Note that Neiswanger et al. (2021) argued that the criterion in Eq. (3) is in general suboptimal if, for example, large parts of the execution path $e_{\mathcal{A}}$ do not influence the algorithm output. We argue, however, that this concern does not apply to our use case, since
every element in the execution path of the PD method contributes with equal
weight to the computation of the partial dependence. Figure 1 illustrates the
computation of the PD based on the execution path, as well as the computation
of the $\textrm{EIG}_{\textrm{PDP}}$.
Input: $G$, $\hat{c}$, $\left(\bm{\lambda}_{R}^{(i)}\right)_{i=1,...,n}\overset{i.i.d.}{\sim}\mathds{P}(\bm{\lambda}_{R})$
$\left(\bm{\lambda}_{S}^{(1)},...,\bm{\lambda}_{S}^{(G)}\right)\leftarrow$ equidistant grid on $\Lambda_{S}$
for $g\in\{1,2,...,G\}$ do
for $i\in\{1,2,...,n\}$ do
$\bm{\lambda}^{(g,i)}\leftarrow\left(\bm{\lambda}_{S}^{(g)},\bm{\lambda}_{R}^{(i)}\right)$
$\hat{c}^{(g,i)}\leftarrow\hat{c}\left(\bm{\lambda}^{(g,i)}\right)$
$e_{\mathcal{A}}\leftarrow e_{\mathcal{A}}\cup\left(\bm{\lambda}^{(g,i)},\hat{c}^{(g,i)}\right)$
end for
$\varphi_{\bm{\lambda}_{S}^{(g)}}\leftarrow\frac{1}{n}\sum_{i=1}^{n}\hat{c}^{(g,i)}$
end for
Return $\left(\bm{\lambda}_{S}^{(g)},\varphi_{\bm{\lambda}_{S}^{(g)}}\right)$, $g=1,...,G$
Algorithm 1: PD algorithm with explicit execution path $e_{\mathcal{A}}$
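For concreteness, a direct (hedged) translation of Algorithm 1 into code might look as follows, where `c_hat` stands for the surrogate's posterior mean and the names are our own:

```python
import numpy as np

def partial_dependence_with_path(c_hat, grid_S, samples_R):
    """Algorithm 1 as code: PD estimate over grid_S with an explicit
    execution path; c_hat maps a full configuration to its mean prediction."""
    execution_path, pd_values = [], []
    for lam_S in grid_S:                      # iterate over grid points
        preds = []
        for lam_R in samples_R:               # Monte-Carlo sample for lambda_R
            lam = np.concatenate([np.atleast_1d(lam_S), lam_R])
            pred = c_hat(lam)
            execution_path.append((lam, pred))
            preds.append(pred)
        pd_values.append(np.mean(preds))      # Eq. (4)
    return np.asarray(pd_values), execution_path
```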
Figure 1: Shown are the elements of the execution path $e_{\mathcal{A}}$ (blue) of the PD method. The grey points show the configurations in the archive which are used by BO to construct the surrogate model. The green configuration is sampled by EI (showing more exploitation), while the orange point is the point maximizing the information gained about the PD estimate.
### 4.2 BOBAX: Efficient Optimization and Search for More Accurate
Interpretability Measures
Given the $\textrm{EIG}_{\textrm{PDP}}$ for PD, the optimization for
interpretability of hyperparameter effects as part of BO is possible by using
the $\textrm{EIG}_{\textrm{PDP}}$ as acquisition function. However,
interpretability alone is rarely of primary interest in practice; rather, the
goal is to identify well-performing configurations and obtaining reasonable
interpretations at the same time. We propose a method, dubbed BOBAX, that
allows to efficiently search for explanations without a relevant loss of
optimization efficiency.
BOBAX is an interleaving strategy which performs BO, and iterates between
using the EI (or any other suited acquisition function) and the
$\textrm{EIG}_{\textrm{PDP}}$ as acquisition function. Although we have
investigated also more complex variants (see Appendix B.2), interleaving
$\textrm{EIG}_{\textrm{PDP}}$ in every $k$-th iteration is simple yet
efficient. The smaller $k$, the higher the weight of optimizing for accurate interpretations in a BO run. We note that this strategy can replace other interleaving exploration strategies, such as random samples (Hutter et al., 2011), since optimizing for interpretability can be seen as another strategy to cover the entire space in an efficient manner. (One might also have considered addressing this as a multi-objective problem, since we have two objectives: (i) finding the optimum and (ii) obtaining good PDPs. However, post-hoc multi-objective optimizers usually construct a Pareto front over a set of multiple candidate solutions, which we are not interested in here. Instead, in each iteration of BO, the optimizer has to choose a concrete trade-off between both objectives. For dynamically balancing this trade-off, please also refer to the next section.)
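A minimal sketch of the interleaving rule, building on the earlier sketches (`propose_by_eig_pdp` is an assumed helper that scores candidates with $\textrm{EIG}_{\textrm{PDP}}$ via the PD execution path, e.g., by calling `eig_execution_path` from above; `propose_next` is the EI step sketched in Section 2):

```python
def bobax_step(t, k, archive_X, archive_y, bounds, grid_S, samples_R):
    # Every k-th iteration, propose the candidate maximizing EIG_PDP;
    # otherwise fall back to the standard EI proposal.
    if t % k == 0:
        return propose_by_eig_pdp(archive_X, archive_y, bounds,
                                  grid_S, samples_R)
    return propose_next(archive_X, archive_y, bounds)
```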
From a practitioner’s point of view, it may be reasonable to consider accuracy
of interpretations rather as a constraint than an objective function to
optimize for. As soon as this constraint is fulfilled, a user may want to
invest all remaining budget into optimization only. Therefore, we also propose
an adaptive variant of BOBAX, dubbed a-BOBAX, which performs the interleaving
strategy in BOBAX as described above in a first phase, and transitions into
optimization only in a second phase as soon as the constraint is fulfilled. To
allow a user to input a meaningful constraint, the constraint must itself be
interpretable by a user. Therefore, we define this constraint by a desired
average width of confidence intervals around PD estimates, following the definition of Moosbauer et al. (2021): confidence intervals are defined as $\varphi_{\bm{\lambda}_{S}^{(g)}}\pm q_{1-\alpha/2}\cdot\hat{s}_{\bm{\lambda}_{S}^{(g)}}$ around the PD estimate, where $\hat{s}_{\bm{\lambda}_{S}^{(g)}}$ denotes the uncertainty of the PD estimate at grid point $g$; as default, we use $\alpha=0.05$. As an example, a user may want to specify a tolerance of $\pm 1\%$ in validation accuracy for the estimation of PDs (see green tolerance bands in Figure 3 for illustration).
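The switching criterion of a-BOBAX then reduces to a simple check on the average confidence-interval half-width; a sketch (names ours; the tolerance is in the units of the objective, e.g., balanced accuracy points):

```python
import numpy as np
from scipy.stats import norm

def precision_reached(pd_sd_per_grid_point, tol, alpha=0.05):
    # True once the mean (1 - alpha) CI half-width around the PD estimate
    # is within the user-specified tolerance; a-BOBAX then switches to
    # pure optimization (e.g., EI only).
    q = norm.ppf(1.0 - alpha / 2.0)
    return float(np.mean(q * np.asarray(pd_sd_per_grid_point))) <= tol
```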
### 4.3 Theoretical and Practical Considerations
##### Runtime Complexity
Since BOBAX comes with additional overhead, we discuss this here in more
detail. The computation of the expectation requires posterior samples of the
execution path $e_{\mathcal{A}}~{}\sim p(e_{\mathcal{A}}~{}|~{}A_{T})$. This
is achieved by sampling from the posterior GP $\tilde{c}~{}\sim
p(c~{}|~{}A_{T})$ and execution of $O_{\mathcal{A}}$ on those samples, which
may produce a computational overhead depending on the costs of running
$O_{\mathcal{A}}$. We assume that executing $O_{\mathcal{A}}$ is negligible in terms of runtime. However, to compute the entropy
$\mathbb{H}\left[c_{\bm{\lambda}}|A_{T},e_{\mathcal{A}}\right]$, the posterior
process needs to be trained based on $A_{T}\cup e_{\mathcal{A}}$ (which has
size $T+n\cdot G$). Thus, the overall runtime complexity is dominated by $\mathcal{O}\left(n_{\text{path}}\cdot(T+n\cdot G)^{3}\right)$, as we compute the
entropy $n_{\text{path}}$ times to approximate the expectation and since
training a GP is cubic in the number of data points. Therefore, we recommend
to keep an eye on the runtime overhead of the calculation of
$\textrm{EIG}_{\textrm{PDP}}$ in relation to evaluating $c$ (e.g., training
and evaluating an ML algorithm). Especially in the context of deep learning,
the cost of evaluating a single configuration is usually orders of magnitude higher than that of computing the $\textrm{EIG}_{\textrm{PDP}}$ (in our case, the computation of the $\textrm{EIG}_{\textrm{PDP}}$ ranged from the order of a few seconds to a few minutes). Also, we would like to emphasize
that the implementation of our method is based on GPflow (Matthews et al.,
2017), which allows fast execution of GPs on GPUs. Since GPUs are typically in
use for training in the context of DL anyway, they can easily be leveraged in
between iterations to speed up the computation of the
$\textrm{EIG}_{\textrm{PDP}}$.
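For intuition on the magnitudes involved, a back-of-the-envelope estimate with illustrative sizes of our own choosing (not taken from the paper):

```python
# Rough cost of one EIG_PDP evaluation under the cubic-GP assumption
T, n, G, n_path = 60, 100, 20, 20
points = T + n * G             # GP conditioning set per sampled path
flops = n_path * points**3     # cubic GP refit, repeated n_path times
print(points, f"{flops:.1e}")  # 2060 points, ~1.7e+11 flops
```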
##### Marginal Effects for Multiple Hyperparameters
Until now we have assumed that a user specifies a single hyperparameter of
interest $\bm{\lambda}_{S}$ for which we will compute the PD. However, it is
difficult to prioritize the hyperparameter of interest a-priori. Fortunately,
it is possible to extend the execution path to compute
$\textrm{EIG}_{\textrm{PDP}}$ by the respective execution paths of the PDs
with regards to all variables
$e_{\mathcal{A}}=e_{\mathcal{A},\bm{\lambda}_{1}}\cup
e_{\mathcal{A},\bm{\lambda}_{2}}\cup...\cup e_{\mathcal{A},\bm{\lambda}_{d}}$.
We investigate the differences between $\textrm{EIG}_{\textrm{PDP}}$ for a
single hyperparameter vs. for multiple hyperparameters in more detail in
Appendix C; in the practical use case (see Section 6), we compute the
$\textrm{EIG}_{\textrm{PDP}}$ for multiple hyperparameters.
## 5 Benchmark
In this section, we present experiments to demonstrate the validity of our
method. In particular, we look at:
##### Hypothesis H1
Performing BO with $\text{EIG}_{\textrm{PDP}}$ as acquisition function is more
efficient than random search in optimizing for accurate interpretations.
##### Hypothesis H2
Through BOBAX the accuracy of marginal effect estimates is clearly improved
without a significant loss of optimization performance.
### 5.1 Experimental Setup
Figure 2: The first three plots show the estimated PD with $95\%$ confidence
interval (blue) based on the surrogate model $\hat{c}$ after $T=30$ iterations
vs. the true marginal effect (black). BAX and BOBAX yield more accurate
estimates for the PD as compared the BO with EI. The right plot shows the
cumulative regret for the three methods. BAX, which is not performing
optimization at all, is also clearly outperformed in optimization performance.
BOBAX reaches the optimization result of BO with EI only after a few more
iterations.
##### Objective Functions
We apply our method to synthetic functions which are treated as black-box
function during optimization: Branin ($d=2$), Camelback ($d=2$), Styblinski-
Tang ($d=3$), Hartmann3 ($d=3$) and Hartmann6 ($d=6$).
##### Algorithms
To investigate H1, we consider BO with $\text{EIG}_{\text{PDP}}$ as
acquisition function (BAX). For H2, we consider BOBAX as described in
Algorithm 2, where we iterate evenly ($k=2$) between EI and
$\text{EIG}_{\text{PDP}}$ as acquisition function. Following Neiswanger et al.
(2021) we set the number of execution path samples to $20$ to approximate the
expectation in Eq. (3) in both variants. As strong baseline for accurate PDs
we consider random search (RS) and BO with posterior variance as acquistion
function (PVAR) as a pure exploration case of LCB. As strong baseline for
optimization we consider BO with EI (BO-EI). Further variants of our methods
(e.g., different frequencies of interleaving) and additional baselines (such
as BO with LCB with different exploration factors, or BO with EI and random
interleaving) are described in Appendix C.
##### Evaluation
We evaluate the accuracy of PD estimates by comparing the PD
$\varphi_{S}^{(g)}$ (estimated based on $\hat{c}$) against the PD
$\tilde{\varphi}_{S}^{(g)}$ computed on the ground-truth objective function
$c$, approximated with the same sample $\bm{\lambda}_{R}^{(i)}$ and the same
grid size $G$. As measure we use the $L_{1}$ distance
$\textrm{d}_{\text{L}_{1}}:=\frac{1}{G}\sum_{g=1}^{G}\left|\varphi_{S}^{(g)}-\tilde{\varphi}_{S}^{(g)}\right|$
averaged over all grid points. To assess optimization performance, we report
the simple regret $c(\hat{\bm{\lambda}})-c(\bm{\lambda}^{\ast})$, where
$\bm{\lambda}^{\ast}$ denotes the theoretical optimum of a function, and
$\hat{\bm{\lambda}}\in\textrm{argmin}\left\\{c_{\bm{\lambda}}~{}|~{}(\bm{\lambda},c_{\bm{\lambda}})\in
A_{T}\right\\}$ is the best found configuration during optimization.
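Both evaluation quantities are one-liners; a sketch (names ours):

```python
import numpy as np

def l1_pdp_distance(pdp_est, pdp_true):
    # d_{L1}: mean absolute deviation between estimated and ground-truth PD
    return float(np.mean(np.abs(np.asarray(pdp_est) - np.asarray(pdp_true))))

def simple_regret(archive_y, c_opt):
    # best observed objective value minus the theoretical optimum
    return float(np.min(archive_y) - c_opt)
```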
##### Further Configurations
A Gaussian process with a squared exponential kernel is used as surrogate
model for all BO variants, and PDs are estimated on the respective surrogate
models. For RS, a GP (with same configuration) is fitted on $A_{T}$ and the PD
is computed thereon. Acquisition function optimization is performed by
randomly sampling $1500$ configurations, evaluating the respective acquisition
function and returning the best. Each (BO) run is given a maximum number of
$30\cdot d$ function evaluations.
##### Reproducibility and Open Science
The implementation of methods as well as reproducible scripts for all
experiments are publicly made available. Each experiment is replicated $20$
times based on $20$ different seeds fixed across all variants. More details on the code and on the computational setup can be found in Appendix E.
### 5.2 H1: More accurate interpretations
Our experiments support hypothesis H1, i.e., we can achieve more accurate PD
estimates more efficiently through targeted sampling via the
$\textrm{EIG}_{\textrm{PDP}}$. An example run on the Branin function shown in
Figure 2 illustrates the behavior of the methods that is observable across all
experiments: BAX yields clearly more accurate PDPs than BO with EI already after a few iterations. Figure 4 in Appendix C.2 further shows that PDs
estimated on data produced by BO with EI might provide not only
quantitatively, but also qualitatively wrong information in terms of ranking
the values $\varphi_{\bm{\lambda}_{S}^{(g)}}$ differently than the ground-
truth. As expected, the increased accuracy of interpretations through BAX comes at the cost of optimization efficiency. Results aggregated across all problems and replications confirm this behavior on a broader scale; see Table 1 (note that the different functions live on different scales, so we normalize by reporting relative metrics w.r.t. baselines: RS for PDP estimates and BO-EI for optimization regret). BAX produces more accurate PDPs than RS (which can be assumed to converge to the true marginal effect) already at early stages, and significantly ($\alpha=1\%$) outperforms RS with fewer iterations. We conclude that both BAX and PVAR can contribute to
approximating the true marginal effect well, but BAX is converging faster. In
addition, BO with EI is significantly outperformed in terms of accuracy of
PDPs, which supports our assumption of lowered quality caused by a heavy sampling bias.
Table 1: Left: $L_{1}$ error of the estimated PDP w.r.t. the ground-truth PDP, relative to RS as baseline. Negative values mean a relative reduction of the $L_{1}$ error compared to random search. Right: Optimization regret relative to BO-EI as baseline. Results are averaged across all $20$ replications. Best values are bold, and values are underlined if not significantly worse than the best based on a post-hoc Friedman test ($\alpha=1\%$); see also Demsar (2006); García et al. (2010) and Appendix C.1 for more details.

Relative $\textrm{d}_{L_{1}}(\textrm{PDP})$ after a fraction of the max. iterations spent:

| | 25% | 50% | 75% | 100% |
|---|---|---|---|---|
| RS | 0.00 | 0.00 | 0.00 | 0.00 |
| BO-EI | 0.18 | 0.39 | 0.47 | 0.67 |
| PVAR | 0.13 | -0.08 | 0.08 | 0.14 |
| BAX | -0.17 | -0.20 | -0.07 | 0.00 |
| BOBAX | -0.14 | -0.16 | -0.04 | 0.03 |

Relative optimization regret after a fraction of the max. iterations spent:

| | 25% | 50% | 75% | 100% |
|---|---|---|---|---|
| RS | 2.42 | 160.99 | 530.70 | 951.47 |
| BO-EI | 0.00 | 0.00 | 0.00 | 0.00 |
| PVAR | 3.38 | 232.14 | 741.69 | 1887.22 |
| BAX | 2.27 | 242.062 | 602.15 | 1408.62 |
| BOBAX | 1.68 | 5.04 | 4.73 | 3.26 |
### 5.3 H2: More accurate interpretations at no relevant loss of optimization
efficiency
Our experiments also support hypothesis H2, i.e., with BOBAX we can achieve
clearly more accurate PD estimates while maintaining a competitive level of
optimization efficiency. Table 1 compares the accuracy of PD estimates
(measured via $\textrm{d}_{L_{1}}$) and optimization regret as compared to
baselines RS and BO-EI, respectively, aggregated over all five objective
functions. (BO)BAX allows for more accurate PDPs than the other methods, with
diminishing relative distance to RS, while BO with EI is clearly outperformed.
On the other hand, it can be observed that BOBAX yields optimization performance comparable to BO with EI throughout the course of optimization,
whereas RS is clearly outperformed. So, BOBAX combines the best of both
worlds: good interpretability (even better than RS) and efficient optimization
(on par with BO-EI). Figure 5 in Appendix C.2 shows that this effect is
visible for all objective functions, but the strength of the effect depends on the objective function.
We conclude that our experiments support that BOBAX makes no (or only small)
compromises in optimization performance, but yields clearly better estimates
of marginal effects at the same time.
## 6 Practical HPO Application
Figure 3: Comparing PDP evolution over the number of iterations for EI and BAX. BAX returns fairly certain PDPs early on, whereas BO with EI requires much more time.

Table 2: Iterations needed to reach the desired precision of the PD estimate of $\pm 1.5$ balanced accuracy points, accuracy of the final PD based on the $L_{1}$ error to the ground truth, as well as the final model performance reached. Results are averaged across all 30 replications and all 15 datasets. Best values are bold, and values are underlined if not significantly worse than the best based on a post-hoc Friedman test ($\alpha=1\%$); see also Demsar (2006); García et al. (2010) and Appendix C.1 for more details.

| | Iterations to desired precision | Rel. $\textrm{d}_{L_{1}}$ (PDP) | 1 - Balanced Accuracy |
|---|---|---|---|
| RS | 14.91 | 0.49 | 22.56 |
| BO-EI | 22.59 | 0.57 | 19.38 |
| BAX | 9.85 | 0.51 | 23.97 |
| a-BOBAX | 11.56 | 0.52 | 19.94 |
We demonstrate a-BOBAX on a concrete HPO scenario, following the setup of
Moosbauer et al. (2021). We tune common hyperparameters of a neural network
with regards to balanced validation accuracy on 15 different datasets
representing different tasks from different domains (see Tables 5, 6 in
Appendix D) using the interface provided by YAHPO gym (Pfisterer et al.,
2021). We compare RS, EI, BAX, and adaptive BOBAX (a-BOBAX). In a-BOBAX, we
set the desired width of confidence intervals to $\pm 1.5\%$ balanced accuracy
points; we emphasize, though, that this value can be set by the user. For
a-BOBAX, we compute the $\textrm{EIG}_{\textrm{PDP}}$ jointly for the PDPs of
_learning rate_ , _dropout_ , _max. number of units_ , _weight decay_ , and
_momentum_. The respective methods ran under the same conditions as in Section
5, but were replicated $30$ times.
Figure 3 shows how accuracy of the PD estimate increases over time for BO with
EI vs. BAX. We observe that BAX is clearly more efficient in returning an
accurate estimate, which is in line with the results we observed in Section 5.
As motivated in Section 4.2, a practitioner might prefer to rather ensure a
minimum accuracy of IML measures, and therefore, handle this rather as a
constraint than as an objective. Table 2 shows the time to reach the desired precision of $\pm 1.5\%$ for the PDP, as well as the final accuracy of PDs and the final optimization performance, aggregated over all experiments and replications. We observe that a-BOBAX is (i) significantly faster in reaching the desired precision threshold, allowing a user to interact earlier with confidence, (ii) comparable to RS in terms of the final accuracy of PD estimates, and (iii) comparable to BO-EI in terms of optimization performance.
Note that the effect again depends strongly on the respective dataset (see
Figures 7, 8, 9 in Appendix D.2).
## 7 Discussion and Conclusion
##### Findings
We proposed (adaptive) BOBAX, modifying Bayesian Optimization (BO) for black-
box optimization and HPO to enhance interpretability of the optimization
problem at hand. We achieved this by adapting BAX to optimize for accurate
marginal effects and then interleaved BO and BAX. We further showed that BOBAX
can significantly enhance the accuracy of PD estimates during an optimization
procedure, while not losing optimization performance.
##### Usage
If a user has some desired precision of the IML estimates in mind, a-BOBAX
allows them to make use of BAX only until this level is reached and to focus on optimization quality afterwards. This simple yet efficient strategy allows them to get the most out of the overall budget.
##### Critical View and Limitations
Even though the usage of EIG is beneficial to the quality of a PD estimate,
there are also examples where no significant improvement is observed. We
assume that this particularly holds for hyperparameters that have a simple
(and therefore easy-to-learn) effect on performance. Consequently, the
marginal effect is easily learned for any of the methods. In addition to using
the adaptive version of BOBAX, we recommend dropping these simple-to-learn
hyperparameters from the joint computation of the EIG (Section 4.3) as soon as the PDPs are sufficiently certain. Furthermore, our method comes with a computational overhead compared to traditional BO, since computing the EIG with BAX requires additional compute time. In terms of application
to HPO, we expect that the cost for training and validating hyperparameter
configurations or architectures of neural networks will be much larger than
BOBAX’s overhead in most relevant cases.
##### Outlook
We believe that BOBAX will contribute in particular towards more human-
centered HPO, where developers can start inspecting intermediate results as
soon as desired confidence was reached and then adapt the configuration space
if necessary. Although we focused on PDPs as an interpretability method,
extending our BOBAX idea to other IML approaches would be straightforward and
opens up new follow-up directions. As one next step, we envision extending
BOBAX to the multi-fidelity setting (Li et al., 2017; Falkner et al., 2018)
which is required for more expensive HPO and AutoML problems. Last but not
least, we emphasize that we developed BOBAX primarily for HPO problems, but it
can also be applied to any black-box optimization problem, e.g., in
engineering or chemistry.
## References
* (Akiba et al. (2019) Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework. In _Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , 2019.
* Bergstra et al. (2011) James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. _Advances in neural information processing systems_ , 24, 2011.
* Biedenkapp et al. (2017) Andre Biedenkapp, Marius Lindauer, Katharina Eggensperger, Frank Hutter, Chris Fawcett, and Holger H. Hoos. Efficient parameter importance analysis via ablation with surrogates. In Satinder Singh and Shaul Markovitch (eds.), _Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA_ , pp. 773–779. AAAI Press, 2017.
* Binder et al. (2020) Martin Binder, Julia Moosbauer, Janek Thomas, and Bernd Bischl. Multi-objective hyperparameter tuning and feature selection using filter ensembles. In Carlos Artemio Coello Coello (ed.), _GECCO ’20: Genetic and Evolutionary Computation Conference, Cancún Mexico, July 8-12, 2020_ , pp. 471–479. ACM, 2020.
* Bischl et al. (2021) Bernd Bischl, Martin Binder, Michel Lang, Tobias Pielok, Jakob Richter, Stefan Coors, Janek Thomas, Theresa Ullmann, Marc Becker, Anne-Laure Boulesteix, Difan Deng, and Marius Lindauer. Hyperparameter optimization: Foundations, algorithms, best practices and open challenges. _CoRR_ , abs/2107.05847, 2021.
* Carmichael et al. (2021) Zachariah Carmichael, Tim Moon, and Sam Ade Jacobs. Learning interpretable models through multi-objective neural architecture search. _CoRR_ , abs/2112.08645, 2021.
* Coors et al. (2021) Stefan Coors, Daniel Schalk, Bernd Bischl, and David Rügamer. Automatic componentwise boosting: An interpretable automl system. _CoRR_ , abs/2109.05583, 2021.
* Cutler et al. (2007) D Richard Cutler, Thomas C Edwards Jr, Karen H Beard, Adele Cutler, Kyle T Hess, Jacob Gibson, and Joshua J Lawler. Random forests for classification in ecology. _Ecology_ , 88(11):2783–2792, 2007.
* Demsar (2006) Janez Demsar. Statistical comparisons of classifiers over multiple data sets. _J. Mach. Learn. Res._ , 7:1–30, 2006.
* Drozdal et al. (2020) Jaimie Drozdal, Justin Weisz, Dakuo Wang, Gaurav Dass, Bingsheng Yao, Changruo Zhao, Michael Muller, Lin Ju, and Hui Su. Trust in AutoML. _Proceedings of the 25th International Conference on Intelligent User Interfaces_ , Mar 2020.
* (Falkner et al. (2018) Stefan Falkner, Aaron Klein, and Frank Hutter. BOHB: robust and efficient hyperparameter optimization at scale. In Jennifer G. Dy and Andreas Krause (eds.), _Proceedings of the 35th International Conference on Machine Learning_ , volume 80 of _Proceedings of Machine Learning Research_ , pp. 1436–1445. PMLR, 2018.
* Fisher et al. (2019) Aaron Fisher, Cynthia Rudin, and Francesca Dominici. All models are wrong, but many are useful: Learning a variable’s importance by studying an entire class of prediction models simultaneously. _J. Mach. Learn. Res._ , 20:177:1–177:81, 2019.
* Friedman (2001) Jerome H Friedman. Greedy function approximation: a gradient boosting machine. _Annals of statistics_ , pp. 1189–1232, 2001.
* García et al. (2010) Salvador García, Alberto Fernández, Julián Luengo, and Francisco Herrera. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. _Inf. Sci._ , 180(10):2044–2064, 2010.
* Golovin et al. (2017) Daniel Golovin, Benjamin Solnik, Subhodeep Moitra, Greg Kochanski, John Karro, and D. Sculley. Google vizier: A service for black-box optimization. In _Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, August 13 - 17, 2017_ , pp. 1487–1495. ACM, 2017.
* Hasebrook et al. (2022) Niklas Hasebrook, Felix Morsbach, Niclas Kannengießer, Jörg K. H. Franke, Frank Hutter, and Ali Sunyaev. Why do machine learning practitioners still use manual tuning? A qualitative study. _CoRR_ , abs/2203.01717, 2022.
* Head et al. (2022) Tim Head, Manoj Kumar, Holger Nahrstaedt, Gilles Louppe, and Iaroslav Shcherbatyi. scikit-optimize/scikit-optimize, April 2022. URL https://doi.org/10.5281/zenodo.6451894.
* Hennig & Schuler (2012) Philipp Hennig and Christian J. Schuler. Entropy search for information-efficient global optimization. _J. Mach. Learn. Res._ , 13:1809–1837, 2012.
* (Hernández-Lobato et al. (2014) José Miguel Hernández-Lobato, Matthew W. Hoffman, and Zoubin Ghahramani. Predictive entropy search for efficient global optimization of black-box functions. In Zoubin Ghahramani, Max Welling, Corinna Cortes, Neil D. Lawrence, and Kilian Q. Weinberger (eds.), _Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada_ , pp. 918–926, 2014.
* Hutter et al. (2011) Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Carlos A. Coello Coello (ed.), _Learning and Intelligent Optimization - 5th International Conference, LION 5, Rome, Italy, January 17-21, 2011. Selected Papers_ , volume 6683 of _Lecture Notes in Computer Science_ , pp. 507–523. Springer, 2011.
* (Hutter et al. (2014) Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. An efficient approach for assessing hyperparameter importance. In _Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014_ , volume 32 of _JMLR Workshop and Conference Proceedings_ , pp. 754–762. JMLR.org, 2014.
* Hvarfner et al. (2022) Carl Hvarfner, Danny Stoll, Artur L. F. Souza, Marius Lindauer, Frank Hutter, and Luigi Nardi. $\pi$bo: Augmenting acquisition functions with user beliefs for bayesian optimization. _CoRR_ , abs/2204.11051, 2022. doi: 10.48550/arXiv.2204.11051. URL https://doi.org/10.48550/arXiv.2204.11051.
* Jones (2001) Donald R. Jones. A taxonomy of global optimization methods based on response surfaces. _Journal of Global Optimization_ , 21(4):345–383, 2001.
* Jones et al. (1998) Donald R. Jones, Matthias Schonlau, and William J. Welch. Efficient global optimization of expensive black-box functions. _J. Glob. Optim._ , 13(4):455–492, 1998.
* Lemmens & Croux (2006) Aurélie Lemmens and Christophe Croux. Bagging and boosting classification trees to predict churn. _Journal of Marketing Research_ , 43(2):276–286, 2006.
* Li et al. (2017) Lisha Li, Kevin G. Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. _J. Mach. Learn. Res._ , 18:185:1–185:52, 2017.
* Lindley (1956) D. V. Lindley. On a measure of the information provided by an experiment. _The Annals of Mathematical Statistics_ , 27(4):986–1005, 1956. ISSN 00034851.
* Matthews et al. (2017) Alexander G. de G. Matthews, Mark van der Wilk, Tom Nickson, Keisuke Fujii, Alexis Boukouvalas, Pablo León-Villagrá, Zoubin Ghahramani, and James Hensman. GPflow: A Gaussian process library using TensorFlow. _Journal of Machine Learning Research_ , 18(40):1–6, April 2017.
* Moosbauer et al. (2021) Julia Moosbauer, Julia Herbinger, Giuseppe Casalicchio, Marius Lindauer, and Bernd Bischl. Explaining hyperparameter optimization via partial dependence plots. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), _Advances in Neural Information Processing Systems_ , volume 34, pp. 2280–2291. Curran Associates, Inc., 2021.
* Neiswanger et al. (2021) Willie Neiswanger, Ke Alexander Wang, and Stefano Ermon. Bayesian algorithm execution: Estimating computable properties of black-box functions using mutual information. In Marina Meila and Tong Zhang (eds.), _Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event_ , volume 139 of _Proceedings of Machine Learning Research_ , pp. 8005–8015. PMLR, 2021.
* Olson & Moore (2016) Randal S. Olson and Jason H. Moore. TPOT: A tree-based pipeline optimization tool for automating machine learning. In _Proceedings of the 2016 Workshop on Automatic Machine Learning_ , volume 64 of _JMLR Workshop and Conference Proceedings_ , pp. 66–74. JMLR.org, 2016.
* Pfisterer et al. (2021) Florian Pfisterer, Lennart Schneider, Julia Moosbauer, Martin Binder, and Bernd Bischl. YAHPO gym - design criteria and a new multifidelity benchmark for hyperparameter optimization. _CoRR_ , abs/2109.03670, 2021.
* Probst et al. (2019) Philipp Probst, Anne-Laure Boulesteix, and Bernd Bischl. Tunability: Importance of hyperparameters of machine learning algorithms. _Journal of Machine Learning Research_ , 20:53:1–53:32, 2019.
* Snoek et al. (2012) Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of machine learning algorithms. In _Advances in Neural Information Processing Systems 25_ , pp. 2960–2968, 2012.
* Srinivas et al. (2010) Niranjan Srinivas, Andreas Krause, Sham M. Kakade, and Matthias W. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In Johannes Fürnkranz and Thorsten Joachims (eds.), _Proceedings of the 27th International Conference on Machine Learning_ , pp. 1015–1022. Omnipress, 2010.
* Thornton et al. (2013) Chris Thornton, Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. Auto-weka: combined selection and hyperparameter optimization of classification algorithms. In _The 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , pp. 847–855. ACM, 2013.
* Turner et al. (2020) Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, and Isabelle Guyon. Bayesian optimization is superior to random search for machine learning hyperparameter tuning: Analysis of the black-box optimization challenge 2020. In _NeurIPS 2020 Competition and Demonstration Track_ , volume 133 of _Proceedings of Machine Learning Research_ , pp. 3–26. PMLR, 2020.
* Van Rijn & Hutter (2018) Jan N Van Rijn and Frank Hutter. Hyperparameter importance across datasets. In _Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, pp. 2367–2376, 2018.
* Wenger & Olden (2012) Seth J Wenger and Julian D Olden. Assessing transferability of ecological models: an underappreciated aspect of statistical validation. _Methods in Ecology and Evolution_ , 3(2):260–267, 2012.
* Wu et al. (2017) Jian Wu, Matthias Poloczek, Andrew Gordon Wilson, and Peter I. Frazier. Bayesian optimization with gradients. In _Advances in Neural Information Processing Systems 30_ , pp. 5267–5278, 2017.
* Xanthopoulos et al. (2020) Iordanis Xanthopoulos, Ioannis Tsamardinos, Vassilis Christophides, Eric Simon, and Alejandro Salinger. Putting the human back in the AutoML loop. In _Proceedings of the Workshops of the EDBT/ICDT 2020 Joint Conference_ , volume 2578 of _CEUR Workshop Proceedings_. CEUR-WS.org, 2020.
* Young et al. (2018) M. Todd Young, Jacob D. Hinkle, Arvind Ramanathan, and Ramakrishnan Kannan. Hyperspace: Distributed bayesian hyperparameter optimization. In _30th International Symposium on Computer Architecture and High Performance Computing, SBAC-PAD 2018, Lyon, France, September 24-27, 2018_ , pp. 339–347. IEEE, 2018. doi: 10.1109/CAHPC.2018.8645954. URL https://doi.org/10.1109/CAHPC.2018.8645954.
* Zela et al. (2018) Arber Zela, Aaron Klein, Stefan Falkner, and Frank Hutter. Towards automated deep learning: Efficient joint neural architecture and hyperparameter search. _CoRR_ , abs/1807.06906, 2018. URL http://arxiv.org/abs/1807.06906.
* Zhang et al. (2018) Zhongheng Zhang, Marcus W Beck, David A Winkler, Bin Huang, Wilbert Sibanda, Hemant Goyal, et al. Opening the black box of neural networks: methods for interpreting neural network models in clinical applications. _Annals of translational medicine_ , 6(11), 2018.
* Zimmer et al. (2021) Lucas Zimmer, Marius Lindauer, and Frank Hutter. Auto-PyTorch Tabular: Multi-fidelity metalearning for efficient and robust AutoDL. _IEEE TPAMI_ , 2021. Preprint via Early Access.
* Zöller et al. (2022) Marc-André Zöller, Waldemar Titov, Thomas Schlegel, and Marco F. Huber. Xautoml: A visual analytics tool for establishing trust in automated machine learning. _CoRR_ , abs/2202.11954, 2022.
## Appendix B Additional methodological aspects
### B.1 Interpretability methods beyond the PDP
BOBAX is generic in the sense that it can be applied to IML methods other than
the PDP that are of interest to the user, as long as the execution path of the
respective method is accessible to BOBAX.
While we considered the partial dependence method to estimate main effects
(i.e., the marginal effect of a single hyperparameter $\bm{\lambda}_{s}$ on
estimated performance) in our experiments, Algorithm 1 can be extended to
estimate interaction effects of two hyperparameters $S=\\{s,s^{\prime}\\}$.
This is done by simply replacing the grid points in Algorithm 1 by a two-
dimensional grid
$\left(\bm{\lambda}_{s}^{(g)},\bm{\lambda}_{s^{\prime}}^{(g^{\prime})}\right)$
for all pairs $g,g^{\prime}\in\\{1,2,...,G\\}$ with
$\left(\bm{\lambda}_{s}^{(1)},...,\bm{\lambda}_{s}^{(G)}\right)$ and
$\left(\bm{\lambda}_{s^{\prime}}^{(1)},...,\bm{\lambda}_{s^{\prime}}^{(G)}\right)$
representing equidistant grids. With this modified execution path, our method
can be straightforwardly applied to estimate interaction effects.
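For concreteness, a minimal sketch of this two-dimensional grid construction; the bounds and grid size are illustrative rather than taken from the paper:

```python
import numpy as np
from itertools import product

# Illustrative bounds and grid size; the method only requires equidistant grids.
G = 20
grid_s = np.linspace(0.0, 1.0, G)          # equidistant grid for lambda_s
grid_s_prime = np.linspace(1e-4, 1e-1, G)  # equidistant grid for lambda_s'
# All G^2 pairs (lambda_s^(g), lambda_s'^(g')) replace the 1D grid points.
pairs = list(product(grid_s, grid_s_prime))
```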
Also, other IML methods can be optimized for with BOBAX, for example,
hyperparameter importance via permutation feature importance (PFI) (Fisher
et al., 2019). The importance of a single hyperparameter $\bm{\lambda}_{S}$ is
computed by shuffling the values of this hyperparameter in the archive
$A_{T}$, resulting in a modified archive $\tilde{A}_{T,\bm{\lambda}_{S}}$; the
difference in errors of the model $\hat{c}$ on $A_{T}$ and on
$\tilde{A}_{T,\bm{\lambda}_{S}}$ then quantifies the importance. The respective
execution path $e_{\mathcal{A}}$ is the joint set of all shuffled versions of
the archive,
$\tilde{A}_{T,\bm{\lambda}_{1}}\cup\tilde{A}_{T,\bm{\lambda}_{2}}\cup...\cup\tilde{A}_{T,\bm{\lambda}_{d}}$.
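A minimal sketch of this PFI computation over the archive, assuming the archive stores (configuration tuple, cost) pairs; `surrogate` and `error` are placeholders of ours for the fitted model $\hat{c}$ and an error measure:

```python
import random

def pfi_importance(archive, surrogate, error, j, n_perm=10):
    """Sketch of hyperparameter importance via PFI (Fisher et al., 2019)
    computed on the BO archive."""
    X = [lam for lam, _ in archive]   # configuration tuples
    y = [c for _, c in archive]       # observed costs
    base = error(surrogate, X, y)
    deltas = []
    for _ in range(n_perm):
        col = [x[j] for x in X]
        random.shuffle(col)           # permute hyperparameter j only
        X_tilde = [x[:j] + (v,) + x[j + 1:] for x, v in zip(X, col)]
        deltas.append(error(surrogate, X_tilde, y) - base)
    return sum(deltas) / n_perm       # mean increase in surrogate error
```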
### B.2 BOBAX and Variants
Algorithm 2 BOBAX
Input: $k$, $n_{\text{init}}$, $O_{\mathcal{A}}$
$A_{T}\leftarrow$ Sample initial design of size $n_{\text{init}}$ over $\Lambda$
while stopping criterion not met do
  if $T\bmod k=0$ then
    $\bm{\lambda}^{(T+1)}\leftarrow\arg\max_{\bm{\lambda}\in\Lambda}\textrm{EIG}_{\textrm{PDP}}(\bm{\lambda})$
  else
    $\bm{\lambda}^{(T+1)}\leftarrow\arg\max_{\bm{\lambda}\in\Lambda}\textrm{EI}(\bm{\lambda})$
  end if
  $c_{\bm{\lambda}^{(T+1)}}\leftarrow c(\bm{\lambda}^{(T+1)})$
  $A_{T+1}\leftarrow A_{T}\cup\{(\bm{\lambda}^{(T+1)},c_{\bm{\lambda}^{(T+1)}})\}$
  $T\leftarrow T+1$
end while
Return $A_{T}$, $O_{\mathcal{A}}(\hat{c})$
Algorithm 2 shows the BOBAX algorithm as introduced and discussed in the main
paper. We have investigated two more alternative acquisition functions to
trade off interpretability and optimization efficiency. One is a probabilistic
variant of interleaving $\textrm{EIG}_{\textrm{PDP}}$, where in every
iteration
$\displaystyle\bm{\lambda}^{(T+1)}=\textrm{arg max}_{\bm{\lambda}\in\Lambda}\begin{cases}\textrm{EIG}_{\textrm{PDP}}(\bm{\lambda})&\textrm{if }p\leq\pi\\ \textrm{EI}(\bm{\lambda})&\textrm{if }p>\pi\end{cases}$
where $p\sim\textrm{Unif}(0,1)$ and $\pi$ is a threshold set by the user. If
$\pi$ is set to $0.5$, this corresponds to the probabilistic variant of
Algorithm 2 with $k=2$. We call this variant
$\textrm{BOBAX}_{\textrm{prob}}^{\pi}$. This method also opens up the
possibility to reduce the relative amount of search for interpretability (as a
kind of exploration) over time via an annealing strategy in which the
probability $\pi$ is lowered over time.
As a second variant, we investigated a multiplicative variant of
$\textrm{EIG}_{\textrm{PDP}}$ and EI inspired by Hvarfner et al. (2022):
$\displaystyle\textrm{EIBAX}^{\beta}(\bm{\lambda})=\textrm{EI}(\bm{\lambda})\cdot\textrm{EIG}_{\textrm{PDP}}(\bm{\lambda})^{\beta/T},$
where the values of a sampled batch of
$\textrm{EIG}_{\textrm{PDP}}(\bm{\lambda})$ are min-max-scaled to $[0,1]$.
Note that, in comparison to the interleaving strategy, this method has a
computational disadvantage since it requires computing the
$\textrm{EIG}_{\textrm{PDP}}$ in _every_ iteration.
Note that in any of the variants above, the EI can be replaced by any other
acquisition function.
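To make the control flow concrete, here is a minimal Python sketch of Algorithm 2 with the probabilistic interleaving variant as an option; all function arguments are placeholders for the components described above, not an actual API:

```python
import random

def bobax(sample_init, eval_cost, argmax_eig_pdp, argmax_ei,
          k, n_init, n_iter, pi=None):
    """Sketch of Algorithm 2 (BOBAX). With pi set, the EIG_PDP step is
    chosen probabilistically (BOBAX_prob^pi) instead of every k-th step."""
    archive = [(lam, eval_cost(lam)) for lam in sample_init(n_init)]
    for t in range(n_iter):
        use_eig = (random.random() <= pi) if pi is not None else (t % k == 0)
        if use_eig:
            lam = argmax_eig_pdp(archive)  # interpretability-driven proposal
        else:
            lam = argmax_ei(archive)       # standard BO proposal via EI
        archive.append((lam, eval_cost(lam)))
    return archive
```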
## Appendix C Benchmark
### C.1 Additional Details
##### Details on evaluation
We performed a statistical test to allow for conclusions as to whether the
compared methods ($\textbf{RS},\textbf{EI},\textbf{BAX},\textbf{BOBAX}$)
perform significantly differently in terms of (1) the quality of the PD
estimate measured by $\textrm{d}_{\textrm{L}_{1}}$, and (2) optimization
performance as measured by the regret in Table 1. We applied a _Friedman aligned
ranks test_ as described in (García et al., 2010) on the respective
performance values on different objective functions and replications to
conclude whether there is a difference between methods. Note that the chosen
test is recommended over the Friedman test by García et al. (2010) in
particular if the number of algorithms is low (four to five) because of an
increased power. We applied a post hoc test with Hommel correction for
multiple testing, and report statistical significance based on corrected
p-values. We rely on the scmamp implementation
(https://github.com/b0rxa/scmamp).
##### Comparison with additional baselines
As additional baselines, we run BO with the LCB acquisition function
$\hat{c}(\bm{\lambda})+\tau\cdot\hat{s}^{2}(\bm{\lambda})$ with different
values of $\tau\in\{1,2,5\}$, denoted by $\textbf{LCB}^{1}$,
$\textbf{LCB}^{2}$, $\textbf{LCB}^{5}$. We also run BO with interleaved random
configurations every $k\in\{2,5,10\}$ iterations, denoted by
$\textbf{BO-RS}^{2},\textbf{BO-RS}^{5},\textbf{BO-RS}^{10}$. In addition, we
consider different variations of the BOBAX method as described in Section B.2:
$\textbf{EIBAX}^{20}$, $\textbf{EIBAX}^{50}$, $\textbf{EIBAX}^{100}$, as well
as $\textbf{BOBAX}^{0.5}_{\text{prob}}$. Finally, we have run BOBAX for
different degrees of random interleaving $k\in\{2,5,10\}$, denoted by
$\textbf{BOBAX}^{2},\textbf{BOBAX}^{5},\textbf{BOBAX}^{10}$.
Note that all (BAX) variants optimize for a PD of one variable only; we have
chosen the first variable as the default. To support our claim in Section 4.3
that our method can easily be applied to jointly compute the PDP for multiple
variables, we also include a variant which computes the PDP for _all_
variables, denoted by $\textbf{BAX}_{\textrm{all}}$, and compare it to BAX.
##### Technical details
All experiments only require CPUs (and no GPUs) and were computed on a Linux
cluster (see Table 3).
Table 3: Description of the infrastructure used for the experiments in this
paper.

| Computing Infrastructure | |
| --- | --- |
| Type | Linux CPU Cluster |
| Architecture | 28-way Haswell-EP nodes |
| Cores per Node | 1 |
| Memory limit (per core) | 2.2 GB |
##### Implementation details
Our implementation of BOBAX is based on the implementation provided by
Neiswanger et al. (2021)
(https://github.com/willieneis/bayesian-algorithm-execution), which in turn is
based on the GPflow (Matthews et al., 2017) implementation for Gaussian
processes.
Note that we do not optimize the hyperparameters of the GP (lengthscale,
kernel variance, and nugget effect) during BOBAX, in order to eliminate one
source of variance between methods. Instead, similarly to (Neiswanger et al.,
2021), we set those parameters to sensible default values. These are
determined by the following heuristic, executed prior to all experiments: for
every objective function, we perform maximum likelihood optimization of these
GP hyperparameters based on $200$ randomly sampled points, and choose the
configuration with the highest likelihood. This configuration is fixed across
all replications and methods. While this heuristic does not affect the
validity of our statements, since all methods are based on the same kernel
hyperparameters, we emphasize that choosing appropriate hyperparameters is
crucial for the performance of our method; therefore, a stable implementation
(as done in established BO libraries) is regarded as a necessary requirement
for practical usage.
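A sketch of this heuristic using GPflow; the Matérn-5/2 kernel and unit-cube sampling are our assumptions, since the paper does not spell them out here:

```python
import numpy as np
import gpflow

def default_gp_hyperparameters(objective, d, n=200, seed=0):
    """Fit GP hyperparameters by maximum likelihood on n random points
    (sketch of the heuristic described above)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(n, d))                # n random configurations
    Y = np.array([[objective(x)] for x in X])   # their costs, shape (n, 1)
    model = gpflow.models.GPR((X, Y), kernel=gpflow.kernels.Matern52())
    gpflow.optimizers.Scipy().minimize(model.training_loss,
                                       model.trainable_variables)
    return model  # lengthscale, variance, and noise are then fixed for all runs
```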
### C.2 Additional Results
First of all, to provide some evidence for our claim that BO with EI can
return inaccurate PDPs not only in absolute terms but also when considering
ranks, we have computed Spearman’s rank correlation of the respective PD
estimate with the ground truth objective (see Figure 4).
To evaluate many different algorithms based on the two criteria (1) error in
the PDP estimate $\textrm{d}_{\textrm{L}_{1}}$ and (2) optimization regret in
a compressed way, we look at the ranks of the different methods with regard to
both metrics, resulting in two ranks
$\textrm{rank}_{\textrm{d}_{\textrm{L}_{1}}}$ and
$\textrm{rank}_{\textrm{regret}}$. For the sake of evaluation, we assume that
interpretability and optimization efficiency are of equal importance and
therefore assign each method a combined rank of
$\frac{1}{2}\cdot\textrm{rank}_{\textrm{d}_{\textrm{L}_{1}}}+\frac{1}{2}\cdot\textrm{rank}_{\textrm{regret}}$.
We average the combined ranks of every method across replications and problem
instances. Table 4 shows the combined ranks for our proposed methods BAX and
BOBAX (introduced in Section 2) as well as all baselines.
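A minimal sketch of this combined-rank computation; lower metric values (better) receive lower ranks:

```python
from scipy.stats import rankdata

def combined_rank(d_l1_errors, regrets):
    """Average the per-metric ranks with equal weights, as in Table 4."""
    return 0.5 * rankdata(d_l1_errors) + 0.5 * rankdata(regrets)

# Example: three methods on one problem instance and replication.
print(combined_rank([0.12, 0.30, 0.25], [0.010, 0.020, 0.005]))  # -> [1.5 3.  1.5]
```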
Figure 6 compares the $\textrm{EIG}_{\textrm{PDP}}$ computed w.r.t. the PD of
a single variable vs. jointly for the PDs of all variables. We observe no drop
in performance: the joint computation performs comparably to the
single-variable computation when evaluated on a single variable, and performs
better if the accuracy of _all_ PDPs is considered.
Table 4: The table shows the combined ranks $\frac{1}{2}\cdot\textrm{rank}_{\textrm{d}_{\textrm{L}_{1}}}+\frac{1}{2}\cdot\textrm{rank}_{\textrm{regret}}$ of different methods introduced in Section 4 as well as additional baselines introduced in Appendix C.1. Results are averaged across $20$ replications and across all problems. Columns report the combined rank after 25%, 50%, 75%, and 100% of the maximum number of iterations spent. We observe that $\text{BOBAX}^{2}$ is best in terms of the combined rank.

| Method | 25% | 50% | 75% | 100% |
| --- | --- | --- | --- | --- |
| $\text{BOBAX}^{2}$ | 6.30 | 5.24 | 4.88 | 4.88 |
| $\text{BOBAX}^{5}$ | 6.36 | 5.91 | 5.23 | 5.08 |
| $\text{BOBAX}^{10}$ | 6.51 | 6.10 | 5.46 | 5.14 |
| $\text{BO-RS}^{2}$ | 7.65 | 7.02 | 5.88 | 5.49 |
| $\text{BOBAX}_{\textrm{prob}}^{0.5}$ | 6.96 | 6.39 | 5.92 | 5.72 |
| $\text{BO-RS}^{5}$ | 7.24 | 6.60 | 5.92 | 5.73 |
| $\text{BO-RS}^{10}$ | 7.37 | 6.64 | 6.18 | 5.78 |
| $\text{EIBAX}^{100}$ | 6.71 | 6.40 | 6.00 | 5.94 |
| BAX | 6.77 | 6.95 | 6.32 | 6.18 |
| $\text{EIBAX}^{20}$ | 6.91 | 6.79 | 6.10 | 6.20 |
| $\text{LCB}^{5}$ | 8.93 | 6.65 | 6.09 | 6.22 |
| EI | 7.53 | 7.44 | 6.67 | 6.28 |
| $\text{EIBAX}^{50}$ | 6.83 | 6.45 | 6.21 | 6.49 |
| RS | 8.82 | 9.19 | 7.74 | 6.92 |
| PVAR | 9.80 | 8.01 | 7.18 | 7.00 |
| $\text{LCB}^{2}$ | 8.94 | 7.30 | 7.30 | 7.54 |
| $\text{LCB}^{1}$ | 8.36 | 7.99 | 8.20 | 8.10 |
Figure 4: The figure shows Spearman’s rank correlation of the estimated PDP
vs. the iterations performed for the Branin function. It demonstrates that
PDPs computed on data from BO with expected improvement can even be wrong in
terms of correlation (as compared to BAX and RS), which matters a lot in the
context of optimization.
Figure 5: Error of PD estimates measured by $\textrm{d}_{L_{1}}$ (left) and
optimization regret (right) for the different synthetic objectives. While RS
is clearly outperformed in terms of optimization efficiency, BOBAX and BO with
EI perform comparably on this problem instance. Figure 6: The performance of
BOBAX with $\textrm{EIG}_{\textrm{PDP}}$ computed with regard to the first
variable only (blue) vs. the performance of BOBAX when
$\textrm{EIG}_{\textrm{PDP}}$ is computed for the joint execution paths of PD
estimates with regard to _all_ variables (orange). Left: Error of the PD
estimate for the _first_ variable (measured via
$\textrm{d}_{\textrm{L}_{1}}$). Right: Error of the PD estimate for _all_
variables (measured via $\textrm{d}_{\textrm{L}_{1}}$). We observe that the
joint computation delivers more accurate PDs over _all_ variables; however,
the difference is not dramatic.
## Appendix D Practical HPO Application
### D.1 Additional Details
Table 5: Hyperparameter space of the LCBench (Zimmer et al., 2021) benchmark suite within YAHPO gym (Pfisterer et al., 2021); _batch size_ and _maximum number of layers_ have been set to defaults $512$ and $5$, respectively.

| Name | Range | log | type |
| --- | --- | --- | --- |
| Max. number of units | $[64,512]$ | yes | int |
| Learning rate (SGD) | $[1\textrm{e}^{-4},1\textrm{e}^{-1}]$ | yes | float |
| Weight decay | $[1\textrm{e}^{-5},1\textrm{e}^{-1}]$ | no | float |
| Momentum | $[0.1,0.99]$ | no | float |
| Max. dropout rate | $[0.0,1.0]$ | no | float |
Table 6: Datasets accessed via the _lcbench_ suite of YAHPO gym (Pfisterer et al., 2021); the underlying data for the surrogate benchmark was made available by (Zimmer et al., 2021).

| ID | Name | Usecase | $n$ | $d$ |
| --- | --- | --- | --- | --- |
| 3945 | KDDCup09_appetency | Prediction of customer behavior | 50000 | 231 |
| 34539 | drug-directory | Drug classification | 120215 | 21 |
| 7593 | covertype | Forest cover type | 581012 | 55 |
| 126025 | adult | Salary prediction | 48842 | 15 |
| 126026 | nomao | Active learning in real-world | 34465 | 119 |
| 126029 | bank-marketing | Bank direct marketing | 4521 | 17 |
| 146212 | shuttle | | 58000 | 10 |
| 167104 | Australian | Credit approval | 690 | 15 |
| 167149 | kr-vs-kp | Chess game | 3196 | 37 |
| 167152 | mfeat-factors | Handwritten numerals | 2000 | 217 |
| 167161 | credit-g | Credit risk prediction | 1000 | 21 |
| 167168 | vehicle | Classification of vehicles | 846 | 22 |
| 167185 | cnae-9 | Classification of free text | 1080 | 857 |
| 167200 | higgs | Higgs boson detection | 98050 | 29 |
| 189908 | Fashion-MNIST | Classification of Zalando’s article images | 70000 | 785 |
As a practical HPO application, we have chosen the use case of tuning the
hyperparameters of a neural network (as shown in Table 5) on different
classification tasks (listed in Table 6) with regard to _balanced accuracy_
as the performance measure. In BAX / a-BOBAX, we compute the
$\textrm{EIG}_{\textrm{PDP}}$ jointly for the PDPs of all hyperparameters
listed in Table 5. Each run is replicated $10$ times. Otherwise, all settings
correspond to the settings in Section 5 and Appendix C. Note that the
benchmark provided via YAHPO gym (Pfisterer et al., 2021) is a surrogate
benchmark, which not only supports efficient execution of a benchmark, but
also gives access to a (reasonably cheap-to-evaluate) empirical performance
model as the ground truth objective, allowing us to compute the ground-truth
PDP (and thus any measure of error of the PDP) based on this empirical
performance model.
### D.2 Additional Results
Figures 7, 8, and 9 show a more granular representation of the results for
the HPO use case.
Figure 7: The figure compares the error of the PDP estimate after the full
budget is spent (in terms of $\textrm{d}_{\textrm{L}_{1}}$; shown in the first
row), the percentage of iterations needed to reach the desired level of
confidence (middle row), as well as the final regret (last row) for the
methods a-BOBAX, EI, and RS on the different datasets (columns) that we tuned
on. In most cases, a-BOBAX has a final error in the PDP comparable to RS, but
clearly better than with EI, and reaches the desired level of confidence
faster than the two other methods. In terms of optimization performance,
a-BOBAX and EI perform comparably, and both clearly outperform RS. Figures 8
and 9: The same comparison for the remaining datasets.
## Appendix E Code and Implementation
All code and data needed to reproduce the benchmark will be made publicly
available via a GitHub repository after completion of the review process.
During the review phase, all code is uploaded as supplementary material, or
can alternatively be downloaded from
https://figshare.com/s/d6ef1b8f4c9c1e844229. Please refer to the README.md
file for further information on how to use the code to reproduce the results.
Note that our implementation is based on the implementation provided by
Neiswanger et al. (2021)
(https://github.com/willieneis/bayesian-algorithm-execution).
Raw and processed results can be downloaded from
https://figshare.com/s/4573a2546f1d8a535c12.
# Simple and Effective Input Reformulations for Translation
Brian Yu, Hansen Lillemark, Kurt Keutzer
University of California, Berkeley
Berkeley Artificial Intelligence Research (BAIR)
<EMAIL_ADDRESS>
###### Abstract
Foundation language models learn from their finetuning input context in
different ways. In this paper, we reformulate inputs during finetuning for
challenging translation tasks, leveraging model strengths from pretraining in
novel ways to improve downstream performance. These reformulations are simple,
data-level modifications that require no additional collection of training
data and no modification of data at inference time. They can be applied either
to single language pair translation tasks or to massively multilingual
translation tasks. Experiments with these techniques demonstrate significant
performance improvements of up to 3.5 chrF++ on the Flores200 translation
benchmark. We hope our research accessibly improves finetuning data
efficiency, enabling more effective training to scalably improve
state-of-the-art performance. Our code is released here.
## 1 Introduction
Figure 1: Task reformulations. Baseline: a direct translation pair. POSE:
append a prefix of the target translation to the input translation. ParSE:
append a parallel English translation to the input translation. MiPS: append a
different parallel translation to both the input and output.
Foundation language models (FLMs) are powerful and task-agnostic models. They
are pretrained on language understanding objectives, enabling strong
performance on downstream language tasks Brown et al. (2020); Shoeybi et al.
(2020); Xue et al. (2021); Hoffmann et al. (2022); Chowdhery et al. (2022);
Zhang et al. (2022a); Chung et al. (2022); Workshop (2023); Touvron et al.
(2023). FLMs are then either prompted or finetuned for downstream use.
In this paper, we present three different data-efficient techniques for
improving translation performance, applied to the multilingual FLM mT5 during
finetuning Xue et al. (2021). In our first approach, we train mT5 on a
Classical Tibetan to English (tib2eng) translation task. mT5 struggles heavily
in the initial training steps. Thus, for the first 20% of finetuning, we apply
the "Partial Output Scaffolding in English" or POSE reformulation, shown in
Figure 1. Tib2eng translation examples consist of a Classical Tibetan source
and English target translation pair. POSE simply appends a prefix of the
target English output to the Classical Tibetan input. We see qualitative
improvements in the variance of the training curves. When evaluated on the
same test set with no reformulations, POSE significantly increases overall
translation performance compared to the direct finetuning baseline, up to
10.3% / 2.8 BLEU.
The POSE setup had many adjustable hyperparameters relating to task
difficulty, task curriculum, and substring selection for scaffolding. We find
that input reformulation setups should consist of 20% less informative
examples and 80% harder, more informative examples. More ablation details
can be found below.
Second, we approach the massively multilingual Flores200 translation benchmark
NLLB-Team et al. (2022). mT5 does not struggle in the initial steps of
finetuning on Flores200 in the same way it did on tib2eng. Even so, we begin
by replicating the tib2eng POSE setup on Flores200 by appending a partial
output of the target translation to the input translation. As expected, this
setup matched but did not improve upon the baseline performance.
The Flores200 benchmark consists of parallel examples of the same sentence in
different languages. In our second approach, we extend the tib2eng POSE
reformulation to create the "Parallel Scaffold in English" or ParSE
reformulation, shown in Figure 1. ParSE appends the corresponding full
parallel English translation (provided by Flores200) to the input. Following
the tib2eng setup, we use a data mix of 20% baseline (less informative) and
80% ParSE (more informative) examples. ParSE significantly improves
translation performance, up to 17.2% / 3.5 chrF++.
We postulate that POSE and ParSE improve translation performance in part
because they enable mT5 to attend to an in-distribution pretrain language with
strong monolingual performance. In our third approach, we explore the efficacy
of parallel scaffolding that does not require strong monolingual performance
using the "Mixed-language Parallel Scaffold" or MiPS reformulation, shown in
Figure 1. MiPS appends a different parallel translation to both the input and
output for a total of 4 distinct languages per input. Again, we use a data mix
of 20% baseline and 80% MiPS examples. MiPS also improves translation
performance, up to 9.1% / 1.6 chrF++. Scaffolding with the strongest
performing pretraining language (ParSE) outperforms scaffolding with a mix of
other languages (MiPS).
Finally, we perform analysis on the languages in the translation set. Using a
balanced dataset like Flores200 allows mT5 to partially overcome pretraining
dataset size biases. Naturally, translating into lower resource languages is
more difficult than translating into higher resource languages, but we find
that the ParSE and MiPS reformulations improve translation into all languages
across the board, rather than disproportionately improving performance on high
resource languages.
In summary, we propose input reformulations on translation tasks. These
reformulations require no additional data, have few hyperparameters, and are
simple to implement. When finetuning on a single language pair translation
task, if the target output language is in the model’s pretraining dataset
distribution, the POSE reformulation can be applied. When translating between
multiple language pairs, the ParSE reformulation can be applied to the
strongest performing pretraining language.
## 2 Related work
Our work can be viewed as a data efficiency technique for translation. Past
works in translation have explored data augmentation Sennrich et al. (2016);
Fadaee et al. (2017), sample re-weighting Shu et al. (2019); Ren et al.
(2019); Gu et al. (2018), and curriculum learning Kocmi and Bojar (2017);
Zhang et al. (2018); Platanios et al. (2019); Zhang et al. (2019); NLLB-Team
et al. (2022). These approaches vary in effectiveness, are not generalizable,
and introduce complexity into the training process. Curriculum learning
approaches in particular are typically complicated and unsuccessful, because
they are designed using intuition on how humans treat inputs, which may differ
from how models treat inputs. In contrast, our input reformulations are simple
and can be directly applied to any sequence-to-sequence task.
Previous work has explored prompting a frozen language model using manually
curated prompts Brown et al. (2020); Touvron et al. (2023); Petroni et al.
(2019). Results are typically sensitive to the exact prompt used. This
technique cannot be applied to larger corpora because it is limited by the
number of examples that can feasibly fit into a single input context. Other
works have explored finetuning with a fixed prompt without leveraging the
target output as a part of the input Radford et al. (2018, 2019); Dong et al.
(2019); Devlin et al. (2019); Lewis et al. (2019); Sun et al. (2019); Liu et
al. (2019); Clark et al. (2020); Yang et al. (2020); Raffel et al. (2020); Gao
et al. (2021); Schick and Schütze (2021); Logan IV et al. (2021); Xue et al.
(2021); He et al. (2021); Taori et al. (2023).
Following the success of fixed prompt techniques, other works proposed prompt
tuning setups Shin et al. (2020); Schick et al. (2020); Li and Liang (2021);
Hambardzumyan et al. (2021); Lester et al. (2021); Zhong et al. (2021b);
Wallace et al. (2021); Haviv et al. (2021); Jiang et al. (2020); Chen et al.
(2022); Qin and Eisner (2021); Liu et al. (2021); Han et al. (2021); Zhong et
al. (2021a); Lu et al. (2022); Ben-David et al. (2022); Wang et al. (2022a);
Zhou et al. (2023b). These prompt tuning setups were typically used in the
context of compute efficiency: training a smaller number of prompt-related
parameters to input into a larger frozen language model. These setups are an
orthogonal improvement to our proposed input reformulations.
Previous approaches also investigated dataset improvements for better
downstream task performance. These approaches gathered additional data for
model training to augment the model’s input context Chung et al. (2022); Wei
et al. (2023); Wang et al. (2023a); Iyer et al. (2023); Min et al. (2022); Wei
et al. (2022); Wang et al. (2022b); Gu et al. (2023); Wang et al. (2023b);
Zhang et al. (2022b); Press et al. (2023); Zhou et al. (2023a). They require
large, specific, and high quality datasets to be collected. On the other hand,
our input reformulations require no additional data.
Overall, our approach differs from previously explored approaches by avoiding
prompts and leveraging the target output as a part of the input reformulation.
Our input reformulations are a data-level change that can be easily applied to
any training setup.
## 3 Experiments on a difficult single language pair translation task
Figure 2: POSE reformulation applied to the tib2eng translation task. Changes
are highlighted in red.
### 3.1 Setup
We perform experiments on a Classical Tibetan to English (tib2eng) dataset.
Critically, Classical Tibetan is not found in mT5’s pretraining dataset, while
English is. As a result, the tib2eng dataset is challenging for mT5.
Additionally, mT5’s tokenizer was not trained on Tibetan. We use mT5’s
existing tokenizer, relying on the byte-level fallback capabilities of the
underlying SentencePiece tokenizer to encode unknown tokens Xue et al. (2021).
We use the BLEU metric Papineni et al. (2002) for evaluation.
The dataset consists of 450k train, 5k validation, and 5k test translation
pairs. The tokenized Tibetan inputs are mean 72 and median 51 tokens long; we
use a maximum sequence length of 256. We train for 10k steps with a batch size
of 512 translation pairs (about 35k tokens per batch, about 350M tokens
total), equivalent to 11 epochs. We use the AdamW Loshchilov and Hutter (2019)
optimizer with parameters $\beta_{1}=0.9$, $\beta_{2}=0.999$, and weight decay
$0$. We use a constant learning rate schedule with no warmup. The models
converge successfully under this data compute budget. We ablate over learning
rates in {1e-3, 2e-3, 3e-3} for 600M and 1B parameter models (the default
finetuning learning rate for mT5 is 1e-3 Xue et al. (2021)) and {3e-4, 5e-4,
1e-3} for 3B parameter models, where we found lower learning rates to be
empirically better.
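A minimal sketch of this optimizer setup in PyTorch; the stand-in model is hypothetical, since the actual mT5 checkpoint is loaded elsewhere:

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 8)  # stand-in for the mT5 model loaded elsewhere
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3,
                              betas=(0.9, 0.999), weight_decay=0.0)
# Constant learning rate with no warmup: simply attach no scheduler.
# The lr itself is ablated over {1e-3, 2e-3, 3e-3} (600M/1B models)
# or {3e-4, 5e-4, 1e-3} (3B models).
```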
We perform evaluation on the models and save checkpoints every 200 steps, for
a total of 50 evaluations, and we use the highest scoring checkpoint for all
results. Models were trained on GPU nodes of either 8 NVIDIA A5000 24GB GPUs
or 8 NVIDIA A6000 48GB GPUs. The typical train time varied from 8 hours for
the smallest models to 80 hours for the largest. We leverage the DeepSpeed
library (https://www.deepspeed.ai/) for training in half precision (bf16), as
well as for effective multi-GPU training.
In all the following results tables, we report the highest test set BLEU
scores and standard deviation (std) values over learning rates.
### 3.2 Motivation
We begin by training baseline mT5 models on the tib2eng dataset. The resulting
training curves are shown in Figure 3 as the blue curves. Clearly, mT5
struggles in the first 2000 steps, or 20% of the training steps. With the
intuition of reducing task difficulty, we design an easier task reformulation
to apply only in the first 20% of training. First, we select a prefix of the
target English translation, whose length is chosen uniformly at random over
the full length of the English translation. Then, we append this English
prefix to the Classical Tibetan translation input. Intuitively, we "scaffold"
the Classical Tibetan input with a partial English translation. We use a
partial prefix of the English translation so the model does not degenerate
into simply copying the English from the input. We name this reformulation
"Partial Output Scaffolding in English" or POSE. An example of POSE is found
in Figure 2, and a minimal code sketch is given below. The next 4 subsections
cover ablations over the finetuning reformulation setup. For direct results on
the POSE task, which ended up being the most successful, see Section 3.7.
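The sketch below assumes whitespace-level splitting and a plain space separator, neither of which is specified in the paper:

```python
import random

def pose(source: str, target: str) -> str:
    """POSE sketch: append a uniformly random-length prefix of the target
    English translation to the Classical Tibetan source input."""
    words = target.split()
    k = random.randint(0, len(words))  # prefix length ~ Uniform{0..len}
    prefix = " ".join(words[:k])
    return source + " " + prefix if prefix else source
```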
Table 1: Task difficulty experiment results on mT5 600M.

| Difficulty $\downarrow$ | % reform | BLEU | Std |
| --- | --- | --- | --- |
| Least difficult | 100% | 21.1 | 0.29 |
| | 50% | 23.9 | 0.05 |
| | 20% | 24.6 | 0.26 |
| Most difficult | 0% | 23.5 | 1.64 |
### 3.3 Modulating task difficulty
The POSE reformulation is easier than the baseline task. In order to modulate
task difficulty, we ablate over different amounts of training examples that
use this reformulation: 0% (baseline), 20%, 50%, and 100% (all reformulated).
Results are found in Table 1. The best condition involves reformulating the
first 20% of training examples, achieving 24.6 BLEU, 1.1 BLEU higher than the
baseline. We hypothesize that making the task too easy (e.g., 50% or 100%
reformulated) makes the task less informative, which hurts downstream
performance. All of the reformulated runs have low variance across the
learning rates, suggesting that models are better conditioned while training
on easier tasks.
### 3.4 Optimizing the curriculum
Table 2: Curriculum experiment results on mT5 600M.

| Setup | BLEU | Std |
| --- | --- | --- |
| Baseline | 23.5 | 1.64 |
| POSE | 24.6 | 0.26 |
| (Curriculum 1) | 17.4 | 0.85 |
| (Curriculum 2) | 24.9 | 0.74 |
| (Curriculum 3) | 24.7 | 2.50 |
We attempt to optimize the curriculum using human intuition in 3 setups.
(Curriculum 1): Instead of reformulating only the first 20% of training
examples (i.e. all examples in the first 2000 steps), we rigidly add 100% of
the output to the input at the beginning of training, and linearly scale down
to 0% added at the end of training. (Curriculum 2): Instead of reformulating
100% of training examples in the first 2000 steps, we reformulate 80% of the
inputs for the first 2000 steps, linearly scale down from 80% reformulated to
40% reformulated for the next 4000 steps, and reformulate no examples for the
last 4000 steps. (Curriculum 3): Instead of using uniformly random length
prefixes for the first 20% of training examples, we rigidly add 100% of the
output to the input and linearly scale down to 0% at the end of 2000 steps.
Results are found in Table 2. Even though these setups have merit from a
human-intuition standpoint, mT5 performs markedly worse on all of them in
performance, stability, or both. The best performing runs score higher than
POSE, but at the cost of stability.
### 3.5 Modulating scaffold substring
Table 3: Prefix+suffix experiment results on mT5 600M.

| Substring | % reform | BLEU | Std |
| --- | --- | --- | --- |
| Baseline | 0% | 23.5 | 1.64 |
| Prefix | 20% | 24.6 | 0.26 |
| Prefix+suffix | 12% | 24.8 | 0.55 |
| | 20% | 24.5 | 0.90 |
| | 40% | 24.0 | 0.12 |
Rather than using just a prefix of the target English output, we experiment
with setups that append both a portion of the target English prefix and a
portion of the target English suffix ("prefix+suffix" reformulation). The
total selected length remains the same for the prefix+suffix experiments. The
prefix+suffix input reformulation is still in natural language, but using
different pieces of the target output. Additionally, we perform a more fine-
grained sweep over how many initial training examples are reformulated.
Results are found in Table 3. The prefix+suffix reformulation performs better
and is less varied than the baseline, but performs worse than the prefix-only
reformulation. We hypothesize that the prefix-only reformulation performs the
best because it is the simplest. Over different amounts of initial training
examples reformulated, 12% reformulated had the best raw performance, closely
followed by 20%. We chose to stick with the 20% experiment due to the lower
variance.
### 3.6 Matching the pretraining task
We hypothesize that matching the pretraining task smooths performance
similarly to the POSE reformulation. We experiment with 4 masking setups:
(Mask 1) mask in
the first 20% of finetuning steps with p=0.1; (Mask 2) mask in the last 20% of
finetuning steps with p=0.1; (Mask 3) mask in the last 50% of finetuning steps
with p=0.25; and (Mask 4) span-mask in the last 50% of finetuning steps with
p=0.25. Results are found in Table 4. Masking setups have less variance
compared to the baseline or previous best setup, most likely because they are
closer to the pretraining task distribution. Setup (Mask 1) performs better
than the POSE reformulation with slightly higher variance. However, we retain
the POSE reformulation as the best because it is simpler than setup (Mask 1).
The other masking setups (Mask 2), (Mask 3), and (Mask 4) result in lower
performance, most likely because the task is less informative to the actual
downstream translation task.
Table 4: Matching pretraining experiment results on mT5 600M with masking.

| Setup | BLEU | Std |
| --- | --- | --- |
| Baseline | 23.5 | 1.64 |
| POSE | 24.6 | 0.26 |
| (Mask 1) | 24.9 | 0.35 |
| (Mask 2) | 23.6 | 0.20 |
| (Mask 3) | 23.0 | 0.15 |
| (Mask 4) | 23.4 | 0.04 |
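To make the masking setups concrete, a minimal sketch; the mT5-style sentinel string and the exact span-sampling scheme are our assumptions:

```python
import random

def token_mask(tokens, p, sentinel="<extra_id_0>"):
    """Sketch of setups (Mask 1)-(Mask 3): replace each token by a
    sentinel with probability p."""
    return [sentinel if random.random() < p else t for t in tokens]

def span_mask(tokens, p, mean_span=3, sentinel="<extra_id_0>"):
    """Sketch of setup (Mask 4): replace contiguous spans with a single
    sentinel so that roughly a fraction p of tokens is masked."""
    out, i = [], 0
    while i < len(tokens):
        if random.random() < p / mean_span:        # start a span here
            out.append(sentinel)
            i += random.randint(1, 2 * mean_span - 1)
        else:
            out.append(tokens[i])
            i += 1
    return out
```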
### 3.7 Final results and comparison to state-of-the-art
Figure 3: Tib2eng translation task reformulation experiment results. These
results compare the mT5 baseline (blue), mT5 POSE (orange), and the NLLB
(green) experimental configurations. The solid lines and shaded areas are the
mean and variance over learning rates, respectively. Left: 600M. Center: 1B.
Right: 3B.
We select the best setup based on stability, simplicity, and performance. The
best reformulation was still the original POSE reformulation. We compare
performance of the baseline and POSE mT5 conditions with the state-of-the-art
translation model NLLB NLLB-Team et al. (2022). Because NLLB is a translation-
only model, our input reformulations cannot be applied to it. NLLB’s encoded
input lengths are mean 26 / median 19 tokens. For NLLB, we ablate over
learning rates in {3e-4, 5e-4, 1e-3}. For the NLLB tib2eng baseline, we use a
linear warmup of 1000 steps (10% of the total number of updates), with a
constant learning rate afterwards. The final results comparing the finetuning of mT5
baseline, mT5 POSE, and NLLB on the tib2eng task are shown in Table 5 and
Figure 3.
The POSE reformulation stabilizes training and improves performance, with the
largest mT5 3B model exceeding the performance of NLLB 600M. Additionally,
while the baseline runs have converged, the mT5 POSE and NLLB models could be
trained further for higher performance. NLLB has strong performance on this
finetuning task despite not being trained on Classical Tibetan. This is
because NLLB was trained on modern Tibetan, which is similar to Classical Tibetan, and
because NLLB is a translation-only model with a strong translation inductive
prior. Our finetuning paradigm begins to bridge the gap between FLMs such as
mT5, and task-specific translation-only models such as NLLB.
Table 5: Main results on the tib2eng translation task for mT5. Values shown are test set BLEU scores. The difference shown is the improvement gained by using the input finetuning reformulations. The NLLB column is the test set BLEU score for the corresponding sized NLLB model.

| Params | NLLB | Baseline | POSE | Diff |
| --- | --- | --- | --- | --- |
| 600M | 29.3 | 23.5 | 24.6 | +1.1 |
| 1B | 32.3 | 27.2 | 28.3 | +1.1 |
| 3B | 34.4 | 27.3 | 30.1 | +2.8 |
## 4 Experiments on a massively multilingual translation task
### 4.1 Setup
The Flores200 dataset consists of around 3,000 parallel sentences in 204
different languages, meaning each sentence is translated into all 204
languages with high fidelity NLLB-Team et al. (2022); Goyal et al. (2021);
Guzmán et al. (2019). This dataset is challenging for mT5 not only because of
the sheer number of languages, but also because mT5 was not pretrained on over
half of the languages present in the dataset. The Flores200 dataset is
intended for evaluation, with a separate, partially parallel train set;
however, the fully parallel nature of the Flores200 dataset enables
interesting reformulations for finetuning. We take translation pairs from the Flores200
dev set as our training set, and translation pairs from the devtest set as our
validation and test sets.
Our reformulated Flores200 dataset for training consists of 20M train, 5k
validation, and 10k test translation pairs. Following the tokenization setup
for the tib2eng task, mT5’s tokenizer yields inputs of mean 52 / median 46
tokens and we use a max sequence length of 256. We follow the NLLB team and
perform evaluation on the Flores200 task using the chrF++ metric Popović
(2015) with the xx-yy condition to present the final average score across
languages NLLB-Team et al. (2022). We ablate over the learning rates {1e-4,
2e-4, 3e-4}, where we found lower learning rates to be empirically better. We
train for 10k steps with a batch size of 2048 examples (approximately 105,000
tokens).
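A minimal sketch of the chrF++ computation using sacreBLEU, where `word_order=2` yields chrF++; whether this exactly reproduces the xx-yy averaging across language pairs is not guaranteed:

```python
from sacrebleu.metrics import CHRF

# chrF++ corresponds to chrF with word n-grams up to order 2 (Popović, 2015).
chrf_pp = CHRF(word_order=2)
hypotheses = ["The cat sat on the mat ."]
references = [["The cat sat on the mat ."]]  # one reference stream
print(chrf_pp.corpus_score(hypotheses, references))
```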
### 4.2 Designing task reformulations
Figure 4: Examples of the ParSE and MiPS input reformulations applied to the
Flores200 translation task. The changes to the original input are highlighted
in red. Figure 5: Flores200 translation task reformulation experiment results.
These results compare the mT5 baseline (blue), mT5 ParSE (orange), and mT5
MiPS (green) experimental configurations. The solid lines and shaded areas are
the mean and variance over learning rates, respectively. Left: 600M. Center:
1B. Right: 3B.
For the tib2eng task, we designed POSE to mitigate mT5’s struggles early in
finetuning. mT5 does not struggle in the same manner on Flores200. Even so, we
begin by replicating the tib2eng POSE setup on Flores200 by appending a
partial output of the target translation to the input translation. We
experiment on mT5 300M. The baseline model achieves 16.8 validation set chrF++
and the reformulated model achieves 16.7 validation set chrF++. As expected,
this setup matched but did not improve upon the baseline performance.
mT5 has strong English performance because it was pretrained on orders of
magnitude more English data than other languages. So, we look to leverage this
strong capability in an input reformulation. The Flores200 benchmark consists
of parallel examples of the same sentence in different languages. We extend
the tib2eng POSE reformulation to the "Parallel Scaffold in English" or ParSE
reformulation. ParSE appends a full parallel English translation to the input
translation. For the ParSE setup, we provide the intuition that English is
used as a pivot language between the two other languages.
We explore the efficacy of parallel scaffolding without using English using
the "Mixed-language Parallel Scaffold" or MiPS reformulation. MiPS appends a
different parallel translation to both the input and output for a total of 4
distinct language translations per input. For simplicity, we use any
combination of languages in Flores200, regardless of whether they are in or
out of mT5’s pretraining distribution. Examples of the ParSE and MiPS
reformulations are shown in Figures 1 and 4, and a sketch of both is given
below.
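As a sketch (the space separator and sentence-level concatenation are our assumptions, not the paper's specification):

```python
import random

def parse_reformulate(src, tgt, en):
    """ParSE: scaffold the input with the full parallel English sentence."""
    return src + " " + en, tgt

def mips_reformulate(src, tgt, par_src, par_tgt):
    """MiPS: append different parallel translations to input and output,
    yielding four distinct languages per example."""
    return src + " " + par_src, tgt + " " + par_tgt

def mix_batch(examples, reformulate, frac=0.8):
    """Keep a plain (src, tgt) pair with probability 1 - frac, otherwise
    apply the reformulation (the 20%/80% data mix described next)."""
    return [reformulate(*ex) if random.random() < frac else (ex[0], ex[1])
            for ex in examples]
```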
For both the ParSE and MiPS reformulations, we follow the tib2eng setup and a
data mix of 20% baseline (less informative) and 80% reformulated (more
informative) examples. We use a data mix rather than reformulating the last
80% of training examples to further simplify setup and expose the model to the
input reformulations early in training. The input reformulations use up to
twice the number of examples per input so we reduce the per-step batch size by
a factor of two from 2048 to 1024 in order to hold the data and compute
budgets constant across experiments.
### 4.3 Results
Table 6: Results on the Flores200 translation task for mT5. Values shown are test set chrF++ scores. The NLLB column is the task performance of a corresponding size NLLB model. For the NLLB score, we use the 200 xx-yy chrF++ scores listed here.

| Params | NLLB | Baseline | ParSE | MiPS |
| --- | --- | --- | --- | --- |
| 600M | 39.5 | 17.6 | 20.7 | 19.2 |
| 1B | 41.5 | 20.3 | 23.8 | 21.6 |
| 3B | 41.8 | 23.2 | 25.1 | 23.6 |
Our results are presented in Figure 5 and Table 6. We observe positive effects
on performance similar to the tib2eng results. For the ParSE reformulation,
the model learns slightly slower initially, but learns much more over the
course of training. For the MiPS reformulation, the model learns faster and
better than the baseline. Clearly, our input reformulation scheme improves
performance, beyond just relying on strong English performance. We hypothesize
that both tasks successfully improve performance, in part because they allow
for direct attention between the input context in different languages,
aligning representations across languages.
Interestingly, the ParSE reformulation performs the best, but also has the
highest variance over the learning rates. The need for lower learning rates
typically indicates poor conditioning, so the input task is likely more ill-
conditioned than the baseline. One possible explanation is that mT5 is
learning the languages in Flores200 that were not present in its training set.
### 4.4 Analysis on mT5’s pretraining dataset and Flores200
Figure 6: Pretraining dataset sizes and Flores200 finetuning performance. The
first row represents translation from a language in the pretraining set into
other languages, including those not in the pretraining set. The second row
represents translation from other languages into a language present in the
pretraining set. Each dot represents one language and the value in the graph
represents the corresponding chrF++ test set score for that language and
model. Points shown only cover languages present in the mT5 pretraining set.
The point corresponding to English is the rightmost point on all the graphs.
Dataset sizes are calculated using the number of examples of each language
present in the mC4 dataset. Dataset sizes range from 100k to 1B examples.
Flores200 contains 204 languages, while mT5 was only pretrained on 95 of them.
We perform additional analysis on how being pretrained on a language affects
the post-finetuning performance on Flores200, as well as how the pretraining
data size for a specific language affects performance, shown in Figure 6.
Translating from a language in the pretraining set into other languages is
more difficult than translating from other languages into a language in the
pretraining set. This is most likely because decoding into lower-resource
languages is more difficult than encoding them.
When translating from a language in the pretraining set into other languages,
pretraining data size is slightly correlated with better performance. However,
this correlation is small considering the large range of dataset sizes. The
ParSE and MiPS reformulations improve performance across the board, not
depending on pretraining data size. Using a balanced finetuning dataset like
Flores200 helps mitigate some of the language frequency related pretraining
biases of mT5.
The performance improvement using ParSE when translating from English into
other languages is much more pronounced. This can be seen visually in Figure 6
for the rightmost datapoint in each plot in the top row. The corresponding
numbers in Table 7 for 3B models show that the increase for from-English
translation is 6.3 chrF++. This makes intuitive sense since the model has seen significantly more
English in the input during finetuning.
We break down the performance of different model sizes and reformulation
setups in Table 7. Interestingly, the ParSE and MiPS reformulations improve
performance involving lower-resource languages, sometimes at a slight cost to
performance on higher resource languages. For example, the 3B baseline and
ParSE conditions perform about the same when translating from languages in the
pretrain dataset to other languages in the pretrain dataset. The ParSE
condition performs 1.3 chrF++ worse than the baseline when translating from
out-pretrain to in-pretrain languages. However, the ParSE condition performs
significantly better than the baseline condition on the in-out and out-out
language pairs, with chrF++ improvements of 5.3 and 3.6 respectively.
Explaining this requires further targeted experimental investigation.
## 5 Conclusion
We have explored how FLMs learn from their input contexts. We provide two
separate techniques that can be applied to any translation use case. For the
case of a single language pair translation task, we recommend POSE. For the
case of a multi-language pair translation task, we recommend ParSE and MiPS.
For challenging translation tasks, our scaffolding reformulations produce
better conditioned training curves and significantly better performance. These
input reformulations are simple to understand and implement, robust over
hyperparameters, general to translation tasks, and effective. We hope our
technique is used to accessibly improve data efficiency on translation tasks.
## Limitations
Our proposed technique has only been applied to two challenging translation
tasks, where the input and output are both information rich and sequential in
nature. Mechanically, these ideas can be applied to other tasks such as
sequence classification. Intuitively, doing so would enable the model to
attend to multiple inputs in its input context in order to better denoise the
inputs, allowing the model to learn more effectively. Similar techniques
can be applied to other tasks, and even explored further in pretraining
Lample and Conneau (2019).
The baseline model used here was mT5, a relatively old FLM. As a result, our
baseline results are low compared to state-of-the-art NLLB results.
Unfortunately, there are no better FLMs in the parameter ranges from 600M to
3B. We believe there is still much to explore here with better FLMs, larger
parameter counts, and other creative reformulations. We believe that FLMs will
eventually outperform translation-only models like NLLB, due to the
flexibility given by the capability to understand inputs. The input
reformulations presented in this paper, which begin to bridge the performance
gap between NLLB and mT5, are one example of how FLMs are more flexible in
various input contexts.
## Ethics Statement
As with all work today in deep learning and large models, there are many
biases introduced during large data pretraining and finetuning. We did our
best to choose datasets and models which acknowledge and attempt to mitigate
these biases as much as they can, and encourage the development of even better
datasets and models in the future. Because the techniques introduced in this
paper are input reformulations that don’t introduce new data, we believe they
are at least not introducing many additional risks, and are generally safe to
introduce to other models and techniques. Additionally, one surprising outcome
of our work is that heavy language-oriented pretraining biases were mitigated
by finetuning on a language-balanced dataset. This is critical for equity with
regards to multilingual applications of language models.
We believe the priority of ethics in this line of research is to ensure that
the future integration of these technologies into society is safe, ethical,
and trustworthy. High quality training is critical. Understanding how
different inputs affect downstream performance is an important stepping stone.
We encourage further research in this direction to improve model understanding
and control.
Furthermore, we aim to increase accessibility of high quality, task-specific,
and compute friendly large language models by improving data efficiency.
## Acknowledgements
We would like to thank Prof. Kurt Keutzer for his wisdom and hardware.
## References
* Logan IV et al. (2021) Robert L. Logan IV, Ivana Balažević, Eric Wallace, Fabio Petroni, Sameer Singh, and Sebastian Riedel. 2021. Cutting down on prompts and parameters: Simple few-shot learning with language models.
* Ben-David et al. (2022) Eyal Ben-David, Nadav Oved, and Roi Reichart. 2022. Pada: Example-based prompt learning for on-the-fly adaptation to unseen domains.
* Brown et al. (2020) Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.
* Chen et al. (2022) Xiang Chen, Ningyu Zhang, Xin Xie, Shumin Deng, Yunzhi Yao, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2022. KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In _Proceedings of the ACM Web Conference 2022_. ACM.
* Chowdhery et al. (2022) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways.
* Chung et al. (2022) Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models.
* Clark et al. (2020) Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding.
* Dong et al. (2019) Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In _Advances in Neural Information Processing Systems_ , volume 32. Curran Associates, Inc.
* Fadaee et al. (2017) Marzieh Fadaee, Arianna Bisazza, and Christof Monz. 2017. Data augmentation for low-resource neural machine translation. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 567–573, Vancouver, Canada. Association for Computational Linguistics.
* Gao et al. (2021) Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners.
* Goyal et al. (2021) Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc’Aurelio Ranzato, Francisco Guzman, and Angela Fan. 2021. The flores-101 evaluation benchmark for low-resource and multilingual machine translation.
* Gu et al. (2018) Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho, and Victor O. K. Li. 2018. Meta-learning for low-resource neural machine translation.
* Gu et al. (2023) Yuxian Gu, Li Dong, Furu Wei, and Minlie Huang. 2023. Pre-training to learn in context.
* Guzmán et al. (2019) Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc’Aurelio Ranzato. 2019. The flores evaluation datasets for low-resource machine translation: Nepali-english and sinhala-english.
* Hambardzumyan et al. (2021) Karen Hambardzumyan, Hrant Khachatrian, and Jonathan May. 2021. Warp: Word-level adversarial reprogramming.
* Han et al. (2021) Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2021. Ptr: Prompt tuning with rules for text classification.
* Haviv et al. (2021) Adi Haviv, Jonathan Berant, and Amir Globerson. 2021. BERTese: Learning to speak to BERT. In _Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume_ , pages 3618–3623, Online. Association for Computational Linguistics.
* He et al. (2021) Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: Decoding-enhanced bert with disentangled attention.
* Hoffmann et al. (2022) Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022. Training compute-optimal large language models.
* Iyer et al. (2023) Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O’Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, and Ves Stoyanov. 2023. Opt-iml: Scaling language model instruction meta learning through the lens of generalization.
* Jiang et al. (2020) Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know?
* Kocmi and Bojar (2017) Tom Kocmi and Ondrej Bojar. 2017. Curriculum learning and minibatch bucketing in neural machine translation. In _RANLP 2017 - Recent Advances in Natural Language Processing Meet Deep Learning_. Incoma Ltd. Shoumen, Bulgaria.
* Lample and Conneau (2019) Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining.
* Lester et al. (2021) Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning.
* Lewis et al. (2019) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension.
* Li and Liang (2021) Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation.
* Liu et al. (2021) Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2021. What makes good in-context examples for GPT-3?
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach.
* Loshchilov and Hutter (2019) Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization.
* Lu et al. (2022) Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity.
* Min et al. (2022) Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2022. Metaicl: Learning to learn in context.
* NLLB-Team et al. (2022) NLLB-Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling human-centered machine translation.
* Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics_ , pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
* Petroni et al. (2019) Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2019. Language models as knowledge bases?
* Platanios et al. (2019) Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabas Poczos, and Tom M. Mitchell. 2019. Competence-based curriculum learning for neural machine translation.
* Popović (2015) Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In _Proceedings of the Tenth Workshop on Statistical Machine Translation_ , pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.
* Press et al. (2023) Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. 2023. Measuring and narrowing the compositionality gap in language models.
* Qin and Eisner (2021) Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying lms with mixtures of soft prompts.
* Radford et al. (2018) Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
* Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
* Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer.
* Ren et al. (2019) Mengye Ren, Wenyuan Zeng, Bin Yang, and Raquel Urtasun. 2019. Learning to reweight examples for robust deep learning.
* Schick et al. (2020) Timo Schick, Helmut Schmid, and Hinrich Schütze. 2020. Automatically identifying words that can serve as labels for few-shot text classification.
* Schick and Schütze (2021) Timo Schick and Hinrich Schütze. 2021. Exploiting cloze questions for few shot text classification and natural language inference.
* Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 86–96, Berlin, Germany. Association for Computational Linguistics.
* Shin et al. (2020) Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. Autoprompt: Eliciting knowledge from language models with automatically generated prompts.
* Shoeybi et al. (2020) Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2020. Megatron-lm: Training multi-billion parameter language models using model parallelism.
* Shu et al. (2019) Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. 2019. Meta-weight-net: Learning an explicit mapping for sample weighting.
* Sun et al. (2019) Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced representation through knowledge integration.
* Taori et al. (2023) Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca.
* Touvron et al. (2023) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models.
* Wallace et al. (2021) Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2021. Universal adversarial triggers for attacking and analyzing nlp.
* Wang et al. (2022a) Boshi Wang, Xiang Deng, and Huan Sun. 2022a. Iteratively prompt pre-trained language models for chain of thought. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_ , pages 2714–2730, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
* Wang et al. (2023a) Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023a. Self-consistency improves chain of thought reasoning in language models.
* Wang et al. (2023b) Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023b. Self-instruct: Aligning language models with self-generated instructions.
* Wang et al. (2022b) Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddhartha Mishra, Sujan Reddy, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Noah A. Smith, Hannaneh Hajishirzi, and Daniel Khashabi. 2022b. Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks.
* Wei et al. (2022) Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. Finetuned language models are zero-shot learners.
* Wei et al. (2023) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models.
* Workshop (2023) BigScience Workshop. 2023. Bloom: A 176b-parameter open-access multilingual language model.
* Xue et al. (2021) Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mt5: A massively multilingual pre-trained text-to-text transformer.
* Yang et al. (2020) Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2020. Xlnet: Generalized autoregressive pretraining for language understanding.
* Zhang et al. (2022a) Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022a. Opt: Open pre-trained transformer language models.
* Zhang et al. (2018) Xuan Zhang, Gaurav Kumar, Huda Khayrallah, Kenton Murray, Jeremy Gwinnup, Marianna J Martindale, Paul McNamee, Kevin Duh, and Marine Carpuat. 2018. An empirical exploration of curriculum learning for neural machine translation.
* Zhang et al. (2019) Xuan Zhang, Pamela Shapiro, Gaurav Kumar, Paul McNamee, Marine Carpuat, and Kevin Duh. 2019. Curriculum learning for domain adaptation in neural machine translation.
* Zhang et al. (2022b) Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. 2022b. Automatic chain of thought prompting in large language models.
* Zhong et al. (2021a) Ruiqi Zhong, Kristy Lee, Zheng Zhang, and Dan Klein. 2021a. Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections.
* Zhong et al. (2021b) Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021b. Factual probing is [MASK]: Learning vs. learning to recall. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 5017–5033, Online. Association for Computational Linguistics.
* Zhou et al. (2023a) Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, and Ed Chi. 2023a. Least-to-most prompting enables complex reasoning in large language models.
* Zhou et al. (2023b) Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2023b. Large language models are human-level prompt engineers.
## Appendix A Appendix
### A.1 Flores200 in- and out- pretrain results
Table 7: Breakdown of model and setup performance over different splits of the Flores200 dataset. "In" refers to a language that was found in the mT5 pretraining dataset and "out" refers to a language that was not. "To Eng" and "From Eng" are referred to as xx-eng and eng-xx, respectively, in some other papers. Notably, the proposed techniques improve "To Eng" performance by up to 4.2 chrF++ and "From Eng" performance by up to 9.4 chrF++ in the 600M case. We hypothesize this difference in improvement is due to the finetuning task including more English examples in the input, helping with downstream English translations as well as other language translations.

Params | Setup | In-in | Out-in | In-out | Out-out | To Eng | From Eng | Avg
---|---|---|---|---|---|---|---|---
600M | Baseline | 20.5 | 19.2 | 17.2 | 16.4 | 21.2 | 20.2 | 17.6
| ParSE | 24.5 | 21.1 | 21.2 | 18.7 | 25.4 | 29.6 | 20.7
| MiPS | 22.6 | 20.5 | 19.1 | 17.7 | 23.9 | 22.8 | 19.2
1B | Baseline | 28.3 | 23.6 | 17.1 | 15.2 | 33.8 | 24.6 | 20.3
| ParSE | 30.9 | 25.2 | 22.7 | 19.3 | 34.6 | 32.9 | 23.8
| MiPS | 27.8 | 23.6 | 19.9 | 17.7 | 31.3 | 25.8 | 21.6
3B | Baseline | 33.2 | 27.3 | 19.3 | 16.9 | 41.0 | 29.0 | 23.2
| ParSE | 33.0 | 26.0 | 24.6 | 20.5 | 37.9 | 35.3 | 25.1
| MiPS | 30.5 | 25.5 | 22.3 | 19.5 | 34.8 | 28.8 | 23.6
# Neural Active Learning with Performance Guarantees
Pranjal Awasthi (Google Research NY), Christoph Dann (Google Research NY),
Claudio Gentile (Google Research NY), Ayush Sekhari (Cornell University), and
Zhilei Wang (New York University)
###### Abstract
We investigate the problem of active learning in the streaming setting in non-
parametric regimes, where the labels are stochastically generated from a class
of functions on which we make no assumptions whatsoever. We rely on recently
proposed Neural Tangent Kernel (NTK) approximation tools to construct a
suitable neural embedding that determines the feature space the algorithm
operates on and the learned model computed atop. Since the shape of the label
requesting threshold is tightly related to the complexity of the function to
be learned, which is a-priori unknown, we also derive a version of the
algorithm which is agnostic to any prior knowledge. This algorithm relies on a
regret balancing scheme to solve the resulting online model selection problem,
and is computationally efficient. We prove joint guarantees on the cumulative
regret and number of requested labels which depend on the complexity of the
labeling function at hand. In the linear case, these guarantees recover known
minimax results of the generalization error as a function of the label
complexity in a standard statistical learning setting.
## 1 Introduction
Supervised learning is a fundamental paradigm in machine learning and is at
the core of modern breakthroughs in deep learning [28]. A machine learning
system trained via supervised learning requires access to labeled data
collected via recruiting human experts, crowdsourcing, or running expensive
experiments. Furthermore, as the complexity of current deep learning
architectures grows, their requirement for labeled data increases
significantly. The area of active learning aims to reduce this data
requirement by studying the design of algorithms that can learn and generalize
from a small carefully chosen subset of the training data [13, 39].
The two common formulations of active learning are pool based active learning,
and sequential (or streaming) active learning. In the pool based setting [29],
the learning algorithm has access to a large unlabeled set of data points, and
the algorithm can ask for a subset of the data to be labeled. In contrast, in
the sequential setting, data points arrive in a streaming manner, either
adversarially or drawn i.i.d. from a distribution, and the algorithm must
decide whether to query the label of a given point or not [14].
From a theoretical perspective, active learning has typically been studied
under models inspired by the probably approximately correct (PAC) model of
learning [40]. Here one assumes that there is a pre-specified class
$\mathcal{H}$ of functions such that the target function mapping examples to
their labels either lies in $\mathcal{H}$ or has a good approximation inside
the class. Given access to unlabeled samples generated i.i.d. from the
distribution, the goal is to query for a small number of labels and produce a
hypothesis of low error.
In the parametric setting, namely, when the class of functions $\mathcal{H}$
has finite VC-dimension (or finite disagreement coefficient) [21], the rate of
convergence of active learning, i.e., the rate of decay of the error as a
function of the number of label queries ($N$), is of the form
$\nu\,N^{-1/2}+e^{-N}$, where $\nu$ is the population loss of the best
function in class $\mathcal{H}$. This simple finding shows that active
learning behaves like passive learning when $\nu>0$, while very fast rates can
only be achieved under low noise ($\nu\approx 0$) conditions. This has been
worked out in, e.g., [19, 15, 5, 4, 6, 37].
While the parametric setting comes with methodological advantages, the above
shows that in order to unleash the true power of active learning, two
properties are desirable: (1) A better interplay between the input
distribution and the label noise and, (2) a departure from the parametric
setting leading us to consider wider classes of functions (so as to reduce the
approximation error $\nu$ to close to 0). To address the above, there has also
been considerable theoretical work in recent years on non-parametric active
learning [10, 32, 30]. However, these approaches suffer from the curse of
dimensionality and do not lead to computationally efficient algorithms. A
popular approach that has been explored empirically in recent works is to use
Deep Neural Networks (DNNs) to perform active learning (e.g., [36, 25, 38, 3,
43]). While these works empirically demonstrate the power of the DNN-based
approach to active learning, they do not come with provable guarantees. The
above discussion raises the following question: Is provable and
computationally efficient active learning possible in non-parametric settings?
We answer the above question in the affirmative by providing the first, to the
best of our knowledge, computationally efficient algorithm for active learning
based on Deep Neural Networks. Similar to non-parametric active learning, we
avoid fixing a function class a-priori. However, in order to achieve
computational efficiency, we instead propose to use over-parameterized DNNs,
where the amount of over-parameterization depends on the input data at hand.
We work in the sequential setting, and propose a simple active learning
algorithm that forms an uncertainty estimate for the current data point based
on the output of a DNN, followed by a gradient descent step to update the
network parameters if the data point is queried. We show that under standard
low-noise assumptions [31] our proposed algorithm achieves fast rates of
convergence.
In order to analyze our algorithm, we use tools from the theory of Neural
Tangent Kernel (NTK) approximation [23, 2, 18] that allows us to analyze the
dynamics of gradient descent by considering a linearization of the network
around random initialization. Since we study the non-parametric regime, the
convergence rates of our algorithm depend on a data-dependent complexity term
that is expected to be small in practical settings, but could be very large in
worst-case scenarios. Furthermore, the algorithm itself needs an estimate of
complexity term in order to form accurate uncertainty estimates. We show that
one can automatically adapt to the magnitude of the unknown complexity term by
designing a novel model selection algorithm inspired by recent works in model
selection in multi-armed bandit settings [35, 34]. Yet, several new insights
are needed to ensure that the model selection algorithm can simultaneously
achieve low generalization error without spending a significant amount of
budget on label queries.
## 2 Preliminaries and Notation
Let $\mathcal{X}$ denote the input space, $\mathcal{Y}$ the output space, and
$\mathcal{D}$ an unknown distribution over $\mathcal{X}\times\mathcal{Y}$. We
denote the corresponding random variables by $x$ and $y$. We also denote by
$\mathcal{D}_{\mathcal{X}}$ the marginal distribution of $\mathcal{D}$ over
$\mathcal{X}$, and by $\mathcal{D}_{\mathcal{Y}|x_{0}}$ the conditional
distribution of random variable $y$ given $x=x_{0}$. Moreover, given a
function $f$ (sometimes called a hypothesis or a model) mapping $\mathcal{X}$
to $\mathcal{Y}$, the conditional population loss (often referred to as
conditional risk) of $f$ is denoted by $L(f\,|\,x)$, and defined as
$L(f\,|\,x)=\mathbb{E}_{y\sim\mathcal{D}_{\mathcal{Y}|x}}[\ell(f(x),y)\,|\,x]$,
where $\ell\,\colon\,\mathcal{Y}\times\mathcal{Y}\to[0,1]$ is a loss function.
For ease of presentation, we restrict to a binary classification setting with
0-1 loss, whence $\mathcal{Y}=\\{-1,+1\\}$, and $\ell(a,y)=\mathbb{1}\\{a\neq
y\\}\in\\{0,1\\}$, where $\mathbb{1}\\{\cdot\\}$ denotes the indicator function
of the predicate at argument. When clear from the
surrounding context, we will omit subscripts like
“$y\sim\mathcal{D}_{\mathcal{Y}|x}$" from probabilities and expectations.
We investigate a non-parametric setting of active learning where the
conditional distribution of $y$ given $x$ is defined through an unknown
function $h\,:\,\mathcal{X}^{2}\rightarrow[0,1]$ such that
$\mathbb{P}(y=1\,|\,x)=h((x,0))\qquad\mathbb{P}(y=-1\,|\,x)=h((0,x))~{},$ (1)
where $0\in\mathcal{X}$, $(x_{1},x_{2})$ denotes the concatenation (or
pairing) of the two instances $x_{1}$ and $x_{2}$ (so that $(x,0)$ and $(0,x)$
are in $\mathcal{X}^{2}$) and, for all $x\in\mathcal{X}$ we have
$h((x,0))+h((0,x))=1$. We make no explicit assumptions on $h$, other than its
well-behavedness w.r.t. the data $\\{x_{t}\\}_{t=1}^{T}$ at hand through the
formalism of Neural Tangent Kernels (NTK) – see below. As a simple example, in
the linear case, $\mathcal{X}$ is the $d$-dimensional unit ball,
$h(\cdot,\cdot)$ is parametrized by an unknown unit vector
$\theta\in\mathbb{R}^{d}$, and
$h((x_{1},x_{2}))=\frac{1+\langle(\theta,-\theta),(x_{1},x_{2})\rangle}{2}~{},$
so that $h((x,0))=\frac{1+\langle\theta,x\rangle}{2}$ and
$h((0,x))=\frac{1-\langle\theta,x\rangle}{2},$ where
$\langle\cdot,\cdot\rangle$ is the usual dot product in $\mathbb{R}^{d}$.
We consider a streaming setting of active learning where, at each round
$t\in[T]=\\{1,\ldots,T\\}$, a pair
$(x_{t},y_{t})\in\mathcal{X}\times\mathcal{Y}$ is drawn i.i.d. from
$\mathcal{D}$. The learning algorithm receives as input only $x_{t}$, and is
compelled to both issue a prediction $a_{t}$ for $y_{t}$ and, at the same
time, decide on-the-fly whether or not to observe $y_{t}$. These decisions can
only be based on past observations. Let $\mathbb{E}_{t}$ denote the
conditional expectation
$\mathbb{E}[\cdot\,|\,(x_{1},y_{1}),\ldots,(x_{t-1},y_{t-1}),x_{t}]$, and we
introduce the shorthand
$x_{t,a}=\begin{cases}(x_{t},0)&{\mbox{if $a=1$}}\\\ (0,x_{t})&{\mbox{if
$a=-1$}}~{}.\end{cases}$
Notice that with this notation
$\mathbb{E}[\ell(a,y_{t})\,|\,x_{t}]=1-h(x_{t,a})$, for all $a\in\mathcal{Y}$.
We quantify the accuracy of the learner’s predictions through its (pseudo)
_regret_ , defined as
$R_{T}~{}=~{}\sum_{t=1}^{T}\Bigl{(}\mathbb{E}_{t}[\ell(a_{t},y_{t})\,|\,x_{t}]-\mathbb{E}[\ell(a^{*}_{t},y_{t})\,|\,x_{t}]\Bigl{)}~{}=~{}\sum_{t=1}^{T}\left(h(x_{t,a^{*}_{t}})-h(x_{t,a_{t}})\right)~{},$
where $a_{t}^{*}$ is the Bayesian-optimal classifier on instance $x_{t}$, that
is, $a_{t}^{*}=\arg\max_{a\in\mathcal{Y}}h(x_{t,a})$. Additionally, we are
interested in bounding the number of labels $N_{T}$ the algorithm decides to
request. Our goal is to simultaneously bound $R_{T}$ and $N_{T}$ with high
probability over the generation of the sample
$\\{(x_{t},y_{t})\\}_{t=1,\ldots,T}$ .
Throughout this work, we consider the following common low-noise condition on
the marginal distribution $\mathcal{D}_{\mathcal{X}}$ (Mammen-Tsybakov low
noise condition [31]): There exist absolute constants $c>0$, and $\alpha\geq
0$ such that for all $\epsilon\in(0,1/2)$ we have
$\mathbb{P}\bigl{(}|h((x,0))-\frac{1}{2}|<\epsilon\bigr{)}\leq
c\,\epsilon^{\alpha}.$ In particular, $\alpha=\infty$ gives the so-called hard
margin condition
$\mathbb{P}\bigl{(}|h((x,0))-\frac{1}{2}|<\epsilon\bigr{)}=0$, while, at the
opposite extreme, exponent $\alpha=0$ (and $c=1$) results in no assumptions
whatsoever on $\mathcal{D}_{\mathcal{X}}$. For simplicity, we shall assume
throughout that the above low-noise condition holds for111 A more general
formulation requires the above to hold only for $\epsilon\leq\epsilon_{0}$,
where $\epsilon_{0}\in(0,1/2)$ is a third parameter. We shall omit this extra
parameter from our presentation. $c=1$.
Our techniques are inspired by the recent work [44] from which we also borrow
some notation. We are learning the class of functions $\\{h\\}$ by means of
fully connected neural networks
$f(x,{\theta})=\sqrt{m}W_{n}\sigma(...\sigma(W_{1}x))~{},$
where $\sigma$ is a ReLU activation function $\sigma(x)=\max\\{0,x\\}$, $m$ is
the width of the network and $n\geq 2$ is its depth. In the above,
$\theta\in\mathbb{R}^{p}$ collectively denotes the set of weights
$\\{W_{1},W_{2},\ldots,W_{n}\\}$ of the network, where $p=m+2md+m^{2}(n-2)$ is
their number, and the input $x$ at training time should be thought of as some
$x_{t,a}\in\mathcal{X}^{2}$.
With any depth-$n$ network and data points
$\\{x_{t,a}\\}_{t=1,\ldots,T,\,a=\pm 1}$ we associate a depth-$n$ NTK matrix
as follows [23]. First, rename $\\{x_{t,a}\\}_{t=1,\ldots,T,\,a=\pm 1}$ as
$\\{x^{(i)}\\}_{i=1,\ldots,2T}$. Then define matrices
${\widetilde{H}^{(1)}}=\left[H^{(1)}_{i,j}\right]_{i,j=1}^{2T\times
2T}\qquad\Sigma^{(1)}=\left[\Sigma^{(1)}_{i,j}\right]_{i,j=1}^{2T\times
2T}\qquad{\mbox{with}}\qquad H^{(1)}_{i,j}=\Sigma^{(1)}_{i,j}=\langle
x^{(i)},x^{(j)}\rangle~{},$
and then, for any $k\leq n$ and $i,j=1,\ldots,2T$, introduce the bivariate
covariance matrix
$A^{(k)}_{i,j}=\begin{bmatrix}\Sigma^{(k)}_{i,i}&\Sigma^{(k)}_{i,j}\\\
\Sigma^{(k)}_{i,j}&\Sigma^{(k)}_{j,j}\end{bmatrix}$ by which we recursively
define $\Sigma^{(k+1)}_{i,j}=2\mathbb{E}_{(u,v)\sim
N(0,A^{(k)}_{i,j})}[\sigma(u)\sigma(v)]$ and
${\widetilde{H}}^{(k+1)}_{i,j}=2{\widetilde{H}}^{(k)}_{i,j}\mathbb{E}_{(u,v)\sim
N(0,A^{(k)}_{i,j})}[\mathbb{1}\\{u\geq 0\\}\,\mathbb{1}\\{v\geq
0\\}]+\Sigma^{(k+1)}_{i,j}~{}.$ The $2T\times 2T$-dimensional matrix
$H=\frac{1}{2}({\widetilde{H}}^{(n)}+\Sigma^{(n)})$ is called the Neural
Tangent Kernel (NTK) matrix of depth $n$ (and infinite width) over the set of
points $\\{x_{t,a}\\}_{t=1,\ldots,T,\,a=\pm 1}$. The reader is referred to
[23] for more details on NTK.
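Both Gaussian expectations in the recursion above admit standard closed forms (the degree-1 arc-cosine kernel and the Gaussian orthant probability), so the NTK matrix can be computed exactly. The following is a minimal sketch of ours, not code from the paper; it builds the depth-$n$ NTK over the rows of a matrix `X`, which in our setting would stack the $2T$ augmented points $x_{t,a}$ (assumed unit norm).

```python
import numpy as np

def ntk_matrix(X, n):
    """Depth-n (infinite-width) NTK matrix over the rows of X (unit norm).

    Closed forms used, with lam the correlation under A^{(k)}_{i,j},
    theta = arccos(lam), and s = sqrt(Sigma_ii * Sigma_jj):
      2*E[relu(u)*relu(v)]  = (s/pi) * (sqrt(1 - lam^2) + lam*(pi - theta))
      2*E[1{u>=0}*1{v>=0}]  = (pi - theta) / pi
    """
    Sigma = X @ X.T            # Sigma^{(1)} = H~^{(1)}
    H = Sigma.copy()
    for _ in range(n - 1):
        s = np.sqrt(np.outer(np.diag(Sigma), np.diag(Sigma)))
        lam = np.clip(Sigma / s, -1.0, 1.0)
        theta = np.arccos(lam)
        Sigma = (s / np.pi) * (np.sqrt(1.0 - lam**2) + lam * (np.pi - theta))
        H = H * (np.pi - theta) / np.pi + Sigma
    return 0.5 * (H + Sigma)   # H = (H~^{(n)} + Sigma^{(n)}) / 2
```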
In order to avoid heavy notation, we assume $||x_{t}||=1$ for all $t$. Matrix
$H$ is positive semi-definite by construction but, as is customary in the NTK
literature (e.g., [2, 9, 17]), we assume it is actually positive definite
(hence invertible) with smallest eigenvalue $\lambda_{0}>0$. This is a mild
assumption that can be shown to hold if no two vectors $x_{t}$ are aligned to
each other.
We measure the complexity of the function $h$ at hand in a way similar to
[44]. Using the same rearrangement of $\\{x_{t,a}\\}_{t=1,\ldots,T,\,a=\pm 1}$
into $\\{x^{(i)}\\}_{i=1,\ldots,2T}$ as above, let $\mathbf{h}$ be the
$2T$-dimensional (column) vector whose $i$-th component is $h(x^{(i)})$. Then,
we define the complexity $S_{T,n}(h)$ of $h$ over
$\\{x_{t,a}\\}_{t=1,\ldots,T,\,a=\pm 1}$ w.r.t. an NTK of depth $n$ as
$S_{T,n}(h)=\sqrt{\mathbf{h}^{\top}H^{-1}\mathbf{h}}~{}.$ Notice that this
notion of (data-dependent) complexity is consistent with the theoretical
findings of [2], who showed that for a two-layer network the bound on the
generalization performance is dominated by
$\mathbf{y}^{\top}H^{-1}\mathbf{y}$, where $\mathbf{y}$ is the vector of
labels. Hence if $\mathbf{y}$ is aligned with the top eigenvectors of $H$ the
learning problem becomes easier. In our case, vector $\mathbf{h}$ plays the
role of vector $\mathbf{y}$. Also observe that $S^{2}_{T,n}(h)$ can in general
be as big as linear in $T$ (in which case learning becomes hopeless with our
machinery). In the special case where $h$ belongs to the RKHS induced by the
NTK, one can upper bound $S_{T,n}(h)$ by the norm of $h$ in the RKHS. The
complexity term $S_{T,n}(h)$ is typically unknown to the learning algorithm,
and it plays a central role in both regret and label complexity guarantees.
Hence the algorithm needs to learn this value as well during its online
functioning. Apparently, this aspect of the problem has been completely
overlooked by [44] (as well as by earlier references on contextual bandits in
RKHS, like [12]), where a (tight) upper bound on $S_{T,n}(h)$ is assumed to be
available in advance. We will cast the above as a model selection problem in
active learning, where we adapt and largely generalize to active learning the
regret balancing technique from [35, 34]. In what follows, we use the short-
hand $g(x;\theta)=\nabla_{\theta}f(x,\theta)~{}$ and, for a vector
$g\in\mathbb{R}^{p}$ and matrix $Z\in\mathbb{R}^{p\times p}$, we often write
$\sqrt{g^{\top}Zg}$ as $||g||_{Z}$, so that
$S_{T,n}(h)=||\mathbf{h}||_{H^{-1}}$.
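Given the NTK matrix, $S_{T,n}(h)$ is a single quadratic form. A short sketch of ours, using a linear solve rather than an explicit inverse for numerical stability:

```python
import numpy as np

def complexity(h_vec, H):
    """S_{T,n}(h) = sqrt(h^T H^{-1} h), where h_vec holds the 2T values
    h(x^{(i)}) and H is the (assumed positive definite) NTK matrix."""
    return float(np.sqrt(h_vec @ np.linalg.solve(H, h_vec)))
```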
### 2.1 Related work
The main effort in theoretical works in active learning is to obtain rates of
convergence of the population loss of the hypothesis returned by the algorithm
as a function of the number $N$ of requested labels. We emphasize that most of
these works, that heavily rely on approximation theory, are not readily
comparable to ours, since our goal here is not to approximate $h$ through a
DNN on the entire input domain, but only on the data at hand.
As we recalled in the introduction, in the parametric setting the convergence
rates are of the form $\nu\,N^{-1/2}+e^{-N}$, where $\nu$ is the population
loss of the best function in class $\mathcal{H}$. Hence, active learning rates
behave like the passive learning rate $N^{-1/2}$ when $\nu>0$, while fast
rates can only be achieved under very low noise ($\nu\approx 0$) conditions.
In this respect, relevant references include [20, 26] where, e.g., in the
realizable case (i.e., when the Bayes optimal classifier lies in
$\mathcal{H}$), minimax active learning rates of the form
$N^{-\frac{\alpha+1}{2}}$ are shown to hold for adaptive algorithms that do
not know beforehand the noise exponent $\alpha$. In non-parametric settings, a
comprehensive set of results has been obtained by [30], which builds on and
significantly improves over earlier results from [32]. Both papers work under
smoothness (Holder continuity/smoothness) assumptions. In addition, [32]
requires $\mathcal{D}_{\mathcal{X}}$ to be (quasi-)uniform on
$\mathcal{X}=[0,1]^{d}$. In [30] the minimax active learning rate
$N^{-\frac{\beta(\alpha+1)}{2\beta+d}}$ is shown to hold for $\beta$-Holder
classes, where exponent $\beta$ plays the role of the complexity of the class
of functions to learn, and $d$ is the input dimension. This algorithm is
adaptive to the complexity parameter $\beta$, and is therefore performing a
kind of model selection. Notice that minimax rates in the parametric regime
are recovered by setting $\beta\rightarrow\infty$. Of a somewhat similar
flavor is an earlier result by [26], where a convergence rate of the form
$N^{-\frac{\alpha+1}{2+\kappa\alpha}}$ is shown, $\kappa$ being the metric
entropy of the class (again, a notion of complexity). A refinement of the
results in [30] has recently been obtained by [33] where, following [11], a
more refined notion of smoothness for the Bayes classifier is adopted which,
however, also implies more restrictive assumptions on the marginal
distribution $\mathcal{D}_{\mathcal{X}}$.
Model selection of the scale of a Nearest-Neighbor-based active learning
algorithm is also performed in [27], whose main goal is to achieve data-
dependent rates based on the noisy-margin properties of the random sample at
hand, rather than those of the marginal distribution. Their active learning
rates are not directly comparable to ours and, unlike our paper, the authors
work in a pool-based scenario, where all unlabeled points are available
beforehand. Finally, an interesting investigation in active learning for over-
parametrized and interpolating regimes is contained in [24]. The paper
collects a number of interesting insights in active learning for 2-layer
Neural Networks and Kernel methods, but it restricts to either uniform
distributions on the input space or cases of well-clustered data points, with
no specific regret and query complexity guarantees, apart from very special
(though insightful) cases.
## 3 Basic Algorithm
Our first algorithm (Algorithm 1) uses randomly initialized, but otherwise
frozen, network weights (a more refined algorithm where the network weights
are updated incrementally is described and analyzed in the appendix).
Algorithm 1 is an adaptation to active learning of the neural contextual
bandit algorithm of [44], and shares similarities with an earlier selective
sampling algorithm analyzed in [16] for the linear case. The algorithm
generates network weights $\theta_{0}$ by independently sampling from Gaussian
distributions of appropriate variance, and then uses $\theta_{0}$ to define a
gradient feature mapping $\phi(\cdot)$, which is kept frozen from beginning
to end. The algorithm also takes as input the complexity parameter
$S=S_{T,n}(h)$ of the underlying function $h$ satisfying (1). We shall later
on remove the assumption of the prior knowledge of $S_{T,n}(h)$. In
particular, removing the latter turns out to be quite challenging from a
technical standpoint, and gives rise to a complex online model selection
algorithm for active learning in non-parametric regimes.
Input: Confidence level $\delta$, complexity parameter $S$, network width $m$,
and depth $n$ .
Initialization:
* •
Generate each entry of $W_{k}$ independently from $\mathcal{N}(0,2/m)$, for
$k\in[n-1]$, and each entry of $W_{n}$ independently from
$\mathcal{N}(0,1/m)$;
* •
Define $\phi(x)=g(x;\theta_{0})/\sqrt{m}$, where $\theta_{0}=\langle
W_{1},\ldots,W_{n}\rangle\in\mathbb{R}^{p}$ is the (frozen) weight vector of
the neural network so generated;
* •
Set $Z_{0}=I\in\mathbb{R}^{p\times p}$, $b_{0}=0\in\mathbb{R}^{p}$ .
for _$t=1,2,\ldots,T$_
Observe instance $x_{t}\in\mathcal{X}$ and build $x_{t,a}\in\mathcal{X}^{2}$,
for $a\in\mathcal{Y}$
Set
$\mathcal{C}_{t-1}=\\{\theta:\|\theta-\theta_{t-1}\|_{Z_{t-1}}\leq\frac{\gamma_{t-1}}{\sqrt{m}}\\}$,
with $\gamma_{t-1}=\sqrt{\log\det Z_{t-1}+2\log(1/\delta)}+S$
Set
$U_{t,a}=\sqrt{m}\max_{\theta\in\mathcal{C}_{t-1}}\langle\phi(x_{t,a}),\theta-\theta_{0}\rangle=\sqrt{m}\langle\phi(x_{t,a}),\theta_{t-1}-\theta_{0}\rangle+\gamma_{t-1}\|\phi(x_{t,a})\|_{Z_{t-1}^{-1}}$
Predict $a_{t}=\arg\max_{a\in\mathcal{Y}}U_{t,a}$
Set $I_{t}=\mathbb{1}\\{|U_{t,a_{t}}-1/2|\leq B_{t}\\}\in\\{0,1\\}$
with $B_{t}=B_{t}(S)=2\gamma_{t-1}\|\phi(x_{t,a_{t}})\|_{Z_{t-1}^{-1}}$
if _$I_{t}=1$_
Query $y_{t}\in\mathcal{Y}$, and set loss $\ell_{t}=\ell(a_{t},y_{t})$
Update $\displaystyle Z_{t}$
$\displaystyle=Z_{t-1}+\phi(x_{t,a_{t}})\phi(x_{t,a_{t}})^{\top}$
$\displaystyle b_{t}$ $\displaystyle=b_{t-1}+(1-\ell_{t})\phi(x_{t,a_{t}})$
$\displaystyle\theta_{t}$ $\displaystyle=Z_{t}^{-1}b_{t}/\sqrt{m}+\theta_{0}$
else
$Z_{t}=Z_{t-1}$, $b_{t}=b_{t-1}$, $\theta_{t}=\theta_{t-1}$,
$\gamma_{t}=\gamma_{t-1}$, $\mathcal{C}_{t}=\mathcal{C}_{t-1}$ .
Algorithm 1 Frozen NTK Selective Sampler.
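In code, one round of the pseudocode above might look as follows. This is a minimal sketch of ours, not the authors' implementation; `phi` stands in for the frozen gradient map $g(\cdot\,;\theta_{0})/\sqrt{m}$ and `query_label` for the labeling oracle, both placeholders.

```python
import numpy as np

class FrozenNTKSelectiveSampler:
    """One round of Algorithm 1 with frozen features phi of dimension p."""

    def __init__(self, phi, p, S, delta):
        self.phi, self.S, self.delta = phi, S, delta
        self.Z = np.eye(p)     # Z_0 = I
        self.b = np.zeros(p)   # b_0 = 0
        self.w = np.zeros(p)   # w = Z^{-1} b  ( = sqrt(m)*(theta_t - theta_0) )

    def round(self, x_pos, x_neg, query_label):
        # gamma_{t-1} = sqrt(log det Z_{t-1} + 2 log(1/delta)) + S
        _, logdet = np.linalg.slogdet(self.Z)
        gamma = np.sqrt(logdet + 2.0 * np.log(1.0 / self.delta)) + self.S
        Zinv = np.linalg.inv(self.Z)
        feats = {+1: self.phi(x_pos), -1: self.phi(x_neg)}
        width = {a: gamma * np.sqrt(f @ Zinv @ f) for a, f in feats.items()}
        U = {a: feats[a] @ self.w + width[a] for a in feats}  # UCB index
        a_t = max(U, key=U.get)                               # prediction
        B_t = 2.0 * width[a_t]                                # query threshold
        if abs(U[a_t] - 0.5) <= B_t:                          # uncertain round
            y_t = query_label()                               # I_t = 1: query
            f = feats[a_t]
            self.Z += np.outer(f, f)                          # rank-one update
            self.b += (1.0 if a_t == y_t else 0.0) * f        # (1 - loss_t)*phi
            self.w = np.linalg.solve(self.Z, self.b)
        return a_t
```

Recomputing `Zinv` from scratch each round is only for clarity; an implementation would maintain $Z_{t}^{-1}$ via Sherman-Morrison rank-one updates.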
At each round $t$, Algorithm 1 receives an instance $x_{t}\in\mathcal{X}$, and
constructs the two augmented vectors $x_{t,1}=(x_{t},0)$ and
$x_{t,-1}=(0,x_{t})$ (intuitively corresponding to the two “actions" of a
contextual bandit algorithm). The algorithm predicts the label $y_{t}$
associated with $x_{t}$ by maximizing over $a\in\mathcal{Y}$ an upper
confidence index $U_{t,a}$ stemming from the linear approximation
$h(x_{t,a})\approx\sqrt{m}\langle\phi(x_{t,a}),\theta_{t-1}-\theta_{0}\rangle$
subject to ellipsoidal constraints $\mathcal{C}_{t-1}$, as in standard
contextual bandit algorithms operating with the frozen mapping $\phi(\cdot)$.
In addition, in order to decide whether or not to query label $y_{t}$, the
algorithm estimates its own uncertainty by checking to what extent
$U_{t,a_{t}}$ is close to $1/2$. This uncertainty level is governed by the
time-varying threshold $B_{t}$, which is expected to shrink to 0 as time
progresses. Notice that $B_{t}$ is a function of $\gamma_{t-1}$, which in turn
includes in its definition the complexity parameter $S$. Finally, if $y_{t}$
is revealed, the algorithm updates its least-squares estimator $\theta_{t}$ by
a rank-one adjustment of matrix $Z_{t}$ and an additive update to the bias
vector $b_{t}$. No update takes place if the label is not queried. The
following is our initial building block. (All proofs are in the appendix.)
###### Theorem 1.
Let Algorithm 1 be run with parameters $\delta$, $S$, $m$, and $n$ on an
i.i.d. sample $(x_{1},y_{1}),\ldots,(x_{T},y_{T})\sim\mathcal{D}$, where the
marginal distribution $\mathcal{D}_{\mathcal{X}}$ fulfills the low-noise
condition with exponent $\alpha\geq 0$ w.r.t. a function $h$ that satisfies
(1) and such that $\sqrt{2}S_{T,n}(h)\leq S$. Then with probability at least
$1-\delta$ the cumulative regret $R_{T}$ and the total number of queries
$N_{T}$ are simultaneously upper bounded as follows:
$\displaystyle R_{T}$
$\displaystyle=O\biggl{(}L_{H}^{\frac{\alpha+1}{\alpha+2}}\Bigl{(}L_{H}+\log(\log
T/\delta)+S^{2}\Bigl{)}^{\frac{\alpha+1}{\alpha+2}}T^{\frac{1}{\alpha+2}}\biggr{)}$
$\displaystyle N_{T}$
$\displaystyle=O\biggl{(}L_{H}^{\frac{\alpha}{\alpha+2}}\Bigl{(}L_{H}+\log(\log
T/\delta)+S^{2}\Bigl{)}^{\frac{\alpha}{\alpha+2}}T^{\frac{2}{\alpha+2}}\biggr{)}~{},$
where $L_{H}=\log\det(I+H)$, $H$ being the NTK matrix of depth $n$ over the
set of points $\\{x_{t,a}\\}_{t=1,\ldots,T,\,a=\pm 1}$.
The above bounds depend, beyond time horizon $T$, on three relevant
quantities: the noise level $\alpha$, the complexity parameters $S$ and the
log-determinant quantity $L_{H}$. Notice that, whereas $S$ essentially
quantifies the complexity of the function $h$ to be learned, $L_{H}$ measures
instead the complexity of the NTK itself, hence somehow quantifying the
complexity of the function space we rely upon in learning $h$. It is indeed
instructive to see how the bounds in the above theorem vary as a function of
these quantities. First, as expected, when $\alpha=0$ we recover the usual
regret guarantee $R_{T}=O(\sqrt{T})$, more precisely a bound of the form
$R_{T}=O((L_{H}+\sqrt{L_{H}}S)\sqrt{T})$, with the trivial label complexity
$N_{T}=O(T)$. At the other extreme, when $\alpha\rightarrow\infty$ we obtain
the guarantees $R_{T}=N_{T}=O(L_{H}(L_{H}+S^{2}))$. In either case, if $h$ is
“too complex" when projected onto the data, that is, if
$S^{2}_{T,n}(h)=\Omega(T)$, then all bounds become vacuous. (The same
happens, e.g., to the regret bounds in [44].) At the opposite end of the
spectrum, if $\\{h\\}$ is simple, like a class of linear functions with
bounded norm in a $d$-dimensional space, and the network depth $n$ is 2 then
$S_{T,n}(h)=O(1)$, while $L_{H}=O(d\log T$), and we recover the rates reported
in [16] for the linear case. The quantity $L_{H}$ is tightly related to the
decaying rate of the eigenvalues of the NTK matrix $H$, and is poly-
logarithmic in $T$ in several important cases [41]. One relevant example is
discussed in [42], which relies on the spectral characterization of NTK in [7,
8]: If $n=2$ and all points $x^{(i)}$ concentrate on a $d_{0}$-dimensional
nonlinear subspace of the RKHS spanned by the NTK, then $L_{H}=O(d_{0}\log
T)$.
It is also important to stress that, via a standard online-to-batch
conversion, the result in Theorem 1 can be turned to a compelling guarantee in
a traditional statistical learning setting, where the goal is to come up at
the end of the $T$ rounds with a hypothesis $f$ whose population loss
$L(f)=\mathbb{E}_{x\sim D_{\mathcal{X}}}[L(f\,|\,x)]$ exceeds the Bayes
optimal population loss $\mathbb{E}_{x_{t}\sim
D_{\mathcal{X}}}[h(x_{t,a^{*}_{t}})]=\mathbb{E}_{x_{t}\sim
D_{\mathcal{X}}}[\max\\{h(x_{t,1}),h(x_{t,-1})\\}]$ by a vanishing quantity.
Following [16], this online-to-batch algorithm will simply run Algorithm 1 by
sweeping over the sequence $\\{(x_{t},y_{t})\\}_{t=1,\ldots,T}$ only once, and
pick one function uniformly at random among the sequence of predictors
generated by Algorithm 1 during its online functioning, that is, among the
sequence $\\{U_{t}(x)\\}_{t=1,\ldots,T}$, where
$U_{t}(x)=\arg\max_{a\in\mathcal{Y}}\max_{\theta\in\mathcal{C}_{t-1}}\langle\phi(x_{\cdot,a}),\theta-\theta_{0}\rangle$,
with $x_{\cdot,1}=(x,0)$ and $x_{\cdot,-1}=(0,x)$. This randomized algorithm
enjoys the following high-probability excess risk guarantee (observe that
this is a data-dependent bound, in that the RHS is a random variable, since
both $L_{H}$ and $S$ may depend on $x_{1},\ldots,x_{T}$):
$\mathbb{E}_{t\sim{\textrm{unif}}(T)}[L(U_{t})]-\mathbb{E}_{x_{t}\sim
D_{\mathcal{X}}}[h(x_{t,a^{*}_{t}})]=O\Biggl{(}\Biggl{(}\frac{L_{H}\Bigl{(}L_{H}+\log({\log
T}/\delta)+S^{2}\Bigl{)}}{T}\Biggl{)}^{\frac{\alpha+1}{\alpha+2}}+\,\frac{\log\log(T/\delta)}{T}\Biggl{)}~{}.$
Combining with the guarantee on the number of labels $N_{T}$ from Theorem 1
(and disregarding log factors), this allows us to conclude that the above
excess risk can be bounded as a function of $N_{T}$ as
$\Bigl{(}\frac{L_{H}(L_{H}+S^{2})}{N_{T}}\Bigl{)}^{\frac{\alpha+1}{2}}~{},$
(2)
where $L_{H}(L_{H}+S^{2})$ plays the role of a (compound) complexity term
projected onto the data $x_{1},\ldots,x_{T}$ at hand. When restricting to VC-
classes, the convergence rate $N_{T}^{-\frac{\alpha+1}{2}}$ is indeed the best
rate (minimax rate) one can achieve under the Mammen-Tsybakov low-noise
condition with exponent $\alpha$ (see, e.g., [10, 20, 26, 16]).
Yet, since we are not restricting to the parametric case, both $L_{H}$ and,
more importantly, $S^{2}$ can be a function of $T$. In such cases, the
generalization bound in (2) can still be expressed as a function of $N_{T}$
alone, For instance, when $L_{H}$ is poly-logarithmic in $T$ and
$S^{2}=O(T^{\beta})$, for some $\beta\in[0,1)$, one can easily verify that (2)
takes the form $N_{T}^{-\frac{(1-\beta)(\alpha+1)}{2+\beta\alpha}}$ (again, up
to log factors).
In Section A.3 of the appendix, we extend all our results to the case where
the network weights are not frozen, but are updated on the fly according to a
(stochastic) gradient descent procedure. In this case, in Algorithm 1 the
gradient vector $\phi(x)=g(x;\theta_{0})/\sqrt{m}$ will be replaced by
$\phi_{t}(x)=g(x;\theta_{t-1})/\sqrt{m}$, where $\theta_{t}$ is not the
linear-least squares estimator
$\theta_{t}=Z_{t}^{-1}b_{t}/\sqrt{m}+\theta_{0}$, as in Algorithm 1, but the
result of the DNN training on the labeled data $\\{(x_{k},y_{k})\,:\,k\leq
t,\,I_{k}=1\\}$ gathered so far.
## 4 Model Selection
Our model selection algorithm is described in Algorithm 2. The algorithm
operates on a pool of base learners of Frozen NTK selective samplers like
those in Algorithm 1, each member in the pool being parametrized by a pair of
parameters $(S_{i},d_{i})$, where $S_{i}$ plays the role of the (unknown)
complexity parameter $S_{T,n}(h)$ (which was replaced by $S$ in Algorithm 1),
and $d_{i}$ plays the role of an (a-priori unknown) upper bound on the
relevant quantity $\sum_{t\in[T]\,:\,i_{t}=i}\frac{1}{2}\wedge
I_{t,i}B_{t,i}^{2}$ that is involved in the analysis (see Lemma 5 and Lemma 7
in Appendix A.1). This quantity will at the end be upper bounded by a term of
the form $L_{H}(L_{H}+\log(\log T/\delta)+S^{2}_{T,n}(h))$, whose components
$L_{H}$ and $S^{2}_{T,n}(h)$ are initially unknown to the algorithm.
Algorithm 2 maintains over time a set $\mathcal{M}_{t}$ of active base
learners, and a probability distribution ${\mbox{\boldmath$p$}}_{t}$ over
them. This distribution remains constant throughout a sequence of rounds
between one change to $\mathcal{M}_{t}$ and the next. We call such a sequence of
rounds an epoch. Upon observing $x_{t}$, Algorithm 2 selects which base
learner to rely upon in issuing its prediction $a_{t}$ and querying the label
$y_{t}$, by drawing base learner $i_{t}\in\mathcal{M}_{t}$ according to
${\mbox{\boldmath$p$}}_{t}$.
Then Algorithm 2 undergoes a series of carefully designed elimination tests
which are meant to rule out mis-specified base learners, that is, those whose
associated parameter $S_{i}$ is likely to be smaller than $S_{T,n}(h)$, while
retaining those such that $S_{i}\geq S_{T,n}(h)$. These tests will help keep
both the regret bound and the label complexity of Algorithm 2 under control.
Whenever, at the end of some round $t$, any such test triggers, that is, when
it happens that $|\mathcal{M}_{t+1}|<|\mathcal{M}_{t}|$ at the end of the
round, a new epoch begins, and the algorithm starts over with a fresh
distribution ${\mbox{\boldmath$p$}}_{t+1}\neq{\mbox{\boldmath$p$}}_{t}$.
The first test (“disagreement test") restricts to all active base learners
that would not have requested the label if asked. As our analysis for the base
selective sampler (see Lemma 8 in Appendix A.1) shows that a well-specified
base learner does not suffer (with high probability) any regret on non-queried
rounds, any disagreement among them reveals mis-specification, thus we
eliminate in pairwise comparison the base learner that holds the smaller
$S_{i}$ parameter. The second test (“observed regret test") considers the
regret behavior of each pair of base learners $i,j\in\mathcal{M}_{t}$ on the
rounds $k\leq t$ on which $i$ was selected $(i_{k}=i)$ and requested the label
$(I_{k,i}=1$), but $j$ would not have requested if asked ($I_{k,j}=0$), and
the predictions of the two happened to disagree on that round ($a_{k,i}\neq
a_{k,j}$). The goal here is to eliminate base learners whose cumulative regret
is likely to exceed the regret of the smallest well-specified learner, while
ensuring (with high probability) that any well-specified base learner $i$ is
not removed from the pool. In a similar fashion, the third test (“label
complexity test") is aimed at keeping under control the label complexity of
the base learners in the active pool $\mathcal{M}_{t}$. Finally, the last test
(“$d_{i}$ test") simply checks whether or not the candidate value $d_{i}$
associated with base learner $i$ remains a valid (and tight) upper bound on
$L_{H}(L_{H}+S^{2}_{T,n}(h))$.
Input: Confidence level $\delta$; probability parameter $\gamma\geq 0$; pool
of base learners $\mathcal{M}_{1}$, each identified with a pair
$(S_{i},d_{i})$; number of rounds $T$.
Set $L(t,\delta)=\log\frac{5.2\log(2t)^{1.4}}{\delta}$
for _$t=1,2,\ldots,T$_
Observe instance $x_{t}\in\mathcal{X}$ and build $x_{t,a}\in\mathcal{X}^{2}$,
for $a\in\mathcal{Y}$
for _$i\in\mathcal{M}_{t}$_
Set $I_{t,i}\in\\{0,1\\}$ as the indicator of whether base learner $i$ would
ask for label on $x_{t}$
Set $a_{t,i}\in\mathcal{Y}$ as the prediction of base learner $i$ on $x_{t}$
Let $B_{t,i}=B_{t,i}(S_{i})$ denote the query threshold of base learner $i$
(from Algorithm 1)
Select base learner
$i_{t}\sim{\mbox{\boldmath$p$}}_{t}=(p_{t,1},p_{t,2},\dots,p_{t,|\mathcal{M}_{t}|})$,
where
$p_{t,i}=\begin{cases}\frac{d_{i}^{-(\gamma+1)}}{\sum_{j\in\mathcal{M}_{t}}d_{j}^{-(\gamma+1)}},&\text{if
}i\in\mathcal{M}_{t}\\\ 0,&\text{otherwise}\end{cases}$
Predict $a_{t}=a_{t,i_{t}}$
if _$I_{t,i_{t}}=1$_
Query label $y_{t}\in\mathcal{Y}$ and send $(x_{t},y_{t})$ to base learner
$i_{t}$
$\mathcal{M}_{t+1}=\mathcal{M}_{t}$
Set $\mathcal{N}_{t}=\\{i\in\mathcal{M}_{t}\colon I_{t,i}=0\\}$ // (1)
Disagreement test
for _all pairs of base learners $i,j\in\mathcal{N}_{t}$ that disagree in
their prediction ($a_{t,i}\neq a_{t,j}$)_
Eliminate all learners with smaller $S$:
$\mathcal{M}_{t+1}=\\{m\in\mathcal{M}_{t+1}\colon
S_{m}>\min\\{S_{i},S_{j}\\}\\}$
for _all pairs of base learners $i,j\in\mathcal{M}_{t}$ _ // (2) Observed
regret test
Consider rounds where the chosen learner $i$ requested the label but $j$ did
not, and $i$ and $j$ disagree in their prediction:
$\displaystyle\mathcal{V}_{t,i,j}=\\{k\in[t]\colon
i_{k}=i,I_{k,i}=1,I_{k,j}=0,a_{k,i}\neq a_{k,j}\\}$ if _
$\displaystyle\sum_{k\in\mathcal{V}_{t,i,j}}\bigl{(}\mathbb{1}\\{a_{k,i}\neq
y_{k}\\}-\mathbb{1}\\{a_{k,j}\neq
y_{k}\\}\bigr{)}>\sum_{k\in\mathcal{V}_{t,i,j}}(1\wedge B_{k,i})+1.45\sqrt{|{\mathcal{V}}_{t,i,j}|\,L(|{\mathcal{V}}_{t,i,j}|,\delta)}$
_
Eliminate base learner $i$:
$\mathcal{M}_{t+1}=\mathcal{M}_{t+1}\setminus\\{i\\}$
for _$i\in\mathcal{M}_{t}$_ // (3) Label complexity test
Consider rounds where base learner $i$ was played:
$\displaystyle\mathcal{T}_{t,i}=\\{k\in[t]\colon i_{k}=i\\}$
if _
$\displaystyle\sum_{k\in\mathcal{T}_{t,i}}I_{k,i}>\inf_{\epsilon\in(0,1/2]}\biggl{(}3\epsilon^{\gamma}|\mathcal{T}_{t,i}|+\frac{1}{\epsilon^{2}}\sum_{k\in\mathcal{T}_{t,i}}I_{k,i}B_{k,i}^{2}\wedge\frac{1}{4}\biggr{)}+2L(|\mathcal{T}_{t,i}|,\delta/(M\log_{2}(12t)))$
_
Eliminate base learner $i$:
$\mathcal{M}_{t+1}=\mathcal{M}_{t+1}\setminus\\{i\\}$
for _$i\in\mathcal{M}_{t}$ _ // (4) $d_{i}$ test
if _ $\sum_{k\in\mathcal{T}_{t,i}}(\mbox{$\frac{1}{2}$}\wedge
I_{k,i}B_{k,i}^{2})>8d_{i}$ _
Eliminate base learner $i$:
$\mathcal{M}_{t+1}=\mathcal{M}_{t+1}\setminus\\{i\\}$
Algorithm 2 Frozen NTK Selective Sampler with Model Selection.
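One concrete piece of the pseudocode above is the inverse-polynomial selection distribution over the active pool; a small sketch of ours (function name hypothetical):

```python
import numpy as np

def selection_probs(d_active, gamma):
    """p_{t,i} proportional to d_i^{-(gamma+1)} over the active pool, so
    base learners with smaller candidate complexity d_i are played more."""
    w = np.asarray(d_active, dtype=float) ** (-(gamma + 1.0))
    return w / w.sum()

# Drawing i_t for an active pool with candidate values d = [1, 2, 4]:
# rng = np.random.default_rng(0)
# i_t = rng.choice(3, p=selection_probs([1, 2, 4], gamma=1.0))
```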
We have the following result, whose proof is contained in Appendix A.2.
###### Theorem 2.
Let Algorithm 2 be run with parameters $\delta$ and $\gamma\leq\alpha$, with a
pool of base learners $\mathcal{M}_{1}$ of size $M$, on an i.i.d. sample
$(x_{1},y_{1}),\ldots,(x_{T},y_{T})\sim\mathcal{D}$, where the marginal
distribution $\mathcal{D}_{\mathcal{X}}$ fulfills the low-noise condition with
exponent $\alpha\geq 0$ w.r.t. a function $h$ that satisfies (1) and
complexity $S_{T,n}(h)$. Let also $\mathcal{M}_{1}$ contain at least one base
learner $i$ such that $\sqrt{2}S_{T,n}(h)\leq S_{i}\leq 2\sqrt{2}S_{T,n}(h)$
and $d_{i}=\Theta(L_{H}(L_{H}+\log(M\log T/\delta)+S^{2}_{T,n}(h)))$, where
$L_{H}=\log\det(I+H)$, $H$ being the NTK matrix of depth $n$ over the set of
points $\\{x_{t,a}\\}_{t=1,\ldots,T,\,a=\pm 1}$. Then with probability at
least $1-\delta$ the cumulative regret $R_{T}$ and the total number of queries
$N_{T}$ are simultaneously upper bounded as follows:
$\displaystyle R_{T}$
$\displaystyle=O\left(M\,\Bigl{(}L_{H}\bigl{(}L_{H}+\log(M\log
T/\delta)+S^{2}_{T,n}(h)\bigr{)}\Bigr{)}^{\frac{(\gamma+1)^{2}}{\gamma+2}}T^{\frac{1}{\gamma+2}}+M\,L(T,\delta)\right)$
$\displaystyle N_{T}$
$\displaystyle=O\left(M\,\Bigl{(}L_{H}\bigl{(}L_{H}+\log(M\log
T/\delta)+S^{2}_{T,n}(h)\bigl{)}\Bigl{)}^{\frac{\gamma}{\gamma+2}}T^{\frac{2}{\gamma+2}}+M\,L(T,\delta)\right)~{},$
where $L(T,\delta)$ is the logarithmic term defined at the beginning of
Algorithm 2’s pseudocode.
We run Algorithm 2 with the pool
$\mathcal{M}_{1}=\\{(S_{i_{1}},d_{i_{2}})\\}$, where $S_{i_{1}}=2^{i_{1}}$,
$i_{1}=0,1,\ldots,O(\log T)$ and $d_{i_{2}}=2^{i_{2}}$,
$i_{2}=0,1,\ldots,O(\log T+\log\log(M\log T/\delta))$, ensuring the existence
of a pair $(i_{1},i_{2})$ such that
$\sqrt{2}S_{T,n}(h)\leq S_{i_{1}}\leq 2\sqrt{2}S_{T,n}(h)$
and
$L_{H}\bigl{(}L_{H}+\log(M\log T/\delta)+S^{2}_{T,n}(h)\bigl{)}\leq
d_{i_{2}}\leq 2L_{H}\bigl{(}L_{H}+\log(M\log
T/\delta)+S^{2}_{T,n}(h)\bigl{)}~{}.$
Hence the resulting error due to the discretization is just a constant factor,
while the resulting number $M$ of base learners is $O(\log^{2}T+(\log
T)(\log\log(M\log T/\delta)))$.
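In code, this doubling grid is straightforward to build; a sketch of ours (the upper indices just mirror the ranges above, and the `max` guard is our own safeguard for small arguments):

```python
import math

def pool_grid(T, M, delta):
    """Doubling grids S_{i1} = 2^{i1}, d_{i2} = 2^{i2}; their product forms
    the pool M_1, so some pair brackets the unknown S_{T,n}(h) and
    L_H*(L_H + log(M log T / delta) + S^2) within constant factors."""
    n_S = int(math.ceil(math.log2(T))) + 1
    n_d = int(math.ceil(math.log2(T)
              + math.log2(max(2.0, math.log(M * math.log(T) / delta))))) + 1
    return [(2.0 ** i1, 2.0 ** i2) for i1 in range(n_S) for i2 in range(n_d)]
```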
Theorem 2 allows us to conclude that running Algorithm 2 on the above pool of
copies of Algorithm 1 yields guarantees that are similar to those obtained by
running a single instance of Algorithm 1 with $S=\sqrt{2}S_{T,n}(h)$, that is,
as if the complexity parameter $S_{T,n}(h)$ were known beforehand. Yet, this
model selection guarantee comes at a price, since Algorithm 2 needs to receive
as input the noise exponent $\alpha$ (through parameter $\gamma\leq\alpha$) in
order to correctly shape its label complexity test.
The very same online-to-batch conversion mentioned in Section 3 can be applied
to Algorithm 2. Again, combining with the bound on the number of labels and
disregarding log factors, this gives us a high probability excess risk bound
of the form
$\left(\frac{\left[L_{H}\left(L_{H}+S^{2}_{T,n}(h)\right)\right]^{\frac{3\alpha+2}{\alpha+2}}}{N_{T}}\right)^{\frac{\alpha+1}{2}}~{},$
(3)
provided $\gamma=\alpha$. Following the same example as at the end of Section
3, when $L_{H}$ is poly-logarithmic in $T$ and $S^{2}=O(T^{\beta})$, for some
$\beta\in[0,1)$, one can verify that (3) is of the form
$N_{T}^{-\frac{(1-\beta(\alpha+1))(\alpha+1)}{2+\beta\alpha}}$ (up to log
factors), which converges for $\beta<1/(\alpha+1)$. Hence, compared to (2) we
can ensure convergence in a more restricted set of cases.
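For completeness, here is a quick check of this exponent. With $L_{H}$ poly-logarithmic, $S^{2}=T^{\beta}$ and $\gamma=\alpha$, the bound on $N_{T}$ in Theorem 2 gives $N_{T}=\tilde{O}(T^{\frac{\beta\alpha+2}{\alpha+2}})$, that is, $T=\tilde{\Theta}(N_{T}^{\frac{\alpha+2}{2+\beta\alpha}})$. Substituting into (3) yields
$\left(\frac{T^{\beta\frac{3\alpha+2}{\alpha+2}}}{N_{T}}\right)^{\frac{\alpha+1}{2}}=N_{T}^{\left(\frac{\beta(3\alpha+2)}{2+\beta\alpha}-1\right)\frac{\alpha+1}{2}}=N_{T}^{-\frac{(1-\beta(\alpha+1))(\alpha+1)}{2+\beta\alpha}}~{},$
the last step using $\beta(3\alpha+2)-(2+\beta\alpha)=2(\beta(\alpha+1)-1)$.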
Section A.3 in the appendix contains the extension of our model selection
procedure to the case where the network weights are themselves updated.
## 5 Conclusions and Work in Progress
We have presented a rigorous analysis of selective sampling and active
learning in general non-parametric scenarios, where the complexity of the
Bayes optimal predictor is evaluated on the data at hand as a fitting measure
with respect to the NTK matrix of a given depth associated with the same data.
This complexity measure plays a central role in the level of uncertainty the
algorithm assigns to labels (the higher the complexity the higher the
uncertainty, hence the more labels are queried). Yet, since this is typically
an unknown parameter of the problem, special attention is devoted to designing
and analyzing a model selection technique that adapts to this unknown
parameter.
In doing so, we borrowed tools and techniques from Neural Bandits [44, 42],
selective sampling (e.g., [16]), and online model selection in contextual
bandits [35, 34], and combined them together in an original and non-trivial
manner.
We proved regret and label complexity bounds that recover known minimax rates
in the parametric case, and extended such results well beyond the parametric
setting, achieving favorable guarantees that cannot easily be compared to
available results in the literature of active learning in non-parametric
settings. One distinctive feature of our proposed technique is that it gives
rise to efficient and manageable algorithms for modular DNN architecture
design and deployment.
We conclude by mentioning a few directions we are currently exploring:
1.
We are trying to get rid of the prior knowledge of $\alpha$ in the model
selection Algorithm 2. This may call for a slightly more refined balancing
technique that jointly involves $S_{T,n}(h)$ and $\alpha$ itself.
2.
Regardless of whether $\alpha$ is available, it would be nice to improve the
dependence on $\gamma=\alpha$ in the regret bound of Theorem 2. This would
ensure convergence of the generalization bound as $N_{T}\rightarrow\infty$
when $S_{T,n}(h)^{2}=T^{\beta}$, for all $\beta\in[0,1)$. We conjecture that
this is due to a suboptimal design of our balancing mechanism for model
selection in Algorithm 2.
3.
We are investigating links between the complexity measure $S_{T,n}(h)$ and the
smoothness properties of the (Bayes) regression function $h$ with respect to
the NTK kernel (of a given depth $n$).
## References
* [1] Y. Abbasi-yadkori, D. Pál, and C. Szepesvári. Improved algorithms for linear stochastic bandits. In Advances in Neural Information Processing Systems 24, pages 2312–2320. Curran Associates, Inc., 2011.
* [2] S. Arora, S. S. Du, W. Hu, Z. Li, R. Salakhutdinov, and R. Wang. On exact computation with an infinitely wide neural net. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2019.
* [3] J. T. Ash, C. Zhang, A. Krishnamurthy, J. Langford, and A. Agarwal. Deep batch active learning by diverse, uncertain gradient lower bounds. arXiv preprint arXiv:1906.03671, 2019.
* [4] M. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. Journal of Computer and System Sciences, 75(1):78–89, 2009.
* [5] N. Balcan, S. Hanneke, and J. Wortman. The true sample complexity of active learning. In COLT, 2008.
* [6] A. Beygelzimer, S. Dasgupta, and J. Langford. Importance weighted active learning. In ICML, 2009.
* [7] A. Bietti and J. Mairal. On the inductive bias of neural tangent kernels. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2019.
* [8] Y. Cao, Z. Fang, Y. Wu, D. Zhou, and Q. Gu. Towards understanding the spectral bias of deep learning. arXiv preprint arXiv:1912.01198, 2019.
* [9] Y. Cao and Q. Gu. Generalization bounds of stochastic gradient descent for wide and deep neural networks. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2019.
* [10] R. Castro and R. Nowak. Minimax bounds for active learning. IEEE Transactions on Information Theory, 54(5):2339–2353, 2008.
* [11] K. Chaudhuri and S. Dasgupta. Rates of convergence for nearest neighbor classification. In Advances in Neural Information Processing Systems, pages 3437–3445, 2014.
* [12] S. R. Chowdhury and A. Gopalan. On kernelized multi-armed bandits. In Proceedings of the 34th International Conference on Machine Learning, 2017.
* [13] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine learning, 15(2):201–221, 1994.
* [14] I. Dagan and S. P. Engelson. Committee-based sampling for training probabilistic classifiers. In Machine Learning Proceedings 1995, pages 150–157. Elsevier, 1995.
* [15] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In Advances in Neural Information Processing Systems, 2007.
* [16] O. Dekel, C. Gentile, and K. Sridharan. Selective sampling and active learning from single and multiple teachers. J. Mach. Learn. Res., 13(1), 2012.
* [17] S. Du, J. Lee, H. Li, L. Wang, and X. Zhai. Gradient descent finds global minima of deep neural networks. In International Conference on Machine Learning, page 1675–1685, 2019.
* [18] S. Du, J. Lee, Y. Tian, A. Singh, and B. Poczos. Gradient descent learns one-hidden-layer CNN: Don’t be afraid of spurious local minima. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 1339–1348. PMLR, 2018.
* [19] S. Hanneke. A bound on the label complexity of agnostic active learning. In ICML, 2007.
* [20] S. Hanneke. Adaptive rates of convergence in active learning. In Proc. of the 22nd Annual Conference on Learning Theory, 2009.
* [21] S. Hanneke et al. Theory of disagreement-based active learning. Foundations and Trends® in Machine Learning, 7(2-3):131–309, 2014.
* [22] S. R. Howard, A. Ramdas, J. McAuliffe, and J. Sekhon. Time-uniform, nonparametric, nonasymptotic confidence sequences. arXiv preprint arXiv:1810.08240, 2018.
* [23] A. Jacot, F. Gabriel, and C. Hongler. Neural tangent kernel: convergence and generalization in neural networks. In Advances in neural information processing systems, page 8571–8580. MIT Press, 2018.
* [24] M. Karzand and R. Nowak. Maximin active learning in overparameterized model classes. arXiv preprint arXiv:1905.12782v2, 2020.
* [25] A. Kirsch, J. Van Amersfoort, and Y. Gal. Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning. arXiv preprint arXiv:1906.08158, 2019.
* [26] V. Koltchinskii. Rademacher complexities and bounding the excess risk of active learning. Journal of Machine Learning Research, 11:2457–2485, 2010.
* [27] A. Kontorovich, S. Sabato, and R. Urner. Active nearest-neighbor learning in metric spaces. In Advances in Neural Information Processing Systems, pages 856–864, 2016.
* [28] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25:1097–1105, 2012.
* [29] D. D. Lewis and W. A. Gale. A sequential algorithm for training text classifiers. In SIGIR’94, pages 3–12. Springer, 1994.
* [30] A. Locatelli, A. Carpentier, and S. Kpotufe. Adaptivity to noise parameters in nonparametric active learning. In Proceedings of the 2017 Conference on Learning Theory, volume 65 of Proceedings of Machine Learning Research, pages 1383–1416, 2017.
* [31] E. Mammen and A. Tsybakov. Smooth discrimination analysis. The Annals of Statistics, 27(6):1808–1829, 1999.
* [32] S. Minsker. Plug-in approach to active learning. Journal of Machine Learning Research, 13:67–90, 2012.
* [33] B. Njike and X. Siebert. Nonparametric adaptive active learning under local smoothness condition. arXiv preprint arXiv:2102.11077, 2021.
* [34] A. Pacchiano, C. Dann, C. Gentile, and P. Bartlett. Regret bound balancing and elimination for model selection in bandits and RL. arXiv preprint arXiv:2012.13045, 2020.
* [35] A. Pacchiano, M. Phan, Y. Abbasi Yadkori, A. Rao, J. Zimmert, T. Lattimore, and C. Szepesvari. Model selection in contextual stochastic bandit problems. In Advances in Neural Information Processing Systems, volume 33, pages 10328–10337. Curran Associates, Inc., 2020.
* [36] R. Pop and P. Fulop. Deep ensemble bayesian active learning: Addressing the mode collapse issue in monte carlo dropout via ensembles. arXiv preprint arXiv:1811.03897, 2018.
* [37] M. Raginsky and A. Rakhlin. Lower bounds for passive and active learning. In Advances in Neural Information Processing Systems, 2011.
* [38] O. Sener and S. Savarese. Active learning for convolutional neural networks: A core-set approach. arXiv preprint arXiv:1708.00489, 2017.
* [39] B. Settles. Active learning literature survey. 2009.
* [40] L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
* [41] M. Valko, N. Korda, R. Munos, I. Flaounas, and N. Cristianini. Finite-time analysis of kernelised contextual bandits. arXiv preprint arXiv:1309.6869, 2013.
* [42] W. Zhang, D. Zhou, L. Li, and Q. Gu. Neural Thompson sampling. arXiv preprint arXiv:2010.00827, 2020.
* [43] F. Zhdanov. Diverse mini-batch active learning. arXiv preprint arXiv:1901.05954, 2019.
* [44] D. Zhou, L. Li, and Q. Gu. Neural contextual bandits with ucb-based exploration. In Proceedings of the 37th International Conference on Machine Learning, 2020.
## Appendix A Appendix
This appendix contains, beyond the proof of all results contained in the main
body (Section A.1 and Section A.2), the extension of our model selection
results to the non-frozen NTK case (Section A.3). Section A.4 contains
ancillary technical lemmas used throughout the proofs.
### A.1 Proofs for Section 3
We first recall the following representation theorem (which is Lemma 5.1 in
[44]). We give a proof sketch for completeness.
###### Lemma 1.
There exists a positive constant $C$ such that for any $\delta\in(0,1)$, if
$m\geq CT^{4}n^{6}\log(2Tn/\delta)/\lambda_{0}^{4}$
then with probability at least $1-\delta$ over the random initialization
$\theta_{0}$, there exists $\theta^{*}\in\mathbb{R}^{p}$ for which
$\displaystyle h(x_{t,a})=\langle
g(x_{t,a};\theta_{0}),\theta^{*}-\theta_{0}\rangle\qquad{\mbox{and}}\qquad\sqrt{m}\,\|\theta^{*}-\theta_{0}\|_{2}\leq\sqrt{2}S_{T,n}(h)$
(4)
for all $t\in[T]$, $a\in\mathcal{Y}$, and $h$.
###### Proof.
Recall the rearrangement of $\\{x_{t,a}\\}_{t=1,\ldots,T,\,a=\pm 1}$ into
$\\{x^{(i)}\\}_{i=1,\ldots,2T}$. We define the $p\times 2T$ matrix
$G=\left[\phi(x^{(1)}),\ldots,\phi(x^{(2T)})\right]$. For
$m=\Omega(T^{4}n^{6}\log(2Tn/\delta)/\lambda_{0}^{4})$, we have
$\|G^{\top}G-H\|_{F}\leq\lambda_{0}/2$ with probability at least $1-\delta$
over the random initialization of $\theta_{0}$; this follows from a union bound applied to Theorem 3.1 in [2]. Since $H$ on $\\{x^{(i)}\\}_{i=1,\ldots,2T}$ is
positive definite with smallest eigenvalue $\lambda_{0}$, $G^{\top}G$ is also
positive definite. Let the singular value decomposition of $G$ be
$G=PAQ^{\top}$, $P\in\mathbb{R}^{p\times 2T}$, $A\in\mathbb{R}^{2T\times 2T}$,
$Q\in\mathbb{R}^{2T\times 2T}$, then $A$ is also positive definite. We define
$\theta^{*}=\theta_{0}+PA^{-1}Q^{\top}\mathbf{h}/\sqrt{m}~{}.$
It is easy to see that $\theta^{*}$ satisfies (4), hence concluding the proof.
∎
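For completeness, here is the short computation showing that this $\theta^{*}$ satisfies (4). Since $P^{\top}P=I$ and $\phi=g(\cdot\,;\theta_{0})/\sqrt{m}$,
$\sqrt{m}\,G^{\top}(\theta^{*}-\theta_{0})=QAP^{\top}PA^{-1}Q^{\top}\mathbf{h}=\mathbf{h}~{},$
whose $i$-th entry reads $\langle g(x^{(i)};\theta_{0}),\theta^{*}-\theta_{0}\rangle=h(x^{(i)})$. Moreover,
$m\,\|\theta^{*}-\theta_{0}\|_{2}^{2}=\mathbf{h}^{\top}QA^{-2}Q^{\top}\mathbf{h}=\mathbf{h}^{\top}(G^{\top}G)^{-1}\mathbf{h}\leq 2\,\mathbf{h}^{\top}H^{-1}\mathbf{h}~{},$
where the inequality uses $G^{\top}G\succeq H-(\lambda_{0}/2)I\succeq H/2$; under the standard definition $S^{2}_{T,n}(h)=\mathbf{h}^{\top}H^{-1}\mathbf{h}$ (an assumption on our part, consistent with (4)), this is exactly the claimed norm bound.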
Next, we present a lemma relating the matrix $Z_{T}$ to the NTK matrix $H$.
###### Lemma 2.
There exists a positive constant $C$ such that for any $\delta\in(0,1)$, if
$m\geq CT^{6}n^{6}\log(Tn/\delta)$
then with probability at least $1-\delta$ over the random initialization
$\theta_{0}$ we have
$\displaystyle\log\det Z_{T}\leq\log\det(I+H)+1~{}.$ (5)
###### Proof.
The proof is an adaptation of the proof of Lemma 5.4 in [44]. Let
$G=\left[\phi(x^{(1)}),\ldots,\phi(x^{(2T)})\right]\in\mathbb{R}^{p\times 2T}$. We can write
$\displaystyle\log\det Z_{T}$
$\displaystyle=\log\det\left(I+\sum_{t=1}^{T}I_{t}\phi(x_{t,a_{t}})\phi(x_{t,a_{t}})^{\top}\right)$
$\displaystyle\leq\log\det\left(I+\sum_{i=1}^{2T}\phi(x^{(i)})\phi(x^{(i)})^{\top}\right)$
$\displaystyle=\log\det\bigl{(}I+GG^{\top}\bigr{)}$
$\displaystyle=\log\det\bigl{(}I+G^{\top}G\bigr{)}$
$\displaystyle=\log\det\bigl{(}I+H+(G^{\top}G-H)\bigr{)}$
$\displaystyle\leq\log\det(I+H)+\langle(I+H)^{-1},(G^{\top}G-H)\rangle_{F}$
$\displaystyle\leq\log\det(I+H)+\|(I+H)^{-1}\|_{F}\|G^{\top}G-H\|_{F}$
$\displaystyle\leq\log\det(I+H)+\sqrt{2T}\,\|G^{\top}G-H\|_{F}$
$\displaystyle\leq\log\det(I+H)+1~{}.$
In the above, the first inequality is obvious, the second inequality uses the
fact that $\log\det(\cdot)$ is a concave function, the third one uses the Cauchy-Schwarz inequality, the fourth one comes from
$\|(I+H)^{-1}\|_{F}\leq\|I\|_{F}=\sqrt{2T}$, and the last inequality uses
Lemma B.1 in [44] along with our choice of $m$. ∎
The proofs of both Lemma 1 and Lemma 2 rely on controlling the size of
$\|G^{\top}G-H\|_{F}$, which is small with high probability when $m$ is large
enough. Therefore, given
$m\geq CT^{4}\log(2Tn/\delta)n^{6}\left(T^{2}\vee
1/\lambda_{0}^{4}\right)~{},$
we have
$\displaystyle\mathcal{E}_{0}=\\{\exists\,\theta^{*}\in\mathbb{R}^{p}\,:\,(4)\ {\mbox{and}}\ (5)\ {\mbox{hold}}\\}~{},$
(6)
holds with probability at least $1-\delta$ over random initialization of
$\theta_{0}$.
To take into account the random noise from the sequence of labels, we also
define
$\displaystyle\mathcal{E}=\\{\exists\,\theta^{*}\in\mathbb{R}^{p}\,:\,\mathcal{E}_{0}\
{\mbox{holds and }}\theta^{*}\in\mathcal{C}_{t}\ \forall t>0\\}~{}.$ (7)
In order to make sense of the querying threshold $B_{t}$ in Algorithm 1, we
derive an upper and a lower bound for $U_{t,a}-h(x_{t,a})$ under
$\mathcal{E}$.
As for the lower bound, simply notice that, by definition,
$\displaystyle U_{t,a}=\max_{\theta\in\mathcal{C}_{t-1}}\langle
g(x_{t,a};\theta_{0}),\theta-\theta_{0}\rangle\geq\langle
g(x_{t,a};\theta_{0}),\theta^{*}-\theta_{0}\rangle=h(x_{t,a})~{}.$ (8)
To derive an upper bound, we can write
$\displaystyle U_{t,a}-h(x_{t,a})$
$\displaystyle=\max_{\theta\in\mathcal{C}_{t-1}}\langle
g(x_{t,a};\theta_{0}),\theta-\theta_{0}\rangle-\langle
g(x_{t,a};\theta_{0}),\theta^{*}-\theta_{0}\rangle$
$\displaystyle=\max_{\theta\in\mathcal{C}_{t-1}}\langle
g(x_{t,a};\theta_{0}),\theta-\theta_{t-1}\rangle-\langle
g(x_{t,a};\theta_{0}),\theta^{*}-\theta_{t-1}\rangle$
$\displaystyle\leq\max_{\theta\in\mathcal{C}_{t-1}}\|g(x_{t,a};\theta_{0})\|_{Z_{t-1}^{-1}}\Bigl{(}\|\theta-\theta_{t-1}\|_{Z_{t-1}}+\|\theta^{*}-\theta_{t-1}\|_{Z_{t-1}}\Bigr{)}$
$\displaystyle\leq 2\gamma_{t-1}\|\phi(x_{t,a})\|_{Z_{t-1}^{-1}}~{},$ (9)
where in the last inequality we used the definition of $\mathcal{C}_{t-1}$ and
the assumption that $\theta^{*}\in\mathcal{C}_{t-1}$. A proof of this
assumption is contained in the lemma below, which follows from standard
arguments.
###### Lemma 3.
Let the input parameter $S$ in Algorithm 1 be such that
$\sqrt{2}S_{T,n}(h)\leq S$, then under event $\mathcal{E}_{0}$ for any
$\delta>0$, with probability at least $1-\delta$ over the random noises we
have
$\|\theta^{*}-\theta_{t}\|_{Z_{t}}\leq\gamma_{t}/\sqrt{m}$
for all $t\geq 0$ simultaneously, i.e., $\theta^{*}\in\mathcal{C}_{t}$ with
high probability simultaneously for all $t\geq 0$.
###### Proof.
We essentially follow the proof of Theorem 2 in [1] (see also the proof of
Lemma 5.2 in [44]).
We have $\ell_{t}=1-h(x_{t,a_{t}})-\xi_{t}$, where
$\xi_{t}=1-\ell_{t}-h(x_{t,a_{t}})$ is a sub-Gaussian random variable. Hence,
setting $\bm{\xi}_{t}=(I_{1}\xi_{1},...,I_{t}\xi_{t})^{\top}$,
$X_{t}=(I_{1}\phi(x_{1,a_{1}}),...,I_{t}\phi(x_{t,a_{t}}))^{\top}$, and
$Y_{t}=(I_{1}(1-\ell_{1}),...,I_{t}(1-\ell_{t}))^{\top}$, we can write
$Z_{t}=X_{t}^{\top}X_{t}+I,\qquad b_{t}=X_{t}^{\top}Y_{t}~{}.$
Plugging these into the definition of $\theta_{t}$ gives
$\displaystyle\theta_{t}-\theta_{0}$ $\displaystyle=Z_{t}^{-1}b_{t}/\sqrt{m}$
$\displaystyle=(X_{t}^{\top}X_{t}+I)^{-1}X_{t}^{\top}(\sqrt{m}X_{t}(\theta^{*}-\theta_{0})+\bm{\xi}_{t})/\sqrt{m}$
$\displaystyle=(X_{t}^{\top}X_{t}+I)^{-1}X_{t}^{\top}\bm{\xi}_{t}/\sqrt{m}+\theta^{*}-\theta_{0}-(X_{t}^{\top}X_{t}+I)^{-1}(\theta^{*}-\theta_{0})~{},$
where in the second equality we used the definition of $\xi_{t}$ and Lemma 1. Now,
for any $x\in\mathbb{R}^{p}$, we get
$\displaystyle x^{\top}(\theta_{t}-\theta^{*})=\langle
x,X_{t}^{\top}\bm{\xi}_{t}\rangle_{Z_{t}^{-1}}/\sqrt{m}-\langle
x,\theta^{*}-\theta_{0}\rangle_{Z_{t}^{-1}}~{},$
hence
$\displaystyle|x^{\top}(\theta_{t}-\theta^{*})|$
$\displaystyle\leq\|x\|_{Z_{t}^{-1}}\Bigl{(}\|X_{t}^{\top}\bm{\xi}_{t}\|_{Z_{t}^{-1}}/\sqrt{m}+\|\theta^{*}-\theta_{0}\|_{Z_{t}^{-1}}\Bigr{)}$
$\displaystyle\leq\|x\|_{Z_{t}^{-1}}\Bigl{(}\|X_{t}^{\top}\bm{\xi}_{t}\|_{Z_{t}^{-1}}/\sqrt{m}+\|\theta^{*}-\theta_{0}\|_{2}\Bigr{)}~{},$
where the first inequality derives from the Cauchy-Schwarz inequality and the
second from the fact that the smallest eigenvalue of $Z_{t}$ is at least $1$.
Then, by Theorem 1 in [1], for any $\delta$ with probability at least
$1-\delta$ over the random noises
$\|X_{t}^{\top}\bm{\xi}_{t}\|_{Z_{t}^{-1}}\leq\sqrt{\log\biggl{(}\frac{\det(Z_{t})}{\delta^{2}}\biggr{)}}~{}.$
Therefore, when $\mathcal{E}_{0}$ holds, we have for all $t>0$, with
probability at least $1-\delta$,
$|x^{\top}(\theta_{t}-\theta^{*})|\leq\|x\|_{Z_{t}^{-1}}\left(\sqrt{\log\biggl{(}\frac{\det(Z_{t})}{\delta^{2}}\biggr{)}/m}+\sqrt{2}S_{T,n}(h)/\sqrt{m}\right)~{}.$
Plugging in $x=Z_{t}(\theta_{t}-\theta^{*})$ and using $\sqrt{2}S_{T,n}(h)\leq
S$, we obtain
$\|\theta^{*}-\theta_{t}\|_{Z_{t}}\leq\sqrt{\log\biggl{(}\frac{\det(Z_{t})}{\delta^{2}}\biggr{)}/m}+S/\sqrt{m}=\gamma_{t}/\sqrt{m}~{},$
as claimed. ∎
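The final plug-in step deserves one line of detail: with $x=Z_{t}(\theta_{t}-\theta^{*})$ we have $x^{\top}(\theta_{t}-\theta^{*})=\|\theta_{t}-\theta^{*}\|^{2}_{Z_{t}}$ and $\|x\|^{2}_{Z_{t}^{-1}}=(\theta_{t}-\theta^{*})^{\top}Z_{t}(\theta_{t}-\theta^{*})=\|\theta_{t}-\theta^{*}\|^{2}_{Z_{t}}$, so dividing both sides of the penultimate display by $\|\theta_{t}-\theta^{*}\|_{Z_{t}}$ gives the stated bound.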
Combining Lemmas 1, 2, and 3, we confirm that $\mathcal{E}$ is a high-probability event.
###### Lemma 4.
There exists a constant $C$ such that if $m\geq
CT^{4}\log(2Tn/\delta)n^{6}\left(T^{2}\vee 1/\lambda_{0}^{4}\right)$ and
$\sqrt{2}S_{T,n}(h)\leq S$, then
$\displaystyle\mathbb{P}(\mathcal{E})\geq 1-2\delta~{}.$ (10)
###### Proof.
Lemmas 1 and 2 imply that $\mathbb{P}(\mathcal{E}_{0})\geq 1-\delta$ when
$m\geq CT^{4}\log(2Tn/\delta)n^{6}\left(T^{2}\vee 1/\lambda_{0}^{4}\right)$.
Lemma 3 implies that when $\sqrt{2}S_{T,n}(h)\leq S$,
$\mathbb{P}(\theta^{*}\in\mathcal{C}_{t}\ \forall t>0\mid\mathcal{E}_{0})\geq
1-\delta$. Therefore,
$\mathbb{P}(\mathcal{E})=\mathbb{P}(\theta^{*}\in\mathcal{C}_{t}\ \forall
t>0\mid\mathcal{E}_{0})\mathbb{P}(\mathcal{E}_{0})\geq(1-\delta)^{2}\geq
1-2\delta~{}.$
∎
###### Lemma 5.
For any $b>0$ we have
$\sum_{t=1}^{T}b\wedge I_{t}B_{t}^{2}\leq 8\left(\log\det
Z_{T}+2\log(1/\delta)+S^{2}+\frac{b}{8}\right)\log\det Z_{T}~{}.$ (11)
###### Proof.
By definition of $B_{t}$ and the fact that $\gamma_{t}$ is increasing, we have
$\sum_{t=1}^{T}b\wedge I_{t}B_{t}^{2}\leq
4\gamma_{T}^{2}\sum_{t=1}^{T}\frac{b}{4\gamma_{T}^{2}}\wedge
I_{t}\|\phi(x_{t,a_{t}})\|_{{Z}_{t-1}^{-1}}^{2}\leq(b+4\gamma_{T}^{2})\log\det
Z_{T}~{},$
where the second inequality is from Lemma 24. Using the definition of
$\gamma_{T}$ and the inequality $(a+b)^{2}\leq 2a^{2}+2b^{2}$ we obtain
$\gamma_{T}^{2}\leq 2\log\det Z_{T}+4\log(1/\delta)+2S^{2}~{}.$
Plugging this in we get (11). ∎
Let us now introduce the short-hand notation
$\displaystyle\widehat{\Delta}_{t}=U_{t,a_{t}}-1/2~{},\qquad\Delta_{t}=h(x_{t,a_{t}})-1/2~{},\qquad
T_{\epsilon}=\sum_{t=1}^{T}\mathbb{1}\left\\{\Delta_{t}^{2}\leq\epsilon^{2}\right\\}~{},$
for some $\epsilon\in(0,\frac{1}{2})$. Combined with (8) and (9), we have the
following statement about $\widehat{\Delta}_{t}$ and $\Delta_{t}$.
###### Lemma 6.
Under event $\mathcal{E}$, $0\leq\widehat{\Delta}_{t}-\Delta_{t}\leq B_{t}$
and $\ 0\leq\widehat{\Delta}_{t}$ hold for all $t$, where $B_{t}$ is the
querying threshold in Algorithm 1, i.e.,
$B_{t}=2\gamma_{t-1}\|\phi(x_{t,a_{t}})\|_{Z_{t-1}^{-1}}~{}.$
###### Proof.
Recall that (8) and (9) imply that, for $a\in\mathcal{Y}$,
$0\leq U_{t,a}-h(x_{t,a})\leq B_{t}~{}.$
Specifically when $a=a_{t}$,
$0\leq\widehat{\Delta}_{t}-\Delta_{t}\leq B_{t}~{}.$
Also using (8) we have $U_{t,1}+U_{t,-1}\geq h(x_{t,1})+h(x_{t,-1})=1$. Hence,
by definition of $a_{t}$, $U_{t,a_{t}}\geq 1/2$, i.e.,
$\widehat{\Delta}_{t}\geq 0$. ∎
The following lemma bounds the label complexity $N_{T}$ of Algorithm 1 under
event $\mathcal{E}$. Notice that, as stated, the bound does not depend on any
specific properties of the marginal distribution $\mathcal{D}_{\mathcal{X}}$.
###### Lemma 7.
Under event $\mathcal{E}$, for any $\epsilon\in(0,1/2)$ we have
$\displaystyle N_{T}$ $\displaystyle\leq
T_{\epsilon}+\frac{8}{\epsilon^{2}}(\log\det
Z_{T}+2\log(1/\delta)+S^{2}+\frac{1}{32})\log\det Z_{T}$
$\displaystyle=O\left(T_{\epsilon}+\frac{1}{\epsilon^{2}}\left(\log\det(I+H)+\log(1/\delta)+S^{2}\right)\log\det(I+H)\right)~{}.$
###### Proof.
We adapt the proof of Lemma 6 in [16]. Assume $\mathcal{E}$ holds. Since
$0\leq\widehat{\Delta}_{t}-\Delta_{t}\leq B_{t}$ and $\widehat{\Delta}_{t}\geq 0$ by Lemma 6, $\widehat{\Delta}_{t}\leq B_{t}$ implies $|\Delta_{t}|\leq B_{t}$.
We can write
$\displaystyle I_{t}$ $\displaystyle=I_{t}\,\mathbb{1}\left\\{\widehat{\Delta}_{t}\leq B_{t}\right\\}$ $\displaystyle\leq I_{t}\,\mathbb{1}\left\\{\widehat{\Delta}_{t}\leq B_{t},B_{t}\geq\epsilon\right\\}+I_{t}\,\mathbb{1}\left\\{\widehat{\Delta}_{t}\leq B_{t},B_{t}<\epsilon\right\\}$
$\displaystyle\leq\frac{I_{t}B_{t}^{2}}{\epsilon^{2}}\wedge 1+\mathbb{1}\left\\{\Delta_{t}^{2}\leq\epsilon^{2}\right\\}~{}.$
For the first term, summing over $t$ yields
$\displaystyle\frac{1}{\epsilon^{2}}\sum_{t=1}^{T}I_{t}B_{t}^{2}\wedge\epsilon^{2}$
$\displaystyle\leq\frac{1}{\epsilon^{2}}\sum_{t=1}^{T}I_{t}B_{t}^{2}\wedge\frac{1}{4}$
$\displaystyle\leq\frac{8}{\epsilon^{2}}\left(\log\det
Z_{T}+2\log(1/\delta)+S^{2}+\frac{1}{32}\right)\log\det Z_{T}$
$\displaystyle=O\left(\frac{1}{\epsilon^{2}}\left(\log\det(I+H)+\log(1/\delta)+S^{2}\right)\log\det(I+H)\right)~{},$
where the second bound follows from Lemma 5, and the last bound holds under
event $\mathcal{E}$. ∎
The next lemma shows that on rounds where Algorithm 1 does not issue a query,
we are confident that our prediction $a_{t}$ suffers no regret.
###### Lemma 8.
Under event $\mathcal{E}$, for the rounds $t$ such that $I_{t}=0$, we have
$a_{t}=a_{t}^{*}$, that is, Algorithm 1 suffers no regret.
###### Proof.
When $I_{t}=0$, the query rule of Algorithm 1 gives $\widehat{\Delta}_{t}>B_{t}$. Combined with the condition $\widehat{\Delta}_{t}-\Delta_{t}\leq B_{t}$ from Lemma 6, this yields $\Delta_{t}>0$, which in turn entails $a_{t}=a_{t}^{*}$. ∎
The next lemma establishes an upper bound on the cumulative regret $R_{T}$ in
the same style as in Lemma 7.
###### Lemma 9.
Under event $\mathcal{E}$, for any $\epsilon\in(0,1/2)$ we have
$\displaystyle R_{T}$ $\displaystyle\leq 2\epsilon
T_{\epsilon}+\frac{16}{\epsilon}\left(\log\det{Z}_{T}+2\log(1/\delta)+S^{2}+\frac{1}{16}\right)\log\det{Z}_{T}$
$\displaystyle=O\left(\epsilon
T_{\epsilon}+\frac{1}{\epsilon}\left(\log\det(I+H)+\log(1/\delta)+S^{2}\right)\,\log\det(I+H)\right)~{}.$
###### Proof.
By virtue of Lemma 8, we can restrict our attention to the rounds $t$
on which $I_{t}=1$. We have
$\displaystyle R_{T}$
$\displaystyle=\sum_{t=1}^{T}I_{t}\bigl{(}h(x_{t,a_{t}^{*}})-h(x_{t,a_{t}})\bigr{)}$
$\displaystyle=\sum_{t=1}^{T}I_{t}\bigl{(}h(x_{t,a_{t}^{*}})-h(x_{t,a_{t}})\bigr{)}\,\mathbb{1}\left\\{a_{t}\neq a_{t}^{*}\right\\}$
$\displaystyle\leq\sum_{t=1}^{T}I_{t}\bigl{|}h(x_{t,1})-h(x_{t,-1})\bigr{|}\,\mathbb{1}\left\\{a_{t}\neq a_{t}^{*}\right\\}$
$\displaystyle=2\,\sum_{t=1}^{T}I_{t}|\Delta_{t}|$
$\displaystyle=2\sum_{t=1}^{T}I_{t}|\Delta_{t}|\,\mathbb{1}\left\\{|\Delta_{t}|>\epsilon\right\\}+2\sum_{t=1}^{T}I_{t}|\Delta_{t}|\,\mathbb{1}\left\\{|\Delta_{t}|\leq\epsilon\right\\}~{}.$
The second sum is clearly upper bounded by $2\epsilon T_{\epsilon}$. As for
the first sum, notice that Lemma 6 along with $I_{t}=1$ implies
$|\Delta_{t}|\leq B_{t}$ under event $\mathcal{E}$. Therefore
$\displaystyle 2\sum_{t=1}^{T}I_{t}|\Delta_{t}|\,\mathbb{1}\left\\{|\Delta_{t}|>\epsilon\right\\}$
$\displaystyle\leq\frac{2}{\epsilon}\sum_{t=1}^{T}I_{t}\Delta_{t}^{2}\wedge\epsilon$
$\displaystyle\leq\frac{2}{\epsilon}\sum_{t=1}^{T}I_{t}B_{t}^{2}\wedge\frac{1}{2}$
$\displaystyle\leq\frac{16}{\epsilon}\left(\log\det{Z}_{T}+2\log(1/\delta)+S^{2}+\frac{1}{16}\right)\log\det{Z}_{T}$
$\displaystyle=O\left(\frac{1}{\epsilon}\left(\log\det(I+H)+\log(1/\delta)+S^{2}\right)\,\log\det(I+H)\right)~{}.$
The third bound follows from Lemma 5, while the last bound holds under event
$\mathcal{E}$. ∎
At this point, we leverage the fact that $x_{1},...,x_{T}$ are generated in an
i.i.d. fashion according to a marginal distribution
$\mathcal{D}_{\mathcal{X}}$ satisfying the low-noise assumption with exponent
$\alpha$ recalled in Section 3. A direct application of Lemma 23 (Appendix
A.4) gives, with probability at least $1-\delta$,
$T_{\epsilon}\leq 3T\epsilon^{\alpha}+O\left(\log\frac{\log
T}{\delta}\right)~{},$
simultaneously over $\epsilon$. Plugging the above bound on $T_{\epsilon}$ into both Lemma 7 and Lemma 9, and optimizing over $\epsilon$ in the two bounds
separately yields the following result, which is presented in the main body as
Theorem 1.
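Concretely, abbreviating $d=L_{H}\bigl{(}L_{H}+\log(\log T/\delta)+S^{2}\bigr{)}$, the two bounds take the form $R_{T}=O(\epsilon^{\alpha+1}T+d/\epsilon)$ and $N_{T}=O(\epsilon^{\alpha}T+d/\epsilon^{2})$, and the choice $\epsilon=(d/T)^{\frac{1}{\alpha+2}}$ balances both pairs of terms, giving $R_{T}=O\bigl{(}d^{\frac{\alpha+1}{\alpha+2}}T^{\frac{1}{\alpha+2}}\bigr{)}$ and $N_{T}=O\bigl{(}d^{\frac{\alpha}{\alpha+2}}T^{\frac{2}{\alpha+2}}\bigr{)}$, exactly as in the theorem below.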
###### Theorem 3.
Let Algorithm 1 be run with parameters $\delta$, $S$, $m$, and $n$ on an
i.i.d. sample $(x_{1},y_{1}),\ldots,(x_{T},y_{T})\sim\mathcal{D}$, where the
marginal distribution $\mathcal{D}_{\mathcal{X}}$ fulfills the low-noise
condition with exponent $\alpha\geq 0$ w.r.t. a function $h$ that satisfies
(1) and such that $\sqrt{2}S_{T,n}(h)\leq S$ for all $\\{x_{i}\\}_{i=1}^{T}$.
Also assume $m\geq CT^{4}\log(2Tn/\delta)n^{6}\left(T^{2}\vee
1/\lambda_{0}^{4}\right)$ where $C$ is the constant in Lemma 1 and Lemma 2.
Then with probability at least $1-\delta$ the cumulative regret $R_{T}$ and
the total number of queries $N_{T}$ are simultaneously upper bounded as
follows:
$\displaystyle R_{T}$
$\displaystyle=O\biggl{(}L_{H}^{\frac{\alpha+1}{\alpha+2}}\Bigl{(}L_{H}+\log(\log T/\delta)+S^{2}\Bigr{)}^{\frac{\alpha+1}{\alpha+2}}T^{\frac{1}{\alpha+2}}\biggr{)}$
$\displaystyle N_{T}$
$\displaystyle=O\biggl{(}L_{H}^{\frac{\alpha}{\alpha+2}}\Bigl{(}L_{H}+\log(\log T/\delta)+S^{2}\Bigr{)}^{\frac{\alpha}{\alpha+2}}T^{\frac{2}{\alpha+2}}\biggr{)}~{},$
where $L_{H}=\log\det(I+H)$, and $H$ is the NTK matrix of depth $n$ over the
set of points $\\{x_{t,a}\\}_{t=1,\ldots,T,\,a=\pm 1}$.
### A.2 Proofs for Section 4
##### Additional notation.
In this section, we add subscript “$i$” to the relevant quantities occurring
in the proof when these quantities refer to the $i$-th base learner. For
instance, we write $Z_{t,i}$ to denote the covariance matrix updated within
the $i$-th base learner,
$B_{t,i}=B_{t,i}(S_{i})=2\gamma_{t-1,i}\|\phi(x_{t,a_{t}})\|_{Z_{t-1,i}^{-1}}$,
with $\gamma_{t-1,i}=\sqrt{\log\det Z_{t-1,i}+2\log(1/\delta)}+S_{i}$, and
${\mathcal{C}}_{t,i}$ to denote the confidence ellipsoid maintained by the
$i$-th base learner.
For convenience, we also introduce the function
$\displaystyle
d(S,\delta)=(\log\det(I+H)+1)(\log\det(I+H)+\frac{17}{16}+2\log(M/\delta)+S^{2})~{}.$
(12)
The above is a high probability upper bound on
$(\frac{1}{16}+\frac{1}{2}\gamma_{T,i}^{2})\log\det Z_{T,i}$ (holding for all
$i$), which in turn upper bounds
$\frac{1}{8}\sum_{t=1}^{T}I_{t,i}B_{t,i}^{2}\wedge\frac{1}{2}$.
By the assumption in Theorem 2, we know that there is a learner
$i^{\star}=\langle i^{\star}_{1},i^{\star}_{2}\rangle\in\mathcal{M}_{1}$ such
that its parameters $S_{i^{\star}_{1}}$ and $d_{i^{\star}_{2}}$ satisfy
$\displaystyle\sqrt{2}S_{T,n}(h)\leq$
$\displaystyle~{}~{}S_{i^{\star}_{1}}\leq 2\sqrt{2}S_{T,n}(h)$ (13)
$\displaystyle d(S_{T,n}(h),\delta)\leq d(S_{i^{\star}_{1}},\delta)\leq$
$\displaystyle~{}~{}d_{i^{\star}_{2}}\leq 2d(S_{i^{\star}_{1}},\delta)\leq
8d(S_{T,n}(h),\delta)~{}.$ (14)
Throughout the proof, $i^{\star}$ denotes a specific learner satisfying these conditions. Moreover, we denote by $\mathcal{E}_{i}$ the event
where the conditions of the event in Eq. (7) and the event in Lemma 2 hold for
base learner $i$. In $\mathcal{E}_{i}$, we call $i$ well-specified.
Let $R({\mathcal{T}})$ and $N(\mathcal{T})$ denote cumulative regret $R$ and
number of requested labels $N$ when restricted to subset
${\mathcal{T}}\subseteq[T]$. Then the regret and label complexity analyses of
Algorithm 1 in Section A.1 directly imply the following regret and label
complexity bounds of a well-specified base learner $i$ during the execution of
Algorithm 2.
###### Lemma 10 (Regret and label complexity of a well-specified base
learner).
Let $i\in\mathcal{M}_{1}$ be any base learner. In event $\mathcal{E}_{i}$
(when $i$ is well-specified), the following regret and label complexity bounds hold for any $0<\epsilon<\frac{1}{2}$ and $t\in[T]$:
$\displaystyle R(\mathcal{T}_{t,i})$ $\displaystyle\leq
2\sum_{k\in\mathcal{T}_{t,i}}I_{k,i}B_{k,i}\wedge\frac{1}{2}~{}~{}\leq~{}~{}\frac{16}{\epsilon}\,d(S_{i_{1}},\delta)+2\epsilon|\mathcal{T}_{t,i}^{\epsilon}|$
$\displaystyle N(\mathcal{T}_{t,i})$
$\displaystyle\leq|\mathcal{T}_{t,i}^{\epsilon}|+\frac{1}{\epsilon^{2}}\sum_{k\in\mathcal{T}_{t,i}}I_{k,i}B_{k,i}^{2}\wedge\frac{1}{4}~{}~{}\leq~{}~{}\frac{8}{\epsilon^{2}}\,d(S_{i_{1}},\delta)+|\mathcal{T}_{t,i}^{\epsilon}|~{},$
where $\mathcal{T}_{t,i}^{\epsilon}=\\{k\in[t]\colon
i_{k}=i,~{}|\Delta_{k}|\leq\epsilon\\}$. Furthermore, in rounds
$t\in\mathcal{T}_{t,i}$ where the label is not queried ($I_{t,i}=0$), the
regret is $0$.
###### Proof.
This follows directly from the analysis of Algorithm 1 in the previous
section. ∎
Equipped with these two properties of well-specified base learners, we can
first show that with high probability, Algorithm 2 will never eliminate a
well-specified learner, and subsequently analyze the label complexity and
cumulative regret of Algorithm 2.
###### Lemma 11.
Let $i=\langle i_{1},i_{2}\rangle\in\mathcal{M}_{1}$ be a base learner with
$d_{i_{2}}\geq d(S_{i_{1}},\delta)$. Assume $\gamma\leq\alpha$ and consider
event $\bigcap_{j\colon j\geq i_{1}}\mathcal{E}_{j}$. Then, under that event,
with probability at least $1-M\delta$ Algorithm 2 never eliminates base
learner $i$.
###### Proof.
We show the statement for each of the four mis-specification tests in turn:
* •
Disagreement test: Consider a round $t$ and any learner $j=\langle
j_{1},j_{2}\rangle$ with $S_{j_{1}}\geq S_{i_{1}}$ and $I_{t,i}=I_{t,j}=0$. By
assumption, $\mathcal{E}_{i}\cap\mathcal{E}_{j}$ holds. Since $i$ did not ask for the label, we must have $|\Delta_{t}|>0$ (in rounds with no margin, i.e., $|\Delta_{t}|=0$, a learner always asks for the label). Further, by Lemma 10, the predictions of $i$ and $j$ incur no regret in round $t$. Thus, $i$ and $j$ must make the same prediction, and the test does not trigger.
* •
Observed regret test: Consider a round $t$ and any $j\in\mathcal{M}_{t}$.
Then, by virtue of Lemma 21 (Appendix A.4), the left-hand side of the observed
regret test for pair $(i,j)$ is upper-bounded with probability at least
$1-\delta$ as
$\displaystyle\sum_{k\in\mathcal{V}_{t,i,j}}(\mathbb{1}\left\\{a_{k,i}\neq y_{k}\right\\}$ $\displaystyle-\mathbb{1}\left\\{a_{k,j}\neq y_{k}\right\\})$
$\displaystyle\leq\sum_{k\in\mathcal{V}_{t,i,j}}(h(x_{k,a_{k,j}})-h(x_{k,a_{k,i}}))+0.72\sqrt{|\mathcal{V}_{t,i,j}|L(|\mathcal{V}_{t,i,j}|,\delta)}$
$\displaystyle\leq\sum_{k\in\mathcal{V}_{t,i,j}}(h(x_{k,a^{\star}_{k}})-h(x_{k,a_{k,i}}))+0.72\sqrt{|\mathcal{V}_{t,i,j}|L(|\mathcal{V}_{t,i,j}|,\delta)}$
$\displaystyle=R(\mathcal{V}_{t,i,j})+0.72\sqrt{|\mathcal{V}_{t,i,j}|L(|\mathcal{V}_{t,i,j}|,\delta)}~{},$
where the second inequality follows from the definition of the best prediction
$a^{*}_{k}$ for round $k$. Finally, in event $\mathcal{E}_{i}$ the regret of
$i$ in rounds $\mathcal{V}_{t,i,j}$ is bounded by Lemma 10 as
$\displaystyle R(\mathcal{V}_{t,i,j})\leq\sum_{k\in\mathcal{V}_{t,i,j}}1\wedge
B_{k,i}~{}.$
Therefore, this test does not trigger for pair $(i,j)$ in round $t$. By a
union bound, this happens with probability at least $1-M\delta$.
* •
Label complexity test: By Lemma 10, the number of labels requested by $i$ up
to round $t$ is at most
$\displaystyle\sum_{k\in\mathcal{T}_{t,i}}I_{k,i}\leq\inf_{\epsilon\in(0,1/2]}|\mathcal{T}_{t,i}^{\epsilon}|+\frac{1}{\epsilon^{2}}\sum_{k\in\mathcal{T}_{t,i}}I_{k,i}B_{k,i}^{2}\wedge\frac{1}{4}~{}.$
We now use Lemma 23 (Appendix A.4) to upper-bound
$|\mathcal{T}_{t,i}^{\epsilon}|$ simultaneously for all $\epsilon$ as
$\displaystyle|\mathcal{T}_{t,i}^{\epsilon}|\leq
3\epsilon^{\gamma}|\mathcal{T}_{t,i}|+2L(|\mathcal{T}_{t,i}|,\delta/\log_{2}(12t))~{}.$
By plugging this expression into the previous bound (and taking a union bound
over $i$) we show that the label complexity test is not triggered.
* •
$d_{i}$ test: Using the assumption that $\mathcal{E}_{i}$ holds and Lemma 5,
we can bound the left-hand side of the test as
$\displaystyle\sum_{k\in\mathcal{T}_{t,i}}(\frac{1}{2}\wedge
I_{k,i}B_{k,i}^{2})$ $\displaystyle\leq 8(\log\det
Z_{t,i}+2\log(1/\delta)+S_{i_{1}}^{2}+1/16)\log\det Z_{t,i}$
$\displaystyle\leq
8(\log\det(H+I)+2\log(1/\delta)+S_{i_{1}}^{2}+17/16)(\log\det(H+I)+1)$
$\displaystyle=8d(S_{i_{1}},\delta)$
and by the assumption that $d_{i_{2}}\geq d(S_{i_{1}},\delta)$, learner $i$ is not eliminated by this test.
This concludes the proof. ∎
#### A.2.1 Label Complexity Analysis
###### Lemma 12 (Label complexity of Algorithm 2).
In event $\bigcap_{i=\langle i_{1},i_{2}\rangle\in\mathcal{M}_{1}\colon
i_{1}\geq{i^{\star}_{1}}}\mathcal{E}_{i}$, Algorithm 2 queries with
probability at least $1-M\delta$
$\displaystyle N(T)$ $\displaystyle=O\left(\sum_{i=\langle
i_{1},i_{2}\rangle\in\mathcal{M}_{1}}\left(\frac{d_{i_{2}}}{\epsilon^{2}}+\epsilon^{\gamma}T\left(1\wedge\frac{d(S_{T,n}(h),\delta)}{d_{i_{2}}}\right)^{\gamma+1}\right)+ML(T,\delta/\log
T)\right)$
labels.
###### Proof.
We can decompose the total number of label requests as
$\displaystyle N(T)$
$\displaystyle=\sum_{t=1}^{T}I_{t,i_{t}}=\sum_{i=1}^{M}\sum_{t\in\mathcal{T}_{T,i}}I_{t,i}=\sum_{i\in\mathcal{M}_{1}}N(\mathcal{T}_{T,i})~{}.$
Since each learner $i$ satisfied the label complexity test except possibly for
the round where it was eliminated, we have
$\displaystyle N(\mathcal{T}_{T,i})$
$\displaystyle=O\left(\inf_{\epsilon\in(0,1/2)}\biggl{(}\epsilon^{\gamma}|\mathcal{T}_{T,i}|+\frac{1}{\epsilon^{2}}\sum_{k\in\mathcal{T}_{T,i}}I_{k,i}B_{k,i}^{2}\wedge\frac{1}{4}\biggr{)}+L(|\mathcal{T}_{T,i}|,\delta/\log T)\right)$
$\displaystyle=O\left(\inf_{\epsilon\in(0,1/2)}\biggl{(}\epsilon^{\gamma}\sum_{k\in[T]}p_{k,i}+\frac{1}{\epsilon^{2}}\sum_{k\in\mathcal{T}_{T,i}}I_{k,i}B_{k,i}^{2}\wedge\frac{1}{4}\biggr{)}+L(T,\delta/\log T)\right)$
$\displaystyle=O\left(\inf_{\epsilon\in(0,1/2)}\biggl{(}\epsilon^{\gamma}\sum_{k\in[T]}p_{k,i}+\frac{d_{i_{2}}}{\epsilon^{2}}\biggr{)}+L(T,\delta/\log
T)\right)~{},$ (15)
where the second inequality holds with probability at least $1-\delta$ by
Lemma 22 and the final inequality holds by the $d_{i}$ test. We now bound
$\sum_{k\in[T]}p_{k,i}$ as
$\displaystyle\sum_{k\in[T]}p_{k,i}\leq T(1\wedge
d_{i_{2}}^{-(\gamma+1)}d_{i^{\star}_{2}}^{\gamma+1})\leq
Td_{i_{2}}^{-(\gamma+1)}(8d(S_{T,n}(h),\delta))^{\gamma+1}\wedge T~{},$
where we used that by Lemma 11 learner $i^{\star}$ never gets eliminated in
the considered event. ∎
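The probabilities $p_{k,i}$ enter the bound only through the last step; as the proof of Lemma 13 below suggests, base learners are drawn with probability proportional to $d_{i_{2}}^{-(\gamma+1)}$ among the active ones. A minimal sketch under that assumption (all names illustrative):

```python
import numpy as np

def sample_learner(active, d, gamma, rng):
    # active: learner indices still in M_t; d[i]: the d_{i2} parameter of learner i.
    # Assumed form (cf. the proof of Lemma 13): p_{t,i} proportional to d_i^{-(gamma+1)}.
    w = np.array([d[i] ** (-(gamma + 1)) for i in active])
    return active[rng.choice(len(active), p=w / w.sum())]

rng = np.random.default_rng(0)
i_t = sample_learner(active=[0, 1, 2], d={0: 1.0, 1: 2.0, 2: 4.0}, gamma=0.5, rng=rng)
```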
#### A.2.2 Regret Analysis
To bound the overall cumulative regret of Algorithm 2, we decompose the rounds
$[T]$ into the following three disjoint sets of rounds
$[T]=\mathcal{R}_{i^{\star}}\dot{\cup}\,\,\mathcal{U}_{i^{\star}}\dot{\cup}\,\,\mathcal{O}_{i^{\star}},$
(16)
where
* •
$\mathcal{R}_{i^{\star}}=\\{t\in[T]\colon I_{t,i^{\star}}=1\\}$ are the rounds
where $i^{\star}$ requests a label,
* •
$\mathcal{U}_{i^{\star}}=\\{t\in[T]\colon I_{t,i^{\star}}=0,I_{t,i_{t}}=0\\}$
are the rounds where $i^{\star}$ does not request the label and the label was
not observed,
* •
$\mathcal{O}_{i^{\star}}=\\{t\in[T]\colon I_{t,i^{\star}}=0,I_{t,i_{t}}=1\\}$ are the
rounds where $i^{\star}$ does not request the label and the label was
observed.
In the following three lemmas, we bound the regret in these sets of rounds
separately.
###### Lemma 13 (Regret in rounds where $i^{\star}$ requests).
In event $\bigcap_{i=\langle i_{1},i_{2}\rangle\in\mathcal{M}_{1}\colon
i_{1}\geq i^{\star}_{1}}\mathcal{E}_{i}$, the regret in rounds where
$i^{\star}=\langle i^{\star}_{1},i^{\star}_{2}\rangle$ would request the label
is bounded with probability at least $1-\delta$ for all $\epsilon\in(0,1/2)$
as
$\displaystyle
R(\mathcal{R}_{i^{\star}})=O\left(\frac{M}{\epsilon}2^{\gamma+1}d(S_{i^{\star}_{1}},\delta)^{\gamma+2}+\frac{M}{\epsilon}2^{\gamma+1}d(S_{i^{\star}_{1}},\delta)^{\gamma+1}L(T,\delta)+\epsilon
T_{\epsilon}\right)~{}.$ (17)
###### Proof.
In any round, the largest instantaneous regret possible is
$2|h(x_{t,1})-1/2|=2|h(x_{t,-1})-1/2|=2|\Delta_{t,i^{\star}}|$, no matter
whether the prediction of $i^{\star}$ was followed or not. Thus, the regret in
rounds $\mathcal{R}_{i^{\star}}$ can be bounded as
$\displaystyle R(\mathcal{R}_{i^{\star}})\leq
2\sum_{t\in\mathcal{R}_{i^{\star}}}|\Delta_{t,i^{\star}}|=2\sum_{t\in\mathcal{R}_{i^{\star}}}\mathbb{1}\left\\{|\Delta_{t,i^{\star}}|>\epsilon\right\\}|\Delta_{t,i^{\star}}|+2\epsilon|\mathcal{R}_{i^{\star}}^{\epsilon}|,
for any $\epsilon\in(0,1/2)$ where
$\mathcal{R}_{i^{\star}}^{\epsilon}=\\{t\in\mathcal{R}_{i^{\star}}\colon|\Delta_{t}|\leq\epsilon\\}$.
On rounds $\mathcal{R}_{i^{\star}}$, learner $i^{\star}$ wants to query the
label which means $\widehat{\Delta}_{t,i^{\star}}\leq B_{t,i^{\star}}$.
Moreover in $\mathcal{E}_{i^{\star}}$, the conditions
$0\leq\widehat{\Delta}_{t,i^{\star}}-\Delta_{t,i^{\star}}\leq B_{t,i^{\star}}$ and $0\leq\widehat{\Delta}_{t,i^{\star}}$ hold. Combining
both inequalities gives $|\Delta_{t,i^{\star}}|\leq B_{t,i^{\star}}$ and we
can further bound the display above as
$\displaystyle R(\mathcal{R}_{i^{\star}})\leq$
$\displaystyle\sum_{t\in\mathcal{R}_{i^{\star}}}\mathbb{1}\left\\{|\Delta_{t,i^{\star}}|>\epsilon\right\\}(1\wedge 2B_{t,i^{\star}})+2\epsilon|\mathcal{R}_{i^{\star}}^{\epsilon}|$
$\displaystyle\leq$ $\displaystyle\sum_{t\in\mathcal{R}_{i^{\star}}}\mathbb{1}\left\\{|\Delta_{t,i^{\star}}|>\epsilon\right\\}\left(1\wedge\frac{2B_{t,i^{\star}}^{2}}{\epsilon}\right)+2\epsilon|\mathcal{R}_{i^{\star}}^{\epsilon}|$
$\displaystyle\leq$
$\displaystyle\frac{2}{\epsilon}\sum_{t\in\mathcal{R}_{i^{\star}}}\left(\frac{\epsilon}{2}\wedge
B_{t,i^{\star}}^{2}\right)+2\epsilon|\mathcal{R}_{i^{\star}}^{\epsilon}|~{}.$
To bound the remaining sum, we appeal to the randomized potential lemma in
Lemma 25. We denote by $\underline{p}^{\star}=\min_{k\in[T]}p_{k,i^{\star}}$ the smallest probability assigned to $i^{\star}$ in any round. Then Lemma 25 gives, with
probability at least $1-\delta$
$\displaystyle\sum_{t\in\mathcal{R}_{i^{\star}}}\left(\frac{\epsilon}{2}\wedge
B_{t,i^{\star}}^{2}\right)$
$\displaystyle\leq\sum_{t\in\mathcal{R}_{i^{\star}}}\left(\frac{1}{4}\wedge
B_{t,i^{\star}}^{2}\right)\leq
4\gamma_{T,i^{\star}}^{2}\sum_{t\in\mathcal{R}_{i^{\star}}}\left(\frac{1}{16\gamma_{T,i^{\star}}^{2}}\wedge\|\phi(x_{t,a_{t,i^{\star}}})\|_{Z_{t-1,i^{\star}}^{-1}}^{2}\right)$
$\displaystyle\leq
4\gamma_{T,i^{\star}}^{2}\biggl{(}1+\frac{3}{16\underline{p}^{\star}\gamma_{T,i^{\star}}^{2}}L(T,\delta)\biggr{)}+\frac{8\gamma_{T,i^{\star}}^{2}}{\underline{p}^{\star}}(1+\frac{1}{16\gamma_{T,i^{\star}}^{2}})\log\det
Z_{T,i^{\star}}$
$\displaystyle\leq\frac{12\gamma_{T,i^{\star}}^{2}+\frac{1}{2}}{\underline{p}^{\star}}\log\det
Z_{T,i^{\star}}+\frac{3}{4\underline{p}^{\star}}L(T,\delta)~{},$
because $\gamma_{t,i^{\star}}$ is non-decreasing in $t$. Plugging this back
into the previous display yields
$\displaystyle R(\mathcal{R}_{i^{\star}})$ $\displaystyle\leq
24\frac{\gamma_{T,i^{\star}}^{2}+\frac{1}{24}}{\epsilon\underline{p}^{\star}}\log\det
Z_{T,i^{\star}}+\frac{3}{2\epsilon\underline{p}^{\star}}L(T,\delta)+2\epsilon|\mathcal{R}_{i^{\star}}^{\epsilon}|$
$\displaystyle\leq
48\frac{d(S_{i^{\star}_{1}},\delta)}{\epsilon\underline{p}^{\star}}+\frac{3}{2\epsilon\underline{p}^{\star}}L(T,\delta)+2\epsilon
T_{\epsilon}~{}.$
Now, Lemma 11 ensures that $i^{\star}$ never gets eliminated in the considered
event. Therefore
$\displaystyle\frac{1}{\underline{p}^{\star}}\leq\frac{\sum_{i\in\mathcal{M}_{1}}d_{i_{2}}^{-(\gamma+1)}}{d_{i^{\star}_{2}}^{-(\gamma+1)}}=d_{i^{\star}_{2}}^{\gamma+1}M\leq
M(2d(S_{i^{\star}_{1}},\delta))^{\gamma+1}~{},$
where the last inequality follows from Eq. (14). Plugging this bound back into
the previous display yields
$\displaystyle
R(\mathcal{R}_{i^{\star}})\leq\frac{48M}{\epsilon}2^{\gamma+1}d(S_{i^{\star}_{1}},\delta)^{\gamma+2}+\frac{3M}{2\epsilon}2^{\gamma+1}d(S_{i^{\star}_{1}},\delta)^{\gamma+1}L(T,\delta)+2\epsilon
T_{\epsilon}~{},$
as claimed. ∎
###### Lemma 14 (Regret in unobserved rounds where $i^{\star}$ does not
request).
In event $\mathcal{E}_{i^{\star}}$,
$\displaystyle R(\mathcal{U}_{i^{\star}})\leq M~{}.$ (18)
###### Proof.
If $i^{\star}$ does not request the label, then $i^{\star}$ predicts the label $a^{*}_{t}$. By the disagreement test, $i_{t}$ predicts the same label as $i^{\star}$, so no regret is incurred, except possibly in a round where a learner gets eliminated. Since there are at most $M$ learners and the regret per round is at most $1$, the total regret on rounds $\mathcal{U}_{i^{\star}}$ is at most $M$. ∎
###### Lemma 15 (Regret in observed rounds where $i^{\star}$ does not
request).
In event $\bigcap_{i=\langle i_{1},i_{2}\rangle\in\mathcal{M}_{1}\colon
i_{1}\geq i^{\star}_{1}}\mathcal{E}_{i}$, the regret in rounds where
$i^{\star}$ does not request the label, but the label was still observed is
bounded as
$\displaystyle R$ $\displaystyle(\mathcal{O}_{i^{\star}})$
$\displaystyle=O\left(\sum_{i=\langle
i_{1},i_{2}\rangle\in\mathcal{M}_{1}}\inf_{\epsilon\in(0,1/2)}\left(\frac{d_{i_{2}}}{\epsilon}+T\left(\frac{\epsilon\,d(S_{T,n}(h),\delta)}{d_{i_{2}}}\right)^{\gamma+1}+\frac{L(T,\delta)}{\epsilon}\right)+ML(T,\delta/\log
T)\right)~{}.$
###### Proof.
Note that we can decompose the regret in those rounds as
$\displaystyle R(\mathcal{O}_{i^{\star}})=\sum_{i\neq i^{\star}}R(\mathcal{V}_{T,i,i^{\star}})$
since no regret occurs whenever the played action agrees with the action proposed by $i^{\star}$, which did not request a label and, in $\mathcal{E}_{i^{\star}}$, incurs no regret in such rounds. We bound
$R(\mathcal{V}_{T,i,i^{\star}})$ by using the fact that in all but at most one
of those rounds both the observed regret test and the $d_{i}$ test did not
trigger. This gives
$\displaystyle\sum_{k\in\mathcal{V}_{T,i,i^{\star}}}(\mathbb{1}\left\\{a_{k,i}\neq y_{k}\right\\}-\mathbb{1}\left\\{a_{k,i^{\star}}\neq y_{k}\right\\})\leq\sum_{k\in\mathcal{V}_{T,i,i^{\star}}}1\wedge
B_{k,i}+1.45\sqrt{|{\mathcal{V}}_{T,i,i^{\star}}|L(|{\mathcal{V}}_{T,i,i^{\star}}|,\delta)}+1~{}.$
We now apply the concentration argument in Lemma 21 to bound the LHS from
below as
$\displaystyle\sum_{k\in\mathcal{V}_{T,i,i^{\star}}}(\mathbb{1}\left\\{a_{k,i}\neq y_{k}\right\\}-\mathbb{1}\left\\{a_{k,i^{\star}}\neq y_{k}\right\\})$
$\displaystyle\geq\sum_{k\in\mathcal{V}_{T,i,i^{\star}}}(h(x_{k,a_{k,i^{\star}}})-h(x_{k,a_{k,i}}))-0.72\sqrt{|\mathcal{V}_{T,i,i^{\star}}|L(|\mathcal{V}_{T,i,i^{\star}}|,\delta)}$
$\displaystyle=\sum_{k\in\mathcal{V}_{T,i,i^{\star}}}(h(x_{k,a^{\star}_{k}})-h(x_{k,a_{k,i}}))-0.72\sqrt{|\mathcal{V}_{T,i,i^{\star}}|L(|\mathcal{V}_{T,i,i^{\star}}|,\delta)}$
$\displaystyle=R(\mathcal{V}_{T,i,i^{\star}})-0.72\sqrt{|\mathcal{V}_{T,i,i^{\star}}|L(|\mathcal{V}_{T,i,i^{\star}}|,\delta)}~{},$
where $a_{k}^{\star}$ is the optimal prediction in round $k$. Combining the
previous two displays allows us to bound the regret from above for any
$\epsilon\in(0,1/2)$ as
$\displaystyle R(\mathcal{V}_{T,i,i^{\star}})$
$\displaystyle\leq\sum_{k\in\mathcal{V}_{T,i,i^{\star}}}(1\wedge
B_{k,i})+3\sqrt{|{\mathcal{V}}_{T,i,i^{\star}}|L(T,\delta)}+1~{}$
$\displaystyle\leq\sum_{k\in\mathcal{V}_{T,i,i^{\star}}}(1\wedge
I_{k,i}B_{k,i})\,\mathbb{1}\left\\{B_{k,i}\geq\epsilon\right\\}+\frac{5}{2}\epsilon|{\mathcal{V}}_{T,i,i^{\star}}|+\frac{3}{2}\frac{L(T,\delta)}{\epsilon}+1~{}$
$\displaystyle\leq\frac{1}{\epsilon}\sum_{k\in\mathcal{V}_{T,i,i^{\star}}}(\epsilon\wedge
I_{k,i}B_{k,i}^{2})+\frac{5}{2}\epsilon|{\mathcal{V}}_{T,i,i^{\star}}|+\frac{3}{2}\frac{L(T,\delta)}{\epsilon}+1~{}$
$\displaystyle\leq
8\frac{d_{i}}{\epsilon}+\frac{5}{2}\epsilon|{\mathcal{V}}_{T,i,i^{\star}}|+\frac{3}{2}\frac{L(T,\delta)}{\epsilon}+1~{},$
where the last inequality applies the condition of the $d_{i}$ test. Since
${\mathcal{V}}_{T,i,i^{\star}}$ can only contain rounds where $i$ was chosen
and requested a label, we can apply the label complexity bound from Eq. (15)
(with $\sum_{k\in[T]}p_{k,i}$ therein upper bounded as explained just
afterwards), which gives
$\displaystyle|{\mathcal{V}}_{T,i,i^{\star}}|=O\left(\inf_{\epsilon\in(0,1/2)}\biggl{(}\epsilon^{\gamma}T\left(\frac{d(S_{T,n}(h),\delta)}{d_{i_{2}}}\right)^{\gamma+1}+\frac{d_{i_{2}}}{\epsilon^{2}}\biggr{)}+L(T,\delta/\log
T)\right)~{},$ (19)
and plugging this back into the previous bound yields, for any $i=\langle
i_{1},i_{2}\rangle$,
$\displaystyle R(\mathcal{V}_{T,i,i^{\star}})$
$\displaystyle=O\left(\frac{d_{i_{2}}}{\epsilon}+T\left(\frac{\epsilon\,d(S_{T,n}(h),\delta)}{d_{i_{2}}}\right)^{\gamma+1}+\frac{L(T,\delta)}{\epsilon}+L(T,\delta/\log
T)\right)~{}.$
Summing over $i\neq i^{\star}$ gives the claimed result. ∎
#### A.2.3 Putting it all together
Putting together the above results gives rise to the following guarantee on
the regret and the label complexity of Algorithm 2, presented in the main body
of the paper as Theorem 2.
###### Theorem 4.
Let Algorithm 2 be run with parameters $\delta$, $\gamma\leq\alpha$ with a
pool of base learners $\mathcal{M}_{1}$ of size $M$ on an i.i.d. sample
$(x_{1},y_{1}),\ldots,(x_{T},y_{T})\sim\mathcal{D}$, where the marginal
distribution $\mathcal{D}_{\mathcal{X}}$ fulfills the low-noise condition with
exponent $\alpha\geq 0$ w.r.t. a function $h$ that satisfies (1) and
complexity $S_{T,n}(h)$. Let also $\mathcal{M}_{1}$ contain at least one base
learner $i$ such that $\sqrt{2}S_{T,n}(h)\leq S_{i}\leq 2\sqrt{2}S_{T,n}(h)$
and $d_{i}=\Theta(L_{H}(L_{H}+\log(M\log T/\delta)+S^{2}_{T,n}(h)))$, where
$L_{H}=\log\det(I+H)$ and $H$ is the NTK matrix of depth $n$ over the set of
points $\\{x_{t,a}\\}_{t=1,\ldots,T,\,a=\pm 1}$. Also assume $m\geq
CT^{4}\log(2Tn/\delta)n^{6}\left(T^{2}\vee 1/\lambda_{0}^{4}\right)$ where $C$
is the constant in Lemma 1 and Lemma 2. Then with probability at least
$1-\delta$ the cumulative regret $R_{T}$ and the total number of queries
$N_{T}$ are simultaneously upper bounded as follows:
$\displaystyle R_{T}$
$\displaystyle=O\left(M\,\Bigl{(}L_{H}\bigl{(}L_{H}+\log(M\log T/\delta)+S^{2}_{T,n}(h)\bigr{)}\Bigr{)}^{\gamma+1}T^{\frac{1}{\gamma+2}}+M\,L(T,\delta)\right)$
$\displaystyle N_{T}$
$\displaystyle=O\left(M\,\Bigl{(}L_{H}\bigl{(}L_{H}+\log(M\log T/\delta)+S^{2}_{T,n}(h)\bigr{)}\Bigr{)}^{\frac{\gamma}{\gamma+2}}T^{\frac{2}{\gamma+2}}+M\,L(T,\delta)\right)~{},$
where $L(T,\delta)$ is the logarithmic term defined at the beginning of
Algorithm 2’s pseudocode.
###### Proof.
Using the decomposition in Eq. (16) combined with Lemmas 13, 14, and 15 we see
that the regret of Algorithm 2 can be bounded as
$\displaystyle R(T)$
$\displaystyle\leq~{}R(\mathcal{R}_{i^{\star}})+R(\mathcal{U}_{i^{\star}})+R(\mathcal{O}_{i^{\star}})$
$\displaystyle=O\Biggl{(}\frac{M}{\epsilon}2^{\gamma+1}d(S_{i^{\star}_{1}},\delta)^{\gamma+2}+\frac{M}{\epsilon}2^{\gamma+1}d(S_{i^{\star}_{1}},\delta)^{\gamma+1}L(T,\delta)+\epsilon
T_{\epsilon}$ $\displaystyle~{}~{}~{}~{}+\sum_{i=\langle
i_{1},i_{2}\rangle\in\mathcal{M}_{1}}\inf_{\epsilon\in(0,1/2)}\left(\frac{d_{i_{2}}}{\epsilon}+T\left(\frac{\epsilon\,d(S_{T,n}(h),\delta)}{d_{i_{2}}}\right)^{\gamma+1}+\frac{L(T,\delta)}{\epsilon}\right)+ML(T,\delta/\log
T)\Biggl{)}~{}.$
We first bound term $T_{\epsilon}$ through Lemma 23 (Appendix A.4). This
gives, with probability at least $1-\delta$,
$T_{\epsilon}=O\left(T\epsilon^{\gamma}+\log\frac{\log T}{\delta}\right)~{},$
simultaneously over $\epsilon$. Plugging back into the above, collecting terms
and resorting to a big-oh notation that disregards multiplicative constants
independent of $T$, $M$, $1/\delta$ yields
$\displaystyle R(T)$
$\displaystyle=O\Biggl{(}\frac{M}{\epsilon}\Bigl{(}d(S_{T,n}(h),\delta)^{\gamma+2}+d(S_{T,n}(h),\delta)^{\gamma+1}L(T,\delta)\Bigl{)}+\epsilon^{\gamma+1}T+ML(T,\delta/\log
T)$ (20) $\displaystyle\qquad\qquad+\sum_{i=\langle
i_{1},i_{2}\rangle\in\mathcal{M}_{1}}\inf_{\epsilon\in(0,1/2)}\left(\frac{d_{i_{2}}}{\epsilon}+T\left(\frac{\epsilon\,d(S_{T,n}(h),\delta)}{d_{i_{2}}}\right)^{\gamma+1}+\frac{L(T,\delta)}{\epsilon}\right)\Biggl{)}~{},$
(21)
holding simultaneously for all $\epsilon\in(0,1/2)$.
Now, the sum of the first two terms in the RHS (that is, Eq. (20)) is
minimized by selecting $\epsilon$ of the form
$\epsilon=\left(M\left(\frac{d(S_{T,n}(h),\delta)^{\gamma+2}+d(S_{T,n}(h),\delta)^{\gamma+1}L(T,\delta)}{T}\right)\right)^{\frac{1}{\gamma+2}}~{}$
which, plugged back into (20) gives
$\displaystyle(20)$
$\displaystyle=O\left(\Bigl{(}M\left(d(S_{T,n}(h),\delta)^{\gamma+2}+d(S_{T,n}(h),\delta)^{\gamma+1}L(T,\delta)\right)\Bigr{)}^{\frac{\gamma+1}{\gamma+2}}\,T^{\frac{1}{\gamma+2}}+ML(T,\delta/\log T)\right)$
$\displaystyle=O\left(Md(S_{T,n}(h),\delta)^{\gamma+1}\,T^{\frac{1}{\gamma+2}}\,L(T,\delta/\log
T)\right)~{}.$
Notice that $\epsilon$ is constrained to lie in $(0,1/2)$. If that is not the
case with the above choice of $\epsilon$, our bound delivers vacuous regret
guarantees.
As for the sum in (21), each term in the sum is individually minimized by an
$\epsilon$ of the form
$\epsilon=\left(\frac{(d_{i_{2}}+L(T,\delta))\cdot
d^{\gamma+1}_{i_{2}}}{T\cdot
d(S_{T,n}(h),\delta)^{\gamma+1}}\right)^{\frac{1}{\gamma+2}}.$
Notice that the above value of $\epsilon$ lies in the range $(0,\frac{1}{2})$
provided $d_{i_{2}}=o(T^{\frac{1}{\gamma+2}})$. Hence we simply assume that
our model selection algorithm is performed over base learners with $d_{i_{2}}$
bounded as above. In fact, if $d(S_{T,n}(h),\delta)$ exceeds this range then
our bounds become vacuous.
Next, substituting the value of $\epsilon$ obtained above we get that Eq. (21)
can be bounded as
$(21)=O\left(Md(S_{T,n}(h),\delta)^{\frac{\gamma+1}{\gamma+2}}T^{\frac{1}{\gamma+2}}\right).$
Combining the bounds on Eq. (20) and Eq. (21) we get the claimed bound on the
regret $R_{T}$.
Next, we bound the label complexity of our model selection procedure. From
Lemma 12 we have that the label complexity can be bounded by
$\displaystyle N_{T}$ $\displaystyle=O\left(\sum_{i=\langle
i_{1},i_{2}\rangle\in\mathcal{M}_{1}}\left(\frac{d_{i_{2}}}{\epsilon^{2}}+\epsilon^{\gamma}T\left(1\wedge\frac{d(S_{T,n}(h),\delta)}{d_{i_{2}}}\right)^{\gamma+1}\right)+ML(T,\delta/\log
T)\right)~{}.$ (22)
Next consider a term in the summation in Eq. (22) with $d_{i_{2}}\geq
d(S_{T,n}(h),\delta)$. The following value of $\epsilon$ minimizes the term:
$\epsilon=\left(\frac{d_{i_{2}}}{T^{\frac{1}{\gamma+2}}}d(S_{T,n}(h),\delta)^{-\frac{\gamma+1}{\gamma+2}}\right).$
Again we notice that this is a valid range of $\epsilon$ provided that
$d_{i_{2}}=o(T^{\frac{1}{\gamma+2}})$. Substituting back into Eq. (22) we
obtain that the label complexity incurred due to such terms (denoted by
$N_{1}(T)$) is bounded as
$\displaystyle N_{1}(T)$
$\displaystyle=O\left(M\frac{T^{\frac{2}{\gamma+2}}d(S_{T,n}(h),\delta)^{\frac{2(\gamma+1)}{\gamma+2}}}{d_{i_{2}}}+ML(T,\delta/\log
T)\right)$
$\displaystyle=O\left(M{T^{\frac{2}{\gamma+2}}d(S_{T,n}(h),\delta)^{\frac{\gamma}{\gamma+2}}}+ML(T,\delta/\log
T)\right).$ (23)
Finally, consider a term in the summation in Eq. (22) with
$d_{i_{2}}<d(S_{T,n}(h),\delta)$. Then the value of $\epsilon$ that minimizes
the term equals
$\epsilon=\left(\frac{d_{i_{2}}}{T}\right)^{\frac{1}{\gamma+2}}.$
Substituting back into Eq. (22), we get that the label complexity incurred by
such terms (denoted by $N_{2}(T)$) is bounded by
$\displaystyle N_{2}(T)$
$\displaystyle=O\left(M{T^{\frac{2}{\gamma+2}}d(S_{T,n}(h),\delta)^{\frac{\gamma}{\gamma+2}}}+ML(T,\delta/\log
T)\right).$ (24)
Noting that $N_{T}=N_{1}(T)+N_{2}(T)$, we get the claimed bound on the label
complexity of the algorithm. ∎
### A.3 Extension to non-Frozen NTK
Following [44], in order to avoid computing $f(x,\theta_{0})$ for each input
$x$, we replace each vector $x_{t,a}\in\mathbb{R}^{2d}$ by
$[x_{t,a},x_{t,a}]/\sqrt{2}\in\mathbb{R}^{4d}$, matrix $W_{l}$ by
$\begin{pmatrix}W_{l}&0\\\ 0&W_{l}\end{pmatrix}\in\mathbb{R}^{4d\times 4d}$,
for $l=1,\ldots,n-1$, and $W_{n}$ by
$\left(W_{n}^{\top},-W_{n}^{\top}\right)^{\top}\in\mathbb{R}^{2d}$. This
ensures that the initial output of the neural network, $f(x,\theta_{0})$, is always $0$ for any $x$.
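A small numpy sanity check of this cancellation (a sketch that assumes, for illustration, a plain ReLU network $f(x;\theta)=W_{n}\sigma(W_{n-1}\cdots\sigma(W_{1}x))$, ignoring the $\sqrt{m}$ output scaling and the Gaussian initialization variances):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 4, 16, 3
relu = lambda z: np.maximum(z, 0.0)

# Original weights: W_1 in R^{m x d}, middle layers in R^{m x m}, output W_n in R^{1 x m}.
Ws = [rng.normal(size=(m, d))] + [rng.normal(size=(m, m)) for _ in range(n - 2)]
Wn = rng.normal(size=(1, m))

def f(x, Ws, Wn):
    h = x
    for W in Ws:
        h = relu(W @ h)
    return float(Wn @ h)

# Doubled construction: input [x, x]/sqrt(2), block-diagonal hidden layers,
# output layer [W_n, -W_n]; the two identical halves cancel, so f(x; theta_0) = 0.
x = rng.normal(size=d)
x2 = np.concatenate([x, x]) / np.sqrt(2)
Ws2 = [np.block([[W, np.zeros_like(W)], [np.zeros_like(W), W]]) for W in Ws]
Wn2 = np.concatenate([Wn, -Wn], axis=1)
print(f(x2, Ws2, Wn2))  # 0.0 up to floating point
```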
#### A.3.1 Non-Frozen NTK Base Learner
The pseudocode for the base learner in the non-frozen case is contained in
Algorithm 3. Unlike Algorithm 1, Algorithm 3 updates $\theta_{t}$ using
gradient descent. The update of $\theta_{t}$ is handled by the pseudocode in
Algorithm 4.
Input: Confidence level $\delta$, complexity parameter $S$, network width $m$
and depth $n$, number of rounds $T$, step size $\eta$, number of gradient
descent steps $J$.
Initialization:
* •
Generate each entry of $W_{k}$ independently from $\mathcal{N}(0,4/m)$, for
$k\in[n-1]$, and each entry of $W_{n}$ independently from
$\mathcal{N}(0,2/m)$;
* •
Define $\phi_{t}(x)=g(x;\theta_{t-1})/\sqrt{m}$, where $\theta_{t-1}=\langle
W_{1},\ldots,W_{n}\rangle\in\mathbb{R}^{p}$ is the weight vector of the neural
network so generated at round $t-1$;
* •
Set $Z_{0}=I\in\mathbb{R}^{p\times p}$.
for $t=1,2,\ldots,T$
Observe instance $x_{t}\in\mathcal{X}$ and build $x_{t,a}\in\mathcal{X}^{2}$,
for $a\in\mathcal{Y}$
Set
$\mathcal{C}_{t-1}=\\{\theta:\|\theta-\theta_{t-1}\|_{Z_{t-1}}\leq\frac{\gamma_{t-1}}{\sqrt{m}}\\}$,
with $\gamma_{t-1}=3(\sqrt{\log{\det Z_{t-1}}+3\log(1/\delta)}+S)$
Set $\displaystyle U_{t,a}=$ $\displaystyle
f(x_{t,a},{\theta}_{t-1})+\gamma_{t-1}\|\phi_{t-1}(x_{t,a})\|_{Z_{t-1}^{-1}}+\mbox{$\frac{1}{\sqrt{T}}$}$
Predict $a_{t}=\arg\max_{a\in\mathcal{Y}}U_{t,a}$
Set $I_{t}=\mathbb{1}\left\\{|U_{t,a_{t}}-1/2|\leq B_{t}\right\\}\in\\{0,1\\}$
with
$B_{t}=2\gamma_{t-1}\|\phi_{t-1}(x_{t,a_{t}})\|_{Z_{t-1}^{-1}}+\frac{2}{\sqrt{T}}$
if $I_{t}=1$
Query $y_{t}\in\mathcal{Y}$, and set loss $\ell_{t}=\ell(a_{t},y_{t})$
Update $\displaystyle Z_{t}$
$\displaystyle=Z_{t-1}+\phi_{t}(x_{t,a_{t}})\phi_{t}(x_{t,a_{t}})^{\top}$
$\displaystyle{\theta}_{t}$
$\displaystyle=\operatorname{TrainNN}\biggl{(}\eta,\,J,\,m,\,\\{x_{s,a_{s}}\,|\,s\in[t],I_{s}=1\\},\,\\{\ell_{s}\,|\,s\in[t],I_{s}=1\\},\,{\theta}_{0}\biggr{)}$
else
$Z_{t}=Z_{t-1}$, $\theta_{t}=\theta_{t-1}$, $\gamma_{t}=\gamma_{t-1}$,
$\mathcal{C}_{t}=\mathcal{C}_{t-1}$.
Algorithm 3 NTK Selective Sampler.
Input: Step size $\eta$, number of gradient descent steps $J$, network width
$m$, contexts $\\{x_{i}\\}_{i=1}^{l}$, loss values $\\{\ell_{i}\\}_{i=1}^{l}$,
initial weight ${\theta}^{(0)}$.
Set
$\mathcal{L}({\theta})=\sum_{i=1}^{l}(f(x_{i},\theta)-1+\ell_{i})^{2}/2+m\|{\theta}-{\theta}^{(0)}\|_{2}^{2}$.
for $j=0,\ldots,J-1$
${\theta}^{(j+1)}={\theta}^{(j)}-\eta\nabla\mathcal{L}({\theta}^{(j)})$
Return ${\theta}^{(J)}$
Algorithm 4 TrainNN($\eta$, $J$, $m$, $\\{x_{i}\\}_{i=1}^{l}$,
$\\{\ell_{i}\\}_{i=1}^{l}$, ${\theta}^{(0)})$
Note that both Algorithm 1 and Algorithm 3 determine the confidence ellipsoid
$\mathcal{C}_{t}$ by updating $\theta_{t}$, $\gamma_{t}$ and $Z_{t}$. To tell
apart the two learners, we use $\bar{\gamma}_{t}$, $\bar{Z}_{t}$ and
$\bar{\theta}_{t}$ to denote the ellipsoid parameters for Algorithm 1. We make
use of a few relevant lemmas from [44] and its references therein stating that
in the over-parametrized regime, i.e., when $\displaystyle
m\geq{\mbox{poly}}(T,n,\lambda_{0}^{-1},S^{-1},\log(1/\delta))$, the gradient
descent update does not leave $\theta_{t}$ and $Z_{t}$ too far from the
corresponding $\bar{\theta}_{t}$ and $\bar{Z}_{t}$. Moreover, the neural
network $f$ is close to its first order approximation. The interested reader
is referred to Lemmas B.2 through B.6 of [44]. Combining these results with
the analysis in Section A.1 we bound the label complexity and regret for
Algorithm 3.
The proofs below are mostly sketched, since they follow by combining the arguments in Section A.1 with some technical lemmas from [44].
We re-define here $\mathcal{E}_{0}$ to be the event where (4) and (5) hold
along with all the bounds in the well-approximation lemmas of [44] (Lemmas B.2
through B.6). From [44], there exists a constant $C$ such that if
$m\geq CT^{19}n^{27}(\log m)^{3}$
then $\mathbb{P}(\mathcal{E}_{0})\geq 1-\delta$. Event $\mathcal{E}$ is
defined as in Eq. (7) with this specific event $\mathcal{E}_{0}$ therein.
We give a new version of Lemma 3 below, which implies that event $\mathcal{E}$
still holds with high probability for Algorithm 3, with a specific learning
rate $\eta$, number of gradient descent steps $J$ and network width $m$.
###### Lemma 16.
There exist positive constants $\bar{C}_{1},\bar{C}_{2}$ such that if
$\displaystyle\eta=\frac{\bar{C}_{1}}{2mnT}~{},\qquad\qquad
J=\frac{4nT}{\bar{C}_{1}}\log\frac{S}{CnT^{3/2}}~{},\qquad\qquad
m\geq\bar{C}_{2}T^{19}n^{27}(\log m)^{3}$
and $\sqrt{2}S_{T,n}(h)\leq S$, then under event $\mathcal{E}_{0}$ for any
$\delta\in(0,1)$ we have with probability at least $1-\delta$
$\displaystyle\|{\theta}^{*}-{\theta}_{t}\|_{Z_{t}}\leq\gamma_{t}/\sqrt{m}$
simultaneously for all $t>0$. In other words, under event $\mathcal{E}_{0}$,
${\theta}^{*}\in\mathcal{C}_{t}$ with high probability for all $t$.
###### Proof sketch.
In Lemma 5.2 of [44], it is shown that
$\displaystyle\sqrt{m}\|{\theta}^{*}-{\theta}_{t}\|_{Z_{t}}$
$\displaystyle\leq\sqrt{1+Cm^{-1/6}\sqrt{\log m}n^{4}t^{7/6}}$
$\displaystyle\times\left(\sqrt{\log{\det
Z_{t}}+Cm^{-1/6}\sqrt{\log m}n^{4}t^{5/3}+2\log(1/\delta)}+S\right)$
$\displaystyle\quad+Cn\left((1-\eta m)^{J/2}t^{3/2}+Cm^{-1/6}\sqrt{\log
m}n^{7/2}t^{19/6}\right)$
for some constant $C$ under event $\mathcal{E}_{0}$ and the assumption that
$\sqrt{2}S_{T,n}(h)\leq S$. Setting $\eta=\frac{\bar{C}_{1}}{2mnT}$ and
$J=\frac{4nT}{\bar{C}_{1}}\log\frac{S}{CnT^{3/2}}$ allows us to bound
$Cn(1-\eta m)^{J/2}T^{3/2}$ by $S$. Lastly, since $m$ satisfies
$\frac{C^{2}\sqrt{\log m}\,n^{9/2}T^{19/6}}{m^{1/6}}\leq 1~{},$
we have
$\displaystyle\sqrt{m}\|{\theta}^{*}-{\theta}_{t}\|_{Z_{t}}$
$\displaystyle\leq\sqrt{2}\left(\sqrt{\log{\det
Z_{t}}+1+2\log(1/\delta)}+S\right)+S+1$ $\displaystyle\leq
3\left(\sqrt{\log{\det Z_{t}}+3\log(1/\delta)}+S\right)~{},$
as claimed. ∎
We next show the properties of $\widehat{\Delta}_{t}$ and $\Delta_{t}$ via a new version of Lemma 6 for the non-frozen case.
###### Lemma 17.
Assume $\displaystyle m\geq\mathrm{poly}(T,n,\lambda_{0}^{-1},S,\log(1/\delta))$ and
$\sqrt{2}S_{T,n}(h)\leq S$. Then under event $\mathcal{E}$ we have
$0\leq\widehat{\Delta}_{t}-\Delta_{t}\leq B_{t}$ and $0\leq\widehat{\Delta}_{t}$, where $B_{t}$ is the querying threshold in
Algorithm 3, i.e.,
$B_{t}=2\gamma_{t-1}\|\phi_{t}(x_{t,a_{t}})\|_{Z_{t-1}^{-1}}+\frac{2}{\sqrt{T}}~{}.$
###### Proof.
Denote
$\tilde{U}_{t,a}=\max_{{\theta}\in\mathcal{C}_{t-1}}\langle
g(x_{t,a};{\theta}_{t-1}),{\theta}-{\theta}_{0}\rangle=\langle
g(x_{t,a};{\theta}_{t-1}),{\theta}_{t-1}-{\theta}_{0}\rangle+\gamma_{t-1}\|\phi_{t}(x_{t,a})\|_{Z_{t-1}^{-1}}~{}.$
We decompose
$\widehat{\Delta}_{t}-\Delta_{t}=(U_{t,a}-\tilde{U}_{t,a})+(\tilde{U}_{t,a}-h(x_{t,a}))=:A_{1}+A_{2}~{}.$
For $A_{1}$, by definition of $U_{t,a}$ in Algorithm 3 we have
$\displaystyle U_{t,a}-\tilde{U}_{t,a}=f(x_{t,a};{\theta}_{t-1})-\langle
g(x_{t,a};{\theta}_{t-1}),{\theta}_{t-1}-{\theta}_{0}\rangle+\frac{1}{\sqrt{T}}~{}.$
Under event $\mathcal{E}$, the bound in Lemma B.4 of [44] holds. That is,
there is a constant $C_{2}$ such that
$\displaystyle|f(x_{t,a};{\theta}_{t-1})-\langle g(x_{t,a};{\theta}_{t-1}),$
$\displaystyle{\theta}_{t-1}-{\theta}_{0}\rangle|$
$\displaystyle=|f(x_{t,a};{\theta}_{t-1})-f(x_{t,a};{\theta}_{0})-\langle
g(x_{t,a};{\theta}_{t-1}),{\theta}_{t-1}-{\theta}_{0}\rangle|$
$\displaystyle\leq C_{2}m^{-1/6}\sqrt{\log m}n^{3}t^{2/3}~{}.$
Setting $m$ so large as to satisfy $C_{2}m^{-1/6}\sqrt{\log
m}n^{3}T^{2/3}\leq\frac{1}{2\sqrt{T}}$ gives us
$\frac{1}{2\sqrt{T}}\leq A_{1}\leq\frac{3}{2\sqrt{T}}~{}.$
To estimate $A_{2}$ we decompose it further as
$\displaystyle A_{2}$ $\displaystyle=\left(\tilde{U}_{t,a}-\langle
g(x_{t,a};{\theta}_{t-1}),{\theta}^{\star}-{\theta}_{0}\rangle\right)+\left(\langle
g(x_{t,a};{\theta}_{t-1}),{\theta}^{\star}-{\theta}_{0}\rangle-\langle
g(x_{t,a};{\theta}_{0}),{\theta}^{\star}-{\theta}_{0}\rangle\right)$
$\displaystyle=:A_{3}+A_{4}~{}.$
Following the argument in Lemma 6 we can show the inequality $0\leq A_{3}\leq
2\gamma_{t-1}\|\phi_{t}(x_{t,a_{t}})\|_{Z_{t-1}^{-1}}$ under event
$\mathcal{E}$. By the Cauchy-Schwarz inequality,
$|A_{4}|\leq\|g(x_{t,a};{\theta}_{t-1})-g(x_{t,a};{\theta}_{0})\|_{2}\|{\theta}^{\star}-{\theta}_{0}\|_{2}$.
Using the assumption that the bounds in Lemmas B.5 and B.6 in [44] hold and
$\sqrt{2}S_{T,n}(h)\leq S$, there exists a constant $C_{1}$ such that
$\displaystyle|A_{4}|\leq\|g(x_{t,a};{\theta}_{t-1})-g(x_{t,a};{\theta}_{0})\|_{2}\|{\theta}^{\star}-{\theta}_{0}\|_{2}\leq
C_{1}Sm^{-1/6}\sqrt{\log m}n^{7/2}t^{1/6}~{}.$
Setting $m$ large enough to satisfy $C_{1}Sm^{-1/6}\sqrt{\log
m}n^{7/2}T^{1/6}\leq\frac{1}{2\sqrt{T}}$ gives us
$-\frac{1}{2\sqrt{T}}\leq A_{2}\leq
2\gamma_{t-1}\|\phi_{t}(x_{t,a_{t}})\|_{Z_{t-1}^{-1}}+\frac{1}{2\sqrt{T}}~{}.$
Combining the bound for $A_{1}$ and $A_{2}$ we obtain
$\displaystyle 0\leq\widehat{\Delta}_{t}-\Delta_{t}\leq B_{t}~{},$
which proves the first part of the claim.
Next, since $U_{t,a}-h(x_{t,a})\geq 0$ for $a\in\mathcal{Y}$, we also have
$U_{t,1}+U_{t,-1}\geq h(x_{t,1})+h(x_{t,-1})=1$
which, by definition of $a_{t}$, gives $U_{t,a_{t}}\geq\frac{1}{2}$, i.e.,
$\widehat{\Delta}_{t}\geq 0$. This concludes the proof. ∎
As a consequence of the above lemma, like in the frozen case, on rounds where
Algorithm 3 does not issue a query, we are confident that prediction $a_{t}$
suffers no regret.
Before bounding the label complexity and regret, we give the following lemma
which is the non-frozen counterpart to Lemma 5 in Section A.1. The proof
follows from very similar arguments, and is therefore omitted.
###### Lemma 18.
Let $\eta$, $J$ and $m$ be as in Lemma 16 and $\sqrt{2}S_{T,n}(h)\leq S$. Then
for any $b>0$ we have
$\sum_{t=1}^{T}b\wedge I_{t}B_{t}^{2}=O\left(\left(\log\det
Z_{T}+\log(1/\delta)+S^{2}+b\right)\log\det Z_{T}\right)~{}.$ (25)
Combining the above lemmas we can bound the label complexity and regret
similar to Section A.1.
###### Lemma 19.
Let $\eta$, $J$ be as in Lemma 16, $\displaystyle m\geq
\mathrm{poly}(T,n,\lambda_{0}^{-1},S,\log(1/\delta))$, and $\sqrt{2}S_{T,n}(h)\leq S$.
Then under event $\mathcal{E}$ for any $\epsilon\in(0,1/2)$ we have
$\displaystyle N_{T}$
$\displaystyle=O\left(T_{\epsilon}+\frac{1}{\epsilon^{2}}(\log\det
Z_{T}+\log(1/\delta)+S^{2})\log\det Z_{T}\right)$
$\displaystyle=O\left(T_{\epsilon}+\frac{1}{\epsilon^{2}}\left(\log\det(I+H)+\log(1/\delta)+S^{2}\right)\log\det(I+H)\right)~{}.$
###### Lemma 20.
Let $\eta$, $J$ be as in Lemma 16, $\displaystyle m\geq
\mathrm{poly}(T,n,\lambda_{0}^{-1},S,\log(1/\delta))$, and $\sqrt{2}S_{T,n}(h)\leq S$.
Then under event $\mathcal{E}$ for any $\epsilon\in(0,1/2)$ we have,
$\displaystyle R_{T}$ $\displaystyle=O\left(\epsilon
T_{\epsilon}+\frac{1}{\epsilon}\left(\log\det{Z}_{T}+\log(1/\delta)+S^{2}\right)\log\det{Z}_{T}\right)$
$\displaystyle=O\left(\epsilon
T_{\epsilon}+\frac{1}{\epsilon}\left(\log\det(I+H)+\log(1/\delta)+S^{2}\right)\,\log\det(I+H)\right)~{}.$
The rest of the analysis follows from the same argument that relies on Lemma
23 (Appendix A.4) allowing one to replace $T_{\epsilon}$ by
$O\left(T\epsilon^{\alpha}+O\left(\log\frac{\log T}{\delta}\right)\right),$
and culminating into a statement very similar to Theorem 1.
#### A.3.2 Model Selection for Non-Frozen NTK Base Learners
The pseudocode for the model selection algorithm applied to the case where the
base learners are of the form of Algorithm 3 instead of Algorithm 1 is very
similar to Algorithm 2, and so is the corresponding analysis. The adaptation
to non-frozen base learners simply requires changing a constant. Specifically, we replace ‘8’ in the $d_{i}$ test of Algorithm 2 with ‘432’; all the rest remains the same, provided the definition of $B_{t,i}$ (the querying threshold of the $i$-th base learner) is now taken from Algorithm 3 ($B_{t}$ therein).
An analysis very similar to Lemma 11 shows that a well-specified learner is
(with high probability) not removed from the pool $\mathcal{M}_{t}$, while the
label complexity and the regret analyses mimic the corresponding analyses
contained in Sections A.2.1 and A.2.2, with inflated constants and network
width $m$.
### A.4 Ancillary technical lemmas
###### Lemma 21.
Let $i,j\in\mathcal{M}_{1}$ be two base learners. With probability at least $1-2\delta$, the following concentration bound holds for all rounds $t$:
$\displaystyle\left|\sum_{k\in\mathcal{V}_{t,i,j}}(\mathbb{1}\left\\{a_{k,i}\neq y_{k}\right\\}-\mathbb{1}\left\\{a_{k,j}\neq y_{k}\right\\}+h(x_{k,a_{k,i}})-h(x_{k,a_{k,j}}))\right|\leq 0.72\sqrt{|\mathcal{V}_{t,i,j}|L(|\mathcal{V}_{t,i,j}|,\delta)}.$
###### Proof.
We write the LHS of the claimed inequality as $\left|\sum_{k=1}^{t}Y_{k}\right|$, where
$\displaystyle Y_{k}=\mathbb{1}\left\\{k\in\mathcal{V}_{t,i,j}\right\\}\left(\mathbb{1}\left\\{a_{k,j}=y_{k}\right\\}-\mathbb{1}\left\\{a_{k,i}=y_{k}\right\\}+h(x_{k,a_{k,i}})-h(x_{k,a_{k,j}})\right),$
and let $\mathbb{E}_{k}$ and $\operatorname{Var}_{k}$ denote expectation and
variance conditioned on everything before $y_{k}$ (including
$x_{k},a_{k,i},a_{k,j}$ and $i_{k}$). Note that $Y_{k}$ is a martingale
difference sequence since $\mathbb{E}_{k}Y_{k}=0$. Further, $H_{k}=\mathbb{1}\left\\{k\in\mathcal{V}_{t,i,j}\right\\}(1+h(x_{k,a_{k,i}})-h(x_{k,a_{k,j}}))$ and $G_{k}=-\mathbb{1}\left\\{k\in\mathcal{V}_{t,i,j}\right\\}(-1+h(x_{k,a_{k,i}})-h(x_{k,a_{k,j}}))$ are predictable sequences with $-G_{k}\leq Y_{k}\leq H_{k}$. Thus, we can
apply Lemma 27 and get that with probability at least $1-\delta$, for all
$t\in\mathbb{N}$
$\displaystyle\sum_{i=1}^{t}Y_{i}$ $\displaystyle\leq 1.44\sqrt{(W_{t}\vee
m)\left(1.4\log\log\left(2\left(\frac{W_{t}}{m}\vee
1\right)\right)+\log\frac{5.2}{\delta}\right)}$ $\displaystyle\leq
0.72\sqrt{|\mathcal{V}_{t,i,j}|\left(1.4\log\log\left(2|\mathcal{V}_{t,i,j}|\right)+\log\frac{5.2}{\delta}\right)}=0.72\sqrt{|\mathcal{V}_{t,i,j}|L(|\mathcal{V}_{t,i,j}|,\delta)}$
where $W_{t}=|\mathcal{V}_{t,i,j}|/4$ and $m=1/4$. We can apply the same
argument to $-Y_{k}$, which yields the claimed statement. ∎
###### Lemma 22.
For any $i\in\mathcal{M}_{1}$ the number of rounds in which $i$ was played is
bounded with probability at least $1-\delta$ for all $t\in[T]$ as
$\displaystyle|\mathcal{T}_{t,i}|\leq\frac{3}{2}\sum_{k=1}^{t}p_{k,i}+1.45L(t,\delta)~{}.$
###### Proof.
By its definition, we can write the size of $\mathcal{T}_{t,i}$ as $|\mathcal{T}_{t,i}|=\sum_{k=1}^{t}\mathbb{1}\left\\{i_{k}=i\right\\}$. We
denote by $\mathcal{F}_{k}$ the $\sigma$-field induced by all observed
quantities in Algorithm 2 before $i_{k}$ is sampled (including the set of
active learners $\mathcal{M}_{k}$). By construction
$(\mathcal{F}_{t})_{t\in\mathbb{N}}$ is a filtration. Note further that, conditioned on $\mathcal{F}_{k}$, $\mathbb{1}\left\\{i_{k}=i\right\\}$ is a Bernoulli random variable with probability $p_{k,i}$. We can therefore apply Lemma 26 with $Y_{k}=\mathbb{1}\left\\{i_{k}=i\right\\}-p_{k,i}$, $m=p_{1,i}$ (which is a fixed quantity) and $W_{t}=\sum_{k=1}^{t}p_{k,i}(1-p_{k,i})\leq\sum_{k=1}^{t}p_{k,i}$. This gives
that with probability at least $1-\delta$
$\displaystyle\sum_{k=1}^{t}\mathbb{1}\left\\{i_{k}=i\right\\}-\sum_{k=1}^{t}p_{k,i}\leq$ $\displaystyle
1.44\sqrt{L(t,\delta)\sum_{k=1}^{t}p_{k,i}}+0.41L(t,\delta)$
$\displaystyle\leq$
$\displaystyle\frac{1}{2}\sum_{k=1}^{t}p_{k,i}+1.45L(t,\delta).$
Note that $W_{t}/p_{1,i}\leq t$ holds because the smallest non-zero
probability $p_{k,i}$ is $p_{1,i}$. Rearranging terms yields the desired
statement. ∎
###### Lemma 23.
Under the low-noise assumption with exponent $\alpha\geq 0$, each of the
following three bounds holds for any $i\in[M]$ with probability at least
$1-\log_{2}(12T)\delta$:
$\displaystyle\forall
t\in[T],\epsilon\in(0,1/2)\colon\quad|\mathcal{T}_{t,i}^{\epsilon}|$
$\displaystyle\leq 3\epsilon^{\alpha}\sum_{k=1}^{t}p_{k,i}+2L(t,\delta),$ (26)
$\displaystyle\forall
t\in[T],\epsilon\in(0,1/2)\colon\quad|\mathcal{T}_{t,i}^{\epsilon}|$
$\displaystyle\leq
3\epsilon^{\alpha}|\mathcal{T}_{t,i}|+2L(|\mathcal{T}_{t,i}|,\delta),$ (27)
$\displaystyle\epsilon\in(0,1/2)\colon\qquad T_{\epsilon}$ $\displaystyle\leq
3\epsilon^{\alpha}T+2L(T,\delta)~{}.$ (28)
###### Proof.
We here show the result for Eq. (26). The arguments for Eq. (27) and Eq. (28)
follow analogously (by considering $\mathbb{1}\left\\{i_{k}=i\right\\}$ and $1$
instead of $p_{k,i}$). To show Eq. (26), we first prove this condition for a
_fixed_ $\epsilon\in(0,1/2]$. We begin by writing $\mathcal{T}_{t,i}^{\epsilon}$ by its definition as
$\displaystyle|\mathcal{T}_{t,i}^{\epsilon}|=\sum_{k=1}^{t}\mathbb{1}\left\\{i_{k}=i\right\\}\,\mathbb{1}\left\\{|\Delta_{k}|\leq\epsilon\right\\}.$
We denote by $\mathcal{F}_{k}$ the $\sigma$-field induced by all quantities
determined up to the end of round $k-1$ in Algorithm 2 (including the set of
active learners $\mathcal{M}_{k}$ but not $i_{k}$ or $x_{k}$). By construction
$(\mathcal{F}_{t})_{t\in\mathbb{N}}$ is a filtration. Conditioned on $\mathcal{F}_{k}$, the r.v. $\mathbb{1}\left\\{i_{k}=i\right\\}\,\mathbb{1}\left\\{|\Delta_{k}|\leq\epsilon\right\\}$ is a Bernoulli random variable with probability $q_{k}\leq p_{k,i}\epsilon^{\alpha}$, because the choice of learner and the event $|\Delta_{k}|\leq\epsilon$ are independent in each round and, by the low-noise condition, the probability of the latter is at most $\epsilon^{\alpha}$. We can therefore apply Lemma 26 with $Y_{k}=\mathbb{1}\left\\{i_{k}=i\right\\}\,\mathbb{1}\left\\{|\Delta_{k}|\leq\epsilon\right\\}-q_{k}$, $m=q_{1}$ and $W_{t}=\sum_{k=1}^{t}q_{k}(1-q_{k})\leq\sum_{k=1}^{t}q_{k}$. This gives that
with probability at least $1-\delta$
$\displaystyle\sum_{k=1}^{t}\mathbb{1}\left\\{i_{k}=i\right\\}\,\mathbb{1}\left\\{|\Delta_{k}|\leq\epsilon\right\\}-\sum_{k=1}^{t}q_{k}\leq$
$\displaystyle 1.44\sqrt{L(t,\delta)\sum_{k=1}^{t}q_{k}}+0.41L(t,\delta)$
$\displaystyle\leq$
$\displaystyle\frac{1}{2}\sum_{k=1}^{t}q_{k}+1.45L(t,\delta),$
where the second inequality follows from AM-GM. Rearranging terms and using
$q_{k}\leq p_{k,i}\epsilon^{\alpha}\leq p_{k,i}$ gives for a fixed $\epsilon$
$\displaystyle|\mathcal{T}_{t,i}^{\epsilon}|\leq\frac{3}{2}\epsilon^{\alpha}\sum_{k=1}^{t}p_{k,i}+1.45L(t,\delta)~{}.$
(29)
We now consider the following set of values for $\epsilon$
$\displaystyle\mathcal{K}=\left\\{\left(\frac{1}{3T}\right)^{1/\alpha}2^{\frac{i-1}{\alpha}}\colon i=1,\dots,\log_{2}\left(\frac{3T}{2^{\alpha-1}}\right)\right\\}\cup\\{1/2\\}$
and apply the argument above for all $\epsilon\in\mathcal{K}$ which gives that
with probability at least $1-\delta|\mathcal{K}|\geq 1-\log_{2}(12T)\delta$,
the bound in Eq. (29) holds for all $\epsilon\in\mathcal{K}$ and
$t\in\mathbb{N}$ simultaneously. In this event, consider any arbitrary
$\epsilon\in(0,1/2)$ and $t\in[T]$. Then
$\displaystyle|\mathcal{T}_{t,i}^{\epsilon}|\leq|\mathcal{T}_{t,i}^{\epsilon^{\prime}}|\leq\frac{3}{2}{\epsilon^{\prime}}^{\alpha}\sum_{k=1}^{t}p_{k,i}+1.45L(t,\delta),$
where $\epsilon^{\prime}=\min\\{x\in\mathcal{K}\colon x\geq\epsilon\\}$. If
$\epsilon^{\prime}$ is the smallest value in $\mathcal{K}$, then $\frac{3}{2}{\epsilon^{\prime}}^{\alpha}\sum_{k=1}^{t}p_{k,i}\leq 1/2\leq\frac{1}{2}L(t,\delta)$. Thus, the RHS is bounded by $2L(t,\delta)$ in this case. If $\epsilon^{\prime}$ is not the smallest value in $\mathcal{K}$, then by construction $2\epsilon^{\alpha}\geq{\epsilon^{\prime}}^{\alpha}$, and the RHS is bounded as
$\frac{3}{2}{\epsilon^{\prime}}^{\alpha}\sum_{k=1}^{t}p_{k,i}+1.45L(t,\delta)\leq 3{\epsilon}^{\alpha}\sum_{k=1}^{t}p_{k,i}+1.45L(t,\delta).$ Combining both
cases gives the desired result for Eq. (26). ∎
###### Lemma 24 (Elliptical potential, Lemma C.2 [34]).
Let $x_{1},\dots,x_{n}\in\mathbb{R}^{d}$ and
$V_{t}=V_{0}+\sum_{i=1}^{t}x_{i}x_{i}^{\top}$ and $b>0$ then
$\displaystyle\sum_{t=1}^{n}b\wedge\|x_{t}\|_{V_{t-1}^{-1}}^{2}\leq\frac{b}{\log(b+1)}\log\frac{\det
V_{n}}{\det V_{0}}\leq(1+b)\log\frac{\det V_{n}}{\det V_{0}}.$
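Lemma 24 is easy to sanity-check numerically; the sketch below uses $V_{0}=I$ (so $\det V_{0}=1$) and randomly drawn vectors, with the dimension, horizon and truncation level $b$ chosen arbitrarily.

```python
# Numerical check of the elliptical potential bound of Lemma 24.
import numpy as np

rng = np.random.default_rng(1)
d, n, b = 4, 200, 1.0
V = np.eye(d)                                     # V_0 = I
lhs = 0.0
for _ in range(n):
    x = rng.normal(size=d)
    lhs += min(b, x @ np.linalg.solve(V, x))      # b ∧ ||x_t||^2_{V_{t-1}^{-1}}
    V += np.outer(x, x)                           # V_t = V_{t-1} + x_t x_t^T
_, logdet = np.linalg.slogdet(V)                  # log(det V_n / det V_0)
assert lhs <= (1.0 + b) * logdet                  # the (1 + b) log-det bound
```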
###### Lemma 25 (Randomized elliptical potential).
Let $x_{1},x_{2},\dots\in\mathbb{R}^{d}$ and $I_{1},I_{2},\dots\in\\{0,1\\}$
and $V_{0}\in\mathbb{R}^{d\times d}$ be random variables so that
$\mathbb{E}[I_{k}|x_{1},I_{1},\dots,x_{k-1},I_{k-1},x_{k},V_{0}]=p_{k}$ for
all $k\in\mathbb{N}$. Further, let
$V_{t}=V_{0}+\sum_{i=1}^{t}I_{i}x_{i}x_{i}^{\top}$. Then
$\displaystyle\sum_{t=1}^{n}b\wedge\|x_{t}\|_{V_{t-1}^{-1}}^{2}$
$\displaystyle\leq 1\vee 2.9\frac{b}{p}\left(1.4\log\log\left(2bn\vee
2\right)+\log\frac{5.2}{\delta}\right)+\frac{2}{p}\left(1+b\right)\log\frac{\det
V_{n}}{\det V_{0}}$
holds with probability at least $1-\delta$ for all $n$ simultaneously where
$p=\min_{k}p_{k}$ is the smallest probability.
###### Proof.
This proof is a slight generalization of Lemma C.4 in [34]. We provide the
full proof here for convenience: We decompose the sum of squares as
$\displaystyle\sum_{t=1}^{n}b\wedge\|x_{t}\|_{V_{t-1}^{-1}}^{2}\leq\frac{1}{p}\sum_{t=1}^{n}(bI_{t}\wedge\|I_{t}x_{t}\|_{V_{t-1}^{-1}}^{2})+\sum_{t=1}^{n}\frac{1}{p_{t}}(p_{t}-I_{t})(b\wedge\|x_{t}\|_{V_{t-1}^{-1}}^{2})$
(30)
The first term can be controlled using the standard elliptical potential lemma
in Lemma 24 as
$\displaystyle\frac{1}{p}\sum_{t=1}^{n}(bI_{t}\wedge\|I_{t}x_{t}\|_{V_{t-1}^{-1}}^{2})\leq\frac{1}{p}\left(1+b\right)\ln\frac{\det
V_{n}}{\det V_{0}}.$
For the second term, we apply an empirical variance uniform concentration
bound. Let
$\mathcal{F}_{i-1}=\sigma(V_{0},x_{1},p_{1},I_{1},\dots,x_{i-1},I_{i-1},x_{i},p_{i})$
be the sigma-field up to before the $i$-th indicator. Let
$Y_{i}=\frac{1}{p_{i}}(p_{i}-I_{i})\left(\|x_{i}\|^{2}_{V_{i-1}^{-1}}\wedge
b\right)$ which is a martingale difference sequence because
$\mathbb{E}[Y_{i}|\mathcal{F}_{i-1}]=0$ and consider the process
$S_{t}=\sum_{i=1}^{t}Y_{i}$ with variance process
$\displaystyle W_{t}$
$\displaystyle=\sum_{i=1}^{t}\mathbb{E}[Y_{i}^{2}|\mathcal{F}_{i-1}]=\sum_{i=1}^{t}\frac{1}{p_{i}^{2}}\left(\|x_{i}\|^{2}_{V_{i-1}^{-1}}\wedge
b\right)^{2}\mathbb{E}[(p_{i}-I_{i})^{2}|\mathcal{F}_{i-1}]$
$\displaystyle=\sum_{i=1}^{t}\frac{1-p_{i}}{p_{i}}\left(\|x_{i}\|^{2}_{V_{i-1}^{-1}}\wedge
b\right)^{2}\leq\sum_{i=1}^{t}\frac{b}{p_{i}}\left(\|x_{i}\|^{2}_{V_{i-1}^{-1}}\wedge
b\right)\leq\sum_{i=1}^{t}\frac{b^{2}}{p_{i}}.$
Note that $Y_{t}\leq b$; therefore $S_{t}$, with variance process $W_{t}$, satisfies the sub-$\psi_{P}$ condition of [22] with constant $c=b$ (see the Bennett
case in Table 3 of [22]). By Lemma 26 below, the bound
$\displaystyle S_{t}\leq$ $\displaystyle~{}1.44\sqrt{(W_{t}\vee
m)\left(1.4\ln\ln\left(2(W_{t}/m\vee 1)\right)+\ln\frac{5.2}{\delta}\right)}$
$\displaystyle+0.41b\left(1.4\ln\ln\left(2(W_{t}/m\vee
1)\right)+\ln\frac{5.2}{\delta}\right)$
holds for all $t\in\mathbb{N}$ with probability at least $1-\delta$. We set
$m=\frac{b}{p}$ and upper-bound the RHS further as
$\displaystyle
1.44\sqrt{\frac{b}{p}\left(1\vee\sum_{i=1}^{t}\left(b\wedge\|x_{i}\|^{2}_{V_{i-1}^{-1}}\right)\right)\left(1.4\ln\ln\left(2bt\vee
2\right)+\ln\frac{5.2}{\delta}\right)}$
$\displaystyle+0.41b\left(1.4\ln\ln\left(2bt\vee
2\right)+\ln\frac{5.2}{\delta}\right)$
$\displaystyle\leq\frac{1}{2}\left(1\vee\sum_{i=1}^{t}\left(b\wedge\|x_{i}\|^{2}_{V_{i-1}^{-1}}\right)\right)+1.45\frac{b}{p}\left(1.4\ln\ln\left(2bt\vee
2\right)+\ln\frac{5.2}{\delta}\right),$
where the inequality is an application of the AM-GM inequality. Thus, we have
shown that with probability at least $1-\delta$, for all $n$, the second term
in Eq. (30) is bounded as
$\displaystyle\frac{1}{p}\sum_{t=1}^{n}(p_{t}-I_{t})(b\wedge\|x_{t}\|_{V_{t-1}^{-1}}^{2})\leq\frac{1}{2}\left(1\vee\sum_{i=1}^{n}\left(\|x_{i}\|^{2}_{V_{i-1}^{-1}}\wedge
b\right)\right)+Z,$
where $Z=1.45\frac{b}{p}\left(1.4\ln\ln\left(2bn\vee 2\right)+\ln\frac{5.2}{\delta}\right)$. Combining all bounds on the sum-of-squares term in Eq. (30), we get that either
$\sum_{i=1}^{n}\left(\|x_{i}\|^{2}_{V_{i-1}^{-1}}\wedge b\right)\leq 1$ or
$\displaystyle\sum_{i=1}^{n}\left(\|x_{i}\|^{2}_{V_{i-1}^{-1}}\wedge b\right)$
$\displaystyle\leq 2Z+\frac{2}{p}\left(1+b\right)\ln\frac{\det V_{n}}{\det
V_{0}}$ $\displaystyle\leq\frac{4}{p}(1+b)\ln\frac{\ln(2bn\vee 2)5.2\det
V_{n}}{\delta\det V_{0}}$
which gives the desired statement. ∎
###### Lemma 26 (Time-uniform Bernstein bound).
In the terminology of [22], let $S_{t}=\sum_{i=1}^{t}Y_{i}$ be a
sub-$\psi_{P}$ process with parameter $c>0$ and variance process $W_{t}$. Then
with probability at least $1-\delta$ for all $t\in\mathbb{N}$
$\displaystyle S_{t}$ $\displaystyle\leq 1.44\sqrt{(W_{t}\vee
m)\left(1.4\log\log\left(2\left(\frac{W_{t}}{m}\vee
1\right)\right)+\log\frac{5.2}{\delta}\right)}$
$\displaystyle\qquad+0.41c\left(1.4\log\log\left(2\left(\frac{W_{t}}{m}\vee
1\right)\right)+\log\frac{5.2}{\delta}\right)$
where $m>0$ is arbitrary but fixed. This holds in particular when
$W_{t}=\sum_{i=1}^{t}\mathbb{E}_{i-1}Y_{i}^{2}$ and $Y_{i}\leq c$ for all
$i\in\mathbb{N}$.
###### Proof.
The proof follows directly from Theorem 1 with the condition in Table 3 and
their stitching boundary in Eq. (10) of [22]. ∎
###### Lemma 27 (Time-uniform Hoeffding bound).
Let $Y_{t}$ be a martingale difference sequence and $G_{t},H_{t}$ two
predictable sequences such that $-G_{t}\leq Y_{t}\leq H_{t}$. Then with
probability at least $1-\delta$ for all $t\in\mathbb{N}$
$\displaystyle\sum_{i=1}^{t}Y_{i}$ $\displaystyle\leq 1.44\sqrt{(W_{t}\vee
m)\left(1.4\log\log\left(2\left(\frac{W_{t}}{m}\vee
1\right)\right)+\log\frac{5.2}{\delta}\right)}$
where $m>0$ is arbitrary but fixed and
$W_{t}=\frac{1}{4}\sum_{i=1}^{t}(G_{i}+H_{i})^{2}$.
###### Proof.
We use the results of [22]. In their terminology, Table 3 in that work shows
that $\sum_{i=1}^{t}Y_{i}$ is a sub-$\psi_{N}$ process with variance process
$W_{t}$. We can thus apply their Theorem 1 with the stitching boundary in
their Eq. (10) with $c=0$. Setting $\eta=2$ and $s=1.4$ gives the desired
result. ∎
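For reference, the stitched boundary shared by Lemmas 26 and 27 can be evaluated directly. The helper below is a sketch with the constants copied from the lemma statements; the function name and the example arguments are ours.

```python
# Evaluate the time-uniform boundary of Lemmas 26 and 27.
import math

def stitched_bound(W_t, m, delta, c=0.0):
    """1.44*sqrt((W ∨ m) * ell) + 0.41*c*ell, where
    ell = 1.4*log log(2*(W/m ∨ 1)) + log(5.2/delta)."""
    ell = 1.4 * math.log(math.log(2.0 * max(W_t / m, 1.0))) + math.log(5.2 / delta)
    return 1.44 * math.sqrt(max(W_t, m) * ell) + 0.41 * c * ell

# e.g. the Hoeffding-type bound (c = 0) of Lemma 27:
print(stitched_bound(W_t=100.0, m=0.25, delta=0.05))
```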
# Sequential community mode estimation
Shubham Anand Jain Shreyas Goenka Divyam Bapna Nikhil Karamchandani
Jayakrishnan Nair
###### Abstract
We consider a population, partitioned into a set of communities, and study the
problem of identifying the largest community within the population via
sequential, random sampling of individuals. There are multiple sampling
domains, referred to as _boxes_ , which also partition the population. Each
box may consist of individuals of different communities, and each community
may in turn be spread across multiple boxes. The learning agent can, at any
time, sample (with replacement) a random individual from any chosen box; when
this is done, the agent learns the community the sampled individual belongs
to, and also whether or not this individual has been sampled before. The goal
of the agent is to minimize the probability of mis-identifying the largest
community in a _fixed budget_ setting, by optimizing both the sampling
strategy as well as the decision rule. We propose and analyse novel algorithms
for this problem, and also establish information theoretic lower bounds on the
probability of error under any algorithm. In several cases of interest, the
exponential decay rates of the probability of error under our algorithms are
shown to be optimal up to constant factors. The proposed algorithms are
further validated via simulations on real-world datasets.
###### keywords:
mode estimation, limited precision sampling, sequential algorithms, fixed budget, multi-armed bandits
###### PACS:
0000 , 1111
###### MSC:
0000 , 1111
journal: Performance Evaluation
Department of Electrical Engineering, IIT Bombay, India
## 1 Introduction
Several applications in online learning involve sequential sampling/polling of
an underlying population. A classical learning task in this space is _online
cardinality estimation_ , where the goal is to estimate the size of a set by
sequential sampling of elements from the set (see, for example, [1, 2, 3]).
The key idea here is to use ‘collisions,’ i.e., instances where the same
element is sampled more than once, to estimate the size of the set. Another
recent application is _community exploration_ , where the goal of the learning
agent is to sample as many distinct elements as possible, given a family of
sampling distributions/domains to poll from (see [4, 5]).
In this paper, we focus on the related problem of _community mode estimation_.
Here, the goal of the learning agent is to estimate the largest community
within a population of individuals, where each individual belongs to a unique
community. The agent has access to a set of sampling domains, referred to as
_boxes_ in this paper, which also partition the population. The agent can, at
any sampling epoch, choose which box to sample from. Having chosen one such
box to sample from, a random individual from this box gets revealed to the
agent, along with the community that individual belongs to. After a fixed
budget of samples is exhausted, the learning agent reveals its estimate of the
largest community (a.k.a., the community mode) in the population. The goal of
the agent is in turn to minimize the probability of mis-identifying the
community mode, by optimizing (i) the policy for sequential sampling of boxes,
and (ii) the decision rule that determines the agent’s response as a function
of all observations.
One application that motivates this formulation is election polling. In this
context, communities might correspond to the party/candidate an individual
votes for, while boxes might correspond, for instance, to different
cities/states that individuals reside in. In this case, community mode
identification corresponds to predicting the winning party/candidate. A
related (and contemporary) application is the detection of the dominant strain
of a virus/pathogen within a population of infected individuals. Here,
communities would correspond to different strains, and boxes would correspond
to different regions/jurisdictions.
Another application of a different flavour is as follows. Consider a setting
where an agent interacts with a database which has several entries, each with
an associated label, and the agent is interested in identifying the most
represented label in the database. For concreteness, consider a user who polls
a movie recommendation engine which hosts a large catalogue of movies, each
belonging to a particular genre, to discover the most prevalent genre in the
catalogue. (Other relevant objectives, such as discovering the most popular genre in terms of ratings or the genre most ‘rewarding’ for the user, can be incorporated with some modifications to the framework studied here.) In each round, the user might provide a genre (community) to the recommendation engine
which then suggests a movie (individual) from that genre (perhaps based on
other user ratings). Depending on the recommendations seen thus far, the user
selects the next genre to poll and so on. Now, either due to privacy
considerations or simply the lack of knowledge of all the available genres, it
might not be feasible for the user to share the exact genre he/she wants to
view in each round and might only provide coarser directions (box). For
example, while there might be specific genres available such as dark comedy,
romantic comedy, slapstick comedy etc., the user might only indicate its
choice as ‘comedy’ and then let the recommendation engine suggest some movie
belonging to any of the sub-genres in the broad genre. At one extreme, the
user might prefer complete privacy and not suggest any genre in each round, in
which case the recommendation engine will have to choose a movie over the
entire database. This resembles the mixed community setting studied in this
paper. The opposite end of the spectrum is where the user does not care about
privacy and instead specifies a sub-genre in each round from which the
recommendation engine can then suggest a movie. This corresponds to the
separated community setting. We refer to the intermediate scenario where the
user provides coarse directives as the community-disjoint box setting.
The formulation we consider here has some parallels with the classical multi-
armed bandit (MAB) problem [6]; specifically, the fixed budget best arm
identification formulation [7]. Indeed, one may interpret communities in our
formulation as arms in an MAB problem. However, there are two crucial
differences between the two formulations. The first difference lies in the
stochastic behavior of the reward/observation sequence. In the classical MAB
problem, each pull of an arm yields an i.i.d. reward drawn from an arm
specific reward distribution. However, in the community mode detection
problem, the sequence of collisions (or equivalently, the evolution of the
number of distinct individuals seen) does not admit an i.i.d. description.
(Indeed, whether or not a certain sample from a box results in a collision
depends in a non-stationary manner on the history of observations from that
box.) The second difference between the two formulations lies in the extent of
sampling control on part of the agent. In the MAB setting, the agent can pull
any arm it chooses at any sampling epoch. However, in our formulation, the
agent cannot sample directly from a community of its choice; it must instead
choose a box to sample from, limiting its ability to target specific
communities to explore.
In terms of the extent of sampling control that the agent has, the opposite
end of the spectrum to the MAB setting is when samples are simply generated by
an underlying distribution and the agent can only use these observations to
estimate some property of the underlying distribution. This classical problem
of property estimation from samples generated from an underlying distribution
has a long and rich history. There has been a lot of work recently on
characterizing the optimal sample complexity for estimating various properties
of probability distributions including entropy [8, 9], support size and
coverage [10, 11], and ‘Lipschitz’ properties [12] amongst others. Closer to
the problem studied in this paper, the problem of mode estimation was
originally studied in [13, 14] with the focus on statistical properties of
various estimators such as consistency. More recently, the instance-optimal
sample complexity of mode estimation for any discrete distribution was derived
in [15]. Our formulation differs from this line of work in the non-i.i.d.
nature of the observations as well as the partial ability that the agent has
to control the sampling process, by being able to query any box at a given
instant.
Our contributions are summarized as follows.
* 1.
We begin by considering a special case of our model where the entire
population is contained within a single box; we refer to this as the _mixed
community setting_ (see Section 3). In this setting, the sampling process is
not controlled, and the learning task involves only the decision rule. We show
that a simple decision rule, based on counting the number of distinct
individuals encountered from each community, is optimal, via comparison of an
upper bound on the probability of error (mis-identification of the community
mode) under the proposed algorithm with an information theoretic lower bound.
For this setting, we also highlight the impact of being able to identify
sampled individuals (i.e., determine whether or not the sampled individual has
been seen before) on the achievable performance in community mode estimation.
* 2.
Next, we consider the case where each community lies in its own box; the so-
called _separated community setting_ (see Section 4). Here, we show that the
commonly used approach of detecting pairwise collisions (see [4]) is sub-
optimal. Next, a near-optimal algorithm is proposed that borrows the sampling
strategy of the classical _successive rejects_ policies for MABs [7], but
differentiates communities based on the number of distinct individuals
encountered (which is different from the classical MAB setting where arms are
differentiated based on their empirical average rewards).
* 3.
Next, we consider a setting that encompasses both the mixed community as well
as the separated community settings; we refer to it as the _community-disjoint
box setting_ (see Section 5). Here, each community is contained within a
single box (though a box might contain multiple communities). For this case,
we propose novel algorithms that combine elements from the mixed and separated
community settings. Finally, we show how the algorithms designed for the
community-disjoint box setting can be extended to the fully general case,
where communities are arbitrarily spread across boxes.
* 4.
Finally, we validate the algorithms proposed on both synthetic as well as
real-world datasets (see Section 6).
We conclude this section by making a comparison between our contributions and
the literature on the fixed budget MAB problem. Near optimal algorithms for
the fixed budget MAB problem (see, for example, [7, 16]) follow a sampling
strategy of _successive rejection_ of arms, wherein the sampling budget is
split across multiple phases, and at the end of each phase, a certain number
of (worst performing) arms are eliminated from further consideration. Some of
our algorithms for the community mode estimation problem follow a similar
sampling strategy and eliminate boxes in phases; specifically, we often use
the same sampling schedule as in the classical successive rejects algorithm
proposed in [7]. However, the elimination criterion we use is different: it is
based on the number of distinct individuals seen (so far) from each community.
Given that this statistic evolves in a non-stationary Markovian fashion over
time, this distinction makes our analysis more complex.
Our information theoretic lower bounds are inspired by the framework developed
in [17] for the fixed budget MAB problem. However, as before, the key
distinction in our proofs stems from the difference in stochastic nature of
the observation process: while reward observations for each arm in the
classical MAB setup are i.i.d., the number of distinct individuals seen from
each community evolves as an absorbing Markov chain in the community mode
estimation problem.
## 2 Problem Formulation
Consider a population consisting of $N$ individuals. Each individual belongs
to exactly one out of $m$ communities, labelled $1,2,\cdots,m.$ Additionally,
the population is partitioned across $b$ _sampling domains_ , also referred to
as ‘boxes’ in this paper. The boxes are labelled $1,2,\cdots,b.$ Our learning
goal is to identify, via random sequential sampling of the boxes, the largest
community (a.k.a., the community mode).
We represent the partitioning of the population across communities and boxes
via a $b\times m$ matrix $D.$ The entry in the $i$th row and $j$th column of
this matrix, denoted by $d_{ij},$ equals the number of individuals in box $i$
who are in community $j$. Throughout the paper, we refer to $D$ as the
_instance_ associated with the learning task. Let $d_{j}:=\sum_{i}d_{ij}$
denote the size of community $j$, and $N_{i}:=\sum_{j}d_{ij}$ denote the size
of box $i$.
The learning agent a priori knows only the set of boxes and the set of
communities. It can access the population by querying an oracle. The input to
this oracle is a box number, and the response from the oracle is a (uniformly
chosen) random individual from this box and the community that individual
belongs to. Individuals are sampled with replacement, i.e., the same
individual can be sampled multiple times. Additionally, we assume that the
learning agent is able to ‘identify’ the sampled individual, such that it
knows whether (and when) the sampled individual had been seen before. (Note that this does not require the agent to store a unique identifier, like, say, the social security number, associated with each sampled individual; the agent can simply assign its own _pseudo-identity_ to an individual the first time the individual is seen.) This sampling model has been applied before in a
variety of contexts, including cardinality estimation (see [1, 2]) and
community exploration (see [4]). For each query, the agent can decide which
box to sample based on the oracle responses received thus far. At the end of a
fixed budget of $t$ oracle queries, the agent outputs its estimate
$\hat{h}^{*}\in[m]$ of the community mode
$h^{*}(D)=\operatorname*{arg\,max}_{j\in[m]}d_{j}$ in the underlying instance
$D.$ (We use the notation $[a:b]$ to denote the set $\\{a,a+1,\ldots,b\\}$ for any $a,b\in\mathbb{Z}$, $b\geq a;$ for $b\in\mathbb{N},$ $[b]:=[1:b].$) The
agent makes an error if $\hat{h}^{*}\notin h^{*}(D)$, and the broad goal of
this paper is to design sequential community mode estimation algorithms that
minimize the probability of error.
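The sampling model is straightforward to simulate; the class below is an illustrative sketch (its name and interface are our own construction, not part of the formal model). A query to box $i$ returns the community of a uniformly drawn individual together with a flag indicating whether that same individual has been sampled before.

```python
# Illustrative simulator for the oracle described above.
import random

class BoxOracle:
    """D is the b x m instance matrix: D[i][j] individuals of community j in box i."""
    def __init__(self, D, seed=0):
        self.rng = random.Random(seed)
        # Individuals in box i, as (community j, local id) pairs.
        self.boxes = [[(j, k) for j, d_ij in enumerate(row) for k in range(d_ij)]
                      for row in D]
        self.seen = set()

    def query(self, i):
        j, k = self.rng.choice(self.boxes[i])     # uniform sample with replacement
        seen_before = (i, j, k) in self.seen      # pseudo-identity check
        self.seen.add((i, j, k))
        return j, seen_before                     # community, repeat flag

# Toy instance: 2 boxes, 3 communities.
oracle = BoxOracle(D=[[5, 3, 0], [0, 2, 4]])
community, seen_before = oracle.query(0)
```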
Formally, for any $k\in[t]$, a sequential algorithm $\mathcal{A}$ has to
specify a box $b_{k}$ to sample for the $k$th query, this choice being a
function of only past observations. The probability of error for an algorithm
$\mathcal{A}$ under an instance $D,$ with a budget of $t$ oracle queries, is
given by
$P_{e}(D,\mathcal{A},t)\overset{\Delta}{=}\mathbb{P}(\hat{h}^{*}\notin
h^{*}(D))$. An algorithm $\mathcal{A}$ is said to be _consistent_ if, for any
instance $D,$ $\lim_{t\rightarrow\infty}P_{e}(D,\mathcal{A},t)=0.$ We often
suppress the dependence on the budget $t$ and also the algorithm $\mathcal{A}$
(when the algorithm under consideration is clear from the context) when
expressing the probability of error, denoting it simply as $P_{e}(D).$
For notational simplicity, we assume throughout that the instance $D$ has a unique largest community, denoted by $h^{*}(D)$; our
results easily generalize to the case where $D$ has more than one largest
community. In the following sections, for various settings of interest, we
prove instance-specific upper bounds on the probability of error of our
proposed algorithms. We are also able to prove information theoretic lower
bounds on the probability of error under _any_ algorithm (within a broad class
of _reasonable_ algorithms). In some cases, we show that the exponential decay
rate of the information theoretic lower bound with respect to the horizon
matches (up to a factor that is logarithmic in the number of boxes) the
corresponding decay rate for our algorithm-specific upper bounds; this implies
the near optimality of our algorithms.
Remark: As is also the case with algorithms for the fixed budget MAB problem,
the probability of error under our proposed algorithms typically decays
exponentially with respect to the budget $t,$ i.e.,
$P_{e}(D)\leq\mu(D)e^{-\lambda(D)t},$ where $\mu(D),$ and $\lambda(D)$ are
instance (and algorithm) dependent positive constants. Our primary goal would
be to characterize and optimize the exponential decay rate $\lambda(D)$ above.
With the focus thus being on the decay rate, the value of the exponential pre-
factor $\mu(D)$ in our bounds will often be loose; this is also the case in
the fixed budget MAB literature.
Remark: It is also important to note that in the classical fixed budget MAB
problem, the decay rates associated with the upper bounds on the probability
of error under the best known algorithms _do not_ match exactly the decay
rates corresponding to the best known information theoretic lower bounds: the
two decay rates differ by a multiplicative factor that is logarithmic in the
number of arms [18]. Given this fundamental gap in the state of the art, it is
common practice to refer to an algorithm as near optimal if the decay rate
associated with its upper bound is a logarithmic (in the number of arms)
factor away from the decay rate in the best known information theoretic lower
bound. Interestingly, we observe a similar multiplicative mismatch between the
decay rates in our upper and lower bound for the community mode estimation
problem (as noted above).
The remainder of this paper is organized as follows. We begin by considering
the _mixed community setting_ in Section 3, where all individuals belong to a
single box ($b=1$); in this special case, the instance matrix $D$ has a single
row. Note that in the mixed community setting, the agent has no control on the
sampling process. Next, in Section 4, we study the opposite end of the
spectrum with respect to sampling selectivity, where each community
constitutes a unique box ($b=m$); this corresponds to $D$ being a diagonal
matrix (up to row permutations). We refer to this special case as the
_separated community setting._ Next, in Section 5, we consider the
intermediate setting, where each community is entirely contained within a
single box. This corresponds to each column of $D$ having exactly one non-zero
entry. The algorithms presented in this section also extend to the most
general case, where each community may be spread across multiple boxes.
Finally, in Section 6, we present simulation results that compare the proposed
algorithms on both synthetic data as well as several real-world datasets. We
conclude this section with a summary of our main results.
### Summary of main results
In Tables 1, 2, and 3, we present a summary of our results, classified by
setting. For ease of presentation, only the decay rates associated with our
(upper and lower) bounds on the probability of error are mentioned here.
Table 1: Summary of the mixed community setting (decay rates)

Sampling model | Lower bound | Algorithm | Upper bound
---|---|---|---
Identityless | $\log\left(\frac{N}{N-\left(\sqrt{d_{1}}-\sqrt{d_{2}}\right)^{2}}\right)$ (Theorem 2) | SFM | $\log\left(\frac{N}{N-\left(\sqrt{d_{1}}-\sqrt{d_{2}}\right)^{2}}\right)$ (Theorem 1)
Identity | $\log\left(\frac{N}{N-\left(d_{1}-d_{2}+1\right)}\right)$ (Theorem 4) | DSM | $\log\left(\frac{N}{N-(d_{1}-d_{2})}\right)$ (Theorem 3)
Table 2: Summary of the separated community setting (decay rates)

Lower Bound | Algorithm | Upper Bound
---|---|---
$\frac{3}{H_{2}\left(D\right)}$ (Theorem 8) | DS-SR | $\frac{1}{\overline{log}(b)H(D)}$ (Theorem 6)
Table 3: Summary of the community-disjoint box setting (decay rates)

Lower Bound | Algorithms | Upper Bound
---|---|---
$\min\left(\frac{\Gamma}{H_{2}^{b}\left(D\right)},\log\left(\frac{N_{1}}{N_{1}-(d_{11}-c_{1}+1)}\right)\right)$ (Theorems 12, 13) | DS-SR, ENDS-SR | $\min\left(\frac{1}{\overline{log}(b)H^{b}(D)},\frac{1}{2\overline{log}(b)}\log\left(\frac{N_{1}}{N_{1}-d_{11}+c_{1}}\right)\right)$ (Theorem 10)
Table 1 summarizes our results for the mixed-community setting, where for
simplicity, we have represented the community sizes as
$d_{1},d_{2},\ldots,d_{m},$ with $d_{1}>d_{2}\geq d_{3}\geq\cdots\geq d_{m}.$
In this case, we consider both an _identityless_ sampling model, wherein the
identity of the sampled individual is not revealed to the learning agent, as
well as the identity-based model described in our problem formulation. As we
point out in Section 3, the decay rate corresponding to the identity-based
sampling model exceeds that under the identityless model, indicating that
identity information helps to improve the performance of mode identification.
Note that the decay rates corresponding to our upper and lower bounds match exactly for the identity-less sampling model, and almost exactly for the identity-based model. Since the mixed-community setting consists of a single
box, the multiplicative discrepancy described above between the decay rates in
the upper and lower bounds does not arise here.
In Table 2, we summarize our main results for the separated community setting.
Since there is a single community per box here, we once again represent the
community/box sizes as $d_{1},d_{2},\ldots,d_{b},$ with $d_{1}>d_{2}\geq
d_{3}\geq\cdots\geq d_{b}.$ The decay rate in our lower bound is expressed in
terms of the instance-dependent complexity metric
$H_{2}(D):=\sum_{i=2}^{b}\frac{1}{\log\left(d_{1}\right)-\log\left(d_{i}\right)}$,
and that in our upper bound is expressed in terms of the related complexity
metric $H(D),$ which is within a
$\overline{log}(b)=\frac{1}{2}+\sum_{i=2}^{b}\frac{1}{i}$ factor of $H_{2}(D)$
(see Lemma 7).
Table 3 summarizes our main results for the community-disjoint box setting.
Here, $d_{11}$ denotes the size of the largest community, which is contained
in Box 1, $c_{1}$ denotes the size of the second largest community in Box 1,
and for $i\geq 2,$ $c_{i}$ denotes the size of the largest community in Box
$i.$ The remaining constants in the decay rate expressions are defined in
Section 5. The decay rates corresponding to the upper and lower bounds are
expressed as a minimum of two terms: the first corresponds to the (sub)task of
identifying the box containing the largest community, while the second
corresponds to the (sub)task of identifying the largest community within that
box. As we elaborate in Section 5, for a certain class of (reasonable)
instances, the two decay rates can be shown to be within constant factors of
one another.
## 3 Mixed Community Setting
We first consider the mixed community setting, where $b=1,$ i.e., the instance
matrix $D$ has a single row. In other words, the population is completely
‘mixed’ and for each query, the agent obtains a uniformly random sample from
the entire population. Thus, the sampling process in this case is
uncontrolled, and the learning task is to simply identify the largest
community based on the $t$ samples obtained.
In the mixed community setting, we also consider an _identity-less_ sampling
model, wherein the agent only learns the community that the sampled individual
belongs to, without any other identifying information. Under this sampling
model, the agent cannot tell whether or not an individual who has been sampled
has been seen before. This model not only forms a benchmark for our subsequent
analysis of identity-based sampling, but is also of independent interest,
given its privacy-preserving property.
Throughout this section, since there is a single box, we drop the first index
in $d_{ij},$ and represent the instance simply as
$D=(d_{1},d_{2},\cdots,d_{m}).$ Also, without loss of generality, we order the
communities as $d_{1}>d_{2}\geq d_{3}\geq\cdots\geq d_{m}$.
### 3.1 Identity-less sampling
We begin by analysing the identity-less sampling model in the mixed community
setting. Note that in this case, the response to each oracle query is
community $i$, with a probability proportional to the size of the $i$th
community. Thus, the agent receives $t$ i.i.d. samples from the discrete
distribution $(p_{1},p_{2},\cdots,p_{m}),$ where $p_{i}=d_{i}/N.$ Hence, the
learning task boils down to the identification of the mode of this
distribution, using a fixed budget of $t$ i.i.d. samples. (The same mode identification problem was recently considered in the _fixed confidence_ setting in [15].)
#### 3.1.1 Algorithm
We consider a natural algorithm in this setting, which we call the Sample
Frequency Maximization (SFM) algorithm: return the empirical mode, i.e., the
community which has produced the largest number of samples, with ties broken
randomly. One would anticipate that this algorithm is optimal, since the
vector $(\hat{\mu}_{j}(t),\ 1\leq j\leq m),$ where $\hat{\mu}_{j}(t)$ denotes
the number of samples from community $j$ over $t$ oracle queries, is a
sufficient statistic for the distribution $D.$ The probability of error under
the SFM algorithm is bounded from above as follows.
###### Theorem 1.
Consider the mixed community setting, under the identity-less sampling model.
For any instance $D,$ the Sample Frequency Maximization algorithm has a
probability of error upper bounded as
$\displaystyle
P_{e}{(D)}\leq(m-1)\left(1-\frac{(\sqrt{d_{1}}-\sqrt{d_{2}})^{2}}{N}\right)^{t}.$
The proof, which follows from a straightforward application of the Chernoff
bound, can be found in A. Note that the probability of error under the SFM
algorithm decays exponentially with the budget $t,$ the decay rate being (at
least) $\log\left(\frac{N}{N-(\sqrt{d_{1}}-\sqrt{d_{2}})^{2}}\right).$ The
optimality of this decay rate is established next, via an information-
theoretic lower bound on the probability of error under any consistent
algorithm.
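Before turning to the lower bound, here is a minimal sketch of the SFM decision rule, reusing the illustrative `BoxOracle` simulator from Section 2 with a single box; identity information, even when available, is simply ignored.

```python
# Sketch of Sample Frequency Maximization (SFM) in the mixed community setting.
from collections import Counter

def sfm(oracle, t):
    """Return the community with the most samples after t queries to box 0."""
    counts = Counter()
    for _ in range(t):
        community, _ = oracle.query(0)     # identity flag ignored (identity-less)
        counts[community] += 1
    # Ties are broken arbitrarily here; the paper breaks them randomly.
    return max(counts, key=counts.get)

# e.g. sfm(BoxOracle(D=[[50, 30, 20]]), t=500) estimates the mode (community 0).
```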
#### 3.1.2 Lower Bound
The following theorem establishes an asymptotic lower bound on the probability
of error under any consistent algorithm which uses identity-less sampling.
Recall that under a consistent algorithm, for any underlying instance $D$ the
probability of error converges to zero as $t\rightarrow\infty.$
###### Theorem 2.
In the mixed community setting, under the identity-less sampling model, any
consistent algorithm on an instance $D$ satisfies
$\displaystyle\liminf_{t\rightarrow\infty}\frac{1}{t}\log(P_{e}(D))\geq-\log\left(\frac{N}{N-(\sqrt{d_{1}}-\sqrt{d_{2}})^{2}}\right).$
The proof of this theorem, which uses ideas from the proof of [17, Theorem
12], can be found in B. Since the exponential decay rate in the above lower
bound matches that in the upper bound corresponding to the SFM algorithm for
any instance $D$, it follows that SFM is asymptotically decay-rate optimal
(under identity-less sampling).
### 3.2 Identity Sampling
Having considered the case of identity-less sampling in the previous section,
we now revert to the identity-based sampling model described in Section 2. We
show that identity information can be used to improve the accuracy of
community mode estimation. We begin by proposing and analysing a simple
algorithm for community mode estimation, and then establish information-
theoretic lower bounds.
#### 3.2.1 Algorithm
Under identity-based sampling, we propose a simple _Distinct Samples
Maximization_ (DSM) algorithm: The DSM algorithm tracks the number of
_distinct_ individuals seen from each community, and returns the community
that has produced the greatest number over the $t$ queries, with ties broken
randomly. As before, this is the natural algorithm to consider under identity-
based sampling, given that the vector $(S_{j}(t),\ 1\leq j\leq m)$, where
$S_{j}(t)$ denotes the number of distinct individuals from community $j$ seen
over $t$ oracle queries, is a sufficient statistic for $D$ (see [2]). The
probability of error under the DSM algorithm is bounded as follows.
###### Theorem 3.
In the mixed community setting, for any instance $D,$ the Distinct Samples
Maximization (DSM) algorithm has a probability of error upper bounded as
$\displaystyle P_{e}(D)\leq
2(m-1)\exp\left(-\frac{t\left(d_{1}-\frac{\sum_{i=2}^{m}d_{i}}{m-1}\right)^{2}}{32Nd_{1}}\right)\quad\text{
for }t\leq
\min\left\\{\frac{d_{1}+d_{m}}{2d_{1}}N,\frac{16Nd_{1}}{(d_{1}-d_{m})^{2}}\right\\},$
(1) $\displaystyle
P_{e}(D)\leq{\binom{d_{1}}{d_{2}}}\left(1-\frac{d_{1}-d_{2}}{N}\right)^{t}{={\binom{d_{1}}{d_{2}}}\exp\left(-t\log\left(\frac{N}{N-d_{1}+d_{2}}\right)\right)}\quad\forall
t.$ (2)
Theorem 3 provides two upper bounds on the probability of error. The bound (2)
holds for all values of budget $t,$ while the bound (1) which is only
applicable for small to moderate budget values, tends to be tighter for small
values of $t.$ Note that (2) implies that the probability of error under the
DSM algorithm decays exponentially with $t,$ with decay rate (at least)
$\log\left(\frac{N}{N-(d_{1}-d_{2})}\right).$ Note that this decay rate
exceeds the optimal decay rate under identity-less sampling from Theorem 2,
since
$d_{1}-d_{2}>(\sqrt{d_{1}}-\sqrt{d_{2}})^{2}\Rightarrow\log\left(\frac{N}{N-(d_{1}-d_{2})}\right)>\log\left(\frac{N}{N-(\sqrt{d_{1}}-\sqrt{d_{2}})^{2}}\right).$
This shows that identity information indeed improves the accuracy of community
mode estimation.
###### Proof.
The proof of (1) relies on an argument using McDiarmid’s inequality, and is
given in C. The proof of (2) is given by a coupon collector style argument.
The error probability is upper bounded by the probability of the event that
there exists a subset of $d_{1}-d_{2}$ individuals in the largest community
$C_{1}$, such that none of them are sampled in the $t$ queries. Thus we have
$\displaystyle P_{e}(D)$
$\displaystyle\leq{\binom{d_{1}}{d_{2}}}\left(1-\frac{d_{1}-d_{2}}{N}\right)^{t}.$
The details can be found in C. ∎
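For comparison with SFM, the following is a minimal sketch of the DSM decision rule under the same illustrative single-box simulator; the only change is that repeated sightings of an individual are discarded, so the statistic is $S_{j}(t)$ rather than $\hat{\mu}_{j}(t)$.

```python
# Sketch of Distinct Samples Maximization (DSM) in the mixed community setting.
from collections import Counter

def dsm(oracle, t):
    """Return the community with the most *distinct* individuals after t queries."""
    distinct = Counter()
    for _ in range(t):
        community, seen_before = oracle.query(0)
        if not seen_before:                # a new individual from this community
            distinct[community] += 1
    # Ties broken arbitrarily here; the paper breaks them randomly.
    return max(distinct, key=distinct.get)
```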
#### 3.2.2 Lower Bounds
Next, we show that the exponential decay rate of the probability of error
under the DSM algorithm is (nearly) optimal via an information-theoretic lower
bound.
###### Theorem 4.
In the mixed community setting, for any consistent algorithm, the probability
of error corresponding to an instance $D$ is bounded below asymptotically as
$\displaystyle\liminf_{t\rightarrow\infty}\frac{\log(P_{e}(D))}{t}\geq-\log\left(\frac{N}{N-(d_{1}-d_{2}+1)}\right).$
Note that Theorem 4 implies that the DSM algorithm is nearly decay-rate
optimal; the small discrepancy between the decay rate under DSM and that in
the lower bound ($(d_{1}-d_{2})$ replaced by $(d_{1}-d_{2}+1)$) stems from the
discreteness of the space of alternative instances in our change of measure
argument. The proof of this Theorem can be found in D.
## 4 Separated Community Setting
In this section, we consider the _separated community_ setting, where each box
contains a single and unique community (so that $b=m$). Compared to the mixed
community setting considered in Section 3, this setting represents the
opposite end of the spectrum with respect to sampling selectivity on part of
the agent—the agent can now choose exactly which community to sample from at
any time. Note that identity-less sampling is not meaningful in the separated
community setting, since the agent can only gauge the size of a community by
observing ‘collisions,’ which occur when the same individual is sampled again.
At a high level, the separated community setting has connections with the
(fixed budget) multi-armed bandit (MAB) problem, with boxes/communities
corresponding to arms. However, the reward structure in the separated
community setting is different from that in a classical MAB problem; indeed,
whether or not a sample taken from any community represents a collision
depends on past samples from that community. Nevertheless, we show that tools
from the MAB literature can still be adapted to design near-optimal algorithms
for estimating the largest community in our setting.
Throughout this section, we denote the size of the community in the $i$th box by $d_{i}$, dropping the redundant second index since there is only one community in each box. Thus, an instance can be defined by the vector $D=(d_{1},d_{2},\cdots,d_{b}).$ WLOG, we order the communities such that $d_{1}>d_{2}\geq d_{3}\geq\cdots\geq d_{b}$.
We begin by considering a simple approach, where at each decision epoch, the
agent queries a pair of samples from any chosen community, and checks whether
or not a collision has occurred, i.e., the same individual has been sampled
both times. Since the event of such a (pairwise, consecutive) collision is
independent of past samples, and its probability is inversely proportional to
the size of the community, this provides a direct mapping to the MAB setting,
allowing off-the-shelf MAB algorithms to be applied.555Note that this approach
only looks for ‘immediate’ collisions and does not track collisions across the
entire observation history. However, we find that this approach, which has
been used before in the literature (for example, see [4] for an application of
this approach to community exploration), is sub-optimal. Next, we propose and
analyse an algorithm that tracks the number of distinct individuals seen from
each community, and performs a successive elimination of communities until one
‘winner’ remains. We show that this approach is near-optimal, by comparing its
performance to an information-theoretic lower bound.
### 4.1 Algorithms
We begin by describing the successive rejects (SR) algorithm for fixed-budget
MABs, proposed in [7] for best arm identification. The SR algorithm is known
to be near-optimal in this setting. Our algorithms for the estimation of the
largest community, which borrow the sampling framework of the SR algorithm,
are described next.
Successive rejects algorithm: Consider an MAB problem with $b$ arms. The class
of successive rejects (SR) algorithms is parameterized by natural numbers
$K_{1},K_{2},\cdots,K_{b-1},$ satisfying $0=:K_{0}\leq K_{1}\leq
K_{2}\leq\cdots\leq K_{b-1},$ and $\sum_{j=1}^{b-2}K_{j}+2K_{b-1}\leq t,$
where $t$ denotes the budget/horizon. The algorithm proceeds in $b-1$ phases,
with one arm being rejected from further consideration at the end of each
phase. Specifically, in Phase $r,$ the $b-r+1$ surviving arms are each pulled
$K_{r}-K_{r-1}$ times. At the end of this round, the worst performing666In the
classical setting where the best arm is defined as the one with the greatest
mean reward, the worst performing arm would be the one with the smallest
empirical mean estimate. surviving arm, based on the $K_{r}$ samples seen so
far, is rejected. The output of the algorithm is the arm that survives
rejection at the end of Phase $b-1.$ The original SR algorithm proposed in [7]
used $K_{r}\propto\frac{t-b}{b-r+1},$ so that
$K_{r}=\left\lceil\frac{1}{\overline{\log}(b)}\frac{t-b}{b-r+1}\right\rceil,$
(3)
where $\overline{\log}(b)=\frac{1}{2}+\sum_{i=2}^{b}\frac{1}{i}$. Other SR
variants, including _uniform exploration_ ($K_{r}=\lfloor t/b\rfloor$ for
$1\leq r\leq b-1$) and _successive halving_ (see [19]) have also been
considered in the literature. In the remainder of this paper, when we refer to
the SR algorithm, we mean the specific algorithm proposed in [7], with phases
defined via (3).
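For concreteness, the following minimal Python sketch (ours; names are illustrative) computes $\overline{\log}(b)$ and the phase schedule in (3):

```python
import math

def sr_schedule(b: int, t: int) -> list:
    """Phase lengths K_1, ..., K_{b-1} from equation (3)."""
    log_bar = 0.5 + sum(1.0 / i for i in range(2, b + 1))  # overline{log}(b)
    return [math.ceil((t - b) / (log_bar * (b - r + 1))) for r in range(1, b)]

# Example: b = 6 arms and budget t = 3000 give an increasing schedule.
print(sr_schedule(6, 3000))
```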
Algorithm 1 Consecutive-collision SR algorithm
1: Set $\mathcal{B}=[b]$ $\triangleright$ Set of surviving boxes
2: Set $K_{0}=0$ and $K_{r}=\lceil\frac{1}{\overline{\log}(b)}\frac{t/2-b}{b-r+1}\rceil$ for $1\leq r\leq b-1$
3: for $r=1,2,\ldots,b-1$ do
4:  For each box in $\mathcal{B}$, perform $(K_{r}-K_{r-1})$ sample pairs
5:  Set $C_{i}^{r}$ as the number of consecutive (within disjoint sample pairs) collisions in box $i\in\mathcal{B}$
6:  $\mathcal{B}=\mathcal{B}\setminus\{\operatorname*{arg\,max}_{i\in\mathcal{B}}C_{i}^{r}\}$ (ties broken randomly)
7: Return $\hat{h}^{*}=$ lone surviving box in $\mathcal{B}$
Consecutive-collision SR algorithm: In this algorithm, we map the largest
community identification problem to an MAB best arm identification problem.
Each community is treated as an arm, and an arm pull consists of two samples
drawn from that community. The reward is binary, being 1 if the arm pull does
not result in a collision, and 0 if it does. Thus, the mean reward associated
with arm (community) $i$ equals $1-\frac{1}{d_{i}},$ so that the best arm (the
one with the highest mean reward) corresponds to the largest community. Note
that since each arm pull corresponds to 2 samples, the budget of the MAB
reformulation equals $t/2.$ On this MAB reformulation, we apply the SR
algorithm of [7] to identify the largest community; this is formalized as
Algorithm 1. Adapting the proof of [7, Theorem 2] to our setting yields the
following upper bound on the probability of error under the Consecutive-
collision SR (CC-SR) algorithm.
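For illustration, the arm-pull primitive underlying this reduction can be sketched as follows (a minimal Python sketch assuming a uniform-sampling oracle; names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def cc_pull(d_i: int) -> int:
    """One CC-SR arm pull: two uniform samples from a community of size d_i.

    Returns reward 1 if the two sampled individuals are distinct (no
    collision), 0 otherwise; the mean reward is 1 - 1/d_i.
    """
    first, second = rng.integers(d_i, size=2)
    return int(first != second)
```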
###### Theorem 5.
In the separated community setting, for any instance $D,$ the Consecutive-
collision SR (CC-SR) algorithm given in Algorithm 1 has a probability of error
that is upper bounded as
$\displaystyle
P_{e}(D)\leq\frac{b(b-1)}{2}\exp\left(-\frac{(t/2-b)}{4\overline{\log}(b)H^{c}(D)}\right),$
where $\Delta_{i}=\frac{1}{d_{i}}-\frac{1}{d_{1}}$, and
$H^{c}(D)=\underset{i\in[2:b]}{\max}\frac{i\Delta_{i}^{-2}}{d_{i}}$.
The proof of Theorem 5, which uses the Chernoff bound to concentrate the
number of consecutive collisions from each community, can be found in Appendix E.
Distinct Samples SR algorithm: We now present an algorithm that ranks
communities by the number of distinct individuals seen. Note that this
involves tracking collisions across the entire observation history of each
community. Specifically, we use the same sampling strategy as the SR
algorithm, and at the end of each phase, eliminate from further consideration
that community which has produced the smallest number of distinct individuals so
far. (Note that in the original SR algorithm for MABs, the cumulative reward
from each arm has i.i.d. increments, whereas in the present setting the
cumulative number of distinct individuals seen from any community does not.)
This algorithm, which we refer to as the Distinct
Samples SR (DS-SR) algorithm, is stated formally as Algorithm 2.
Algorithm 2 Distinct Samples SR algorithm (separated community setting)
1: Set $\mathcal{B}=[b]$ $\triangleright$ Set of surviving boxes
2: Set $K_{0}=0$ and $K_{r}=\lceil\frac{1}{\overline{\log}(b)}\frac{t-b}{b-r+1}\rceil$ for $1\leq r\leq b-1$
3: for $r=1,2,\ldots,b-1$ do
4:  Sample each box in $\mathcal{B}$, $K_{r}-K_{r-1}$ times
5:  Set $S_{i}^{r}$ as the number of distinct individuals seen so far from box $i\in\mathcal{B}$
6:  $\mathcal{B}=\mathcal{B}\setminus\{\operatorname*{arg\,min}_{i\in\mathcal{B}}S_{i}^{r}\}$ (ties broken randomly)
7: Return $\hat{h}^{*}=$ lone surviving box in $\mathcal{B}$
###### Theorem 6.
In the separated community setting, for any instance $D$ the Distinct Samples
SR (DS-SR) algorithm given in Algorithm 2 has a probability of error that is
upper bounded as
$\displaystyle
P_{e}(D)\leq\left(\sum_{r=1}^{b-1}\binom{d_{1}}{d_{b-r+1}}\right)\exp\left(-\frac{(t-b)}{\overline{\log}(b)H(D)}\right),$
where $H(D)=\underset{i\in[2:b]}{\max}\frac{i}{\log(d_{1})-\log(d_{i})}$.
###### Proof.
We begin by noting that $P_{e}(D)=\sum_{r}P^{r}_{e}(D)$, where $P^{r}_{e}(D)$
is the probability that box $1$ is eliminated in phase $r$. Since at least one
of the $r$ smallest communities is guaranteed to survive in phase $r,$ box $1$
will not be eliminated in the $r$th phase if the agent has seen at least
$d_{b-r+1}+1$ distinct samples from box 1. Thus, $P^{r}_{e}(D)$ is upper
bounded by the probability of the event that there exists a subset of
$d_{1}-d_{b-r+1}$ individuals in box $1$, such that none of them are sampled
in the $K_{r}$ queries made until the end of the $r$th phase. Therefore,
$\displaystyle P^{r}_{e}(D)$
$\displaystyle\leq\binom{d_{1}}{d_{b-r+1}}\left(1-\frac{(d_{1}-d_{b-r+1})}{d_{1}}\right)^{K_{r}}$
$\displaystyle\implies P^{r}_{e}(D)$
$\displaystyle\leq\binom{d_{1}}{d_{b-r+1}}\exp\left(-K_{r}\log\left(\frac{d_{1}}{d_{b-r+1}}\right)\right)$
Summing across $r$, we get that
$P_{e}(D)\leq\sum_{r=1}^{b-1}\binom{d_{1}}{d_{b-r+1}}\exp\left(-K_{r}\log\left(\frac{d_{1}}{d_{b-r+1}}\right)\right)$
(4)
Using $K_{r}=\lceil\frac{1}{\overline{\log}(b)}\frac{t-b}{b-r+1}\rceil$ for
$1\leq r\leq b-1$, we note that
$\displaystyle
K_{r}\log\left(\frac{d_{1}}{d_{b-r+1}}\right)\geq\frac{(t-b)\log\left(\frac{d_{1}}{d_{b-r+1}}\right)}{\overline{\log}(b)(b-r+1)}\geq\frac{(t-b)}{\overline{\log}(b)H(D)}.$
Combining with (4), we have
$\displaystyle
P_{e}(D)\leq\left(\sum_{r=1}^{b-1}\binom{d_{1}}{d_{b-r+1}}\right)\exp\left(-\frac{(t-b)}{\overline{\log}(b)H(D)}\right).$
∎
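To make the sampling mechanics of Algorithm 2 concrete, here is a minimal, self-contained Python sketch of DS-SR (assuming a uniform-with-replacement sampling oracle per box; all names are illustrative, not from any reference implementation):

```python
import math
import random

def ds_sr(sizes, t, seed=0):
    """Distinct Samples SR for the separated community setting (Algorithm 2).

    sizes[i] is the (hidden) size of the community in box i; the agent only
    observes sampled individual identities, never the sizes themselves.
    """
    rng = random.Random(seed)
    b = len(sizes)
    log_bar = 0.5 + sum(1.0 / i for i in range(2, b + 1))
    K = [0] + [math.ceil((t - b) / (log_bar * (b - r + 1))) for r in range(1, b)]
    surviving = set(range(b))
    seen = [set() for _ in range(b)]  # distinct individuals per box
    for r in range(1, b):
        for i in surviving:
            for _ in range(K[r] - K[r - 1]):
                seen[i].add(rng.randrange(sizes[i]))  # uniform sample from box i
        # eliminate the surviving box with the fewest distinct individuals,
        # breaking ties randomly via the second tuple component
        loser = min(surviving, key=lambda i: (len(seen[i]), rng.random()))
        surviving.remove(loser)
    return next(iter(surviving))  # estimate of the box holding the mode

# Example instance from Section 6: the output should usually be box 0.
print(ds_sr([1000, 990, 600, 500, 500, 410], t=20000))
```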
Having analysed the CC-SR algorithm and the DS-SR algorithms, it is
instructive to compare the exponential decay rates corresponding to the upper
bounds of the probability of error under these algorithms. From Theorems 5 and
6, this boils down to comparing the instance-dependent parameters $H^{c}(D)$
and $H(D)$ respectively, which encode the ‘hardness’ of the underlying
instance. Note that the values of these parameters are larger for instances
where the size of the largest community is close to the sizes of the competing
communities, and hence it would be harder for an algorithm to correctly
estimate the mode. Consequently, the upper bounds on the probability of error from
Theorems 5 and 6 are also higher for harder instances. Furthermore, note that
$\displaystyle H^{c}(D)$
$\displaystyle=\underset{i\in[2:b]}{\max}\frac{id_{1}^{2}d_{i}}{(d_{1}-d_{i})^{2}}\stackrel{{\scriptstyle(a)}}{{>}}\underset{i\in[2:b]}{\max}\frac{d_{1}d_{i}}{d_{1}-d_{i}}\frac{i}{\log(d_{1})-\log(d_{i})}$
$\displaystyle\geq\frac{d_{1}d_{b}}{d_{1}-d_{b}}\underset{i\in[2:b]}{\max}\frac{i}{\log(d_{1})-\log(d_{i})}=\frac{d_{1}d_{b}}{d_{1}-d_{b}}H(D).$
Here, the bound $(a)$ follows from the fact that $\log(x)>\frac{x-1}{x}$ for
$x>1.$ Since $H^{c}(D)>\frac{d_{1}d_{b}}{d_{1}-d_{b}}H(D),$ this means that
$H^{c}(D)\gg H(D)$ for most instances of interest, which suggests that the
DS-SR algorithm has far superior performance compared to the CC-SR algorithm
(at least for large budget values). Our simulation results in Section 6 are
also consistent with this observation.
Next, we establish the near optimality of the Distinct Samples SR algorithm
via an information theoretic lower bound.
### 4.2 Lower Bounds
While the decay rate in the upper bound of the DS-SR algorithm was expressed
in terms of the hardness parameter $H(D),$ the information theoretic lower
bound for the separated community setting is expressed in terms of a related
hardness parameter
$H_{2}(D):=\sum_{i=2}^{b}\frac{1}{\log(d_{1})-\log(d_{i})}.$ $H(D)$ and
$H_{2}(D)$ are comparable up to a logarithmic (in the number of boxes) factor,
as shown below.
###### Lemma 7.
$\frac{H(D)}{2}\leq H_{2}(D)\leq\overline{\log}(b)H(D).$
The proof of Lemma 7 can be found in Appendix I.
We now state a lower bound on the probability of error in the separated
community setting for any algorithm in a natural algorithm class. The lower
bound is non-asymptotic and is expressed in terms of the maximum of the
probability of error under the original instance and an alternate instance
which has a lower ‘hardness’. This is similar in form to the corresponding
lower bound for the standard multi-armed bandit setting in [17, Theorem 16].
###### Theorem 8.
In the separated community setting, consider any algorithm that only uses the
number of distinct samples from each community (box) to decide which box to
sample from at each instant as well as to make the final estimate of the
community mode. For any instance $D$, there exists an alternate instance
$D^{[a]},a\in[2:b]$, such that $H_{2}(D^{[a]})\leq H_{2}(D)$ and
$\displaystyle
\max\left(P_{e}(D),P_{e}(D^{[a]})\right)\geq\frac{1}{4}\exp\left(-\frac{3t}{H_{2}(D)}\right).$
In the alternate instance $D^{[a]}$, only the size of community $a$ is changed
from $d_{a}$ to $\lceil\frac{d_{1}^{2}}{d_{a}}\rceil$.
The proof of Theorem 8 uses the following lemma.
###### Lemma 9.
For any algorithm $\mathcal{A}$ and instance $D$, there exists a box
(community) $a\in[2:b]$ such that
$E_{D}[N_{a}(t)]\leq\frac{t}{(\log(d_{1})-\log(d_{a}))H_{2}(D)}$, where
$N_{a}(t)$ denotes the number of times box $a$ is sampled in $t$ queries under
$\mathcal{A}$.
###### Proof.
Assume there exists no such community. Then,
$\displaystyle\sum_{a=2}^{b}E_{D}[N_{a}(t)]>\sum_{a=2}^{b}\frac{t}{(\log(d_{1})-\log(d_{a}))H_{2}(D)}=t,$
which is a contradiction. ∎
###### Proof of Theorem 8.
Consider an algorithm $\mathcal{A}$ which bases all decisions only on the
number of distinct individuals seen from each community (box). In this case,
$S_{j},$ the number of distinct samples from box (community) $j$ evolves as a
Markov chain over $[0:d_{j}],$ with transitions occurring each time the box is
pulled. From state $s,$ this chain transitions to (the same) state $s$ with
probability $q_{D}^{j}(s,s)=\frac{s}{d_{j}},$ and to state $s+1$ with
probability $q_{D}^{j}(s,s+1)=\frac{d_{j}-s}{d_{j}}.$
Now, from Lemma 9, there exists a box $a\in[2:b]$ which satisfies
$E[N_{a}(t)]\leq\frac{t}{(\log(d_{1})-\log(d_{a}))H_{2}(D)}$. Consider the
alternate instance
$D^{[a]}=(d_{1}^{\prime},d_{2}^{\prime},\ldots,d_{b}^{\prime})$ mentioned in
the statement of the theorem, wherein $d_{a}^{\prime}=\lceil
d_{1}^{2}/d_{a}\rceil$, $d_{j}^{\prime}=d_{j}\ \forall j\neq a$. Note that the
community mode under the alternate instance $D^{[a]}$ is $a,$ different
from that under the original instance $D$. Furthermore, note that under the
alternate instance $D^{[a]}$ the transition probabilities
$q_{D^{[a]}}^{k}(u,v)$ remain the same for all $k\neq a$. For box $a,$
$\displaystyle\log\left(\frac{q_{D}^{a}(s,s)}{q_{D^{[a]}}^{a}(s,s)}\right)$
$\displaystyle=\log\left(\frac{\lceil
d_{1}^{2}/d_{a}\rceil}{d_{a}}\right)\leq\log\left(\frac{d_{1}^{3}}{d_{a}^{3}}\right),$
(5)
$\displaystyle\log\left(\frac{q_{D}^{a}(s,s+1)}{q_{D^{[a]}}^{a}(s,s+1)}\right)$
$\displaystyle=\log\left(\frac{1-s/d_{a}}{1-s/\lceil
d_{1}^{2}/d_{a}\rceil}\right).$ (6)
Here, (5) holds because
$\displaystyle\lceil d_{1}^{2}/d_{a}\rceil\leq
1+d_{1}^{2}/d_{a}=(d_{a}+d_{1}^{2})/d_{a}\Rightarrow\frac{\lceil
d_{1}^{2}/d_{a}\rceil}{d_{a}}\leq\frac{d_{a}+d_{1}^{2}}{d_{a}^{2}}=\frac{d_{a}^{2}+d_{1}^{2}d_{a}}{d_{a}^{3}}\leq\frac{d_{1}^{3}}{d_{a}^{3}}.$
Next, let $\mathbb{P}_{D},\mathbb{P}_{D^{[a]}}$ denote the probability
measures induced by the algorithm under consideration under the instances $D,$
$D^{[a]},$ respectively. Then, given a trajectory
$x=(a(1),s(1),\cdots,a(t),s(t)),$ where $a(k)$ denotes the box pulled on the
$k$th query (action), and $s(k)=(s_{j}(k),\ j\in[b])$ is the vector of states
corresponding to the boxes after the $k$th query, the log-likelihood ratio is
given by
$\displaystyle\log\frac{\mathbb{P}_{D}(x)}{\mathbb{P}_{D^{[a]}}(x)}=\sum_{k}\sum_{u,v}N_{k}(u,v,0,t)\log\left(\frac{q_{D}^{k}(u,v)}{q_{D^{[a]}}^{k}(u,v)}\right),$
where $N_{k}(u,v,0,t)$ represents the number of times the transition from
state $u$ to state $v$ happens in the Markov chain corresponding to box $k$
over the $t$ queries. Combining with (5), (6), we get
$\displaystyle D(\mathbb{P}_{D}||\mathbb{P}_{D^{[a]}})$
$\displaystyle=E_{D}\left[\log\frac{\mathbb{P}_{D}(x)}{\mathbb{P}_{D^{[a]}}(x)}\right]$
$\displaystyle\leq\sum_{s}E_{D}[N_{a}(s,s,0,t)]\log\left(\frac{d_{1}^{3}}{d_{a}^{3}}\right)+E_{D}[N_{a}(s,s+1,0,t)]\log\left(\frac{1-s/d_{a}}{1-s/\lceil
d_{1}^{2}/d_{a}\rceil}\right)$
where $D(\cdot||\cdot)$ denotes the Kullback-Leibler divergence. Note that
$\displaystyle\lceil
d_{1}^{2}/d_{a}\rceil>d_{a}\implies\frac{1-s/d_{a}}{1-s/\lceil
d_{1}^{2}/d_{a}\rceil}\leq 1\implies\log\left(\frac{1-s/d_{a}}{1-s/\lceil
d_{1}^{2}/d_{a}\rceil}\right)\leq 0.$
Thus, we have
$\displaystyle
D(\mathbb{P}_{D}||\mathbb{P}_{D^{[a]}})\leq\sum_{s}E_{D}[N_{a}(s,s,0,t)]\log\left(\frac{d_{1}^{3}}{d_{a}^{3}}\right)\leq
E_{D}[N_{a}(t)]\log\left(\frac{d_{1}^{3}}{d_{a}^{3}}\right).$
Next, we use Lemma 20 from [17] (alternatively, see Lemma 21 in Appendix I) to get that
$\displaystyle
\max\left(P_{e}(D),P_{e}(D^{[a]})\right)\geq\frac{1}{4}\exp\left(-D(\mathbb{P}_{D}||\mathbb{P}_{D^{[a]}})\right)\geq\frac{1}{4}\exp\left(-E_{D}[N_{a}(t)]\log\left(\frac{d_{1}^{3}}{d_{a}^{3}}\right)\right),$
where $P_{e}(D)$ is the probability of error under instance $D$. Finally, we
use the bound on $E_{D}[N_{a}(t)]$ from Lemma 9 to get
$\displaystyle\max\left(P_{e}\left(D\right),P_{e}(D^{[a]})\right)\geq\frac{1}{4}\exp\left(-\frac{3t}{H_{2}(D)}\right).$
It now remains to show that $H_{2}(D^{[a]})\leq H_{2}(D)$. This is equivalent
to showing
$\displaystyle\sum_{i\in[b],i\neq
a}\frac{1}{\log(\lceil\frac{d_{1}^{2}}{d_{a}}\rceil)-\log(d_{i})}\leq\sum_{i\in[b],i\neq
1}\frac{1}{\log(d_{1})-\log(d_{i})}.$
This condition follows from the following term-by-term comparisons:
$\displaystyle\frac{1}{\log(\lceil\frac{d_{1}^{2}}{d_{a}}\rceil)-\log(d_{i})}$
$\displaystyle\leq\frac{1}{\log(d_{1})-\log(d_{i})}\quad(i\neq 1,a)$
$\displaystyle\frac{1}{\log(\lceil\frac{d_{1}^{2}}{d_{a}}\rceil)-\log(d_{1})}$
$\displaystyle\leq\frac{1}{\log(d_{1})-\log(d_{a})}$
∎
Comparing the upper and lower bounds on the probability of error for the
separated community setting in Theorems 6 and 8, we see that the expressions
for the decay rates differ (ignoring universal constants) in terms of $H(D)$
vs $H_{2}(D)$, which, from Lemma 7, are at most a factor of $\overline{\log}(b)$
apart. In other words, the decay rate under DS-SR is optimal, up to a
logarithmic (in the number of boxes) factor. This is similar to the optimality
guarantees available in the fixed-budget MAB setting (see [7, 17]).
## 5 Community-disjoint Box Setting
In this section, we consider an intermediate setting that generalizes both the
mixed and separated community settings. Specifically, we consider the case
where each community exists in exactly one box; i.e., all the members of a
community $j$ are present in the same box (though any box may contain
multiple communities). In this setting, which we refer to as the _community-
disjoint box setting,_ we propose algorithms that combine elements from the
algorithms presented before for the mixed and separated community settings.
For a class of reasonable instances, we are also able to establish the near
optimality of certain algorithms. Finally, we show that the algorithms
presented in this section can be generalized to handle the most general model,
where communities are arbitrarily spread across boxes.
Under the community-disjoint box setting, each column of the instance matrix
$D$ has exactly one non-zero entry. Without loss of generality, we assume that
$d_{11}$ is the largest value in the matrix $D$; hence, box 1 contains the
largest community (also labeled 1). Also without loss of generality, we order
boxes by the sizes of the largest communities in them; i.e., if $g_{i},1\leq
i\leq b$ is the size of the largest community in box $i$, then
$d_{11}=g_{1}>g_{2}\geq g_{3}\geq\cdots\geq g_{b}$. Additionally, we define $c_{i}$ to
be the size of the largest _competing_ community in a box; that is, $c_{i}=g_{i},i\neq 1$,
and $c_{1}$ is the size of the second largest community in the first box. We state our
results in terms of $d_{11}$ and $(c_{i},\ i\in[b]).$
### 5.1 Algorithms
The first algorithm we consider for this setting is a generalization of the
Distinct Samples SR algorithm from Algorithm 2, where we now eliminate boxes
successively. Specifically, the algorithm proceeds in $b-1$ phases; one box
being eliminated from subsequent consideration in each of the phases. At the
end of the final phase, the algorithm outputs the community that produced the
largest number of distinct samples from the last surviving box. Since we have
multiple communities in each box, our elimination criterion in each phase is
based on the seemingly largest community in each surviving box. In particular,
let $S_{ij}^{r}$ denote the number of distinct individuals encountered from
community $j$ in box $i$ at the end of phase $r.$ We eliminate, at the end of
phase $r,$ the (surviving) box that minimizes $\max_{j}S_{ij}^{r}.$ This
algorithm, which we continue to refer to as the Distinct Samples SR (DS-SR)
algorithm (with some abuse of notation), is presented formally in Algorithm 3.
Algorithm 3 Distinct Samples SR algorithm (community-disjoint box setting)
1: Set $\mathcal{B}=[b]$ $\triangleright$ Set of surviving boxes
2: Set $K_{0}=0$ and $K_{r}=\lceil\frac{1}{\overline{\log}(b)}\frac{t-b}{b-r+1}\rceil$ for $1\leq r\leq b-1$
3: for $r=1,2,\ldots,b-1$ do
4:  Sample each box in $\mathcal{B}$, $K_{r}-K_{r-1}$ times
5:  Set $S_{ij}^{r}$ as the number of distinct individuals seen so far from community $j$ in box $i\in\mathcal{B}$
6:  Set, for $i\in\mathcal{B},$ $f_{i}=\max_{j}S_{ij}^{r}$
7:  $\mathcal{B}=\mathcal{B}\setminus\{\operatorname*{arg\,min}_{i\in\mathcal{B}}f_{i}\}$ (ties broken randomly)
8: Set $\hat{b}$ as the lone surviving box in $\mathcal{B}$
9: Return $\hat{h}^{*}=\operatorname*{arg\,max}_{j}S_{\hat{b}j}^{(b-1)}$ (ties broken randomly)
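As a small illustration (ours, with a hypothetical nested-dictionary representation of the observed distinct-sample sets), the box-level statistic $f_{i}$ of Line 6 can be computed as:

```python
def box_scores(seen):
    """seen[i][j] = set of distinct individuals from community j in box i.

    Returns f_i = max_j S_ij, the elimination statistic of Algorithm 3;
    the box with the smallest score is eliminated at the end of the phase.
    """
    return {i: max(len(s) for s in comms.values()) for i, comms in seen.items()}

# Example: box 0 has communities with 3 and 1 distinct individuals seen.
seen = {0: {"j1": {1, 2, 3}, "j2": {7}}, 1: {"j3": {4, 5}}}
print(box_scores(seen))  # {0: 3, 1: 2}; box 1 would be eliminated first
```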
###### Theorem 10.
In the community-disjoint box setting, for any instance $D$, the Distinct
Samples SR (DS-SR) algorithm given in Algorithm 3 has a probability of error
upper bounded as
$P_{e}(D)\leq\left(\sum_{i=2}^{b}\binom{d_{11}}{c_{i}}\right)\exp\left(-\frac{(t-b)}{\overline{\log}(b)H^{b}(D)}\right)+\binom{d_{11}}{c_{1}}\exp\left(-\frac{(t-b)\log\left(\frac{N_{1}}{N_{1}-d_{11}+c_{1}}\right)}{2\overline{\log}(b)}\right),$
(7)
where
$H^{b}(D)=\underset{i\in[2:b]}{\max}\frac{i}{\log(N_{1})-\log(N_{1}-d_{11}+c_{i})}$.
The upper bound on the probability of error under the DS-SR algorithm above is
a sum of two terms. The first term in (7) bounds the probability of
misidentifying the box containing the largest community, while the second term
in (7) bounds the probability of misidentifying the largest community within
the correct box (box 1). Not surprisingly, the second term is structurally
similar to the bound (2) we obtained in Theorem 3 for the mixed community
setting (restricted to box 1). The proof of Theorem 10 can be found in Appendix F.
The DS-SR algorithm works well in practice, particularly for large budget
values. However, its performance can be sub-par for moderate budget values on
certain types of instances; particularly instances where the largest community
is contained within a very large box. In such cases, it can happen that
$\mathbb{E}\left[S_{11}^{r}\right]<\mathbb{E}\left[S_{ij}^{r}\right]$ for
another community $j$ in a box $i\neq 1,$ making it likely that box 1 gets
eliminated early. We propose modified algorithms to resolve this issue, under
the additional assumption that the box sizes are known a priori to the
learning agent. (This is a natural assumption in several applications; for
example, in the context of election polling, an agent might know a priori the
total number of voters in each city/state.) The first modification replaces
uniform exploration of boxes with a proportional exploration of the surviving
boxes in each phase, resulting in a sampling process (within each phase)
somewhat analogous to the mixed community setting considered in Section 3. A
second class of algorithms retains uniform box exploration, but normalizes
$S_{ij}^{r}$ to reflect the size of each box (algorithms in this class
differing with respect to the specific normalization performed). This latter
class of algorithms can also be extended to the original setting where the box
sizes are unknown, by replacing each box size with its maximum likelihood
estimate.
We begin by describing our first modification of the DS-SR algorithm, which we
refer to as the Distinct Samples Proportional SR (DS-PSR) algorithm. The DS-
PSR algorithm apportions the budget across phases in the same manner as DS-SR,
but the queries within each phase are distributed across surviving boxes in
proportion to their sizes. Formally, this corresponds to the same description
as Algorithm 3, except that in Line 4, each box $i\in\mathcal{B}$ is sampled
$T(\mathcal{B},r,i)$ times, where
$T(\mathcal{B},r,i):=\lfloor\frac{N_{i}}{\sum_{k\in\mathcal{B}}N_{k}}(K_{r}-K_{r-1})(b-r+1)\rfloor.$
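A minimal Python sketch (ours) of this proportional allocation, under the stated assumption that the box sizes $N_{i}$ are known:

```python
import math

def proportional_queries(box_sizes, surviving, K_r, K_prev):
    """Per-box query counts T(B, r, i) for one DS-PSR phase.

    box_sizes[i] = N_i (assumed known a priori); `surviving` is the set of
    surviving boxes B, so len(surviving) = b - r + 1 in phase r. The phase
    budget (K_r - K_prev) * |B| is split in proportion to the box sizes.
    """
    total = sum(box_sizes[i] for i in surviving)
    budget = (K_r - K_prev) * len(surviving)
    return {i: math.floor(box_sizes[i] / total * budget) for i in surviving}
```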
Experimentally, we find that DS-PSR performs very well. However, a tight
characterization of the decay rate corresponding to the probability of error
is challenging, since the number of queries available to each surviving box in
phase $r,$ for $1<r\leq b-1,$ is a random quantity that depends on the
sequence of prior box eliminations.
Next, we describe the normalized variants of the DS-SR algorithm. The first,
which we refer to as the Normalized Distinct Samples SR (NDS-SR) algorithm, is
described by changing the definition of $f_{i}$ in Line 6 of Algorithm 3 to
$f_{i}^{\mathrm{NDS-SR}}=\max_{j}\frac{S_{ij}^{r}}{S_{i}^{r}}N_{i},$
where $S_{i}^{r}$ denotes the number of distinct individuals seen from box $i$
(across different communities) by the end of phase $r.$ This normalization is
justified as follows: $S_{ij}^{r}/S_{i}^{r}$ is an unbiased estimator of
$d_{ij}/N_{i},$ i.e., the fraction of box $i$ that is comprised by community
$j.$
The final variant we propose, referred to as the Expectation-Normalized
Distinct Samples SR (ENDS-SR) algorithm, uses the following alternative
normalization of $f_{i}$ in Line 6 of Algorithm 3:
$f_{i}^{\mathrm{ENDS-
SR}}=\max_{j}\frac{S_{ij}^{r}}{\mathbb{E}\left[S_{i}^{r}\right]}N_{i}.$
This normalization has a similar justification: indeed,
$\frac{S_{ij}^{r}}{\mathbb{E}\left[S_{i}^{r}\right]}$ is another (more
tractable) unbiased estimator of $d_{ij}/N_{i}.$
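The two normalizations can be sketched side by side; this is a minimal Python sketch (ours), using the standard identity $\mathbb{E}[S_{i}^{r}]=N_{i}(1-(1-1/N_{i})^{n})$ for $n$ uniform samples with replacement from a box of size $N_{i}$:

```python
def expected_distinct(N_i: int, n: int) -> float:
    """E[S_i] after n uniform samples (with replacement) from a box of size N_i."""
    return N_i * (1.0 - (1.0 - 1.0 / N_i) ** n)

def f_nds(counts_ij: dict, S_i: int, N_i: int) -> float:
    """NDS-SR statistic for one box: max_j (S_ij / S_i) * N_i."""
    return max(counts_ij.values()) / S_i * N_i

def f_ends(counts_ij: dict, N_i: int, n: int) -> float:
    """ENDS-SR statistic for one box: max_j S_ij / E[S_i] * N_i,
    where n is the number of queries made to this box so far."""
    return max(counts_ij.values()) / expected_distinct(N_i, n) * N_i
```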
Both NDS-SR and ENDS-SR perform quite well in practice. It is challenging to
analytically bound the performance of NDS-SR, due to the difficulty in
concentrating the fractions $S_{ij}^{r}/S_{i}^{r}.$ However, the probability
of error under ENDS-SR admits an upper bound analogous to that under DS-SR
(albeit more cumbersome). Interestingly, the exponential decay rate of the
probability of error under ENDS-SR is identical to that under DS-SR.
###### Theorem 11.
In the community-disjoint box setting, for any instance $D$,
$\limsup_{t\rightarrow\infty}\frac{\log P_{e}(D,\text{\emph{ENDS-SR}},t)}{t}\leq-\frac{1}{\overline{\log}(b)}\min\left(\frac{1}{H^{b}(D)},\frac{1}{2}\log\left(\frac{N_{1}}{N_{1}-d_{11}+c_{1}}\right)\right).$
The proof of Theorem 11 can be found in Appendix G. The intuition behind Theorem 11 is
that for large $t,$ $\mathbb{E}\left[S_{i}^{r}\right]\approx N_{i},$ so that
$f_{i}^{\mathrm{ENDS-SR}}\approx S_{ij}^{r},$ making the elimination criterion
under ENDS-SR nearly identical to that under DS-SR.
### 5.2 Lower Bounds
We now derive information theoretic lower bounds on the probability of error
in the community-disjoint box setting, and compare the decay rates suggested
by the lower bounds to the decay rate under DS-SR.
Our first lower bound captures the complexity of simply identifying the
largest community from within box 1.
###### Theorem 12.
For any consistent algorithm, the probability of error corresponding to an
instance $D$ in the community-disjoint box setting is asymptotically bounded
below as
$\liminf_{t\rightarrow\infty}\frac{\log P_{e}(D)}{t}\geq-\log\left(\frac{N_{1}}{N_{1}-(d_{11}-c_{1}+1)}\right).$
Note that Theorem 12 follows directly from Theorem 4 for the mixed community
setting.
Our second lower bound is complementary, in that it captures the complexity of
identifying the box containing the largest community. To state this bound, we
define
$H^{b}_{2}(D)=\sum_{i=2}^{b}\frac{1}{\log(N_{1})-\log(N_{1}-d_{11}+c_{i})}.$
Then, following along similar lines as the proof of Lemma 7, we can show
that
$\displaystyle\frac{H^{b}(D)}{2}\leq
H^{b}_{2}(D)\leq\overline{\log}(b)H^{b}(D).$
###### Theorem 13.
In the community-disjoint box setting, consider any algorithm that only uses
the number of distinct samples from each community to decide which box to
sample from at each instant as well as to make the final estimate for the
community mode. For any instance $D$, there exists an alternate instance
$D^{[a]},\ a\in[2:b]$, with $H_{2}^{b}(D^{[a]})\leq H_{2}^{b}(D)$ such that
$\displaystyle\max\left(P_{e}(D),P_{e}(D^{[a]})\right)\geq\frac{1}{4}\exp\left(-\frac{t\Gamma}{H_{2}^{b}(D)}\right),$
where
$\Gamma=\max\left(\frac{\log\left(\lceil\frac{N_{1}(N_{a}-c_{a}+d_{11})}{(N_{1}-d_{11}+c_{a})}\rceil\right)-\log\left(N_{a}\right)}{\log\left(\frac{N_{1}}{N_{1}-d_{11}+c_{a}}\right)},\max_{i=2}^{b}\frac{\log\left(\lceil\frac{N_{1}(N_{a}-c_{a}+c_{i})}{(N_{1}-d_{11}+c_{i})}\rceil\right)-\log\left(N_{a}\right)}{\log\left(\frac{N_{1}}{N_{1}-d_{11}+c_{a}}\right)}\right).$
The alternate instance $D^{[a]}$ is constructed by increasing the size of only
the largest community in box $a$, such that the new size of box $a$ is
$N_{a}^{\prime}=\max\left(\lceil
N_{1}\frac{(N_{a}-c_{a}+d_{11})}{(N_{1}-d_{11}+c_{a})}\rceil,\max_{i=2}^{b}\lceil
N_{1}\frac{(N_{a}-c_{a}+c_{i})}{(N_{1}-d_{11}+c_{i})}\rceil\right).$
The proof of Theorem 13 follows along similar lines as the proof of Theorem 8.
Details can be found in Appendix H.
Comparing the upper and lower bounds on the probability of error for the box
setting in Theorems 10, 12, and 13, we see that the expressions for the
exponents differ primarily in i) the presence of $H^{b}(D)$ vs $H_{2}^{b}(D)$,
which differ by at most a factor of $\overline{\log}(b)$; and ii) the presence
of an additional factor $\Gamma$ in the lower bound. Note that
$\displaystyle\max\left(\lceil
N_{1}\frac{(N_{a}-c_{a}+d_{11})}{(N_{1}-d_{11}+c_{a})}\rceil,\max_{i=2}^{b}\lceil
N_{1}\frac{(N_{a}-c_{a}+c_{i})}{(N_{1}-d_{11}+c_{i})}\rceil\right)\leq\lceil\frac{N_{1}(N_{a}-c_{a}+d_{11})}{N_{1}-d_{11}+c_{b}}\rceil$
$\displaystyle\leq\frac{N_{1}(N_{a}-c_{a}+d_{11})}{N_{1}-d_{11}+c_{b}}+1\leq\frac{N_{1}(N_{a}-c_{a}+d_{11}+1)}{N_{1}-d_{11}+c_{b}}.$
Using $\frac{x-1}{x}\leq\log(x)\leq x-1$ for all $x>0$ and the above
inequality, we get
$\displaystyle\Gamma\leq\frac{\log(N_{1})+\log(N_{a}-c_{a}+d_{11}+1)-\log(N_{a})-\log(N_{1}-d_{11}+c_{b})}{\log(N_{1})-\log(N_{1}-d_{11}+c_{a})}$
$\displaystyle=\frac{\log(N_{1}/(N_{1}-d_{11}+c_{b}))+\log((N_{a}-c_{a}+d_{11}+1)/N_{a})}{\log(N_{1}/(N_{1}-d_{11}+c_{a}))}$
$\displaystyle\leq\frac{(d_{11}-c_{b})/(N_{1}-d_{11}+c_{b})+(d_{11}-c_{a}+1)/N_{a}}{(d_{11}-c_{a})/N_{1}}$
$\displaystyle\leq\frac{(d_{11}-c_{b})}{(d_{11}-c_{a})}\cdot\frac{N_{1}}{(N_{1}-d_{11}+c_{b})}\cdot\frac{(2N_{1}+N_{a})}{N_{a}}.$
In particular, the above inequality implies that $\Gamma$ is bounded by a
constant under the following natural assumptions on the class of underlying
instances: i) the largest community size is at most a fraction of its
corresponding box size, i.e., $d_{11}\leq(1-\delta_{1})N_{1}$ for some
$\delta_{1}>0$; ii) the size of the competing communities in other boxes is
at most a fraction of the largest community size, i.e.,
$c_{a}\leq(1-\delta_{2})d_{11}$ for some $\delta_{2}>0$ $\forall a\neq 1$; and
iii) all the box sizes are within a multiplicative constant factor $\beta$ of
each other $(\beta>1)$. Under these assumptions,
$\Gamma\leq\frac{2\beta+1}{\delta_{1}\delta_{2}}$.
We compare this lower bound to the first term in the upper bound given in
Theorem 10. We note that these terms only differ by an order of
$\overline{\log}(b)\Gamma$. When $\Gamma$ is bounded from above, such as in the
case described above, the DS-SR estimator matches the lower bound up to
logarithmic factors for the problem of picking the correct box in the final
stage of the algorithm, and is hence near-optimal. Comparing the second term
in the upper bound from Theorem 10 to Theorem 12, we find a similar
logarithmic factor between the decay rates. Thus, the DS-SR algorithm is decay
rate optimal up to logarithmic factors for the problem of picking the right
community out of a box, given the correct box. This is natural and intuitive,
due to its similarity with the mixed community DSM algorithm. Hence, the set
of instances where DS-SR might not perform well in comparison to other
algorithms can be characterized as instances where it is hard to pick the
correct box containing the largest community; intuitively, these instances
would produce a large value of the parameter $\Gamma.$
### 5.3 The general setting
Finally, we consider the most general setting, where communities are
arbitrarily spread across boxes. From an algorithmic standpoint, the key
challenge here is that it is no longer appropriate to eliminate boxes from
consideration sequentially as in SR algorithms, since the largest community
might be spread across multiple boxes. Accordingly, the algorithms we propose
for the general setting are ‘single phase’ variants of the algorithms proposed
in Section 5.1.
The single phase variant of Algorithm 3, which we refer to as the Distinct
Samples Uniform Exploration (DS-UE) algorithm is stated as follows: sample
each box $\lfloor t/b\rfloor$ times, and return the community that produces
the largest number of distinct individuals. The probability of error under
this algorithm can be bounded using the ideas we have used before, only the
bounds are more cumbersome.
If the box sizes are known, one can also perform a single-phase proportional
sampling of boxes, resulting effectively in a sampling process similar to the
mixed community setting (except that the budget is apportioned deterministically
across boxes, rather than randomly as in the mixed community setting).
We refer to the corresponding algorithm, which outputs the community that
produced the largest number of distinct individuals after $t$ queries, as the
Distinct Samples Proportional Exploration (DS-PE) algorithm.
Finally, we state the normalized single phase variant of DS-UE, which we refer
to as NDS-UE: Each box is sampled $\lfloor t/b\rfloor$ times, and the output
of NDS-UE is the community that maximizes $\sum_{i}\frac{S_{ij}}{S_{i}}N_{i}.$
ENDS-UE can be analogously defined.
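A minimal Python sketch (ours) of the NDS-UE scoring rule; it assumes each sampled individual belongs to exactly one community within its box, so that $S_{i}=\sum_{j}S_{ij}$:

```python
def nds_ue_estimate(counts, box_sizes):
    """NDS-UE output: the community j maximizing sum_i (S_ij / S_i) * N_i.

    counts[i][j] = S_ij, the number of distinct individuals of community j
    seen in box i; box_sizes[i] = N_i (assumed known).
    """
    scores = {}
    for i, comms in counts.items():
        S_i = sum(comms.values())  # distinct individuals seen in box i
        for j, S_ij in comms.items():
            if S_i > 0:
                scores[j] = scores.get(j, 0.0) + S_ij / S_i * box_sizes[i]
    return max(scores, key=scores.get)
```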
To summarize, some of our algorithms for the disjoint box setting can indeed
be applied and evaluated analytically in the general setting. However, we do
not at present have a tight information theoretic lower bound for the general
setting (or indeed, even for the disjoint box setting); the proof techniques
we have used in the lower bounds for the mixed/separated community settings
appear to be insufficient to handle the general case. So even though our
algorithms for the general setting perform well in empirical evaluations (see
Section 6), new methodological innovations are required to close the gap
between upper and lower bounds.
## 6 Experimental Results
In this section, we present extensive simulation results comparing the
performance of the various algorithms discussed in the previous sections. We use
both synthetic data and data gathered from real-world datasets for our
experiments. For each experiment, we averaged the results over multiple runs
(500 to 3000, depending on the complexity of the instance).
### 6.1 Mixed Community Mode Estimation
We begin with the mixed community setting studied in Section 3 where all
individuals are placed in a single box. We demonstrate the difference in
performance of the identity-less Sample Frequency Maximization (SFM) and the
identity-based Distinct Samples Maximization (DSM) algorithms via simulations
on synthetic data. We consider two instances, each with $4000$ individuals in
a single box, partitioned into communities as $[1000,990,600,500,500,410]$ and
$[1000,900,630,520,520,430]$ respectively. As suggested by Theorems 2 and 4,
we find that the difference in the convergence rates of the two estimators
becomes more pronounced when the two largest communities are close in size.
See Figure 1 where we plot the probability of error $\log(P_{e})$ vs the query
budget $t$ for the two instances.
Figure 1: $\log(P_{e}(D))$ vs $t$ for the mixed community setting. (a) Instance [1000, 990, 600, 500, 500, 410]; (b) instance [1000, 900, 630, 520, 520, 430].
### 6.2 Separated Community Mode Estimation
Next, we consider the separated community setting studied in Section 4 where
each community is in a unique box. As above, we consider two instances with
community sizes given by $[1000,990,600,500,500,410]$ and
$[1000,900,630,520,520,430]$ respectively. We plot the performance of the
Consecutive-Collision SR (CC-SR) and Distinct Samples SR (DS-SR) algorithms in
Figure 2. As indicated by our results in Theorems 5 and 6, the DS-SR algorithm
greatly outperforms the CC-SR algorithm.
Figure 2: $\log(P_{e}(D))$ vs $t$ for the separated community setting. (a) Instance [1000, 990, 600, 500, 500, 410]; (b) instance [1000, 900, 630, 520, 520, 430].
### 6.3 Community-Disjoint Box Mode Estimation
Here, we look at the setting where the communities are partitioned across the
boxes and thus each box can have multiple communities, as described in Section
5. We use the following two real-world datasets for comparing the performance
of various estimators under this setting.
* 1.
Brazil Real Estate Dataset [20]: This dataset contains a total of 97353
apartment listings spread across 26 states and 3273 municipalities in Brazil.
Mapping it to our framework, the apartments correspond to individual entities,
the municipalities represent communities and the states they are located in
denote the boxes. Our goal is to identify the municipality (community) with
the largest number of listings by (randomly) sampling apartment listings from
various states.
Corresponding to this dataset, the four largest communities (municipalities
with the most listed apartments) are of sizes [3929, 2322, 2414, 1876]. The
top five box sizes are [80935, 3551, 2035, 1871, 1646], with the largest box
corresponding to the state of São Paulo. Thus, one box has a much larger size
than all the others in this dataset and, in fact, contains all of the four
largest communities.
* 2.
Airbnb Rental Listing Dataset [21]: This dataset contains a total of 48895
rental listings spread across $5$ regions and $221$ neighborhoods in New York
City. Here, the apartments correspond to individual entities, the
neighbourhoods represent communities and the broad regions they are located in
denote the boxes.
The top five communities (neighbourhoods) have sizes [3920, 3741, 2658, 2465,
1971]. The top 5 box sizes are [21661, 20104, 5666, 1091, 373]. Unlike the
previous dataset, the two largest boxes (corresponding to Manhattan and
Brooklyn respectively) are of comparable size here. Furthermore, the two boxes
contain multiple competing communities of size comparable to the largest
community. The largest box contains the communities with sizes 2658 and 1971,
while the second largest box contains communities of sizes 3920 (mode), 3741,
and 2465.
Results We compare the performance of the various algorithms discussed in
Section 5.1 on the two datasets described above. These include the Distinct
Samples-Successive Rejects (DS-SR) and its generalization Distinct Samples
Proportional SR (DS-PSR) when the box sizes are known. We also consider the
normalized variants of DS-SR, given by Normalized Distinct Samples SR (NDS-SR)
and Expectation-Normalized Distinct Samples SR (ENDS-SR) when box sizes are
known, as well as NDS-SR (MLE) when the box sizes are unknown, which replaces
each box size with its maximum likelihood estimate.
Figure 3(a) shows the performance of the various algorithms on the Brazil Real
Estate dataset. DS-SR which splits queries uniformly across all surviving
boxes performs the worst while DS-PSR which does the division in proportion to
box sizes performs the best. This is to be expected since there is one box
which is much larger than all others and this box contains all of the
competing largest communities. Thus, because of the uniform exploration in DS-
SR, there might be fewer samples from the individual communities in the
largest box in the initial rounds and it might get eliminated, which explains
the poor performance for moderate query budgets. This shortcoming is addressed
by DS-PSR, which assigns many more queries to the largest box containing the
community mode. The normalized variants NDS-SR and ENDS-SR also perform much
better than DS-SR since they use the box sizes to determine the elimination
criteria in each round. In comparison to these, NDS-SR (MLE) performs
worse for low query budgets due to erroneous box size estimates, but
demonstrates similar performance for larger budgets.
Figure 3: $\log(P_{e}(D))$ vs $t$ for the community-disjoint box setting. (a) Brazil Real Estate dataset; (b) Airbnb Rental Listing dataset.
Figure 3(b) shows the performance of the various algorithms on the Airbnb
Apartment Listing dataset. Here again, DS-PSR performs the best since it
allocates queries in proportion to box sizes. However, unlike the previous
dataset, all the other algorithms have comparable performance. This includes
DS-SR, which does not use any box size information and still performs
well, since the box sizes are relatively close to each other for this
dataset and the number of communities in each box is also small, which makes
it unlikely that the box containing the largest community is eliminated.
### 6.4 General Setting Mode Estimation
Figure 4: $\log(P_{e}(D))$ vs $t$ for the Youtube Video Dataset, general box setting.
Finally, we consider the general setting where individuals in a community can
be spread across multiple boxes. Section 5.3 described various single-round
algorithms for this setting, namely the Distinct Samples Uniform Exploration
(DS-UE) which doesn’t need any box size information and divides the query
budget equally among all boxes; the Distinct Samples Proportional Exploration
(DS-PE) which assigns queries in proportion to the box sizes; and the various
normalized single phase variants of DS-UE, which we refer to as NDS-UE, ENDS-
UE and NDS-UE (MLE). To compare the performance of these different estimators
under the general setting, we use the following dataset.
* 1.
Trending Youtube Video Statistics Dataset [22]: This dataset contains the top
trending videos for different regions such as Canada, US, and Japan, out of
which we consider six regions. Mapped to our framework, a region corresponds
to a box, a channel denotes a community, and each video represents an
individual entity. The goal is to find the most popular channel which has the
largest number of trending videos across the six regions. Note that a
particular channel (community) can have trending videos (individuals) spread
across different regions (boxes) and thus this dataset corresponds to the
general setting. This dataset contains 239662 videos, each associated with one
of 17773 channels. The top five channels have [870, 809, 752, 717, 712] trending
videos across regions. The boxes have comparable sizes, given by [40881, 40840,
40724, 38916, 37352, 40949].
Figure 4 shows the performance of the various algorithms on the above dataset.
Note that all the estimators are able to achieve an exponential decay in the
probability of error with the query budget even in this general setting.
Furthermore, here the rate of decay for all the estimators is comparable since
the box sizes are all similar and thus the knowledge of box sizes does not
provide a distinct advantage. However, in terms of the absolute value, DS-UE
performs slightly worse than the other algorithms, which either use
prior knowledge of box sizes or learn estimates for them using samples.
## Appendix A Proof of Theorem 1
Let $\hat{\mu}_{i}(t)$ be the number of samples seen from $C_{i}$ over the
horizon. We have
$\displaystyle\hat{\mu}_{i}(t)=\sum_{j=1}^{t}\mathds{1}_{\{\text{person }j\,\in\,C_{i}\}}$
$\displaystyle\Rightarrow
E[\hat{\mu}_{i}(t)]=\mu_{i}(t)=\frac{td_{i}}{N}.$
Using the union bound on $P_{e}{(D)}$, we get
$\displaystyle
P_{e}{(D)}\leq\sum_{i=2}^{m}P(\hat{\mu}_{i}(t)-\hat{\mu}_{1}(t)\geq 0).$
The Chernoff bound gives us
$\displaystyle
P\left(\hat{\mu}_{k}(t)-\hat{\mu}_{1}(t)-\left(\mu_{k}(t)-\mu_{1}(t)\right)\geq
w\right)$ $\displaystyle\leq\min\limits_{\lambda>0}e^{-\lambda
w}E\left[e^{\lambda(\hat{\mu}_{k}(t)-\hat{\mu}_{1}(t)-(\mu_{k}(t)-\mu_{1}(t)))}\right]$
$\displaystyle=\min\limits_{\lambda>0}e^{-\lambda[w+(\mu_{k}(t)-\mu_{1}(t))]}\left[\frac{d_{k}e^{\lambda}}{N}+\frac{d_{1}e^{-\lambda}}{N}+\left(1-\frac{d_{1}+d_{k}}{N}\right)\right]^{t}.$
Choosing $w=\mu_{1}(t)-\mu_{k}(t)$ and minimizing over $\lambda$,
$\displaystyle P(\hat{\mu}_{k}(t)-\hat{\mu}_{1}(t)\geq
0)\leq\left[1-\frac{(\sqrt{d_{1}}-\sqrt{d_{k}})^{2}}{N}\right]^{t}$ (8)
$\displaystyle\Rightarrow
P_{e}{(D)}\leq\sum_{k=2}^{m}P(\hat{\mu}_{k}(t)-\hat{\mu}_{1}(t)\geq 0)$
$\displaystyle\leq\sum_{k=2}^{m}\left[1-\frac{(\sqrt{d_{1}}-\sqrt{d_{k}})^{2}}{N}\right]^{t}$
$\displaystyle\leq(m-1)\left[1-\frac{(\sqrt{d_{1}}-\sqrt{d_{2}})^{2}}{N}\right]^{t}.$
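As a quick numeric sanity check (ours, not part of the original argument), the final bound can be evaluated on the first instance from Section 6:

```python
import math

# Evaluate the Theorem 1 bound (m-1) * [1 - (sqrt(d1) - sqrt(d2))^2 / N]^t;
# the bound decays exponentially in the query budget t.
d = [1000, 990, 600, 500, 500, 410]
N, m = sum(d), len(d)
gap = (math.sqrt(d[0]) - math.sqrt(d[1])) ** 2 / N
for t in (10**5, 10**6, 5 * 10**6):
    print(t, (m - 1) * (1 - gap) ** t)
```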
## Appendix B Proof of Theorem 2
To prove the theorem, we consider two instances $D=(d_{1},d_{2},\ldots,d_{m})$
and $D^{\prime}=(d_{1}^{\prime},d_{2}^{\prime},\ldots,d_{m}^{\prime})$, where
the optimal community in $D$ is $C_{1}$ and the optimal community in
$D^{\prime}$ is $C_{2}$. We note that the mixed community setting can be
modelled as a probability distribution over communities, with the probability
of sampling $C_{i}$ under $D$ and $D^{\prime}$ being $p_{i}=d_{i}/N$ and
$p_{i}^{\prime}=d_{i}^{\prime}/N$ respectively. Let the probability
distributions corresponding to instances $D$ and $D^{\prime}$ be
$\Theta=(p_{1},p_{2},\ldots,p_{m})$ and
$\Theta^{\prime}=(p_{1}^{\prime},p_{2}^{\prime},\ldots,p_{m}^{\prime})$
respectively. Further, let the sequence of $t$ samples be denoted by
$X_{1},X_{2},\ldots,X_{t}$ where $X_{i}$ is the index of the community that is
sampled at time $i$, and let
$\mathbb{P}_{\Theta},\mathbb{P}_{\Theta^{\prime}}$ denote the probability
measures induced on the sample sequence by the instances $D$, $D^{\prime}$.
Next, we state a few lemmas which will help in the proof of the theorem.
###### Lemma 14.
For every event $\mathcal{E}\in F_{t}$, where
$F_{t}=\sigma(X_{1},X_{2},...X_{t})$,
$\displaystyle\mathbb{P}_{\Theta^{\prime}}(\mathcal{E})=\mathbb{E}_{\Theta}[\mathds{1}_{\mathcal{E}}\exp(-L_{t})],$
where
$L_{t}=\sum_{i=1}^{t}\log\left(\frac{p_{X_{i}}}{p^{\prime}_{X_{i}}}\right)$
and $\mathds{1}$ is the indicator random variable.
###### Proof.
This is analogous to [17, Lemma 18]. ∎
###### Lemma 15.
For every event $\mathcal{E}\in F_{t}$,
$\displaystyle\mathbb{E}_{\Theta}[L_{t}|\mathcal{E}]\geq\log\frac{\mathbb{P}_{\Theta}(\mathcal{E})}{\mathbb{P}_{\Theta^{\prime}}(\mathcal{E})}.$
###### Proof.
From Lemma 14, we know that
$\mathbb{P}_{\Theta^{\prime}}(\mathcal{E})=\mathbb{E}_{\Theta}[\exp(-L_{t})\mathds{1}_{\mathcal{E}}]$.
Then, using Jensen’s inequality on $\exp(-x)$, we have that
$\displaystyle\mathbb{P}_{\Theta^{\prime}}(\mathcal{E})=\mathbb{E}_{\Theta}[\exp(-L_{t})\mathds{1}_{\mathcal{E}}]=\mathbb{E}_{\Theta}[\mathbb{E}_{\Theta}[\exp(-L_{t})|\mathds{1}_{\mathcal{E}}]\mathds{1}_{\mathcal{E}}]\geq\mathbb{E}_{\Theta}[\exp(-\mathbb{E}_{\Theta}[L_{t}|\mathcal{E}])\mathds{1}_{\mathcal{E}}]$
$\displaystyle=\exp(-\mathbb{E}_{\Theta}[L_{t}|\mathcal{E}])\mathbb{P}_{\Theta}(\mathcal{E})$
The last line above proves the lemma. ∎
###### Lemma 16.
If
$d(x,y)=x\log\left(\frac{x}{y}\right)+(1-x)\log\left(\frac{(1-x)}{(1-y)}\right)$,
then for every event $\mathcal{E}\in F_{t}$,
$\displaystyle\mathbb{E}_{\Theta^{\prime}}[-L_{t}]\geq
d(\mathbb{P}_{\Theta^{\prime}}(\mathcal{E}),\mathbb{P}_{\Theta}(\mathcal{E})).$
###### Proof.
From Lemma 15 we know that
$\displaystyle\mathbb{E}_{\Theta^{\prime}}[-L_{t}|\mathcal{E}]\geq\log\left(\frac{\mathbb{P}_{\Theta^{\prime}}(\mathcal{E})}{\mathbb{P}_{\Theta}(\mathcal{E})}\right),\mathbb{E}_{\Theta^{\prime}}[-L_{t}|\mathcal{E}^{c}]\geq\log\left(\frac{\mathbb{P}_{\Theta^{\prime}}(\mathcal{E}^{c})}{\mathbb{P}_{\Theta}(\mathcal{E}^{c})}\right).$
Using the law of total probability and the above inequality, we get
$\displaystyle\mathbb{E}_{\Theta^{\prime}}[-L_{t}]=\mathbb{E}_{\Theta^{\prime}}[-L_{t}|\mathcal{E}]\mathbb{P}_{\Theta^{\prime}}(\mathcal{E})+\mathbb{E}_{\Theta^{\prime}}[-L_{t}|\mathcal{E}^{c}]\mathbb{P}_{\Theta^{\prime}}(\mathcal{E}^{c})\geq
d(\mathbb{P}_{\Theta^{\prime}}(\mathcal{E}),\mathbb{P}_{\Theta}(\mathcal{E})).$
∎
Consider a consistent algorithm $\mathcal{A}$, and let $P_{e}(D)$ and
$P_{e}(D^{\prime})$ denote the probabilities of error for $\mathcal{A}$ under
the instances $D$ and $D^{\prime}$ respectively. Denote the community that is
output by $\mathcal{A}$ as $\hat{h}^{*}$, and let $S$ be the event that
$\hat{h}^{*}=1$. Thus, $P_{e}(D)=1-\mathbb{P}_{\Theta}(S)$ and
$P_{e}(D^{\prime})\geq\mathbb{P}_{\Theta^{\prime}}(S)$. Since algorithm
$\mathcal{A}$ is consistent, its probability of error on both
$D$ and $D^{\prime}$ goes to zero as the number of samples $t$ grows large; hence,
for every $\epsilon>0$ there exists $t_{0}(\epsilon)$ such that for all
$t\geq
t_{0}(\epsilon),\mathbb{P}_{\Theta^{\prime}}(S)\leq\epsilon\leq\mathbb{P}_{\Theta}(S)$.
For $t\geq t_{0}(\epsilon)$,
$\displaystyle\mathbb{E}_{\Theta^{\prime}}[-L_{t}]\geq
d(\mathbb{P}_{\Theta^{\prime}}(S),\mathbb{P}_{\Theta}(S))\geq
d(\epsilon,\mathbb{P}_{\Theta}(S))\geq\epsilon\log\left(\frac{\epsilon}{\mathbb{P}_{\Theta}(S)}\right)+(1-\epsilon)\log\left(\frac{(1-\epsilon)}{P_{e}(D)}\right)$
$\displaystyle\geq\epsilon\log(\epsilon)+(1-\epsilon)\log\left(\frac{(1-\epsilon)}{P_{e}(D)}\right)$
Taking the limsup, using
$\mathbb{E}_{\Theta^{\prime}}[-L_{t}]=t\cdot D(\Theta^{\prime}||\Theta)$ where
$D(\cdot||\cdot)$ denotes the Kullback-Leibler divergence, and letting
$\epsilon\rightarrow 0$, we get
$\displaystyle\limsup_{t\rightarrow\infty}-\frac{1}{t}\log(P_{e}(D))\leq
D(\Theta^{\prime}||\Theta).$
Consider $\Theta=(p_{1},p_{2},\ldots,p_{m})$ and
$\Theta^{\prime}=(\frac{\sqrt{p_{1}p_{2}}-\delta}{C},\frac{\sqrt{p_{1}p_{2}}+\delta}{C},\frac{p_{3}}{C},\ldots,\frac{p_{m}}{C})$,
where $C=1-(\sqrt{p_{1}}-\sqrt{p_{2}})^{2}$ and $\delta>0$ is sufficiently
small so that $\Theta^{\prime}$ is a probability distribution. Then, we get
$\displaystyle\limsup_{t\rightarrow\infty}-\frac{1}{t}\log(P_{e}(D))\leq\log\left(\frac{1}{C}\right)+\left(\frac{\sqrt{p_{1}p_{2}}-\delta}{C}\right)\log\left(\frac{\sqrt{p_{1}p_{2}}-\delta}{p_{1}}\right)+\left(\frac{\sqrt{p_{1}p_{2}}+\delta}{C}\right)\log\left(\frac{\sqrt{p_{1}p_{2}}+\delta}{p_{2}}\right)$
$\displaystyle\
\implies\limsup_{t\rightarrow\infty}-\frac{1}{t}\log(P_{e}(D))\leq\log\left(\frac{1}{C}\right)\text{
(letting $\delta\downarrow 0$).}$
## Appendix C Proof of Theorem 3
We will begin by proving the first assertion in the theorem statement which
provides an upper bound on the probability of error for
$t\leq\min\left\\{\frac{d_{1}+d_{m}}{2d_{1}}N,\frac{16Nd_{1}}{(d_{1}-d_{m})^{2}}\right\\}$.
Let $S_{i}(t)$ denote the number of distinct samples seen from community
$C_{i}$ in $t$ samples. We have the following lemma:
###### Lemma 17.
The probability of error of the DSM algorithm is bounded as
$\displaystyle
P_{e}(D)\leq\sum_{i=2}^{m}\left[P(S_{i}(t)-S_{1}(t)>0)+\frac{1}{2}P(S_{i}(t)=S_{1}(t))\right].$
###### Proof.
For any $i\in\{2,3,\ldots,m\}$, it is clear that when $S_{i}(t)-S_{1}(t)>0$, DSM
will erroneously output $i$ as the index of the community mode. Furthermore,
since DSM breaks ties arbitrarily, with some positive probability (bounded by
$1/2$) it makes the same error when $S_{i}(t)=S_{1}(t)$. Together with the
union bound over all $i\in\{2,3,\ldots,m\}$, this gives the above result. ∎
Next, for each $k\in\{2,3,\ldots,m\}$ let $Z_{k}$ be the random variable
denoting the number of samples observed from communities $C_{1}$ and
$C_{k}$ (the total number of samples from $C_{1}$ and $C_{k}$, not
necessarily distinct). We note that the
expected value of $Z_{k}$ is given by
$\displaystyle E[Z_{k}]=\frac{(d_{1}+d_{k})t}{N}.$ (9)
Define events
$E_{k1}=\\{Z_{k}\in[(1-\epsilon_{k})E[Z_{k}],(1+\epsilon_{k})E[Z_{k}]]\\}$ and
$E_{k2}=E_{k1}^{c}$, with
$\epsilon_{k}=\frac{\sqrt{\frac{9}{64}\beta_{k}^{4}+\frac{3}{2}\beta_{k}^{2}}-\frac{3}{8}\beta_{k}^{2}}{2}\mbox{
where }\beta_{k}=\frac{d_{1}-d_{k}}{d_{1}+d_{k}}.$ (10)
It is easy to verify that $\beta_{k}<1$ and
$\epsilon_{k}\leq\min\\{\beta_{k},1/2\\}$. Then, we have
$\displaystyle\ P(S_{k}(t)-S_{1}(t)>0)+\frac{1}{2}P(S_{k}(t)=S_{1}(t))$
$\displaystyle\leq$ $\displaystyle\
P(S_{k}(t)-S_{1}(t)>0|E_{k1})P(E_{k1})+P(S_{k}(t)-S_{1}(t)>0|E_{k2})P(E_{k2})$
$\displaystyle\qquad+\frac{1}{2}P(S_{k}(t)=S_{1}(t)|E_{k1})P(E_{k1})+\frac{1}{2}P(S_{k}(t)=S_{1}(t)|E_{k2})P(E_{k2})$
$\displaystyle\leq$ $\displaystyle\ P(S_{k}(t)-S_{1}(t)\geq
0|E_{k1})P(E_{k1})+P(S_{k}(t)-S_{1}(t)>0|E_{k2})P(E_{k2})+\frac{1}{2}P(S_{k}(t)=S_{1}(t)|E_{k2})P(E_{k2}).$
(11)
Note that the LHS above appears for each $k\in\\{2,3,\ldots,m\\}$ in the upper
bound on $P_{e}(D)$ in Lemma 17. We will bound the terms in the RHS
separately, and then combine them together to get an overall upper bound on
$P_{e}(D)$. To begin with, note that
$E[S_{i}(t)|Z_{k}]=d_{i}\left[1-\left(1-\frac{1}{d_{1}+d_{k}}\right)^{Z_{k}}\right],\text{ for }i\in\{1,k\}.$ (12)
We consider the function $f(x_{1},x_{2},x_{3},...,x_{t})=S_{k}(t)-S_{1}(t)$
where $x_{i}$ is the identity of the individual sampled at the $i$-th instant.
Note that for any $i\in\{1,2,\ldots,t\}$ and for all
$x_{1},x_{2},x_{3},\ldots,x_{t},x_{i}^{\prime}\in\{1,2,\ldots,N\}$, we have
$|f(x_{1},x_{2},\ldots,x_{i},\ldots,x_{t})-f(x_{1},x_{2},\ldots,x_{i}^{\prime},\ldots,x_{t})|\leq
c_{i}\triangleq 2\mathds{1}_{x_{i}\,\in\,C_{1}\cup C_{k}}$. Then, conditioning
on $Z_{k}$ and applying McDiarmid’s inequality, we get
$\displaystyle P(f-E[f|Z_{k}]\geq t^{\prime}|Z_{k})\leq P(|f-E[f|Z_{k}]|\geq
t^{\prime}|Z_{k})$ $\displaystyle\leq\mathrm{exp}\left(-\frac{2t^{\prime
2}}{\sum_{i=1}^{t}c_{i}^{2}}\right)=\mathrm{exp}\left(-\frac{t^{\prime
2}}{2Z_{k}}\right).$
Plugging in $t^{\prime}=-E[f|Z_{k}]$, and computing $E[f|Z_{k}]$ using
Equation (12), we obtain
$\displaystyle P(f\geq 0|Z_{k})=P(S_{k}(t)-S_{1}(t)\geq 0|Z_{k})$
$\displaystyle\leq
\exp\left(-\frac{(d_{1}-d_{k})^{2}\left[1-\left(1-\frac{1}{d_{1}+d_{k}}\right)^{Z_{k}}\right]^{2}}{2Z_{k}}\right).$
(13)
We will start with deriving an upper bound on the first term in the RHS of
equation (11) given by $P(S_{k}(t)-S_{1}(t)\geq 0|E_{k1})P(E_{k1})$.
Conditioned on the event $E_{k1}$, we have
$Z_{k}\in[(1-\epsilon_{k})E[Z_{k}],(1+\epsilon_{k})E[Z_{k}]]$. Furthermore,
from the condition in the first part of the theorem and the
definitions of $\epsilon_{k},\beta_{k}$ from equation (10), we have the
following sequence of assertions:
$t\leq\frac{d_{1}+d_{k}}{2d_{1}}N\Rightarrow\beta_{k}=\frac{d_{1}-d_{k}}{d_{1}+d_{k}}\leq\frac{N}{t}-1\Rightarrow\epsilon_{k}\leq\frac{N}{t}-1\Rightarrow
Z_{k}\leq(1+\epsilon_{k})\frac{t(d_{1}+d_{k})}{N}\leq d_{1}+d_{k}.$
Using the above inequalities and the Taylor series expansion, we have
$\displaystyle\left[1-\left(1-\frac{1}{d_{1}+d_{k}}\right)^{Z_{k}}\right]$
$\displaystyle\geq\left[\frac{Z_{k}}{d_{1}+d_{k}}-\frac{{Z_{k}}^{2}}{2(d_{1}+d_{k})^{2}}\right]\geq\frac{Z_{k}}{2(d_{1}+d_{k})}.$
(14)
Plugging the bound above in equation (13), and using
$Z_{k}\geq(1-\epsilon_{k})E[Z_{k}]=(1-\epsilon_{k})(d_{1}+d_{k})t/N,$ we have
$\displaystyle P(S_{k}(t)-S_{1}(t)\geq 0|E_{k1})\times P(E_{k1})\leq
P(S_{k}(t)-S_{1}(t)\geq 0|E_{k1})\leq
\exp\left(-\frac{t(1-\epsilon_{k})(d_{1}-d_{k})^{2}}{8N(d_{1}+d_{k})}\right),$
(15)
thus giving us an upper bound on the first term in the RHS of equation (11).
For bounding the sum of the second and third terms in the RHS of equation
(11), we use the following lemma:
###### Lemma 18.
For any $k\in\{2,3,\ldots,m\}$ such that $d_{k}\leq d_{1}$ and for any $l\geq
0$, we have
$\displaystyle
P(S_{k}(t)-S_{1}(t)>0|Z_{k}=l)+\frac{1}{2}P(S_{k}(t)=S_{1}(t)|Z_{k}=l)\leq\frac{1}{2}$
###### Proof.
Note that the lemma statement is equivalent to showing that, when $d_{k}\leq
d_{1}$,
$\displaystyle P(S_{k}(t)-S_{1}(t)>0|Z_{k}=l)\leq
P(S_{k}(t)-S_{1}(t)<0|Z_{k}=l),$
which says that, conditioned on the total number of samples from communities
$1$ and $k$ together being some fixed $l$, the likely event is that the
community $1$, whose size is at least that of community $k$, will have as many
or more distinct individuals than community $k$. Given $d_{k}\leq d_{1}$, this
is intuitive and while it can be argued formally, we skip the argument here
for brevity. ∎
Using Lemma 18, we get that the second and third terms in the RHS of equation
(11) are bounded as
$\displaystyle
P(S_{k}(t)-S_{1}(t)>0|E_{k2})P(E_{k2})+\frac{1}{2}P(S_{k}(t)=S_{1}(t)|E_{k2})P(E_{k2})\leq\frac{1}{2}P(E_{k2}).$
Further, using Chernoff’s inequality for $P(E_{k2})$ and
$E[Z_{k}]=(d_{1}+d_{k})t/N$, we have
$\displaystyle\frac{1}{2}P(E_{k2})=\frac{1}{2}P(|Z_{k}-E[Z_{k}]|>\epsilon_{k}E[Z_{k}])\leq\exp\left(-\frac{\epsilon_{k}^{2}(d_{1}+d_{k})t}{3N}\right).$
(16)
Finally, combining Lemma 17, equation (15), and equation (16), we get the
following upper bound on $P_{e}(D)$.
$\displaystyle
P_{e}(D)\leq\sum_{k=2}^{m}\mathrm{exp}\left(-\frac{t(1-\epsilon_{k})(d_{1}-d_{k})^{2}}{8N(d_{1}+d_{k})}\right)+\mathrm{exp}\left(-\frac{\epsilon_{k}^{2}(d_{1}+d_{k})t}{3N}\right).$
From the value of $\epsilon_{k}$ in equation (10), we have that the exponents
in the two terms of the summation above are equal. Thus, we have
$\displaystyle
P_{e}(D)\leq\sum_{k=2}^{m}2\mathrm{exp}\left(-\frac{t(1-\epsilon_{k})(d_{1}-d_{k})^{2}}{8N(d_{1}+d_{k})}\right)\leq\sum_{k=2}^{m}2\mathrm{exp}\left(-\frac{t(d_{1}-d_{k})^{2}}{16N(d_{1}+d_{k})}\right)\leq\sum_{k=2}^{m}2\mathrm{exp}\left(-\frac{t(d_{1}-d_{k})^{2}}{32Nd_{1}}\right),$
(17)
where the first inequality is true because $\epsilon_{k}\leq 1/2$; and the
second inequality follows since $d_{k}\leq d_{1}$ for all
$k\in\\{2,3,\ldots,m\\}$.
The next result comments on the shape of the function
$f(x)=\mathrm{exp}(-\frac{t(d_{1}-x)^{2}}{32Nd_{1}})$, which appears in
equation (17) above.
###### Lemma 19.
The function $f(x)=\mathrm{exp}\left(-\frac{t(d_{1}-x)^{2}}{32Nd_{1}}\right)$
is concave for any $x\geq d_{m}$ and
$t\leq\frac{16Nd_{1}}{(d_{1}-d_{m})^{2}}$.
###### Proof.
We differentiate $f(x)$ twice to confirm that it is concave.
$\displaystyle
f^{\prime\prime}(x)=\frac{t}{16Nd_{1}}\mathrm{exp}\left(-\frac{t(d_{1}-x)^{2}}{32Nd_{1}}\right)\left(\frac{t}{16Nd_{1}}(d_{1}-x)^{2}-1\right)$
Using the inequality $t\leq\frac{16Nd_{1}}{(d_{1}-d_{m})^{2}}$, we have that
$\displaystyle
f^{\prime\prime}(x)\leq\frac{t}{16Nd_{1}}\mathrm{exp}\left(-\frac{t(d_{1}-x)^{2}}{32Nd_{1}}\right)\left(\frac{(d_{1}-x)^{2}}{(d_{1}-d_{m})^{2}}-1\right)$
which implies $f^{\prime\prime}(x)\leq 0$ since $x\geq d_{m}$. ∎
From (17) and using Lemma 19, we have from Jensen’s inequality that for
$t\leq\min\left\\{\frac{d_{1}+d_{m}}{2d_{1}}N,\frac{16Nd_{1}}{(d_{1}-d_{m})^{2}}\right\\}$
$\displaystyle P_{e}(D)\leq
2\sum_{k=2}^{m}\mathrm{exp}\left(-\frac{t(d_{1}-d_{k})^{2}}{32Nd_{1}}\right)\leq
2(m-1)\mathrm{exp}\left(-\frac{t\left(d_{1}-\frac{\sum_{k=2}^{m}d_{k}}{m-1}\right)^{2}}{32Nd_{1}}\right),$
which proves the first assertion in the theorem statement.
For the second assertion in the theorem statement, note that the algorithm
will certainly not make an error if the number of distinct individuals seen
from the first community satisfies $S_{1}(t)\geq d_{2}+1$, where $d_{2}$ denotes the
size of the second-largest community. Hence, the probability of error is
bounded as $P_{e}(D)\leq P(S_{1}(t)\leq d_{2})$. Further, note that if the
event $\\{S_{1}(t)\leq d_{2}\\}$ occurs, then there exists a set of
$d_{1}-d_{2}$ individuals in $C_{1}$ which remain unsampled in the $t$
samples. Thus, we have
$\displaystyle P_{e}(D)\leq P(S_{1}(t)\leq
d_{2})\leq\binom{d_{1}}{d_{2}}\left(1-\frac{d_{1}-d_{2}}{N}\right)^{t}.$
## Appendix D Proof of Theorem 4
This proof is similar in spirit to the proof of [23, Theorem 1]. Consider an
instance $D=(d_{1},d_{2},\ldots,d_{m}).$ First, we note that since
$(S_{j}(t),\ 1\leq j\leq m)$ is a sufficient statistic for $D,$ it suffices
to restrict attention to (consistent) algorithms whose output depends only on
the vector $(S_{j}(t),\ 1\leq j\leq m).$ Given this restriction, we track the
temporal evolution of the vector $S(k)=(S_{j}(k),\ 1\leq j\leq m),$ where
$S_{j}(k)$ is the number of distinct individuals from community $j$ seen in
the first $k$ oracle queries. This evolution can be modeled as an absorbing
Markov chain over state space $\prod_{j=1}^{m}\\{0,1,\cdots,d_{j}\\},$ with
$S(0)=(0,0,\cdots,0).$ Next, let us write down the transition probabilities
$q_{D}(s,s^{\prime})$ for each state pair $(s,s^{\prime}).$ Note that from
state $s,$ the chain can transition to the states $s+e_{j}$ for $1\leq j\leq
m,$ where the vector $e_{j}$ has 1 in the $j$th position and 0 elsewhere, or
remain in state $s.$ Moreover, $q_{D}(s,s+e_{j})=(d_{j}-s_{j})/N,$ and
$q_{D}(s,s)=\frac{\sum_{j=1}^{m}s_{j}}{N}.$
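To make the transition structure concrete, the following minimal Python sketch (an illustration added here, not part of the original analysis; all names are ours) simulates the oracle-query process and empirically recovers $q_{D}(s,s+e_{j})=(d_{j}-s_{j})/N$:

```python
import random
from collections import Counter

def simulate_distinct_counts(d, t, seed=0):
    """Simulate t uniform-with-replacement oracle queries on a population of
    size N = sum(d) split into communities of sizes d[0..m-1]; return the
    trajectory S(0), S(1), ..., S(t) of distinct-individual count vectors."""
    rng = random.Random(seed)
    N = sum(d)
    community = [j for j, dj in enumerate(d) for _ in range(dj)]  # individual -> community
    seen, S = set(), [0] * len(d)
    trajectory = [tuple(S)]
    for _ in range(t):
        i = rng.randrange(N)        # uniform query over the N individuals
        if i not in seen:           # transition s -> s + e_j, prob. (d_j - s_j)/N
            seen.add(i)
            S[community[i]] += 1    # otherwise self-loop, prob. sum_j s_j / N
        trajectory.append(tuple(S))
    return trajectory

# Empirical one-step check from S(0) = (0,0,0): frequencies approach d_j / N.
d = (5, 3, 2)
hits = Counter(simulate_distinct_counts(d, 1, seed=k)[1] for k in range(20000))
print({s: round(c / 20000, 3) for s, c in hits.items()})  # ~0.5, 0.3, 0.2
```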
Recall that by assumption, community $1$ is the largest community for the
instance $D.$ Let us consider an alternate instance
$D^{\prime}=(d_{1}^{\prime},d_{2}^{\prime},\ldots,d_{m}^{\prime})$ such that
$d_{1}^{\prime}=d_{2}-1$, $d_{j}^{\prime}=d_{j}\ \forall j\neq 1$, and
$N^{\prime}=N-d_{1}+d_{2}-1$. Note that the community mode under the alternate
instance $D^{\prime}$ is different from that under the original instance $D$.
Thus, for state $s$ that is feasible under both $D$ and $D^{\prime}$,
$\log\left(\frac{q_{D^{\prime}}(s,s)}{q_{D}(s,s)}\right)=\log\left(\frac{N}{N-d_{1}+d_{2}-1}\right).$
Similarly, for state pair $(s,s+e_{j})$ that is feasible under both $D$ and
$D^{\prime}$,
$\displaystyle\log\left(\frac{q_{D^{\prime}}(s,s+e_{j})}{q_{D}(s,s+e_{j})}\right)$
$\displaystyle=\log\left(\frac{N}{N-d_{1}+d_{2}-1}\right),j\neq 1,$
$\displaystyle\log\left(\frac{q_{D^{\prime}}(s,s+e_{1})}{q_{D}(s,s+e_{1})}\right)$
$\displaystyle=\log\left(\frac{N(d_{2}-1-s_{1})}{(N-d_{1}+d_{2}-1)(d_{1}-s_{1})}\right)=\log\left(\frac{N}{N-d_{1}+d_{2}-1}\right)+\log\left(\frac{d_{2}-1-s_{1}}{d_{1}-s_{1}}\right).$
Therefore, for any state pair $(s,s^{\prime})$ such that
$q_{D}(s,s^{\prime}),q_{D^{\prime}}(s,s^{\prime})>0,$ we have
$\log\left(\frac{q_{D^{\prime}}(s,s^{\prime})}{q_{D}(s,s^{\prime})}\right)\leq\log\left(\frac{N}{N-d_{1}+d_{2}-1}\right).$
(18)
Next, let $\mathbb{P}_{D},\mathbb{P}_{D^{\prime}}$ denote the probability
measures induced by the algorithm under consideration under the instances $D$
and $D^{\prime},$ respectively. Then, given a state evolution sequence
$(S(1),\cdots,S(t))$, the log-likelihood ratio is given by
$\displaystyle\log\frac{\mathbb{P}_{D^{\prime}}(S(1),\cdots,S(t))}{\mathbb{P}_{D}(S(1),\cdots,S(t))}=\sum_{s,s^{\prime}}N(s,s^{\prime},t)\log\left(\frac{q_{D^{\prime}}(s,s^{\prime})}{q_{D}(s,s^{\prime})}\right),$
where $N(s,s^{\prime},t)$ represents the number of times the transition from
state $s$ to state $s^{\prime}$ occurs over the course of $t$ queries. Combining with
(18), we get
$\displaystyle\log\frac{\mathbb{P}_{D^{\prime}}(S(1),\cdots,S(t))}{\mathbb{P}_{D}(S(1),\cdots,S(t))}\leq
t\log\left(\frac{N}{N-d_{1}+d_{2}-1}\right),$
which implies
$D(\mathbb{P}_{D^{\prime}}||\mathbb{P}_{D})=E_{D^{\prime}}\left[\log\frac{\mathbb{P}_{D^{\prime}}(S(1),\cdots,S(t))}{\mathbb{P}_{D}(S(1),\cdots,S(t))}\right]\leq
t\log\left(\frac{N}{N-d_{1}+d_{2}-1}\right),$ (19)
where $D(\cdot||\cdot)$ denotes the Kullback-Leibler divergence. On the other
hand, since the algorithm produces an estimate $\hat{h}^{*}$ of the community
mode based solely on $S(t)$, we have from the data-processing inequality (see
[24]) that
$D(\mathbb{P}_{D^{\prime}}||\mathbb{P}_{D})\geq
D\bigl{(}Ber(\mathbb{P}_{D^{\prime}}(\hat{h}^{*}=1))||Ber(\mathbb{P}_{D}(\hat{h}^{*}=1))\bigr{)},$
(20)
where $Ber(x)$ denotes the Bernoulli distribution with parameter $x\in(0,1)$.
Recall that the community mode under $D$ is community $1$, while it is
community 2 under $D^{\prime}$. Then, from the definition of consistent
algorithms, for every $\epsilon>0$ there exists $t_{0}(\epsilon)$ such that for
$t\geq t_{0}(\epsilon)$,
$\mathbb{P}_{D^{\prime}}(\hat{h}^{*}=1)\leq\epsilon\leq\mathbb{P}_{D}(\hat{h}^{*}=1)$.
Thus, we have
$\displaystyle
D(Ber(\mathbb{P}_{D^{\prime}}(\hat{h}^{*}=1))||Ber(\mathbb{P}_{D}(\hat{h}^{*}=1)))\geq
D(Ber(\epsilon)||Ber(\mathbb{P}_{D}(\hat{h}^{*}=1)))$
$\displaystyle\quad\geq\epsilon\log\left(\frac{\epsilon}{\mathbb{P}_{D}(\hat{h}^{*}=1)}\right)+(1-\epsilon)\log\left(\frac{1-\epsilon}{\mathbb{P}_{D}(\hat{h}^{*}\neq
1)}\right)\geq\epsilon\log(\epsilon)+(1-\epsilon)\log\left(\frac{1-\epsilon}{\mathbb{P}_{D}(\hat{h}^{*}\neq
1)}\right).$
Letting $\epsilon\rightarrow 0$ and using $\mathbb{P}_{D}(\hat{h}^{*}\neq
1)=P_{e}(D)$, we have
$\displaystyle
D(Ber(\mathbb{P}_{D^{\prime}}(\hat{h}^{*}=1))||Ber(\mathbb{P}_{D}(\hat{h}^{*}=1)))\geq-\log(P_{e}(D)).$
Finally, combining with (19) and (20), we have that
$\displaystyle\liminf_{t\rightarrow\infty}\frac{\log(P_{e}(D))}{t}\geq-\log\left(\frac{N}{N-(d_{1}-d_{2}+1)}\right).$
## Appendix E Proof of Theorem 5
Note that
$\displaystyle P_{e}(D)$ $\displaystyle\leq\sum_{r=1}^{b-1}P(\text{$C_{1}$
gets eliminated in round $r$}).$
Let $S_{i}(K)$ denote the number of (immediate pairwise) collisions recorded
in $C_{i}$ after $K$ pairs of samples. Since at least one of the smallest $r$
communities is guaranteed to be present during round $r,$
$\displaystyle P_{e}(D)$
$\displaystyle\leq\sum_{r=1}^{b-1}\sum_{j=b+1-r}^{b}P(S_{j}(K_{r})-S_{1}(K_{r})\leq
0)$ $\displaystyle\leq\sum_{r=1}^{b-1}rP(S_{b+1-r}(K_{r})-S_{1}(K_{r})\leq
0).$ (21)
Denoting, for $i\neq 1,$ $f_{i}(K):=S_{i}(K)-S_{1}(K),$ we now derive an upper
bound on $P(f_{i}(K)\leq 0).$ Applying Chernoff’s inequality, for $\lambda\leq
0,$
$\displaystyle P(f_{i}(K)\leq 0)$ $\displaystyle\leq E\left[e^{\lambda
f_{i}(K)}\right]$
$\displaystyle=\biggl{[}\frac{1}{d_{1}d_{i}}+\left(1-\frac{1}{d_{1}}\right)\left(1-\frac{1}{d_{i}}\right)+e^{\lambda}\left(1-\frac{1}{d_{1}}\right)\frac{1}{d_{i}}+e^{-\lambda}\left(1-\frac{1}{d_{i}}\right)\frac{1}{d_{1}}\biggr{]}^{K}.$
Setting $e^{\lambda}=\sqrt{\frac{d_{i}-1}{d_{1}-1}}$,
$\displaystyle P(f_{i}(K)\leq 0)$
$\displaystyle\leq\left(1-\frac{(\sqrt{d_{1}-1}-\sqrt{d_{i}-1})^{2}}{d_{1}d_{i}}\right)^{K}\leq\mathrm{exp}\left(-\frac{K(\sqrt{d_{1}-1}-\sqrt{d_{i}-1})^{2}}{d_{1}d_{i}}\right).$
Since $d_{1}>d_{i}$,
$(\sqrt{d_{1}-1}-\sqrt{d_{i}-1})^{2}>\frac{((d_{1}-1)-(d_{i}-1))^{2}}{4(d_{1}-1)}>\frac{((d_{1}-1)-(d_{i}-1))^{2}}{4d_{1}}=\frac{(d_{1}-d_{i})^{2}}{4d_{1}}$.
$\displaystyle\Rightarrow P(f_{i}(K)\leq
0)\leq\mathrm{exp}\left(-\frac{{K}(d_{1}-d_{i})^{2}}{4d_{1}^{2}d_{i}}\right).$
Substituting the above into (21),
$\displaystyle P_{e}(D)$ $\displaystyle\leq\sum_{r=1}^{b-1}r\
\mathrm{exp}\left(-\frac{K_{r}(d_{1}-d_{b+1-r})^{2}}{4d_{1}^{2}d_{b+1-r}}\right).$
Since
$K_{r}=\left\lceil\frac{1}{\overline{log}(b)}\frac{t/2-b}{b+1-r}\right\rceil$,
where $\overline{log}(b)=\frac{1}{2}+\sum_{i=2}^{b}\frac{1}{i}$ and
$\Delta_{i}=\frac{1}{d_{i}}-\frac{1}{d_{1}}$,
$\displaystyle P_{e}(D)\leq\sum_{r=1}^{b-1}r\
\mathrm{exp}\left(-\frac{{K_{r}d_{b+1-r}\Delta_{(b+1-r)}^{2}}}{4}\right).$
For ${H}^{c}(D)=\max_{i\in[2:b]}\frac{i\Delta_{i}^{-2}}{d_{i}}$,
$\displaystyle
K_{r}d_{b+1-r}\Delta_{(b+1-r)}^{2}\geq\frac{(t/2-b)}{\overline{log}(b){H}^{c}(D)}$
$\displaystyle\Rightarrow
P_{e}(D)\leq\frac{b(b-1)}{2}\mathrm{exp}\left(-\frac{(t/2-b)}{4\overline{log}(b){H}^{c}(D)}\right).$
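As a quick numeric sanity check (ours, not the paper's code), the bound just derived can be evaluated for a concrete instance; the helper below computes $\overline{log}(b)$, ${H}^{c}(D)$ and the final exponential bound, assuming $D$ is listed in non-increasing order of community sizes:

```python
import math

def theorem5_bound(d, t):
    """Evaluate P_e(D) <= b(b-1)/2 * exp(-(t/2 - b) / (4 * logbar(b) * Hc(D)))
    for an instance d = (d_1 >= d_2 >= ... >= d_b)."""
    b = len(d)
    logbar = 0.5 + sum(1.0 / i for i in range(2, b + 1))
    # H^c(D) = max over i in [2:b] of i * Delta_i^{-2} / d_i, Delta_i = 1/d_i - 1/d_1
    hc = max(i / (d[i - 1] * (1.0 / d[i - 1] - 1.0 / d[0]) ** 2)
             for i in range(2, b + 1))
    return b * (b - 1) / 2 * math.exp(-(t / 2 - b) / (4 * logbar * hc))

print(theorem5_bound((40, 20, 10, 5), t=20000))  # decays exponentially in t
```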
## Appendix F Proof of Theorem 10
Let $P^{i}_{e}(D)$ denote the probability of the community mode being
eliminated at the $i$th step; i.e., for $i\leq b-1$, $P^{i}_{e}(D)$ denotes the
probability of removing box $1$ in phase $i$ of SR, and $P^{b}_{e}(D)$ denotes
the probability of choosing the wrong community from box 1 after this box
survived the $(b-1)$ SR phases. Then, we have
$\displaystyle P_{e}(D)$
$\displaystyle=\sum_{i=1}^{b-1}P^{i}_{e}(D)+P^{b}_{e}(D),$ $\displaystyle
P^{i}_{e}(D)$
$\displaystyle\leq\binom{d_{11}}{c_{b-i+1}}\exp\left(-K_{i}\log\left(\frac{N_{1}}{N_{1}-d_{11}+c_{b-i+1}}\right)\right)\quad(1\leq
i\leq b-1),$ $\displaystyle P^{b}_{e}(D)$
$\displaystyle\leq\binom{d_{11}}{c_{1}}\exp\left(-K_{b-1}\log\left(\frac{N_{1}}{N_{1}-d_{11}+c_{1}}\right)\right),$
where the second and third statements are based on a coupon collector
argument, similar to the one employed in the proof of Theorem 6 for the
separated community setting. The proof is now completed by substituting the
values of $K_{r},$ and using the definition of $H^{b}(D).$
## Appendix G Proof of Theorem 11
We show that ENDS-SR has the same decay rate as DS-SR. Recall that the
comparison function used in ENDS-SR is
$\displaystyle\frac{S_{ij}N_{i}}{E[S_{i}]},$
where $S_{ij}$ is the number of distinct samples from community $j$ in box
$i$, and $S_{i}$ is the number of distinct samples from box $i$. At the end of
$r$ rounds,
$\displaystyle
E[S_{i}]=N_{i}\left(1-\left(1-\frac{1}{N_{i}}\right)^{K_{r}}\right).$
Similar to the coupon collector argument in the proof of Theorem 10, we let
$P^{i}_{e}(D)$ be the probability of the community mode being eliminated in
the $i$th step. We have that
$\displaystyle P_{e}(D)\leq\sum_{i=1}^{b}P_{e}^{i}(D).$
After $r\leq b-1$ rounds/phases, the comparison function for the largest
community equals
$\displaystyle\frac{S_{11}}{\left(1-\left(1-\frac{1}{N_{1}}\right)^{K_{r}}\right)}.$
For some community $j$ in box $i$, the comparison function is
$\displaystyle\frac{S_{ij}}{\left(1-\left(1-\frac{1}{N_{i}}\right)^{K_{r}}\right)}\leq\frac{c_{i}}{\left(1-\left(1-\frac{1}{N_{m}}\right)^{K_{r}}\right)},$
where $N_{m}=\max_{i}N_{i}$. Thus, if we have
$\displaystyle
S_{11}>\frac{c_{b-r+1}\left(1-\left(1-\frac{1}{N_{1}}\right)^{K_{r}}\right)}{\left(1-\left(1-\frac{1}{N_{m}}\right)^{K_{r}}\right)},$
then the community mode cannot be eliminated in the $r$th round. For round
$r=b$, we just note that
$\displaystyle S_{11}>c_{1}$
is sufficient for the community mode estimate to be correct. Applying the
coupon collector argument to these events and using the notation
$\displaystyle
f_{i}(K):=\frac{c_{i}\left(1-\left(1-\frac{1}{N_{1}}\right)^{K}\right)}{\left(1-\left(1-\frac{1}{N_{m}}\right)^{K}\right)},$
we have
$\displaystyle
P_{e}(D)\leq\sum_{i=1}^{b-1}\binom{d_{11}}{f_{b-i+1}(K_{i})}\exp\left(-K_{i}\log\left(\frac{N_{1}}{N_{1}-d_{11}+f_{b-i+1}(K_{i})}\right)\right)+\binom{d_{11}}{c_{1}}\exp\left(-K_{b}\log\left(\frac{N_{1}}{N_{1}-d_{11}+c_{1}}\right)\right).$
We note that, as $t\rightarrow\infty$, $f_{i}(t)\rightarrow c_{i},$ which then
implies the statement of the theorem.
## Appendix H Proof of Theorem 13
We first state the following lemma (analogous to Lemma 9) for this setting
(the proof is straightforward and omitted):
###### Lemma 20.
For any algorithm $\mathcal{A}$ and instance $D$, there must exist a box
$a\in[2:b]$ such that
$E_{D}[N_{a}(t)]\leq\frac{t}{(\log(N_{1})-\log(N_{1}-d_{11}+c_{a}))H_{2}^{b}(D)}$,
where $N_{a}(t)$ denotes the number of times box $a$ is sampled in $t$ queries
under $\mathcal{A}$.
###### Proof of Theorem 13.
Given an instance $D$, we construct an alternate instance $D^{[a]}$ by
changing the size of the largest community in box $a$ (corresponding to the
one specified by Lemma 20) from $c_{a}$ to
$g_{a}^{\prime}=c_{a}+N^{\prime}_{a}-N_{a}$. (We use $g^{\prime}_{a}$ and
not $c^{\prime}_{a}$ to denote the new size of this community because in the
alternate instance $D^{[a]}$, this community is the largest community, and is
thus no longer the _competing_ community in box $a$.) Note that the size of box
$a$ changes from $N_{a}$ to $N_{a}^{\prime}=N_{a}+g_{a}^{\prime}-c_{a}.$
Furthermore, we can see that the community mode under instance $D^{[a]}$ is
different from the one under the original instance $D$, since
$g_{a}^{\prime}=c_{a}+N^{\prime}_{a}-N_{a}\geq
c_{a}+\frac{N_{1}(N_{a}-c_{a}+d_{11})}{(N_{1}-d_{11}+c_{a})}-N_{a}>c_{a}+(N_{a}-c_{a}+d_{11})-N_{a}=d_{11}.$
Following steps similar to the proof of Theorem 8, we get
$\displaystyle D(\mathbb{P}_{D},\mathbb{P}_{D^{[a]}})\leq
E_{D}[N_{a}(t)]\log\left(\frac{N_{a}^{\prime}}{N_{a}}\right).$
From the definition of $\Gamma,$ it follows that
$\frac{N_{a}^{\prime}}{N_{a}}=\left(\frac{N_{1}}{N_{1}-d_{11}+c_{a}}\right)^{\Gamma}.$
Thus, invoking Lemma 20, we have
$\displaystyle
D(\mathbb{P}_{D},\mathbb{P}_{D^{[a]}})\leq\frac{t\Gamma}{H^{b}_{2}(D)}.$
Finally, similar to the proof of Theorem 8, we use Lemma 21 to get
$\displaystyle\max\left(P_{e}(D),P_{e}(D^{[a]})\right)\geq\frac{1}{4}\exp\left(-\frac{t\Gamma}{H_{2}^{b}(D)}\right)$
which matches the statement of the theorem.
Finally, we show that
$\displaystyle H_{2}^{b}(D^{[a]})\leq H_{2}^{b}(D)\Leftrightarrow\sum_{i\neq
a}\frac{1}{\log(N_{a}^{\prime})-\log(N_{a}^{\prime}-g_{a}^{\prime}+c_{i}^{\prime})}\leq\sum_{i\neq
1}\frac{1}{\log(N_{1})-\log(N_{1}-d_{11}+c_{i})}.$
We do this in two steps:
1. 1.
Firstly, for each $i\notin\\{1,a\\}$, we show that the term corresponding to
box $i$ in the sum on the left is smaller than the corresponding term in the
sum on the right, i.e.,
$\displaystyle\frac{1}{\log(N_{a}^{\prime})-\log(N_{a}^{\prime}-g_{a}^{\prime}+c_{i}^{\prime})}$
$\displaystyle\leq\frac{1}{\log(N_{1})-\log(N_{1}-d_{11}+c_{i})}$
$\displaystyle\mbox{or equivalently, }\quad\frac{N_{1}}{N_{1}-d_{11}+c_{i}}$
$\displaystyle\leq\frac{N_{a}^{\prime}}{N_{a}^{\prime}-g_{a}^{\prime}+c_{i}^{\prime}}.$
This follows from the following sequence of inequalities.
$\displaystyle\frac{N_{1}}{(N_{1}-d_{11}+c_{i})}=\frac{N_{1}(N_{a}-c_{a}+c_{i})}{(N_{1}-d_{11}+c_{i})(N_{a}-c_{a}+c_{i})}\leq\frac{N_{a}^{\prime}}{N_{a}-c_{a}+c_{i}}=\frac{N_{a}^{\prime}}{N_{a}^{\prime}-g_{a}^{\prime}+c_{i}^{\prime}}$
where the last step follows since $N_{a}^{\prime}=N_{a}+g_{a}^{\prime}-c_{a}$
and $c_{i}^{\prime}=c_{i}$ for $i\notin\\{1,a\\}$.
2. 2.
Secondly, we show that the term corresponding to box $1$ in the sum on the
left is smaller than the term corresponding to box $a$ in the sum on the
right, i.e.,
$\displaystyle\frac{1}{\log(N_{a}^{\prime})-\log(N_{a}^{\prime}-g_{a}^{\prime}+c_{1}^{\prime})}$
$\displaystyle\leq\frac{1}{\log(N_{1})-\log(N_{1}-d_{11}+c_{a})}$
$\displaystyle\mbox{or equivalently, }\quad\frac{N_{1}}{(N_{1}-d_{11}+c_{a})}$
$\displaystyle\leq\frac{N_{a}^{\prime}}{N_{a}^{\prime}-g_{a}^{\prime}+d_{11}}.$
This follows from the following sequence of inequalities.
$\displaystyle\frac{N_{1}}{N_{1}-d_{11}+c_{a}}=\frac{N_{1}(N_{a}-c_{a}+d_{11})}{(N_{a}-c_{a}+d_{11})(N_{1}-d_{11}+c_{a})}\leq\frac{N_{a}^{\prime}}{N_{a}-c_{a}+d_{11}}=\frac{N_{a}^{\prime}}{N_{a}^{\prime}-g_{a}^{\prime}+d_{11}},$
where the last step is true because
$N_{a}^{\prime}-g_{a}^{\prime}=N_{a}-c_{a}$.
This completes the proof.
∎
## Appendix I Other Lemmas
###### Lemma 21.
Let $\rho_{0}$ and $\rho_{1}$ be two probability distributions supported on
some set $\chi$, with $\rho_{1}$ absolutely continuous with respect to
$\rho_{0}$. Then for any measurable function $\phi:\chi\rightarrow\\{0,1\\}$,
$\displaystyle
P_{X\sim\rho_{0}}(\phi(X)=1)+P_{X\sim\rho_{1}}(\phi(X)=0)\geq\frac{1}{2}\exp\left(-D(\rho_{0}||\rho_{1})\right).$
###### Proof.
This is [17, Lemma 20]. ∎
###### Lemma 22.
$\frac{H(D)}{2}\leq H_{2}(D)\leq\overline{log}(b)H(D).$
###### Proof.
For the inequality on the left, we note that
$\displaystyle
H_{2}(D)=\sum_{i=2}^{b}\frac{1}{\log(d_{1})-\log(d_{i})}\geq\sum_{i=2}^{j}\frac{1}{\log(d_{1})-\log(d_{i})}\geq\frac{j-1}{\log(d_{1})-\log(d_{j})}\quad\forall\,
j\in[2:b].$
Since this is true for all $j\in[2:b]$, taking the maximum over these values
and using that $j-1\geq\frac{j}{2}$ for $j\geq 2$, we have
$\displaystyle H_{2}(D)\geq\max_{j{\neq
1}}\frac{j/2}{\log(d_{1})-\log(d_{j})}=\frac{H(D)}{2}$
For the inequality on the right, we multiply and divide each term in the
summation of $H_{2}(D)$ by $i$:
$\displaystyle
H_{2}(D)=\sum_{i=2}^{b}\frac{i}{i(\log(d_{1})-\log(d_{i}))}\leq\sum_{i=2}^{b}\frac{H(D)}{i}\leq\overline{log}(b)H(D).$
This completes the proof of both inequalities in the statement of the lemma. ∎
## References
* [1] M. Finkelstein, H. G. Tucker, J. A. Veeh, Confidence intervals for the number of unseen types, Statistics & Probability Letters 37 (4) (1998) 423–430.
* [2] C. Budianu, S. Ben-David, L. Tong, Estimation of the number of operating sensors in large-scale sensor networks with mobile access, IEEE Transactions on Signal Processing 54 (5) (2006) 1703–1715.
* [3] M. Bressan, E. Peserico, L. Pretto, Simple set cardinality estimation through random sampling, arXiv preprint arXiv:1512.07901 (2015).
* [4] X. Chen, W. Huang, W. Chen, J. C. Lui, Community exploration: from offline optimization to online learning, in: Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018.
* [5] S. Bubeck, D. Ernst, A. Garivier, Optimal discovery with probabilistic expert advice: finite time analysis and macroscopic optimality, Journal of Machine Learning Research 14 (2013) 601–623.
* [6] T. Lattimore, C. Szepesvári, Bandit algorithms, Cambridge University Press, 2020.
* [7] J.-Y. Audibert, S. Bubeck, Best Arm Identification in Multi-Armed Bandits, in: COLT - 23rd Conference on Learning Theory, Haifa, Israel, 2010, p. 13.
URL https://hal-enpc.archives-ouvertes.fr/hal-00654404
* [8] C. Caferov, B. Kaya, R. O’Donnell, A. C. Say, Optimal bounds for estimating entropy with pmf queries, in: International Symposium on Mathematical Foundations of Computer Science, Springer, 2015, pp. 187–198.
* [9] J. Acharya, A. Orlitsky, A. T. Suresh, H. Tyagi, Estimating rényi entropy of discrete distributions, IEEE Transactions on Information Theory 63 (1) (2016) 38–56.
* [10] Y. Hao, A. Orlitsky, Data amplification: Instance-optimal property estimation, arXiv preprint arXiv:1903.01432 (2019).
* [11] Y. Wu, P. Yang, Sample complexity of the distinct elements problem, Mathematical Statistics and Learning 1 (1) (2018) 37–72.
* [12] Y. Hao, A. Orlitsky, Unified sample-optimal property estimation in near-linear time, in: Advances in Neural Information Processing Systems, 2019, pp. 11104–11114.
* [13] H. Chernoff, Estimation of the mode, Annals of the Institute of Statistical Mathematics 16 (1) (1964) 31–41.
* [14] E. Parzen, On estimation of a probability density function and mode, The Annals of Mathematical Statistics 33 (3) (1962) 1065–1076.
* [15] D. Shah, T. Choudhury, N. Karamchandani, A. Gopalan, Sequential mode estimation with oracle queries, Proceedings of the AAAI Conference on Artificial Intelligence 34 (04) (2020) 5644–5651. doi:10.1609/aaai.v34i04.6018.
URL https://ojs.aaai.org/index.php/AAAI/article/view/6018
* [16] Z. Karnin, T. Koren, O. Somekh, Almost optimal exploration in multi-armed bandits, in: S. Dasgupta, D. McAllester (Eds.), Proceedings of the 30th International Conference on Machine Learning, Vol. 28 of Proceedings of Machine Learning Research, PMLR, Atlanta, Georgia, USA, 2013, pp. 1238–1246.
URL https://proceedings.mlr.press/v28/karnin13.html
* [17] E. Kaufmann, O. Cappé, A. Garivier, On the Complexity of Best Arm Identification in Multi-Armed Bandit Models, Journal of Machine Learning Research 17 (2016) 1–42.
* [18] A. Carpentier, A. Locatelli, Tight (lower) bounds for the fixed budget best arm identification bandit problem, in: V. Feldman, A. Rakhlin, O. Shamir (Eds.), 29th Annual Conference on Learning Theory, Vol. 49 of Proceedings of Machine Learning Research, PMLR, Columbia University, New York, New York, USA, 2016, pp. 590–604.
URL https://proceedings.mlr.press/v49/carpentier16.html
* [19] Z. Karnin, T. Koren, O. Somekh, Almost optimal exploration in multi-armed bandits, in: International Conference on Machine Learning, 2013, pp. 1238–1246.
* [20] Properati, Real estate listings - brazil, https://data.world/properati/real-estate-listings-brazil, accessed: 2021-05-24 (2016).
* [21] M. Cox, Inside airbnb - new york city, http://insideairbnb.com/get-the-data.html, accessed: 2021-05-24 (2021).
* [22] M. Jolly, Trending youtube video statistics, https://www.kaggle.com/datasnaek/youtube-new, accessed: 2021-05-24 (2019).
* [23] V. Moulos, Optimal best markovian arm identification with fixed confidence, in: H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, R. Garnett (Eds.), Advances in Neural Information Processing Systems, 2019.
* [24] T. M. Cover, J. A. Thomas, Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing), Wiley-Interscience, USA, 2006.
|
Menglu Li$^{1}$, Xiao-Ping Zhang$^{2,*}$
# Interpretable Temporal Class Activation Representation for Audio Spoofing
Detection
###### Abstract
Explaining the decisions made by audio spoofing detection models is crucial
for fostering trust in detection outcomes. However, current research on the
interpretability of detection models is limited to applying XAI tools to post-
trained models. In this paper, we utilize the wav2vec 2.0 model and attentive
utterance-level features to integrate interpretability directly into the
model's architecture, thereby enhancing transparency of the decision-making
process. Specifically, we propose a class activation representation to
localize the discriminative frames contributing to detection. Furthermore, we
demonstrate that multi-label training based on spoofing types, rather than
binary labels as bonafide and spoofed, enables the model to learn distinct
characteristics of different attacks, significantly improving detection
performance. Our model achieves state-of-the-art results, with an EER of 0.51%
and a min t-DCF of 0.0165 on the ASVspoof2019-LA set.
###### keywords:
Deepfake audio, XAI, Speech Anti-spoofing, Deepfake detection,
Interpretability
## 1 Introduction
∗X.-P. Zhang is the corresponding author.
Audio spoofing detection techniques have recently gained attention due to the
threat spoofed audio poses to automatic speaker verification (ASV) systems and
its harmful impact on society. Spoofing countermeasures attempt to improve
detection performance by developing both classifier structures and feature
extraction methods. Detection algorithms usually operated on hand-crafted
acoustic features [1, 2, 3] until the emergence of advanced learnable front-
ends, including convolution-based networks [4, 5] and self-supervised learning
(SSL)-based architectures [6, 7], which outperform traditional feature
extraction. Regarding classifier architectures, deep learning (DL)-based
countermeasures have demonstrated their advantage over traditional machine
learning models such as GMM [8]. Notably, light CNN [9, 10], ResNet [11, 12],
and DARTS [13, 14] algorithms have achieved great success in increasing
detection accuracy on both known and unknown spoofing attacks generated by
Text-to-Speech (TTS) and Voice Conversion (VC) techniques.
While current state-of-the-art models achieve high accuracy on publicly
available datasets, they fail to provide explanations for their detection
outcomes or the decision-making process due to the black-box nature of DL-
based techniques. Designing a detection algorithm with higher
interpretability, especially through visualization, is crucial. It not only
clarifies how and why a detection model makes decisions, fostering trust in
the algorithm and its outputs, but also supports understanding of the
algorithm's architecture and enables potential improvements by adjusting
specific components. To address this problem, we propose an interpretable audio spoofing
detection model with a guaranteed high detection rate. We utilize a pre-
trained SSL model to extract frame-level features, which are then combined
with utterance-level information to form a high-level embedding of the audio
input. Our proposed detection pipeline incorporates an attention mechanism
along feature channels to generate a temporal class activation representation.
This representation effectively localizes the most discriminative frames
contributing to different labels in the detection process, while also making
them visually accessible.
The new contributions in this work are (1) We propose a novel audio spoofing
detection model that leverages an effective feature set comprising SSL-based
frame-level features and attentive utterance-level features. (2) The proposed
model provides a class activation map as a visualizable interpretation of
detection results, revealing the underlying temporal dynamics. (3) We
demonstrate the effectiveness of employing multi-label classification
training, rather than binary labels, to learn distinct characteristics of TTS
and VC-based artifacts.
## 2 Related Work
A group of existing works have utilized explainable artificial intelligence
(XAI) tools to uncover the behaviour of deep neural network algorithms in
detecting spoofed audio [15]. Ge et al. [16] utilize SHapley Additive
exPlanations (SHAP) [17] to identify the characteristics of artifacts
associated with various spoofing attacks. Lim et al. [18] apply both Deep
Taylor [19] and layer-wise relevance propagation (LRP) [20] to learn the
attribution scores of audio features in spectrograms. The Gradient-weighted Class Activation Mapping
(Grad-CAM) [21] is used in [22] to identify the significant regions in the
spectrogram. Motivated by Grad-CAM, we construct a learnable class activation
mechanism to localize the discriminative regions. However, unlike the existing
approaches that apply XAI tools externally, our method provides internal
justification for the decision-making process by considering both detection
capability and outcome interpretability simultaneously. We utilize the class
activation representation within our proposed detection model to identify and
visualize the crucial frames that determine detection outcomes.
## 3 Proposed Model
In this section, we elaborate on the feature extraction module and the
detection pipeline of our model. The feature extraction module produces both
frame-level and utterance-level representations. The detection pipeline
includes a channel attention mechanism conditioned on the temporal features.
The architecture of the proposed detection model is illustrated in Figure 1.
Figure 1: Overall architecture of the proposed audio spoofing detection model.
### 3.1 The SSL-based feature at the frame level
SSL models [23, 24] have demonstrated their ability to generate latent
representations of raw audio waveforms. Our proposed model utilizes a pre-
trained wav2vec 2.0 XLS-R model [25] as the front-end feature extractor to
obtain temporal representations of the raw audio inputs. The wav2vec 2.0
model consists of a CNN-based encoder module, a context network with the
Transformer architecture, and a quantization module, producing a quantized
latent speech representation that captures dependencies across the entire
audio sequence [23]. The selected wav2vec 2.0 XLS-R model, with 300 million
parameters, is pre-trained on unlabelled speech data from various sources in
multiple languages. During the training phase, we fine-tune all parameters of
this pre-trained model together with our downstream classifier using labelled
training data, so that the SSL-based front-end learns deep embeddings better
adapted to the spoofing detection task. Given
an input audio $x$, the corresponding frame-level feature representation
$\bm{S^{f}}\in\mathbb{R}^{T\times C}$ is extracted, where $T$ and $C$ refer to
the number of time frames and channels, respectively. The feature
representation is then fed to two stacks consisting of a fully connected (FC)
layer, batch normalization (BN) with ReLU activation, and a dropout layer for
data downsampling.
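A minimal PyTorch sketch of this front-end is shown below; it assumes the HuggingFace `transformers` implementation of wav2vec 2.0 and uses the layer sizes reported later in Section 4.2 (1024 $\rightarrow$ 512 $\rightarrow$ 128, dropout 0.2). It is an illustrative reconstruction, not the authors' released code:

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model  # checkpoint name is given in Sec. 4.2

class FrameFrontEnd(nn.Module):
    """wav2vec 2.0 XLS-R features followed by two FC + BN + ReLU + dropout
    stacks (1024 -> 512 -> 128); an illustrative sketch of Sec. 3.1."""
    def __init__(self, name="facebook/wav2vec2-xls-r-300m", p=0.2):
        super().__init__()
        self.ssl = Wav2Vec2Model.from_pretrained(name)   # fine-tuned end to end
        self.stacks = nn.ModuleList([
            nn.ModuleDict(dict(fc=nn.Linear(1024, 512),
                               bn=nn.BatchNorm1d(512), drop=nn.Dropout(p))),
            nn.ModuleDict(dict(fc=nn.Linear(512, 128),
                               bn=nn.BatchNorm1d(128), drop=nn.Dropout(p))),
        ])

    def forward(self, wav):                        # wav: (B, samples) at 16 kHz
        s = self.ssl(wav).last_hidden_state        # (B, T, 1024), one frame / 20 ms
        for st in self.stacks:
            s = st["fc"](s)
            s = st["bn"](s.transpose(1, 2)).transpose(1, 2)  # BN over channels
            s = torch.relu(s)
            s = st["drop"](s)
        return s                                   # S^f: (B, T, C = 128)
```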
### 3.2 Attentive statistical feature at the utterance level
The deep embedding extracted from the wav2vec 2.0 model represents the speech
information at the frame level. However, utterance-level information is also
crucial to spoofing detection. Therefore, we implement attentive statistical
pooling [26] on the frame-level embedding to obtain an utterance-level
feature. Given the frame-level embedding, $\bm{S^{f}}$, we first
calculate its frame-level attentive score $e_{t}$ for each frame $t$ by:
$\displaystyle e_{t}$ $\displaystyle=f(W\bm{S}^{f}_{t}+b),$ (1)
where $f(\cdot)$ is the tanh function, and the parameters $W$ and $b$ are shared
across all $C$ channels to avoid overfitting. Then, the score $e_{t}$ is
normalized over all frames using a softmax function:
$\displaystyle\alpha_{t}$
$\displaystyle=\frac{\exp(e_{t})}{\sum_{\tau}^{T}\exp(e_{\tau})}.$ (2)
The normalized attentive score represents the importance of each frame $t$,
and serves as the weight applied to the embedding, $\bm{S^{f}}$, to calculate
the weighted mean and standard deviation, as follows:
$\displaystyle\tilde{\mu}$
$\displaystyle=\sum_{t}^{T}\alpha_{t}\bm{S}^{f}_{t},$ (3)
$\displaystyle\tilde{\sigma}$
$\displaystyle=\sqrt{\sum_{t}^{T}\alpha_{t}\bm{S}^{f}_{t}\odot\bm{S}^{f}_{t}-\tilde{\mu}\odot\tilde{\mu}}.$
(4)
The weighted mean $\tilde{\mu}$ and the weighted standard deviation
$\tilde{\sigma}$ are concatenated and projected into a 1-D representation
$\bm{S^{u}}\in\mathbb{R}^{1\times C}$ as an utterance-level feature. These
weighted statistics describe the distribution of important frames across the
entire utterance. In this way, the utterance-level feature provides a higher-
level perspective, focusing on specific frames to emphasize discriminative
factors in the spoofing process.
The utterance-level feature $\bm{S^{u}}$ and the frame-level feature
$\bm{S^{f}}$ are concatenated along the time dimension to form the final
feature representation $\bm{S}\in\mathbb{R}^{T^{\prime}\times C}$, where
$T^{\prime}=T+1$.
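A hedged PyTorch reconstruction of Equations (1)-(4) follows; the exact shape of the score layer and the final projection to $\mathbb{R}^{1\times C}$ are our assumptions, since the text specifies them only up to the shared parameters $W$ and $b$:

```python
import torch
import torch.nn as nn

class AttentiveStatPooling(nn.Module):
    """Attentive statistical pooling, Eqs. (1)-(4): frame scores e_t, softmax
    weights alpha_t, weighted mean/std, projection to an utterance feature."""
    def __init__(self, C=128):
        super().__init__()
        self.score = nn.Linear(C, 1)        # W, b shared across channels, Eq. (1)
        self.proj = nn.Linear(2 * C, C)     # maps [mu; sigma] to a 1 x C feature

    def forward(self, Sf):                  # Sf: (B, T, C)
        e = torch.tanh(self.score(Sf))      # (B, T, 1), Eq. (1)
        alpha = torch.softmax(e, dim=1)     # normalized over frames, Eq. (2)
        mu = (alpha * Sf).sum(dim=1)        # weighted mean, Eq. (3)
        var = (alpha * Sf * Sf).sum(dim=1) - mu * mu
        sigma = torch.sqrt(var.clamp_min(1e-8))          # weighted std, Eq. (4)
        Su = self.proj(torch.cat([mu, sigma], dim=-1))   # utterance feature S^u
        return torch.cat([Sf, Su.unsqueeze(1)], dim=1)   # S: (B, T + 1, C)
```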
### 3.3 The detection pipeline using temporal class activation
Our downstream detection pipeline receives the extracted audio features to
determine the type of spoofing while also learning to identify when the
spoofing sounds occur, as further detailed below.
#### 3.3.1 Extracting channel attention vector (CAV)
Given the feature embedding $\bm{S}$, we extract the CAV to indicate the
importance of different feature channels contributing to each class type $k$,
as formulated in Equation 5:
$\displaystyle\bm{A}_{k}$ $\displaystyle=W_{k}^{\top}\bm{S},$ (5)
where $W_{k}\in\mathbb{R}^{T^{\prime}}$ is the weight corresponding to $k$-th
class across all time frames, and $\bm{A}_{k}\in\mathbb{R}^{C}$ is the CAV for
each class.
#### 3.3.2 Classifier on CAV with WCE loss
We pass $\bm{A}_{k}$ through an FC layer to make the first label prediction,
resulting in a prediction logit vector $\bm{z}\in\mathbb{R}^{K}$, where $K$ is
the total number of classes. The $\bm{z}$ and the utterance-level label
$\bm{y}$ are compared to compute a weighted multi-label cross-entropy (WCE)
loss in the following formula:
$\displaystyle\mathcal{L}_{CAV}$
$\displaystyle=-\frac{1}{K}\sum_{k=1}^{K}\bm{W}_{CE}[k]\cdot\log\frac{\exp(\bm{z}[k])}{\sum_{k}\exp(\bm{z}[k])}\cdot\bm{y}[k],$
(6)
where $\bm{z}[k]$ and $\bm{y}[k]\in\\{0,1\\}$ denote the predicted logit and
the ground-truth label of the $k$-th class, and $\bm{W}_{CE}$ is the weight
assigned to each class $k$.
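A one-function PyTorch sketch of Equation (6) (batch averaging is our assumption) could look as follows:

```python
import torch

def weighted_ce(z, y, w):
    """Eq. (6): z are (B, K) logits, y are (B, K) {0,1} labels, w is a (K,)
    class-weight vector (8, 1, 1 for bonafide/TTS/VC per Sec. 4.2)."""
    logp = torch.log_softmax(z, dim=-1)
    return -((w * y * logp).sum(dim=-1) / z.shape[-1]).mean()
```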
#### 3.3.3 Extracting temporal class activation (TCA) feature
We implement a learnable gating mechanism on the CAV, $\bm{A}_{k}$, which
effectively selects and emphasizes the discriminative feature channels. The
gating mechanism is an FC layer with a softmax function along the dimension of
the feature channel, which gives an attentive tensor denoted as
$\bm{M}\in\mathbb{R}^{C\times K}$. Then, we apply $\bm{M}$ to the feature
embedding $\bm{S}$ through the inner product. Additionally, the prediction
logit vector $\bm{z}$ is used as a class-specific mask, thereby generating a
TCA feature $\bm{S^{\prime}}\in\mathbb{R}^{T^{\prime}\times C}$, which
highlights the discriminative regions along the temporal domain for each
class. We obtain $\bm{S^{\prime}}$ as follows,
$\displaystyle\bm{M}_{c,k}$
$\displaystyle=\frac{\exp(w_{gate}\bm{A}_{c,k})}{\sum_{c}^{C}\exp(w_{gate}\bm{A}_{c,k})},$
(7) $\displaystyle\bm{S^{\prime}}$
$\displaystyle=\bm{z}\cdot(\bm{S}\odot\bm{M}).$ (8)
where $\bm{A_{c,k}}$ denotes the $c$-th item of the channel attention vector
$\bm{A_{k}}$, and $w_{gate}$ acts as a scalar weight for the gating mechanism.
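The sketch below shows one plausible reading of Equations (5), (7) and (8); how the logit vector $\bm{z}$ masks the classes and how $\bm{M}$ is applied to $\bm{S}$ are only loosely specified in the text, so those shapes are our assumptions:

```python
import torch
import torch.nn as nn

class TemporalClassActivation(nn.Module):
    """Per-class channel attention vectors A_k (Eq. 5), a first classifier on
    A_k, a softmax gate over channels (Eq. 7), and a z-masked temporal class
    activation feature S' (Eq. 8); an illustrative reconstruction."""
    def __init__(self, T_prime, C=128, K=3):
        super().__init__()
        self.W = nn.Parameter(torch.randn(T_prime, K) * 0.01)  # W_k in R^{T'}
        self.fc = nn.Linear(C, 1)                  # first classifier on each A_k
        self.w_gate = nn.Parameter(torch.ones(1))  # scalar gate weight

    def forward(self, S):                          # S: (B, T', C)
        A = torch.einsum("btc,tk->bkc", S, self.W)     # CAVs, Eq. (5)
        z = self.fc(A).squeeze(-1)                     # logits (B, K)
        M = torch.softmax(self.w_gate * A, dim=-1)     # gate over channels, Eq. (7)
        # Eq. (8): combine the channel-gated copies of S, weighted by z scores
        S_prime = torch.einsum("bk,btc,bkc->btc", torch.softmax(z, -1), S, M)
        return z, S_prime                              # S': (B, T', C)
```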
#### 3.3.4 Classifier on TCA feature with WCE loss
The second classifier operates based on the TCA feature $\bm{S^{\prime}}$. As
$\bm{S^{\prime}}$ contains both frame-level and utterance-level information,
instead of aggregating all feature elements in $\bm{S^{\prime}}$ using global
pooling, we separate
$\bm{S^{\prime}}=\\{s^{\prime}_{1},s^{\prime}_{2},...,s^{\prime}_{T},s^{\prime}_{T+1}\\}$
into two segments
$\bm{S^{\prime}_{f}}=\\{s^{\prime}_{1},s^{\prime}_{2},...,s^{\prime}_{T}\\}$
of length $T$, and $\bm{S^{\prime}_{u}}=\\{s^{\prime}_{T+1}\\}$, and
then apply average pooling to each feature segment. An FC layer is applied
after the pooling operation to obtain a new prediction logit vector
$\bm{z^{\prime}}\in\mathbb{R}^{K}$ for $\bm{S^{\prime}}$. $\bm{z^{\prime}}$ is
also used to compute a weighted CE loss using Equation 6, resulting in
$\mathcal{L}_{TCA}$. Both $\mathcal{L}_{TCA}$ and $\mathcal{L}_{CAV}$ utilize
the same weight $\bm{W}_{CE}$ across each class.
#### 3.3.5 Overall objective function
The overall objective function for the detection model is
$\displaystyle\mathcal{L}$
$\displaystyle=\lambda_{1}\mathcal{L}_{CAV}+\lambda_{2}\mathcal{L}_{TCA}$ (9)
where $\lambda_{1}$ and $\lambda_{2}$ are weights that balance the two
individual losses; they are set to 0.3 and 0.7, respectively, in our model.
## 4 Experiment and Evaluation
### 4.1 Dataset and evaluation metrics
We use the ASVspoof2019 logical access (LA) dataset [27] for the experiments.
The spoofed data in the training and development sets are generated by four
TTS methods and two VC methods, while the evaluation set consists of 13
different and unseen methods to evaluate the generalization ability of the
detector. We fix all audio samples to the same length of 4 seconds either by
truncating the longer audio clips or concatenating the shorter audio clips
repeatedly. We evaluate the detection performance with two metrics: minimum
normalized tandem detection cost function (min t-DCF) [28] and the Equal Error
Rate (EER). A detection result with a lower min t-DCF value or EER score is
regarded to be more accurate.
Table 1: Performance on the ASVspoof 2019 evaluation set in terms of min t-DCF and pooled EER for state-of-the-art single systems and our proposed system.

System | Front-end | min t-DCF | EER(%)
---|---|---|---
Ours | wav2vec 2.0 | 0.0165 | 0.51
Ma et al. [29] | raw waveform | 0.0187 | 0.64
Jung et al. [4] | SincNet | 0.0275 | 0.83
Li et al. [30] | SincNet | 0.0317 | 0.93
Ma et al. [31] | LFCC | 0.0294 | 0.98
Tak et al. [5] | SincNet | 0.0335 | 1.06
Li et al. [2] | LFCC | 0.0345 | 1.06
Luo et al. [1] | LFCC | 0.0328 | 1.07
Wang et al. [13] | wav2vec 2.0 | - | 1.08
Wang et al. [6] | wav2vec 2.0 | - | 1.28
Yang et al. [32] | CQT | 0.0490 | 1.54
Hua et al. [11] | raw waveform | - | 1.64
Ge et al. [14] | raw waveform | 0.0517 | 1.77
Table 2: Breakdown of EER (%) performance for all 13 attacks in the ASVspoof2019 LA evaluation set with attack types specified (TTS, VC). Our proposed model, the single state-of-the-art and ablation study results are reported. Pooled EER is shown in the last column.

System | A07 | A08 | A09 | A10 | A11 | A12 | A13 | A14 | A15 | A16 | A17 | A18 | A19 | EER(%)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Type | TTS | TTS | TTS | TTS | TTS | TTS | VC | VC | VC | TTS | VC | VC | VC |
Proposed | 0.02 | 0.24 | 0.04 | 0.29 | 0.06 | 0.61 | 0.01 | 0.10 | 1.24 | 0.01 | 0.61 | 0.30 | 0.06 | 0.51
Ma et al. [29] | 0.15 | 0.15 | 0.02 | 0.41 | 0.10 | 0.06 | 0.02 | 0.35 | 0.41 | 0.30 | 2.19 | 0.27 | 0.42 | 0.64
w/o utterance-level feature | 0.01 | 0.02 | 0.01 | 1.32 | 0.02 | 0.42 | 0.00 | 0.08 | 2.71 | 0.00 | 5.74 | 0.61 | 0.04 | 1.12
w/o classifier on CAV | 0.00 | 0.02 | 0.00 | 0.24 | 0.02 | 0.06 | 0.01 | 0.06 | 0.83 | 0.00 | 2.01 | 1.47 | 0.08 | 0.61
w/ binary label | 0.00 | 0.55 | 0.01 | 0.83 | 0.04 | 2.57 | 0.05 | 0.53 | 9.45 | 0.00 | 14.4 | 1.06 | 0.08 | 4.26
Figure 2: Visualizing the temporal class activation feature on the selected
samples, each spoofed audio sample is labeled with its attack type. Different
colors denote the class-relevant attention, with color intensity representing
the level of contribution to detection results.
### 4.2 Model implementation details with multi-label training
The model is implemented with the PyTorch framework. We adopt the pre-trained
XLS-R model (https://huggingface.co/facebook/wav2vec2-xls-r-300m) with 300
million parameters based on the wav2vec 2.0 base model. The XLS-R model is
pre-trained on 436k hours of unlabeled speech in 128 languages, and produces a
speech embedding of dimension 1024 for every 20 milliseconds of audio. The
resulting embedding from the wav2vec 2.0 model is compressed to a size of 128
by two linear layers with 512 and 128 hidden units, with dropout layers set at
a rate of 0.2.
During training and validation, we consider spoofing detection as a multi-
label classification problem instead of a binary classification problem. Based
on the spoofing generation types, the data labelled as spoofed in the training
and validation subsets are categorized into two groups, TTS and VC. Therefore, the
ground truth includes three classes of labels in total, which are bonafide,
TTS spoofed and VC spoofed. We believe that multi-label training will
encourage the model to learn more distinct characteristics to identify TTS and
VC-generated speech, thereby potentially increasing the accuracy of detecting
spoofing speech.
To manage the data imbalance in the training set, we utilize the WCE loss,
where the weights assigned to bonafide, TTS spoofed and VC spoofed are 8, 1,
and 1, respectively. An Adam optimizer [33] with a weight decay of $10^{-4}$
is used. The model was trained for 50 epochs with a mini-batch size of 10 and
a learning rate of $10^{-5}$. The model with the minimum validation loss on
the development set was selected as the best model for evaluation. All
experiments were performed on a single GeForce RTX 3090 GPU, and the
implementation code is publicly available at
https://github.com/menglu-lml/Interpretable-Detection-IS24.
### 4.3 Experiment result
Table 1 presents the performance of our proposed model and compares it with
state-of-the-art single systems. The comparison highlights that our model
outperforms not only other single models utilizing SSL-based features but
also end-to-end detection systems and systems employing a variety of other
feature types, including learnable feature embeddings and hand-crafted
acoustic features.
The first two rows of Table 2 present the breakdown performance of our
proposed model and the state-of-the-art for each spoofing attack in the
evaluation set. The results show that our model effectively detects both
TTS and VC-based spoofed speech, and outperforms the state-of-the-art method
on 8 attacks. In particular, our model achieves a notably low EER score on the
A17 attack, which is labelled as the worst-case scenario among all attacks
[34]. This is significant because A17 uses a direct waveform concatenation
method on bonafide human voice, making it a greater challenge for detection.
### 4.4 Ablation study
The last three rows of Table 2 illustrate the results of the ablation
experiments, which demonstrate the merit of our design choices. Removing the
utterance-level part of the feature embedding leads to a performance
degradation of 54.5% in terms of EER. It shows that the attentive utterance-
level feature effectively emphasizes discriminative frames contributing to the
detection process across the entire utterance. We demonstrate the underlying
connection of the CAV to the detection process by ablating the WCE loss on the
CAV in the objective function. This leads to a 16% degradation in EER (from
0.51% to 0.61%). Notably, with the loss on the CAV, our model enhances
performance in detecting VC attacks involving the direct conversion of human
voices (A17-A19). The effectiveness of multi-label training is also presented.
Using binary labels in training results in a degradation to 4.26% in EER, with
the decline primarily attributed to the failure to detect VC attacks. It shows
that multi-label training allows the detection model to learn the
discriminative factors in TTS and VC-based spoofed audio separately, gaining a
deeper understanding of the different characteristics of each attack type.
### 4.5 Evaluation of the visual interpretability
As Figure 2 shows, we visualize the temporal class activation feature on the
audio samples within the evaluation dataset. The visualization uses different
colors to denote the detected audio types for each frame, including TTS-based
spoofed, VC-based spoofed, or bonafide. The intensity of color represents the
detection confidence. Notably, bonafide and TTS-based spoofed (A07-A12, A16)
audio samples are correctly classified in Figure 2. However, some VC-based
spoofed samples (A13-A15) are mislabelled as TTS-based, as indicated by the
activation feature's color. This occurs because the audio samples in A13-A15
are generated by combined VC-TTS spoofing systems, where a TTS voice serves
as the source speaker for the VC model. In such cases, our model effectively
detects the TTS-based voice source in attacks A13-A15, demonstrating that it
has learned the distinct characteristics of spoofed audio generated by TTS and
VC. This is further supported by the correct classification of spoofed audio
generated by pure VC models that utilize human voice as the source (A17-A19).
Additionally, Figure 2 localizes the most discriminative frames in the
detection process, providing justification for the decision made by our
proposed model.
## 5 Conclusion
We are the first to incorporate interpretability directly into the
architecture of audio spoofing detection models, enhancing the transparency of
their decision-making processes while ensuring a high detection rate. The
proposed learnable class activation mechanism identifies and visualizes the
crucial frames that contribute to detection outcomes. Our model achieves an
EER of 0.51% on the ASVspoof2019 LA set by leveraging utterance-level features
and multi-label classification training. As future work, we aim to apply this
interpretable model to detecting partially spoofed audio by localizing the
spoofed segments.
## References
* [1] A. Luo, E. Li, Y. Liu, X. Kang, and Z. J. Wang, ``A capsule network based approach for detection of audio spoofing attacks,'' in _ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2021, pp. 6359–6363.
* [2] C. Li, F. Yang, and J. Yang, ``The role of long-term dependency in synthetic speech detection,'' _IEEE Signal Processing Letters_ , vol. 29, pp. 1142–1146, 2022.
* [3] M. Li and X.-P. Zhang, ``Robust audio anti-spoofing system based on low-frequency sub-band information,'' in _2023 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA)_. IEEE, 2023, pp. 1–5.
* [4] J.-w. Jung, H.-S. Heo, H. Tak, H.-j. Shim, J. S. Chung, B.-J. Lee, H.-J. Yu, and N. Evans, ``Aasist: Audio anti-spoofing using integrated spectro-temporal graph attention networks,'' in _ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2022, pp. 6367–6371.
* [5] H. Tak, J. weon Jung, J. Patino, M. Kamble, M. Todisco, and N. Evans, ``End-to-end spectro-temporal graph attention networks for speaker verification anti-spoofing and speech deepfake detection,'' in _Proc. 2021 Edition of the Automatic Speaker Verification and Spoofing Countermeasures Challenge_ , 2021.
* [6] X. Wang and J. Yamagishi, ``Investigating self-supervised front ends for speech spoofing countermeasures,'' in _Proc. The Speaker and Language Recognition Workshop (Odyssey 2022)_ , 2022, pp. 100–106.
* [7] H. Tak, M. Todisco, X. Wang, J.-w. Jung, J. Yamagishi, and N. Evans, ``Automatic speaker verification spoofing and deepfake detection using wav2vec 2.0 and data augmentation,'' in _The Speaker and Language Recognition Workshop_ , 2022.
* [8] M. Li, Y. Ahmadiadli, and X.-P. Zhang, ``A comparative study on physical and perceptual features for deepfake audio detection,'' in _Proceedings of the 1st International Workshop on Deepfake Detection for Audio Multimedia_ , 2022, pp. 35–41.
* [9] X. Ma, T. Liang, S. Zhang, S. Huang, and L. He, ``Improved lightcnn with attention modules for asv spoofing detection,'' in _2021 IEEE International Conference on Multimedia and Expo (ICME)_. IEEE, 2021, pp. 1–6.
* [10] A. Tomilov, A. Svishchev, M. Volkova, A. Chirkovskiy, A. Kondratev, and G. Lavrentyeva, ``Stc antispoofing systems for the asvspoof2021 challenge,'' in _Proc. ASVspoof 2021 Workshop_ , 2021, pp. 61–67.
* [11] G. Hua, A. B. J. Teoh, and H. Zhang, ``Towards end-to-end synthetic speech detection,'' _IEEE Signal Processing Letters_ , vol. 28, pp. 1265–1269, 2021.
* [12] X. Li, N. Li, C. Weng, X. Liu, D. Su, D. Yu, and H. Meng, ``Replay and synthetic speech detection with res2net architecture,'' in _ICASSP 2021-2021 IEEE international conference on acoustics, speech and signal processing (ICASSP)_. IEEE, 2021, pp. 6354–6358.
* [13] C. Wang, J. Yi, J. Tao, H. Sun, X. Chen, Z. Tian, H. Ma, C. Fan, and R. Fu, ``Fully automated end-to-end fake audio detection,'' in _Proceedings of the 1st International Workshop on Deepfake Detection for Audio Multimedia_ , 2022, pp. 27–33.
* [14] W. Ge, J. Patino, M. Todisco, and N. Evans, ``Raw Differentiable Architecture Search for Speech Deepfake and Spoofing Detection,'' in _Proc. 2021 Edition of the Automatic Speaker Verification and Spoofing Countermeasures Challenge_ , 2021, pp. 22–28.
* [15] M. Li, Y. Ahmadiadli, and X.-P. Zhang, ``Audio anti-spoofing detection: A survey,'' _arXiv preprint arXiv:2404.13914_ , 2024.
* [16] W. Ge, M. Todisco, and N. Evans, ``Explainable Deepfake and Spoofing Detection: An Attack Analysis Using SHapley Additive exPlanations,'' in _Proc. The Speaker and Language Recognition Workshop (Odyssey 2022)_ , 2022, pp. 70–76.
* [17] S. M. Lundberg and S.-I. Lee, ``A unified approach to interpreting model predictions,'' _Advances in neural information processing systems_ , vol. 30, 2017.
* [18] S.-Y. Lim, D.-K. Chae, and S.-C. Lee, ``Detecting deepfake voice using explainable deep learning techniques,'' _Applied Sciences_ , vol. 12, no. 8, p. 3926, 2022.
* [19] G. Montavon, S. Lapuschkin, A. Binder, W. Samek, and K.-R. Müller, ``Explaining nonlinear classification decisions with deep taylor decomposition,'' _Pattern recognition_ , vol. 65, pp. 211–222, 2017.
* [20] H. Bharadhwaj, ``Layer-wise relevance propagation for explainable deep learning based speech recognition,'' in _2018 IEEE International symposium on signal processing and information technology (ISSPIT)_. IEEE, 2018, pp. 168–174.
* [21] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, ``Grad-cam: Visual explanations from deep networks via gradient-based localization,'' in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 618–626.
* [22] B. Halpern, F. Kelly, R. van Son, and A. Alexander, ``Residual Networks for Resisting Noise: Analysis of an Embeddings-based Spoofing Countermeasure,'' in _Proc. The Speaker and Language Recognition Workshop (Odyssey 2020)_ , 2020, pp. 326–332.
* [23] A. Baevski, Y. Zhou, A. Mohamed, and M. Auli, ``wav2vec 2.0: A framework for self-supervised learning of speech representations,'' _Advances in neural information processing systems_ , vol. 33, pp. 12 449–12 460, 2020.
* [24] W.-N. Hsu, B. Bolte, Y.-H. H. Tsai, K. Lakhotia, R. Salakhutdinov, and A. Mohamed, ``Hubert: Self-supervised speech representation learning by masked prediction of hidden units,'' _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ , vol. 29, pp. 3451–3460, 2021.
* [25] A. Babu, C. Wang, A. Tjandra, K. Lakhotia, Q. Xu, N. Goyal, K. Singh, P. von Platen, Y. Saraf, J. Pino _et al._ , ``Xls-r: Self-supervised cross-lingual speech representation learning at scale,'' _arXiv preprint arXiv:2111.09296_ , 2021.
* [26] Q. Wang, K. Okabe, K. A. Lee, H. Yamamoto, and T. Koshinaka, ``Attention mechanism in speaker recognition: What does it learn in deep speaker embedding?'' in _2018 IEEE Spoken Language Technology Workshop (SLT)_. IEEE, 2018, pp. 1052–1059.
* [27] X. Wang, J. Yamagishi, M. Todisco, H. Delgado, A. Nautsch, N. Evans, M. Sahidullah, V. Vestman, T. Kinnunen, K. A. Lee _et al._ , ``Asvspoof 2019: A large-scale public database of synthesized, converted and replayed speech,'' _Computer Speech & Language_, vol. 64, p. 101114, 2020.
* [28] T. Kinnunen, K. A. Lee, H. Delgado, N. Evans, M. Todisco, M. Sahidullah, J. Yamagishi, and D. A. Reynolds, ``t-dcf: a detection cost function for the tandem assessment of spoofing countermeasures and automatic speaker verification,'' in _Speaker Odyssey 2018 The Speaker and Language Recognition Workshop_ , 2018.
* [29] Q. Ma, J. Zhong, Y. Yang, W. Liu, Y. Gao, and W. W. Ng, ``Convnext based neural network for anti-spoofing,'' _arXiv preprint arXiv:2209.06434_ , 2022.
* [30] C. Li, F. Yang, and J. Yang, ``Multi-scale information aggregation for spoofing detection,'' _Available at SSRN 4251042_.
* [31] X. Ma, S. Zhang, S. Huang, J. Gao, Y. Hu, and L. He, ``How to boost anti-spoofing with x-vectors,'' in _2022 IEEE Spoken Language Technology Workshop (SLT)_. IEEE, 2023, pp. 593–598.
* [32] M. Yang, K. Zheng, X. Wang, Y. Sun, and Z. Chen, ``Comparative analysis of asv spoofing countermeasures: Evaluating res2net-based approaches,'' _IEEE Signal Processing Letters_ , 2023.
* [33] D. P. Kingma and J. Ba, ``Adam: A method for stochastic optimization,'' _arXiv preprint arXiv:1412.6980_ , 2014.
* [34] M. Todisco, X. Wang, V. Vestman, M. Sahidullah, H. Delgado, A. Nautsch, J. Yamagishi, N. W. D. Evans, T. H. Kinnunen, and K. A. LEE, ``Asvspoof 2019: Future horizons in spoofed and fake audio detection,'' in _Interspeech_ , 2019.
|
# A note on stability properties of powers of polymatroidal ideals
Amir Mafi and Dler Naderi
Amir Mafi, Department of Mathematics, University of Kurdistan, P.O. Box: 416, Sanandaj, Iran. <EMAIL_ADDRESS>
Dler Naderi, Department of Mathematics, University of Kurdistan, P.O. Box: 416, Sanandaj, Iran. <EMAIL_ADDRESS>
###### Abstract.
Let $I$ be a matroidal ideal of degree $d$ of a polynomial ring
$R=K[x_{1},...,x_{n}]$, where $K$ is a field. Let $\operatorname{astab}(I)$
and $\operatorname{dstab}(I)$ be the smallest integers $m$ and $n$, for which
$\operatorname{Ass}(I^{m})$ and $\operatorname{depth}(I^{n})$ stabilize,
respectively. In this paper, we show that $\operatorname{astab}(I)=1$ if and
only if $\operatorname{dstab}(I)=1$. Moreover, we prove that if $d=3$, then
$\operatorname{astab}(I)=\operatorname{dstab}(I)$. Furthermore, we show that
if $I$ is an almost square-free Veronese type ideal of degree $d$, then
$\operatorname{astab}(I)=\operatorname{dstab}(I)=\lceil\frac{n-1}{n-d}\rceil$.
###### Key words and phrases:
Polymatroidal ideal, depth and associated primes stability number
###### 2010 Mathematics Subject Classification:
Primary 13A15; Secondary 13A30, 13C15
## Introduction
Throughout this paper, we assume that $R=K[x_{1},...,x_{n}]$ is the polynomial
ring in $n$ variables over a field $K$ with the maximal ideal
$\mathfrak{m}=(x_{1},...,x_{n})$, $I$ a monomial ideal of $R$ and $G(I)$ the
unique minimal monomial generators set of $I$. Let $\operatorname{Ass}(I)$ be
the set of associated prime ideals of $R/I$. Brodmann [2] showed that there
exists an integer $t_{0}$ such that
$\operatorname{Ass}(I^{t})=\operatorname{Ass}(I^{t_{0}})$ for all $t\geq
t_{0}$. The smallest such integer $t_{0}$ is called the index of Ass-stability
of $I$, and denoted by $\operatorname{astab}(I)$. Moreover,
$\operatorname{Ass}(I^{t_{0}})$ is called the stable set of associated prime
ideals of $I$. It is denoted by $\operatorname{Ass}^{\infty}(I)$. Brodmann [3]
also showed that there exists an integer $t_{0}$ such that
$\operatorname{depth}R/I^{t}=\operatorname{depth}R/I^{t_{0}}$ for all $t\geq
t_{0}$. The smallest such integer $t_{0}$ is called the index of depth
stability of $I$ and denoted by $\operatorname{dstab}(I)$. The first author
and Herzog [10] proved that if $n=3$ then, for any graded ideal $I$ of $R$,
$\operatorname{astab}(I)=\operatorname{dstab}(I)$. Also, they showed that for
$n=4$ the indices $\operatorname{astab}(I)$ and $\operatorname{dstab}(I)$ are
unrelated. Herzog, Rauf and Vladoiu [12] showed that for every polymatroidal
ideal of Veronese type $\operatorname{astab}(I)=\operatorname{dstab}(I)$ and
for every transversal polymatroidal ideal
$\operatorname{astab}(I)=1=\operatorname{dstab}(I)$. Herzog and Qureshi [11]
proved that if $I$ is a polymatroidal ideal of $R$, then
$\operatorname{astab}(I),\operatorname{dstab}(I)<\ell(I)$, where $\ell(I)$ is
the analytic spread of $I$, that is, the dimension of
$\mathcal{R}(I)/{{\mathfrak{m}}\mathcal{R}(I)}$, where $\mathcal{R}(I)$
denotes the Rees ring of $I$. Moreover, they conjectured that
$\operatorname{astab}(I)=\operatorname{dstab}(I)$ for all polymatroidal ideals
$I$. This conjecture does not have a positive answer in general, see [16] for
counterexamples. The ideals which provide these counterexamples are
polmatroidal ideals, but neither
$\mathfrak{m}\in\operatorname{Ass}^{\infty}(I)$ nor $I$ is matroidal. Thus it
is an open question whether all matroidal ideals and all polymatroidal ideals
with $\mathfrak{m}\notin\operatorname{Ass}^{\infty}(I)$ satisfy the equality
$\operatorname{astab}(I)=\operatorname{dstab}(I)$.
In this paper, we show that $\operatorname{astab}(I)=1$ if and only if
$\operatorname{dstab}(I)=1$. Also, we prove that if $I$ is a matroidal ideal
of degree $3$, then $\operatorname{astab}(I)=\operatorname{dstab}(I)$.
Furthermore, if $I$ is a polymatroidal ideal of degree $3$ and
$\mathfrak{m}\notin\operatorname{Ass}^{\infty}(I)$, then
$\operatorname{astab}(I)=\operatorname{dstab}(I)$. Finally, we show that if
$I$ is an almost square-free Veronese type ideal of degree $d$, then
$\operatorname{astab}(I)=\operatorname{dstab}(I)=\lceil\frac{n-1}{n-d}\rceil$.
For any unexplained notion or terminology, we refer the reader to [9] or [18].
Several explicit examples were computed with the help of the computer algebra
system Macaulay2 [6].
## 1\. Preliminaries
In this section, we recall some definitions and known results which are used
in this paper. Let, as before, $K$ be a field and $R=K[x_{1},\ldots,x_{n}]$ be
the polynomial ring in $n$ variables over $K$ with each $\deg x_{i}=1$. For a
monomial ideal $I$ of $R$ and $G(I)=\\{u_{1},\ldots,u_{t}\\}$, we set
$\operatorname{supp}(I)=\cup_{i=1}^{t}\operatorname{supp}(u_{i})$, where
$\operatorname{supp}(u)=\\{x_{i}|u=x_{1}^{a_{1}}\ldots x_{n}^{a_{n}},a_{i}\neq
0\\}$ and we set $\gcd(I)=\gcd(u_{1},\ldots,u_{t})$. The linear relation graph
$\Gamma_{I}$ associated to a monomial ideal is the graph whose vertex set
$V(\Gamma_{I})$ is a subset of $\\{x_{1},\ldots,x_{n}\\}$ for which
$\\{x_{i},x_{j}\\}\in E(\Gamma_{I})$ if and only if there exist
$u_{k},u_{l}\in G(I)$ such that $x_{i}u_{k}=x_{j}u_{l}$ (see [11, Definition
3.1]). We say that the monomial ideal $I$ is full-supported if
$\operatorname{supp}(I)=\\{x_{1},\ldots,x_{n}\\}$. The monomial localization
of a monomial ideal $I$ with respect to a monomial prime ideal $\mathfrak{p}$
is the monomial ideal $I(\mathfrak{p})$ which is obtained from $I$ by
substituting the variables $x_{i}\notin\mathfrak{p}$ by $1$. The monomial
localization $I(\mathfrak{p})$ can also be described as the saturation
$I:(\prod_{x_{i}\notin{\mathfrak{p}}}x_{i})^{\infty}$ and when $I$ is a
square-free monomial ideal we see that
$I(\mathfrak{p})=I:(\prod_{x_{i}\notin{\mathfrak{p}}}x_{i})$. Let
$\mathfrak{p}$ be a monomial prime ideal of $R$. Then
$\mathfrak{p}=\mathfrak{p}_{A}$ for some subset $A\subseteq\\{1,\ldots,n\\}$,
where $\mathfrak{p}_{A}=\\{x_{i}|i\notin A\\}$.
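For experimentation, monomial localization is straightforward to compute on exponent vectors; the small Python helper below (an illustration, not from the paper) sets $x_{i}=1$ for the variables outside $\mathfrak{p}$ and then discards non-minimal generators:

```python
def monomial_localization(gens, p_vars):
    """I(p) for a monomial ideal: monomials are exponent tuples, and p_vars is
    the set of indices of the variables generating the monomial prime p."""
    loc = {tuple(e if i in p_vars else 0 for i, e in enumerate(g)) for g in gens}
    # keep only minimal generators (u divides v iff u <= v componentwise)
    return {u for u in loc
            if not any(v != u and all(a <= b for a, b in zip(v, u)) for v in loc)}

# I = (x1*x3, x2*x3, x1*x4) in K[x1,...,x4], localized at p = (x1, x2):
gens = {(1, 0, 1, 0), (0, 1, 1, 0), (1, 0, 0, 1)}
print(monomial_localization(gens, {0, 1}))  # {(1,0,0,0), (0,1,0,0)} = (x1, x2)
```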
A monomial ideal $I$ is called a polymatroidal ideal, if it is generated in a
single degree with the exchange property that for each two elements $u,v\in
G(I)$ such that $\deg_{x_{i}}(u)>\deg_{x_{i}}(v)$ for some $i$, there exists
an integer $j$ such that $\deg_{x_{j}}(u)<\deg_{x_{j}}(v)$ and
$x_{j}(u/x_{i})\in I$. The polymatroidal ideal $I$ is called matroidal if $I$
is generated by square-free monomials (see [7]). For a polymatroidal ideal $I$
one can compute the analytic spread as $\ell(I)=r-s+1$, where
$r=|V(\Gamma_{I})|$ and $s$ is the number of connected components of
$\Gamma_{I}$ (see [11, Lemma 4.2]).
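The exchange property can be tested mechanically on a finite generating set. Below is a minimal Python sketch of such a test, again encoding monomials as exponent tuples (our own encoding, for illustration only):

```python
def is_polymatroidal(gens):
    """Check the exchange property on a list of same-degree exponent tuples:
    for all u, v in G(I) with deg_{x_i}(u) > deg_{x_i}(v), some j must satisfy
    deg_{x_j}(u) < deg_{x_j}(v) and x_j * (u / x_i) in G(I)."""
    G = set(map(tuple, gens))
    n = len(gens[0])
    for u in G:
        for v in G:
            for i in range(n):
                if u[i] > v[i]:
                    swaps = (tuple(a - (t == i) + (t == j)
                                   for t, a in enumerate(u))
                             for j in range(n) if u[j] < v[j])
                    if not any(w in G for w in swaps):
                        return False
    return True

# The square-free Veronese ideal I_{2;3} = (x1x2, x1x3, x2x3) is matroidal:
print(is_polymatroidal([(1, 1, 0), (1, 0, 1), (0, 1, 1)]))  # True
# (x1x2, x3x4) fails the exchange property:
print(is_polymatroidal([(1, 1, 0, 0), (0, 0, 1, 1)]))       # False
```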
The product of polymatroidal ideals is again polymatroidal (see [5, Theorem
5.3]). In particular each power of a polymatroidal ideal is polymatroidal.
Also, $I$ is a polymatroidal ideal if and only if $(I:u)$ is a polymatroidal
ideal for all monomials $u$ (see [1, Theorem 1.1]). Furthermore, localizations
of polymatroidal ideals at monomial prime ideals are again polymatroidal [12,
Corollary 3.2]. According to [11] and [12], every polymatroidal ideal
satisfies the persistence property and has a non-increasing depth function;
that is, if $I$ is a polymatroidal ideal, then for all $k$ we have
$\operatorname{Ass}(I^{k})\subseteq\operatorname{Ass}(I^{k+1})$ and
$\operatorname{depth}(R/I^{k+1})\leq\operatorname{depth}(R/I^{k}).$
In addition, every polymatroidal ideal is a normal ideal (see [12, Theorem
3.4]). One of the most distinguished polymatroidal ideals is the ideal of
Veronese type. Consider the fixed positive integers $d$ and $1\leq
a_{1}\leq\ldots\leq a_{n}\leq d$. The ideal of Veronese type of $R$ indexed by
$d$ and $(a_{1},\ldots,a_{n})$ is the ideal $I_{(d;a_{1},\ldots,a_{n})}$ which
is generated by those monomials $u=x_{1}^{i_{1}}\ldots x_{n}^{i_{n}}$ of $R$
of degree $d$ with $i_{j}\leq a_{j}$ for each $1\leq j\leq n$. Note that if
$a_{i}=1$ for all $i$, then we use $I_{d;n}$ instead of $I_{(d;1,\ldots,1)}$.
We say that $I$ is an almost square-free Veronese type ideal of degree $d$
when $I\neq 0$, $G(I)\subseteq G(I_{d;n})$ and
$|G(I)|\geq|G(I_{d;n})|-1$ (see [15]).
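For concreteness, the generators of an ideal of Veronese type can be enumerated directly from the definition. A minimal Python sketch (ours, for illustration only):

```python
from itertools import product

def veronese_type_gens(d, a):
    """Exponent vectors of the generators of I_{(d; a_1,...,a_n)}: the
    monomials of degree d with i_j <= a_j for each j."""
    return [e for e in product(*(range(aj + 1) for aj in a)) if sum(e) == d]

# I_{2;3} = I_{(2;1,1,1)}: the square-free monomials of degree 2 in 3 variables.
print(veronese_type_gens(2, [1, 1, 1]))
# [(0, 1, 1), (1, 0, 1), (1, 1, 0)]

# An almost square-free Veronese type ideal: drop one generator of I_{2;3}.
almost = veronese_type_gens(2, [1, 1, 1])[:-1]
```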
Herzog and Vladoiu [14] proved the following interesting results about
matroidal ideals.
###### Theorem 1.1.
Let $I$ be a matroidal ideal of $R$ generated in degree $d$, and denote as
before by $s$ the number of connected components of $\Gamma_{I}$. Let $I$ be
full-supported and $\gcd(I)=1$. The following statements hold:
* (i)
$s\leq d$. In addition, $V(\Gamma_{I})=\\{x_{1},...,x_{n}\\}$ and $s=d$ if and
only if $\operatorname{dstab}(I)=1$;
* (ii)
$I\subseteq{\mathfrak{p}_{1}}\cap\ldots\cap{\mathfrak{p}_{s}},$ where
$\mathfrak{p}_{1},\ldots,{\mathfrak{p}_{s}}$ are the monomial prime ideals
generated by the sets of vertices of the connected components
$\Gamma_{1},\ldots,\Gamma_{s}$ of $\Gamma_{I}$;
* (iii)
$\operatorname{dstab}(I)=1$ if and only if
$I={\mathfrak{p}_{1}}\ldots{\mathfrak{p}_{d}}$, where
${\mathfrak{p}_{1}},\ldots,{\mathfrak{p}_{d}}$ are monomial prime ideals in
pairwise disjoint sets of variables.
From Theorem 1.1 (iii) one can conclude that, for every full-supported
matroidal ideal $I$ with $\gcd(I)=1$, if $\operatorname{dstab}(I)=1$ then
$\operatorname{astab}(I)=1$ (see [14, Theorem 2.5] and [12, Corollary 4.6]).
## 2\. The results
Throughout this section, we assume that $I$ is a full-supported monomial ideal
and $\gcd(I)=1$.
###### Lemma 2.1.
Let $I$ be a polymatroidal ideal of degree $d\geq 3$ and for all $i$,
$I(\mathfrak{p}_{\\{i\\}})=\mathfrak{p}_{i_{1}}\cap\mathfrak{p}_{i_{2}}\cap\ldots\cap\mathfrak{p}_{i_{d-1}}$,
where $G(\mathfrak{p}_{i_{j}})\cap G(\mathfrak{p}_{i_{k}})=\emptyset$ for all
$1\leq j\neq k\leq d-1$. Then
$I=\mathfrak{p}_{1}\cap\mathfrak{p}_{2}\cap\ldots\cap\mathfrak{p}_{d}$, where
$G(\mathfrak{p}_{r})\cap G(\mathfrak{p}_{s})=\emptyset$ for all $1\leq r\neq
s\leq d$.
###### Proof.
Suppose that
$I(\mathfrak{p}_{\\{i\\}})=\mathfrak{p}_{i_{1}}\cap\mathfrak{p}_{i_{2}}\cap\ldots\cap\mathfrak{p}_{i_{d-1}}$,
where $G(\mathfrak{p}_{i_{j}})\cap G(\mathfrak{p}_{i_{k}})=\emptyset$ for all
$1\leq j\neq k\leq d-1$. We may assume that
$I(\mathfrak{p}_{\\{1\\}})=I(\mathfrak{p}_{\\{1,\ldots,k_{1}\\}})$ for some
$k_{1}\geq 1$. Since $G(\mathfrak{p}_{i_{j}})\cap
G(\mathfrak{p}_{i_{k}})=\emptyset$ for all $1\leq j\neq k\leq d-1$, it follows
that $x_{1},\ldots,x_{k_{1}}\notin G(\mathfrak{p}_{i_{j}})$ for all $1\leq
j\leq d-1$. Without loss of generality, by relabeling the variables we may
assume that
$I(\mathfrak{p}_{\\{1\\}})=I(\mathfrak{p}_{\\{1,\ldots,k_{1}\\}})=\mathfrak{p}_{1}\cap\mathfrak{p}_{2}\cap\ldots\cap\mathfrak{p}_{d-1}$
such that $G(\mathfrak{p}_{j})\cap G(\mathfrak{p}_{k})=\emptyset$ for all
$1\leq j\neq k\leq d-1$ where
$\mathfrak{p}_{t}=(x_{{k_{t}}+1},\ldots,x_{k_{t+1}})$ for all $1\leq t\leq
d-1$ such that $x_{k_{d}}=x_{n}$. We claim that
$I=\mathfrak{p}_{1}\cap\mathfrak{p}_{2}\cap\ldots\cap\mathfrak{p}_{d}$, where
$G(\mathfrak{p}_{i})\cap G(\mathfrak{p}_{j})=\emptyset$ for all $1\leq i\neq
j\leq d$ and $\mathfrak{p}_{d}=(x_{1},\ldots,x_{k_{1}})$. We assume that the
claim is not true. Therefore
$I=\mathfrak{p}_{1}\cap\mathfrak{p}_{2}\cap\ldots\cap\mathfrak{p}_{d}\cap\mathfrak{p}_{d+1}\cap\ldots\cap\mathfrak{p}_{d+r}$
such that $r\geq 1$, $G(\mathfrak{p}_{j})\cap G(\mathfrak{p}_{k})=\emptyset$
for all $1\leq j\neq k\leq d-1$ and
$x_{s}\in\mathfrak{p}_{d}\cap\mathfrak{p}_{d+1}\cap\ldots\cap\mathfrak{p}_{d+r}$
for all $s\in\\{1,\ldots,k_{1}\\}$. Indeed, if
$x_{s}\notin\mathfrak{p}_{d}\cap\mathfrak{p}_{d+1}\cap\ldots\cap\mathfrak{p}_{d+r}$
for some $s\in\\{1,\ldots,k_{1}\\}$, then $x_{s}\notin\mathfrak{p}_{j}$ for
some $j\in\\{d,\ldots,d+r\\}$. Thus
$I(\mathfrak{p}_{\\{s\\}})\subseteq\mathfrak{p}_{1}\cap\mathfrak{p}_{2}\cap\ldots\cap\mathfrak{p}_{d-1}\cap\mathfrak{p}_{j}$
and this is a contradiction. Now, by our assumption there is
$m\in\\{0,1,\ldots,r\\}$ such that
$I(\mathfrak{p}_{\\{k_{1}+1\\}})=\mathfrak{p}_{2}\cap\mathfrak{p}_{3}\cap\ldots\cap\mathfrak{p}_{d-1}\cap\mathfrak{p}_{d+m}$,
where $G(\mathfrak{p}_{j})\cap G(\mathfrak{p}_{k})=\emptyset$ and
$G(\mathfrak{p}_{j})\cap G(\mathfrak{p}_{d+m})=\emptyset$ for all $2\leq j\neq
k\leq d-1$. Suppose that $m=0$ and so in this case
$I(\mathfrak{p}_{\\{k_{1}+1\\}})=\mathfrak{p}_{2}\cap\mathfrak{p}_{3}\cap\ldots\cap\mathfrak{p}_{d-1}\cap\mathfrak{p}_{d}$.
Hence $x_{k_{1}+1}\in\mathfrak{p}_{d+1}\cap\ldots\cap\mathfrak{p}_{d+r}$ and
there exists $i\in\\{2,\ldots,k_{2}-k_{1}\\}$ such that
$x_{k_{1}+i}\in\mathfrak{p}_{d}$, since otherwise we have
$\mathfrak{p}_{d}=(x_{1},\ldots,x_{k_{1}})$ and this is our claim. Also, there
exists $i\in\\{1,2,\ldots,k_{3}-k_{2}\\}$ such that
$I(\mathfrak{p}_{\\{k_{2}+i\\}})=\mathfrak{p}_{1}\cap\mathfrak{p}_{3}\cap\ldots\cap\mathfrak{p}_{d-1}\cap\mathfrak{p}_{d}$,
since otherwise
$\mathfrak{p}_{d}=(x_{1},\ldots,x_{k_{1}},x_{k_{2}+1},\ldots,x_{k_{3}})$ and
this is impossible because in this case
$\mathfrak{p}_{2}\subseteq\mathfrak{p}_{d}$. Suppose that $i=1$ and so
$I(\mathfrak{p}_{\\{k_{2}+1\\}})=\mathfrak{p}_{1}\cap\mathfrak{p}_{3}\cap\ldots\cap\mathfrak{p}_{d-1}\cap\mathfrak{p}_{d}$.
Thus there exists $j\in\\{2,\ldots,k_{3}-k_{2}\\}$ such that
$x_{k_{2}+j}\in\mathfrak{p}_{d}$, since otherwise
$\mathfrak{p}_{d}=(x_{1},\ldots,x_{k_{1}})$ and this is impossible. Suppose
that $j=2$ and so $x_{k_{2}+2}\in\mathfrak{p}_{d}$. Therefore
$I(\mathfrak{p}_{\\{k_{1}+1\\}})=\mathfrak{p}_{2}\cap\mathfrak{p}_{3}\cap\ldots\cap\mathfrak{p}_{d-1}\cap\mathfrak{p}_{d}$
such that $x_{k_{2}+2}\in\mathfrak{p}_{2}\cap\mathfrak{p}_{d}$ and this is
impossible. Hence the claim is true and this completes the proof.
The following example shows that the above lemma does not hold if $d=2$.
###### Example 2.2.
Let
$I=(xz,xu,xv,xw,yz,yu,yv,yw,zv,zw,uv,uw)=(x,y,z,u)\cap(z,u,v,w)\cap(x,y,v,w)$
be a matroidal ideal of degree $2$ such that
$I(\mathfrak{p}_{\\{x\\}})=I(\mathfrak{p}_{\\{y\\}})=(z,u,v,w),I(\mathfrak{p}_{\\{z\\}})=I(\mathfrak{p}_{\\{u\\}})=(x,y,v,w)$
and $I(\mathfrak{p}_{\\{v\\}})=I(\mathfrak{p}_{\\{w\\}})=(x,y,z,u)$. But there
are no prime ideals $\mathfrak{p}_{1},\mathfrak{p}_{2}$ such that
$I=\mathfrak{p}_{1}\cap\mathfrak{p}_{2}$ and $G(\mathfrak{p}_{1})\cap
G(\mathfrak{p}_{2})=\emptyset$.
###### Theorem 2.3.
Let $I$ be a matroidal ideal of degree $d$. Then $\operatorname{astab}(I)=1$
if and only if $\operatorname{dstab}(I)=1$. In particular,
$I=\mathfrak{p}_{1}\mathfrak{p}_{2}\ldots\mathfrak{p}_{d}$, where
$G(\mathfrak{p}_{i})\cap G(\mathfrak{p}_{j})=\emptyset$ for all $1\leq i\neq
j\leq d$.
###### Proof.
If $\operatorname{dstab}(I)=1$, then by Theorem 1.1 there is nothing to prove.
Conversely, we use induction on $d$. If $d=2$, then by [16, Theorem 2.12] the
result follows. Suppose that $d>2$ and that the result has been proved for
smaller values of $d$. Since $\operatorname{astab}(I)=1$, we have
$\operatorname{Ass}(I)=\operatorname{Ass}^{\infty}(I)$ and since $I$ is a
matroidal ideal, it follows that
$\mathfrak{m}\notin\operatorname{Ass}^{\infty}(I)$. Hence by [16, Propositions
2.8, 2.9] and the inductive hypothesis,
$1=\operatorname{astab}(I)=\operatorname{astab}(I(\mathfrak{p}_{\\{i\\}}))=\operatorname{dstab}(I(\mathfrak{p}_{\\{i\\}}))$.
Since $\operatorname{dstab}(I(\mathfrak{p}_{\\{i\\}}))=1$, by Theorem 1.1 we
have
$I(\mathfrak{p}_{\\{i\\}})=\mathfrak{p}_{i_{1}}\mathfrak{p}_{i_{2}}\ldots\mathfrak{p}_{i_{d-1}}$
for all $i$ such that $G(\mathfrak{p}_{i_{j}})\cap
G(\mathfrak{p}_{i_{k}})=\emptyset$ for all $1\leq j\neq k\leq d-1$. Now by
Lemma 2.1, $I=\mathfrak{p}_{1}\mathfrak{p}_{2}\ldots\mathfrak{p}_{d}$ such
that $G(\mathfrak{p}_{i})\cap G(\mathfrak{p}_{j})=\emptyset$ for all $1\leq
i\neq j\leq d$. Again by using Theorem 1.1 the result follows.
In the following result, let $s(I)$ be the number of connected components of
$\Gamma_{I}$, and we denote by $u[j_{i}]$ the monomial
$x_{j_{1}}\ldots\widehat{x_{j_{i}}}\ldots x_{j_{t}}$, where the term
$x_{j_{i}}$ of $u=x_{j_{1}}\ldots{x_{j_{i}}}\ldots x_{j_{t}}$ is omitted and
$t\leq n$.
###### Lemma 2.4.
Let $I$ be a matroidal ideal of degree $d$. Then
$s(I(\mathfrak{p}_{\\{k\\}}))\geq s(I)$.
###### Proof.
It is enough to prove that every edge of graph
$\Gamma_{I(\mathfrak{p}_{\\{k\\}})}$ is an edge of graph $\Gamma_{I}$. Suppose
that $\\{x_{i},x_{j}\\}\in E(\Gamma_{I(\mathfrak{p}_{\\{k\\}})})$. Then there
exist $u[k],v[k]\in G(I(\mathfrak{p}_{\\{k\\}}))$ such that
$x_{i}u[k]=x_{j}v[k]$. Thus $x_{i}u[k]x_{k}=x_{j}v[k]x_{k}$ and so
$x_{i}u=x_{j}v$. Therefore $\\{x_{i},x_{j}\\}\in E(\Gamma_{I})$. This
completes the proof.
###### Proposition 2.5.
Let $I$ be a polymatroidal ideal of degree $d$ and let
$I(\mathfrak{p}_{\\{k\\}})$ be a polymatroidal ideal of degree $d-1$ in
$K[x_{1},\ldots,\widehat{x_{k}},\ldots,x_{n}]$ for some $k$. If
$\mathfrak{p}_{\\{k\\}}\in\operatorname{Ass}^{\infty}(I(\mathfrak{p}_{\\{k\\}}))$, then
$\mathfrak{m}\in\operatorname{Ass}^{\infty}(I)$.
###### Proof.
Suppose that
$\mathfrak{p}_{\\{k\\}}\in\operatorname{Ass}^{\infty}(I({\mathfrak{p}_{\\{k\\}}}))$.
Then by using [11, Corollary 1.6 and Lemma 4.2] we have
$s(I(\mathfrak{p}_{\\{k\\}}))=1$ and so by Lemma 2.4 $s(I)=1$. Thus
$\mathfrak{m}\in\operatorname{Ass}^{\infty}(I)$, as required.
###### Lemma 2.6.
Let $I$ be a matroidal ideal of degree $d>1$ such that $s(I)<d$. Then there
are monomials $u,v,w\in G(I)$ and variables $x_{i},x_{j}$ in $R$ such that
$x_{i}u=x_{j}v$ and $x_{i}x_{j}|w$.
###### Proof.
We use induction on $d$. If $d=2$, then there is nothing to prove. Suppose
that $d>2$ and that the result has been proved for $d-1$. Now we prove that
the result holds for $d$. Since $I$ is a matroidal ideal, it is clear that
there are variables $x_{i},x_{j}$ in $R$ and monomials $u,v\in G(I)$ such that
$x_{i}u=x_{j}v$. Suppose, to the contrary, that whenever $x_{i}u=x_{j}v$, we
have $x_{i}x_{j}\nmid w$ for all monomials $w\in G(I)$. Since $x_{i}x_{j}\nmid w$
for every monomial $w\in G(I)$, it follows that if $x_{i}f\in G(I)$ for some
monomial $f$ of degree $d-1$, then $x_{j}f\in G(I)$. Suppose that
$A=\\{x_{1},\ldots,x_{k}\\}$ is a set of variables such that $x_{i}u=x_{j}v$
for some $u,v\in G(I)$. In this case we can write $I=(x_{1},\ldots,x_{k})J+L$
such that $J$ is a matroidal ideal of degree $d-1$ and
$x_{i}\notin\operatorname{supp}(L)$ for all $x_{i}\in A$. We claim that $L=0$.
Let $s^{{}^{\prime}}\in G(L)$ and $s\in G(J)$. By exchange property over
$s^{{}^{\prime}}$ and $x_{1}s$, there exists
$x_{l}\in\operatorname{supp}(s^{{}^{\prime}})$ such that $x_{l}s\in G(I)$.
Since $x_{1}x_{l}\nmid w$ for all monomials $w\in G(I)$, it follows that
$x_{l}\in A$. Therefore $L=0$ and so $I=(x_{1},\ldots,x_{k})J$. Since $J$ is a
matroidal ideal of degree $d-1$, if $s(J)<d-1$ then by induction hypothesis
there are variables $x_{k},x_{l}$ and monomials
$u^{{}^{\prime}},v^{{}^{\prime}},w^{{}^{\prime}}\in G(J)$ such that
$x_{k}u^{{}^{\prime}}=x_{l}v^{{}^{\prime}}$ and $x_{k}x_{l}|w^{{}^{\prime}}$.
In this case $x_{1}x_{k}u^{{}^{\prime}}=x_{1}x_{l}v^{{}^{\prime}}$,
$x_{k}x_{l}|x_{1}w^{{}^{\prime}}$ and
$x_{1}u^{{}^{\prime}},x_{1}v^{{}^{\prime}},x_{1}w^{{}^{\prime}}\in G(I)$. This
is a contradiction and so $s(J)=d-1$. By Theorem 1.1 we have
$J=\mathfrak{p}_{2}\ldots\mathfrak{p}_{d}$ such that $G(\mathfrak{p}_{i})\cap
G(\mathfrak{p}_{j})=\emptyset$ for all $2\leq i\neq j\leq d$. Therefore
$I=\mathfrak{p}_{1}\mathfrak{p}_{2}\ldots\mathfrak{p}_{d}$, where
$\mathfrak{p}_{1}=(x_{1},\ldots,x_{k})$ and $G(\mathfrak{p}_{i})\cap
G(\mathfrak{p}_{j})=\emptyset$ for all $1\leq i\neq j\leq d$. In this case
$s(I)=d$ and this is a contradiction. Thus the induction process is completed,
as required.
Following [8], let $I=(u_{1},\ldots,u_{s})$ be a monomial ideal with linear
quotients with respect to the ordering $u_{1},\ldots,u_{s}$. We denote by
$q_{i}(I)$ the number of variables which are required to generate the colon
ideal $(u_{1},\ldots,u_{i-1}):u_{i}$. Let $q(I)=\max\\{q_{i}(I)\mid 2\leq
i\leq s\\}$. It is proved in [13, Corollary 1.6] that the length of the
minimal free resolution of $R/I$ over $R$ is equal to $q(I)+1$. Hence
$\operatorname{depth}R/I=n-q(I)-1$. Thus in particular the integer $q(I)$ is
independent of the particular choice of the ordering of the monomials which
gives linear quotients. Polymatroidal ideals have linear quotients with
respect to the reverse lexicographical order of the generators, see [5,
Theorem 5.2]. Chiang-Hsieh in [4, Theorem 2.5] proved that if $I\subset R$ is
a full-supported matroidal ideal of degree $d$, then
$\operatorname{depth}R/I=d-1$.
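The number $q(I)$, and hence $\operatorname{depth}R/I=n-q(I)-1$, can be computed mechanically from an ordered generating set with linear quotients, using the observation that $x_{j}$ lies in $(u_{1},\ldots,u_{i-1}):u_{i}$ exactly when $x_{j}u_{i}\in(u_{1},\ldots,u_{i-1})$. A minimal Python sketch (ours, for illustration; monomials are exponent tuples):

```python
def divides(u, v):
    """u divides v for monomials given as exponent tuples."""
    return all(a <= b for a, b in zip(u, v))

def q_of_ideal(gens):
    """q(I) for a monomial ideal with linear quotients in the given order:
    q_i(I) counts the variables x_j with x_j * u_i in (u_1, ..., u_{i-1}),
    i.e. the variables generating the colon ideal (u_1,...,u_{i-1}) : u_i."""
    n = len(gens[0])
    q = 0
    for i in range(1, len(gens)):
        cnt = sum(
            any(divides(gens[k],
                        tuple(a + (t == j) for t, a in enumerate(gens[i])))
                for k in range(i))
            for j in range(n))
        q = max(q, cnt)
    return q

# I_{2;3} in K[x1,x2,x3]: depth(R/I) = n - q(I) - 1 = 3 - 1 - 1 = 1 = d - 1,
# matching Chiang-Hsieh's formula for full-supported matroidal ideals.
print(q_of_ideal([(1, 1, 0), (1, 0, 1), (0, 1, 1)]))  # 1
```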
###### Proposition 2.7.
Let $I$ be a matroidal ideal of degree $d$ such that
$\operatorname{dstab}(I)>1$. Then
$\operatorname{depth}(R/I^{2})<\operatorname{depth}(R/I)$.
###### Proof.
Since $\operatorname{dstab}(I)>1$, by [14, Proposition 2.3] it follows that
$s(I)<d$. Thus, by Lemma 2.6, there are monomials $u,v,w\in G(I)$ and
variables $x_{i},x_{j}$ in $R$ such that $x_{i}u=x_{j}v$ and $x_{i}x_{j}|w$.
After relabeling the variables we may assume that $x_{n-d}u=x_{n-d+1}v$ and
$x_{n-d}x_{n-d+1}|w$. Thus there is a monomial $s$ of degree $d-1$ such that
$u=x_{n-d+1}s$ and $v=x_{n-d}s$. Also, there is a monomial $w_{1}$ of degree
$d-2$ such that $w=x_{n-d}x_{n-d+1}w_{1}$. Hence $m=x_{n-d}x_{n-d+1}s^{2}\in
G(I^{2})$. Suppose that $\operatorname{supp}(s)=\\{x_{n-d+2},\ldots,x_{n}\\}$.
Let $J$ denote the monomial ideal generated by those monomials $r\in G(I^{2})$
such that $r$ is bigger than $m$ with respect to the reverse lexicographic
order induced by the ordering $x_{1}>x_{2}>\ldots>x_{n}$. For each $1\leq
l\leq{n-d-1}$, there is a monomial belonging to $G(I^{2})$ which is divisible by
$x_{l}$. Thus there is a variable $x_{k}$ with $n-d\leq k\leq n$ such that
$x_{l}(m/x_{k})\in G(I^{2})$. Since $x_{k}<x_{l}$, it follows that
$x_{l}(m/x_{k})\in J$ and so $x_{l}m\in J$. Therefore $x_{l}\in J:m$ for all
$1\leq l\leq n-d-1$. Consequently, one has $q(I^{2})\geq n-d-1$. Now, by
exchange properties over elements $m=x_{n-d}x_{n-d+1}s^{2}$ and
$x_{n-d}^{2}x_{n-d+1}sw_{1}$ of $G(I^{2})$ there is $n-d+2\leq k\leq n$ such
that $x_{n-d}(x_{n-d}x_{n-d+1}s^{2}/x_{k})\in G(I^{2})$. Since
$x_{n-d}>x_{k}$, we have $x_{n-d}(x_{n-d}x_{n-d+1}s^{2}/x_{k})\in J$. Thus
$x_{n-d}m\in J$ and so $x_{n-d}\in J:m$. By the above argument over elements
$m=x_{n-d}x_{n-d+1}s^{2}$ and $x_{n-d}x_{n-d+1}^{2}sw_{1}$ of $G(I^{2})$, we
have $x_{n-d+1}\in J:m$. Therefore $q(I^{2})\geq n-d+1$ and so
$\operatorname{depth}(R/I^{2})<\operatorname{depth}(R/I)$, as required.
###### Proposition 2.8.
Let $I$ be a matroidal ideal of degree $3$ and
$\mathfrak{m}\notin\operatorname{Ass}^{\infty}(I)$. Then
$\operatorname{astab}(I)=\operatorname{dstab}(I)\leq 2$.
###### Proof.
Since $\mathfrak{m}\notin\operatorname{Ass}^{\infty}(I)$, by [16, Proposition
2.9 and Theorem 2.12] we have
$\operatorname{astab}(I)=\operatorname{astab}(I(\mathfrak{p}_{\\{i\\}}))=\operatorname{dstab}(I(\mathfrak{p}_{\\{i\\}}))$
for some $1\leq i\leq n$. We may assume $i=1$ and so
$\operatorname{astab}(I)=\operatorname{astab}(I(\mathfrak{p}_{\\{1\\}}))=\operatorname{dstab}(I(\mathfrak{p}_{\\{1\\}}))$.
We can consider two cases:
Case 1: Let $I(\mathfrak{p}_{\\{1\\}})$ be full-supported. Since
$\deg(I(\mathfrak{p}_{\\{1\\}}))=2$, by the proof of [16, Corollary 2.13] we
have
$\operatorname{astab}(I(\mathfrak{p}_{\\{1\\}}))=\operatorname{dstab}(I(\mathfrak{p}_{\\{1\\}}))=1$
and so $\operatorname{astab}(I)=1$. Hence by Theorem 2.3 we have
$\operatorname{dstab}(I)=1=\operatorname{astab}(I)$.
Case 2: Suppose $I(\mathfrak{p}_{\\{1\\}})$ is not full-supported. In this
case we may assume
$I(\mathfrak{p}_{\\{1\\}})=I(\mathfrak{p}_{\\{1,\ldots,k\\}})$ and
$I(\mathfrak{p}_{\\{1,\ldots,k\\}})$ is full-supported in
$K[x_{k+1},\ldots,x_{n}]$. If
$\mathfrak{p}_{\\{1,\ldots,k\\}}\notin\operatorname{Ass}^{\infty}(I)$, then
$\mathfrak{p}_{\\{1,\ldots,k\\}}\notin\operatorname{Ass}^{\infty}(I(\mathfrak{p}_{\\{1,\ldots,k\\}}))$.
Hence
$\operatorname{astab}(I(\mathfrak{p}_{\\{1,\ldots,k\\}}))=\operatorname{dstab}(I(\mathfrak{p}_{\\{1,\ldots,k\\}}))=1$.
Thus $\operatorname{astab}(I)=1$ and so $\operatorname{dstab}(I)=1$. If
$\mathfrak{p}_{\\{1,\ldots,k\\}}\in\operatorname{Ass}^{\infty}(I)$, then
$\mathfrak{p}_{\\{1,\ldots,k\\}}\in\operatorname{Ass}^{\infty}(I(\mathfrak{p}_{\\{1,\ldots,k\\}}))$.
Thus by the proof of [16, Corollary 2.13] we have
$\operatorname{astab}(I)=\operatorname{astab}(I(\mathfrak{p}_{\\{1,\ldots,k\\}}))=\operatorname{dstab}(I(\mathfrak{p}_{\\{1,\ldots,k\\}}))=2$
and also by Theorem 2.3 we have $\operatorname{dstab}(I)>1$. By Proposition
2.7, we have $\operatorname{depth}(R/I^{2})\leq 1$. Since
$\mathfrak{m}\notin\operatorname{Ass}^{\infty}(I)$, it follows that
$\operatorname{depth}(R/I^{2})=\operatorname{depth}(R/I^{i})=1$ for all $i\geq
2$. Therefore $\operatorname{dstab}(I)=2$. This completes the proof.
###### Lemma 2.9.
Let $I$ be a polymatroidal ideal of degree $2$ and
$\mathfrak{m}\in\operatorname{Ass}^{\infty}(I)$. Then
$\operatorname{astab}(I)=\operatorname{dstab}(I)\leq 2$.
###### Proof.
By [16, Theorem 2.12], we have
$\operatorname{astab}(I)=\operatorname{dstab}(I)$. We have two cases:
Case 1: Let $G(I)$ have at least one pure power of a variable, say
$x_{i}^{2}\in G(I)$. Then $\operatorname{supp}(x_{i}^{2})=\\{x_{i}\\}$. Now by
using the same argument as used in the proof of Proposition 2.7 we conclude
that $q(I)\geq n-1$; it therefore follows that $\operatorname{depth}(R/I)=0$.
Thus $\operatorname{dstab}(I)=1$ and so
$\operatorname{astab}(I)=\operatorname{dstab}(I)\leq 2$.
Case 2: Suppose that $G(I)$ does not contain any pure power of a variable. Then $I$ is
square-free and so $I$ is a matroidal ideal. Thus, by [16, Corollary 2.13],
$\operatorname{astab}(I)=\operatorname{dstab}(I)\leq 2$. This completes the
proof.
###### Proposition 2.10.
Let $I$ be a polymatroidal ideal of degree $3$ and
$\mathfrak{m}\in\operatorname{Ass}^{\infty}(I)\setminus\operatorname{Ass}(I)$.
Then $\operatorname{astab}(I)=\operatorname{dstab}(I)$.
###### Proof.
Suppose that $\mathfrak{p}\in\operatorname{Ass}^{\infty}(I)$, where
$\mathfrak{p}\neq\mathfrak{m}$. Then there exists $t$ such that
$\mathfrak{p}R_{\mathfrak{p}}\in\operatorname{Ass}(I^{t}(\mathfrak{p}))$.
Since $\deg(I(\mathfrak{p}))\leq 2$, by Lemma 2.9 we have
$\mathfrak{p}R_{\mathfrak{p}}\in\operatorname{Ass}(I^{2}(\mathfrak{p}))$ and
so $\mathfrak{p}\in\operatorname{Ass}(I^{2})$. Hence
$\operatorname{Ass}^{\infty}(I)=\operatorname{Ass}(I^{2})\cup\\{\mathfrak{m}\\}$.
It therefore follows that $\operatorname{astab}(I)=\operatorname{dstab}(I)$.
Note that in Proposition 2.10, if $\mathfrak{m}\in\operatorname{Ass}(I)$ then
the result does not hold. The first author and Karimi [16, Example 2.21] have
given the following example:
###### Example 2.11.
Consider the polymatroidal ideal
$I=(x_{1}x_{2}x_{3},x_{2}^{2}x_{3},x_{2}x_{3}^{2},x_{1}x_{2}x_{4},x_{2}^{2}x_{4},x_{2}x_{4}^{2},x_{1}x_{3}x_{4},x_{3}^{2}x_{4},x_{3}x_{4}^{2},x_{2}x_{3}x_{4})$
of degree $3$. Then $\mathfrak{m}\in\operatorname{Ass}(I)$,
$\operatorname{dstab}(I)=1$ and $\operatorname{astab}(I)=2$.
The following corollary immediately follows from Proposition 2.10.
###### Corollary 2.12.
Let $I$ be a matroidal ideal of degree $3$ such that
$\mathfrak{m}\in\operatorname{Ass}^{\infty}(I)$. Then
$\operatorname{astab}(I)=\operatorname{dstab}(I)$.
The following result easily follows from Corollary 2.12 and Proposition 2.8.
###### Corollary 2.13.
Let $I$ be a matroidal ideal of degree $3$. Then
$\operatorname{astab}(I)=\operatorname{dstab}(I)$.
###### Proposition 2.14.
Let $I$ be a polymatroidal ideal of degree $3$ such that
$\mathfrak{m}\notin\operatorname{Ass}^{\infty}(I)$. Then
$\operatorname{astab}(I)=\operatorname{dstab}(I)$.
###### Proof.
Since $\mathfrak{m}\notin\operatorname{Ass}^{\infty}(I)$, it follows that
$q(I)<n-1$ and so the polymatroidal ideal $I$ cannot contain a pure power of a
variable. If $x_{i}^{2}x_{j}\notin G(I)$ for all $i,j$, then $I$ is a
matroidal ideal and so, by Proposition 2.8,
$\operatorname{astab}(I)=\operatorname{dstab}(I)$. Now suppose
$x_{i}^{2}x_{j}\in G(I)$ for some $i,j$. In this case $q(I)\geq n-2$ and so
$\operatorname{depth}(R/I)\leq 1$. Again, since
$\mathfrak{m}\notin\operatorname{Ass}^{\infty}(I)$ it follows that
$1=\operatorname{depth}(R/I)=\operatorname{depth}(R/I^{i})$ for all $i$. Thus
$\operatorname{dstab}(I)=1$ and also
$\mathfrak{p}_{\\{i\\}}\notin\operatorname{Ass}^{\infty}(I(\mathfrak{p}_{\\{i\\}}))$.
Therefore, by [17, Remark 9] and [16, Remark 2.6, Theorem 2.12],
$\operatorname{Ass}(I^{t})=\bigcup_{i=1}^{n}\operatorname{Ass}((I(\mathfrak{p}_{\\{i\\}}))^{t})=\bigcup_{i=1}^{n}\operatorname{Ass}(I(\mathfrak{p}_{\\{i\\}}))=\operatorname{Ass}(I)$
for all $t$ and so $\operatorname{astab}(I)=1$. Therefore
$\operatorname{astab}(I)=\operatorname{dstab}(I)=1$, as required.
###### Proposition 2.15.
Let $J$ be an almost square-free Veronese type ideal of degree $d\geq 2$ and
$\gcd(J)=1$. Then $\mathfrak{m}\in\operatorname{Ass}^{\infty}(J)$.
###### Proof.
If $J$ is the square-free Veronese type ideal, then by [12, Corollary 5.5]
there is nothing to prove. Now, suppose that $\gcd(J)=1$ and we may assume that
$n\geq 4$. We use induction on $d$. Suppose $d=2$ and we may assume
$x_{n-1}x_{n}\notin G(J)$. In this case $x_{1}x_{2},x_{1}x_{3}\in G(J)$ and so
$\\{x_{2},x_{3}\\}\in E(\Gamma_{J})$. Also,
$x_{2}x_{3},x_{2}x_{4},\ldots,x_{2}x_{n}\in G(J)$. Hence
$\\{x_{1},x_{3}\\},\ldots,\\{x_{1},x_{n}\\}\in E(\Gamma_{J})$ and so
$\Gamma_{J}$ is connected. Therefore $\ell(J)=n$ and hence
$\mathfrak{m}\in\operatorname{Ass}^{\infty}(J)$. Suppose that $d>2$ and that
the result has been proved for $d-1$. Now we prove that the result holds for
$d$. We may assume that $x_{n-d+1}x_{n-d+2}\ldots x_{n}\notin G(J)$. Thus
$I_{d;n}=J+(x_{n-d+1}x_{n-d+2}\ldots x_{n})$ and so we have
$J=(x_{1},\ldots,x_{n-d})\cap I_{d;n}$. Thus $J(\mathfrak{p}_{\\{i\\}})$ is a
square-free Veronese type ideal for all $1\leq i\leq n-d$ and
$J(\mathfrak{p}_{\\{i\\}})$ is an almost square-free Veronese type for all
$n-d+1\leq i\leq n$. It therefore follows that
$s(J(\mathfrak{p}_{\\{i\\}}))=1$ for all $i$ and so by Lemma 2.4, $s(J)=1$.
Hence $\ell(J)=n$ and so $\mathfrak{m}\in\operatorname{Ass}^{\infty}(J)$, as
required.
Herzog, Rauf and Vladoiu [12] proved that if $I=I_{d;n}$ is a square-free
Veronese type ideal, then
$\operatorname{astab}(I)=\operatorname{dstab}(I)=\lceil\frac{n-1}{n-d}\rceil$.
The following theorem extends this result.
###### Theorem 2.16.
Let $J$ be an almost square-free Veronese type ideal of degree $d\geq 2$ and
$\gcd(J)=1$. Then
$\operatorname{astab}(J)=\operatorname{dstab}(J)=\lceil\frac{n-1}{n-d}\rceil$.
###### Proof.
If $J$ is the square-free Veronese type ideal, then by [12, Corollary 5.7]
there is nothing to prove. Now, suppose that $\gcd(J)=1$ and we may assume that
$d\leq n-2$. Assume that $k=\lceil\frac{n-1}{n-d}\rceil$. In this case
$\frac{n-1}{n-d}=1+\frac{d-1}{n-d}\leq d$ and so we may assume that $k\leq d$.
Let $I$ be the square-free Veronese type ideal of degree $d$ and we may assume
that $I=(J,u)$ such that $u=x_{n}x_{n-1}\ldots x_{n-d+1}\notin G(J)$.
It is clear that $I^{k}:u^{k-1}x_{1}\ldots
x_{d-1}=\mathfrak{m}=J^{k}:u^{k-1}x_{1}\ldots x_{d-1}$. Indeed, if $1\leq
j\leq k-1$ or $d\leq j\leq n$ then $x_{j}u^{k-1}x_{1}\ldots
x_{d-1}=(x_{1}u)(x_{2}u)\ldots(x_{k-1}u)(x_{j}x_{k}\ldots
x_{d-1})=(\frac{x_{1}u}{x_{i_{1}}})(\frac{x_{2}u}{x_{i_{2}}})\ldots(\frac{x_{k-1}u}{x_{i_{k-1}}})(x_{i_{1}}\ldots
x_{i_{k-1}}x_{k}\ldots x_{d-1}x_{j})\in I^{k}$ and if $k\leq j\leq d-1$ then
$x_{j}u^{k-1}x_{1}\ldots
x_{d-1}=(x_{j}u)(x_{1}u)\ldots(x_{k-2}u)(x_{k-1}x_{k}\ldots
x_{d-1})=(\frac{x_{j}u}{x_{i_{1}}})(\frac{x_{1}u}{x_{i_{2}}})\ldots(\frac{x_{k-2}u}{x_{i_{k-1}}})(x_{i_{1}}\ldots
x_{i_{k-1}}x_{k-1}x_{k}\ldots x_{d-1})\in I^{k}$, where
$x_{i_{1}},\ldots,x_{i_{k-1}}$ are distinct elements of
$\\{x_{n-d+1},\ldots,x_{n}\\}$. Since
$\operatorname{dstab}(J)=\min\\{t|\mathfrak{m}\in\operatorname{Ass}(I^{t})\\}$,
it follows that
$\operatorname{dstab}(J)\leq\operatorname{dstab}(I)=\operatorname{astab}(I)\leq
d$ and $\operatorname{dstab}(J)\leq\operatorname{astab}(J)$. By Proposition
2.15 $\mathfrak{m}\in\operatorname{Ass}^{\infty}(J)$ and so we assume that $t$
is the smallest integer such that
$\operatorname{Ass}^{\infty}(J)=\operatorname{Ass}(J^{t})$. By using the above
argument we have $\mathfrak{m}\in\operatorname{Ass}^{\infty}(I)$. Thus
$\operatorname{astab}(I)=\operatorname{dstab}(I)\leq\operatorname{dstab}(J)$
and so
$\operatorname{dstab}(J)=\operatorname{dstab}(I)=\operatorname{astab}(I)=k$.
By induction on $d$ we will prove that
$\operatorname{astab}(J)=\operatorname{dstab}(J)=k$. If $d=2$, then by [16,
Proposition 2.12] the result follows. Let $d\geq 3$ and the result has been
proved for smaller values of $d$. By [16, Remark 2.6] we have
$\operatorname{Ass}(J^{k+j})\setminus\\{\mathfrak{m}\\}=\bigcup_{i=1}^{n}\operatorname{Ass}(J^{k+j}(\mathfrak{p}_{\\{i\\}}))$
for all $j$. By the induction hypothesis
$\operatorname{astab}(J(\mathfrak{p}_{\\{i\\}}))=\operatorname{dstab}(J(\mathfrak{p}_{\\{i\\}}))\leq
k$. This implies that
$\operatorname{Ass}(J^{k+j})\setminus\\{\mathfrak{m}\\}=\operatorname{Ass}(J^{k})\setminus\\{\mathfrak{m}\\}$
and so $\operatorname{Ass}(J^{k+j})=\operatorname{Ass}(J^{k})$ for all $j$.
Therefore $\operatorname{astab}(J)\leq k=\operatorname{dstab}(J)$ and so
$\operatorname{astab}(J)=\operatorname{dstab}(J)=\operatorname{dstab}(I)=\operatorname{astab}(I)=k$,
as required.
Acknowledgement: We are deeply grateful to the referee for the careful reading
of the manuscript and the helpful suggestions. The second
author has been supported financially by Vice-Chancellorship of Research and
Technology, University of Kurdistan under research Project No. 99/11/19299.
## References
* [1] S. Bandari and J. Herzog, Monomial localizations and polymatroidal ideals, Eur. J. Comb., 34(2013), 752-763.
* [2] M. Brodmann, Asymptotic stability of $\operatorname{Ass}(M/{I^{n}M})$, Proc. Am. Math. Soc., 74(1979), 16-18.
* [3] M. Brodmann, The asymptotic nature of the analytic spread, Math. Proc. Cambridge Philos. Soc., 86(1979), 35-39.
* [4] H. J. Chiang-Hsieh, Some arithmetic properties of matroidal ideals, Comm. Algebra, 38(2010), 944-952.
* [5] A. Conca and J. Herzog, Castelnuovo-Mumford regularity of products of ideals, Collect. Math., 54(2003), 137-152.
* [6] D. R. Grayson and M. E. Stillman, Macaulay 2, a software system for research in algebraic geometry, Available at http://www.math.uiuc.edu/Macaulay2/.
* [7] J. Herzog and T. Hibi, Discrete polymatroids, J. Algebraic Combin., 16(2002), 239-268.
* [8] J. Herzog and T. Hibi, Cohen-Macaulay polymatroidal ideals, Eur. J. Comb., 27(2006), 513-517.
* [9] J. Herzog and T. Hibi, Monomial ideals, GTM., 260, Springer, Berlin, (2011).
* [10] J. Herzog and A. Mafi, Stability properties of powers of ideals in regular local rings of small dimension, Pacific J. Math., 295(2018), 31-41.
* [11] J. Herzog and A. A. Qureshi, Persistence and stability properties of powers of ideals, J. Pure and Appl. Algebra, 219(2015), 530-542.
* [12] J. Herzog, A. Rauf and M. Vladoiu, The stable set of associated prime ideals of a polymatroidal ideal, J. Algebr Comb., 37(2013), 289-312.
* [13] J. Herzog and Y. Takayama, Resolutions by mapping cones, Homology Homotopy Appl., 4(2002), 277-294.
* [14] J. Herzog and M. Vladoiu, Squarefree monomial ideals with constant depth function, J. Pure and Appl. Algebra, 217(2013), 1764-1772.
* [15] M. Jafari, A. Mafi and H. Saremi, Sequentially Cohen-Macaulay matroidal ideals, Filomat, 34 (2020), 4233-4244.
* [16] Sh. Karimi and A. Mafi, On stability properties of powers of polymatroidal ideals, Collect. Math., 70(2019), 357-365.
* [17] T. N. Trung, Stability of associated primes of integral closures of monomial ideals, J. Comb. Theory, Series A, 116(2009), 44-54.
* [18] R. H. Villarreal, Monomial Algebras, Monographs and Research Notes in Mathematics, Chapman and Hall/CRC, (2015).
# Methods of Informational Trends Analytics and Fake News Detection on Twitter
Bohdan M. Pavlyshenko
Ivan Franko National University of Lviv, Ukraine
<EMAIL_ADDRESS>www.linkedin.com/in/bpavlyshenko/
###### Abstract
In the paper, different approaches for the analysis of news trends on Twitter
have been considered. For the analysis and case study, informational trends on
Twitter caused by the Russian invasion of Ukraine in 2022 have been studied.
A deep learning approach for fake news detection has been analyzed. The use of
the theory of frequent itemsets, association rules, and graph theory for news
trend analytics has been considered.
Keywords: fake news detection, twitter, news trends, frequent itemsets,
transformers, deep learning, users’ communities.
## 1 Introduction
News has an essential impact on many areas of society, politics and business.
That is why one can see a lot of attempts to produce manipulative and fake
news to get a specified response in the society. One of the horrible world events
is the Russian invasion of Ukraine on February 24, 2022. It caused a large
informational news flow on social networks, including the production of manipulative
and fake news to shape a specified explanation and justification of the invasion.
One of the goals claimed by Russia was the ’denazification’ of Ukraine. One of
the allegations of Russian propaganda was that Ukraine was developing
biological weapons in special laboratories.
Tweets, the messages of Twitter microblogs, have high density of semantically
important keywords. It makes it possible to get semantically important
information from tweets and generate the features of predictive models for the
decision-making support. Different studies of Twitter are considered in the
papers [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. In [14, 15, 16], we study
different approaches for the analysis of messages on Twitter, as well as the
use of tweet features for forecasting different kinds of events.
In this paper, we consider the methods for the analysis of Twitter trends and
for detecting fake news. As fake news, we will consider news information
which is not true, as well as information which may contain real facts but
with incorrectly placed accents and focuses that lead to distorted conclusions
and an incorrect understanding of the underlying processes. For our
analysis, we considered informational trends caused by the Russian invasion of
Ukraine in 2022. In the study, we also consider the possible impact of
informational trends on different companies working in Russia during this
conflict.
## 2 News Trends on Twitter
For the analysis, we used a combination of keywords related to thematic areas
under consideration. The keywords related to the entity under consideration
can be treated as a thematic field. The use of semantic and thematic fields
for text analytics is considered in [17, 18, 19, 16]. To load tweets, we have
used Twitter API v2. For the ’ukraine nazi’ thematic field the following
Twitter API query "(ukraine OR ukrainian OR ukraine’s) (nazi OR nazism,
nazists OR neonazi OR neo-nazi)" has been used. For the ’ukraine biological
weapon’ thematic field, the query "ukraine biological (weapon OR weapons OR
warfare)" has been used. For the analysis, the following main python packages
were used: ’pandas’, ’matplotlib’, ’seaborn’, ’tweepy’. Figures 1-4 show the
time series for tweet counts for different queries. Using lower-case query
keywords allows searching for tweets containing the keywords in both lower and
upper case. As the results show, for the ’ukraine nazi’ thematic field, the
discussion of underlying problems rose dramatically after February 24, the
date of Russian invasion of Ukraine. The amount of tweets related to this
theme before that date was at the minimum level. That itself leads to the
conclusion that the problem with nazi in Ukraine was just a formal reason to
justify the invasion. Another claim of Russian propaganda was about biological
weapons that were allegedly being developed in Ukrainian laboratories (Figure
3). For instance, it was claimed that a special virus was being developed and
it was meant to be distributed through bats and migratory birds.
Figure 1: Time series of tweets for the query ’ukraine’
Figure 2: Time series of tweets for the thematic field ’ukraine nazi’
Figure 3: Time series of tweets for the thematic field ’ukraine biological weapon’
Figure 4: Time series of tweets
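The paper does not list its loading code; the following Python sketch shows one way to collect such a thematic field with ’tweepy’ and count tweets per day, assuming a valid API credential in place of the BEARER_TOKEN placeholder. Note that the comma in the printed query (’nazism, nazists’) looks like a typo for OR, and that the recent-search endpoint covers only the last seven days, so reproducing Figures 1-4 would require full-archive access (client.search_all_tweets):

```python
import tweepy
import pandas as pd

# "BEARER_TOKEN" is a placeholder for a Twitter API v2 credential.
client = tweepy.Client(bearer_token="BEARER_TOKEN")

query = ("(ukraine OR ukrainian OR ukraine's) "
         "(nazi OR nazism OR nazists OR neonazi OR neo-nazi)")

rows = []
for tweet in tweepy.Paginator(client.search_recent_tweets, query=query,
                              tweet_fields=["created_at"],
                              max_results=100).flatten(limit=10000):
    rows.append({"created_at": tweet.created_at, "text": tweet.text})

df = pd.DataFrame(rows)
# Daily tweet counts: the kind of time series shown in Figures 1-4.
daily_counts = df.set_index("created_at").resample("D").size()
daily_counts.plot()
```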
## 3 Deep Learning Approach for Fake News Detection
Fake news can be detected and revealed by analyzing facts and comparing them
with reality and other news sources. Let us consider the methods which make it
possible to detect fake news automatically. It is not easy to develop an AI
system which can analyze the facts. But for manipulative news, it is typical
to amplify them artificially in different ways, e.g. by retweeting
manipulative tweets many times using different users’ accounts. Some accounts
can be bots which were artificially created, others can belong to real users.
It makes it possible to detect fake news using an approach which analyzes the
patterns of users’ behavior. Also, fake news have specific patterns in the
text of messages. Both users’ behavior and text patterns can be captured by
deep machine learning algorithms. As the features for a predictive model, we
used tweet texts and the lists of usernames of users who retweeted those tweets.
For model development, evaluation and prediction, the Python framework
’pytorch’ was used. The ML model consists of several concatenated neural
subnetworks: a subnetwork with a DistilBERT transformer which ingests the tokens
of tweet texts, a subnetwork with an embedding layer with averaging which ingests
the mixture of encoded words of tweet texts and lists of usernames of
retweeters, as well as a subnetwork for the components of the truncated singular
value decomposition of the TF-IDF matrix for the lists of usernames of retweeters.
Figure 5 shows the structure of the deep learning model for fake and
manipulative news detection. For our case study, the loaded tweets with the
thematic fields ’ukraine nazi’ and ’ukraine biological weapon’ were used. For
the model training and evaluation, the tweet datasets with a specified list of
users who retweeted those tweets were created. For the analysis, only the
tweets with a specified threshold for retweet counts were included. The
dataset was labeled using an appropriate tweet id, usernames, hashtags of
tweets which can be treated as fake or manipulative. Figure 6 shows model
evaluation results on the validation dataset.
Figure 5: Deep learning model structure
Figure 6: Model evaluation results on the validation dataset
The examples of tweets which were recognized by the deep learning model as
fake or manipulation news for the thematic fields ’ukraine nazi’ and ’ukraine
biological weapon’ are at [20].
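The exact layer sizes and input encodings of the model in Figure 5 are not given in the paper; the following ’pytorch’ sketch shows one plausible reading of the tri-branch structure, with the hidden sizes, the binary output and the bag-of-tokens encoding being our assumptions:

```python
import torch
import torch.nn as nn
from transformers import DistilBertModel

class FakeNewsNet(nn.Module):
    """Sketch of the tri-branch model of Figure 5: DistilBERT over tweet
    tokens, an averaged embedding over mixed word/username tokens, and a
    dense branch over truncated-SVD components of the retweeter TF-IDF
    matrix."""

    def __init__(self, bag_vocab_size, svd_dim, emb_dim=64, hidden=128):
        super().__init__()
        self.bert = DistilBertModel.from_pretrained("distilbert-base-uncased")
        self.bag_embed = nn.EmbeddingBag(bag_vocab_size, emb_dim, mode="mean")
        self.svd_fc = nn.Sequential(nn.Linear(svd_dim, hidden), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(self.bert.config.dim + emb_dim + hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # fake/manipulative vs. ordinary
        )

    def forward(self, input_ids, attention_mask, bag_tokens, svd_features):
        h_text = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask
                           ).last_hidden_state[:, 0]   # first-token summary
        h_bag = self.bag_embed(bag_tokens)             # averaged embeddings
        h_svd = self.svd_fc(svd_features)
        return self.head(torch.cat([h_text, h_bag, h_svd], dim=1))

model = FakeNewsNet(bag_vocab_size=50000, svd_dim=100)
```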
## 4 Detecting Artificially Generated News
One of the ways to form manipulative trends is to produce artificially created
news tweet messages [21, 22, 23], e.g. by paraphrasing an initial text using
Seq2Seq neural network models. For this purpose, GPT-2, and BART transformers
can be used. Each pretrained transformer has its own patterns for generating
texts using an encoder-decoder approach. These patterns can be detected by
other transformers which are fine-tuned on a dataset of artificially
generated texts. For the model fine-tuning, the TweepFake dataset of artificially
generated texts [22] was used. The framework ’pytorch’ was used for model
development and evaluation. We tried two models: the neural network with the
DistilBERT transformer layer and the neural network with the concatenation of
the DistilBERT transformer with embedding layers for the usernames of users
who post tweets. Figures 7, 8 show the model evaluation on the validation
dataset for these two models.
Figure 7: Evaluation results for the model with the DistilBERT transformer layer
Figure 8: Evaluation results for the model with the concatenation of the DistilBERT transformer layer and the embedding layer for usernames
As the results show, the embedding layer for usernames can improve the accuracy
scores, which means that usernames have predictive potential for detecting
artificially created tweets. We applied the fine-tuned model to the datasets
of tweets of the semantic fields under consideration and received the
following results: human - 80%, GPT-2 - 10%, RNN - 0.3%, others - 3%.
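A minimal sketch of such fine-tuning with the ’transformers’ library is shown below; the two-row toy dataset stands in for TweepFake [22], the binary human/machine labels are a simplification of the paper's per-generator breakdown, and the stronger variant with a username embedding branch is omitted:

```python
from datasets import Dataset
from transformers import (DistilBertForSequenceClassification,
                          DistilBertTokenizerFast, Trainer, TrainingArguments)

# Toy stand-in for the TweepFake dataset [22]: label 0 = human, 1 = machine.
data = Dataset.from_dict({
    "text": ["just landed, what a long day", "i am a bot generated text"],
    "label": [0, 1],
})

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length",
                     max_length=64)

train_ds = data.map(tokenize, batched=True)

args = TrainingArguments(output_dir="out", num_train_epochs=1,
                         per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```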
## 5 The Impact of the Discussion About Company’s Behavior On Its Stock
Prices
The Russian invasion of Ukraine has had a huge impact on users’ attitudes to
different companies and has shaped discussion trends on Twitter. Figure 9 shows
time series of tweet counts related to the discussion of different companies in
Russia during the Russian invasion of Ukraine. For the case study, let us
consider McDonald’s. Figure 10 shows the tweet counts in the trend of discussing
this company, a rolling mean of this time series with a 7-day window, and the
stock price for the ’MCD’ ticker of the McDonald’s company.
Figure 9: Time series of tweets for trends related to different companies
Figure 10: Time series of tweets related to McDonald’s, rolling mean of the time series, stock price time series for the ’MCD’ ticker
Amid the public discussion, McDonald’s stopped its activity in Russia. After
this announcement, the stock price for the ’MCD’ ticker returned to almost the
initial value it had before the invasion period (Figure 10). The stock price
time series for the ’MCD’ ticker were loaded using the Python package
’yfinance’. This shows that the public discussion of a company’s behavior,
which is reflected on social networks, can have an impact on the stock market.
It is supposed that the discussion has some cumulative effect, which is why the
rolling mean of the time series of tweet counts corresponds more precisely to
the company’s stock prices on the stock market. Let us analyze the dependency
of stock prices on the rolling mean of tweet counts. It is important to
estimate the uncertainty of such a dependency. For the modeling
of such a dependency, Bayesian regression was applied. The probabilistic
model can be considered as follows:
$\begin{split}&y\sim Student_{t}(\nu,\mu,\sigma),\\\ &\mu=\alpha+\beta
x,\end{split}$ (1)
where $\nu$ is a distribution parameter known as the degrees of freedom. The
target variable $y$ is described by Student’s t-distribution, whose fat tails
make it possible to take into account extreme events and values, enabling
us to estimate uncertainties more accurately. For Bayesian inference
calculations, we used the Python package ’pystan’ for the Stan platform for
statistical modeling [24]. For the analysis, as the independent feature
variable we used z-scores of the rolling mean of the tweet counts time series;
as the target variable we used z-scores of the stock price for the ’MCD’ ticker. As a
result of sampling, the mean value as well as quantiles 0.01, 0.05, 0.95, 0.99
for the target variable were calculated. Figure 11 shows the results of
modeling. Figure 12 shows the probability density function for the $\beta$
parameter. The 0.05 quantile of the predicted target variable can be treated as
the value at risk (VaR), which is a quantitative characteristic for risk assessment.
Figure 11: Normalized time series of stock price for the ’MCD’ ticker (y), prediction for stock price (pred), quantiles for prediction (0.01 - pred_q01, 0.05 - pred_q05, 0.95 - pred_q95, 0.99 - pred_q99)
Figure 12: Probability density function for the $\beta$ model parameter
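A minimal ’pystan’ sketch of model (1) is shown below; the synthetic x and y series stand in for the z-scored rolling mean of tweet counts and the ’MCD’ price, no explicit priors are placed on the parameters (matching the model as stated), and the printed bands are for the regression mean, whereas full predictive bands would additionally draw Student-t noise:

```python
import numpy as np
import pystan

# Synthetic stand-ins for the paper's z-scored series.
rng = np.random.default_rng(0)
x = rng.normal(size=60)
y = 0.8 * x + rng.standard_t(df=4, size=60) * 0.3

model_code = """
data {
  int<lower=1> N;
  vector[N] x;
  vector[N] y;
}
parameters {
  real alpha;
  real beta;
  real<lower=0> sigma;
  real<lower=1> nu;
}
model {
  y ~ student_t(nu, alpha + beta * x, sigma);  // model (1), flat priors
}
"""

fit = pystan.StanModel(model_code=model_code).sampling(
    data={"N": len(x), "x": x, "y": y}, iter=2000, chains=4)
s = fit.extract()
mu = s["alpha"][:, None] + s["beta"][:, None] * x[None, :]
# Posterior mean and the 0.01/0.05/0.95/0.99 bands, as in Figure 11;
# the 0.05 band plays the role of the value at risk (VaR).
print(mu.mean(axis=0))
print(np.quantile(mu, [0.01, 0.05, 0.95, 0.99], axis=0))
```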
## 6 Analysis of Tweets Using Frequent Itemsets
The theory of frequent itemsets and association rules is often used in
intelligent data analysis [25, 26, 27, 28, 29, 30, 31, 32]. In text data
analysis, it can be used to identify and analyze certain sets of objects which
often occur together in large arrays and are characterized by certain features.
Let us consider the algorithms for detecting frequent itemsets and association
rules on the example of processing Twitter microblog messages. Figure 13 shows
keyword frequencies for the specified thematic field ’ukraine nazi’. Using
these keywords, one can calculate frequent itemsets. Figure 14 shows the graph
of frequent itemsets which describes the semantic structure of entities for a
specified thematic field. Figure 15 shows the graph of a subset of
association rules, and Figure 16 shows the association rules represented by a
grouped matrix. Figure 17 shows the association rules which contain the
keyword ’fake’. Figures 18–20 show similar calculations for the thematic
field ’ukraine biological weapon’. Figures 21–23 show the subset of these
frequent itemsets and association rules which contain the keyword ’bird’. The
quantitative characteristics of frequent itemsets, like the value of support,
can be used as predictive features in machine learning models.
Figure 13: Keyword frequencies related to the thematic field ’ukraine nazi’
Figure 14: Graph of semantic frequent itemsets
Figure 15: Graph of association rules
Figure 16: Association rules represented by a grouped matrix
Figure 17: Association rules represented by a grouped matrix with the keyword ’fake’
Figure 18: Keyword frequencies related to the thematic field ’ukraine biological weapon’
Figure 19: Graph of semantic frequent itemsets
Figure 20: Graph of association rules
Figure 21: Graph of semantic frequent itemsets with the keyword ’bird’
Figure 22: Graph of association rules with the keyword ’bird’
Figure 23: Association rules with the keyword ’bird’ represented by a grouped matrix
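The grouped-matrix plots in Figures 16-17 suggest the R ’arules’ package; an equivalent computation can be sketched in Python with ’mlxtend’, where each tweet is a transaction of thematic-field keywords (the transactions below are invented for illustration):

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each tweet is a transaction: the set of thematic-field keywords it contains.
transactions = [
    ["ukraine", "nazi", "fake"],
    ["ukraine", "nazi", "denazification"],
    ["ukraine", "biological", "weapon", "bird"],
    ["ukraine", "nazi"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit_transform(transactions), columns=te.columns_)

itemsets = apriori(onehot, min_support=0.25, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.5)

# Keep only the rules mentioning the keyword 'fake', as in Figure 17.
mask = (rules["antecedents"].apply(lambda s: "fake" in s) |
        rules["consequents"].apply(lambda s: "fake" in s))
print(rules[mask][["antecedents", "consequents", "support", "confidence"]])
```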
## 7 Graph Structure of Tweets
The relationships among users can be considered as a graph, where vertices
denote users and edges denote their connections. Using graph mining
algorithms, one can detect user communities and find ordered lists of users by
various characteristics, such as Hub, Authority, PageRank, Betweenness. To
identify user communities, we used the Community Walktrap algorithm and to
visualize them we used Fruchterman-Reingold algorithm, which are implemented
in the package ’igraph’ [33] for the R programming language environment. The
Community Walktrap algorithm searches for related subgraphs, also called
communities, by random walk [34]. A graph which shows the relationships
between users can be represented by Fruchterman-Reingold algorithm [35]. The
qualitative structure of user’s connections can be used for aggregating
different quantitative time series and, in such a way, creating new features
for predictive models which can be used, for example, for predicting target
variables. Figure 24 shows users connections and revealed communities for the
subset of tweets which are related to the trends under consideration. The
results show that some communities, marked by different colors, are highly
isolated and have only a few connections outside. Such communities can
be treated as suspicious, since artificially created communities for
amplifying manipulative news are also highly isolated, and their activity is
often concentrated on amplification by retweeting tweets from a limited set of
users. Therefore, the numerical characteristics of user communities can have
a predictive potential.
Figure 24: Graph of users’ connections
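The paper performs these computations with the R ’igraph’ package; the same algorithms are exposed by python-igraph, as in the following sketch with an invented list of (retweeter, author) edges:

```python
import igraph as ig

# Invented (retweeter, author) pairs standing in for the real tweet data.
edges = [("a", "b"), ("c", "b"), ("a", "c"),
         ("d", "e"), ("e", "f"), ("d", "f"), ("c", "d")]

g = ig.Graph.TupleList(edges, directed=False)

# Community Walktrap [34] finds communities by random walks;
# Fruchterman-Reingold [35] computes the layout used for Figure 24.
clusters = g.community_walktrap().as_clustering()
layout = g.layout_fruchterman_reingold()

for idx, members in enumerate(clusters):
    print(f"community {idx}:", [g.vs[m]["name"] for m in members])

# Centrality scores used to rank users (Hub, Authority, PageRank, Betweenness).
print("pagerank:", g.pagerank())
print("betweenness:", g.betweenness())
print("hubs:", g.hub_score())
print("authorities:", g.authority_score())
```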
## 8 Conclusion
The obtained results show that an effective system for detecting fake and
manipulative news can be developed using a combined neural network which
consists of three concatenated subnetworks: the subnetwork with a DistilBERT
transformer for tweet texts, the subnetwork with embeddings of tweet words and
usernames of users who retweeted tweets, and the subnetwork for the components
of the singular value decomposition of the TF-IDF matrix for the lists of
usernames of users who retweeted tweets. Discussions on social networks about
companies’ behavior have an impact on their business and their stock prices on
the stock market. To analyze such an impact and make risk assessments,
Bayesian regression can be used. Using the theory of frequent itemsets and
association rules along with thematic fields of keywords makes it possible to
reveal the semantic structure of entities in news messages. The quantitative
characteristics of association rules, like support and confidence, can be used
as features for machine learning predictive models. Using graph theory, the
hidden communities of users can be revealed, and their characteristics can be
used in a machine learning approach for fake and manipulative news detection.
## References
* [1] A. Java, X. Song, T. Finin, and B. Tseng, ‘‘Why we twitter: understanding microblogging usage and communities,’’ in Proceedings of the 9th WebKDD and 1st SNA-KDD 2007 workshop on Web mining and social network analysis, pp. 56–65, 2007.
* [2] H. Kwak, C. Lee, H. Park, and S. Moon, ‘‘What is Twitter, a social network or a news media?,’’ in Proceedings of the 19th international conference on World wide web, pp. 591–600, 2010.
* [3] A. Pak and P. Paroubek, ‘‘Twitter as a corpus for sentiment analysis and opinion mining,’’ in LREc, vol. 10, pp. 1320–1326, 2010.
* [4] M. Cha, H. Haddadi, F. Benevenuto, and K. Gummadi, ‘‘Measuring user influence in twitter: The million follower fallacy,’’ in Proceedings of the International AAAI Conference on Web and Social Media, vol. 4, 2010.
* [5] F. Benevenuto, T. Rodrigues, M. Cha, and V. Almeida, ‘‘Characterizing user behavior in online social networks,’’ in Proceedings of the 9th ACM SIGCOMM conference on Internet measurement, pp. 49–62, 2009.
* [6] J. Bollen, H. Mao, and X. Zeng, ‘‘Twitter mood predicts the stock market,’’ Journal of computational science, vol. 2, no. 1, pp. 1–8, 2011.
* [7] S. Asur and B. A. Huberman, ‘‘Predicting the future with social media,’’ in 2010 IEEE/WIC/ACM international conference on web intelligence and intelligent agent technology, vol. 1, pp. 492–499, IEEE, 2010.
* [8] D. Shamma, L. Kennedy, and E. Churchill, ‘‘Tweetgeist: Can the twitter timeline reveal the structure of broadcast events,’’ CSCW Horizons, pp. 589–593, 2010.
* [9] M. Wang and G. Hu, ‘‘A novel method for twitter sentiment analysis based on attentional-graph neural network,’’ Information, vol. 11, no. 2, p. 92, 2020\.
* [10] V. Balakrishnan, S. Khan, and H. R. Arabnia, ‘‘Improving cyberbullying detection using twitter users’ psychological features and machine learning,’’ Computers & Security, vol. 90, p. 101710, 2020.
* [11] N. Grinberg, K. Joseph, L. Friedland, B. Swire-Thompson, and D. Lazer, ‘‘Fake news on twitter during the 2016 us presidential election,’’ Science, vol. 363, no. 6425, pp. 374–378, 2019.
* [12] O. Ajao, D. Bhowmik, and S. Zargari, ‘‘Fake news identification on twitter with hybrid cnn and rnn models,’’ in Proceedings of the 9th international conference on social media and society, pp. 226–230, 2018.
* [13] S. Helmstetter and H. Paulheim, ‘‘Weakly supervised learning for fake news detection on twitter,’’ in 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pp. 274–277, IEEE, 2018.
* [14] B. M. Pavlyshenko, ‘‘Forecasting of Events by Tweets Data Mining,’’ Electronics and information technologies, no. 10, pp. 71–85, 2018.
* [15] B. M. Pavlyshenko, ‘‘Can Twitter Predict Royal Baby’s Name ?,’’ Electronics and information technologies, no. 11, pp. 52–60, 2019.
* [16] B. M. Pavlyshenko, ‘‘Forming predictive features of tweets for decision-making support,’’ in International Scientific Conference “Intellectual Systems of Decision Making and Problem of Computational Intelligence”, pp. 479–490, Springer, 2021.
* [17] B. Pavlyshenko, ‘‘Classification analysis of authorship fiction texts in the space of semantic fields,’’ Journal of Quantitative Linguistics, vol. 20, no. 3, pp. 218–226, 2013.
* [18] B. Pavlyshenko, ‘‘Clustering of authors’ texts of english fiction in the vector space of semantic fields,’’ Cybernetics and Information Technologies, vol. 14, no. 3, pp. 25–36, 2014.
* [19] B. Pavlyshenko, ‘‘Genetic optimization of keyword subsets in the classification analysis of authorship of texts,’’ Journal of Quantitative Linguistics, vol. 21, no. 4, pp. 341–349, 2014.
* [20] ‘‘Examples of Automatically Recognized Tweets As Fake or Manipulative.’’ URL: https://bit.ly/37u4viL.
* [21] G. Jawahar, M. Abdul-Mageed, and L. V. Lakshmanan, ‘‘Automatic detection of machine generated text: A critical survey,’’ arXiv preprint arXiv:2011.01314, 2020.
* [22] T. Fagni, F. Falchi, M. Gambini, A. Martella, and M. Tesconi, ‘‘Tweepfake: About detecting deepfake tweets,’’ Plos one, vol. 16, no. 5, p. e0251415, 2021.
* [23] T. Murayama, ‘‘Dataset of fake news detection and fact verification: A survey,’’ arXiv preprint arXiv:2111.03299, 2021.
* [24] B. Carpenter, A. Gelman, M. D. Hoffman, D. Lee, B. Goodrich, M. Betancourt, M. Brubaker, J. Guo, P. Li, and A. Riddell, ‘‘Stan: A probabilistic programming language,’’ Journal of statistical software, vol. 76, no. 1, 2017.
* [25] R. Agrawal, R. Srikant, et al., ‘‘Fast algorithms for mining association rules,’’ in Proc. 20th int. conf. very large data bases, VLDB, vol. 1215, pp. 487–499, 1994.
* [26] R. Agrawal, H. Mannila, R. Srikant, H. Toivonen, A. I. Verkamo, et al., ‘‘Fast discovery of association rules.,’’ Advances in knowledge discovery and data mining, vol. 12, no. 1, pp. 307–328, 1996.
* [27] C.-K. Chui, B. Kao, and E. Hung, ‘‘Mining frequent itemsets from uncertain data,’’ in Pacific-Asia Conference on knowledge discovery and data mining, pp. 47–58, Springer, 2007.
* [28] K. Gouda and M. J. Zaki, ‘‘Efficiently mining maximal frequent itemsets,’’ in Proceedings 2001 IEEE International Conference on Data Mining, pp. 163–170, IEEE, 2001.
* [29] R. Srikant, Q. Vu, and R. Agrawal, ‘‘Mining association rules with item constraints.,’’ in Kdd, vol. 97, pp. 67–73, 1997.
* [30] M. Klemettinen, H. Mannila, P. Ronkainen, H. Toivonen, and A. I. Verkamo, ‘‘Finding interesting rules from large sets of discovered association rules,’’ in Proceedings of the third international conference on Information and knowledge management, pp. 401–407, 1994.
* [31] N. Pasquier, Y. Bastide, R. Taouil, and L. Lakhal, ‘‘Discovering frequent closed itemsets for association rules,’’ in International Conference on Database Theory, pp. 398–416, Springer, 1999.
* [32] S. Brin, R. Motwani, and C. Silverstein, ‘‘Beyond market baskets: Generalizing association rules to correlations,’’ in Proceedings of the 1997 ACM SIGMOD international conference on Management of data, pp. 265–276, 1997.
* [33] G. Csardi, T. Nepusz, et al., ‘‘The igraph software package for complex network research,’’ InterJournal, complex systems, vol. 1695, no. 5, pp. 1–9, 2006.
* [34] P. Pons and M. Latapy, ‘‘Computing communities in large networks using random walks,’’ in International symposium on computer and information sciences, pp. 284–293, Springer, 2005.
* [35] T. M. Fruchterman and E. M. Reingold, ‘‘Graph drawing by force-directed placement,’’ Software: Practice and experience, vol. 21, no. 11, pp. 1129–1164, 1991.
# Omni-Training: Bridging Pre-Training and Meta-Training for Few-Shot Learning
Yang Shu, Zhangjie Cao, Jinghan Gao, Jianmin Wang, Philip S. Yu, , Mingsheng
Long The authors are with the School of Software, BNRist, Tsinghua
University, Beijing 100084, China. Corresponding author: Mingsheng Long,
<EMAIL_ADDRESS>
###### Abstract
Few-shot learning aims to fast adapt a deep model from a few examples. While
pre-training and meta-training can create deep models powerful for few-shot
generalization, we find that pre-training and meta-training focus
respectively on cross-domain transferability and cross-task transferability,
which restricts their data efficiency in the entangled settings of domain
shift and task shift. We thus propose the Omni-Training framework to
seamlessly bridge pre-training and meta-training for data-efficient few-shot
learning. Our first contribution is a tri-flow Omni-Net architecture. Besides
the joint representation flow, Omni-Net introduces two parallel flows for pre-
training and meta-training, responsible for improving domain transferability
and task transferability respectively. Omni-Net further coordinates the
parallel flows by routing their representations via the joint-flow, enabling
knowledge transfer across flows. Our second contribution is the Omni-Loss,
which introduces a self-distillation strategy separately on the pre-training
and meta-training objectives for boosting knowledge transfer throughout
different training stages. Omni-Training is a general framework to accommodate
many existing algorithms. Evaluations justify that our single framework
consistently and clearly outperforms the individual state-of-the-art methods
on both cross-task and cross-domain settings in a variety of classification,
regression and reinforcement learning problems.
###### Index Terms:
Few-shot learning, data efficiency, transferability, meta-learning, pre-
training
## 1 Introduction
Deep learning [1] has achieved the state-of-the-art performance in various
machine learning tasks [2, 3, 4, 5]. However, most deep learning methods, in
particular the foundation models [6], are “data hungry”, in that the success
of these methods highly relies on large amounts of labeled data. This clearly
limits the application of deep learning to widespread domains or tasks,
especially those with sparse data and insufficient annotations, such as
personalized healthcare [7]. In order to promote the grounding of deep
learning models, few-shot learning, which aims to fast learn various complex
tasks from a few labeled data, has attracted enormous attention recently [8,
9, 10].
Human beings are gifted with the ability to quickly learn new tasks by making
use of previous experience and knowledge. In analogy to this, deep learning
models can reuse the representations learned previously to help efficiently
solve widespread downstream tasks. Recent advances have revealed that a
properly trained model endows an important property: _transferability_ , and
higher transferability indicates better generalizability to new scenarios. In
general situations as illustrated by Figure 1, complex relationships between
the pretext dataset and the new task hinder the downstream learning and pose
challenges to the transferability of learned representations. The two main
challenges come from the different distributions across domains, _i.e._,
domain shift, and the different semantics across tasks, _i.e._, task shift. For example, in
image classification, different domains may have different visual factors such
as different styles, viewpoints and lighting, while different tasks may have
different categories. In most cases, the two challenges entangle with each
other, making few-shot learning a very hard problem. Thus, a versatile
algorithm should bridge these two gaps and learn representations with both
_domain transferability_ and _task transferability_.
Figure 1: Illustration of the two challenges of few-shot learning. Due to the
_domain shift_ and _task shift_ between the training dataset and the test
dataset, it is hard for the trained model $\mathcal{M}_{\texttt{train}}$ to
transfer to the test set and boost its data-efficiency. An ideal training
method should learn representations with both _domain transferability_ and
_task transferability_ and adapt $\mathcal{M}_{\texttt{train}}$ to downstream
model $\mathcal{M}_{\texttt{test}}$ in a data-efficient way.
Two mainstream representation learning paradigms for few-shot learning are
_pre-training_ and _meta-training_. In pre-training, we train a high-capacity
model for a pretext task on large-scale datasets [4, 11] and fine-tune the
model on the target task [12]. In meta-training, we train the model from
diverse tasks and fast adapt the model to new tasks [13, 9, 14]. As evidenced
by recent studies, neither paradigm can dominate in the widespread few-shot
learning scenarios [15, 16, 17], because few-shot learning requires knowledge
that generalizes across both domains and tasks. Pre-training representations can
transfer to widespread domains, since the pretext task is designed to be
general across domains. However, only pre-training on a single pretext task
makes it hard to fast adapt to many new tasks. In contrast, the diverse tasks
equip meta-training with the ability to fast adapt across many tasks with
extremely sparse data, but the meta-training tasks are usually domain-specific
and thus the learned representations cannot generalize well across domains.
In line with this understanding of pre-training and meta-training, we further
study both paradigms with regard to the two transferability properties and
reach a similar conclusion: pre-training methods are adept at domain
transferability while meta-training methods are adept at task transferability. We
then take a step forward to exploit the collaboration between pre-training and
meta-training and draw an important finding that neither a simple ensemble nor
a tight combination can achieve both kinds of transferability. This finding
motivates us to design a new Omni-Training framework to bridge both sides for
few-shot learning.
Omni-Training seamlessly bridges pre-training and meta-training to learn deep
representations with both domain transferability and task transferability. The
first part is Omni-Net, a tri-flow architecture. Besides a joint-flow for
shared representation learning, Omni-Net introduces two new parallel flows for
pre-training and meta-training to yield representations of domain
transferability and task transferability respectively. It further coordinates
the parallel flows by routing their representations via the joint-flow, making
each gain the other kind of transferability. The second part is Omni-Loss,
which works in cooperation with the architecture for learning transferable
representations. A self-distillation strategy is imposed to both the pre-
training and meta-training objectives, forcing the parallel flows to learn
more transferable representations. Omni-Training is a general framework that
can accommodate many existing pre-training and meta-training algorithms.
Thorough evaluations on cross-task and cross-domain datasets in
classification, regression and reinforcement learning problems show that Omni-
Training consistently and clearly outperforms the individual state-of-the-art
deep learning methods.
## 2 Related Work
Few-shot learning aims to make full use of every sample and address new tasks
with a few labeled data [8, 18, 19]. In this paper, we focus on representation
learning algorithms towards this goal, which aim to learn transferable
representations from pretext data to reduce the data requirement of learning
new tasks. We restrict our review to two mainstream categories of
representation learning algorithms for few-shot learning that achieve state-
of-the-art performance: pre-training and meta-training.
### 2.1 Pre-Training
One line of few-shot learning methods is to learn deep representations by pre-
training deep networks with a pretext task on the training datasets. With the
prevalence of large-scale labeled datasets and the advanced computational
infrastructure, deep networks with extremely big model capacity are trained
for various applications such as computer vision [20, 21, 11] and natural
language processing [4, 22]. With such deep models, recent works revisit the
pre-training and fine-tuning paradigm and demonstrate that fine-tuning high-
capacity deep models pre-trained on large datasets achieves state-of-the-art
performance in various applications with only a few labeled data [23, 24, 25].
Pre-training is also adopted in reinforcement learning to enable learning the
policy for new environments with less interaction steps [26, 27, 28]. More
advanced pre-training strategies also boost few-shot learning performance,
such as training an ensemble of models [29] and training with knowledge
distillation [30].
Other methods focus on the stage of fine-tuning on the new task. For
example, some works reuse the representations to predict parameters of new
categories [31, 32]. Some works regularize the model of the new task from the
aspects of parameters or representations to fully extract the knowledge of the
pre-trained models [33, 34]. Recent research also proposed to explore
relationships between the training and test datasets and mitigate negative
transfer [35, 36]. Cao et al. [24] proposed an ease-in-ease-out fine-tuning
method to enable transfer reinforcement learning across homotopy classes.
These methods focus on a different perspective and are in parallel with this
paper.
Pre-training approaches are simple and effective to improve data efficiency in
new scenarios, which show higher domain transferability and outperform
sophisticated meta-training methods in the cross-domain setting [15, 16, 17].
However, as the training stage only involves one pretext task, these methods
cannot quickly handle the rapid changes of semantics in new tasks [9].
### 2.2 Meta-Training
Meta-training addresses few-shot learning by learning representations
generalizable across many training tasks, which can be naturally adapted to
new tasks [37, 38, 39]. It has been widely used in a variety of applications.
Few-shot learning [8] is widely studied in the field of classification,
especially image recognition, where a typical form is to learn from a few
annotated data, _i.e._ the “N-way-K-shot” few-shot classification problems
[18, 40]. Metric-based meta-learning methods are tailored for these problems,
which learn an embedding space to form decision boundaries according to the
distances between samples [41, 13, 14, 42, 43]. Recently, embedding functions
are improved by stronger inductive bias such as graph networks [44], fine-
grained attention maps [45], task-adaptive projections [46, 47] and set-to-set
functions [48].
Some other meta-learning methods deal with various applications. Early works
build meta-learners to learn how to update the model parameters and generalize
the updating rules to new tasks [49, 50], which have been recently applied in
deep learning to enable fast adaptation of deep networks [51, 52, 53]. Such
learning to learn paradigm is also demonstrated to work for regression [51,
53] and reinforcement learning [54, 55]. Several works equip networks with
external or internal memory so that meta-knowledge can be effectively stored
and queried for data-efficient adaptation to new tasks [56, 57, 58, 59]. The
memory-augmented models are also applied to reinforcement learning to improve
data-efficiency [60, 61, 58]. These methods introduce additional parameters
and storage costs or require a particular architecture of the learner for
meta-learning.
Model-agnostic meta-learning introduces the gradient-based idea, which trains
a good initialization of the deep network as the meta-knowledge such that a
small number of gradient steps and interactions in the new environment can
induce high generalization performance [9]. The idea is later improved by new
architectures [62, 63]. Such gradient-based meta-training methods show strong
performance in real robotics applications such as imitation learning [64, 65],
locomotion [66], visual navigation [67], and robot manipulation [68]. They can
also be extended to other applications such as regression and image
classification by changing the architecture and training objective [9, 69, 70,
71].
Though meta-training empowers the deep representations with the ability to
generalize across new tasks, a recent empirical study has revealed that meta-
trained representations cannot generalize across domains with distribution
shift [15]. Tseng et al. [72] use feature-wise transformation layers to
simulate various image feature distributions extracted from the training tasks
in different domains. However, the domain transferability is still limited
especially in domains with large distribution shift [16]. Our method acquires
the missing piece of domain transferability from _pre-training_ , which does
not require multiple pretext domains but achieves better cross-domain
generalization ability.
Meta-training and pre-training are adept at task transferability and domain
transferability respectively, and neither dominates the other. A natural
idea is to integrate two types of approaches to achieve both. Sun et al. [73]
simply chain the process of pre-training and meta-training, but such a simple
combination still lacks both kinds of transferability. In contrast, our Omni-
Training framework seeks to flexibly bridge pre-training and meta-training to
empower both kinds of transferability.
## 3 Background and Analysis
We first introduce few-shot learning and its two key prerequisites: _domain
transferability_ and _task transferability_. Then we delve into two mainstream
methods, _pre-training_ and _meta-training_ , each of which learns a
representation of a specific kind of transferability and enables
generalization to either new domains or new tasks.
### 3.1 Few-Shot Learning
At the training phase, the goal is to learn a feature representation $F$ on
the training set $\mathcal{D}_{\texttt{train}}$ of sufficient labeled
examples, which enables fast solving new tasks from a few examples. At the
testing phase, the learned representation $F$ is evaluated on new tasks,
either within domain or across domains. Each task comes with a test set
$\mathcal{D}_{\texttt{test}}=\\{\mathcal{S}_{\texttt{test}},\mathcal{Q}_{\texttt{test}}\\}$
partitioned into a support set $\mathcal{S}_{\texttt{test}}$ with a few
labeled examples and a query set $\mathcal{Q}_{\texttt{test}}$ with many
unlabeled examples to predict. The learned representation $F$ should adapt
fast to each new task through the support set and then yield accurate
predictions on the query set.
The key to enable few-shot learning in downstream tasks is the transferability
of the representations. Given input $\mathbf{x}\in\mathcal{X}$ and output
$\mathbf{y}\in\mathcal{Y}$, denote the joint distribution as
$\mathcal{P}(\mathbf{x},\mathbf{y})$ and the learning task as
$\mathcal{T}:\mathbf{x}\mapsto\mathbf{y}$. The _domain transferability_
measures the generalizability under train-test distribution shift,
$\mathcal{P}_{\texttt{train}}\neq\mathcal{P}_{\texttt{test}}$, and the _task
transferability_ measures the generalizability under train-test task shift,
$\mathcal{T}_{\texttt{train}}\neq\mathcal{T}_{\texttt{test}}$. In general
situations of few-shot learning, complex relationships between the training
dataset and the new tasks entangle distribution shift and task shift. So we
should learn representations with both domain transferability and task
transferability to enable data-efficient few-shot learning.
Figure 2: Analysis: (a) Task transferability of pre-training and meta-training
in the _cross-task_ setting; (b) Domain transferability of pre-training and
meta-training in the _cross-domain_ setting; (c) Accuracy of pre-training,
meta-training and two combination strategies, Ensemble and Joint-Training.
### 3.2 Training Methods for Few-Shot Learning
Pre-Training. In pre-training approaches, deep representations are often
learned by supervised learning on a large-scale training dataset
$\mathcal{D}_{\texttt{train}}$, which facilitate data-efficient or few-shot
learning for a variety of downstream tasks. We use an abstract model composed
of a feature extractor $F$ to generate the representation and a task-specific
head $H$ to predict the output, which is applicable to various tasks. During
the training stage, the training set $\mathcal{D}_{\texttt{train}}$ is viewed
as samples from a joint distribution of inputs and labels:
$\mathcal{P}(\mathbf{x},\mathbf{y})$. Representation learning is conducted by
optimizing $H$ and $F$ over the sampled _mini-batches_ from the training
distribution with the loss $\ell_{\texttt{pre}}$ tailored to the specific task
or algorithm:
$\mathop{\min}_{H,F}{\mathbb{E}}_{(\mathbf{x},\mathbf{y})\sim\mathcal{P}(\mathbf{x},\mathbf{y})}\
\ell_{\texttt{pre}}\big{(}\mathbf{y},H\circ F\left(\mathbf{x}\right)\big{)}.$
(1)
During test, we transfer the pre-trained models on the new task
$\mathcal{D}_{\texttt{test}}=\\{\mathcal{S}_{\texttt{test}},\mathcal{Q}_{\texttt{test}}\\}$.
The feature extractor $F$ is fine-tuned and a task-specific head
$H_{\texttt{new}}$ for the new task is trained with the labeled data in
support set $\mathcal{S}_{\texttt{test}}$ and applied in query set
$\mathcal{Q}_{\texttt{test}}$.
Meta-Training. In meta-training, the representations are learned to perform
well across a set of tasks sampled from a task distribution constructed from
the training set. Specifically, the training set
$\mathcal{D}_{\texttt{train}}$ is viewed as a distribution of tasks
$\mathcal{T}$. Each task mimics the testing situation, which contains a
support set $\mathcal{S}$ with only a few labeled samples and a query set
$\mathcal{Q}$ needing predictions. The meta-learner is optimized over
_episodes_ of tasks sampled from $\mathcal{T}$. The model $F$ and $H$ are
learned to efficiently solve each of the tasks conditioned on the support set
$\mathcal{S}$ with only a few samples, and updated by the performance
evaluated on the query set $\mathcal{Q}$:
$\mathop{\min}_{H,F}\mathbb{E}_{(\mathcal{S},\mathcal{Q})\sim\mathcal{P}(\mathcal{T})}\mathbb{E}_{(\mathbf{x},\mathbf{y})\in\mathcal{Q}}\
\ell_{\texttt{meta}}\big{(}\mathbf{y},H\circ
F(\mathbf{x}|\mathcal{S})\big{)},$ (2)
where $\ell_{\texttt{meta}}$ is the loss of specific meta-training algorithms
defined on each episode, _e.g._ , the meta-objective in [9]. In test time, the
models are fast adapted to the new task with its support set
$\mathcal{S}_{\texttt{test}}$ in a similar way as the training phase, and the
adapted models can be used for predictions on the query set
$\mathcal{Q}_{\texttt{test}}$.
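To make the episodic formulation above concrete, the sketch below samples one
N-way K-shot episode from a labeled dataset. It is a minimal illustration: the
function name, data layout and toy dataset are our own choices, not code from
this paper.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=5, n_query=16):
    """Sample one N-way K-shot episode (S, Q) from a list of (x, y) pairs,
    mirroring the episodic sampling in Eq. (2): the support set S conditions
    the model, and the query set Q evaluates the adapted model."""
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    classes = random.sample(sorted(by_class), n_way)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        xs = random.sample(by_class[c], k_shot + n_query)
        support += [(x, episode_label) for x in xs[:k_shot]]
        query += [(x, episode_label) for x in xs[k_shot:]]
    return support, query

# Toy usage: 10 classes with 30 examples each.
toy = [(f"img_{c}_{i}", c) for c in range(10) for i in range(30)]
S, Q = sample_episode(toy)
print(len(S), len(Q))  # 25 support (5x5) and 80 query (5x16) examples
```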
### 3.3 Transferability Assessment
We empirically compare pre-training and meta-training in terms of task
transferability and domain transferability. We evaluate two typical methods,
Baseline [15] as the pre-training method and ProtoNet [14] as the meta-
training method. We first use two benchmarks mini-ImageNet and CUB and follow
the protocol in [15]. Note that we use test tasks from the same dataset to
relieve the influence of distribution shift and mainly focus on _task shift_.
As shown in Figure 2a, the pre-training and meta-training methods perform
comparably on mini-ImageNet-5 (5 examples per class). However, in the more
extreme situation with only 1 example per class, meta-training outperforms
pre-training, where the boost becomes larger on CUB: a fine-grained dataset
with smaller distribution shifts between tasks. The result indicates higher
task transferability of meta-training. Next, we explore the influence of
_distribution shift_ across domains. We train the model on the mini-ImageNet
dataset, but evaluate it on different domains including CUB, Cars, Places and
Plantae. As shown in Figure 2b, pre-training and meta-training have similar
in-domain performance, but pre-training consistently outperforms meta-training
in four cross-domain situations. This result indicates higher domain
transferability of pre-training.
Our key finding is that pre-training introduces higher domain transferability
while meta-training introduces higher task transferability. This explains the
phenomenon that both methods may fail in some few-shot learning scenarios [9,
15, 74, 16]. In general situations, the new tasks hold complex relationships
with the training set, presenting both challenges of distribution shift and
task shift, which entangle with each other. For example, in the in-domain
experiment, there could still be domain shift caused by different categories;
in the cross-domain experiment, while domain shift is the main challenge, task
transferability is still required to adapt across different classes. Overall,
we need to learn representations with both domain transferability and task
transferability to fully enable few-shot learning.
We study two simple ways to combine pre-training and meta-training. One is to
separately train two models with two methods, and use their ensemble for
prediction, denoted as Ensemble. The other is to jointly train the model with
both training objectives, denoted as Joint-Training. We evaluate them on three
situations of mini-ImageNet, CUB, and transferring mini-ImageNet to CUB. As
shown in Figure 2c, both combination strategies promote the performance in
some cases, but the improvement is minor and inconsistent. The gain of
Ensemble indicates that pre-training and meta-training representations endow
complementary knowledge. However, this simple ensemble lacks the knowledge
coordination between pre-training and meta-training. The improvement of Joint-
Training shows the importance of extracting shared knowledge between the two
training paradigms, but this tight combination sacrifices the specific
transferability held by each approach. Such a _transferability dilemma_
motivates the proposed Omni-Training framework, which seeks to flexibly
acquire both domain transferability and task transferability for better few-
shot learning.
## 4 Omni-Training Framework
In this paper, we are interested in learning representations with both domain
transferability and task transferability by incorporating and bridging pre-
training and meta-training in a unified Omni-Training framework. As discussed
in Section 3.3, this goal is non-trivial to realize with simple combinations
of these two training paradigms. Beyond the tight combination of joint-
training, we have two more key insights in designing the framework. Our first
key insight is that the domain transferability of pre-training and the task
transferability of meta-training should be preserved. Furthermore, there
should be knowledge communication between the two types of training to enable
them to complement each other. Our second key insight is that this non-trivial
unification should be realized with the design in both network architectures
and training algorithms. These insights are embedded into the Omni-Training
framework via an Omni-Net architecture guided by an Omni-Loss.
Figure 3: The Omni-Training framework consists of three data flows: joint-flow
(green), pre-flow (blue), and meta-flow (red). The Omni-Net consists of a
backbone $F$ and an Omni-Head $H$, where $F$ is formed by stacking Omni-Layers
and $H$ is formed of three heads $H_{\texttt{joint}}$, $H_{\texttt{pre}}$ and
$H_{\texttt{meta}}$. Each Omni-Layer has a main chunk layer
$f_{\texttt{joint}}$ and two lightweight branch layers $f_{\texttt{pre}}$ and
$f_{\texttt{meta}}$, followed by activation functions $a_{\texttt{joint}}$,
$a_{\texttt{pre}}$, $a_{\texttt{meta}}$. The Omni-Loss consists of three
losses respectively for joint-training $\mathcal{L}_{\texttt{joint}}$, pre-
training $\mathcal{J}_{\texttt{pre}}$, and meta-training
$\mathcal{J}_{\texttt{meta}}$, computed on the corresponding head. We also
propose a self-distillation strategy for training the pre-flow and meta-flow,
which transfers knowledge throughout the training process.
### 4.1 Omni-Net
Omni-Net is a tri-flow architecture that is constructed by stacking Omni-
Layers for representation learning and attaching an Omni-Head for output
prediction, as shown in Figure 3.
Omni-Layer. We aim to simultaneously preserve the domain transferability of
pre-training and the task transferability of meta-training, and promote
knowledge communication between them. Thus, as shown in Figure 3, we design an
Omni-Layer consisting of a main chunk layer $f_{\texttt{joint}}$ and two
parallel branch layers $f_{\texttt{pre}}$ and $f_{\texttt{meta}}$. It enables
three interdependent data flows with different network parameters. In the
_joint-flow_ , the training data only go through $f_{\texttt{joint}}$, which
is jointly trained by pre-training and meta-training to extract common
knowledge as well as to coordinate the two parallel flows for a better
communication between them. Besides, the two parallel data flows for pre-
training and meta-training are respectively responsible for maintaining domain
transferability and task transferability. For pre-training, the data pass
through both $f_{\texttt{joint}}$ and $f_{\texttt{pre}}$, and then these two
outputs are added as the output of this Omni-Layer in the data flow. We denote
this data flow as _pre-flow_. Similarly, for meta-training and its
corresponding _meta-flow_ , the output is derived by adding the outputs of
$f_{\texttt{joint}}$ and $f_{\texttt{meta}}$. Overall, the transformation
function of the three parallel data flows in the $l$-th Omni-Layer can be
summarized as:
$\mathbf{x}^{l}=\left\\{\begin{array}[]{ll}f_{\texttt{joint}}^{l}(\mathbf{x}^{l-1})+f_{\texttt{pre}}^{l}(\mathbf{x}^{l-1})&{\text{$\mathbf{x}\in$ pre-flow}}\\\ f_{\texttt{joint}}^{l}(\mathbf{x}^{l-1})&{\text{$\mathbf{x}\in$ joint-flow}}\\\ f_{\texttt{joint}}^{l}(\mathbf{x}^{l-1})+f_{\texttt{meta}}^{l}(\mathbf{x}^{l-1})&{\text{$\mathbf{x}\in$ meta-flow}}\end{array}\right.$ (3)
This architecture can be derived from existing backbones by keeping their
original layers as the chunk layer $f_{\texttt{joint}}$ and
adding two similar branch layers $f_{\texttt{pre}}$ and $f_{\texttt{meta}}$.
We design the two parallel branches as _lightweight_ layers compared to the
chunk layer, which maintains parameter efficiency of the Omni-Training
framework. For example, if $f_{\texttt{joint}}$ is a convolution layer with
large kernels such as $7\times 7$ or $3\times 3$, $f_{\texttt{pre}}$ and
$f_{\texttt{meta}}$ can be convolution layers with smaller kernels such as
$1\times 1$. Some existing architectures may introduce some additional special
layers such as batch normalization and various activation functions. We let
each data flow have its specific copy of these additional layers (denoted as
$a_{\texttt{joint}}$, $a_{\texttt{pre}}$ and $a_{\texttt{meta}}$), which
strengthens the specificity of the three data flows. We omit these additional
layers in the equations for simplicity.
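For illustration, here is a minimal PyTorch sketch of a single Omni-Layer
implementing Eq. (3), with a $3\times 3$ chunk convolution and $1\times 1$
branch convolutions as in the example above. The module name and the `flow`
argument are our own assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class OmniLayer(nn.Module):
    """Tri-flow layer following Eq. (3): a main chunk f_joint plus two
    lightweight branches f_pre and f_meta, with flow-specific
    normalization/activation (a_joint, a_pre, a_meta)."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.f_joint = nn.Conv2d(in_ch, out_ch, 3, stride, padding=1, bias=False)
        self.f_pre = nn.Conv2d(in_ch, out_ch, 1, stride, bias=False)   # lightweight branch
        self.f_meta = nn.Conv2d(in_ch, out_ch, 1, stride, bias=False)  # lightweight branch
        self.act = nn.ModuleDict({
            flow: nn.Sequential(nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for flow in ("joint", "pre", "meta")
        })

    def forward(self, x, flow="joint"):
        out = self.f_joint(x)
        if flow == "pre":
            out = out + self.f_pre(x)    # pre-flow: f_joint + f_pre
        elif flow == "meta":
            out = out + self.f_meta(x)   # meta-flow: f_joint + f_meta
        return self.act[flow](out)

x = torch.randn(2, 16, 32, 32)
layer = OmniLayer(16, 32)
print(layer(x, "pre").shape)  # torch.Size([2, 32, 32, 32])
```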
We stack the Omni-Layers to construct the backbone for Omni-Training, and the
tri-flow in each layer expands to the entire data flows in the whole backbone.
Specifically, we use $F_{\texttt{joint}}$ to denote the overall function of
the _joint-flow_ which stacks $f_{\texttt{joint}}^{l}$ in the backbone:
$F_{\texttt{joint}}=f_{\texttt{joint}}^{L}\circ\cdots\circ
f_{\texttt{joint}}^{l}\circ\cdots\circ f_{\texttt{joint}}^{1}.$ (4)
We use $F_{\texttt{pre}}$ to denote the overall function of the stacked layers
in the backbone that encodes the _pre-flow_ , which enables knowledge routing
by adding the joint-flow:
$F_{\texttt{pre}}=\big{(}f_{\texttt{pre}}^{L}+f_{\texttt{joint}}^{L}\big{)}\circ\cdots\circ\big{(}f_{\texttt{pre}}^{l}+f_{\texttt{joint}}^{l}\big{)}\circ\cdots\circ\big{(}f_{\texttt{pre}}^{1}+f_{\texttt{joint}}^{1}\big{)}.$
(5)
Similarly, we use $F_{\texttt{meta}}$ to denote the overall function of the
stacked layers in the backbone that encodes the _meta-flow_ , which enables
knowledge routing by adding the joint-flow:
$F_{\texttt{meta}}=\big{(}f_{\texttt{meta}}^{L}+f_{\texttt{joint}}^{L}\big{)}\circ\cdots\circ\big{(}f_{\texttt{meta}}^{l}+f_{\texttt{joint}}^{l}\big{)}\circ\cdots\circ\big{(}f_{\texttt{meta}}^{1}+f_{\texttt{joint}}^{1}\big{)}.$
(6)
Such a stacked tri-flow encoding backbone has several benefits. First, it is
parameter-efficient: the main chunk parameters are reused to encode different
data flows, so the architecture requires far fewer parameters than encoding
these flows separately. Second, knowledge is softly shared among pre-training,
meta-training, and joint-training by routing through the shared parameters in
the architecture. Third, the Omni-Layer is not restricted to any specific
architecture choice, but is generally applicable to various backbones in
representation learning methods.
Omni-Head. The Omni-Head $H$ generates the final predictions of the three data
flows with the backbone representations. Specifically, $H$ consists of three
heads: a joint-head $H_{\texttt{joint}}$, a pre-head $H_{\texttt{pre}}$ and a
meta-head $H_{\texttt{meta}}$. Each head takes the corresponding data flow
representations in the backbone as its input and outputs the prediction.
Architectures of the three heads depend on the task, _e.g._ , for classification
problems, the heads can be classifiers with a single fully-connected layer. The
separate outputs for the three data flows enable the use of different losses
to train the three flows as introduced in Omni-Loss below. By chaining the
backbone and the Omni-Head, we obtain the Omni-Net architecture.
### 4.2 Omni-Loss
Based on the Omni-Net architecture, our general idea is to train the
parameters of each data flow with the corresponding pre-training or meta-
training algorithm, and enhance the transferability of each flow through the
Omni-Loss.
Joint-Training. Joint-training is performed on the joint-flow with the losses
of both pre-training and meta-training. In each iteration, we sample a
standard _mini-batch_ $\mathcal{B}$ and a _task episode_
$\\{\mathcal{S},\mathcal{Q}\\}$ from the large-scale training set
$\mathcal{D}_{\texttt{train}}$. We add the pre-training loss with the mini-
batch data and the meta-training loss with the sampled task on the joint-head
$H_{\texttt{joint}}$. The joint-training loss is
$\mathcal{L}_{\texttt{joint}}=\mathbb{E}_{\mathcal{B}\sim\mathcal{P}(\mathbf{x},\mathbf{y})}\mathbb{E}_{(\mathbf{x},\mathbf{y})\in\mathcal{B}}\ \ell_{\texttt{pre}}\big{(}\mathbf{y},H_{\texttt{joint}}\circ F_{\texttt{joint}}(\mathbf{x})\big{)}+\mathbb{E}_{(\mathcal{S},\mathcal{Q})\sim\mathcal{P}(\mathcal{T})}\mathbb{E}_{(\mathbf{x},\mathbf{y})\in\mathcal{Q}}\ \ell_{\texttt{meta}}\big{(}\mathbf{y},H_{\texttt{joint}}\circ F_{\texttt{joint}}(\mathbf{x}|\mathcal{S})\big{)},$
(7)
where $\ell_{\texttt{pre}}$ and $\ell_{\texttt{meta}}$ are the losses of pre-
training and meta-training algorithms respectively. Though the joint-training
extracts shared features between the two training paradigms, such a naive
combination fails to endow representations with both domain transferability
and task transferability simultaneously, as we have shown in Section 3.3.
Therefore, we further perform pre-training and meta-training on the two
parallel data flows respectively to explicitly preserve domain transferability
and task transferability.
Pre-Training. To specifically acquire domain transferability in the network,
we perform pre-training on the pre-flow. In each iteration, we feed each
sample $(\mathbf{x},\mathbf{y})$ from the mini-batch $\mathcal{B}$ into the
pre-flow of the Omni-Net, going through $F_{\texttt{pre}}$ and
$H_{\texttt{pre}}$, and control the final output by the pre-training loss on
the pre-flow:
$\mathcal{L}_{\texttt{pre}}=\mathbb{E}_{\mathcal{B}\sim\mathcal{P}(\mathbf{x},\mathbf{y})}\mathbb{E}_{(\mathbf{x},\mathbf{y})\in\mathcal{B}}\
\ell_{\texttt{pre}}\big{(}\mathbf{y},H_{\texttt{pre}}\circ
F_{\texttt{pre}}(\mathbf{x})\big{)}.$ (8)
In addition to the knowledge transfer across different branches, we further
enhance the specific transferability on each parallel branch throughout the
learning process. To realize this, we employ a self-distillation strategy. Let
$\theta$ denote all the parameters in the backbone $F$ and the Omni-Head $H$,
and let $i$ denote the training step. We keep a temporal ensemble of the
network during the learning process, _i.e._ , an exponential moving
average (EMA) of the model parameters $\widetilde{\theta}$, which is updated
smoothly during training:
$\widetilde{\theta}_{i}=\alpha\widetilde{\theta}_{i-1}+(1-\alpha)\theta_{i}.$
(9)
The EMA model gathers knowledge from different training stages and serves as a
teacher to guide the training of the current Omni-Net. In each iteration, the
EMA model transfers knowledge to each parallel branch through knowledge
distillation. We implement this idea into _self-distillation regularization_
for the pre-flow:
$\mathcal{R}_{\texttt{pre}}=\mathbb{E}_{\mathcal{B}\sim\mathcal{P}(\mathbf{x},\mathbf{y})}\mathbb{E}_{(\mathbf{x},\mathbf{y})\in\mathcal{B}}\
\ell_{2}\big{(}\widetilde{H}_{\texttt{pre}}\circ\widetilde{F}_{\texttt{pre}}(\mathbf{x}),H_{\texttt{pre}}\circ
F_{\texttt{pre}}(\mathbf{x})\big{)},$ (10)
where $\widetilde{F}_{\texttt{pre}}$ and $\widetilde{H}_{\texttt{pre}}$ denote
the mapping functions of pre-flow and pre-head in the EMA model with the
temporal ensemble parameters of $\widetilde{\theta}$, and $\ell_{2}$ is the
squared loss. The pre-training loss improved by the self-distillation for the
pre-flow is
$\mathcal{J}_{\texttt{pre}}=\mathcal{L}_{\texttt{pre}}+\lambda\mathcal{R}_{\texttt{pre}},$
(11)
with $\lambda$ being a hyper-parameter to trade-off the original pre-training
loss and the self-distillation regularization.
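A minimal sketch of the EMA update in Eq. (9) and the self-distillation
regularization in Eq. (10) is given below; the helper names are ours, and a
plain linear layer stands in for one flow of Omni-Net.

```python
import copy
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_ema(model, ema_model, alpha=0.99):
    """Eq. (9): smooth the EMA (teacher) parameters toward the student."""
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.mul_(alpha).add_(p, alpha=1 - alpha)

def self_distillation(student_out, teacher_out):
    """Eq. (10): squared loss between the EMA teacher's prediction and the
    current flow's prediction; the teacher is never back-propagated through."""
    return F.mse_loss(student_out, teacher_out.detach())

# Toy usage with a linear layer standing in for a flow of Omni-Net.
model = torch.nn.Linear(8, 4)
ema_model = copy.deepcopy(model)
x = torch.randn(2, 8)
r_pre = self_distillation(model(x), ema_model(x))  # R_pre of Eq. (10)
update_ema(model, ema_model)                       # refresh the teacher
```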
Meta-Training. To acquire task transferability in the network, in each
iteration, we perform meta-training on the meta-flow with the sampled task
episode $(\mathcal{S},\mathcal{Q})$. Data in the support set $\mathcal{S}$ are
fed into the meta-flow to obtain the conditioned model. Then, each sample
$(\mathbf{x},\mathbf{y})$ from the query set $\mathcal{Q}$ passes through the
meta-flow conditioned on the support set to derive the meta-training loss:
$\mathcal{L}_{\texttt{meta}}=\mathbb{E}_{(\mathcal{S},\mathcal{Q})\sim\mathcal{P}(\mathcal{T})}\mathbb{E}_{(\mathbf{x},\mathbf{y})\in\mathcal{Q}}\
\ell_{\texttt{meta}}\big{(}\mathbf{y},H_{\texttt{meta}}\circ
F_{\texttt{meta}}(\mathbf{x}|\mathcal{S})\big{)}.$ (12)
Similar to the pre-flow, we impose the _self-distillation regularization_ to
improve the transferability of the meta-learned representations across the
training process for the meta-flow:
$\mathcal{R}_{\texttt{meta}}=\mathbb{E}_{(\mathcal{S},\mathcal{Q})\sim\mathcal{P}(\mathcal{T})}\mathbb{E}_{(\mathbf{x},\mathbf{y})\in\mathcal{Q}}\ \ell_{2}\big{(}\widetilde{H}_{\texttt{meta}}\circ\widetilde{F}_{\texttt{meta}}(\mathbf{x}|\mathcal{S}),H_{\texttt{meta}}\circ F_{\texttt{meta}}(\mathbf{x}|\mathcal{S})\big{)},$
(13)
where $\widetilde{F}_{\texttt{meta}}$ and $\widetilde{H}_{\texttt{meta}}$
denote the mapping functions of the meta-flow and meta-head in the EMA model,
and $\ell_{2}$ is the squared loss. The training loss for the meta-flow
includes the original meta-training loss and the self-distillation
regularization as
$\mathcal{J}_{\texttt{meta}}=\mathcal{L}_{\texttt{meta}}+\lambda\mathcal{R}_{\texttt{meta}},$
(14)
with $\lambda$ to trade-off the original meta-training loss and the
regularization term.
### 4.3 Overall Framework
Training. We train Omni-Net with the Omni-Loss to perform joint-training, pre-
training and meta-training simultaneously:
$\mathcal{O}_{\texttt{Omni}}=\mathcal{J}_{\texttt{pre}}+\mathcal{J}_{\texttt{meta}}+\mathcal{L}_{\texttt{joint}}.$
(15)
With the cooperation of Omni-Net and Omni-Loss, our framework trains the two
parallel flows to obtain both domain transferability and task transferability
and coordinates the two parallel flows to enable their knowledge
communication, addressing both challenges of _domain shift_ and _task shift_
in few-shot learning problems.
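As a compact illustration of the overall objective in Eq. (15), the runnable
sketch below wires a toy tri-flow network to the three losses. For brevity,
one mini-batch stands in for both the pre-training batch and the meta-training
query, and all names are illustrative assumptions rather than the authors'
released code.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyOmniNet(nn.Module):
    """Toy stand-in for Omni-Net: one shared trunk with two lightweight
    branches (Eq. 3) and three task heads (joint, pre, meta)."""
    def __init__(self, d=8, k=4):
        super().__init__()
        self.f_joint = nn.Linear(d, d)
        self.f_pre = nn.Linear(d, d)
        self.f_meta = nn.Linear(d, d)
        self.heads = nn.ModuleDict({f: nn.Linear(d, k) for f in ("joint", "pre", "meta")})

    def forward(self, x, flow):
        h = self.f_joint(x)
        if flow == "pre":
            h = h + self.f_pre(x)
        elif flow == "meta":
            h = h + self.f_meta(x)
        return self.heads[flow](h)

net, lam = TinyOmniNet(), 3.0
ema = copy.deepcopy(net)                     # EMA teacher, refreshed via Eq. (9)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x, y = torch.randn(16, 8), torch.randint(0, 4, (16,))

l_joint = F.cross_entropy(net(x, "joint"), y)                     # part of Eq. (7)
j_pre = F.cross_entropy(net(x, "pre"), y) \
      + lam * F.mse_loss(net(x, "pre"), ema(x, "pre").detach())   # Eq. (11)
j_meta = F.cross_entropy(net(x, "meta"), y) \
      + lam * F.mse_loss(net(x, "meta"), ema(x, "meta").detach()) # Eq. (14)
loss = j_pre + j_meta + l_joint                                   # Eq. (15)
opt.zero_grad(); loss.backward(); opt.step()
```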
Inference. At test time, we transfer the knowledge learned by Omni-Training by
reusing or fine-tuning the learned backbone and retraining a new Omni-Head for
the new task on the labeled data in the support set
$\mathcal{S}_{\texttt{test}}$. Since we focus on the representation learning
stage rather than on test-time adaptation techniques, we train the
new Omni-Head consisting of a new joint-head
$H^{\texttt{new}}_{\texttt{joint}}$, a new pre-head
$H^{\texttt{new}}_{\texttt{pre}}$ and a new meta-head
$H^{\texttt{new}}_{\texttt{meta}}$ following the corresponding algorithms we
have used for pre-training and meta-training. Then for each test sample
$\mathbf{x}\in\mathcal{Q}_{\texttt{test}}$, we predict $\mathbf{x}$ using one
of the three heads or their ensemble based on the real application
constraints. For example, if we need to deploy the model to a real-time
prediction application, we only use the prediction of the meta-head for fast
adaptation using only a few gradient updates. If there is no resource
restriction, we can use the ensemble of all three heads for more accurate
predictions.
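The head selection at inference can be summarized in a few lines. The sketch
below averages the softmax outputs of the three heads for the ensemble mode;
the function and the toy logits are hypothetical stand-ins for the actual
model outputs.

```python
import torch

def predict(logits_joint, logits_pre, logits_meta, mode="ensemble"):
    """Predict with a single head or with the ensemble of all three heads."""
    if mode == "ensemble":
        probs = [torch.softmax(l, dim=-1) for l in (logits_joint, logits_pre, logits_meta)]
        return torch.stack(probs).mean(0).argmax(-1)
    return {"joint": logits_joint, "pre": logits_pre, "meta": logits_meta}[mode].argmax(-1)

# Toy usage with 10 query samples and 5 classes.
l = [torch.randn(10, 5) for _ in range(3)]
print(predict(*l, mode="ensemble").shape)  # torch.Size([10])
```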
## 5 Omni-Training Algorithms
We provide instantiations and implementations of the Omni-Training framework
by incorporating some mainstream pre-training and meta-training algorithms.
The framework can generalize to a wider variety of algorithms as shown in our
experiments.
### 5.1 Pre-Training Algorithms
Classification. The pre-training algorithm for classification is known as
Baseline [15] in the few-shot learning literature. To instantiate,
$H_{\texttt{pre}}$ is a fully-connected layer with weights
$[\mathbf{w}_{1},...,\mathbf{w}_{K}]$ and biases $[b_{1},...,b_{K}]$ for $K$
classes. $F_{\texttt{pre}}$ and $H_{\texttt{pre}}$ are pre-trained on the
training dataset $\mathcal{D}_{\texttt{train}}$ by using cross-entropy as
$\ell_{\texttt{pre}}$:
$\ell_{\texttt{pre}}\big{(}\mathbf{y},H_{\texttt{pre}}\circ
F_{\texttt{pre}}(\mathbf{x})\big{)}=-\log\left(\frac{\exp(\mathbf{w}_{y}^{T}F_{\texttt{pre}}(\mathbf{x}))}{\sum_{k}\exp(\mathbf{w}_{k}^{T}F_{\texttt{pre}}(\mathbf{x}))}\right),$
(16)
where $y$ is the class index of the ground-truth class label $\mathbf{y}$ for
$\mathbf{x}$. The model is then fine-tuned on the support set
$\mathcal{S}_{\texttt{test}}$ for the new task with a new classification head
$H^{\texttt{new}}_{\texttt{pre}}$.
Regression. In the pre-training algorithm for regression, we use a fully-
connected layer as the pre-head $H_{\texttt{pre}}$ to predict the output. Here
the loss is defined as the squared error between the target value $y$ and the
prediction, also known as the L2 loss:
$\ell_{\texttt{pre}}\big{(}y,H_{\texttt{pre}}\circ
F_{\texttt{pre}}(\mathbf{x})\big{)}=\big{(}H_{\texttt{pre}}\circ
F_{\texttt{pre}}(\mathbf{x})-y\big{)}^{2}.$ (17)
Reinforcement Learning. In the pre-training algorithm for reinforcement
learning, we use the policy gradient in REINFORCE [75]. The Omni-Net serves as
the parameterized policy $\pi=H_{\texttt{pre}}\circ F_{\texttt{pre}}$ with a
fully-connected head $H_{\texttt{pre}}$ to predict the action given a state.
Here the loss is defined as the expected return over the policy:
$\ell_{\texttt{pre}}\left(H_{\texttt{pre}}\circ
F_{\texttt{pre}}\right)=\mathbb{E}_{\tau\sim\pi}\big{[}\sum\nolimits_{t=0}^{\infty}{r}(s_{t},a_{t})\big{]}$.
The gradient of the pre-training loss $\ell_{\texttt{pre}}$ with respect to
the parameters $\theta$ of the policy $\pi$, _i.e._ , the policy gradient, is
defined as
$\nabla_{\theta}\ell_{\texttt{pre}}\left(H_{\texttt{pre}}\circ
F_{\texttt{pre}}\right)=\sum\limits_{s}p^{\pi}(s)\sum\limits_{a}\nabla_{\theta}\pi(a|s)Q^{\pi}(s,a).$
(18)
Here $p^{\pi}(s)$ is the discounted weighting of the probability of
encountering state $s$ from the initial states, and $Q^{\pi}$ is the
Q-function of $\pi$ [76].
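For concreteness, the sketch below implements the standard REINFORCE surrogate
loss, whose gradient matches the policy gradient above; the state and action
dimensions and the per-step returns are toy values of our own, not the paper's
configuration.

```python
import torch
import torch.nn as nn

# Toy policy network: maps a 2D state to a categorical action distribution.
policy = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 4))

def reinforce_loss(states, actions, returns):
    """Surrogate objective -E[ log pi(a|s) * return ]; its gradient is the
    policy gradient, so minimizing it maximizes the expected return."""
    log_probs = torch.log_softmax(policy(states), dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    return -(chosen * returns).mean()

# One trajectory of 5 steps with made-up discounted returns.
s = torch.randn(5, 2)
a = torch.randint(0, 4, (5,))
g = torch.tensor([1.0, 0.9, 0.8, 0.7, 0.6])
reinforce_loss(s, a, g).backward()
```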
### 5.2 Meta-Training Algorithms
Model-Agnostic Meta-Learning (MAML). In meta-training, we first consider
model-agnostic meta-learning (MAML) [9], a gradient-based learning rule to
rapidly adapt to new tasks with few data and gradient steps. In each
iteration, we sample an episode of a support set $\mathcal{S}$ and a query set
$\mathcal{Q}$, and optimize the MAML loss:
$\displaystyle\ell_{\texttt{meta}}\big{(}\mathbf{y},H_{\texttt{meta}}\circ
F_{\texttt{meta}}(\mathbf{x}|\mathcal{S})\big{)}=\ell\big{(}\mathbf{y},H_{\texttt{meta}}\circ
F_{\texttt{meta}}(\mathbf{x};\theta^{\prime})\big{)},$ (19)
for each sample $(\mathbf{x},\mathbf{y})\in\mathcal{Q}$ in the query set. Here
$\theta$ denotes the parameters of $H_{\texttt{meta}}$ and $F_{\texttt{meta}}$
in the meta-flow, and
$\theta^{\prime}=\theta-\eta\nabla_{\theta}\mathbb{E}_{(\mathbf{x},\mathbf{y})\in\mathcal{S}}\
\ell\big{(}\mathbf{y},H_{\texttt{meta}}\circ
F_{\texttt{meta}}(\mathbf{x};\theta)\big{)}$ denotes the model parameters after
a single gradient update with step size $\eta$ on the support set
$\mathcal{S}$. MAML has few
restrictions on the model architecture and learning task, and can be widely
used on various tasks such as regression, classification and reinforcement
learning, by specifying the task-aware loss $\ell$.
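The sketch below walks through one MAML inner and outer update on a toy linear
regressor; the model and the step-size value are our own illustrative choices.
The key detail is `create_graph=True`, which lets the outer (meta) gradient
flow through the inner update.

```python
import torch

# Tiny regression model y = w*x + b treated functionally for clarity.
w = torch.randn(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

def model(x, params):
    return params[0] * x + params[1]

def mse(pred, target):
    return ((pred - target) ** 2).mean()

x_s, y_s = torch.randn(5), torch.randn(5)    # support set S
x_q, y_q = torch.randn(16), torch.randn(16)  # query set Q
eta = 0.01                                   # inner-loop step size

# Inner update: theta' = theta - eta * grad of the support loss.
inner = mse(model(x_s, (w, b)), y_s)
gw, gb = torch.autograd.grad(inner, (w, b), create_graph=True)
w_prime, b_prime = w - eta * gw, b - eta * gb

# Outer (meta) objective, Eq. (19): query loss at the adapted parameters.
meta_loss = mse(model(x_q, (w_prime, b_prime)), y_q)
meta_loss.backward()  # gradients land on the original (w, b)
```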
Prototypical Networks. In the few-shot learning literature, one of the well-
established meta-training algorithms is ProtoNet [14]. Let $\mathcal{S}_{k}$
denote the samples with the class index $k$ in a support set $\mathcal{S}$ in
the episode, the prototype of this class $\mathbf{c}_{k}$ is the mean of the
embedded data in $\mathcal{S}_{k}$:
$\mathbf{c}_{k}=\mathbb{E}_{(\mathbf{x},\mathbf{y})\in{\mathcal{S}_{k}}}F_{\texttt{meta}}(\mathbf{x})$.
A metric-based classifier predicts the probability distribution of each query
point $\mathbf{x}$ based on its Euclidean distances $d$ to the prototypes,
which is penalized by a cross-entropy loss for classification:
$\ell_{\texttt{meta}}\big{(}\mathbf{y},H_{\texttt{meta}}\circ F_{\texttt{meta}}(\mathbf{x}|\mathcal{S})\big{)}=-\log\left(\frac{\exp(-d(F_{\texttt{meta}}(\mathbf{x}),\mathbf{c}_{y}))}{\sum_{k=1}^{K}\exp(-d(F_{\texttt{meta}}(\mathbf{x}),\mathbf{c}_{k}))}\right).$
(20)
For new tasks, the labeled data in the support set
$\mathcal{S}_{\texttt{test}}$ are used to compute the prototypes of each new
class. Then we can classify new samples in the query set
$\mathcal{Q}_{\texttt{test}}$ by their nearest prototype.
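To illustrate Eq. (20), the following sketch computes class prototypes from
support embeddings and the prototypical loss on query embeddings; the
embedding dimension and episode sizes are toy values of our own.

```python
import torch

def protonet_loss(support_emb, support_y, query_emb, query_y):
    """Eq. (20): class prototypes are mean support embeddings; queries are
    classified by a softmax over negative Euclidean distances."""
    classes = support_y.unique()
    protos = torch.stack([support_emb[support_y == c].mean(0) for c in classes])
    dists = torch.cdist(query_emb, protos)           # (n_query, n_way)
    log_p = torch.log_softmax(-dists, dim=1)
    target = (query_y.unsqueeze(1) == classes.unsqueeze(0)).float().argmax(1)
    return -log_p.gather(1, target.unsqueeze(1)).mean()

# 5-way toy episode: 3 support and 4 query embeddings per class, dim 64.
y_s = torch.arange(5).repeat_interleave(3)
y_q = torch.arange(5).repeat_interleave(4)
loss = protonet_loss(torch.randn(15, 64), y_s, torch.randn(20, 64), y_q)
```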
## 6 Experiments
We evaluate our Omni-Training framework with comprehensive experiments on
cross-task and cross-domain settings in classification, regression and
reinforcement learning problems to assess the few-shot learning performance.
All code and datasets will be available online at
https://github.com/thuml/Omni-Training.
### 6.1 Classification
TABLE I: The results of the new tasks with $5$ or $1$ labeled samples per class on mini-ImageNet and CUB datasets.
Method | Backbone | mini-ImageNet $K=5$ | mini-ImageNet $K=1$ | CUB $K=5$ | CUB $K=1$
---|---|---|---|---|---
MatchingNet [13] | ResNet-18 | $68.88\pm 0.69$ | $52.91\pm 0.88$ | $83.64\pm 0.60$ | $72.36\pm 0.90$
ProtoNet [14] | ResNet-18 | $73.68\pm 0.65$ | $54.16\pm 0.82$ | $87.42\pm 0.48$ | $71.88\pm 0.91$
RelationNet [42] | ResNet-18 | $69.83\pm 0.68$ | $52.48\pm 0.86$ | $82.75\pm 0.58$ | $67.59\pm 1.02$
MAML [9] | ResNet-18 | $65.72\pm 0.77$ | $49.61\pm 0.92$ | $82.70\pm 0.65$ | $69.96\pm 1.01$
TADAM [46] | ResNet-12 | $76.70\pm 0.30$ | $58.50\pm 0.30$ | $-$ | $-$
GNN [44] | ResNet-18 | $78.80\pm 0.78$ | $57.40\pm 0.98$ | $90.74\pm 0.57$ | $78.52\pm 1.03$
LEO [69] | WRN28-10 | $77.59\pm 0.12$ | $61.76\pm 0.08$ | $78.27\pm 0.16$ | $68.22\pm 0.22$
Baseline [15] | ResNet-18 | $74.27\pm 0.63$ | $51.75\pm 0.80$ | $82.85\pm 0.55$ | $65.51\pm 0.87$
Baseline++ [15] | ResNet-18 | $75.68\pm 0.63$ | $51.87\pm 0.77$ | $83.58\pm 0.54$ | $67.02\pm 0.90$
MTL [73] | ResNet-12 | $75.50\pm 0.80$ | $61.20\pm 1.80$ | $-$ | $-$
MetaOpt [71] | ResNet-12 | $78.63\pm 0.36$ | $62.64\pm 0.61$ | $90.90\pm 0.23$ | $80.23\pm 0.44$
TapNet [47] | ResNet-12 | $76.36\pm 0.10$ | $61.65\pm 0.15$ | $-$ | $-$
Robust20 [29] | ResNet-18 | $81.59\pm 0.42$ | $63.95\pm 0.61$ | $84.62\pm 0.44$ | $69.47\pm 0.69$
CAN [45] | ResNet-12 | $79.44\pm 0.34$ | $63.85\pm 0.48$ | $-$ | $-$
RFS [30] | ResNet-12 | $82.14\pm 0.43$ | $64.82\pm 0.60$ | $-$ | $-$
Neg-Margin [77] | ResNet-18 | $80.94\pm 0.59$ | $62.33\pm 0.82$ | $89.40\pm 0.40$ | $72.66\pm 0.90$
PMM [78] | ResNet-18 | $77.76\pm 0.58$ | $60.11\pm 0.73$ | $86.01\pm 0.50$ | $73.94\pm 1.10$
Multi-Task [79] | ResNet-12 | $77.72\pm 0.09$ | $59.84\pm 0.22$ | $-$ | $-$
Meta-Maxup [80] | ResNet-12 | $79.38\pm 0.24$ | $62.81\pm 0.34$ | $-$ | $-$
OT-Proto | ResNet-18 | $81.26\pm 0.57$ | $64.31\pm 0.86$ | $91.09\pm 0.38$ | $81.18\pm 0.78$
OT-Proto | ResNet-12 | ${82.36\pm 0.54}$ | ${66.62\pm 0.80}$ | ${91.93\pm 0.38}$ | ${82.94\pm 0.73}$
OT-GNN | ResNet-18 | $\mathbf{87.14\pm 0.59}$ | $\mathbf{70.99\pm 0.97}$ | $\mathbf{95.96\pm 0.33}$ | $\mathbf{87.73\pm 0.78}$
Datasets. We consider few-shot classification problems with four datasets: the
in-domain datasets mini-ImageNet [13] and CUB [81], and the cross-domain
datasets mini-ImageNet$\rightarrow$CUB and Multi-domain. mini-ImageNet is a
subset of the ILSVRC-12
dataset [82] for generic object recognition. It contains $100$ classes with
$600$ images per class. We use the same split introduced by [52], which
respectively splits $64$/$16$/$20$ classes for the training/validation/testing
set. CUB is a fine-grained dataset of birds with a total of $200$ classes and
$11,788$ images. We follow the protocol of [83] and split the dataset into
$100$/$50$/$50$ classes for training/validation/testing. mini-
ImageNet$\rightarrow$CUB is a cross-domain dataset. Following [15], we use the
mini-ImageNet dataset as the training set and split the CUB set into $50$/$50$
classes for validation and testing. Multi-domain is another cross-domain
dataset. We follow the split in [72] and use the datasets of mini-ImageNet,
CUB, Cars [84], Places [85] and Plantae [86] as different domains. We explore
two settings. The first is training the model on the mini-ImageNet domain and
evaluating on the other four domains. The second is the leave-one-out setting
which selects one domain for evaluation and trains the model with all other
domains.
TABLE II: The classification accuracy of the new tasks with $5$ or $1$ labeled samples per class in the _cross-domain_ setting, mini-ImageNet$\rightarrow$CUB.
Method | $K=5$ | $K=1$
---|---|---
MatchingNet [13] | $53.07\pm 0.74$ | $38.78\pm 0.73$
ProtoNet [14] | $62.02\pm 0.70$ | $40.07\pm 0.75$
RelationNet [42] | $57.71\pm 0.73$ | $37.71\pm 0.69$
MAML [9] | $51.34\pm 0.72$ | $40.15\pm 0.65$
GNN [44] | $65.56\pm 0.87$ | $43.65\pm 0.86$
Baseline [15] | $65.57\pm 0.70$ | $43.59\pm 0.74$
Baseline++ [15] | $62.04\pm 0.76$ | $44.14\pm 0.77$
Robust20 [29] | $65.04\pm 0.57$ | $-$
Neg-Margin [77] | $67.03\pm 0.76$ | $-$
PMM [78] | $68.77\pm 0.90$ | $-$
OT-Proto | $71.30\pm 0.71$ | $50.42\pm 0.82$
OT-GNN | $\mathbf{75.83\pm 0.82}$ | ${50.89\pm 0.91}$
Implementation Details. We use ResNet-18 in [15] and ResNet-12 with dropblocks
in [46] as the backbone for mini-ImageNet, CUB and mini-
ImageNet$\rightarrow$CUB. Following [72], we use ResNet-10 on Multi-domain for
a fair comparison. We refactor ResNet into a backbone for Omni-Training by
transforming all convolution layers into Omni-Layers, where each Omni-Layer
uses the $1\times 1$ convolution layer as the lightweight branch layer. We
employ Baseline in [15] as the pre-training method and explore two powerful
meta-training methods, ProtoNet [14] and GNN [44], denoted as OT-Proto and OT-
GNN respectively. In each iteration, a mini-batch is sampled with the batch
size of $64$ for pre-training, and a task episode is sampled for meta-
training, with a support set containing $5$ categories each having $5$ labeled
instances, and a query set containing the same categories with $16$ instances
per class. We apply standard data augmentation including random crop, left-
right flip and color jitter to the training samples. We train our framework
with $100$ epochs for the mini-ImageNet, mini-ImageNet$\rightarrow$CUB and
Multi-domain datasets, and with $400$ epochs for the CUB dataset. We use
accuracy on the validation set to choose the best model for testing. In the
test stage, we randomly sample $600$ tasks from the testing set. Each task
contains $5$ unseen classes with $K=5$ or $K=1$ labeled samples per class as
the support set, and another $16$ instances per class as the query set to be
predicted. The average accuracy as well as the $95\%$ confidence intervals are
reported. The hyper-parameters are chosen as $\alpha=0.99$ and $\lambda=3.0$. We
train the networks from scratch and use Adam optimizer [87] with an initial
learning rate of $0.001$.
TABLE III: The results of the tasks from unseen domains with $5$ or $1$
labeled samples per class in the _Multi-domain_ setting (trained with mini-
ImageNet).
Method | CUB $K=5$ | CUB $K=1$ | Cars $K=5$ | Cars $K=1$ | Places $K=5$ | Places $K=1$ | Plantae $K=5$ | Plantae $K=1$
---|---|---|---|---|---|---|---|---
MatchingNet [13] | $51.37\pm 0.77$ | $35.89\pm 0.51$ | $38.99\pm 0.64$ | $30.77\pm 0.47$ | $63.16\pm 0.77$ | $49.86\pm 0.79$ | $46.53\pm 0.68$ | $32.70\pm 0.60$
ProtoNet [14] | $57.64\pm 0.85$ | $38.18\pm 0.76$ | $42.84\pm 0.73$ | $29.72\pm 0.59$ | $68.86\pm 0.70$ | $49.24\pm 0.81$ | $47.41\pm 0.70$ | $35.02\pm 0.63$
RelationNet [42] | $57.77\pm 0.69$ | $42.44\pm 0.77$ | $37.33\pm 0.68$ | $29.11\pm 0.60$ | $63.32\pm 0.76$ | $48.64\pm 0.85$ | $44.00\pm 0.60$ | $33.17\pm 0.64$
GNN [44] | $62.25\pm 0.65$ | $45.69\pm 0.68$ | $44.28\pm 0.63$ | $31.79\pm 0.51$ | $70.84\pm 0.65$ | $53.10\pm 0.80$ | $52.53\pm 0.59$ | $35.60\pm 0.56$
FT-Matching [72] | $55.23\pm 0.83$ | $36.61\pm 0.53$ | $41.24\pm 0.65$ | $29.82\pm 0.44$ | $64.55\pm 0.75$ | $51.07\pm 0.68$ | $41.69\pm 0.63$ | $34.48\pm 0.50$
FT-Relation [72] | $59.46\pm 0.71$ | $44.07\pm 0.77$ | $39.91\pm 0.69$ | $28.63\pm 0.59$ | $66.28\pm 0.72$ | $50.68\pm 0.87$ | $45.08\pm 0.59$ | $33.14\pm 0.62$
FT-GNN [72] | ${66.98\pm 0.68}$ | ${47.47\pm 0.75}$ | $44.90\pm 0.64$ | $31.61\pm 0.53$ | $73.94\pm 0.67$ | ${55.77\pm 0.79}$ | $53.85\pm 0.62$ | $35.95\pm 0.58$
OT-Proto | $65.17\pm 0.75$ | $45.83\pm 0.78$ | $\mathbf{51.19\pm 0.71}$ | ${34.82\pm 0.70}$ | ${74.16\pm 0.69}$ | $55.73\pm 0.89$ | ${57.88\pm 0.69}$ | ${39.51\pm 0.71}$
OT-GNN | $\mathbf{70.24\pm 0.82}$ | $\mathbf{49.51\pm 0.96}$ | ${48.99\pm 0.83}$ | $\mathbf{35.31\pm 0.78}$ | $\mathbf{79.61\pm 0.81}$ | $\mathbf{61.74\pm 1.05}$ | $\mathbf{60.08\pm 0.81}$ | $\mathbf{40.52\pm 0.81}$
TABLE IV: The results of the tasks from unseen domains with $5$ or $1$ labeled
samples per class in the _Multi-domain_ setting (trained with all other
domains).
Method | CUB $K=5$ | CUB $K=1$ | Cars $K=5$ | Cars $K=1$ | Places $K=5$ | Places $K=1$ | Plantae $K=5$ | Plantae $K=1$
---|---|---|---|---|---|---|---|---
MatchingNet [13] | $51.92\pm 0.80$ | $37.90\pm 0.55$ | $39.87\pm 0.51$ | $28.96\pm 0.45$ | $61.82\pm 0.57$ | $49.01\pm 0.65$ | $47.29\pm 0.51$ | $33.21\pm 0.51$
ProtoNet [14] | $59.26\pm 0.89$ | $39.31\pm 0.72$ | $43.66\pm 0.68$ | $29.52\pm 0.54$ | $68.03\pm 0.61$ | $47.96\pm 0.77$ | $49.35\pm 0.72$ | $35.40\pm 0.68$
RelationNet [42] | $62.13\pm 0.74$ | $44.33\pm 0.59$ | $40.64\pm 0.54$ | $29.53\pm 0.45$ | $64.34\pm 0.57$ | $47.76\pm 0.63$ | $46.29\pm 0.56$ | $33.76\pm 0.52$
GNN [44] | $69.26\pm 0.68$ | $49.46\pm 0.73$ | $48.91\pm 0.67$ | $32.95\pm 0.56$ | $72.59\pm 0.67$ | $51.39\pm 0.80$ | $58.36\pm 0.68$ | $37.15\pm 0.60$
FT-Matching [72] | $61.41\pm 0.57$ | $43.29\pm 0.59$ | $43.08\pm 0.55$ | $30.62\pm 0.48$ | $64.99\pm 0.59$ | $52.51\pm 0.67$ | $48.32\pm 0.57$ | $35.12\pm 0.54$
FT-Relation [72] | $64.99\pm 0.54$ | $48.38\pm 0.63$ | $43.44\pm 0.59$ | $32.21\pm 0.51$ | $67.35\pm 0.54$ | $50.74\pm 0.66$ | $50.39\pm 0.52$ | $35.00\pm 0.52$
FT-GNN [72] | $\mathbf{73.11\pm 0.68}$ | $\mathbf{51.51\pm 0.80}$ | $49.88\pm 0.67$ | $34.12\pm 0.63$ | $77.05\pm 0.65$ | ${56.31\pm 0.80}$ | $58.84\pm 0.66$ | $42.09\pm 0.68$
OT-Proto | $67.76\pm 0.74$ | $46.62\pm 0.77$ | ${52.02\pm 0.74}$ | ${36.36\pm 0.70}$ | ${73.57\pm 0.66}$ | $52.20\pm 0.81$ | ${59.37\pm 0.69}$ | ${40.95\pm 0.66}$
OT-GNN | $71.78\pm 0.83$ | $49.78\pm 0.94$ | $\mathbf{54.39\pm 0.83}$ | $\mathbf{37.00\pm 0.79}$ | $\mathbf{78.03\pm 0.80}$ | $\mathbf{58.27\pm 0.99}$ | $\mathbf{62.22\pm 0.79}$ | $\mathbf{43.02\pm 0.85}$
Results on Cross-Task Benchmarks. We first evaluate our method on the general
dataset mini-ImageNet and the fine-grained dataset CUB. These two scenarios
are considered as _cross-task_ benchmarks, as the training and testing data
are from the same domain. The results with $K=5$ and $K=1$ are shown in Table
I. Omni-Training outperforms corresponding pre-training and meta-training
methods, especially in extremely difficult scenarios with only $1$ labeled
instance. Note that although from the same dataset, there still exists domain
shift between the training and test sets caused by the split of different
label sets. Our framework manages to incorporate pre-training and meta-
training effectively to acquire both domain transferability and task
transferability and thus achieves higher performance. Omni-Training
outperforms state-of-the-art algorithms, including MTL [73] which combines
pre-training and meta-training sequentially. This confirms that our design can
better bridge pre-training and meta-training.
Results on Cross-Domain Benchmarks. We consider two more challenging _cross-
domain_ benchmarks, mini-imageNet$\rightarrow$CUB and Multi-domain. Different
from the cross-task benchmarks discussed above, in the cross-domain setting,
the testing data are not only from different classes, but also from different
domains, causing greater domain shift between the training and testing data.
As shown in Table II, meta-training algorithms degrade due to the domain shift
while pre-training algorithms generalize better to the unseen domain. Omni-
Training outperforms meta-training methods by a large margin, indicating the
significance of domain transferability in the cross-domain setting. Also,
Omni-Training outperforms the pre-training Baseline, which reveals the
importance of task transferability to fully enable few-shot learning.
A more challenging benchmark is Multi-domain with more domains and larger
domain shift. Table III reports the results of the first setting where we
train on the mini-ImageNet domain and test on the other four domains. Table IV
reports the results of the second leave-one-out setting, where we choose one
domain as the unseen test domain and train the model with all other domains.
We specifically compare Omni-Training with Feature-Transformation [72], a
framework that incorporates domain generalization [88] into meta-training to
obtain both domain transferability and task transferability. Among its
implementations, FT-GNN achieves the best performance by incorporating a
strong meta-training algorithm, GNN [44]. When trained on mini-ImageNet, Omni-
Training can still achieve comparable or better performance than FT-GNN with a
simple meta-training algorithm such as ProtoNet. We also incorporate GNN into
Omni-Training to form OT-GNN, which generally outperforms FT-GNN in most
tasks. Note that FT-GNN has a special design for domain generalization, which
is tailored for the multi-domain setting. Still, OT-GNN achieves better
performance in most cases, confirming that Omni-Training works generally well
in different situations.
### 6.2 Regression
Datasets. For few-shot regression problems, we conduct experiments on a
sinusoid dataset following [9]. Specifically, the regression problem is to
predict the output $y$ on a sine wave given the input $x$. We define a task as
regressing a sine wave with a particular amplitude and phase from some labeled
data and consider a continuous task distribution in which the amplitude varies
within $[0.1,5.0]$ and the phase varies within $[0,2\pi]$. The input datapoint
$x$ is sampled uniformly from $[-5.0,5.0]$ for all tasks. The training dataset
$\mathcal{D}_{\texttt{train}}$ contains a large number of sampled sine waves
and each test task
$\\{\mathcal{S}_{\texttt{test}},\mathcal{Q}_{\texttt{test}}\\}$ is an unseen
sinusoid with a few labeled datapoints in $\mathcal{S}_{\texttt{test}}$ and
other points which need prediction in $\mathcal{Q}_{\texttt{test}}$. The goal
is to train a regression model on $\mathcal{D}_{\texttt{train}}$ to predict
the outputs of the datapoints in the query set $\mathcal{Q}_{\texttt{test}}$
after adaptation with a few labeled data in $\mathcal{S}_{\texttt{test}}$.
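A minimal sketch of such a sinusoid task sampler is given below; the function
name and the support/query sizes are our own illustrative defaults.

```python
import numpy as np

def sample_sine_task(rng, k_support=5, n_query=100):
    """Sample one sinusoid regression task as described above: amplitude in
    [0.1, 5.0], phase in [0, 2*pi], inputs uniform in [-5.0, 5.0]."""
    amp = rng.uniform(0.1, 5.0)
    phase = rng.uniform(0.0, 2 * np.pi)
    f = lambda x: amp * np.sin(x + phase)
    x_s = rng.uniform(-5.0, 5.0, size=k_support)
    x_q = rng.uniform(-5.0, 5.0, size=n_query)
    return (x_s, f(x_s)), (x_q, f(x_q))

rng = np.random.default_rng(0)
(support_x, support_y), (query_x, query_y) = sample_sine_task(rng)
```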
Implementation Details. We take the mean-squared error between the predictions
and ground-truth values as the training loss. We use Baseline [15] for pre-
training and use MAML [9] for meta-training. We employ a backbone with $2$
fully-connected layers of size $64$ with the activation function of Tanh. The
training set $\mathcal{D}_{\texttt{train}}$ has 30000 randomly sampled tasks
and each task is a sine wave with $50$ labeled datapoints. We then enable few-
shot regression on a new sine wave with a support set of $K=\\{5,10,20\\}$
labeled examples and test the adapted model on points in
$\mathcal{Q}_{\texttt{test}}$ of the wave. We train the model on
$\mathcal{D}_{\texttt{train}}$ and fine-tune it on the $K$ labeled examples
for the new sine wave with an SGD optimizer. The learning rate for the inner
loop is $0.02$ and that for parameter update is initialized as $0.01$.
Figure 4: The few-shot regression results of different training methods with
different gradient steps and different support set sizes: (a) $K=5$; (b)
$K=10$; (c) $K=20$.
Figure 5: The recovered sine waves of (a) Baseline, (b) MAML and (c)
Omni-Training. The models are updated using $5$ sampled points with $1$ or
$10$ gradient steps.
Results. We sample 100 new tasks for testing and report the mean squared error
after fine-tuning with different gradient steps from $1$ to $10$. As shown in
Figure 4, Baseline generally performs worse than MAML. The tasks change
rapidly during the training and test stages in this problem and task
transferability is important, which is missing for pre-training methods. Across
different numbers of labeled data and gradient steps, Omni-Training
consistently improves upon the meta-training method, which shows the efficacy
of Omni-Training for regression tasks.
We further conduct a case study and show the typical sine waves recovered by
pre-training, meta-training and Omni-Training with $5$ labeled samples and
with $1$ or $10$ gradient steps in Figure 5. We also show the ground-truth
sine wave and the labeled points in the support set. MAML and Omni-Training
quickly regress close to the ground-truth curve, while the process is much
slower for Baseline. Compared with MAML, the recovered curve of Baseline
remains smooth, which is an important common property of the sinusoid
distribution. Omni-Training also maintains a smoother curve, which
simultaneously fits these datapoints quickly and preserves the domain
transferability of sine waves. This explains the improvements brought by the
Omni-Training framework.
Figure 6: Average expected return of the reinforcement learning tasks in (a)
the 2D Navigation environment, and the Locomotion environment with (b)
velocity tasks and (c) direction tasks.
### 6.3 Reinforcement Learning
Environments. For reinforcement learning problems, we follow the learning
protocol in [9] with several sets of tasks based on two simulated continuous
control environments: 2D Navigation and Locomotion in the rllab benchmark
suite [89].
In the 2D Navigation environment, the goal is to move to a target position in
2D. The state space is the 2D location and the action space is the 2D
velocity, where the action is in the range of $\left[-0.1,0.1\right]$. The
reward is the negative squared distance to the goal, and the episodes
terminate when the agent is within $0.01$ of the goal or at the horizon of
$H=100$. We construct a task by randomly sampling a goal position from a unit
square.
In the Locomotion environment, we adopt the agent in the Mujoco HalfCheetah
environment [90] and follow its state and action space. We evaluate on two
sets of tasks. The first aims to run at a particular velocity. The reward is
the negative absolute value between the current velocity and the goal
velocity, which is chosen uniformly at random between $0.0$ and $2.0$ for
different tasks. The second aims to run in a particular direction. The reward
is the magnitude of the velocity in the forward or backward direction. The
horizons of both tasks are set as $H=200$.
Implementation Details. We parameterize the policy as a neural network with two
fully-connected layers of $64$ hidden units and the Tanh activation function.
We train the policy with the REINFORCE algorithm [91]. We use the standard
linear feature baseline proposed by [89], which is fitted separately at each
iteration for each sampled task in the batch. We train the model with $500$
iterations. In each iteration, $20$ different tasks are sampled for the 2D
navigation environment and $40$ tasks are sampled for Locomotion, where $20$
trajectories are sampled for each task. During the test stage for few-shot
reinforcement learning, we randomly sample $2000$ new tasks for evaluation.
Each task contains trajectories with rewards as the support set. We use $20$
trajectories from each task for each gradient step and use $1$ to $4$ gradient
steps for adaptation to new tasks. We use $20$ trajectories as the query set
to compute the final testing reward of each task. We also use Baseline [15] as
the pre-training method and MAML [9] as the meta-training method. In each
iteration of meta-training, the policy is first trained using a single
gradient step on the support trajectories with the inner loop step size $0.1$,
and then meta-updated on the query trajectories with the outer loop step size
$0.03$.
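A minimal sketch of the policy and the REINFORCE objective described above is given below; the Gaussian output head with a learned, state-independent log-std is our assumption, and the linear feature baseline of [89] is assumed to be computed elsewhere.

```python
# Sketch of the two-layer Tanh policy trained with REINFORCE [91].
import torch
import torch.nn as nn
from torch.distributions import Normal

class GaussianPolicy(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                 nn.Linear(64, 64), nn.Tanh(),
                                 nn.Linear(64, action_dim))
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def log_prob(self, states, actions):
        dist = Normal(self.net(states), self.log_std.exp())
        return dist.log_prob(actions).sum(dim=-1)

def reinforce_loss(policy, states, actions, advantages):
    # advantages = returns minus the linear feature baseline of [89]
    return -(policy.log_prob(states, actions) * advantages).mean()

INNER_LR, OUTER_LR = 0.1, 0.03  # inner/outer loop step sizes from the text
```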
Results. The results of the reinforcement learning tasks in the two
environments are shown in Figure 6. Omni-Training outperforms both Baseline
and MAML by large margins in the 2D Navigation environment, which
demonstrates that the model with both domain and task transferability can
boost the generalization performance in this case. In the Locomotion
environment, the performance gap between MAML and Baseline becomes larger,
indicating more complex cross-task situations. Omni-Training still improves
upon MAML in the velocity tasks. In the direction tasks, the pre-training
method fails to generalize across these complex tasks with limited
trajectories and updates, thereby performing similarly to the random
initialization. In this extreme case, Omni-Training still performs comparably
with MAML, without being negatively influenced. These results demonstrate the
generalization ability of Omni-Training in a variety of complex situations.
(a) mini-ImageNet$\rightarrow$CUB
(b) Regression
(c) Extension
Figure 7: (a)(b) Comparison with two ways of simply combining pre-training and
meta-training: Ensemble and Joint-Training. (c) Extension of the Omni-Training
framework to other representation learning algorithms (OT: Omni-Training).
(a) Pre-Training
(b) Meta-Training
(c) Transferability
Figure 8: (a)(b) Training losses and validation accuracies of the pre-flow and
meta-flow in Omni-Training, compared with their corresponding baselines; (c)
The transferability of pre-training and pre-flow with different numbers of
gradient steps.
## 7 Analysis
In this section, we further empirically analyze and understand our proposed
framework. Unless otherwise specified, we use ResNet-18 as the backbone. We use
the Baseline in [15] as the pre-training method and the ProtoNet [14] as the
meta-training method.
### 7.1 Fine-grained Comparison with Baselines
Comparison with Simple Combinations. We compare Omni-Training with two simple
combinations of pre-training and meta-training discussed in Section 3.3,
_i.e._ the ensemble of the two models trained separately (Ensemble) and joint-
training with the losses of the two training paradigms (Joint-Training). We
evaluate on the classification dataset mini-ImageNet$\rightarrow$CUB and the
sinusoid regression dataset. We use $K=5$ or $K=1$ labeled samples in the
support set in classification and use $K=5$ or $K=10$ labeled points with $2$
gradient steps of parameter update in regression. As shown in Figure 7a and
7b, Ensemble and Joint-Training do not always lead to improvements, and the
performance gain is minor. Omni-Training instead outperforms all the compared
methods consistently, which demonstrates that the proposed Omni-Net and the
Omni-Loss designs provide a better solution to bridge pre-training and meta-
training and acquire both domain transferability and task transferability.
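For clarity, the two simple combinations can be contrasted schematically as follows; this is a hedged sketch in our own notation, not the exact implementation:

```python
# Ensemble averages the predictions of two separately trained models;
# Joint-Training optimizes one shared model on the sum of both losses.
def ensemble_predict(pre_model, meta_model, x):
    return 0.5 * (pre_model(x) + meta_model(x))

def joint_training_loss(model, batch, episode, pre_loss, meta_loss):
    # one backbone, both objectives applied to the same parameters
    return pre_loss(model, batch) + meta_loss(model, episode)
```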
Extension to Other Algorithms. Beyond its competitive performance on various
benchmarks, we also want to demonstrate that different few-shot learning
algorithms can benefit from the Omni-Training framework. We extend Omni-
Training to more algorithms. Since most pre-training algorithms adopt a
similar pre-training and fine-tuning process, we mainly investigate
varieties of meta-training algorithms including MatchingNet [13], MAML [9] and
RelationNet [42]. We conduct experiments on the mini-ImageNet dataset since
some algorithms cannot deal with the regression problem. As shown in Figure
7c, Omni-Training with different algorithms significantly outperforms the
corresponding baselines. This demonstrates that our framework can generally
accommodate different few-shot learning algorithms.
Figure 9: Attention comparison of the representations from the pre-training model, the meta-training model and the three data flows in Omni-Training.
Comparison of Each Flow with Baselines. We investigate whether the
coordination of pre-training and meta-training with the shared parameters in
our _tri-flow_ architecture can improve the performance of specific flows.
Figure 8a reports the training losses and validation accuracies of the _pre-
flow_ in Omni-Training and pre-training algorithm Baseline [15] alone, while
Figure 8b reports the results of the _meta-flow_ in Omni-Training and the
meta-training algorithm ProtoNet [14]. The experiments are conducted in the
CUB dataset with $K=5$. The pre-flow and the meta-flow in Omni-Training reach
lower losses and higher accuracies than the baselines trained independently.
Even though the pre-flow and Baseline achieve nearly the same training loss,
the pre-flow achieves much higher validation accuracy than Baseline. This
shows that the knowledge communication enables pre-flow to obtain part of task
transferability and meta-flow to obtain part of domain transferability to
improve their performance.
We also compare the transferability of the pre-training method and the pre-
flow on the mini-ImageNet and mini-ImageNet$\rightarrow$CUB datasets. As shown
in Figure 8c, the pre-flow also outperforms pre-training in various
situations. We further investigate fine-tuning the representations with only
$1/10$ of the gradient steps. The performance of the pre-training model drops
a lot with such limited updates, but the pre-flow still performs well, comparably
with the pre-training model updated $10\times$ more times. This reveals that the pre-flow
also acquires task transferability to fast adapt across tasks. These results
demonstrate that Omni-Training coordinates the two parallel flows and makes
each gain the other kind of transferability.
(a) CUB
(b) mini-ImageNet$\rightarrow$CUB
(c) Parameter Sensitivity
Figure 10: (a)(b) Results of modifying different numbers of layers into Omni-Layers on the CUB dataset and the mini-ImageNet$\rightarrow$CUB dataset respectively. (c) The sensitivity of the performance with respect to the hyper-parameter $\lambda$.
TABLE V: Ablation study on the losses in the Omni-Training framework.
$\mathcal{L}_{\texttt{pre}}$ | $\mathcal{L}_{\texttt{meta}}$ | $\mathcal{L}_{\texttt{joint}}$ | $\mathcal{R}$ | ImageNet | CUB | ImageNet$\rightarrow$CUB
---|---|---|---|---|---|---
✓ | - | - | - | $74.37$ | $82.80$ | $65.57$
- | ✓ | - | - | $73.66$ | $87.91$ | $62.14$
- | - | ✓ | - | $76.95$ | $85.15$ | $66.09$
✓ | ✓ | - | - | $78.12$ | $87.34$ | $69.66$
✓ | ✓ | ✓ | - | $80.79$ | $88.54$ | $70.12$
✓ | ✓ | ✓ | ✓ | $81.26$ | $91.09$ | $71.30$
TABLE VI: Ablation study on the model size.
Method | #Params | ImageNet | CUB | ImageNet$\rightarrow$CUB
---|---|---|---|---
ProtoNet | 11.17M | $73.68$ | $87.42$ | $62.02$
ProtoNet* | 13.98M | $73.44$ | $87.81$ | $61.27$
Omni-Training | 13.98M | $81.26$ | $91.09$ | $71.30$
Comparison of Attention Maps. We compare the spatial attention in different
representations learned by pre-training, meta-training and the three data
flows in Omni-Training. From Figure 9, we observe that pre-training
representations focus on a broad area containing the objects as well as some
noisy context, which fully captures the domain knowledge but lacks
concentration on the information that is important for discriminating different
categories. On the contrary, the meta-training representations focus on a very
small area with very concise information, which makes it easy to generalize across
tasks quickly but also easy to make mistakes when the attended area deviates
only a little from the objects. Such deviation is more likely to occur with
the domain shift. Such attention heatmaps are consistent with our analyses
before that pre-training learns representations with higher domain
transferability while meta-training learns representations with higher task
transferability.
Switching to Omni-Training, the pre-flow focuses on a more concise area only
including the whole object while ignoring the noisy context. The meta-flow
focuses on a broader area to capture more knowledge of the whole domain and
increase its tolerance to mistakes. This observation demonstrates that there
is knowledge transfer between pre-flow and meta-flow, which coordinates these
two flows and improves them with the other kind of transferability. The joint-
flow shows a different attention map from the pre-flow and the meta-flow. This
also demonstrates that the three flows in the Omni-Training framework focus on
different areas on the input space and form a more comprehensive understanding
of the datapoints.
### 7.2 Framework Analysis
Ablation Study of Losses. We conduct an ablation study by using different
combinations of losses in the Omni-Training framework. For the losses of
$\mathcal{L}_{\texttt{pre}}$, $\mathcal{L}_{\texttt{meta}}$ and
$\mathcal{L}_{\texttt{joint}}$, if we do not use any of the three losses, we
will not use the corresponding branch for inference. We report results on
mini-ImageNet, CUB and mini-ImageNet$\rightarrow$CUB datasets with $K=5$ in
Table V. We observe that all of the loss functions in the tri-flow design
including the self-distillation regularization contribute to the improvement
of the Omni-Training framework.
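The full objective whose components are ablated in Table V can be summarized as a weighted sum; the sketch below is ours, with the individual loss terms assumed to be computed by the respective flows:

```python
# L_pre, L_meta, L_joint: losses of the three flows;
# reg: self-distillation regularizer R; lam = 3.0 in the experiments.
def omni_objective(l_pre, l_meta, l_joint, reg, lam=3.0):
    return l_pre + l_meta + l_joint + lam * reg
```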
Influence of the Model Size. In Omni-Net, we use lightweight $1\times 1$
convolution layers for the parallel branches. Although the number of
parameters does not increase significantly (from $11.17$M to $13.98$M if we
use ResNet-18), there is still a concern that the performance gain of Omni-
Training may come from the increase in the model size. Thus, we add the same
parameters as these additional $1\times 1$ convolution layers to the original
ResNet-18 backbone, and denote it as ResNet-18*. Though having the same number
of parameters, ResNet-18* differs from our Omni-Training backbone because
it does not contain separate data flows for pre-training and
meta-training, and is only trained with one learning paradigm. We train
ProtoNet [14] with the ResNet-18* backbone (denoted as ProtoNet*) and report
the accuracy with the support set size $K=5$ in Table VI.
Despite having more parameters, ProtoNet* does not show obvious improvement
over ProtoNet. This indicates that simply increasing the model complexity does
not ensure better performance. Omni-Training has comparable parameters with
ProtoNet*, but outperforms ProtoNet* by a large margin. This reveals that
the main reason that improves the performance is not increasing the model
size, but coordinating pre-training and meta-training to learn deep
representations with both domain transferability and task transferability.
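For illustration, an Omni-Layer with lightweight $1\times 1$ parallel branches might be reconstructed as follows. This is our reading of the description above, and the actual routing of the three flows may differ in detail; in such a design, each objective would route its data through the matching flow.

```python
# Illustrative Omni-Layer (our reconstruction): a shared convolution
# plus two lightweight 1x1 branches; the joint-flow uses only the shared
# path, while pre-flow and meta-flow each add their own 1x1 branch.
import torch.nn as nn

class OmniLayer(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.shared = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.pre_branch = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.meta_branch = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x, flow="joint"):
        out = self.shared(x)
        if flow == "pre":
            out = out + self.pre_branch(x)
        elif flow == "meta":
            out = out + self.meta_branch(x)
        return out
```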
Backbone Modification. We investigate the influence of the number of Omni-
Layers in the backbone. Since ResNet-18 is composed of $8$ Res-Blocks, we
attempt to keep the first $n$ Res-Blocks unchanged and transform the rest
$8-n$ blocks into Omni-Layers. The first index of the block with Omni-Layers
is $n+1$. We train the models with these modified backbones. We report
classification results with $K=5$ in the CUB dataset (Figure 10a) and the
mini-ImageNet$\rightarrow$CUB dataset (Figure 10b). When the index of the
first block with Omni-Layers is $1$, which means the whole backbone is changed
into Omni-Net, the model performs best. As the index increases, which means
more preceding layers are completely shared between different flows as done in
Multi-Task Learning, the accuracy drops sharply. This reveals the efficacy of
the Omni-Layers on learning the three flows to coordinate pre-training and
meta-training. Omni-Net is a general-purpose backbone for few-shot learning.
Parameter Sensitivity. We analyze the sensitivity of the loss trade-off hyper-
parameter $\lambda$. We report the accuracy on the mini-ImageNet dataset with
$K=1$ and on the cross-domain mini-ImageNet$\rightarrow$CUB dataset with $K=5$
in Figure 10c. We observe that the model performs well in a range of
parameters: $[1.0,3.0]$. However, the performance degrades when setting
$\lambda=0$, _i.e._ , removing the self-distillation regularization. In
general, we use the same hyper-parameter: $\lambda=3.0$ for the different
tasks in our experiments to avoid over-tuning it.
## 8 Conclusion
This paper focuses on learning transferable representations for few-shot
learning, which enables the model to fast generalize to new domains and tasks
with a few examples. We pinpoint that domain transferability and task
transferability are the key factors to data-efficiency in downstream tasks. We
further empirically show that pre-training and meta-training methods and
simple combinations of them cannot obtain both domain transferability and task
transferability, so we propose Omni-Training to bridge pre-training and meta-
training with both types of transferability. With the tri-flow Omni-Net
architecture, the model preserves the specific transferability of pre-training
and meta-training and coordinates these flows by routing their representations
via the joint-flow, making each gain the other kind of transferability. We
design an Omni-Loss to learn the three flows and impose a self-distillation
regularization to enable knowledge transfer across the training process. Omni-
Training is a general framework that accommodates various existing pre-
training and meta-training algorithms. Thorough evaluations on cross-task and
cross-domain datasets in classification, regression and reinforcement learning
problems show that Omni-Training consistently and clearly outperforms
state-of-the-art deep learning methods for few-shot learning.
## Acknowledgments
This work was supported by the National Megaproject for New Generation AI
(2020AAA0109201), National Natural Science Foundation of China (62022050 and
62021002), Beijing Nova Program (Z201100006820041), and BNRist Innovation Fund
(BNR2021RC01002).
## References
* [1] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” _nature_ , vol. 521, no. 7553, pp. 436–444, 2015.
* [2] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _CVPR_ , 2016, pp. 770–778.
* [3] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot _et al._ , “Mastering the game of go with deep neural networks and tree search,” _nature_ , vol. 529, no. 7587, pp. 484–489, 2016.
* [4] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” _arXiv preprint_ , 2018.
* [5] D. Adiwardana, M.-T. Luong, D. R. So, J. Hall, N. Fiedel, R. Thoppilan, Z. Yang, A. Kulshreshtha, G. Nemade, Y. Lu _et al._ , “Towards a human-like open-domain chatbot,” _arXiv preprint_ , 2020.
* [6] R. Bommasani, P. Liang _et al._ , “On the opportunities and risks of foundation models,” _arXiv preprint_ , 2021.
* [7] C. Yu, J. Liu, and S. Nemati, “Reinforcement learning in healthcare: A survey,” _arXiv preprint_ , 2019.
* [8] L. Fei-Fei, R. Fergus, and P. Perona, “One-shot learning of object categories,” _TPAMI_ , vol. 28, no. 4, pp. 594–611, 2006.
* [9] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in _ICML_ , 2017, pp. 1126–1135.
* [10] X. Wang, J. Gao, M. Long, and J. Wang, “Self-tuning for data-efficient deep learning,” in _ICML_ , 2021, pp. 10 738–10 748.
* [11] A. Kolesnikov, L. Beyer, X. Zhai, J. Puigcerver, J. Yung, S. Gelly, and N. Houlsby, “Big transfer (bit): General visual representation learning,” in _ECCV_ , 2020, pp. 491–507.
* [12] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, “Improving language understanding by generative pre-training,” _OpenAI blog_ , 2018.
* [13] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra _et al._ , “Matching networks for one shot learning,” in _NeurIPS_ , 2016, pp. 3630–3638.
* [14] J. Snell, K. Swersky, and R. Zemel, “Prototypical networks for few-shot learning,” in _NeurIPS_ , 2017, pp. 4077–4087.
* [15] W.-Y. Chen, Y.-C. Liu, Z. Kira, Y.-C. F. Wang, and J.-B. Huang, “A closer look at few-shot classification,” in _ICLR_ , 2019.
* [16] Y. Guo, N. C. Codella, L. Karlinsky, J. V. Codella, J. R. Smith, K. Saenko, T. Rosing, and R. Feris, “A broader study of cross-domain few-shot learning,” in _ECCV_ , 2020, pp. 124–141.
* [17] V. Dumoulin, N. Houlsby, U. Evci, X. Zhai, R. Goroshin, S. Gelly, and H. Larochelle, “Comparing transfer and meta learning approaches on a unified few-shot classification benchmark,” _arXiv preprint_ , 2021.
* [18] Y. Wang, Q. Yao, J. T. Kwok, and L. M. Ni, “Generalizing from a few examples: A survey on few-shot learning,” _ACM Computing Surveys_ , vol. 53, no. 3, pp. 1–34, 2020.
* [19] Y. Yu, “Towards sample efficient reinforcement learning.” in _IJCAI_ , 2018, pp. 5739–5743.
* [20] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in _CVPR_ , 2016, pp. 2818–2826.
* [21] K. He, R. Girshick, and P. Dollár, “Rethinking imagenet pre-training,” in _ICCV_ , 2019, pp. 4918–4927.
* [22] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever _et al._ , “Language models are unsupervised multitask learners,” _OpenAI blog_ , vol. 1, no. 8, p. 9, 2019.
* [23] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, “Language models are few-shot learners,” in _NeurIPS_ , 2020, pp. 1877–1901.
* [24] Z. Cao, M. Kwon, and D. Sadigh, “Transfer reinforcement learning across homotopy classes,” _IEEE Robotics and Automation Letters_ , vol. 6, no. 2, pp. 2706–2713, 2021.
* [25] Z. Zhu, K. Lin, and J. Zhou, “Transfer learning in deep reinforcement learning: A survey,” _arXiv preprint_ , 2020.
* [26] T. Xie, N. Jiang, H. Wang, C. Xiong, and Y. Bai, “Policy finetuning: Bridging sample-efficient offline and online reinforcement learning,” _arXiv preprint_ , 2021.
* [27] V. Campos, P. Sprechmann, S. S. Hansen, A. Barreto, S. Kapturowski, A. Vitvitskyi, A. P. Badia, and C. Blundell, “Beyond fine-tuning: Transferring behavior in reinforcement learning,” in _ICML Workshop on Unsupervised Reinforcement Learning_ , 2021.
* [28] M. Schwarzer, N. Rajkumar, M. Noukhovitch, A. Anand, L. Charlin, D. Hjelm, P. Bachman, and A. Courville, “Pretraining representations for data-efficient reinforcement learning,” _arXiv preprint_ , 2021.
* [29] N. Dvornik, J. Mairal, and C. Schmid, “Diversity with cooperation: Ensemble methods for few-shot classification,” in _ICCV_ , 2019, pp. 3722–3730.
* [30] Y. Tian, Y. Wang, D. Krishnan, J. B. Tenenbaum, and P. Isola, “Rethinking few-shot image classification: a good embedding is all you need?” in _ECCV_ , 2020, pp. 266–282.
* [31] S. Qiao, C. Liu, W. Shen, and A. L. Yuille, “Few-shot image recognition by predicting parameters from activations,” in _CVPR_ , 2018, pp. 7229–7238.
* [32] H. Qi, M. Brown, and D. G. Lowe, “Low-shot learning with imprinted weights,” in _CVPR_ , 2018, pp. 5822–5830.
* [33] L. Xuhong, Y. Grandvalet, and F. Davoine, “Explicit inductive bias for transfer learning with convolutional networks,” in _ICML_ , 2018.
* [34] X. Li, H. Xiong, H. Wang, Y. Rao, L. Liu, Z. Chen, and J. Huan, “Delta: Deep learning transfer using feature map with attention for convolutional networks,” in _ICLR_ , 2019.
* [35] X. Chen, S. Wang, B. Fu, M. Long, and J. Wang, “Catastrophic forgetting meets negative transfer: Batch spectral shrinkage for safe transfer learning,” in _NeurIPS_ , 2019.
* [36] K. You, Z. Kou, M. Long, and J. Wang, “Co-tuning for transfer learning,” in _NeurIPS_ , 2020.
* [37] J. Schmidhuber, “Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-… hook,” Ph.D. dissertation, Technische Universität München, 1987.
* [38] D. K. Naik and R. J. Mammone, “Meta-neural networks that learn by learning,” in _IJCNN_ , 1992, pp. 437–442.
* [39] S. Thrun and L. Pratt, _Learning to learn_. Springer Science & Business Media, 1998.
* [40] J. Lu, P. Gong, J. Ye, and C. Zhang, “Learning from very few samples: A survey,” _arXiv preprint_ , 2020.
* [41] G. Koch, R. Zemel, R. Salakhutdinov _et al._ , “Siamese neural networks for one-shot image recognition,” in _ICML deep learning workshop_ , 2015.
* [42] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales, “Learning to compare: Relation network for few-shot learning,” in _CVPR_ , 2018, pp. 1199–1208.
* [43] K. Allen, E. Shelhamer, H. Shin, and J. Tenenbaum, “Infinite mixture prototypes for few-shot learning,” in _ICML_ , 2019, pp. 232–241.
* [44] V. Garcia and J. Bruna, “Few-shot learning with graph neural networks,” in _ICLR_ , 2018.
* [45] R. Hou, H. Chang, M. Bingpeng, S. Shan, and X. Chen, “Cross attention network for few-shot classification,” in _NeurIPS_ , 2019, pp. 4005–4016.
* [46] B. Oreshkin, P. R. López, and A. Lacoste, “Tadam: Task dependent adaptive metric for improved few-shot learning,” in _NeurIPS_ , 2018, pp. 721–731.
* [47] S. W. Yoon, J. Seo, and J. Moon, “Tapnet: Neural network augmented with task-adaptive projection for few-shot learning,” in _ICML_ , 2019, pp. 7115–7123.
* [48] H.-J. Ye, H. Hu, D.-C. Zhan, and F. Sha, “Few-shot learning via embedding adaptation with set-to-set functions,” in _CVPR_ , 2020.
* [49] Y. Bengio, S. Bengio, and J. Cloutier, “Learning a synaptic learning rule: Université de montréal,” _Département d’informatique et de recherche opérationnelle_ , 1990.
* [50] J. Schmidhuber, “Learning to control fast-weight memories: An alternative to dynamic recurrent networks,” _Neural Computation_ , vol. 4, no. 1, pp. 131–139, 1992.
* [51] M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, B. Shillingford, and N. De Freitas, “Learning to learn by gradient descent by gradient descent,” in _NeurIPS_ , 2016, pp. 3981–3989.
* [52] S. Ravi and H. Larochelle, “Optimization as a model for few-shot learning,” in _ICLR_ , 2017.
* [53] Z. Li, F. Zhou, F. Chen, and H. Li, “Meta-sgd: Learning to learn quickly for few-shot learning,” _arXiv preprint_ , 2017.
* [54] Z. Xu, H. van Hasselt, and D. Silver, “Meta-gradient reinforcement learning,” _arXiv preprint_ , 2018.
* [55] R. Houthooft, R. Y. Chen, P. Isola, B. C. Stadie, F. Wolski, J. Ho, and P. Abbeel, “Evolved policy gradients,” _arXiv preprint_ , 2018.
* [56] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap, “Meta-learning with memory-augmented neural networks,” in _ICML_ , 2016, pp. 1842–1850.
* [57] T. Munkhdalai and H. Yu, “Meta networks,” in _ICML_ , 2017, pp. 2554–2563.
* [58] N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel, “A simple neural attentive meta-learner,” in _ICLR_ , 2018.
* [59] T. Munkhdalai, X. Yuan, S. Mehri, and A. Trischler, “Rapid adaptation with conditionally shifted neurons,” in _ICML_ , 2018, pp. 3664–3673.
* [60] Y. Duan, J. Schulman, X. Chen, P. L. Bartlett, I. Sutskever, and P. Abbeel, “Rl2: Fast reinforcement learning via slow reinforcement learning,” _arXiv preprint_ , 2016.
* [61] J. X. Wang, Z. Kurth-Nelson, D. Tirumala, H. Soyer, J. Z. Leibo, R. Munos, C. Blundell, D. Kumaran, and M. Botvinick, “Learning to reinforcement learn,” _arXiv preprint_ , 2016.
* [62] Y. Lee and S. Choi, “Gradient-based meta-learning with learned layerwise metric and subspace,” in _ICML_ , 2018, pp. 2927–2936.
* [63] H. Yao, Y. Wei, J. Huang, and Z. Li, “Hierarchically structured meta-learning,” in _ICML_ , 2019, pp. 7045–7054.
* [64] Y. Duan, M. Andrychowicz, B. C. Stadie, J. Ho, J. Schneider, I. Sutskever, P. Abbeel, and W. Zaremba, “One-shot imitation learning,” in _NeurIPS_ , 2017.
* [65] C. Finn, T. Yu, T. Zhang, P. Abbeel, and S. Levine, “One-shot visual imitation learning via meta-learning,” in _CoRL_ , 2017, pp. 357–368.
* [66] K. Frans, J. Ho, X. Chen, P. Abbeel, and J. Schulman, “Meta learning shared hierarchies,” _arXiv preprint_ , 2017.
* [67] M. A. Jamal and G.-J. Qi, “Task agnostic meta-learning for few-shot learning,” in _CVPR_ , 2019, pp. 11 719–11 727.
* [68] A. Xie, A. Singh, S. Levine, and C. Finn, “Few-shot goal inference for visuomotor learning and planning,” in _CoRL_ , 2018, pp. 40–52.
* [69] A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell, “Meta-learning with latent embedding optimization,” in _ICLR_ , 2019.
* [70] L. Bertinetto, J. F. Henriques, P. Torr, and A. Vedaldi, “Meta-learning with differentiable closed-form solvers,” in _ICLR_ , 2019.
* [71] K. Lee, S. Maji, A. Ravichandran, and S. Soatto, “Meta-learning with differentiable convex optimization,” in _CVPR_ , 2019, pp. 10 657–10 665.
* [72] H.-Y. Tseng, H.-Y. Lee, J.-B. Huang, and M.-H. Yang, “Cross-domain few-shot classification via learned feature-wise transformation,” in _ICLR_ , 2020.
* [73] Q. Sun, Y. Liu, T.-S. Chua, and B. Schiele, “Meta-transfer learning for few-shot learning,” in _CVPR_ , 2019, pp. 403–412.
* [74] E. Triantafillou, T. Zhu, V. Dumoulin, P. Lamblin, U. Evci, K. Xu, R. Goroshin, C. Gelada, K. Swersky, P. Manzagol, and H. Larochelle, “Meta-dataset: A dataset of datasets for learning to learn from few examples,” in _ICLR_ , 2020.
* [75] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour, “Policy gradient methods for reinforcement learning with function approximation,” in _NeurIPS_ , 2000, pp. 1057–1063.
* [76] R. S. Sutton and A. G. Barto, _Reinforcement Learning: An Introduction_. The MIT Press, 2018.
* [77] B. Liu, Y. Cao, Y. Lin, Q. Li, Z. Zhang, M. Long, and H. Hu, “Negative margin matters: Understanding margin in few-shot classification,” in _ECCV_ , 2020, pp. 438–455.
* [78] A. Afrasiyabi, J.-F. Lalonde, and C. Gagné, “Persistent mixture model networks for few-shot image classification,” _arXiv preprint_ , 2020.
* [79] H. Wang, H. Zhao, and B. Li, “Bridging multi-task learning and meta-learning: Towards efficient training and effective adaptation,” in _ICML_ , 2021.
* [80] R. Ni, M. Goldblum, A. Sharaf, K. Kong, and T. Goldstein, “Data augmentation for meta-learning,” in _ICML_ , 2021, pp. 8152–8161.
* [81] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie, “The caltech-ucsd birds-200-2011 dataset,” 2011.
* [82] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein _et al._ , “Imagenet large scale visual recognition challenge,” _IJCV_ , vol. 115, no. 3, pp. 211–252, 2015.
* [83] N. Hilliard, L. Phillips, S. Howland, A. Yankov, C. D. Corley, and N. O. Hodas, “Few-shot learning with metric-agnostic conditional embeddings,” _arXiv preprint_ , 2018.
* [84] J. Krause, M. Stark, J. Deng, and L. Fei-Fei, “3d object representations for fine-grained categorization,” in _ICCV_ , 2013, pp. 554–561.
* [85] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba, “Places: A 10 million image database for scene recognition,” _TPAMI_ , vol. 40, no. 6, pp. 1452–1464, 2017.
* [86] G. V. Horn, O. M. Aodha, Y. Song, Y. Cui, C. Sun, A. Shepard, H. Adam, P. Perona, and S. J. Belongie, “The inaturalist species classification and detection dataset,” in _CVPR_ , 2018, pp. 8769–8778.
* [87] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” _arXiv preprint_ , 2014.
* [88] G. Blanchard, G. Lee, and C. Scott, “Generalizing from several related classification tasks to a new unlabeled sample,” in _NeurIPS_ , 2011, pp. 2178–2186.
* [89] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel, “Benchmarking deep reinforcement learning for continuous control,” in _ICML_ , 2016, pp. 1329–1338.
* [90] E. Todorov, T. Erez, and Y. Tassa, “Mujoco: A physics engine for model-based control,” in _IROS_ , 2012, pp. 5026–5033.
* [91] R. J. Williams, “Simple statistical gradient-following algorithms for connectionist reinforcement learning,” _Machine learning_ , vol. 8, no. 3, pp. 229–256, 1992.
Univalent homotopy type theory (HoTT) may be seen as a language for the category of $\infty$-groupoids.
It is being developed as a new foundation for mathematics and as an
internal language for (elementary) higher toposes.
We develop the theory of factorization systems, reflective subuniverses, and modalities in homotopy type theory, including their construction using a “localization” higher inductive type.
This produces in particular the ($n$-connected, $n$-truncated)
factorization system as well as internal presentations of subtoposes,
through lex modalities.
We also develop the semantics of these constructions.
§ INTRODUCTION
In traditional modal logic, a modality is a unary operation on propositions.
The classical examples are $\Box$ (“it is necessary that”) and $\lozenge$ (“it is possible that”).
In type theory and particularly dependent type theory, such as homotopy type theory, where propositions are regarded as certain types, it is natural to extend the notion of modality to a unary operation on types.
For emphasis we may call this a “typal modality”, or a “higher modality” since it acts on the “higher types” available in homotopy type theory (not just “sets” but types containing higher homotopy).
There are many kinds of propositional modalities, but many of them are either monads or comonads.
Monads and comonads on a poset (such as the poset of propositions) are also automatically idempotent, but this is no longer true for more general monads and comonads.
Thus there are many possible varieties of typal and higher modalities.
Typal modalities in non-dependent type theory have a wide range of applications in computer science.
In particular, following the pioneering work of [30], monadic typal modalities are commonly used to model effects in programming languages.
Non-dependent modal type theory is now a flourishing field with this and many other applications; see [15] for an overview.
In this paper we take a first step towards the study of higher modalities in homotopy type theory, restricting our attention to idempotent, monadic ones.
These are especially convenient for a number of reasons.
One is that in homotopy type theory, as in higher category theory, we expect a general monad (or comonad) to require infinitely many higher coherence conditions, which we don't know how to express in the finite syntax of type theory; whereas an idempotent one can instead be described using the universal property of a reflector into a subcategory.
(We can still use particular non-idempotent monadic modalities, such as the “partial elements” monad of [1, 17], without making all this coherence explicit, but it is harder to develop a general theory of them.)
Another is that in good situations, an idempotent monad can be extended to all slice categories consistently, and thereby represented “fully internally” in type theory as an operation $\modal : \UU\to\UU$ on a type universe.
Idempotent comonadic modalities have also been considered in dependent type theory and homotopy type theory (see for instance [31, 16, 34, 38]), but they generally require modifying the judgmental structure of type theory.
By contrast, our theory of modalities can be (and has been) formalized in existing proof assistants without modifying the underlying type theory.
Idempotent monadic modalities also include many very important
examples. The $(-1)$-truncation in homotopy type theory is a
higher-dimensional version of the bracket modality, which in 1-category theory
characterizes regular categories [4]. More
generally, the $n$-truncation modalities are prominent examples of modalities; indeed almost all of the theory of truncation and connectedness in <cit.> is just a specialization of the theory of a general modality.
More generally, we can produce idempotent monadic modalities by localization or nullification at small families, using a higher inductive type.
Finally, among idempotent monadic modalities we also find the left exact ones, which correspond semantically to subtoposes.
For the rest of this paper we will say simply modality to mean an idempotent monadic modality.
However, this should be regarded as only a local definition; in more general contexts the word “modality” should continue to encompass comonadic modalities and other sorts.
In fact, our use of the word “modality” will be a little more specific even than this.
If we express internally the most naïve notion of “idempotent monad on $\UU$”, we obtain a notion that we call a reflective subuniverse.
However, many reflective subuniverses that arise in practice, including truncation and left exact modalities (and, in fact, all concrete examples we will consider in this paper), satisfy the further property of being closed under $\Sigma$-types; it is these that we will call modalities.
We emphasize this property not just because it holds in many examples, but because it can be equivalently expressed by giving the modal operator a dependent elimination principle analogous to that of an inductive type.
This is a very natural thing to ask for when generalizing propositional modalities to typal operations.
The naturalness of this notion of modality is further supported by the fact that it has many equivalent characterizations.
In addition to a reflective subuniverse closed under $\Sigma$-types and a modal operator with a dependent eliminator, a modality can be defined using a “dependent universal property”, and more interestingly as a stable orthogonal factorization system.
The right class of maps in the factorization system consists of those whose fibers belong to the subuniverse (“modal maps”), while the left class consists of those whose fibers have contractible reflection into the subuniverse (“connected maps”).
The internal nature of the definition means that a stable factorization system is entirely determined by the fibers of its right class, which form a modality.[Non-stable factorization systems are not so determined, although they do have an underlying reflective subuniverse, and most reflective subuniverses can be extended to factorization systems.]
We prove the equivalence of all these definitions in <ref>, developing along the way some basic theory of reflective subuniverses, connected maps, and factorization systems.
In unaugmented Martin-Löf type theory we can define a few particular modalities, such as the double-negation modality, and the “open modality” associated to any mere proposition.
However, most interesting modalities require higher inductive types for their construction, including the $n$-truncations and the dual “closed modality” associated to a proposition.
In <ref> we give a general construction of modalities using a higher inductive localization type: given a family of maps $F:\prd{a:A} B(a) \to C(a)$, a type $X$ is $F$-local if the precomposition map $(C(a)\to X) \to (B(a)\to X)$ is an equivalence for all $a:A$, and the $F$-localization $\localization{F} X$ is the universal $F$-local type admitting a map from $X$.
We call a modality accessible if it can be generated by localization; this is inspired by the corresponding notion in category theory.
Accessible modalities include the $n$-truncation and open and closed modalities, as well as many examples from homotopy theory, where localization is a standard technique; thus we expect them to be a useful tool in the synthetic development of homotopy theory inside type theory.[Our notion of localization, being internal, is a little stronger than the standard sort of localization in homotopy theory; but in many cases it is equivalent.
The higher inductive construction of localization, when interpreted model-categorically according to the semantics of [26], also appears to be new and may be of independent interest in homotopy theory.]
In general, localization at a family of maps produces a reflective subuniverse (and, in fact, an orthogonal factorization system), but not necessarily a modality.
However, there is a simple condition which ensures that we do get a modality, namely that $C(a)=\unit$ for all $a:A$.
In this case the local types are those for which “every map $B(a)\to X$ is uniquely constant”; following standard terminology in homotopy theory we call them $B$-null and the corresponding localization $B$-nullification.
Any accessible modality can be presented as a nullification.
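Spelled out, $X$ is $B$-null precisely when the constant-maps function is an equivalence:
\begin{equation*}
\lam{x}\lam{\nameless}x \;:\; X \to (B(a)\to X)
\quad\text{is an equivalence for all } a:A.
\end{equation*}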
A very important class of modalities that excludes the $n$-truncations are the left exact, or lex, ones, which we study in <ref>.
These have many equivalent characterizations, but the most intuitive is simply that the reflector preserves finite limits.
When homotopy type theory is regarded as an internal language for higher toposes, lex modalities correspond to subtoposes.
In the traditional internal logic of 1-toposes, subtoposes are
represented by Lawvere-Tierney operators on the subobject
classifier, which generate a subtopos by internal
sheafification. Goldblatt [20] provides an overview
of the modal logic perspective on these operators on propositions.
Dependent type theory allows us to speak directly about the subtopos as an operation on a type universe (the lex modality), and show internally that any Lawvere-Tierney operator on the universe of propositions gives rise to a lex modality.
There is an additional subtlety here that only arises for $\infty$-toposes and homotopy type theory.
In 1-topos theory, and indeed in $n$-topos theory for any $n<\infty$, every lex modality (subtopos) arises from a Lawvere-Tierney operator; but in $\infty$-topos theory this is no longer true.
The subtoposes that are determined by their behavior on propositions are called topological in [28], and we appropriate this name for lex modalities of this sort as well.
The dual cotopological sort of lex modalities, including the hypercompletion, are harder to construct in type theory, but we can at least show that insofar as they exist they behave like their $\infty$-categorical analogues.
When this paper was written, we did not know any condition on a type family $B$ that ensured that $B$-nullification is lex and such that any accessible lex modality can be presented by such a $B$.
But as we were preparing it for final publication, [2] found such a condition: that $B$ is closed under taking path spaces.
In this case we may refer to $B$-nullification as a lex nullification.
<ref> displays in a Venn diagram all the different structures discussed above.
Lex modalities are a subclass of modalities, which are a subclass of reflective subuniverses.
In principle all three structures can be either accessible or non-accessible, although in practice non-accessible ones are very hard to come by; with topological modalities a subclass of the accessible lex ones.
Individual examples are displayed in single boxes, while general classes of examples (obtained by localization and restricted classes thereof) are displayed in double boxes.
[Venn diagram: reflective subuniverses contain modalities, which contain lex modalities, each class intersecting the accessible ones, with topological modalities inside the accessible lex modalities. Example boxes: $n$-truncation, nullifications, lex nullifications, localizations, double negation, open, closed, propositional nullifications, hypercompletion(?).]
Figure: Modalities and related structures
Viewing accessible lex modalities as subtoposes, we naturally expect that the subtopos should support its own internal language.
This is true, although we do not prove it precisely; we simply observe that the universe of modal types is closed under many type constructors and admits its own versions of all the others.
In particular, the universe of modal types for an accessible lex modality is itself a modal type for the same modality (in fact, this characterizes lex modalities among accessible ones).
Since any $\infty$-topos arises as a subtopos of a presheaf $\infty$-topos, we can essentially reduce the problem of finding univalent internal languages for $\infty$-toposes to that of finding them for presheaf $\infty$-toposes (and of finding universes closed under accessible lex modalities; see <ref>).
A similar argument, using judgementally strict idempotent monads, has already been used in the so-called “cubical stack” models of type theory [14, 13] (which do not actually in general lie in $\infty$-stack toposes) to prove independence results for homotopy type theory.
We end the main part of the paper with a general “fracture and gluing” theorem about modalities: if $\modal$ is any modality and $\lozenge$ is a lex modality that is “strongly disjoint” from $\modal$, then the join $\lozenge\lor\modal$ in the poset of modalities can be constructed using a “pullback fracture square”.
When applied to the open and closed modalities associated to a proposition, this specializes to an internal viewpoint on Artin gluing.
We call it a “fracture theorem” since the pullback squares appear formally analogous to the fracture squares in the classical theory of localization and completion at primes, though we do not know of a precise relationship.
In the final part of the paper, <ref>, we sketch a semantic interpretation of our theory in terms of comprehension categories and $(\infty,1)$-toposes.
In particular, we show that well-behaved reflective subcategories of $(\infty,1)$-toposes give rise to modalities in their internal languages, while dually modalities give rise to reflective subcategories of syntactic $(\infty,1)$-categories.
In this discussion we ignore the issue of universes, which it is not known how to model semantically in general $(\infty,1)$-toposes (except in a weak sense).
We will freely use the results and the notations from [40].
In fact, parts of this work have already appeared as <cit.>.
We generalize much of section 7.6 of [40] to general modalities in our <ref>, which also sharpens the results in <cit.>.
In particular, we will freely use function extensionality and the univalence axiom, often without comment.
Finally, we note that many of the results in this paper have been formalized in the Coq proof assistant [5].
However, the organization of results in the library is rather different than in this paper.
A rough correspondence is as follows; unless otherwise noted all files are in the directory.
[Table: correspondence between sections of this paper and files in the Coq library.]
There are also some differences in the proof techniques used in the library and in this paper.
In the library, localizations are constructed using “$\infty$-extendability” as a characterization of equivalences to avoid function extensionality hypotheses, as described in [35].
In addition, much attention is paid to ensuring appropriate universe polymorphism with parametrized modules; this is described in <cit.>.
We will not discuss these issues further here; see the cited references and the comments in the library for more information.
§ MODALITIES, REFLECTIVE SUBUNIVERSES AND FACTORIZATION SYSTEMS
In this section we will introduce the following four notions of modality
and prove that they are all equivalent:
* Higher modalities
* Uniquely eliminating modalities
* $\Sigma$-closed reflective subuniverses
* Stable orthogonal factorization systems
After their equivalence has been established, we will call all of them simply modalities.
The first three definitions have the following data in common: by a modal operator we mean a function $\modal:\UU\to\UU$, and by a modal unit we mean a family of functions $\modalunit^\modal:\prd*{A:\UU}A\to\modal A$.[In general we write $f:\prd*{x:A} B(x)$ instead of $f:\prd{x:A} B(x)$ to indicate that the argument $x$ of $f$ is implicit.]
Given these data, we say a type $X$ is modal if $\modalunit[X]:X\to\modal X$ is an equivalence, and we write $\UU_\modal \defeq \sm{X:\UU} \ismodal(X)$ for the subuniverse of modal types.
More generally, if $\mathcal{M}:\UU\to\prop$ is any predicate on the universe, we write $\UU_{\mathcal{M}} \defeq \sm{X:\UU} \mathcal{M}(X)$.
A higher modality consists of a modal operator and modal unit together with
* for every $A:\UU$ and every dependent type $P:\modal A\to\UU$, a
\begin{equation*}
\mathsf{ind}^\modal_A:\big(\prd{a:A}\modal(P(\eta(a)))\big)\to\prd{z:\modal A}\modal(P(z)).
\end{equation*}
* An identification
\begin{equation*}
\mathsf{comp}^\modal_A(f,x):\id{\mathsf{ind}^\modal_A(f)(\eta(x))}{f(x)}
\end{equation*}
for each $f:\prd{x:A}\modal(P(\eta(x)))$ and $x:A$.
* For any $x,y:\modal A$ the modal unit $\modalunit[(\id{x}{y})]:\id{x}{y}\to \modal(\id{x}{y})$ is an equivalence.
One might think of eliminating into a $P:\modal A\to \UU_\modal$ directly rather than into $\modal\circ P$ for a $P:\modal A\to \UU$, but in that case we would be unable to show that $\modal A$ is a modal type (<ref>).
A uniquely eliminating modality consists of
a modal operator and modal unit such that the function
\begin{equation*}
\lam{f} f\circ\modalunit[A] : (\prd{z:\modal A}\modal(P(z)))\to(\prd{x:A}\modal(P(\modalunit[A](x))))
\end{equation*}
is an equivalence for any $A$ and any $P:\modal A\to\UU$.
A reflective subuniverse is a family $\ismodal:\UU\to\prop$, together with a
modal operator and modal unit such that $\ismodal(\modal A)$ for
every $A:\UU$, and for every $B:\UU$ satisfying
$\ismodal(B)$, the function
\begin{equation*}
\lam{f} f\circ \modalunit[A]:(\modal A\to B)\to (A\to B)
\end{equation*}
is an equivalence.
A reflective subuniverse is $\Sigma$-closed if whenever $\ismodal(X)$ and $\ismodal(P(x))$ for all $x:X$, we have $\ismodal(\sm{x:X}P(x))$.
Note that unlike <ref>, in <ref> the notion of “modal type” is part of the data.
However, we will show in <ref> that $\ismodal(A)$ if and only if $\modalunit[A]$ is an equivalence.
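For readers who prefer proof-assistant notation, the data of a reflective subuniverse can be summarized roughly as the following Lean-style structure. This is only a sketch: Lean's core logic is not homotopy type theory, so the `Prop`-valued predicate and the inverse condition merely approximate the homotopical statement (the Coq formalization mentioned in the introduction is the faithful reference).

```lean
-- Rough sketch (ours) of the data of a reflective subuniverse.
structure ReflectiveSubuniverse where
  isModal : Type → Prop                  -- the subuniverse predicate
  O       : Type → Type                  -- the modal operator
  unit    : {A : Type} → A → O A         -- the modal unit
  OModal  : ∀ A : Type, isModal (O A)    -- reflections are modal
  -- precomposition with the unit is invertible into modal types
  lift    : ∀ {A B : Type}, isModal B →
              ∃ ext : (A → B) → (O A → B),
                (∀ f, ext f ∘ unit = f) ∧ (∀ g, ext (g ∘ unit) = g)
```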
An orthogonal factorization system consists of
predicates $\mathcal{L},\mathcal{R}:\prd*{A,B:\UU} (A\to B)\to\prop$ such that
* $\mathcal{L}$ and $\mathcal{R}$ are closed under composition and contain all identities (i.e. they are subcategories of the category of types that contain all the objects), and
* the type $\fact_{\mathcal{L},\mathcal{R}}(f)$ of factorizations
\begin{equation*}
\begin{tikzcd}
A \arrow[rr,"f"] \arrow[dr,swap,"f_{\mathcal{L}}"] & & B \\
& \im_{\mathcal{L},\mathcal{R}}(f) \arrow[ur,swap,"f_{\mathcal{R}}"]
\end{tikzcd}
\end{equation*}
of $f$, with $f_{\mathcal{L}}$ in $\mathcal{L}$ and $f_{\mathcal{R}}$ in $\mathcal{R}$, is contractible.
More precisely, the type $\fact_{\mathcal{L},\mathcal{R}}(f)$ is defined to
be the type of
\begin{equation*}
\big(\im_{\mathcal{L},\mathcal{R}}(f),\, f_{\mathcal{L}},\, p,\, f_{\mathcal{R}},\, q,\, h\big)
\end{equation*}
consisting of a type $\im_{\mathcal{L},\mathcal{R}}(f)$, a function $f_{\mathcal{L}}:A\to \im_{\mathcal{L},\mathcal{R}}(f)$ with
$p:\mathcal{L}(f_{\mathcal{L}})$, a function $f_{\mathcal{R}}:\im_{\mathcal{L},\mathcal{R}}(f)\to B$ with $q:\mathcal{R}(f_{\mathcal{R}})$, and an identification $h:\id{f}{f_{\mathcal{R}}\circ f_{\mathcal{L}}}$. The type $\im_{\mathcal{L},\mathcal{R}}(f)$ is called
the $(\mathcal{L},\mathcal{R})$-image of $f$.
A type $X$ is said to be $(\mathcal{L},\mathcal{R})$-modal if
the map $!:X\to\unit$ is in $\mathcal{R}$ (and hence $!_\mathcal{L}$
is an equivalence).
An orthogonal factorization system is said to be stable if the class
$\mathcal{L}$ is stable under pullbacks (by <ref>, $\mathcal{R}$ is always stable under pullbacks).
By univalence, the fact that $\mathcal{L}$ and $\mathcal{R}$ contain all identities implies that they each contain all equivalences.
Conversely, if $f\in \mathcal{L}\cap\mathcal{R}$, then $(\idfunc,f)$ and $(f,\idfunc)$ are both $(\mathcal{L},\mathcal{R})$-factorizations of $f$, and hence equal; which implies that $f$ is an equivalence.
Thus, $\mathcal{L}\cap\mathcal{R}$ consists exactly of the equivalences.
We now consider a few examples.
Since we will eventually prove all the definitions to be equivalent, we can use any one of them to describe any particular example.
The prime example is the $n$-truncation modality $\truncf n$ as studied in <cit.>, which we also denote $\truncmod{n}$.
This can be given as a higher modality, using its induction principle and the fact that $\trunc n A$ is an $n$-type and the identity types of an $n$-type are again $n$-types (indeed, $(n-1)$-types).
The corresponding stable orthogonal factorization system, consisting of $n$-connected and $n$-truncated maps, is also constructed in <cit.>; our construction in <ref> will be a generalization of this.
Let $Q$ be a mere proposition.
The open modality determined by $Q$ is defined by $\open Q A = (Q\to A)$, with unit $\modalunit[A](x) = \lam{\nameless}x : A \to (Q \to A)$.
(We call it “open” because semantically, it generalizes the open subtopos associated to a subterminal object of a topos, which in turn is so named because in the case of sheaves on a topological space $X$ it specializes to the open subspaces of $X$.)
To show that this is a higher modality, suppose we have $P: (Q\to A) \to \UU$ and $f:\prd{a:A} Q \to P(\lam{\nameless} a)$.
Then for any $z:Q\to A$ and $q:Q$ we have $f(z(q),q) : P(\lam{\nameless} z(q))$.
And since $Q$ is a mere proposition, we have $z(q) = z(q')$ for any $q':Q$, hence $e(z,q) : (\lam{\nameless} z(q)) = z$ by function extensionality.
This gives
\[ \lam{z}{q} \trans{e(z,q)}{(f(z(q),q))} : \prd{z:Q\to A} Q \to P(z). \]
For the computation rule, we have
\begin{align*}
(\lam{z}{q} \trans{e(z,q)}{(f(z(q),q))})(\lam{\nameless} a) &= \lam{q} \trans{e(\lam{\nameless} a,q)}{(f(a,q))}\\
&= \lam{q} f(a,q) = f(a)
\end{align*}
by function extensionality, since $e(\lam{\nameless} a,q) = \refl{}$.
Finally, if $x,y:Q\to A$, then $(x=y) \simeq \prd{q:Q} x(q) = y(q)$, and the map
\[ \Big(\prd{q:Q} x(q) = y(q)\Big) \to \Big( Q \to \prd{q:Q} x(q) = y(q)\Big) \]
is (by currying) essentially precomposition with a product projection $Q\times Q\to Q$, and that is an equivalence since $Q$ is a mere proposition.
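In Lean-style notation (again only a sketch in a proof-irrelevant `Prop`), the open modality and its unit are simply:

```lean
-- Sketch (ours) of the open modality: the reflector is the function
-- space out of Q, and the unit is the constant map. The dependent
-- eliminator constructed in the proof above uses proof irrelevance of Q
-- together with function extensionality.
def openM (Q : Prop) (A : Type) : Type := Q → A

def openUnit {Q : Prop} {A : Type} (a : A) : openM Q A := fun _ => a
```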
Again, let $Q$ be a mere proposition.
The closed modality determined by $Q$ is defined by $\closed Q A = Q \ast A$, the join of $Q$ and $A$ (the pushout of $Q$ and $A$ under $Q\times A$).
(As for open modalities, closed modalities generalize closed subtoposes, which in turn generalize closed subspaces of topological spaces.)
We show that this is a $\Sigma$-closed reflective subuniverse.
Define a type $B$ to be modal if $Q \to \iscontr(B)$, and note that it is indeed the case that $Q\to\iscontr(Q\ast A)$, for any type $A$.
By the universal property of pushouts, a map $Q \ast A \to B$ consists of a map $f:A\to B$ and a map $g:Q\to B$ and for any $a:A$ and $q:Q$ an identification $p:f(a)=g(q)$.
But if $Q \to \iscontr(B)$, then $g$ and $p$ are uniquely determined, so this is just a map $A\to B$.
Thus $(\closed Q A \to B) \to (A\to B)$ is an equivalence, so we have a reflective subuniverse.
It is $\Sigma$-closed since the dependent sum of a contractible family of types over a contractible base is contractible.
The double negation modality is defined by $A\mapsto \neg\neg A$, i.e. $(A\to \emptyt)\to \emptyt$, with $\modalunit(a) = \lam{g} g(a)$.
We show that this is a uniquely eliminating modality.
Since the map $\lam{f}f\circ \modalunit[A]$ that must be an equivalence has mere propositions as domain and codomain, it suffices to give a map in the other direction.
Thus, let $P: \neg\neg A \to \UU$ and $f:\prd{a:A} \neg \neg P(\lam{g} g(a))$; given $z:\neg\neg A$ we must derive a contradiction from $g:\neg P(z)$.
Since we are proving a contradiction, we can strip the double negation from $z$ and assume given an $a:A$.
And since $\neg\neg A$ is a mere proposition, we have $z = \lam{g} g(a)$, so that we can transport $f(a)$ to get an element of $\neg\neg P(z)$, contradicting $g$.
The trivial modality is the identity function on $\UU$.
It coincides with $\open \top$ and with $\closed\bot$.
Dually, the zero modality sends all types to $\unit$.
It is equivalently the $(-2)$-truncation, and coincides with $\open\bot$ and with $\closed \top$.
In each of <ref>
we have defined what it means for a type to be modal. In each case, being
modal is a family of mere propositions indexed by the universe, i.e. a subuniverse.
We will show in <ref> that each kind of structure is completely determined by this subuniverse.
(<ref> is more general, not requiring $\Sigma$-closedness.)
It follows that the type of all modalities of each
kind is a subset of the set $\UU\to\prop$ of all subuniverses, and in particular is a set.
This makes it easier to establish
the equivalences of the different kinds of modalities.
It suffices
to show that any modality of one kind determines a modality of the next kind
with the same modal types, which we will do as follows:
\begin{equation*}
\text{higher modality}
\;\longrightarrow\;
\text{uniquely eliminating modality}
\;\longrightarrow\;
\text{$\Sigma$-closed reflective subuniverse}
\;\longrightarrow\;
\text{stable factorization system}
\;\longrightarrow\;
\text{higher modality}
\end{equation*}
Before <ref> we take the opportunity to develop a bit more theory of reflective subuniverses, including closure under identity types (<ref>) and dependent products
(<ref>), along with several equivalent characterizations of $\Sigma$-closedness (<ref>).
Of these equivalences, the most surprising is that a stable factorization system is uniquely determined by its underlying reflective subuniverse of types.
This is false for stable factorization systems on arbitrary categories. However, an analogous fact is true in classical set-based mathematics for stable factorization systems on the category of sets (although in that case there are far fewer interesting examples); the statement we prove in type theory about the category of types is the analogue of this fact about sets.
We will also see in <ref> that when type theory is interpreted in a higher category, the data of a reflective subuniverse or modality has to be interpreted “fiberwise”, giving a richer structure than a single reflective subcategory.
§.§ Higher modalities
We start by showing that a higher modality is determined by its modal types, and gives rise to a uniquely eliminating modality.
If $\modal$ is a higher modality, then any type of the form $\modal X$ is modal.
We want to show that the modal unit $\modalunit[\modal X]:\modal X\to\modal\modal X$
is an equivalence. By the induction principle and the computation rule for
higher modalities, we find a function $f:\modal \modal X\to\modal X$ with
the property that $f\circ \modalunit[\modal X]\htpy\idfunc[\modal X]$. We wish to
show that we also have $\modalunit[\modal X]\circ f\htpy\idfunc$. Since identity
types of types of the form $\modal Y$ are declared to be modal, it is
equivalent to find a term of type
\begin{equation*}
\prd{z:\modal \modal X}\modal(\modalunit[\modal X](f(z))=z).
\end{equation*}
Now we are in a position to use the induction principle of higher modalities
again, so it suffices to show that $\modalunit(f(\modalunit(z)))=\modalunit(z)$
for any $z:\modal X$. This follows from the fact that $f\circ\modalunit=\idfunc$.
The data of two higher modalities $\modal$ and $\modal'$
are identical if and only if they have the same modal types.
Another way of stating this is that the function from the type of all
modalities on $\UU$ to the type $\UU\to\prop$ of predicates on $\UU$, given
by mapping a modality to the predicate $\ismodal$, is an embedding. Thus, we
need to show that for any predicate $\mathcal{M}:\UU\to\prop$, we can find at
most one modality for which $\mathcal{M}$ is the class of modal types.
To be precise, consider for any $\mathcal{M}:\UU\to\prop$ and $X:\UU$, the type
of tuples $(Y,p,\pi,I,C)$ such that
* $Y$ is a type.
* $p:\mathcal{M}(Y)$.
* $\pi:X\to Y$.
* $I_P:(\prd{x:X} P(\pi(x)))\to(\prd{y:Y} P(y))$ for any $P:Y\to\UU_{\mathcal{M}}$.
* $C$ witnesses that each $I_P$ is a right inverse of precomposing with $\pi$.
We will show that this type is a mere proposition.
First, we show that the
type of pairs $(I,C)$, with $I$ and $C$ of the indicated types, is a mere
proposition for any $(Y,p,\pi)$. After that, we show that the type of triples
$(Y,p,\pi)$ is also a mere proposition. These two facts combined prove the claim.
Consider a type $Y$ satisfying $\mathcal{M}$, and a function $\pi:X\to Y$, and
let $(I,C)$ and $(I',C')$ be two terms witnessing that $Y$ satisfies an induction
principle with a computation rule. We want to show that $(I,C)=(I',C')$, and of
course it suffices to show that $(I(s),C(s))=(I'(s),C'(s))$ for any
$P:Y\to\UU_{\mathcal{M}}$ and $s:\prd{x:X}P(\pi(x))$.
To show that $I(s,y)=I'(s,y)$ for any $y:Y$, we use
the induction principle $(I,C)$. So it suffices to show that
$I(s,\pi(x))=I'(s,\pi(x))$. Both of these terms are equal to $s(x)$. Thus,
we obtain a proof $J(s,y)$ that $I(s,y)=I'(s,y)$, with the property that $J(s,\pi(x))=\ct{C(s,x)}{\opp{C'(s,x)}}$ for each $x:X$.
Now we need to show that $\trans{J(s)}{C(s)}=C'(s)$, which is equivalent
to the property we just stated. This finishes the proof that the type of
the induction principle and computation rule is a mere proposition.
It remains to show that $(Y,\pi)=(Y',\pi')$, provided that $Y$ and $Y'$ are both
in $\mathcal{M}$, and that both sides satisfy
the induction principle and computation rule. It suffices to find an equivalence
$f:Y\to Y'$ such that $f\circ \pi=\pi'$.
From the induction principles of $Y$ resp. $Y'$, we obtain a function
$f:Y\to Y'$ with the property that $f\circ \pi=\pi'$, and a function
$f':Y'\to Y$ with the property that $f'\circ \pi'=\pi$.
To show that $f'\circ f=\idfunc$ we use the induction principle
of $Y$. Since the type $f'(f(y))=y$ is in $\mathcal{M}$, it suffices to show that
$f'(f(\pi(y)))=\pi(y)$. This readily follows from the defining properties of $f$
and $f'$. Similarly, we have $f\circ f'=\idfunc$.
A higher modality is a uniquely eliminating modality, with the
same modal types.
Let $\modal$ be a higher modality with modal units $\modalunit[A]$. Our goal is to show
that the pre-composition map
\begin{equation*}
\lam{s}s\circ\modalunit[A]:(\prd{z:\modal A}\modal(P(z)))\to(\prd{a:A}\modal(P(\modalunit[A](a))))
\end{equation*}
is an equivalence for each $A:\UU$ and $P:\modal A\to\UU$.
By the given induction principle and computation rule, we obtain a
right inverse $\mathsf{ind}^\modal_A$ of $\blank\circ\modalunit[A]$.
To show that it is a left inverse, consider $s:\prd{z:\modal A}\modal(P(z))$.
We need to find a homotopy
\begin{equation*}
\prd{z:\modal A}\id{s(z)}{\mathsf{ind}^\modal_A(s\circ \modalunit_A)(z)}.
\end{equation*}
Since $s(z)$ and $\mathsf{ind}^\modal_A(s\circ \modalunit_A)(z)$ are terms of $\modal(P(z))$,
and identity types of types of the form $\modal Y$ are modal, the type $\id{s(z)}{\mathsf{ind}^\modal_A(s\circ \modalunit_A)(z)}$
is modal for each $z:\modal A$. Hence it suffices to find a function of type
\begin{equation*}
\prd{a:A}\id{s(\modalunit_A(a))}{\mathsf{ind}^\modal_A(s\circ \modalunit_A)(\modalunit_A(a))}.
\end{equation*}
This follows straight from the computation rule of higher modalities.
§.§ Uniquely eliminating modalities
Next, we show that a uniquely eliminating modality is determined by its modal types, and gives rise to a $\Sigma$-closed reflective subuniverse.
Given a uniquely eliminating modality, $\modal X$ is modal for any type $X$.
Using the elimination principle of $\modal \modal X$, we find a function
$f:\modal \modal X\to\modal X$ and an identification $f\circ\modalunit[\modal X]=\idfunc[\modal X]$.
By unique elimination, the precomposition function
\[ \lam{g}g\circ\modalunit[\modal X] : (\modal \modal X\to\modal \modal X) \to (\modal X\to\modal \modal X) \]
is an equivalence, and hence its fiber over $\modalunit[\modal X]$:
\begin{equation*}
\sm{g:\modal \modal X\to\modal \modal X} g\circ\modalunit[\modal X]=\modalunit[\modal X]
\end{equation*}
is contractible. Since both $\idfunc[\modal \modal X]$ and $\modalunit[\modal X]\circ f$
are in this type (with suitable identifications), we find that $f$ is also a
right inverse of $\modalunit[\modal X]$. This shows that $\modalunit[\modal X]$ is an
equivalence, so $\modal X$ is modal.
The data of two uniquely eliminating modalities $\modal$ and $\modal'$ are equivalent if and only if both have the same modal types.
We need to show that the type of uniquely eliminating modalities
with a given class $\mathcal{M}:\UU\to\prop$ of modal types
is a mere proposition. Since the types of the form $\modal X$ are modal,
it suffices to show that for any class $\mathcal{M}
:\UU\to\prop$ and any type $X$, the type of tuples $(Y,p,\pi,H)$ is a mere proposition, where:
* $Y:\UU$.
* $p:\mathcal{M}(Y)$.
* $\pi:X\to Y$.
* For each $P$, $H_P$ witnesses that the function
\begin{equation*}
(\lam{s}s\circ \pi):(\prd{y:Y}\modal(P(y)))\to(\prd{x:X}\modal(P(\pi(x))))
\end{equation*}
is an equivalence.
Let $(Y,p,\pi,H)$ and $(Y',p',\pi',H')$ be such tuples. To show that they are
equal, it suffices to show that $(Y,\pi)=(Y',\pi')$ because the other things
in the list are terms of mere propositions. Furthermore, showing that
$(Y,\pi)=(Y',\pi')$ is equivalent to finding an equivalence $f:\eqv{Y}{Y'}$ with
the property that $f\circ\pi=\pi'$. By $H$, there is such a function, and by
$H'$ there is a function $f':Y'\to Y$ such that $f'\circ\pi'=\pi$. Now the
uniqueness property implies that there is exactly one function $h:Y\to Y$ such
that $h\circ\pi=\pi$. Since both $f'\circ f$ and $\idfunc[Y]$ are such functions,
it follows that $f'\circ f=\idfunc$, and similarly
$f\circ f'=\idfunc$.
Any uniquely eliminating modality determines a $\Sigma$-closed reflective
subuniverse with the same modal types.
It is immediate from the definition of uniquely eliminating modalities
that every map $f:A\to B$ into a modal type $B$ has a homotopy unique extension to $\modal A$
along the modal unit:
\begin{equation*}
\begin{tikzcd}
A \arrow[dr,"f"] \arrow[d,swap,"\modalunit_A"] \\ \modal A \arrow[r,densely dotted,swap,"\tilde f"] & B.
\end{tikzcd}
\end{equation*}
Since the types of the form $\modal X$ are modal, we obtain a reflective subuniverse.
It remains to verify that the type $\sm{z:\modal X}\modal(P(z))$ is modal for
any type $X$ and any $P:\modal X\to\UU$. We have the function
\begin{equation*}
\varphi\defeq\lam{m}\pairr{f(m),g(m)}:\modal(\sm{z:\modal X}\modal(P(z)))\to\sm{z:\modal X}\modal(P(z)),
\end{equation*}
where $f$ and $g$ are obtained from the elimination principle, with
\begin{align*}
f &: \modal\Big(\sm{z:\modal X}\modal(P(z))\Big)\to\modal X, & f(\modalunit(x,u)) &\jdeq x,\\
g &: \prd{w:\modal(\sm{z:\modal X}\modal(P(z)))}\modal(P(f(w))), & g(\modalunit(x,u)) &\jdeq u.
\end{align*}
Our goal is to show that $\varphi$ is an inverse to the modal unit.
Note that
\begin{equation*}
\varphi(\modalunit(x,y)) \jdeq \pairr{f(\modalunit(x,y)),g(\modalunit(x,y))} \jdeq \pairr{x,y},
\end{equation*}
so we see immediately that $\varphi$ is a left inverse of $\modalunit$.
To show that $\varphi$ is a right inverse of $\modalunit$, note that the type
of functions $h$ fitting in a commuting triangle of the form
\begin{equation*}
\begin{tikzcd}[column sep=-3em]
\modal(\sm{z:\modal X}\modal(P(z))) \arrow[rr,densely dotted,"h"] & & \modal(\sm{z:\modal X}\modal(P(z))) \\
\phantom{\modal(\sm{z:\modal X}\modal(P(z)))} & \sm{z:\modal X}\modal(P(z)) \arrow[ul,"\modalunit"] \arrow[ur,swap,"\modalunit"] & \phantom{\modal(\sm{z:\modal X}\modal(P(z)))}
\end{tikzcd}
\end{equation*}
is a fiber over $\modalunit$ of a precomposition equivalence, and hence contractible.
Since this type also contains the identity function, it suffices
to show that $(\modalunit\circ\varphi)\circ\modalunit=\modalunit$; but this follows
from the fact that $\varphi$ is a left inverse of the modal unit.
§.§ Σ-closed reflective subuniverses
Now we study reflective subuniverses in a bit more detail, and end by
showing that $\Sigma$-closed ones give rise to stable factorization
systems. $\Sigma$-closure is used in <ref> to
show that left maps and right maps are closed under composition.
§.§.§ Properties of reflective subuniverses
For any $\mathcal{M}:\UU\to\prop$ and any type $X$, the type of triples $(Y,f,I)$ consisting of
* $Y:\UU_{\mathcal{M}}$,
* $f:X\to Y$, and
* $I:\prd{Z:\UU_{\mathcal{M}}}\isequiv(\lam{g}g\circ f:(Y\to Z)\to(X\to Z))$
is a mere proposition.
Consider $(Y,f,I)$ and $(Y',f',I')$ of the described type. Since $I$ and $I'$
are terms of a mere proposition, it suffices to show that $(Y,f)=(Y',f')$. In
other words, we have to find an equivalence $g:Y\to Y'$ such that $g\circ f=f'$.
By $I(Y')$, the type of
pairs $(g,h)$ consisting of a function $g:Y\to Y'$ together with an identification $h:g\circ f=f'$ is contractible. By
$I'(Y)$, the type of pairs $(g',h')$ consisting of a function $g':Y'\to Y$
together with an identification $h':g'\circ f'=f$ is contractible.
Now $g'\circ g$ is a function such that $g'\circ g\circ f=g'\circ f'=f$, as
is $\idfunc[Y]$. By contractibility, it follows that $g'\circ g=\idfunc[Y]$.
Similarly, $g\circ g'=\idfunc[Y']$.
The data of any two reflective subuniverses with the same modal types are the same.
Given the modal types, the rest of the data of a reflective subuniverse consists of, for each type $X$, a triple $(Y,f,I)$ as in <ref>.
Thus, by <ref>, these data form a mere proposition.
Given a reflective subuniverse, a type $X$ is modal if and only if $\modalunit[X]$ is an equivalence.
Certainly if $\modalunit[X]$ is an equivalence, then $X$ is modal since it is equivalent to the modal type $\modal X$.
Conversely, if $X$ is modal then we have a triple $(X,\idfunc[X],\nameless)$ inhabiting the type from <ref>, which also contains $(\modal X,\modalunit[X],\nameless)$.
Since this type is a mere proposition, these two elements are equal; hence $\modalunit[X]$ is, like $\idfunc[X]$, an equivalence.
Given a reflective subuniverse, if a modal unit $\modalunit[X]$ has a left inverse (i.e. a retraction), then it is an equivalence, and hence $X$ is modal.
Suppose $f$ is a left inverse of $\modalunit[X]$, i.e. $f\circ \modalunit[X] = \idfunc[X]$.
Then $\modalunit[X]\circ f\circ \modalunit[X] = \modalunit[X]$, so $\modalunit[X]\circ f$ is a factorization of $\modalunit[X]$ through itself.
By uniqueness of such factorizations, $\modalunit[X]\circ f = \idfunc[\modal X]$.
Thus $f$ is also a right inverse of $\modalunit[X]$, hence $\modalunit[X]$ is an equivalence.
In the following lemma we show that any reflective subuniverse is a `functor up to homotopy', i.e. that the localization operation has an action on morphisms which preserves composition and identities.
Given $f:A\to B$ we have an induced map $\modal f : \modal A \to \modal B$, preserving identities and composition up to homotopy.
Moreover, for any $f$ the naturality square
\begin{equation*}
\begin{tikzcd}
A \arrow[r,"f"] \arrow[d,swap,"\modalunit"] & B \arrow[d,"\modalunit"] \\
\modal A \arrow[r,swap,"\modal f"] & \modal B
\end{tikzcd}
\end{equation*}
commutes.
Define $\modal f$ to be the unique function such that $\modal f \circ \modalunit[A] = \modalunit[B] \circ f$, using the universal property of $\modalunit[A]$.
The rest is easy to check using further universal properties.
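For instance, here is a minimal sketch of the functoriality checks for $f:A\to B$ and $g:B\to C$, using only the defining equation and the uniqueness of extensions:
\begin{align*}
\modal(\idfunc[A])\circ\modalunit[A] &= \modalunit[A]\circ\idfunc[A] = \idfunc[\modal A]\circ\modalunit[A],\\
\modal(g\circ f)\circ\modalunit[A] &= \modalunit[C]\circ(g\circ f) = \modal g\circ\modalunit[B]\circ f = (\modal g\circ\modal f)\circ\modalunit[A],
\end{align*}
so $\modal(\idfunc[A])=\idfunc[\modal A]$ and $\modal(g\circ f)=\modal g\circ\modal f$, since in each case both sides extend the same map along $\modalunit[A]$.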
Given a reflective subuniverse and any type $X$, the map $\modal \modalunit[X] : \modal X \to \modal\modal X$ is an equivalence.
By naturality, we have $\modal \modalunit[X] \circ \modalunit[X] = \modalunit[\modal X] \circ \modalunit[X]$.
Hence $\modal \modalunit[X] = \modalunit[\modal X]$ by the universal property of $\modalunit[X]$, but $\modalunit[\modal X]$ is an equivalence by <ref>.
Given a reflective subuniverse, a type $X$ is modal if and only if $(\blank \circ f) : (B\to X) \to (A\to X)$ is an equivalence for any function $f:A\to B$ such that $\modal f$ is an equivalence.
If $\modal f$ is an equivalence and $X$ is modal, then by the universal property of $\modalunit$, we have a commutative square
\[
\begin{tikzcd}
(B\to X) \ar[r,"\blank\circ f"] & (A\to X) \\
(\modal B\to X) \ar[r,"\blank\circ\modal f"'] \ar[u,"{\blank\circ \modalunit[B]}"] &
(\modal A \to X) \ar[u,"{\blank\circ \modalunit[A]}"']
\end{tikzcd}
\]
in which all but the top map are equivalences; thus so is the top map.
Conversely, since $\modal\modalunit[X]$ is an equivalence, the hypothesis implies that
$(\blank \circ \modalunit[X]) : (\modal X\to X) \to (X\to X)$
is an equivalence.
In particular, its fiber over $\idfunc[X]$ is inhabited, i.e. $\modalunit[X]$ has a retraction; hence $X$ is modal.
Consider a reflective subuniverse with modal operator $\modal$, and let $P:X\to\UU$ for some type $X:\UU$.
Then the unique map for which the triangle
\begin{equation*}
\begin{tikzcd}
\sm{x:X}P(x) \arrow[d,swap,"\modalunit"] \arrow[dr,"{\lam{\pairr{x,y}}\modalunit(x,\modalunit(y))}"] \\
\modal(\sm{x:X}P(x)) \arrow[r,densely dotted] & \modal(\sm{x:X}\modal(P(x)))
\end{tikzcd}
\end{equation*}
commutes, is an equivalence.
Since both codomains are modal, it suffices to show that ${\lam{\pairr{x,y}}\modalunit(x,\modalunit(y))}$ has the universal property of $\modalunit[\sm{x:X}P(x)]$, i.e. that any map $(\sm{x:X}P(x)) \to Y$, where $Y$ is modal, extends uniquely to $\modal(\sm{x:X}\modal(P(x)))$.
But we have
\begin{align*}
((\sm{x:X}P(x)) \to Y)
&\eqvsym \prd{x:X} \big(P(x) \to Y\big)\\
&\eqvsym \prd{x:X} \big(\modal(P(x)) \to Y\big)
\tag{since $Y$ is modal}\\
&\eqvsym \Big(\big(\sm{x:X}\modal(P(x))\big) \to Y\Big)\\
&\eqvsym \Big(\modal \big(\sm{x:X}\modal(P(x))\big) \to Y\Big)
\tag{since $Y$ is modal}
\end{align*}
and it is easy to see that this is the desired precomposition map.
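Concretely (a quick check), the composite sends a map $h:\modal(\sm{x:X}\modal(P(x)))\to Y$ to
\begin{equation*}
\lam{\pairr{x,y}}h\big(\modalunit(x,\modalunit(y))\big),
\end{equation*}
which is precisely precomposition with the map in the triangle above.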
For any reflective subuniverse, if $X$ is modal, then so is the identity type $x=y$ for any $x,y:X$.
Let $X$ be a modal type, and let $x,y:X$. We have a map
$\modal(x=y)\to\unit$. The outer square in the diagram
\begin{equation*}
\begin{tikzcd}
\modal(x=y) \arrow[ddr,bend right=15] \arrow[drr,bend left=15] \\
& (x=y) \arrow[r] \arrow[d] \arrow[ul,"\modalunit"] \arrow[dr, phantom, "\lrcorner", very near start] & \unit \arrow[d,"x"] \\
& \unit \arrow[r,swap,"y"] & X
\end{tikzcd}
\end{equation*}
commutes, because both maps extend the map $(x=y)\to X$ along $\modalunit$, and
such extensions are unique because $X$ is assumed to be modal.
Hence the universal property of the pullback gives
a left inverse of $\modalunit:(x=y)\to\modal(x=y)$, so by <ref> $(x=y)$ is modal.
Given a reflective subuniverse,
if $P(x)$ is modal for all $x:X$, then so is $\prd{x:X}P(x)$.
By <ref>, it suffices to define a left inverse of the modal unit
$\modalunit:(\prd{x:A}P(x))\to \modal(\prd{x:A}P(x))$. By the universal property
of dependent product, extending
\begin{equation*}
\begin{tikzcd}
\prd{x:A}P(x) \arrow[r,"{\idfunc}"] \arrow[d,"\modalunit"] & \prd{a:A}P(a) \arrow[d,"{\psi\,\defeq\,\lam{f}{a}\modalunit[P(a)](f(a))}"] \\
\modal(\prd{x:A}P(x)) \arrow[r,densely dotted] & \prd{a:A}\modal(P(a))
\end{tikzcd}
\end{equation*}
is equivalent to extending
\begin{equation*}
\begin{tikzcd}[column sep=large]
\prd{x:A}P(x) \arrow[r,"{\mathsf{ev}_a}"] \arrow[d,swap,"{\modalunit}"]
& P(a) \arrow[d,"{\modalunit}"] \\
\modal(\prd{x:A}P(x)) \arrow[r,densely dotted,swap,"{\modal(\mathsf{ev}_a)}"] & \modal(P(a))
\end{tikzcd}
\end{equation*}
for any $a:A$. Thus, we find
\begin{equation*}
\lam{z}{a}\modal(\mathsf{ev}_a)(z)
\end{equation*}
as the solution to the first extension problem. In the first extension problem,
the function $\psi$ is an equivalence by the assumption that each $P(a)$ is
modal, so we obtain a retraction of the modal unit.
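Explicitly (a sketch): write $\varphi\defeq\lam{z}{a}\modal(\mathsf{ev}_a)(z)$ for this extension. By the naturality of the modal units we have $\varphi\circ\modalunit=\psi$, and hence
\begin{equation*}
(\psi^{-1}\circ\varphi)\circ\modalunit = \psi^{-1}\circ\psi = \idfunc,
\end{equation*}
so $\psi^{-1}\circ\varphi$ is the desired retraction of $\modalunit$.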
Taking $X=\unit+\unit$, so that $P:X\to\UU$ is just a pair of types, we conclude that if $A$ and $B$ are modal then so is $A\times B$.
Moreover, we have:
Given any reflective subuniverse, the modal operator $\modal$ preserves finite cartesian products (including the unit type).
In the nullary case, the statement is that the unit type $\unit$ is modal, which follows directly from <ref>.
In the binary case, we have to show that the modal extension
\begin{equation*}
\begin{tikzcd}
X\times Y \arrow[d,swap,"{\modalunit[X\times Y]}"] \arrow[dr,"\lam{\pairr{x,y}}\pairr{\modalunit[X](x),\modalunit[Y](y)}"] \\
\modal(X\times Y) \arrow[r,densely dotted] & \modal X\times\modal Y
\end{tikzcd}
\end{equation*}
is an equivalence.
But $(\modal(X\times Y),\modalunit[X\times Y],\nameless)$ inhabits the type from <ref>, so if we can show that $(\modal X\times \modal Y,\lam{\pairr{x,y}}\pairr{\modalunit[X](x),\modalunit[Y](y)})$ also extends to an inhabitant of that type, then they will be equal, inducing an equivalence that by uniqueness must be the map above.
To show this, first note that $\modal X\times \modal Y$ is modal, as remarked above.
And for any modal type $Z$ we have
\begin{align*}
(X\times Y \to Z)
&\eqvsym X\to (Y\to Z)\\
&\eqvsym X\to (\modal Y\to Z)\\
&\eqvsym \modal X\to (\modal Y\to Z)\\
&\eqvsym \modal X\times \modal Y\to Z
\end{align*}
given by precomposition as desired.
Here in the penultimate step we use the fact that the function type $\modal Y\to Z$ is modal since $Z$ is, by <ref>.
Given any reflective subuniverse, the modal operator preserves mere propositions.
A type $P$ is a mere proposition if and only if the diagonal $P\to P\times P$ is an equivalence.
The result then follows from <ref>.
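In a bit more detail (a sketch): if $\Delta_P:P\to P\times P$ is an equivalence, then so is $\modal(\Delta_P)$, and hence so is the composite
\begin{equation*}
\modal P \xrightarrow{\modal(\Delta_P)} \modal(P\times P) \eqvsym \modal P\times\modal P.
\end{equation*}
Precomposing with $\modalunit[P]$ shows that this composite sends $\modalunit(p)$ to $\pairr{\modalunit(p),\modalunit(p)}$, so by uniqueness of extensions it is the diagonal of $\modal P$; hence $\modal P$ is a mere proposition.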
By contrast, even modalities do not generally preserve $n$-types for any $n\ge 0$.
For instance, the “shape” modality of [38] takes the topological circle, which is a 0-type, to the homotopical circle, which is a 1-type, and the topological 2-sphere, which is also a 0-type, to the homotopical 2-sphere, which is (conjecturally) not an $n$-type for any finite $n$.
However, we will see in <ref> that lex modalities do preserve $n$-types for all $n$.
The basic properties of types and maps in homotopy type theory, such as being contractible, being a proposition, being an $n$-type, being an equivalence, and so on, are all constructed (perhaps inductively) out of identity types and $\Sigma$- and $\Pi$-types.
Thus, a $\Sigma$-closed reflective subuniverse is closed under them as well.
That is, if $A$ and $B$ are modal and $f:A\to B$, then the propositions “$A$ is contractible”, “$A$ is an $n$-type”, “$f$ is an equivalence”, and so on, are all modal as well.
§.§.§ $\Sigma$-closed reflective subuniverses
Let $\mathcal{M}:\UU\to\prop$ be a reflective subuniverse with modal
operator $\modal$. We say
that a type $X$ is $\modal$-connected if $\modal X$ is contractible,
and we say that a function $f:X\to Y$ is $\modal$-connected if each
of its fibers is. Similarly, we say that $f$ is modal if each of its
fibers is.
Note that a type $X$ is modal or $\modal$-connected just when the map $X\to\unit$ is.
Recall from <ref> that the open modality associated to a proposition $Q$ is defined by $\open Q(A) \defeq (Q\to A)$.
We claim that $A$ is $\open Q$-connected if and only if $Q \to\iscontr(A)$.
In other words, $(Q \to\iscontr(A))\eqvsym \iscontr(Q\to A)$.
For on the one hand, if $Q\to \iscontr(A)$, then $Q\to A$; while any two $f,g:Q\to A$ can be shown equal by function extensionality, since if $Q$ then $A$ is contractible.
But on the other hand, if $\iscontr(Q\to A)$ and $Q$, then $\eqv{(Q\to A)}{A}$, hence $\iscontr(A)$.
Note that $Q \to\iscontr(A)$ is also the defining condition for the $\closed Q$-modal types from <ref>.
That is, the $\open Q$-connected types coincide with the $\closed Q$-modal types.
We will come back to this relationship in <ref>.
The following theorem combines Lemma 7.5.7 and Theorem 7.7.4 of [40].
Given a reflective subuniverse with modal operator $\modal$,
the following are equivalent:
* It is $\Sigma$-closed.
* It is uniquely eliminating.
* The modal units are $\modal$-connected.
To show <ref>$\Leftrightarrow$<ref>, let $Y$ be modal and $P:Y \to \UU_\modal$, and consider for any $X$ the following commuting square:
\begin{equation*}
\begin{tikzcd}
\Big(\modal X \to \sm{y:Y}P(y)\Big) \arrow[r] \arrow[d] & \Big(X \to \sm{y:Y}P(y)\Big) \arrow[d] \\
\sm{g:\modal X\to Y}\prd{z:\modal X}P(g(z)) \arrow[r] & \sm{f:X\to Y}\prd{x:X}P(f(x))
\end{tikzcd}
\end{equation*}
The vertical maps are equivalences, so for any $X,Y,P$ the top map is an equivalence if and only if the bottom is.
If <ref> holds, the top map is an equivalence for all $X,Y,P$.
But the converse is also true, since we can take $X \defeq \sm{y:Y}P(y)$ to obtain a retraction for its unit.
The bottom map is induced by the map $(\modal X\to Y) \to (X\to Y)$, which is an equivalence since $Y$ is modal, and the family of maps
\[\Big(\prd{z:\modal X} P(g(z))\Big) \to \Big(\prd{x:X} P(g(\modalunit[X](x)))\Big) \]
for all $g:\modal X\to Y$; thus it is an equivalence just when each of these maps is.
If <ref> holds, then this is true for all $X,Y,P,g$.
But the converse is also true, since we can take $Y \defeq \modal X$ and $g\defeq \idfunc[\modal X]$.
This completes the proof of <ref>$\Leftrightarrow$<ref>.
To show <ref>$\Rightarrow$<ref>, we want a term of type
\begin{equation*}
\prd{z:\modal X}\iscontr(\modal(\hfib{\modalunit}{z})).
\end{equation*}
Using the dependent eliminators, it is easy to find a term
$s:\prd{z:\modal X}\modal(\hfib{\modalunit}{z})$ with the property that
$s\circ\modalunit(x)=\modalunit(x,\refl{\modalunit(x)})$. Now we need to show
\begin{equation*}
\prd{z:\modal X}{w:\modal(\hfib{\modalunit}{z})}w=s(z).
\end{equation*}
Since the type $w=s(z)$ is modal, this is equivalent to
\begin{equation*}
\prd{z:\modal X}{x:X}{p:\modalunit(x)=z} \modalunit(x,p)=s(z).
\end{equation*}
Moreover, the type $\sm{z:\modal X}\modalunit(x)=z$ is contractible, so this
is equivalent to
\begin{equation*}
\prd{x:X} \modalunit(x,\refl{\modalunit(x)})=s(\modalunit(x)),
\end{equation*}
of which we have a term by the defining property of $s$.
Finally, to show <ref>$\Rightarrow$<ref> we show that for any $\modal$-connected map $f:X\to Y$ and any family $P:Y \to \UU_\modal$ of modal types of $Y$, the precomposition map
\begin{equation*}
\Big(\prd{y:Y}P(y)\Big)\to \Big(\prd{x:X}P(f(x))\Big)
\end{equation*}
is an equivalence. This is because we have a commuting square
\begin{equation*}
\begin{tikzcd}
\prd{y:Y}\Big(\modal(\hfib{f}{y})\to P(y)\Big) \arrow[r] \arrow[d] & \prd{y:Y}\Big(\hfib{f}{y}\to P(y)\Big) \arrow[d] \\
\prd{y:Y}P(y) \arrow[r] & \prd{x:X}P(f(x))
\end{tikzcd}
\end{equation*}
In this square the map on the left is an equivalence by the contractibility of $\modal(\hfib{f}{y})$; the map on the right is an equivalence by the dependent universal property of identity types; and the top map is an equivalence by the universal property of modalities. Therefore the bottom map is an equivalence.
Given $f:A\to B$ and $g:B\to C$ and a reflective subuniverse $\modal$, if $f$ is $\modal$-connected, then $g$ is $\modal$-connected if and only if $g\circ f$ is $\modal$-connected.
That is, $\modal$-connected maps are closed under composition and right cancellable.
Recall that for $f:A\to B$ and $g:B\to C$, one has $\hfib{g\circ f}{z}=\sm{p:\hfib{g}{z}}\hfib{f}{\proj1(p)}$.
Thus, for any $z:C$ we have
\begin{align*}
\modal(\hfib{g\circ f}{z})
& \eqvsym
\modal(\sm{p:\hfib{g}{z}}\hfib{f}{\proj1(p)}) \\
& \eqvsym
\modal(\sm{p:\hfib{g}{z}}\modal(\hfib{f}{\proj1(p)}))
\qquad \text{(by <ref>)}\\
& \eqvsym
\modal(\sm{p:\hfib{g}{z}}\unit) \\
& \eqvsym
\modal\hfib{g}{z}
\end{align*}
using the fact that $f$ is $\modal$-connected.
Thus, one is contractible if and only if the other is.
In general it is not true that if $g$ and $g\circ f$ are $\modal$-connected then $f$ is; this is one of the equivalent characterizations of lex modalities (<ref>).
A $\Sigma$-closed reflective subuniverse determines a stable orthogonal factorization system with the same
modal types.
Define $\mathcal{L}$ to be the class of $\modal$-connected
maps and $\mathcal{R}$ to be the class of modal maps.
We first show that both $\mathcal{L}$ and $\mathcal{R}$ are closed under composition.
Since $\hfib{g\circ f}{z}=\sm{p:\hfib{g}{z}}\hfib{f}{\proj1(p)}$, by $\Sigma$-closedness if $f$ and $g$ are both in $\mathcal{R}$ then so is $g\circ f$.
Thus $\cR$ is closed under composition; while <ref> implies that $\cL$ is closed under composition.
And since the fibers of an identity map are contractible, and contractible types are both modal and $\modal$-connected, both $\mathcal{L}$ and $\mathcal{R}$ contain all identities.
To obtain a factorization system,
it remains to show that the type of
$(\mathcal{L},\mathcal{R})$-factorizations of any function $f:X\to Y$ is contractible.
Since
\[\pairr{X,f}=_{(\sm{Z:\UU} Z\to Y)} \pairr{\sm{y:Y}\hfib{f}{y},\proj1},\]
it is sufficient to
show that $\fact_{\mathcal{L},\mathcal{R}}(\proj1)$ is contractible for any
$\proj1:\sm{y:Y}P(y)\to Y$. But $\proj1$ factors as
\begin{equation*}
\begin{tikzcd}
\sm{y:Y}P(y) \arrow[r,"p_\mathcal{L}"] & \sm{y:Y}\modal(P(y)) \arrow[r,"p_\mathcal{R}"] & Y
\end{tikzcd}
\end{equation*}
where $p_\mathcal{L}\defeq\total{\modalunit[P(\blank)]}$ and $p_\mathcal{R}\defeq\proj1$.
The fibers of $p_\mathcal{R}$ are $\modal(P(\blank))$, so it follows
immediately that $p_\mathcal{R}$ is in $\mathcal{R}$.
Moreover, since
$\eqv{\hfib{\total{\modalunit}}{\pairr{y,u}}}{\hfib{\modalunit[P(y)]}{u}}$ and each $\modalunit$ is $\modal$-connected, it follows that $p_\mathcal{L}$ is in $\mathcal{L}$.
Now consider any other factorization $(I,g,h,H)$ of $\proj1$ into
an $\cL$-map $g:(\sm{y:Y}P(y))\to I$ followed by an $\cR$-map $h:I\to Y$. Since
$I=\sm{y:Y}\hfib{h}{y}$, we have a commuting square
\begin{equation*}
\begin{tikzcd}
\sm{y:Y}P(y) \arrow[r,"g"] \arrow[d,swap,"{\total{\gamma}}"]
& I \arrow[d,"h"] \\
\sm{y:Y}\hfib{h}{y} \arrow[ur,equals] \arrow[r,swap,"\proj1"] & Y
\end{tikzcd}
\end{equation*}
in which $\gamma(y,u)\defeq \pairr{g(y,u),H(y,u)}$.
It follows that
\[(I,g,h,H)=\left(\tsm{y:Y}\hfib{h}{y},\total{\gamma},\proj1,\nameless\right).\]
Thus it suffices to show that there is a commuting triangle
\begin{equation*}
\begin{tikzcd}[column sep=0]
\phantom{\hfib{h}{y}} & P(y) \arrow[dl,swap,"\modalunit"] \arrow[dr,"{\gamma_y}"] & \phantom{\modal(P(y))} \\
\modal(P(y)) \arrow[rr,equals] & & \hfib{h}{y}
\end{tikzcd}
\end{equation*}
for all $y:Y$.
We will do this using <ref>, by showing that $\gamma_y$ has the same universal property as $\modalunit[P(y)]$.
This follows from the following calculation:
\begin{align*}
(\hfib{h}{y}\to Z) & \eqvsym ((\sm{w:\hfib{h}{y}}\modal(\hfib{g}{\proj1(w)}))\to Z)
\tag{since $g\in\cL$, each $\modal(\hfib{g}{\proj1(w)})$ is contractible}\\
& \eqvsym ((\sm{w:\hfib{h}{y}}\hfib{g}{\proj1(w)})\to Z)
\tag{since $Z$ is modal}\\
& \eqvsym (\hfib{h\circ g}{y}\to Z) \\
& \eqvsym (P(y)\to Z),
\tag{since $h\circ g=\proj1$}
\end{align*}
which we can verify is given by precomposition with $\gamma_y$.
It remains to show that our orthogonal factorization system is stable. Consider a pullback diagram
\begin{equation*}
\begin{tikzcd}
A' \arrow[d,swap,"k"] \arrow[r,"f"] & A \arrow[d,"l"] \\
B' \arrow[r,swap,"g"] & B
\end{tikzcd}
\end{equation*}
in which $l$ is in $\mathcal{L}$. By the pasting lemma for pullbacks, it
follows that $\hfib{k}{b}=\hfib{l}{g(b)}$ for each $b:B'$. Thus, it follows that
$k$ is in $\mathcal{L}$.
§.§.§ Connected maps
The $\modal$-connected maps introduced in <ref> have a number of other useful properties.
Most of these are stated in <cit.> for the special case of the $n$-truncation modality, but essentially the same proofs work for any modality.
In fact, most of these properties are true about an arbitrary reflective subuniverse, although a few of the proofs must be different.
Thus, for this subsection, let $\modal$ be a reflective subuniverse, not in general $\Sigma$-closed.
If $f : A \to B$ is $\modal$-connected, then it induces an equivalence
$\modal f : \eqv{\modal{A}}{\modal{B}}$.
To define an inverse $g:\modal B \to \modal A$, by the universal property of $\modal B$, it suffices to define a map $B\to \modal A$.
But given $b:B$, we have a map $\proj1 : \hfib{f}{b} \to A$, hence $\modal\proj1 : \modal \hfib{f}{b} \to \modal A$.
And $\modal \hfib{f}{b}$ is contractible since $f$ is $\modal$-connected, so it has a point $c_b$, and we define $g(\modalunit[B](b)) = \modal \proj1(c_b)$.
Now by the universal property of $\modal A$ and $\modal B$, it suffices to show that the composites $g\circ \modal f \circ \modalunit[A]$ and $\modal f\circ g \circ \modalunit[B]$ are equal to $\modalunit[A]$ and $\modalunit[B]$ respectively.
In the first case, for $a:A$ we have
\begin{align*}
g(\modal f(\modalunit[A](a)))
&= g(\modalunit[B](f(a)))\\
&= \modal \proj1(c_{f(a)})\\
&= \modal \proj1(\modalunit[\hfib f b](a,\refl{f(a)}))\\
&= \modalunit[A](\proj1(a,\refl{f(a)}))\\
&= \modalunit[A](a),
\end{align*}
using in the third line the fact that $\modal(\hfib f b)$ is contractible.
And in the second case, for $b:B$ we have
\begin{align*}
\modal f(g(\modalunit[B](b)))
&= \modal f(\modal \proj1(c_b))\\
&= \modal(f\circ \proj1)(c_b)\\
&= \modal(\lam{u:\hfib f b} b)(c_b)\\
&= \modal(\lam{u:\unit} b)(\modalunit[\unit](\ttt))\\
&= \modalunit[B](b)
\end{align*}
where in the last two lines we use the commutativity of the following diagram:
\[
\begin{tikzcd}
\hfib f b \ar[d] \ar[r] \ar[rr,bend left,"{\lam{u:\hfib f b} b}"] & \unit \ar[r,"b"] \ar[d,"{\modalunit[\unit]}"] \ar[dl,"{c_b}"] & B \ar[d,"{\modalunit[B]}"] \\
\modal(\hfib f b) \ar[r] \ar[rr,bend right,"{\modal (\lam{u:\hfib f b} b)}"'] & \modal \unit \ar[r] & \modal B
\end{tikzcd}
\]
and the fact that $\modal\unit$ is contractible.
The converse of <ref> is false in general, even for modalities; we will see in <ref> that it holds exactly when $\modal$ is lex.
Recall that $\modaltype$ denotes the universe of modal types.
Note that the projection $\proj1 : (\sm{x:A} P(x)) \to A$ is $\modal$-modal if and only if $P$ factors through $\modaltype$.
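One way to see this (a standard computation of the fibers of a projection):
\begin{equation*}
\hfib{\proj1}{a}\eqvsym P(a)
\end{equation*}
for each $a:A$, so $\proj1$ is a modal map just when each fiber $P(a)$ is a modal type, i.e. when $P$ factors through $\modaltype$.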
The following generalizes the unique elimination property of $\modalunit$ to arbitrary $\modal$-connected maps.
For $f:A\to B$ and $P:B\to\modaltype$, consider the following function:
\begin{equation*}
\lam{s} s\circ f :\Parens{\prd{b:B} P(b)}\to\Parens{\prd{a:A}P(f(a))}.
\end{equation*}
For a fixed $f$, the following are equivalent.
* $f$ is $\modal$-connected.
* For every $P:B\to \modaltype$, the map $\lam{s} s\circ f$ is an equivalence.
* For every $P:B\to \modaltype$, the map $\lam{s} s\circ f$ has a section.
First suppose $f$ is $\modal$-connected and let $P:B\to\modaltype$. Then:
\begin{align*}
\prd{b:B} P(b) & \eqvsym \prd{b:B} \Parens{\modal{\hfib{f}b} \to P(b)}
\tag{since $\modal{\hfib{f}b}$ is contractible}\\
& \eqvsym \prd{b:B} \Parens{\hfib{f}b\to P(b)}
\tag{since $P(b)$ is modal}\\
& \eqvsym \prd{b:B}{a:A}{p:f(a)= b} P(b)\\
& \eqvsym \prd{a:A} P(f(a))
\end{align*}
and the composite equivalence is indeed composition with $f$.
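Tracing a term $s:\prd{b:B}P(b)$ through these equivalences (a quick check):
\begin{equation*}
s\;\mapsto\;\lam{b}{w}s(b)\;\mapsto\;\lam{b}{u}s(b)\;\mapsto\;\lam{b}{a}{p}s(b)\;\mapsto\;\lam{a}s(f(a)),
\end{equation*}
which is indeed $s\circ f$.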
Thus, <ref>$\Rightarrow$<ref>, and clearly <ref>$\Rightarrow$<ref>.
To show <ref>$\Rightarrow$<ref>, let
$P(b)\defeq \modal{\hfib{f}b}$.
Then <ref> yields a map $c:\prd{b:B} \modal{\hfib{f}b}$ with
$c(f(a))=\modalunit{\pairr{a,\refl{f(a)}}}$. To show that each $\modal{\hfib{f}b}$ is contractible, we will show that $c(b)=w$ for any $b:B$ and $w:\modal{\hfib{f}b}$.
In other words, we must show that the identity function $\modal{\hfib{f}b} \to \modal{\hfib{f}b}$ is equal to the constant function at $c(b)$.
By the universal property of $\modal{\hfib{f}b}$, it suffices to show that they become equal when precomposed with $\modalunit[\hfib{f}b]$, i.e. we may assume that $w = \modalunit\pairr{a,p}$ for some $a:A$ and $p:f(a)=b$.
But now path induction on $p$ reduces our goal to the given $c(f(a))=\modalunit{\pairr{a,\refl{f(a)}}}$.
A type $A$ is $\modal$-connected if and only if the “constant functions” map
\begin{equation*}
B \to (A\to B)
\end{equation*}
is an equivalence for every modal type $B$.
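One way to see this (a sketch): apply <ref> to the map $A\to\unit$, noting that $A$ is $\modal$-connected just when this map is, and that a family $P:\unit\to\modaltype$ is simply a single modal type $B$. The precomposition map then becomes
\begin{equation*}
\Big(\prd{t:\unit}B\Big)\to\Big(\prd{a:A}B\Big),\qquad\text{i.e.}\qquad B\to(A\to B).
\end{equation*}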
Dually, we will prove in <ref> that when $\modal$ is a modality, if this holds for all $\modal$-connected $A$ then $B$ is modal.
Let $B$ be a modal type and let $f:A\to B$ be a function.
If $f$ is $\modal$-connected, then the induced function $g:\modal A\to B$ is an equivalence; the converse holds if $\modal$ is $\Sigma$-closed.
By <ref>, if $f$ is $\modal$-connected then $\modal f$ is an equivalence.
But $g$ is the composite $\modalunit[B]^{-1}\circ \modal f$, hence also an equivalence.
Conversely, by <ref>, $\modalunit$ is $\modal$-connected.
Thus, since $f = g\circ \modalunit[A]$, if $g$ is an equivalence then $f$ is also $\modal$-connected.
Let $f:A\to B$ be a function and $P:A\to\type$ and $Q:B\to\type$ be type families. Suppose that $g:\prd{a:A} P(a)\to Q(f(a))$
is a family of $\modal$-connected functions.
If $f$ is also $\modal$-connected, then so is the function
\begin{align*}
\varphi &:\Parens{\sm{a:A} P(a)}\to\Parens{\sm{b:B} Q(b)}\\
\varphi(a,u) &\defeq \pairr{f(a),g_a(u)}.
\end{align*}
Conversely, if $\varphi$ and each $g_a$ are $\modal$-connected, and moreover $Q$ is fiberwise merely inhabited (i.e. we have $\brck{Q(b)}$ for all $b:B$), then $f$ is $\modal$-connected.
For any $b:B$ and $v:Q(b)$ we have
\begin{align*}
\modal{\hfib{\varphi}{\pairr{b,v}}} & \eqvsym \modal{\sm{a:A}{u:P(a)}{p:f(a)= b} \trans{p}{g_a(u)}= v}\\
& \eqvsym \modal{\sm{w:\hfib{f}b}{u:P(\proj1(w))} g_{\proj 1 w}(u)= \trans{\opp{\proj2(w)}}{v}}\\
& \eqvsym \modal{\sm{w:\hfib{f}b} \hfib{g(\proj1 w)}{\trans{\opp{\proj 2(w)}}{v}}}\\
& \eqvsym \modal{\sm{w:\hfib{f}b} \modal{\hfib{g(\proj1 w)}{\trans{\opp{\proj 2(w)}}{v}}}}\\
& \eqvsym \modal{\hfib{f}b}
\end{align*}
where the transports along $p$ and $\opp{\proj2(w)}$ are with respect to $Q$, and we use <ref> on the penultimate line.
Therefore, if either of $\modal{\hfib{\varphi}{\pairr{b,v}}}$ or $\modal{\hfib{f}b}$ is contractible, so is the other.
In particular, if $f$ is $\modal$-connected, then $\modal{\hfib{f}b}$ is contractible for all $b:B$, and hence so is $\modal{\hfib{\varphi}{\pairr{b,v}}}$ for all $(b,v):\sm{b:B} Q(b)$.
On the other hand, if $\varphi$ is $\modal$-connected, then $\modal{\hfib{\varphi}{\pairr{b,v}}}$ is contractible for all $(b,v)$, hence so is $\modal{\hfib{f}b}$ for any $b:B$ such that there exists some $v:Q(b)$.
Finally, since contractibility is a mere proposition, it suffices to merely have such a $v$.
Let $P,Q:A\to\type$ be type families and $f:\prd{a:A} \Parens{P(a)\to Q(a)}$.
Then $\total f: \sm{a:A}P(a) \to \sm{a:A} Q(a)$ is $\modal$-connected if and only if each $f(a)$ is $\modal$-connected.
We have
$\hfib{\total f}{\pairr{x,v}}\eqvsym\hfib{f(x)}v$
for each $x:A$ and $v:Q(x)$. Hence $\modal{\hfib{\total f}{\pairr{x,v}}}$ is contractible if and only if
$\modal{\hfib{f(x)}v}$ is contractible.
Of course, the “if” direction of <ref> is a special case of <ref>.
This suggests a similar generalization of the “only if” direction of <ref>, which would be a version of <ref> asserting that if $f$ and $\varphi$ are $\modal$-connected then so is each $g_a$.
However, this is not true in general; we will see in <ref> that it holds if and only if the modality is lex.
Finally, we note that the $\modal$-modal and $\modal$-connected maps are classified.
More generally, we prove the following generalization of <cit.>.
Let $P:\UU\to\prop$ be a predicate on the universe, let $\UU_P\defeq
\sm{X:\UU}P(X)$ and $(\UU_P)_\bullet\defeq\sm{X:\UU_P}X$. The projection
$\proj1:(\UU_P)_\bullet\to\UU_P$ classifies the maps whose fibers satisfy $P$, in the sense that these are exactly the maps that occur as pullbacks of it.
The fiber of $\proj1:(\UU_P)_\bullet\to\UU_P$ over $X:\UU_P$ is $X$, which satisfies $P$ by definition.
Thus all fibers of this map satisfy $P$, hence so do all fibers of any of its pullbacks.
Conversely, let $f:Y\to X$ be any map into $X$. Then $\hfibfunc{f}:X\to\UU$ factors through
$\UU_P$ if and only if all the fibers of $f$ satisfy $P$. Let us write
$P(f)$ for $\prd{x:X}P(\hfib{f}{x})$. Then we see that the equivalence
$\chi$ of Theorem 4.8.3 of [40] restricts to an equivalence
\begin{equation*}
\chi^P:(\sm{Y:\UU}{f:Y\to X}P(f))\to(X\to\UU_P).
\end{equation*}
Now observe that the outer square and the square on the right in the diagram
\begin{equation*}
\begin{tikzcd}[column sep=6em]
Y \arrow[d,swap,"f"] \arrow[rr,"{\lam{y}\pairr{\hfib{f}{f(y)},\blank,\pairr{y,\refl{f(y)}}}}"] & & \pointed{(\UU_P)} \arrow[r] \arrow[d] & \pointed{\UU} \arrow[d] \\
X \arrow[rr,swap,"{\hfibfunc{f}}"] & & \UU_P \arrow[r] & \UU
\end{tikzcd}
\end{equation*}
are pullback squares. Hence the square on the left is a pullback square.
The $\modal$-modal maps are classified by the universe of $\modal$-modal types, and the $\modal$-connected maps are classified by the universe of $\modal$-connected types.
§.§ Stable orthogonal factorization systems
To complete <ref>, we will show that stable orthogonal factorization systems are also determined by their modal types, and give rise to higher modalities.
§.§.§ Orthogonal factorization systems
In classical category theory, orthogonal factorization systems are equivalently characterized by a unique lifting property.
We begin with the analogue of this in our context.
Let $(\mathcal{L},\mathcal{R})$ be an orthogonal factorization system, and
consider a commutative square
\begin{equation*}
\begin{tikzcd}
A \arrow[r,"f"] \arrow[d,swap,"l"] \ar[dr,phantom,"\scriptstyle S"] & X \arrow[d,"r"] \\
B \arrow[r,swap,"g"] & Y
\end{tikzcd}
\end{equation*}
(i.e. paths $S : r\circ f = g\circ l$)
for which $l$ is in $\mathcal{L}$ and $r$ is in $\mathcal{R}$. We define
$\fillers S$ to be the type of diagonal fillers
of the above diagram, i.e. the type of tuples $(j,H_f,H_g,K)$ consisting of
$j:B\to X$, $H_f:j\circ l=f$ and $H_g:r\circ j=g$ and an equality $K : r\circ H_f = \ct S{(H_g \circ l)}$.
The equality $K$ is required because of homotopy coherence: the commutativity of the given square and of the two triangles are not mere propositions but data consisting of homotopies inhabiting those squares and triangles, so to actually have a “filler” in the homotopy coherent sense we need to know that the “pasting composite” of the two triangles is the given square.
Let $(\mathcal{L},\mathcal{R})$ be an orthogonal factorization system, and
consider a commutative square
\begin{equation*}
\begin{tikzcd}
A \arrow[r,"f"] \arrow[d,swap,"l"] \ar[dr,phantom,"\scriptstyle S"] & X \arrow[d,"r"] \\
B \arrow[r,swap,"g"] & Y
\end{tikzcd}
\end{equation*}
for which $l$ is in $\mathcal{L}$ and $r$ is in $\mathcal{R}$. Then the type
$\fillers S$ of diagonal fillers is contractible.
By the fact that every morphism factors uniquely as a left map followed by a
right map, we may factorize $f$ and $g$ in $(\mathcal{L},\mathcal{R})$ as $H_f : f = f_\cR \circ f_\cL$ and $H_g : g = g_\cR \circ g_\cL$, obtaining the diagram
\begin{equation*}
\begin{tikzcd}
A \arrow[r,"f_{\mathcal{L}}"] \arrow[d,swap,"l"] & \im(f) \arrow[r,"f_{\mathcal{R}}"] & X \arrow[d,"r"] \\
B \arrow[r,swap,"g_{\mathcal{L}}"] & \im(g) \arrow[r,swap,"g_{\mathcal{R}}"] & Y.
\end{tikzcd}
\end{equation*}
Now both $(r\circ f_{\mathcal{R}})\circ f_{\mathcal{L}}$ and
$g_{\mathcal{R}}\circ(g_{\mathcal{L}}\circ l)$ are factorizations
of the same function $r\circ f:A\to Y$.
Since $\fact_{\mathcal{L},\mathcal{R}}(r\circ f)$ is contractible, so is its identity type
\[ (\im(f), f_\cL, r\circ f_\cR, r\circ H_f) = (\im(g), g_\cL \circ l, g_\cR, \ct{S}{(H_g\circ l)}). \]
This identity type is equivalent to
\begin{multline*}
\sm{e:\im(f) \simeq \im(g)}{H_\cL : g_\cL \circ l = e\circ f_\cL}{H_\cR : r\circ f_\cR = g_\cR\circ e}\\
(\ct{(r\circ H_f)}{(H_\cR \circ f_\cL)} = \ct S{\ct{(H_g \circ l)}{(g_\cR \circ H_\cL)}})
\end{multline*}
Now since $\fact_{\cL,\cR}(f)$ and $\fact_{\cL,\cR}(g)$ are also contractible, we can sum over them to get that the following type is contractible:
\begin{multline*}
\sm{\im(f):\UU}{f_\cL : A \to \im(f)}{f_\cR : \im(f) \to X}{H_f : f = f_\cR \circ f_\cL}\\
\sm{\im(g):\UU}{g_\cL : B \to \im(g)}{g_\cR : \im(g) \to Y}{H_g : g = g_\cR \circ g_\cL}\\
\sm{e:\im(f) \simeq \im(g)}{H_\cL : g_\cL \circ l = e\circ f_\cL}{H_\cR : r\circ f_\cR = g_\cR\circ e}\\
(\ct{(r\circ H_f)}{(H_\cR \circ f_\cL)} = \ct S{\ct{(H_g \circ l)}{(g_\cR \circ H_\cL)}})
\end{multline*}
(omitting the hypotheses that $f_\cL,g_\cL\in\cL$ and $f_\cR,g_\cR\in\cR$).
Reassociating and removing the contractible type $\sm{\im(g):\UU}(\im(f) \simeq \im(g))$, and renaming $\im(f)$ as simply $I$, this is equivalent to
\begin{multline*}
\sm{I:\UU}{f_\cL : A \to I}{f_\cR : I \to X}{H_f : f = f_\cR \circ f_\cL}\\
\sm{g_\cL : B \to I}{g_\cR : I \to Y}{H_g : g = g_\cR \circ g_\cL}{H_\cL : g_\cL \circ l = f_\cL}{H_\cR : r\circ f_\cR = g_\cR}\\
(\ct{(r\circ H_f)}{(H_\cR \circ f_\cL)} = \ct S{\ct{(H_g \circ l)}{(g_\cR \circ H_\cL)}})
\end{multline*}
Removing the contractible $\sm{f_\cL : A \to I} (g_\cL \circ l = f_\cL)$ and $\sm{g_\cR : I \to Y} (r\circ f_\cR = g_\cR)$, this becomes
\begin{multline*}
\sm{I:\UU}{f_\cR : I \to X}{g_\cL : B \to I}{H_f : f = f_\cR \circ g_\cL \circ l}{H_g : g = r\circ f_\cR \circ g_\cL}\\
(r\circ H_f = \ct S{(H_g \circ l)})
\end{multline*}
Inserting a contractible $\sm{j:B\to X} (f_\cR \circ g_\cL = j)$, and reassociating some more, we get
\begin{multline*}
\sm{j:B\to X}{I:\UU}{f_\cR : I \to X}{g_\cL : B \to I}{H_j:f_\cR \circ g_\cL = j}\\
\sm{H_f : f = f_\cR \circ g_\cL \circ l}{H_g : g = r\circ f_\cR \circ g_\cL}
(r\circ H_f = \ct S{(H_g \circ l)})
\end{multline*}
But now $\sm{I:\UU}{f_\cR : I \to X}{g_\cL : B \to I}{H_j:f_\cR \circ g_\cL = j}$ is just $\fact_{\cL,\cR}(j)$, hence contractible.
Removing it, we get
\begin{equation*}
\sm{j:B\to X}{H_f : f = j \circ l}{H_g : g = r\circ j}(r\circ H_f = \ct S{(H_g \circ l)})
\end{equation*}
which is just $\fillers S$.
Therefore, this is also contractible.
For any class $\mathcal{C}:\prd*{A,B:\UU}(A\to B)\to\prop$ of maps, we define
* $^{\bot}\mathcal{C}$ to be the class of maps with (unique) left lifting
property with respect to all maps in $\mathcal{C}$: the mere proposition
$({}^\bot\mathcal{C})(l)$ asserts that for every commutative square
\begin{equation*}
\begin{tikzcd}
A \arrow[r,"f"] \arrow[d,swap,"l"] \ar[dr,phantom,"S"] & X \arrow[d,"r"] \\
B \arrow[r,swap,"g"] & Y
\end{tikzcd}
\end{equation*}
with $r$ in $\mathcal{C}$, the type $\fillers S$ of diagonal fillers is contractible.
* $\mathcal{C}^\bot$ to be the class of maps with the dual (unique) right lifting
property with respect to all maps in $\mathcal{C}$.
* $l\perp r$ to mean $r\in \{l\}^\perp$ (equivalently, $l\in {}^{\perp}\{r\}$).
In an orthogonal factorization system $(\mathcal{L},\mathcal{R})$, one has
$\mathcal{L}={^\bot\mathcal{R}}$ and $\mathcal{L}^\bot=\mathcal{R}$.
We first show that $\mathcal{L}={^\bot\mathcal{R}}$, i.e. we show that
$\mathcal{L}(f)\leftrightarrow {^\bot\mathcal{R}}(f)$ for any map $f$. Note
that the implication $\mathcal{L}(f)\to {^\bot\mathcal{R}}(f)$ follows from <ref>.
Let $f:A\to B$ be a map in ${^\bot\mathcal{R}}$.
We wish to show that $\mathcal{L}(f)$. Consider the factorization
$(f_{\mathcal{L}},f_{\mathcal{R}})$ of $f$. Then the square
\begin{equation*}
\begin{tikzcd}
A \arrow[r,"f_{\mathcal{L}}"] \arrow[d,swap,"f"] & \mathsf{im}_{\mathcal{L},\mathcal{R}}(f) \arrow[d,"f_{\mathcal{R}}"] \\
B \arrow[r,swap,"\idfunc"] & B
\end{tikzcd}
\end{equation*}
commutes. Since $f$ has the left lifting property, the type of diagonal fillers
of this square is contractible. Thus we have a section $j$ of $f_{\mathcal{R}}$.
The map $j\circ f_\mathcal{R}$ is then a diagonal filler of the square
\begin{equation*}
\begin{tikzcd}
A \arrow[r,"f_{\mathcal{L}}"] \arrow[d,swap,"f_{\mathcal{L}}"] & \mathsf{im}_{\mathcal{L},\mathcal{R}}(f) \arrow[d,"f_{\mathcal{R}}"] \\
\mathsf{im}_{\mathcal{L},\mathcal{R}}(f) \arrow[r,swap,"f_{\mathcal{R}}"] & B.
\end{tikzcd}
\end{equation*}
Of course, the identity map $\idfunc[\mathsf{im}_{\mathcal{L},\mathcal{R}}(f)]$
is also a diagonal filler for this square, so the fact that the type of
such diagonal fillers is contractible implies that $j\circ f_{\mathcal{R}}=\idfunc$.
Thus, $j$ and $f_\cR$ are inverse equivalences, and so the pair $(B,f)$ is equal to the pair $(\mathsf{im}_{\mathcal{L},\mathcal{R}}(f),f_\cL)$.
Hence $f$, like $f_\cL$, is in $\cL$.
Similarly, <ref> also implies that $\mathcal{R}(f)\to \mathcal{L}^\bot(f)$
for any map $f$, while we can prove $\mathcal{L}^\bot(f)\to\mathcal{R}(f)$ analogously to ${^\bot\mathcal{R}}(f)\to\mathcal{L}(f)$.
The data of two orthogonal factorization systems $(\mathcal{L},\mathcal{R})$ and
$(\mathcal{L}',\mathcal{R}')$ are identical if and only if $\mathcal{R}=\mathcal{R}'$.
“Only if” is obvious.
Conversely, if $\mathcal{R}=\mathcal{R}'$, then by <ref> we have $\cL = \cL'$, and the remaining data of an orthogonal factorization system is a mere proposition.
For each $l:X\to Y$ such that $\mathcal{L}(l)$ and each type $Z$, the function
\begin{equation*}
\lam{g} g\circ l: (\sm{g:Y\to Z}\mathcal{R}(g))\to(\sm{f:X\to Z}\mathcal{R}(f))
\end{equation*}
is a monomorphism. Also, for each $r:X\to Y$ such that $\mathcal{R}(r)$ and
each type $Z$, the function
\begin{equation*}
\lam{f} r\circ f : (\sm{f:Z\to X}\mathcal{L}(f))\to(\sm{g:Z\to Y}\mathcal{L}(g))
\end{equation*}
is a monomorphism.
We prove the first statement. Suppose $g,g':Y\to Z$ are two $\mathcal{R}$-maps
such that $H:g\circ l=f$ and $H':g'\circ l=f$. Then we obtain two diagonal fillers of a commutative square from $l$ to an $\mathcal{R}$-map; since the type of such fillers is contractible by <ref>, the two fillers are equal, and hence $\pairr{g,H}=\pairr{g',H'}$. Thus the fibers of $\lam{g}g\circ l$ are mere propositions, so it is a monomorphism. The second statement is proved dually.
From every orthogonal factorization system we obtain a reflective subuniverse with the same modal types.
We define $P(A)$ to be the proposition that the unique map $A\to\unit$ is in $\mathcal{R}$.
For any type $A$, there is a unique factorization
\begin{equation*}
\begin{tikzcd}
A \arrow[r,"{\modalunit[A]}"] & \modal A \arrow[r] & \unit
\end{tikzcd}
\end{equation*}
of the unique map $A\to\unit$, where $\modalunit[A]$ is in $\mathcal{L}$. This
defines the operation $\modal$ and the modal units.
Now let $A:\UU$ and $B:\UU_P$, and consider $f:A\to B$. We have to show that
the type of extensions of $f$ along $\modalunit$ is contractible.
It is immediate that the type of such extensions is equivalent to the type
of diagonal fillers
of the square
\begin{equation*}
\begin{tikzcd}
A \arrow[r,"f"] \arrow[d,swap,"{\modalunit[A]}"] & B \arrow[d] \\
\modal A \arrow[r,swap,"g"] & \unit.
\end{tikzcd}
\end{equation*}
By <ref>, the assumption that $P(B)$ holds and the fact that $\modalunit[A]$ is
in $\mathcal{L}$, we know that this type of diagonal fillers is contractible.
Let $(\mathcal{L},\mathcal{R})$ be an orthogonal factorization system. Then
the class $\mathcal{R}$ is stable under pullbacks.
Consider a pullback diagram
\begin{equation*}
\begin{tikzcd}
A \arrow[d,swap,"k"] \arrow[r,"g"] & X \arrow[d,"h"] \\
B \arrow[r,swap,"f"] & Y
\end{tikzcd}
\end{equation*}
where $h:X\to Y$ is assumed to be in $\mathcal{R}$, and let $k=k_{\mathcal{R}}\circ k_\mathcal{L}$ be a factorization of $k$.
Then the outer rectangle in the diagram
\begin{equation*}
\begin{tikzcd}
A \arrow[r,equals] \arrow[d,swap,"k_{\mathcal{L}}"] & A \arrow[d,swap,"k"] \arrow[r,"g"] & X \arrow[d,"h"] \\
\im_{\mathcal{L},\mathcal{R}}(k) \arrow[r,swap,"k_{\mathcal{R}}"] & B \arrow[r,swap,"f"] & Y
\end{tikzcd}
\end{equation*}
commutes, so by <ref> there is a diagonal lift $i:\im_{\mathcal{L},\mathcal{R}}(k)\to X$ with $i \circ k_{\cL} = g$ and $h\circ i = f \circ k_{\cR}$.
Then by the universal property of pullbacks, we obtain a map $j:\im_{\mathcal{L},\mathcal{R}}(k)\to A$ with $g\circ j = i$ and $k\circ j=k_{\mathcal{R}}$.
And since $g\circ j \circ k_{\cL} = i\circ k_{\cL} = g$ and $k\circ j\circ k_{\cL} = k_{\cR}\circ k_{\cL} = k$ (by homotopies coherent with the pullback square), the uniqueness aspect of the pullback gives $j\circ k_{\mathcal{L}}=\idfunc$.
It suffices to show that $k_{\mathcal{L}}$ is an equivalence, and since we already have that $j\circ k_{\mathcal{L}}=\idfunc$ we only need to show that $k_{\mathcal{L}}\circ j=\idfunc$.
We do this using the contractibility of the type of diagonal fillers. Consider the square
\begin{equation*}
\begin{tikzcd}
A \arrow[r,"k_{\mathcal{L}}"] \arrow[d,swap,"k_{\mathcal{L}}"] & \im_{\mathcal{L},\mathcal{R}}(k) \arrow[d,"k_{\mathcal{R}}"] \\
\im_{\mathcal{L},\mathcal{R}}(k) \arrow[r,swap,"k_{\mathcal{R}}"] & B,
\end{tikzcd}
\end{equation*}
for which $\idfunc:\im_{\mathcal{L},\mathcal{R}}(k)\to \im_{\mathcal{L},\mathcal{R}}(k)$ (with the trivial homotopies) is a diagonal filler. However, we also have the homotopies $k_{\mathcal{L}}\circ j\circ k_{\mathcal{L}} \htpy k_{\mathcal{L}}$ and $k_{\mathcal{R}}\circ k_{\mathcal{L}}\circ j\htpy k\circ j\htpy k_{\mathcal{R}}$. This shows that we have a second diagonal filler, of which the underlying map is $k_{\mathcal{L}}\circ j$. Since the type of diagonal fillers is contractible, it follows that $k_{\mathcal{L}}\circ j=\idfunc$, as desired.
§.§.§ Stable orthogonal factorization systems
Given $l,r,f,g$ and a homotopy $S : r \circ f = g \circ l$, consider, as $b:B$ varies, all the diagrams of the form
\begin{equation*}
\begin{tikzcd}
\hfib{l}{b} \arrow[r,"\proj1"] \arrow[d,"!"'] & A \arrow[d,swap,"l"] \arrow[r,"f"] \ar[dr,phantom,"S"] & X \arrow[d,"r"] \\
\unit \arrow[r,swap,"b"] & B \arrow[r,swap,"g"] & Y
\end{tikzcd}
\end{equation*}
and write $S_b : r \circ (f \circ \proj1) = (g\circ b) \circ \mathord !$ for the induced commutative square.
Then the map
\begin{equation*}
\fillers{S} \to \prd{b:B}\fillers{S_b},
\end{equation*}
defined by precomposition with $b$, is an equivalence.
The domain and codomain of the map in question are by definition
\[
\sm{j:B\to X}{H_f :j\circ l=f}{H_g:r\circ j=g} r\circ H_f = \ct{S}{(H_g\circ l)}
\]
and
\begin{equation*}
\prd{b:B}\sm{j_b:\unit\to X}{H_{f,b} : j_b\circ \mathord{!}=f\circ \proj1}{H_{g,b}: r\circ j_b=g\circ b} r\circ H_{f,b} = \ct{S_b}{(H_{g,b}\circ \mathord!)}.
\end{equation*}
The latter is equivalent (using function extensionality and contractibility of $\unit$) to
\begin{multline*}
\prd{b:B}\sm{j_b:X}{H_{f,b} : \prd{u:\hfib l b} j_b=f(\proj1(u))}{H_{g,b}: r(j_b)=g(b)}\\
\prd{u:\hfib l b} r(H_{f,b}(u)) = \ct{S_b}{H_{g,b}}.
\end{multline*}
and thereby to
\begin{multline*}
\sm{j:B\to X}{H_{f} : \prd{b:B}\prd{u:\hfib l b} j(b)=f(\proj1(u))}{H_{g}: \prd{b:B} r(j(b))=g(b)}\\
\prd{b:B}\prd{u:\hfib l b} r(H_{f}(b,u)) = \ct{S_b}{H_{g}(b)}.
\end{multline*}
Modulo these equivalences, the desired map acts as the identity on $j:B\to X$.
Moreover, its action on the remaining parts is given by the equivalences
\begin{align*}
(j\circ l = f)
&\eqvsym \prd{a:A} j(l(a)) = f(a)\\
&\eqvsym \prd{a:A}{b:B}{p:l(a)=b} j(l(a)) = f(a)\\
&\eqvsym \prd{b:B}{a:A}{p:l(a)=b} j(b) = f(a)\\
&\eqvsym \prd{b:B} \prd{u:\hfib l b} j(b) = f(\proj1(u))
\end{align*}
and
\begin{equation*}
(r\circ j = g)
\eqvsym \prd{b:B} r(j(b)) = g(b)
\end{equation*}
and
\begin{align*}
(r\circ H_f = \ct{S}{(H_g\circ l)})
&\eqvsym \prd{a:A} r(H_f(a)) = \ct{S(a)}{H_g(l(a))}\\
&\eqvsym \prd{a:A}{b:B}{p:l(a)=b} r(H_f(a)) = \ct{S(a)}{H_g(l(a))}\\
&\eqvsym \prd{b:B}{a:A}{p:l(a)=b} r(H_f(a)) = \ct{S(a)}{H_g(b)}\\
&\eqvsym \prd{b:B}{u:\hfib l b} r(H_f(b,u)) = \ct{S_b}{H_g(b)}
\end{align*}
hence the whole thing is an equivalence.
In any orthogonal factorization system
$(\mathcal{L},\mathcal{R})$, if
$l:A\to B$ is a map such that $\hfib{l}{b} \to \unit$ is in $\cL$ for each $b:B$, then also $l$ itself is in $\cL$.
By <ref>, $l$ is in $\cL$ iff $\fillers S$ is contractible for each $r\in\cR$ and $S$ as in <ref>, while similarly $\hfib{l}{b} \to \unit$ is in $\cL$ iff $\fillers {S_b}$ is contractible.
But the product of contractible types is contractible.
In any stable orthogonal factorization system, if $l\perp r$ for all maps $l\in\cL$ of the form $l:A\to \unit$, then $r\in\cR$.
In particular, for any modality $\modal$, if $X\to (A\to X)$ is an equivalence for all $\modal$-connected types $A$, then $X$ is modal.
By <ref>, for any $l\in\cL$ and commutative square $S$ from $l$ to $r$, we have $\fillers{S} \eqvsym \prd{b:B}\fillers{S_b}$.
Since $(\cL,\cR)$ is stable, each map $\mathord{!}_b:\hfib{l}{b}\to \unit$ is also in $\cL$, so that $\mathord{!}_b\perp r$ by assumption.
Thus $\fillers{S_b}$ is contractible for all $b$, hence so is $\fillers{S}$.
For the second statement, the type $A\to X$ is equivalent to the type of commutative squares
\[
\begin{tikzcd}
A \ar[r,"f"] \ar[d] & X \ar[d] \\ \unit\ar[r] & \unit
\end{tikzcd}
\]
and the type of fillers for such a square is equivalent to the type of $x:X$ such that $f(a) = x$ for all $a:A$, i.e. the fiber of $X\to (A\to X)$ over $f$.
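In symbols, ignoring the contractible data involving $\unit$ (a small computation):
\begin{equation*}
\fillers{S}\;\eqvsym\;\sm{x:X}\prd{a:A}(x=f(a))\;\eqvsym\;\hfib{(\lam{x}{a}x)}{f}.
\end{equation*}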
Thus, the assumption ensures that all such types of fillers are contractible, i.e. $l\perp r$ for all $\modal$-connected maps of the form $l:A\to \unit$, so the first statement applies.
Let $(\mathcal{L},\mathcal{R})$ be a stable orthogonal factorization system.
Then a map $r:X\to Y$ is in $\mathcal{R}$ if and only if $\hfib{r}{y}$
is $(\mathcal{L},\mathcal{R})$-modal for each $y:Y$.
The class of right maps is stable under pullbacks by <ref>,
so it suffices to show that any map with modal fibers is in $\mathcal{R}$.
Let $r:X\to Y$ be a map with modal fibers. Our goal is to show that
$r$ is in $\mathcal{R}$. By <ref> it suffices to show that
$r$ has the right lifting property with respect to the left maps.
Consider a diagram of the form
\begin{equation*}
\begin{tikzcd}
A \arrow[d,swap,"l"] \arrow[r,"f"] & X \arrow[d,"r"] \\
B \arrow[r,swap,"g"] & Y
\end{tikzcd}
\end{equation*}
in which $l$ is a map in $\mathcal{L}$.
We wish to show that the type of diagonal fillers is contractible.
By <ref>, the type of diagonal fillers of the above diagram
is equivalent to the dependent product of the types of fillers of
\begin{equation*}
\begin{tikzcd}
\hfib{l}{b} \arrow[d] \arrow[r,"f\circ i_b"] & X \arrow[d,"r"] \\
\unit \arrow[r,swap,"g(b)"] & Y
\end{tikzcd}
\end{equation*}
indexed by $b:B$. Thus, it suffices to show that the type of diagonal fillers of this
square is contractible for each $b:B$. Since any filler factors uniquely through
the pullback $\unit\times_Y X$, which is $\hfib{r}{g(b)}$, the type of diagonal
fillers of the above square is equivalent to the type of diagonal fillers of the square
\begin{equation*}
\begin{tikzcd}
\hfib{l}{b} \arrow[d] \arrow[r,densely dotted] & \hfib{r}{g(b)} \arrow[d] \\
\unit \arrow[r,equals] & \unit
\end{tikzcd}
\end{equation*}
where the dotted map is the unique map into the pullback $\hfib{r}{g(b)}$. In
this square, the left map is in $\mathcal{L}$ because $\mathcal{L}$ is assumed
to be stable under pullbacks, and the right map is in $\mathcal{R}$ by assumption,
so the type of diagonal fillers is contractible.
Any two stable orthogonal factorization systems with the same modal types are identical.
By <ref> it follows that any orthogonal factorization system
is completely determined by the class of right maps.
By <ref> it follows that in a stable orthogonal factorization
system, the class of right maps is completely determined by the modal types.
Any stable orthogonal factorization system determines a higher modality with
the same modal types.
For every type $X$ we have the $(\cL,\cR)$-factorization $X\to\modal X\to\unit$ of the
unique map $X\to\unit$. This determines the modal unit
$\modalunit:X\to\modal X$ which is in $\mathcal{L}$, and the
unique map $\modal X\to\unit$ is in $\mathcal{R}$, i.e. $\modal X$ is $(\cL,\cR)$-modal.
To show the induction principle, let $P:\modal X\to\UU$ and $f:\prd{x:X} \modal(P(\modalunit(x)))$.
Writing also $f$ for the induced map $\lam{x}\pairr{\modalunit(x),f(x)}$, we have a (judgmentally) commutative square
\begin{equation*}
\begin{tikzcd}
X \arrow[r,"f"] \arrow[d,swap,"\modalunit"] & \sm{z:\modal X}\modal(P(z)) \arrow[d,"\proj1"] \\
\modal X \arrow[r,equals] & \modal X.
\end{tikzcd}
\end{equation*}
Note that by <ref>,
the projection $\proj1:(\sm{z:\modal X}\modal(P(z)))\to\modal X$ is in $\mathcal{R}$
because its fibers are modal. Also, the modal unit
$\modalunit:X\to\modal X$ is in $\mathcal{L}$.
Thus, by <ref>, the type of fillers of this square is contractible.
Such a filler consists of a function $s$ and homotopies filling the two triangles
\begin{equation*}
\begin{tikzcd}
X \arrow[r,"f"] \arrow[d,swap,"\modalunit"] & \sm{z:\modal X}\modal(P(z)) \arrow[d,"\proj1"] \\
\modal X \arrow[r,equals] \arrow[ur,densely dotted] & \modal X
\end{tikzcd}
\end{equation*}
whose composite is reflexivity, i.e. the type
\begin{multline*}
\sm{s:\modal X \to \sm{z:\modal X}\modal(P(z))}{H:\prd{z:\modal X} \proj1(s(z))=z}{K:\prd{x:X} s(\modalunit(x))=f(x)}\\
\prd{x:X} \proj1(K(x)) = H(\modalunit(x)).
\end{multline*}
If we decompose $s$, $f$, and $K$ by their components, we get
\begin{multline*}
\sm{s_1:\modal X \to \modal X}{s_2:\prd{z:\modal X} \modal(P(s_1(z)))}{H:\prd{z:\modal X} s_1(z)=z}\\
\sm{K_1:\prd{x:X} s_1(\modalunit(x))=f_1(x)}{K_2 :\prd{x:X} s_2(\modalunit(x)) =_{K_1(x)} f_2(x)}\\
\prd{x:X} K_1(x) = H(\modalunit(x)).
\end{multline*}
Now we can contract $s_1$ and $H$, and also $K_1$ with the final unnamed homotopy, to get
\begin{equation*}
\sm{s_2:\prd{z:\modal X} \modal(P(z))} \prd{x:X} s_2(\modalunit(x)) = f_2(x).
\end{equation*}
But this is just the type of extensions of $f_2$ along $\modalunit$, i.e. the fiber over $f_2$ of precomposition by $\modalunit$.
Thus, precomposition by $\modalunit$ is an equivalence, so in fact we have a uniquely eliminating modality.
By <ref>, the identity types of $\modal X$ are modal, so we have a higher modality as well.
§ LOCALIZATION
Localization is the process of inverting a specified class of maps.
In category theory, the localization of a category $\mathcal{C}$ at a family of maps $F$ is obtained by adding formal inverses to those maps freely, obtaining a category $\mathcal{C}[F^{-1}]$ with a universal functor $\mathcal{C}\to \mathcal{C}[F^{-1}]$ sending each map in $F$ to an isomorphism.
In good situations, this universal functor is equivalent to the reflection onto a reflective subcategory of $\mathcal{C}$, which consists of the $F$-local objects: those that “see each map in $F$ as an isomorphism”.
We will not be concerned here with the universal property of the localized category; instead we are interested in constructing reflective subcategories of local objects.
We can do this with a higher inductive type, giving a general construction of reflective subuniverses and modalities.
§.§ Local types and null types
Consider a family $F:\prd{a:A}B(a)\to C(a)$ of maps. We say that a type $X$
is $F$-local if the function
\begin{equation*}
\lam{g}g\circ F_a : (C(a)\to X)\to (B(a)\to X)
\end{equation*}
is an equivalence for each $a:A$.
In other words, $X$ is $F$-local if every $f:B(a)\to X$ extends uniquely to a map $\bar{f}:C(a)\to X$, along the map $F_a:B(a)\to C(a)$, as indicated in the diagram
\begin{equation*}
\begin{tikzcd}
B(a) \arrow[r,"f"] \arrow[d,swap,"F_a"] & X. \\
C(a) \arrow[ur,densely dotted,swap,"\bar{f}"]
\end{tikzcd}
\end{equation*}
Thus, one might say that a type $X$ is $F$-local if it is (right) orthogonal to the maps $F_a$, or that it “thinks each map $F_a$ is an equivalence”.
In <ref> we will see that the $F$-local types determine a reflective subuniverse.
In most of our examples $C$ will be the constant family $\unit$, giving the following specialization.
Let $B:A\to \UU$ be a type family. A type $X$ is said to be $B$-null if the map
\begin{equation*}
\lam{x}\lam{b}x : X \to (B(a) \to X)
\end{equation*}
is an equivalence for each $a:A$.
In other words, $X$ is $B$-null if and only if any map $f:B(a)\to X$ has a unique extension to a map $\unit\to X$, as indicated in the diagram
\begin{equation*}
\begin{tikzcd}
B(a) \arrow[r,"f"] \arrow[d] & X. \\
\unit \arrow[ur,densely dotted]
\end{tikzcd}
\end{equation*}
Thus, a type $X$ is $B$-null if it is (right) orthogonal to the types $B(a)$, or, equivalently, if it “thinks each type $B(a)$ is contractible”.
In <ref> we will see that the $B$-null types determine a modality.
* The unit type is local for any family of maps.
* Since $\emptyt\to X$ is contractible for any type $X$, a type is $\emptyt$-null if and only if it is contractible.
* Any type is $\unit$-null.
* A type $X$ is $\bool$-null if and only if $X$ is a mere proposition. To see this, recall that a mere proposition is a type for which any two points can be identified. A map of type $\bool\to X$ is equivalently specified by two points in $X$. If $X$ is assumed to be $\bool$-null, and $x,y:X$ are points in $X$, then it follows that there is a (unique) point $z:X$ such that $x=z$ and $y=z$. In particular it follows that $x=y$, so we conclude that $X$ is a mere proposition.
* More generally, a type is $\Sn^{n+1}$-null if and only if it is $n$-truncated.
This follows from <cit.>.
* If $Q$ is a mere proposition, then the $Q$-null types are exactly the $\open Q$-modal types (see <ref>).
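To fix ideas, the notion of locality can be transcribed as a set-level sketch in Lean 4 (ordinary Lean rather than the univalent setting of this paper, so “precomposition with each $F_a$ is an equivalence” is weakened to “unique extension along each $F_a$”; all names are illustrative):
\begin{verbatim}
universe u v w

/-- Set-level sketch of locality at a family `F a : B a → C a`:
    every `f : B a → X` extends uniquely along `F a`. -/
structure IsLocal {A : Type u} {B C : A → Type v}
    (F : (a : A) → B a → C a) (X : Type w) where
  ext    : (a : A) → (B a → X) → (C a → X)
  isExt  : ∀ (a : A) (f : B a → X) (b : B a), ext a f (F a b) = f b
  unique : ∀ (a : A) (g g' : C a → X),
    (∀ b, g (F a b) = g' (F a b)) → g = g'

/-- `B`-nullness is the special case where each `C a` is the unit type. -/
def IsNull {A : Type u} (B : A → Type v) (X : Type w) :=
  IsLocal (C := fun _ : A => PUnit) (fun a (_ : B a) => PUnit.unit) X
\end{verbatim}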
We choose to consider the notion of being local at a family of maps, rather than
at a class of maps (i.e. a subtype of $\sm{X,Y:\UU}X\to Y$). A family of maps (indexed by a type $A$ in $\UU$) is intrinsically small with respect to $\UU$, whereas a class
is not. By localizing at a small family of maps, we obtain a small type constructor.
Nevertheless, one can show that for any family $F$ of maps, a type is $F$-local
if and only if it is local at the class $\im(F)$, when $\im(F)$ is regarded
as a subtype of $\sm{X,Y:\UU}X\to Y$. A similar relation holds for
set-quotients in [33].
A more nontrivial example is the following.
Let $A$ be a type, and let $\susp(A)$ be its suspension.
Then a type $X$ is $\susp(A)$-local if and only if its identity types are $A$-local.
The universal property of $\susp(A)$ is that
\[ (\susp(A) \to X) \simeq \sum_{x,y:X} (A\to (x=y)). \]
Since $X\simeq \sum_{x,y:X} (x=y)$, to say that $X$ is $\susp(A)$-local is to say that the map
\[ \Big(\sm{x,y:X} (x=y)\Big) \to \Big(\sm{x,y:X} (A\to (x=y))\Big) \]
is an equivalence.
But this is the total space of the fiberwise map
\[ (x=y) \to (A\to (x=y)) \]
for all $x,y:X$, hence it is an equivalence if and only if they all are, i.e. if and only if all identity types of $X$ are $A$-local.
Since the $n$-sphere $\Sn^n$ is equivalent to the $n$-fold suspension of $\bool$, it follows that:
A type is $\Sn^{n+1}$-local if and only if it is an $n$-type.
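To spell out the induction behind this corollary (a worked restatement of the preceding example together with the characterization of $\bool$-null types above):
\begin{align*}
X \text{ is } \Sn^{n+1}\text{-local}
  &\iff \text{each identity type of } X \text{ is } \Sn^{n}\text{-local}\\
  &\;\;\vdots\\
  &\iff \text{each } (n{+}1)\text{-fold iterated identity type of } X \text{ is } \bool\text{-null}\\
  &\iff \text{each } (n{+}1)\text{-fold iterated identity type of } X \text{ is a mere proposition}\\
  &\iff X \text{ is an } n\text{-type}.
\end{align*}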
Suppose $F:\prd{a:A}B(a)\to C(a)$ is a family of maps, and $X$ is a type for
which the map
\begin{equation*}
\lam{g}g\circ F_a : (C(a)\to X)\to (B(a)\to X)
\end{equation*}
is an embedding. Then the identity types of $X$ are $F$-local.
We will see later that the converse also holds.
By the assumption that the map $\blank\circ F_a: (C(a)\to X)\to(B(a)\to X)$ is
an embedding,
its action on paths
\begin{equation*}
\mapfunc{\blank\circ F_a}(g,g'):{(g=g')}\to{(g\circ F_a=g'\circ F_a)}
\end{equation*}
is an equivalence for any $g,g':C(a)\to X$.
In particular, we have this equivalence for $g\defeq \lam{c}x$ and
$g'\defeq\lam{c}y$, for any $x,y:X$.
We have a commuting square
\begin{equation*}
\begin{tikzcd}
(\lam{c}x=\lam{c}y) \arrow[d,swap,"{\lam{p}\mapfunc{\blank\circ F_a}(p)}"] \arrow[r,"\eqvsym"] & (x=y)^{C(a)} \arrow[d,"\blank\circ F_a"] \\
(\lam{b}x=\lam{b}y) \arrow[r,swap,"\eqvsym"] & (x=y)^{B(a)}
\end{tikzcd}
\end{equation*}
of which the top and bottom maps are equivalences by the function extensionality
principle. It follows that the right map is a composite of equivalences.
Hence we see that the type $x=y$ is $F$-local.
If $X$ is an $F$-local type, then so are its identity types.
§.§ Localizing at a family of maps
In this subsection we introduce the localization operation and show that it determines a reflective subuniverse, which is a modality in the case of nullification.
We define a modal operator $\localization{F}:\UU\to\UU$ called localization at $F$, via a construction involving higher inductive types.
The idea is that one of the point constructors will be the modal unit $\modalunit[X]$ and the other constructors build in exactly the data making each $\lam{g}g\circ F_a$ an equivalence.
For this to be homotopically well-behaved, we have to choose a “good” notion of equivalence such as those in <cit.>.
Any such choice is possible, but some are easier than others.
Of those in [40], “bi-invertibility” is easiest because it allows us to avoid 2-path constructors.
However, the following notion of equivalence, which doesn't appear in [40], is easier still.
As we will see, this is because although it does include 2-path constructors, the four data it comprises can be broken into two pairs that can be treated “uniformly” despite occurring at “different dimensions”; thus we only need to deal explicitly with one point constructor and one path constructor (and no 2-path constructors).
For $f:A\to B$ we write
\begin{equation*}
\mathsf{rinv}(f) \defeq \sm{g:B\to A} (f\circ g = \idfunc[B])
\end{equation*}
and for $x,y:A$ we write $\apfunc{f}^{x,y} : (x=y) \to (fx=fy)$ for the action of $f$ on identities.
We say that $f$ is path-split if we have an inhabitant of the following type:
\[ \mathsf{pathsplit}(f) \defeq \mathsf{rinv}(f) \times \prd{x,y:A} \mathsf{rinv}(\apfunc{f}^{x,y}). \]
For any $f$ we have $\eqv{\mathsf{pathsplit}(f)}{\isequiv(f)}$.
If $f$ is path-split, to show that it is an equivalence it suffices to show that its right inverse $g$ is also a left inverse, i.e. that $gfx=x$ for all $x:A$.
But $fgfx = fx$ since $f\circ g = \idfunc[B]$, and $\apfunc{f} : (gfx=x) \to (fgfx=fx)$ has a right inverse, so $gfx=x$.
This gives a map $\mathsf{pathsplit}(f) \to \isequiv(f)$; to show that it is an equivalence, we may assume that its codomain is inhabited.
But if $f$ is an equivalence, then so is $\apfunc{f}^{x,y}$, and hence $\mathsf{rinv}(f)$ and $\mathsf{rinv}(\apfunc{f}^{x,y})$ are both contractible.
So in this case $\mathsf{pathsplit}(f)$ and $\isequiv(f)$ are both contractible, hence equivalent.
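The two halves of this proof transcribe directly into the following Lean 4 sketch, which records the path-split data (with `congrArg f` playing the role of $\apfunc{f}$; again a set-level illustration with illustrative names, not a univalent development) together with the derivation that the right inverse is also a left inverse:
\begin{verbatim}
universe u v

/-- Path-split data: a right inverse for `f` and, for all `x y`,
    a right inverse for the action of `f` on identities. -/
structure PathSplit {A : Type u} {B : Type v} (f : A → B) where
  rinv     : B → A
  isRinv   : ∀ b, f (rinv b) = b
  rinvAp   : ∀ x y : A, f x = f y → x = y
  isRinvAp : ∀ (x y : A) (p : f x = f y), congrArg f (rinvAp x y p) = p

/-- The argument in the proof above: since `f (rinv (f x)) = f x` and
    the action of `f` on identities has a right inverse, `rinv` is
    also a left inverse. -/
def PathSplit.leftInv {A : Type u} {B : Type v} {f : A → B}
    (h : PathSplit f) (x : A) : h.rinv (f x) = x :=
  h.rinvAp _ _ (h.isRinv (f x))
\end{verbatim}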
Now let $F:\prd{a:A} B(a) \to C(a)$ be a family of functions and $X:\UU$.
As a “first approximation” to the localization $\localization{F}(X)$, let $\localhit{F}{X}$ be the higher inductive type with the following constructors:
* $\alpha_X : X \to \localhit{F}{X}$
* $\mathsf{ext} : \prd*{a:A} (B(a) \to \localhit{F}{X}) \to (C(a) \to \localhit{F}{X})$
* $\mathsf{isext} : \prd*{a:A}{f:B(a)\to\localhit{F}{X}}{b:B(a)}\id{\mathsf{ext}(f)(F_a(b))}{f(b)}$.
The induction principle of $\localhit{F}{X}$ is that for any type family $P:\localhit{F}{X}\to \UU'$, if there are terms
\begin{align*}
N & : \prd{x:X}P(\alpha_X(x))\\
R & : \prd*{a:A}{f:B(a)\to\localhit{F}{X}}(\prd{b:B(a)}P(f(b)))\to\prd{c:C(a)} P(\mathsf{ext}(f,c)) \\
S & : \prd*{a:A}{f:B(a)\to\localhit{F}{X}}{f':\prd{b:B(a)}P(f(b))}{b:B(a)}\dpath{P}{\mathsf{isext}(f,b)}{R(f')(F_a(b))}{f'(b)},
\end{align*}
then there is a section $s:\prd{x:\localhit{F}{X}}P(x)$ such that $s\circ \alpha_X= N$.
(The section $s$ also computes on $\mathsf{ext}$ and $\mathsf{isext}$, but we will not need those rules.)
Note that the family $P$ does not have to land in the same universe $\UU$ that contains our types $A,B,C,X$; this will be important in <ref>.
This approximation $\localhit{F}{X}$ behaves like we expect $\localization{F}(X)$ to behave when mapping into local types:
If $Y$ is $F$-local (and $X$ is arbitrary), then precomposition with $\alpha_X$
\[ (-\circ \alpha_X) : (\localhit{F}{X} \to Y) \to (X\to Y) \]
is an equivalence.
We will show that this map is path-split.
First we have to construct a right inverse to it, i.e. given $g:X\to Y$ we must extend it to $\localhit{F}{X}$.
We will apply the induction principle using the constant family $Y$ over $\localhit{F}{X}$ and $N\defeq g$, so that the computation rule shows that what we get is an extension of $g$.
To construct the cases of $R$ and $S$, let $f:B(a)\to \localhit{F}{X}$, and let $f':B(a)\to Y$.
Our goal is to construct $R(f,f'):C(a)\to Y$ together with a witness $S(f,f')$ that the triangle
\begin{equation*}
\begin{tikzcd}[column sep=large]
B(a) \arrow[dr,"{f'}"] \arrow[d,swap,"F_a"] \\
C(a) \arrow[r,swap,"{R(f,f')}"] & Y
\end{tikzcd}
\end{equation*}
commutes. But $Y$ is $F$-local, so the map
\[ (-\circ F_a) : (C(a) \to Y) \to (B(a)\to Y) \]
is an equivalence, and hence in particular has a right inverse; applying this right inverse to $f'$ gives $R$ and $S$.
Second, we must suppose given $g,h:\localhit{F}{X} \to Y$ and construct a right inverse to
\[ \apfunc{(-\circ \alpha_X)} : (g=h) \to (g\circ \alpha_X = h\circ \alpha_X). \]
Thus, suppose we have $K : \prd{x:X} g(\alpha_X(x)) = h(\alpha_X(x))$; we must extend $K$ to a homotopy $\tilde{K} : \prd{z:\localhit{F}{X}} g(z)=h(z)$ such that $\tilde{K}(\alpha_X(x)) = K(x)$.
We will apply the induction principle using the family $P:\localhit{F}{X} \to \UU$ defined by $P(z) \defeq (g(z)=h(z))$, and $N\defeq K$.
To construct the cases of $R$ and $S$, let $f:B(a)\to \localhit{F}{X}$ and $f':\prd{b:B(a)} gfb = hfb$.
Our goal is to construct $R(f,f'):\prd{c:C(a)} g(\mathsf{ext}(f,c))=h(\mathsf{ext}(f,c))$ together with a witness $S(f,f')$ that for any $b:B(a)$ we have
\begin{equation}
R(f,f')(F_a(b)) = \ct{\ap{g}{\mathsf{isext}(f,b)}}{\ct{f'(b)}{\ap{h}{\mathsf{isext}(f,b)}^{-1}}}.\label{eq:locpsRS}
\end{equation}
However, once again, since $Y$ is $F$-local, the map
\[ (-\circ F_a) : (C(a) \to Y) \to (B(a)\to Y) \]
is an equivalence, and hence in particular
\begin{equation}
\apfunc{(-\circ F_a)} :
(g\circ \mathsf{ext}(f) = h\circ \mathsf{ext}(f)) \to (g\circ \mathsf{ext}(f) \circ F_a = h\circ \mathsf{ext}(f) \circ F_a)\label{eq:locpsap}
\end{equation}
has a right inverse.
But the right-hand side of (<ref>) inhabits the codomain of (<ref>), so applying this right inverse gives $R$ and $S$.
In general, $\localhit{F}{X}$ is not $F$-local: its constructors only ensure that each map
\[ (-\circ F_a) : (C(a) \to \localhit{F}{X}) \to (B(a) \to \localhit{F}{X}) \]
has a right inverse, not that it is an equivalence.
(In fact, $\localhit{F}{X}$ is the “free algebraically $F$-injective type on $X$”, cf. [7].)
However, it does happen in many common cases that $\localhit{F}{X}$ is already $F$-local (and hence the $F$-localization of $X$).
Specifically, this happens whenever each $(-\circ F_a)$ already has a left inverse, which happens whenever each $F_a : B(a) \to C(a)$ has a right inverse.
For instance, if $C(a)\defeq\unit$ for all $a$ (so that we are talking about $B$-nullification), then this happens whenever all the types $B(a)$ are inhabited (i.e. we have $\prd{a:A}B(a)$); cf. <cit.>.
In particular, this occurs for $\Sn^{n+1}$-nullification for $n\ge -1$, which as we saw in <ref> coincides with $n$-truncation.
In this case $\localhit{F}{X}$ essentially reduces to the “hub and spoke” construction of truncations from <cit.>.
A concrete example where $\localhit{F}{X}$ is not yet $F$-local is $\emptyset$-nullification, where $\localhit{F}{X} = X+\unit$, but only contractible types are $\emptyset$-null.
Note that $\emptyset = \Sn^{-1}$, so this is equivalently $(-2)$-truncation.
To modify $\localhit{F}{X}$ to become $F$-local using bi-invertibility or half-adjoint equivalences, we would need to add two more constructors to $\localhit{F}{X}$ corresponding to the additional two pieces of data in those definitions of equivalence, and then add two more cases to the proof of <ref> to deal with those constructors.
Moreover, these additional cases are rather more difficult than the ones we gave, since they involve homotopies “on the other side”.
Fortunately, with path-splitness, we can instead use a simple trick.
Given any map $f:B\to C$, let $\Delta_f : B\to B\times_C B$ be its diagonal and $\nabla_f : C +_B C \to C$ its codiagonal.
For any $f:B\to C$ and any $X$, we have a commuting triangle
\begin{equation*}
\begin{tikzcd}[column sep=-2em]
\phantom{(C\to X) \times_{(B\to X)} (C\to X)} & (C\to X) \arrow[dl,swap,"(-\circ \nabla_f)"] \arrow[dr,"\Delta_{(-\circ f)}"] \\
(C +_B C \to X) \arrow[rr,"\sim"] & & (C\to X) \times_{(B\to X)} (C\to X)
\end{tikzcd}
\end{equation*}
in which the bottom map is an equivalence.
By the universal property of the pushout.
For any $f:B\to C$, we have
\[ \mathsf{pathsplit}(f) \eqvsym \mathsf{rinv}(f) \times \mathsf{rinv}(\Delta_f). \]
Decomposing $B\times_C B$ and its identity types into $\Sigma$-types, we have
\begin{align*}
\mathsf{rinv}(\Delta_f)
&\eqvsym \prd{x,y:B}{p:fx=fy}\sm{z:B}{q:x=z}{r:z=y} \ct{\apfunc{f}^{x,z}(q)}{\apfunc{f}^{z,y}(r)} = p\\
&\eqvsym \prd{x,y:B}{p:fx=fy}\sm{r:x=y} \apfunc{f}^{x,y}(r) = p\\
&\eqvsym \prd{x,y:B} \mathsf{rinv}(\apfunc{f}^{x,y}).\qedhere
\end{align*}
For $f:B\to C$, a type $X$ is $f$-local if and only if both maps
\begin{align*}
(-\circ f) &: (C\to X) \to (B\to X) \\
(-\circ \nabla_f) &: (C\to X) \to (C +_B C \to X)
\end{align*}
have right inverses, and if and only if both of these maps are equivalences.
By <ref>, $X$ is $f$-local if and only if $(-\circ f)$ and $\Delta_{(-\circ f)}$ have right inverses, but by <ref> the latter is equivalent to $(-\circ \nabla_f)$.
The second statement follows since the diagonal of an equivalence is an equivalence.
<ref> implies that for $F$-locality it suffices for precomposition with each $F_a$ and $\nabla_{F_a}$ to have right inverses.
But $\localhit{F}{X}$ is the universal way to make precomposition with each $F_a$ have right inverses, so to localize we just need to add all the morphisms $\nabla_{F_a}$ to $F$.
Specifically, for any $F:\prd{a:A} B(a) \to C(a)$, define $\hat B,\hat C : A+A \to \UU$ and a family $\hat F: \prd{a:A+A} \hat B(a) \to \hat C(a)$ by
\begin{align*}
\hat B(\inl(a)) &\defeq B(a) & \hat C(\inl(a)) &\defeq C(a) & \hat F(\inl(a)) &\defeq F_a\\
\hat B(\inr(a)) &\defeq C(a) +_{B(a)} C(a) & \hat C(\inr(a)) &\defeq C(a) & \hat F(\inr(a)) &\defeq \nabla_{F_a}.
\end{align*}
For any $X:\UU$, the localization of $X$ at $F$ is $\localization{F}(X) \defeq \localhit{\hat F}{X}$, and $\modalunit[X] : X\to \localization{F}(X)$ is $\alpha^{\hat F}_X$.
As noted in <ref>, a simple example where $\localhit{F}{X}$ is not yet $F$-local is $\emptyset$-nullification, where $F$ is the single map $\emptyset\to\unit$.
In this case $\hat F$ consists of $\emptyset\to\unit$ and the fold map $\nabla : \unit+\unit \to \unit$.
The constructors of $\localhit{\hat F}{X}$ corresponding to the former give it a point, and those corresponding to the latter make it a mere proposition (in fact they are the constructors of $(-1)$-truncation, i.e. $\Sn^{0}$-nullification).
Thus, $\localhit{\hat F}{X}$ is contractible, i.e. $\emptyset$-local.
For any $F:\prd{a:A} B(a) \to C(a)$, the type $\localization{F}(X)$ is $F$-local.
The constructors of $\localization{F}(X)$ as $\localhit{\hat F}{X}$ say that the precomposition maps
\[ (-\circ \hat F_a) : (\hat C(a) \to \localhit{\hat F}{X}) \to (\hat B(a) \to \localhit{\hat F}{X}) \]
have right inverses for all $a:A+A$.
But by definition of $\hat F$, these maps consist of precomposition with each $F_a$ and $\nabla_{F_a}$.
Thus, by <ref>, $\localhit{\hat F}{X}$ is $F$-local.
If $Y$ is $F$-local (and $X$ is arbitrary), then precomposition with $\modalunit[X]$
\[ (-\circ \modalunit[X]) : (\localization{F}(X) \to Y) \to (X\to Y) \]
is an equivalence.
By the second clause of <ref>, any $F$-local type is also $\hat F$-local; so this follows from <ref>.
The subuniverse of $F$-local types in $\UU$ is a reflective subuniverse, with modal operator $\localization{F}$.
By <ref>.
§.§ Nullification and accessibility
A general localization is only a reflective subuniverse, but there is a convenient sufficient condition for it to be a modality: if each $C(a)=\unit$.
A localization modality of this sort is called nullification.
If $F:\prd{a:A} B(a) \to C(a)$ is such that each $C(a)=\unit$, then localization at $F$ is a modality, called nullification at $B$.
It suffices to show that for any $B:A\to\UU$, the $B$-null types are $\Sigma$-closed.
Thus, let $X:\UU$ and $Y:X\to \UU$ be such that $X$ and each $Y(x)$ are $B$-null.
Then for any $a:A$ we have
\begin{align*}
(B(a)\to \sm{x:X} Y(x))
&\eqvsym \sm{g:B(a)\to X} \prd{b:B(a)} Y(g(b)) \\
&\eqvsym \sm{x:X} B(a) \to Y(x) \\
&\eqvsym \sm{x:X} Y(x)
\end{align*}
with the inverse equivalence being given by constant maps.
Thus, $\sm{x:X} Y(x)$ is $B$-null.
Of course, it might happen that $\localization{F}$ is a modality even if $F$ doesn't satisfy the condition of <ref>.
For instance, if $B:A\to \UU$ has a section $s:\prd{a:A} B(a)$, then localizing at the family $s' : \prd{a:A} \unit \to B(a)$ is equivalent to nullifying at $B$, since in a section-retraction pair the section is an equivalence if and only if the retraction is.
However, we can say the following.
If $F:\prd{a:A} B(a)\to C(a)$ is such that $\localization{F}$ is a modality, then there exists a family $E:D\to \UU$ such that $\localization{F}$ coincides with nullification at $E$.
Write $\modal\defeq\localization{F}$ and $\modalunit$ for its modal unit.
Define $D = \sm{a:A} (\modal (B(a)) + \modal(C(a)))$, and $E:D\to \UU$ by
\begin{align*}
E(a,\inl(b)) &\defeq \hfib{\modalunit[B(a)]}{b}\\
E(a,\inr(c)) &\defeq \hfib{\modalunit[C(a)]}{c}.
\end{align*}
Then since $\modalunit$ is $\modal$-connected, each $E(d)$ is $\modal$-connected, and hence every $F$-local type is $E$-null.
On the other hand, suppose $X$ is an $E$-null type.
Each $\modalunit[B(a)]$ and $\modalunit[C(a)]$ is $\localization{E}$-connected, since their fibers are $\localization{E}$-connected (by definition); thus $X$ is also $\modalunit[B(a)]$-local and $\modalunit[C(a)]$-local.
But we have the following commutative square:
\[
\begin{tikzcd}[column sep=large]
B(a) \ar[r,"{\modalunit[B(a)]}"] \ar[d,"F_a"'] & \modal(B(a)) \ar[d,"{\modal(F_a)}"]\\
C(a) \ar[r,"{\modalunit[C(a)]}"'] & \modal(C(a))
\end{tikzcd}
\]
and ${\modal(F_a)}$ is an equivalence; thus $X$ is also $F_a$-local.
So the $F$-local types coincide with the $E$-null types.
This shows that the following pair of definitions are consistent.
A reflective subuniverse on $\UU$ is said to be accessible if it is the localization at a family of maps in $\UU$, indexed by a type in $\UU$.
Similarly, a modality $\modal$ on $\UU$ is said to be accessible if it is the nullification at a family of types in $\UU$, indexed by a type in $\UU$.
Explicitly, a presentation of a reflective subuniverse $\modal$ of $\UU$ consists of a family of maps $F : \prd{a:A} B(a) \to C(a)$, where $A:\UU$ and $B,C:A\to\UU$, such that $\modal = \localization{F}$.
Similarly, a presentation of a modality $\modal$ consists of a family of types $B: A\to\UU$, where $A:\UU$, such that $\modal = \localization{\lam{a} B(a)\to \unit}$.
Note that being accessible is structure; different families can present the same reflective subuniverse or modality.
As a trivial example, note that nullifying at the empty
type, and nullifying at the type family on $\bool$ defined by
$\bfalse\mapsto \emptyt$ and $\btrue\mapsto \unit$, both map all types to contractible types.
However, we are usually only interested in properties of presentations insofar as they determine properties of subuniverses.
For instance, by <ref>, a reflective subuniverse is a modality exactly when it has a presentation in which each $C(a)=\unit$.
Similarly, in <ref> we will define a modality to be “topological” if it has a presentation in which each $C(a)=\unit$ and each $B(a)$ is a mere proposition.
The trivial modality $\truncf{(-2)}$ is presented by $\emptyt$, while the propositional truncation modality $\truncf{(-1)}$ is presented by $\bool$. More generally, the
$n$-truncation modality $\truncf{n}$ is presented by the $(n+1)$-sphere $\Sn^{n+1}$.
For every mere proposition $P$, the open modality $\open P (X) \defeq (P\to X)$ from <ref> is
presented by the singleton type family $P$.
To see this, note that $\modalunit[X] : X \to (P\to X)$ is the same as the map in the definition of locality, so that $X$ is modal for the open modality on $P$ if and only if it is $P$-null.
(If $P$ is not a mere proposition, however, then $X\mapsto (P\to X)$ is not a modality, and in particular does not coincide with localization at $P$.)
The closed modality $\closed P$ from <ref> associated to a mere proposition $P$ is presented by the type family $\lam{x} \emptyt : P \to \UU$.
For by definition, $A$ is null for this family if and only if for any $p:P$ the map $A \to (\emptyt \to A)$ is an equivalence.
But $\emptyt \to A$ is contractible, so this says that $P\to\iscontr(A)$, which was the definition of $\closed P$-modal types from <ref>.
One of the main uses of accessibility is when passing between universes.
Our definitions of reflective subuniverses and modalities are relative to a particular universe $\UU$, but most examples are “uniform” or “polymorphic” and apply to types in all universes (or all sufficiently large universes) simultaneously.
Accessibility is one technical condition which ensures that this holds and that moreover these modal operators on different universes “fit together” in a convenient way.
For instance, we have:
If $\modal$ is an accessible reflective subuniverse on a universe $\UU$, and $\UU'$ is a larger universe containing $\UU$, then there is a reflective subuniverse $\modal'$ on $\UU'$ such that:
* If $\modal$ is a modality, so is $\modal'$.
* A type $X:\UU$ is $\modal'$-modal if and only if it is $\modal$-modal.
* For $X:\UU$, the induced map $\modal' X \to \modal X$ is an equivalence.
* A type $X:\UU'$ is $\modal'$-modal if and only if $(\blank\circ f) : (B\to X) \to (A\to X)$ is an equivalence for any map $f:A\to B$ in $\UU$ such that $\modal(f)$ is an equivalence.
* $\modal'$ depends only on $\modal$, not on a choice of presentation for it.
Since $\modal$ is accessible, it is generated by some family $F:\prd{a:A} B(a) \to C(a)$.
Define $\modal':\UU'\to\UU'$ to be the higher inductive localization at the same family $F$, which lives in $\UU'$ as well since $\UU'$ is larger than $\UU$.
If $\modal$ is a modality, we can take each $C(a)=\unit$ so that $\modal'$ is also a modality, giving <ref>.
The notion of $F$-locality for a type $X$ is independent of what universe $X$ lives in, giving <ref>.
Moreover, because the induction principle for a higher inductive localization allows us to eliminate into any type in any universe, <ref> applies no matter what universe the target lives in.
Thus, if $X:\UU$ then $\modal X$ and $\modal' X$ have the same universal property, hence are canonically equivalent, giving <ref>.
To prove <ref>, note first that certainly each $\modal (F_a)$ is an equivalence, so any type with the stated property is $F$-local.
Conversely, if $X$ is $F$-local, hence $\modal'$-modal, then $(B\to X) \to (A\to X)$ is certainly an equivalence for any map $f$ such that $\modal'(f)$ is an equivalence; but $\modal'$ and $\modal$ coincide on $\UU$.
Thus <ref> holds; and this implies <ref> since a reflective subuniverse is determined by its modal types.
We refer to the $\modal'$ constructed in <ref> as the canonical accessible extension of $\modal$ to $\UU'$.
Our characterizations of the truncation and open and closed modalities in <ref> made no reference to the ambient universe.
Thus, when these modalities are defined in the standard ways on $\UU$ and $\UU'$ respectively, their $\UU'$-version is the canonical accessible extension of their $\UU$-version.
By contrast, the double-negation modality $\neg\neg$ is defined in a polymorphic way on all universes, but in general there seems no reason for it to be accessible on any of them.
However, if propositional resizing holds, then it is the nullification at $\bool$ together with all propositions $P$ such that $\neg\neg P$ holds, and hence accessible.
Whether or not any inaccessible modalities remain after imposing propositional resizing may depend on large-cardinal principles.
It is shown in [10] that this is the case for the analogous question about reflective sub-$(\infty,1)$-categories of the $(\infty,1)$-category of $\infty$-groupoids.
Suppose that all types in $\UU$ are 0-types.
We have tacitly assumed that all universes are closed under all higher inductive types, so (assuming univalence) this is not actually possible, but to get a feeling for what else could in principle go wrong suppose we drop that assumption.
Then if $F$ is a family such that the higher inductive type $\localization{F}$ does not preserve 0-types, we might (depending on what we assume about closure under higher inductive types) still be able to define a modality on $\UU$ by $\modal X = \trunc0{\localization{F}X}$.
But if $\UU'$ is a larger universe containing non-0-types, then this $\modal$ would not eliminate into types in $\UU'$, and if we define $\modal'$ by localizing at $F$ in $\UU'$ then the canonical map $\modal' X \to \modal X$ would be the 0-truncation rather than an equivalence.
So <ref> is not as trivial as it may seem.
It is tempting to think that any reflective subuniverse $\modal$ on $\UU$ could be extended to an accessible one on $\UU'$ by localizing at the family of all functions in $\UU$ that are inverted by $\modal$ (or nullifying at the family of all $\modal$-connected types in $\UU$, in the case of modalities), which is a $\UU'$-small family though not a $\UU$-small one.
This does produce an accessible reflective subuniverse $\modal'$ of $\UU'$ such that the $\modal'$-modal types in $\UU$ coincide with the $\modal$-modal ones, but there seems no reason why the modal operators $\modal'$ and $\modal$ should agree on types in $\UU$.
Reflective subuniverses and modalities defined by localization have another convenient property: their eliminators have a strict judgmental computation rule (assuming that our higher inductive localization type has a judgmental computation rule on point-constructors, which is usually assumed).
This will be useful in <ref>.
§.§ Non-stable factorization systems
We have seen in <ref> that $\Sigma$-closed reflective subuniverses are equivalent to stable orthogonal factorization systems.
Without $\Sigma$-closedness and stability, this equivalence fails.
However, we can still say:
Any orthogonal factorization system has an underlying reflective subuniverse, consisting of those types $X$ such that $X\to\unit$ is in $\cR$.
If $Y$ is modal in this sense, then by applying orthogonality to squares of the form
\[
\begin{tikzcd}
A \ar[d,"f"'] \ar[r] & Y \ar[d] \\ B \ar[r] & \unit
\end{tikzcd}
\]
we see that if $f:A\to B$ lies in $\cL$, then precomposition
\[ (-\circ f) : (B\to Y) \to (A\to Y) \]
is an equivalence.
Thus, it suffices to show that for every $X$ there is an $\cL$-map $X\to \modal X$ where $\modal X\to \unit$ is in $\cR$; but this is just an $(\cL,\cR)$-factorization of the map $X\to\unit$.
Conversely, in classical category theory there are various ways of extending a reflective subcategory to a factorization system.
One canonical one is considered in [11], but this is harder to reproduce homotopy-theoretically.
(It is possible in what is there called the “simple” case, hence also the “semi-left-exact” case — which includes all modalities, as the case of “stable units” — but we will not investigate that construction here.)
Instead, if we have an accessible reflective subuniverse presented by localization at a family of maps, we can generalize the construction of localization to produce a factorization system (though in general the result will depend on the choice of presentation, not just on the reflective subuniverse we started with).
To avoid too much wrangling with witnesses of commutative squares, we will factorize dependent types rather than functions.
In this case, right orthogonality (<ref>) can be expressed in the following way.
Given $l:A\to B$ and $X:Y\to\UU$, and functions $g:B\to Y$ and $f:\prd{a:A} X(g(l(a)))$ forming a judgmentally commutative square
\begin{equation}
\begin{tikzcd}[column sep=large]
A \ar[d,"l"'] \ar[r,"{(g\circ l,f)}"] & \sm{y:Y}X(y) \ar[d,"\proj1"] \\ B \ar[r,"g"'] & Y
\end{tikzcd}\label{eq:dfill-sq}
\end{equation}
a dependent filler in this square consists of a morphism ${j:\prd{b:B} X(g(b))}$ and a homotopy $j\circ l \sim f$.
That is, the type of dependent fillers is
\begin{equation}
\dfill{l,X,g,f} \defeq \sm{j:\prd{b:B} X(g(b))} \prd{a:A} j(l(a)) = f(a).\label{eq:dep-fillers}
\end{equation}
Recall that for a map $f:B\to C$, we denote by $\Delta_f : B\to B\times_C B$ its diagonal and $\nabla_f : C +_B C \to C$ its codiagonal.
We have the following dependent generalization of <ref>:
Let $f:B\to C$ and $X:Y\to\UU$ and $g:C\to Y$; then we have a commuting triangle
\begin{equation*}
\begin{tikzcd}
& \prd{c:C} X(g(c)) \arrow[dl,swap,"(-\circ \nabla_f)"] \arrow[d,"\Delta_{(-\circ f)}"] \\
\prd{z:C +_B C} X(g'(z)) \arrow[r,"\sim"'] &
\Big(\prd{c:C} X(g(c))\Big) \times_{(\prd{b:B} X(g(f(b))))} \Big(\prd{c:C} X(g(c))\Big)
\end{tikzcd}\end{equation*}
where $g':C+_BC \to Y$ is induced by $g$ on both copies of $C$, and the bottom map is an equivalence.
Like the non-dependent case <ref>, this follows from the universal property of the pushout.
And similarly for <ref>:
For $l:B\to C$ and $X:Y\to\UU$, the following are equivalent.
* The map $\proj1 : (\sm{y:Y}X(y)) \to Y$ is right orthogonal to $l$.
* For every $g:C\to Y$ and $f:\prd{b:B} X(g(l(b)))$, the type $\dfill{l,X,g,f}$ of dependent fillers in (<ref>) is contractible.
* For every $g:C\to Y$, the precomposition map
\begin{equation}
(-\circ l) : \Big(\prd{c:C} X(g(c))\Big) \to \Big(\prd{b:B} X(g(l(b)))\Big)\label{eq:dfill-eqv}
\end{equation}
is an equivalence.
* For every $g:C\to Y$, the precomposition maps
\begin{align*}
(-\circ l) &: \Big(\prd{c:C} X(g(c))\Big) \to \Big(\prd{b:B} X(g(l(b)))\Big)\\
(-\circ \nabla_l) &: \Big(\prd{c:C} X(g(c))\Big) \to \Big(\prd{z:C+_BC} X(g'(z))\Big)
\end{align*}
have right inverses.
* For every $g:C\to Y$, the maps in <ref> are equivalences.
The equivalence of <ref> and <ref> is immediate, since $\dfill{l,X,g,f}$ is the fiber of (<ref>) over $f$.
And as in <ref>, <ref> is equivalent to <ref> and <ref> using <ref>.
Finally, regarding <ref>, if we have any commutative square
\[
\begin{tikzcd}
B \ar[d,"l"'] \ar[r,"f'"] \ar[dr,phantom,"S"] & \sm{y:Y}X(y) \ar[d,"\proj1"] \\ C \ar[r,"g"'] & Y
\end{tikzcd}
\]
witnessed by $S:\proj1 \circ f'=g\circ l$, we can define $f(b) \defeq \trans{S(b)}{\proj2(f'(b))}$ to get an equivalent and judgmentally commutative square as in (<ref>).
Thus, <ref> is equivalent to its restriction to such squares.
But given such a square, the type of ordinary diagonal fillers (<ref>) is equivalent to
\[ \sm{j:C\to \sm{y:Y} X(y)}{H_f : j\circ l = (g\circ l,f)}{H_g : \proj1 \circ j = g} \proj1 \circ H_f = H_g \circ l \]
and thereby to
\begin{multline*}
\sm{j_1:C\to Y}{j_2 : \prd{c:C} X(j_1(c))}\\
\sm{H_{f1} : j_1 \circ l = g\circ l}{H_{f2} : \dpath{X}{H_{f1}}{j_2\circ l}{f}}{H_g : j_1 = g} H_{f1} = H_g \circ l.
\end{multline*}
But now we can contract two based path spaces (combining $j_1$ with $H_g$, and $H_{f1}$ with the final unnamed equality $H_{f1} = H_g\circ l$) to get the type (<ref>) of dependent fillers.
Let $F:\prd{a:A} B(a) \to C(a)$ and let $X:Y\to\UU$ be a type family.
We define an indexed higher inductive type $\factorhit{F}{Y}{X} : Y\to \UU$ by the following constructors:
\begin{align*}
\beta_X &: \prd{y:Y} X(y) \to \factorhit{F}{Y}{X}(y)\\
\mathsf{lift} &: \prd*{a:A}{g:C(a) \to Y}{f:\prd{b:B(a)} \factorhit{F}{Y}{X}(g(F_a(b)))}{c:C(a)} \factorhit{F}{Y}{X}(g(c))\\
\mathsf{islift} &
\!\begin{multlined}[t]
: \prd*{a:A}{g:C(a) \to Y}{f:\prd{b:B(a)} \factorhit{F}{Y}{X}(g(F_a(b)))}{b:B(a)}\\
\mathsf{lift}(g,f,F_a(b)) = f(b).
\end{multlined}
\end{align*}
Diagrammatically, $\mathsf{lift}$ and $\mathsf{islift}$ comprise a specified dependent filler for any judgmentally commutative square as follows:
\[
\begin{tikzcd}
B(a) \ar[d,"{F_a}"'] \ar[r,"f"] & \sm{y:Y} \factorhit{F}{Y}{X}(y) \ar[d,"\proj1"] \\
C(a) \ar[ur,dotted] \ar[r,"g"'] & Y.
\end{tikzcd}
\]
The induction principle of $\factorhit{F}{Y}{X}$ says that for any $P:\prd{y:Y} \factorhit{F}{Y}{X}(y) \to \UU$ with
\begin{align*}
N &: \prd{y:Y}{x:X(y)} P(y,\beta_X(y,x))\\
R &
\!\begin{multlined}[t]
: \prd{a:A}{g:C(a) \to Y}{f:\prd{b:B(a)} \factorhit{F}{Y}{X}(g(F_a(b)))}\\
\prd{f':\prd{b:B(a)} P(g(F_a(b)),f(b))}{c:C(a)} P(g(c),\mathsf{lift}(g,f,c))
\end{multlined}
\\
S &
\!\begin{multlined}[t]
: \prd{a:A}{g:C(a) \to Y}{f:\prd{b:B(a)} \factorhit{F}{Y}{X}(g(F_a(b)))}\\
\prd{f':\prd{b:B(a)} P(g(F_a(b)),f(b))}{b:B(a)} \dpath{P}{\mathsf{islift}(g,f,b)}{R(g,f,f',F_a(b))}{f'(b)}
\end{multlined}
\end{align*}
there is a section $s:\prd{y:Y}{w:\factorhit{F}{Y}{X}(y)} P(y,w)$ such that $s \circ \beta_X = N$ (plus two more computation rules we ignore).
Note that by transporting along $\mathsf{islift}$, the types of $R$ and $S$ are equivalent to
\begin{align*}
R' &
\!\begin{multlined}[t]
: \prd{a:A}{g:C(a) \to Y}{f:\prd{b:B(a)} \factorhit{F}{Y}{X}(g(F_a(b)))}\\
\prd{f':\prd{b:B(a)} P(g(F_a(b)),\mathsf{lift}(g,f,F_a(b)))}{c:C(a)} P(g(c),\mathsf{lift}(g,f,c))
\end{multlined}
\\
S' &
\!\begin{multlined}[t]
: \prd{a:A}{g:C(a) \to Y}{f:\prd{b:B(a)} \factorhit{F}{Y}{X}(g(F_a(b)))}\\
\prd{f':\prd{b:B(a)} P(g(F_a(b)),\mathsf{lift}(g,f,F_a(b)))}{b:B(a)} \id{R(g,f,f',F_a(b))}{f'(b)}.
\end{multlined}
\end{align*}
With this modification, the inputs of the induction principle are a judgmentally commutative square
\begin{equation}
\begin{tikzcd}
\sm{y:Y} X(y) \ar[d,"{(\idfunc[Y],\beta_X)}"'] \ar[r,"N"] & \sm{y:Y}{w:\factorhit{F}{Y}{X}(y)} P(y,w) \ar[d,"\proj1"] \\
\sm{y:Y} \factorhit{F}{Y}{X}(y) \ar[r,equals] &\sm{y:Y} \factorhit{F}{Y}{X}(y)
\end{tikzcd}\label{eq:Nsq}
\end{equation}
together with a specified dependent filler for each judgmentally commutative square of the form
\[
\begin{tikzcd}[column sep=huge]
B(a) \ar[rr,"{(g\circ F_a,\mathsf{lift}(g,f,F_a(-)),f')}"] \ar[d,"{F_a}"'] && \sm{y:Y}{w:\factorhit{F}{Y}{X}(y)} P(y,w) \ar[d,"\proj1"] \\
C(a) \ar[rr,"{(g,\mathsf{lift}(g,f,-))}"'] && \sm{y:Y} \factorhit{F}{Y}{X}(y),
\end{tikzcd}
\]
while the output of the induction principle is a dependent filler in (<ref>).
If $P:\prd{y:Y} \factorhit{F}{Y}{X}(y) \to \UU$ is such that
\[\proj1 : (\sm{y:Y}{w:\factorhit{F}{Y}{X}(y)} P(y,w)) \to \sm{y:Y} \factorhit{F}{Y}{X}(y)\]
is right orthogonal to $F$, then
\[(-\circ \beta_X) : \Big(\prd{y:Y}{w:\factorhit{F}{Y}{X}(y)} P(y,w)\Big) \to \Big(\prd{y:Y}{x:X(y)} P(y,\beta_X(y,x))\Big) \]
is an equivalence.
As in <ref>, we will show that it is path-split using the induction principle of $\factorhit{F}{Y}{X}$.
First, given $h:\prd{y:Y}{x:X(y)} P(y,\beta_X(y,x))$, we apply the induction principle with the family $P$ itself and $N\defeq h$.
To give the remaining data $R,S$, suppose given $a:A$, $g:C(a) \to Y$, $f:\prd{b:B(a)} \factorhit{F}{Y}{X}(g(F_a(b)))$, and $f':\prd{b:B(a)} P(g(F_a(b)),f(b))$.
Now we can apply <ref> with $l\defeq F_a$ and $f\defeq f'$: an inhabitant of (<ref>) consists exactly of the desired $R$ and $S$.
Second, given $h,k:\prd{y:Y}{w:\factorhit{F}{Y}{X}(y)} P(y,w)$ and $p:h\circ \beta_X = k\circ \beta_X$, we take $P(y,x) \defeq (h(y,x)=k(y,x))$ and $N\defeq p$.
To give $R,S$, suppose given $a:A$, $g:C(a) \to Y$, $f:\prd{b:B(a)} \factorhit{F}{Y}{X}(g(F_a(b)))$, and
\[f':\prd{b:B(a)} h(g(F_a(b)),f(b))=k(g(F_a(b)),f(b)).\]
Define
\begin{align*}
j(c) &\defeq h(g(c),\mathsf{lift}(g,f,c))\\
j'(c) &\defeq k(g(c),\mathsf{lift}(g,f,c))\\
q(b) &\defeq k(g(F_a(b)),f(b)).
\end{align*}
Then we can apply <ref> to the square
\[
\begin{tikzcd}
B(a) \ar[d,"F_a"'] \ar[r,"q"] & \sm{y:Y} P(y) \ar[d,"\proj1"] \\
C(a) \ar[r,"g"'] & Y.
\end{tikzcd}
\]
We have
\[ j'(F_a(b)) \jdeq k(g(F_a(b)),\mathsf{lift}(g,f,F_a(b))) = k(g(F_a(b)),f(b)) \jdeq q(b) \]
and
\begin{multline*}
j(F_a(b)) \jdeq h(g(F_a(b)),\mathsf{lift}(g,f,F_a(b))) = h(g(F_a(b)),f(b))\\ \overset{f'}{=} k(g(F_a(b)),f(b)) \jdeq q(b),
\end{multline*}
giving two inhabitants $(j,\nameless)$ and $(j',\nameless)$ of (<ref>), which are therefore equal.
This equality consists of an equality $j=j'$, which gives precisely $R$, and an equality between the above two paths, which gives precisely $S$.
Given $F:\prd{a:A} B(a) \to C(a)$, define $\cR = F^{\perp}$ and $\cL = {}^{\perp}\cR$, and let $\hat F$ be as in <ref> and $\factorhit{\hat F}{Y}{X}$ constructed as above for $\hat F$.
Then for any $X:Y\to\UU$, the composite
\[ \Big(\sm{y:Y} X(y)\Big) \to \Big(\sm{y:Y} \factorhit{\hat F}{Y}{X}(y)\Big) \to Y \]
is an $(\cL,\cR)$-factorization.
Therefore, $(\cL,\cR)$ is an orthogonal factorization system.
By <ref>, if $\proj1$ is right orthogonal to $F$, then it is also right orthogonal to $\hat F$.
Since every function is equivalent to one of the form $\proj1$, we have $F^{\perp} = {\hat F}^{\perp}$.
Thus, since applying <ref> to $\hat F$ shows that the first factor of this factorization is in ${}^{\perp}({\hat F}^{\perp})$, it is also in ${}^{\perp}({F}^{\perp}) = \cL$.
On the other hand, the constructors $\mathsf{lift}$ and $\mathsf{islift}$ show that the second factor $\proj1 : \big(\sm{y:Y} \factorhit{\hat F}{Y}{X}(y)\big) \to Y$ of this factorization satisfies <ref><ref> for $F$, since the fibers of these maps are the types of dependent fillers against morphisms in $\hat F$.
Thus, this second factor is in $\cR$.
Finally, in <ref> we defined orthogonal factorization systems by the uniqueness of factorizations and proved from this the orthogonality of the two classes of maps; but it is easy to show that, as in classical category theory, orthogonality implies the uniqueness of factorizations when they exist, since any two factorizations must lift uniquely against each other.
§ LEFT EXACT MODALITIES
We have seen that the modal operator of any reflective subuniverse preserves products, but even for a modality it does not generally preserve pullbacks.
If it does, we call the modality “left exact” or just “lex”.
In higher topos theory, lex modalities coincide with reflective sub-toposes.
We can construct them by nullifying any family of propositions (<ref>); these correspond categorically to the “topological” localizations (in 1-topos theory, every subtopos is topological).
§.§ Lex, topological, and cotopological modalities
For a modality $\modal$, the following are equivalent.
* If $A$ is $\modal$-connected, then so is $(x=y)$ for any $x,y:A$.
* Whenever $A$ and $\sm{x:A}B(x)$ are $\modal$-connected, then so is $B(x)$ for all $x:A$.
* Any map between $\modal$-connected types is $\modal$-connected.
* Any $\modal$-modal function between $\modal$-connected types is an equivalence.
* If $f:A\to B$ is $\modal$-connected, and $g:\prd{a:A} P(a) \to Q(f(a))$ is such that $\total g:(\sm{x:A} P(x)) \to (\sm{y:B} Q(y))$ is $\modal$-connected, then $g_a:P(a)\to Q(fa)$ is also $\modal$-connected for each $a:A$.
* Given a commutative square
\begin{equation}
\begin{tikzcd}
B \ar[r,"h"] \ar[d,"g"'] & A \ar[d,"f"] \\
D \ar[r,"k"'] & C
\end{tikzcd}\label{eq:lex-commsq}
\end{equation}
in which $f$ and $g$ are $\modal$-connected, then for any $a:A$ the induced map $\hfib{h}{a} \to \hfib{k}{f(a)}$ is $\modal$-connected.
* Any commutative square (<ref>) in which $f$ and $g$ are $\modal$-connected and $h$ and $k$ are $\modal$-modal is a pullback.
* For any $f:A\to B$ and $b:B$, the evident map $\hfib{f}{b} \to \hfib{\modal f}{\modalunit b}$ is $\modal$-connected.
* For any $A$ and $x,y:A$, the induced map $\modal(x=y) \to (\modalunit[A](x) = \modalunit[A](y))$ is an equivalence.
* The functor $\modal$ preserves pullbacks.
* $\modal$-connected maps satisfy the 2-out-of-3 property.
* If $\modal f: \modal A\to \modal B$ is an equivalence, then $f$ is $\modal$-connected.
* For any $\modal$-connected type $A$ and any $P:A\to \modaltype$, there is a $Q:\modaltype$ such that $P(a)\eqvsym Q$ for all $a:A$.
When they hold, we say that $\modal$ is lex.
The equivalence <ref>$\Leftrightarrow$<ref> is easy, using the definition of $\modal$-connected maps and the fact that any function is equivalent to a fibration.
And <ref>$\Rightarrow$<ref> since $\hfib f b \jdeq \sm{a:A} (f(a)=b)$ and $\modal$-connected types are closed under $\Sigma$ (since $\modal$-connected maps are closed under composition, being the left class of a factorization system).
Condition <ref> is a special case of <ref>, since a function that is both modal and connected is an equivalence.
But assuming <ref>, if $f:A\to B$ is any function between $\modal$-connected types, then in its $(\cL,\cR)$-factorization $A\xrightarrow{e} I\xrightarrow{m} B$ the type $I$ is also connected by right cancellation.
Thus <ref> implies that $m$ is an equivalence; thus $f$, like $e$, is $\modal$-connected, giving <ref>.
Assuming <ref>, in the situation of <ref> the $3\times 3$ lemma for fiber sequences allows us to identify the fiber of $g_a$ over $q:Q(f(a))$ with the fiber over $(a,\refl{f(a)})$ of the induced map $\hfib{\total{g}}{(f(a),q)} \to \hfib{f}{f(a)}$: |
# Complex-valued K-means clustering of interpolative separable density
fitting algorithm for large-scale hybrid functional enabled ab initio
molecular dynamics simulations within plane waves
Shizhe Jiao, Jielan Li (<EMAIL_ADDRESS>), Xinming Qin, Lingyun Wan, and Wei Hu (<EMAIL_ADDRESS>)
Hefei National Research Center for Physical Sciences at the Microscale, and Anhui Center for Applied Mathematics, University of Science and Technology of China, Hefei, Anhui 230026, China
Jinlong Yang
Key Laboratory of Precision and Intelligent Chemistry, and Department of Chemical Physics, University of Science and Technology of China, Hefei, Anhui 230026, China
###### Abstract
K-means clustering, a classic unsupervised machine learning algorithm, is the key step in selecting the interpolation sampling points in the interpolative separable density fitting (ISDF) decomposition. Real-valued K-means clustering for accelerating the ISDF decomposition has been demonstrated for large-scale hybrid functional enabled ab initio molecular dynamics (hybrid AIMD) simulations within plane-wave basis sets, where the Kohn-Sham orbitals are real-valued. However, it has been unclear whether such K-means clustering works for complex-valued Kohn-Sham orbitals. Here, we apply K-means clustering to hybrid AIMD simulations with complex-valued Kohn-Sham orbitals, using an improved weight function defined as the sum of the square moduli of the complex-valued Kohn-Sham orbitals. Numerical results demonstrate that this improved weight function yields smoother and more delocalized interpolation sampling points, resulting in a smoother potential energy, smaller energy drift, and longer usable time steps for hybrid AIMD simulations compared to the weight function used in the previous real-valued K-means algorithm. In particular, we find that the improved algorithm obtains more accurate oxygen-oxygen radial distribution functions in liquid water and a more accurate power spectrum in crystalline silicon dioxide than the previous K-means algorithm. Finally, we describe a massively parallel implementation of this ISDF decomposition to accelerate large-scale complex-valued hybrid AIMD simulations containing thousands of atoms (2,744 atoms), which can scale up to 5,504 CPU cores on modern supercomputers.
## 1 Introduction
The transposed Khatri-Rao product 1, 2 (also known as the face-splitting product)
$Z=\\{z_{ij}:=\phi_{i}(\mathbf{r})\psi_{j}^{\ast}(\mathbf{r})\\}_{1\leq i\leq
N_{\phi},1\leq j\leq N_{\psi}}\in\mathbb{C}^{N_{r}\times(N_{\phi}N_{\psi})}$
of Kohn-Sham orbitals $\phi_{i}(\mathbf{r})$ and $\psi_{j}(\mathbf{r})$ on the real-space grid $\\{\mathbf{r_{i}}\\}_{i=1}^{N_{r}}$ is unavoidable in the multi-center integrals of advanced electronic structure calculations in density functional theory (DFT), 3, 4 especially in Hartree-Fock (HF) 5, 6 and post-HF electronic structure theory, such as time-dependent density functional theory (TDDFT), 7, 8 the GW approximation 9, 10, 11, 12 plus the Bethe-Salpeter equation (BSE), 13 second-order Møller-Plesset perturbation theory (MP2), 14, 15, 16 and the random phase approximation (RPA). 17 To reduce the high computational cost and memory usage of such multi-center integrals in Kohn-Sham DFT calculations, several low-rank approximation algorithms have been proposed, such as the Cholesky decomposition, 18, 19 resolution of identity (RI), 20, 21, 22, 23 tensor hypercontraction (THC) 24, 25, 26 and pseudospectral decomposition. 27, 28 However, it is difficult to apply these low-rank approximation algorithms to atomic forces and vibrational frequencies, especially for periodic systems within plane-wave basis sets, which are needed for a wide range of applications such as geometry optimization and ab initio molecular dynamics (AIMD) simulation.
Recently, Lu et al. proposed a new tensor hypercontraction (THC) algorithm by
using the randomized QR factorization with column pivoting (QRCP) procedure
29, namely interpolative separable density fitting (ISDF), 30, 31 which can
achieve an effective low-rank approximation of the transposed Khatri-Rao
product of the Kohn-Sham orbitals ($\phi_{i}$ and $\psi_{j}$) and compress
their redundant information with cubic-scaling computational cost of
$O(N_{r}N_{\phi}N_{\psi})$. The transposed Khatri-Rao product of the Kohn-Sham
orbitals can be expressed by
$z_{ij}=\phi_{i}(\mathbf{r})\psi_{j}^{\ast}(\mathbf{r})\approx\sum_{\mu=1}^{N_{\mu}}\zeta_{\mu}(\mathbf{r})\phi_{i}(\mathbf{r_{\mu}})\psi_{j}^{\ast}(\mathbf{r_{\mu}})$,
where $\\{\mathbf{r}_{\mu}\\}_{\mu=1}^{N_{\mu}}$ are a set of interpolation
points from grid points $\\{\mathbf{r}_{i}\\}_{i=1}^{N_{r}}$ in real space,
$N_{\mu}$ is proportional to $\sqrt{N_{\phi}N_{\psi}}$
($N_{\mu}=t\sqrt{N_{\phi}N_{\psi}}$, $t$ is the rank truncation constant), and
$\zeta_{\mu}(\mathbf{r})$ are the auxiliary basis functions (ABFs). The ISDF
decomposition has already been applied successfully in several types of multi-
center integrals in the Kohn-Sham DFT calculations within Gaussian-type
orbitals (GTOs), numerical atomic orbitals (NAOs), and plane-wave (PW) basis
sets, such as hybrid DFT calculations, 31, 32, 33, 34 RPA correlation, 30
quantum Monte Carlo (QMC) simulations, 35 TDDFT, 36 MP2, 37 GW 38, 39 and BSE
40 calculations, for molecular and periodic systems.
The ISDF decomposition can be divided into two key steps, 31 namely selecting the interpolation points (IPs) and computing the interpolation vectors (IVs). Once the IPs are selected, the IVs can be computed easily by a least-squares fitting procedure. Selecting the IPs means choosing a set of nonuniform grid points $\\{\mathbf{r}_{\mu}\\}_{\mu=1}^{N_{\mu}}$ on which the values of the orbital pairs are almost consistent with their values on all grid points $\\{\mathbf{r}_{i}\\}_{i=1}^{N_{r}}$. Two approaches have been proposed to select the IPs. The standard approach is the randomized QRCP mentioned previously, 29 which is accurate but expensive in the ISDF decomposition process. 31 Another approach is the centroidal Voronoi tessellation (CVT) algorithm proposed by Dong et al., 32 which only requires the electron density from the DFT calculations. The CVT method can be carried out easily by the K-means clustering algorithm, a classic unsupervised machine learning algorithm. K-means clustering divides $N_{r}$ data points into $N_{\mu}$ clusters, where each cluster contains all points that are closer to its centroid than to the centroid of any other cluster. Since K-means clustering only converges to a locally optimal solution, its accuracy strongly depends on the selection of the initial centroids and on the definitions of distance and centroids. 41, 42, 32, 34 For real-valued orbitals, recent numerical results 31, 32, 33, 34 have demonstrated that K-means clustering with relatively simple weight definitions can yield reasonably high accuracy at a much lower cost.
However, for periodic systems the Kohn-Sham orbitals are usually represented as complex-valued Bloch functions with k-point sampling 43, 44, 45 or in noncollinear spin density functional theory, 46 and as time-dependent wavefunctions 47, 48, 12, 49, 50 in real-time time-dependent density functional theory (RT-TDDFT). For example, the transposed Khatri-Rao product of complex-valued Kohn-Sham orbitals is
$Z=\\{z_{ij}:=\sum_{k,l=1}^{N_{g}}\phi_{i,k}\psi_{j,l}^{\ast}e^{i(G_{k}-G_{l})\cdot\mathbf{r}}\\}$,
where $N_{g}$ is the number of plane-wave basis functions, and $\phi_{i,k}$ and $\psi_{j,l}^{\ast}$ are the plane-wave expansion coefficients of $\phi_{i}$ and $\psi_{j}^{\ast}$, respectively. In this case, conventional K-means clustering algorithms run into trouble 45, 46, 50 because centroids and distances cannot be well-defined for complex-valued data points. An alternative is to separate the complex data into real and imaginary parts, perform K-means clustering twice, and merge the real and imaginary centroids. This does not work for complex-valued Kohn-Sham orbitals, since the real and imaginary parts are correlated one-to-one and are difficult to merge consistently. To the best of our knowledge, there are still few works on complex-valued K-means clustering algorithms. 51 In particular, Zhang et al. used complex-valued K-means to learn filters in convolutional neural networks for sleep stage classification. 52 They defined complex-valued centroids and used an inner product instead of the Euclidean distance. However, in that work the complex numbers were converted from real numbers, so the real and imaginary parts depend on each other, and the approach is too expensive to apply directly to the transposed Khatri-Rao product of complex-valued Kohn-Sham orbitals in DFT calculations.
In the ISDF method, we introduce a weighted K-means clustering algorithm. Because the real and imaginary parts of the complex-valued Kohn-Sham orbitals are discretized on the same real-space grid, we can perform K-means clustering on the real-space grid points and define the centroid differently from conventional K-means clustering: as the weighted average of all points belonging to the cluster. The information of the complex-valued Kohn-Sham orbitals is folded into the weight function. Therefore, it is important to choose an appropriate weight function to compute the weighted average for complex-valued Kohn-Sham orbitals.
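To make the procedure concrete, the following is a minimal serial NumPy sketch of this weighted K-means selection of interpolation points (an illustration only, not our parallel production implementation; the weight follows the improved definition used in this work, i.e. the sum of the square moduli of the complex-valued orbitals, and all function and variable names are illustrative):

```python
import numpy as np

def kmeans_ips(grid, Phi, Psi, n_mu, n_iter=50, seed=0):
    """Weighted K-means on real-space grid points.

    grid: (N_r, 3) grid coordinates (floats); Phi, Psi: (N_r, N_orb)
    complex orbital values on the grid. Returns indices of the grid
    points nearest to the converged centroids (the interpolation points).
    """
    # improved weight function: sum of square moduli of the orbitals
    w = (np.abs(Phi) ** 2).sum(axis=1) + (np.abs(Psi) ** 2).sum(axis=1)
    rng = np.random.default_rng(seed)
    centroids = grid[rng.choice(len(grid), n_mu, replace=False, p=w / w.sum())]
    for _ in range(n_iter):
        # assignment step: nearest centroid in Euclidean distance,
        # O(N_r * N_mu) work per iteration
        d2 = ((grid[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # update step: centroid = weighted average of the cluster's points
        for mu in range(n_mu):
            mask = labels == mu
            if mask.any():
                centroids[mu] = (w[mask, None] * grid[mask]).sum(0) / w[mask].sum()
    # snap each centroid to the nearest actual grid point; duplicates
    # (if any) are removed, which may slightly reduce the count
    d2 = ((grid[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return np.unique(d2.argmin(axis=0))
```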
In this work, we present an improved weight function suitable for complex-valued Kohn-Sham orbitals in DFT calculations. We successfully apply the improved K-means clustering algorithm to complex-valued hybrid functional calculations with plane-wave basis sets and accelerate large-scale hybrid density functional calculations containing thousands of atoms. In particular, ab initio molecular dynamics with complex-valued hybrid DFT calculations can be performed for molecular and solid systems, such as liquid water 53, 54, 55, 56, 57, 58, 32 and aluminium-silicon alloy, 59, 60 which is important but expensive within plane-wave basis sets. 61, 62, 63, 64, 65, 66 We demonstrate that the potential energy calculated with this improved weight function is smoother and shows smaller energy drift and longer usable time steps compared to the previous weight function in the K-means algorithm. Therefore, this improved K-means clustering algorithm can accurately and efficiently accelerate large-scale and long-time AIMD with complex-valued hybrid DFT calculations.
This work is organized as follows. Section 2 gives a brief description of the theoretical methodology, including the ISDF method, the complex-valued K-means clustering algorithm in ISDF, and the combination of hybrid DFT calculations with the ISDF method, as well as their parallel implementation. Section 3 validates the numerical accuracy and computational efficiency of the complex-valued K-means clustering algorithm for the ISDF decomposition in accelerating hybrid DFT calculations. A summary and outlook are given in Section 4.
## 2 Methodology
### 2.1 Interpolative separable density fitting
The ISDF decomposition is a new THC algorithm, first proposed by Lu and Ying 29 and then extended by Hu et al. to large-scale hybrid DFT calculations. 31 This algorithm achieves a low-rank approximation of the transposed Khatri-Rao product (also known as the face-splitting product)
$Z=\\{z_{ij}:=\phi_{i}(\mathbf{r})\psi_{j}^{\ast}(\mathbf{r})\\}_{1\leq i\leq
N_{\phi},1\leq j\leq N_{\psi}}\in\mathbb{C}^{N_{r}\times(N_{\phi}N_{\psi})}$
of Kohn-Sham orbitals $\phi_{i}$ and $\psi_{j}$:
$\phi_{i}(\mathbf{r})\psi_{j}^{\ast}(\mathbf{r})\approx\sum_{\mu=1}^{N_{\mu}}\zeta_{\mu}(\mathbf{r})C_{\mu}^{ij}$
(1)
where the transposed Khatri-Rao product is approximately decomposed into auxiliary basis functions (ABFs) $\zeta_{\mu}(\mathbf{r})$ and expansion coefficients $C_{\mu}^{ij}$. The number of auxiliary basis functions $N_{\mu}=t\sqrt{N_{\phi}N_{\psi}}$ can be regarded as the numerical rank of the decomposition, where $t$ is a small parameter that controls the compromise between numerical accuracy and computational efficiency. The key to the ISDF decomposition is solving for the expansion coefficients, a task on which tensor hypercontraction (THC) 67, 68, 26, 69 algorithms have been successful. The ISDF algorithm provides a new tensor hypercontraction that obtains the coefficients as
$C_{\mu}^{ij}=\phi_{i}(\mathbf{r_{\mu}})\psi_{j}^{\ast}(\mathbf{r_{\mu}})$ (2)
where $\\{r_{\mu}\\}_{\mu=1}^{N_{\mu}}$ are a set of interpolation points from
grid points $\\{r_{i}\\}_{i=1}^{N_{r}}$ in real space. Therefore, the
transposed Khatri-Rao product can be expressed in the following form
$\phi_{i}(\mathbf{r})\psi_{j}^{\ast}(\mathbf{r})\approx\sum_{\mu=1}^{N_{\mu}}\zeta_{\mu}(\mathbf{r})\phi_{i}(\mathbf{r_{\mu}})\psi_{j}^{\ast}(\mathbf{r_{\mu}})$
(3)
Thus the ISDF method can be divided into two steps. 31 The first is to obtain the expansion coefficients $C_{\mu}^{ij}$, namely to compute the IPs, which can be achieved by the QRCP procedure or by the weighted K-means clustering algorithm. From the matrix point of view, the IPs procedure selects $N_{\mu}$ rows from $Z$ that fit the entire matrix $Z$. Because the matrix $Z$ is not of full rank, the QRCP procedure can achieve the low-rank decomposition of $Z$ as follows,
$Z^{T}\Pi=Q\begin{bmatrix}R_{11}&R_{12}\\\
0&0\end{bmatrix}_{N_{\phi}N_{\psi}\times N_{r}}$ (4)
where $Z^{T}$ is the transpose of the matrix $Z$, and $\Pi$ is the permutation matrix whose first $N_{\mu}$ columns give the IPs $\\{\mathbf{r}_{\mu}\\}_{\mu=1}^{N_{\mu}}$. $Q$ and $R_{11}$ denote an orthogonal matrix and a non-singular upper triangular matrix, respectively, and the absolute values of the diagonal entries of $R_{11}$ follow a non-increasing order. As the standard method, the computational cost of the QRCP procedure scales as $O(N_{\mu}^{2}N_{r})$, while that of the K-means clustering algorithm is only $O(N_{r}N_{\mu})$, as Table 1 shows.
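For illustration, the QRCP selection of IPs can be sketched with a dense column-pivoted QR (a sketch, not the production implementation; the randomized variant used in practice first compresses $Z^{T}$ with a random projection, which is omitted here):

```python
import numpy as np
from scipy.linalg import qr

def select_ips_qrcp(Z, n_mu):
    """Select N_mu interpolation points via QRCP on Z^T (eq 4).

    Z: (N_r, N_phi*N_psi) matrix of orbital pair values; the first
    N_mu pivots of the column-pivoted QR of Z^T index the rows of Z,
    i.e. the real-space grid points chosen as IPs.
    """
    _, _, piv = qr(Z.T, mode='economic', pivoting=True)
    return np.sort(piv[:n_mu])
```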
The second step is to get the ABFs $\zeta_{\mu}(\mathbf{r})$, namely to
compute the IVs, which can be obtained by the least-squares procedure. After
we obtain the expansion coefficients $C_{\mu}^{ij}$, eq 1 can be written in
matrix form
$Z\approx\Theta C$ (5)
where $Z$ is an $N_{r}\times N_{e}^{2}$ matrix ($N_{\phi}\approx N_{\psi}\sim
O(N_{e})$, $N_{e}$ is the number of electrons), which comes from
$\phi_{i}(\mathbf{r})\psi_{j}^{\ast}(\mathbf{r})$ sampled on a set of dense
real space grids $\\{\mathbf{r}_{i}\\}_{i=1}^{N_{r}}$.
$\Theta=[\zeta_{1},\zeta_{2},...,\zeta_{N_{\mu}}]$ is the $N_{r}\times N_{\mu}$
matrix of IVs, and
$C=[\phi_{i}(\mathbf{r}_{1})\psi_{j}^{\ast}(\mathbf{r}_{1}),...,\phi_{i}(\mathbf{r}_{\mu})\psi_{j}^{\ast}(\mathbf{r}_{\mu}),...,\phi_{i}(\mathbf{r}_{N_{\mu}})\psi_{j}^{\ast}(\mathbf{r}_{N_{\mu}})]^{T}$.
Thus the IVs $\Theta$ can be given by
$\Theta=ZC^{T}(CC^{T})^{-1}$ (6)
where $ZC^{T}$ and $CC^{T}$ both need $O(N_{e}^{4})$ floating point
operations. Nevertheless, the separable structure of $Z$ and $C$ can reduce
the operations dramatically. 31 As is well known,
$\sum_{i,j}\phi_{i}\psi_{j}=(\sum_{i}\phi_{i})(\sum_{j}\psi_{j})$ (7)
Thus the $\mu$-th row, $\nu$-th column element of $ZC^{T}$ can be written as
$P^{\phi}(\mathbf{r}_{\mu},\mathbf{r}_{\nu})P^{\psi}(\mathbf{r}_{\mu},\mathbf{r}_{\nu})$
(8)
where $P^{\phi}(\mathbf{r}_{\mu},\mathbf{r}_{\nu})$ and
$P^{\psi}(\mathbf{r}_{\mu},\mathbf{r}_{\nu})$ are defined as
$\begin{split}P^{\phi}(\mathbf{r}_{\mu},\mathbf{r}_{\nu})&=\sum_{i}^{N_{\phi}}\phi_{i}(\mathbf{r}_{\mu})\phi_{i}^{\ast}(\mathbf{r}_{\nu})\\\
P^{\psi}(\mathbf{r}_{\mu},\mathbf{r}_{\nu})&=\sum_{i}^{N_{\psi}}\psi_{i}(\mathbf{r}_{\mu})\psi_{i}^{\ast}(\mathbf{r}_{\nu})\end{split}$
(9)
Here to compute $P^{\phi}$ and $P^{\psi}$ takes $O(N_{e}^{3})$ floating point
operations and the multiplication of $P^{\phi}$ and $P^{\psi}$ only needs
$O(N_{e}^{2})$ floating point operations. This conclusion also applies to
$CC^{T}$. Therefore, we can reduce the computational complexity of IVs from
$O(N_{e}^{4})$ to $O(N_{e}^{3})$.
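A compact sketch of this separable least-squares construction might look as follows; the array names (`phi`, `psi`, `ip_idx`) are assumptions, and the conjugations follow Eqs. 8 and 9 rather than any particular production code.

```python
# Hedged sketch of the IV construction (Eq. 6) using the separability
# of Z and C (Eqs. 8-9); a real code would work on distributed blocks.
import numpy as np

def compute_ivs(phi, psi, ip_idx):
    # Quasi density matrices restricted to (all grid points, IPs):
    # P^phi[r, mu] = sum_i phi_i(r) phi_i*(r_mu), and likewise for psi.
    P_phi = phi @ phi[ip_idx].conj().T        # N_r x N_mu, O(N_e^3) work
    P_psi = psi @ psi[ip_idx].conj().T        # N_r x N_mu
    ZCt = P_phi * P_psi                        # Hadamard product, Eq. 8
    CCt = ZCt[ip_idx]                          # CC^T subsamples rows of ZC^T
    # Theta = (ZC^T)(CC^T)^{-1}; solve a linear system instead of inverting.
    return np.linalg.solve(CCt.T, ZCt.T).T     # N_r x N_mu matrix of IVs
```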
Table 1: Computational cost and memory usage of IPs and IVs in ISDF decomposition. Notice that $N_{r}\approx 1,000\times N_{e}$, and $N_{\phi}\approx N_{\psi}\approx N_{\mu}\sim O(N_{e})$ in the plane-wave basis sets.

Step | Algorithm | Computation | Memory
---|---|---|---
IPs | QRCP | $O(N_{\mu}^{2}N_{r})$ | $O(N_{r}N_{\mu})$
IPs | K-means | $O(N_{r}N_{\mu})$ | $O(N_{r}N_{\mu})$
IVs | Least-squares | $O(N_{r}N_{\mu}N_{e})$ | $O(N_{r}N_{\phi}N_{\psi})$
### 2.2 Complex-valued K-means clustering in ISDF
As an unsupervised machine learning algorithm, K-means clustering has been
demonstrated to be much cheaper than the QRCP procedure. 32, 34 The
conventional K-means clustering algorithm seeks the solution to the following
optimization problem:
$argmin\sum_{\mu=1}^{N_{\mu}}\sum_{\mathbf{r_{k}}\in
C_{\mu}}||Z(\mathbf{r}_{k})-Z(\mathbf{r}_{\mu})||^{2}$ (10)
Here we divide the $N_{r}$ data points into $N_{\mu}$ clusters
$\\{C_{\mu}\\}_{\mu=1}^{N_{\mu}}$. Each cluster can be denoted by its
centroid, namely the IPs we need.
${C_{\mu}}=\\{\ Z(\mathbf{r}_{i})\ |\
dist(Z(\mathbf{r}_{i}),Z(\mathbf{r}_{\mu}))\leq
dist(Z(\mathbf{r}_{i}),Z(\mathbf{r}_{m}))\ for\ all\ m\neq\mu\ \\}$ (11)
where $dist(Z(\mathbf{r}_{i}),Z(\mathbf{r}_{\mu}))$ denotes the distance between
the data point $Z(\mathbf{r}_{i})$ and the centroid $Z(\mathbf{r}_{\mu})$. The
optimization problem can only be solved iteratively, and K-means
clustering converges to a local minimum. The accuracy of K-means clustering
strongly depends on the selection of initial centroids and definition of
distance and centroids. 41, 42, 32, 34 The conventional K-means clustering is
mainly used for real-valued data points. As mentioned above, due to the
dependence of the real and imaginary parts, there is a dilemma when we apply
the conventional K-means clustering algorithm to the complex-valued Kohn-Sham
orbitals. In addition, it is quite expensive to perform direct K-means
clustering on the transposed Khatri-Rao product.
In the ISDF method, we effectively avoid these problems. Instead of clustering
the transposed Khatri-Rao product directly, we perform K-means clustering on
the real-space grid points, onto which the real and imaginary parts of the
complex-valued Kohn-Sham orbitals are mapped together. APPENDIX A verifies the
feasibility of this strategy. In the ISDF method, the optimization problem is reduced to
$argmin\sum_{\mu=1}^{N_{\mu}}\sum_{\mathbf{r_{k}}\in
C_{\mu}}w(\mathbf{r}_{k})||\mathbf{r}_{k}-\mathbf{r}_{\mu}||^{2}$ (12)
where the $w(\mathbf{r}_{k})$ is the weight function. The distances between
the grid points are calculated by the Euclidean metric. The centroids
$\mathbf{r}_{\mu}$ can be defined by the weighted average of all points that
belong to the corresponding cluster as follows
$\mathbf{r}_{\mu}=\frac{\sum_{\mathbf{r}_{j}\in
C_{\mu}}\mathbf{r}_{j}w(\mathbf{r}_{j})}{\sum_{\mathbf{r}_{j}\in
C_{\mu}}w(\mathbf{r}_{j})}$ (13)
The information of the complex-valued Kohn-Sham orbitals is encoded in the
weight function. Therefore, the choice and definition of the empirical weight
function is very important, and different empirical weight functions have been
proposed for different data types in practice. 32, 34
For hybrid functional electronic structure calculations within numerical
atomic orbitals (NAOs), we have proposed the norm of the row of orbital pairs
as the weight function, 34 namely
$w(\mathbf{r})=\sum_{i,j=1}^{N_{b}}|\varphi_{i}(\mathbf{r})||\varphi_{j}(\mathbf{r})|$
(14)
where $\\{\varphi_{i}\\}_{i=1}^{N_{b}}$ denote the real-valued NAOs and
$N_{b}$ denotes the number of NAOs. It should be noticed that the two sets of
real-valued orbitals involved in the transposed Khatri-Rao product are the
same.
For real-valued hybrid DFT calculations within plane-wave basis sets,32 we
have defined the weight function as
$w(\mathbf{r})=\sum_{i=1}^{N_{e}}\sum_{j=1}^{N}|\phi_{i}(\mathbf{r})|^{2}|\psi_{j}(\mathbf{r})|^{2}=(\sum_{i=1}^{N_{e}}|\phi_{i}|^{2})(\sum_{j=1}^{N}|\psi_{j}|^{2})$
(15)
The weight function is the product of the square modulus (abbreviated as PSM)
of real-valued Kohn-Sham orbitals $\\{\phi_{i}\\}_{i=1}^{N_{e}}$ and
$\\{\psi_{j}\\}_{j=1}^{N}$, where $N$ is the number of Kohn-Sham orbitals. It
should be noticed that the two sets of real-valued Kohn-Sham orbitals
involved in the transposed Khatri-Rao product are different because a two-
level self-consistent field (SCF) iteration procedure 70 is used in hybrid DFT
calculations within plane-wave basis sets.
However, the PSM weight function defined in Eq. 15 is prone to producing more
zero elements, which makes the sampling points more localized in real systems,
as shown in FIG. 1 (e, f), where the selected points are concentrated around
the ball-and-stick model. Because the rows of $Z$ selected at the IPs should be
linearly independent, the IPs should be delocalized as much as possible.
Therefore, in the case of complex-valued Kohn-Sham orbitals in DFT calculations
with plane-wave basis sets, we use an improved weight function for the
transposed Khatri-Rao product of two different sets of complex-valued Kohn-Sham
orbitals, defined as
$w(\mathbf{r})=(\sum_{i=1}^{N_{e}}|\phi_{i}|^{\alpha})+(\sum_{j=1}^{N}|\psi_{j}|^{\alpha})$
(16)
When $\alpha=2.0$, the weight function is the sum of the square modulus
(abbreviated as SSM) of the Kohn-Sham orbitals. It should be noticed that the
SSM weight function is second order in the wavefunction whereas the PSM weight
function is fourth order; because the modulus of the wavefunction is typically
less than 1, this improved weight function is an alternative that can show
better results, superior to the PSM weight function defined in Eq. 15. The
values $\alpha=$ 1.0, 3.0 and 4.0 mean that the weight function is the sum of
the modulus (SM), cubic modulus (SCM) and quartic modulus (SQM) of the
Kohn-Sham orbitals, respectively. Eq. 16 is also valid for real-valued
Kohn-Sham orbitals because the real numbers are a subset of the complex
numbers. FIG. 1 demonstrates the IPs selected by the
K-means with SSM and PSM as well as QRCP procedures for BH3NH3 with different
B-N distances. It is obvious that the K-means with SSM can yield IPs which are
more dispersed and delocalized than the K-means with PSM and QRCP, which
demonstrates that SSM is more suitable to simulate electron density as a
weight function. In addition, we verify the feasibility of SSM as the weight
function by demonstrating that the interpolation points using K-means with SSM
approximately minimize the residual for the ISDF decomposition (See APPENDIX
A).
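For concreteness, the PSM and the $\alpha$-family weight functions can be evaluated directly from the orbital arrays, as in the hedged sketch below; the array names and shapes (grid points by number of orbitals) are assumptions.

```python
# Hedged sketch of the weight functions in Eqs. 15 and 16.
import numpy as np

def weight_psm(phi, psi):
    # Product of summed square moduli (Eq. 15): fourth order in the orbitals.
    return (np.abs(phi) ** 2).sum(axis=1) * (np.abs(psi) ** 2).sum(axis=1)

def weight_alpha(phi, psi, alpha=2.0):
    # Sum of moduli to the power alpha (Eq. 16); alpha = 1, 2, 3, 4 give
    # SM, SSM, SCM and SQM, respectively.  Valid for real or complex orbitals.
    return (np.abs(phi) ** alpha).sum(axis=1) + (np.abs(psi) ** alpha).sum(axis=1)
```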
Figure 1: Comparison of the IPs selected by the K-means with SSM and PSM as
well as QRCP procedures for BH3NH3 with different B-N distances of
$d_{\textrm{B-N}}$ = 1.6 Å and 2.8 Å, including (a, b) the electron density
(yellow isosurfaces), (c, d) the IPs (yellow pentagrams) by the K-means with
SSM, (e, f) the IPs (yellow pentagrams) by the K-means with PSM and (g, h) the
IPs (yellow pentagrams) by the QRCP procedure.

Algorithm 1: K-means Clustering Algorithm to Compute Interpolation Points in ISDF
Input: Grid points $\\{\mathbf{r}_{i}\\}_{i=1}^{N_{r}}$, Weight function
$w(\mathbf{r})$.
Output: Interpolation points $\\{\mathbf{r}_{\mu}\\}_{\mu=1}^{N_{\mu}}$
1: Initialize centroids $\\{\mathbf{r}_{\mu}^{(0)}\\}$, set $t\leftarrow 0$.
2: while convergence not reached do
3: Classification step: Assign $N_{r}$ points
$\\{\mathbf{r}_{i}\\}_{i=1}^{N_{r}}$ to the cluster $C_{\mu}^{(t)}$
4: Compute new centroids:
$\mathbf{r}_{\mu}^{(t+1)}\leftarrow{\sum_{\mathbf{r}_{j}\in
C_{\mu}^{(t)}}\mathbf{r}_{j}w(\mathbf{r}_{j})}/{\sum_{\mathbf{r}_{j}\in
C_{\mu}^{(t)}}w(\mathbf{r}_{j})}$
5: set $t\leftarrow t+1$
6: end while
7: Update $\\{\mathbf{r}_{\mu}\\}_{\mu=1}^{N_{\mu}}\leftarrow\\{\mathbf{r}_{\mu}^{(t)}\\}$.
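A minimal serial sketch of Algorithm 1, assuming dense distance evaluation and weighted random initialization (both illustrative choices, not the parallel PWDFT implementation), is given below.

```python
# Hedged NumPy sketch of the weighted K-means of Algorithm 1.
import numpy as np

def weighted_kmeans(grid, w, n_mu, n_iter=100, seed=0):
    """grid: (N_r, 3) real-space points; w: (N_r,) weights, e.g. from Eq. 16."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by weighted sampling of the grid points.
    centroids = grid[rng.choice(len(grid), n_mu, replace=False, p=w / w.sum())]
    for _ in range(n_iter):
        # Classification step: assign each point to its nearest centroid.
        d2 = ((grid[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # Update step: weighted centroid of each cluster (Eq. 13).
        for mu in range(n_mu):
            members = labels == mu
            if members.any():
                centroids[mu] = (grid[members] * w[members, None]).sum(0) / w[members].sum()
    # Snap each centroid to the nearest actual grid point -> the IPs.
    d2 = ((grid[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return np.unique(d2.argmin(axis=0))  # may contain fewer than n_mu points
```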
### 2.3 Low rank approximation of complex-valued hybrid DFT via ISDF
The core task of DFT is to solve the Kohn-Sham equations, expressed as
$H\psi_{j}=(-\frac{1}{2}\Delta_{\mathbf{r}}+V_{\text{ion}}+V_{\text{H}}[\rho]+V_{\text{XC}}[\\{\phi_{i}\\}])\psi_{j}=\epsilon_{j}\psi_{j}$
(17)
where $H$ is the Kohn-Sham Hamiltonian, $\psi_{j}$ is the $j$-th Kohn-Sham
orbital, $\epsilon_{j}$ is the corresponding orbital energy and
$V_{\text{ion}}$ is the ionic potential. In real space, the Hartree potential
$V_{\text{H}}$ is defined as
$V_{\textrm{H}}[\rho](\mathbf{r})=\int{\dfrac{\rho(\mathbf{r^{\prime}})}{|\mathbf{r}-\mathbf{r^{\prime}}|}}d\mathbf{r^{\prime}}$
(18)
and the electron density is given by
$\rho(\mathbf{r})=\sum_{i=1}^{N_{e}}|\psi_{i}(\mathbf{r})|^{2}$ (19)
It should be noticed that the accuracy of the Kohn-Sham DFT calculations
strongly depends on the exchange-correlation potential
$V_{XC}[\\{\phi_{i}\\}]$, which is defined as
$V_{XC}[\\{\phi_{i}\\}]=V_{X}[\\{\phi_{i}\\}]+V_{C}[\rho]$ (20)
where $\\{\phi_{i}\\}$ denote the occupied orbitals. $V_{X}[\\{\phi_{i}\\}]$
and $V_{C}[\rho]$ represent the exchange and correlation potentials,
respectively. For complex-valued hybrid DFT, the Hartree-Fock exchange
operator is
$V_{X}[\\{\phi_{i}\\}](\mathbf{r},\mathbf{r^{\prime}})=-\sum_{i=1}^{N_{e}}{\frac{\phi_{i}^{\ast}(\mathbf{r})\phi_{i}(\mathbf{r^{\prime}})}{|\mathbf{r}-\mathbf{r^{\prime}}|}}$
(21)
When the Hartree-Fock exchange operator is applied to the orbitals
$(V_{X}[\\{\phi_{i}\\}]\psi_{j})(\mathbf{r})=-\sum_{i=1}^{N_{e}}\phi_{i}(\mathbf{r})\int{\frac{\phi_{i}^{\ast}(\mathbf{r^{\prime}})\psi_{j}(\mathbf{r^{\prime}})}{|\mathbf{r}-\mathbf{r^{\prime}}|}{d}\mathbf{r^{\prime}}}$
(22)
For large basis sets used to discretize the Kohn-Sham equations, such as
plane-wave basis sets, it is more efficient to use an iterative
diagonalization procedure to solve eq. 17. In practice, several DFT
packages, such as Quantum Espresso 71 and PWDFT 70, separate the self-
consistent field (SCF) iteration over all occupied orbitals into an inner SCF
iteration and an outer SCF iteration, called the two-level SCF procedure. In the inner
SCF iteration, the exchange operator $V_{X}$ defined by occupied orbitals
$\\{\phi_{i}\\}$ in eq. 21 is fixed, so that the Hamiltonian operator only
relies on the electron density $\rho$, which has to be updated constantly. In
the outer SCF iteration, the output orbitals from the inner SCF iteration will
be used for updating the exchange operator until it converges. In each inner
SCF iteration, we must solve the two-center integrals of eq. 22. The practical
numerical solution is to solve $O(N_{e}^{2})$ Poisson-like equations, which is
the most expensive part for hybrid DFT calculations within plane-wave basis
sets.
Under the ISDF decomposition of transposed Khatri-Rao product of complex-
valued Kohn-Sham orbitals
$Z=\\{z_{ij}:=\phi_{i}^{\ast}(\mathbf{r})\psi_{j}(\mathbf{r})\\}\in\mathbb{C}^{N_{r}\times
N_{e}^{2}}$, we can substitute Eq.(1) into Eq.(22)
$\begin{split}(V_{X}[\\{\phi_{i}\\}]\psi_{j})(\mathbf{r})&\approx-\sum_{i=1}^{N_{e}}\phi_{i}(\mathbf{r})\int{\frac{\sum_{\mu=1}^{N_{\mu}}\zeta_{\mu}(\mathbf{r^{\prime}})\phi_{i}^{\ast}(\mathbf{r_{\mu}})\psi_{j}(\mathbf{r_{\mu}})}{|\mathbf{r}-\mathbf{r^{\prime}}|}{d}\mathbf{r^{\prime}}}\\\
&=-\sum_{\mu=1}^{N_{\mu}}({\int{\frac{\zeta_{\mu}(\mathbf{r^{\prime}})}{|\mathbf{r}-\mathbf{r^{\prime}}|}}{d}\mathbf{r^{\prime}}{{\sum_{i=1}^{N_{e}}\phi_{i}(\mathbf{r})\phi_{i}^{\ast}(\mathbf{r_{\mu}})\psi_{j}(\mathbf{r_{\mu}})}}})\\\
&=-\sum_{\mu=1}^{N_{\mu}}({V_{\mu}(\mathbf{r}){{\sum_{i=1}^{N_{e}}\phi_{i}(\mathbf{r})\phi_{i}^{\ast}(\mathbf{r_{\mu}})\psi_{j}(\mathbf{r_{\mu}})}}})\end{split}$
(23)
where the projected Hartree-Fock exchange integral under the ISDF
decomposition is defined as
$\begin{split}V_{\mu}(\mathbf{r})=\int{\frac{\zeta_{\mu}(\mathbf{r^{\prime}})}{|\mathbf{r}-\mathbf{r^{\prime}}|}}{d}\mathbf{r^{\prime}}\end{split}$
(24)
As a consequence, the number of Poisson-like equations to be solved is reduced
to $N_{\mu}\sim O(N_{e})$ from $O(N_{e}^{2})$.
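A schematic implementation of Eq. 23 might read as follows; `solve_poisson` stands in for an FFT-based Poisson solver with the periodic Coulomb kernel and, like all names here, is an illustrative assumption.

```python
# Hedged sketch of applying V_X to the orbitals via ISDF (Eqs. 23-24).
import numpy as np

def apply_vx_isdf(phi, psi, theta, ip_idx, solve_poisson):
    """phi: (N_r, N_e), psi: (N_r, N), theta: (N_r, N_mu) IVs."""
    # N_mu Poisson-like solves, one per auxiliary basis function (Eq. 24),
    # instead of O(N_e^2) solves in the conventional formulation.
    V = np.column_stack([solve_poisson(theta[:, mu])
                         for mu in range(theta.shape[1])])
    # sum_i phi_i(r) phi_i*(r_mu): the same separable structure as Eq. 9.
    P_phi = phi @ phi[ip_idx].conj().T         # N_r x N_mu
    B = psi[ip_idx]                             # psi_j(r_mu), N_mu x N
    return -(V * P_phi) @ B                     # (V_X psi_j)(r) for all j
```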
Figure 2: Three different types of data partition for the matrix used in the
ISDF formulation for hybrid density functional calculations: (a) 2D block
cyclic partition ($I_{R}\times I_{C}$ MPI processor grid), (b) 1D column
cyclic partition (1 × $P_{n}$ MPI processor grid), and (c) 1D row cyclic
partition ($P_{n}$ × 1 MPI processor grid). $P_{n}$ is total computational
cores and $I_{R}\times I_{C}=P_{n}$.
### 2.4 Parallel implementation
We implement this complex-valued K-means clustering algorithm for large-scale
hybrid DFT calculations within plane-wave basis sets in PWDFT, 70 which is an
open-source plane-wave electronic structure software package. We also realize
a parallel implementation of such low-scaling hybrid DFT calculations in
PWDFT, as shown in FIG. 3.
Figure 3: Flowchart of the ISDF formulation in PWDFT. Red, blue and orange
boxes denote 1D column block cyclic partition, 2D block cyclic partition and
1D row block cyclic partition, respectively.
The discretized Kohn-Sham orbitals can be denoted by
$\Phi=[\phi_{1},\phi_{2},...,\phi_{N_{e}}]\in\mathbb{C}^{N_{r}\times N_{e}}$
and $\Psi=[\psi_{1},\psi_{2},...,\psi_{N}]\in\mathbb{C}^{N_{r}\times N}$. Thus
the parallelization is implemented easily with the aid of the ScaLAPACK library
when $P_{n}$ processors are used. There are three different data partition types
including 1D column cyclic partition, 1D row cyclic partition and 2D block
cyclic partition in our program, as shown in FIG. 2. The 1D column block
cyclic partition makes parallel implementation straightforward when we apply
the Hamiltonian operator to the orbitals using a sequential fast Fourier
transform (FFT) library. The 2D block partition is suitable for the QRCP method and
matrix inversion, while the 1D row block partition should be adopted for
matrix multiplication and the K-means method. The transformation among the
different data partition types is achieved by the pdgemr2d subroutine in the
ScaLAPACK library. FIG. 3 shows the flowchart of the ISDF formulation in PWDFT.
Firstly, the orbitals $\Phi$ and $\Psi$ can be stored using the 1D column
cyclic partition as the input. The interpolation points can be computed by the
K-means clustering algorithm. The weight functions are computed from $\Phi$
and $\Psi$. The $N_{r}$ grid points are equally distributed to each core to
compute the distances between grid points and centroids for parallel
implementation of the K-means clustering part. Then the quasi density matrices
$P^{\phi}(\mathbf{r}_{\mu},\mathbf{r}_{\nu})\in\mathbb{C}^{N_{\mu}\times
N_{\mu}}$ and
$P^{\psi}(\mathbf{r}_{\mu},\mathbf{r}_{\nu})\in\mathbb{C}^{N_{\mu}\times
N_{\mu}}$ in Eq. 9 are transformed into the 2D block cyclic partition to
construct the IVs. The matrix $ZC^{T}$ in Eq. 6 can be calculated in a parallel
fashion, and the matrix $CC^{T}$ is exactly a row subsample of $ZC^{T}$. After
we obtain the IVs
$\Theta=[\zeta_{1},\zeta_{2},...,\zeta_{N_{\mu}}]\in\mathbb{C}^{N_{r}\times
N_{\mu}}$ by linear equation solver in ScaLAPACK, the data partition type
should be converted from 2D block cyclic partition to 1D column cyclic
partition for computing the
$V=[V_{1},V_{2},...,V_{N_{\mu}}]\in\mathbb{C}^{N_{r}\times N_{\mu}}$.
Table 2: Accuracy of complex-valued ISDF (SSM, $\alpha=2$) based HSE06 calculations using the K-means clustering algorithm to compute the IPs with respect to the rank parameter $t$ for liquid water molecules (H2O)64, semiconducting solid Si216 and metallic aluminium-silicon alloy Al176Si24, including the VBM $E_{\textrm{VBM}}$ (eV), CBM $E_{\textrm{CBM}}$ (eV), the energy gap $E_{\textrm{g}}$ (eV), the absolute errors of Hartree-Fock exchange energy $\Delta E_{\textrm{HF}}$ (Ha/atom), total energy $\Delta E$ (Ha/atom) and atomic forces $\Delta F$ (Ha/Bohr). The ACE-enabled HSE06 calculations are used as the reference.

$t$ | $E_{\textrm{VBM}}$ | $E_{\textrm{CBM}}$ | $E_{\textrm{g}}$ | $\Delta E_{\textrm{HF}}$ | $\Delta E$ | $\Delta F$
---|---|---|---|---|---|---
Liquid water molecules (H2O)64 ($N_{\textrm{band}}=255$)
4.0 | -3.8136 | 2.4196 | 6.2333 | $2.22\times 10^{-04}$ | $3.12\times 10^{-04}$ | $1.30\times 10^{-02}$
6.0 | -3.8261 | 2.2910 | 6.1170 | $2.63\times 10^{-05}$ | $4.39\times 10^{-05}$ | $5.59\times 10^{-04}$
8.0 | -3.8291 | 2.2740 | 6.1031 | $3.46\times 10^{-06}$ | $1.49\times 10^{-05}$ | $3.22\times 10^{-04}$
10.0 | -3.8298 | 2.2713 | 6.1011 | $1.26\times 10^{-07}$ | $1.86\times 10^{-05}$ | $6.54\times 10^{-05}$
12.0 | -3.8301 | 2.2708 | 6.1010 | $5.68\times 10^{-07}$ | $2.81\times 10^{-05}$ | $5.01\times 10^{-05}$
16.0 | -3.8302 | 2.2706 | 6.1008 | $6.65\times 10^{-07}$ | $2.87\times 10^{-05}$ | $5.68\times 10^{-05}$
20.0 | -3.8300 | 2.2706 | 6.1006 | $3.65\times 10^{-09}$ | $4.17\times 10^{-07}$ | $2.11\times 10^{-05}$
Ref | -3.8299 | 2.2705 | 6.1004 | $0.00\times 10^{00}$ | $0.00\times 10^{00}$ | $0.00\times 10^{00}$
Semiconducting bulk silicon solid Si216 ($N_{\textrm{band}}=432$)
4.0 | 6.7419 | 8.3454 | 1.6035 | $2.53\times 10^{-03}$ | $2.96\times 10^{-03}$ | $2.73\times 10^{-03}$
6.0 | 6.6641 | 8.1544 | 1.4903 | $3.66\times 10^{-04}$ | $4.64\times 10^{-04}$ | $6.47\times 10^{-04}$
8.0 | 6.6513 | 8.1040 | 1.4527 | $7.45\times 10^{-05}$ | $9.97\times 10^{-05}$ | $1.82\times 10^{-04}$
10.0 | 6.6480 | 8.0959 | 1.4479 | $1.87\times 10^{-05}$ | $2.83\times 10^{-05}$ | $1.99\times 10^{-04}$
12.0 | 6.6472 | 8.0942 | 1.4470 | $4.73\times 10^{-06}$ | $9.66\times 10^{-06}$ | $4.56\times 10^{-05}$
16.0 | 6.6469 | 8.0932 | 1.4463 | $1.77\times 10^{-07}$ | $1.51\times 10^{-06}$ | $1.60\times 10^{-05}$
20.0 | 6.6468 | 8.0930 | 1.4462 | $4.27\times 10^{-07}$ | $3.66\times 10^{-07}$ | $5.35\times 10^{-06}$
Ref | 6.6468 | 8.0930 | 1.4462 | $0.00\times 10^{00}$ | $0.00\times 10^{00}$ | $0.00\times 10^{00}$
Metallic aluminium-silicon alloy Al176Si24 ($N_{\textrm{band}}=312$)
4.0 | 7.9370 | 8.0369 | 0.0999 | $3.93\times 10^{-03}$ | $4.20\times 10^{-03}$ | $4.09\times 10^{-03}$
6.0 | 7.8118 | 7.9168 | 0.1050 | $6.91\times 10^{-04}$ | $7.29\times 10^{-04}$ | $1.22\times 10^{-03}$
8.0 | 7.7760 | 7.8759 | 0.0999 | $8.50\times 10^{-05}$ | $8.82\times 10^{-05}$ | $3.20\times 10^{-04}$
10.0 | 7.7714 | 7.8703 | 0.0989 | $1.76\times 10^{-05}$ | $1.88\times 10^{-05}$ | $1.02\times 10^{-04}$
12.0 | 7.7708 | 7.8695 | 0.0987 | $6.13\times 10^{-06}$ | $7.04\times 10^{-06}$ | $4.82\times 10^{-05}$
16.0 | 7.7706 | 7.8693 | 0.0986 | $1.09\times 10^{-06}$ | $1.77\times 10^{-06}$ | $2.96\times 10^{-05}$
20.0 | 7.7705 | 7.8691 | 0.0986 | $1.73\times 10^{-07}$ | $6.25\times 10^{-07}$ | $1.18\times 10^{-05}$
Ref | 7.7705 | 7.8691 | 0.0986 | $0.00\times 10^{00}$ | $0.00\times 10^{00}$ | $0.00\times 10^{00}$
## 3 Results and discussion
In this section, we demonstrate the numerical accuracy and computational
efficiency of the K-means clustering algorithm for complex-valued Kohn-Sham
orbitals in the ISDF decomposition to accelerate hybrid density functional calculations. All
calculations are implemented in the PWDFT 70 software package and Message
Passing Interface (MPI) is used for handling data communication. We use the
Hartwigsen-Goedecker-Hutter (HGH) norm-conserving pseudopotentials72 and the
HSE06 73 functional to describe the electronic structures of molecules and
solids.
Figure 4: Atomic structures of (a) insulator liquid water molecules (H2O)64,
(b) semiconducting bulk silicon solid Si216 and (c) metallic aluminium-silicon
alloy Al176Si24. The red, white, yellow and purple circles denote O, H, Si and
Al atoms, respectively.
We first benchmark the numerical accuracy of ISDF-K-means with the SSM weight
function against standard HSE06 results. Then we compare the numerical
accuracy of ISDF-K-means with SSM, ISDF-K-means with PSM and ISDF-QRCP.
Furthermore, we compare the AIMD results from K-means with different weight
functions. Finally, we demonstrate the computational scaling of ISDF-K-means
as well as ISDF-QRCP and the parallel scalability of ISDF-enabled hybrid
density functional calculations on modern heterogeneous supercomputers.
### 3.1 Numerical accuracy
#### 3.1.1 Total energy and atomic forces
Firstly, we test the accuracy of the ISDF method for complex-valued Kohn-Sham
orbitals taking liquid water molecules (H2O)64, semiconducting bulk silicon
Si216 ($E_{gap}$ = 1.45 eV) and metallic disordered aluminium-silicon alloy
Al176Si24 ($E_{gap}<0.1$ eV) as examples, whose crystal structures
are demonstrated in FIG. 4. The cutoff energies for (H2O)64, Si216 and
Al176Si24 are 60.0, 20.0 and 20.0 Ha, respectively. For (H2O)64 system, the
DFT-D2 method is used to account for the van der Waals (VdW) interaction. 74
We obtain all reference results utilizing the adaptively compressed exchange
(ACE) algorithm, 75, 70 which reduces the cost of applying the Hartree-Fock
exchange operator to the Kohn-Sham orbitals without loss of accuracy.
Figure 5: Accuracy of complex-valued ISDF based hybrid functional calculations
(HSE06) obtained by using the K-means (SSM and PSM) and QRCP procedures to
select the interpolation points, with the rank parameter $t$ varying from 4 to 20
for Si216, Al176Si24 and (H2O)64, including the error of ((a), (b), (c))
Hartree-Fock exchange energy $\Delta E_{\textrm{HF}}$ (Ha/atom), ((d), (e),
(f)) total energy $\Delta E$ (Ha/atom) and ((g), (h), (i)) atomic forces
$\Delta F$ (Ha/Bohr).
Table 2 demonstrates the valence band maximum (VBM) energy level, the
conduction band minimum (CBM) energy level, the energy gap $E_{\textrm{g}}$,
the absolute errors of Hartree-Fock exchange energy $\Delta E_{\textrm{HF}}$,
total energy $\Delta E$ (Ha/atom) as well as atomic forces $\Delta F$
(Ha/Bohr) by ISDF-based HSE06 calculations using K-means with the SSM weight
function with respect to the rank parameter $t$. Here we define
$\displaystyle\Delta E_{\textrm{HF}}=(E_{\textrm{HF}}^{ISDF}-E_{\textrm{HF}}^{Ref})/N_{A},\quad\Delta E=(E^{ISDF}-E^{Ref})/N_{A},\quad\Delta F=\max\limits_{I}|F_{I}^{ISDF}-F_{I}^{Ref}|$ (25)
where $N_{A}$ is the total number of atoms, and $I$ is the atom index.
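These metrics translate directly into code, as in the short sketch below with illustrative variable names; the force arrays are assumed to have shape (number of atoms, 3), and the maximum is taken componentwise as one plausible reading of Eq. 25.

```python
# Hedged sketch of the error metrics in Eq. 25.
import numpy as np

def accuracy_metrics(E_hf_isdf, E_hf_ref, E_isdf, E_ref, F_isdf, F_ref, n_atoms):
    dE_hf = (E_hf_isdf - E_hf_ref) / n_atoms      # Ha/atom
    dE = (E_isdf - E_ref) / n_atoms               # Ha/atom
    # Maximum absolute force deviation over all atoms (Ha/Bohr).
    dF = np.abs(np.asarray(F_isdf) - np.asarray(F_ref)).max()
    return dE_hf, dE, dF
```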
Figure 6: Variation of eigenvalue error in hybrid HSE06 DFT calculations using
K-means with SSM and PSM as well as QRCP method with respect to the rank
parameter $t$ ($t$ = 6 and $t$ = 12) for (a) Si216, (b) Al176Si24 and (c)
(H2O)64.
We remark that the above physical quantities gradually approach the reference
values as the rank parameter $t$ increases for the three studied systems.
Besides, for all tested systems, the error of the energy gap is less than 0.01
eV and the error of the total energy per atom is less than $10^{-4}$ Ha/atom
when $t$ is set to 8.0, which suggests that a small rank parameter can yield
sufficiently accurate results. The accuracy of the standard QRCP method as
well as the K-means method with SSM and PSM is compared in FIG. 5. When the same rank parameter
$t$ is set, the K-means method with SSM yields higher accuracy than K-means
with PSM for Hartree-Fock exchange energy, total energy and atomic forces in
most cases. In order to further compare the accuracy of K-means with different
weight functions and QRCP, we demonstrate variation of eigenvalue error with
different rank parameters $t$ = 6.0 and 12.0 for Si216, Al176Si24 and (H2O)64,
as shown in FIG. 6. The error of the $i$-th eigenvalue is calculated by
$\Delta\epsilon_{i}=\epsilon_{i}^{ISDF}-\epsilon_{i}^{Ref}$. Similarly, it is
obvious that the K-means with SSM shows a lower error of eigenvalues for all
tested systems.
#### 3.1.2 AIMD
In order to further verify the accuracy of the ISDF method with the improved
K-means algorithm, we perform AIMD simulations driven by hybrid DFT
calculations for the bulk silicon system Si64, the aluminium-silicon alloy
Al176Si24 and silicon dioxide SiO2 under the NVE ensemble, and for liquid
water molecules (H2O)64 under the NVT ensemble. For the NVT ensemble, a
single-level Nose-Hoover thermostat 76, 77 is used at a temperature of 295 K,
and the mass of the Nose-Hoover thermostat is 85,000 a.u.
Figure 7: Potential energy of hybrid HSE06 DFT AIMD simulations using the
ISDF-K-means with different weight functions and ISDF-QRCP methods, with the
ACE-enabled procedure as the reference, on the bulk silicon Si64.
FIG. 7 shows the total potential energy along the MD trajectory with a time
step of 1.0 fs on the bulk silicon system Si64, obtained by the ISDF-K-means
with different weight functions and by the ISDF-QRCP method. We remark that the
absolute error from K-means method with SSM is smaller than that from K-means
method with PSM. Moreover, the K-means method with SSM yields a much smoother
potential energy change compared with that yielded by the K-means method with
PSM. The possible reason is that the interpolation points selected by weight
function SSM represent the distribution of electron density better than that
by weight function PSM. Therefore, weight function SSM will perform better in
some cases where a smooth potential energy surface is required, such as
geometry optimization. In addition, we also compare the potential energy
change along the MD trajectory of K-means method with SM ($\alpha=1$), SSM
($\alpha=2$), SCM ($\alpha=3$) as well as SQM ($\alpha=4$). We remark that the
weight function SM exhibits a similar performance to the weight function SSM.
Note that the K-means method with SQM yields almost the same potential energy
change as that from the K-means method with PSM. It is because both weight
function SQM and PSM are essentially quartic modulus of wavefunctions in
hybrid functional calculations where $\Psi$ is the same as $\Phi$ when the
unoccupied orbitals are not computed.
Figure 8: Relative energy drift of hybrid HSE06 AIMD simulations by using
the ISDF-K-means with different weight functions and ISDF-QRCP methods as well
as ACE-enabled procedure as the reference on the bulk silicon Si64.
Energy drift represents the change of total energy compared to the initial
total energy along the MD trajectory, which can be defined by
$E_{drift}(t)=(E_{tot}(t)-E_{tot}(0))/E_{tot}(0)$. FIG. 8 shows the controlled
energy drift by the ISDF-K-means with different weight functions and ISDF-QRCP
on bulk silicon system Si64. We remark that the K-means with SSM exhibits less
loss of accuracy than the K-means with PSM and QRCP methods. Besides,
divergence occurs for the energy drift of K-means with PSM while that of
K-means with SSM maintains stable accuracy. On the other hand, the performance
of the K-means method with SM is similar to that of the K-means method with
SSM, while the performance of the K-means method with SQM is similar to that
of the K-means method with PSM, which is consistent with the results from the
energy potential change. In addition, for AIMD with a time step of 1.0 fs on
aluminium-silicon alloy Al176Si24, as FIG. S1 and FIG. S2 show, the K-means
method with SSM also yields slightly smoother potential energy and less loss
of accuracy of energy drift than the K-means method with PSM. Similarly, for
liquid water molecules, the K-means with SSM exhibits higher accuracy than the
K-means with PSM on the whole, as shown in FIG. S5. From FIG. S4, the potential
energy yielded by the K-means with SSM is more consistent with the reference
results compared with that by the K-means with PSM for water molecules, which
suggests a stronger stability of SSM. In order to further check the stability
of the K-means with SSM method, we perform a long MD simulation for 21.5 ps
with a time step of 12.0 fs on Si64, as FIG. 9 shows. K-means with SSM
exhibits less loss of accuracy and better stability than K-means with PSM.
Figure 9: Relative energy drift of hybrid HSE06 AIMD simulations with a time
step of 12.0 fs for Si64.
We also sample the oxygen-oxygen radial distribution functions (RDF) by hybrid
HSE06 AIMD simulations for 0.37 ps with a time step of 1.0 fs on liquid water
system (H2O)64 under NVT ensemble at 295 K using K-means with SSM and K-means
with PSM, as FIG. 10 shows. We remark that the root mean square error (RMSE)
of K-means with SSM (0.022) is much smaller than that of K-means with PSM
(0.042), which demonstrates better accuracy for SSM compared with PSM.
Therefore, this improved K-means clustering algorithm in the ISDF
decomposition can accurately and efficiently realize low-scaling, large-scale
and long-time AIMD with complex-valued hybrid DFT calculations.
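For reference, the oxygen-oxygen RDF can be estimated from the trajectory as in the minimal sketch below, assuming an orthorhombic periodic box, oxygen coordinates of shape (n_frames, n_O, 3), and a standard ideal-gas normalization; all names are illustrative.

```python
# Hedged sketch of g_OO(r) from an AIMD trajectory (minimum-image convention).
import numpy as np

def rdf_oo(traj_O, box, r_max, n_bins=200):
    n_frames, n_O, _ = traj_O.shape
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    iu = np.triu_indices(n_O, k=1)                 # unique O-O pairs
    for frame in traj_O:
        d = frame[:, None, :] - frame[None, :, :]
        d -= box * np.round(d / box)               # minimum image
        r = np.sqrt((d ** 2).sum(-1))[iu]
        hist += np.histogram(r, bins=edges)[0]
    # Normalize by the ideal-gas pair count in each spherical shell.
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = n_frames * n_O * (n_O - 1) / 2.0 * shell / box.prod()
    return 0.5 * (edges[:-1] + edges[1:]), hist / ideal
```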
Figure 10: Oxygen-oxygen radial distribution functions gOO(r) of the liquid
water system (H2O)64 at 295 K obtained from hybrid HSE06 + DFT-D2 AIMD
simulations for 0.37 ps with a time step of 1.0 fs using the ISDF-K-means with
SSM (red solid line) and ISDF-K-means with PSM (blue solid line) methods and
the ACE algorithm (black dashed line as the reference). The root mean square
error (RMSE) of SSM and PSM with respect to the reference is 0.022 and 0.042,
respectively.
In addition to studying the oxygen-oxygen RDF of liquid water system (H2O)64,
we also calculate the power spectrum 78 of the crystal silicon dioxide SiO2 by
using HSE06 AIMD simulations under the NVE ensemble at 300 K with K-means with
SSM and K-means with PSM. As shown in FIG. 11, the black vertical lines mark
three vibration frequencies measured in experiment 79: the Si-O asymmetrical
bending vibration at 464 cm$^{-1}$, the Si-O symmetrical bending vibration at
695 cm$^{-1}$, and the Si-O asymmetrical stretching vibration at 1080-1175
cm$^{-1}$. The spectrum obtained with K-means with SSM lies closer to the
asymmetrical bending and asymmetrical stretching frequencies. Furthermore, a
small peak at 695 cm$^{-1}$ is found with K-means with SSM, whereas it is
absent with K-means with PSM. Therefore, the improved K-means clustering with
SSM in the ISDF algorithm can simulate more accurate power-spectrum
vibrational frequencies in hybrid AIMD simulations.
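The power spectrum discussed above is commonly computed as the Fourier transform of the velocity autocorrelation function (cf. Ref. 78); the sketch below shows one such recipe, assuming velocities of shape (n_steps, n_atoms, 3) and a timestep in femtoseconds.

```python
# Hedged sketch: vibrational power spectrum from the velocity
# autocorrelation function (VACF), computed via FFT with zero padding.
import numpy as np

def power_spectrum(vel, dt_fs):
    n = vel.shape[0]
    v = vel.reshape(n, -1)                       # flatten degrees of freedom
    f = np.fft.rfft(v, n=2 * n, axis=0)          # zero-padded FFT
    acf = np.fft.irfft(f * f.conj(), axis=0)[:n].sum(axis=1)
    acf /= acf[0]                                # normalized VACF
    spec = np.abs(np.fft.rfft(acf * np.hanning(n)))
    # Convert the frequency axis to wavenumbers; c = 2.9979e10 cm/s.
    freq_hz = np.fft.rfftfreq(n, d=dt_fs * 1e-15)
    return freq_hz / 2.9979e10, spec             # (cm^-1, intensity)
```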
Figure 11: Power spectrum of the crystal silicon dioxide system SiO2 at 300 K
obtained from hybrid HSE06 AIMD simulations for 2 ps with a time step of 1.0 fs
using the ISDF-K-means with SSM (red solid line) and ISDF-K-means with PSM
(blue solid line) methods and the ACE algorithm (black dashed line as the
reference). The experimental results are shown as four black vertical lines.
### 3.2 Computational efficiency
#### 3.2.1 Computational scaling
To compare the computational efficiency of the K-means and QRCP methods, we
show in FIG. S5 the computational time of the IPs selected by QRCP and by the
K-means method, and of the IVs, for complex-valued wavefunctions. The tested
systems are bulk silicon systems containing from 8 to 512 atoms. The cutoff energy is set
to 10 Ha. From the fitting curves, we remark that selecting IPs by K-means is
much faster than that by QRCP. For periodic systems where $N_{r}$ is
proportional to the number of atoms, the K-means algorithm scales as
$O(N^{2.0})$ while the QRCP method scales as $O(N^{3.0})$, which is consistent
with the conclusion for numerical atomic orbitals. 34 The IVs procedure by the
least-squares method scales as $O(N^{2.4})$. For the ISDF-QRCP method, the IPs
procedure is the most time-consuming part due to the expensive QRCP. However,
the IVs procedure becomes the dominant part when the K-means method replaces
the QRCP.
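The quoted scaling exponents can be extracted from measured wall times by a log-log linear fit, as in the snippet below; the timing values are placeholders rather than measured data.

```python
# Hedged sketch: estimating the empirical scaling exponent k in O(N^k).
import numpy as np

n_atoms = np.array([8, 64, 216, 512])            # bulk silicon test systems
t_wall = np.array([0.01, 0.6, 6.5, 36.0])        # placeholder seconds
k, logc = np.polyfit(np.log(n_atoms), np.log(t_wall), 1)
print(f"fitted scaling: O(N^{k:.1f})")
```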
#### 3.2.2 Parallel scalability
Figure 12: (a) The change of wallclock time in ISDF, inner SCF and outer SCF
iteration with respect to the number of cores for the Si1000 system. (b) The
change of wallclock time in ISDF, inner SCF and outer SCF iteration with
respect to system size.
We test the parallel scalability of the complex-valued ISDF method with the
K-means clustering algorithm for large-scale hybrid density functional (HSE06)
calculations, as shown in FIG. 12. FIG. 12(a) shows the change of the
wallclock time of the ISDF part, the inner SCF and the outer SCF in one outer
SCF iteration with respect to the number of cores for the bulk silicon system
containing 1,000 atoms, which illustrates the strong parallel scalability of
our algorithm. We remark that the wallclock times of the inner SCF, ISDF and
outer SCF exhibit excellent scalability up to 2,000 CPU cores. As for the weak
parallel scalability, FIG. 12(b) demonstrates the change of wallclock time
with respect to the number of atoms for bulk silicon systems containing from
64 to 2,744 atoms. The ISDF method scales well with respect to the system size (up to
2,744 atoms) on 5,504 CPU cores. The hybrid density functional calculations
for complex-valued Kohn-Sham orbitals require more computation and memory,
which can be improved by an OpenMP parallel implementation in our future
work. 80 Therefore, this improved K-means clustering algorithm can
accurately and efficiently accelerate large-scale and long-time ab initio
molecular dynamics with complex-valued hybrid DFT calculations.
## 4 Conclusion and outlook
In conclusion, we present an improved K-means clustering algorithm for
complex-valued wavefunctions to select interpolation sampling points in the
ISDF decomposition. By applying the new K-means clustering algorithm with SSM
into hybrid density functional calculations, we demonstrate that the improved
K-means clustering algorithm yields more accurate and smoother interpolation
sampling points compared to the K-means clustering with PSM for complex-valued
Kohn-Sham orbitals. In particular, K-means with SSM exhibits less loss of
accuracy and better stability for AIMD RDF and power spectrum simulations with
hybrid density functional. Moreover, we implement the parallel ISDF
decomposition for large-scale hybrid functional calculations. We show that the
ISDF can scale up to 5,504 CPU cores for a system containing 2,744 atoms. The
complex-valued wavefunction is indispensable for DFT with multiple k-point
sampling and for RT-TDDFT. The K-means clustering algorithm is more suitable
than the QRCP method for dynamic simulations because of its lower cost.
Therefore, we will apply the complex-valued K-means clustering algorithm to
excited-state RT-TDDFT with hybrid functionals in our future work.
## Acknowledgments
This work is partly supported by the Strategic Priority Research Program of
the Chinese Academy of Sciences (XDB0450101), the Innovation Program for
Quantum Science and Technology (2021ZD0303306), the National Natural Science
Foundation of China (22288201, 22173093, 21688102), by the Anhui Provincial
Key Research and Development Program (2022a05020052), the National Key
Research and Development Program of China (2016YFA0200604, 2021YFB0300600),
and the CAS Project for Young Scientists in Basic Research (YSBR-005). The
authors thank the Hefei Advanced Computing Center, the Supercomputing Center
of Chinese Academy of Sciences, the Supercomputing Center of USTC, the
National Supercomputing Center in Wuxi, and Tianjin, Shanghai, and Guangzhou
Supercomputing Centers for the computational resources.
## Data Availability
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## Appendix A Verification of the feasibility for K-means with SSM
In order to verify the feasibility of SSM as the weight function, here we
demonstrate that the interpolation points using K-means with SSM approximately
minimize the residual for the ISDF decomposition. For simplicity, suppose
$N=N_{\phi}=N_{\psi}$; the transposed Khatri-Rao product $Z$ is
$Z(\mathbf{r})=[\phi_{i}(\mathbf{r})\psi_{j}^{\ast}(\mathbf{r})]_{i,j=1}^{N}$.
We cluster the $N_{r}$ matrix rows of $Z$ into subsets
$\\{C_{\mu}\\}_{\mu=1}^{N_{\mu}}$ and select the $N_{\mu}$ matrix rows
$Z(\mathbf{r}_{\mu})$ to represent each $C_{\mu}$. Thus the error of the ISDF
can be approximated as
$R=\sum_{\mu=1}^{N_{\mu}}\sum_{\mathbf{r_{k}}\in
C_{\mu}}||Z(\mathbf{r_{k}})-Proj_{span\\{Z(\mathbf{r}_{\mu})\\}}Z(\mathbf{r_{k}})||^{2}$
(26)
where the projection is defined via the $L^{2}$ inner product as
$Proj_{span\\{Z(\mathbf{r}_{\mu})\\}}Z(\mathbf{r_{k}})=\frac{Z(\mathbf{r_{k}})\cdot
Z^{\ast}(\mathbf{r}_{\mu})}{Z(\mathbf{r}_{\mu})\cdot
Z^{\ast}(\mathbf{r}_{\mu})}Z(\mathbf{r}_{\mu})$ (27)
We define the electron density
$\rho(\mathbf{r}_{\mu})=\sum_{i=1}^{N}|\phi_{i}(\mathbf{r}_{\mu})|^{2}=\sum_{j=1}^{N}|\psi_{j}(\mathbf{r}_{\mu})|^{2}$,
$\Phi(\mathbf{r})=[\phi_{i}(\mathbf{r})]_{i=1}^{N}$,
$\Psi(\mathbf{r})=[\psi_{j}(\mathbf{r})]_{j=1}^{N}$.
$\begin{split}Z(\mathbf{r}_{\mu})\cdot
Z^{\ast}(\mathbf{r}_{\mu})&=(\Phi(\mathbf{r}_{\mu})\cdot\Psi^{\ast}(\mathbf{r}_{\mu}))(\Phi^{\ast}(\mathbf{r}_{\mu})\cdot\Psi(\mathbf{r}_{\mu}))\\\
&=\sum_{i,j=1}^{N}|\phi_{i}(\mathbf{r}_{\mu})|^{2}|\psi_{j}(\mathbf{r}_{\mu})|^{2}\\\
&=(\sum_{i=1}^{N}|\phi_{i}(\mathbf{r}_{\mu})|^{2})(\sum_{j=1}^{N}|\psi_{j}(\mathbf{r}_{\mu})|^{2})\\\
&=\rho^{2}(\mathbf{r}_{\mu})\end{split}$ (28)
Then we have
$\begin{split}R&=\sum_{\mu=1}^{N_{\mu}}\sum_{\mathbf{r}_{k}\in C_{\mu}}||Z(\mathbf{r}_{k})-\frac{Z(\mathbf{r}_{k})\cdot Z^{\ast}(\mathbf{r}_{\mu})}{Z(\mathbf{r}_{\mu})\cdot Z^{\ast}(\mathbf{r}_{\mu})}Z(\mathbf{r}_{\mu})||^{2}\\\ &=\sum_{\mu=1}^{N_{\mu}}\sum_{\mathbf{r}_{k}\in C_{\mu}}Z(\mathbf{r}_{k})\cdot Z^{\ast}(\mathbf{r}_{k})[1-\frac{(Z^{\ast}(\mathbf{r}_{k})\cdot Z(\mathbf{r}_{\mu}))(Z(\mathbf{r}_{k})\cdot Z^{\ast}(\mathbf{r}_{\mu}))}{(Z(\mathbf{r}_{k})\cdot Z^{\ast}(\mathbf{r}_{k}))(Z(\mathbf{r}_{\mu})\cdot Z^{\ast}(\mathbf{r}_{\mu}))}]\\\ &=\sum_{\mu=1}^{N_{\mu}}\sum_{\mathbf{r}_{k}\in C_{\mu}}\rho^{2}(\mathbf{r}_{k})[1-\frac{(\Phi(\mathbf{r}_{k})\cdot\Phi^{\ast}(\mathbf{r}_{\mu}))^{2}(\Psi(\mathbf{r}_{k})\cdot\Psi^{\ast}(\mathbf{r}_{\mu}))^{2}}{\rho^{2}(\mathbf{r}_{k})\rho^{2}(\mathbf{r}_{\mu})}]\\\ &=\sum_{\mu=1}^{N_{\mu}}\sum_{\mathbf{r}_{k}\in C_{\mu}}\rho^{2}(\mathbf{r}_{k})[1-\cos^{2}(\theta_{1}(\mathbf{r}_{k},\mathbf{r}_{\mu}))\cos^{2}(\theta_{2}(\mathbf{r}_{k},\mathbf{r}_{\mu}))]\\\ &=\sum_{\mu=1}^{N_{\mu}}\sum_{\mathbf{r}_{k}\in C_{\mu}}\rho^{2}(\mathbf{r}_{k})[\sin^{2}(\theta_{1}(\mathbf{r}_{k},\mathbf{r}_{\mu}))+\sin^{2}(\theta_{2}(\mathbf{r}_{k},\mathbf{r}_{\mu}))\\\ &\quad\quad\quad\quad-\sin^{2}(\theta_{1}(\mathbf{r}_{k},\mathbf{r}_{\mu}))\sin^{2}(\theta_{2}(\mathbf{r}_{k},\mathbf{r}_{\mu}))]\\\ &\leq\sum_{\mu=1}^{N_{\mu}}\sum_{\mathbf{r}_{k}\in C_{\mu}}\rho^{2}(\mathbf{r}_{k})[\sin^{2}(\theta_{1}(\mathbf{r}_{k},\mathbf{r}_{\mu}))+\sin^{2}(\theta_{2}(\mathbf{r}_{k},\mathbf{r}_{\mu}))]\end{split}$ (29)
where $\theta_{1}(\mathbf{r}_{k},\mathbf{r}_{\mu})$ and
$\theta_{2}(\mathbf{r}_{k},\mathbf{r}_{\mu})$ are the angles between the
vectors $\Phi(\mathbf{r}_{k})$ and $\Phi^{\ast}(\mathbf{r}_{\mu})$ as well as
$\Psi(\mathbf{r}_{k})$ and $\Psi^{\ast}(\mathbf{r}_{\mu})$, respectively.
Because
$\begin{split}&\rho(\mathbf{r}_{k})[sin^{2}(\theta_{1}(\mathbf{r}_{k},\mathbf{r}_{\mu}))+sin^{2}(\theta_{2}(\mathbf{r}_{k},\mathbf{r}_{\mu}))]\\\
&=\Phi(\mathbf{r}_{k})\cdot\Phi^{\ast}(\mathbf{r}_{k})sin^{2}(\theta_{1}(\mathbf{r}_{k},\mathbf{r}_{\mu}))+\Psi(\mathbf{r}_{k})\cdot\Psi^{\ast}(\mathbf{r}_{k})sin^{2}(\theta_{2}(\mathbf{r}_{k},\mathbf{r}_{\mu}))\\\
&\leq||\Phi(\mathbf{r}_{k})-\Phi^{\ast}(\mathbf{r}_{\mu})||^{2}+||\Psi(\mathbf{r}_{k})-\Psi^{\ast}(\mathbf{r}_{\mu})||^{2}\end{split}$
(30)
we can obtain
$\begin{split}R&\leq\sum_{\mu=1}^{N_{\mu}}\sum_{\mathbf{r}_{k}\in C_{\mu}}\rho(\mathbf{r}_{k})[||\Phi(\mathbf{r}_{k})-\Phi^{\ast}(\mathbf{r}_{\mu})||^{2}+||\Psi(\mathbf{r}_{k})-\Psi^{\ast}(\mathbf{r}_{\mu})||^{2}]\\\ &\approx\sum_{\mu=1}^{N_{\mu}}\sum_{\mathbf{r}_{k}\in C_{\mu}}\rho(\mathbf{r}_{k})(||\nabla_{r}\Phi(\mathbf{r}_{\mu})||^{2}+||\nabla_{r}\Psi(\mathbf{r}_{\mu})||^{2})||\mathbf{r}_{k}-\mathbf{r}_{\mu}||^{2}\\\ &=\frac{1}{2}\sum_{\mu=1}^{N_{\mu}}\sum_{\mathbf{r}_{k}\in C_{\mu}}(\sum_{i=1}^{N}|\phi_{i}(\mathbf{r}_{k})|^{2}+\sum_{j=1}^{N}|\psi_{j}(\mathbf{r}_{k})|^{2})\\\ &\quad\quad\quad\quad\times(||\nabla_{r}\Phi(\mathbf{r}_{\mu})||^{2}+||\nabla_{r}\Psi(\mathbf{r}_{\mu})||^{2})||\mathbf{r}_{k}-\mathbf{r}_{\mu}||^{2}\end{split}$ (31)
Thus the minimization criterion of the weighted K-means with SSM can be derived
when the spatial inhomogeneity of the gradients of $\Phi(\mathbf{r})$ and
$\Psi(\mathbf{r})$ is neglected.
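As a quick numerical sanity check of the intermediate identity in Eq. 28, the snippet below verifies $Z(\mathbf{r})\cdot Z^{\ast}(\mathbf{r})=\rho^{2}(\mathbf{r})$ for random complex orbitals with $\Psi=\Phi$; the data are purely illustrative.

```python
# Hedged numerical check of Eq. 28 with random complex orbitals.
import numpy as np

rng = np.random.default_rng(1)
N_r, N = 64, 6
Phi = rng.standard_normal((N_r, N)) + 1j * rng.standard_normal((N_r, N))
# Transposed Khatri-Rao product with Psi = Phi (occupied orbitals only).
Z = np.einsum('ri,rj->rij', Phi, Phi.conj()).reshape(N_r, -1)
rho = (np.abs(Phi) ** 2).sum(axis=1)             # electron density per point
lhs = np.einsum('rk,rk->r', Z, Z.conj()).real    # Z(r) . Z*(r)
assert np.allclose(lhs, rho ** 2)                # Eq. 28 holds row-wise
```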
## References
* Slyusar 1998 Slyusar, V. End Products in Matrices in Radar Applications. _Radioelectron. Commun. Syst._ 1998, _41_ , 50–53
* Khatri and Rao 1968 Khatri, C. G.; Rao, C. R. Solutions to Some Functional Equations and Their Applications to Characterization of Probability Distributions. _Sankhya_ 1968, _30_ , 167–180
* Hohenberg and Kohn 1964 Hohenberg, P.; Kohn, W. Inhomogeneous Electron Gas. _Phys. Rev._ 1964, _136_ , B864
* Kohn and Sham 1965 Kohn, W.; Sham, L. J. Self-Consistent Equations Including Exchange and Correlation Effects. _Phys. Rev._ 1965, _140_ , A1133
* Slater 1951 Slater, J. C. A Simplification of the Hartree-Fock Method. _Phys. Rev._ 1951, _81_ , 385
* Becke 1993 Becke, A. D. A New Mixing of Hartree–Fock and Local Density-Functional Theories. _J. Chem. Phys._ 1993, _98_ , 1372–1377
* Casida 1995 Casida, M. E. Response Theory for Molecules. _Recent Advances in Density Functional Methods:(Part I)_ 1995, _1_ , 155
* Sternheimer 1954 Sternheimer, R. M. Electronic Polarizabilities of Ions from the Hartree-Fock Wave Functions. _Phys. Rev._ 1954, _96_ , 951
* Hedin 1965 Hedin, L. New Method for Calculating the One-Particle Green’s Function with Application to the Electron-gas Problem. _Phys. Rev._ 1965, _139_ , A796
* Hybertsen and Louie 1986 Hybertsen, M. S.; Louie, S. G. Electron Correlation in Semiconductors and Insulators: Band Gaps and Quasiparticle Energies. _Phys. Rev. B_ 1986, _34_ , 5390
* Aryasetiawan and Gunnarsson 1998 Aryasetiawan, F.; Gunnarsson, O. The GW method. _Rep. Prog. Phys._ 1998, _61_ , 237
* Onida et al. 2002 Onida, G.; Reining, L.; Rubio, A. Electronic Excitations: Density-Functional versus Many-body Green’s-Function Approaches. _Rev. Mod. Phys._ 2002, _74_ , 601
* Salpeter and Bethe 1951 Salpeter, E. E.; Bethe, H. A. A Relativistic Equation for Bound-State Problems. _Phys. Rev._ 1951, _84_ , 1232
* Häser 1993 Häser, M. Møller-Plesset (MP2) Perturbation Theory for Large Molecules. _Theor. Chim. Acta._ 1993, _87_ , 147–173
* Feyereisen et al. 1993 Feyereisen, M.; Fitzgerald, G.; Komornicki, A. Use of Approximate Integrals in Ab Initio Theory. An Application in MP2 Energy Calculations. _Chem. Phys. Lett._ 1993, _208_ , 359–363
* Bernholdt and Harrison 1996 Bernholdt, D. E.; Harrison, R. J. Large-scale Correlated Electronic Structure Calculations: the RI-MP2 Method on Parallel Computers. _Chem. Phys. Lett._ 1996, _250_ , 477–484
* Ren et al. 2012 Ren, X.; Rinke, P.; Joas, C.; Scheffler, M. Random-Phase Approximation and Its Applications in Computational Chemistry and Materials Science. _J. Mater. Sci._ 2012, _47_ , 7447–7471
* Beebe and Linderberg 1977 Beebe, N. H. F.; Linderberg, J. Simplifications in the Generation and Transformation of Two-electron Integrals in Molecular Calculations. _Int. J. Quantum. Chem._ 1977, _12_ , 683–705
* Røeggen and Wisløff-Nilssen 1986 Røeggen, I.; Wisløff-Nilssen, E. On the Beebe-Linderberg Two-Electron Integral Approximation. _Chem. Phys. Lett._ 1986, _132_ , 154–160
* Whitten 1973 Whitten, J. L. Coulombic Potential Energy Integrals and Approximations. _J. Chem. Phys._ 1973, _58_ , 4496–4501
* Dunlap et al. 1979 Dunlap, B. I.; Connolly, J. W. D.; Sabin, J. R. On Some Approximations in Applications of X$\alpha$ Theory. _J. Chem. Phys._ 1979, _71_ , 3396–3402
* Vahtras et al. 1993 Vahtras, O.; Almlöf, J.; Feyereisen, M. Integral Approximations for LCAO-SCF Calculations. _Chem. Phys. Lett._ 1993, _213_ , 514–518
* Weigend 2002 Weigend, F. A Fully Direct RI-HF Algorithm: Implementation, Optimised Auxiliary Basis Sets, Demonstration of Accuracy and Efficiency. _Phys. Chem. Chem. Phys._ 2002, _4_ , 4285–4291
* Hohenstein et al. 2012 Hohenstein, E. G.; Parrish, R. M.; Martínez, T. J. Tensor Hypercontraction Density Fitting. I. Quartic Scaling Second- and Third-Order Møller-Plesset Perturbation Theory. _J. Chem. Phys._ 2012, _137_ , 044103
* Parrish et al. 2012 Parrish, R. M.; Hohenstein, E. G.; Martínez, T. J.; Sherrill, C. D. Tensor Hypercontraction. II. Least-squares Renormalization. _J. Chem. Phys._ 2012, _137_ , 224106
* Hohenstein et al. 2012 Hohenstein, E. G.; Parrish, R. M.; Sherrill, C. D.; Martínez, T. J. Communication: Tensor Hypercontraction. III. Least-squares Tensor Hypercontraction for the Determination of Correlated Wavefunctions. _J. Chem. Phys._ 2012, _137_ , 221101
* Friesner 1985 Friesner, R. A. Solution of Self-Consistent Field Electronic Structure Equations by a Pseudospectral Method. _Chem. Phys. Lett._ 1985, _116_ , 39–43
* Friesner 1987 Friesner, R. A. Solution of the Hartree–Fock Equations for Polyatomic Molecules by a Pseudospectral Method. _J. Chem. Phys._ 1987, _86_ , 3522–3531
* Lu and Ying 2015 Lu, J.; Ying, L. Compression of the Electron Repulsion Integral Tensor in Tensor Hypercontraction Format with Cubic Scaling Cost. _J. Comput. Phys._ 2015, _302_ , 329–335
* Lu and Thicke 2017 Lu, J.; Thicke, K. Cubic Scaling Algorithms for RPA Correlation Using Interpolative Separable Density Fitting. _J. Comput. Phys._ 2017, _351_ , 187–202
* Hu et al. 2017 Hu, W.; Lin, L.; Yang, C. Interpolative Separable Density Fitting Decomposition for Accelerating Hybrid Density Functional Calculations with Applications to Defects in Silicon. _J. Chem. Theory Comput._ 2017, _13_ , 5420–5431
* Dong et al. 2018 Dong, K.; Hu, W.; Lin, L. Interpolative Separable Density Fitting Through Centroidal Voronoi Tessellation with Applications to Hybrid Functional Electronic Structure Calculations. _J. Chem. Theory Comput._ 2018, _14_ , 1311–1320
* Qin et al. 2020 Qin, X.; Liu, J.; Hu, W.; Yang, J. Interpolative Separable Density Fitting Decomposition for Accelerating Hartree–Fock Exchange Calculations within Numerical Atomic Orbitals. _J. Phys. Chem. A_ 2020, _124_ , 5664–5674
* Qin et al. 2020 Qin, X.; Li, J.; Hu, W.; Yang, J. Machine Learning K-Means Clustering Algorithm for Interpolative Separable Density Fitting to Accelerate Hybrid Functional Calculations with Numerical Atomic Orbitals. _J. Phys. Chem. A_ 2020, _124_ , 10066–10074
* Malone et al. 2018 Malone, F. D.; Zhang, S.; Morales, M. A. Overcoming the Memory Bottleneck in Auxiliary Field Quantum Monte Carlo Simulations with Interpolative Separable Density Fitting. _J. Chem. Theory Comput._ 2018, _15_ , 256–264
* Hu et al. 2020 Hu, W.; Liu, J.; Li, Y.; Ding, Z.; Yang, C.; Yang, J. Accelerating Excitation Energy Computation in Molecules and Solids within Linear-Response Time-Dependent Density Functional Theory via Interpolative Separable Density Fitting Decomposition. _J. Chem. Theory Comput._ 2020, _16_ , 964–973
* Lee et al. 2019 Lee, J.; Lin, L.; Head-Gordon, M. Systematically Improvable Tensor Hypercontraction: Interpolative Separable Density-Fitting for Molecules Applied to Exact Exchange, Second-and Third-order Møller–Plesset Perturbation Theory. _J. Chem. Theory Comput._ 2019, _16_ , 243–263
* Gao and Chelikowsky 2020 Gao, W.; Chelikowsky, J. R. Accelerating Time-Dependent Density Functional Theory and GW Calculations for Molecules and Nanoclusters with Symmetry Adapted Interpolative Separable Density Fitting. _J. Chem. Theory Comput._ 2020, _16_ , 2216–2223
* Ma et al. 2021 Ma, H.; Wang, L.; Wan, L.; Li, J.; Qin, X.; Liu, J.; Hu, W.; Lin, L.; Yang, C.; Yang, J. Realizing Effective Cubic-Scaling Coulomb Hole Plus Screened Exchange Approximation in Periodic Systems via Interpolative Separable Density Fitting with a Plane-Wave Basis Set. _J. Phys. Chem. A_ 2021, _125_ , 7545–7557
* Hu et al. 2018 Hu, W.; Shao, M.; Cepellotti, A.; Felipe, H.; Lin, L.; Thicke, K.; Yang, C.; Louie, S. G. Accelerating Optical Absorption Spectra and Exciton Energy Computation via Interpolative Separable Density Fitting. International Conference on Computational Science. 2018; pp 604–617
* Vassilvitskii and Arthur 2006 Vassilvitskii, S.; Arthur, D. k-means++: The Advantages of Careful Seeding. Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms. 2006; pp 1027–1035
* Singh et al. 2013 Singh, A.; Yadav, A.; Rana, A. K-means with Three Different Distance Metrics. _Int. J. Comput. Appl._ 2013, _67_ , 13–17
* Bloch 1929 Bloch, F. Über Die Quantenmechanik Der Elektronen in Kristallgittern. _Z. Phys._ 1929, _52_ , 555–600
* Bloch 1957 Bloch, F. Generalized Theory of Relaxation. _Phys. Rev._ 1957, _105_ , 1206
* Wu et al. 2021 Wu, K.; Qin, X.; Hu, W.; Yang, J. Low-Rank Approximations Accelerated Plane-Wave Hybrid Functional Calculations with K-point Sampling. _J. Chem. Theory Comput._ 2021, _18_ , 206–218
* Chen et al. 2023 Chen, S.; Wu, K.; Hu, W.; Yang, J. Low-Rank Approximations for Accelerating Plane-Wave Hybrid Functional Calculations in Unrestricted and Noncollinear Spin Density Functional Theory. _J. Chem. Phys._ 2023, _158_ , 134106
* Runge and Gross 1984 Runge, E.; Gross, E. K. Density-Functional Theory for Time-Dependent Systems. _Phys. Rev. Lett._ 1984, _52_ , 997
* Yabana and Bertsch 1996 Yabana, K.; Bertsch, G. Time-Dependent Local-Density Approximation in Real Time. _Phys. Rev. B_ 1996, _54_ , 4484
* Andreani et al. 2005 Andreani, C.; Colognesi, D.; Mayers, J.; Reiter, G.; Senesi, R. Measurement of Momentum Distribution of Lightatoms and Molecules in Condensed Matter Systems using Inelastic Neutron Scattering. _Adv. Phys._ 2005, _54_ , 377–469
* Li et al. 2023 Li, J.; Wan, L.; Jiao, S.; Hu, W.; Yang, J. Low-rank Approximations to Accelerate Hybrid Functional Enabled Real-Time Time-Dependent Density Functional Theory within Plane Waves. _Electron. Struct._ 2023, _5_ , 014008
* Wu et al. 2021 Wu, K.-D.; Kondra, T. V.; Rana, S.; Scandolo, C. M.; Xiang, G.-Y.; Li, C.-F.; Guo, G.-C.; Streltsov, A. Operational Resource Theory of Imaginarity. _Phys. Rev. Lett._ 2021, _126_ , 090401
* Zhang and Wu 2018 Zhang, J.; Wu, Y. Complex-Valued Unsupervised Convolutional Neural Networks for Sleep Stage Classification. _Comput. Methods Programs Biomed._ 2018, _164_ , 181–191
* Santra et al. 2007 Santra, B.; Michaelides, A.; Scheffler, M. On the Accuracy of Density-Functional Theory Exchange-Correlation Functionals for H Bonds in Small Water Clusters: Benchmarks Approaching the Complete Basis Set Limit. _J. Chem. Phys._ 2007, _127_ , 184104
* Santra et al. 2009 Santra, B.; Michaelides, A.; Scheffler, M. Coupled Cluster Benchmarks of Water Monomers and Dimers Extracted from Density-Functional Theory Liquid Water: The Importance of Monomer Deformations. _J. Chem. Phys._ 2009, _131_ , 124509
* Santra et al. 2011 Santra, B.; Klimeš, J.; Alfe, D.; Tkatchenko, A.; Slater, B.; Michaelides, A.; Car, R.; Scheffler, M. Hydrogen Bonds and Van der Waals Forces in Ice at Ambient and High Pressures. _Phys. Rev. Lett._ 2011, _107_ , 185701
* Santra et al. 2013 Santra, B.; Klimeš, J.; Tkatchenko, A.; Alfè, D.; Slater, B.; Michaelides, A.; Car, R.; Scheffler, M. On the Accuracy of Van der Waals Inclusive Density-Functional Theory Exchange-Correlation Functionals for Ice at Ambient and High Pressures. _J. Chem. Phys._ 2013, _139_ , 154702
* DiStasio Jr et al. 2014 DiStasio Jr, R. A.; Santra, B.; Li, Z.; Wu, X.; Car, R. The Individual and Collective Effects of Exact Exchange and Dispersion Interactions on the Ab Initio Structure of Liquid Water. _J. Chem. Phys._ 2014, _141_ , 084502
* Ko et al. 2020 Ko, H.-Y.; Jia, J.; Santra, B.; Wu, X.; Car, R.; DiStasio Jr, R. A. Enabling Large-Scale Condensed-Phase Hybrid Density Functional Theory Based Ab Initio Molecular Dynamics. 1. Theory, Algorithm, and Performance. _J. Chem. Theory Comput._ 2020, _16_ , 3757–3785
* Khoo et al. 2011 Khoo, K.; Chan, T.-L.; Kim, M.; Chelikowsky, J. R. Ab Initio Molecular Dynamics Simulations of Molten Al1-xSix Alloys. _Phys. Rev. B_ 2011, _84_ , 214203
* Zhang et al. 2017 Zhang, G.; Lin, L.; Hu, W.; Yang, C.; Pask, J. E. Adaptive Local Basis Set for Kohn–Sham Density Functional Theory in a Discontinuous Galerkin Framework II: Force, Vibration, and Molecular Dynamics Calculations. _J. Comput. Phys._ 2017, _335_ , 426–443
* Chawla and Voth 1998 Chawla, S.; Voth, G. A. Exact Exchange in Ab Initio Molecular Dynamics: An Efficient Plane-Wave Based Algorithm. _J. Chem. Phys._ 1998, _108_ , 4697–4700
* Mandal et al. 2021 Mandal, S.; Thakkur, V.; Nair, N. N. Achieving an Order of Magnitude Speedup in Hybrid-Functional-and Plane-Wave-Based Ab Initio Molecular Dynamics: Applications to Proton-Transfer Reactions in Enzymes and in Solution. _J. Chem. Theory Comput._ 2021, _17_ , 2244–2255
* Guidon et al. 2008 Guidon, M.; Schiffmann, F.; Hutter, J.; VandeVondele, J. Ab Initio Molecular Dynamics Using Hybrid Density Functionals. _J. chem. Phys._ 2008, _128_ , 214204
* Ko et al. 2020 Ko, H.-Y.; Jia, J.; Santra, B.; Wu, X.; Car, R.; DiStasio Jr, R. A. Enabling Large-scale condensed-Phase Hybrid Density Functional Theory Based Ab Initio Molecular Dynamics. 1. Theory, Algorithm, and Performance. _J. Chem. Theory Comput._ 2020, _16_ , 3757–3785
* Mandal et al. 2021 Mandal, S.; Thakkur, V.; Nair, N. N. Achieving An Order of Magnitude Speedup in Hybrid-Functional-and Plane-Wave-Based Ab Initio Molecular Dynamics: Applications to Proton-Transfer Reactions in Enzymes and in Solution. _J. Chem. Theory Comput._ 2021, _17_ , 2244–2255
* Ko et al. 2021 Ko, H.-Y.; Santra, B.; DiStasio Jr, R. A. Enabling Large-Scale Condensed-Phase Hybrid Density Functional Theory-Based Ab Initio Molecular Dynamics II: Extensions to the Isobaric–Isoenthalpic and Isobaric–Isothermal Ensembles. _J. Chem. Theory Comput._ 2021, _17_ , 7789–7813
* Hohenstein et al. 2012 Hohenstein, E. G.; Parrish, R. M.; Martínez, T. J. Tensor Hypercontraction Density Fitting. I. Quartic Scaling Second-and Third-Order Møller-Plesset Perturbation Theory. _J. Chem. Phys._ 2012, _137_ , 044103
* Parrish et al. 2012 Parrish, R. M.; Hohenstein, E. G.; Martínez, T. J.; Sherrill, C. D. Tensor Hypercontraction. II. Least-squares Renormalization. _J. Chem. Phys._ 2012, _137_ , 224106
* Parrish et al. 2013 Parrish, R. M.; Hohenstein, E. G.; Martínez, T. J.; Sherrill, C. D. Discrete Variable Representation in Electronic Structure Theory: Quadrature Grids for Least-Squares Tensor Hypercontraction. _J. Chem. Phys._ 2013, _138_ , 194107
* Hu et al. 2017 Hu, W.; Lin, L.; Banerjee, A. S.; Vecharynski, E.; Yang, C. Adaptively Compressed Exchange Operator for Large-Scale Hybrid Density Functional Calculations with Applications to the Adsorption of Water on Silicene. _J. Chem. Theory Comput._ 2017, _13_ , 1188–1198
* Giannozzi et al. 2009 Giannozzi, P. et al. QUANTUM ESPRESSO: A Modular and Open-Source Software Project for Quantum Simulations of Materials. _J. Phys.: Condens. Matter_ 2009, _21_ , 395502
* Hartwigsen et al. 1998 Hartwigsen, C.; Goedecker, S.; Hutter, J. Relativistic Separable Dual-space Gaussian Pseudopotentials from H to Rn. _Phys. Rev. B_ 1998, _58_ , 3641–3662
* Heyd et al. 2006 Heyd, J.; Scuseria, G. E.; Ernzerhof, M. Erratum: ”Hybrid Functionals Based on a Screened Coulomb Potential” [J. Chem. Phys. 118, 8207 (2003)]. _J. Chem. Phys._ 2006, _124_ , 219906
* Grimme 2006 Grimme, S. Semiempirical GGA-Type density functional constructed with a long-range dispersion correction. _J. Comput. Chem._ 2006, _27_ , 1787–1799
* Lin 2016 Lin, L. Adaptively Compressed Exchange Operator. _J. Chem. Theory Comput._ 2016, _12_ , 2242–2249
* Nosé 1984 Nosé, S. A Unified Formulation of the Constant Temperature Molecular Dynamics Methods. _J. Chem. Phys._ 1984, _81_ , 511–519
* Hoover 1985 Hoover, W. G. Canonical Dynamics: Equilibrium Phase-Space Distributions. _Phys. Rev. A: At., Mol., Opt. Phys._ 1985, _31_ , 1695
* Thomas et al. 2013 Thomas, M.; Brehm, M.; Fligg, R.; Vöhringer, P.; Kirchner, B. Computing Vibrational Spectra from Ab Initio Molecular Dynamics. _Phys. Chem. Chem. Phys._ 2013, _15_ , 6608–6622
* Saikia et al. 2008 Saikia, B. J.; Parthasarathy, G.; Sarmah, N. Fourier Transform Infrared Spectroscopic Estimation of Crystallinity in SiO 2 Based Rocks. _Bull. Mater. Sci._ 2008, _31_ , 775–779
* Wan et al. 2021 Wan, L.; Liu, X.; Liu, J.; Qin, X.; Hu, W.; Yang, J. Hybrid MPI and OpenMP Parallel Implementation of Large-scale Linear-Response Time-Dependent Density Functional Theory with Plane-Wave Basis Set. _Electron. Struct._ 2021, _3_ , 024004
TOC graphic
|
# One-shot and Partially-Supervised Cell Image Segmentation
Using Small Visual Prompt
Sota Kato
Meijo University
<EMAIL_ADDRESS>Kazuhiro Hotta
Meijo University
<EMAIL_ADDRESS>
###### Abstract
Semantic segmentation of microscopic cell images using deep learning is an
important technique; however, it requires a large number of images and ground
truth labels for training. To address this problem, we consider an efficient
learning framework with as little data as possible, and we propose two
learning strategies: one-shot segmentation, which can learn with only one
training sample, and partially-supervised segmentation, which assigns
annotations to only a part of the images. Furthermore, we introduce novel
segmentation methods that use small prompt images, inspired by prompt learning
in recent studies. Our proposed methods use a pre-trained model based only on
cell images and transfer the information of the prompt pairs to the target
image to be segmented through an attention mechanism, which allows for
efficient learning while reducing the annotation burden. Through experiments
conducted on three types of microscopic cell image datasets, we confirmed that
the proposed methods improve the Dice score coefficient (DSC) in comparison
with conventional methods. Our code is available at
https://github.com/usagisukisuki/Oneshot-Part-CellSegmentation.
## 1 Introduction
Semantic segmentation, which assigns a class label to each pixel in an image,
is a crucial technique for image analysis in the fields of medicine [27, 9,
26, 14] and biology [12, 16]. It has become possible to obtain objective
results automatically by using deep learning, and various methods have been
proposed [30, 4, 31, 13, 29, 10]. However, a large number of images and ground
truth labels are required when we design a deep learning model. In particular,
generating ground truth requires the knowledge of human experts and takes a
lot of time and cost.
In recent studies, to tackle the above problem, few-shot segmentation [21, 18,
5, 6], which learns from little training data, zero-shot segmentation [8, 3],
which performs inference without task-specific training, and semi-supervised
segmentation [17, 35], which learns with a small number of supervised training
samples, have been proposed. Additionally, the novel idea of prompt learning
has been gaining popularity in the field of natural language processing [2].
This idea is an inference method using large-scale pre-trained models, and it
has been reported to achieve higher accuracy than conventional few-shot and
zero-shot learning methods in the field of image recognition [22, 34, 37, 15].
Figure 1: Overview of our proposed strategies. Firstly, we build a pre-trained
model using only cell images. Secondly, we use the pre-trained model and the
small visual prompt to learn one-shot segmentation and partially-supervised
segmentation.
However, these approaches have been trained on natural images. As shown in
Figure 2, they cannot be adapted well to images from other fields such as
biology due to domain differences between the data. Additionally, although
conventional prompt learning methods often use both image and textual
information [20, 34, 37, 22], the general language models used for prompt
learning may not handle specialized terms that are absent from their training
data. In order to apply prompt learning to data from different fields, we
consider it necessary to use a pre-trained model that is more specialized to
the field.
Therefore, we propose three strategies for cell image segmentation as shown in
Figure 1. Firstly, we build a pre-trained model using only cell image
datasets. By using the pre-trained model, we can learn more effectively even
when training on another cell image dataset. Secondly, we present novel
strategies for one-shot segmentation and partially-supervised segmentation
employing the above pre-trained model and visual prompts with small regions.
In one-shot segmentation, we assume learning with a single image and propose a
novel learning method that utilizes a prompt image and label of a small
region. In partially-supervised segmentation, we assume that only a part of an
image has annotation; pseudo-labels are predicted using the same framework as
one-shot segmentation, and the segmentation network is trained with the
pseudo-labels. Since cell images often have a fractal structure in which
similar structures spread over the entire image, we consider that it is
possible to segment the entire cell image by successfully using the
information from small prompt regions. Additionally, since labeling a prompt
image with a small region is easier than labeling the entire image, our
approach reduces the annotation cost for human experts.
We evaluated our methods on three cell image datasets with different shapes.
The experimental results confirmed the effectiveness of the proposed method in
one-shot segmentation as well as the effectiveness of the pre-trained model.
Furthermore, the results of the proposed partially-supervised method with
training on generated pseudo-labels demonstrated that the difference in
accuracy between the proposed method and the method using the original dataset
was within about 1.00% for all cell image datasets. We have also confirmed
that our proposed partially-supervised strategy can produce sufficient
supervised labels from only partial annotations.
This paper is organized as follows. Section 2 describes related works. Section
3 describes the details of the proposed methods. Section 4 shows the
experimental results. Finally, we describe our summary and future works in
Section 5.
The main contributions of this paper are as follows:
* •
We build a pre-trained model using only cell image datasets. By using this
pre-trained model for one-shot segmentation and partially-supervised
segmentation, we can create a model with more accuracy even with less data.
* •
We present a novel method for one-shot segmentation employing the pre-trained
model and visual prompts with small regions. By using this method, we can
achieve efficient learning even if only one training sample is used.
* •
Furthermore, we present a novel strategy for partially-supervised
segmentation. We can generate pseudo labels from partial labels using the same
method as in one-shot segmentation, and we train the model again using these
pseudo labels. By using this approach, we confirmed that the difference in
accuracy between the proposed method and the method using all training samples
in the original dataset was within about 1.00% for all cell image datasets.
Figure 2: Visualization results of CLIPSeg [22].
## 2 Related works
### 2.1 One-shot segmentation
Few-shot segmentation [21, 18, 19, 5, 6, 7] is a method that uses a small
amount of training data. A query image of an unknown category, a support image
of the same category, and a corresponding mask image are given as a support
set for both training and inference. The goal is to learn efficiently with a
small number of training data by using the information from the support set.
In the field of biology, [5, 6, 7] have been proposed. In particular, when
only one training image is used, the task is called one-shot segmentation
[28, 24], which is a harder problem than few-shot segmentation. Shaban et al.
[28] propose a two-branched approach to one-shot semantic image segmentation
inspired by few-shot learning, and show significant improvements over the
baselines. Raza et al. [24] present a simple yet effective approach in which
exploiting information from base training classes in the conventional one-shot
segmentation setup allows weak supervision to be easily used.
Furthermore, as an even more challenging problem, zero-shot segmentation has
been proposed [8, 3]. It is a method for segmenting unknown categories by
using embedded features of words and similarity between features of the pre-
trained model. Recently, the novel idea of prompt learning for zero-shot
segmentation has been proposed [22].
Although numerous approaches have been proposed, there are only a few one-shot
segmentation methods for microscopic biology images. We believe that one-shot
segmentation is more necessary than few-shot segmentation because it further
reduces the annotation burden. Additionally, as shown in Figure 2,
conventional zero-shot segmentation does not work well due to the differences
between data domains. Therefore, we propose a novel method for one-shot cell
image segmentation that improves segmentation accuracy in comparison with
conventional approaches.
### 2.2 Partially-supervised segmentation
Semantic segmentation using deep learning requires a large number of
annotations, and the performance decreases significantly when the number of
annotations is small. However, generating a large number of annotations is a
hardship for human experts. Therefore, semi-supervised learning, which
maintains performance with a small number of annotations, has been attracting
attention [17, 33, 35, 32, 38]. In the field of medical and biological
imaging, [32, 38] have been proposed; furthermore, a similar problem setup
called partially-supervised segmentation has also been proposed in a recent
study. In semi-supervised segmentation, the setting is to use a few images
that are annotated throughout, whereas in partially-supervised segmentation
only a part of each image is annotated. Xu et al. [36] propose an active
learning framework: the most uncertain partially-labeled image is repeatedly
selected based on the probability predicted by the segmentation model until
the maximum number of annotated images is reached, and MixUp augmentation is
used to learn a proper visual representation of both the annotated and
unannotated images.
We believe that annotating a part of an image is less of a burden than
annotating the full image in the case of cell images, because it is very hard
to annotate each individual cell. Additionally, cell images often have a
fractal structure, so we consider partially-supervised segmentation to be
effective. However, the conventional technique for partially-supervised
segmentation [36] is difficult to train because it requires many active
learning steps. Our proposed partially-supervised learning is simple, and it
achieves a level of accuracy on par with the original data using only two
training stages. Further, since our methods for one-shot and
partially-supervised segmentation are nearly identical, our training strategy
can be used for multiple tasks.
## 3 Methodology
Figure 3: Overview of the proposed method for one-shot segmentation.
Figure 4: Overview of the method for acquiring the final output using the
prompt label. The prompt label is converted to a one-hot label, and the class
with the highest value is output as the prediction from the inner product with
the attention map.
In Section 3, we present our novel approach for one-shot and partially-
supervised segmentation for cell images. By using these approaches, it is
possible to learn with fewer annotations by simply preparing a visual prompt
with a small region.
In Section 3.1, we present a detail of the pre-trained model. We present a
novel architecture for one-shot learning in Section 3.2. In Section 3.3, we
present a novel learning strategy using pseudo labels under partially-
supervised segmentation.
### 3.1 Building the pre-trained model
Firstly, we train a model using only cell image datasets. The datasets used in
this study are the ISBI2012 [1], ssTEM [11], and iRPE [23] datasets. The
network is U-Net [25], and the details of each dataset and the training
conditions are the same as in Sections 4.1 and 4.2. We build each pre-trained
model using the two datasets other than the target dataset to ensure the
fairness of the experiments. For instance, for an evaluation experiment on
ISBI2012, the pre-trained model using only the ssTEM and iRPE datasets is
adopted.
Consequently, three types of pre-trained models are built in this study. These
models are used in our proposed strategies for one-shot segmentation in
Section 3.2 and partially-supervised segmentation in Section 3.3.
### 3.2 One-shot segmentation
Figure 5: Overview of our proposed strategies for partially-supervised
segmentation.
Figure 3 shows the overview of the proposed method for one-shot segmentation.
In one-shot segmentation, we use three types of input data: the target
microscopic cell image to be segmented, the prompt pairs of the small cell
image, and the corresponding ground truth. The target microscopic cell image
and the prompt cell image are fed into the segmentation network. The weights
of the two networks are shared, and the number of channels in the output
feature maps equals the number of classes in the ground truth. The final
output features $\boldsymbol{x}\in\mathbb{R}^{C\times N}$ of the input image
to be segmented and the final output $\boldsymbol{p}\in\mathbb{R}^{C\times M}$
of the prompt image are combined via an inner product to generate a normalized
attention map using a softmax function as
$\displaystyle\beta_{i,j}=\frac{\exp(s_{i,j}/\tau)}{\sum_{j}^{M}\exp(s_{i,j}/\tau)},\>\>\>where\>\>s_{i,j}=\boldsymbol{x_{c,i}}^{\top}\boldsymbol{p_{c,j}}$
(1)
where $\beta_{i,j}$ is the attention map and represents the degree to which
the $j$-th region is associated with the $i$-th region, $C$ is the number of
classes, and $N=H_{t}\times W_{t},M=H_{s}\times W_{s}$ are the number of
feature locations from the final output feature maps. $\tau$ is a temperature
parameter that is to align probabilities for attention maps. By using one-hot
teacher label $\boldsymbol{q}\in\mathbb{R}^{C\times M}$ corresponding to the
prompt cell image, the final output is
$\boldsymbol{o=(o_{1},o_{2},...,o_{i},...,o_{N})}\in\mathbb{R}^{C\times N}$
and can be calculated in Equation (2).
$\displaystyle\boldsymbol{o_{i}}=\sum_{j}^{M}\beta_{i,j}\boldsymbol{q_{j}}^{\top}$
(2)
where $\beta_{i,j}$ is the attention map calculated by the inner product of
$\boldsymbol{x}$ and $\boldsymbol{p}$. We can obtain the similarity between
the output features from the input image and the prompt image in the attention
map. Subsequently, as shown in Figure 4, a final prediction can be performed
by referring to the class labels of output features of the prompt image that
have high similarity to the output feature of the image to be segmented.
We employ the cross-entropy loss in Equation (3).
$\displaystyle Loss=-\frac{1}{C}\sum_{c=1}^{C}\sum_{i=1}^{H_{t}\times
W_{t}}y_{i}^{c}\log\sigma(o_{i}^{c})$ (3)
where $C$ is the number of classes, $y_{i}^{c}$ is the teacher label
associated with the input image, and $\sigma(o_{i}^{c})$ is the probability
value after a softmax function as
$\sigma(o_{i}^{c})=\frac{exp(o_{i}^{c})}{\sum_{j}exp(o_{i}^{j})}$. Further,
$o_{i}^{c}$ is the $c$-th element of $\boldsymbol{o_{i}}$, which is a final
output vector of the deep neural network.
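The attention readout in Equations (1)–(3) is straightforward to express in
code. Below is a minimal PyTorch sketch, assuming the two output feature maps
have already been flattened to $(C,N)$ and $(C,M)$; the function and variable
names are ours for illustration, not taken from the authors' released code.

```python
import torch
import torch.nn.functional as F

def prompt_attention_output(x, p, q, tau=2.0):
    """Sketch of Equations (1)-(2).

    x:   (C, N) flattened output features of the target image.
    p:   (C, M) flattened output features of the prompt image.
    q:   (C, M) one-hot prompt label.
    tau: temperature of the softmax over prompt locations.
    Returns o of shape (C, N): class scores for each target location.
    """
    s = x.transpose(0, 1) @ p                 # (N, M), s_ij = x_i . p_j
    beta = F.softmax(s / tau, dim=1)          # Eq. (1): normalize over j
    o = (beta @ q.transpose(0, 1)).transpose(0, 1)  # Eq. (2): o_i = sum_j beta_ij q_j
    return o

def one_shot_loss(o, y):
    """One-shot loss in the spirit of Eq. (3); y holds integer class
    indices of shape (N,).  Note F.cross_entropy averages over locations,
    whereas the paper's Eq. (3) uses a 1/C factor, so this matches only
    up to a normalization constant."""
    return F.cross_entropy(o.transpose(0, 1), y)
```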
Since the proposed method can utilize the information of the prompt image and
its corresponding teacher label for both training and inference, we consider
that it can create a highly accurate model even in one-shot segmentation.
### 3.3 Partially-supervised segmentation
Figure 5 shows the overview of our proposed strategy for partially-supervised
segmentation. In partially-supervised segmentation, it is assumed that only a
part of the input cell image is given an annotation, so the model cannot be
trained with the basic supervised training method for deep-learning
segmentation. Therefore, the proposed strategy consists of two stages. First,
the network is trained to generate pseudo-labels using only the partial label
information. Second, another segmentation network is trained from scratch
using the generated pseudo-labels.
In the stage of learning to generate pseudo-labels, we crop the annotated
region of a cell image and feed it, together with the partial label, into the
network as the prompt pair. Consequently, we use three types of input to the
segmentation network in the same way as in Section 3.2. As in the proposed
one-shot segmentation, the final output is obtained by referring to the ground
truth corresponding to the prompt image using the attention architecture.
Then, the final output is translated into a one-hot mask label by the argmax
function, and the network is trained using the mask label in a self-supervised
manner. Additionally, the network is trained to bring the output for the
prompt image closer to the prompt label.
We design the loss functions in Equations (4)–(6). We use the cross-entropy
loss between the pseudo mask labels and the final predictions in Equation (5),
and the cross-entropy loss between the predictions for the prompt image and
the prompt label in Equation (6). As the final loss function, we employ their
combination in Equation (4).
$\displaystyle Loss=Loss_{pseudo}+Loss_{prompt}$ (4)

$\displaystyle Loss_{pseudo}=-\frac{1}{C}\sum_{c=1}^{C}\sum_{i=1}^{H_{t}\times W_{t}}m_{i}^{c}\log\sigma(o_{i}^{c})$ (5)

$\displaystyle Loss_{prompt}=-\frac{1}{C}\sum_{c=1}^{C}\sum_{i=1}^{H_{s}\times W_{s}}q_{i}^{c}\log\sigma(p_{i}^{c})$ (6)
where $C$ is the number of classes, $m_{i}$ is the pseudo mask label, $q_{i}$
is the prompt label, $\sigma(o_{i}^{c})$ is the probability value from the
input image, and $\sigma(p_{i}^{c})$ is the probability value from the prompt
image after a softmax function as
$\sigma(p_{i}^{c})=\frac{exp(p_{i}^{c})}{\sum_{j}exp(p_{i}^{j})}$. Here,
$p_{i}^{c}$ is the $c$-th element of $\boldsymbol{p_{i}}$, which is a final
output vector of the deep neural network. In one-shot segmentation, we do not
use the loss for the prompt image, since we use only one prompt pair and there
is a possibility of over-fitting. However, in partially-supervised
segmentation, we add $Loss_{prompt}$ to improve the quality of the pseudo
labels because we can use different prompt pairs during learning.
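The combined objective in Equations (4)–(6) can be sketched in PyTorch as
follows. The pseudo mask label $m$ is obtained by taking the argmax of the
current attention-based prediction and detaching it, so it acts as a fixed
self-supervised target; the names are ours, and normalization matches the
paper's equations only up to constant factors.

```python
import torch.nn.functional as F

def partial_supervision_loss(o, p_out, q_idx):
    """Sketch of Equations (4)-(6).

    o:     (N, C) attention-based predictions for the target image (Eq. 2).
    p_out: (M, C) network output at the prompt locations.
    q_idx: (M,)   integer prompt labels.
    """
    # Pseudo mask label m (used in Eq. 5): argmax of the current prediction,
    # detached so that it is treated as a fixed target.
    m = o.detach().argmax(dim=1)                  # (N,)
    loss_pseudo = F.cross_entropy(o, m)           # Eq. (5)
    loss_prompt = F.cross_entropy(p_out, q_idx)   # Eq. (6)
    return loss_pseudo + loss_prompt              # Eq. (4)
```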
In the stage of training using pseudo-labels, another segmentation network is
trained from scratch on the target images using the generated pseudo-labels as
ground truth.
By using the proposed strategy, even when only partial ground truth is given,
the proposed method can extend the information from the prompt images to the
whole image through the attention structure, which enables segmentation of the
whole image.
## 4 Experiments
Table 1: Comparison results for one-shot segmentation.
---
| ISBI2012 [1] | ssTEM [11] | iRPE [23]
DSC | Average | Background | Membrane | Average | Background | Membrane | Average | Background | Membrane
Original learning | | | | | | | | |
U-Net (Full-scratch) | 86.66±0.50 | 94.04±0.15 | 79.28±0.86 | 91.60±0.14 | 96.54±0.05 | 86.65±0.23 | 74.39±0.19 | 84.60±0.34 | 64.17±0.67
U-Net (Pre-trained) | 86.92±0.40 | 94.15±0.20 | 79.70±0.61 | 90.99±0.19 | 96.28±0.12 | 85.70±0.27 | 73.70±0.11 | 84.00±0.21 | 63.41±0.38
One-shot learning | | | | | | | | |
U-Net (Full-scratch) | 78.80±1.73 | 89.83±0.86 | 67.77±2.67 | 78.74±2.46 | 92.54±0.26 | 64.94±4.66 | 59.25±0.05 | 78.96±0.44 | 39.53±0.40
U-Net (Pre-trained) | 81.01±1.86 | 91.51±1.06 | 70.51±2.69 | 82.42±0.71 | 92.71±0.36 | 72.13±1.07 | 53.85±1.44 | 79.77±0.61 | 27.93±2.28
Co-FCN [24] | 78.19±2.34 | 89.87±1.62 | 66.51±3.48 | 83.61±0.24 | 93.14±0.07 | 74.09±0.46 | 54.81±3.57 | 81.88±0.32 | 27.75±7.38
OSLSM [28] | 78.67±1.43 | 90.03±0.58 | 67.32±2.29 | 84.00±0.83 | 93.09±0.50 | 74.91±1.17 | 62.94±0.76 | 79.52±0.72 | 46.36±2.25
FSMICS (one-shot) [7] | 79.85±0.93 | 90.67±0.43 | 69.03±1.42 | 83.42±0.10 | 92.82±0.17 | 74.02±0.30 | 61.14±2.18 | 79.48±0.30 | 42.79±4.65
Ours (Full-scratch) | 80.18±1.26 | 91.11±0.66 | 69.25±1.87 | 83.13±0.26 | 92.30±0.25 | 73.95±0.29 | 64.00±1.33 | 78.85±0.72 | 49.16±2.03
Ours (Pre-trained) | 81.25±1.55 | 91.68±1.04 | 70.81±2.18 | 85.47±0.08 | 94.41±0.01 | 76.53±0.15 | 63.85±1.53 | 78.56±1.69 | 49.15±1.61
Table 2: Comparison results for partially-supervised segmentation.
---
| ISBI2012 [1] | ssTEM [11] | iRPE [23]
DSC | Average | Background | Membrane | Average | Background | Membrane | Average | Background | Membrane
Original learning | | | | | | | | |
U-Net (Full-scratch) | 86.66±0.50 | 94.04±0.15 | 79.28±0.86 | 91.60±0.14 | 96.54±0.05 | 86.65±0.23 | 74.39±0.19 | 84.60±0.34 | 64.17±0.67
U-Net (Pre-trained) | 86.92±0.40 | 94.15±0.20 | 79.70±0.61 | 90.99±0.19 | 96.28±0.12 | 85.70±0.27 | 73.70±0.11 | 84.00±0.21 | 63.41±0.38
Pseudo label learning | | | | | | | | |
Ours (Full-scratch) | 85.94±0.28 | 94.03±0.13 | 77.85±0.43 | 89.78±0.02 | 95.80±0.05 | 83.76±0.05 | 74.35±0.12 | 85.21±0.10 | 63.49±0.25
Ours (Pre-trained) | 86.27±0.32 | 94.10±0.18 | 78.43±0.46 | 90.53±0.12 | 96.12±0.06 | 84.94±0.18 | 73.30±0.25 | 84.18±0.20 | 62.41±0.71
### 4.1 Datasets
We used 2D electron microscopy images of the ISBI2012 challenge (ISBI2012)
[1], serial sectioning transmission electron microscopy (ssTEM) [11], and
absorbance microscopy images of human iRPE cells (iRPE) [23] as datasets. All
datasets are for binary segmentation of tubular structures spread over an
image, i.e., cell membrane and background. The number of iRPE images is
$1,032$, and their size is $256\times 256$ pixels. Since the resolution of an
ssTEM image is $1,024\times 1,024$ and the resolution of an ISBI2012 image is
$512\times 512$, we cropped regions of $256\times 256$ pixels from the
original images due to the limitation of GPU memory. The cropped areas do not
overlap; consequently, the total number of crops is 320 for ssTEM and 120 for
ISBI2012. Figure 6 shows examples from the datasets.
We randomly rearranged the images. Afterward, we divided each dataset in a 2:1
ratio in index order into training and inference data, and used three-fold
cross validation while switching the training and inference data. In the
experiment on one-shot segmentation, we used only one fixed training image
from the training data, and for inference we used the original inference data.
The prompt image was randomly selected from the training data, using an image
different from the training image, and fixed coordinates were cropped.
Consequently, different prompt pairs were used for each fold of the three-fold
cross validation.
In the experiment of partially-supervised segmentation, all labels in the
training data were masked by filling in zeros except for the fixed
coordinates, and we used the region where the label exists as prompt images.
The size of the prompt was set to $64\times 64$ pixels in all experiments.
Ablations of different prompt sizes are shown in Section 4.3.
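For concreteness, the non-overlapping cropping described above can be
implemented as in the following NumPy sketch; the per-dataset slice counts
(20 for ssTEM, 30 for ISBI2012) in the comment are inferred from the crop
totals given earlier in this section.

```python
import numpy as np

def crop_nonoverlapping(img, size=256):
    """Split an image into non-overlapping size x size tiles.

    A 1024x1024 ssTEM slice yields 16 tiles (16 x 20 slices = 320 crops);
    a 512x512 ISBI2012 slice yields 4 tiles (4 x 30 slices = 120 crops).
    """
    h, w = img.shape[:2]
    return [img[y:y + size, x:x + size]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]
```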
Figure 6: Examples of datasets. (a) ISBI2012 dataset [1], (b) ssTEM dataset
[11], and (c) iRPE dataset [23]. All datasets are labeled as: cell membrane
(white) and background (black).
### 4.2 Training conditions
The batch size for training was set to 4, and we used Adam ($\beta_{1}=0.9$,
$\beta_{2}=0.999$) for optimization. The initial learning rate was 0.001, and
we used a learning rate schedule that decays the learning rate by a factor of
$0.1$ at epoch 180 and again at epoch 190. For data pre-processing, training
samples were flipped horizontally, rotated by an angle randomly selected
within $\theta$ = $-90^{\circ}$ to $90^{\circ}$, and normalized to the range
zero to one. For inference, images were normalized to the range zero to one.
All experiments were conducted using three-fold cross validation, and we
applied the Dice score coefficient (DSC) as the evaluation metric. The average
DSC over the three validations was used for evaluation. We used a single
NVIDIA Quadro RTX 8000 GPU for computation.
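The optimizer and schedule described above map directly onto standard PyTorch
components. The following is a minimal sketch; `UNet` and `train_one_epoch`
are hypothetical placeholders, and the total of 200 training epochs is our
assumption inferred from the decay milestones, not a value stated in the
paper.

```python
import torch

model = UNet(num_classes=2)  # hypothetical U-Net implementation
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
# Decay the learning rate by a factor of 0.1 at epochs 180 and 190.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[180, 190], gamma=0.1)

for epoch in range(200):  # total epoch count assumed
    train_one_epoch(model, optimizer)  # hypothetical training loop
    scheduler.step()
```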
### 4.3 Experimental results
Figure 7: Qualitative results on one-shot segmentation. The top row is for the
ISBI2012, the middle row is for the ssTEM, and the bottom row is for the iRPE
dataset. (a) Input image, (b) Label image, (c) Original U-Net, (d) Prompt
images ($64\times 64$ pixels), and (e) Ours.
#### 4.3.1 One-shot segmentation
Table 1 shows comparison results for one-shot segmentation on the three
datasets. As comparison methods, we employed conventional methods for one-shot
segmentation [24, 28, 7]. Original learning in Table 1 indicates the case
where the model is trained with all training images, and one-shot learning
indicates the case with only one training image. Full-scratch in Table 1
indicates the case where the pre-trained model is not used, and pre-trained
indicates the case where it is used. The bold letters show the best DSC. Our
proposed method achieved the highest average DSC for all datasets. In
particular, when we used the pre-trained model, the average DSC improved by
2.45% for ISBI2012, 6.73% for ssTEM, and 4.75% for iRPE in comparison with the
baseline using the original U-Net. Furthermore, our method improved the DSC
more than the conventional approaches for one-shot segmentation. When iRPE was
used, the full-scratch model was more accurate than the pre-trained one. We
consider that this may be because the iRPE cell images have a different
structure compared to the other two types of cell images.
Figure 7 shows qualitative comparisons on one-shot segmentation through
visualizations of the three types of cell image datasets under full-scratch
learning. As shown in Figure 7, the proposed method could segment cell
membranes that U-Net could not segment well. These results demonstrate the
effectiveness of our method using the small visual prompt.
Table 3 shows an ablation study of one-shot segmentation in which we evaluated
various values of the $\tau$ parameter used in the softmax function of the
attention mechanism, as well as smaller prompt sizes. We evaluated
$\tau=0.01,0.1,1.0,2.0$ as the temperature parameter for the attention maps,
and further evaluated a $32\times 32$-pixel prompt image, which is half the
size of $64\times 64$. The comparison results with various temperature
parameters demonstrated that the adequate parameter depends on the data; we
consider that this is influenced by the thickness and complexity of the cell
membrane to be segmented. Our proposed method can be adapted to a variety of
cell images by setting an appropriate $\tau$. Furthermore, the accuracy did
not decrease much with smaller prompt sizes. By using smaller prompt images,
we can further reduce the annotation cost.
#### 4.3.2 Partially-supervised segmentation
Figure 8: Segmentation results on partially-supervised segmentation. The top
row is for the ISBI2012, the middle row is for the ssTEM, and the bottom row
is for the iRPE dataset. (a) Input image, (b) Label image, and (c) Ours.
Figure 9: Qualitative results on partially-supervised segmentation. The top
row is for the ISBI2012, the middle row is for the ssTEM, and the bottom row
is for the iRPE dataset. (a) Training image, (b) Original label image, (c)
Partial label image ($64\times 64$ pixels), and (d) Generated pseudo label
image in Stage 1 of Figure 5.
Table 3: Ablation study of one-shot segmentation.
---
Methods | Prompt size | ISBI2012 [1] | ssTEM [11] | iRPE [23]
$\tau$=0.01 (Full-scratch) | $64\times 64$ | 69.39±12.16 | 81.81±0.82 | 59.01±7.15
$\tau$=0.1 (Full-scratch) | 78.61±1.80 | 83.12±0.39 | 64.00±1.33
$\tau$=1.0 (Full-scratch) | 79.94±1.21 | 83.13±0.26 | 62.73±1.31
$\tau$=2.0 (Full-scratch) | 80.18±1.26 | 83.12±0.41 | 63.31±1.21
$\tau$=0.01 (Pre-trained) | $64\times 64$ | 76.86±4.24 | 85.47±0.08 | 58.88±6.61
$\tau$=0.1 (Pre-trained) | 81.13±2.05 | 85.40±0.09 | 58.13±5.25
$\tau$=1.0 (Pre-trained) | 81.25±1.55 | 81.73±1.09 | 62.24±1.09
$\tau$=2.0 (Pre-trained) | 81.15±1.52 | 84.18±0.37 | 63.85±1.53
$\tau$=0.01 (Full-scratch) | $32\times 32$ | 77.95±1.70 | 82.25±0.19 | 60.19±7.53
$\tau$=0.1 (Full-scratch) | 77.41±2.21 | 82.21±0.05 | 63.26±1.83
$\tau$=1.0 (Full-scratch) | 78.73±1.04 | 82.75±0.38 | 62.60±1.08
$\tau$=2.0 (Full-scratch) | 79.37±1.11 | 82.28±0.27 | 62.00±0.97
$\tau$=0.01 (Pre-trained) | $32\times 32$ | 78.06±3.26 | 82.91±0.28 | 49.66±6.15
$\tau$=0.1 (Pre-trained) | 80.45±0.84 | 82.93±0.29 | 60.17±4.45
$\tau$=1.0 (Pre-trained) | 75.89±4.77 | 81.46±0.67 | 61.75±0.81
$\tau$=2.0 (Pre-trained) | 80.35±0.44 | 75.35±2.73 | 62.84±2.66
Table 2 shows the results of pseudo-label learning for partially-supervised
segmentation on the three datasets. Original learning in Table 2 indicates the
case where the model is trained with the fully annotated dataset, and pseudo
label learning indicates the case where it is trained with pseudo labels
generated by our proposed strategy. In addition, full-scratch in Table 2
indicates the case where the pre-trained model is not used when we generate
pseudo-labels, and pre-trained indicates the case where it is used. All
subsequent training with pseudo labels is done from scratch. We fixed the
value of $\tau$ to 2.0 because we consider it a suitable parameter for all
three types of datasets based on the results in Table 3. Using our pseudo
labels, there was almost no difference in the average DSC compared to results
trained using the original full label images, even though the annotation
covers only one part of each image. In particular, when we used the
pre-trained model, the difference in average DSC was within $0.65\%$ for
ISBI2012, $0.46\%$ for ssTEM, and $0.40\%$ for iRPE in comparison with the
baseline using the original U-Net with the pre-trained model.
Figure 8 shows segmentation results when the model was trained with our pseudo
labels in Stage 2 of Figure 5. As shown in Figure 8, we can confirm that the
predicted results are almost identical to the correct label images. Figure 9
shows a comparison between the pseudo labels generated by our method in Stage
1 of Figure 5 and the correct labels. Despite the hard setting, in which only
part of the annotation is attached to each image, the generated pseudo labels
were largely consistent with the original label images. These results
demonstrate that, in the case of cell images, there is no need to attach an
annotation to the entire image, and our proposed method can be used to reduce
the annotation cost.
## 5 Conclusion
In this study, we proposed a novel one-shot segmentation method and a
partially-supervised segmentation method for microscopic cell images.
Experiments on three different cell image datasets demonstrated that our
proposed methods, which use a pre-trained model and small visual prompt
images, can produce highly accurate models even with a small amount of
training data. Furthermore, the proposed strategy removes the need to attach
an annotation to the entire image, so our proposed method can be used to
reduce the annotation cost. However, we evaluated only simple binary cell
image datasets. Therefore, evaluating whether the method is equally effective
for multi-class segmentation is our future work. Additionally, we would like
to evaluate zero-shot segmentation using the proposed method.
## Acknowledgements
This work was supported by JSPS KAKENHI Grant Number 22H04735, and Tateishi
Science and Technology Promotion Foundation.
## References
* [1] Web page of the EM segmentation challenge. http://brainiac2.mit.edu/isbi_challenge/.
* [2] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
* [3] Maxime Bucher, Tuan-Hung Vu, Matthieu Cord, and Patrick Pérez. Zero-shot semantic segmentation. Advances in Neural Information Processing Systems, 32, 2019.
* [4] Hu Cao, Yueyue Wang, Joy Chen, Dongsheng Jiang, Xiaopeng Zhang, Qi Tian, and Manning Wang. Swin-unet: Unet-like pure transformer for medical image segmentation. In Computer Vision–ECCV 2022 Workshops: Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part III, pages 205–218. Springer, 2023.
* [5] Sixian Chan, Cheng Huang, Cong Bai, Weilong Ding, and Shengyong Chen. Res2-unext: a novel deep learning framework for few-shot cell image segmentation. Multimedia Tools and Applications, 81(10):13275–13288, 2022.
* [6] Youssef Dawoud, Arij Bouazizi, Katharina Ernst, Gustavo Carneiro, and Vasileios Belagiannis. Knowing what to label for few shot microscopy image cell segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3568–3577, 2023.
* [7] Youssef Dawoud, Julia Hornauer, Gustavo Carneiro, and Vasileios Belagiannis. Few-shot microscopy image cell segmentation. In Machine Learning and Knowledge Discovery in Databases. Applied Data Science and Demo Track: European Conference, ECML PKDD 2020, Ghent, Belgium, September 14–18, 2020, Proceedings, Part V, pages 139–154. Springer, 2021.
* [8] Jian Ding, Nan Xue, Gui-Song Xia, and Dengxin Dai. Decoupling zero-shot semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11583–11592, 2022.
* [9] Michael Ebner, Guotai Wang, Wenqi Li, Michael Aertsen, Premal A Patel, Rosalind Aughwane, Andrew Melbourne, Tom Doel, Anna L David, Jan Deprest, et al. An automated localization, segmentation and reconstruction framework for fetal brain mri. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 313–320. Springer, 2018.
* [10] Haruki Fujii, Hayato Tanaka, Momoko Ikeuchi, and Kazuhiro Hotta. X-net with different loss functions for cell image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3793–3800, 2021.
* [11] Stephan Gerhard, Jan Funke, Julien Martel, Albert Cardona, and Richard Fetter. Segmented anisotropic sstem dataset of neural tissue. figshare, pages 0–0, 2013.
* [12] Simon Graham and Nasir M Rajpoot. Sams-net: Stain-aware multi-scale network for instance-based nuclei segmentation in histology images. In 2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018), pages 590–594. IEEE, 2018.
* [13] Yuki Hiramatsu, Kazuhiro Hotta, Ayako Imanishi, Michiyuki Matsuda, and Kenta Terai. Cell image segmentation by integrating multiple cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 2205–2211, 2018.
* [14] Debesh Jha, Pia H Smedsrud, Michael A Riegler, Pål Halvorsen, Thomas de Lange, Dag Johansen, and Håvard D Johansen. Kvasir-seg: A segmented polyp dataset. In MultiMedia Modeling: 26th International Conference, MMM 2020, Daejeon, South Korea, January 5–8, 2020, Proceedings, Part II 26, pages 451–462. Springer, 2020.
* [15] Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIII, pages 709–727. Springer, 2022.
* [16] David Joon Ho, Chichen Fu, Paul Salama, Kenneth W Dunn, and Edward J Delp. Nuclei segmentation of fluorescence microscopy images using three dimensional convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 82–90, 2017.
* [17] Donghyeon Kwon and Suha Kwak. Semi-supervised semantic segmentation with error localization network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9957–9967, 2022.
* [18] Chunbo Lang, Gong Cheng, Binfei Tu, and Junwei Han. Learning what not to segment: A new perspective on few-shot segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8057–8067, 2022.
* [19] Gen Li, Varun Jampani, Laura Sevilla-Lara, Deqing Sun, Jonghyun Kim, and Joongkyu Kim. Adaptive prototype learning and allocation for few-shot segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8334–8343, 2021.
* [20] Quande Liu, Youpeng Wen, Jianhua Han, Chunjing Xu, Hang Xu, and Xiaodan Liang. Open-world semantic segmentation via contrasting and clustering vision-language embedding. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XX, pages 275–292. Springer, 2022.
* [21] Weide Liu, Chi Zhang, Guosheng Lin, and Fayao Liu. Crnet: Cross-reference networks for few-shot segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4165–4173, 2020.
* [22] Timo Lüddecke and Alexander Ecker. Image segmentation using text and image prompts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7086–7096, 2022.
* [23] Michael Majurski, Petru Manescu, Sarala Padi, Nicholas Schaub, Nathan Hotaling, Carl Simon Jr, and Peter Bajcsy. Cell image segmentation using generative adversarial networks, transfer learning, and augmentations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019.
* [24] Hasnain Raza, Mahdyar Ravanbakhsh, Tassilo Klein, and Moin Nabi. Weakly supervised one shot segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, pages 0–0, 2019.
* [25] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.
* [26] Abhijit Guha Roy, Sailesh Conjeti, Nassir Navab, and Christian Wachinger. Inherent brain segmentation quality control from fully convnet monte carlo sampling. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 664–672. Springer, 2018.
* [27] Jo Schlemper, Ozan Oktay, Wenjia Bai, Daniel C Castro, Jinming Duan, Chen Qin, Jo V Hajnal, and Daniel Rueckert. Cardiac mr segmentation from undersampled k-space using deep latent representation learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 259–267. Springer, 2018.
* [28] Amirreza Shaban, Shray Bansal, Zhen Liu, Irfan Essa, and Byron Boots. One-shot learning for semantic segmentation. 2017.
* [29] Eisuke Shibuya and Kazuhiro Hotta. Cell image segmentation by using feedback and convolutional lstm. The Visual Computer, 38(11):3791–3801, 2022.
* [30] Suprosanna Shit, Johannes C Paetzold, Anjany Sekuboyina, Ivan Ezhov, Alexander Unger, Andrey Zhylka, Josien PW Pluim, Ulrich Bauer, and Bjoern H Menze. cldice-a novel topology-preserving loss function for tubular structure segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16560–16569, 2021.
* [31] Athanasios Tragakis, Chaitanya Kaul, Roderick Murray-Smith, and Dirk Husmeier. The fully convolutional transformer for medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3660–3669, 2023.
* [32] Kuo-Kun Tseng, Ran Zhang, Chien-Ming Chen, and Mohammad Mehedi Hassan. Dnetunet: a semi-supervised cnn of medical image segmentation for super-computing ai service. The Journal of Supercomputing, 77:3594–3615, 2021.
* [33] Yuchao Wang, Haochen Wang, Yujun Shen, Jingjing Fei, Wei Li, Guoqiang Jin, Liwei Wu, Rui Zhao, and Xinyi Le. Semi-supervised semantic segmentation using unreliable pseudo-labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4248–4257, 2022.
* [34] Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, and Tomas Pfister. Learning to prompt for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 139–149, 2022.
* [35] Huisi Wu, Zhaoze Wang, Youyi Song, Lin Yang, and Jing Qin. Cross-patch dense contrastive learning for semi-supervised segmentation of cellular nuclei in histopathologic images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11666–11675, 2022.
* [36] Yanyu Xu, Xinxing Xu, Lei Jin, Shenghua Gao, Rick Siow Mong Goh, Daniel SW Ting, and Yong Liu. Partially-supervised learning for vessel segmentation in ocular images. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part I 24, pages 271–281. Springer, 2021.
* [37] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16816–16825, 2022.
* [38] Yanning Zhou, Hao Chen, Huangjing Lin, and Pheng-Ann Heng. Deep semi-supervised knowledge distillation for overlapping cervical cell instance segmentation. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part I 23, pages 521–531. Springer, 2020.
# Quota Trees
Tad White IDA Center for Computing Sciences, 17100 Science Drive, Bowie, MD
20715-4300 tad(at)super(dot)org
(Date: July 6, 2017)
###### Abstract.
We introduce the notion of quota trees in directed graphs. Given a nonnegative
integer “quota” for each vertex of a directed multigraph $G$, a quota tree is
an immersed rooted tree which hits each vertex of $G$ the prescribed number of
times. When the quotas are all one, the tree is actually embedded and we
recover the usual notion of a spanning arborescence (directed spanning tree).
The usual algorithms which produce spanning arborescences with various
properties typically have (sometimes more complicated) “quota” analogues.
Our original motivation for studying quota trees was the problem of
characterizing the sizes of the Myhill-Nerode equivalence classes in a
connected deterministic finite-state automaton recognizing a given regular
language. We show that the obstruction to realizing a given set of M-N class
sizes is precisely the existence of a suitable quota tree.
In this paper we develop the basic theory of quota trees. We give necessary
and sufficient conditions for the existence of a quota tree (or forest) over a
given directed graph with specified quotas, solving the M-N class size problem
as a special case. We discuss some potential applications of quota trees and
forests, and connect them to the $k$ lightest paths problem. We give two
proofs of the main theorem: one based on an algorithmic loop invariant, and
one based on direct enumeration of quota trees. For the latter, we use
Lagrange inversion to derive a formula which vastly generalizes both the
matrix-tree theorem and Cayley’s formula for counting labeled trees. We give
an efficient algorithm to sample uniformly from the set of forests with given
quotas, as well as a generalization of Edmonds’ algorithm for computing a
minimum-weight quota forest.
###### Key words and phrases:
graph traversal, graph search, automata, DFA, regular languages, Myhill-
Nerode, private information retrieval, graph immersions, arborescences,
spanning trees, Edmonds’ algorithm, lightest paths, matrix-tree, random trees,
Cayley formula, Lagrange inversion, Narayana numbers, combinatorial
reciprocity
###### 2010 Mathematics Subject Classification:
05C30, 05C85, 68R10
## 1\. Motivation and definitions
A recently proposed scheme in the area of private information retrieval [6]
rests in part on the ability to construct arbitrarily complex deterministic
finite automata (DFAs) recognizing a regular language $\mathcal{L}$. While the
theory of simplifying, or minimizing, a finite-state automaton is well known,
the inverse problem of “complicating” a DFA leads to interesting questions
about the structure of the set of DFAs recognizing $\mathcal{L}$.
The Myhill-Nerode theorem implies the existence of a unique minimal DFA
$\mathcal{D}_{\mathcal{L}}$ which recognizes $\mathcal{L}$.
$\mathcal{D}_{\mathcal{L}}$ is a quotient of any connected111Think of a DFA as
a graph $G$ having an initial node and labeled edges coming out of each node;
the DFA is connected if any node in $G$ can be reached from the initial node.
DFA $\mathcal{D}$ recognizing $\mathcal{L}$; that is, the states of
$\mathcal{D}$ can be grouped into equivalence classes, with one class for each
state of $\mathcal{D}_{\mathcal{L}}$, such that the transitions in
$\mathcal{D}$ are coherent with respect to these classes. So in order to
understand the set of connected DFAs which can recognize $\mathcal{L}$, one
wants to know what sizes these equivalence classes can take, and to have an
effective algorithm for constructing a connected DFA with given class sizes.
(Connectedness is the key issue here; if $\mathcal{D}$ is allowed to have
unreachable nodes, then there is no constraint on the sizes other than
positivity.)
The problem turns out to reduce to a very natural graph search
problem. (Throughout this paper, we will often use the term “graph” to mean
what is usually called a directed multigraph; that is, edges are directed, and
both loops and multiple edges are allowed. We will frequently encounter
directed trees, with edges directed away from the root; these are typically
called (out-)arborescences in the literature. Accordingly, forests of
out-directed trees should be called something like silvations. But we will
stubbornly just use “trees” and “forests.”) In particular, it turns out that a
connected DFA with specified Myhill-Nerode class sizes exists iff one can
construct a directed tree $T$, together with an immersion $f:T\to G$, such
that the sizes of the vertex preimages match the desired equivalence class
sizes; we call $T$ a “quota tree.” When it exists, a suitable $T$ can be found
via a simple modification of standard graph traversal in which vertices are
visited multiple times, according to the class sizes; $T$ records the
traversal just as an ordinary graph search is recorded via a spanning tree.
$T$ can then be extended (in many ways) to a DFA by adding missing
transitions.
It is easy to interpret this type of graph search in applications other than
automata; the theory expresses itself most naturally in a broader context. In
section 2 we describe some scenarios in which quota trees arise naturally;
these illustrate some quota versions of standard spanning tree optimization
problems. In section 3 we formally define quota trees and forests and state
the main results. Section 4 introduces the corresponding variant of graph
search, called quota search. In section 5 we prove the “enough arrows”
theorem, which gives necessary and sufficient conditions for the existence of
quota trees (or forests) with specified quotas. In section 6 we discuss some
applications, particularly DFAs and the $k$ lightest path problem. In section
7 we address the problem of enumerating quota trees; our primary tool is the
multivariate Lagrange inversion formula. The results of this section give a
much more precise version of the “enough arrows” theorem, which vastly
generalizes both the matrix-tree theorem and Cayley’s formula for counting
labeled trees. In section 8 we strengthen the enumeration results to sample
uniformly from the set of trees (or forests) with given quotas. In section 9
we give an algorithm for finding minimal-weight quota forests. Finally, in
section 10, we identify a few areas for further research.
## 2\. Examples
Before giving formal definitions, we present a few scenarios in which quota
trees arise naturally, so that the reader can choose a comfortable motivating
context for the remainder of the paper.
#### A coupon game
A dealer has a supply of coupon books of various types; all books of a given
type are identical. Each coupon in a book allows you to buy another coupon
book at a particular price. (For example, it might be that in an $A$ book,
coupon 1 is for another $A$ book at $5, coupons 2 and 3 are for $B$ books at
$2 and $3 respectively, and coupon 4 is good for a free $D$ book.) You’re
given one or more coupon books to start with; you win if you can collect all
of the dealer’s coupon books (and you’d like to do so as cheaply as possible.)
You know how many books of each type the dealer has, and what coupons are in
what types of book. Is it possible to collect all the coupons? If so, in how
many different ways, and what is the minimum cost?
#### Network configuration
You have a supply of network devices of various types; all devices of a given
type are identical. Each type has a single input port and several output
ports, each of which can talk to a specific type of device. (For example, an
$A$ device might have an $A$ port, two $B$ ports and a $D$ port, a $B$ device
might have no output ports, and so on.) You would like to connect all of your
devices together so that a message from one particular device can then be
propagated to all of the other devices. Is this possible? If so, in how many
ways, and what configuration minimizes the number of intermediate devices on
each path? When there is only one device of each type, this is a spanning tree
problem.
#### $k$ lightest paths
Given a directed graph $G$, with nonnegative weights on the edges, and an
integer $k\geq 1$, compute the $k$ lightest paths from one or more given
source nodes to each vertex in $G$. This can be interpreted as a minimum quota
tree problem.
#### Tree coloring
Many tree-coloring problems are naturally interpreted as quota-tree questions.
For example, suppose we have $n$ colors, and a subset $S_{i}\subset[n]$ for
each $i\in[n]$. How many ways can we color a rooted tree such that $q_{i}$
nodes have color $i$, and such that the children of a color-$i$ node get
distinct colors selected from $S_{i}$? (For example, for two colors, if there
are no restrictions we get Narayana numbers; if blue nodes can only have red
children we get Motzkin coefficients. See section 7 for more examples.)
## 3\. Quota trees
By a directed multigraph we will mean a tuple $(V,E,i,t)$ where $V$ is a set
of vertices, $E$ is a set of edges, and $i:E\to V$ and $t:E\to V$ return the
initial and terminal vertices of each edge. Edges are oriented; we say an edge
$e$ goes from $i(e)$ to $t(e)$. We may abuse notation by writing $v\to w$ to
mean there is an edge in $G$ from $v$ to $w$, but as $G$ is a multigraph there
may be other edges as well. In particular, loops are allowed (that is, one may
have $t(e)=i(e)$ for some edges $e$) and the edges from $v$ to $w$ are all
distinguishable (or “labeled”) as they are distinct elements of $E$.)
A mapping $f:G\to H$ of multigraphs sends vertices to vertices, edges to
edges, and respects $i$ and $t$: thus, if $e$ is an edge from $v$ to $w$ in
$G$, then $f(e)$ is an edge in $H$ from $f(v)$ to $f(w)$.
Define the instar (resp. outstar) of a vertex $v$ to be the set of incoming
(resp. outgoing) edges at $v$:
$\mathord{\to}v=\\{e\mid t(e)=v\\};\qquad v\mathord{\to}=\\{e\mid i(e)=v\\}$
We say a map $f:G\to H$ is an out-immersion, or simply an immersion, if it
maps $v\mathord{\to}$ injectively into $f(v)\mathord{\to}$. We define a cusp
in $G$ under $f$ to be a pair of edges $e_{1}\neq e_{2}\in v\mathord{\to}$
with $f(e_{1})=f(e_{2})$; thus $f$ is an immersion iff $G$ has no cusps under
$f$.
A quota is a nonnegative-valued function $q:V(G)\to\mathbf{Z}$. A quota tree
with root $*\in V(G)$ and quota $q$ is an immersion $f:T\to G$ where
$(T,\tilde{*})$ is a rooted tree, $f(\tilde{*})=*$, and $|f^{-1}(v)|=q(v)$ for
all $v\in V(G)$. Note that if $q(v)\leq 1$ for all $v\in V(G)$, then the map
$f$ is actually an embedding, and if $q(v)$ is identically $1$, the image
$f(T)$ is a (rooted, directed) spanning tree of $G$.
Finally, a quota forest with start portfolio $s:V(G)\to\mathbf{Z}$ is a
(disjoint) union of quota trees $F=\\{T_{v,i}\mid v\in V(G),1\leq i\leq
s(v)\\}$ such that $T_{v,i}$ is rooted at $v$. The forest also immerses into
$G$; the quota it achieves is the sum of the quotas of the component forests.
Note that we treat all the roots as distinguishable: if a forest contains two
or more non-isomorphic quota trees with roots mapping to the same vertex of
$G$, permuting those trees gives a different quota forest. We will refer to a
forest with quota $q$ and start portfolio $s$ as a $(G,q,s)$-forest (or
$(G,q,s)$-tree, if $||s||_{1}=1$).
A graph with quotas, together with both an example and a nonexample of quota
trees, appears in Figure 1.
Figure 1. A digraph (a) with quotas and a single-vertex start portfolio; (b) is a valid quota tree, while (c) is not.
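To make these definitions concrete, the following Python sketch checks the two
defining conditions of a quota tree for an explicitly given map $f$: the tree
is encoded by listing each node's child edges, and `f_edges`/`f_vertices`
record the images of tree edges/nodes in $G$. This is our illustration of the
definitions, not code from the paper.

```python
from collections import Counter

def is_immersion(tree_children, f_edges):
    """Immersion condition: at every tree node, the outgoing (child)
    edges map to pairwise-distinct edges of G, i.e. there are no cusps."""
    for kids in tree_children:
        images = [f_edges[e] for e in kids]
        if len(images) != len(set(images)):
            return False
    return True

def meets_quota(f_vertices, quota):
    """Quota condition: |f^{-1}(v)| = quota[v] for every vertex v of G."""
    counts = Counter(f_vertices)
    return all(counts.get(v, 0) == q for v, q in enumerate(quota))
```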
### Out-coverings
One can think of a spanning tree $G$ as “living in” $G$, but a more natural
home for a quota tree is actually a covering space of $G$, which we will now
describe. We will say a map $\pi:G\to H$ is an out-covering if
$\pi(v\mathord{\to})$ maps bijectively onto $\pi(v)\mathord{\to}$ for all $v$
in $V(G)$. In this situation, given an (out-)immersed tree $f:T\to H$ with
root $w\in H$, and a preimage $v\in\pi^{-1}(w)$, there is a unique lift
$\tilde{f}:T\to G$ with root $v$; the (right) inverse of the operation
$f\mapsto\tilde{f}$ is given by $f\mapsto\pi\circ f$.
As with topological spaces, we can define a universal out-cover by considering
paths from a distinguished vertex. A (finite directed) path in $G$ from $*$ is
a sequence $\\{e_{i}\mid 1\leq i\leq l\\}$ of directed edges of $G$, with
$i(e_{1})={*}$ and $t(e_{i})=i(e_{i+1})$. We define the universal out-cover of
$(G,{*})$ to be the directed graph $(\tilde{G},\tilde{*})$ whose vertices are
the finite directed paths from $*$, having an edge (labeled $e_{i}$) from
$e_{1}\cdots e_{i-1}$ to $e_{1}\cdots e_{i}$. It’s easy to see that
$\tilde{G}$ is a (generally infinite) rooted tree, in which the root
$\tilde{*}$ corresponds to the length-zero path in $G$ from $*$. The natural
map $\pi:\tilde{G}\to G$ taking a directed path to its endpoint in $G$ is an
immersion. Note that the in-degree of each vertex $\tilde{v}\in\tilde{G}$ is
one; the out-degree of $\tilde{v}$ is the same as the out-degree of
$\pi(\tilde{v})$. (In particular, if $G$ is a DFA over an alphabet $\Sigma$,
then $\tilde{G}$ is a regular tree, directed outward from the root, with each
vertex having out-degree $|\Sigma|$.)
With this setup, it is easy to see that if $f:(T,t)\to(G,{*})$ is an immersion
of a rooted directed tree into $G$, then $f$ can be lifted uniquely to a map
$\tilde{f}:(T,t)\to(\tilde{G},\tilde{*})$ such that
$f=\pi\circ\tilde{f}$. (There is a larger “universal cover” that appears in
the literature (see for example [11]), based on paths whose edges need not be
coherently oriented. This is essentially the topological universal cover of
$G$ (see [13]), constructed by ignoring orientations, which also has the same
universal lifting property for immersions of rooted directed trees. However,
the universal out-cover is the smallest space which has this property, and so
is the “natural” home for quota trees. We note that Yamashita and Kaneda, in
their study of computing in anonymous networks, referred to the universal
out-cover $(\tilde{G},\tilde{*})$ as the view of $\tilde{*}$ within the
topological universal cover; see [18].) The map $\tilde{f}$ is injective,
so we can view $T$ as sitting inside of $\tilde{G}$.
## 4\. Quota search
The problems in section 2, as well as the original problem of computing
possible Myhill-Nerode class sizes, correspond to a variant of graph search in
which we are given a positive “quota” $q(v)$ for each $v\in V(G)$, and we wish
to visit each vertex $v$ exactly $q(v)$ times. (When $q(v)=1$ for all $v$,
this is ordinary graph traversal. Setting $q(v)=0$ for any particular $v$ is
legal; it essentially amounts to working in the induced graph $G-\\{v\\}$.) We
refer to this goal as quota search.
We assume familiarity with standard graph traversal as described, for example,
in [3, ch. 22], to which we will make some modifications. Given a directed
graph $G$ and a set $S$ of start vertices, generic graph search keeps track of
discovered but unprocessed vertices in a generic priority queue. As we will be
dealing with multigraphs, and visiting vertices multiple times, we will need
to be more careful to distinguish between an edge from $u$ to $v$ and the pair
$(u,v)$ itself; indeed, it is much easier to describe quota search succinctly
by considering edges rather than vertices. So our algorithm encodes the search
forest $F$ via a predecessor function $\pi:E(F)\to E(F)$, rather than the more
usual $\pi:V(G)\to V(G)$. Accordingly, we replace the usual VisitVertex
procedure with an analogous UseEdge, which inserts an edge taken from the
queue into the search forest.
Recall that the quota forest $F$ does not actually live in $G$, so we must
distinguish between an edge $\tilde{e}$ in $F$ and its image $e=f(\tilde{e})$
under the immersion $f:F\to G$, whose construction will be implicit. An edge
$\tilde{e}$ in the queue should be thought of as living in the universal cover
$\tilde{G}$, not in $G$.
Instead of coloring vertices Black or White according to whether or not they
have been visited, we keep track of the number of remaining visits to a vertex
$v$. Thus, Black and White correspond to quotas of $0$ and $1$ respectively.
In ordinary graph search, we gain nothing by repeating a search from the same
start point, but allowing repeated starts even from the same node can be
useful if we need to arrive at vertices multiple times. Accordingly, we
replace the set $S$ of start vertices with a nonnegative start
portfolio $s:V(G)\to\mathbf{Z}$; $s(v)$ is the number of times a search can be
started from a given vertex. (Thus, if we’re doing a single search from one
particular vertex $w$, we set $s(v)=1$ if $v=w$ and $0$ otherwise.)
Finally, it is useful to distinguish two natural variants of search. In the
“exact” version of quota search, we want our search forest to contain exactly
$s(v)$ trees with root $v$. (This corresponds, in the coupon-game scenario, to
requiring that every coupon in the initial collection be used up.) In the “at-
most” version, the search forest may contain at most $s(v)$ trees with root
$v$; that is, we don’t need to use all of our coupons. The two versions are
closely related:
###### Theorem 1 (exact vs. at most solvability).
A triple $(G,q,s)$ admits an exact quota forest iff it admits an at-most quota
forest and $q(v)\geq s(v)$ for all $v\in V(G)$.
###### Proof.
Since an exact forest actually solves the at-most problem, and clearly
requires $q(v)\geq s(v)$ for all $v\in V(G)$, one direction is trivial. On the
other hand, if we have an at-most quota forest $F$ with fewer than $s(v)$
trees rooted at lifts of $v$, we can simply cut off some of the $q(v)$
occurrences in $F$ of lifts of $v$ from their parents, making them roots of
new trees. This works as long as $q(v)\geq s(v)$. ∎
Both the exact and at-most versions of quota search can be handled with a
single meta-algorithm. In both cases we initialize $Q$ with (sentinel edges
corresponding to) the start portfolio. In order to implement ExactQuotaSearch,
we simply arrange for QueueExtract to return the start portfolio first; to
implement AtMostQuotaSearch, we drop that restriction, in which case the
number of new trees created, and what their roots are, will depend on the
particular queue extraction algorithm.
We capture the resulting generic quota search meta-algorithm as Algorithm 1.
It succeeds if it ends with all quotas reduced to zero. The “enough arrows”
theorem will characterize triples $(G,q,s)$ such that a quota forest exists
(in which case the algorithm is guaranteed to succeed for any specialization
of QueueExtract.)
Algorithm 1 Generic quota search

function NewEdge($\tilde{e}$, $e^{\prime}$)
  Require: $e^{\prime}\in E(G)$; $\tilde{e}\in E(F)$ with $f(t(\tilde{e}))=i(e^{\prime})$
  return a new edge $\tilde{e}^{\prime}$ with $f(\tilde{e}^{\prime})=e^{\prime}$, $\pi(\tilde{e}^{\prime})=\tilde{e}$
end function

function NewSentinelEdge($v$)
  Require: $v\in V(G)$ $\triangleright$ $v$ will be the root of a tree in $F$
  return a new sentinel edge $\tilde{e}$ with $f(\tilde{e})=$ NULL, $f(t(\tilde{e}))=v$
end function

procedure UseEdge($\tilde{e}$)
  Require: an edge $\tilde{e}$ such that $v=f(t(\tilde{e}))$ satisfies $q(v)>0$
  Effect: adds $\tilde{e}$ to $F$, updates $q(v)$, and adds $t(\tilde{e})\mathord{\to}$ to $Q$
  $F\leftarrow F\cup\{\tilde{e}\}$
  $q(v)\leftarrow q(v)-1$
  for $e^{\prime}\in v\mathord{\to}$ do $\triangleright$ in practice, skip $e^{\prime}$ if $q(t(e^{\prime}))$ is already zero
    QueueInsert$(Q,\textsc{NewEdge}(\tilde{e},e^{\prime}))$
  end for
end procedure

function GenericQuotaSearch($G$, $q$, $s$)
  Require: $G$ a directed graph; $q$ and $s$ nonnegative functions on $V(G)$
  Ensure: quota forest $F$, predecessor map $\pi:E(F)\to E(F)$, and immersion $f:F\to G$
  $Q,F\leftarrow\emptyset$
  for $v\in V(G)$, $k\in\{1,\ldots,s(v)\}$ do
    QueueInsert$(Q,\textsc{NewSentinelEdge}(v))$
  end for
  while $Q$ is nonempty do
    $\tilde{e}\leftarrow\textsc{QueueExtract}(Q)$
    if $q(f(t(\tilde{e})))>0$ then
      UseEdge$(\tilde{e})$
    end if
  end while
  return $\pi,f$ if all $q(v)$’s are zero $\triangleright$ else fail
end function
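As a companion to the pseudocode, here is a minimal executable Python sketch of the meta-algorithm. The representation is ours (each vertex maps to a list of out-neighbors, with repeats encoding parallel edges), the generic queue is specialized to FIFO, and the forest is returned as (vertex, parent index) records rather than explicit $\pi$ and $f$ maps.

```python
from collections import deque

def generic_quota_search(G, q, s):
    """Generic quota search: visit each vertex v exactly q(v) times,
    starting searches according to the portfolio s.  Returns the quota
    forest as a list of (vertex, parent_index) records (parent_index
    is None for sentinel/root edges), or None if some quota is unmet."""
    q = dict(q)                           # remaining visits per vertex
    Q = deque()                           # the generic queue (FIFO here)
    F = []                                # forest edges, in order of use
    for v, count in s.items():            # sentinel edges for the portfolio
        for _ in range(count):
            Q.append((v, None))
    while Q:
        v, parent = Q.popleft()           # QueueExtract
        if q.get(v, 0) > 0:               # UseEdge: accept this arrival
            q[v] -= 1
            F.append((v, parent))
            e = len(F) - 1                # index of the new forest edge
            for w in G.get(v, []):        # enqueue every out-edge of v
                Q.append((w, e))
    return F if all(c == 0 for c in q.values()) else None
```

Swapping the deque for a stack, or for a priority queue as in the next subsection, changes only the extraction order.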
#### Algorithm success and achievable parameters
Whenever UseEdge is called, $q(v)$ is the number of remaining required visits
to $v$. Thus the algorithm succeeds (i.e. visits all vertices the required
number of times) if and only if, upon termination, $q(v)=0$ for all $v\in
V(G)$. It turns out that, in contrast with ordinary graph search, success is
not possible for all pairs $(q,s)$. We will call $(G,q,s)$ achievable if some
(and, it turns out, any) quota search in $G$ with start portfolio $s$ achieves
the quotas $q$. (It is easy to see that achievability does not depend on
whether we are talking about “exact” or “at most” quota search.) The “enough
arrows” theorem in the next section precisely characterizes the achievable
parameters.
#### Quota search viewed in $\tilde{G}$
One way to think about quota search is that we replace each vertex $v$ with a
supply of copies of itself; when we “visit” $v$, we’re actually visiting a
fresh copy. When the start portfolio is [a single copy of] a single vertex
$*$, this allows us to describe quota search as occurring in the forward
universal cover $\tilde{G}$ of $G$. Specifically, we do ordinary graph search
in $(\tilde{G},\tilde{*})$, but only visit a vertex $\tilde{v}$ provided
$q(v)>0$, where $v=\pi(\tilde{v})$, in which case we decrement $q(v)$.
Finally, if the start portfolio $s$ is a multiset of vertices, we effectively
work in the disjoint union of $s(v)$ copies of $(\tilde{G},\tilde{v})$ for all
$v$. Whether the search trees are built sequentially, or at the same time, is
controlled by the order in which QueueExtract selects edges for consideration.
#### Optimization problems
As with ordinary graph search, the versatility of this meta-algorithm comes
from the variety of ways of choosing which element to extract from $Q$ at each
step. By specializing $Q$ to be a FIFO queue, a LIFO stack, or a more general
priority queue, we obtain quota variants of algorithms such as breadth-first
search, depth-first search, or Dijkstra’s algorithm.
If we are optimizing an objective function which depends only on the forest
$F$, but not the particular traversal of $F$, then the data associated with an
edge $\tilde{e}$ in the queue $Q$ must depend only on the unique path to
$\tilde{e}$ in $F$; we will call such data intrinsic. For example, if the
edges of $G$ have weights, it is natural to consider the “minimum quota
forest” problem, a generalization of the minimum spanning tree problem in
which we wish to minimize the sum of the weights of the edges in a quota
forest with the given start portfolio and quotas. In this case we take the key
for an edge $\tilde{e}$ in $Q$ to be the weight of its image $e=f(\tilde{e})$
in $G$. Similarly, a quota version of Dijkstra’s algorithm is obtained by
taking the key to be the sum of the weights in the path to $\tilde{e}$ in the
search forest; see section 6. In both cases the keys are intrinsic.
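As a sketch of one such specialization (names ours), replacing the FIFO queue with a binary heap keyed by accumulated path weight gives the quota version of Dijkstra's algorithm just mentioned; pushing `wt` instead of `d + wt` below would instead key on the image edge's weight alone, the greedy minimum-quota-forest variant.

```python
import heapq

def quota_dijkstra(G, q, s):
    """Quota search with priority key equal to the weight of the path
    to each queued edge (an intrinsic, append-monotonic key).
    G maps v to a list of (neighbor, weight) pairs.  Returns forest
    records (dist, vertex, parent_index), or None on failure."""
    q = dict(q)
    heap, F, tie = [], [], 0              # tie counter keeps tuples comparable
    for v, count in s.items():
        for _ in range(count):
            heapq.heappush(heap, (0, tie, v, None)); tie += 1
    while heap:
        d, _, v, parent = heapq.heappop(heap)   # QueueExtract: min key
        if q.get(v, 0) > 0:                     # UseEdge
            q[v] -= 1
            F.append((d, v, parent))
            e = len(F) - 1
            for w, wt in G.get(v, []):
                heapq.heappush(heap, (d + wt, tie, w, e)); tie += 1
    return F if all(c == 0 for c in q.values()) else None
```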
It may be tempting, knowing that a vertex will be visited $q(v)$ times, to
assign the $k$-th visit to a vertex a cost which depends on $k$. However, this
is not intrinsic: different traversals of the same forest could then result in
different tree costs. But it would be perfectly legal to assign edge
$\tilde{e}$ a cost which depends on the number of visits to $t(\tilde{e})$ (or
any other nodes) on the path in $F$ to $\tilde{e}$.
Of course, not all graph optimization problems are solvable via graph search.
For instance, a very natural problem is to find a minimum-weight quota tree
(or forest) given weights on the edges of $G$; here we must emphasize that we
really mean quota arborescence (or branching.) When the quotas are at most
$1$, this is just the minimum arborescence (or branching) problem. An
algorithm for solving this problem has been given by Edmonds [4] and others.
Rather than accreting a tree via graph search, it iterates through a sequence
of putative solutions. Edmonds’ algorithm adapts beautifully to find minimum
quota trees (and, in particular, find the minimum-cost solution to the coupon
collecting problem in section 2.) We discuss minimum-weight quota trees in
section 9.
#### Relaxation
Many natural priority queue keys have a property which allows us to maintain a
smaller queue. As noted previously, an intrinsic cost associated to an edge
$\tilde{e}$ in $Q$ is some function $c(\tilde{p})$ of the unique path
$\tilde{p}=\tilde{e}_{1}\cdots\tilde{e}_{k}$ (with $\tilde{e}_{k}=\tilde{e}$)
in the quota forest from the root to $\tilde{e}$. We say $c$ is
append-monotonic if key order is invariant under appending a common path:
that is, if two paths $\tilde{p}_{1}$ and $\tilde{p}_{2}$ satisfy
$c(\tilde{p}_{1})\leq c(\tilde{p}_{2})$ and both end at lifts of a common
vertex $v$, then for any path $p_{3}$ in $G$ starting at $v$ we have
$c(\tilde{p}_{1}p_{3})\leq c(\tilde{p}_{2}p_{3}).$
If $c$ is append-monotonic, we know the best extensions of paths will be
extensions of best paths. So we can just keep track of the $q(v)$ best paths
to each vertex $v$. This is the quota-search analogue of what is called
relaxation in ordinary graph search (see [3, Ch. 24]): namely, when we arrive
at a previously seen vertex via a new path, we can keep the better of the two
paths and discard the other. In generic quota search, we might handle this
with a min-max queue of size $q(v)$ at each node $v$, in which case a generic
implementation of QueueExtract via two stages of binary heaps would take
$O(\lg V+\lg q(v))$ operations.
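A sketch of this bookkeeping (names ours): one size-bounded max-heap per vertex admits a new tentative key only if it ranks among the $q(v)$ best seen so far.

```python
import heapq

def admit(best, v, key, q):
    """Keep only the q[v] best (smallest) keys seen for vertex v.
    best[v] is a max-heap simulated by negating keys.  Returns True
    iff `key` ranks among the q[v] best so far and was retained."""
    h = best.setdefault(v, [])
    if len(h) < q[v]:
        heapq.heappush(h, -key)
        return True
    if h and key < -h[0]:            # strictly better than current worst
        heapq.heapreplace(h, -key)   # pop worst and push new in one step
        return True
    return False
```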
#### Complexity analysis
In the generic version of quota search, we visit each vertex $v$ $q(v)$ times,
doing one queue extraction and $|v\mathord{\to}|$ insertions. So the number of
insertions and extractions (and space) required is $\sum_{v}q(v)Adj(v)$ where
$Adj(v)=|v\mathord{\to}|+1$. When $q(v)=1$ for all $v$, and $Q$ is a simple
queue or stack (so that insertions and extractions can be done in constant
time), note that this reduces to $O(V+E)$, the complexity of ordinary graph
search.
If $Q$ is a priority queue, this leads to a complexity of
$O\left(\sum_{v}q(v)Adj(v)(\lg\sum_{v}q(v)Adj(v))\right)$
operations if binary heaps are used. If the queue keys are append-monotonic,
we can apply relaxation as above, reducing the work to
$O\left(\sum_{v}q(v)Adj(v)(\lg V+\lg q(v))\right).$
(This reduces to $O(E\lg V)$ when the quotas are identically 1.) As usual,
more sophisticated heap structures can provide further asymptotic improvement.
## 5\. The Enough Arrows theorem
In this section, we identify two conditions which the data $(G,q,s)$ must
satisfy in order for quota search to succeed; one is global, the other is
local. We show that these conditions are in fact sufficient: there exists a
quota forest meeting the specified quotas if and only if these conditions
hold. (In section 7 we will give an independent proof based on direct
enumeration of quota forests.)
Global:
$(G,q,s)$ is _connected_ if, for every node $v$ with $q(v)>0$, there exists a
node $u$ with $s(u)>0$ and a path in $G$ from $u$ to $v$. Note this only
depends on the support of $q$ and $s$.
Local:
$(G,q,s)$ has _enough arrows_ if the inequality
(1) $s(w)+\mathbf{in}(w)\geq q(w)$
holds for each $w\in V(G)$, where $\mathbf{in}(w):=\sum_{v}q(v)m_{vw}$.
We remark that the enough arrows condition can be written as
$\mathbf{s}+\mathbf{q}M\geq\mathbf{q},$
where $\mathbf{q}$ and $\mathbf{s}$ are the vectors of values of $q$ and $s$
respectively, and $M$ is the adjacency matrix of $G$.
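Both conditions are mechanical to check. The following Python sketch (representation and names ours: each vertex maps to its list of out-neighbors, with repeats encoding parallel edges) tests connectivity of $(G,q,s)$ and inequality (1) directly; by the theorem below, together they decide at-most achievability.

```python
from collections import deque

def achievable(G, q, s):
    """True iff (G, q, s) is connected and has enough arrows."""
    # Global: every v with q(v) > 0 is reachable from the support of s.
    seen = set()
    frontier = deque(v for v in G if s.get(v, 0) > 0)
    while frontier:
        u = frontier.popleft()
        if u not in seen:
            seen.add(u)
            frontier.extend(G.get(u, []))
    if any(q.get(v, 0) > 0 and v not in seen for v in G):
        return False
    # Local (1): s(w) + sum_v q(v) * m_vw >= q(w) for every w.
    incoming = {w: 0 for w in G}
    for v in G:
        for w in G[v]:
            incoming[w] += q.get(v, 0)   # one unit per parallel edge v -> w
    return all(s.get(w, 0) + incoming[w] >= q.get(w, 0) for w in G)
```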
Connectivity is clearly necessary in order to achieve even one visit to every
vertex with positive quota. To see why having enough arrows is necessary, note
that each visit to node $w$ arises either by starting at $w$, or by following
an edge from another node $v$. We visit node $v$ $q(v)$ times; each time, we
have $m_{vw}$ edges we can potentially follow to node $w$. Thus the maximum
number of arrivals at node $w$ is the left-hand side of (1), which must be at
least $q(w)$.
A note on terminology: especially in the context of automata, directed graphs
are typically drawn with arrows representing both transitions and initial
states. The left-hand side of the inequality (1) counts the maximum number of
arrows that can be drawn into each class (see figure 1); the right-hand side
represents the number of targets that need to be hit by these arrows.
###### Theorem 2 (enough arrows).
With the notation above, generic at-most quota search in $G$ with start
portfolio $s$ will achieve the quotas $q$ if and only if $(G,q,s)$ is
connected and has enough arrows.
###### Proof.
We have already argued the necessity of these conditions. The converse is
essentially by induction on $\sum_{v,w}q(v)m_{vw}$, and will follow from the
fact that connectivity and having enough arrows are invariant under the main
loop. Connectivity is automatically preserved.
So suppose we have enough arrows entering the main loop. At each iteration,
the queue $Q$ represents an effective “at most” start portfolio; so let $s(v)$
denote the number of edges $e$ in $Q$ with $t(e)=v$. Before the QueueExtract,
we have $s(v)>0$; it decreases by one with the extraction. We consider two
cases:
Case 1: $q(v)=0$. In this case the inequality in the $v$-th coordinate of (1)
continues to hold since the right-hand side is zero; all other coordinates in
the inequality are unchanged. So (1) is preserved in this case.
Case 2: $q(v)>0$. In this case UseEdge adds, for each $w$, $m_{vw}$ edges
$v\to w$ into $Q$, and decrements $q(v)$. Thus the increase in $s$ and the
decrease in the sum on the left-hand side of (1) exactly cancel out.
Hence both connectedness and having enough arrows are preserved. At the end of
the algorithm, there are no edges left in $Q$, so $\mathbf{s}=\mathbf{0}$;
connectivity then forces $\mathbf{q}=\mathbf{0}$, since any vertex with
positive quota would require a start vertex with positive $s$. That is, we
have reduced all the quotas to zero, and the algorithm has succeeded. ∎
#### Remarks
We revisit the special case of ordinary graph search of a directed graph $G$
from a particular vertex $*$. Assume all vertices are reachable from $*$. We
have $q(v)=1$ for all $v\in V(G)$. But, by connectivity, each vertex in $G$
must either have an edge coming into it, or must be the start vertex $*$.
Thus, in this special case, having enough arrows is a consequence of
connectivity, explaining why the issue does not become apparent for ordinary
graph traversal.
The enough arrows theorem has a very similar flavor to the following theorem
[15, Theorem 5.6.1] characterizing directed graphs with Eulerian circuits;
namely, a global connectivity condition and a local degree condition. We state
it here since we’ll need it in section 9.
###### Theorem 3.
A digraph without isolated vertices is Eulerian if and only if it is connected
and balanced (i.e. $\textrm{indeg}(v)=\textrm{outdeg}(v)$ for all vertices
$v$.)
## 6\. Applications
### DFA expansion and Myhill-Nerode class sizes
A deterministic finite-state automaton, or DFA, is a tuple
$\mathcal{D}=(S,\Sigma,\delta,i,a)$, where $S$ is a finite set of states,
$\Sigma$ is an alphabet, $\delta:S\times\Sigma\to S$ is the transition map,
$i\in S$ is the initial state, and $a\subseteq S$ is the set of accept states.
It is useful to think of a DFA as a directed multigraph over $S$; for each
state $v\in S$ and $s\in\Sigma$ there is a directed edge from $v$ to
$\delta(v,s)$ with label $s$.
The transition map $\delta$ has a unique extension to a map
$\delta:S\times\Sigma^{*}\to S$ satisfying
$\delta(s,w_{1}w_{2})=\delta(\delta(s,w_{1}),w_{2})$
for all states $s$ and strings $w_{1},w_{2}\in\Sigma^{*}$. (That is, $\delta$
defines a semigroup action of $\Sigma^{*}$ on $S$: $\delta(s,w)$ just starts
at $s$ and then applies the unary operators specified by the symbols in $w$.)
The automaton $\mathcal{D}$ accepts a string $w$ iff
$\delta(i,w)\in a$; that is, the path defined by $w$, starting at the initial
state, ends at an accept state. The automaton is called connected if the
extension $\delta:S\times\Sigma^{*}\to S$ is onto; that is, all states are
reachable by some path from the initial state. The set of strings accepted by
$\mathcal{D}$ is called the language recognized by $\mathcal{D}$. For the
purposes of this paper, a language is regular iff it is recognized by some
DFA.
Given a regular language $\mathcal{L}$, the Myhill-Nerode theorem [10, Ch. 3]
implies that there is a unique minimal DFA $\mathcal{D}_{\mathcal{L}}$ which
recognizes $\mathcal{L}$. Furthermore, if $\mathcal{D}$ is any connected DFA
recognizing $\mathcal{L}$, then there is a quotient map
$\phi:\mathcal{D}\to\mathcal{D}_{\mathcal{L}}$ which is a homomorphism in the
sense of universal algebra [1]. That is, $\phi$ maps each state of
$\mathcal{D}$ to a state of $\mathcal{D}_{\mathcal{L}}$, such that transitions
are preserved:
(2) $\delta(\phi(v),s)=\phi(\delta(v,s))$ for $v\in\mathcal{D}$, $s\in\Sigma$.
Not surprisingly, $\mathcal{D}_{\mathcal{L}}$ is connected (for if it had
unreachable states, those could be omitted to yield a smaller automaton
recognizing $\mathcal{L}$.)
As in [6], we might want to be able to construct and count larger DFAs
recognizing $\mathcal{L}$. We can use the enough arrows theorem to effectively
characterize the possible sizes of the Myhill-Nerode equivalence classes in a
connected DFA recognizing a language $\mathcal{L}$. If $\mathcal{D}$ is
connected, all of its states are reachable by a graph search from the initial
state of $\mathcal{D}$. The Myhill-Nerode theorem implies that the resulting
search tree in $\mathcal{D}$ projects to a quota search in
$\mathcal{D}_{\mathcal{L}}$, with the quota for each state of
$\mathcal{D}_{\mathcal{L}}$ being the size of its Myhill-Nerode equivalence class.
Therefore, the graph of $\mathcal{D}_{\mathcal{L}}$, with these quotas and the
start portfolio consisting of just the initial state, must satisfy (1).
Furthermore, the converse direction of the theorem implies that _any_
collection of class sizes having enough arrows is achievable, since the
connectedness of the minimal DFA $\mathcal{D}_{\mathcal{L}}$ is automatic. The
quota search tree that witnesses the connectivity of $\mathcal{D}$ represents
a construction of part of the transition map $\delta$ for $\mathcal{D}$, but
there will generally remain transitions to be assigned. These remaining transitions can
be assigned completely arbitrarily, subject to the homomorphism constraint
(2). This not only characterizes the sizes of the Myhill-Nerode classes that
can arise in a DFA recognizing $\mathcal{L}$, it yields an efficient algorithm
for constructing all DFAs realizing those sizes, when the “enough arrows”
condition holds. We refer to this process as quota-based DFA expansion.
We emphasize that satisfying the connectivity and enough arrows conditions
does _not_ guarantee connectivity of a given extension structure. In
particular, it is not true that if $\mathcal{D}$ is a connected DFA, and
$\mathcal{D}^{\prime}\to\mathcal{D}$ is a quotient map with preimage sizes
satisfying (1), then $\mathcal{D}^{\prime}$ is connected. But the existence of
some connected $\mathcal{D}^{\prime}$ is guaranteed.
#### Example: the Fibonacci language
At the top of Figure 2 is the minimal DFA $\mathcal{D}_{\mathcal{L}}$
recognizing the “Fibonacci language” $\mathcal{L}$ of strings over
$\Sigma=\\{a,b\\}$ without two consecutive $b$’s. We expand this DFA to obtain
one with Myhill-Nerode class sizes $3$, $2$ and $3$ respectively, which
satisfies the “enough arrows” condition (1) since
$(1\ 0\ 0)+(3\ 2\ 3)\begin{pmatrix}1&1&0\\ 1&0&1\\ 0&0&2\end{pmatrix}=(6\ 3\ 8)\geq(3\ 2\ 3).$
Select an initial node for $\mathcal{D}$ which maps down to the initial node
of $\mathcal{D}_{\mathcal{L}}$, and do a quota search; the red arrows in the
lower diagram in Figure 2 show the results of a (breadth-first) quota search.
This leaves some remaining transitions which can be filled in arbitrarily, as
long as they map to the correct class. One set of choices for these arrows is
shown in green.
The enough arrows theorem allows us to precisely characterize the possible
class size vectors $(x,y,z)$. Condition (1) requires
$(1+x+y,\ x,\ y+2z)\geq(x,y,z)$
coordinatewise; the first and last of these are vacuous (in general, nodes
with self-loops give a vacuous constraint). So the necessary and sufficient
condition for $(x,y,z)$ to be the Myhill-Nerode class sizes of an automaton
recognizing $\mathcal{L}$ is simply that $x\geq y\geq 1$ and $z\geq 1$.
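These conditions can be confirmed mechanically with the `achievable` sketch from section 5 (vertex numbering and adjacency lists, transcribed from the matrix above, are ours):

```python
# States 0, 1, 2 of the minimal Fibonacci DFA; the repeated neighbor
# encodes the two loops at state 2 (matrix [[1,1,0],[1,0,1],[0,0,2]]).
fib = {0: [0, 1], 1: [0, 2], 2: [2, 2]}
print(achievable(fib, {0: 3, 1: 2, 2: 3}, {0: 1}))  # True: (6,3,8) >= (3,2,3)
print(achievable(fib, {0: 1, 1: 2, 2: 1}, {0: 1}))  # False: x >= y fails
```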
Figure 2. Expanding the Fibonacci DFA to a larger connected DFA via quota search. (a) The original DFA, with quotas $(3,2,3)$; (b) the expanded DFA. The red edges form a quota tree, guaranteeing connectivity; the green edges are a random completion to a DFA.
Quota-based DFA construction achieves the goal of effectively generating
random connected DFAs recognizing $\mathcal{L}$, with specified Myhill-Nerode
equivalence class sizes, in such a way that any connected DFA can in principle
be produced. The Myhill-Nerode theorem guarantees that any connected DFA
$\mathcal{D}$ recognizing $\mathcal{L}$ has a quotient map down to the
(connected) minimal DFA $\mathcal{D}_{\mathcal{L}}$. The connectivity of
$\mathcal{D}$ is witnessed by some search tree $\mathcal{T}$ in the universal
path cover of $\mathcal{D}$. If we randomize QueueExtract so that it returns a
randomly selected element of $Q$, we guarantee that quota search can return
$\mathcal{T}$ as the search forest. (At this point, connectivity of
$\mathcal{D}$ implies that the Myhill-Nerode class sizes must satisfy the
“enough arrows” condition.) By assigning the remaining transitions randomly,
we guarantee that quota-based DFA expansion can produce $\mathcal{D}$. This
proves the following theorem:
###### Theorem 4 (universality of quota-based DFA expansion).
Let $\mathcal{L}$ be a regular language, and let $\mathcal{D}$ be a connected
DFA recognizing $\mathcal{L}$. Then:
* $\mathcal{D}$ can be constructed from the minimal DFA $\mathcal{D}_{\mathcal{L}}$ by quota-based DFA expansion;
* the Myhill-Nerode equivalence class sizes of $\mathcal{D}$ must satisfy the “enough arrows” condition, with the start portfolio being one copy of the initial state of $\mathcal{D}$.
We remark that even when $\mathcal{T}$ is chosen uniformly from the set of
quota trees achieving the given M-N class sizes, the resulting DFA is not
sampled uniformly from the connected DFAs with these class sizes, as different
DFAs admit different numbers of spanning trees. In principle this method could
be combined with standard methods such as Markov Chain Monte Carlo. Efficient
uniform DFA generation is a topic for further research.
### $k$ shortest paths
It turns out that quota search very naturally solves the problem of
constructing a tree which contains the $k$ shortest (or lightest, if edges are
weighted) paths from a source node (or portfolio) to vertices in a graph
$G$. (See [5] for efficient algorithms and numerous references; the clever
methods developed in connection with this problem are no doubt also applicable
in the more general context of quota search.) For example, when the edge
weights are nonnegative, a solution is to use Dijkstra quota search (DQS) with
all quotas initialized to $k$.
For simplicity we assume that the source portfolio is a single vertex $\ast$,
so we’re building a tree $T$. Viewed as operating in the universal path cover
$\tilde{G}$, for each encounter with a vertex $v=f(t(\tilde{e}))$, DQS keeps track of
the distance from $\tilde{\ast}$ to a corresponding lift $\tilde{v}$ in
$\tilde{G}$. The edge $\tilde{e}$ de-queued at each step extends a path
$\tilde{p}$ in $T$ to a path $\tilde{p}\tilde{e}$ which minimizes this
distance; $\tilde{e}$ is added to $T$ if $q(v)>0$. The point now is that if we
have a path to $v$ which is among the $k$ lightest, then we may assume all
initial subpaths are among the lightest $k$ paths to their corresponding
endpoints, and are in $T$ by construction. Thus, by setting the quota at every
vertex to $k$, we are guaranteed that the quota tree consists of a set of $k$
lightest paths to all vertices.
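Concretely, using the `quota_dijkstra` sketch from section 4 on a hypothetical toy graph (all names ours), the $k$ lightest path lengths to every vertex fall out directly:

```python
# k = 3 lightest paths from 'a' to every vertex of a small weighted digraph.
G = {'a': [('b', 1), ('c', 4)],
     'b': [('c', 1), ('a', 2)],
     'c': [('a', 1)]}
forest = quota_dijkstra(G, {v: 3 for v in G}, {'a': 1})
for d, v, parent in sorted(forest):
    print(v, d)   # each vertex appears 3 times, once per lightest path
```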
DQS also solves the network configuration problem in section 2, although since
we are minimizing the number of edges in paths rather than their weighted
lengths, breadth-first quota search gives a simpler solution. As remarked
earlier, the coupon problem described in section 2 is an example of the
minimum quota arborescence problem; its solution requires an analogue of
Edmonds’ algorithm [4], which we will discuss in section 9.
## 7\. Counting quota trees
The enumeration of spanning trees is well understood. The most fundamental
result, the matrix-tree theorem, expresses the number of (directed) spanning
trees of a graph $G$ as a principal minor of the Laplacian of $G$. As a
special case, one obtains Cayley’s classical formula that the complete graph
$K_{n}$ has $n^{n-2}$ spanning trees with a specified root. These turn out to
be special cases of a more general result for quota trees.
As usual, $G$ is a directed (multi)graph, possibly with loops, having $m_{ij}$
distinct edges from vertex $i$ to vertex $j$. Let $q:V(G)\to\mathbf{Z}$ be a
quota function, $s:V(G)\to\mathbf{Z}$ a start portfolio, and $M=(m_{ij})$ the
adjacency matrix of $G$. The following symbol is indispensable in expressing
counts of quota trees.
Given a directed multigraph $G$ with $n\times n$ adjacency matrix
$M=(m_{ij})$, and $n$-long vectors $\mathbf{a}=(a_{i})$ and
$\mathbf{b}=(b_{i})$, define the quota symbol
(3) $\left\{\begin{array}{c}\mathbf{a}\\ \mathbf{b}\end{array}\right\}_{G}:=\det M(\mathbf{a},\mathbf{b})\prod_{i}\binom{a_{i}}{b_{i}}(a_{i})^{-1},$
where the binomial coefficient $\binom{n}{k}$ is zero unless $0\leq k\leq n$,
the matrix
$M(\mathbf{a},\mathbf{b})=\mathrm{diag}(\mathbf{a})-M\mathrm{diag}(\mathbf{b})$,
and for any index $i$ with $a_{i}=0$ we omit the factor $a_{i}^{-1}$ and
delete the corresponding row and column of $M(\mathbf{a},\mathbf{b})$. (We
remark that loops in $G$ do not affect $M(\mathbf{a},\mathbf{b})$ but do
affect the binomial coefficients.)
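The symbol is easy to evaluate exactly; the following Python sketch (ours) uses rational arithmetic and implements the $a_{i}=0$ convention of the definition explicitly.

```python
from fractions import Fraction
from math import comb

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, prod = len(M), 1, Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if M[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            sign = -sign
        prod *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return sign * prod

def quota_symbol(M, a, b):
    """The quota symbol (3) for adjacency matrix M and vectors a, b."""
    if any(ai == 0 and bi != 0 for ai, bi in zip(a, b)):
        return Fraction(0)            # binom(0, b_i) = 0 for b_i > 0
    keep = [i for i, ai in enumerate(a) if ai != 0]
    D = [[(a[i] if i == j else 0) - M[i][j] * b[j] for j in keep]
         for i in keep]               # diag(a) - M diag(b), rows/cols kept
    val = det(D) if keep else Fraction(1)
    for i in keep:                    # the binom(a_i, b_i) / a_i factors
        val *= Fraction(comb(a[i], b[i]), a[i])
    return val
```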
###### Theorem 5 (counting quota forests).
Let $G$, $q$ and $s$ be as above. As in the enough arrows condition (1), set
$\mathbf{in}_{j}=\sum_{i}q_{i}m_{ij}$ (that is, $\mathbf{in}=\mathbf{q}M$),
where $M=(m_{ij})$ is the adjacency matrix of $G$. Then the number of quota
forests with quota $\mathbf{q}$ and start portfolio exactly $\mathbf{s}$ is
given by
$\left\{\begin{array}{c}\mathbf{in}\\ \mathbf{q}-\mathbf{s}\end{array}\right\}_{G}.$
The determinant arising in this theorem has a natural combinatorial
interpretation, which we will need. It represents the (weighted) counts of
spanning forests of the subgraph of $G$ determined by the support of $q$,
relative to the start portfolio $s$. In particular, the determinant is nonzero
precisely when the triple $(G,q,s)$ is connected. To state this precisely,
given weights on the edges and vertices of a graph, define the weight of a
tree to be the weight of its root times the product of the weights of the
edges it contains, and the weight of a forest to be the product of the weights
of its component trees.
###### Theorem 6 (matrix interpretation).
$\det M(\mathbf{in},\mathbf{q}-\mathbf{s})$ is the sum of the weights of all
spanning forests of $G$, where vertex $i$ has weight $s_{i}$, and an edge
$i\to j$ has weight $q_{j}-s_{j}$.
This result follows immediately from the following “matrix-forest theorem,”
which is equivalent to (but much more symmetric than) the usual “matrix-tree
theorem” [15, Thm. 5.6.4]:
###### Theorem 7 (matrix-forest).
Let $G$ be a directed multigraph with adjacency matrix $M=(m_{ij})$. Define
the Laplacian $\Delta G$ to be $\mathrm{diag}(\mathbf{in})-M$, where
$\mathbf{in}_{j}=\sum_{i}m_{ij}$. Then for indeterminates
$\mathbf{s}=(s_{i})$, $\det(\mathrm{diag}(\mathbf{s})+\Delta G)$ is the sum of
the weights of all spanning forests of $G$, where the weight of an edge $i\to
j$ is $m_{ij}$ and the weight of vertex $i$ is $s_{i}$.
In particular, for any subset $I$ of the vertices, let $s_{I}$ denote the
monomial $\prod_{i\in I}s_{i}$. Then the coefficient
$[s_{I}]\det(\mathrm{diag}(\mathbf{s})+\Delta G)$ is the sum of the (edge)
weights of all spanning forests of $G$ with root set equal to $I$.
###### Corollary 8 (enough arrows).
A triple $(G,q,s)$ admits an exact quota forest if and only if $q(v)\geq s(v)$
for each $v$, $(G,q,s)$ is connected, and the enough arrows condition holds at
each vertex.
###### Proof.
The $i$-th binomial coefficient in Theorem 5 is nonzero precisely when the
local “enough arrows” condition holds at the $i$-th vertex and $q_{i}\geq
s_{i}$. By Theorem 7, the determinant in Theorem 5 is nonzero precisely when
there exists at least one spanning forest of (the support of $q$ in) $G$ whose
roots are contained in the support of $s$; that is, when $(G,q,s)$ is
connected. The enough arrows theorem now follows immediately from Theorem 5. ∎
We remark that the quota symbol simultaneously generalizes binomial
coefficients and spanning trees. When the graph $G$ has one vertex and no
edges, the quota symbol is a single binomial coefficient. On the other hand,
for general $G$, when the quotas are all $1$, the quota symbol
$\left\{\begin{array}{c}\mathbf{in}\\ \mathbf{q}-\mathbf{s}\end{array}\right\}_{G}$ counts spanning forests (or
trees) of $G$. So it is not surprising that the symbol satisfies a recurrence
which reduces in the former case to Pascal’s rule, and in the latter case to
the recurrence for counting spanning trees by deleting and contracting an
edge:
$\#T(G)=\#T(G\setminus e)+\#T(G/e).$
While we won’t need it here, we state the recurrence for completeness; its
proof is implicit in the proof of Theorem 13.
###### Theorem 9 (quota symbol recurrence).
The quota symbol (3) can be computed recursively as follows:
$\left\{\begin{array}{c}\mathbf{a}\\ \mathbf{b}\end{array}\right\}_{G}=\left\{\begin{array}{ll}0,&\textrm{unless $\mathbf{0}\leq\mathbf{b}\leq\mathbf{a}$ and $\mathbf{a}\geq\mathbf{b}M$, in which case:}\\ 1,&\textrm{if $\mathbf{b}=\mathbf{0}$; else}\\ 0,&\textrm{if $\mathbf{a}=\mathbf{b}M$;}\end{array}\right.$
otherwise
$\left\{\begin{array}{c}\mathbf{a}\\ \mathbf{b}\end{array}\right\}_{G}=\left\{\begin{array}{c}\mathbf{a}-\delta_{i}\\ \mathbf{b}-\delta_{i}\end{array}\right\}_{G}+\left\{\begin{array}{c}\mathbf{a}-\delta_{i}\\ \mathbf{b}\end{array}\right\}_{G}$
where $\delta_{i}$ is the vector with a $1$ in the $i$-th position and $0$
elsewhere, and $i$ is an index such that $a_{i}>(\mathbf{b}M)_{i}$.
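For the curious, here is a direct Python transcription of the recurrence (ours); it agrees with the closed form (3) on the small cases we checked, though without the memoization shown it would be exponential.

```python
from functools import lru_cache

def quota_symbol_rec(M, a, b):
    """Evaluate the quota symbol via the recurrence of Theorem 9."""
    n = len(a)
    def bM(b):
        return tuple(sum(b[v] * M[v][w] for v in range(n))
                     for w in range(n))
    @lru_cache(maxsize=None)
    def sym(a, b):
        m = bM(b)
        if not all(0 <= b[i] <= a[i] and a[i] >= m[i] for i in range(n)):
            return 0
        if all(x == 0 for x in b):
            return 1
        if a == m:
            return 0
        i = next(i for i in range(n) if a[i] > m[i])   # any valid pivot
        da = a[:i] + (a[i] - 1,) + a[i + 1:]
        db = b[:i] + (b[i] - 1,) + b[i + 1:]
        return sym(da, db) + sym(da, b)
    return sym(tuple(a), tuple(b))
```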
Corresponding to the two variants of quota search we have described, one might
also ask for the number of at-most quota forests. (Of course the answers agree
for trees, but sometimes one or the other expression is easier to evaluate.)
###### Corollary 10 (counting at most quota forests).
Fix $G$, $\mathbf{q}$ and $\mathbf{s}$ as in Theorem 5. The number of
$(G,\mathbf{q},\mathbf{s}^{\prime})$-quota forests with a start portfolio
$\mathbf{s}^{\prime}$ satisfying $\mathbf{s}^{\prime}\leq\mathbf{s}$
coordinatewise is given by
$\left\{\begin{array}{c}\mathbf{in}+\mathbf{s}\\ \mathbf{q}\end{array}\right\}_{G}.$
Before proving these results, we pause to consider some special cases. We will
let $Q_{=}(G,q,s)$ denote the number of quota forests of $G$ with quota $q$
and start portfolio exactly $s$; similarly $Q_{\leq}(G,q,s)$ will count quota
forests with start portfolio at most $s$.
#### Example: one-vertex graphs
Let $R_{k}$ denote the $k$-leaf rose, having a single vertex and $k$ loops; in
this case, quotas and start portfolios are just scalars $q$ and $s$. By
Theorem 5 and Corollary 10, we find
$Q_{=}(R_{k},q,s)=\left\{\begin{array}{ll}[q=s]&\textrm{if\ }kq=0;\\ \frac{s}{q}\binom{kq}{q-s}&\textrm{otherwise;}\end{array}\right.$
$Q_{\leq}(R_{k},q,s)=\left\{\begin{array}{ll}[q=0]&\textrm{if\ }kq+s=0;\\ \frac{s}{kq+s}\binom{kq+s}{q}&\textrm{otherwise.}\end{array}\right.$
It’s useful to check the at-most counts. For $k=0$,
$Q_{\leq}(R_{0},q,s)=\binom{s}{q}$ as expected, since we just select which of
the $s$ possible starts get chosen.
$Q_{\leq}(R_{1},q,s)=\frac{s}{q+s}\binom{q+s}{q}=\binom{q+s-1}{q}$, as it
counts the number of $s$-tuples $(T_{1},\ldots,T_{s})$ where each $T_{i}$ is a
(possibly empty) directed path graph and the total number of nodes is $q$. For
$k=2$ each $T_{i}$ is now a binary tree, and
$Q_{\leq}(R_{2},q,s)=\frac{s}{2q+s}\binom{2q+s}{q}$ is equal to the entry
$C(q+s-1,q)$ in the so-called Catalan triangle [12, A009766]; when $s=1$ this
just counts binary trees on $q$ nodes: $1,1,2,5,14,\ldots$. (For higher $k$ we
get $k$-th level Catalan numbers; see [12, A069269].)
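These closed forms are easy to spot-check with the `quota_symbol` sketch above; for instance, Corollary 10 on $R_{2}$ with $s=1$ reproduces the Catalan numbers:

```python
# Q_<=(R_2, q, 1): top vector in + s = 2q + 1, bottom vector q, M = [[2]].
for q in range(6):
    print(quota_symbol([[2]], [2 * q + 1], [q]))   # 1, 1, 2, 5, 14, 42
```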
We remark that the ordinary $q$-generating function for $Q_{\leq}(R_{k},q,s)$
is a hypergeometric series:
$\displaystyle\sum_{q}Q_{\leq}(R_{k},q,s)z^{q}=1+\sum_{q\geq 1}\frac{s}{kq+s}\binom{kq+s}{q}z^{q}={}_{k}F_{k-1}\left(\begin{array}{c}\frac{s}{k},\frac{s+1}{k},\ldots,\frac{s+k-1}{k}\\ \frac{s+1}{k-1},\frac{s+2}{k-1},\ldots,\frac{s+k-1}{k-1}\end{array}\;\bigg{|}\;\frac{k^{k}}{(k-1)^{k-1}}z\right).$
We also note the following relationship, which falls in the category of
“combinatorial reciprocity laws” as described by Stanley [14]. When we
formally substitute $-s$ in the expression for $Q_{\leq}(R_{k},q,s)$, we
obtain $\frac{-s}{kq-s}\binom{kq-s}{q}$. When $kq\leq s$, it turns out that
this counts (up to an alternating sign) the number of ways to select a set of
$q$ disjoint copies of the path graph $P_{k}$ in the cycle graph $C_{s}$. When
$k=1$, this reduces to the usual binomial coefficient reciprocity law, namely
that $\binom{-s}{q}$ and $\binom{s}{q}$ count selections of $q$ objects from
$s$ objects respectively with and without replacement. For general $k$, this
gives a combinatorial reciprocity law for higher-order Catalan triangles.
#### Example: quota trees over $K_{n}$
It is natural to consider a start portfolio consisting of a single vertex, as
this situation arises in the context of spanning trees as well as
deterministic automata. We view the complete graph $G=K_{n}$ as a directed
graph with adjacency matrix $J_{n}-I_{n}$ (where $J_{n}$ is as usual the
matrix of all $1$’s.) We remark that quota trees over $K_{n}$ can be viewed as
trees colored with $n$ colors, having $q_{i}$ nodes of color $i$, such that a
node cannot share a color with either its parent or any of its siblings. In
the special case of a constant quota $q$ at each vertex, we get an especially
nice answer: the number of quota trees over $K_{n}$, with a given start vertex
and constant quota $q$ at each vertex, is
$\binom{(n-1)q}{q}^{n}\frac{n^{n-2}}{(n-1)^{n-1}((n-2)q+1)}.$
Taking $q=1$ yields $n^{n-2}$, so we recover as a special case Cayley’s
formula for the number of spanning trees of $K_{n}$ with a specified root.
#### Example: quota trees over $K_{n}$ with loops
Loops don’t enter into spanning trees, but are relevant to quota forests. We
remark that loops do not affect the determinant in the definition of the quota
symbol (3), but they do affect the rest of the terms. As an example, let
$K_{n}^{\circ}$ be the graph $K_{n}$ with a loop added at each vertex, so that
the adjacency matrix is the all-ones matrix. Its quota trees correspond to
tree-colorings as in the preceding example, except that a node is now allowed
to share a color with its parent. When the quota is a constant $q$ at each
vertex, the number of quota trees starting at any fixed root works out to be
$\binom{nq}{q}^{n}\frac{1}{n(q(n-1)+1)}.$
For $n=2$, the number of quota trees with quotas $(i,j)$ and start portfolio
at most $(1,0)$ is given by
$\displaystyle\left\{\begin{matrix}i+j+1&i+j\\ i&j\end{matrix}\right\}_{K_{2}^{\circ}}:$
$\begin{array}{c|cccccc}i\diagdown j&0&1&2&3&4&5\\ \hline 0&1&0&0&0&0&0\\ 1&1&1&1&1&1&1\\ 2&1&3&6&10&15&21\\ 3&1&6&20&50&105&196\\ 4&1&10&50&175&490&1176\\ 5&1&15&105&490&1764&5292\end{array}$
Up to indexing, these are the Narayana numbers ([12, A001263], [15, ex.
6.36]); they appear in numerous contexts (e.g. Dyck paths counted by length
and peak, antichains in the poset $2*(k-1)*(n-k)$, the $h$-vector of the dual
simplicial complex to the associahedron $A_{n}$, etc.)
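Again this is easy to spot-check with the `quota_symbol` sketch from above; for example, the $(i,j)=(3,2)$ entry:

```python
# K_2 with loops has the all-ones adjacency matrix; quotas (3, 2) and
# start portfolio at most (1, 0) give top row (i+j+1, i+j) = (6, 5).
print(quota_symbol([[1, 1], [1, 1]], [6, 5], [3, 2]))   # 20
```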
Notice that the diagonals in the preceding table add up to the Catalan
numbers; this is a special case of a very general fact. Let $\pi:\tilde{G}\to
G$ be a (not necessarily universal) out-covering, $q$ and $\tilde{q}$ quotas
on $G$ and $\tilde{G}$ respectively, and $s$ and $\tilde{s}$ start portfolios
such that $s(v)=\sum_{\tilde{v}\in\pi^{-1}(v)}\tilde{s}(\tilde{v}).$ By the
discussion in section 3, given a $(G,q,s)$ quota forest $F$, once we lift the
root of each tree to an arbitrary preimage in $\tilde{G}$, this determines a
unique lift of $F$. Thus, counting quota trees in $\tilde{G}$ refines the
counting of quota forests in $G$ in the sense that
$Q_{=}(G,q,s)=\sum_{\tilde{q}}Q_{=}(\tilde{G},\tilde{q},\tilde{s}),$
where the sum ranges over all (achievable) quotas $\tilde{q}$ such that
$\sum_{\tilde{v}\in\pi^{-1}(v)}\tilde{q}(\tilde{v})=q(v)$
for all $v\in V(G)$.
Returning to the current example, since $K_{2}^{\circ}$ has constant outdegree
2, one can construct an out-covering $K_{2}^{\circ}\to R_{2}$. So the number
of quota trees in $K_{2}^{\circ}$ where the quota has $l_{1}$-norm $n$ is the
number of quota trees in $R_{2}$ with quota $n$, which we have already seen is
given by a Catalan number.
More generally, there are five essentially different ways to write down a
strongly connected rooted two-vertex graph with outdegree 2. In each case,
the diagonal sums of the quota tree counts are Catalan numbers, but the quotas
reflect different interesting properties of the binary trees. All five cases
appear as different entries in Sloane [12]; we list these as the first five
rows of Table 1, which collects a number of two-vertex graphs whose quota tree
counts have already been studied in other contexts.
$\begin{array}{c|l}G&\textrm{Corresponding entry in Sloane [12]}\\ \hline\textrm{[graph drawing]}&\textrm{Narayana numbers}\\ \textrm{[graph drawing]}&\textrm{ordered trees with $n$ edges and $2k$ nodes of odd degree}\\ \textrm{[graph drawing]}&\textrm{the Catalan triangle again}\\ \textrm{[graph drawing]}&\cdots\end{array}$
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{2.03755pt}{1.17638pt}\pgfsys@curveto{10.27048pt}{5.92964pt}{18.18228pt}{5.92964pt}{23.29756pt}{2.97635pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.86603}{-0.5}{0.5}{0.86603}{23.29756pt}{2.97635pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} {{}}{} {{}{}{{}}{}}{{}{}{{}}{}}{{}{}}{{}}
{{}{}{{}}{}}{{{}}{{}}}{{}}{{}{}{{}}{}}{{{}}{{}}}{{}}{}{{}}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{{{{{{}}{}{}{}{}{{}}}}}{}{}{}{}}{}{}{}{}{{}}\pgfsys@moveto{-2.2726pt}{-0.60893pt}\pgfsys@curveto{-16.01424pt}{-4.29099pt}{-16.01424pt}{4.29099pt}{-5.74988pt}{1.54066pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.96593}{-0.25882}{0.25882}{0.96593}{-5.7499pt}{1.54066pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} {{}}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{26.09999pt}{0.0pt}\pgfsys@lineto{5.95271pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-1.0}{0.0}{0.0}{-1.0}{5.95271pt}{0.0pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{26.4152pt}{-1.17638pt}\pgfsys@curveto{18.18228pt}{-5.92964pt}{10.27048pt}{-5.92964pt}{5.1552pt}{-2.97635pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.86603}{0.5}{-0.5}{-0.86603}{5.1552pt}{-2.97635pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys<EMAIL_ADDRESS>ordered trees with $n$ edges containing $k$ (non-root) nodes adjacent to a
leaf}\\\ \lower 11.38092pt\hbox{ \leavevmode\hbox to65.44pt{\vbox
to28.45pt{\pgfpicture\makeatletter\hbox{\hskip
14.22638pt\lower-14.22638pt\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{
}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }{}{{}}{} {}{{}}{}{}{}{}{{}}{}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{}\pgfsys@moveto{2.15277pt}{0.0pt}\pgfsys@curveto{2.15277pt}{1.18895pt}{1.18895pt}{2.15277pt}{0.0pt}{2.15277pt}\pgfsys@curveto{-1.18895pt}{2.15277pt}{-2.15277pt}{1.18895pt}{-2.15277pt}{0.0pt}\pgfsys@curveto{-2.15277pt}{-1.18895pt}{-1.18895pt}{-2.15277pt}{0.0pt}{-2.15277pt}\pgfsys@curveto{1.18895pt}{-2.15277pt}{2.15277pt}{-1.18895pt}{2.15277pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{0.0pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{}\pgfsys@moveto{30.60553pt}{0.0pt}\pgfsys@curveto{30.60553pt}{1.18895pt}{29.64171pt}{2.15277pt}{28.45276pt}{2.15277pt}\pgfsys@curveto{27.26381pt}{2.15277pt}{26.29999pt}{1.18895pt}{26.29999pt}{0.0pt}\pgfsys@curveto{26.29999pt}{-1.18895pt}{27.26381pt}{-2.15277pt}{28.45276pt}{-2.15277pt}\pgfsys@curveto{29.64171pt}{-2.15277pt}{30.60553pt}{-1.18895pt}{30.60553pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{28.45276pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{28.45276pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{1.66365pt}{1.66365pt}\pgfsys@curveto{8.592pt}{8.592pt}{19.86076pt}{8.592pt}{24.24356pt}{4.2092pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.7071}{-0.7071}{0.7071}{0.7071}{24.24356pt}{4.2092pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{2.2726pt}{0.60893pt}\pgfsys@curveto{11.27815pt}{3.02196pt}{17.1746pt}{3.02196pt}{22.70288pt}{1.54066pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.96593}{-0.25882}{0.25882}{0.96593}{22.70287pt}{1.54066pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{26.78911pt}{-1.66365pt}\pgfsys@curveto{19.86076pt}{-8.592pt}{8.592pt}{-8.592pt}{4.2092pt}{-4.2092pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.7071}{0.7071}{-0.7071}{-0.7071}{4.2092pt}{-4.2092pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{26.18016pt}{-0.60893pt}\pgfsys@curveto{17.1746pt}{-3.02196pt}{11.27815pt}{-3.02196pt}{5.74988pt}{-1.54066pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.96593}{0.25882}{-0.25882}{-0.96593}{5.7499pt}{-1.54066pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys<EMAIL_ADDRESS>``pat'' permutations of $[n]$ with $k$ descents}\\\ \lower 11.38092pt\hbox{
\leavevmode\hbox to65.44pt{\vbox
to28.45pt{\pgfpicture\makeatletter\hbox{\hskip
14.22638pt\lower-14.22638pt\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{
}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }{}{{}}{} {}{{}}{}{}{}{}{{}}{}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{}\pgfsys@moveto{2.15277pt}{0.0pt}\pgfsys@curveto{2.15277pt}{1.18895pt}{1.18895pt}{2.15277pt}{0.0pt}{2.15277pt}\pgfsys@curveto{-1.18895pt}{2.15277pt}{-2.15277pt}{1.18895pt}{-2.15277pt}{0.0pt}\pgfsys@curveto{-2.15277pt}{-1.18895pt}{-1.18895pt}{-2.15277pt}{0.0pt}{-2.15277pt}\pgfsys@curveto{1.18895pt}{-2.15277pt}{2.15277pt}{-1.18895pt}{2.15277pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{0.0pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{}\pgfsys@moveto{30.60553pt}{0.0pt}\pgfsys@curveto{30.60553pt}{1.18895pt}{29.64171pt}{2.15277pt}{28.45276pt}{2.15277pt}\pgfsys@curveto{27.26381pt}{2.15277pt}{26.29999pt}{1.18895pt}{26.29999pt}{0.0pt}\pgfsys@curveto{26.29999pt}{-1.18895pt}{27.26381pt}{-2.15277pt}{28.45276pt}{-2.15277pt}\pgfsys@curveto{29.64171pt}{-2.15277pt}{30.60553pt}{-1.18895pt}{30.60553pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{28.45276pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{28.45276pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{2.03755pt}{1.17638pt}\pgfsys@curveto{10.27048pt}{5.92964pt}{18.18228pt}{5.92964pt}{23.29756pt}{2.97635pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.86603}{-0.5}{0.5}{0.86603}{23.29756pt}{2.97635pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{26.4152pt}{-1.17638pt}\pgfsys@curveto{18.18228pt}{-5.92964pt}{10.27048pt}{-5.92964pt}{5.1552pt}{-2.97635pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.86603}{0.5}{-0.5}{-0.86603}{5.1552pt}{-2.97635pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} {{}}{} {{}{}{{}}{}}{{}{}{{}}{}}{{}{}}{{}}
{{}{}{{}}{}}{{{}}{{}}}{{}}{{}{}{{}}{}}{{{}}{{}}}{{}}{}{{}}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{{{{{{}}{}{}{}{}{{}}}}}{}{}{}{}}{}{}{}{}{{}}\pgfsys@moveto{30.72536pt}{0.60893pt}\pgfsys@curveto{44.467pt}{4.29099pt}{44.467pt}{-4.29099pt}{34.20264pt}{-1.54066pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.96593}{0.25882}{-0.25882}{-0.96593}{34.20265pt}{-1.54066pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys<EMAIL_ADDRESS>Motzkin polynomial coefficients; diagonal sums are Motzkin numbers A001006}\\\
\lower 11.38092pt\hbox{ \leavevmode\hbox to69.51pt{\vbox
to28.85pt{\pgfpicture\makeatletter\hbox{\hskip
16.21423pt\lower-14.42638pt\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{
}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }{}{{}}{} {}{{}}{}{}{}{}{{}}{}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{}\pgfsys@moveto{2.15277pt}{0.0pt}\pgfsys@curveto{2.15277pt}{1.18895pt}{1.18895pt}{2.15277pt}{0.0pt}{2.15277pt}\pgfsys@curveto{-1.18895pt}{2.15277pt}{-2.15277pt}{1.18895pt}{-2.15277pt}{0.0pt}\pgfsys@curveto{-2.15277pt}{-1.18895pt}{-1.18895pt}{-2.15277pt}{0.0pt}{-2.15277pt}\pgfsys@curveto{1.18895pt}{-2.15277pt}{2.15277pt}{-1.18895pt}{2.15277pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{0.0pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{}\pgfsys@moveto{30.60553pt}{0.0pt}\pgfsys@curveto{30.60553pt}{1.18895pt}{29.64171pt}{2.15277pt}{28.45276pt}{2.15277pt}\pgfsys@curveto{27.26381pt}{2.15277pt}{26.29999pt}{1.18895pt}{26.29999pt}{0.0pt}\pgfsys@curveto{26.29999pt}{-1.18895pt}{27.26381pt}{-2.15277pt}{28.45276pt}{-2.15277pt}\pgfsys@curveto{29.64171pt}{-2.15277pt}{30.60553pt}{-1.18895pt}{30.60553pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{28.45276pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{28.45276pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}
{{}{}{{}}{}}{{}{}{{}}{}}{{}{}}{{}}
{{}{}{{}}{}}{{{}}{{}}}{{}}{{}{}{{}}{}}{{{}}{{}}}{{}}{}{{}}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{{{{{{}}{}{}{}{}{{}}}}}{}{}{}{}}{}{}{}{}{{}}\pgfsys@moveto{-2.2726pt}{-0.60893pt}\pgfsys@curveto{-16.01424pt}{-4.29099pt}{-16.01424pt}{4.29099pt}{-5.74988pt}{1.54066pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.96593}{-0.25882}{0.25882}{0.96593}{-5.7499pt}{1.54066pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{2.03755pt}{1.17638pt}\pgfsys@curveto{10.27048pt}{5.92964pt}{18.18228pt}{5.92964pt}{23.29756pt}{2.97635pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.86603}{-0.5}{0.5}{0.86603}{23.29756pt}{2.97635pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} {{}}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{2.35277pt}{0.0pt}\pgfsys@lineto{22.50005pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{22.50005pt}{0.0pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{26.4152pt}{-1.17638pt}\pgfsys@curveto{18.18228pt}{-5.92964pt}{10.27048pt}{-5.92964pt}{5.1552pt}{-2.97635pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.86603}{0.5}{-0.5}{-0.86603}{5.1552pt}{-2.97635pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} {{}}{} {{}{}{{}}{}}{{}{}{{}}{}}{{}{}}{{}}
{{}{}{{}}{}}{{{}}{{}}}{{}}{{}{}{{}}{}}{{{}}{{}}}{{}}{}{{}}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{{{{{{}}{}{}{}{}{{}}}}}{}{}{}{}}{}{}{}{}{{}}\pgfsys@moveto{30.72536pt}{0.60893pt}\pgfsys@curveto{44.467pt}{4.29099pt}{44.467pt}{-4.29099pt}{34.20264pt}{-1.54066pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.96593}{0.25882}{-0.25882}{-0.96593}{34.20265pt}{-1.54066pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} {{}}{}
{{{}{}}}{{}}{}{{{}{}}}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{{{{{{}}{}{}{}{}{{}}}}}{}{}{}{}}{}{}{}{}{{}}\pgfsys@moveto{30.49031pt}{1.17638pt}\pgfsys@curveto{53.09363pt}{14.22638pt}{53.09363pt}{-14.22638pt}{33.60796pt}{-2.97635pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.86603}{0.5}{-0.5}{-0.86603}{33.60796pt}{-2.97635pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys<EMAIL_ADDRESS>2-Dyck paths of order $n$ with $k$ peaks}\\\ \lower 11.38092pt\hbox{
\leavevmode\hbox to67.43pt{\vbox
to28.45pt{\pgfpicture\makeatletter\hbox{\hskip
16.21423pt\lower-14.22638pt\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{
}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }{}{{}}{} {}{{}}{}{}{}{}{{}}{}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{}\pgfsys@moveto{2.15277pt}{0.0pt}\pgfsys@curveto{2.15277pt}{1.18895pt}{1.18895pt}{2.15277pt}{0.0pt}{2.15277pt}\pgfsys@curveto{-1.18895pt}{2.15277pt}{-2.15277pt}{1.18895pt}{-2.15277pt}{0.0pt}\pgfsys@curveto{-2.15277pt}{-1.18895pt}{-1.18895pt}{-2.15277pt}{0.0pt}{-2.15277pt}\pgfsys@curveto{1.18895pt}{-2.15277pt}{2.15277pt}{-1.18895pt}{2.15277pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{0.0pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{}\pgfsys@moveto{30.60553pt}{0.0pt}\pgfsys@curveto{30.60553pt}{1.18895pt}{29.64171pt}{2.15277pt}{28.45276pt}{2.15277pt}\pgfsys@curveto{27.26381pt}{2.15277pt}{26.29999pt}{1.18895pt}{26.29999pt}{0.0pt}\pgfsys@curveto{26.29999pt}{-1.18895pt}{27.26381pt}{-2.15277pt}{28.45276pt}{-2.15277pt}\pgfsys@curveto{29.64171pt}{-2.15277pt}{30.60553pt}{-1.18895pt}{30.60553pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{28.45276pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{28.45276pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}
{{}{}{{}}{}}{{}{}{{}}{}}{{}{}}{{}}
{{}{}{{}}{}}{{{}}{{}}}{{}}{{}{}{{}}{}}{{{}}{{}}}{{}}{}{{}}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{{{{{{}}{}{}{}{}{{}}}}}{}{}{}{}}{}{}{}{}{{}}\pgfsys@moveto{-2.2726pt}{-0.60893pt}\pgfsys@curveto{-16.01424pt}{-4.29099pt}{-16.01424pt}{4.29099pt}{-5.74988pt}{1.54066pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.96593}{-0.25882}{0.25882}{0.96593}{-5.7499pt}{1.54066pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{1.66365pt}{1.66365pt}\pgfsys@curveto{8.592pt}{8.592pt}{19.86076pt}{8.592pt}{24.24356pt}{4.2092pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.7071}{-0.7071}{0.7071}{0.7071}{24.24356pt}{4.2092pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{2.2726pt}{0.60893pt}\pgfsys@curveto{11.27815pt}{3.02196pt}{17.1746pt}{3.02196pt}{22.70288pt}{1.54066pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.96593}{-0.25882}{0.25882}{0.96593}{22.70287pt}{1.54066pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{26.78911pt}{-1.66365pt}\pgfsys@curveto{19.86076pt}{-8.592pt}{8.592pt}{-8.592pt}{4.2092pt}{-4.2092pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.7071}{0.7071}{-0.7071}{-0.7071}{4.2092pt}{-4.2092pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{26.18016pt}{-0.60893pt}\pgfsys@curveto{17.1746pt}{-3.02196pt}{11.27815pt}{-3.02196pt}{5.74988pt}{-1.54066pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.96593}{0.25882}{-0.25882}{-0.96593}{5.7499pt}{-1.54066pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} {{}}{} {{}{}{{}}{}}{{}{}{{}}{}}{{}{}}{{}}
{{}{}{{}}{}}{{{}}{{}}}{{}}{{}{}{{}}{}}{{{}}{{}}}{{}}{}{{}}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{{{{{{}}{}{}{}{}{{}}}}}{}{}{}{}}{}{}{}{}{{}}\pgfsys@moveto{30.72536pt}{0.60893pt}\pgfsys@curveto{44.467pt}{4.29099pt}{44.467pt}{-4.29099pt}{34.20264pt}{-1.54066pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.96593}{0.25882}{-0.25882}{-0.96593}{34.20265pt}{-1.54066pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys<EMAIL_ADDRESS>\lower 11.38092pt\hbox{ \leavevmode\hbox to67.43pt{\vbox
to28.45pt{\pgfpicture\makeatletter\hbox{\hskip
16.21423pt\lower-14.22638pt\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{
}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }{}{{}}{} {}{{}}{}{}{}{}{{}}{}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{}\pgfsys@moveto{2.15277pt}{0.0pt}\pgfsys@curveto{2.15277pt}{1.18895pt}{1.18895pt}{2.15277pt}{0.0pt}{2.15277pt}\pgfsys@curveto{-1.18895pt}{2.15277pt}{-2.15277pt}{1.18895pt}{-2.15277pt}{0.0pt}\pgfsys@curveto{-2.15277pt}{-1.18895pt}{-1.18895pt}{-2.15277pt}{0.0pt}{-2.15277pt}\pgfsys@curveto{1.18895pt}{-2.15277pt}{2.15277pt}{-1.18895pt}{2.15277pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{0.0pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{}\pgfsys@moveto{30.60553pt}{0.0pt}\pgfsys@curveto{30.60553pt}{1.18895pt}{29.64171pt}{2.15277pt}{28.45276pt}{2.15277pt}\pgfsys@curveto{27.26381pt}{2.15277pt}{26.29999pt}{1.18895pt}{26.29999pt}{0.0pt}\pgfsys@curveto{26.29999pt}{-1.18895pt}{27.26381pt}{-2.15277pt}{28.45276pt}{-2.15277pt}\pgfsys@curveto{29.64171pt}{-2.15277pt}{30.60553pt}{-1.18895pt}{30.60553pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{28.45276pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{28.45276pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{2.03755pt}{1.17638pt}\pgfsys@curveto{10.27048pt}{5.92964pt}{18.18228pt}{5.92964pt}{23.29756pt}{2.97635pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.86603}{-0.5}{0.5}{0.86603}{23.29756pt}{2.97635pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} {{}}{} {{}{}{{}}{}}{{}{}{{}}{}}{{}{}}{{}}
{{}{}{{}}{}}{{{}}{{}}}{{}}{{}{}{{}}{}}{{{}}{{}}}{{}}{}{{}}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{{{{{{}}{}{}{}{}{{}}}}}{}{}{}{}}{}{}{}{}{{}}\pgfsys@moveto{-2.2726pt}{-0.60893pt}\pgfsys@curveto{-16.01424pt}{-4.29099pt}{-16.01424pt}{4.29099pt}{-5.74988pt}{1.54066pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.96593}{-0.25882}{0.25882}{0.96593}{-5.7499pt}{1.54066pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} {{}}{} {{}{}{{}}{}}{{}{}{{}}{}}{{}{}}{{}}
{{}{}{{}}{}}{{{}}{{}}}{{}}{{}{}{{}}{}}{{{}}{{}}}{{}}{}{{}}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{{{{{{}}{}{}{}{}{{}}}}}{}{}{}{}}{}{}{}{}{{}}\pgfsys@moveto{30.72536pt}{0.60893pt}\pgfsys@curveto{44.467pt}{4.29099pt}{44.467pt}{-4.29099pt}{34.20264pt}{-1.54066pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.96593}{0.25882}{-0.25882}{-0.96593}{34.20265pt}{-1.54066pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{26.4152pt}{-1.17638pt}\pgfsys@curveto{18.18228pt}{-5.92964pt}{10.27048pt}{-5.92964pt}{5.1552pt}{-2.97635pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.86603}{0.5}{-0.5}{-0.86603}{5.1552pt}{-2.97635pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys<EMAIL_ADDRESS>generalized Narayana numbers; A214457, rhombic tilings of an
$(n,k,1,1,n,k,1,1)$ octagon}\\\ \lower 11.38092pt\hbox{ \leavevmode\hbox
to75.05pt{\vbox to60.76pt{\pgfpicture\makeatletter\hbox{\hskip
16.21423pt\lower-30.37851pt\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{
}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }{}{{}}{} {}{{}}{}{}{}{}{{}}{}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{}\pgfsys@moveto{2.15277pt}{0.0pt}\pgfsys@curveto{2.15277pt}{1.18895pt}{1.18895pt}{2.15277pt}{0.0pt}{2.15277pt}\pgfsys@curveto{-1.18895pt}{2.15277pt}{-2.15277pt}{1.18895pt}{-2.15277pt}{0.0pt}\pgfsys@curveto{-2.15277pt}{-1.18895pt}{-1.18895pt}{-2.15277pt}{0.0pt}{-2.15277pt}\pgfsys@curveto{1.18895pt}{-2.15277pt}{2.15277pt}{-1.18895pt}{2.15277pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{0.0pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{}\pgfsys@moveto{30.60553pt}{0.0pt}\pgfsys@curveto{30.60553pt}{1.18895pt}{29.64171pt}{2.15277pt}{28.45276pt}{2.15277pt}\pgfsys@curveto{27.26381pt}{2.15277pt}{26.29999pt}{1.18895pt}{26.29999pt}{0.0pt}\pgfsys@curveto{26.29999pt}{-1.18895pt}{27.26381pt}{-2.15277pt}{28.45276pt}{-2.15277pt}\pgfsys@curveto{29.64171pt}{-2.15277pt}{30.60553pt}{-1.18895pt}{30.60553pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{28.45276pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{28.45276pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}
{{}{}{{}}{}}{{}{}{{}}{}}{{}{}}{{}}
{{}{}{{}}{}}{{{}}{{}}}{{}}{{}{}{{}}{}}{{{}}{{}}}{{}}{}{{}}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{{{{{{}}{}{}{}{}{{}}}}}{}{}{}{}}{}{}{}{}{{}}\pgfsys@moveto{-2.2726pt}{-0.60893pt}\pgfsys@curveto{-16.01424pt}{-4.29099pt}{-16.01424pt}{4.29099pt}{-5.74988pt}{1.54066pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.96593}{-0.25882}{0.25882}{0.96593}{-5.7499pt}{1.54066pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{2.17366pt}{0.90034pt}\pgfsys@curveto{10.8584pt}{4.49762pt}{17.59436pt}{4.49762pt}{22.95322pt}{2.27795pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.92387}{-0.38268}{0.38268}{0.92387}{22.9532pt}{2.27795pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{1.66365pt}{1.66365pt}\pgfsys@curveto{8.592pt}{8.592pt}{19.86076pt}{8.592pt}{24.24356pt}{4.2092pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.7071}{-0.7071}{0.7071}{0.7071}{24.24356pt}{4.2092pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} {{}}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{2.35277pt}{0.0pt}\pgfsys@lineto{22.50005pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{22.50005pt}{0.0pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{26.4152pt}{-1.17638pt}\pgfsys@curveto{18.18228pt}{-5.92964pt}{10.27048pt}{-5.92964pt}{5.1552pt}{-2.97635pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.86603}{0.5}{-0.5}{-0.86603}{5.1552pt}{-2.97635pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} {{}}{} {{}{}{{}}{}}{{}{}{{}}{}}{{}{}}{{}}
{{}{}{{}}{}}{{{}}{{}}}{{}}{{}{}{{}}{}}{{{}}{{}}}{{}}{}{{}}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{{{{{{}}{}{}{}{}{{}}}}}{}{}{}{}}{}{}{}{}{{}}\pgfsys@moveto{30.72536pt}{0.60893pt}\pgfsys@curveto{44.467pt}{4.29099pt}{44.467pt}{-4.29099pt}{34.20264pt}{-1.54066pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.96593}{0.25882}{-0.25882}{-0.96593}{34.20265pt}{-1.54066pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} {{}}{}
{{{}{}}}{{}}{}{{{}{}}}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{{{{{{}}{}{}{}{}{{}}}}}{}{}{}{}}{}{}{}{}{{}}\pgfsys@moveto{30.49031pt}{1.17638pt}\pgfsys@curveto{53.09363pt}{14.22638pt}{53.09363pt}{-14.22638pt}{33.60796pt}{-2.97635pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.86603}{0.5}{-0.5}{-0.86603}{33.60796pt}{-2.97635pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} {{}}{}
{{{}{}}}{{}}{}{{{}{}}}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{{{{{{}}{}{}{}{}{{}}}}}{}{}{}{}}{}{}{}{}{{}}\pgfsys@moveto{30.11641pt}{1.66365pt}\pgfsys@curveto{58.63127pt}{30.17851pt}{58.63127pt}{-30.17851pt}{32.66196pt}{-4.2092pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.7071}{0.7071}{-0.7071}{-0.7071}{32.66196pt}{-4.2092pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys<EMAIL_ADDRESS>3-Runyon numbers}\\\ \lower 11.38092pt\hbox{ \leavevmode\hbox to65.44pt{\vbox
to28.45pt{\pgfpicture\makeatletter\hbox{\hskip
14.22638pt\lower-14.22638pt\hbox to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{
}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{ }{}{{}}{} {}{{}}{}{}{}{}{{}}{}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{}\pgfsys@moveto{2.15277pt}{0.0pt}\pgfsys@curveto{2.15277pt}{1.18895pt}{1.18895pt}{2.15277pt}{0.0pt}{2.15277pt}\pgfsys@curveto{-1.18895pt}{2.15277pt}{-2.15277pt}{1.18895pt}{-2.15277pt}{0.0pt}\pgfsys@curveto{-2.15277pt}{-1.18895pt}{-1.18895pt}{-2.15277pt}{0.0pt}{-2.15277pt}\pgfsys@curveto{1.18895pt}{-2.15277pt}{2.15277pt}{-1.18895pt}{2.15277pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{0.0pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{{{}}}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{0,0,0}\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{}\pgfsys@moveto{30.60553pt}{0.0pt}\pgfsys@curveto{30.60553pt}{1.18895pt}{29.64171pt}{2.15277pt}{28.45276pt}{2.15277pt}\pgfsys@curveto{27.26381pt}{2.15277pt}{26.29999pt}{1.18895pt}{26.29999pt}{0.0pt}\pgfsys@curveto{26.29999pt}{-1.18895pt}{27.26381pt}{-2.15277pt}{28.45276pt}{-2.15277pt}\pgfsys@curveto{29.64171pt}{-2.15277pt}{30.60553pt}{-1.18895pt}{30.60553pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{28.45276pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{28.45276pt}{0.0pt}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}\hbox{{\definecolor[named]{.}{rgb}{0,0,0}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@color@gray@fill{0}}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{2.35277pt}{0.0pt}\pgfsys@lineto{22.50005pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{22.50005pt}{0.0pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{2.03755pt}{1.17638pt}\pgfsys@curveto{10.27048pt}{5.92964pt}{18.18228pt}{5.92964pt}{23.29756pt}{2.97635pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.86603}{-0.5}{0.5}{0.86603}{23.29756pt}{2.97635pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}}
{{}}{}{{}}{{}}{{{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}}
}{{{{}}{{}}{{}}{{}}{{}}}{{{{}}{}{}{}{}{{}}}} }{{}{}}{{}} {}{}{}{{{}}{{}}{{}}}
{{{}}{{}}{{}}}
{}{{}}{}{{}}{}{{}}{}{}{}{}{}{}{}{{}}\pgfsys@moveto{26.4152pt}{-1.17638pt}\pgfsys@curveto{18.18228pt}{-5.92964pt}{10.27048pt}{-5.92964pt}{5.1552pt}{-2.97635pt}\pgfsys@stroke\pgfsys@invoke{
}{{}{{}}{}{}{{}}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{-0.86603}{0.5}{-0.5}{-0.86603}{5.1552pt}{-2.97635pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys<EMAIL_ADDRESS>related to generalized Catalan sequences}\\\ \hline\cr\end{array}$ Table 1.
Some two-vertex graphs whose quota tree counts appear, possibly re-indexed, in
Sloane’s Encyclopedia of Integer Sequences. In each case, the start portfolio
is one copy of each filled-in vertex.
#### Example: quota forests over $K_{n}$ with symmetric roots
It is even more symmetric to count quota forests over $K_{n}$, where we take
both $q$ and $s$ to be constant over all vertices. The quota forest count is
$\binom{(n-1)q}{q-s}^{n}\frac{(nq-s)^{n-1}s}{(n-1)^{n-1}q^{n}}.$
In particular, if $q=s$, the count is exactly one, reflecting the fact that
each tree in the forest is an isolated node.
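As a quick sanity check (our own sketch, not part of the paper; the helper name is ours), the closed form can be evaluated exactly with rational arithmetic, confirming both the $q=s$ degeneration and integrality in a small case:

```python
# Evaluate the closed form for quota forests over K_n with uniform quota q
# and start count s, and confirm that q = s always yields exactly one forest.
from fractions import Fraction
from math import comb

def quota_forest_count_Kn(n: int, q: int, s: int) -> Fraction:
    """binom((n-1)q, q-s)^n * (nq-s)^(n-1) * s / ((n-1)^(n-1) * q^n)."""
    return comb((n - 1) * q, q - s) ** n * \
        Fraction((n * q - s) ** (n - 1) * s, (n - 1) ** (n - 1) * q ** n)

assert all(quota_forest_count_Kn(n, q, q) == 1
           for n in range(2, 7) for q in range(1, 6))
print(quota_forest_count_Kn(3, 2, 1))  # 50 -- an integer, as it must be
```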
#### Example: path graphs
The path graph $P_{n}$ has only a single spanning tree from any root; however,
quota trees are much more interesting. Intuitively, we have $n$ parallel
semitransparent panes of glass; at each one, a laser beam can pass through,
reflect, both, or neither. When we fire a beam into one pane, the trajectory
is then a tree immersing into $P_{n}$, whose quotas count the number of times
each pane is encountered. If all quotas are $q$, and the beam is initially
fired into one of the outer panes, the number of quota trees works out to
$\left(\frac{1}{2}\binom{2q}{q}\right)^{n-2}=a_{q}^{n-2},$
where $a_{q}$ is the $q$-th term of Sloane's sequence A001700 $(1,3,10,35,126,\ldots)$. When we
fire the laser into any one of the internal panes, the answer works out to
$c_{q}a_{q}^{n-3}$, where $c_{q}=\binom{2q+1}{q}/(2q+1)$ is the $q$-th Catalan
number.
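The sequences $a_{q}$ and $c_{q}$ are easy to tabulate; the following sketch (ours, with illustrative function names) reproduces the first few terms and the two pane counts:

```python
from math import comb

def a(q: int) -> int:
    """q-th term of A001700: a_q = binom(2q, q) / 2."""
    return comb(2 * q, q) // 2

def c(q: int) -> int:
    """q-th Catalan number, in the form c_q = binom(2q+1, q) / (2q+1)."""
    return comb(2 * q + 1, q) // (2 * q + 1)

def outer_pane_count(n: int, q: int) -> int:
    return a(q) ** (n - 2)         # beam fired into an outer pane of P_n

def inner_pane_count(n: int, q: int) -> int:
    return c(q) * a(q) ** (n - 3)  # beam fired into an internal pane of P_n

print([a(q) for q in range(1, 6)])  # [1, 3, 10, 35, 126]
print([c(q) for q in range(1, 6)])  # [1, 2, 5, 14, 42]
```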
#### Example: cycle graphs
With the notation of the preceding example, the cycle graph $C_{n}$ has
$\binom{2q}{q}^{n}\frac{n}{2^{n-1}(q+1)}=\frac{2n\,a_{q}^{n}}{q+1}$
quota trees from any fixed root, when all vertex quotas are set to $q$.
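The equality of the two forms follows from $\binom{2q}{q}=2a_{q}$; a short check (ours) confirms it and tabulates a few counts:

```python
from fractions import Fraction
from math import comb

def cycle_count(n: int, q: int) -> Fraction:
    return Fraction(comb(2 * q, q) ** n * n, 2 ** (n - 1) * (q + 1))

def cycle_count_alt(n: int, q: int) -> Fraction:
    a_q = comb(2 * q, q) // 2
    return Fraction(2 * n * a_q ** n, q + 1)

assert all(cycle_count(n, q) == cycle_count_alt(n, q)
           for n in range(3, 8) for q in range(1, 6))
print([int(cycle_count(3, q)) for q in range(1, 5)])  # counts for C_3
```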
#### Proof of Theorem 5
The strategy is to write down a functional equation jointly satisfied by the
generating functions for quota trees rooted at all vertices of $G$, and solve
it using the multivariate Lagrange inversion formula. (Problem 3.3.42 in [9]
is very similar; however, it counts trees rather than forests, and omits the
immersion condition.) Following [9], let $R$ be a ring with unity,
$R[[\boldsymbol{\lambda}]]_{1}$ the set of formal power series in
$\boldsymbol{\lambda}=(\lambda_{1},\ldots,\lambda_{n})$ over $R$ with
invertible constant term, and $R((\boldsymbol{\lambda}))$ the ring of formal
Laurent series over $R$.
###### Theorem 11 (Multivariate Lagrange).
[9, Th. 1.2.9]
Suppose $\mathbf{w}=(w_{1}(\mathbf{t}),\ldots,w_{n}(\mathbf{t}))$ jointly
satisfy the functional equations
$w_{i}(\mathbf{t})=t_{i}\phi_{i}(\mathbf{w})$, where
$\mathbf{t}=(t_{1},\ldots,t_{n})$. Let $f(\boldsymbol{\lambda})\in
R((\boldsymbol{\lambda}))$ and
$\boldsymbol{\phi}=(\phi_{1}(\boldsymbol{\lambda}),\ldots,\phi_{n}(\boldsymbol{\lambda}))$,
where $\phi_{i}\in R[[\boldsymbol{\lambda}]]_{1}$. Then
$f(\mathbf{w}(\mathbf{t}))=\sum_{\mathbf{q}}\mathbf{t}^{\mathbf{q}}[\boldsymbol{\lambda}^{\mathbf{q}}]\left\\{f(\boldsymbol{\lambda})\boldsymbol{\phi}^{\mathbf{q}}(\boldsymbol{\lambda})\left\|\delta_{ij}-\frac{\lambda_{j}}{\phi_{i}(\boldsymbol{\lambda})}\frac{\partial\phi_{i}(\boldsymbol{\lambda})}{\partial\lambda_{j}}\right\|\right\\}.$
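As a concrete single-variable illustration (ours, not part of the paper): with $n=1$, $f(\lambda)=\lambda$, and $\phi(\lambda)=(1+\lambda)^{2}$, the equation $w=t\phi(w)$ generates the Catalan numbers, and the determinant in Theorem 11 reduces to the scalar $1-\lambda\phi'(\lambda)/\phi(\lambda)=1-2\lambda/(1+\lambda)$:

```python
from math import comb

def binom(n: int, k: int) -> int:
    """comb() extended by zero outside the Pascal triangle."""
    return comb(n, k) if 0 <= k <= n else 0

def catalan(q: int) -> int:
    return comb(2 * q, q) // (q + 1)

def lagrange_coeff(q: int) -> int:
    # [t^q] w = [lam^q] { lam * (1+lam)^(2q) * (1 - 2*lam/(1+lam)) }
    #         = binom(2q, q-1) - 2 * binom(2q-1, q-2)
    return binom(2 * q, q - 1) - 2 * binom(2 * q - 1, q - 2)

assert all(lagrange_coeff(q) == catalan(q) for q in range(1, 15))
```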
Given a graph $G$ with $n$ vertices, we will take $w_{i}(\mathbf{t})$ to be
the generating function
$w_{i}(\mathbf{t})=\sum_{T}\mathbf{t}^{\mathbf{q}(T)}=\sum_{T}t_{1}^{q_{1}(T)}\cdots
t_{n}^{q_{n}(T)},$
where $T$ ranges over all quota trees rooted at vertex $i$, and $q_{j}(T)$ is
the number of occurrences of vertex $j$ in $T$. The first observation is that
the $w_{i}$’s jointly satisfy the functional equation
(5) $w_{i}(\mathbf{t})=t_{i}\prod_{j}(1+w_{j}(\mathbf{t}))^{m_{ij}}$
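Equation (5) can be checked numerically by fixed-point iteration on truncated power series. The sketch below is ours; the toy graph $K_{2}$, with $m_{12}=m_{21}=1$ and no loops, is an illustrative assumption, not an example from the paper:

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')
m = [[0, 1], [1, 0]]  # K_2: one edge between the two vertices, no loops
N = 6                 # keep only monomials of total degree < N

def truncate(expr):
    """Drop all monomials of total degree >= N."""
    p = sp.Poly(sp.expand(expr), t1, t2)
    return sum((coef * t1**a * t2**b
                for (a, b), coef in p.terms() if a + b < N), sp.Integer(0))

# Fixed-point iteration: after N passes, all coefficients of total degree
# < N have stabilized at the unique power-series solution of (5).
w = [sp.Integer(0), sp.Integer(0)]
for _ in range(N):
    w = [truncate(ts * (1 + w[0])**m[i][0] * (1 + w[1])**m[i][1])
         for i, ts in enumerate((t1, t2))]

# The residual of the functional equation vanishes up to the truncation.
for i, ts in enumerate((t1, t2)):
    rhs = ts * (1 + w[0])**m[i][0] * (1 + w[1])**m[i][1]
    assert truncate(w[i] - rhs) == 0

print(w[0])  # generating function for quota trees rooted at vertex 1
```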
# Identifying Fixation and Saccades in Virtual Reality
Xiao-lin Chen (a, c) and Wen-jun Hou (b, c). Contact: Wen-jun Hou.
(a) School of Automation, Beijing University of Posts and Telecommunications, Beijing, China; (b) School of Digital Media and Design Arts, Beijing University of Posts and Telecommunications, Beijing, China; (c) Beijing Key Laboratory of Network Systems and Network Culture, Beijing, China
###### Abstract
Gaze recognition can significantly reduce the volume of eye-movement data while
supporting a better understanding of cognitive and visual processing, and it is
an essential precondition for eye-based interaction applications in virtual
reality. However, the three-dimensional characteristics of virtual reality
environments pose new challenges to existing recognition algorithms. Based on
seven evaluation metrics and the Overall score (the mean of the seven
normalized metric values), we obtain the optimal parameters of three existing
recognition algorithms (Velocity-Threshold Identification, Dispersion-
Threshold Identification, and Velocity & Dispersion-Threshold Identification)
and of our modified Velocity & Dispersion-Threshold Identification algorithm,
and we compare the performance of the four algorithms under their optimal
parameters. The results show that our modified Velocity & Dispersion-Threshold
Identification performs best. The impact of interface complexity on
classification results is also preliminarily explored; the results show that
the algorithms are not sensitive to interface complexity.
###### keywords:
Gaze-based Data, Eye tracking, Virtual Reality, Fixation
## 1 Introduction
The essence of eye-movement behavior, whether active or passive, is the allocation of human attentional resources. One of the main topics of eye movement research is to infer brain activity by monitoring eye movements, and eye-tracking technology provides vital technical support for a deeper understanding of human eye-movement behaviors and the underlying psycho-cognitive activities. Humans mainly have six eye movement types: fixations, saccades, smooth pursuits, optokinetic reflex, vestibulo-ocular reflex, and vergence (Leigh and Zee, 2007). For researchers who use eye trackers in their studies, it is crucial to identify these basic eye movement types from noisy and often inaccurate raw eye-position signals. Fixations and saccades are the most frequently studied of the six types, especially in human intention recognition and cognition state recognition (Istance et al., 2010). Fixation identification translates raw eye-movement data points into fixation locations and, implicitly, the saccades between them. It significantly reduces the raw data size by removing slight eye movements that occur during fixation with little significance in higher-level analysis (such as tremors, drifts, and flicks (Alpern, 1969; Ditchburn, 1980)) and by merging raw fixation points into a single representative tuple. Therefore, fixation identification can reduce the noise and volume of raw data while retaining its most essential features for understanding cognitive and visual processing behavior.
Virtual reality devices have become more portable and affordable in recent years, and many commercial products are now equipped with eye trackers. Eye movement, especially fixation, is a natural and intuitive way to interact with the environment. It indicates where our attention is or what we will do next. It is also a part of human nonverbal communication and regulates interaction (Majaranta and Bulling, 2014). For example, in collaborative tasks, gaze information can improve communication even more than verbal information by supporting a shared visual basis (Gergle et al., 2013). Therefore, as an input modality of virtual reality, eye tracking can enable a new and more seamless interaction mode, allowing virtual reality applications to respond to users' attention and even users' emotions (Brennan et al., 2008). Eye tracking has a long history of application in virtual reality human-computer interaction, with five main applications: user avatar eye behavior simulation (Duchowski and Jörg, 2020; Andrist et al., 2012; Queiroz et al., 2008; Lance and Marsella, 2008), foveated rendering (applying high-quality rendering only in the gaze area to reduce computational cost) (Weier et al., 2016; Swafford et al., 2016; Roth et al., 2017; Albert et al., 2017), mitigation of the side effects of the vergence-accommodation conflict (Kramida, 2016; Duchowski et al., 2014; Fisker et al., 2013; Bernhard et al., 2014), gaze-based interaction (to reduce head movement or improve interaction efficiency) (Sidenmark and Gellersen, 2019; Rajanna and Hansen, 2018; Piumsomboon et al., 2017; Pfeuffer et al., 2017; Sidorakis and Koulieris, 2015), and user intent or behavior recognition (Brendan et al., 2021; Pfeiffer et al., 2020; Alghofaili et al., 2019). Many of these studies are based on fixation identification.
One of the prerequisites for the wider application of eye tracking in virtual reality is to recognize the viewpoint and its three-dimensional coordinates from the sampled eye-tracking data. However, most existing fixation-identification algorithms are based on data collected from conventional 2D screens with static stimuli, which may not suit Virtual Reality (VR) 3D environments: because the distribution of fixation points expands from two dimensions to three, locating the user's fixation-point coordinates becomes more complex. Mobile eye-tracking devices such as Tobii Glasses, designed for real-world use, rely on the video stream of a front camera for gaze annotation, which still essentially analyzes data in a 2D environment, so the accuracy of fixation identification cannot be guaranteed (Olsen, 2012).
Duchowski et al. (2002) first present a velocity-based eye movement analysis algorithm for three-dimensional spaces, applicable to 3D eye movement data in a virtual environment. They mainly solve the mapping from original 2D eye movement data to 3D world coordinates. However, their work lacks a principled evaluation method; the authors instead compare against experimental conclusions from traditional environments. Specifically, they find that the average fixation duration in virtual reality is 1.9 s, significantly different from that in reading (150-650 ms) in previous studies conducted in reality, and it is difficult to tell whether the difference comes from the virtual reality environment itself or from the algorithm's error. Diaz et al. (2013) present methods for identifying fixations and saccadic eye movements in a virtual environment, building on Duchowski et al. (2002), including the calculation of gaze-in-world angles, the angular distance from gaze to an object in the virtual world, and algorithms for identifying pursuit eye movements. Different approaches to fixation identification in 3D scenes have been described by Pelz and Canosa (2001), Reimer and Sodhi (2006), and Munn et al. (2008). However, these approaches target monocular eye trackers: although they can identify fixations in 3D environments, they provide only a fixation direction instead of a 3D fixation position, which is important in practical applications.
Llanes-Jurado et al. (2020) develop a dispersion-threshold identification algorithm for data obtained from an eye-tracking system in a head-mounted display, with rule-based criteria proposed to calibrate the algorithm's thresholds through different features. However, differences in depth of field are not considered in the design of their stimuli, which are presented on two planes at a fixed distance from the user. Moreover, there is no accuracy metric for fixation coordinates to indicate whether the predicted fixation coordinates are consistent with those guided by the stimulus, and they lack a side-by-side comparison of different algorithms.
In this paper, based on three existing gaze classification algorithms, a modified algorithm is proposed to classify fixations and calculate their three-dimensional coordinates. The best parameters of the four classification algorithms, including ours, are obtained through a variety of evaluation metrics. The classification results of each algorithm are computed on two tasks (occlusion and non-occlusion) and compared. Overall, our algorithm performs the best. It can identify the user's actual fixation position, with a velocity threshold of 140°/s, a minimum fixation duration of 130 ms, and a dispersion threshold of 5.75° as the optimal parameters. The main contributions of this paper are as follows:
  * •
Existing evaluation metrics and classification algorithms are adapted to virtual reality environments to calculate each algorithm's optimal parameters.
  * •
The m-IVDT algorithm is proposed to improve the accuracy of fixation coordinates.
  * •
We show that the four algorithms are not sensitive to interface complexity.
The paper is organized as follows. Section 2 reviews the three existing
algorithms and introduces the proposed algorithm. Section 3 provides a
standardized evaluation system for in-depth quantitative and qualitative
analysis of classification results in VR. Section 4 describes our experiment
platform and method. Section 5 provides a comparative analysis of the four
algorithms. Section 6 concludes our work and suggests future directions.
## 2 Algorithm
Fixation-identification algorithms can be based on velocity, dispersion, or
area depending on the spatial criteria(Salvucci and Goldberg, 2000). Area-
based algorithms identify points within given areas of interest (AOIs)
representing relevant visual targets. These algorithms provide higher-level
assignment of fixation to AOIs, representing higher attentional focus levels
on display. Fixation is used as inputs to AOI algorithms. Our research goal is
low-level fixation recognition, so area-based algorithms are not in our
consideration. Velocity-based algorithms take advantage of the fact that
saccades are rapid movements compared to fixation. The most representative
velocity-based algorithm is Velocity-Threshold Identification (I-VT), the
simplest method to understand and implement. Dispersion-based algorithms
emphasize the dispersion of fixed points because they usually are near each
other. For example, Dispersion-Threshold Identification (I-DT) identifies
fixation as groups of consecutive points within a particular dispersion or
maximum separation. We also choose a hybrid algorithm based on these two
algorithms, Velocity & Dispersion-Threshold Identification (IVDT), which
integrates speed and discreteness into fixation classification.
This section describes these three algorithms. Sample algorithms are
formalized to represent the essential ideas of each class of algorithms and
express their basic techniques as simply as possible.
### 2.1 Velocity-Threshold Identification
I-VT begins by calculating point-to-point velocities for each eye data sample.
Each velocity is computed by dividing the visual angle between two adjacent
points by the time duration.
$v_{i}=\frac{\arccos{\frac{V_{i}\cdot V_{i+1}}{\parallel
V_{i}\parallel\parallel V_{i+1}\parallel}}}{|t_{i+1}-t_{i}|}\times 5.73\times
10^{4}$
where $V_{i}$ is the normalized gaze direction vector at time $t_{i}$, $V_{i+1}$ is the normalized gaze direction vector at time $t_{i+1}$, and $5.73\times 10^{4}$ converts the unit from radians per millisecond to degrees per second (with timestamps in milliseconds, $\frac{180}{\pi}\times 10^{3}\approx 5.73\times 10^{4}$). I-VT then classifies each point as a fixation or saccade point based on a simple velocity threshold: if the point's velocity is lower than the threshold, it is a fixation point; otherwise, it is a saccade point. The process then merges consecutive fixation points into fixation groups. Finally, I-VT translates each fixation group into a tuple $(x_{f},y_{f},z_{f},t_{start},d)$ using the centroid (i.e., the center of mass) coordinates of the points as $x_{f}$, $y_{f}$, and $z_{f}$, the time of the first point as $t_{start}$, and the duration of the points as $d$. Algorithm 1 presents pseudocode for this I-VT algorithm.
Algorithm 1 Velocity-Threshold Identification
1:Input: $p_{i}$: 3D gaze position with timestamps $(x,y,z,t)$; $V_{i}$: normalized gaze direction vector with timestamps $(a,b,c,t)$; $Vel$: velocity threshold
2:Output: $f_{i}$: representative coordinates of the fixation groups, with the starting time and duration of each group, $(x_{f},y_{f},z_{f},t_{start},d)$
3:$//$ calculate instantaneous angular velocities
4:for $i=0\to n-2$ do
5: $v_{i}=\frac{\arccos{\frac{V_{i}\cdot V_{i+1}}{\parallel V_{i}\parallel\parallel V_{i+1}\parallel}}}{|t_{i+1}-t_{i}|}\times 5.73\times 10^{4}$
6:end for
7:Initialize fixation group
8:for $i=0\to n-2$ do
9: if $v_{i}<Vel$ then
10: $p_{i}$ is added to the fixation group
11: else
12: if the fixation group is not empty then
13: Calculate the centroid coordinates $(x_{f},y_{f},z_{f})$ of the points in the fixation group
14: Save the timestamp $t$ of the first point in the fixation group as $t_{start}$
15: Calculate the duration $d$ of the points in the fixation group
16: Initialize fixation group
17: end if
18: end if
19:end for
20:Flush the remaining fixation group, if not empty, as in lines 13-15
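For reference, a minimal Python sketch of this I-VT procedure (illustrative, not the authors' implementation; timestamps are assumed to be in milliseconds, as implied by the $5.73\times 10^{4}$ constant):

```python
import numpy as np

def i_vt(positions, directions, timestamps_ms, vel_threshold):
    """Sketch of I-VT: positions (n,3) 3D gaze points, directions (n,3)
    unit gaze vectors, timestamps in ms, threshold in deg/s.
    Returns fixation tuples (x_f, y_f, z_f, t_start, d)."""
    # Angular velocity between consecutive samples in deg/s
    # (degrees per millisecond * 1e3), matching the formula above.
    cosines = np.clip(np.sum(directions[:-1] * directions[1:], axis=1), -1, 1)
    v = np.degrees(np.arccos(cosines)) / np.abs(np.diff(timestamps_ms)) * 1e3

    fixations, group = [], []

    def flush():
        if group:
            cx, cy, cz = np.mean(positions[group], axis=0)  # centroid
            t_start = timestamps_ms[group[0]]
            fixations.append((cx, cy, cz, t_start,
                              timestamps_ms[group[-1]] - t_start))
            group.clear()

    for i, vi in enumerate(v):
        if vi < vel_threshold:
            group.append(i)   # below threshold: fixation sample
        else:
            flush()           # saccade sample closes the current group
    flush()                   # flush the trailing group
    return fixations
```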
### 2.2 Dispersion-Threshold Identification
The original I-DT algorithm uses a sliding window spanning consecutive data points to check for potential fixations. This works well for eye movement data with a stable sampling frequency. However, in a virtual reality environment, the data collection frequency is unstable and often reduced because of increasing graphic rendering requirements or the limited computing power of GPUs. Since the raw data is obtained using the SDK (SRanipal) through a Unity script, the data collection frequency depends on the graphics engine's processing speed.
To address this, we adjust the algorithm. Instead of setting an initial window, we check whether a group of fixation points meets the minimum fixation duration after the group has been determined. In addition, the dispersion distance between the centroids of two adjacent fixation groups is checked, and if they are too close (below the dispersion threshold) they are merged.
The distance metric we choose is the centroid distance: the distance is represented by the visual angle between the current point and the following (or previous) point, and we only check the distance from the new point to be added to the centroid. If the dispersion is below the dispersion threshold, we expand the fixation group and recalculate the centroid. If the dispersion is above the dispersion threshold, the new point does not belong to the fixation. We then check whether each fixation group meets the minimum fixation duration and whether the dispersion distance from adjacent fixation groups stays within the maximum dispersion distance. If both conditions are met, the group is regarded as a fixation at the centroid $(x,y,z)$ of its points, with the timestamp of the first point as the fixation start timestamp and the duration of the points as the fixation duration. This process is applied to the entire dataset. Algorithm 2 presents pseudocode for this I-DT algorithm.
Algorithm 2 Dispersion-Threshold Identification
1:Input: $p_{i}$: 3D gaze position with timestamps $(x,y,z,t)$; $DD_{max}$: maximum fixation dispersion distance threshold; $Duration_{min}$: minimum fixation duration threshold
2:Output: $f_{i}$: representative coordinates of the fixation groups, with the starting time and duration of each group, $(x_{f},y_{f},z_{f},t_{start},d)$
3:Initialize previous fixation group $PFG$ and current fixation group $CFG$
4:save $p_{0}$ into $PFG$
5:save $p_{1}$ into $CFG$
6:for $i=2\to n-1$ do
7: Calculate the $CFG$ centroid coordinates $(x,y,z)$
8: Calculate the dispersion distance ($DD$) between the $CFG$ centroid and $p_{i}$
9: if $DD<DD_{max}$ then
10: save $p_{i}$ into $CFG$
11: else
12: if $CFG$ is not empty then
13: Calculate the duration $d$ of the points in $CFG$
14: if $d>Duration_{min}$ then
15: Calculate the dispersion distance ($DD$) between the first point in $CFG$ and the last point in $PFG$
16: if $DD<DD_{max}$ then
17: Merge $CFG$ into $PFG$
18: else
19: Calculate the $PFG$ centroid coordinates $(x_{f},y_{f},z_{f})$
20: Save the timestamp $t$ of the first point in $PFG$ as $t_{start}$
21: Calculate the duration $d$ of the points in $PFG$
22: Initialize $PFG$
23: Merge $CFG$ into $PFG$
24: end if
25: Initialize $CFG$
26: save $p_{i}$ into $CFG$
27: else
28: Initialize $CFG$
29: save $p_{i}$ into $CFG$
30: end if
31: end if
32: end if
33:end for
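A minimal Python sketch of this adjusted I-DT logic (illustrative; the dispersion metric `dist` is passed in because the paper measures dispersion as a visual angle, which depends on the eye position):

```python
import numpy as np

def i_dt(points, times_ms, dd_max, duration_min, dist):
    """Sketch of the adjusted I-DT. points: (n,3) gaze points,
    times_ms: timestamps (ms), dd_max: dispersion threshold,
    duration_min: minimum fixation duration (ms),
    dist(a, b): dispersion metric between two 3D points."""
    fixations = []
    pfg, cfg = [0], [1]  # previous / current fixation group (indices)

    centroid = lambda g: np.mean(points[g], axis=0)

    for i in range(2, len(points)):
        if dist(centroid(cfg), points[i]) < dd_max:
            cfg.append(i)  # point stays within the dispersion threshold
            continue
        # Dispersion exceeded: close the current group.
        if times_ms[cfg[-1]] - times_ms[cfg[0]] > duration_min:
            if dist(points[cfg[0]], points[pfg[-1]]) < dd_max:
                pfg.extend(cfg)          # adjacent groups too close: merge
            else:
                c = centroid(pfg)        # emit the previous group as a fixation
                fixations.append((*c, times_ms[pfg[0]],
                                  times_ms[pfg[-1]] - times_ms[pfg[0]]))
                pfg = cfg                # current group becomes "previous"
        cfg = [i]                        # start a new current group
    return fixations
```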
### 2.3 Velocity & Dispersion-Threshold Identification
Komogortsev and Karpov (2013) propose a ternary classification algorithm called Velocity & Dispersion-Threshold Identification (IVDT). It first identifies saccades by a velocity threshold and then separates smooth pursuits from fixations using a modified dispersion threshold and duration. The original algorithm still requires an initial time window, and smooth pursuit is not one of our classification categories, so we modify the algorithm.
Figure 1: The intersection is the modified sampling point.
The I-VDT algorithm in this paper still employs three thresholds: velocity, dispersion, and minimum fixation duration. As in I-VT, I-VDT begins by calculating point-to-point velocities for each eye data sample. I-VDT then classifies each point (Algorithm 3) as a fixation or saccade point based on a simple velocity threshold: if the point's velocity is below the threshold, it is a fixation point; otherwise, it is a saccade point. We then check whether each fixation group meets the minimum fixation duration and whether the dispersion distance from adjacent fixation groups stays within the maximum dispersion distance. If both conditions are met, the group is regarded as a fixation at the centroid $(x,y,z)$ of its points, with the timestamp of the first point as the fixation start timestamp and the duration of the points as the fixation duration.
We use the gaze-based ray-casting method to calculate gaze-object intersections as gaze intersection points, showing where the participant is looking. However, when the line of sight deviates for a short time, the Z coordinates of the gaze intersection points may differ greatly even though the user's actual gaze barely changes. In this case, the centroid method used in two-dimensional interfaces may cause a large error on the z-axis, so we propose a modified method (m-IVDT) to calculate the centroid of the fixation group.
The basic idea of this method is to transform the coordinates of sampling points whose Z coordinates approach infinity or fall outside the target area (in this experimental environment, if Z is greater than or equal to 4.9, the sampling point collides with the wall farthest from the user in the virtual room). As shown in Figure 1, we first take the direction from the pupil position to the existing fixation as the normal vector and construct the plane through the fixation. We then calculate the intersection of this plane with the line of sight formed by the pupil position and the sampling point. This intersection replaces the original sampling point when recalculating the centroid of the fixation group, i.e., the new fixation coordinate.
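A geometric sketch of this correction (illustrative; function and variable names are ours):

```python
import numpy as np

def project_to_fixation_plane(pupil, fixation, sample):
    """m-IVDT correction sketch: build the plane through `fixation` whose
    normal is the pupil-to-fixation direction, then intersect it with the
    line of sight from `pupil` through `sample`. The intersection replaces
    the original sample before the centroid is recomputed."""
    normal = fixation - pupil                  # plane normal
    ray = sample - pupil                       # line-of-sight direction
    denom = np.dot(normal, ray)
    if np.isclose(denom, 0.0):                 # sight line parallel to plane
        return sample                          # leave the sample unchanged
    s = np.dot(normal, normal) / denom         # solve n.(P + s*r - F) = 0
    return pupil + s * ray                     # ray-plane intersection

# e.g. a far sample that hit the back wall (Z >= 4.9) gets pulled onto
# the plane of the current fixation:
pupil = np.array([0.0, 1.2, 0.0])
fixation = np.array([0.1, 1.2, 2.2])
sample = np.array([0.3, 1.3, 4.9])
print(project_to_fixation_plane(pupil, fixation, sample))
```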
Algorithm 3 Velocity & Dispersion-Threshold Identification
1:Input: $p_{i}$: 3D gaze position with timestamps $(x,y,z,t)$; $V_{i}$: normalized gaze direction vector with timestamps; $Vel$: velocity threshold; $DD_{max}$: maximum fixation dispersion distance threshold; $Duration_{min}$: minimum fixation duration threshold
2:Output: $f_{i}$: representative coordinates of the fixation groups, with the starting time and duration of each group, $(x_{f},y_{f},z_{f},t_{start},d)$
3:$//$ calculate instantaneous angular velocities
4:for $i=0\to n-2$ do
5: $v_{i}=\frac{\arccos{\frac{V_{i}\cdot V_{i+1}}{\parallel V_{i}\parallel\parallel V_{i+1}\parallel}}}{|t_{i+1}-t_{i}|}\times 5.73\times 10^{4}$
6:end for
7:Initialize previous fixation group $PFG$ and current fixation group $CFG$
8:save $p_{0}$ into $PFG$
9:save $p_{1}$ into $CFG$
10:for $i=2\to n-1$ do
11: if $v_{i}<Vel$ then
12: save $p_{i}$ into $CFG$
13: else
14: if $CFG$ is not empty then
15: Calculate the duration $d$ of the points in $CFG$
16: if $d>Duration_{min}$ then
17: Calculate the dispersion distance ($DD$) between the first point in $CFG$ and the last point in $PFG$
18: if $DD<DD_{max}$ then
19: Merge $CFG$ into $PFG$
20: else
21: Calculate the $PFG$ centroid coordinates $(x_{f},y_{f},z_{f})$
22: Save the timestamp $t$ of the first point in $PFG$ as $t_{start}$
23: Calculate the duration $d$ of the points in $PFG$
24: Initialize $PFG$
25: Merge $CFG$ into $PFG$
26: end if
27: Initialize $CFG$
28: save $p_{i}$ into $CFG$
29: else
30: Initialize $CFG$
31: save $p_{i}$ into $CFG$
32: end if
33: end if
34: end if
35:end for
## 3 Evaluation
Komogortsev et al. (2010) define a set of qualitative and quantitative scores to assess the performance of classification algorithms. For fixation and saccade classification, they propose seven evaluation metrics: four well-known ones, the average number of fixations (ANF; FN in the tables below), average fixation duration (AFD), average number of saccades (ANS; SN below), and average saccade amplitude (ASA), and three original ones, the fixation quantitative score (FQnS), fixation qualitative score (FQlS), and saccade quantitative score (SQnS). The scores originally measure classification quality when only fixations and saccades are present in the raw eye-positional trace of a two-dimensional environment. We make the following slight modifications to extend these behavior scores to a three-dimensional virtual reality environment.
### 3.1 Fixation quantitative score (FQnS)
FQnS compares the amount of detected fixation behavior to the actual amount of fixation behavior encoded in the stimuli (Komogortsev et al., 2010). If the recorded eye-positional signal is classified as a fixation whose centroid lies within spatial proximity of the stimulus fixation, defined as 1/3 of the amplitude of the previous stimulus saccade (Figure 2), the total fixation duration is incremented by the duration of that fixation group.
Figure 2: Total fixation duration.
FQnS is calculated by normalizing the total resulting fixation duration by the
actual total duration of fixation points encoded in the stimulus.
$FQnS=100\%\times\frac{total\;fixation\;duration}{stimuli\;total\;fixation\;duration}$
The ideal FQnS never reaches 100% because it takes time for the central nervous system to send a neuronal signal to the relevant muscles to execute a saccade (Leigh and Zee, 2007); the beginning of a fixation is always delayed by 200 ms plus the duration of a saccade (Leigh and Zee, 2007). Therefore, the ideal FQnS is calculated by the following equations:
$D_{sacDur_{j}}=2.2\times A_{sacAmp_{j}}+21$
$Ideal\_FQnS=100\%\times\left(1-\frac{m\times S_{l}+\sum_{j=1}^{m}D_{sacDur_{j}}}{\sum_{i=1}^{n}D_{stimFixDur_{i}}}\right)$
where $m$ is the number of stimulus saccades; $S_{l}$ is the saccadic latency of 200 ms; $A_{sacAmp_{j}}$ is the amplitude of the $j$-th stimulus saccade measured in degrees; $D_{sacDur_{j}}$ is the expected duration (in ms) of the $j$-th stimulus saccade; $n$ is the number of stimulus fixations; and $D_{stimFixDur_{i}}$ is the duration of the $i$-th stimulus fixation.
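Plugging a session design into these equations, for example (a sketch; the saccade amplitudes below are illustrative):

```python
def ideal_fqns(sac_amplitudes_deg, fix_durations_ms, latency_ms=200.0):
    """Sketch of the Ideal_FQnS formula above.
    sac_amplitudes_deg: amplitudes of the m stimulus saccades (degrees)
    fix_durations_ms: durations of the n stimulus fixations (ms)"""
    # Expected saccade duration model: D = 2.2 * amplitude + 21 (ms)
    sac_durations = [2.2 * a + 21.0 for a in sac_amplitudes_deg]
    lost = len(sac_amplitudes_deg) * latency_ms + sum(sac_durations)
    return 100.0 * (1.0 - lost / sum(fix_durations_ms))

# 20 saccades of ~10 degrees between 21 fixations of 1.5 s each:
print(ideal_fqns([10.0] * 20, [1500.0] * 21))  # about 84.6 %
```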
### 3.2 Fixation qualitative score (FQlS)
FQlS compares the spatial proximity of the classified eye-fixation signal to
the actual stimulus signal, indicating the positional accuracy or error of the
classified fixation. FQlS is calculated with the same formula proposed by
Komogortsev et al. (2010). If a sampled eye point is classified as fixation,
it calculates the Euclidean distance between the fixation group’s centroid
coordinates $(x_{c},y_{c},z_{c})$ and the corresponding stimulus fixation
coordinates $(x_{s},y_{s},z_{s})$. Then the average of these distances is
calculated as follows:
$fixationDistance_{i}=\sqrt{(x_{c}-x_{s})^{2}+(y_{c}-y_{s})^{2}+(z_{c}-z_{s})^{2}}$
$FQlS=\frac{1}{N}\sum_{i=1}^{N}fixationDistance_{i}$
where $N$ is the number of sampled points classified as fixation.
In Komogortsev et al. (2010), the ideal value of FQlS is assumed to be about $0.5$ degrees because the accuracy of modern eye trackers is generally better than $0.5$ degrees. That assumes the distance between the eyes and the two-dimensional interface is constant, making the transformation between visual angle and Euclidean distance easy to compute. In a three-dimensional environment, however, Euclidean distance cannot be directly transformed into visual angle. In practice, the accuracy of virtual reality eye trackers is still lower than that of traditional eye trackers. Based on a preliminary analysis of the prediction data, we hypothesize that the practical value of FQlS should be around $0.5$, measured in the same units as the Euclidean distance.
### 3.3 Saccade quantitative score (SQnS)
SQnS is calculated with the same formula proposed by Komogortsev et al. (2010). Two separate numbers are computed: the first represents the amount of stimulus saccadic behavior, i.e., the "total stimuli saccade amplitude"; the second represents the amount of classified saccadic behavior, i.e., the "total detected saccade amplitude". SQnS is computed by the following equation:
$SQnS=100\%\times\frac{total\;detected\;saccade\;amplitude}{total\;stimuli\;saccade\;amplitude}$
An SQnS of 100% indicates that the integral sum of detected eye saccade amplitudes equals that of the actual stimuli, so an SQnS closer to 100% denotes better performance.
## 4 Experiment
### 4.1 Participant
A total of 11 participants (six females and five males), aged 22-27 years with an average age of 24 (±1.53), were recruited. All participants had normal or corrected-to-normal vision; only healthy people without cognitive or motor impairments were eligible to participate. None of the participants reported known visual or vestibular disorders, such as color or night blindness or balance disorders. Ten had corrected vision and wore glasses or contact lenses during the experiment. Nine had tried HMDs several times before, two had no prior VR experience, and three had prior experience with eye-based interaction.
### 4.2 Apparatus
Participants wore an HTC Vive Pro Eye headset with a built-in eye tracker. The headset had a resolution of 1440 × 1600 pixels per eye (2880 × 1600 pixels combined) with a 110° field of view, and its highest refresh rate was 90 Hz. The refresh rate of the built-in eye tracker was 120 Hz, with a tracking precision of 0.5°-1.1°. The experiment was conducted on a PC with an Intel Core i7-9700, an NVIDIA GeForce GTX 1070 8 GB GPU, and 16 GB of DDR4-2666 RAM. The experimental platform was developed using Unity 2019.4 and C#.
### 4.3 Procedure
The experiment takes approximately 10 minutes in total for each participant. Each participant is given a brief introduction to the purposes and objectives of the experiment before signing a consent form. Participants are asked to sit in a natural sitting position and keep their head position as fixed as possible during the experiment, although natural head rotation is allowed.
The virtual stimuli consist of a virtual room with the participant at its center. At the beginning of the experiment, the eye tracker is recalibrated by asking the participant to gaze at targets at five varying points on the display. The calibration process takes approximately 1 minute.
There are two kinds of stimuli. The first presents a blue sphere in the center of the participant's vision in the virtual room; the sphere stays in place for 1.5 s before changing position, and its position changes 20 times in each session. Each position is generated randomly in a cube of side length 1.6 centered at $(0,1.2,2.2)$, and only one sphere is displayed in the scene at any time. Participants are asked to gaze at the sphere throughout the session. The second stimulus has 19 blue spheres randomly placed in the same cube, plus a start sphere in the center of the participant's vision. The spheres turn red in random order, starting from the central one; each red sphere remains red for 1.5 seconds and then returns to blue, repeating until all 20 spheres have turned red once. There is always exactly one red sphere and 19 blue spheres in the scene, and participants are asked to gaze at the red sphere throughout the session.
Each of the two stimuli is repeated five times, so an experiment consists of ten sessions, presented in random order with no breaks between sessions. The target sphere display area is limited to ensure that participants can see the target without moving their heads widely.
### 4.4 Data set
Our eye tracker data include users’ combined gaze origin positions, combined
normalized gaze direction vectors, pupil diameters, eye openness, head-set
positions, and corresponding timestamps (Figure 3). Eleven subjects conducted a total of 110 sessions; after removing invalid data, 100 valid sessions remain, comprising 168,075 raw data samples.
Figure 3: Eye tracker data output
Data preprocessing includes interpolating missing data and transforming coordinate systems. The main reason for missing data is blinking, and the majority of the missing fields are Gaze Origin and Gaze Direction Normalized. We fill missing data with the last valid value because, in our follow-up research, the whole classification pipeline, including data preprocessing, should be able to run in real time. There are 4205 missing data samples, accounting for about 2.5% of the data set. The raw data is obtained using the SDK (SRanipal) through a Unity script. According to the SRanipal documentation, Gaze Origin is the point in the eye from which the gaze ray originates, and Gaze Direction Normalized is the normalized gaze direction of the eye; both are given in a right-handed coordinate system. However, Unity uses a left-handed coordinate system, so we multiply the X-axis coordinates by -1 to convert from the right-handed to the left-handed system. Secondly, Gaze Origin is relative to the eye position, i.e., the main camera's position in the three-dimensional environment, so a further conversion is needed: the main camera's coordinates are added to Gaze Origin.
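A minimal sketch of these two conversions (illustrative; function and variable names are ours):

```python
import numpy as np

def to_unity_world(gaze_origin, gaze_dir, camera_pos):
    """Sketch of the preprocessing described above: negate X to go from
    the right-handed SDK frame to Unity's left-handed frame, then add the
    main camera position to place the eye-relative origin in world space."""
    flip = np.array([-1.0, 1.0, 1.0])
    origin_world = gaze_origin * flip + camera_pos
    dir_world = gaze_dir * flip
    return origin_world, dir_world / np.linalg.norm(dir_world)
```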
In a virtual reality environment, the geometry of the presented stimuli is
known. 3D gaze positions can be inferred by calculating the intersection point
of the gaze direction vector and the reconstructed 3D scene with a ray-based
approach (Duchowski et al., 2001, 2002; Mansouryar et al., 2016). A gaze
direction vector and the corresponding gaze original position are used to find
the point of intersection with the reconstructed 3D scene representing the 3D
gaze point.
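As an illustration of this ray-based approach applied to the sphere stimuli used here (a sketch, not the authors' code):

```python
import numpy as np

def ray_sphere_intersection(origin, direction, center, radius):
    """Intersect the gaze ray with a sphere stimulus to obtain the 3D gaze
    point, if any. `direction` is assumed to be a unit vector."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius**2
    disc = b * b - 4.0 * c           # discriminant of |origin + s*dir - center|^2 = r^2
    if disc < 0:
        return None                  # ray misses the sphere
    s = (-b - np.sqrt(disc)) / 2.0   # nearest intersection along the ray
    return origin + s * direction if s >= 0 else None
```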
### 4.5 Threshold Tuning
It is important to test the performance of each classification algorithm over
a sensible range of threshold values.
In two influential reviews of research on eye movements in information processing, Rayner (1992, 1998) reports that mean fixation durations vary with the task: 225-250 ms for (silent) reading, 180-275 ms for visual search, and 260-330 ms for scene viewing. Mean saccade amplitudes also vary with the task: about 2° for reading, 3° for visual search, and 4° for scene viewing. The research of Andrews and Coppola (1999) reaches a similar conclusion: average fixation durations are 150-220 ms for reading, 190-230 ms for visual search, and 200-400 ms for scene viewing, while saccade sizes vary from 3° to 7° during scene viewing, 3° to 8° during reading, and 4° to 7° during visual search. A velocity threshold of 130°/s and a minimum fixation duration of 150 ms are suggested by Duchowski et al. (2002). The minimum fixation duration should be less than the average fixation duration of each task in these studies, and the maximum saccade amplitude should be less than the average saccade amplitude of each task.
Based on these previous studies, for I-DT and I-VDT the dispersion threshold range is set from 1.0° to 6.0° with a step size of 0.25°, and the minimum fixation duration threshold is set from 50 ms to 150 ms with a step size of 10 ms. The range of velocity threshold values for I-VT and I-VDT is set from 30°/s to 150°/s with a step size of 10°/s.
We use the grid search method to traverse all parameter combinations. The three algorithms classify each session's data, and the seven evaluation metrics are calculated for each. Considering the simple stimulus behavior and the normal subject pool, the following ideal metric values are used: $Ideal\_FN=21$ fixations, $Ideal\_AFD=1.5s$, $Ideal\_SN=20$ saccades, $Ideal\_FQlS=0.5$, and $Ideal\_SQnS=100\%$. Because the positions of the 20 stimulus spheres in each session are random, the ideal ASA differs per session and must be calculated separately; the ideal value of FQnS is likewise related to the angles between stimulus spheres and is calculated separately. Theoretically, the closer a metric is to its ideal value, the better the algorithm performs on that metric, so we use the absolute difference between the actual and ideal values to express the algorithm's performance. Because the metrics use different units, min-max normalization transforms the absolute difference of each metric to $[0,1]$ for better comparison:
$y_{normalized}=\frac{y-y_{min}}{y_{max}-y_{min}}$
The Overall score is the average value of the normalized scores of each metric
and is taken as the final comprehensive performance score. The optimal
parameters are selected according to this score.
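A compact sketch of this scoring scheme (illustrative):

```python
import numpy as np

def overall_scores(metric_values, ideal_values):
    """metric_values: (n_runs, 7) array, one row per parameter combination,
    one column per metric (FQnS, FQlS, FN, AFD, SN, ASA, SQnS).
    ideal_values: length-7 array of per-metric ideal values.
    Returns the Overall score per run (mean of min-max normalized absolute
    differences from the ideal); lower is better."""
    diff = np.abs(metric_values - ideal_values)        # distance to ideal
    span = diff.max(axis=0) - diff.min(axis=0)
    normalized = (diff - diff.min(axis=0)) / np.where(span == 0, 1, span)
    return normalized.mean(axis=1)
```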
## 5 Result and Discussion
### 5.1 Tuning Parameter Values for Fixation-identification Algorithms
Parameter values for these algorithms greatly influence their output, so a
direct comparison between these algorithms has to be done with caution.
Because of the change of interactive environment (virtual reality), most of
the parameter values of these algorithms cannot be applied from the literature
directly. Hence, it is necessary to tune the parameter values for the
particular environment.
#### 5.1.1 Tuning Parameter Values for Velocity-Threshold Identification
A one-way ANOVA examines the impact of the velocity threshold on all seven metrics and the Overall score for Velocity-Threshold Identification. Table 1 shows the ANOVA output with statistical significance. The significance value is below 0.05 for the Overall score, FQnS, FN, SN, and SQnS; therefore, these five metrics differ statistically significantly between the velocity thresholds chosen. However, there is no significant difference in FQlS, AFD, or ASA.
Table 1: One-Way ANOVA Output of Velocity-Threshold of IVT on all seven
metrics and the Overall score
Metric | Source | Sum of Squares | df | Mean Square | F | Sig.
---|---|---|---|---|---|---
FQnS | Between Groups | 6.441 | 12 | 0.537 | 52.641 | 0.000
| Within Groups | 12.989 | 1274 | 0.01 | |
| Total | 19.43 | 1286 | | |
FQlS | Between Groups | 0.04 | 12 | 0.003 | 0.14 | 1.000
| Within Groups | 30.56 | 1274 | 0.024 | |
| Total | 30.6 | 1286 | | |
FN | Between Groups | 43.059 | 12 | 3.588 | 259.412 | 0.000
| Within Groups | 17.622 | 1274 | 0.014 | |
| Total | 60.681 | 1286 | | |
AFD | Between Groups | 4.239 | 12 | 0.353 | 183.296 | 0.000
| Within Groups | 2.456 | 1274 | 0.002 | |
| Total | 6.695 | 1286 | | |
SN | Between Groups | 42.621 | 12 | 3.552 | 259.408 | 0.000
| Within Groups | 17.443 | 1274 | 0.014 | |
| Total | 60.064 | 1286 | | |
ASA | Between Groups | 0.001 | 12 | 0 | 0.003 | 1.000
| Within Groups | 41.62 | 1274 | 0.033 | |
| Total | 41.621 | 1286 | | |
SQnS | Between Groups | 4.291 | 12 | 0.358 | 33.651 | 0.000
| Within Groups | 13.537 | 1274 | 0.011 | |
| Total | 17.828 | 1286 | | |
Overall Score | Between Groups | 7.745 | 12 | 0.645 | 85.513 | 0.000
| Within Groups | 9.609 | 1273 | 0.008 | |
| Total | 17.354 | 1285 | | |
Figure 4 shows more clearly that as the velocity threshold increases, the five metric values with significant differences decrease, i.e., the classification result gets closer to the ideal result, so 150°/s is the optimal velocity threshold.
Figure 4: Line chart of velocity thresholds and all eight metrics
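Such a test can be run, for example, with `scipy.stats.f_oneway`, grouping the per-session Overall scores by threshold (a sketch; the data layout is assumed):

```python
from scipy.stats import f_oneway

def threshold_anova(scores_by_threshold):
    """One-way ANOVA sketch: `scores_by_threshold` is assumed to map each
    tested velocity threshold (30, 40, ..., 150 deg/s) to the list of
    per-session Overall scores obtained with that threshold."""
    f_stat, p_value = f_oneway(*scores_by_threshold.values())
    return f_stat, p_value
```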
#### 5.1.2 Tuning Parameter Values for Dispersion-Threshold Identification
A two-way ANOVA examines the impact of minimum fixation duration and maximum dispersion angle on the Overall score. There is no statistically significant interaction between minimum fixation duration and maximum dispersion angle on the Overall score of classification, F(190, 21772) = 0.626, p = 1.000. Further, the partial eta squared is only 0.005 for the interaction effect, which is negligible. Two-way ANOVAs also examine the impact of minimum fixation duration and maximum dispersion angle on all seven metrics: there is a statistically significant interaction between minimum fixation duration and maximum dispersion angle for FQnS, FN, SN, ASA, and SQnS, but not for FQlS or AFD. We use line charts to present the impact of the parameters on the metrics more intuitively.
Figure 5: Line charts of IDT parameters and all eight metrics
As can be seen from the line charts (Figure 5), all metric values except FQlS decrease as the dispersion threshold (maximum dispersion angle) increases. As for the minimum fixation duration, FN, AFD, SN, ASA, SQnS, and the Overall score decrease as it increases. Based on the results for each metric, we choose a minimum fixation duration of 150 ms and a dispersion threshold of 5.75° as the optimal parameters of IDT.
#### 5.1.3 Tuning Parameter Values for Velocity & Dispersion-Threshold
Identification
The Velocity & Dispersion-Threshold Identification (IVDT) family here includes two algorithms: IVDT and m-IVDT. Based on the classification results of these two algorithms, three-way ANOVAs examine the impact of the velocity threshold, minimum fixation duration, and maximum dispersion angle on the Overall score and the other seven metrics. This paper analyzes the Overall score in detail.
Table 2: Three-Way ANOVA Output of three parameter of IVDT on all seven
metrics and the Overall score
Source | Type III Sum of Squares | df | Mean Square | F | Sig.
---|---|---|---|---|---
Corrected Model | 714.340a | 2859 | 0.25 | 39.648 | 0.000
Intercept | 18994.174 | 1 | 18994.174 | 3014022.546 | 0.000
Vel_threshold | 87.296 | 12 | 7.275 | 1154.358 | 0.000
min_fix_dur | 2.015 | 10 | 0.202 | 31.977 | 0.000
max_angle | 555.574 | 19 | 29.241 | 4639.963 | 0.000
Vel_threshold * min_fix_dur | 7.575 | 120 | 0.063 | 10.016 | 0.000
Vel_threshold * max_angle | 40.726 | 228 | 0.179 | 28.344 | 0.000
min_fix_dur * max_angle | 17.658 | 190 | 0.093 | 14.747 | 0.000
Vel_threshold * min_fix_dur * max_angle | 3.563 | 2280 | 0.002 | 0.248 | 1.000
Error | 1781.532 | 282696 | 0.006 | |
Total | 21498.379 | 285556 | | |
Corrected Total | 2495.872 | 285555 | | |
a. R Squared = .286 (Adjusted R Squared = .279) | | | | |
For the classification results of IVDT, a three-way ANOVA examines the impact of the velocity threshold, minimum fixation duration, and maximum dispersion angle on the Overall score. There is no statistically significant three-way interaction between velocity threshold, minimum fixation duration, and maximum dispersion angle on the Overall score of classification, F(2280, 282696) = 0.248, p = 1.000. There are statistically significant two-way interactions between velocity threshold and minimum fixation duration, between velocity threshold and maximum dispersion angle, and between minimum fixation duration and maximum dispersion angle on the Overall score. The simple main effects are analyzed for each pair of factors with an interactive influence. Except when the maximum angle is in [3°, 4°], where the impact of different minimum fixation durations on the Overall score is not statistically significant, the analysis shows that the impact of each factor on the Overall score is statistically significant. The line charts (Appendix A) show the impact of the various factors on the different metrics more intuitively. Based on the analysis of each metric, we choose a velocity threshold of 140°/s, a minimum fixation duration of 110 ms, and a dispersion threshold of 5.75° as the optimal parameters of IVDT.
Table 3: Three-Way ANOVA Output of three parameter of m-IVDT on all seven
metrics and the Overall score
Source | Type III Sum of Squares | df | Mean Square | F | Sig.
---|---|---|---|---|---
Corrected Model | 905.943a | 2859 | 0.317 | 57.385 | 0.000
Intercept | 11709.328 | 1 | 11709.328 | 2120517.371 | 0.000
Vel_threshold | 93.144 | 12 | 7.762 | 1405.664 | 0.000
min_fix_dur | 3.809 | 10 | 0.381 | 68.984 | 0.000
max_angle | 726.464 | 19 | 38.235 | 6924.211 | 0.000
Vel_threshold * min_fix_dur | 6.745 | 120 | 0.056 | 10.179 | 0.000
Vel_threshold * max_angle | 51.42 | 228 | 0.226 | 40.842 | 0.000
min_fix_dur * max_angle | 20.197 | 190 | 0.106 | 19.25 | 0.000
Vel_threshold * min_fix_dur * max_angle | 4.226 | 2280 | 0.002 | 0.336 | 1.000
Error | 1561.025 | 282696 | 0.006 | |
Total | 14183.474 | 285556 | | |
Corrected Total | 2466.968 | 285555 | | |
a. R Squared = .367 (Adjusted R Squared = .361) | | | | |
For the classification results of m-IVDT, a three-way ANOVA examines the impact of the three parameters on the Overall score. There is no statistically significant three-way interaction between velocity threshold, minimum fixation duration, and maximum dispersion angle on the Overall score of classification, F(2280, 282696) = 0.336, p = 1.000. There are statistically significant two-way interactions between velocity threshold and minimum fixation duration, between velocity threshold and maximum dispersion angle, and between minimum fixation duration and maximum dispersion angle on the Overall score. The simple main effects are analyzed for each pair of factors with an interactive influence, and the results mirror the IVDT analysis: except when the maximum angle is in [3°, 4°], where the impact of different minimum fixation durations on the Overall score is not statistically significant, the impact of each factor on the Overall score is statistically significant. The line charts (Appendix B) show the impact of the various factors on the different metrics more intuitively. Based on the analysis of each metric, we choose a velocity threshold of 140°/s, a minimum fixation duration of 130 ms, and a dispersion threshold of 5.75° as the optimal parameters of m-IVDT.
### 5.2 Comparison of four algorithms
We treat every session as an independent test and calculate seven metrics and
the Overall score of the classification results of the four algorithms (with
optimal parameters) for each session. One-way ANOVA determines whether
different algorithms impact the seven metrics and the Overall score.
Table 4 shows the output of the ANOVA analysis with statistical significance.
The significance value is below 0.05 for all eight metrics. Therefore, there
is a statistically significant difference in all eight metrics between the
different algorithms.
Table 4: One-Way ANOVA Output of four algorithms on all seven metrics and the
Overall score
Algorithm | FQnS | FQlS | FN | AFD | SN | ASA | SQnS | Overall Score
---|---|---|---|---|---|---|---|---
IVT | 0.172±0.09 | 0.544±0.155 | 0.104±0.063 | 0.305±0.071 | 0.106±0.063 | 0.343±0.175 | 0.032±0.047 | 0.229±0.071
IDT | 0.221±0.091 | 0.741±0.105 | 0.017±0.016 | 0.111±0.068 | 0.018±0.016 | 0.27±0.08 | 0.002±0.002 | 0.197±0.039
IVDT | 0.236±0.121 | 0.54±0.155 | 0.008±0.009 | 0.06±0.053 | 0.008±0.01 | 0.246±0.093 | 0.003±0.002 | 0.157±0.042
m-IVDT | 0.236±0.127 | 0.109±0.104 | 0.008±0.009 | 0.057±0.053 | 0.007±0.009 | 0.246±0.092 | 0.003±0.002 | 0.095±0.034
F(3,393) | 7.623 | 404.85 | 195.809 | 357.2 | 208.213 | 15.469 | 38.72 | 140.213
Sig. | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
An LSD post hoc test reveals that FQnS is statistically significantly higher
for IDT (0.221 ± 0.090, p = 0.002), IVDT (0.235 ± 0.121, p = 0.000), and
m-IVDT (0.236 ± 0.127, p = 0.000) compared to IVT (0.172 ± 0.090). FQlS is
statistically significantly higher for IVT (0.544 ± 0.155, p = 0.000), IDT
(0.741 ± 0.105, p = 0.000), and IVDT (0.540 ± 0.155, p = 0.000) compared to
m-IVDT (0.109 ± 0.104). FN is statistically significantly higher for IVT
(0.104 ± 0.063,p = 0.000) and IDT (0.017 ± 0.016, p = 0.000) compared to IVDT
(0.008 ± 0.009) and m-IVDT (0.008 ± 0.009). AFD is statistically significantly
higher for IVT (0.305 ± 0.071, p = 0.000) and IDT (0.111 ± 0.068, p = 0.000)
compared to IVDT (0.060 ± 0.053) and m-IVDT (0.057 ± 0.053). SN is
statistically significantly higher for IVT (0.106 ± 0.063, p = 0.000) and IDT
(0.018 ± 0.016, p= 0.000) compared to IVDT (0.008 ± 0.010) and m-IVDT (0.007 ±
0.009). ASA is statistically significantly higher for IVT (0.343 ± 0.175, p =
0.000) and IDT (0.270 ± 0.080, p = 0.000) compared to IVDT (0.246 ± 0.093) and
m-IVDT (0.246 ± 0.092). SQnS is statistically significantly lower for IDT
(0.002 ± 0.002, p = 0.002), IVDT (0.003 ± 0.002, p = 0.000), and m-IVDT (0.003
± 0.002, p =0.000) compared to IVT (0.032 ± 0.047). As for Overall score, it
is statistically significantly higher for IVT (0.229 ± 0.071, p = 0.000), IDT
(0.197 ± 0.039, p = 0.000), and IVDT (0.157 ± 0.042, p = 0.000) compared to
m-IVDT (0.095 ± 0.034). Figure 6 shows the statistics of the evaluation results of each algorithm. In conclusion, IVT performs the best in FQnS, followed by IDT, with no difference between IVDT and m-IVDT. m-IVDT performs the best in FQlS and IDT the worst; there is no difference between IVDT and IVT, but both perform better than IDT. In FN, AFD, SN, and ASA, there is no difference between IVDT and m-IVDT, while IVT performs the worst in these metrics, followed by IDT. IVT performs the worst in SQnS, with no significant difference among the other three algorithms. As for the Overall score, m-IVDT is the best, followed by IVDT and IDT, and IVT is the worst.
Figure 6: Statistics of the Four Algorithms
### 5.3 Comparison of the two tasks
We evaluate the algorithms on the two tasks described in Section 4.3. The major difference between Tasks 1 and 2 is that Task 2 shows multiple objects in the interface simultaneously, only one of which is the real target, while Task 1 shows only one target at any time. The main purpose is to study whether the interface's complexity affects the algorithms' classification results.
Table 5: One-Way ANOVA Output of Task type on all seven metrics and the
Overall score
Task | FQnS | FQlS | FN | AFD | SN | ASA | SQnS | Overall Score
---|---|---|---|---|---|---|---|---
Task1 | 0.167±0.099 | 0.5±0.266 | 0.033±0.051 | 0.127±0.115 | 0.034±0.052 | 0.256±0.117 | 0.01±0.027 | 0.161±0.070
Task2 | 0.263±0.101 | 0.469±0.265 | 0.035±0.053 | 0.139±0.122 | 0.035±0.054 | 0.295±0.125 | 0.01±0.025 | 0.178±0.070
F(1,395) | 90.157 | 1.337 | 0.078 | 1.041 | 0.062 | 10.155 | 0.023 | 5.973
Sig. | 0.000 | 0.248 | 0.781 | 0.308 | 0.804 | 0.002 | 0.878 | 0.015
A one-way ANOVA determines whether the task type impacts the seven metrics and the Overall score. The results in Table 5 indicate that Task 1 has statistically significantly lower FQnS (0.167 ± 0.099, F(1,395) = 90.157, p = 0.000) and ASA (0.256 ± 0.117, F(1,395) = 10.155, p = 0.002) than Task 2 (0.263 ± 0.101 and 0.295 ± 0.125, respectively). Furthermore, the Overall score of Task 2 is 0.178 ± 0.070, statistically significantly different from Task 1 (0.161 ± 0.070, F(1,395) = 5.973, p = 0.015). Figure 7 shows the statistics of the evaluation results of both tasks.
Figure 7: Descriptive statistics of the two tasks
Table 6: Two-Way ANOVA Output of Task type and classifier on all seven metrics and the Overall score
| | FQnS | FQlS | FN | AFD | SN | ASA | SQnS | Overall Score
---|---|---|---|---|---|---|---|---|---
Task | F(1,389) | 96.213 | 5.062 | 0.167 | 3.773 | 0.136 | 11.193 | 0.024 | 8.348
Sig. | 0.000 | 0.025 | 0.683 | 0.053 | 0.712 | 0.001 | 0.876 | 0.004
Partial Eta Squared | 0.198 | 0.013 | 0.000 | 0.010 | 0.000 | 0.028 | 0.000 | 0.021
classifier | F(3,389) | 9.403 | 412.224 | 193.823 | 357.124 | 206.046 | 15.774 | 38.279 | 141.516
Sig. | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Partial Eta Squared | 0.068 | 0.761 | 0.599 | 0.734 | 0.614 | 0.108 | 0.228 | 0.522
Task * classifier | F(3,389) | 1.075 | 1.865 | 0.047 | 0.333 | 0.035 | 0.032 | 0.016 | 0.068
Sig. | 0.359 | 0.135 | 0.986 | 0.802 | 0.991 | 0.992 | 0.997 | 0.977
Partial Eta Squared | 0.008 | 0.014 | 0.000 | 0.003 | 0.000 | 0.000 | 0.000 | 0.001
Task1 | IVT | 0.126±0.071 | 0.574±0.128 | 0.103±0.061 | 0.297±0.067 | 0.106±0.061 | 0.326±0.163 | 0.032±0.049 | 0.223±0.066
IDT | 0.186±0.08 | 0.748±0.108 | 0.015±0.013 | 0.102±0.064 | 0.016±0.013 | 0.248±0.07 | 0.002±0.002 | 0.188±0.032
IVDT | 0.177±0.109 | 0.571±0.127 | 0.008±0.007 | 0.057±0.049 | 0.008±0.007 | 0.225±0.092 | 0.003±0.002 | 0.150±0.036
m-IVDT | 0.18±0.12 | 0.102±0.095 | 0.007±0.007 | 0.054±0.048 | 0.007±0.007 | 0.225±0.091 | 0.003±0.002 | 0.083±0.031
Task2 | IVT | 0.216±0.084 | 0.515±0.173 | 0.104±0.065 | 0.314±0.073 | 0.106±0.065 | 0.359±0.185 | 0.032±0.044 | 0.235±0.076
IDT | 0.255±0.088 | 0.734±0.103 | 0.018±0.019 | 0.121±0.071 | 0.019±0.019 | 0.29±0.084 | 0.002±0.002 | 0.206±0.043
IVDT | 0.291±0.104 | 0.511±0.173 | 0.008±0.011 | 0.062±0.058 | 0.008±0.011 | 0.266±0.09 | 0.003±0.002 | 0.164±0.045
m-IVDT | 0.288±0.111 | 0.116±0.112 | 0.008±0.01 | 0.06±0.057 | 0.008±0.011 | 0.265±0.09 | 0.004±0.002 | 0.107±0.033
Adjusted R Squared | 0.234 | 0.758 | 0.592 | 0.73 | 0.607 | 0.115 | 0.214 | 0.519
Figure 8: Statistics of the Combinations of the Two Tasks and Four Algorithms
A two-way ANOVA examines the impact of task and classification algorithm on all seven metrics and the Overall score. Considering both task and algorithm type, the four classification algorithms and two tasks generate eight combinations. No interaction between task and classification algorithm can be demonstrated for any of the seven metrics or the Overall score (Table 6), so we look at the main effects. As shown in the table, there are significant differences in all eight metrics between algorithms; as for tasks, there are significant differences in FQnS, FQlS, ASA, and the Overall score. Further, the partial eta squared of every metric is lower than 0.015 for the interaction effect, which is negligible. Finally, the adjusted R squared tells us what percentage of the variance in each metric is attributable to task and algorithm type. Figure 8 shows the statistics of the evaluation results for each combination of task and algorithm.
In general, there is no interaction effect between task types and algorithms: the relative results of the algorithms do not differ across task types. Regarding the impact of task type on classification results, the main finding is that FQnS, ASA, and the Overall score differ significantly, with better performance on Task 1 than on Task 2, which is consistent with our expectations. Two factors may contribute to this. One is that the complex interface affects users' actual gaze behavior and makes it deviate from the ideal eye movement trajectory of the stimulus design; the other is that the interface complexity may also affect the classification accuracy of the algorithms. The current experimental design cannot determine which contributes more. However, the main purpose of this analysis is to explore whether each algorithm is sensitive to interface complexity, and the results show that task type does not interact with the algorithms' classification performance, i.e., the algorithms are not sensitive to interface complexity.
## 6 Conclusions and Future Work
This paper explores eye movement behavior classification algorithms for virtual reality environments. Each algorithm's classification quality was evaluated by comparing its classification results with the ideal eye movement behavior preset by the stimuli, using FQnS, FQlS, FN, AFD, SN, ASA, SQnS, and the Overall score (the mean of the seven normalized metrics). First, the parameters with the best all-around performance for each algorithm were selected by analyzing the Overall score under different parameter combinations. We then compared the performance of the IVT, IDT, IVDT, and m-IVDT algorithms (with optimal parameters) on the eight metrics. We found that IVT performed best on FQnS; that is, judging whether the current sample belongs to a fixation through angular velocity gives the highest accuracy. However, because it does not screen out fixations with short durations, it yields too many fixation points and therefore performed worst on FN, AFD, SN, and ASA, the metrics related to the number of fixations. The disadvantage of IDT is that judging fixations only through spatial position is strongly affected by errors in the raw spatial coordinates; therefore, IDT performs the worst in FQlS, and the fixation coordinates obtained by IDT are the least consistent with the stimulus. The IVDT algorithm combines the advantages of the two to a certain extent: it does not rely too heavily on spatial coordinates that are not necessarily accurate, and it avoids producing too many fixation points, but it still struggles to give accurate fixation-point coordinates. Our proposed m-IVDT solves this problem better: by correcting potentially wrong spatial coordinate values, it improves the accuracy of the eye-movement fixation coordinates and makes the classification results practically usable.
The main limitation of this study is that we compare the algorithms' classification results with the stimuli; the implicit assumption that users' actual eye movement behavior is consistent with the changes of the stimuli may not hold. We chose the simplest visual environment and the most basic selection task to minimize the impact of other factors on users. In follow-up research, we may use artificial eye movement data as a ground-truth method to verify our results, which would also avoid the influence of the stimuli on the classification results. In future research, we will also consider using deep learning methods to classify eye movement behavior and take individual differences into account to obtain an algorithm that adapts to the user's situation.
## Funding
Thanks to all participants. This work was supported by XXX (Grant No.: XXX).
## References
* Leigh and Zee (2007) R. J. Leigh, D. S. Zee, The Neurology of Eye Movements, Oxford University Press, 2007.
* Istance et al. (2010) H. Istance, A. Hyrskykari, L. Immonen, S. Mansikkamaa, S. Vickers, Designing gaze gestures for gaming: An investigation of performance, in: Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications, ETRA ’10, Association for Computing Machinery, 2010, pp. 323–330.
* Alpern (1969) M. Alpern, Types of movement, Eye 3 (1969) 63–151.
* Ditchburn (1980) R. W. Ditchburn, The function of small saccades., Vision Research 20 (1980) 271–272.
* Majaranta and Bulling (2014) P. Majaranta, A. Bulling, Eye Tracking and Eye-Based Human-Computer Interaction, Springer-Verlag London, 2014.
* Gergle et al. (2013) D. Gergle, R. E. Kraut, S. R. Fussell, Using visual information for grounding and awareness in collaborative tasks, Human–Computer Interaction 28 (2013) 1–39.
* Brennan et al. (2008) S. E. Brennan, C. Xin, C. A. Dickinson, M. B. Neider, G. J. Zelinsky, Coordinating cognition: The costs and benefits of shared gaze during collaborative search, Cognition 106 (2008) 1465–1477.
* Duchowski and Jörg (2020) A. Duchowski, S. Jörg, Eye Animation, Springer International Publishing, Cham, 2020, pp. 1–19. doi:10.1007/978-3-319-30808-1_3-1.
* Andrist et al. (2012) S. Andrist, T. Pejsa, B. Mutlu, M. Gleicher, Designing effective gaze mechanisms for virtual agents, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’12, Association for Computing Machinery, New York, NY, USA, 2012, pp. 705–714. URL: https://doi.org/10.1145/2207676.2207777. doi:10.1145/2207676.2207777.
* Queiroz et al. (2008) R. Queiroz, L. Barros, S. Musse, Providing expressive gaze to virtual animated characters in interactive applications, Comput. Entertain. 6 (2008). doi:10.1145/1394021.1394034.
* Lance and Marsella (2008) B. Lance, S. Marsella, A model of gaze for the purpose of emotional expression in virtual embodied agents, in: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 1, AAMAS ’08, International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 2008, pp. 199–206.
* Weier et al. (2016) M. Weier, T. Roth, E. Kruijff, A. Hinkenjann, A. PérardGayot, P. Slusallek, Y. Li, Foveated real-time ray tracing for head-mounted displays, Computer Graphics Forum: Journal of the European Association for Computer Graphics 35 (2016) 289–298.
* Swafford et al. (2016) N. Swafford, J. Iglesias-Guitian, C. Koniaris, B. Moon, D. Cosker, K. Mitchell, User, metric, and computational evaluation of foveated rendering methods, in: Proceedings of the ACM Symposium on Applied Perception, SAP ’16, Association for Computing Machinery, New York, NY, USA, 2016, pp. 7–14. URL: https://doi.org/10.1145/2931002.2931011. doi:10.1145/2931002.2931011.
* Roth et al. (2017) T. Roth, M. Weier, A. Hinkenjann, Y. Li, P. Slusallek, A quality-centered analysis of eye tracking data in foveated rendering, Journal of Eye Movement Research 10 (2017).
* Albert et al. (2017) R. Albert, A. Patney, D. Luebke, J. Kim, Latency requirements for foveated rendering in virtual reality, ACM Trans. Appl. Percept. 14 (2017). URL: https://doi.org/10.1145/3127589. doi:10.1145/3127589.
* Kramida (2016) G. Kramida, Resolving the vergence-accommodation conflict in head-mounted displays, IEEE Transactions on Visualization and Computer Graphics 22 (2016) 1912–1931. doi:10.1109/TVCG.2015.2473855.
* Duchowski et al. (2014) A. Duchowski, D. House, J. Gestring, R. Wang, K. Krejtz, I. Krejtz, R. Mantiuk, B. Bazyluk, Reducing visual discomfort of 3d stereoscopic displays with gaze-contingent depth-of-field, in: Proceedings of the ACM Symposium on Applied Perception, SAP ’14, Association for Computing Machinery, New York, NY, USA, 2014, pp. 39–46. URL: https://doi.org/10.1145/2628257.2628259. doi:10.1145/2628257.2628259.
* Fisker et al. (2013) M. Fisker, K. Gram, K. Thomsen, D. Vasilarou, M. Kraus, Automatic Convergence Adjustment for Stereoscopy using Eye Tracking, in: M. Chover, A. A. de Sousa (Eds.), Eurographics 2013 - Posters, The Eurographics Association, 2013. doi:10.2312/conf/EG2013/posters/023-024.
* Bernhard et al. (2014) M. Bernhard, C. Dell’mour, M. Hecher, E. Stavrakis, M. Wimmer, The effects of fast disparity adjustment in gaze-controlled stereoscopic applications, in: Proceedings of the Symposium on Eye Tracking Research and Applications, ETRA ’14, Association for Computing Machinery, New York, NY, USA, 2014, pp. 111–118. URL: https://doi.org/10.1145/2578153.2578169. doi:10.1145/2578153.2578169.
* Sidenmark and Gellersen (2019) L. Sidenmark, H. Gellersen, Eye & head: Synergetic eye and head movement for gaze pointing and selection, in: Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, UIST ’19, Association for Computing Machinery, New York, NY, USA, 2019, pp. 1161–1174. URL: https://doi.org/10.1145/3332165.3347921. doi:10.1145/3332165.3347921.
* Rajanna and Hansen (2018) V. Rajanna, J. P. Hansen, Gaze typing in virtual reality: Impact of keyboard design, selection method, and motion, in: Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, ETRA ’18, Association for Computing Machinery, New York, NY, USA, 2018\. URL: https://doi.org/10.1145/3204493.3204541. doi:10.1145/3204493.3204541.
* Piumsomboon et al. (2017) T. Piumsomboon, G. Lee, R. Lindeman, M. Billinghurst, Exploring natural eye-gaze-based interaction for immersive virtual reality, in: 2017 IEEE Symposium on 3D User Interfaces (3DUI), 2017, pp. 36–39. doi:10.1109/3DUI.2017.7893315.
* Pfeuffer et al. (2017) K. Pfeuffer, B. Mayer, D. Mardanbegi, H. Gellersen, Gaze + pinch interaction in virtual reality, in: Proceedings of the 5th Symposium on Spatial User Interaction, SUI ’17, Association for Computing Machinery, New York, NY, USA, 2017, pp. 99–108. URL: https://doi.org/10.1145/3131277.3132180. doi:10.1145/3131277.3132180.
* Sidorakis and Koulieris (2015) N. Sidorakis, K. Koulieris, G.and Mania, Binocular eye-tracking for the control of a 3d immersive multimedia user interface, in: 2015 IEEE 1st Workshop on Everyday Virtual Reality (WEVR), 2015, pp. 15–18. doi:10.1109/WEVR.2015.7151689.
* Brendan et al. (2021) D. Brendan, C. Peacock, T. Zhang, T. S. Murdison, H. Benko, T. R. Jonker, Towards gaze-based prediction of the intent to interact in virtual reality, in: ACM Symposium on Eye Tracking Research and Applications, ETRA ’21 Short Papers, Association for Computing Machinery, New York, NY, USA, 2021\. URL: https://doi.org/10.1145/3448018.3458008. doi:10.1145/3448018.3458008.
* Pfeiffer et al. (2020) J. Pfeiffer, T. Pfeiffer, M. Meißner, E. Weiß, Eye-tracking-based classification of information search behavior using machine learning: Evidence from experiments in physical shops and virtual reality shopping environments, Information Systems Research 31 (2020) 675–691. URL: https://doi.org/10.1287/isre.2019.0907. doi:10.1287/isre.2019.0907.
* Alghofaili et al. (2019) R. Alghofaili, Y. Sawahata, H. Huang, H. Wang, T. Shiratori, L. Yu, Lost in style: Gaze-driven adaptive aid for vr navigation, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, USA, 2019, pp. 1–12. URL: https://doi.org/10.1145/3290605.3300578.
* Olsen (2012) A. Olsen, The tobii i-vt fixation filter algorithm description, 2012. URL: https://www.tobiipro.com/.
* Duchowski et al. (2002) A. Duchowski, E. Medlin, N. Cournia, H. Murphy, A. Gramopadhye, S. Nair, J. Vorah, B. Melloy, 3-d eye movement analysis, Behavior Research Methods Instruments & Computers 34 (2002) 573–591.
* Diaz et al. (2013) G. Diaz, J. Cooper, D. Kit, M. Hayhoe, Real-time recording and classification of eye movements in an immersive virtual environment, Journal of Vision 13 (2013) 109–111.
* Pelz and Canosa (2001) J. B. Pelz, R. Canosa, Oculomotor behavior and perceptual strategies in complex tasks, Vision Research 41 (2001) 3587–3596.
* Reimer and Sodhi (2006) B. Reimer, M. Sodhi, Detecting eye movements in dynamic environments., Behavior Research Methods 38 (2006) 667–682.
* Munn et al. (2008) S. M. Munn, L. Stefano, J. B. Pelz, Fixation-identification in dynamic scenes: Comparing an automated algorithm to manual coding, in: Proceedings of the 5th Symposium on Applied Perception in Graphics and Visualization, APGV ’08, 2008, pp. 33–42.
* Llanes-Jurado et al. (2020) J. Llanes-Jurado, J. Marín-Morales, J. Guixeres, M. Alcaiz, Development and calibration of an eye-tracking fixation identification algorithm for immersive virtual reality, Sensors 20 (2020).
* Salvucci and Goldberg (2000) D. D. Salvucci, J. H. Goldberg, Identifying fixations and saccades in eye-tracking protocols, in: Proceedings of the Eye Tracking Research & Application Symposium, ETRA 2000, Palm Beach Gardens, Florida, USA, November 6-8, 2000, 2000.
* Komogortsev and Karpov (2013) O. V. Komogortsev, A. Karpov, Automated classification and scoring of smooth pursuit eye movements in the presence of fixations and saccades, Behavior Research Methods 45 (2013) 203–215.
* Komogortsev et al. (2010) O. V. Komogortsev, D. V. Gobert, S. Jayarathna, D. H. Koh, S. M. Gowda, Standardization of automated analyses of oculomotor fixation and saccadic behaviors, IEEE transactions on bio-medical engineering 57 (2010) 2635–2645.
* Duchowski et al. (2001) A. Duchowski, E. Medlin, A. Gramopadhye, B. Melloy, S. Nair, Binocular eye tracking in vr for visual inspection training, in: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST ’01, Association for Computing Machinery, New York, NY, USA, 2001, pp. 1–8. URL: https://doi.org/10.1145/505008.505010. doi:10.1145/505008.505010.
* Mansouryar et al. (2016) M. Mansouryar, J. Steil, Y. Sugano, A. Bulling, 3d gaze estimation from 2d pupil positions on monocular head-mounted eye trackers, in: Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, ETRA ’16, 2016, pp. 197–200.
* Rayner (1992) K. Rayner (Ed.), Eye movements and visual cognition: Scene perception and reading., Springer-Verlag Publishing., 1992.
* Rayner (1998) K. Rayner, Eye movements in reading and information processing: 20 years of research, Psychological Bulletin 124 (1998) 372–422.
* Andrews and Coppola (1999) T. J. Andrews, D. M. Coppola, Idiosyncratic characteristics of saccadic eye movements when viewing different visual environments., Vision Research 39 (1999) 2947–2953.
## Appendix A Line chart of IVDT parameters
## Appendix B Line chart of m-IVDT parameters
## Appendix C Author Bio
Wen-jun Hou (Professor, Doctoral Supervisor, Vice Chairman of UXACN) received
the B.E. in ME from Taiyuan University of Science and Technology, Taiyuan,
China, in June 1991, and the Ph.D. degree from the Beijing University of Posts
and Telecommunications, Beijing, China, in June 2006. Her research interests
include natural interaction, information visualization, interaction experience,
and VR/AR. Currently, she is the assistant dean of the School of Digital Media
& Design, Beijing University of Posts and Telecommunications, Beijing, China.
Xiao-lin Chen received the B.E. in Industrial Design from Beijing University of
Posts and Telecommunications, Beijing, China, in June 2016. Her research
interests include eye-based interaction, voice interaction, and user
experience. Currently, she is pursuing a Ph.D. degree in Mechatronic
Engineering at the Beijing University of Posts and Telecommunications, Beijing,
China.
# Characteristic Polynomials in Coupled Matrix Models
Nicolas Babinet and Taro Kimura

Institut de Mathématiques de Bourgogne, Université Bourgogne Franche-Comté
###### Abstract
We study correlation functions of the characteristic polynomials in coupled
matrix models based on the Schur polynomial expansion, which manifests their
determinantal structure.
###### Contents
1. Introduction and summary
   1.1 Introduction
   1.2 Summary of the results
2. Coupled matrix model
   2.1 Determinantal formula
   2.2 Christoffel–Darboux kernel
   2.3 Operator formalism
   2.4 Polynomial ensemble
3. Characteristic polynomial averages
   3.1 Schur polynomial average
   3.2 Characteristic polynomial
   3.3 Characteristic polynomial inverse
4. Pair correlation functions
   4.1 Characteristic polynomial
   4.2 Characteristic polynomial inverse
   4.3 Mixed pair correlation
## 1 Introduction and summary
### 1.1 Introduction
Random Matrix Theory (RMT) has been playing an important role over the decades
in both the physics and mathematics communities [Meh04, For10, ABDF11, EKR15].
Applying the analogy with Quantum Field Theory (QFT), the asymptotic behavior
appearing in the large size limit (large $N$ limit) is interpreted as classical
behavior, since the parameter $1/N$ plays the role of the Planck constant. From
this point of view, it is an important task to explore finite $N$ results in
order to understand the quantum $1/N$ corrections, and also the non-perturbative
effects beyond the perturbative analysis. The purpose of this paper is to obtain
finite $N$ exact results for a class of correlation functions in the generalized
two-matrix model, which we simply call the coupled matrix model, and which
contains various models coupled in a chain. See, e.g., [IZ80, EM98, BEH02,
BEH03b, BEH03a, BE03, BGS09] and also [Eyn05, Ber11, Ora11] for developments in
this direction. We will show that this model can be analyzed using its
determinantal structure, which is a key property for obtaining finite $N$ exact
results. In this paper, we in particular consider the correlation functions of
the characteristic polynomials in the coupled matrix model. It has been known in
the context of RMT that the characteristic polynomial plays a central role in
the associated differential equation system through the Riemann–Hilbert problem
and the notion of the quantum curve. In addition, the characteristic polynomial
is essentially related to various other important observables in RMT, e.g., the
resolvent, the eigenvalue density function, etc. See, e.g., [Mor94, BH00, FS03,
SF03, AV03, BDS03, BS06] and also [BH11] for earlier results in this direction.
### 1.2 Summary of the results
We now summarize the content of this paper. In Section 2, we introduce the
generalized coupled matrix model, defined as the following formal eigenvalue integral,
$\displaystyle
Z_{N}=\frac{1}{N!^{2}}\int\prod_{k=L,R}\differential{X}_{k}\mathrm{e}^{-\tr
V_{k}(X_{k})}\prod_{i<j}^{N}(x_{k,i}-x_{k,j})\det_{1\leq i,j\leq
N}\omega(x_{L,i},x_{R,j})$ (1.1)
for arbitrary potential functions $V_{k}(x)$ and a two-variable function
$\omega(x,y)$. See Definition 2.1 for details. We then show that this
eigenvalue integral is reduced to the determinant of the norm matrix of the
corresponding two-variable integral. We also show that biorthogonal
polynomials, which diagonalize the norm matrix, simplify the formulas. We
mention in Section 2.4 that the analysis shown there is straightforwardly
applied to the coupled matrix generalization of the polynomial ensemble [KS14]
defined for a set of arbitrary functions, containing various known models,
e.g., the external source model [BH16]. See also [Bor98]. In Section 3, we
study the correlation function for the coupled matrix model. In Section 3.1,
we show the Schur polynomial average, which will be a building block of the
characteristic polynomials discussed throughout the paper. In Sections 3.2 and
3.3, we explore the correlation function of the characteristic polynomial and
its inverse, and show that they are concisely expressed as a determinant of
the biorthogonal polynomial and its dual. We remark that these results are
natural generalizations of the earlier results in the one-matrix model case. In
Section 4, we consider the pair correlation function, which involves the
characteristic polynomials coupled with both $X_{L}$ and $X_{R}$. In this case, the
correlation functions are again expressed as a determinant, while the
corresponding matrix element is written using the Christoffel–Darboux (CD)
kernel and its dual.
### Acknowledgments
We would like to thank Bertrand Eynard for useful conversation. This work was
supported in part by “Investissements d’Avenir” program, Project ISITE-BFC
(No. ANR-15-IDEX-0003), EIPHI Graduate School (No. ANR-17-EURE-0002), and
Bourgogne-Franche-Comté region.
## 2 Coupled matrix model
In this paper, we explore the coupled matrix model defined as follows.
###### Definition 2.1 (Partition function).
Let $V_{k}(x)$ $(k=L,R)$ be a polynomial function and $\omega(x_{L},x_{R})$ be
a two-variable function. Let $(X_{k})_{k=L,R}=(x_{k,i})_{k=L,R,i=1,\ldots,N}$
be a set of formal eigenvalues. Then, we define the partition function of the
coupled matrix model,
$\displaystyle
Z_{N}=\frac{1}{N!^{2}}\int\prod_{k=L,R}\differential{X}_{k}\mathrm{e}^{-\tr
V_{k}(X_{k})}\Delta_{N}(X_{L})\det_{1\leq i,j\leq
N}\omega(x_{L,i},x_{R,j})\Delta_{N}(X_{R})\,,$ (2.1)
where we denote the Vandermonde determinant by
$\displaystyle\Delta_{N}(X)=\prod_{i<j}^{N}(x_{i}-x_{j})\,.$ (2.2)
###### Remark 2.2.
We formally consider the eigenvalues $(x_{k,i})$ as complex variables, and
thus their integration contour is taken to provide a converging integral,
which is not unique in general. In this paper, we do not discuss the contour
dependence of the partition function, and we always consider the
eigenvalue integral as a formal integral.
Throughout the paper, we frequently use the following identity.
###### Lemma 2.3 (Andréief–Heine identity).
Let $(f_{i}(x))_{i=1,\ldots,N}$ and $(g_{i}(x))_{i=1,\ldots,N}$ be the
sequences of integrable functions on the domain $D$. Denoting
$\differential{X}=\differential{x}_{1}\cdots\differential{x}_{N}$, the
following identity holds,
$\displaystyle\frac{1}{N!}\int_{D^{N}}\differential{X}\det_{1\leq i,j\leq
N}f_{i}(x_{j})\det_{1\leq i,j\leq N}g_{i}(x_{j})=\det_{1\leq i,j\leq
N}\quantity(\int_{D}\differential{x}f_{i}(x)g_{j}(x))\,,$ (2.3)
which is called the Andréief–Heine (AH) identity.
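Since both sides of the AH identity are algebraic in the underlying measure, the identity holds verbatim for any discrete (quadrature) measure, which makes it easy to test on a computer. The following Python sketch is our own illustration, not part of the original argument; the Gauss–Legendre rule on $D=[0,1]$ and the test functions $f_{i}(x)=x^{i}$, $g_{i}(x)=\mathrm{e}^{-(i+1)x}$ are arbitrary choices.

```python
import math
import numpy as np

# Gauss-Legendre rule mapped to D = [0, 1]
n, N = 40, 3
x, w = np.polynomial.legendre.leggauss(n)
x, w = 0.5 * (x + 1.0), 0.5 * w

f = [lambda t, k=k: t**k for k in range(N)]                    # f_i(x) = x^i
g = [lambda t, k=k: np.exp(-(k + 1.0) * t) for k in range(N)]  # g_i(x) = e^{-(i+1)x}

# right-hand side of (2.3): determinant of one-dimensional integrals
rhs = np.linalg.det(np.array(
    [[np.sum(w * f[i](x) * g[j](x)) for j in range(N)] for i in range(N)]))

# left-hand side of (2.3): (1/N!) * integral over D^N of the two determinants
idx = np.stack(np.meshgrid(*([np.arange(n)] * N), indexing="ij"), -1).reshape(-1, N)
pts, wts = x[idx], np.prod(w[idx], axis=1)            # tensor-product grid / weights
F = np.stack([f[i](pts) for i in range(N)], axis=1)   # F[p, i, j] = f_i(x_{p,j})
G = np.stack([g[i](pts) for i in range(N)], axis=1)
lhs = np.sum(wts * np.linalg.det(F) * np.linalg.det(G)) / math.factorial(N)

print(lhs, rhs)   # identical up to rounding: the identity is exact per measure
```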
###### Proposition 2.4 (Hermitian matrix chain models).
Let $(M_{k})_{k=1,\ldots,\ell}$ be a set of $\ell$ Hermitian matrices of rank
$N$. The following matrix chain models are reduced to the coupled matrix model
of the form of (2.1):
$\displaystyle Z_{\text{pot}}$
$\displaystyle=\int\prod_{k=1,\ldots,\ell}\differential{M_{k}}\mathrm{e}^{-\tr
V_{k}(M_{k})}\prod_{k=1}^{\ell-1}\mathrm{e}^{\tr M_{k}M_{k+1}}\,,$ (2.4a)
$\displaystyle Z_{\text{Cauchy}}$
$\displaystyle=\int\prod_{k=1,\ldots,\ell}\differential{M_{k}}\mathrm{e}^{-\tr
V_{k}(M_{k})}\prod_{k=1}^{\ell-1}\det(M_{k}\otimes\mathbbm{1}_{N}+\mathbbm{1}_{N}\otimes
M_{k+1})^{-N}\,.$ (2.4b)
We call them the potential-interacting matrix chain and the Cauchy-interacting
matrix chain, respectively.
###### Proof.
Diagonalizing each Hermitian matrix using the unitary transform for
$k=1,\ldots,\ell$,
$\displaystyle M_{k}=U_{k}X_{k}U_{k}^{-1}\,,\qquad
X_{k}=\operatorname{diag}(x_{k,1},\ldots,x_{k,N})\,,\qquad
U_{k}\in\mathrm{U}(N)\,,$ (2.5)
the matrix measure is given by
$\displaystyle\differential{M}_{k}=\frac{\differential{U_{k}}\differential{X}_{k}}{N!(2\pi)^{N}}\Delta_{N}(X_{k})^{2}\,,\qquad\differential{X}_{k}=\prod_{i=1}^{N}\differential{x}_{k,i}\,,$
(2.6)
where we denote the Haar measure of each unitary matrix by
$\differential{U_{k}}$. We remark that the factors $(2\pi)^{N}$ and $N!$ are
interpreted as the volumes of the maximal Cartan torus
$\mathrm{U}(1)^{N}\subset\mathrm{U}(N)$, and the symmetric group
$\mathfrak{S}_{N}$, which is the Weyl group of the unitary group
$\mathrm{U}(N)$.
For the potential-interacting chain, we may use the Harish-
Chandra–Itzykson–Zuber formula [IZ80],
$\displaystyle\int_{\mathrm{U}(N)}\differential{U}\mathrm{e}^{\tr
UXU^{-1}Y}=\frac{c_{N}}{\Delta_{N}(X)\Delta_{N}(Y)}\det_{1\leq i,j\leq
N}\mathrm{e}^{x_{i}y_{j}}$ (2.7)
where the constant factor $c_{N}=\Gamma_{2}(N+1)=\prod_{j=0}^{N-1}j!$ is
chosen to be consistent with the normalization of the group integral,
$\int_{\mathrm{U}(N)}\differential{U}=1$. Then, we obtain
$\displaystyle Z_{\text{pot}}$
$\displaystyle=\frac{c_{N}^{\ell-1}}{N!^{\ell}}\int\prod_{k=1,\ldots,\ell}\frac{\differential{X}_{k}}{(2\pi)^{N}}\mathrm{e}^{-\tr
V_{k}(X_{k})}\Delta_{N}(X_{1})\quantity(\prod_{k=1}^{\ell-1}\det_{1\leq
i,j\leq N}\mathrm{e}^{x_{k,i}x_{k+1,j}})\Delta_{N}(X_{\ell})$
$\displaystyle=\frac{c_{N}^{\ell-1}}{N!^{2}}\int\prod_{k=1,\ell}\frac{\differential{X}_{k}}{(2\pi)^{N}}\mathrm{e}^{-\tr
V_{k}(X_{k})}\Delta_{N}(X_{1})\det_{1\leq i,j\leq
N}\quantity(\int\prod_{k=2,\ldots,\ell-1}\frac{\differential{x}_{k}}{2\pi}\mathrm{e}^{-V_{k}(x_{k})}\prod_{k=1}^{\ell-1}\mathrm{e}^{x_{k}x_{k+1}})\Delta_{N}(X_{\ell})\,,$
(2.8)
where we apply the AH identity (Lemma 2.3) for $(X_{k})_{k=2,\ldots,\ell-1}$.
Identifying $(X_{1},X_{\ell})=(X_{L},X_{R})$ and
$\displaystyle\omega(x_{1},x_{\ell})=\int\prod_{k=2,\ldots,\ell-1}\frac{\differential{x}_{k}}{2\pi}\mathrm{e}^{-V_{k}(x_{k})}\prod_{k=1}^{\ell-1}\mathrm{e}^{x_{k}x_{k+1}}\,,$
(2.9)
we arrive at the expression (2.1) up to an overall constant.
For the Cauchy-interacting chain, we remark the relation [BGS09]
$\displaystyle\det(M_{k}\otimes\mathbbm{1}_{N}+\mathbbm{1}_{N}\otimes
M_{k+1})^{-N}$ $\displaystyle\xrightarrow{\text{diagonalization}}\prod_{1\leq
i,j\leq N}\frac{1}{x_{k,i}+x_{k+1,j}}$
$\displaystyle=\frac{1}{\Delta_{N}(X_{k})\Delta_{N}(X_{k+1})}\det_{1\leq
i,j\leq N}\quantity(\frac{1}{x_{k,i}+x_{k+1,j}})\,.$ (2.10)
Therefore, we may write the Cauchy-interacting chain partition function as
$\displaystyle Z_{\text{Cauchy}}$
$\displaystyle=\frac{1}{N!^{\ell}}\int\prod_{k=1,\ldots,\ell}\frac{\differential{X}_{k}}{(2\pi)^{N}}\mathrm{e}^{-\tr
V_{k}(X_{k})}\Delta_{N}(X_{1})\prod_{k=1}^{\ell-1}\det_{1\leq i,j\leq
N}\quantity(\frac{1}{x_{k,i}+x_{k+1,j}})\Delta_{N}(X_{\ell})\,.$ (2.11)
Similarly, applying the AH identity for $(X_{k})_{k=2,\ldots,\ell-1}$, and
identifying $(X_{1},X_{\ell})=(X_{L},X_{R})$ with
$\displaystyle\omega(x_{1},x_{\ell})=\int\prod_{k=2,\ldots,\ell-1}\frac{\differential{x}_{k}}{2\pi}\mathrm{e}^{-V_{k}(x_{k})}\prod_{k=1}^{\ell-1}\frac{1}{x_{k}+x_{k+1}}\,,$
(2.12)
we arrive at the expression (2.1). This completes the proof. ∎
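The Harish-Chandra–Itzykson–Zuber formula (2.7) can also be checked by Monte Carlo integration over the unitary group, sampling Haar-random unitaries by the standard QR construction. The sketch below is purely illustrative; the eigenvalues of $X$ and $Y$ and the sample size are arbitrary choices.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
N = 2
x = np.array([0.3, 1.1])                  # eigenvalues of X (arbitrary)
y = np.array([-0.5, 0.7])                 # eigenvalues of Y (arbitrary)
X, Y = np.diag(x), np.diag(y)

def haar_unitary(n):
    """Haar-random U(n): QR of a complex Ginibre matrix plus the phase fix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

mc = np.mean([np.exp(np.trace(U @ X @ U.conj().T @ Y).real)
              for U in (haar_unitary(N) for _ in range(50000))])

vdm = lambda v: np.prod([v[i] - v[j] for i in range(N) for j in range(i + 1, N)])
cN = np.prod([math.factorial(j) for j in range(N)])
exact = cN * np.linalg.det(np.exp(np.outer(x, y))) / (vdm(x) * vdm(y))

print(mc, exact)   # agree within Monte Carlo error
```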
###### Remark 2.5.
We can in general obtain the coupled matrix model (2.1) from the matrix chain
if the nearest-neighbor interaction is given in the determinantal form
$\displaystyle\frac{1}{\Delta_{N}(X_{k})\Delta_{N}(X_{k+1})}\det_{1\leq
i,j\leq N}I(x_{k,i},x_{k+1,j})$ (2.13)
after the diagonalization. We also remark that the supermatrix model
$\displaystyle Z_{\text{susy}}$
$\displaystyle=\frac{1}{N!^{2}}\int\differential{X}\differential{Y}\mathrm{e}^{-\tr
V(X)+\tr V(Y)}\Delta_{N}(X)^{2}\Delta_{N}(Y)^{2}\prod_{1\leq i,j\leq
N}(x_{i}-y_{j})^{-2}$
$\displaystyle=\frac{1}{N!^{2}}\int\differential{X}\differential{Y}\mathrm{e}^{-\tr
V(X)+\tr V(Y)}\det_{1\leq i,j\leq N}\quantity(\frac{1}{x_{i}-y_{j}})^{2}$
(2.14)
has a form similar to the partition function (2.1), but it does not belong to
the coupled matrix model of our current interest.
### 2.1 Determinantal formula
We show that the partition function (2.1) is written in a determinantal form.
In order to show this, we introduce some notation.
###### Definition 2.6.
We define the inner product with respect to the potentials $V_{L,R}(x_{L,R})$,
$\displaystyle(\,f\mid\omega\mid
g\,)=\int\prod_{k=L,R}\differential{x}_{k}\mathrm{e}^{-V_{k}(x_{k})}f(x_{L})\omega(x_{L},x_{R})g(x_{R})\,.$
(2.15)
For a set of arbitrary monic polynomials
$(p_{i}(x),q_{i}(x))_{i\in\mathbb{Z}_{\geq 0}}$, where $p_{i}(x)=x^{i}+\cdots$
and $q_{i}(x)=x^{i}+\cdots$, we define the norm matrix,
$\displaystyle\mathsf{N}_{i,j}=(\,p_{i}\mid\omega\mid q_{j}\,)\,.$ (2.16)
###### Proposition 2.7.
The coupled matrix model partition function (2.1) is given as a rank $N$
determinant of the norm matrix,
$\displaystyle Z_{N}=\det_{1\leq i,j\leq N}\mathsf{N}_{N-i,N-j}\,.$ (2.17)
###### Proof.
Noticing that the Vandermonde determinant is written as a determinant of
arbitrary monic polynomials,
$\displaystyle\Delta_{N}(X_{L})=\det_{1\leq i,j\leq
N}p_{N-j}(x_{L,i})\,,\qquad\Delta_{N}(X_{R})=\det_{1\leq i,j\leq
N}q_{N-j}(x_{R,i})\,,$ (2.18)
the partition function (2.1) is evaluated as a rank $N$ determinant,
$\displaystyle Z_{N}$
$\displaystyle=\frac{1}{N!^{2}}\int\prod_{k=L,R}\differential{X}_{k}\mathrm{e}^{-\tr
V_{k}(X_{k})}\det_{1\leq i,j\leq N}p_{N-j}(x_{L,i})\det_{1\leq i,j\leq
N}\omega(x_{L,i},x_{R,j})\det_{1\leq i,j\leq N}q_{N-j}(x_{R,i})$
$\displaystyle=\det_{1\leq i,j\leq
N}\quantity[\int\prod_{k=L,R}\differential{x}_{k}\mathrm{e}^{-V_{k}(x_{k})}p_{N-i}(x_{L})\omega(x_{L},x_{R})q_{N-j}(x_{R})]$
$\displaystyle=\det_{1\leq i,j\leq N}\mathsf{N}_{N-i,N-j}\,,$ (2.19)
where we apply the AH identity for $X_{L,R}$. This completes the proof. ∎
###### Remark 2.8 (Biorthogonal polynomial).
Specializing the monic polynomials to the biorthogonal polynomials,
$\displaystyle(\,P_{i}\mid\omega\mid Q_{j}\,)=h_{i}\delta_{i,j}\,,$ (2.20)
the norm matrix is diagonalized $\mathsf{N}_{i,j}=h_{i}\delta_{i,j}$, so that
the partition function is given by
$\displaystyle Z_{N}=\prod_{i=0}^{N-1}h_{i}\,.$ (2.21)
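Proposition 2.7 is easy to test numerically. The sketch below is our own illustration: it uses the toy data $V_{L}(x)=V_{R}(x)=x^{2}$ and $\omega(x,y)=\mathrm{e}^{xy/2}$ (an arbitrary choice that keeps the total weight integrable), and discretizes all integrals by the same Gauss–Hermite rule, so both evaluations refer to the same discrete measure and must agree to rounding. Since reversing both rows and columns of a matrix does not change the determinant, $\det\mathsf{N}_{N-i,N-j}$ is just the determinant of the moment matrix.

```python
import math
import numpy as np
from itertools import product

# toy data: V_L(x) = V_R(x) = x^2 (absorbed into the Gauss-Hermite weights)
# and omega(x, y) = e^{xy/2}, so the total weight is integrable
n, N = 20, 2
t, w = np.polynomial.hermite.hermgauss(n)
Om = np.exp(0.5 * np.outer(t, t))

# norm matrix N_{ij} = (x^i | omega | x^j) for the monic monomials p_i = q_i = x^i
T = t[:, None] ** np.arange(N)
norm = T.T @ np.diag(w) @ Om @ np.diag(w) @ T
Z_det = np.linalg.det(norm)        # = det N_{N-i,N-j}, cf. Proposition 2.7

# Z_N directly from the 2N-fold eigenvalue integral (2.1) on the same grid, N = 2
Z_dir = 0.0
for i1, i2, j1, j2 in product(range(n), repeat=4):
    Z_dir += (w[i1] * w[i2] * w[j1] * w[j2]
              * (t[i1] - t[i2])                                      # Delta(X_L)
              * (Om[i1, j1] * Om[i2, j2] - Om[i1, j2] * Om[i2, j1])  # det omega
              * (t[j1] - t[j2]))                                     # Delta(X_R)
Z_dir /= math.factorial(N) ** 2

print(Z_det, Z_dir)   # identical up to rounding
```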
### 2.2 Christoffel–Darboux kernel
###### Definition 2.9 (Christoffel–Darboux kernel).
We define the Christoffel–Darboux (CD) kernel associated with the coupled
matrix model,
$\displaystyle K_{N}(x_{R},x_{L})$
$\displaystyle=\mathrm{e}^{-V_{L}(x_{L})-V_{R}(x_{R})}\sum_{i,j=0}^{N-1}q_{i}(x_{R})\left(\mathsf{N}^{-1}\right)_{i,j}p_{j}(x_{L})$
$\displaystyle=\mathrm{e}^{-V_{L}(x_{L})-V_{R}(x_{R})}\sum_{i=0}^{N-1}\frac{Q_{i}(x_{R})P_{i}(x_{L})}{h_{i}}=\sum_{i=0}^{N-1}\psi_{i}(x_{R})\phi_{i}(x_{L})\,.$
(2.22)
We denote the inverse of the norm matrix by
$\left(\mathsf{N}^{-1}\right)_{i,j}$, and define the biorthonormal functions,
that we call the wave functions, by
$\displaystyle\phi_{i}(x)=\frac{\mathrm{e}^{-V_{L}(x)}}{\sqrt{h_{i}}}p_{i}(x)\,,\qquad\psi_{i}(x)=\frac{\mathrm{e}^{-V_{R}(x)}}{\sqrt{h_{i}}}q_{i}(x)\,.$
(2.23)
###### Proposition 2.10.
The probability distribution associated with the partition function (2.1) is
written using the CD kernel,
$\displaystyle\mathsf{P}_{N}(X_{L,R})$
$\displaystyle=\frac{Z_{N}^{-1}}{N!^{2}}\prod_{k=L,R}\mathrm{e}^{-\tr
V_{k}(X_{k})}\Delta_{N}(X_{L})\det_{1\leq i,j\leq
N}\omega(x_{L,i},x_{R,j})\Delta_{N}(X_{R})$
$\displaystyle=\frac{1}{N!^{2}}\det_{1\leq i,j\leq
N}\omega(x_{L,i},x_{R,j})\det_{1\leq i,j\leq N}K_{N}(x_{R,i},x_{L,j})\,,$
(2.24)
which obeys the normalization condition
$\displaystyle\int\prod_{k=L,R}\differential{X}_{k}\mathsf{P}_{N}(X_{L,R})=1\,.$
(2.25)
###### Definition 2.11 (Expectation value).
We define the expectation value with respect to the probability distribution
function $\mathsf{P}_{N}(X_{L,R})$ as follows,
$\displaystyle\langle\,\mathcal{O}(X_{L,R})\,\rangle=\int\prod_{k=L,R}\differential{X}_{k}\mathsf{P}_{N}(X_{L,R})\mathcal{O}(X_{L,R})\,.$
(2.26)
### 2.3 Operator formalism
###### Definition 2.12.
We define an inner product symbol,
$\displaystyle\langle\,f\mid g\,\rangle$
$\displaystyle=\int\differential{x}f(x)g(x)\,,$ (2.27a)
$\displaystyle\langle\,f\mid\omega\mid g\,\rangle$
$\displaystyle=\int\differential{x}_{L,R}f(x_{L})\omega(x_{L},x_{R})g(x_{R})\,.$
(2.27b)
We remark that, compared with the previous notation (2.15), this definition
does not depend on the potential function.
Then, the orthonormality of the wave functions $(\phi_{i},\psi_{i})$ defined
in (2.23) is expressed as
$\displaystyle\langle\,\phi_{i}\mid\omega\mid\psi_{j}\,\rangle=\int\differential{x}_{L,R}\phi_{i}(x_{L})\omega(x_{L},x_{R})\psi_{j}(x_{R})=\delta_{i,j}\,,$
(2.28)
where we write
$\displaystyle\phi_{i}(x)=\langle\,\phi_{i}\mid
x\,\rangle\,,\qquad\psi_{i}(x)=\langle\,x\mid\psi_{i}\,\rangle\,,\qquad\omega(x_{L},x_{R})=\langle\,x_{L}\mid\hat{\omega}\mid
x_{R}\,\rangle\,.$ (2.29)
together with the completeness condition
$\displaystyle 1=\int\differential{x}\ket{x}\bra{x}\,.$ (2.30)
In this operator formalism, the CD kernel is given by a matrix element of the
operator defined as
$\displaystyle K_{N}(x_{R},x_{L})$
$\displaystyle=\langle\,x_{R}\mid\hat{K}_{N}\mid
x_{L}\,\rangle\,,\qquad\hat{K}_{N}=\sum_{i=0}^{N-1}\ket{\psi_{i}}\bra{\phi_{i}}\,.$
(2.31)
Introducing infinite dimensional vectors
$\displaystyle\ket{\underline{\phi}}=\begin{pmatrix}\ket{\phi_{0}}&\ket{\phi_{1}}&\ket{\phi_{2}}&\cdots\end{pmatrix}^{\text{T}}\,,\qquad\ket{\underline{\psi}}=\begin{pmatrix}\ket{\psi_{0}}&\ket{\psi_{1}}&\ket{\psi_{2}}&\cdots\end{pmatrix}^{\text{T}}\,,$
(2.32)
together with the projection matrix
$\displaystyle\left(\Pi_{N}\right)_{i,j}=\begin{cases}1&(i=j\in[0,\ldots,N-1])\\\
0&(\text{otherwise})\end{cases}$ (2.33)
the CD kernel operator is written as
$\displaystyle\hat{K}_{N}=\ket{\underline{\psi}}\Pi_{N}\bra{\underline{\phi}}\,.$
(2.34)
In the limit $N\to\infty$, we have
$\displaystyle\lim_{N\to\infty}{K}_{N}(x_{R},x_{L})=\sum_{i=0}^{\infty}\psi_{i}(x_{R})\phi_{i}(x_{L})=\langle\,x_{R}\mid\omega^{-1}\mid
x_{L}\,\rangle=:\tilde{\omega}(x_{R},x_{L})\,,$ (2.35)
such that
$\displaystyle\int\differential{z}\omega(x,z)\tilde{\omega}(z,y)=\int\differential{z}\tilde{\omega}(x,z){\omega}(z,y)=\delta(x-y)\,.$
(2.36)
###### Proposition 2.13.
The CD kernel is self-reproducing
$\displaystyle\hat{K}_{N}\cdot\hat{\omega}\cdot\hat{K}_{N}=\hat{K}_{N}\,,\qquad\tr\left(\hat{\omega}\cdot\hat{K}_{N}\right)=N\,,$
(2.37)
and therefore the correlation functions are in general determinantal
(Eynard–Mehta’s theorem [EM98]).
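The self-reproducing property (2.37) can be verified on the same grid. In the sketch below (same toy data as above; our own illustration) the factors $\mathrm{e}^{-V}$ carried by the wave functions are absorbed into the Gauss–Hermite weights, so the kernel is represented by the "stripped" matrix $\mathrm{e}^{V_{L}+V_{R}}K_{N}$.

```python
import numpy as np

n, N = 30, 4
t, w = np.polynomial.hermite.hermgauss(n)
Om = np.exp(0.5 * np.outer(t, t))              # omega(x, y) = e^{xy/2} on the grid
T = t[:, None] ** np.arange(N)                 # monic monomials p_i = q_i = x^i
W = np.diag(w)

norm = T.T @ W @ Om @ W @ T                    # N_{ij} = (x^i | omega | x^j), V = x^2
Kt = T @ np.linalg.inv(norm) @ T.T             # Kt[a, b] = e^{V_L+V_R} K_N(t_a, t_b)

# (2.37): K . omega . K = K and tr(omega . K) = N on the discrete measure
print(np.allclose(Kt @ W @ Om @ W @ Kt, Kt))   # True
print(np.trace(W @ Om @ W @ Kt))               # = N (here 4.0)
```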
### 2.4 Polynomial ensemble
We consider a generalization of the coupled matrix model, which we call the
coupled polynomial ensemble: it is a coupled version of the polynomial
ensemble introduced in Ref. [KS14]. We define the following generalized
coupled matrix model partition functions.
###### Definition 2.14.
Let $(f_{k,i})_{k=L,R,\,i=0,\ldots,N-1}$ be a set of arbitrary functions. We
define the polynomial ensemble partition functions as follows,
$\displaystyle Z_{N,f_{L}}$
$\displaystyle=\frac{1}{N!^{2}}\int\differential{X}_{L,R}\mathrm{e}^{-\tr
V_{R}(X_{R})}\det_{1\leq i,j\leq N}f_{L,N-i}(x_{L,j})\det_{1\leq i,j\leq
N}\omega(x_{L,i},x_{R,j})\Delta_{N}(X_{R})\,,$ (2.38a) $\displaystyle
Z_{N,f_{R}}$
$\displaystyle=\frac{1}{N!^{2}}\int\differential{X}_{L,R}\mathrm{e}^{-\tr
V_{L}(X_{L})}\Delta_{N}(X_{L})\det_{1\leq i,j\leq
N}\omega(x_{L,i},x_{R,j})\det_{1\leq i,j\leq N}f_{R,N-i}(x_{R,j})\,.$ (2.38b)
###### Remark 2.15.
Specializing each function $(f_{k,i})_{k=L,R,i=0,\ldots,N-1}$ to be a monic
polynomial, these partition functions (2.38) are reduced to the original one
(2.1).
These partition functions exhibit the same determinantal structure as discussed
before. In order to discuss their properties, we introduce the following notation.
###### Definition 2.16 (Mixed bra-ket notation).
We define the following inner product symbol,
$\displaystyle(\,f\mid g\,\rangle$
$\displaystyle=\int\differential{x}_{L,R}\mathrm{e}^{-V_{L}(x_{L})}f(x_{L})g(x_{R})\,,$
(2.39a) $\displaystyle\langle\,f\mid g\,)$
$\displaystyle=\int\differential{x}_{L,R}\mathrm{e}^{-V_{R}(x_{R})}f(x_{L})g(x_{R})\,.$
(2.39b)
We obtain the following result.
###### Proposition 2.17.
The partition function of the polynomial ensemble is written as a rank $N$
determinant with a set of arbitrary monic polynomials
$(p_{i},q_{i})_{i=0,\ldots,N-1}$,
$\displaystyle Z_{N,f_{L}}$ $\displaystyle=\det_{1\leq i,j\leq
N}\langle\,f_{L,N-i}\mid\omega\mid q_{N-j}\,)\,,$ (2.40) $\displaystyle
Z_{N,f_{R}}$ $\displaystyle=\det_{1\leq i,j\leq N}(\,p_{N-i}\mid\omega\mid
f_{R,N-j}\,\rangle\,.$ (2.41)
###### Proof.
We obtain this formula by direct calculation. Recalling the Vandermonde
determinant is given as (2.18) with a set of arbitrary monic polynomials, we
have
$\displaystyle Z_{N,f_{L}}$
$\displaystyle=\frac{1}{N!^{2}}\int\differential{X}_{L,R}\mathrm{e}^{-\tr
V_{R}(X_{R})}\det_{1\leq i,j\leq N}f_{L,N-i}(x_{L,j})\det_{1\leq i,j\leq
N}\omega(x_{L,i},x_{R,j})\det_{1\leq i,j\leq N}q_{N-j}(x_{R,i})$
$\displaystyle=\det_{1\leq i,j\leq
N}\quantity(\int\differential{x}_{L,R}\mathrm{e}^{-V_{R}(x_{R})}f_{L,N-i}(x_{L})\omega(x_{L},x_{R})q_{N-j}(x_{R}))$
$\displaystyle=\det_{1\leq i,j\leq N}\langle\,f_{L,N-i}\mid\omega\mid
q_{N-j}\,)\,.$ (2.42)
We can obtain the other formula in the same way. ∎
###### Definition 2.18 (Biorthogonal functions).
We can then define two pairs of biorthogonal families
$(F_{L,i},Q_{j})_{i,j=0,\ldots,N-1}$ and $(P_{i},F_{R,j})_{i,j=0,\ldots,N-1}$
such that:
* •
The functions $P_{i}$ and $Q_{j}$ are monic polynomials.
* •
The functions $F_{L,i}$ (resp. $F_{R,i}$) are linearly spanned by the
functions $(f_{L,k})_{k=0,\cdots,i}$ (resp. $(f_{R,k})_{k=0,\cdots,i}$).
* •
They satisfy the following scalar product properties:
$\displaystyle\langle\,F_{L,i}\mid\omega\mid Q_{j}\,)$
$\displaystyle=h_{L,i}\delta_{i,j}\qquad(i,j=0,\ldots,N-1),$ (2.43a)
$\displaystyle(\,P_{i}\mid\omega\mid F_{R,j}\,\rangle$
$\displaystyle=h_{R,i}\delta_{i,j}\qquad(i,j=0,\ldots,N-1).$ (2.43b)
###### Corollary 2.19.
The partition functions of the coupled polynomial ensemble (2.38) take the
following form in terms of the normalization constants
$(h_{k,i})_{k=L,R,i=0,\ldots,N-1}$,
$\displaystyle Z_{N,f_{k}}$
$\displaystyle=\prod_{i=0}^{N-1}h_{k,i}\qquad(k=L,R)\,.$ (2.44)
###### Proof.
Recalling that the determinant is invariant under adding to a row (or column) a
linear combination of the others, one can express it in terms of the biorthogonal functions
defined before,
$\displaystyle Z_{N,f_{L}}$ $\displaystyle=\det_{1\leq i,j\leq
N}\langle\,F_{L,i-1}\mid\omega\mid Q_{j-1}\,)\,,$ (2.45a) $\displaystyle
Z_{N,f_{R}}$ $\displaystyle=\det_{1\leq i,j\leq N}(\,P_{i-1}\mid\omega\mid
F_{R,j-1}\,\rangle\,.$ (2.45b)
which is exactly the desired expression. ∎
###### Definition 2.20 (Christoffel–Darboux kernel).
We define the CD kernels for the coupled polynomial ensemble as follows,
$\displaystyle K_{N,f_{L}}(x,y)$
$\displaystyle=\mathrm{e}^{-V_{R}(x)}\sum_{i=0}^{N-1}\frac{Q_{i}(x)F_{L,i}(y)}{h_{L,i}}\,,$
(2.46a) $\displaystyle K_{N,f_{R}}(x,y)$
$\displaystyle=\mathrm{e}^{-V_{L}(y)}\sum_{i=0}^{N-1}\frac{F_{R,i}(x)P_{i}(y)}{h_{R,i}}\,.$
(2.46b)
###### Remark 2.21.
As for the ordinary coupled matrix model (2.1), one can define the following
biorthonormal wave functions
$\displaystyle\psi_{L,i}(x)=\frac{1}{\sqrt{h_{L,i}}}\mathrm{e}^{-V_{R}(x)}Q_{i}(x)\,,\qquad\phi_{L,i}(x)=\frac{1}{\sqrt{h_{L,i}}}F_{L,i}(x)\,,$
(2.47)
$\displaystyle\psi_{R,i}(x)=\frac{1}{\sqrt{h_{R,i}}}\mathrm{e}^{-V_{L}(x)}P_{i}(x)\,,\qquad\phi_{R,i}(x)=\frac{1}{\sqrt{h_{R,i}}}F_{R,i}(x)\,,$
(2.48)
and the CD kernels then take a very compact form.
###### Proposition 2.22.
The probability distributions for the coupled polynomial ensemble can be
expressed as
$\displaystyle\mathsf{P}_{N,f_{L}}(X_{L,R})$
$\displaystyle=\frac{1}{N!^{2}}\det_{1\leq i,j\leq
N}\omega(x_{L,i},x_{R,j})\det_{1\leq i,j\leq
N}K_{N,f_{L}}(x_{R,i},x_{L,j})\,,$ (2.49)
$\displaystyle\mathsf{P}_{N,f_{R}}(X_{L,R})$
$\displaystyle=\frac{1}{N!^{2}}\det_{1\leq i,j\leq
N}\omega(x_{L,i},x_{R,j})\det_{1\leq i,j\leq
N}K_{N,f_{R}}(x_{R,i},x_{L,j})\,.$ (2.50)
###### Remark 2.23.
All the previous formulas lead to the familiar matrix model formalism.
Therefore the correlation functions of the Schur polynomial and the
characteristic polynomials shown in the following sections are
straightforwardly generalized to the coupled polynomial ensemble (except for
the pair correlation functions discussed in Section 4). We obtain a natural
generalization of the results for the characteristic polynomial average with
the source term [Kim14b, Kim14a, KM21] and also the one-matrix polynomial
ensemble [ASW20].
## 3 Characteristic polynomial averages
### 3.1 Schur polynomial average
We first compute the Schur polynomial average for the coupled matrix model,
which will be a building block of the correlation functions of the
characteristic polynomials [KM21]. See also [ST21] for a related work.
###### Definition 3.1 (Schur polynomial).
Let $\lambda$ be a partition, a non-increasing sequence of non-negative
integers,
$\displaystyle\lambda=(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{\ell}>\lambda_{\ell+1}=\cdots=0)\,,$
(3.1)
where $\ell=\ell(\lambda)$ is called the length of the partition. Denoting the
transposed partition by $\lambda^{\text{T}}$, we have
$\ell(\lambda)=\lambda_{1}^{\text{T}}$. Then, the Schur polynomial of $N$
variables, $X=(x_{i})_{i=1,\ldots,N}$, is defined as follows,
$\displaystyle s_{\lambda}(X)=\frac{1}{\Delta_{N}(X)}\det_{1\leq i,j\leq
N}x_{i}^{\lambda_{j}+N-j}\,.$ (3.2)
If $\ell(\lambda)>N$, we have $s_{\lambda}(X)=0$. We also remark
$s_{\emptyset}(X)=1$.
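The bialternant formula (3.2) translates directly into code. The following minimal helper is an illustration only; it assumes numerical, pairwise distinct variables so that the Vandermonde denominator does not vanish.

```python
import numpy as np

def schur(lam, x):
    """s_lambda(x_1, ..., x_N) via the bialternant formula (3.2);
    requires len(lam) <= N and pairwise distinct x_i."""
    N = len(x)
    lam = list(lam) + [0] * (N - len(lam))       # pad the partition with zeros
    num = np.linalg.det(np.array([[xi ** (lam[j] + N - 1 - j) for j in range(N)]
                                  for xi in x], dtype=float))
    den = np.prod([x[i] - x[j] for i in range(N) for j in range(i + 1, N)])
    return num / den

print(schur([2, 1], [1.0, 2.0, 3.0]))   # s_(2,1)(1,2,3) = 60
```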
###### Lemma 3.2.
The Schur polynomial average with respect to the probability distribution
function $\mathsf{P}_{N}(X_{L,R})$ (2.24) is given as a rank $N$ determinant,
$\displaystyle\langle\,s_{\lambda}(X_{L})s_{\mu}(X_{R})\,\rangle$
$\displaystyle=\frac{1}{Z_{N}}\det_{1\leq i,j\leq
N}(\,x_{L}^{\lambda_{i}+N-i}\mid\omega\mid x_{R}^{\mu_{j}+N-j}\,)\,.$ (3.3)
###### Proof.
This can be shown by direct calculation,
$\displaystyle\langle\,s_{\lambda}(X_{L})s_{\mu}(X_{R})\,\rangle$
$\displaystyle=\int\prod_{k=L,R}\differential{X}_{k}\mathsf{P}_{N}(X_{L,R})s_{\lambda}(X_{L})s_{\mu}(X_{R})$
$\displaystyle=\frac{Z_{N}^{-1}}{N!^{2}}\int\prod_{k=L,R}\mathrm{e}^{-\tr
V_{k}(X_{k})}\det_{1\leq i,j\leq N}x_{L,i}^{\lambda_{j}+N-j}\det_{1\leq
i,j\leq N}\omega(x_{L,i},x_{R,j})\det_{1\leq i,j\leq N}x_{R,i}^{\mu_{j}+N-j}$
$\displaystyle=\frac{1}{Z_{N}}\det_{1\leq i,j\leq
N}\quantity(\int\prod_{k=L,R}\differential{x}_{k}\mathrm{e}^{-V_{k}(x_{k})}x_{L}^{\lambda_{i}+N-i}\omega(x_{L},x_{R})x_{R}^{\mu_{j}+N-j})$
$\displaystyle=\frac{1}{Z_{N}}\det_{1\leq i,j\leq
N}(\,x_{L}^{\lambda_{i}+N-i}\mid\omega\mid x_{R}^{\mu_{j}+N-j}\,)\,.$ (3.4)
This completes the proof. ∎
###### Lemma 3.3 (Schur polynomial expansion).
Let $Z=\operatorname{diag}(z_{1},\ldots,z_{M})$. The characteristic polynomial
is expanded with the Schur polynomial as follows,
$\displaystyle\prod_{\alpha=1}^{M}\det(z_{\alpha}-X)$
$\displaystyle=\sum_{\lambda\subseteq(M^{N})}(-1)^{|\lambda|}s_{\lambda^{\vee}}(Z)s_{\lambda}(X)\,,$
(3.5a) $\displaystyle\prod_{\alpha=1}^{M}\det(z_{\alpha}-X)^{-1}$
$\displaystyle=\det_{M}Z^{-N}\sum_{\lambda|\ell(\lambda)\leq\operatorname{min}(M,N)}s_{\lambda}(Z^{-1})s_{\lambda}(X)\,,$
(3.5b)
where we define the dual partition
$\displaystyle\lambda^{\vee}=(\lambda_{1}^{\vee},\ldots,\lambda_{M}^{\vee})=(N-\lambda_{M}^{\text{T}},\ldots,N-\lambda_{1}^{\text{T}})\,,$
(3.6)
and the length of the partition is denoted by $\ell(\lambda)=\lambda_{1}^{\text{T}}$.
###### Proof.
This follows from the Cauchy sum formula. See, e.g., [Mac15]. ∎
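Lemma 3.3 can be tested by enumerating the partitions inside the $M\times N$ box. The sketch below is our own check, with arbitrary values for $Z$ and $X$; the `schur` helper repeats the bialternant formula from the previous sketch.

```python
import numpy as np
from itertools import combinations_with_replacement

def schur(lam, x):
    """Bialternant formula (3.2), as in the previous sketch."""
    N = len(x)
    lam = list(lam) + [0] * (N - len(lam))
    num = np.linalg.det(np.array([[xi ** (lam[j] + N - 1 - j) for j in range(N)]
                                  for xi in x], dtype=float))
    den = np.prod([x[i] - x[j] for i in range(N) for j in range(i + 1, N)])
    return num / den

M, N = 2, 3
Z = [1.3, -0.4]                       # arbitrary distinct test values
X = [0.2, 0.7, -1.1]

lhs = np.prod([np.prod([z - xi for xi in X]) for z in Z])

rhs = 0.0
for parts in combinations_with_replacement(range(M + 1), N):
    lam = sorted(parts, reverse=True)                        # partition in the M x N box
    lamT = [sum(1 for p in lam if p > i) for i in range(M)]  # transposed partition
    dual = [N - lamT[M - 1 - a] for a in range(M)]           # dual partition (3.6)
    rhs += (-1) ** sum(lam) * schur(dual, Z) * schur(lam, X)

print(lhs, rhs)   # (3.5a): the two agree
```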
### 3.2 Characteristic polynomial
Based on the Schur polynomial expansion, we obtain the determinantal formula
for the characteristic polynomial average as follows.
###### Proposition 3.4 (Characteristic polynomial average).
The $M$-point correlation function of the characteristic polynomial is given
by a rank $M$ determinant of the associated biorthogonal polynomials,
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})}$
$\displaystyle=\frac{1}{\Delta_{M}(Z)}\det_{1\leq\alpha,\beta\leq
M}P_{N+M-\beta}(z_{\alpha})\,,$ (3.7a)
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{R})}$
$\displaystyle=\frac{1}{\Delta_{M}(Z)}\det_{1\leq\alpha,\beta\leq
M}Q_{N+M-\beta}(z_{\alpha})\,.$ (3.7b)
###### Proof.
We may use Lemma 3.2 and Lemma 3.3 to show this formula. Considering the
characteristic polynomial coupled with the matrix $X_{L}$, we obtain
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})}$
$\displaystyle=\sum_{\lambda\subseteq(M^{N})}(-1)^{|\lambda|}s_{\lambda^{\vee}}(Z)\expectationvalue{s_{\lambda}(X_{L})}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)}\sum_{\lambda\subseteq(M^{N})}(-1)^{|\lambda|}\det_{1\leq\alpha,\beta\leq
M}z_{\alpha}^{\lambda_{\beta}^{\vee}+M-\beta}\det_{1\leq i,j\leq
N}(x_{L}^{\lambda_{i}+N-i}\mid\omega\mid q_{N-j})$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)}\det_{\begin{subarray}{c}1\leq\alpha,\beta\leq
M\\\ 1\leq i,j\leq
N\end{subarray}}\begin{pmatrix}z_{\alpha}^{N+M-\beta}&z_{\alpha}^{N-j}\\\
(x_{L}^{N+M-\beta}\mid\omega\mid q_{N-i})&(x_{L}^{N-j}\mid\omega\mid
q_{N-i})\end{pmatrix}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)}\det_{\begin{subarray}{c}1\leq\alpha,\beta\leq
M\\\ 1\leq i,j\leq
N\end{subarray}}\begin{pmatrix}p_{N+M-\beta}(z_{\alpha})&p_{N-j}(z_{\alpha})\\\
(p_{N+M-\beta}\mid\omega\mid q_{N-i})&(p_{N-j}\mid\omega\mid
q_{N-i})\end{pmatrix}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)}\det_{\begin{subarray}{c}1\leq\alpha,\beta\leq
M\\\ 1\leq i,j\leq
N\end{subarray}}\begin{pmatrix}P_{N+M-\beta}(z_{\alpha})&P_{N-j}(z_{\alpha})\\\
(P_{N+M-\beta}\mid\omega\mid Q_{N-i})&(P_{N-j}\mid\omega\mid
Q_{N-i})\end{pmatrix}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)}\det_{\begin{subarray}{c}1\leq\alpha,\beta\leq
M\\\ 1\leq i,j\leq
N\end{subarray}}\begin{pmatrix}P_{N+M-\beta}(z_{\alpha})&P_{N-j}(z_{\alpha})\\\
0&h_{N-i}\,\delta_{N-i,N-j}\end{pmatrix}$
$\displaystyle=\frac{1}{\Delta_{M}(Z)}\det_{1\leq\alpha,\beta\leq
M}P_{N+M-\beta}(z_{\alpha})\,,$ (3.8)
where we apply the rank $M$ co-factor expansion of the rank $N+M$ determinant.
The other formula (3.7b) is similarly obtained. ∎
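For $M=1$, formula (3.7a) reduces to the Heine-type statement $\langle\det(z-X_{L})\rangle=P_{N}(z)$. The following sketch (same toy model and quadrature as before; the test point $z$ is arbitrary) compares this with a direct evaluation of the eigenvalue integral; both sides refer to the same discrete measure, so they agree to rounding.

```python
import numpy as np
from itertools import product

n, N = 20, 2
t, w = np.polynomial.hermite.hermgauss(n)
Om = np.exp(0.5 * np.outer(t, t))                  # omega(x, y) = e^{xy/2}
T = t[:, None] ** np.arange(N + 1)
norm = T.T @ np.diag(w) @ Om @ np.diag(w) @ T      # moments up to degree N

# monic biorthogonal P_N from (P_N | omega | x^k) = 0 for k < N
a = np.linalg.solve(norm[:N, :N].T, norm[N, :N])
P = lambda s: s**N - sum(a[j] * s**j for j in range(N))

z = 0.8                                             # arbitrary test point
num = den = 0.0
for i1, i2, j1, j2 in product(range(n), repeat=4):
    wt = w[i1] * w[i2] * w[j1] * w[j2]
    body = ((t[i1] - t[i2])
            * (Om[i1, j1] * Om[i2, j2] - Om[i1, j2] * Om[i2, j1])
            * (t[j1] - t[j2]))
    den += wt * body
    num += wt * body * (z - t[i1]) * (z - t[i2])

print(num / den, P(z))   # (3.7a) at M = 1 (Heine-type formula): the two agree
```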
### 3.3 Characteristic polynomial inverse
In order to write down the characteristic polynomial inverse average, we
define the Hilbert transform.
###### Definition 3.5 (Hilbert transform).
We define the Hilbert transform of the polynomial functions as follows,
$\displaystyle\widetilde{p}_{j}(z)=\int\prod_{k=L,R}\differential{x}_{k}\mathrm{e}^{-V_{k}(x_{k})}\frac{\omega(x_{L},x_{R})q_{j}(x_{R})}{z-x_{L}}\,,$
(3.9a)
$\displaystyle\widetilde{q}_{j}(z)=\int\prod_{k=L,R}\differential{x}_{k}\mathrm{e}^{-V_{k}(x_{k})}\frac{p_{j}(x_{L})\omega(x_{L},x_{R})}{z-x_{R}}\,.$
(3.9b)
We obtain the following formula.
###### Proposition 3.6 (Characteristic polynomial inverse average).
Let $Z=\operatorname{diag}(z_{1},\ldots,z_{M})$. The $M$-point correlation
function of the characteristic polynomial inverse is given by a rank $M$
determinant of the dual biorthogonal polynomials. Depending on the relation
between $N$ and $M$, we have the following formulas.
1. $M\leq N$
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})^{-1}}$
$\displaystyle=\frac{Z_{N-M}/Z_{N}}{\Delta_{M}(Z)}\det_{1\leq\alpha,\beta\leq
M}\widetilde{P}_{N-\beta}(z_{\alpha})\,,$ (3.10a)
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{R})^{-1}}$
$\displaystyle=\frac{Z_{N-M}/Z_{N}}{\Delta_{M}(Z)}\det_{1\leq\alpha,\beta\leq
M}\widetilde{Q}_{N-\beta}(z_{\alpha})\,.$ (3.10b)
2. $M\geq N$
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})^{-1}}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{N}(Z)}\det_{\begin{subarray}{c}i=1,\ldots,N\\\
\alpha=1,\ldots,M\\\
a=1,\ldots,M-N\end{subarray}}\begin{pmatrix}\widetilde{p}_{N-i}(z_{\alpha})\\\
p_{a-1}(z_{\alpha})\end{pmatrix}\,,$ (3.10c)
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{R})^{-1}}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{N}(Z)}\det_{\begin{subarray}{c}i=1,\ldots,N\\\
\alpha=1,\ldots,M\\\
a=1,\ldots,M-N\end{subarray}}\begin{pmatrix}\widetilde{q}_{N-i}(z_{\alpha})\\\
q_{a-1}(z_{\alpha})\end{pmatrix}\,.$ (3.10d)
###### Proof.
We first consider the case $M\leq N$. In this case, the Schur polynomial
average for $\ell(\lambda)\leq M$ is obtained from Lemma 3.2 as
$\displaystyle\expectationvalue{s_{\lambda}(X_{L})}$
$\displaystyle=\frac{1}{Z_{N}}\det_{\begin{subarray}{c}1\leq\alpha,\beta\leq
M\\\ M+1\leq a,b\leq
N\end{subarray}}\begin{pmatrix}(x_{L}^{\lambda_{\alpha}+N-\alpha}\mid\omega\mid
q_{N-\beta})&(x_{L}^{N-a}\mid\omega\mid q_{N-\beta})\\\\[5.0pt]
(x_{L}^{\lambda_{\alpha}+N-\alpha}\mid\omega\mid
q_{N-b})&(x_{L}^{N-a}\mid\omega\mid q_{N-b})\\\ \end{pmatrix}$
$\displaystyle=\frac{1}{Z_{N}}\det_{\begin{subarray}{c}1\leq\alpha,\beta\leq
M\\\ M+1\leq a,b\leq
N\end{subarray}}\begin{pmatrix}(x_{L}^{\lambda_{\alpha}+N-\alpha}\mid\omega\mid
Q_{N-\beta})&0\\\\[5.0pt] (x_{L}^{\lambda_{\alpha}+N-\alpha}\mid\omega\mid
Q_{N-b})&h_{N-a}\,\delta_{N-a,N-b}\\\ \end{pmatrix}$
$\displaystyle=\frac{Z_{N-M}}{Z_{N}}\det_{1\leq\alpha,\beta\leq
M}(x_{L}^{\lambda_{\alpha}+N-\alpha}\mid\omega\mid Q_{N-\beta})\,.$ (3.11)
Then, applying the Schur polynomial expansion as given in Lemma 3.3, the
characteristic polynomial inverse average is given as follows,
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})^{-1}}$
$\displaystyle=\det_{M}Z^{-N}\sum_{\ell(\lambda)\leq
M}s_{\lambda}(Z^{-1})\expectationvalue{s_{\lambda}(X_{L})}$
$\displaystyle=\frac{Z_{N-M}}{Z_{N}}\frac{1}{\Delta_{M}(Z)}\sum_{0\leq\lambda_{M}\leq\cdots\leq\lambda_{1}\leq\infty}\det_{1\leq\alpha,\beta\leq
M}\quantity(z_{\alpha}^{-\lambda_{\beta}+\beta-(N+1)})\det_{1\leq\alpha,\beta\leq
M}(x_{L}^{\lambda_{\alpha}+N-\alpha}\mid\omega\mid Q_{N-\beta})$
$\displaystyle=\frac{Z_{N-M}}{Z_{N}}\frac{1}{\Delta_{M}(Z)}\frac{1}{M!}\sum_{\begin{subarray}{c}0\leq
r_{1},\cdots,r_{M}\leq\infty\\\ r_{\alpha}\neq
r_{\beta}\end{subarray}}\det_{1\leq\alpha,\beta\leq
M}\quantity(z_{\alpha}^{M-N-r_{\beta}-1})\det_{1\leq\alpha,\beta\leq
M}(x_{L}^{N-M+r_{\alpha}}\mid\omega\mid Q_{N-\beta})$
$\displaystyle=\frac{Z_{N-M}}{Z_{N}}\frac{1}{\Delta_{M}(Z)}\det_{1\leq\alpha,\beta\leq
M}\quantity(\sum_{r=0}^{\infty}z_{\alpha}^{M-N-r-1}(x_{L}^{N-M+r}\mid\omega\mid
Q_{N-\beta}))\,,$ (3.12)
where we have applied an analog of the AH identity for non-colliding discrete
variables, $(r_{\alpha})_{\alpha=1,\ldots,M}$ ($r_{\alpha}\neq r_{\beta}$).
Noticing
$\displaystyle\sum_{r=0}^{\infty}z^{-r-1}x^{r}=\frac{1}{z-x}\,,$ (3.13)
and
$\displaystyle\frac{x^{N-M}}{z-x}=\frac{z^{N-M}}{z-x}-\frac{z^{N-M}-x^{N-M}}{z-x}=\frac{z^{N-M}}{z-x}-O(x^{N-M-1})\,,$
(3.14)
we obtain
$\displaystyle\sum_{r=0}^{\infty}z_{\alpha}^{M-N-r-1}(x_{L}^{N-M+r}\mid\omega\mid
Q_{N-\beta})$
$\displaystyle=z_{\alpha}^{M-N}\int\prod_{k=L,R}\differential{x}_{k}\mathrm{e}^{-V_{k}(x_{k})}\frac{x_{L}^{N-M}}{z_{\alpha}-x_{L}}\omega(x_{L},x_{R})Q_{N-\beta}(x_{R})$
$\displaystyle=\int\prod_{k=L,R}\differential{x}_{k}\mathrm{e}^{-V_{k}(x_{k})}\frac{\omega(x_{L},x_{R})Q_{N-\beta}(x_{R})}{z_{\alpha}-x_{L}}$
$\displaystyle=\widetilde{P}_{N-\beta}(z_{\alpha})\,.$ (3.15)
We have used the biorthogonality $(\,x_{L}^{a}\mid\omega\mid Q_{N-\beta}\,)=0$
for $\beta=1,\ldots,M$ and $a=0,\ldots,N-M-1$ to obtain the last expression.
This completes the derivation of the formula (3.10a). We can similarly obtain
the formula (3.10b).
We then consider the case $M\geq N$. In this case, the $M$-variable Schur
polynomial with the condition $\ell(\lambda)\leq N$ for
$Z=\operatorname{diag}(z_{1},\ldots,z_{M})$ is given by
$\displaystyle\frac{s_{\lambda}(Z^{-1})}{\det Z^{N}}$
$\displaystyle=\det_{\begin{subarray}{c}i=1,\ldots,N\\\ \alpha=1,\ldots,M\\\
a=1,\ldots,M-N\end{subarray}}\begin{pmatrix}z_{\alpha}^{-\lambda_{i}+i-(N+1)}\\\
z_{\alpha}^{a-1}\end{pmatrix}$
$\displaystyle=\det_{\begin{subarray}{c}i=1,\ldots,N\\\ \alpha=1,\ldots,M\\\
a=1,\ldots,M-N\end{subarray}}\begin{pmatrix}z_{\alpha}^{-\lambda_{i}+i-(N+1)}\\\
p_{a-1}(z_{\alpha})\end{pmatrix}\,.$ (3.16)
Hence, applying the Schur polynomial expansion, we obtain
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})^{-1}}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{N}(Z)}\sum_{0\leq\lambda_{N}\leq\cdots\leq\lambda_{1}\leq\infty}\det_{\begin{subarray}{c}i=1,\ldots,N\\\
\alpha=1,\ldots,M\\\
a=1,\ldots,M-N\end{subarray}}\begin{pmatrix}z_{\alpha}^{-\lambda_{i}+i-(N+1)}\\\
p_{a-1}(z_{\alpha})\end{pmatrix}\det_{1\leq i,j\leq
N}(x_{L}^{\lambda_{i}+N-i}\mid\omega\mid q_{N-j})$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{N}(Z)}\det_{\begin{subarray}{c}i=1,\ldots,N\\\
\alpha=1,\ldots,M\\\
a=1,\ldots,M-N\end{subarray}}\begin{pmatrix}\displaystyle\sum_{r=0}^{\infty}z_{\alpha}^{-r-1}(x_{L}^{r}\mid\omega\mid
q_{N-i})\\\ p_{a-1}(z_{\alpha})\end{pmatrix}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{N}(Z)}\det_{\begin{subarray}{c}i=1,\ldots,N\\\
\alpha=1,\ldots,M\\\
a=1,\ldots,M-N\end{subarray}}\begin{pmatrix}\widetilde{p}_{N-i}(z_{\alpha})\\\
p_{a-1}(z_{\alpha})\end{pmatrix}\,.$ (3.17)
This is the determinantal formula shown in (3.10c). We can similarly obtain
the other formula (3.10d). This completes the proof. ∎
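For $M=1\leq N$, formula (3.10a) reads $\langle\det(z-X_{L})^{-1}\rangle=(Z_{N-1}/Z_{N})\,\widetilde{P}_{N-1}(z)$. In the sketch below (same toy model; our own illustration) the point $z$ is placed beyond the largest quadrature node, so that the geometric-series step of the proof converges at every node of the discrete measure.

```python
import numpy as np
from itertools import product

n, N = 20, 2
t, w = np.polynomial.hermite.hermgauss(n)
Om = np.exp(0.5 * np.outer(t, t))                  # omega(x, y) = e^{xy/2}
T = t[:, None] ** np.arange(N)
norm = T.T @ np.diag(w) @ Om @ np.diag(w) @ T      # N x N moment matrix

# monic biorthogonal Q_{N-1}: (x^k | omega | Q_{N-1}) = 0 for k < N - 1
b = np.linalg.solve(norm[:N-1, :N-1], norm[:N-1, N-1])
Q = lambda s: s**(N-1) - sum(b[j] * s**j for j in range(N-1))

z = 6.0                                            # beyond the largest node
Pt = np.einsum("a,b,ab,b,a->", w, w, Om, Q(t), 1.0 / (z - t))   # (3.9a) with Q
ZN, ZN1 = np.linalg.det(norm), np.linalg.det(norm[:N-1, :N-1])

num = den = 0.0
for i1, i2, j1, j2 in product(range(n), repeat=4):
    wt = w[i1] * w[i2] * w[j1] * w[j2]
    body = ((t[i1] - t[i2])
            * (Om[i1, j1] * Om[i2, j2] - Om[i1, j2] * Om[i2, j1])
            * (t[j1] - t[j2]))
    den += wt * body
    num += wt * body / ((z - t[i1]) * (z - t[i2]))

print(num / den, ZN1 / ZN * Pt)   # (3.10a) at M = 1: the two agree
```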
## 4 Pair correlation functions
In this section, we consider the correlation function involving both of the
characteristic polynomials, coupled to the matrices $X_{L,R}$, which we call
the pair correlation function.
### 4.1 Characteristic polynomial
We have the following result regarding the pair correlation of the
characteristic polynomials.
###### Proposition 4.1 (Pair correlation of characteristic polynomials).
Let $Z=\operatorname{diag}(z_{1},\ldots,z_{M})$ and
$W=\operatorname{diag}(w_{1},\ldots,w_{M})$. The correlation function of $M$
pairs of the characteristic polynomials is given by a rank $M$ determinant of
the CD kernel,
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})\det(w_{\alpha}-X_{R})}$
$\displaystyle=\frac{\mathrm{e}^{\tr V_{L}(Z)+\tr
V_{R}(W)}}{\Delta_{M}(Z)\Delta_{M}(W)}\frac{Z_{N+M}}{Z_{N}}\det_{1\leq\alpha,\beta\leq
M}K_{N+M}(w_{\alpha},z_{\beta})\,.$ (4.1)
###### Proof.
We use Lemma 3.2 and Lemma 3.3 as before. In addition, we apply the co-factor
expansion twice to obtain the following,
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})\det(w_{\alpha}-X_{R})}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)\Delta_{M}(W)}\det_{\begin{subarray}{c}1\leq\alpha,\beta\leq
M\\\ 1\leq i,j\leq N+M\end{subarray}}\begin{pmatrix}0&w_{\alpha}^{N+M-j}\\\
z_{\beta}^{N+M-i}&(x_{L}^{N+M-i}\mid\omega\mid x_{R}^{N+M-j})\end{pmatrix}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)\Delta_{M}(W)}\det_{\begin{subarray}{c}1\leq\alpha,\beta\leq
M\\\ 1\leq i,j\leq N+M\end{subarray}}\begin{pmatrix}0&q_{N+M-j}(w_{\alpha})\\\
p_{N+M-i}(z_{\beta})&\mathsf{N}_{N+M-i,N+M-j}\end{pmatrix}$
$\displaystyle=\frac{Z_{N+M}/Z_{N}}{\Delta_{M}(Z)\Delta_{M}(W)}\det_{1\leq\alpha,\beta\leq
M}\quantity(\sum_{k,k^{\prime}=0}^{N+M-1}q_{k}(w_{\alpha})(\mathsf{N}^{-1})_{k,k^{\prime}}p_{k^{\prime}}(z_{\beta}))$
$\displaystyle=\frac{\mathrm{e}^{\tr V_{L}(Z)+\tr
V_{R}(W)}}{\Delta_{M}(Z)\Delta_{M}(W)}\frac{Z_{N+M}}{Z_{N}}\det_{1\leq\alpha,\beta\leq
M}K_{N+M}(w_{\alpha},z_{\beta})\,.$ (4.2)
Here we have applied the definition of the CD kernel of degree $N+M$ (2.22) to
obtain the last expression. ∎
###### Remark 4.2.
This result can be also obtained using the self-reproducing property of the CD
kernel as follows. Noticing
$\displaystyle\Delta_{N}(X)\prod_{\alpha=1}^{M}\det(z_{\alpha}-X)=\frac{\Delta_{N+M}(X;Z)}{\Delta_{M}(Z)}\,,$
(4.3)
the pair correlation is given by
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})\det(w_{\alpha}-X_{R})}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)\Delta_{M}(W)}\frac{1}{N!^{2}}\int\prod_{k=L,R}\differential{X}_{k}\mathrm{e}^{-\tr
V_{k}(X_{k})}\Delta_{N+M}(X_{L};Z)\det_{1\leq i,j\leq
N}\omega(x_{L,i},x_{R,j})\Delta_{N+M}(X_{R};W)$
$\displaystyle=\frac{\mathrm{e}^{\tr V_{L}(Z)+\tr
V_{R}(W)}}{\Delta_{M}(Z)\Delta_{M}(W)}\frac{Z_{N+M}/Z_{N}}{N!^{2}}$
$\displaystyle\hskip
10.00002pt\times\int\prod_{k=L,R}\differential{X}_{k}\mathrm{e}^{-\tr
V_{k}(X_{k})}\det_{1\leq i,j\leq
N}\omega(x_{L,i},x_{R,j})\det_{\begin{subarray}{c}1\leq i,j\leq N\\\
1\leq\alpha,\beta\leq
M\end{subarray}}\begin{pmatrix}K_{N+M}(x_{R,i},x_{L,j})&K_{N+M}(x_{R,i},z_{\beta})\\\
K_{N+M}(w_{\alpha},x_{L,j})&K_{N+M}(w_{\alpha},z_{\beta})\end{pmatrix}$
$\displaystyle=\frac{\mathrm{e}^{\tr V_{L}(Z)+\tr
V_{R}(W)}}{\Delta_{M}(Z)\Delta_{M}(W)}\frac{Z_{N+M}}{Z_{N}}\det_{1\leq\alpha,\beta\leq
M}K_{N+M}(w_{\alpha},z_{\beta})\,.$ (4.4)
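Formula (4.1) is also straightforward to test at $M=1$, where the exponential prefactor cancels against the one contained in the CD kernel (2.22). The sketch below (same toy model as in the previous sections; test points arbitrary) compares the two sides.

```python
import numpy as np
from itertools import product

n, N = 20, 2
t, w = np.polynomial.hermite.hermgauss(n)
Om = np.exp(0.5 * np.outer(t, t))                  # omega(x, y) = e^{xy/2}
T = t[:, None] ** np.arange(N + 1)
norm = T.T @ np.diag(w) @ Om @ np.diag(w) @ T      # (N+1) x (N+1) moment matrix

z, u = 0.8, -0.3                                    # test points (u plays w_1)
# e^{V_L(z) + V_R(u)} K_{N+1}(u, z): the exponentials cancel against (2.22)
Kbar = (u ** np.arange(N + 1)) @ np.linalg.inv(norm) @ (z ** np.arange(N + 1))
rhs = np.linalg.det(norm) / np.linalg.det(norm[:N, :N]) * Kbar  # (Z_{N+1}/Z_N) Kbar

num = den = 0.0
for i1, i2, j1, j2 in product(range(n), repeat=4):
    wt = w[i1] * w[i2] * w[j1] * w[j2]
    body = ((t[i1] - t[i2])
            * (Om[i1, j1] * Om[i2, j2] - Om[i1, j2] * Om[i2, j1])
            * (t[j1] - t[j2]))
    den += wt * body
    num += wt * body * (z - t[i1]) * (z - t[i2]) * (u - t[j1]) * (u - t[j2])

print(num / den, rhs)   # (4.1) at M = 1: the two agree
```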
### 4.2 Characteristic polynomial inverse
We then consider the pair correlation of the characteristic polynomial
inverses. In order to write down the formula in this case, we define the dual
CD kernel as follows.
###### Definition 4.3 (Dual Christoffel–Darboux kernel).
For the dual wave functions defined through the Hilbert transform,
$\displaystyle\widetilde{\phi}_{i}(z)$
$\displaystyle=\mathrm{e}^{V_{L}(z)}\int\differential{x}_{L,R}\mathrm{e}^{-V_{L}(x_{L})}\frac{\omega(x_{L},x_{R})\psi_{i}(x_{R})}{z-x_{L}}\,,$
(4.5a) $\displaystyle\widetilde{\psi}_{i}(z)$
$\displaystyle=\mathrm{e}^{V_{R}(z)}\int\differential{x}_{L,R}\mathrm{e}^{-V_{R}(x_{R})}\frac{\phi_{i}(x_{L})\omega(x_{L},x_{R})}{z-x_{R}}\,,$
(4.5b)
we define the dual Christoffel–Darboux kernel of degree $N$ as follows,
$\displaystyle\widetilde{K}_{N}(w,z)=\sum_{i=N}^{\infty}\widetilde{\psi}_{i}(w)\widetilde{\phi}_{i}(z)\,.$
(4.6)
###### Proposition 4.4 (Pair correlation of characteristic polynomial
inverses).
Let $Z=\operatorname{diag}(z_{1},\ldots,z_{M})$ and
$W=\operatorname{diag}(w_{1},\ldots,w_{M})$. The correlation function of $M$
pairs of the characteristic polynomial inverses is given by a rank $M$
determinant of the dual CD kernel depending on the relation between $N$ and
$M$ as follows.
1. $M\leq N$
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})^{-1}\det(w_{\alpha}-X_{R})^{-1}}=\frac{\mathrm{e}^{-\operatorname{tr}V_{L}(Z)}\mathrm{e}^{-\operatorname{tr}V_{R}(W)}}{\Delta_{M}(Z)\Delta_{M}(W)}\frac{Z_{N-M}}{Z_{N}}\det_{1\leq\alpha,\beta\leq
M}\widetilde{K}_{N-M}(w_{\beta},z_{\alpha})$ (4.7a)
2. $M\geq N$
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})^{-1}\det(w_{\alpha}-X_{R})^{-1}}$
$\displaystyle=\frac{(-1)^{M-N}Z_{N}^{-1}}{\Delta_{M}(Z)\Delta_{M}(W)}\det_{1\leq\alpha,\beta\leq
M}\quantity(\frac{1}{z_{\alpha}-x_{L}}\mid\omega\mid\frac{1}{w_{\beta}-x_{R}})\det_{1\leq
a,b\leq
M-N}\quantity(\sum_{\alpha,\beta=1}^{M}p_{a-1}(z_{\alpha})\widetilde{\omega}_{\alpha,\beta}q_{b-1}(w_{\beta}))$
(4.7b)
where $\widetilde{\omega}_{\alpha,\beta}$ is the inverse of
$\quantity(\frac{1}{z_{\alpha}-x_{L}}\mid\omega\mid\frac{1}{w_{\beta}-x_{R}})$.
###### Proof.
We first consider the case $M\leq N$. In this case, applying the Schur
polynomial expansion as before, we obtain
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})^{-1}\det(w_{\alpha}-X_{R})^{-1}}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)\Delta_{M}(W)}\sum_{\ell(\lambda),\ell(\mu)\leq
M}\det_{1\leq\alpha,\beta\leq
M}z_{\alpha}^{-N-\lambda_{\beta}+\beta-1}\det_{1\leq\alpha,\beta\leq
M}w_{\alpha}^{-N-\mu_{\beta}+\beta-1}$ $\displaystyle\hskip
80.00012pt\times\det_{\begin{subarray}{c}1\leq\alpha,\beta\leq M\\\ 1\leq
i,j\leq
N-M\end{subarray}}\begin{pmatrix}(x_{L}^{\lambda_{\alpha}+N-\alpha}\mid\omega\mid
x_{R}^{\mu_{\beta}+N-\beta})&(x_{L}^{\lambda_{\alpha}+N-\alpha}\mid\omega\mid
x_{R}^{N-M-j})\\\ (x_{L}^{N-M-i}\mid\omega\mid
x_{R}^{\mu_{\beta}+N-\beta})&(x_{L}^{N-M-i}\mid\omega\mid
x_{R}^{N-M-j})\end{pmatrix}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)\Delta_{M}(W)}\sum_{\ell(\lambda),\ell(\mu)\leq
M}\det_{1\leq\alpha,\beta\leq
M}z_{\alpha}^{-N-\lambda_{\beta}+\beta-1}\det_{1\leq\alpha,\beta\leq
M}w_{\alpha}^{-N-\mu_{\beta}+\beta-1}$ $\displaystyle\hskip
80.00012pt\times\det_{\begin{subarray}{c}1\leq\alpha,\beta\leq M\\\ 1\leq
i,j\leq
N-M\end{subarray}}\begin{pmatrix}(x_{L}^{\lambda_{\alpha}+N-\alpha}\mid\omega\mid
x_{R}^{\mu_{\beta}+N-\beta})&(x_{L}^{\lambda_{\alpha}+N-\alpha}\mid\omega\mid
q_{N-M-j})\\\ (p_{N-M-i}\mid\omega\mid
x_{R}^{\mu_{\beta}+N-\beta})&(p_{N-M-i}\mid\omega\mid q_{N-M-j})\end{pmatrix}$
$\displaystyle=\frac{Z_{N-M}/Z_{N}}{\Delta_{M}(Z)\Delta_{M}(W)}\sum_{\ell(\lambda),\ell(\mu)\leq
M}\det_{1\leq\alpha,\beta\leq
M}z_{\alpha}^{-N-\lambda_{\beta}+\beta-1}\det_{1\leq\alpha,\beta\leq
M}w_{\alpha}^{-N-\mu_{\beta}+\beta-1}$
$\displaystyle\quad\times\det_{1\leq\alpha,\beta\leq
M}\quantity((x_{L}^{\lambda_{\alpha}+N-\alpha}\mid\omega\mid
x_{R}^{\mu_{\beta}+N-\beta})-\sum_{i,j=0}^{N-M-1}(x_{L}^{\lambda_{\alpha}+N-\alpha}\mid\omega\mid
q_{i})(\mathsf{N}^{-1})_{i,j}(p_{j}\mid\omega\mid
x_{R}^{\mu_{\beta}+N-\beta}))\,.$ (4.8)
We remark that each element in the determinant is given by
$\displaystyle(x_{L}^{\lambda_{\alpha}+N-\alpha}\mid\omega\mid
x_{R}^{\mu_{\beta}+N-\beta})-\sum_{i,j=0}^{N-M-1}(x_{L}^{\lambda_{\alpha}+N-\alpha}\mid\omega\mid
q_{i})(\mathsf{N}^{-1})_{i,j}(p_{j}\mid\omega\mid
x_{R}^{\mu_{\beta}+N-\beta})$
$\displaystyle=(x_{L}^{\lambda_{\alpha}+N-\alpha}\mid\omega\mid
x_{R}^{\mu_{\beta}+N-\beta})$
$\displaystyle\qquad-\int\prod_{k=L,R,L^{\prime},R^{\prime}}\differential{x}_{k}\mathrm{e}^{-V_{k}(x_{k})}x_{L}^{\lambda_{\alpha}+N-\alpha}\omega(x_{L},x_{R^{\prime}})\sum_{i,j=0}^{N-M-1}q_{i}(x_{R^{\prime}})(\mathsf{N}^{-1})_{i,j}p_{j}(x_{L^{\prime}})\omega(x_{L^{\prime}},x_{R})x_{R}^{\mu_{\beta}+N-\beta}$
$\displaystyle=(x_{L}^{\lambda_{\alpha}+N-\alpha}\mid\omega\mid
x_{R}^{\mu_{\beta}+N-\beta})$
$\displaystyle\qquad-\int\prod_{k=L,R}\differential{x}_{k}\differential{x}_{k^{\prime}}\mathrm{e}^{-V_{k}(x_{k})}x_{L}^{\lambda_{\alpha}+N-\alpha}\omega(x_{L},x_{R^{\prime}})K_{N-M}(x_{R^{\prime}},x_{L^{\prime}})\omega(x_{L^{\prime}},x_{R})x_{R}^{\mu_{\beta}+N-\beta}$
$\displaystyle=\int\prod_{k=L,R}\differential{x}_{k}\differential{x}_{k^{\prime}}\mathrm{e}^{-V_{k}(x_{k})}x_{L}^{\lambda_{\alpha}+N-\alpha}\omega(x_{L},x_{R^{\prime}})\quantity(\widetilde{\omega}(x_{R^{\prime}},x_{L^{\prime}})-K_{N-M}(x_{R^{\prime}},x_{L^{\prime}}))\omega(x_{L^{\prime}},x_{R})x_{R}^{\mu_{\beta}+N-\beta}$
$\displaystyle=\int\prod_{k=L,R}\differential{x}_{k}\differential{x}_{k^{\prime}}\mathrm{e}^{-V_{k}(x_{k})}x_{L}^{\lambda_{\alpha}+N-\alpha}\omega(x_{L},x_{R^{\prime}})\quantity(\sum_{k=N-M}^{\infty}\psi_{k}(x_{R^{\prime}})\phi_{k}(x_{L^{\prime}}))\omega(x_{L^{\prime}},x_{R})x_{R}^{\mu_{\beta}+N-\beta}\,.$
(4.9)
Therefore, we obtain
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})^{-1}\det(w_{\alpha}-X_{R})^{-1}}$
$\displaystyle=\frac{Z_{N-M}/Z_{N}}{\Delta_{M}(Z)\Delta_{M}(W)}\det_{1\leq\alpha,\beta\leq
M}\quantity(\sum_{i=N-M}^{\infty}\int\prod_{k=L,R}\differential{x}_{k}\differential{x}_{k^{\prime}}\mathrm{e}^{-V_{k}(x_{k})}\frac{\omega(x_{L},x_{R^{\prime}})\psi_{i}(x_{R^{\prime}})\phi_{i}(x_{L^{\prime}})\omega(x_{L^{\prime}},x_{R})}{(z_{\alpha}-x_{L})(w_{\beta}-x_{R})})$
$\displaystyle=\frac{Z_{N-M}/Z_{N}}{\Delta_{M}(Z)\Delta_{M}(W)}\det_{1\leq\alpha,\beta\leq
M}\quantity(\mathrm{e}^{-V_{L}(z_{\alpha})}\mathrm{e}^{-V_{R}(w_{\beta})}\sum_{i=N-M}^{\infty}\widetilde{\phi}_{i}(z_{\alpha})\widetilde{\psi}_{i}(w_{\beta}))$
$\displaystyle=\frac{Z_{N-M}/Z_{N}}{\Delta_{M}(Z)\Delta_{M}(W)}\mathrm{e}^{-\operatorname{tr}V_{L}(Z)}\mathrm{e}^{-\operatorname{tr}V_{R}(W)}\det_{1\leq\alpha,\beta\leq
M}\quantity(\widetilde{K}_{N-M}(w_{\beta},z_{\alpha}))\,.$ (4.10)
This completes the derivation of the formula (4.7a).
We then consider the case $M\geq N$. In this case, we similarly obtain the
formula (4.7b) as follows,
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})^{-1}\det(w_{\alpha}-X_{R})^{-1}}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)\Delta_{M}(W)}$
$\displaystyle\times\sum_{\begin{subarray}{c}0\leq\lambda_{N}\leq\cdots\leq\lambda_{1}\leq\infty\\\
0\leq\mu_{N}\leq\cdots\leq\mu_{1}\leq\infty\end{subarray}}\det_{\begin{subarray}{c}i=1,\ldots,N\\\
\alpha=1,\ldots,M\\\
a=1,\ldots,M-N\end{subarray}}\begin{pmatrix}z_{\alpha}^{-\lambda_{i}+i-(N+1)}\\\
p_{a-1}(z_{\alpha})\end{pmatrix}\det_{1\leq i,j\leq
N}(x_{L}^{\lambda_{i}+N-i}\mid\omega\mid
x_{R}^{\mu_{j}+N-j})\det_{\begin{subarray}{c}j=1,\ldots,N\\\
\beta=1,\ldots,M\\\
b=1,\ldots,M-N\end{subarray}}\begin{pmatrix}w_{\beta}^{-\mu_{j}+j-(N+1)}\\\
q_{b-1}(w_{\beta})\end{pmatrix}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)\Delta_{M}(W)}\frac{1}{N!^{2}}\sum_{\begin{subarray}{c}0\leq
r_{1},\cdots,r_{N}\leq\infty\\\ 0\leq s_{1},\cdots,s_{N}\leq\infty\\\
r_{i}\neq r_{j},s_{i}\neq
s_{j}\end{subarray}}\det_{\begin{subarray}{c}i=1,\ldots,N\\\
\alpha=1,\ldots,M\\\
a=1,\ldots,M-N\end{subarray}}\begin{pmatrix}z_{\alpha}^{-r_{i}-1}\\\
p_{a-1}(z_{\alpha})\end{pmatrix}\det_{1\leq i,j\leq
N}(x_{L}^{r_{i}}\mid\omega\mid
x_{R}^{s_{j}})\det_{\begin{subarray}{c}j=1,\ldots,N\\\ \beta=1,\ldots,M\\\
b=1,\ldots,M-N\end{subarray}}\begin{pmatrix}w_{\beta}^{-s_{j}-1}\\\
q_{b-1}(w_{\beta})\end{pmatrix}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)\Delta_{M}(W)}\det_{\begin{subarray}{c}1\leq\alpha,\beta\leq
M\\\ 1\leq a,b\leq
M-N\end{subarray}}\begin{pmatrix}\displaystyle\sum_{r,s=0}^{\infty}z_{\alpha}^{-r-1}w_{\beta}^{-s-1}(x_{L}^{r}\mid\omega\mid
x_{R}^{s})&q_{b-1}(w_{\beta})\\\ p_{a-1}(z_{\alpha})&0\end{pmatrix}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)\Delta_{M}(W)}\det_{\begin{subarray}{c}1\leq\alpha,\beta\leq
M\\\ 1\leq a,b\leq
M-N\end{subarray}}\begin{pmatrix}\quantity(\frac{1}{z_{\alpha}-x_{L}}\mid\omega\mid\frac{1}{w_{\beta}-x_{R}})&q_{b-1}(w_{\beta})\\\
p_{a-1}(z_{\alpha})&0\end{pmatrix}$
$\displaystyle=\frac{(-1)^{M-N}Z_{N}^{-1}}{\Delta_{M}(Z)\Delta_{M}(W)}\det_{1\leq\alpha,\beta\leq
M}\quantity(\frac{1}{z_{\alpha}-x_{L}}\mid\omega\mid\frac{1}{w_{\beta}-x_{R}})\det_{1\leq
a,b\leq
M-N}\quantity(\sum_{\alpha,\beta=1}^{M}p_{a-1}(z_{\alpha})\widetilde{\omega}_{\alpha,\beta}q_{b-1}(w_{\beta}))$
(4.11)
This completes the proof. ∎
### 4.3 Mixed pair correlation
We consider the mixed-type pair correlation function of the characteristic
polynomials.
###### Proposition 4.5.
Let $Z=\operatorname{diag}(z_{1},\ldots,z_{M})$ and
$W=\operatorname{diag}(w_{1},\ldots,w_{M})$. The following determinantal
formulas hold for the mixed-pair correlation for $M\leq N$.
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})\det(w_{\alpha}-X_{R})^{-1}}$
$\displaystyle=\frac{Z_{N-M}/Z_{N}}{\Delta_{M}(Z)\Delta_{M}(W)}\det_{\begin{subarray}{c}\alpha=1,\ldots,M\\\
\beta=1,\ldots,2M\end{subarray}}\begin{pmatrix}P_{N+M-\beta}(z_{\alpha})\\\
\widetilde{Q}_{N+M-\beta}(w_{\alpha})\end{pmatrix}\,,$ (4.12a)
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})^{-1}\det(w_{\alpha}-X_{R})}$
$\displaystyle=\frac{Z_{N-M}/Z_{N}}{\Delta_{M}(Z)\Delta_{M}(W)}\det_{\begin{subarray}{c}\alpha=1,\ldots,M\\\
\beta=1,\ldots,2M\end{subarray}}\begin{pmatrix}\widetilde{P}_{N+M-\beta}(z_{\alpha})\\\
Q_{N+M-\beta}(w_{\alpha})\end{pmatrix}\,.$ (4.12b)
###### Proof.
Applying the Schur polynomial expansion and the co-factor expansion as before,
we obtain the following,
$\displaystyle\expectationvalue{\prod_{\alpha=1}^{M}\det(z_{\alpha}-X_{L})\det(w_{\alpha}-X_{R})^{-1}}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)\Delta_{M}(W)}\sum_{\ell(\lambda)\leq
M}\det_{1\leq\alpha,\beta\leq
M}\quantity(w_{\alpha}^{-\lambda_{\beta}+\beta-N-1})\det_{\begin{subarray}{c}i=1,\ldots,N+M\\\
\alpha,\beta=1,\ldots,M\\\
k=M+1,\ldots,N\end{subarray}}\begin{pmatrix}z_{\alpha}^{N+M-i}\\\
(x_{L}^{N+M-i}\mid\omega\mid x_{R}^{\lambda_{\beta}+N-\beta})\\\
(x_{L}^{N+M-i}\mid\omega\mid x_{R}^{N-k})\end{pmatrix}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)\Delta_{M}(W)}\sum_{\ell(\lambda)\leq
M}\det_{1\leq\alpha,\beta\leq
M}\quantity(w_{\alpha}^{-\lambda_{\beta}+\beta-N-1})\det_{\begin{subarray}{c}i=1,\ldots,N+M\\\
\alpha,\beta=1,\ldots,M\\\
k=M+1,\ldots,N\end{subarray}}\begin{pmatrix}p_{N+M-i}(z_{\alpha})\\\
(p_{N+M-i}\mid\omega\mid x_{R}^{\lambda_{\beta}+N-\beta})\\\
(p_{N+M-i}\mid\omega\mid x_{R}^{N-k})\end{pmatrix}$
$\displaystyle=\frac{Z_{N}^{-1}}{\Delta_{M}(Z)\Delta_{M}(W)}\det_{\begin{subarray}{c}i=1,\ldots,N+M\\\
\alpha,\beta=1,\ldots,M\\\
k=M+1,\ldots,N\end{subarray}}\begin{pmatrix}p_{N+M-i}(z_{\alpha})\\\
\widetilde{q}_{N+M-i}(w_{\beta})\\\ (p_{N+M-i}\mid\omega\mid
x_{R}^{N-k})\end{pmatrix}\,.$ (4.13)
Then, the determinant part is given by
$\displaystyle\det_{\begin{subarray}{c}i=1,\ldots,N+M\\\
\alpha,\beta=1,\ldots,M\\\
k=M+1,\ldots,N\end{subarray}}\begin{pmatrix}p_{N+M-i}(z_{\alpha})\\\
\widetilde{q}_{N+M-i}(w_{\beta})\\\ (p_{N+M-i}\mid\omega\mid
x_{R}^{N-k})\end{pmatrix}$
$\displaystyle=\det_{\begin{subarray}{c}\alpha,\beta,\gamma,\delta=1,\ldots,M\\\
k,l=1,\ldots,N-M\end{subarray}}\begin{pmatrix}p_{N+M-\gamma}(z_{\alpha})&p_{N-\delta}(z_{\alpha})&p_{N-M-l}(z_{\alpha})\\\
\widetilde{q}_{N+M-\gamma}(w_{\beta})&\widetilde{q}_{N-\delta}(w_{\beta})&\widetilde{q}_{N-M-l}(w_{\beta})\\\
(p_{N+M-\gamma}\mid\omega\mid q_{N-M-k})&(p_{N-\delta}\mid\omega\mid
q_{N-M-k})&(p_{N-M-l}\mid\omega\mid q_{N-M-k})\end{pmatrix}$
$\displaystyle=\det_{\begin{subarray}{c}\alpha,\beta,\gamma,\delta=1,\ldots,M\\\
k,l=1,\ldots,N-M\end{subarray}}\begin{pmatrix}P_{N+M-\gamma}(z_{\alpha})&P_{N-\delta}(z_{\alpha})&P_{N-M-l}(z_{\alpha})\\\
\widetilde{Q}_{N+M-\gamma}(w_{\beta})&\widetilde{Q}_{N-\delta}(w_{\beta})&\widetilde{Q}_{N-M-l}(w_{\beta})\\\
0&0&h_{N-M-l}\,\delta_{N-M-l,N-M-k}\end{pmatrix}$
$\displaystyle=Z_{N-M}\det_{\alpha,\beta,\gamma,\delta=1,\ldots,M}\begin{pmatrix}P_{N+M-\gamma}(z_{\alpha})&P_{N-\delta}(z_{\alpha})\\\
\widetilde{Q}_{N+M-\gamma}(w_{\beta})&\widetilde{Q}_{N-\delta}(w_{\beta})\end{pmatrix}\,.$
(4.14)
This completes the derivation of (4.12a). The other formula (4.12b) can also be derived in the same way. ∎
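The last step of (4.14) is elementary linear algebra: once the lower-left blocks vanish, the block matrix is block upper-triangular with a diagonal lower-right block $\operatorname{diag}(h_{N-M-k})$, so the determinant factorizes. A quick numerical sanity check of this identity (block sizes and entries are hypothetical, not tied to any specific ensemble) is:

```python
# Sanity check of the block-determinant identity behind the last step of
# (4.14): det([[A, B], [0, D]]) = det(A) * det(D) when D is square (here
# D = diag(h_0, ..., h_{N-M-1})).  Sizes and entries are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
twoM, NmM = 6, 4                       # stand-ins for 2M and N - M
A = rng.standard_normal((twoM, twoM))  # the (P, Q-tilde) block
B = rng.standard_normal((twoM, NmM))
h = rng.standard_normal(NmM)           # stand-ins for the norms h_k

full = np.block([[A, B], [np.zeros((NmM, twoM)), np.diag(h)]])
assert np.isclose(np.linalg.det(full), np.linalg.det(A) * np.prod(h))
```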
###### Remark 4.6.
For $M=1$, the mixed-pair correlation functions are given by
$\displaystyle\expectationvalue{\frac{\det(z-X_{L})}{\det(w-X_{R})}}$
$\displaystyle=\frac{Z_{N-1}}{Z_{N}}\det\begin{pmatrix}P_{N}(z)&P_{N-1}(z)\\\
\widetilde{Q}_{N}(w)&\widetilde{Q}_{N-1}(w)\end{pmatrix}$
$\displaystyle=\frac{1}{h_{N-1}}\left(P_{N}(z)\widetilde{Q}_{N-1}(w)-P_{N-1}(z)\widetilde{Q}_{N}(w)\right)\,,$
(4.15a) $\displaystyle\expectationvalue{\frac{\det(w-X_{R})}{\det(z-X_{L})}}$
$\displaystyle=\frac{Z_{N-1}}{Z_{N}}\det\begin{pmatrix}\widetilde{P}_{N}(z)&\widetilde{P}_{N-1}(z)\\\
{Q}_{N}(w)&{Q}_{N-1}(w)\end{pmatrix}$
$\displaystyle=\frac{1}{h_{N-1}}\left(\widetilde{P}_{N}(z){Q}_{N-1}(w)-\widetilde{P}_{N-1}(z){Q}_{N}(w)\right)\,.$
(4.15b)
These expressions suggest that the mixed-pair correlation could also be written in terms of the associated CD kernel; see [SF03, BDS03, BS06, EKR15] for details. We leave this issue for future study.
## References
* [ABDF11] G. Akemann, J. Baik, and P. Di Francesco (eds.), _The Oxford Handbook of Random Matrix Theory_, Oxford Handbooks in Mathematics, Oxford Univ. Press, 2011.
* [ASW20] G. Akemann, E. Strahov, and T. R. Würfel, _Averages of Products and Ratios of Characteristic Polynomials in Polynomial Ensembles_ , Annales Henri Poincare 21 (2020), no. 12, 3973–4002, arXiv:2003.08128 [math-ph].
* [AV03] G. Akemann and G. Vernizzi, _Characteristic polynomials of complex random matrix models_ , Nucl. Phys. B 660 (2003), 532–556, arXiv:hep-th/0212051.
* [BDS03] J. Baik, P. Deift, and E. Strahov, _Products and ratios of characteristic polynomials of random Hermitian matrices_ , J. Math. Phys. 44 (2003), 3657–3670, arXiv:math-ph/0304016 [math-ph].
* [BE03] M. Bertola and B. Eynard, _Mixed correlation functions of the two matrix model_ , J. Phys. A 36 (2003), 7733–7750, arXiv:hep-th/0303161.
* [BEH02] M. Bertola, B. Eynard, and J. Harnad, _Duality, biorthogonal polynomials and multimatrix models_ , Commun. Math. Phys. 229 (2002), 73–120, arXiv:nlin/0108049.
* [BEH03a] , _Differential systems for biorthogonal polynomials appearing in 2-matrix models and the associated Riemann-Hilbert problem_ , Commun. Math. Phys. 243 (2003), 193–240, arXiv:nlin/0208002.
* [BEH03b] , _Duality of spectral curves arising in two matrix models_ , Theor. Math. Phys. 134 (2003), 27–38, arXiv:nlin/0112006.
* [Ber11] M. Bertola, _The Oxford Handbook of Random Matrix Theory_ , ch. Two-matrix models and biorthogonal polynomials, pp. 310–328, Oxford Univ. Press, 2011.
* [BGS09] M. Bertola, M. Gekhtman, and J. Szmigielski, _The Cauchy Two-Matrix Model_ , Commun. Math. Phys. 287 (2009), no. 3, 983–1014, arXiv:0804.0873 [math-ph].
* [BH00] E. Brézin and S. Hikami, _Characteristic Polynomials of Random Matrices_ , Commun. Math. Phys. 214 (2000), 111–135, arXiv:math-ph/9910005.
* [BH11] , _The Oxford Handbook of Random Matrix Theory_ , ch. Characteristic polynomials, pp. 398–414, Oxford Univ. Press, 2011.
* [BH16] , _Random Matrix Theory with an External Source_, SpringerBriefs in Mathematical Physics, vol. 19, Springer, 2016.
* [Bor98] A. Borodin, _Biorthogonal ensembles_ , Nucl. Phys. B 536 (1998), no. 3, 704–732, arXiv:math/9804027.
* [BS06] A. Borodin and E. Strahov, _Averages of Characteristic Polynomials in Random Matrix Theory_ , Commun. Pure Appl. Math. 59 (2006), 161–253, arXiv:math-ph/0407065 [math-ph].
* [EKR15] B. Eynard, T. Kimura, and S. Ribault, _Random matrices_ , arXiv:1510.04430 [math-ph].
* [EM98] B. Eynard and M. L. Mehta, _Matrices coupled in a chain. I. Eigenvalue correlations_ , J. Phys. A: Math. Gen. 31 (1998), no. 19, 4449–4456, arXiv:cond-mat/9710230 [cond-mat].
* [Eyn05] B. Eynard, _The 2-matrix model, biorthogonal polynomials, Riemann-Hilbert problem, and algebraic geometry_ , arXiv:math-ph/0504034.
* [For10] P. J. Forrester, _Log-gases and random matrices_ , Princeton University Press, Princeton, 2010.
* [FS03] Y. V. Fyodorov and E. Strahov, _An Exact formula for general spectral correlation function of random Hermitian matrices_ , J. Phys. A36 (2003), 3203–3214, arXiv:math-ph/0204051 [math-ph].
* [IZ80] C. Itzykson and J. B. Zuber, _The Planar Approximation. II_ , J. Math. Phys. 21 (1980), 411.
* [Kim14a] T. Kimura, _Duality and integrability of a supermatrix model with an external source_ , PTEP 2014 (2014), no. 12, 123A01, arXiv:1410.0680 [math-ph].
* [Kim14b] , _Note on a duality of topological branes_ , PTEP 2014 (2014), no. 10, 103B04, arXiv:1401.0956 [hep-th].
* [KM21] T. Kimura and E. A. Mazenc, _The Schur Expansion of Characteristic Polynomials and Random Matrices_ , arXiv:2111.02365 [hep-th].
* [KS14] A. B. J. Kuijlaars and D. Stivigny, _Singular values of products of random matrices and polynomial ensembles_ , Random Matrices: Theory and Applications 03 (2014), no. 03, 1450011, arXiv:1404.5802 [math.PR].
* [Mac15] I. G. Macdonald, _Symmetric Functions and Hall Polynomials_ , 2nd ed., Oxford Classic Texts in the Physical Sciences, Oxford University Press, London, 2015.
* [Meh04] M. L. Mehta, _Random Matrices_, 3rd ed., Pure and Applied Mathematics, vol. 142, Academic Press, 2004.
* [Mor94] A. Y. Morozov, _Integrability and matrix models_ , Phys. Usp. 37 (1994), 1–55, arXiv:hep-th/9303139 [hep-th].
* [Ora11] N. Orantin, _The Oxford Handbook of Random Matrix Theory_ , ch. Chain of matrices, loop equations, and topological recursion, pp. 329–352, Oxford Univ. Press, 2011, arXiv:0911.5089 [math-ph].
* [SF03] E. Strahov and Y. V. Fyodorov, _Universal results for correlations of characteristic polynomials: Riemann-Hilbert approach_ , Commun. Math. Phys. 241 (2003), 343–382, arXiv:math-ph/0210010.
* [ST21] L. Santilli and M. Tierz, _Schur expansion of random-matrix reproducing kernels_ , J. Phys. A 54 (2021), no. 43, 435202, arXiv:2106.04168 [math-ph].
# Non-uniqueness of Leray weak solutions of the forced MHD equations
Funding: Wang was supported by National Key R$\&$D Program of China (No. 2022YFA1005601), National Natural Science Foundation of China (No. 12371114) and Outstanding Young Foundation of Jiangsu Province (No. BK20200042). Xu was supported by the Postdoctoral Science Foundation of China (No. 2023M731381). Zhang was supported by National Natural Science Foundation of China (No. 12301133), the Postdoctoral Science Foundation of China (No. 2023M741441) and Jiangsu Education Department (No. 23KJB110007).
Jun Wang, Fei Xu, Yong Zhang
School of Mathematical Sciences, Jiangsu University, Zhenjiang, 212013, P.R. China
E-mail<EMAIL_ADDRESS>
School of Mathematical Sciences, Jiangsu University, Zhenjiang, 212013, P.R. China
E-mail<EMAIL_ADDRESS>(Corresponding author)
School of Mathematical Sciences, Jiangsu University, Zhenjiang, 212013, P.R. China
E-mail<EMAIL_ADDRESS>
###### Abstract
In this paper, we exhibit the non-uniqueness of Leray weak solutions of the forced magnetohydrodynamic (MHD for short) equations. Similar to the solutions constructed in [12], we first find a special steady solution of the ideal MHD equations whose linear instability was proved in [21]. It is possible to perturb this unstable scenario of the ideal MHD equations to the 3D viscous and resistive MHD equations, which can be regarded as the first unstable ”background” solution. Our perturbation argument is based on the spectral theoretic approach [35]. The second solution we construct is a trajectory on the unstable manifold associated to the unstable steady solution. It is worth noting that these solutions live precisely on the borderline of the known well-posedness theory.
_Keywords:_ MHD equations, Leray weak solution, non-uniqueness, magneto-rotational instability (MRI), contraction mapping principle.
_AMS Subject Classification (2020):_ 76B15, 47J15, 76B03.
## 1 Introduction and main results
Consider the three-dimensional magnetohydrodynamic (MHD for short) system on
$\mathbb{R}^{3}$
$\left\\{\begin{array}[]{llll}\partial_{t}v+v\cdot\nabla v-\Delta v+\nabla
p=H\cdot\nabla H+f_{1}\\\ \partial_{t}H+v\cdot\nabla H-\Delta H=H\cdot\nabla
v+f_{2},\\\ \text{div}v=\text{div}H=0,\end{array}\right.$ (1.1)
where $v(t,x):(0,T)\times\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}$,
$H(t,x):(0,T)\times\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}$,
$p(t,x)\in\mathbb{R}$ correspond to the velocity field, magnetic field and
pressure of the fluid, respectively, and $f=(f_{1},f_{2})$ is the given body
force. We impose the initial condition
$(v,H)(0,x)=(v^{0},H^{0})(x),\quad x\in\mathbb{R}^{3}.$ (1.2)
Among various hydrodynamic models, the viscous and resistive MHD system is a
canonical macroscopic model to describe the motion of conductive fluid, such
as plasma or liquid metals, under a complicated interaction between the
electromagnetic phenomenon and fluid dynamical phenomenon (see [1]). We refer
the reader to [2, 3, 4, 5] for more physical interpretations of the MHD
system. In particular, in the case without magnetic fields, the system (1.1)
would reduce to the classical incompressible Navier-Stokes equation (NSE for
short). When ignoring viscous and resistive effects, the system (1.1) would
become the ideal MHD system, namely,
$\left\\{\begin{array}[]{llll}\partial_{t}v+v\cdot\nabla v+\nabla
p=H\cdot\nabla H\\\ \partial_{t}H+v\cdot\nabla H=H\cdot\nabla v,\\\
\text{div}v=\text{div}H=0,\end{array}\right.$ (1.3)
with the initial condition
$(v,H)(0,x)=(v^{0},H^{0})(x),\quad x\in\mathbb{R}^{3}.$ (1.4)
The incompressible ideal MHD system (1.3) is the classical macroscopic model
coupling the Maxwell equations to the evolution of an electrically conducting
incompressible fluid [3, 4]. In the case $H=0$, it’s obvious that (1.3)
reduces to the Euler equation.
The well-posedness problem for the NSE and the MHD equations has been extensively studied in the literature. For initial data with finite energy, the existence of a global weak solution $u$ to the NSE was first proved by Leray [6] in 1934, and later by Hopf [7] in 1951 in bounded domains; such a solution satisfies $u\in\mathcal{C}_{weak}([0,+\infty);L^{2}({\Omega}))\cap L^{2}([0,+\infty);\dot{H}^{1}({\Omega}))$ and obeys the following energy inequality
$\|u(t)\|_{L^{2}}^{2}+2\nu\int_{t_{0}}^{t}\|\nabla u(s)\|_{L^{2}}^{2}ds\leq\|u(t_{0})\|_{L^{2}}^{2}$ (1.5)
for any $t>0$ and a.e. $t_{0}\geq 0$. Similar to the Navier-Stokes equations, a global weak solution (in the sense of Definition 1.1) and a local strong solution to (1.1) with the initial boundary value condition were constructed by Duvaut and Lions [8]. Later, the results were extended to the Cauchy problem by Sermange and Temam [5], whose main tools are the regularity theory of the Stokes operator and the energy method.
Now let us first recall the notion of a Leray weak solution of the MHD system (1.1) for a divergence-free initial value $(v^{0},H^{0})(x)$ and body force $f_{i}\in L_{t}^{1}L_{x}^{2}$ $(i=1,2)$. Denote by $L_{\sigma}^{2}(\mathbb{R}^{3})$ the space of divergence-free vector fields in $L^{2}(\mathbb{R}^{3})$.
###### Definition 1.1.
The pair $(v,H)\in L^{\infty}(0,T;L_{\sigma}^{2}(\mathbb{R}^{3}))\cap L^{2}(0,T;W^{1,2}(\mathbb{R}^{3}))$ is called a Leray weak solution in $[0,T)\times\mathbb{R}^{3}$ if the following hold:
(1) The pair $(v,H)$ solves (1.1) in the distribution sense,
$\displaystyle\int_{0}^{T}\int_{\mathbb{R}^{3}}v\cdot\partial_{t}\varphi-v\cdot\nabla v\cdot\varphi+H\cdot\nabla H\cdot\varphi-\nabla v\cdot\nabla\varphi dxdt$ $\displaystyle=-\int_{\mathbb{R}^{3}}v^{0}\cdot\varphi(\cdot,0)dx-\int_{0}^{T}\int_{\mathbb{R}^{3}}f_{1}\cdot\varphi dxdt,$
$\displaystyle\int_{0}^{T}\int_{\mathbb{R}^{3}}H\cdot\partial_{t}\phi-v\cdot\nabla H\cdot\phi+H\cdot\nabla v\cdot\phi-\nabla H\cdot\nabla\phi dxdt$ $\displaystyle=-\int_{\mathbb{R}^{3}}H^{0}\cdot\phi(\cdot,0)dx-\int_{0}^{T}\int_{\mathbb{R}^{3}}f_{2}\cdot\phi dxdt$ (1.6)
for all $\varphi,\phi\in C_{0}^{\infty}([0,T)\times\mathbb{R}^{3})$ and the initial data $v^{0},H^{0}\in L_{\sigma}^{2}(\mathbb{R}^{3})$.
(2) The solution pair $(v,H)$ satisfies the energy inequality
$\displaystyle\frac{1}{2}\int_{\mathbb{R}^{3}}|v(t)|^{2}+|H(t)|^{2}dx+\int_{0}^{T}\int_{\mathbb{R}^{3}}|\nabla v(t)|^{2}+|\nabla H(t)|^{2}dxdt$ $\displaystyle\leq\frac{1}{2}\int_{\mathbb{R}^{3}}|v^{0}|^{2}+|H^{0}|^{2}dx+\int_{0}^{T}\int_{\mathbb{R}^{3}}f_{1}\cdot v+f_{2}\cdot H\,dxdt.$
(1.7)
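For smooth solutions the nonlinear terms drop out of the $L^{2}$ balance, so the structure of (1.7) is already visible on the diffusive core of (1.1). The following sketch (a 1D periodic heat flow with a random datum, purely illustrative and not the MHD system itself) verifies the resulting energy identity mode by mode:

```python
# Illustrative check (1D periodic heat flow, not the MHD system): the
# energy balance underlying (1.7) holds exactly for e^{t Delta}, i.e.
# (1/2)||u(T)||^2 + int_0^T ||grad u||^2 ds = (1/2)||u(0)||^2.
import numpy as np

n, L, T = 256, 2 * np.pi, 1.0
k = np.fft.fftfreq(n, d=1.0 / n)                  # integer wavenumbers
u0h = np.fft.fft(np.random.default_rng(5).standard_normal(n))

w = (L / n) / n                                   # Parseval weight
energy = lambda vh: 0.5 * w * np.sum(np.abs(vh) ** 2)

uTh = u0h * np.exp(-k ** 2 * T)                   # exact spectral solution
# int_0^T ||grad u||^2 ds, computed in closed form mode by mode
diss = w * np.sum(np.abs(u0h) ** 2 * (1 - np.exp(-2 * k ** 2 * T)) / 2)

print(energy(uTh) + diss - energy(u0h))           # ~ 0 up to roundoff
```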
It’s known that Leray weak solutions of MHD system (1.1) enjoy several nice
properties, including the partial regularity and weak-strong uniqueness.
However, the uniqueness of the weak solutions still remains one of the most
challenging problems. In this paper, we will answer the uniqueness question in
the negative. Our main result can be stated as follows.
###### Theorem 1.2.
There exist two distinct Leray weak solutions $(v_{1},H_{1})$, $(v_{2},H_{2})$
to the forced viscous and resistive MHD system (1.1) on
$(0,T)\times\mathbb{R}^{3}$ with body force
$(f_{1},f_{2})\in(L_{t}^{1}L_{x}^{2})^{2}$.
Recall the scaling property of the MHD equations: if $(v(x,t),H(x,t),p(x,t))$ is a solution of (1.1) with forces $f_{i}(x,t)$ $(i=1,2)$, then for any $\lambda>0$,
$v^{\lambda}(x,t)=\lambda v(\lambda x,\lambda^{2}t),~{}~{}~{}H^{\lambda}(x,t)=\lambda H(\lambda x,\lambda^{2}t),~{}~{}~{}p^{\lambda}(x,t)=\lambda^{2}p(\lambda x,\lambda^{2}t)$ (1.8)
is also a solution, with forces $f_{i}^{\lambda}(x,t)=\lambda^{3}f_{i}(\lambda x,\lambda^{2}t)$. A particular class of solutions are the self-similar solutions, that is, solutions of the MHD equations on $\mathbb{R}^{3}\times\mathbb{R}$ invariant under this scaling symmetry. We will finish the proof of Theorem 1.2 by following techniques similar to those developed
finish the proof of Theorem 1.2 by following the similar techniques developed
in recent work of Albritton-Brué-Colombo [12]. However, it is worth noting
that the work is not just a parallel extension. The addition of magnetic field
would bring some new difficulties. Especially, it is vital to construct a
smooth and decaying unstable steady state of the forced MHD equations in three
dimension. Considering the similarity variables
$\xi=\frac{x}{\sqrt{t}},~{}~{}~{}\tau=\log t,$ (1.9)
the solutions can be expressed in similarity variables as
$\begin{split}&v(x,t)=\frac{1}{\sqrt{t}}V(\xi,\tau),~{}~{}H(x,t)=\frac{1}{\sqrt{t}}W(\xi,\tau),\\\
&f(x,t)=\frac{1}{t^{\frac{3}{2}}}F(\xi,\tau),~{}~{}p(x,t)=\frac{1}{t}P(\xi,\tau)\end{split}$
If $(v,H,p)$ satisfies (1.1) with forces $f_{i}(x,t)$ $(i=1,2)$, then the profile $(V,W,P)$ satisfies the time-dependent Leray equations
$\left\\{\begin{array}[]{llll}\partial_{\tau}V-~{}\frac{1}{2}(1+\xi\cdot\nabla_{\xi})V+V\cdot\nabla
V-W\cdot\nabla W-\Delta_{\xi}V+\nabla P=F_{1},\\\
\partial_{\tau}W-~{}\frac{1}{2}(1+\xi\cdot\nabla_{\xi})W+V\cdot\nabla
W-W\cdot\nabla V-\Delta_{\xi}W=F_{2}.\end{array}\right.$ (1.10)
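The bookkeeping behind (1.10) can be verified symbolically on a scalar caricature. The sketch below (assuming sympy is available; the profile $V$ is a hypothetical test function, and only the 1D heat part of the system is used) checks that substituting $v=t^{-1/2}V(x/\sqrt{t},\log t)$ into $\partial_{t}v-\partial_{x}^{2}v$ reproduces exactly the linear terms $\partial_{\tau}V-\frac{1}{2}(1+\xi\partial_{\xi})V-\partial_{\xi}^{2}V$ appearing in (1.10):

```python
# Minimal sympy sketch (scalar 1D caricature, not the full MHD system):
# t^{3/2} (d_t v - d_xx v) with v = t^{-1/2} V(x/sqrt(t), log t) equals
# d_tau V - (1/2)(1 + xi d_xi) V - d_xi^2 V, as in (1.10).
import sympy as sp

x, t = sp.symbols('x t', positive=True)
xi, tau = x / sp.sqrt(t), sp.log(t)

Xi, Tau = sp.symbols('Xi Tau')
Vfun = sp.exp(Tau) * sp.exp(-Xi ** 2)          # hypothetical test profile

v = t ** sp.Rational(-1, 2) * Vfun.subs({Xi: xi, Tau: tau})
physical = t ** sp.Rational(3, 2) * (sp.diff(v, t) - sp.diff(v, x, 2))

leray = (sp.diff(Vfun, Tau)
         - sp.Rational(1, 2) * (Vfun + Xi * sp.diff(Vfun, Xi))
         - sp.diff(Vfun, Xi, 2))

assert sp.simplify(physical - leray.subs({Xi: xi, Tau: tau})) == 0
```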
Special self-similar solutions of (1.1) correspond to steady states of (1.10). We will find the first weak solution $(\overline{V}(\xi),\overline{W}(\xi))$ of (1.10), which is a linearly unstable steady solution. That is to say, the linearized MHD equations around
the steady state $(\overline{V},\overline{W})$
$\left\\{\begin{array}[]{llll}\partial_{\tau}V-(\frac{1}{2}+\frac{1}{2}\xi\cdot\nabla_{\xi}+\Delta_{\xi})V+\mathbb{P}(\overline{V}\cdot\nabla
V+V\cdot\nabla\overline{V}-\overline{W}\cdot\nabla
W-W\cdot\nabla\overline{W})=0,\\\
\partial_{\tau}W-(\frac{1}{2}+\frac{1}{2}\xi\cdot\nabla_{\xi}+\Delta_{\xi})W+\mathbb{P}(\overline{V}\cdot\nabla
W+V\cdot\nabla\overline{W}-\overline{W}\cdot\nabla
V-W\cdot\nabla\overline{V})=0\end{array}\right.$ (1.11)
have a nontrivial solution of the form
$(V(\xi,\tau),W(\xi,\tau))=(e^{\lambda\tau}\widetilde{V}(\xi),e^{\lambda\tau}\widetilde{W}(\xi))$
with $\lambda>0$. In addition, it follows from Section 3 that we can rewrite
(1.11) as
$\partial_{\tau}\Xi-\frac{1}{2}(1+\xi\cdot\nabla_{\xi})\Xi-\Delta_{\xi}\Xi+\mathbb{P}(\mathfrak{B}(\overline{\Xi},\Xi)+\mathfrak{B}(\Xi,\overline{\Xi}))=0,$
(1.12)
where $\Xi=(V(\xi,\tau),W(\xi,\tau))$ and
$\overline{\Xi}=(\overline{V}(\xi),\overline{W}(\xi)).$ We can also say
$\overline{\Xi}$ is linearly unstable for the dynamics of (1.12) if there
exists an unstable eigenvalue for the linearized operator
$-L_{ss}\Xi=-\frac{1}{2}(1+\xi\cdot\nabla_{\xi})\Xi-\Delta_{\xi}\Xi+\mathbb{P}(\mathfrak{B}(\overline{\Xi},\Xi)+\mathfrak{B}(\Xi,\overline{\Xi})).$
The second solution of (1.10) is a trajectory on the unstable manifold
associated to the most unstable eigenvalue which will be constructed in
Section 5.
Before making a comment on our result in more detail, let us review the
literature on some significant progress towards the non-uniqueness of the
Euler equations and the Navier-Stokes equations. In two recent papers [9] and [10], Vishik settled the non-uniqueness of the Euler equations by constructing two Leray-Hopf weak solutions. One is an explicit unstable steady vortex for the Euler dynamics in similarity variables, and the other is a trajectory on the unstable manifold associated to the unstable steady state, which lives precisely on the borderline of the known well-posedness theory. Later, Albritton et al. [11] followed the strategy of Vishik and made some improvements. Motivated by Vishik’s work, Albritton et al. [12] then constructed a vortex ring which ’lifts’ Vishik’s unstable vortex to three dimensions, proving the non-uniqueness of the Navier-Stokes equations in the same way. In addition, we would also like to mention two particularly important works. The first contribution was made by Jia [13, 14, 15], who developed a program towards non-uniqueness of the Navier-Stokes equations without external
force. Compared to Vishik’s approach, the self-similar solutions in [14] are
far from explicit. Therefore, the spectral condition therein seems difficult
to verify analytically, although it has been checked with non-rigorous
numerics in [14]. Second, Buckmaster and Vicol constructed non-unique
distributional solutions of the Navier-Stokes equations in [16] (see also
[17]) and the ideal MHD [18] equations with finite kinetic energy via the
powerful method of convex integration. Recently, the author in [19, 20] proved
the sharp non-uniqueness of weak solutions to 3D viscous and resistive MHD
with finite kinetic energy via method of convex integration. However, these
results mentioned above using convex integration schemes are far from reaching
the regularity $\nabla u\in L_{t,x}^{2}$.
In this paper, we will establish the non-uniqueness of the MHD equations (1.1) based on the Leray equations (1.10). To proceed, we allow a force in (1.1), which gives us the freedom to search for a more explicit unstable profile. The rest of this paper is arranged as follows. In Section 2, we mainly review the linear instability of the axisymmetric ideal MHD equations around a rotating flow $(v_{0},H_{0})$ established in [21], which contributes to constructing an unstable steady-state profile of (1.10) by choosing a suitable force. In Section 3, we show that the linear instability of the axisymmetric case in Theorem 2.1 can be extended to the more general case. In Section 4, we perturb this ideal MHD unstable scenario to the 3D viscous and resistive MHD equations based on the spectral theoretic approach [35]; in other words, we establish the linear instability of the MHD equations. In Section 5, we use the linear instability proved in Theorem 4.1 to construct the second Leray weak solution of the forced MHD equations.
## 2 Preliminaries
Firstly, let us pay attention to one simple axisymmetric steady solution
$(v_{0},H_{0})$ among explicit solutions of the incompressible ideal MHD
equations (1.3) (see [22, 23]), where $(v_{0},H_{0})$ is a rotating flow with
a vertical magnetic field, that is,
$\left\\{\begin{array}[]{llll}v_{0}(x)=v_{0}(r)e_{\theta}=r\omega(r)e_{\theta},\\\
H_{0}(x)=\epsilon b(r)e_{z},\end{array}\right.$ (2.1)
where $\epsilon\neq 0$ is a constant, $(r,\theta,z)$ are the cylindrical
coordinates with $r=\sqrt{x_{1}^{2}+x_{2}^{2}}$, $z=x_{3}$,
$(e_{r},e_{\theta},e_{z})$ are unit vectors along $r,\theta,z$ directions,
$\omega(r)\in C^{3}(R_{1},R_{2})$ is the angular velocity of the rotating
fluid, the magnetic profile $b(r)\in C^{3}(R_{1},R_{2})(0\leq
R_{1}<R_{2}=+\infty)$ has a positive lower bound. We will require that
$(\omega(r),b(r))$ has a extra decay at infinity, which guarantee the finite
energy. In [21], they give a rigorous proof of the sharp linear instability
criteria of rotating flows (2.1) with magnetic fields. This smooth and
decaying unstable scenario (2.1) can be regarded as the unstable ”background”
solution of (1.1) in similarity variables by choosing a non-standard force. In
addition, the linear instability properties of the rotating flow (2.1) play an
important role in constructing non-unique energy weak solutions of (1.1).
The stability criterion for rotating flows with a magnetic field is generally
different from the classical Rayleigh criterion for rotating flows without a
magnetic field. The influence of a vertical and uniform magnetic field
(i.e.,$b(r)$ =constant) on the stability of the rotating flows was first
studied by Velikhov [27] and Chandrasekhar [28], who derived a sufficient
condition for linear stability of a rotating flow in the limit of vanishing
magnetic fields that the square of the angular velocity increases outwards,
i.e.,
$\partial_{r}(\omega^{2})>0,~{}~{}\text{for~{}all}~{}r\in(R_{1},R_{2}).$ (2.2)
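For orientation, the condition (2.2) is easy to test on model profiles. The sketch below (the profiles are hypothetical and not taken from [21] or [27, 28]) shows that a Keplerian rotation law $\omega\sim r^{-3/2}$ violates (2.2) everywhere — the classical setting in which the MRI is expected:

```python
# Model illustration of the Velikhov-Chandrasekhar condition (2.2):
# stability requires d/dr (omega^2) > 0.  The profiles are hypothetical.
import numpy as np

r = np.linspace(0.5, 10.0, 1000)

def d_omega_sq(omega):
    return np.gradient(omega(r) ** 2, r)

keplerian = lambda rr: rr ** (-1.5)       # omega ~ r^{-3/2}
rigid = lambda rr: np.ones_like(rr)       # omega = const

print(np.all(d_omega_sq(keplerian) < 0))  # True: (2.2) fails, MRI regime
print(np.allclose(d_omega_sq(rigid), 0))  # True: marginal case
```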
If the stability condition (2.2) fails, it was suggested in [27] and [28] that
there is linear instability with small magnetic fields and they also showed
that the unstable eigenvalues are necessarily real. Such instability of
rotating flows induced by small magnetic fields is called magneto-rotational
instability (MRI) in the literature, which has wide application in
astrophysics, particularly to the turbulence and enhanced angular momentum
transport in astrophysical accretion disks. We refer to the reviews [29, 30,
31, 32] for the history and results of this important topic.
In [21], three natural questions about the MRI were answered: firstly, a sharp instability criterion was given for general vertical magnetic fields and angular velocities; secondly, it was shown that the MRI is due to a discrete unstable spectrum, which is finite; thirdly, it was proved that the sharp stability and instability criteria imply nonlinear stability and instability, respectively. The main proof is based on a local dispersion analysis and a
framework of separable Hamiltonian systems which we will sketch below. The
authors [21] considered the axisymmetric solution of the system (1.3) in the
cylinder
${\Omega}:=\\{(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}|R_{1}\leq\sqrt{x_{1}^{2}+x_{2}^{2}}\leq R_{2},x_{3}\in T_{2\pi}\\},0\leq R_{1}<R_{2}\leq\infty$. In our paper, we will
consider the case in $\mathbb{R}^{3}$, that is, $0=R_{1}<R_{2}=\infty$. In the
cylindrical coordinates, we denote
$H(t,r,z)=H_{r}(t,r,z)e_{r}+H_{\theta}(t,r,z)e_{\theta}+H_{z}(t,r,z)e_{z}$
and
$v(t,r,z)=v_{r}(t,r,z)e_{r}+v_{\theta}(t,r,z)e_{\theta}+v_{z}(t,r,z)e_{z}$
Due to $\text{div}H=0$, we can define magnetic potential $\psi(t,r,z)$ of
$H(t,r,z)$ by
$\left\\{\begin{array}[]{llll}H_{r}(t,r,z)=\frac{\partial_{z}\psi}{r},\\\
H_{z}(t,r,z)=-\frac{\partial_{r}\psi}{r},\\\
-\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r}\psi)-\frac{1}{r^{2}}\partial_{z}^{2}\psi=\frac{1}{r}\partial_{r}H_{z}-\frac{1}{r}\partial_{z}H_{r}.\end{array}\right.$
The system (1.3) can be rewritten in the cylindrical coordinates as
$\left\\{\begin{array}[]{llll}\partial_{t}v_{r}+\partial_{r}p=\frac{\partial_{z}\psi}{r}\partial_{r}(\frac{\partial_{z}\psi}{r})-\frac{\partial_{r}\psi}{r}\partial_{z}(\frac{\partial_{z}\psi}{r})-\frac{H_{\theta}H_{\theta}}{r}-v_{r}\partial_{r}v_{r}-v_{z}\partial_{z}v_{r}+\frac{v_{\theta}v_{\theta}}{r},\\\
\partial_{t}v_{\theta}=\frac{\partial_{z}\psi}{r}\partial_{r}H_{\theta}-\frac{\partial_{r}\psi}{r}\partial_{z}H_{\theta}+\frac{H_{\theta}\frac{\partial_{z}\psi}{r}}{r}-v_{r}\partial_{r}v_{\theta}-v_{z}\partial_{z}v_{\theta}-\frac{v_{\theta}v_{r}}{r},\\\
\partial_{t}v_{z}+\partial_{z}p=\frac{\partial_{z}\psi}{r}\partial_{r}(-\frac{\partial_{r}\psi}{r})-\frac{\partial_{r}\psi}{r}\partial_{z}(-\frac{\partial_{r}\psi}{r})-v_{r}\partial_{r}v_{z}-v_{z}\partial_{z}v_{z},\\\
\partial_{t}\psi=-v_{r}\partial_{r}\psi-v_{z}\partial_{z}\psi,\\\
\partial_{t}H_{\theta}=\frac{\partial_{z}\psi}{r}\partial_{r}(v_{\theta})-\frac{\partial_{r}\psi}{r}\partial_{z}(v_{\theta})-\frac{H_{\theta}v_{r}}{r}-v_{r}\partial_{r}H_{\theta}-v_{z}\partial_{z}H_{\theta}-\frac{v_{\theta}\frac{\partial_{z}\psi}{r}}{r},\\\
\frac{1}{r}\partial_{r}(rv_{r})+\partial_{z}v_{z}=0.\end{array}\right.$ (2.3)
For steady state, we can take
$\psi_{0}(r)=-\epsilon\int_{0}^{r}sb(s)ds.$
Now let the perturbations be
$\displaystyle u(t,x)=v(t,x)-v_{0}(x);$ $\displaystyle
B_{\theta}(t,x)=H_{\theta}(t,x);$
$\displaystyle\mathcal{P}(t,x)=p(t,x)-p_{0}(x);$
$\displaystyle\varphi(t,r,z)=\psi-\psi_{0}.$
The linearized MHD system around a given steady state
$(v_{0}(x),H_{0}(x),p_{0}(x))$ in the cylindrical coordinates can be reduced
to the following system
$\left\\{\begin{array}[]{llll}\partial_{t}u_{r}=\varepsilon
b(r)\partial_{z}(\frac{\partial_{z}\varphi}{r})+2\omega(r)u_{\theta}-\partial_{r}\mathcal{P},\\\
\partial_{t}u_{\theta}=\varepsilon
b(r)\partial_{z}(B_{\theta})-\frac{u_{r}}{r}\partial_{r}(r^{2}\omega^{2}(r)),\\\
\partial_{t}u_{z}=\varepsilon
b(r)\partial_{z}(-\frac{\partial_{r}\varphi}{r})-\partial_{z}\mathcal{P}+\frac{\varepsilon\partial_{r}b(r)}{r}\partial_{z}\varphi,\\\
\partial_{t}\varphi=\varepsilon rb(r)u_{r},\\\
\partial_{t}B_{\theta}=\varepsilon
rb(r)\partial_{z}u_{\theta}+\partial_{r}\omega(r)\partial_{z}\varphi,\\\
\frac{1}{r}\partial_{r}(ru_{r})+\partial_{z}u_{z}=0.\end{array}\right.$ (2.4)
We impose the system (2.4) with conditions
$\left\\{\begin{array}[]{llll}(u_{r},u_{\theta},u_{z},\varphi,B_{\theta})(t,r,z)|_{t=0}=(u_{r}^{0},u_{\theta}^{0},u_{z}^{0},\varphi^{0},B_{\theta}^{0})(r,z),\\\
(u_{r},u_{\theta},u_{z},\varphi,B_{\theta})(t,r,z)\rightarrow(0,0,0,0,0),~{}~{}~{}\text{as}~{}~{}r\rightarrow\infty,\\\
(u_{r},u_{\theta},u_{z},\varphi,B_{\theta})(t,r,z)=(u_{r},u_{\theta},u_{z},\varphi,B_{\theta})(t,r,z+2\pi).\end{array}\right.$
(2.5)
It is obvious that the linearized axisymmetric MHD equations (2.4) can be
written in a Hamiltonian form
$\begin{matrix}\frac{d}{dt}\begin{pmatrix}u_{1}\\\
u_{2}\end{pmatrix}=\mathbf{J}\mathbf{L}\begin{pmatrix}u_{1}\\\
u_{2}\end{pmatrix}.\end{matrix}$ (2.6)
where $u_{1}=(u_{\theta}+\frac{\partial_{r}\omega(r)}{\varepsilon
b(r)}\varphi,\varphi)$, $u_{2}=(\mbox{\boldmath$u$},B_{\theta})$ with
$\mbox{\boldmath$u$}=(u_{r},u_{z})$. Consider the energy spaces
$\mathbf{X}=X\times Y$ with $X=L^{2}(\mathbb{R}^{3})\times
Z,Y=L_{\sigma}^{2}(\mathbb{R}^{3})\times L^{2}(\mathbb{R}^{3})$, where
$L^{2}(\mathbb{R}^{3})$ is the cylindrically symmetric $L^{2}$ space on
$\mathbb{R}^{3}$,
$L_{\sigma}^{2}(\mathbb{R}^{3}):=\\{\mbox{\boldmath$u$}=u_{r}(r,z)e_{r}+u_{z}(r,z)e_{z}\in
L^{2}(\mathbb{R}^{3})~{}|~{}\text{div}u=0\\}.$
and
$Z=\\{\varphi(r,z)\in
H_{mag}^{1}(\mathbb{R}^{3})|~{}\|\varphi\|_{H_{mag}^{1}(\mathbb{R}^{3})}<\infty\\}$
with
$\|\varphi\|_{H_{mag}^{1}(\mathbb{R}^{3})}=\left(\int_{\mathbb{R}^{3}}\frac{1}{r^{2}}|\partial_{z}\varphi|^{2}+\frac{1}{r^{2}}|\partial_{r}\varphi|^{2}dx\right)^{\frac{1}{2}}.$
The off-diagonal anti-self-dual operator $\mathbf{J}$ and diagonal self-dual
operator $\mathbf{L}$ are defined by
$\begin{matrix}\mathbf{J}=\begin{pmatrix}0&\mathbb{B}\\\
-\mathbb{B^{\prime}}&0\end{pmatrix}:X^{\ast}\rightarrow
X,~{}~{}~{}\mathbf{L}=\begin{pmatrix}\mathbb{L}&0\\\
0&A\end{pmatrix}:X\rightarrow X^{\ast},\end{matrix}$ (2.7)
in which
$\displaystyle\mathbb{B}\begin{pmatrix}\mbox{\boldmath$u$}\\\
B_{\theta}\end{pmatrix}=\begin{pmatrix}-2\omega(r)u_{r}+\varepsilon
b(r)\partial_{z}B_{\theta}\\\ \varepsilon
rb(r)u_{r}\end{pmatrix}:Y^{\ast}\rightarrow X,$
$\displaystyle\mathbb{B^{\prime}}\begin{pmatrix}f_{1}\\\
f_{2}\end{pmatrix}=\begin{pmatrix}\mathbb{P}\begin{pmatrix}-2\omega(r)f_{1}+r\varepsilon
b(r)f_{2}\\\ 0\end{pmatrix}\\\ -\varepsilon
b(r)\partial_{z}f_{1}\end{pmatrix}:X^{\ast}\rightarrow Y,$
$\displaystyle\mathbb{L}=\begin{pmatrix}Id_{1}&0\\\
0&L\end{pmatrix}:X\rightarrow X^{\ast},~{}~{}~{}~{}A=Id_{2}:Y\rightarrow
Y^{\ast},$
with
$L:=-\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r}\cdot)-\frac{1}{r^{2}}\partial_{z}^{2}+\mathfrak{F}(r):Z\rightarrow
Z^{\ast},$
and
$\mathfrak{F}(r):=\frac{\partial_{r}\omega^{2}}{\epsilon^{2}b(r)^{2}r}+(\frac{\partial_{r}^{2}b(r)}{r^{2}b(r)}-\frac{\partial_{r}b(r)}{r^{3}b(r)}),$
(2.8)
where $Id_{1}:L^{2}(\mathbb{R}^{3})\rightarrow(L^{2}(\mathbb{R}^{3}))^{\ast}$
and $Id_{2}:Y\rightarrow Y^{\ast}$ are the isomorphisms, the operator
$\mathbb{P}$ is the Leray projection from $L^{2}(\Omega)$ to
$L^{2}_{\sigma}(\Omega)$. As proved in [21, Theorem 2.1], when
($\mathbb{L},A,\mathbb{B}$) satisfies the conditions (G1)-(G4) of general
separable Hamiltonian PDEs, the unstable spectra of (2.6) are all real
discrete and the dimension of the unstable subspace corresponding to positive
(negative) eigenvalues can be determined.
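A finite-dimensional toy model makes the spectral mechanism visible. In the sketch below (matrix sizes and entries are hypothetical), $J$ is antisymmetric and $L$ symmetric, as in (2.7); the spectrum of $JL$ is then symmetric under $\lambda\mapsto-\lambda$, so instability shows up as pairs of real eigenvalues, mirroring the statement that the unstable spectrum of (2.6) is real and discrete:

```python
# Finite-dimensional caricature of the Hamiltonian structure (2.6)-(2.7):
# with J^T = -J and L^T = L, spec(JL) = -spec(JL), so unstable modes come
# in pairs (lambda, -lambda) with lambda real.  Everything is hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 4
Bblk = rng.standard_normal((n, n))
J = np.block([[np.zeros((n, n)), Bblk], [-Bblk.T, np.zeros((n, n))]])
S = rng.standard_normal((2 * n, 2 * n))
L = (S + S.T) / 2

ev = np.linalg.eigvals(J @ L)
sym_err = max(np.min(np.abs(lam + ev)) for lam in ev)
print(sym_err)                      # ~ 0: spectrum symmetric under negation
print(np.sum((ev.real > 1e-8) & (np.abs(ev.imag) < 1e-8)))  # real unstable modes
```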
A complete description of the instability spectra and semigroup growth of the
linear axi-symmetric MHD equations (2.4) can be stated as follows:
###### Theorem 2.1.
(refer to [21]) Assume that the steady state $(v_{0},H_{0})(x)$ is given by
(2.1), in which $\omega(r)\in C^{3}(R_{1},R_{2})$, $b(r)\in
C^{3}(R_{1},R_{2})$ with a positive lower bound.
(1) If $R_{1}=0$, $\partial_{r}(\omega^{2})=O(r^{\beta-3})$,
$\partial_{r}b=O(r^{\beta-1})$ for some constant $\beta>0$, as $r\rightarrow
0$.
(2) If $R_{2}=\infty$, $\partial_{r}(\omega^{2})=O(r^{-3-2\alpha})$,
$\partial_{r}b=O(r^{-1-2\alpha})$, for some constant $\alpha>0$, as
$r\rightarrow\infty$.
The operator $\mathbf{JL}$ defined by (2.6) generates a $C^{0}$ group
$e^{t\mathbf{JL}}$ of bounded linear operators on $\mathbf{X}=X\times Y$ and
there exists a decomposition
$\mathbf{X}=E^{u}\oplus E^{c}\oplus E^{s}$
of closed subspace $E^{u,s,c}$ with the following properties:
(i)$E^{c}$, $E^{u}$ and $E^{s}$ are invariant under $e^{t\mathbf{JL}}$.
(ii) $E^{u}$ ($E^{s}$) consists only of eigenvectors corresponding to positive
(negative) eigenvalues of $\mathbf{JL}$ and
$\dim E^{u}=\dim E^{s}=n^{-}(\mathbb{L}|_{\overline{R(\mathbb{B})}}),$
where the unstable eigenvalues of the linearized operator
$\mathbf{J}\mathbf{L}$ are all discrete and the number of unstable mode equals
$0<n^{-}(\mathbb{L}|_{\overline{R(\mathbb{B})}})<\infty$, that is, the number
of negative direction of $<\mathbb{L}\cdot,\cdot>$ restricted to
$\overline{R(\mathbb{B})}$ which is shown to be
$\overline{R(\mathbb{B})}=\\{(g_{1},g_{2})\in
X|g_{j}(r,z)=\sum_{k=1}^{\infty}e^{ikz}\widetilde{\varphi}_{k,j}(r),j=1,2\\}.$
It follows that
$n^{-}(\mathbb{L}|_{\overline{R(\mathbb{B})}})=\Sigma_{k=1}^{\infty}n^{-}(\mathbb{L}_{k})$,
where the operator $\mathbb{L}_{k}:H_{mag}^{r}\rightarrow(H_{mag}^{r})^{*}$ is
defined by
$\mathbb{L}_{k}:=-\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r}\cdot)+\frac{k^{2}}{r^{2}}+\mathfrak{F}(r)$
(2.9)
for any $k\geq 0$, with $\mathfrak{F}(r)$ defined in $(\ref{e2.8})$.
$n^{-}(\mathbb{L}_{k})$ denotes the number of negative directions of
$<\mathbb{L}_{k}\cdot,\cdot>$. In particular, $n^{-}(\mathbb{L}_{k})=0$ when
$k$ is large enough.
(3) If there exists $r_{0}\in(R_{1},R_{2})$ such that
$\partial_{r}(\omega^{2})|_{r=r_{0}}<0$, then for $\epsilon^{2}$ small enough
the steady state $(v_{0},H_{0})(x)$ in (2.1) is linearly unstable to axi-
symmetric perturbations.
(iii) The sharp exponential growth estimate for the linearized MHD equation
(2.6) along the most unstable modes
$\|e^{t\mathbf{JL}}(u_{1},u_{2})(0)\|_{\mathbf{X}}\lesssim e^{\Lambda
t}\|(u_{1},u_{2})(0)\|_{\mathbf{X}}$ (2.10)
where $\Lambda>0$ is the maximal growth rate.
###### Remark 2.2.
(i) In this paper, we will consider the case $R_{2}=\infty$ and study the nonlinear instability of (1.1) to construct non-unique Leray weak solutions, which lie in the finite energy class $L^{\infty}(0,T;L^{2}(\mathbb{R}^{3}))\cap L^{2}(0,T;H^{1}(\mathbb{R}^{3}))$. Indeed, here we need to require the stronger condition $\omega=O(r^{-1-\alpha})$ for some $\alpha>0$ as $r\rightarrow\infty$, which ensures that $\nabla(v_{0},H_{0})\in L^{2}(\mathbb{R}^{3})$.
(ii) It follows from [21] that any order derivative of $(u_{1},u_{2})$ can be
written in a Hamiltonian form
$\begin{matrix}\frac{d}{dt}\begin{pmatrix}\partial_{z}^{\alpha}u_{1}\\\
\partial_{z}^{\alpha}u_{2}\end{pmatrix}=\mathbf{J}\mathbf{L}\begin{pmatrix}\partial_{z}^{\alpha}u_{1}\\\
\partial_{z}^{\alpha}u_{2}\end{pmatrix}.\end{matrix}$ (2.11)
where $u_{1}=(u_{\theta}+\frac{\partial_{r}\omega(r)}{\varepsilon
b(r)}\varphi,\varphi)$, $u_{2}=(\mbox{\boldmath$u$},B_{\theta})$ with
$\mbox{\boldmath$u$}=(u_{r},u_{z})$. Together with the facts that $\|(u,B)(t)\|_{L^{2}}\sim\|(u_{1},u_{2})(t)\|_{X}$ and $\|\partial_{z}^{\alpha}(u,B)\|_{L^{2}}\sim\|\partial_{z}^{\alpha}(u,B)\|_{X}$, one can get
$\|(u,B)(t)\|_{L^{2}}\lesssim e^{\Lambda t}\|(u,B)(0)\|_{L^{2}}$ (2.12)
and
$\|\partial_{z}^{\alpha}(u,B)(t)\|_{L^{2}}\lesssim e^{\Lambda
t}\|\partial_{z}^{\alpha}(u,B)(0)\|_{L^{2}},~{}~{}|\alpha|\leq s~{}(s\geq 0).$
(2.13)
It also follows from [21] that $\|\partial_{r}^{\alpha}(u,B)(t)\|_{L^{2}}\lesssim e^{\Lambda t}\|(u,B)(0)\|_{H^{s}}$ for $|\alpha|=s$.
## 3 The linear instability and semigroup estimates of ideal MHD system
In this section, we are going to show that the linear instability of the axisymmetric case in Theorem 2.1 can be extended to the more general case. To this end, we first linearize the ideal MHD equations (1.3) around the axisymmetric steady solution $(v_{0},H_{0})(x)$ to obtain the following system (3.3). Then we will obtain the linear instability of (3.3), as stated in Theorem 3.1.
Assume the axisymmetric steady solution $(v_{0},H_{0})(r,z)$ is perturbed by a
small disturbance
$\displaystyle v(t,r,z)=v_{0}(r,z)+\varepsilon u(t,r,z),\quad
H(t,r,z)=H_{0}(r,z)+\varepsilon B(t,r,z),$ $\displaystyle
p(t,r,z)=p_{0}(r,z)+\varepsilon\mathcal{P}(t,r,z).$ (3.1)
We shall rewrite the ideal MHD system (1.3) in the following perturbation form:
$\left\\{\begin{array}[]{llll}\varepsilon\partial_{t}u+\varepsilon
v_{0}\cdot\nabla u+\varepsilon u\cdot\nabla v_{0}-\varepsilon H_{0}\cdot\nabla
B-\varepsilon B\cdot\nabla
H_{0}+\varepsilon\nabla\mathcal{P}=\varepsilon^{2}B\cdot\nabla
B-\varepsilon^{2}u\cdot\nabla u,\\\ \varepsilon\partial_{t}B+\varepsilon
v_{0}\cdot\nabla B+\varepsilon u\cdot\nabla H_{0}-\varepsilon H_{0}\cdot\nabla
u-\varepsilon B\cdot\nabla v_{0}=\varepsilon^{2}B\cdot\nabla
u-\varepsilon^{2}u\cdot\nabla B,\\\ \text{div}\varepsilon
u=\text{div}\varepsilon B=0,\end{array}\right.$ (3.2)
Then we obtain the following linearized system of the MHD system (1.3) around the steady solution $(v_{0},H_{0},p_{0})(x)$ by collecting the terms of order $\varepsilon$ in (3.2):
$\left\\{\begin{array}[]{llll}\partial_{t}u+v_{0}\cdot\nabla u+u\cdot\nabla
v_{0}-H_{0}\cdot\nabla B-B\cdot\nabla H_{0}+\nabla\mathcal{P}=0,\\\
\partial_{t}B+v_{0}\cdot\nabla B+u\cdot\nabla H_{0}-H_{0}\cdot\nabla
u-B\cdot\nabla v_{0}=0,\\\ \text{div}u=\text{div}B=0.\end{array}\right.$ (3.3)
Moreover, (3.3) can be rewritten as (2.4) in cylindrical coordinates, which is
equivalent to the linearized system (2.6). If we define
$\displaystyle(u,B,\mathcal{P})(x,t)=(u,B,\mathcal{P})(x)e^{\Lambda_{0}t}=(u,B,\mathcal{P})(r,z)e^{\Lambda_{0}t}$
(3.4)
with
$\displaystyle u_{r}=\widetilde{u_{r}}(r)\cos(kz)\cdot e^{\Lambda_{0}t},~{}~{}u_{\theta}=\widetilde{u_{\theta}}(r)\cos(kz)\cdot e^{\Lambda_{0}t},~{}~{}u_{z}=\widetilde{u_{z}}(r)\sin(kz)\cdot e^{\Lambda_{0}t},$ $\displaystyle\varphi=\widetilde{\varphi}(r)\cos(kz)\cdot e^{\Lambda_{0}t},~{}~{}B_{\theta}=\widetilde{B_{\theta}}(r)\sin(kz)\cdot e^{\Lambda_{0}t},~{}~{}\mathcal{P}=\widetilde{\mathcal{P}}(r)\cos(kz)\cdot e^{\Lambda_{0}t},$ (3.5)
where $\Lambda_{0}$ is one of the unstable eigenvalues of (2.6), then $(u,B,\mathcal{P})(x,t)$ satisfies equation (2.4). Naturally, the nontrivial solution (3.4) with $\Lambda_{0}>0$ is a linearly unstable solution of (3.3). We say the pair $(u,B)$ solves (3.3) in the distribution sense if
$\displaystyle\int_{0}^{T}\int_{\mathbb{R}^{3}}u\cdot\partial_{t}\varphi-
v_{0}\cdot\nabla u\cdot\varphi-u\cdot\nabla v_{0}\cdot\varphi+H_{0}\cdot\nabla
B\cdot\varphi+B\cdot\nabla H_{0}\cdot\varphi dxdt$
$\displaystyle=-\int_{\mathbb{R}^{3}}u^{0}\cdot\varphi(\cdot,0)dx,$
$\displaystyle\int_{0}^{T}\int_{\mathbb{R}^{3}}B\cdot\partial_{t}\phi-
v_{0}\cdot\nabla B\cdot\phi-u\cdot\nabla H_{0}\cdot\phi+H_{0}\cdot\nabla
u\cdot\phi+B\cdot\nabla v_{0}\cdot\phi dxdt$
$\displaystyle=-\int_{\mathbb{R}^{3}}B^{0}\cdot\phi(\cdot,0)dx$ (3.6)
for initial data $u^{0},B^{0}\in L_{\sigma}^{2}(\mathbb{R}^{3})$ and test
function $\varphi,\phi\in\mathcal{D_{T}}$, where
$\mathcal{D_{T}}:=\\{\varphi\in
C_{0}^{\infty}((0,T);\mathbb{R}^{3}),\text{div}\varphi=0\\}$.
In order to deal with the nonlinear terms conveniently, we will introduce a trilinear form $\mathfrak{B_{0}}$. First, we define a trilinear form by setting
$b(u,v,\omega)=\sum_{i,j=1}^{3}\int_{\Omega}u_{i}\partial_{i}v_{j}\omega_{j}dx=\int_{\Omega}u\cdot\nabla
v\cdot\omega dx.$ (3.7)
We can easily get by a direct calculation
$b(u,v,\omega)=-b(u,\omega,v).$
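This antisymmetry is exactly the cancellation that drives the energy estimates, and it survives discretization. The following sketch (a periodic torus as a stand-in for $\mathbb{R}^{3}$, with hypothetical random fields and a spectral divergence-free projection) checks $b(u,v,\omega)=-b(u,\omega,v)$ numerically:

```python
# Numerical check of b(u, v, w) = -b(u, w, v) for div-free u, on a
# periodic torus as a stand-in for R^3.  Fields are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n = 17                                         # odd: clean spectral calculus
k1 = np.fft.fftfreq(n, d=1.0 / n)              # integer wavenumbers
K = np.stack(np.meshgrid(k1, k1, k1, indexing='ij'))
K2 = (K ** 2).sum(0)
K2[0, 0, 0] = 1.0                              # avoid 0/0 on the mean mode

def div_free(u):                               # Leray projection, Fourier side
    uh = np.fft.fftn(u, axes=(1, 2, 3))
    uh -= K * (K * uh).sum(0) / K2
    return np.real(np.fft.ifftn(uh, axes=(1, 2, 3)))

def grad(f):
    fh = np.fft.fftn(f)
    return [np.real(np.fft.ifftn(1j * K[i] * fh)) for i in range(3)]

def b(u, v, w):                                # sum_x u . (grad v) . w
    total = 0.0
    for j in range(3):
        dv = grad(v[j])
        total += sum(u[i] * dv[i] * w[j] for i in range(3)).sum()
    return total

u = div_free(rng.standard_normal((3, n, n, n)))
v = rng.standard_normal((3, n, n, n))
w = rng.standard_normal((3, n, n, n))
s1, s2 = b(u, v, w), b(u, w, v)
print(abs(s1 + s2) / abs(s1))                  # ~ 1e-15: antisymmetry holds
```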
In order to write (3.6) in a simpler form, we define a trilinear form $\mathfrak{B_{0}}$ as
$\mathfrak{B_{0}}(\Phi^{1},\Phi^{2},\Phi^{3})=b(u,v,\omega)-b(\mathbb{U},\mathbb{V},\omega)+b(u,\mathbb{V},\mathbb{W})-b(\mathbb{U},v,\mathbb{W}),$
where
$\Phi^{1}=(u,\mathbb{U}),\Phi^{2}=(v,\mathbb{V}),\Phi^{3}=(\omega,\mathbb{W})$.
Since $b$ is continuous, one derives that $\mathfrak{B_{0}}$ is continuous and trilinear. This lets us define a continuous bilinear operator $\mathfrak{B}$ via
$\langle\mathfrak{B}(\Phi^{1},\Phi^{2}),\Phi^{3}\rangle=\mathfrak{B_{0}}(\Phi^{1},\Phi^{2},\Phi^{3}).$
If we choose $\varphi,\phi\in C_{0,\sigma}^{\infty}(\mathbb{R}^{3})$, one can rewrite (3.6) as
$\displaystyle\partial_{t}(u,\varphi)+b(v_{0},u,\varphi)+b(u,v_{0},\varphi)-b(H_{0},B,\varphi)-b(B,H_{0},\varphi)=0,$
$\displaystyle\partial_{t}(B,\phi)+b(v_{0},B,\phi)+b(u,H_{0},\phi)-b(H_{0},u,\phi)-b(B,v_{0},\phi)=0,$
with $(f,g)=\int_{\mathbb{R}^{3}}f\cdot g\,dx$. Then this system is equivalent to the following formula
$\partial_{t}(\Gamma,\Psi)+\mathfrak{B_{0}}(\Gamma_{0},\Gamma,\Psi)+\mathfrak{B_{0}}(\Gamma,\Gamma_{0},\Psi)=0,$
(3.8)
where $\Gamma=(u,B)$, $\Gamma_{0}=(v_{0},H_{0})$. Using the operator $\mathfrak{B}$ previously defined, (3.8) is equivalent to the following formula
$\partial_{t}\Gamma+\mathfrak{B}(\Gamma_{0},\Gamma)+\mathfrak{B}(\Gamma,\Gamma_{0})=0.$
(3.9)
It is obvious that (3.8) is a weak formulation of the problem (3.9). In the
sense of distribution, the system (3.3) can be expressed as (3.9).
We say $\Gamma_{0}$ is linearly unstable for the dynamics of (3.3) if there exists an unstable eigenvalue of the linearized operator
$-A\Gamma=\mathfrak{B}(\Gamma_{0},\Gamma)+\mathfrak{B}(\Gamma,\Gamma_{0})$ (3.10)
whose domain is
$\mathcal{D}(A):=\\{\Gamma\in L_{\sigma}^{2}(\mathbb{R}^{3}):\Gamma_{0}\cdot\nabla\Gamma\in L^{2}(\mathbb{R}^{3})\\},$
in which $\Gamma=(u,B)$, $\Gamma_{0}=(v_{0},H_{0})$. It is easy to see from (3.3) and (3.4) that $A$ has at least one unstable eigenvalue $\Lambda_{0}>0$.
The main results can be stated as follows:
###### Theorem 3.1.
Assume that the steady state $(v_{0},H_{0})(x)$ is given by (2.1), in which
$\omega(r)\in C^{3}(R_{1},R_{2})$ and $b(r)\in C^{3}(R_{1},R_{2})$ satisfy the
conditions (1), (2) and (3) in Theorem 2.1. Then the linearized operator $A$ defined by (3.10) generates a $C^{0}$ group $e^{At}$ of bounded linear operators on $L^{2}_{\sigma}(\mathbb{R}^{3})$, with domain $\mathcal{D}(A)\subset L^{2}_{\sigma}(\mathbb{R}^{3})$, and there exists a decomposition
$L^{2}_{\sigma}(\mathbb{R}^{3})=E^{u}\oplus E^{c}\oplus E^{s}$
of closed subspaces $E^{u,s,c}$ with the following properties:
(i) $E^{u},E^{c},E^{s}$ are invariant under $e^{At}$.
(ii) The operator $A$ defined by (3.10) is linearly unstable, and its unstable spectrum is real and discrete. $E^{u}$ ($E^{s}$) consists only of eigenvectors corresponding to positive (negative) eigenvalues of $A$, and its dimension is finite.
(iii) The sharp exponential growth estimate for the $C^{0}$ group $e^{At}$ holds along the most unstable modes:
$\|e^{At}\Gamma(0)\|_{L^{2}}\lesssim e^{\Lambda t}\|\Gamma(0)\|_{L^{2}}$
where $\Lambda>0$ is the maximal unstable eigenvalue.
###### Proof.
We mainly prove that the operator $A$ inherits the instability of the operator $\mathbf{J}\mathbf{L}$. Note that in the cylindrical coordinates, the linearized system (3.3) can be rewritten as (2.4), which is equivalent to the linearized system (2.6) with $b_{r}=\frac{\partial_{z}\varphi}{r},b_{z}=-\frac{\partial_{r}\varphi}{r}$. According to Theorem 2.1, the unstable spectra of (2.6) are all discrete and finite, so we can denote the maximal unstable eigenvalue of the operator $\mathbf{J}\mathbf{L}$ by $a>0$, where $a^{2}=\max\\{\sigma(-\mathbb{B^{\prime}}\mathbb{L}\mathbb{B}A)\\}$. We can construct a maximally growing normal mode of the linear equation (2.4) with $\Lambda_{0}=a$ as in (3.4). It is known that $(u,B,\mathcal{P})$ so defined is also a solution of (3.3). It can be proved that the operator $A$ has maximal unstable eigenvalue $a>0$. Then there holds
$\|e^{At}\Gamma(0)\|_{L^{2}}\lesssim e^{at}\|\Gamma(0)\|_{L^{2}},$ (3.11)
where $a=\sup\\{\lambda:\lambda\in\sigma(A)\\}$ is the spectral bound of $A$, which equals the growth bound
$\omega_{0}(e^{At}):=\inf\\{\omega\in\mathbb{R}:\|e^{At}\|_{L^{2}\rightarrow L^{2}}\leq M(\omega)e^{\omega t}\\}.$
∎
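The equality of spectral bound and growth bound used at the end of this proof can be visualized in finite dimension. The sketch below (a random matrix standing in for $A$; purely illustrative) compares $\|e^{tA}\|$ with $e^{at}$, where $a=\max\operatorname{Re}\sigma(A)$; the ratio stays bounded by a constant $M$, as in (3.11):

```python
# Finite-dimensional illustration of (3.11): ||e^{tA}|| <~ M e^{a t} with
# a = max Re(spec(A)).  The matrix A here is a hypothetical stand-in.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
w, V = np.linalg.eig(A)               # generic A is diagonalizable
a = np.max(w.real)                    # spectral bound

def expm(t):                          # e^{tA} via diagonalization
    return (V * np.exp(w * t)) @ np.linalg.inv(V)

for t in [1.0, 4.0, 8.0]:
    ratio = np.linalg.norm(expm(t), 2) / np.exp(a * t)
    print(t, ratio)                   # bounded uniformly in t
```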
## 4 Linear instability: from the ideal MHD equations to the MHD equations
In this section, we are concerned with the linear instability of the following Leray equations around the steady state $(\beta V_{0}(\xi),\beta W_{0}(\xi))$ ($\beta\gg 1$):
$\left\\{\begin{array}[]{ll}\partial_{\tau}U-\frac{1}{2}(1+\xi\cdot\nabla_{\xi})U+\beta\mathbb{P}(V_{0}\cdot\nabla
U+U\cdot\nabla V_{0}-W_{0}\cdot\nabla W-W\cdot\nabla
W_{0})-\Delta_{\xi}U=0,\\\
\partial_{\tau}W-\frac{1}{2}(1+\xi\cdot\nabla_{\xi})W+\beta\mathbb{P}(V_{0}\cdot\nabla
W+U\cdot\nabla W_{0}-W_{0}\cdot\nabla U-W\cdot\nabla
V_{0})-\Delta_{\xi}W=0,\end{array}\right.$ (4.1)
in which $(V_{0}(\xi),W_{0}(\xi))$ is the axisymmetric velocity profile and
magnetic profile associated with $(v_{0}(x),H_{0}(x))$. That is to say, they
are equal under the similarity variables (1.9) at $t=1$. Using the same method
introduced in Section 3, we can rewrite (4.1) as
$\partial_{\tau}\Xi-\frac{1}{2}(1+\xi\cdot\nabla_{\xi})\Xi-\Delta_{\xi}\Xi+\beta\mathbb{P}(\mathfrak{B}(\Xi_{0},\Xi)+\mathfrak{B}(\Xi,\Xi_{0}))=0,$
(4.2)
where $\Xi=(V(\xi,\tau),W(\xi,\tau))$ and $\Xi_{0}=(V_{0}(\xi),W_{0}(\xi)).$
In this section, we concern ourselves with instability for the operator
$-L_{ss}^{\beta}\Xi=-\frac{1}{2}(1+\xi\cdot\nabla_{\xi})\Xi-\Delta_{\xi}\Xi+\beta\mathbb{P}(\mathfrak{B}(\Xi_{0},\Xi)+\mathfrak{B}(\Xi,\Xi_{0}))$
(4.3)
whose domain is
$\mathcal{D}(L_{ss}^{\beta}):=\\{\Xi\in L_{\sigma}^{2}(\mathbb{R}^{3}):\Xi\in H^{2}(\mathbb{R}^{3}),\xi\cdot\nabla_{\xi}\Xi\in L^{2}(\mathbb{R}^{3})\\}.$
Indeed, we claim that $\beta\Xi_{0}$ is linearly unstable for the dynamics
of (4.3), namely, there exists an unstable eigenvalue
$\widetilde{\lambda_{\beta}}>0$ for the linearized operator $L_{ss}^{\beta}$.
It is difficult to study the unstable eigenvalue for the linearized operator
$L_{ss}^{\beta}$ directly. Thus, multiplying both sides of (4.3) by
$\frac{1}{\beta}$, we can obtain that
$-T_{\beta}\Xi:=-\frac{1}{\beta}[(\frac{1}{2}+\frac{1}{2}\xi\cdot\nabla_{\xi})+\Delta_{\xi}]\Xi+\mathbb{P}(\mathfrak{B}(\Xi_{0},\Xi)+\mathfrak{B}(\Xi,\Xi_{0}))$
The main terms in the operator $-T_{\beta}$ arise from the nonlinearity of the equation; the extra terms, including the Laplacian, can be considered as perturbations. In the following, we will prove the existence of the unstable
eigenvalue $\lambda_{\beta}$ of $T_{\beta}$. Then
$\widetilde{\lambda_{\beta}}=\beta\lambda_{\beta}$ would be taken as an
unstable eigenvalue of the linearized operator $L_{ss}^{\beta}$.
Let us firstly define
$-T_{\infty}\Xi:=\mathfrak{B}(\Xi_{0},\Xi)+\mathfrak{B}(\Xi,\Xi_{0}).$ (4.4)
It follows from Theorem 3.1 that the operator $T_{\infty}$ is linearly unstable and has an unstable eigenvalue. Now let us state our main result in
this section as follows.
###### Theorem 4.1.
(the instability of self-similar MHD) Let $\Lambda_{0}$ be an unstable
eigenvalue of $T_{\infty}$ with $\Lambda_{0}>0$. For any $\varepsilon>0$,
there exists $\beta_{0}>0$ such that, for all $\beta>\beta_{0}$,
$T_{\beta}|_{\mathbf{X}_{sub}}$ has an unstable eigenvalue $\lambda_{\beta}>0$
satisfying $|\lambda_{\beta}-\Lambda_{0}|<\varepsilon$. We can conclude that
$L_{ss}^{\beta}$ has unstable eigenvalue $\widetilde{\lambda_{\beta}}$ with
$\widetilde{\lambda_{\beta}}=\beta\lambda_{\beta}$ and the corresponding
unstable modes belong to $L^{2}(\mathbb{R}^{3})$.
Before showing the proof of Theorem 4.1, we first give a spectral perturbation
Lemma due to Kato [35].
###### Lemma 4.2.
Let $T(\tau)$ be a family of operators on a finite-dimensional space which is continuous at $\tau=0$. Then the eigenvalues of $T(\tau)$ are continuous at $\tau=0$.
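In finite dimension this continuity is elementary and easy to observe numerically. The sketch below (with hypothetical matrices $T_{0}$ and $P$) measures how far the spectrum of $T(\tau)=T_{0}+\tau P$ drifts from that of $T_{0}$ as $\tau\to 0$:

```python
# Toy check of Lemma 4.2: eigenvalues of T(tau) = T0 + tau*P depend
# continuously on tau at tau = 0.  T0 and P are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
T0 = rng.standard_normal((5, 5))
P = rng.standard_normal((5, 5))
lam0 = np.linalg.eigvals(T0)

for tau in [1e-1, 1e-2, 1e-3, 1e-4]:
    lam = np.linalg.eigvals(T0 + tau * P)
    drift = max(np.min(np.abs(l - lam0)) for l in lam)
    print(tau, drift)                 # drift -> 0 as tau -> 0
```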
Now let us prove Theorem 4.1.
###### Proof.
We define the following operator
$-T_{\tau}\Xi:=-\tau[\frac{1}{2}+\frac{1}{2}\xi\cdot\nabla_{\xi}+\Delta_{\xi}]\Xi+\mathbb{P}(\mathfrak{B}(\Xi_{0},\Xi)+\mathfrak{B}(\Xi,\Xi_{0}))$
with domain
$\mathcal{D}:=\\{\Xi\in L_{\sigma}^{2}(\mathbb{R}^{3}):\Xi\in
H^{2},\xi\cdot\nabla_{\xi}\Xi\in L^{2}(\mathbb{R}^{3})\\}.$
For any $\tau_{1},\tau_{2}\in\mathbb{R}$ and $\Xi\in\mathcal{D}$, it is not difficult to check that
$\|T_{\tau_{1}}\Xi-
T_{\tau_{2}}\Xi\|_{L^{2}}\leq|\tau_{1}-\tau_{2}|\|\Xi\|_{\mathcal{D}},$ (4.5)
which gives the continuity of $T(\tau)$ with respect to $\tau$.
For our perturbation argument, we will consider a new finite dimensional
subspace
$\mathbf{X}_{sub}=\mathcal{D}\cap E^{u}$
by taking $\tau=\frac{1}{\beta}$, where $E^{u}$, defined in Theorem 3.1, consists only of eigenvectors corresponding to positive eigenvalues of
$T_{\infty}$. Since $\Lambda_{0}$ is an unstable eigenvalue of $T_{\infty}$
with $\Lambda_{0}>0$, then we use Kato’s perturbation Lemma 4.2 on
$\mathbf{X}_{sub}$ to deduce that there is an unstable eigenvalue
$\lambda_{\beta}$ of $T_{\beta}$. So
$\widetilde{\lambda_{\beta}}=\beta\lambda_{\beta}$ would be an unstable
eigenvalue of the linearized operator $L_{ss}^{\beta}$. ∎
## 5 Nonlinear instability
In this section, we demonstrate how to use the linear instability established in Theorem 4.1 to construct non-unique Leray weak solutions to the forced MHD equations. The refined non-uniqueness theorem can be stated as follows:
###### Theorem 5.1.
There exists a smooth decaying unstable velocity and magnetic profile
$\beta\Xi_{0}=(\beta V_{0}(\xi),\beta W_{0}(\xi))$ of (1.10) with force
profile
$F(\xi,\tau)=-\frac{\beta}{2}(1+\xi\cdot\nabla_{\xi})\Xi_{0}-\beta\Delta_{\xi}\Xi_{0}+\beta^{2}\mathbb{P}\mathfrak{B}(\Xi_{0},\Xi_{0})$
satisfying the following properties:
(A) The linearized operator $L_{ss}^{\beta}$ defined in (4.3) ($\beta$ sufficiently large) has a real discrete unstable eigenvalue $\lambda_{\beta}$. Then $a>0$ can be chosen to be the maximal unstable eigenvalue, with a corresponding non-trivial smooth eigenfunction $\eta$ belonging to $H^{k}(\mathbb{R}^{3})$ for all $k\geq 0$:
$L_{ss}^{\beta}\eta=a\eta.$ (5.1)
We can look for another solution $\Xi$ to (1.10) that vanishes as
$\tau\rightarrow-\infty$ with the ansatz
$\Xi=\beta\Xi_{0}+\Xi^{lim}+\Xi^{per},$
where
$\Xi^{lim}=e^{a\tau}\eta$ (5.2)
solves the linear equation
$\partial_{\tau}\Xi^{lim}=L_{ss}^{\beta}\Xi^{lim}.$ (5.3)
(B) Substituting this ansatz into (1.10), there exists $T\in\mathbb{R}$ and a
velocity field and magnetic profile $\Xi^{per}$ satisfying (5.35) and
$\|\Xi^{per}\|_{H^{k}}\leq
Ce^{2a\tau}~{}~{}~{}~{}~{}\forall~{}~{}\tau\in(-\infty,T)$ (5.4)
for all $k\geq 0$. Correspondingly, in the similarity variables (1.9), the first Leray weak solution of the equation (1.1) that we construct is
$(v_{1},H_{1})(x,t)=(\frac{\beta}{\sqrt{t}}V_{0}(\xi),\frac{\beta}{\sqrt{t}}W_{0}(\xi))$
with force $f_{i}(x,t)=\frac{1}{t^{\frac{3}{2}}}F_{i}(\xi,\tau)$ for $(i=1,2)$
on a time interval $[0,e^{T}]$. Based on this, the second weak solution of the MHD equations (1.1) is
$(v_{2}(x,t),H_{2}(x,t))=\frac{1}{\sqrt{t}}\Xi(\xi,\tau)$
on $\mathbb{R}^{3}\times(0,e^{T})$ with zero initial data and same forcing
term $f_{i}(x,t)$ for $(i=1,2)$.
The solutions constructed above live at critical regularity. One may easily
verify that for any $p\in[2,+\infty]$, $j,k\geq 0$ and $t\in(0,e^{T})$, we
have
$\displaystyle t^{\frac{k}{2}}\|\nabla^{k}\Gamma_{0}(\cdot,t)\|_{L^{p}}+t^{\frac{k}{2}}\|\nabla^{k}\Gamma(\cdot,t)\|_{L^{p}}\lesssim t^{\frac{1}{2}(\frac{3}{p}-1)},$ $\displaystyle t^{j+\frac{k}{2}}\|\partial_{t}^{j}\nabla^{k}f(\cdot,t)\|_{L^{p}}\lesssim t^{\frac{1}{2}(\frac{3}{p}-3)}.$
This is enough to bootstrap $\Gamma_{0}$, $\Gamma$ to smoothness in $\mathbb{R}^{3}\times(0,e^{T})$. As mentioned above, a second solution to the MHD equations is sought as a trajectory on the unstable manifold of $\beta\Xi_{0}$ associated to the most unstable eigenvalue $a$ of $L_{ss}^{\beta}$.
### 5.1 The semigroup generated by $L^{\beta}_{ss}$
In this subsection, we introduce the $C^{0}$ semigroup $e^{\tau
L^{\beta}_{ss}}$ generated by $L_{ss}^{\beta}$. We combine Theorem 3.1 and
Theorem 4.1 to prove some results for the spectrum and the semigroup
estimates of $L^{\beta}_{ss}$. It follows from [14, Lemma 2.1] that the
spectrum of $L^{0}_{ss}$ satisfies
$\sigma(L^{0}_{ss})\subset\\{\lambda\in\mathbb{C}:Re(\lambda)\leq-\frac{1}{4}\\}.$
According to Theorem 3.1, the perturbation $L^{\beta}_{ss}-L^{0}_{ss}$ produces finitely many unstable discrete eigenvalues $\lambda>0$, and the unstable subspace is finite-dimensional.
We refer the reader to [34] for the definition of the $spectral~{}bound$ of
$L_{ss}^{\beta}$ as
$s(L_{ss}^{\beta})=:\sup\\{Re(\lambda):\lambda\in\sigma(L_{ss}^{\beta})\\},$
which is bounded by the $growth~{}bound$
$\omega_{0}(L^{\beta}_{ss}):=\inf\\{\omega\in R:\|e^{\tau
L^{\beta}_{ss}}\|_{L_{\sigma}^{2}\rightarrow L_{\sigma}^{2}}\leq
M(\omega)e^{\tau\omega}\\}$
of semigroup. Then we can obtain the following result.
###### Lemma 5.2.
$L_{ss}^{\beta}$ is the generator of a strongly continuous semigroup $e^{\tau
L_{ss}^{\beta}}$ on $L^{2}(\mathbb{R}^{3})$.
$\sigma(L^{\beta}_{ss})\cap\\{\lambda:Re(\lambda)>0\\}$ consists of only
finitely many eigenvalues with finite multiplicity, then the growth bound
$\omega_{0}(L_{ss}^{\beta})$ of $e^{\tau L_{ss}^{\beta}}$ equals
$s(L_{ss}^{\beta}):=\sup\\{\operatorname{Re}z:z\in\sigma(L_{ss}^{\beta})\\}<\infty.$
In other words, assume $a=s(L_{ss}^{\beta})>0$; then $a<\infty$, and there exist $\lambda\in\sigma(L_{ss}^{\beta})$ and $\eta\in D(L_{ss}^{\beta})$ such that $L_{ss}^{\beta}\eta=\lambda\eta$. Moreover, for every $\delta>0$, there is a
constant $M(\delta)$ with the property that
$\|e^{\tau L_{ss}^{\beta}}\Xi(0,\cdot)\|_{L^{2}(\mathbb{R}^{3})}\leq
M(\delta)e^{(a+\delta)\tau}\|\Xi(0,\cdot)\|_{L^{2}(\mathbb{R}^{3})},~{}~{}\forall~{}\tau\geq
0,\Xi\in L^{2}.$ (5.5)
###### Lemma 5.3.
By (5.2), we have the following energy estimate:
$\|\Xi^{lim}\|_{H^{k}}\leq C(k,\eta)e^{a\tau}.$ (5.6)
We consider the following system, the linearization of the homogeneous MHD system (1.1) (i.e., (1.1) with $f=0$) around the axisymmetric steady solution $(\beta v_{0},\beta H_{0},\beta^{2}p_{0})(x)$ in (2.1):
$\left\\{\begin{array}[]{llll}\partial_{t}u+\beta\mathbb{P}(v_{0}\cdot\nabla
u+u\cdot\nabla v_{0}-H_{0}\cdot\nabla B-B\cdot\nabla H_{0})-\Delta u=0,\\\
\partial_{t}B+\beta\mathbb{P}(v_{0}\cdot\nabla B+u\cdot\nabla
H_{0}-H_{0}\cdot\nabla u-B\cdot\nabla v_{0})-\Delta B=0,\\\
\text{div}u=\text{div}B=0,\end{array}\right.$ (5.7)
which can be rewritten as
$\left\\{\begin{array}[]{llll}\partial_{t}\Gamma+A\Gamma+\beta\mathbb{P}(\mathfrak{B}(\Gamma_{0},\Gamma)+\mathfrak{B}(\Gamma,\Gamma_{0}))=0\\\
\text{div}\Gamma=0,\end{array}\right.$ (5.8)
in the sense of distribution, endowed with the initial condition
$(u,B)(0,x)=(u^{0},B^{0})(x)$ (5.9)
where $\Gamma=(u,B)$, $\Gamma_{0}=(v_{0},H_{0})$, $\Gamma^{0}=(u^{0},B^{0})$.
In this case, the solution to (5.8) is formally given by
$\Gamma=e^{t\Delta}\Gamma^{0}-\int_{0}^{t}e^{(t-s)\Delta}\beta\mathbb{P}(\mathfrak{B}(\Gamma_{0},\Gamma)+\mathfrak{B}(\Gamma,\Gamma_{0}))(s)ds.$
(5.10)
###### Lemma 5.4.
[36] Let $\Omega\subset\mathbb{R}^{3}$ be a smooth bounded domain or
$\mathbb{R}^{3}$ itself, and let $1<r\leq q<\infty$ with
$\sigma=\sigma(q,r)=\frac{3}{2}(\frac{1}{r}-\frac{1}{q})\geq 0$. Then
$\displaystyle\|e^{t\Delta}\mathbb{P}f\|_{q}\leq Ct^{-\sigma}\|f\|_{r},$
$\displaystyle\|\nabla e^{t\Delta}\mathbb{P}f\|_{q}\leq
Ct^{-\sigma-\frac{1}{2}}\|f\|_{r}.$ (5.11)
###### Lemma 5.5.
(Parabolic regularity) Assume $a=s(L_{ss}^{\beta})>0$. Then for any
$\sigma_{2}\geq\sigma_{1}\geq 0$ and $\delta>0$, it holds that
$\|e^{\tau L_{ss}^{\beta}}\Xi(0,\cdot)\|_{H^{\sigma_{2}}}\leq
M(\sigma_{1},\sigma_{2},\delta,\beta)e^{(a+\delta)\tau}\tau^{-\frac{\sigma_{2}-\sigma_{1}}{2}}\|\Xi(0,\cdot)\|_{H^{\sigma_{1}}}$
(5.12)
for any $\Xi(0,\cdot)\in L_{\sigma}^{2}\cap H^{\sigma_{1}}(\mathbb{R}^{3})$.
###### Proof.
Firstly, we prove that for any $0\leq m\leq k$ and $\Xi(0,\cdot)\in L_{\sigma}^{2}\cap
H^{m}(\mathbb{R}^{3})$, it holds that
$\|e^{\tau L_{ss}^{\beta}}\Xi(0,\cdot)\|_{H^{k}}\leq
M(k,m)\tau^{-\frac{k-m}{2}}\|\Xi(0,\cdot)\|_{H^{m}},\tau\in(0,2).$ (5.13)
We study the problem in physical variables: setting
$u(x,t)=\frac{1}{\sqrt{t+1}}U\big(\frac{x}{\sqrt{t+1}},\log(t+1)\big),~{}~{}B(x,t)=\frac{1}{\sqrt{t+1}}W\big(\frac{x}{\sqrt{t+1}},\log(t+1)\big),$
$v_{0}(x,t)=\frac{1}{\sqrt{t+1}}V_{0}\big(\frac{x}{\sqrt{t+1}},\log(t+1)\big),~{}~{}H_{0}(x,t)=\frac{1}{\sqrt{t+1}}W_{0}\big(\frac{x}{\sqrt{t+1}},\log(t+1)\big),$
satisfying equation (5.7), which can be expressed as (5.8). Since $\Gamma_{0}$
is a smooth function supported in a bounded domain, we can easily prove that
$\|\Gamma\|_{2}+t^{\frac{1}{2}}\|\nabla\Gamma\|_{2}\leq
C(\Gamma_{0},\beta)\|\Gamma(0,\cdot)\|_{2},~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}t\in(0,10)$
(5.14)
which gives
$\|\Xi\|_{2}+\tau^{\frac{1}{2}}\|\nabla\Xi\|_{2}\leq
C(\Xi_{0},\beta)\|\Xi(0,\cdot)\|_{2},~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\tau\in(0,2)$
(5.15)
The latter implies (5.13) for $k=1$ and $m=0,1$. The general case follows by
induction, studying the equation solved by $\nabla^{k}\Xi$, which has a
structure similar to (5.8) but with forcing and additional lower-order terms.
Secondly, we prove that for any $\delta>0$ it holds that
$\|e^{\tau L_{ss}^{\beta}}\Xi(0,\cdot)\|_{H^{k}}\leq M(k,\delta)e^{\tau(a+\delta)}\|\Xi(0,\cdot)\|_{L^{2}},~{}~{}\tau\geq 2.$ (5.16)
Using the semigroup property, Step 1 with $m=0$, and (5.5), we have, for $0<\kappa<\tau$,
$\displaystyle\|e^{\tau L_{ss}^{\beta}}\Xi(0,\cdot)\|_{H^{k}}$ $\displaystyle=\|e^{\kappa L_{ss}^{\beta}}(e^{(\tau-\kappa)L_{ss}^{\beta}}\Xi(0,\cdot))\|_{H^{k}}$ $\displaystyle\leq M(k)\kappa^{-\frac{k}{2}}\|e^{(\tau-\kappa)L_{ss}^{\beta}}\Xi(0,\cdot)\|_{L^{2}}$ $\displaystyle\leq M(k,\delta)\kappa^{-\frac{k}{2}}e^{(\tau-\kappa)(a+\delta)}\|\Xi(0,\cdot)\|_{L^{2}}.$ (5.17)
The claimed inequality (5.16) follows by choosing $\kappa=1$.
It is immediate to see that the combination of (5.13) and (5.16) gives (5.12)
for integers $\sigma_{2}\geq\sigma_{1}\geq 0$. This completes the proof. ∎
### 5.2 Nonlinear construction
Under the conditions of Theorem 3.1, we consider smooth, compactly supported
velocity and magnetic profiles $(\beta v_{0},\beta H_{0})$. Since
$(u^{lim},B^{lim})$ satisfies (5.7), we find that $(u^{per},B^{per})$ satisfies
the following nonlinear problem:
$\displaystyle\left\\{\begin{array}[]{llll}&\partial_{t}u^{per}+\beta\mathbb{P}(v_{0}\cdot\nabla u^{per}+u^{per}\cdot\nabla v_{0}-H_{0}\cdot\nabla B^{per}-B^{per}\cdot\nabla H_{0})-\Delta u^{per}\\\ &=B\cdot\nabla B-u\cdot\nabla u,\\\ &\partial_{t}B^{per}+\beta\mathbb{P}(v_{0}\cdot\nabla B^{per}+u^{per}\cdot\nabla H_{0}-H_{0}\cdot\nabla u^{per}-B^{per}\cdot\nabla v_{0})-\Delta B^{per}\\\ &=B\cdot\nabla u-u\cdot\nabla B,\\\ &\text{div}\,u^{per}=\text{div}\,B^{per}=0,\end{array}\right.$ (5.23)
where $f_{1}=B\cdot\nabla B-u\cdot\nabla u$ and $f_{2}=B\cdot\nabla u-u\cdot\nabla B$, in which $B(x,t)=B^{lim}+B^{per}$ and $u(x,t)=u^{lim}+u^{per}$. In similarity variables,
$\begin{split}&\xi=\frac{x}{\sqrt{t}},~{}~{}~{}\tau=\log t,~{}~{}~{}f(x,t)=\frac{1}{t^{\frac{3}{2}}}F(\xi,\tau),\\\ &u^{lim}(x,t)=\frac{1}{\sqrt{t}}V^{lim}(\xi,\tau),~{}~{}B^{lim}(x,t)=\frac{1}{\sqrt{t}}W^{lim}(\xi,\tau),\\\ &u^{per}(x,t)=\frac{1}{\sqrt{t}}V^{per}(\xi,\tau),~{}~{}B^{per}(x,t)=\frac{1}{\sqrt{t}}W^{per}(\xi,\tau),\end{split}$
then (5.23) can be expressed as
$\displaystyle\left\\{\begin{array}[]{llll}&\partial_{\tau}V^{per}-\frac{1}{2}(1+\xi\cdot\nabla_{\xi})V^{per}-\Delta_{\xi}V^{per}+\beta\mathbb{P}(V_{0}\cdot\nabla
V^{per}+V^{per}\cdot\nabla V_{0}-W_{0}\cdot\nabla W^{per}\\\
&-W^{per}\cdot\nabla W_{0})=F_{1},\\\
&\partial_{\tau}W^{per}-~{}\frac{1}{2}(1+\xi\cdot\nabla_{\xi})W^{per}-\Delta_{\xi}W^{per}+\beta\mathbb{P}(V_{0}\cdot\nabla
W^{per}+V^{per}\cdot\nabla W_{0}-W_{0}\cdot\nabla V^{per}\\\
&-W^{per}\cdot\nabla V_{0})=F_{2}\\\
&\text{div}V^{per}=\text{div}W^{per}=0,\end{array}\right.$ (5.29)
where
$\displaystyle F_{1}$
$\displaystyle=(W^{lim}+W^{per})\cdot\nabla(W^{lim}+W^{per})-(V^{lim}+V^{per})\cdot\nabla(V^{lim}+V^{per})$
$\displaystyle=W^{lim}\cdot\nabla W^{lim}+W^{per}\cdot\nabla
W^{lim}+W^{lim}\cdot\nabla W^{per}+W^{per}\cdot\nabla
W^{per}-V^{lim}\cdot\nabla V^{lim}$ $\displaystyle-V^{per}\cdot\nabla
V^{lim}-V^{lim}\cdot\nabla V^{per}-V^{per}\cdot\nabla V^{per}$ (5.30)
and
$\displaystyle F_{2}$
$\displaystyle=(W^{lim}+W^{per})\cdot\nabla(V^{lim}+V^{per})-(V^{lim}+V^{per})\cdot\nabla(W^{lim}+W^{per})$
$\displaystyle=W^{lim}\cdot\nabla V^{lim}+W^{per}\cdot\nabla
V^{lim}+W^{lim}\cdot\nabla V^{per}+W^{per}\cdot\nabla
V^{per}-V^{lim}\cdot\nabla W^{lim}$ $\displaystyle-V^{per}\cdot\nabla
W^{lim}-V^{lim}\cdot\nabla W^{per}-V^{per}\cdot\nabla W^{per}.$ (5.31)
Note that, using the same method as in Section 2, we can rewrite equation
(5.29) as
$\partial_{\tau}\Xi^{per}-\frac{1}{2}(1+\xi\cdot\nabla_{\xi})\Xi^{per}-\Delta_{\xi}\Xi^{per}+\beta\mathbb{P}(\mathfrak{B}(\Xi_{0},\Xi^{per})+\mathfrak{B}(\Xi^{per},\Xi_{0}))=F,$
(5.32)
where $\Xi^{per}=(V^{per}(\xi,\tau),W^{per}(\xi,\tau))$,
$\Xi_{0}=(V_{0}(\xi,\tau),W_{0}(\xi,\tau))$, $F=(F_{1},F_{2})$ and
$\mathbb{P}$ is the Leray projector. We define the total energy by
$\|\Xi^{per}\|_{\mathbb{X}}=\|(V^{per},W^{per})\|_{\mathbb{X}}:=\sup_{\tau<T}e^{-(a+\varepsilon_{0})\tau}(\|V^{per}(\cdot,\tau)\|_{H^{N}}+\|W^{per}(\cdot,\tau)\|_{H^{N}}).$
###### Proposition 5.6.
Assume $a=s(L_{ss}^{\beta})>0$ and $N>\frac{5}{2}$ is an integer. Then there
exist $T=T(\Xi_{0},\Xi^{lim})$, $\varepsilon_{0}>0$ and $\Xi^{per}\in
C((-\infty,T];H^{N}(\mathbb{R}^{3};\mathbb{R}^{3}))$, a solution to (5.32),
such that
$\|\Xi^{per}(\cdot,\tau)\|_{H^{N}}\leq e^{(a+\varepsilon_{0})\tau},~{}~{}~{}~{}for~{}any~{}\tau<T.$ (5.33)
###### Proof.
Firstly, we introduce the Banach space
$\mathbb{X}:=\\{\Xi^{per}\in
C((-\infty,T];H^{N}(\mathbb{R}^{3};\mathbb{R}^{3})):\sup_{\tau<T}e^{-(a+\varepsilon_{0})\tau}\|\Xi^{per}(\cdot,\tau)\|_{H^{N}}<\infty\\}$
with the norm
$\|\Xi^{per}\|_{\mathbb{X}}:=\sup_{\tau<T}e^{-(a+\varepsilon_{0})\tau}\|\Xi^{per}(\cdot,\tau)\|_{H^{N}}$
By Duhamel’s formula, a solution of (5.32) can be expressed as a fixed point of the functional
$\mathcal{T}(\Xi^{per})(\cdot,\tau)=-\int_{-\infty}^{\tau}e^{(\tau-s)L_{ss}^{\beta}}\mathbb{P}F\,ds.$ (5.34)
By parabolic regularity theory, any $\Xi^{per}\in\mathbb{X}$ such that
$\mathcal{T}(\Xi^{per})=\Xi^{per}$ is a solution to (5.32) satisfying the statement of
the proposition; the existence of such a fixed point follows from the contraction
mapping principle via the next proposition. ∎
###### Proposition 5.7.
Let $B_{\mathbb{X}}$ be the closed unit ball of $\mathbb{X}$. Then for $T$
sufficiently large and negative, and $N>\frac{5}{2}$, the map
$\mathcal{T}:B_{\mathbb{X}}\rightarrow B_{\mathbb{X}}$ is a contraction.
###### Proof.
According to the structure of the forcing $F$, $\mathcal{T}$ splits into three
terms, $\mathcal{T}(E^{per})\simeq B(E^{per},E^{per})+L(\cdot,E^{per})+G$,
defined by
$G=\int_{-\infty}^{\tau}e^{(\tau-s)L_{ss}^{\beta}}(W^{lim}\cdot\nabla
E^{lim,\perp}-V^{lim}\cdot\nabla E^{lim})ds=G^{0,1}+G^{0,2}$ $\displaystyle
L(\cdot,E^{per})=\int_{-\infty}^{\tau}e^{(\tau-s)L_{ss}^{\beta}}(W^{lim}\cdot\nabla
E^{per,\perp}+W^{per}\cdot\nabla E^{lim,\perp}-V^{lim}\cdot\nabla
E^{per}-V^{per}\cdot\nabla E^{lim})ds$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}=L^{1,1}+L^{1,2}+L^{1,3}+L^{1,4}$
and
$B(E^{per},E^{per})=\int_{-\infty}^{\tau}e^{(\tau-s)L_{ss}^{\beta}}(W^{per}\cdot\nabla
E^{per,\perp}-V^{per}\cdot\nabla E^{per})ds=B^{2,1}+B^{2,2},$
where $E^{\perp}=(W,V)$, $G\in\mathbb{X}$, $L:\mathbb{X}\rightarrow\mathbb{X}$
is a bounded linear operator and
$B:\mathbb{X}\times\mathbb{X}\rightarrow\mathbb{X}$ is a bounded bilinear
form. By a simple computation, we can obtain that
$\|\mathcal{T}(E^{per}_{1}-E^{per}_{2})\|_{\mathbb{X}}\leq(2\|B\|+\|L\|)\|E^{per}_{1}-E^{per}_{2}\|_{\mathbb{X}}.$
(5.35)
To prove that the operator $\mathcal{T}$ is a contraction, it suffices to show
that $2\|B\|+\|L\|<1$.
Firstly, we apply Lemma 5.5 to get
$\|B^{2,1}\|_{H^{N+\frac{1}{2}}}\leq\int_{-\infty}^{\tau}(\tau-s)^{-\frac{3}{4}}e^{(\tau-s)(a+\delta)}\|(W^{per}\cdot\nabla
E^{per,\perp})\|_{H^{N-1}}(s)ds.$ (5.36)
Since the space $H^{N-1}$ is an algebra (recall $N-1>\frac{3}{2}$), we have
$\|W^{per}\cdot\nabla E^{per,\perp}\|_{H^{N-1}}\leq C(N)\|E^{per}\|_{H^{N}}^{2}\leq C(N)e^{2(a+\varepsilon_{0})s}\|E^{per}\|_{\mathbb{X}}^{2}$ (5.37)
and, using that $a+2\varepsilon_{0}-\delta>0$, we obtain
$\displaystyle\|B^{2,1}\|_{H^{N+\frac{1}{2}}}$ $\displaystyle\leq
C(N,\delta)\|E^{per}\|_{\mathbb{X}}^{2}\int_{-\infty}^{\tau}\frac{e^{(\tau-s)(a+\delta)}e^{2(a+\varepsilon_{0})s}}{(\tau-s)^{\frac{1}{2}}}ds\leq
C(N,\delta)e^{2(a+\varepsilon_{0})\tau}\|E^{per}\|_{\mathbb{X}}^{2}.$ (5.38)
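Indeed, the last inequality follows by an explicit computation: substituting $r=\tau-s$ and using $a+2\varepsilon_{0}-\delta>0$,
$\int_{-\infty}^{\tau}\frac{e^{(\tau-s)(a+\delta)}e^{2(a+\varepsilon_{0})s}}{(\tau-s)^{\frac{1}{2}}}ds=e^{2(a+\varepsilon_{0})\tau}\int_{0}^{\infty}r^{-\frac{1}{2}}e^{-(a+2\varepsilon_{0}-\delta)r}dr=e^{2(a+\varepsilon_{0})\tau}\sqrt{\frac{\pi}{a+2\varepsilon_{0}-\delta}},$
so the integral converges and contributes a constant depending only on $a$, $\varepsilon_{0}$ and $\delta$.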
Hence, $\|B^{2,1}\|_{\mathbb{X}}\leq
C(N,\delta)e^{T(a+\varepsilon_{0})}\|E^{per}\|_{\mathbb{X}}^{2}$. As a consequence of
Lemma 5.5, we get
$\displaystyle\|L^{1,1}\|_{H^{N}}$ $\displaystyle\leq
M(N,\delta)\int_{-\infty}^{\tau}(\tau-s)^{-\frac{1}{2}}e^{(a+\varepsilon_{0})(\tau-s)}\|W^{lim}\cdot\nabla
E^{per}\|_{H^{N-1}}ds$ $\displaystyle\leq
M(N,\delta)\int_{-\infty}^{\tau}(\tau-s)^{-\frac{1}{2}}e^{(a+\varepsilon_{0})(\tau-s)}e^{(2a+\varepsilon_{0})s}\|E^{per}\|_{\mathbb{X}}ds$
(5.39)
By employing Lemma 5.3 and Lemma 5.5, we deduce
$\|L^{1,1}\|_{H^{N}}\leq
C(N,\delta,a)e^{(2a+\varepsilon_{0})\tau}\|E^{per}\|_{\mathbb{X}}$ (5.40)
and
$\displaystyle\|G^{0,1}\|_{H^{N}}$
$\displaystyle\leq\int_{-\infty}^{\tau}e^{(a+\delta)(\tau-s)}\|(W^{lim}\cdot\nabla
E^{lim,\perp})\|_{H^{N}}ds$
$\displaystyle\leq\int_{-\infty}^{\tau}e^{(a+\delta)(\tau-s)}e^{2as}ds\leq
C(N,\delta,a)e^{2a\tau},$ (5.41)
provided $\delta<a$, as a consequence of Lemma 5.3 and Lemma 5.5, which leads
to the estimates
$\|L^{1,1}\|_{\mathbb{X}}\leq
C(N,\delta,a)e^{aT}\|E^{per}\|_{\mathbb{X}},~{}~{}~{}\|G^{0,1}\|_{\mathbb{X}}\leq
e^{(a-\varepsilon_{0})T}.$
Similar estimates can be carried out for $B^{2,2}(V^{per},E^{per})$,
$L^{1,2}(W^{per},E^{lim})$, $L^{1,3}(V^{lim},E^{per})$,
$L^{1,4}(V^{per},E^{lim})$ and $G^{0,2}(V^{lim},E^{lim})$.
Based on the estimates above, it follows that for $T$ sufficiently large and
negative and $\|E^{per}\|_{\mathbb{X}}\leq 1$, we can make $2\|B\|+\|L\|<1$,
which gives $\|\mathcal{T}(E^{per})\|_{\mathbb{X}}\leq 1$. That is to say,
$\mathcal{T}|_{B_{\mathbb{X}}}$ is contractive. This finishes the proof. ∎
In Theorem 5.1, $\beta\Xi_{0}$ is the solution of (1.10) obtained by choosing the force
$\widetilde{F}(\xi,\tau)$. Let $E^{per}\in\mathbb{X}$ be the unique fixed
point of $\mathcal{T}$ guaranteed by Proposition 5.6; we have shown that $E^{per}$
decays like $e^{(a+\varepsilon_{0})\tau}$ as $\tau\rightarrow-\infty$. By
induction, we can bootstrap this to $O(e^{2a\tau})$ decay in $H^{N}$ for
$N>\frac{5}{2}$. We can thus construct $E=\beta\Xi_{0}+E^{lim}+E^{per}$ solving
(1.10) which is not equal to $\beta\Xi_{0}$. Finally, undoing the similarity-variable
transform for both $\beta\Xi_{0}$ and $E$ gives a pair of distinct
solutions of (1.1).
## Data Availability Statements
Data sharing not applicable to this article as no datasets were generated or
analysed during the current study.
## References
* [1] T. Li and T. Qin, Physics and partial differential equations, Vol. II. Translated from the 2000 Chinese edition by Yachun Li. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA; Higher Education Press, Beijing, 2014.
* [2] D. Biskamp, Nonlinear Magnetohydrodynamics, Cambridge Monographs on Plasma Physics. Cambridge University Press, 1993.
* [3] D. Biskamp, Magnetohydrodynamic Turbulence, Cambridge University Press, 2003.
* [4] P. A. Davidson, An Introduction to Magnetohydrodynamics, Cambridge Texts in Applied Mathematics. Cambridge University Press, 2001.
* [5] M. Sermange and R. Temam, Some mathematical questions related to the MHD equations, Comm. Pure Appl. Math., 36(5):635-664, 1983.
* [6] J. Leray, Sur le mouvement d’un liquide visqueux emplissant l’espace, Acta Math., 63(1):193-248, 1934.
* [7] E. Hopf, Über die Anfangswertaufgabe für die hydrodynamischen Grundgleichungen, Math. Nachr., 4:213-231, 1951.
* [8] G. Duvaut and J. L. Lions, Inéquations en thermoélasticité et magnétohydrodynamique, Arch. Rational Mech. Anal., 46:241-279, 1972.
* [9] M. Vishik, Instability and non-uniqueness in the Cauchy problem for the Euler equations of an ideal incompressible fluid, part i, 2018.
* [10] M. Vishik, Instability and non-uniqueness in the Cauchy problem for the Euler equations of an ideal incompressible fluid, part ii, 2018.
* [11] D. Albritton, E. Brué, M. Colombo, C. De Lellis, V. Giri, M. Janisch, and H. Kwon, Instability and nonuniqueness for the 2d Euler equations in vorticity form, after M. Vishik, 2021.
* [12] D. Albritton, E. Brué and M. Colombo, Non-uniqueness of Leray solutions of the forced Navier-Stokes equations, Annals of Mathematics, 196:415-455, 2022.
* [13] H. Jia and V. Šverák, Local-in-space estimates near initial time for weak solutions of the Navier-Stokes equations and forward self-similar solutions, Invent. Math., 196(1):233-265, 2014.
* [14] H. Jia and V. Šverák, Are the incompressible 3d Navier-Stokes equations locally ill-posed in the natural energy space?, J. Funct. Anal., 268(12):3734-3766, 2015.
* [15] J. Guillod and V. Šverák, Numerical investigations of non-uniqueness for the Navier-Stokes initial value problem in borderline spaces, arXiv:1704.00560.
* [16] T. Buckmaster and V. Vicol, Nonuniqueness of weak solutions to the Navier-Stokes equation, Ann. of Math. (2), 189(1):101-144, 2019.
* [17] T. Buckmaster, M. Colombo and V. Vicol, Wild solutions of the Navier-Stokes equations whose singular sets in time have Hausdorff dimension strictly less than 1, J. Eur. Math. Soc. (JEMS), 24, 2022.
* [18] R. Beekie, T. Buckmaster and V. Vicol, Weak solutions of ideal MHD which do not conserve magnetic helicity, Ann. PDE, 6(1), 2020.
* [19] Y. Li, Z. Zeng and D. Zhang, Non-uniqueness of weak solutions to 3D magnetohydrodynamic equations, J. Math. Pures Appl. (9), 165:232-285, 2022.
* [20] Y. Li, Z. Zeng and D. Zhang, Sharp non-uniqueness of weak solutions to 3D magnetohydrodynamic equations, arXiv:2208.00624.
* [21] Z. Lin and Y. Wang, Dynamical magneto-rotational instability, arXiv:2310.10075.
* [22] A. J. Meir, The equations of stationary, incompressible magnetohydrodynamics with mixed boundary conditions, Comput. Math. Appl., 25(12):13-29, 1993.
* [23] H. Tasso and G. N. Throumoulopoulos, Axisymmetric ideal magnetohydrodynamic equilibria with incompressible flows, Physics of Plasmas, 5(6):2378-2383, 1998.
* [24] L. Rayleigh, On the stability, or instability, of certain fluid motions. Proc. London Math. Soc., 11:57-70, 1880.
* [25] P. G. Drazin and W. H. Reid. Hydrodynamic stability, Cambridge University Press, Cambridge, 2004.
* [26] J. L. Synge, The stability of heterogeneous liquids, Trans. Roy. Soc. Canada, 27(3):1-18, 1933.
* [27] E. P. Velikhov, Stability of an Ideally Conducting Liquid Flowing Between Cylinders Rotating in a Magnetic Field, J. Exptl. Theoret. Phys., 36:1398-1404, 1959.
* [28] S. Chandrasekhar, The stability of non-dissipative Couette flow in hydromagnetics, Proc. Natl. Acad. Sci., 46(2):253-257, 1960.
* [29] S. Balbus, Enhanced angular momentum transport in accretion disks, Annu. Rev. Astron. Astrophys, 41, 555-597, 2003.
* [30] S. Balbus and J. Hawley, A powerful local shear instability in weakly magnetized disks. I-Linear analysis. II-Nonlinear evolution, The Astrophysical Journal, 376: 214-233, 1991.
* [31] S. Balbus and J. Hawley, Instability, turbulence, and enhanced transport in accretion disk, Reviews of Modern Physics, 70(1):1-53, 1998.
* [32] J. M. Stone, Astrophysical magnetohydrodynamics, Bull. Astr. Soc. India, 39:129-143, 2011.
* [33] Z. Lin and C. Zeng, Separable Hamiltonian PDEs and turning point principle for stability of gaseous stars, Comm. Pure Appl. Math., 75(11):2511-2572, 2022.
* [34] K.-J. Engel and R. Nagel, One-Parameter Semigroups for Linear Evolution Equations, Grad. Texts in Math. 194, Springer-Verlag, New York, 2000.
* [35] T. Kato, Perturbation theory for linear operators, Die Grundlehren der mathematischen Wissenschaften, Band 132. Springer-Verlag New York, Inc., New York, 1966.
* [36] T. P. Tsai, Lectures on Navier-Stokes equations, Graduate Studies in Mathematics, 192. American Mathematical Society, Providence, RI, 2018.
# Lightning Creation Games
Zeta Avarikioti, TU Wien, Vienna, Austria, <EMAIL_ADDRESS>
Tomasz Lizurej, University of Warsaw $\&$ IDEAS NCBR, Warsaw, Poland, <EMAIL_ADDRESS>
Tomasz Michalak, University of Warsaw $\&$ IDEAS NCBR, Warsaw, Poland, <EMAIL_ADDRESS>
Michelle Yeo, Institute of Science and Technology Austria, Klosterneuburg, Austria, <EMAIL_ADDRESS>
###### Abstract
Payment channel networks (PCNs) are a promising solution to the scalability
problem of cryptocurrencies. Any two users connected by a payment channel in
the network can theoretically send an unbounded number of instant, costless
transactions between them. Users who are not directly connected can also
transact with each other in a multi-hop fashion. In this work, we study the
incentive structure behind the creation of payment channel networks,
particularly from the point of view of a single user that wants to join the
network. We define a utility function for a new user in terms of expected
revenue, expected fees, and the cost of creating channels, and then provide
constant factor approximation algorithms that optimise the utility function
given a certain budget. Additionally, we take a step back from a single user
to the whole network and examine the parameter spaces under which simple graph
topologies form a Nash equilibrium.
###### Index Terms:
Payment channel networks, Nash Equilibrium, Blockchain, Network design, Layer
2, Bitcoin
## I Introduction
One of the critical limitations of the major cryptocurrencies, such as Bitcoin
or Ethereum, is their low transaction throughput [1, 2, 3]. For instance,
given Bitcoin’s block size limit of 1MB and the average block creation time of
10 minutes, its throughput is limited to tens of transactions per second. This
is clearly not enough to facilitate the widespread everyday use of Bitcoin.
For comparison, credit card payment systems such as VISA handle approximately
7K transactions per second [4].
Payment Channel Networks (PCNs), such as Bitcoin’s Lightning Network [5] and
Ethereum’s Raiden Network [6], are second-layer solutions that are designed to
address the above scalability problem. The core idea is to process the
majority of transactions off-chain by enabling nodes to establish bilateral
payment channels; each channel acts as a joint account between the channel
participants. To preserve security, opening a channel requires depositing
funds to a shared address on-chain. These funds serve as secure collateral to
possibly many off-chain transactions between both parties. When the channel is
closed, the final balance is settled on-chain.
Importantly, each node can establish such payment channels with many other
nodes. This gives rise to a network that allows for funds transfers to non-
neighbors through a path of intermediaries. Because opening and maintaining a
channel requires locking up funds, serving as an intermediary results in
opportunity costs. To mitigate this cost, intermediary nodes earn transaction
fees for their services.
The protocols underlying PCNs have attracted a lot of attention in the
literature [7]. In addition to analyzing cryptographic underpinnings of the
PCN’s security proofs [8, 9], an effort has been made to understand game-
theoretic aspects of these networks either with respect to security e.g., [10,
11, 12, 13, 14], or economics, e.g., [15, 16].
A particularly interesting question is how the nodes should choose _where to
connect_ to a PCN and _what portion of a budget should be locked_ to distinct
channels. This is important as their choice not only affects the situation of
individual nodes but also influences the resulting network as a whole.
However, this issue has received little attention in the literature. In fact, most
PCN implementations (e.g., the Lightning Network) still propose only simple
heuristics for new nodes, such as connecting to a trusted peer or a hub.
In this work, we answer this question by first presenting several attachment
strategies for newly-joining nodes in a PCN. The first key challenge to this
task is to define the new node’s utility function that accurately reflects the
key objectives of new PCN users. A newcomer has to weigh the cost of creating
channels and locking up capital against the profits stemming from these
connections and the node’s position in the network. Furthermore, the utility
function should be efficiently computable, so that it can be used in practice
by new nodes, posing a second challenge.
Unfortunately, the models of the utility function considered so far in the
literature do not take all the above aspects into account. In particular,
Guasoni et al. [17] analyse the cost of channel creation, and establish
conditions under which two parties would create unidirectional or
bidirectional channels between themselves, as opposed to transacting on-chain.
However, the utility function in [17] only accounts for the cost of channel
creation but neglects profits from routing transactions and fees a user could
encounter. Avarikioti et al. [18, 19] and Ersoy et al. [20], on the other
hand, account for fees and profits from routing transactions through the PCN
but neglect the opportunity costs from the locked capital and consider only a
simplified transaction model where users transact with each other with uniform
probability.
We take up the first challenge to define a utility function that accurately
depicts the gains and costs of newly joining nodes. In particular, we account
for on-chain costs for opening channels, routing fees paid to and by the node
due to its position in the network, and opportunity costs for locked capital.
We further leverage a realistic transaction distribution where nodes transact
with other nodes with probability proportional to their degree, inspired by
the well-known Barabási-Albert preferential attachment model [21]. We believe
this transaction distribution approximates well real-life scenarios where
nodes transact more often with big vendors and service providers. We further
address the second challenge by providing a series of approximation algorithms
to efficiently compute the optimal connection strategy for newly-joining
nodes. The approximation ratio and runtime of each algorithm depend on how
much freedom the node has to distribute its budget on the channels,
highlighting an interesting trade-off.
Apart from the myopic analysis for a single joining node, we also examine the
effect our strategies may have on the topological structure of a PCN. In
particular, we examine simple graph structures, i.e., path, circle, and star
graphs, to determine under which conditions these constitute stable graphs,
where no node may increase its utility by changing its strategy (Nash
equilibrium). Naturally, which topologies are stable or not heavily depends on
the parameters of the transaction distribution. We thus identify the exact
parameter space in which each topology constitutes a Nash equilibrium.
In summary, our contributions are as follows:
* •
We extend the utility function of [19] to incorporate a _realistic transaction
model and opportunity costs_. To that end, we consider transaction
distributions where users transact with other users in proportion to their
degree instead of uniformly at random as in [19, 20, 18].
* •
We provide a series of _approximation algorithms_ that maximize our utility
function under different constraints. In particular, we identify a _trade-off
between the runtime of the algorithm and capital distribution constraints_ ,
i.e., how much capital is locked in each channel at its creation.
* •
Finally, we examine simple graph topologies and determine under which
parameter space of the transaction distribution, they form _Nash equilibria_.
## II The Model
In this section, we outline our model which is an extension of the model
introduced in [19]. We alleviate several unrealistic assumptions introduced in
[19], thus providing more meaningful insights on the connection strategies and
expected network structure of PCNs. We indicate these assumptions below.
### II-A Payment channel networks and routing fees
Payment channels provide a way for users on the blockchain to transact
directly with each other off-chain, thereby avoiding the high fees and latency
involved in transacting on the blockchain. Any two users on the blockchain can
open a payment channel with each other by locking some of their funds to be
used only in this channel, much like opening a joint account in a bank. Once
the channel is created, both users can send each other coins by updating the
channel balances in favour of the other party (see Figure 1 for an example).
For each payment (channel balance update), the respective capital must be
respected, meaning that a party cannot send more coins than it currently owns
to the counterparty. To close their payment channel, the parties post on-chain
a transaction that depicts the latest mutually agreed distribution of their
funds. The closing transaction can be posted either in collaboration or
unilaterally by one channel party. Note that _posting a transaction on-chain
bears a cost_: the fee to the miner that includes the transaction on the
blockchain.
A payment channel network comprises several two-party channels among users
of the blockchain. Each user of the network is represented by a vertex while
each (bidirectional) channel among two parties is represented by $2$ directed
edges (one in each direction) connecting the two vertices corresponding to the
parties. We model each bidirectional channel as $2$ directed edges to take
into account the balance on both ends of the channel which can be different
and thus impose different limits on the payment amount that can be sent in
each direction. More concretely, let us represent the topology of a payment
channel network with a directed graph $G=(V,E)$ with $|V|=n$, and $|E|=m$. For
node $u\in V$, let $Ne(u)$ denote the set of in- and out-neighbors of $u$.
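To make the graph model concrete, the following minimal sketch (ours, not from the paper; it assumes the `networkx` library, and all names are illustrative) represents each bidirectional channel as two directed edges whose attributes carry the balances at each end:

```python
import networkx as nx

G = nx.DiGraph()

def open_channel(G, u, v, balance_u, balance_v):
    """Open a channel between u and v with the given initial balances."""
    G.add_edge(u, v, balance=balance_u)  # funds owned by u, spendable u -> v
    G.add_edge(v, u, balance=balance_v)  # funds owned by v, spendable v -> u

def pay(G, u, v, amount):
    """Direct payment of `amount` from u to v; fails if u owns too little."""
    if G[u][v]["balance"] < amount:
        return False
    G[u][v]["balance"] -= amount
    G[v][u]["balance"] += amount
    return True

# The scenario of Figure 1: a payment larger than b_u fails.
open_channel(G, "u", "v", balance_u=5, balance_v=3)
assert pay(G, "u", "v", 2)        # succeeds: b_u 5 -> 3, b_v 3 -> 5
assert not pay(G, "u", "v", 6)    # fails: 6 > b_u = 3
```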
Figure 1: Example of payments going through a channel between $2$ users $u$
and $v$. $b_{u}$ and $b_{v}$ denote the balances of $u$ and $v$ in the channel
and are updated with every successful payment. The last payment of size $6$
going from $u$ to $v$ is unsuccessful as the size of the payment is larger
than $b_{u}=5$.
Users who are not directly connected with a channel can still transact with
each other if there exists a path of channels between them in the PCN graph.
For instance, if Alice and Carol do not share a channel, but Alice shares a
channel with Bob and Bob with Carol then Alice may send coins to Bob and then
Bob to Carol111There exist techniques, namely HTLCs, to ensure that the
transactions on a path will be executed atomically, either all or none, so the
intermediaries do not lose any funds [5].. However, each channel must hold
enough coins to ensure the feasibility of the transaction routing. In our
previous example, if Alice wants to send to Carol 5 coins through Bob, then
Alice must own at least 5 coins in her channel with Bob, and Bob at least 5
coins in his channel with Carol, at the time of the transaction execution. The
users along the transaction path who are not the sender or receiver (e.g.,
Bob) typically _charge a fee for forwarding the transaction_ that depends on
the transaction amount and is publicly announced. The precise form of the fee
function for forwarding transactions is determined by the user who owns the
coins. That is, given a payment channel $(u,v)$ and a fixed transaction amount
of $t$, the fees incurred from forwarding $t$ can even differ depending on
whether $t$ is forwarded from $u$ to $v$ or from $v$ to $u$.
In our model, we assume transactions ($tx$) are of size at most $T>0$ and all
intermediary nodes take the same – global – fee function
$F:[0,T]\longrightarrow\mathbb{R}^{+}$ which is an abstraction for an average
fee function. We denote by $f_{avg}$ the value of the average fee when using
the global fee function $F$. That is, $f_{avg}=\int_{0}^{T}p_{tx\
\text{size}=t}\cdot F(t)dt$, where $p_{\text{tx size}=t}$ is a global
probability of occurrence of a transaction with size $t$. We assume that
$f_{avg}$ is publicly known (recall that the fee functions are publicly
announced in PCNs).
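As an illustration (our sketch, not part of the model; the concrete fee function and transaction-size density below are hypothetical placeholders), $f_{avg}$ can be approximated numerically from $F$ and the size distribution:

```python
import numpy as np

T = 100.0                        # maximum transaction size
t = np.linspace(0.0, T, 10_001)
dt = t[1] - t[0]

# Hypothetical global fee function F: a base fee plus a proportional part.
F = 1.0 + 0.001 * t

# Hypothetical density of transaction sizes on [0, T] (truncated
# exponential), normalised so that it integrates to 1.
p = np.exp(-t / 10.0)
p /= p.sum() * dt

# f_avg = integral over [0, T] of p_{tx size = t} * F(t) dt
f_avg = (p * F).sum() * dt
print(f"f_avg ~ {f_avg:.4f}")
```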
### II-B PCN transactions
In the following, we alleviate the assumption of [19] that transactions are
uniformly distributed among the PCN users, and introduce a more realistic
transaction model.
Transactions: Let $N_{u}$ denote the average number of transactions sent from
user $u$ over a unit of time. We denote with $N$ the sum of the number of all
transactions sent by users in a unit of time $N=\sum_{v\in V}N_{v}$. We assume
a user $u$ joining the network knows the distribution of transactions in the
network. These assumptions equally allow each user to estimate the mean rate
(denoted by $\lambda_{uv}$) of transactions going along any directed edge
$(u,v)$ which, we assume, follows a Poisson process with rate $\lambda_{uv}$.
We also stress that this estimation can be done efficiently in time
$\mathcal{O}(n^{2})$, by calculating shortest paths using e.g., Dijkstra’s
algorithm [22] for each pair of nodes in the network.
Reduced subgraph with updated capacities: The topology of the PCN can change
with the size of transactions due to balance constraints: some directed edges
do not have enough capacity to forward transactions when the transaction size
is too large. However, given that we assume users know the distribution of
transactions in the network, and that the capacity and time of channel
creation are publicly posted on the blockchain, users can estimate the
expected balance on each end of all channels in the network. Thus, for the
rest of the paper, we consider that all our proposed algorithms for a given
transaction of size $x$ are computed on a subgraph $G^{\prime}$ of the
original PCN $G$ that only takes into account directed edges that have enough
capacity to forward $x$.
Transaction distribution: In the topological studies included in this work, we
assume that the probability that any two users transact with each other is
proportional to their degree. Specifically, we use the Zipf distribution [23]
to model the occurrence of any two users transacting with each other. That is,
assume for a user $u$ a ranking of all other users in the network according to
their degree, breaking ties arbitrarily: the highest-degree vertex is
given rank 1, the second highest is given rank 2, etc. Then for some user-
specific parameter $s_{u}>0$, the probability $p^{trans}_{u,v}$ that $u$
transacts with another user $v\in V\setminus\\{u\\}$ with rank $k$ is:
$p^{trans}_{u,v}=\frac{1/k^{s_{u}}}{\sum_{i=1}^{n}1/i^{s_{u}}}.$ (1)
We note that the Zipf distribution is widely used in the natural and social
sciences for modelling data with a power law distribution [24, 25]. It is also
frequently used in the context of social networks [26] and thus seems a
natural model for approximating the probability of any 2 users transacting in
a payment channel network.
Let the edge betweenness centrality be defined as:
$EBC(e):=\sum_{s,r\in V;s\neq r;m(s,r)>0}\frac{m_{e}(s,r)}{m(s,r)},$
where $m_{e}(s,r)$ is the number of shortest paths that traverse through the
edge $e$ and $m(s,r)$ is the total number of shortest paths from $s$ to $r$.
The transaction rate $\lambda_{e}$ for all directed edges $e$ in $E$ can be
estimated by the edge betweenness centrality of the edge $e$ weighted by the
probability of any two vertices $s$ and $r$ transacting with each other. That
is, for a directed edge $e$, we first define the probability $p_{e}$ that the
edge $e$ is chosen in a single transaction:
$p_{e}=\sum_{s,r\in V;s\neq
r;m(s,r)>0}\frac{m_{e}(s,r)}{m(s,r)}p^{trans}_{s,r}.$ (2)
Let $N$ denote the average total number of transactions sent out by users of
the network in a unit of time. We assume these transactions are
independent. The average number of times a directed edge $e=(u,v)$ is chosen
in $N$ transactions is the transaction rate $\lambda_{e}$ and is simply
$N\cdot p_{e}$.
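A direct (if naive) way to carry out this estimation is sketched below (our code, assuming `networkx`; `p_trans` is a dictionary mapping ordered pairs $(s,r)$ to $p^{trans}_{s,r}$). It follows Equation 2 literally:

```python
import networkx as nx
from collections import defaultdict

def edge_rates(G, p_trans, N):
    """Estimate lambda_e = N * p_e for every directed edge e, where p_e
    sums the shortest-path fractions m_e(s,r)/m(s,r) weighted by p_trans."""
    rates = defaultdict(float)
    for s in G:
        for r in G:
            if s == r or not nx.has_path(G, s, r):
                continue
            paths = list(nx.all_shortest_paths(G, s, r))
            m = len(paths)                         # m(s, r)
            for path in paths:
                for e in zip(path, path[1:]):      # directed edges on the path
                    rates[e] += p_trans[(s, r)] / m
    return {e: N * p_e for e, p_e in rates.items()}
```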
In this work, we slightly modify the original Zipf distribution to ensure that
the probability of any user transacting with two other distinct users having
the same degree is equal. We do this by simply averaging the Zipf probability
of transacting with every user with the same degree. Below we propose a
detailed method of calculating the probability that a given node $u$ transacts
with any other node $v$ in the network.
Given a network $G=(V,E)$, we first consider the subgraph
$G^{\prime}=(V^{\prime}=V\setminus\\{u\\},E^{\prime})$ which is created by
removing the node $u$ and all of its incident edges from $G$. Then, we sort all
nodes in $V^{\prime}$ by their _in-degree_ and then assign a _rank-factor_
$rf(v)$ to each node $v$ in $V^{\prime}$. Since we want to ensure that every
node with the same in-degree has the same rank-factor, we simply average the
ranks of nodes with the same in-degree. In more detail, let $r_{0}(v)$ denote
the smallest rank of a node $v^{\prime}\in V^{\prime}$ such that the in-degree
of $v^{\prime}$ is equal to the in-degree of $v$. Let $n(v)$ be the number of
nodes in $V^{\prime}$ with the same in-degree as $v$. The rank factor of $v$
can be computed as follows:
$rf(v)=\frac{\frac{1}{r_{0}(v)^{s}}+\ldots+\frac{1}{(r_{0}(v)+n(v)-1)^{s}}}{n(v)}$
The probability that $u$ transacts with $v\in V^{\prime}$ is then:
$p^{trans}_{u,v}=\frac{rf(v)}{\sum_{v^{\prime}\in V^{\prime}}rf(v^{\prime})}$
Finally, observe that the modified Zipf distribution is order-preserving in
the following sense: if $r_{0}(v_{1})+n(v_{1})\leq r_{0}(v_{2})$, i.e., $v_{1}$
has strictly larger in-degree than $v_{2}$, then $rf(v_{1})>rf(v_{2})$. This holds
because $rf(v_{1})\geq\frac{1}{(r_{0}(v_{1})+n(v_{1})-1)^{s}}$ and
$rf(v_{2})\leq\frac{1}{(r_{0}(v_{2}))^{s}}$.
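The rank-factor computation can be sketched as follows (our code; it assumes the $n(v)$ nodes tied at a given in-degree occupy the consecutive ranks $r_{0}(v),\dots,r_{0}(v)+n(v)-1$):

```python
def transaction_probs(in_degrees, s):
    """Modified Zipf distribution over V' = V \\ {u}, given the
    in-degrees of all nodes v != u (a dict node -> in-degree)."""
    order = sorted(in_degrees, key=in_degrees.get, reverse=True)
    rank = {v: i + 1 for i, v in enumerate(order)}      # rank 1 = top degree

    rf = {}
    for v in order:
        ties = [w for w in order if in_degrees[w] == in_degrees[v]]
        r0 = min(rank[w] for w in ties)    # smallest rank in the tie class
        n_v = len(ties)
        # Average the Zipf weights of the n(v) ranks occupied by the class.
        rf[v] = sum(1.0 / k**s for k in range(r0, r0 + n_v)) / n_v

    total = sum(rf.values())
    return {v: rf[v] / total for v in rf}

# Example: "b" and "c" are tied, so they share the averaged weight of ranks 2 and 3.
print(transaction_probs({"a": 5, "b": 2, "c": 2, "d": 1}, s=1.0))
```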
### II-C Utility function of a new user
When a new user joins a PCN, they must decide _which channels to create and
how much capital to lock in each channel_ , while respecting their own budget.
In their decision, the user must factor the following: (a) the on-chain costs
of the channels they choose to open, (b) the opportunity cost from locking
their capital for the lifetime of each channel, (c) the potential gains from
routing transactions of others (routing fees), (d) the routing fees they must
pay to route their own transactions through the PCN, (e) their budget.
Intuitively, the more channels a user opens and the higher the amount of the
total capital locked, the more fees they will obtain from routing and the less
cost they will bear for routing their own transactions. In other words,
increasing the initial costs also increases the potential gains. Our goal is
to analyze these trade-offs and find the sweet spot that maximizes the
benefits for a newly-joining user with a specific budget. We account for all
these factors in a realistic manner when we design the utility function of the
user, in contrast to previous work [19, 18] where the opportunity cost was
omitted, and the routing fees were calculated naively (i.e., constant fees and
uniform transaction distribution).
Figure 2: $E$ joins a PCN with existing users $A$, $B$, $C$, $D$. $E$ plans to
transact with $B$ once a month, and $A$ usually makes $9$ transactions with
$D$ each month. We assume the transactions are of equal size, and transaction
fees and costs are of equal size. $E$ has enough budget only for $2$ channels,
with the spare amount of funds to lock equaling $19$ coins. $E$ should create
channels with $A$ and $D$ of sizes $10$ and $9$ to maximize the intermediary
revenue and minimize $E$’s own transaction costs.
User strategy and constraints: Consider a fixed PCN $G=(V,E)$ and a new user
$u$ that wants to join $G$. Furthermore, let us denote the set of possible
actions by $\Omega:=\\{(v_{i},l_{i})\\}_{i}$, where each element
$(v_{i},l_{i})\in\Omega$ represents a node $v_{i}$ that $u$ wants to connect
to by locking in an amount of $l_{i}>0$ on the corresponding channel. The
strategy of $u$ is to select a set of users (a strategy) $S\subset\Omega$ that
$u$ wants to connect to and how much funds to deposit in these channels. Note
that both $\Omega$ and $S$ may contain more than one channel with the same
endpoints, but different amounts of locked funds on each end. We also assume
$u$ has some budget $B_{u}>0$ to create and fund channels and $u$’s budget
constraint imposes the requirement that for the strategy $S\subseteq\Omega$
chosen by $u$, $\sum_{j=1}^{|S|}[C+l_{j}]\leq B_{u}$. Finally, we remark that
$\Omega:=\\{(v_{i},l_{i})\\}_{i}$ may contain $v_{i}$ with continuously many
values of $0\leq l_{i}\leq B_{u}$; we then call $\Omega$ a continuous action set. In
this case, we will operate on a set of vertices $\Omega^{V}\subseteq V$, from
which the user $u$ chooses a strategy $S$ consisting of pairs
$(x_{j},l_{j})$ with $x_{j}\in\Omega^{V}$ and $0\leq l_{j}\leq B_{u}$. Figure 2 highlights a
simple example of the decision-making process of a new user that wants to join
an existing PCN.
Now we define the _utility function_ for a new user $u$ that wants to join a
PCN $G$ and has a fixed budget $B_{u}$. The goal of $u$ is to _choose any
strategy $S=\\{(v_{i},l_{i})\\}_{i}\subseteq\Omega$ to maximize their expected
profit within a given budget $B_{u}$._ The expected profit (utility) of $u$ is
essentially the expected revenue from forwarding transactions through its
channels and collecting the routing fees, minus the costs of creating the
channels (on-chain fees and opportunity cost) and the expected fees
encountered by $u$ when sending transactions to other users in the network.
Channel costs: Typically, two on-chain transactions are needed to open and
close a channel (we omit the case when one of the parties commits fraud and a
third transaction is necessary to award the cheated party all the channel
funds; this case is outside the scope of the paper, as the costs for
publishing such a transaction may be covered by the total funds of the
channel). Recall that each blockchain transaction costs a fee to the miners,
denoted $C$.
The cost of the _opening_ transaction can be shared by two parties, and we
assume that parties only agree to open channels if they share this cost
equally ($C/2$ each). The cost of the closing transaction, on the other hand,
is paid by both parties when the channel closes in collaboration, or by the
party that closes the channel when the channel is closed unilaterally. To
model the costs of a channel between $u$ and $v$, we assume that it is equally
probable that the channel closes in one of the three ways: unilaterally by
$v$, unilaterally by $u$, in collaboration of $u$ and $v$. Thus, the cost of
the closing transaction is on expectation $C/2$ for each party. Hence, in
total, the channel cost for each party is $C$.
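Explicitly, averaging over the three equally likely closing scenarios, the expected closing cost of the channel for $u$ is
$\frac{1}{3}\cdot 0+\frac{1}{3}\cdot C+\frac{1}{3}\cdot\frac{C}{2}=\frac{C}{2},$
which, added to the shared opening cost $C/2$, gives the total channel cost $C$ per party.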
We also account for the opportunity cost of locking funds (as opposed to using
or storing them elsewhere) in a channel for the lifetime of the channel.
Suppose two users $u$ and $v$ wish to open a channel locking $c_{u}$ and
$c_{v}$ amount of coins respectively. Let $l_{i}$, $i\in V$, be the opportunity cost
defined by user $i$; that is, typically a function of the amount of coins
$c_{i}$, e.g., $l_{i}=r\cdot c_{i}$, $r$ constant (a standard economic
assumption due to the non-specialized nature of the underlying coins [27]). We
denote the total cost for opening a channel for user $u$ by
$L_{u}(v,l)=C+l_{u}$. The cost of user $v$ is symmetric.
We direct the reader to the work by Guasoni et al. [17] for a more detailed
model of channel costs. We note that our computational results still hold in
this extended model of channel cost. We further note that while the utility
function in [17] only accounts for the cost of channel creation, in our work
we also consider the potential profit from routing transactions and fees a
user could encounter.
Revenue from routing fees: Each user of the PCN may route transactions of
others through their channels in exchange for a routing fee. Each time a user
$u$ provides such service, $u$ gains revenue equal to $f_{avg}$ as described
in section II-B. Specifically, the expected revenue gained by a user $u$ over
a unit time interval from routing transactions in the PCN is the sum of the
fees weighted by the average transaction rate from all of $u$’s incident
channels:
$\mathbb{E}^{rev}_{u}=\sum_{v_{i}\in Ne(u)}\lambda_{uv_{i}}\cdot f_{avg}.$ (3)
We write $\mathbb{E}^{rev}_{u_{S}}$ when we want to explicitly say that the
user $u$ already added edges from $S$ to the network.
Fees encountered by the user: Whenever a user in the network $u$ makes a
payment to another user $v$, $u$ has to pay some amount of fees to all the
intermediary nodes in the payment path from $u$ to $v$. Let $d(u,v)$ be the
length of the shortest path from $u$ to $v$ in the network and let us assume
that $u$ pays $f_{avg}^{T}$ to every intermediary node in the path. The
expected fees encountered by $u$ with a stream of $N_{u}$ output transactions
is the sum of costs which increases proportionally with the distance between
any two users:
$\mathbb{E}^{fees}_{u}=N_{u}\cdot\sum_{v\in V;v\neq u}d(u,v)\cdot
f_{avg}^{T}\cdot p^{trans}_{u,v}$
We write $\mathbb{E}^{fees}_{u_{S}}$ when we want to explicitly say that the
user $u$ already added edges from $S$ to the network. We note that when two
users $u$ and $v$ are not connected, then $d(u,v)=+\infty$.
Objective of the user: Here, we combine all the costs calculated above and
compute the utility function of a newly joining node. The expected utility of
a user $u$ under a given strategy $S\subseteq\Omega$ is the profit gained from
collecting the fees, minus the fees paid for sending out transactions, and
minus the costs of the channels. Formally,
$\mathcal{U}_{u_{S}}=\mathbb{E}^{rev}_{u}-\mathbb{E}^{fees}_{u}-\sum_{(v,l)\in
S}L_{u}(v,l)$
We assume the utility of a disconnected node (i.e. a node that is not
connected to any other node in the network) is $-\infty$.
The objective of $u$ is to select a subset of users to connect to as well as
the amount of funds to lock into these channels that maximises their expected
utility subject to the budget constraints. Formally:
$\max_{S\subseteq\Omega}\mathcal{U}_{u_{S}}\text{ s.t. }\sum_{(v,l_{u})\in S}[C+l_{u}]\leq B_{u}$
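Putting the pieces together, a brute-force evaluation of $\mathcal{U}_{u_{S}}$ for a candidate strategy could look as follows (our sketch; it reuses the `edge_rates` helper above, and `p_trans`, the per-user rates `N_rates`, and the opportunity-cost rate `r` are assumed to be precomputed inputs; the budget constraint is checked separately):

```python
import networkx as nx

def utility(G, u, S, p_trans, f_avg, f_avg_T, N_rates, C, r):
    """Expected utility of user u joining PCN G with strategy
    S = [(v, l), ...], where l is the capital locked by u."""
    H = G.copy()
    for v, l in S:
        H.add_edge(u, v, balance=l)
        H.add_edge(v, u, balance=0.0)

    # Revenue: sum of lambda_{u v_i} * f_avg over u's channels, Eq. (3).
    rates = edge_rates(H, p_trans, N=sum(N_rates.values()))
    revenue = f_avg * sum(lam for (x, _), lam in rates.items() if x == u)

    # Fees: N_u * sum over v of d(u, v) * f_avg_T * p_trans[(u, v)].
    d = nx.single_source_shortest_path_length(H, u)
    fees = N_rates[u] * sum(d.get(v, float("inf")) * f_avg_T * p_trans[(u, v)]
                            for v in H if v != u)

    # Channel costs: on-chain fee C plus opportunity cost r * l per channel.
    channel_costs = sum(C + r * l for _, l in S)
    return revenue - fees - channel_costs
```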
## III Optimisation algorithms
Having defined the utility and objective for a new user $u$ in Section II-C,
we now propose several algorithms to optimise the objective in this section.
We begin by establishing some properties of our objective function. We first
show that our utility function is submodular but not necessarily monotone and
not necessarily non-negative. Thus, we cannot apply standard algorithms to
optimise it efficiently with guarantees on the approximation ratio. We thus
propose a series of constraints on the actions of the new user $u$ and define
a solution for the objective in each constrained setting. We then provide a
corresponding optimisation algorithm for each setting that comes with
guarantees on the approximation ratio. In the following, let $[k]$ denote
$\\{1,\dots,k\\}$.
### III-A Properties of the objective function
Whenever we add a new edge, its estimated average transaction rate will depend
on the current topology of the network and the capacities of the channels in
the network. We first show that the objective function is submodular. Let
$S\subset\Omega$ be a strategy. Note that we allow the algorithm to add more
than one channel with the same endpoint $v$ but different amounts of funds
$l_{i}$ to the strategy set $S$.
###### Theorem 1.
The expected utility function $\mathcal{U}_{u_{S}}$ is submodular.
###### Proof.
We split $\mathcal{U}_{u_{S}}$ into three components that sum to
$\mathcal{U}_{u_{S}}$ and show that each component is submodular. Since the
sum of submodular functions is submodular, the claim follows.
We first rewrite $\mathcal{U}_{u_{S}}$ as
$\mathcal{U}_{u_{S}}=\mathbb{E}^{rev}_{u}+\left(-\mathbb{E}^{fees}_{u}\right)+\left(-\sum_{(v,l)\in
S}L_{u}(v,l)\right).$ (4)
Consider the configurations with two strategies $u_{S_{1}},u_{S_{2}}$ with
$S_{1}\subseteq S_{2}$, and consider a pair $X=(x,l_{ux})\notin S_{2}$. Recall
that a function $g$ is submodular if $g(u_{S_{2}\cup\\{X\\}})-g(u_{S_{2}})\leq
g(u_{S_{1}\cup\\{X\\}})-g(u_{S_{1}})$.
Now first observe that
$\displaystyle\mathbb{E}^{rev}_{u_{S_{1}\cup\\{X\\}}}-\mathbb{E}^{rev}_{u_{S_{1}}}$
$\displaystyle=\mathbb{E}^{rev}_{u_{\\{X\\}}}=\lambda_{xu}\cdot
f_{avg}=\mathbb{E}^{rev}_{u_{S_{2}\cup\\{X\\}}}-\mathbb{E}^{rev}_{u_{S_{2}}}$
Hence the expected revenue function $\mathbb{E}^{rev}_{u_{S}}$ is submodular.
Note that in the calculations we assume that $\lambda_{xy}$ is a fixed value.
Now we show that the second component of 4 is submodular. That is,
$-\mathbb{E}^{fees}_{u_{S}}=-N_{u}\sum_{v\in V;v\neq u}d(u,v)\cdot
f_{avg}^{T}\cdot p^{trans}_{u,v}$ is submodular. Let us denote the marginal
contribution in terms of the expected fees of adding $X$ to strategy $S$ as
$MC_{S}(X):=\mathbb{E}^{fees}_{u_{S}}-\mathbb{E}^{fees}_{u_{S\cup\\{X\\}}}$.
We note that $MC_{S}(X)$ is nonzero only when one adds a pair $X=(x,l_{ux})$ to
$S$ such that a shortest path from $u$ to some $v$ goes through the vertex
$x$ in the new configuration $S\cup\\{X\\}$, i.e.:
$MC_{S}(X)=N_{u}f_{avg}^{T}\sum_{\begin{subarray}{c}v\in V;v\neq u;\\\ x\in sp_{S\cup\\{X\\}}(u,v)\end{subarray}}p^{trans}_{u,v}\Big{[}d_{S}(u,v)-d_{S\cup\\{X\\}}(u,v)\Big{]}$
Recall that $d(u,v)$ as defined for two disconnected nodes $u,v$ is $+\infty$.
Thus, $d_{S_{1}\cup\\{X\\}}(u,v)-d_{S_{1}}(u,v)\leq 0$, as $X\notin
S_{1},S_{2}$. Moreover, as the endpoints of the channels in $S_{1}\subseteq S_{2}$
are direct neighbours of $u$ in all configurations, we have
$|d_{S_{1}}(u,v)-d_{S_{1}\cup\\{X\\}}(u,v)|>|d_{S_{2}}(u,v)-d_{S_{2}\cup\\{X\\}}(u,v)|$.
Hence, we conclude that $MC_{S_{1}}(X)>MC_{S_{2}}(X)$. Note that in the
calculations we assume that $p^{trans}_{u,v}$ is a fixed value.
Finally, we show that the last component $-\sum_{(v,l)\in S}L_{u}(v,l)$ in (4)
is submodular. The marginal contribution of $X=(x,l_{ux})$ to the channel
costs given $u_{S_{1}}$ is simply the cost of a single bidirectional channel
between $u$ and $x$, i.e., $L_{u}(x,l_{ux})$. This is exactly equal to the marginal
contribution given $u_{S_{2}}$. ∎
Now, we show that although the objective function is submodular, it is
unfortunately non-monotone. That is, for any two strategy sets $S_{1},S_{2}$
with $S_{1}\subset S_{2}$, it is not necessarily the case that
$\mathcal{U}_{u_{S_{1}}}\leq\mathcal{U}_{u_{S_{2}}}$.
###### Theorem 2.
The expected utility function $\mathcal{U}_{u_{S}}$ is not necessarily
monotone, but the modified utility function
$\mathcal{U}_{u_{S}}^{\prime}=\mathbb{E}^{rev}_{u}-\mathbb{E}^{fees}_{u}$ is
monotonically increasing.
###### Proof.
We analyse each component of $\mathcal{U}_{u_{S}}$ separately. First, we note
that a direct application of [20] shows that $\mathbb{E}^{rev}_{u}$ is
monotone increasing. Next, we look at expected fees:
$-\mathbb{E}^{fees}_{u_{S}}=-N_{u}\sum_{v\in V;v\neq u}d(u,v)\cdot f_{avg}^{T}\cdot p^{trans}_{u,v}.$
The monotonicity of this function directly follows from the fact that for any
$S_{1}\subseteq S_{2}$, $d_{S_{1}}(u,v)\geq d_{S_{2}}(u,v)$. Thus, the
function is monotonically increasing. Note that in the calculations we assume
that $p^{trans}_{u,v}$ is a fixed value.
Finally, $-\sum_{(v,l)\in S}L_{u}(v,l)$ is clearly a monotonically decreasing
function. Since two components of $\mathcal{U}_{u_{S}}$ are monotonically
increasing and one component is monotonically decreasing,
$\mathcal{U}_{u_{S}}$ is not monotone in general. ∎
The final property we show about our objective function is that it is not
necessarily non-negative.
###### Theorem 3.
The expected utility function $\mathcal{U}_{u_{S}}$ is not necessarily non-
negative.
###### Proof.
This follows from the observation that the sum of the cost of creating
channels and the expected fees, $\sum_{(v,l)\in
S}L_{u}(v,l)+\mathbb{E}^{fees}_{u}$, may easily exceed the expected
revenue $\mathbb{E}^{rev}_{u}$ for some strategies $S\subseteq\Omega$.
∎
### III-B Fixed amounts of funds per channel
We first show that if we restrict the amount of funds (say $l_{1}$) that the
new user $u$ can lock in each created channel, we can achieve an approximation
ratio of $1-\frac{1}{e}$. This setting is useful for users who want to
minimize their computational cost. The algorithm (described in Algorithm 1)
that achieves this ratio in this setting is simple – we greedily pick the $k$
best channels to connect with that maximize the expected revenue minus the
expected fees. Formally, let us define a simplified utility function
$\mathcal{U}_{u_{S}}^{\prime}$, which is the expected revenue minus the expected fees:
$\mathcal{U}_{u_{S}}^{\prime}=\mathbb{E}^{rev}_{u}-\mathbb{E}^{fees}_{u}$.
We note that the simplified utility function $\mathcal{U}_{u_{S}}^{\prime}$ is
submodular and monotone, as shown in Section III-A. Let us denote the maximum number
of channels that can be created given $u$’s budget $B_{u}$ by
$\textsc{M}:=\lfloor\frac{B_{u}}{C+l_{1}}\rfloor$. We can now greedily maximize
$\mathcal{U}_{u_{S}}^{\prime}$ over sets of $k$ new channels, for each
$k\in\\{1,2,\ldots,\textsc{M}\\}$, and then compare the results across all $k$.
Since the channel creation cost is now fixed for any choice of $k$ new
channels, the $(1-\frac{1}{e})$-approximation we achieve when we greedily
maximize $\mathcal{U}_{u_{S}}^{\prime}$ simply follows from the result in [28]
since $\mathcal{U}_{u_{S}}^{\prime}$ is submodular and monotone.
The next theorem shows that Algorithm 1 returns a
$(1-\frac{1}{e})$-approximation and runs in time linear in M.
###### Theorem 4.
Algorithm 1 with inputs $\Omega=\\{(v,l_{1}):v\in V,v\neq u\\}$ and M returns a
$(1-\frac{1}{e})$-approximation of the optimum of
$\mathcal{U}_{u_{S}}^{\prime}$. The result is computed using at most
$\mathcal{O}(\textsc{M}\cdot n)$ estimations of the parameter $\lambda_{uv}$.
###### Proof.
Since the function $\mathcal{U^{\prime}}$ is submodular and monotonically
increasing, for each fixed $k\in\\{1,2,\ldots,\textsc{M}\\}$ the greedy
procedure computes a $(1-\frac{1}{e})$-approximation of the optimum over
strategies of size $k$ [28], using $\mathcal{O}(n)$ evaluations per step.
Comparing the partial results over all $k$ then yields a
$(1-\frac{1}{e})$-approximation of the overall optimum. ∎
Input: $\Omega,\textsc{M}$
$P_{S}\leftarrow\text{array indexed }1,\ldots,\textsc{M}\text{ initialized
with }P[i]=\emptyset$
$P_{U}\leftarrow\text{array indexed }1,\ldots,\textsc{M}\text{ initialized
with }P[i]=-\infty$
$S\leftarrow\emptyset$
$A\leftarrow\Omega$
while _$|S|\leq\textsc{M}$_ do
$X\leftarrow argmax_{X\in
A}[\mathcal{U^{\prime}}_{u_{S\cup\\{X\\}}}-\mathcal{U^{\prime}}_{u_{S}}]$
$S\leftarrow S\cup\\{X\\}$
$P_{S}[|S|]\leftarrow S$
$P_{U}[|S|]\leftarrow\mathcal{U^{\prime}}_{u_{S}}$
$A\leftarrow A\setminus\\{X\\}$
end while
$i\leftarrow argmax_{i\in\\{1,\ldots,M\\}}[P_{U}[i]]$
return $P_{S}[i]$
Algorithm 1 Greedy algorithm
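A direct transcription of Algorithm 1 (our sketch; `utility_prime` stands for an oracle evaluating $\mathcal{U}^{\prime}_{u_{S}}$ on a candidate strategy, e.g., assembled from the estimates above):

```python
def greedy(actions, M, utility_prime):
    """Algorithm 1: greedily add up to M channels from `actions` and
    return the best prefix found, as measured by utility_prime."""
    S, best_S, best_val = [], [], float("-inf")
    remaining = list(actions)
    while len(S) < M and remaining:
        # argmax of the marginal gain U'(S + [X]) - U'(S); since U'(S) is
        # fixed within one iteration, maximising U'(S + [X]) is equivalent.
        X = max(remaining, key=lambda X: utility_prime(S + [X]))
        S.append(X)
        remaining.remove(X)
        val = utility_prime(S)          # P_U[|S|] in the pseudocode
        if val > best_val:              # remember the best prefix P_S[i]
            best_S, best_val = list(S), val
    return best_S
```

Each of the at most M iterations scans up to $n$ remaining actions, which matches the $\mathcal{O}(\textsc{M}\cdot n)$ bound of Theorem 4.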
### III-C Varying amount of funds per channel, discrete version
Next, we give the new user a choice of locking varying amounts of capital in
each channel. Allowing varying capital on channels reflects more accurately the
realistic transaction model we leverage. However, in order to
achieve the same approximation ratio of $1-\frac{1}{e}$ as in the previous
setting, we have to discretize the capital that can be locked into a channel
to some minimal amount $m>0$. That is, opening a channel would require
injecting funds of the form $km$ for some $k\in\mathbb{N}$. We impose this
discretization constraint in order to perform an exhaustive search over all
possible assignments of the budget $B_{u}$ to the capital in each channel.
We again operate on the modified utility function
$\mathcal{U}_{u_{S}}^{\prime}$ and present an algorithm (described in
Algorithm 2) that achieves the same approximation ratio of $1-\frac{1}{e}$. In
more detail, given a parameter $m$, Algorithm 2 firstly divides the budget
$B_{u}$ into $\lfloor\frac{B_{u}}{m}\rfloor$ units that can be spent. Then, the algorithm
divides these units into $k+1$ parts (where $k=\lfloor\frac{B_{u}}{C}\rfloor$
is a bound on the number of channels that $u$ can possibly create). Finally,
for each possible division, it runs Algorithm 1 (again temporarily skipping
the channel costs), in each step locking the capital assigned to that channel
by the division. Let us denote
$T:=\binom{\frac{B_{u}}{m}}{\frac{B_{u}}{C}+1}$.
Input: $V,B_{u},m$
$k=\lfloor\frac{B_{u}}{C}\rfloor$
$D=\text{array of all divisions of }[\lfloor\frac{B_{u}}{m}\rfloor]\text{ to
}k+1\text{ parts}$
$D_{S}\leftarrow\text{array indexed }1,\ldots,|D|\text{ initialized with
}D_{S}[i]=\emptyset$
for _$i\in[|D|]$_ do
$(l_{1},\ldots,l_{k+1})\leftarrow D[i]$
$D_{S}[i]\leftarrow$ the output of Algorithm 1 run on $M=k$ with a restriction
that in every step $j$ of _while_ loop in the algorithm a channel of capacity
$l_{j}$ is selected
end for
$i\leftarrow
argmax_{i\in\\{1,\ldots,|D|\\}}\mathcal{U^{\prime}}_{u_{D_{S}[i]}}$
return $D_{S}[i]$
Algorithm 2 Exhaustive search over channel funds
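The division enumeration of Algorithm 2 can be sketched as follows (ours; `greedy_with_sizes` stands for Algorithm 1 constrained to lock the $j$-th assigned amount in step $j$, returning a strategy and its value; divisions are enumerated as weak compositions via stars and bars):

```python
from itertools import combinations

def divisions(units, parts):
    """Yield all ways to split `units` budget units into `parts` ordered
    non-negative integer parts (stars-and-bars enumeration)."""
    for bars in combinations(range(units + parts - 1), parts - 1):
        prev, out = -1, []
        for b in bars:
            out.append(b - prev - 1)
            prev = b
        out.append(units + parts - 2 - prev)
        yield out

def exhaustive_search(V, B_u, m, C, greedy_with_sizes):
    k = int(B_u // C)                    # bound on the number of channels
    best_S, best_val = None, float("-inf")
    for d in divisions(int(B_u // m), k + 1):
        sizes = [j * m for j in d]       # capital assigned to each channel
        S, val = greedy_with_sizes(V, k, sizes)
        if val > best_val:
            best_S, best_val = S, val
    return best_S
```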
###### Theorem 5.
Algorithm 2 with inputs $V$, budget $B_{u}$, and parameter $m$ returns a
$(1-\frac{1}{e})$-approximation of the optimum of
$\mathcal{U}_{u_{S}}^{\prime}$. The result is computed in at most
$\mathcal{O}(T\cdot\frac{B_{u}}{C}\cdot n)$ steps.
###### Proof.
The budget of $\lfloor\frac{B_{u}}{m}\rfloor$ units can be split into at most
$k+1$ parts, where $k=\lfloor\frac{B_{u}}{C}\rfloor$, in at most
$\binom{\frac{B_{u}}{m}}{k+1}$ ways. Algorithm 1 is run as a subroutine of
Algorithm 2. The main routine iterates through all possible assignments of
amounts locked to channels, each of them yielding a
$(1-\frac{1}{e})$-approximation for the selected assignment of funds. ∎
We note that there is a trade-off between the choice of $m$ and the run time
of Algorithm 2: a larger $m$ would reduce the search space and hence the
runtime of the algorithm. However, it would reduce the control over the
capital the user could lock into any particular channel.
### III-D Varying amount of funds per channel, continuous version
In this section, we remove the previous discrete constraint on the capital the
new user $u$ can inject into the channel, that is, $u$ can now inject funds of
the form $m\in\mathbb{R}_{+}$ into any channel. We sketch a polynomial-time
$\frac{1}{5}$-approximation algorithm for the optimisation problem: let us
first denote the total expected on-chain transaction cost for a user $u$ with
an average output stream of $N_{u}$ transactions as $C_{u}:=\frac{N_{u}\cdot
C}{2}$. That is, $C_{u}$ represents the total expected cost for user $u$ when
$u$ transacts entirely on the blockchain. One can now consider what we term
the benefit function, which is simply the sum of $C_{u}$ and the utility of
$u$ when $u$ joins the network with strategy $S$. Formally, we denote this
function by $\mathcal{U}_{u_{S}}^{b}:=C_{u}+\mathcal{U}_{u_{S}}$. Intuitively,
the benefit function captures the potential benefit $u$ would gain from
transacting with other users over the PCN rather than on the blockchain.
We observe that $\mathcal{U}_{u_{S}}^{b}$ remains submodular and positive
whenever the user chooses channels $(u,v)$ such that
$\mathbb{E}^{fees}_{u}+\frac{B_{u}}{C}\cdot L_{u}(v,l)<C_{u}.$
As such, we can apply the algorithm and result of Lee et al. [29] for
optimising submodular and non-negative functions to $\mathcal{U}_{u_{S}}^{b}$
to achieve a $\frac{1}{5}$-approximation of $\mathcal{U}_{u_{S}}^{b}$.
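As a small illustration (names are ours, not the paper's), the benefit function is simply the utility shifted by the constant on-chain cost $C_{u}$:

```python
def benefit(utility_u_S, N_u, C):
    """Benefit function U^b_{u_S} = C_u + U_{u_S}, where C_u = N_u * C / 2
    is u's total expected cost when transacting entirely on-chain."""
    C_u = N_u * C / 2
    return C_u + utility_u_S
```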
## IV Structural properties of simple graph topologies
In this section, we complement our study of optimisation algorithms for users
in the payment channel network (Section III) with a study of structural
properties of simple graph topologies given the transaction model between
users as defined in Section II-B. We are particularly interested in properties
of stable networks, that is, networks that are in a Nash Equilibrium where no
user can increase their utility by any unilateral change in strategy.
Stability is an important notion in the context of PCNs as this has
implications not only on the choice of which nodes to connect to for a new
user [19] but also on payment routing and finding off-chain rebalancing cycles
for existing users to replenish depleted channels [30]. We are also interested
in the parameter space of our model under which specific graph topologies form
a Nash Equilibrium.
We use the following assumptions and notations in our analysis in this
section:
1. 1.
Recall from Equations 2 and 3 in Section II that the expected revenue of a
user $u$ can be written as: $\mathbb{E}^{rev}_{u}=\sum_{v_{i}\in
Ne(u)}\lambda_{uv_{i}}\cdot f_{avg}=\sum_{\begin{subarray}{c}v_{1}\neq
v_{2}\\\ v_{1},v_{2}\in
V\setminus\\{u\\}\end{subarray}}\frac{m_{u}(v_{1},v_{2})}{m(v_{1},v_{2})}\cdot
N_{v_{1}}\cdot p^{trans}_{v_{1},v_{2}}\cdot f_{avg}$
We denote $b:=N_{v_{1}}\cdot f_{avg}$ and assume it is constant for $v_{1}\in
V\setminus\\{u\\}$.
2. 2.
We denote $a:=N_{u}\cdot f_{avg}^{T}$.
3. 3.
For any $s>0$ and $n\in\mathbb{N^{*}}$, we denote
$H_{n}^{s}:=\sum_{k=1}^{n}\frac{1}{k^{s}}$.
4. 4.
All the players create channels of equal cost $l$.
### IV-A Upper bound on the longest shortest path containing a hub
An interesting question is how large is the diameter of stable networks with
highly connected nodes. In the context of PCNs, this has implications on
efficient payment routing algorithms [31, 32, 33, 34]. As a first step to
answering this question, we derive an upper bound on the longest shortest path
in a stable network that contains a hub node, i.e., an extremely well-
connected node that transacts a lot with other nodes in the network. Let us
select a hub node $h$ and consider the longest shortest path that $h$ lies on
(if there are multiple we simply select one of them arbitrarily). We denote
the length of the path by $d$. The following theorem derives an upper bound on
$d$ for a stable network.
###### Theorem 6.
$d$ is upper bounded by $2(\frac{\frac{C+\epsilon}{2}-\lambda_{e}\cdot
f}{p_{\min}\cdot N\cdot f})+1$.
###### Proof.
Let $P=(v_{0},v_{1},\dots,v_{d})$ be the path. Consider the addition of an
edge $e$ between $v_{\lfloor\frac{d}{2}\rfloor-1}$ and
$v_{\lfloor\frac{d}{2}\rfloor+1}$. Denote by $\lambda_{e}$ the minimum rate of
transactions going through the edge $e$ in both directions, i.e.
$\lambda_{e}:=\min\\{\lambda_{(v_{\lfloor\frac{d}{2}\rfloor-1},v_{\lfloor\frac{d}{2}\rfloor+1})},\lambda_{(v_{\lfloor\frac{d}{2}\rfloor+1},v_{\lfloor\frac{d}{2}\rfloor-1})}\\}$.
Now consider the set of directed shortest paths $S$ such that each path
$s_{i}\in S$ is a subsequence of $P$ and one endpoint of $s_{i}$ lies in
$\\{v_{0},\dots,v_{\lfloor\frac{d}{2}\rfloor-1}\\}$ and the other endpoint of
$s_{i}$ lies in $\\{v_{\lfloor\frac{d}{2}\rfloor+1},\dots,v_{d}\\}$. Let
$p_{i}$ be the probability that $s_{i}$ is selected, with probabilities of
directed paths being selected as defined by the probability of the source of
the path transacting with the sink (refer to 1 for more details). Let
$p_{\min}:=\min_{i}p_{i}$.
We know the cost (split equally) of creating the edge $e$ is at least
$\frac{C+\epsilon}{2}$. Since the network is stable, this implies that the
cost of creating $e$ is larger than any benefits gained by the $2$ users
$v_{\lfloor\frac{d}{2}\rfloor-1}$ and $v_{\lfloor\frac{d}{2}\rfloor+1}$ by
creating $e$. That is,
$\frac{C+\epsilon}{2}\geq\lambda_{e}\cdot f+N\cdot p_{\min}\cdot
f\cdot\lfloor\frac{d}{2}\rfloor,$ (5)
where the first term on the RHS of the inequality is the minimum (among the
two parties $v_{\lfloor\frac{d}{2}\rfloor-1}$ and
$v_{\lfloor\frac{d}{2}\rfloor+1}$) of the average revenue gained by adding the
edge $e$. The second term on the RHS of the inequality is a lower bound on the
average amount of fees saved by $v_{\lfloor\frac{d}{2}\rfloor-1}$ and
$v_{\lfloor\frac{d}{2}\rfloor+1}$. Rearranging, this implies that $d\leq
2(\dfrac{\dfrac{C+\epsilon}{2}-\lambda_{e}\cdot f}{p_{\min}\cdot N\cdot
f})+1.$ ∎
Note that since a hub node is on the path, as long as it is not directly in
the middle of the path (i.e. vertex $v_{\lfloor\frac{d}{2}\rfloor}$),
$p_{\min}$ should be fairly large as hubs are typically high degree vertices.
Moreover, if a hub node is on a diametral path, we extract a meaningful bound
on the diameter of a stable network.
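For intuition, the bound of Theorem 6 is easy to evaluate numerically; the sketch below uses illustrative parameter values of our own choosing.

```python
def diameter_bound(C, eps, lam_e, f, p_min, N):
    """Upper bound on d from Theorem 6:
    d <= 2 * (((C + eps) / 2 - lam_e * f) / (p_min * N * f)) + 1."""
    return 2 * (((C + eps) / 2 - lam_e * f) / (p_min * N * f)) + 1

# Example: frequent transactions (large N, p_min) force a short path.
print(diameter_bound(C=100.0, eps=1.0, lam_e=0.5, f=1.0, p_min=0.05, N=50))
```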
### IV-B Stability of simple graph topologies
In this section, we study some simple graph topologies, and the parameter
spaces of the underlying transaction distribution under which they form a Nash
Equilibrium. We restrict our analysis to these simple topologies because
computing Nash Equilibria for a general graph using best response dynamics is
NP-hard (see Theorem $2$ in [19]). As mentioned in Section II-B, we assume the
underlying transaction distribution that gives the probability of any two
nodes transacting with each other is the Zipf distribution.
We firstly show that when the scale parameter $s$ of the Zipf distribution is
large (i.e. the distribution is heavily biased towards transacting with only
high-degree nodes), the star graph is a Nash Equilibrium.
###### Theorem 7.
The star graph with the number of leaves $\geq 4$ is a Nash Equilibrium when
nodes transact with each other according to the Zipf distribution with
parameter $s$ such that $\frac{1}{2^{s}}$ is negligible, i.e.
$\frac{1}{2^{s}}\approx 0$.
###### Proof.
First note that, because $\frac{1}{2^{s}}$ is negligible, all leaf nodes
have negligible expected revenue. Now consider a leaf node $u$. The costs of
the leaf node $u$ are triggered by transacting with the central node. If $u$
removes the edge between $u$ and the central node, and replaces this
connection with a set of edges to other leaf nodes, $\mathbb{E}^{fees}_{u}$
can only rise, as the central node still remains the one with the highest
degree.
The central node may want to delete all of its edges, but this will result
only in lowering its $\mathbb{E}^{rev}_{u}$. The $\mathbb{E}^{fees}_{u}$ may
not go down, because the central node already communicates directly with all
leaf nodes. ∎
Secondly, we establish the necessary conditions that make the star graph a
Nash Equilibrium in general.
###### Theorem 8.
The star graph with the number of leaves $n\geq 2$ is a Nash Equilibrium when
nodes transact with each other according to the Zipf distribution with
parameter $s\geq 0$ whenever the following conditions hold:
1. 1.
$a/H_{n}^{s}\leq 2^{s}\cdot l\cdot 1$,
2. 2.
$b\cdot\frac{i}{2}\cdot\frac{H_{i+1}^{s}-1-1/2^{s}}{H_{n}^{s}}+a\cdot\frac{H_{i+1}^{s}-1}{H_{n}^{s}}\leq
l\cdot(i)$ (for $2\leq i\leq n-1$),
3. 3.
$b\cdot\frac{i}{2}\cdot\frac{H_{n}^{s}-1-1/2^{s}}{H_{n}^{s}}+a\cdot\frac{H_{i+1}^{s}-2}{H_{n}^{s}}\leq
l\cdot(i-1)$ (for $2\leq i\leq n-1$).
###### Proof.
Firstly, we prove that the _central node_ is in Nash Equilibrium in the star
graph. Since the central node is connected to all other nodes, adding an
additional channel to any node just increases the channel creation cost and
thus decreases the utility for the central node. Removing a single edge
disconnects the central node from a user and thus leads to infinite cost. Thus
the central node has no incentive to switch to a different strategy.
Secondly, we prove when any _leaf node_ is also in Nash Equilibrium in the
star graph. For every strategy defined below, we calculate expected revenue
$\mathbb{E}^{rev}_{u}$, expected costs $\mathbb{E}^{fees}_{u}$, and channel
cost $L$ of the node $u$ after the changes.
– By default a leaf node $u$ will not add/remove any edges.
* •
$\mathbb{E}^{rev}_{u}=0$
* •
She interacts with the central node with $rf=1$, and with $n-1$ leaf nodes with
$rf=\frac{H_{n}^{s}-1}{n-1}$, so $\sum_{v^{\prime}\in
V\setminus\\{u\\}}rf(v^{\prime})=1+(n-1)\cdot\frac{H_{n}^{s}-1}{n-1}=H_{n}^{s}$,
$\mathbb{E}^{fees}_{u}=-a(n-1)\frac{\frac{H_{n}^{s}-1}{n-1}}{H_{n}^{s}}=-a\cdot\frac{H_{n}^{s}-1}{H_{n}^{s}}$,
$L=-l\cdot 1$.
– A leaf node may also try to add connections to $n-1$ other leaf nodes.
* •
The other leaf nodes $v^{\prime}$ interact directly with $2$ nodes (the
central node and the node that changed its strategy, both connected to $n-1$
other nodes) with $rf=\frac{1+1/2^{s}}{2}$, and indirectly with $n-2$ other
nodes with $rf=\frac{H_{n}^{s}-1-1/2^{s}}{n-2}$, so $\sum_{v^{\prime\prime}\in
V\setminus\\{v^{\prime}\\}}rf(v^{\prime\prime})=H_{n}^{s}$ and
$\mathbb{E}^{rev}_{u}=b\cdot[2\cdot
1/2]\binom{n-1}{2}\frac{\frac{H_{n}^{s}-1-1/2^{s}}{n-2}}{H_{n}^{s}}=b\cdot\frac{n-1}{2}\cdot\frac{H_{n}^{s}-1-1/2^{s}}{H_{n}^{s}}$
* •
$\mathbb{E}^{fees}_{u}=0$, $L=-l\cdot n$.
– The leaf node may also add connections to $n-1$ leaf nodes and remove the
connection with the central node.
* •
The other leaf nodes $v^{\prime}$ interact directly with $2$ nodes (the
central node and the node that changed its strategy, both connected to $n-2$
other nodes) with $rf=\frac{1+1/2^{s}}{2}$, and indirectly with $n-2$ other
nodes with $rf=\frac{H_{n}^{s}-1-1/2^{s}}{n-2}$, so $\sum_{v^{\prime\prime}\in
V\setminus\\{v^{\prime}\\}}rf(v^{\prime\prime})=H_{n}^{s}$ and
$\mathbb{E}^{rev}_{u}=b\cdot[2\cdot
1/2]\binom{n-1}{2}\frac{\frac{H_{n}^{s}-1-1/2^{s}}{n-2}}{H_{n}^{s}}=b\cdot\frac{n-1}{2}\cdot\frac{H_{n}^{s}-1-1/2^{s}}{H_{n}^{s}}$
* •
$\mathbb{E}^{fees}_{u}=-a/H_{n}^{s}$, $L=-l\cdot(n-1)$.
– The leaf node can add a connection to only one other leaf node.
* •
$\mathbb{E}^{rev}_{u}=0$
* •
$u$ interacts with the central node with $rf=1$, with $1$ node with
$rf=\frac{1}{2^{s}}$, and with $n-2$ other nodes with
$rf=\frac{H_{n}^{s}-1-\frac{1}{2^{s}}}{n-2}$, so $\sum rf=H_{n}^{s}$ and
$\mathbb{E}^{fees}_{u}=-a(n-2)\cdot\frac{\frac{H_{n}^{s}-1-\frac{1}{2^{s}}}{n-2}}{H_{n}^{s}}=-a\cdot(H_{n}^{s}-1-\frac{1}{2^{s}})/H_{n}^{s}$,
$L=-l\cdot 2$.
– The leaf node can add connections to $2\leq i\leq n-2$ other leaf nodes.
* •
The leaf nodes $v^{\prime}$ that $u$ connects to interact with the central
node ($rf=1$), with the node $u$ ($rf=1/2$), with the $i-1$ other nodes that
$u$ connects to ($rf=\frac{H_{i+1}^{s}-1-1/2^{s}}{i-1}$), and with the
remaining nodes, so $\sum_{v^{\prime\prime}\in
V\setminus\\{v^{\prime}\\}}rf(v^{\prime\prime})=H_{n}^{s}$ and
$\mathbb{E}^{rev}_{u}=b\cdot[2\cdot
1/2]\binom{i}{2}\frac{\frac{H_{i+1}^{s}-1-1/2^{s}}{i-1}}{H_{n}^{s}}=b\cdot\frac{i}{2}\cdot\frac{H_{i+1}^{s}-1-1/2^{s}}{H_{n}^{s}}$
* •
From the perspective of $u$, the central node has $rf=1$, the nodes that $u$
connects to have $rf=\frac{H_{i+1}^{s}-1}{i}$, and the other nodes have
$rf=\frac{H_{n}^{s}-H_{i+1}^{s}}{n-i-1}$, so $\sum rf=H_{n}^{s}$.
$\mathbb{E}^{fees}_{u}=-a(n-i-1)\frac{H_{n}^{s}-H_{i+1}^{s}}{n-i-1}/H_{n}^{s}=-a\cdot(H_{n}^{s}-H_{i+1}^{s})/H_{n}^{s}$,
$L=-l\cdot(i+1)$.
– The leaf node can add connections to $2\leq i\leq n-2$ leaf nodes and remove
the connection with the central node.
* •
The other leaf nodes $v^{\prime}$ interact directly with $2$ nodes, the
central node with $rf=1$ and the node $u$ with $rf=1/2$, and indirectly with
the $i-1$ other nodes with $rf=\frac{H_{i+1}^{s}-1-1/2^{s}}{i-1}$, so
$\sum_{v^{\prime\prime}\in
V\setminus\\{v^{\prime}\\}}rf(v^{\prime\prime})=H_{n}^{s}$ and
$\mathbb{E}^{rev}_{u}=b\cdot[2\cdot
1/2]\binom{i}{2}\frac{\frac{H_{i+1}^{s}-1-1/2^{s}}{i-1}}{H_{n}^{s}}=b\cdot\frac{i}{2}\cdot\frac{H_{i+1}^{s}-1-1/2^{s}}{H_{n}^{s}}$
* •
From the perspective of $u$, the central node has $rf=1$, the nodes that $u$
connects to have $rf=\frac{H_{i+1}^{s}-1}{i}$, and the other nodes have
$rf=\frac{H_{n}^{s}-H_{i+1}^{s}}{n-i-1}$, so $\sum rf=H_{n}^{s}$.
$\mathbb{E}^{fees}_{u}=-a\cdot[(n-i-1)\frac{H_{n}^{s}-H_{i+1}^{s}}{n-i-1}/H_{n}^{s}+1/H_{n}^{s}]=-a\cdot(H_{n}^{s}-H_{i+1}^{s}+1)/H_{n}^{s}$,
$L=-l\cdot i$.
Now we compare the utility gained by switching to each strategy as opposed to
sticking to the default strategy:
#### (1) vs (2).
If (1) remains a NE then:
$\displaystyle-a\cdot\frac{H_{n}^{s}-1}{H_{n}^{s}}-l\cdot 1\geq
b\cdot\frac{n-1}{2}\cdot\frac{H_{n}^{s}-1-1/2^{s}}{H_{n}^{s}}-l\cdot n$
$\displaystyle\iff$ $\displaystyle
a\cdot\frac{H_{n}^{s}-1}{H_{n}^{s}}+b\cdot\frac{n-1}{2}\cdot\frac{H_{n}^{s}-1-1/2^{s}}{H_{n}^{s}}\leq
l\cdot(n-1)$
#### (1) vs (3).
If (1) remains a NE, then for any value of the parameter $s\geq 0$:
$\displaystyle-a\cdot\frac{H_{n}^{s}-1}{H_{n}^{s}}-l\cdot 1\geq
b\cdot\frac{n-1}{2}\cdot\frac{H_{n}^{s}-1-1/2^{s}}{H_{n}^{s}}-l\cdot(n-1)-$
$\displaystyle-a/H_{n}^{s}\iff$ $\displaystyle
a\cdot\frac{H_{n}^{s}-2}{H_{n}^{s}}+b\cdot\frac{n-1}{2}\cdot\frac{H_{n}^{s}-1-1/2^{s}}{H_{n}^{s}}\leq
l\cdot(n-2)$
#### (1) vs (4).
If (1) remains a NE:
$\displaystyle-a\cdot\frac{H_{n}^{s}-1}{H_{n}^{s}}-l\cdot
1\geq-a\cdot(H_{n}^{s}-1-1/2^{s})/H_{n}^{s}-l\cdot 2$
$\displaystyle\iff$ $\displaystyle a/H_{n}^{s}\leq 2^{s}\cdot l\cdot 1$
#### (1) vs (5).
If (1) remains a NE:
$\displaystyle-a\cdot\frac{H_{n}^{s}-1}{H_{n}^{s}}-l\cdot 1\geq
b\cdot\frac{i}{2}\cdot\frac{H_{i+1}^{s}-1-1/2^{s}}{H_{n}^{s}}-\frac{a\cdot(H_{n}^{s}-H_{i+1}^{s})}{H_{n}^{s}}-$
$\displaystyle-l\cdot(i+1)\iff$ $\displaystyle
b\cdot\frac{i}{2}\cdot\frac{H_{i+1}^{s}-1-1/2^{s}}{H_{n}^{s}}+a\cdot\frac{H_{i+1}^{s}-1}{H_{n}^{s}}\leq
l\cdot i$
#### (1) vs (6).
If (1) remains a NE:
$\displaystyle-a\cdot\frac{H_{n}^{s}-1}{H_{n}^{s}}-l\cdot 1\geq
b\cdot\frac{i}{2}\cdot\frac{H_{n}^{s}-1-1/2^{s}}{H_{n}^{s}}-a\cdot\frac{1+H_{n}^{s}-H_{i+1}^{s}}{H_{n}^{s}}-$
$\displaystyle-l\cdot i\iff$ $\displaystyle
b\cdot\frac{i}{2}\cdot\frac{H_{n}^{s}-1-1/2^{s}}{H_{n}^{s}}+a\cdot\frac{H_{i+1}^{s}-2}{H_{n}^{s}}\leq
l\cdot(i-1)$
∎
Given the result above, we show that if the scale parameter of the
distribution is _only_ moderately large ($s\geq 2$) and not too many messages
are sent out in the network (i.e. $a/H_{n}^{s},b/H_{n}^{s}\leq l$), then the
star graph is still a Nash Equilibrium. The conditions
$a/H_{n}^{s},b/H_{n}^{s}\leq l\cdot 1$ bound the transactions sent to a user’s
highest-ranked node.
###### Theorem 9.
The star graph with a number of leaves $n\geq 2$ is a Nash Equilibrium when
nodes follow the Zipf distribution with parameter $s\geq 2$ whenever the cost
of all edges is equal, and $a/H_{n}^{s},b/H_{n}^{s}\leq l\cdot 1$.
###### Proof.
Taking the conditions from Theorem 8:
1. 1.
$b\cdot\frac{i}{2}\cdot\frac{H_{i+1}^{s}-1-1/2^{s}}{H_{n}^{s}}+a\cdot\frac{H_{i+1}^{s}-1}{H_{n}^{s}}\leq
l\cdot(i)$ (for $2\leq i\leq n-1$),
2. 2.
$b\cdot\frac{i}{2}\cdot\frac{H_{n}^{s}-1-1/2^{s}}{H_{n}^{s}}+a\frac{H_{i+1}^{s}-2}{H_{n}^{s}}\leq
l\cdot(i-1)$ (for $2\leq i\leq n-1$),
3. 3.
$a/H_{n}^{s}\leq 2^{s}\cdot l\cdot 1$.
We can see that under our assumptions, condition $3$ holds trivially since
$a/H_{n}^{s}\leq l\cdot 1$. Moreover, whenever the cost of all edges is equal,
conditions $1$ and $2$ become more restrictive as $i$ increases, so the most
restrictive case is $i=n-1$. Now, because $a/H_{n}^{s}\leq l\cdot 1$,
condition $2$ is more restrictive than condition $1$. Finally, condition $2$
holds because $a/H_{n}^{s},b/H_{n}^{s}\leq l\cdot 1$ and, for $s\geq 2$,
$H_{n}^{s}=\sum_{i=1}^{n}\dfrac{1}{i^{s}}\leq\sum_{j=1}^{+\infty}\dfrac{1}{j^{2}}=\dfrac{\pi^{2}}{6}\leq
2.$ ∎
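The conditions of Theorems 8 and 9 are directly machine-checkable; a minimal sketch, assuming the notation above ($a$, $b$, $l$, $n$, $s$):

```python
def H(n, s):
    """Generalized harmonic number H_n^s = sum_{k=1}^n 1 / k^s."""
    return sum(1.0 / k ** s for k in range(1, n + 1))

def star_is_ne(a, b, l, n, s):
    """Check the sufficient conditions of Theorem 8 for the star graph."""
    Hn = H(n, s)
    if a / Hn > 2 ** s * l:                      # condition 1
        return False
    for i in range(2, n):                        # 2 <= i <= n - 1
        Hi1 = H(i + 1, s)
        cond2 = b * i / 2 * (Hi1 - 1 - 2.0 ** -s) / Hn + a * (Hi1 - 1) / Hn <= l * i
        cond3 = b * i / 2 * (Hn - 1 - 2.0 ** -s) / Hn + a * (Hi1 - 2) / Hn <= l * (i - 1)
        if not (cond2 and cond3):
            return False
    return True

# Example in the regime of Theorem 9: s >= 2 and a/H_n^s, b/H_n^s <= l.
print(star_is_ne(a=1.0, b=1.0, l=1.0, n=10, s=2))
```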
We also show that the path graph essentially will never become a Nash
Equilibrium.
###### Theorem 10.
A path graph is never a Nash Equilibrium when nodes transact with each other
according to the Zipf distribution with parameter $s\geq 0$.
###### Proof.
Since the cost of any edge is split equally between both parties, the
endpoints of the path would always prefer connecting to a node that is not an
endpoint of the path. In this case, even when $s=0$, their expected revenue
factor still remains $0$, but their expected fees naturally decrease. ∎
We finally show that the circle graph cannot be a Nash Equilibrium when it is
sufficiently large.
###### Theorem 11.
The circle graph does not form a Nash Equilibrium for any $n\geq n_{0}$, for
some $n_{0}$, when nodes transact with each other according to the Zipf
distribution with $s\geq 0$.
###### Proof.
Assume that we have a circle graph with $n+1$ nodes.
– The default strategy for a node $u$ is not to add or remove any edges.
* •
In this case $u$ is an intermediary node to all of the pairs of nodes for
which the shortest path goes through this node. They rank each other equally,
so each node ranks other nodes with equal $rf=H_{n}^{s}/n$, thus $\sum
rf=H_{n}^{s}$, finally
$\mathbb{E}^{rev}_{u}=b\cdot\frac{H_{n}^{s}/n}{H_{n}^{s}}2\cdot(\binom{n}{2}-\binom{n/2}{2}-n/2\cdot
n/2)\approx\frac{b}{n}\cdot n^{2}/4$
* •
The node $u$ interacts with $n$ nodes with $rf=H_{n}^{s}/n$: $2$ of them are
at distance $0$, $2$ are at distance $1$, and so on; finally, at most $2$ of
them are at distance $\lfloor n/2\rfloor$.
$\mathbb{E}^{fees}_{u}=-a\cdot\frac{H_{n}^{s}/n}{H^{s}_{n}}\cdot
2\cdot(1+2+\ldots+n/2)\approx\frac{-a}{n}\cdot n^{2}/4$.
* •
$L=-l\cdot 1$.
– A strictly better strategy for the node $u$ is to connect to its opposite
node.
* •
In this case $u$ is an intermediary node for all of the pairs of nodes whose
shortest path goes through it. The opposite node ranks $u$ with $rf=1$ and all
of the other nodes with $rf=\frac{H_{n}^{s}-1}{n-1}$; the other nodes rank $2$
nodes with $rf=\frac{1+1/2^{s}}{2}$ and all of the other nodes with
$rf=\frac{H_{n}^{s}-1-1/2^{s}}{n-2}$, thus $\sum rf=H_{n}^{s}$. We will thus
asymptotically count only the weakest factor
$rf=\frac{H_{n}^{s}-1-1/2^{s}}{n-2}$. Finally,
$\mathbb{E}^{rev}_{u}=b\cdot\frac{(H_{n}^{s}-1-1/2^{s})/(n-2)}{H_{n}^{s}}\cdot
2\cdot(\frac{n}{4}\cdot\frac{n}{2}+\frac{1}{2}\cdot\frac{n}{4}\cdot\frac{n}{4})\approx\frac{b}{n}\cdot
n^{2}\cdot\frac{5}{16}$
* •
The node $u$ interacts with $n-1$ nodes with $rf=(H_{n}^{s}-1)/(n-1)$, and
directly with one node with $rf=1$. We bound the total distance, and hence the
expected fees, as
$|\mathbb{E}^{fees}_{u}|\leq
a\cdot\frac{(H_{n}^{s}-1)/(n-1)}{H^{s}_{n}}\cdot\left(\frac{3\frac{n}{4}(\frac{n}{4}-1)}{2}+\frac{n/2+n/4}{2}\cdot\frac{n}{4}\right)\approx\frac{a}{n}\cdot\frac{3}{16}n^{2}.$
* •
$L=-l\cdot 1.$
For sufficiently large $n$, the gain in expected revenue ($\frac{5}{16}n^{2}$
vs. $\frac{4}{16}n^{2}$, scaled by $\frac{b}{n}$) together with the reduction
in expected fees ($\frac{3}{16}n^{2}$ vs. $\frac{4}{16}n^{2}$, scaled by
$\frac{a}{n}$) outweighs the constant cost of the additional channel, so
connecting to the opposite node strictly increases $u$’s utility and the
circle graph is not a Nash Equilibrium. ∎
## V Related work
Strategic aspects of cryptocurrencies, and more generally the blockchain
technologies, have attracted a lot of attention in the literature [35, 36, 37]
as by their very nature, they are created to facilitate interactions between
self-interested parties in a decentralised manner.
Apart from the works discussed in the introduction ([19, 18, 20, 17]), perhaps
the closest research line to which our paper contributes is the one on
creation games. In a well-known work by Fabrikant et al. [38], players choose
a subset of other players to connect to in order to minimise their total
distance to all others in the network. The result of Fabrikant et al. was
later strengthened by Albers et al. [39], and also extended to the weighted
network creation game setting. Ehsani et al. [40] consider the network
creation game with a fixed budget for each player, thus constraining the
number of connections each player can make. Another well-known body of
research of this kind is network formation games [41, 42]. All of these
works, however, consider the problem of network creation in general networks
which do not take into account fees and channel collateral which are specific
to PCNs.
Our work is also closely related to the study of stable network topologies for
real-world networks (e.g. social and communication networks) that are formed
by the interaction of rational agents [43, 26]. Demaine et al. [43] show that
all equilibrium networks satisfy the small world property, that is, these
networks have small diameters. Bilò et al. [26] establish properties on the
diameter, clustering and degree distribution for equilibrium networks. In [18,
19], Avarikioti et al. consider stable graph topologies in the context of
PCNs. Our work extends the analysis of Avarikioti et al. [19] and considers
stable graph topologies in PCNs under a non-uniform distribution of
transactions between users.
## VI Conclusion and Future Work
In this paper, we modeled and analysed the incentive structure behind the
creation of PCNs. We first focused on the perspective of a new user who wants
to join the network in an optimal way. To this end, we defined a new user’s
utility function in terms of expected revenue, expected fees, on-chain cost of
creating channels, and opportunity costs, while accounting for realistic
transaction distributions.
We also introduced a series of approximation algorithms under specific
constraints on the capital distribution during the channel creation: (a) We
first presented a linear time $1-\frac{1}{e}$ approximation algorithm when a
user locks a fixed amount to all channels; thus, providing an efficient
approach for users who wish to lower computational costs. (b) We further
provided a pseudo-polynomial time $1-\frac{1}{e}$ approximation algorithm when
users may lock varying, but discretized by $m$, amounts to different channels.
This setting applies to most real-life scenarios but comes with a
computational overhead that depends on $m$. (c) Finally, we proposed a $1/5$
approximation solution when a user can pick the amounts from a continuous set.
We used a modified utility function, the benefit function, which may be
leveraged by a user to test whether assuming continuous funds yields
unexpected profits. Altogether, our results in this section show that
depending on the number of assumptions a new user joining a PCN wants to make,
the user has a range of solutions to deploy to optimize the way they connect
to the network.
Lastly, we analysed the parameter spaces in our underlying model and
conditions under which the star, path, and circle graph topologies form a Nash
Equilibrium. Our analysis indicates that under a realistic transaction model,
the star graph is the predominant topology, enhancing the results of [19].
We highlight three interesting directions for future work. First, it would be
beneficial to develop more advanced algorithms for maximizing the general
utility function that also come with guarantees on the approximation ratio.
Second, we believe there are still avenues in which our model can be made more
realistic, for instance, by considering a more realistic cost model that takes
into account interest rates as in [17]. Lastly, as the accuracy of our model
depends on estimations of the underlying PCN parameters, for instance, the
average total number of transactions and the average number of transactions
sent out by each user, developing more accurate methods for estimating these
parameters may be helpful.
## Acknowledgments
The work was partially supported by the Austrian Science Fund (FWF) through
the project CoRaF (grant 2020388). It was also partially supported by NCN
Grant 2019/35/B/ST6/04138 and ERC Grant 885666.
## References
* [1] A. Chauhan, O. P. Malviya, M. Verma, and T. S. Mor, “Blockchain and scalability,” in _2018 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C)_. IEEE, 2018, pp. 122–128.
* [2] K. Croman, C. Decker, I. Eyal, A. E. Gencer, A. Juels, A. E. Kosba, A. Miller, P. Saxena, E. Shi, E. G. Sirer, D. Song, and R. Wattenhofer, “On scaling decentralized blockchains - (A position paper),” in _Financial Cryptography and Data Security FC_ , ser. Lecture Notes in Computer Science.
* [3] A. Jain, S. Siddiqui, and S. Gujar, “We might walk together, but i run faster: Network fairness and scalability in blockchains,” in _International Conference on Autonomous Agents and MultiAgent Systems_ , 2021.
* [4] Visa Inc., “Operational performance data for 4Q21,” Dec. 2021. [Online]. Available: https://s1.q4cdn.com (doc\_financials/2021/q4)
* [5] J. Poon and T. Dryja, “The bitcoin lightning network: Scalable off-chain instant payments,” 2016. [Online]. Available: https://lightning.network/lightning-network-paper.pdf
* [6] S. K. T. Utomo, T. H. Koshizuka, and N. Koshizuka, “Blockchain-based incentive system for public trash bin,” in _2020 IEEE 9th Global Conference on Consumer Electronics (GCCE)_. IEEE, 2020, pp. 168–172.
* [7] L. Gudgeon, P. Moreno-Sanchez, S. Roos, P. McCorry, and A. Gervais, “Sok: Layer-two blockchain protocols,” in _International Conference on Financial Cryptography and Data Security_. Springer, 2020, pp. 201–226.
* [8] A. Kiayias and O. S. T. Litos, “A composable security treatment of the lightning network,” _Cryptology ePrint Archive_ , 2019.
* [9] G. Malavolta, P. Moreno-Sanchez, A. Kate, M. Maffei, and S. Ravi, “Concurrency and privacy with payment-channel networks,” in _ACM SIGSAC Conference on Computer and Communications Security_ , 2017.
* [10] S. Rain, Z. Avarikioti, L. Kovács, and M. Maffei, “Towards a game-theoretic security analysis of off-chain protocols,” _arXiv preprint arXiv:2109.07429_ , 2021.
* [11] Z. Avarikioti, E. Kokoris-Kogias, R. Wattenhofer, and D. Zindros, “Brick: Asynchronous incentive-compatible payment channels,” in _Financial Cryptography and Data Security FC_ , 2021.
* [12] Z. Avarikioti and O. S. T. Litos, “Suborn channels: Incentives against timelock bribes,” in _Financial Cryptography and Data Security FC_ , 2022.
* [13] P. McCorry, S. Bakshi, I. Bentov, S. Meiklejohn, and A. Miller, “Pisa: Arbitration outsourcing for state channels,” in _Proceedings of the 1st ACM Conference on Advances in Financial Technologies, AFT_ , 2019.
* [14] Z. Avarikioti, O. S. T. Litos, and R. Wattenhofer, “Cerberus channels: Incentivizing watchtowers for bitcoin,” in _Financial Cryptography and Data Security FC_ , 2020.
* [15] X. Wang, H. Gu, Z. Li, F. Zhou, R. Yu, and D. Yang, “Why riding the lightning? equilibrium analysis for payment hub pricing.”
* [16] Y. van Engelshoven and S. Roos, “The merchant: Avoiding payment channel depletion through incentives,” in _IEEE International Conference on Decentralized Applications and Infrastructures, DAPPS_ , 2021.
* [17] P. Guasoni, G. Huberman, and C. Shikhelman, “Lightning network economics: Channels,” 2021. [Online]. Available: http://dx.doi.org/10.2139/ssrn.3840374
* [18] G. Avarikioti, R. Scheuner, and R. Wattenhofer, “Payment networks as creation games,” in _Data Privacy Management, Cryptocurrencies and Blockchain Technology CBT_ , vol. 11737, 2019, pp. 195–210.
* [19] Z. Avarikioti, L. Heimbach, Y. Wang, and R. Wattenhofer, “Ride the lightning: The game theory of payment channels,” in _Financial Cryptography and Data Security - 24th International Conference, FC_ , ser. Lecture Notes in Computer Science, vol. 12059, 2020, pp. 264–283.
* [20] O. Ersoy, S. Roos, and Z. Erkin, “How to profit from payments channels,” in _Financial Cryptography and Data Security FC_ , 2020, p. 284–303. [Online]. Available: https://doi.org/10.1007/978-3-030-51280-4_16
* [21] A.-L. Barabási and R. Albert, “Emergence of scaling in random networks,” _science_ , vol. 286, no. 5439, pp. 509–512, 1999.
* [22] E. W. Dijkstra, “A note on two problems in connexion with graphs,” _Numerische Mathematik_ , vol. 1, pp. 269–271, 1959. [Online]. Available: https://doi.org/10.1007/BF01386390
* [23] G. K. Zipf, _Human behavior and the principle of least effort_. Addison-Wesley Press, 1949.
* [24] C. Salge, N. Ay, D. Polani, and M. Prokopenko, “Zipf’s law: Balancing signal usage cost and communication efficiency,” in _PLoS ONE_ , N. R. Smalheiser, Ed., 2015.
* [25] L. Aitchison, N. Corradi, and P. E. Latham, “Zipf’s law arises naturally when there are underlying, unobserved variables,” in _PLoS computational biology_ , O. Sporns, Ed., 2016.
* [26] D. Bilò, T. Friedrich, P. Lenzner, S. Lowski, and A. Melnichenko, “Selfish creation of social networks,” in _Thirty-Fifth AAAI Conference on Artificial Intelligence, Virtual Event_. AAAI Press, 2021, pp. 5185–5193.
* [27] R. Hall and M. Lieberman, _Macroeconomics: Principles and Applications_ , ser. Accounting Principles S. South-Western College Pub., 1998. [Online]. Available: https://books.google.at/books?id=pyazAAAAIAAJ
* [28] D. P. Williamson, “Bridging continuous and discrete optimization,” https://people.orie.cornell.edu/dpw/orie6334/lecture23.pdf, 2019.
* [29] J. Lee, V. S. Mirrokni, V. Nagarajan, and M. Sviridenko, “Non-monotone submodular maximization under matroid and knapsack constraints,” in _ACM Symposium on Theory of Computing, STOC_ , M. Mitzenmacher, Ed., 2009.
* [30] Z. Avarikioti, K. Pietrzak, I. Salem, S. Schmid, S. Tiwari, and M. Yeo, “Hide & seek: Privacy-preserving rebalancing on payment channel networks,” in _Financial Cryptography and Data Security FC_ , 2022.
* [31] S. Roos, P. Moreno-Sanchez, A. Kate, and I. Goldberg, “Settling payments fast and private: Efficient decentralized routing for path-based transactions,” in _25th Annual Network and Distributed System Security Symposium, NDSS_ , 2018.
* [32] V. Sivaraman, S. B. Venkatakrishnan, K. Ruan, P. Negi, L. Yang, R. Mittal, G. Fanti, and M. Alizadeh, “High throughput cryptocurrency routing in payment channel networks,” in _17th USENIX Symposium on Networked Systems Design and Implementation, NSDI_ , 2020.
* [33] K. Pietrzak, I. Salem, S. Schmid, and M. Yeo, “Lightpir: Privacy-preserving route discovery for payment channel networks,” in _IFIP Networking Conference, IFIP Networking 2021_ , 2021.
* [34] M. Gerla, X. Hong, and G. Pei, “Landmark routing for large ad hoc wireless networks,” in _Globecom ’00 - IEEE. Global Telecommunications Conference. Conference Record (Cat. No.00CH37137)_ , vol. 3, 2000, pp. 1702–1706 vol.3.
* [35] L. Chen, L. Xu, Z. Gao, A. Sunny, K. Kasichainula, and W. Shi, “A game theoretical analysis of non-linear blockchain system,” in _International Conference on Autonomous Agents and MultiAgent Systems_ , 2021\.
* [36] Y. Amoussou-Guenou, B. Biais, M. Potop-Butucaru, and S. Tucci-Piergiovanni, “Rational vs byzantine players in consensus-based blockchains,” in _International Conference on Autonomous Agents and MultiAgent Systems_ , 2020, pp. 43–51.
* [37] Y. Lewenberg, Y. Bachrach, Y. Sompolinsky, A. Zohar, and J. S. Rosenschein, “Bitcoin mining pools: A cooperative game theoretic analysis,” in _International conference on autonomous agents and multiagent systems_ , 2015, pp. 919–927.
* [38] A. Fabrikant, A. Luthra, E. N. Maneva, C. H. Papadimitriou, and S. Shenker, “On a network creation game,” in _ACM Symposium on Principles of Distributed Computing, PODC_ , 2003, pp. 347–351.
* [39] S. Albers, S. Eilts, E. Even-Dar, Y. Mansour, and L. Roditty, “On nash equilibria for a network creation game,” _ACM Trans. Economics and Comput._ , vol. 2, no. 1, pp. 2:1–2:27, 2014.
* [40] S. Ehsani, M. Fazli, A. Mehrabian, S. S. Sadeghabad, M. Safari, M. Saghafian, and S. ShokatFadaee, “On a bounded budget network creation game,” _CoRR_ , vol. abs/1111.0554, 2011.
* [41] V. Bala and S. Goyal, “A noncooperative model of network formation,” _Econometrica_ , vol. 68, no. 5, pp. 1181–1229, 2000.
* [42] M. O. Jackson, “A survey of network formation models: stability and efficiency,” _Group formation in economics: Networks, clubs, and coalitions_ , vol. 664, pp. 11–49, 2005.
* [43] E. D. Demaine, M. Hajiaghayi, H. Mahini, and M. Zadimoghaddam, “The price of anarchy in network creation games,” in _ACM Symposium on Principles of Distributed Computing, PODC_ , 2007, pp. 292–298.
MIT Lincoln Laboratory
{shamaria.engram, tyler.kaczmarek, alice.lee<EMAIL_ADDRESS>
# Proactive Provenance Policies for Automatic Cryptographic Data Centric
Security
Shamaria Engram Tyler Kaczmarek Alice Lee David Bigelow
###### Abstract
Data provenance analysis has been used as an assistive measure for ensuring
system integrity. However, such techniques are typically reactive approaches
to identify the root cause of an attack in its aftermath. This is in part due
to fact that the collection of provenance metadata often results in a deluge
of information that cannot easily be queried and analyzed in real time. This
paper presents an approach for proactively reasoning about provenance metadata
within the Automatic Cryptographic Data Centric (ACDC) security architecture,
a new security infrastructure in which all data interactions are considered at
a coarse granularity, similar to the Function as a Service model. At this
scale, we have found that data interactions are manageable for the proactive
specification and evaluation of provenance policies—constraints placed on
provenance metadata to prevent the consumption of untrusted data. This paper
provides a model for proactively evaluating provenance metadata in the ACDC
paradigm as well as a case study of an electronic voting scheme to demonstrate
the applicability of ACDC and the provenance policies needed to ensure data
integrity.
††DISTRIBUTION STATEMENT A. Approved for public release. Distribution is
unlimited. This material is based upon work supported by the Under Secretary
of Defense for Research and Engineering under Air Force Contract No.
FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations
expressed in this material are those of the author(s) and do not necessarily
reflect the views of the Under Secretary of Defense for Research and
Engineering.
## 1 Introduction
Data provenance provides a comprehensive history of data and the manipulations
it has undergone from its inception to its latest state. Analysis of this
history can provide significant insight into a datum’s integrity and
authenticity for forensic analysts and security administrators. However, due
to the mass of data being produced in computing environments, manual analysis
of provenance metadata is a daunting task. Automated provenance analysis
techniques exist but generally provide a reactive evaluation in the aftermath
of a security incident (e.g., [19]).
This retrospective approach to data provenance analysis has proven valuable in
several security contexts (e.g., diagnosing an attacker’s point of entry to a
system). Nevertheless, given the ubiquity of online services, many of which
operate in an outsourced distributed environment, there is a need for a
proactive approach to data provenance analysis. Proactively evaluating a
datum’s provenance record before consumption is especially applicable to
operations within cloud environments, where end users, who outsource their
data to be processed by cloud applications, should have some level of
assurance about their data’s integrity. Runtime analysis of whole-system
provenance has recently gained attention in the literature but does so at a
fine-grained level, which does not translate cleanly to a distributed system
[23].
The ability to proactively specify properties of provenance metadata, to aid
in security enforcement decisions, can have a significant impact on a
distributed environment’s overall security posture. This paper presents an
approach for proactively reasoning about provenance metadata within the
Automatic Cryptographic Data Centric (ACDC) security architecture, a
distributed architecture that upends the current system-centric paradigm by
taking a data-centric approach to security. Rather than protecting systems
that store data, ACDC puts the focus directly on protecting data itself both
at rest and in motion while simultaneously ensuring that data is used in only
authorized and auditable ways. Data protection goals include confidentiality,
integrity, and availability throughout all uses of the data, including not
only storage and transmission but also sharing and computation, on devices and
networks that may be partially compromised.
ACDC allows application developers to proactively express policies over
provenance metadata to be enforced before data is consumed by an individual
process. We call such policies provenance policies. ACDC can prevent the
consumption of untrusted data by providing the following capabilities: 1)
secure packaging of data with associated integrity and confidentiality
policies at the network’s edge, 2) enforcement of integrity and
confidentiality policies throughout the data’s entire lifespan, and 3) a
thorough record of data provenance to account for every manipulation. To the
best of our knowledge, this paper presents the first effort to provide a
proactive approach for data provenance evaluation within a data-centric
security architecture.
Our core contributions are as follows:
1. 1.
We introduce the ACDC architecture for data-centric security (Section 2),
2. 2.
We describe a formal approach for reasoning about provenance policies
proactively based on a mathematical semantics of provenance metadata (Section
3), and
3. 3.
We demonstrate the applicability of ACDC and proactive provenance policy
evaluation by providing a case study of an end-to-end, coercion-resistant
voting system (Section 4).
Section 5 provides a summary of related work and Section 6 concludes and
provides directions for future work.
## 2 The ACDC FaaS Paradigm
Figure 1: ACDC Core Component Architecture
This section introduces the Automatic Cryptographic Data-Centric (ACDC)
security paradigm and describes each of the components that make up an ACDC
network. As shown in Figure 1, ACDC puts all data into named, secure data
capsules, where each capsule is associated with an owner. These capsules
contain cryptographically enforced access-control policies that define who can
access and use the capsules’ associated data. Each capsule also contains its
provenance as captured within the ACDC system, allowing authorized parties to
assess a capsule’s integrity before acting upon it. ACDC provides flexibility
to data owners by allowing them to 1) cryptographically authorize functions to
run on their data, and 2) specify which secure computation techniques are
allowed to process their data (e.g., multiparty computation (MPC) or secure
enclaves), which enables data owners to consider the trade-offs between
security, functionality, and performance. These capabilities allow mutually
distrusting data owners to securely collaborate and share their data in a
controlled environment. Lastly, ACDC uses content-centric networking (CCN)
[16] to route and transmit data capsules by their name rather than by the
systems storing such data, thus enabling capsules’ cryptographic mechanisms to
protect data wherever capsules go on the network.
An instance of an ACDC network (closed or Internet-wide) consists of the
following components:
#### Nodes
ACDC nodes may be a set of dedicated servers each running ACDC software. Each
node may also have a set of supporting servers that provide data for specific
ACDC functionality using unspecified (back-end) protocols. In general, all
ACDC nodes use a common ACDC core library. The library itself makes no
distinction based on the node type, though the capabilities of an individual
node can dictate many different types.
#### Data Capsules
As previously mentioned, all data is stored in named, secure capsules. All
capsules are digitally signed for authenticity and integrity, and the internal
data of each capsule is encrypted for confidentiality. Each data capsule may
contain an optional output confidentiality policy, which defines the
confidentiality restrictions imposed on any data derived from its data.
#### Capsule Storage
ACDC stores data capsules persistently, allowing nodes to publish new
capsules, fetch existing capsules, and delete capsules. All capsules are named
according to a CCN-compatible ACDC naming scheme.
#### Function as a Service
FaaS allows nodes to perform (or serve) one or more functions in a
query/response model. In general, FaaS is expected to use the same naming
schemes as capsule storage, such that any request can be static (Capsule
Storage) or dynamic (FaaS).
#### Secure Execution Environments
ACDC provides environments for secure function execution (e.g., secure
enclaves such as Intel SGX or MPC).
#### Keys
ACDC uses cryptographic keys for confidentiality, integrity, and authenticity.
#### Policies
ACDC has two types of policies: 1) confidentiality policies, and 2) integrity
policies (i.e., provenance policies). The confidentiality policies are
attribute-based encryption policies [10] that define the attributes needed to
decrypt a data capsule and thus cryptographically enforce access control.
Attributes are terms that may refer to a principal’s characteristics (e.g., a
role or identity) or proof of taking an action (e.g., validating a capsule’s
provenance). Provenance policies define a capsule’s expected provenance and
should be checked before a capsule is used as input to a function (discussed
at length in Section 3).
#### Contracts
Contracts define functions and give restrictions, limiting nodes to perform
computations on data capsules under a given set of conditions. For example, a
contract may restrict who can perform computations, require provenance checks
via a provenance policy (detailed in following sections), or require key
revocation checks.
All contracts are expected to provide an output confidentiality policy, which
defines confidentiality restrictions to impose on the output data of the
function. However, each function argument may have its own output
confidentiality policy, in which case the policies must be composed, thereby
accumulating all the restrictions from each policy (i.e., the contract and
each function argument’s output confidentiality policy).
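A minimal sketch of this composition, under our assumption that an output confidentiality policy can be modelled as a set of attribute restrictions and that composition is their union (the paper does not fix a concrete encoding):

```python
def compose_output_policies(contract_policy, arg_policies):
    """Accumulate all restrictions from the contract's output policy and
    from each function argument's output confidentiality policy."""
    composed = set(contract_policy)
    for policy in arg_policies:
        composed |= set(policy)
    return composed

# Example: the derived capsule inherits every input's restrictions.
print(compose_output_policies({"role:auditor"}, [{"org:lab"}, {"cleared"}]))
```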
## 3 ACDC Provenance Model
To reason about provenance within an ACDC architecture, we follow the W3C PROV
Data Model [6] in characterizing the elements of the model into 3 main types:
entities, activities, and agents. We further refine the model by extending the
entity type to contain 3 subtypes and the agent type to contain 2 subtypes. An
entity can be either a _key entity_ , a _contract entity_ , or a _data entity_
and an agent can be either an _account agent_ or a _node agent_.
Relation | Source | Destination | Meaning
---|---|---|---
WasAttributedTo | entity (any subtype) | node agent | The entity was created by execution on the node agent.
WasAttributedTo | entity (any subtype) | account agent | The entity was sealed under the account agent’s key(s).
WasDerivedFrom | entity (any subtype) | contract entity | The entity was created based on rules specified in the contract.
WasDerivedFrom | entity (any subtype) | data entity | The entity is dependent on the data entity.
WasDerivedFrom | entity (any subtype) | key entity | The key entity was needed to either wrap the source entity or unwrap an input entity.
Used | activity | contract entity | The contract entity defined the activity’s execution.
Used | activity | data entity | The data entity was input to the activity.
Used | activity | key entity | The activity performed some cryptographic function using the key entity.
ActedOnBehalfOf | node agent | account agent | The node agent performed a computation on behalf of the account agent.
WasAssociatedWith | activity | node agent | The activity describing the computation was performed by the node agent.
Table 1: The effect of the additional subtypes on provenance relations
introduced by ACDC to the PROV data model.
Key entities represent cryptographic keys belonging to an agent, contract
entities represent ACDC contracts, and data entities represent all other types
of data. Account agents represent the users in a computing environment and
node agents represent a secure execution environment (e.g., an SGX enclave).
Activities represent a computation that uses, manipulates, or generates
entities. Node agents act on behalf of account agents; conversely, account
agents _cannot_ act on behalf of node agents. Because node agents represent
environments where computations are performed, activities can only be
associated with node agents. Table 1 summarizes the valid types for provenance
relations affected by our additional subtypes.
Figure 2: A provenance graph of a user who has encapsulated some data
To illustrate this new distinction between entity and agent subtypes, consider
the provenance of a scenario in which a user has introduced some data into the
ACDC ecosystem at the network’s edge, shown in Figure 2. To introduce this
data, the data must be encapsulated because all data in ACDC is stored in
secure capsules. The sgx enclave is a node agent which acts on behalf of Bob
who is an account agent. The encapsulate computation is an activity associated
with the sgx enclave. The plaintext is a data entity, the encapsulate contract
is a contract entity specifying how the function should input and output
entities, $Key_{SGX}$ is a key entity attributed to the sgx enclave for secure
computation, and $Key_{B}$ is a key entity attributed to account agent Bob.
The secure capsule is a data entity generated by the encapsulate activity,
derived from the contract, key, and data entities, and is attributed to
account agent Bob.
To reason about the provenance of a distributed ACDC environment, we specify
the environment at a high level of abstraction as a 6-tuple
$D=(\mathcal{E}_{k},\mathcal{E}_{c},\mathcal{E}_{d},$
$G_{n},G_{a},\mathcal{A})$, where $\mathcal{E}_{k}$ is a finite set of key
entities ranged over by metavariable $\varepsilon_{k}$, $\mathcal{E}_{c}$ is a
finite set of contract entities ranged over by metavariable $\varepsilon_{c}$,
$\mathcal{E}_{d}$ is a finite set of data entities ranged over by metavariable
$\varepsilon_{d}$, $G_{n}$ is a finite set of node agents ranged over by
metavariable $g_{n}$, $G_{a}$ is a finite set of account agents ranged over by
metavariable $g_{a}$, and $\mathcal{A}$ is a finite set of activities ranged
over by metavariable $a$.
The set of all possible entities
$\mathcal{E}=\mathcal{E}_{k}\cup\mathcal{E}_{c}\cup\mathcal{E}_{d}$ is the
union of all entity subtypes, and the set of all possible agents $G=G_{n}\cup
G_{a}$ is the union of all agent subtypes. Because provenance is represented
by a labeled, directed acyclic graph, $V=\mathcal{E}\cup G\cup\mathcal{A}$
denotes the set of all possible vertices, $E\subset V\times V$ denotes the set
of all possible edges, $L$ denotes the set of all possible labels (relations)
and is the union of all relations, and $L^{E}$ denotes the set of all possible
graph labeling functions where $l:E\rightarrow L$ is a function that inputs an
edge and outputs the label corresponding to that edge, indicating the causal
relationship between the source and destination nodes.
The set of all provenance graphs of a distributed environment $D$ is denoted
by $2^{V}\times 2^{E}\times L^{E}$. A provenance policy is a predicate
${P:2^{V}\times 2^{E}\times L^{E}\rightarrow\\{true,false\\}}$. ACDC
provenance policies determine whether a particular subgraph is contained in
the provenance graph under consideration. It is not always the case that the
entire provenance record for a distributed environment be evaluated against a
policy. For example, a provenance policy can be evaluated at runtime to ensure
that data was generated via the expected pathways before using the data as
input for a computation. In this case, a contract will specify a provenance
policy to be evaluated over the function’s inputs; therefore, only the
provenance associated with the input data is relevant for policy evaluation,
making it unnecessary and inefficient to evaluate the policy on the entire
provenance record. Consequently, for each distributed environment there is a
one-to-many relationship between the distributed environment and the number of
provenance graphs it contains. In this paper, we refer to an _event_ as a
provenance subgraph containing an activity with all of its immediate input and
output entities along with their attributions. In a larger distributed
environment, Figure 2 would be considered the $Encapsulate$ event.
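A provenance graph $(V^{\prime},E^{\prime},l^{\prime})$ can be encoded directly as sets plus a labeling map; the following sketch encodes the $Encapsulate$ event of Figure 2 (the vertex-name strings are our own rendering):

```python
# Vertices, edges, and the labeling function l' : E' -> L for the
# Encapsulate event (cf. Figure 2); l is a dict keyed by edge.
V = {"Encapsulate", "Plaintext", "Key_B", "Key_SGX",
     "EncapsulateContract", "SecureCapsule", "Bob", "SGX"}
l = {("Encapsulate", "Plaintext"): "Used",
     ("Encapsulate", "Key_B"): "Used",
     ("Encapsulate", "Key_SGX"): "Used",
     ("Encapsulate", "EncapsulateContract"): "Used",
     ("SecureCapsule", "Encapsulate"): "WasGeneratedBy",
     ("SecureCapsule", "EncapsulateContract"): "WasDerivedFrom",
     ("SecureCapsule", "Bob"): "WasAttributedTo",
     ("Key_B", "Bob"): "WasAttributedTo",
     ("Key_SGX", "SGX"): "WasAttributedTo",
     ("Encapsulate", "SGX"): "WasAssociatedWith",
     ("SGX", "Bob"): "ActedOnBehalfOf"}
E = set(l)  # the edge set is exactly the domain of l
```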
Provenance policies are specified as boolean predicates so that large, complex
policies can be composed from simpler policies. For example, let’s consider a
scenario where Bob would like to use his secure capsule in a computation, but
would like to verify that his secure capsule was properly encapsulated (i.e.,
encapsulated with only his data and key). A policy for this situation might
ensure that: (1) the encapsulate function used Bob’s data and key, (2) if the
encapsulate function used any data and cryptographic keys, then they can only
be Bob’s data and key or the key of the node acting on Bob’s behalf, (3) the
secure capsule is only derived from Bob’s key and plaintext data and no other
account agent’s key and data, and (4) the secure capsule was computed using the
encapsulate contract. To note the importance of precise policy specification,
it may not be easy to distinguish the informal specifications of concern (1)
and concern (2). Concern (1) only ensures that the encapsulate function used
Bob’s data and key but does not preclude the function from using anyone
else’s data and key. The second concern ensures
that if the encapsulate function used any data or cryptographic keys, then the
data and keys can only belong to Bob or the node acting on Bob’s behalf.
Formally, given a provenance graph $(V^{\prime},E^{\prime},l^{\prime})\in
2^{V}\times 2^{E}\times L^{E}$, Bob can specify the following policies:
$P_{1}(V^{\prime},E^{\prime},l^{\prime})\iff\exists\varepsilon_{k}\in V^{\prime}:(Encapsulate,\varepsilon_{k})\in E^{\prime}\land l^{\prime}(Encapsulate,\varepsilon_{k})=Used$,
$P_{2}(V^{\prime},E^{\prime},l^{\prime})\iff\exists\varepsilon_{d}\in V^{\prime}:(Encapsulate,\varepsilon_{d})\in E^{\prime}\land l^{\prime}(Encapsulate,\varepsilon_{d})=Used$,
$P_{3}(V^{\prime},E^{\prime},l^{\prime})\iff\forall\varepsilon_{k}\in V^{\prime}:((Encapsulate,\varepsilon_{k})\in E^{\prime}\land l^{\prime}(Encapsulate,\varepsilon_{k})=Used)\Rightarrow(((\varepsilon_{k},Bob)\in E^{\prime}\land l^{\prime}(\varepsilon_{k},Bob)=WasAttributedTo)\lor(\exists g_{n}\in V^{\prime}:((\varepsilon_{k},g_{n})\in E^{\prime}\land l^{\prime}(\varepsilon_{k},g_{n})=WasAttributedTo)\land((g_{n},Bob)\in E^{\prime}\land l^{\prime}(g_{n},Bob)=ActedOnBehalfOf)))$,
$P_{4}(V^{\prime},E^{\prime},l^{\prime})\iff\forall\varepsilon_{d}\in V^{\prime}:((Encapsulate,\varepsilon_{d})\in E^{\prime}\land l^{\prime}(Encapsulate,\varepsilon_{d})=Used)\Rightarrow((\varepsilon_{d},Bob)\in E^{\prime}\land l^{\prime}(\varepsilon_{d},Bob)=WasAttributedTo)$,
$P_{5}(V^{\prime},E^{\prime},l^{\prime})\iff\exists\varepsilon_{d}\in V^{\prime}:(SecureCapsule,\varepsilon_{d})\in E^{\prime}\land l^{\prime}(SecureCapsule,\varepsilon_{d})=WasDerivedFrom$,
$P_{6}(V^{\prime},E^{\prime},l^{\prime})\iff\exists\varepsilon_{k}\in V^{\prime}:(SecureCapsule,\varepsilon_{k})\in E^{\prime}\land l^{\prime}(SecureCapsule,\varepsilon_{k})=WasDerivedFrom$,
$P_{7}(V^{\prime},E^{\prime},l^{\prime})\iff\forall\varepsilon_{k}\in V^{\prime}:((SecureCapsule,\varepsilon_{k})\in E^{\prime}\land l^{\prime}(SecureCapsule,\varepsilon_{k})=WasDerivedFrom)\Rightarrow(((\varepsilon_{k},Bob)\in E^{\prime}\land l^{\prime}(\varepsilon_{k},Bob)=WasAttributedTo)\lor(\exists g_{n}\in V^{\prime}:((\varepsilon_{k},g_{n})\in E^{\prime}\land l^{\prime}(\varepsilon_{k},g_{n})=WasAttributedTo)\land((g_{n},Bob)\in E^{\prime}\land l^{\prime}(g_{n},Bob)=ActedOnBehalfOf)))$,
$P_{8}(V^{\prime},E^{\prime},l^{\prime})\iff\forall\varepsilon_{d}\in V^{\prime}:((SecureCapsule,\varepsilon_{d})\in E^{\prime}\land l^{\prime}(SecureCapsule,\varepsilon_{d})=WasDerivedFrom)\Rightarrow((\varepsilon_{d},Bob)\in E^{\prime}\land l^{\prime}(\varepsilon_{d},Bob)=WasAttributedTo)$,
$P_{9}(V^{\prime},E^{\prime},l^{\prime})\iff(SecureCapsule,EncapsulateContract)\in E^{\prime}\land l^{\prime}(SecureCapsule,EncapsulateContract)=WasDerivedFrom$.
The overall provenance policy can be composed as the conjunction of policies
$P_{1}$–$P_{9}$. Specifying policies in this way allows analysts to reason
about small, simple policies. Logical connectives can then be used to compose
these simple policies into larger, more complex policies.
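Because each $P_{i}$ is an ordinary boolean predicate over $(V^{\prime},E^{\prime},l^{\prime})$, composition is plain logical conjunction; a sketch of $P_{1}$, $P_{9}$, and their conjunction over the encoding above (the `is_key` typing helper is hypothetical):

```python
def P1(V, E, l, is_key):
    """Some key entity was used by the Encapsulate activity."""
    return any(is_key(v)
               and ("Encapsulate", v) in E
               and l[("Encapsulate", v)] == "Used"
               for v in V)

def P9(V, E, l):
    """The secure capsule was derived from the encapsulate contract."""
    edge = ("SecureCapsule", "EncapsulateContract")
    return edge in E and l[edge] == "WasDerivedFrom"

def overall_policy(V, E, l, is_key):
    # Conjunction of simple sub-policies, as in the composition above.
    return P1(V, E, l, is_key) and P9(V, E, l)

print(overall_policy(V, E, l, is_key=lambda v: v.startswith("Key_")))
```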
## 4 A Case Study on Detecting Voter Fraud in E-voting
This section presents a case study of an e-voting scenario within an ACDC
architecture and provenance policies that may prevent illegal ballots from
being cast. As recent elections have come under scrutiny by both the media and
the general public [9], we believe that ACDC equipped voting machines can
provide significant benefits and increase public confidence in the integrity
of elections.
Table 2: Entities in an ACDC E-voting environment
Table 3: Activities in an ACDC E-voting environment
Table 4: Agents in an ACDC E-voting environment
### 4.1 ACDC E-voting Scenario
Within an ACDC architecture all voting may take place electronically on ACDC
equipped voting machines. For illustration purposes, we assume these voting
machines can perform similarly to Direct Recording Electronic (DRE) voting
machines with a Voter-Verified Paper Audit Trail (VVPAT) [27]. However, ACDC
equipped voting machines perform all computations securely (e.g., in a secure
enclave) and the internal data of all capsules is encrypted. Tables 4–4
describe the provenance objects in such an ACDC voting network.
In this scenario, a voter’s ballot is successfully cast after the following
steps: (1) a voter enters their unique VoterID into the ACDC equipped voting
machine, (2) the voting machine invokes a key generation function in which a
cryptographic key is generated that will be attributed to the corresponding
voter, (3) the voter will then be presented with an electronic ballot in which
they can manually enter their selections, (4) a paper ballot, containing a
cryptographically protected confirmation number, will then be generated and
displayed through a viewing glass for a limited amount of time, in which a
user can verify whether they approve the recorded selections, (5) after the
user verifies that their vote has been correctly recorded, the machine
securely stores the paper ballot for a VVPAT, (6) the machine then
electronically counts the new result by including the newly cast vote, and (7)
the machine then provides a printed receipt to the voter, which includes a
cryptographically protected confirmation number that matches the confirmation
number of the paper ballot and exclaims that their vote has been counted. The
encrypted confirmation number on the receipt provided to the voter can be used
at a later date by the voter to ensure that their vote was correctly included
in the election result [7].
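Read as a protocol, these seven steps fix a linear sequence of contract-governed activities per voter; a schematic sketch (the list and helper are our own rendering, using the activity names enumerated in the formalization below):

```python
CAST_BALLOT_FLOW = [
    "KeyGen",        # step 2: generate a key attributed to the voter
    "Select",        # step 3: voter enters selections on the e-ballot
    "Print",         # step 4: paper ballot with encrypted confirmation number
    "Verify",        # step 5: voter approves the recorded selections
    "Count",         # step 6: the newly cast vote is added to the tally
    "PrintReceipt",  # step 7: receipt with the matching confirmation number
]

def next_activity(history):
    """Next expected activity for a voter, or None once the ballot is cast."""
    if list(history) != CAST_BALLOT_FLOW[:len(history)]:
        raise ValueError("provenance deviates from the expected flow")
    if len(history) < len(CAST_BALLOT_FLOW):
        return CAST_BALLOT_FLOW[len(history)]
    return None
```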
To formalize, let
$VM=(\mathcal{E}_{k},\mathcal{E}_{c},\mathcal{E}_{d},G_{n},G_{a},\mathcal{A})$
be a distributed environment of ACDC equipped electronic voting machines
where,
* •
$\mathcal{E}_{k}$ is a finite set of key entities, where each key entity
describes a key belonging to either a voter or a voting machine,
* •
$\mathcal{E}_{c}$ is the finite set of contract entities where the possible
contracts are KeyGenContract, SelectContract, PrintContract, VerifyContract,
CountContract, PrintReceiptContract, and ExitContract,
* •
$\mathcal{E}_{d}$ is a finite set of data entities,
* •
$G_{n}$ is a finite set of node agents, where each node is an ACDC equipped
voting machine,
* •
$G_{a}$ is a finite set of account agents, where each account is a physical
user of an ACDC equipped voting machine, and
* •
$\mathcal{A}$ is a finite set of activities, where the possible activities are
KeyGen, Select, Print, Verify, Count, PrintReceipt, and Exit.
This environment consists of a set of provenance graphs $2^{V}\times
2^{E}\times L^{E}$ where
$V=\mathcal{E}_{k}\cup\mathcal{E}_{c}\cup\mathcal{E}_{d}\cup G_{n}\cup
G_{a}\cup\mathcal{A}$ is the set of all possible vertices, $E\subset V\times
V$ is the set of all possible edges, and $L^{E}$ is the set of all possible
labeling functions. We assume that in a scenario where a provenance-based
enforcement mechanism is tasked with enforcing a provenance policy at a
function execution, the mechanism is able to query the provenance record to
obtain the relevant provenance graph $(V^{\prime},E^{\prime},l^{\prime})\in
2^{V}\times 2^{E}\times L^{E}$. For this particular case study, a mechanism
can query the provenance record for all provenance associated with a
particular voter. Such an assumption is reasonable because an input-enabled
mechanism can be prompted to query the necessary provenance by a voter
inputting their VoterID; this requirement can be specified by the contract for
a specific function. In this scenario, the provenance graph being evaluated
will only contain one account agent, namely the present voter.
### 4.2 Voter Fraud Scenarios
To demonstrate the applicability of ACDC provenance for reasoning about voter
fraud in an e-voting context, we consider two real scenarios in which voters
have committed fraud and present provenance policies that might be enforced by
ACDC voting machines to prevent such fraud. Additionally, we present a
scenario in which a user may try to manipulate the voting machine and how
provenance policies can aid in reasoning about such manipulation. These
scenarios include: 1) a voter attempting to cast multiple votes [26, 1], 2) an
ineligible voter attempting to cast a vote [25, 1], and 3) a voter attempting
to cast multiple votes by exiting the system just before a receipt is printed.
#### Duplicate Voting
Consider a scenario in which a user, say Alice, is legitimately registered to
vote in two states. Although it is not a crime for Alice to be registered in
two states, it is a crime, according to state law, for her to cast more than
one vote in the same election [2]. In this scenario, Alice intends to participate in early voting in state 1 and to vote on election day in state
2. Because Alice has a legitimate VoterID for state 1, her vote will be
counted and will result in a provenance record showing that she has cast a
legitimate vote. When Alice attempts to vote on election day in state 2, based
on her provenance record, the voting machine should not allow her to cast
another ballot. The simplest check would be to determine whether Alice has
already received a receipt indicating that she has already cast a ballot. To
do so, we can express a provenance policy that defines the expected provenance
of a printed receipt. This policy can be checked at the execution of the
$KeyGen$ activity, as specified by the KeyGenContract, when Alice attempts to
cast a second ballot. Formally, given a provenance graph
$(V^{\prime},E^{\prime},l^{\prime})\in 2^{V}\times 2^{E}\times L^{E}$ that
corresponds to all provenance metadata associated with Alice, we can determine
whether Alice has been attributed a printed receipt if the following policy
$P$ evaluates to true
$P(V^{\prime},E^{\prime},l^{\prime})\iff\exists\varepsilon_{d},a,g_{a}\in V^{\prime}:((a,PrintReceiptContract)\in E^{\prime}\land l^{\prime}(a,PrintReceiptContract)=Used)\land((\varepsilon_{d},a)\in E^{\prime}\land l^{\prime}(\varepsilon_{d},a)=WasGeneratedBy)\land((\varepsilon_{d},PrintReceiptContract)\in E^{\prime}\land l^{\prime}(\varepsilon_{d},PrintReceiptContract)=WasDerivedFrom)\land((\varepsilon_{d},g_{a})\in E^{\prime}\land l^{\prime}(\varepsilon_{d},g_{a})=WasAttributedTo)).$
If the policy evaluates to true over the given provenance graph, then the
voting machine can take the necessary actions of preventing Alice from casting
a second ballot (e.g., exiting the system).
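Interpreted over the edge-label encoding from the earlier sketch, the policy $P$ is a direct conjunction of edge-membership and label tests. A minimal sketch (hypothetical names, our own illustration):

```python
# Sketch of the duplicate-voting policy P over the edge-label map l:
# true iff some data entity was generated by an activity that Used the
# PrintReceiptContract, WasDerivedFrom that contract, and WasAttributedTo
# the voter's account agent. Encoding assumed as in the earlier sketch.
def has_printed_receipt(l, data_entities, activities, voter):
    for a in activities:
        if l.get((a, "PrintReceiptContract")) != "Used":
            continue
        for e_d in data_entities:
            if (l.get((e_d, a)) == "WasGeneratedBy"
                    and l.get((e_d, "PrintReceiptContract")) == "WasDerivedFrom"
                    and l.get((e_d, voter)) == "WasAttributedTo"):
                return True  # a receipt already exists: refuse a second ballot
    return False
```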
#### Ineligible Voting
In the 2012 US election, a convicted felon successfully voted in a state that prohibits convicted felons from voting by providing false
information on the voter registration form [25]. Consider a scenario in which
Bob, who is a convicted felon, falsely indicates that he is not a convicted
felon on his voter’s registration form and is approved to vote and is provided
a legitimate VoterID. Because US convicted felon records are public record,
this record can be considered as a blacklist of account agents in an ACDC
voting network. Although a user may have a valid VoterID, voting machines can
ensure that they are not acting on behalf of blacklisted account agents.
However, to make this determination, Bob will first have to enter his VoterID
into the voting machine, thereby generating provenance of a voting machine
acting on his behalf. When the voting machine invokes the $KeyGen$ function,
the function will first use the $KeyGenContract$ to determine how it will
process entities. The contract can specify a provenance policy stating that
the function should proceed iff the voting machine with which it is associated is not acting on behalf of a blacklisted account agent. Formally, given
Bob’s provenance graph $(V^{\prime},E^{\prime},l^{\prime})\in 2^{V}\times
2^{E}\times L^{E}$ we can determine if Bob is a convicted felon if
$\exists G_{a_{blacklist}}\subseteq G_{a}:P(V^{\prime},E^{\prime},l^{\prime})\iff\exists g_{a_{blacklist}}\in G_{a_{blacklist}}:\exists g_{n}\in V^{\prime}:(g_{n},g_{a_{blacklist}})\in E^{\prime}\land l^{\prime}(g_{n},g_{a_{blacklist}})=ActedOnBehalfOf.$
If this policy evaluates to true, then it will be known that the voting
machine is acting on behalf of a blacklisted user; therefore, this user should
not be allowed to cast a vote according to state law.
#### Manipulating an ACDC Voting Machine
Consider a scenario in which a malicious voter, Mallory, is aware of the
workflow of the voting machine and attempts to manipulate a voting machine
into allowing her to vote multiple times by preventing the attribution of a
receipt for her vote. In this scenario, Mallory may be able to exit the voting
process right after the Count function executes but before the PrintReceipt
function executes. When Mallory attempts to vote again her provenance record
will not indicate that she has been attributed a receipt for voting. To detect
this scenario, we can specify a policy to detect the execution of each
function to determine how far Mallory may have gotten in the voting process.
Formally, given a provenance graph $(V^{\prime},E^{\prime},l^{\prime})\in 2^{V}\times 2^{E}\times L^{E}$, we can specify the following policy for the $KeyGen$ function; the other policies, listed below, follow the same pattern (a parametric sketch is given at the end of this subsection):
Figure 3: KeyGen provenance event. Figure 4: Policy subgraph.
* •
KeyGen
$P(V^{\prime},E^{\prime},l^{\prime})\iff\exists\varepsilon_{k},a,g_{a}\in V^{\prime}:((a,KeyGenContract)\in E^{\prime}\land l^{\prime}(a,KeyGenContract)=Used)\land((\varepsilon_{k},a)\in E^{\prime}\land l^{\prime}(\varepsilon_{k},a)=WasGeneratedBy)\land((\varepsilon_{k},KeyGenContract)\in E^{\prime}\land l^{\prime}(\varepsilon_{k},KeyGenContract)=WasDerivedFrom)\land((\varepsilon_{k},g_{a})\in E^{\prime}\land l^{\prime}(\varepsilon_{k},g_{a})=WasAttributedTo))$
* •
Select
$P(V^{\prime},E^{\prime},l^{\prime})\iff\exists\varepsilon_{d},a,g_{a}\in V^{\prime}:((a,SelectContract)\in E^{\prime}\land l^{\prime}(a,SelectContract)=Used)\land((\varepsilon_{d},a)\in E^{\prime}\land l^{\prime}(\varepsilon_{d},a)=WasGeneratedBy)\land((\varepsilon_{d},SelectContract)\in E^{\prime}\land l^{\prime}(\varepsilon_{d},SelectContract)=WasDerivedFrom)\land((\varepsilon_{d},g_{a})\in E^{\prime}\land l^{\prime}(\varepsilon_{d},g_{a})=WasAttributedTo))$
* •
Print
$P(V^{\prime},E^{\prime},l^{\prime})\iff\exists\varepsilon_{d},a,g_{a}\in V^{\prime}:((a,PrintContract)\in E^{\prime}\land l^{\prime}(a,PrintContract)=Used)\land((\varepsilon_{d},a)\in E^{\prime}\land l^{\prime}(\varepsilon_{d},a)=WasGeneratedBy)\land((\varepsilon_{d},PrintContract)\in E^{\prime}\land l^{\prime}(\varepsilon_{d},PrintContract)=WasDerivedFrom)\land((\varepsilon_{d},g_{a})\in E^{\prime}\land l^{\prime}(\varepsilon_{d},g_{a})=WasAttributedTo))$
* •
Verify
$P(V^{\prime},E^{\prime},l^{\prime})\iff\exists\varepsilon_{d},a,g_{a}\in V^{\prime}:((a,VerifyContract)\in E^{\prime}\land l^{\prime}(a,VerifyContract)=Used)\land((\varepsilon_{d},a)\in E^{\prime}\land l^{\prime}(\varepsilon_{d},a)=WasGeneratedBy)\land((\varepsilon_{d},VerifyContract)\in E^{\prime}\land l^{\prime}(\varepsilon_{d},VerifyContract)=WasDerivedFrom)\land((\varepsilon_{d},g_{a})\in E^{\prime}\land l^{\prime}(\varepsilon_{d},g_{a})=WasAttributedTo))$
* •
Count
$P(V^{\prime},E^{\prime},l^{\prime})\iff\exists\varepsilon_{d},a,g_{n},g_{a}\in V^{\prime}:((a,CountContract)\in E^{\prime}\land l^{\prime}(a,CountContract)=Used)\land((\varepsilon_{d},a)\in E^{\prime}\land l^{\prime}(\varepsilon_{d},a)=WasGeneratedBy)\land((\varepsilon_{d},CountContract)\in E^{\prime}\land l^{\prime}(\varepsilon_{d},CountContract)=WasDerivedFrom)\land((\varepsilon_{d},g_{n})\in E^{\prime}\land l^{\prime}(\varepsilon_{d},g_{n})=WasAttributedTo)\land((g_{n},g_{a})\in E^{\prime}\land l^{\prime}(g_{n},g_{a})=ActedOnBehalfOf))$
Informally, each such policy evaluates whether the relevant contract was used by an activity that generated an entity; if so, the generated entity should be derived from the specified contract and attributed either to the account agent under consideration or to a node agent acting on its behalf. Figures 3 and 4 illustrate the $KeyGen$ event and
the subgraph specified by the policy, respectively. Similar graphs for each of
the other functions and their associated policies can be found in Appendix
0.A. These policies can be composed to form a single policy to be evaluated at
the KeyGen activity whenever a voter attempts to begin the voting process.
Because we employ a separation of concerns and specify policies for each
functional execution, the mechanism enforcing such policies can determine how
far Mallory may have gotten in the voting process by determining which
policies fail. In our scenario, since Mallory’s provenance record indicates
that she completed all steps except for the PrintReceipt function, if she
attempts to vote on the same machine as her originally counted vote, then the
machine can continue its process and print a receipt with a confirmation
number based on her VoterKey. If Mallory attempts to vote on another machine,
then the machine can simply exit, perhaps notifying Mallory to return to the
original machine for a receipt.
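Each of the per-function policies above instantiates one pattern, differing only in the contract name and, for $Count$, in attribution via a node agent. The following minimal sketch (same assumed encoding and hypothetical names as before) parameterises that pattern and uses it to locate the first step for which provenance evidence is missing:

```python
# Parametric sketch of the per-function policies: each checks that
# `contract` was Used by an activity that generated an entity derived
# from `contract` and attributed to the voter (directly, or for Count
# via a node agent acting on the voter's behalf). Encoding assumed.
def function_policy(l, entities, activities, voter, contract, node_agents=()):
    for a in activities:
        if l.get((a, contract)) != "Used":
            continue
        for e in entities:
            if (l.get((e, a)) != "WasGeneratedBy"
                    or l.get((e, contract)) != "WasDerivedFrom"):
                continue
            if l.get((e, voter)) == "WasAttributedTo":
                return True
            for g_n in node_agents:   # Count-style indirect attribution
                if (l.get((e, g_n)) == "WasAttributedTo"
                        and l.get((g_n, voter)) == "ActedOnBehalfOf"):
                    return True
    return False

def first_missing_step(l, entities, activities, voter, node_agents):
    # Evaluate each policy in workflow order; the first failure tells the
    # machine how far a voter like Mallory progressed (here, PrintReceipt).
    for fn in ("KeyGen", "Select", "Print", "Verify", "Count", "PrintReceipt"):
        if not function_policy(l, entities, activities, voter, fn + "Contract",
                               node_agents if fn == "Count" else ()):
            return fn
    return None
```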
### 4.3 Challenges of Voting Provenance
Due to the increasing use of technology in elections, where voting machines can malfunction [11], may be vulnerable to attack [3], and may be hacked [4], it is important to be able to verify the trustworthiness of
results reported by voting machines. Data provenance collection is one viable
solution to ensure trustworthy results. However, in a democratic election it
is important to only reveal the final result of the election while keeping
individual votes secret. Auditing the provenance record of a DRE voting
machine in a traditional provenance architecture can reveal the results of
individual ballots and can attribute ballots to specific voters.
Prior work has examined protection mechanisms for provenance storage systems
in which the leakage of the provenance record is potentially more sensitive
than the leakage of the data for which the provenance corresponds (e.g., [5,
8]). However, such solutions are system-centric, relying on protection
mechanisms of the storage system. If the system is breached by an unauthorized
agent, the provenance record may be exposed. Therefore, the security of the
provenance record relies on the strength of security placed on the physical
storage system.
We argue that a data-centric approach is more suitable and may provide better
security guarantees in scenarios where both the data and provenance record of
such data can reveal sensitive information. Analyzing provenance records in an
ACDC e-voting network, where all data capsules contain encrypted data, does
not suffer from the drawbacks of analyzing provenance records in a traditional
system-centric architecture because an ACDC provenance record is a causal
record of named encrypted data rather than a causal record of named plaintext
data. Therefore, the only information that may be revealed by an ACDC voting
provenance record is that a specific user cast a vote but not what or who the
particular user voted for. We do not consider revealing that a particular user
cast a vote as a limitation of this architecture because this fact is inherent
to any voting system in practice.
## 5 Related Work
Several frameworks have been proposed for analyzing provenance metadata but do
so reactively and in retrospect, relying on either human analysis or the use
of automated tools that may rely on machine learning techniques to
characterize provenance graphs. Reactive security has benefits in areas such
as identifying the root cause of an attack [18] and security auditing to
ensure compliance with company policies [24]. While useful, these security
practices do not actively prevent security mishaps. Proactive security
practices should also be used in conjunction with reactive security practices.
However, because proactive security policies are specified with the intent of
being enforced, such policies must be based on precise and unambiguous
reasoning instead of human intuition. Relevant to this work is proactive
reasoning about data provenance, which has received little attention in the
literature.
Much work related to data provenance has focused in the areas of provenance
collection (e.g., [20]) and secure storage of provenance metadata (e.g.,
[21]). Both of these areas are foundational to provenance-aware systems;
however, in the context of security, it is equally important to continually
analyze provenance metadata at runtime to gain insight into and maintain a
computing environment’s overall security posture.
Due to the large amounts of data that provenance collection systems can
capture, relying on human analysis is impractical and error prone [14].
Automated tools aim to simplify and make the analysis of provenance metadata
more efficient; however, many do so at a loss in precision. Huynh et al. [15] present an automated analysis technique that relies on network analysis and machine learning; they show that their technique is able to classify provenance graphs into predetermined categories with high
accuracy. FRAPpuccino [13] is a provenance-based intrusion detection framework
that aims to distinguish benign from anomalous behavior using a machine
learning approach. Although machine learning techniques improve the efficiency
with which provenance graphs can be analyzed, in high security contexts, such
techniques have at least two drawbacks: (1) the classification categories do
not provide well-defined properties of the graphs, and (2) the classification
categories cannot provide formal guarantees about data due to the possibility
of false positives and false negatives.
CamQuery [23] is a framework for the runtime analysis of whole system
provenance. Because analysis takes place at runtime, the framework takes a
proactive approach to policy specification over provenance metadata by
expressing policies in a programmable graph processing framework inspired by
GraphChi [17] and GraphX [12]. Our approach differs from CamQuery in that we
present a formal approach for reasoning about provenance policies in a
distributed environment, which is based on a mathematical semantics of
provenance graphs.
Lemay et al. [19] present a framework for automated analysis of provenance by
using graph grammars as a way to characterize provenance graphs. However,
because the class of graphs parseable by such grammar is restricted to regular
grammars, precision is lost and some graphs become parseable that the analyst
may not intend to be; therefore, this approach is not amenable to security
policy specification in which the policy must be precise and unambiguous.
Park et al. [22], present a model for provenance-based access control in which
policies are specified using propositional logic as an underlying formalism.
This approach can provide formal guarantees about data that conforms to the
policy. However, the approach presented in [22] is specific to the access-
control domain. In this paper, we have provided a more general and expressive
framework for reasoning about provenance policies in a distributed, data-
centric environment by using predicate logic as an underlying formalism.
## 6 Conclusion and Future Work
In summary, this paper presented a new data-centric paradigm that provides
capabilities for rigorous provenance analysis over distributed systems. A
formal approach for reasoning about, and the proactive specification of,
provenance policies was introduced. Additionally, we provided a case study
that examined the provenance policies necessary to ensure integrity of an
ACDC-equipped electronic voting system without sacrificing capabilities for
post-factum auditing that traditional provenance techniques provide. We
believe that the migration from the current server-centric security paradigm
is key not only to enabling the collection of coarsely-grained provenance that is suitable for proactive policy evaluation, but also to defending against catastrophic compromises of data records within a given system. In this
regard, there are two primary directions for future work stemming from this
initial policy design and evaluation. First, the expansion of the ACDC
framework. Securing data as a first-class citizen is an approach that has a
myriad of benefits that prevent many of the pitfalls that have led to
catastrophic data breaches in systems today. Second, there is independent
advancement of provenance policies in the Function as a Service (FaaS)
execution model. Such an expansion could enable clients of services such as
AWS lambda to untangle the currently inscrutable chain of custody for inputs
and products used in FaaS-style execution. This may entail the introduction of
a distributed truncation-resistant store and provenance hooks into FaaS job
specifications, but could be handled entirely on the clients’ end.
## References
* [1] A sampling of election fraud cases from across the country. https://www.heritage.org/sites/default/files/voterfraud_download/VoterFraudCases_5.pdf, accessed: 2020–01–10
* [2] Double voting. https://www.ncsl.org/research/elections-and-campaigns/double-voting.aspx (2018), accessed: 2020–01–10
* [3] Appel, A.W., Ginsburg, M., Hursti, H., Kernighan, B.W., Richards, C.D., Tan, G., Venetis, P.: The New Jersey Voting-machine Lawsuit and the AVC Advantage DRE Voting Machine. Electronic Voting Technology Workshop/Workshop on Trustworthy Elections. (2009)
* [4] Bannet, J., Price, D.W., Rudys, A., Singer, J., Wallach, D.S.: Hack-a-vote: Security Issues with Electronic Voting Systems. IEEE Security & Privacy 2(1), 32–37 (2004)
* [5] Bates, A., Mood, B., Valafar, M., Butler, K.: Towards secure provenance-based access control in cloud environments. In: Proceedings of the third ACM conference on Data and application security and privacy. pp. 277–284. ACM (2013)
* [6] Belhajjame, K., B’Far, R., Cheney, J., Coppens, S., Cresswell, S., Gil, Y., Groth, P., Klyne, G., Lebo, T., McCusker, J., Miles, S., Myers, J., Sahoo, S., Tilmes, C.: PROV-DM: The PROV Data Model. Tech. rep. (2012), http://www.w3.org/TR/prov-dm/
* [7] Bernhard, M., Benaloh, J., Halderman, J.A., Rivest, R.L., Ryan, P.Y., Stark, P.B., Teague, V., Vora, P.L., Wallach, D.S.: Public Evidence from Secret Ballots. In: International Joint Conference on Electronic Voting. pp. 84–109. Springer (2017)
* [8] Braun, U.J., Shinnar, A., Seltzer, M.I.: Securing Provenance. In: Proceedings of the 3rd USENIX Workshop on Hot Topics in Security (2008)
* [9] Cassidy, C.A., Long, C.: Voting officials under scrutiny amid heavy election turnout. https://apnews.com/8af093ef14954d3293fae718c37f3eb3 (2018), accessed: 2020–01–10
* [10] Chase, M.: Multi-authority attribute based encryption. In: Theory of cryptography conference. pp. 515–534. Springer (2007)
* [11] Friedersdorf, C.: An embarrassment of glitches: A wealthy country should be able to conduct a national election with fewer problems than the United States experiences in the 2018 midterms. https://www.theatlantic.com/ideas/archive/2018/11/voting-machines/575044/ (2018), accessed: 2020–01–10
* [12] Gonzalez, J.E., Xin, R.S., Dave, A., Crankshaw, D., Franklin, M.J., Stoica, I.: Graphx: Graph processing in a distributed dataflow framework. In: 11th USENIX Symposium on Operating Systems Design and Implementation. pp. 599–613 (2014)
* [13] Han, X., Pasquier, T., Ranjan, T., Goldstein, M., Seltzer, M.: Frappuccino: Fault-detection through runtime analysis of provenance. In: Workshop on Hot Topics in Cloud Computing (2017)
* [14] Hassan, W.U., Aguse, L., Aguse, N., Bates, A., Moyer, T.: Towards scalable cluster auditing through grammatical inference over provenance graphs. In: Network and Distributed Systems Security Symposium (2018)
* [15] Huynh, T.D., Ebden, M., Fischer, J., Roberts, S., Moreau, L.: Provenance network analytics. Data Mining and Knowledge Discovery 32(3), 708–735 (2018)
* [16] Jacobson, V., Smetters, D.K., Thornton, J.D., Plass, M.F., Briggs, N.H., Braynard, R.L.: Networking Named Content. In: Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies. pp. 1–12 (2009)
* [17] Kyrola, A., Blelloch, G., Guestrin, C.: GraphChi: Large-Scale Graph Computation on Just a PC. In: 10th USENIX Symposium on Operating Systems Design and Implementation. pp. 31–46 (2012)
* [18] Lee, K.H., Zhang, X., Xu, D.: High accuracy attack provenance via binary-based execution partition. In: Network and Distributed System Security Symposium (2013)
* [19] Lemay, M., Hassan, W.U., Moyer, T., Schear, N., Smith, W.: Automated provenance analytics: a regular grammar based approach with applications in security. In: 9th USENIX Workshop on the Theory and Practice of Provenance (2017)
* [20] Liang, X., Shetty, S., Tosh, D., Kamhoua, C., Kwiat, K., Njilla, L.: Provchain: A blockchain-based data provenance architecture in cloud environment with enhanced privacy and availability. In: Proceedings of the International Symposium on Cluster, Cloud and Grid Computing. pp. 468–477. IEEE Press (2017)
* [21] Liang, X., Zhao, J., Shetty, S., Li, D.: Towards data assurance and resilience in iot using blockchain. In: IEEE Military Communications Conference. pp. 261–266. IEEE (2017)
* [22] Park, J., Nguyen, D., Sandhu, R.: A provenance-based access control model. In: International Conference on Privacy, Security and Trust. pp. 137–144. IEEE (2012)
* [23] Pasquier, T., Han, X., Moyer, T., Bates, A., Hermant, O., Eyers, D., Bacon, J., Seltzer, M.: Runtime analysis of whole-system provenance. In: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. pp. 1601–1616. ACM (2018)
* [24] Pasquier, T., Singh, J., Powles, J., Eyers, D., Seltzer, M., Bacon, J.: Data provenance to audit compliance with privacy policy in the internet of things. Personal and Ubiquitous Computing 22(2), 333–344 (2018)
* [25] Trischitta, L.: ‘I voted early’ sticker leads to arrest, fraud charges. https://www.sun-sentinel.com/news/fl-xpm-2013-02-22-fl-felon-voter-fraud-pompano-20130222-story.html (2013), accessed: 2020–01–10
* [26] Vielmetti, B.: Shorewood man sentenced to jail for multiple votes in several elections. https://archive.jsonline.com/news/crime/shorewood-man-sentenced-to-jail-for-multiple-votes-in-several-elections-b99677321z1-370317801.html, accessed: 2020–01–10
* [27] Wack, J.P.: Draft Standard for Voter Verified Paper Audit Trails in DRE Voting Systems (DRE-VVPAT): Supplement to the 2002 Voting Systems Standard. https://www.nist.gov/system/files/documents/itl/vote/VVPAT-Addendum-jpw-3-2-051.pdf (2005), accessed: 2020–01–10
## Appendix 0.A Provenance Graphs of Individual Case Study Events
Figure 5: PrintReceipt provenance event
Figure 6: Policy subgraph
Figure 7: Select provenance event.
Figure 8: Policy subgraph
Figure 9: Print provenance event
Figure 10: Policy subgraph
Figure 11: Verify provenance event
Figure 12: Policy subgraph
Figure 13: Count provenance event
Figure 14: Policy subgraph
# Quantum squeezing cannot beat the standard quantum limit
###### Abstract.
Quantum entanglement between particles is expected to allow one to perform
tasks that would otherwise be impossible [Bell1964, Feynman1982, Deutsch1985,
Shor1994, Giovannetti2001]. In quantum sensing and metrology, entanglement is
often claimed to enable a precision that cannot be attained with the same
number of particles and time, forgoing entanglement [Helstrom1969,
Holevo1973a, Giovannetti2004, Giovannetti2006, DemkowiczDobrzanski2012,
Zwierz2012, Pezze2018]. Two distinct approaches exist: creation of entangled
states that either _i)_ respond quicker to the signal, or _ii)_ are associated
with lower noise and uncertainty. The second class of states are generally
called squeezed states. Here we show that if our definition of success is a precision that is impossible to achieve without entanglement, then squeezed
states cannot succeed. In doing so we show that a single non-separable
squeezed state provides fundamentally no better precision, per unit time, than
a single particle.
Liam P. McGuinness
## Prelude
I have asked for and received a lot of feedback on this work from experts in the field (if you have any feedback, please contact me). Here I try to distil the arguments presented in the main text as clearly as possible.
If one wants to compare the precision of two measurement devices, a good
method is to break each device down into its individual components and analyse
the precision each component provides. If you show that each component of the
first device has no better precision than each component of the second, it is
possible to conclude that the first device cannot outperform the second. For
rigour, two additional points are needed:
1. (1)
A check that the second device has more components (as more components improve
precision).
2. (2)
An assumption that the components are independent.
This paper performs such an analysis. The mathematical content is simply to analyse the (optimum) precision per unit time that quantum mechanics predicts an individual component can achieve (to be precise, I analyse the maximum information each individual component provides per unit time, for a single measurement). One proviso is that this analysis only applies when each
component is measured once. The physics is to associate an individual
component with a single non-separable state vector. As non-separable states
can be entangled, this allows quantum correlations to improve the measurement
precision, however, with assumption (2) above, these are the only form of
correlations present.
Using basic quantum mechanics, it is straightforward to bound the amount of information per unit time that a single state vector can provide on a signal $\theta$. The amount of information (and the measurement precision) just depends on the vector response to $\theta$, i.e. the derivative $\frac{\mathrm{d}}{\mathrm{d}\theta}$. This is a well-known and standard
result (see Refs [91–93]). In terms of the mathematics and physics there is
nothing even mildly controversial about what I am saying here. Note, we don’t
even need to worry about the length of the state vector – entangled or
unentangled – any pure state has unit length.
I then make the observation that in the quantum squeezing community, people
claim to improve the precision of their measurement device (beyond a
fundamental limit) without improving the state response to the signal. In
fact, in a squeezed device, the response to the signal of a single non-
separable squeezed state is often described as being the same as that of a
single particle. In case you think that I am misrepresenting the position of
the squeezing community, please look through the papers for yourself. Refs
[12–15] are review papers and a good starting point. See also the references
within [14] and Refs [33, 35–85, 96–101]. There is a mathematical and logical
contradiction between these two observations. They cannot both be true. It
turns out that the results of the squeezing community are in error.
To prove that a device constructed from an ensemble of $N$ squeezed particles
cannot provide fundamentally more information per unit time than is possible
from a device made from the same number of independent particles (this is
known as the standard quantum limit), I note:
1. (1)
An entangled ensemble of $N$ particles contains fewer indivisible state vectors than an unentangled ensemble of $N$ particles.
That’s it. Much of the main text discusses experiments with spins, but the
analysis is general and equally applies to measuring devices composed of
photons, i.e. optical interferometers such as LIGO.
Just to reiterate the logical arguments made here.
1. (1)
Assuming the postulates of quantum mechanics are correct, the minimum
uncertainty per unit time, that any single non-separable state provides on an
unknown signal is bounded by the state response to the signal. This bound
takes into account the minimum noise when measuring a single state vector.
2. (2)
In the literature, a squeezed state is commonly described as having the same
signal response as a single particle state. However, the measurement noise of
this state is claimed to be less than the measurement noise of a single
particle. I agree that this argument sounds reasonable. It is persuasive
because it appeals to our intuitive concept of signal-to-noise determining the
measurement precision. I refer you to the preceding point. Based on our
understanding of quantum mechanics, this is impossible.
3. (3)
An ensemble of entangled squeezed states contains fewer non-separable state vectors than an ensemble of unentangled states. Therefore squeezed ensembles
cannot beat the standard quantum limit.
If I could pinpoint why people have such strong resistance to these results,
it seems to be because there is a vast body of theoretical and experimental
work claiming the contrary. If you are at that point, then I ask you first to
go through the arguments that I am making and look for an error. If none is
forthcoming, then I urge you to consider reassessing the analysis presented by
the rest of the scientific community (see the FAQs section and Ref. [31] for
a summary of my efforts to date).
I have lost a lot of confidence in the peer-review system (see Refs
[108–110]), but that is not to say that I do not want feedback. Of course I
would like other people to check my work for rigour. Not only that, I also
want to demonstrate to non-experts that this has been done. I do not think
that the peer-review system performs those two tasks very well. For those two
goals, a much more effective strategy is to financially incentivise people to
perform a thorough peer-review, i.e. offer a financial reward to anyone who
finds an error. Therefore I offer a prize of US$10,000 to the first person
that finds an error that invalidates the conclusions of this work. Please send
to me via email333Note, in Ref. [31] I have also offered a prize of US$10,000
for the first experimental demonstration of a precision beyond the standard
quantum limit, and an additional prize of US$10,000 for the first
demonstration using $N$-partite entanglement that surpasses the single
particle limit. At the least, the measurement time and number of particles
should include all unitary evolution..
## Introduction
Consider a sensing device composed of $N$ spin-$1/2$ particles, which is used to measure some unknown signal $\theta$. (These results also apply to photon interferometry and are discussed in that context in Appendix 1.) Taking just one spin and using the spin direction $\accentset{\rightharpoonup}{S}$ of this particle as a meter, then by error propagation any estimate $\tilde{\theta}$ of the value of $\theta$ has an uncertainty (more fully, we can include the possibility of a finite bias $\epsilon$ and average over $p(\theta=\Theta)$, the _a priori_ probability distribution of $\theta$, to obtain the expected uncertainty $\langle\Delta\tilde{\theta}\rangle\geq\sqrt{\int^{\Theta}p(\theta=\Theta)\left(\left|\frac{\partial\accentset{\rightharpoonup}{S}}{\partial\theta}\right|^{-2}\left(\Delta\accentset{\rightharpoonup}{S}\right)^{2}+\epsilon^{2}\right)d\theta}$):
$\Delta\tilde{\theta}\geq\left|\frac{\partial\accentset{\rightharpoonup}{S}}{\partial\theta}\right|^{-1}\Delta\accentset{\rightharpoonup}{S},$
(1)
where $\Delta\accentset{\rightharpoonup}{S}$ is the uncertainty in determining
the spin direction. The derivative term is often called the measurement
signal, and $\Delta\accentset{\rightharpoonup}{S}$ the measurement noise (incorrectly so; it is the measurement uncertainty, not noise), so that the ‘signal-to-noise ratio’ determines the measurement precision. With $N$
identical and independent spins, in general one cannot do better than
$\Delta\tilde{\theta}\geq\Delta\tilde{\theta}_{1}/\sqrt{N},$ (2)
where $\Delta\tilde{\theta}_{1}$ bounds the uncertainty using a single
particle. Note, for Eq. (2) to hold, $\Delta\tilde{\theta}_{1}$ must represent
the best possible uncertainty that can be attained with a single particle,
otherwise one could beat the limit simply by improving
$\Delta\tilde{\theta}_{1}$. Furthermore, in general $\Delta\tilde{\theta}_{1}$
is a function of time since the spin response is given by unitary evolution
and with more time one can increase the spin response or perform more
measurements to improve the uncertainty. Under this definition, Eq. (2) is
called the standard quantum limit (SQL) and it sets an uncertainty bound per
unit time that is impossible to surpass with a given number of identical and
independent spins.
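The $1/\sqrt{N}$ scaling of Eq. (2) can be illustrated numerically. The following is a minimal sketch (Python with NumPy; the measurement model $\mathrm{Pr}[1|\theta]=(1+\sin\theta)/2$ and all names are our own assumptions, not taken from the text):

```python
# Numerical sketch of Eq. (2): N identical, independent spins, each measured
# once with Pr[1 | theta] = (1 + sin(theta)) / 2. The spread of the estimate
# shrinks as ~1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
theta_true = 0.1

def estimator_std(N, trials=20000):
    counts = rng.binomial(N, (1 + np.sin(theta_true)) / 2, size=trials)
    estimates = np.arcsin(np.clip(2 * counts / N - 1, -1, 1))
    return estimates.std()

for N in (10, 100, 1000):
    print(N, estimator_std(N), 1 / np.sqrt(N))  # columns 2 and 3 agree
```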
There are two approaches to overcoming the SQL using entanglement, either make
the spin response greater, or reduce the uncertainty in measuring the spin
direction [Gross2012, Ma2011, Ludlow2015, Pezze2018]. The first makes use of
entangled NOON, CAT or GHZ states so that, in theory:
$\frac{\partial\accentset{\rightharpoonup}{S}_{N}(t)}{\partial\theta}=N\frac{\partial\accentset{\rightharpoonup}{S}_{1}(t)}{\partial\theta}$,
whilst
$\Delta\accentset{\rightharpoonup}{S}_{N}=\Delta\accentset{\rightharpoonup}{S}_{1}$,
where the subscript denotes the number of spins in the sensor [Pezze2018,
Gross2012, Ludlow2015, Bollinger1996, Dowling1998, Childs2000, Campos2003,
Kok2004, Leibfried2004, Nagata2007, Resch2007, Berry2009, Dorner2009,
Escher2011, Giovannetti2011, Daryanoosh2018, Shaniv2018, Xie2021]. In effect a
single spin with greater magnetic moment is created such that the response of
the entangled device is $N$-fold greater than that of a single spin. It is
worth noting here, that $\Delta\accentset{\rightharpoonup}{S}$ is the result
of sampling from an unknown probability distribution and since a measurement
of both a single entangled state and a single spin provide only one sample,
they have the same measurement uncertainty. We will not discuss this approach
except to note that when the resources required to generate entanglement are
fully accounted for, no improvement over the SQL or $\Delta\tilde{\theta}_{1}$
has been demonstrated [McGuinness2021].
The second approach uses entangled squeezed states so that
$\frac{\partial\accentset{\rightharpoonup}{S}_{N}(t)}{\partial\theta}=\frac{\partial\accentset{\rightharpoonup}{S}_{1}(t)}{\partial\theta}$, whereas $\Delta\accentset{\rightharpoonup}{S}_{N}=\Delta\accentset{\rightharpoonup}{S}_{1}/N$.
(Early proposals did not require entanglement to overcome the SQL, resulting in confusion as to whether entanglement is required; see Appendix 2. Conflicting definitions of the SQL still remain, for example when photon states with non-Poissonian statistics are called non-classical and claimed to surpass the SQL; I prefer the descriptor ‘non-thermal’, since in this definition any single particle, including a single baseball, is a non-classical state. We explicitly define the SQL as whatever one can achieve without entanglement, thus it is a moot point whether entanglement is required to surpass it. The definition of squeezing is also not standardized, see e.g. [Soerensen2001, Meyer2001, Maccone2020].)
I.e. the spin response of the entangled device remains the same as that of a
single spin but the measurement noise reduces by a factor of $N$ [Caves1981,
Ma2011, Pezze2018, Gross2012, Ludlow2015, Walls1983, Wodkiewicz1985, Wu1986,
Slusher1987, Xiao1987, Polzik1992, Wineland1992, Kitagawa1993, Wineland1994,
Sanders1995, Kuzmich1998, Soerensen1998, Brif1999, Kuzmich2000, Meyer2001,
Esteve2008, Appel2009, Eberle2010, Gross2010, Koschorreck2010,
Koschorreck2010a, Leroux2010, Leroux2010a, LouchetChauvet2010,
SchleierSmith2010, Wasilewski2010, Luecke2011, Sewell2012, Aasi2013,
Taylor2013, Bohnet2014, Muessel2014, Strobel2014, Kruse2016, Polzik2016,
Cox2016, Davis2016, Bohnet2016, Hosten2016, Linnemann2016, Macri2016, Tse2019,
Braverman2019, Schulte2020, Malia2020, Bao2020, PedrozoPenafiel2020,
Casacio2021, Gilmore2021, Greve2022, Malia2022]. We can already see that there
is a conflict between our explanation of the origin of the measurement
uncertainty and what is observed with squeezed states. Shouldn’t the
uncertainty in estimating the direction of a single squeezed state be the same
as for a single spin? Where does this reduced uncertainty come from? Either
our layperson description is wrong or squeezed states do not perform as
described. We now show that, if entanglement provides any benefit over the
SQL, then it must come about from increasing the sensor response to the signal
and not through reduced noise. (My apologies for switching between ‘uncertainty’ and ‘noise’; they do not mean the same thing. I would like to use the correct terminology, uncertainty, but the papers in this field refer to noise and I am quoting their claims. This conflation of terms is the crux of the issue; I encourage the reader to take careful note of this point. Returning to the start of this paragraph, $\Delta\accentset{\rightharpoonup}{S}_{N}=\Delta\accentset{\rightharpoonup}{S}_{1}/N$ does not mean the measurement noise reduces by a factor of $N$; rather, the uncertainty in estimating the spin direction of $\accentset{\rightharpoonup}{S}_{N}$ is $N$ times lower than in estimating the direction of a single particle $\accentset{\rightharpoonup}{S}_{1}$.)
## Noise independent quantum precision bound
For rigour we make two adjustments to our language. First, rather than talk
about spin direction, we address the underlying mathematical object – the
state vector. To show that the uncertainty bound obtained from an entangled
ensemble containing squeezed states is worse than that of an unentangled
ensemble, we use a counting argument based on the number of indivisible state
vectors (i.e. a state vector that cannot be further factored into non-
separable states) in the ensemble. Henceforth, a state vector,
$\left|\psi\right>$ always refers to this basic unit we are dealing with – an
indivisible state. A single state vector is never used to describe the quantum
state of a separable ensemble, instead we keep note of the number of copies of
each state. Secondly, for technical reasons and conciseness we avoid
quantitatively defining uncertainty, we define instead the (Fisher)
information on $\theta$ denoted $\mathrm{I}\left[\theta,t\right]$, provided by
a given state, or more precisely, measurement of that state [Braunstein1994]:
$\mathrm{I}\left[\theta,t\right]\equiv\int^{X}\mathrm{d}X\frac{1}{\mathrm{Pr}\left[X|\theta,t\right]}\left(\frac{\partial\mathrm{Pr}\left[X|\theta,t\right]}{\partial\theta}\right)^{2},\quad\mathrm{I}\left[\theta,t\right]\equiv\sum_{i=1}^{R}\frac{1}{\mathrm{Pr}\left[X_{i}|\theta,t\right]}\left(\frac{\partial\mathrm{Pr}\left[X_{i}|\theta,t\right]}{\partial\theta}\right)^{2}$
(3)
where $\mathrm{Pr}\left[X|\theta,t\right]$ is the conditional probability to
obtain the measurement result $X$ in time $t$ given $\theta$ and the (LHS) RHS
assumes the measurement outcomes are (continuous) discrete with $R$
possibilities. We note that for any measurement, the estimation uncertainty
$\Delta\tilde{\theta}$ is a monotonically decreasing function of
$\mathrm{I}\left[\theta,t\right]$. This observation provides the necessary
tools to compare uncertainties, to say one is greater than another, and is
sufficient for our purposes.
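For discrete outcomes, the RHS of Eq. (3) can be evaluated directly. The following minimal sketch (Python with NumPy; helper names are our own) computes it by numerical differentiation of the outcome probabilities:

```python
# Sketch: the discrete (RHS) form of Eq. (3), evaluated by numerical
# differentiation of the outcome probabilities Pr[X_i | theta, t].
import numpy as np

def fisher_information(prob, theta, t, h=1e-6):
    """prob(theta, t) -> array of outcome probabilities (summing to 1)."""
    p = prob(theta, t)
    dp = (prob(theta + h, t) - prob(theta - h, t)) / (2 * h)
    return float(np.sum(dp ** 2 / p))

# Example: a two-outcome measurement with Pr[1|theta] = theta gives
# I = 1 / (theta * (1 - theta)); here 1/0.21 ~ 4.76.
coin = lambda th, t: np.array([th, 1.0 - th])
print(fisher_information(coin, 0.3, 1.0))
```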
_Key requirement:_ The central claim in squeezing enhanced metrology is that
squeezed states have an improved intrinsic noise (uncertainty) compared to a
single spin. It is clear that the response to $\theta$ of the squeezed state,
quantified by $\left|\frac{d\left|\psi(\theta,t)\right>}{d\theta}\right|$, is
not greater than that of a single spin. One reason this must be the case is
that if the noise can be reduced by a factor of $N$, then any further
improvement in the sensor response would violate the Heisenberg limit.
Finally, we only establish a bound on the information provided by a single
measurement of the ensemble. While this allows a direct like-for-like
comparison between measurement devices, it is important to note that this is
not how measurement sensitivity is usually reported. Often, when reporting
squeezing enhancements, comparison is made to a time-averaged limit with
$\mathrm{Hz}^{-1/2}$ uncertainty improvement. Further assumptions for the
proof are provided in Appendix 3.
_Counting approach:_ For a measurement device comprised of $N$ spins, assuming
we can prove
> Statement 1: A single squeezed state does not provide fundamentally more
> information on $\theta$, per unit time, than a single spin.
It then follows that the information bound on $\theta$ for any $N$ spin
ensemble containing $M$ squeezed states is lower than that of an ensemble of $N$ unentangled spins. The reason is that a squeezed ensemble can be separated
into a maximum of $M$ independent state vectors where $M<N$ (this follows from
the very definition of entanglement). Assuming these states are independent (this assumption just ensures that any correlations come solely from entanglement), the information provided by the squeezed ensemble is $M$
times that of a single squeezed state and is therefore less than the
information provided by the unentangled ensemble. In fact, this counting
argument shows that increasing squeezing leads to a worse uncertainty bound
since there are fewer states to average over. This means that, if the uncertainty
provided by an ensemble containing squeezed states ever surpasses the single
particle bound and if the degree of squeezing/entanglement is continuously
increased, then at some point the uncertainty must get worse. And the
converse: if the measurement uncertainty always improves for arbitrary amounts
of squeezing, then the uncertainty never surpasses the single particle bound.
Many mathematical statements equivalent to Statement 1 (excluding the counting
argument) have been published (see Appendix 4). For example, Wootters showed
in 1981 that $\mathrm{I}\left[\theta,t\right]$ can be interpreted as distance
metric over quantum states [Wootters1981], meaning that if the path mapped out
by unitary evolution is the same for two states then so is the information.
This is equivalent to saying that the uncertainty depends only on the sensor
response to $\theta$ [Ou1997, Childs2000, Giovannetti2004, Zwierz2012,
Pang2014, Yuan2015, Pang2017, Gorecki2020], therefore states with no enhanced
response provide no fundamental sensing advantage. Braunstein, Caves and
Milburn provided most of the mathematical content of the proof by showing that
for pure states, $\mathrm{I}\left[\theta,t\right]$ is given solely by the
gradient of the state vector with respect to $\theta$, and does not depend on
any intrinsic uncertainty of this state [Braunstein1996]. Here we detail the
arguments in full.
## Proof - Time-independent multiplicative Hamiltonian:
$\hat{H}(\theta)=\theta\cdot\hat{H}$
Denote a spin-squeezed state as $\left|\psi_{SS}(\theta,t)\right>$, and the
state of a single spin as $\left|\psi_{1}(\theta,t)\right>$. Denote the
maximum information on $\theta$ that can be provided by any measurement on
$\left|\psi_{SS}(\theta,t)\right>$, $\left|\psi_{1}(\theta,t)\right>$ as
$\mathrm{I}_{\mathrm{SS}}\left[\theta,t\right]$ and
$\mathrm{I}_{\mathrm{1}}\left[\theta,t\right]$ respectively, then we have:
Claim: If
$\left(\frac{\mathrm{d}\left<\psi_{SS}(\theta,t)\right|}{\mathrm{d}\theta}\right)\left(\frac{\mathrm{d}\left|\psi_{SS}(\theta,t)\right>}{\mathrm{d}\theta}\right)=\left(\frac{\mathrm{d}\left<\psi_{1}(\theta,t)\right|}{\mathrm{d}\theta}\right)\left(\frac{\mathrm{d}\left|\psi_{1}(\theta,t)\right>}{\mathrm{d}\theta}\right)$
then
$\mathrm{I}_{\mathrm{SS}}\left[\theta,t\right]\leq\mathrm{I}_{\mathrm{1}}\left[\theta,t\right]$.
Physical interpretation: Squeezed states are claimed to surpass the SQL by reducing
the uncertainty associated with the state, and not by increasing the response
of the state to the signal (i.e. the derivative). To refute this claim we need
to show that if the gradient of the squeezed state with respect to the signal
is the same as that of a single spin, then information bound on the squeezed
state is less than or equal to that of a single spin. As the derivative of a
state vector is also a state vector, we can’t say one state vector is greater
than another, i.e.
$\frac{d\left|\psi_{1}\right>}{d\theta}>\frac{d\left|\psi_{2}\right>}{d\theta}$
is not a mathematical statement (the vectors actually exist in different
Hilbert spaces). To obtain a non-negative real number, we take the inner-
product. Since state vectors are normalised, this operation returns the
magnitude of the derivative.
Proof: Braunstein, Caves and Milburn showed that for a pure state
[Braunstein1996]:
$\mathrm{I}\left[\theta,t\right]=4\left[\left(\frac{\mathrm{d}\left<\psi(\theta,t)\right|}{\mathrm{d}\theta}\right)\left(\frac{\mathrm{d}\left|\psi(\theta,t)\right>}{\mathrm{d}\theta}\right)-\left|\left<\psi(\theta,t)\right|\left(\frac{\mathrm{d}\left|\psi(\theta,t)\right>}{\mathrm{d}\theta}\right)\right|^{2}\right].$
(4)
Working in the Schrödinger picture where time-evolution is carried by quantum
states, an initial state $\left|\psi_{0}\right>$ evolves in response to the
Hamiltonian $\hat{H}(\theta)=\theta\cdot\hat{H}$, according to:
$\left|\psi_{0}\right>\rightarrow\hat{U}(\theta,t)\left|\psi_{0}\right>\equiv\left|\psi(\theta,t)\right>$,
where $\hat{U}(\theta,t)=\mathrm{Exp}\left[-{i\mkern 1.0mu}\theta
t\hat{H}/\hbar\right]$. Writing $\left|\psi_{0}\right>$ in terms of the $K$
eigenstates of $\hat{H}$, denoted by their eigenvalues
$\left|\psi_{E_{k}}\right>$ with complex amplitude $\alpha_{k}$:
$\left|\psi_{0}\right>=\sum_{k=1}^{K}\alpha_{k}\left|\psi_{E_{k}}\right>$, we
have:
$\left|\psi(\theta,t)\right>=\sum_{k=1}^{K}\mathrm{Exp}\left[-{i\mkern
1.0mu}\theta tE_{k}/\hbar\right]\alpha_{k}\left|\psi_{E_{k}}\right>,$
where the derivative of this state with respect to $\theta$ is:
$\frac{\mathrm{d}}{\mathrm{d}\theta}\left|\psi(\theta,t)\right>=\sum_{k=1}^{K}(-{i\mkern 1.0mu}tE_{k}/\hbar)\mathrm{Exp}\left[-{i\mkern 1.0mu}\theta tE_{k}/\hbar\right]\alpha_{k}\left|\psi_{E_{k}}\right>.$
We first derive the maximum information a single spin can provide. For a
spin-1/2 with two eigenstates and denoting the eigenvalues of $\hat{H}$ as
$\pm\frac{\gamma}{2}$, we have:
$\left(\frac{\mathrm{d}\left<\psi_{1}(\theta,t)\right|}{\mathrm{d}\theta}\right)\left(\frac{\mathrm{d}\left|\psi_{1}(\theta,t)\right>}{\mathrm{d}\theta}\right)=\left(\frac{t\gamma}{2\hbar}\right)^{2}\left[|\alpha_{1}|^{2}+|\alpha_{2}|^{2}\right],$
and
$\left|\left<\psi_{1}(\theta,t)\right|\left(\frac{\mathrm{d}\left|\psi_{1}(\theta,t)\right>}{\mathrm{d}\theta}\right)\right|^{2}=\left(\frac{t\gamma}{2\hbar}\right)^{2}\left[|\alpha_{1}|^{2}-|\alpha_{2}|^{2}\right]^{2}.$
We can see that $\mathrm{I}\left[\theta,t\right]$ is maximised by initially
placing the spin in an equal superposition of eigenstates, so that
$|\alpha_{1}|^{2}-|\alpha_{2}|^{2}=0$. Then:
$\mathrm{I}_{\mathrm{1}}\left[\theta,t\right]=\left(\frac{t\gamma}{\hbar}\right)^{2}.$
We have shown that:
$\mathrm{I}_{\mathrm{1}}\left[\theta,t\right]=4\left(\frac{\mathrm{d}\left<\psi_{1}(\theta,t)\right|}{\mathrm{d}\theta}\right)\left(\frac{\mathrm{d}\left|\psi_{1}(\theta,t)\right>}{\mathrm{d}\theta}\right)$
and reproduced the well-known Heisenberg limit for a single spin
[Bollinger1996, Giovannetti2004, Pang2014]. Inserting into Eq. (4), the
equality stated in the claim, the maximum information of the squeezed state
is:
$\mathrm{I}_{\mathrm{SS}}\left[\theta,t\right]=\mathrm{I}_{\mathrm{1}}\left[\theta,t\right]-4\left|\left<\psi_{SS}(\theta,t)\right|\left(\frac{\mathrm{d}\left|\psi_{SS}(\theta,t)\right>}{\mathrm{d}\theta}\right)\right|^{2}.$
As the second term is a non-negative number,
$\mathrm{I}_{\mathrm{SS}}\left[\theta,t\right]\leq\mathrm{I}_{\mathrm{1}}\left[\theta,t\right]$
and the proof is complete.
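As a numerical sanity check of this derivation (a sketch, assuming $\hbar=\gamma=1$ so that $\mathrm{I}_{1}[\theta,t]=t^{2}$; the helper names are ours):

```python
# Numerical check of Eq. (4) for a single spin-1/2 in an equal
# superposition, with hbar = gamma = 1, so I_1[theta, t] should equal t^2.
import numpy as np

def psi(theta, t):
    # |psi(theta,t)> = (e^{-i theta t E_1}|E_1> + e^{-i theta t E_2}|E_2>)/sqrt(2)
    E = np.array([+0.5, -0.5])           # eigenvalues +-gamma/2
    return np.exp(-1j * theta * t * E) / np.sqrt(2)

def fisher(theta, t, h=1e-6):
    dpsi = (psi(theta + h, t) - psi(theta - h, t)) / (2 * h)
    term1 = np.vdot(dpsi, dpsi).real
    term2 = abs(np.vdot(psi(theta, t), dpsi)) ** 2
    return 4 * (term1 - term2)

t = 2.0
print(fisher(theta=0.3, t=t), t ** 2)    # both ~ 4.0
```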
In the above analysis, it may seem like we only consider quantum states and
not measurements, however Eq. (4) implicitly contains the Born rule. In
particular, $\mathrm{I}\left[\theta,t\right]$ is a projection of
$\frac{\mathrm{d}\left|\psi(\theta,t)\right>}{\mathrm{d}\theta}$ meaning that
(for this Hamiltonian) the optimal measurement basis is a projection
orthogonal to the eigenstates (here orthogonal means at an angle of $90^{\circ}$, not anti-parallel) [Braunstein1996, Childs2000, Giovannetti2004,
Pang2017]. Explicitly, for $\hat{H}=\gamma\hat{\sigma}_{z}/2$ where
$\hat{\sigma}_{z}$ is the Pauli-$z$ matrix, the measurement associated with
$\mathrm{I}_{\mathrm{1}}\left[\theta,t\right]$ is a projective measurement in
the $x-y$ plane. Considering just a 2-dimensional space with discrete
measurement outcomes, we can denote the measurement results as ‘1’ and ‘0’,
thus allowing Eq. (3) to be expressed as (using $\mathrm{Pr}\left[0|\theta,t\right]=1-\mathrm{Pr}\left[1|\theta,t\right]$ for a Bernoulli random variable taking only two values):
$\mathrm{I}\left[\theta,t\right]=\left|\frac{\partial\mathrm{Pr}\left[1|\theta,t\right]}{\partial\theta}\right|^{2}\frac{1}{\mathrm{Pr}\left[1|\theta,t\right]\left(1-\mathrm{Pr}\left[1|\theta,t\right]\right)}.$
(5)
Here we can identify
$\left|\frac{\partial\mathrm{Pr}\left[1|\theta,t\right]}{\partial\theta}\right|$
and
$\sqrt{\mathrm{Pr}\left[1|\theta,t\right]\left(1-\mathrm{Pr}\left[1|\theta,t\right]\right)}$
with the measurement signal and noise in Eq. (1), where the latter is called
quantum projection noise [Itano1993]. Note that the description of some
quantum states as having intrinsically lower uncertainty is completely absent
in this analysis, and in Eq. (5) the noise and signal are not independent. (We should be careful when equating terms in Eq. (1) and Eq. (5) because they are not identical: in Eq. (1) the response of the meter is independent of the measurement, whereas in Eq. (5), $\mathrm{Pr}\left[1|\theta,t\right]$ depends on the measurement.)
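As a worked instance of Eq. (5) (our own numerical illustration, taking $\hbar=\gamma=1$): preparing an equal superposition and projecting in the $x-y$ plane gives $\mathrm{Pr}\left[1|\theta,t\right]=\cos^{2}(\theta t/2)$, so the signal is $\left|\frac{\partial\mathrm{Pr}\left[1|\theta,t\right]}{\partial\theta}\right|=\frac{t}{2}\left|\sin(\theta t)\right|$ while the projection noise is $\sqrt{\mathrm{Pr}\left[1|\theta,t\right]\left(1-\mathrm{Pr}\left[1|\theta,t\right]\right)}=\frac{1}{2}\left|\sin(\theta t)\right|$; substituting into Eq. (5) gives $\mathrm{I}\left[\theta,t\right]=t^{2}$ for every $\theta$, reproducing $\mathrm{I}_{1}\left[\theta,t\right]$ above and showing explicitly that this ‘noise’ cannot be reduced independently of the signal.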
In Appendices 5 and 6 we generalise the proof to the following situations. We show
that if we consider:
* •
probability mixtures of squeezed states, then the information from these mixed
states cannot exceed a single spin.
* •
the expected mean information, averaged over the prior probability
distribution of $\theta$, then squeezed states cannot outperform a single
spin.
* •
a modified claim concerning the projection of the gradient
$\frac{\mathrm{d}\left|\psi\right>}{\mathrm{d}\theta}$ orthogonal to
$\left|\psi\right>$, then a squeezed state cannot outperform a single
spin for estimation of signals in arbitrary time-dependent Hamiltonians
$\hat{H}(\theta,t)$.
## Discussion
We have proved a powerful and surprising theorem that allows us to immediately
exclude any of the methods proposed in [Caves1981, Walls1983, Wodkiewicz1985,
Slusher1985, Wu1986, Slusher1987, Xiao1987, Polzik1992, Wineland1992,
Kitagawa1993, Wineland1994, Sanders1995, Kuzmich1998, Soerensen1998, Brif1999,
Kuzmich2000, Meyer2001, Orzel2001, Andre2002, Esteve2008, Appel2009,
Eberle2010, Gross2010, Koschorreck2010, Koschorreck2010a, Leroux2010,
Leroux2010a, LouchetChauvet2010, Riedel2010, SchleierSmith2010,
Wasilewski2010, Abadie2011, Luecke2011, Ma2011, Sewell2012, Aasi2013,
Taylor2013, Bohnet2014, Muessel2014, Strobel2014, Kruse2016, Polzik2016,
Cox2016, Davis2016, Bohnet2016, Hosten2016, Hosten2016a, Linnemann2016,
Macri2016, Tse2019, Braverman2019, Schulte2020, Malia2020, Bao2020,
PedrozoPenafiel2020, Casacio2021, Gilmore2021, Colombo2022, Greve2022,
Malia2022, Braginsky1974, Braginsky1977, Unruh1979, Braginsky1980,
Braginsky1996] from achieving a measurement precision beyond the SQL. Of these
works, the following are experimental papers [Xiao1987, Slusher1985,
Slusher1987, Polzik1992, Soerensen1998, Kuzmich2000, Meyer2001, Orzel2001,
Esteve2008, Appel2009, Eberle2010, Gross2010, Koschorreck2010,
Koschorreck2010a, Leroux2010, Leroux2010a, LouchetChauvet2010, Riedel2010,
SchleierSmith2010, Wasilewski2010, Abadie2011, Luecke2011, Sewell2012,
Aasi2013, Taylor2013, Bohnet2014, Muessel2014, Strobel2014, Bohnet2016,
Cox2016, Hosten2016, Hosten2016a, Kruse2016, Linnemann2016, Braverman2019,
Tse2019, Bao2020, Malia2020, PedrozoPenafiel2020, Casacio2021, Gilmore2021,
Colombo2022, Greve2022, Malia2022], the majority of which claim to demonstrate
a measurement precision that cannot be obtained without squeezing. This raises
the question:
> Why are there so many experimental papers that seem to contradict this
> proof?
The answer is not that our proof is erroneous, but rather that in general,
analyses of experiments either do not compare the achieved precision to the
SQL or involve fundamental mistakes. Below we outline some common examples in
the literature.
_Reasons why quantum squeezing has never surpassed the SQL:_
A. Comparison to a different limit
1. (1)
Comparison to an imperfect limit: An improvement in measurement precision for
a device operating far away from the SQL is demonstrated [Abadie2011,
Aasi2013, Tse2019, Casacio2021]. This is not the same as demonstrating a
measurement precision beyond the SQL.
2. (2)
Tautological definition of the measurement precision: A different quantum
limit is introduced. This limit is defined as the precision of a measurement
device _as it is and without modifications_. Modifications that utilise
entanglement are allowed, whilst forbidding any other changes to the device
[Wu1986, Xiao1987, Taylor2013]. If the device was already operating at the
SQL, and squeezing implemented whilst keeping the number of particles and time
constant, then the sensitivity cannot improve.
B. Analytic errors
1. (1)
Analysis of measurement noise not uncertainty: Conflation of noise and
uncertainty is one of the most common analytical mistakes in spin squeezing,
see e.g. [Orzel2001, Esteve2008, Appel2009, Gross2010, Koschorreck2010,
Koschorreck2010a, Leroux2010, LouchetChauvet2010, Riedel2010,
SchleierSmith2010, Sewell2012, Cox2016, Hosten2016, Kruse2016, Colombo2022,
Greve2022, Malia2022]. As an example, assume a projective measurement in the
$x$-basis is performed on a single spin pointing in an unknown direction in
the $x-y$ plane. Denote this state $\left|a\right>$ and label the outcomes of
the observable $\hat{S}_{x}=\hat{\sigma}_{x}/2$ as $\\{0,1\\}$. The
uncertainty in estimating the $x$-component of the spin direction is actually
independent of the measurement variance. If ten experiments produce the
following data-set: $\\{1,1,1,1,1,1,1,1,1,1\\}$, the SQL has not been
surpassed, even though the variance is less than 1/4. Compare to the data-set
obtained from a different state $\left|b\right>$: {0,1,1,1,0,0,1,1,1,0}.
Someone with knowledge of $\left|a\right>$, $\left|b\right>$ can prove this is
not a statistical anomaly by showing
$\left(\Delta\hat{S}_{x}\right)^{2}_{a}<\left(\Delta\hat{S}_{x}\right)^{2}_{b}$,
where the subscript indicates that the variance is calculated with respect to
these states. Despite this mathematical inequality, our uncertainty in
estimating the $x$-component of $\left|b\right>$ is the same as for
$\left|a\right>$. Ultimately we do not want measurements with less _noise_, but measurements that provide lower _uncertainty_ on the spin direction. (In the literature, $(\Delta\accentset{\rightharpoonup}{S})^{2}$ is often incorrectly equated with the variance of some collective spin observable $\hat{\bm{S}}_{z}=\sum_{i=1}^{N}\hat{S}_{z}$, such that a variance $\left(\Delta\hat{\bm{S}}_{z}\right)^{2}<|S|/4$, where $|S|$ denotes the spin vector length, is taken as evidence of an enhanced precision. Note, in Eq. (1), $\Delta\accentset{\rightharpoonup}{S}$ is the uncertainty in estimating the spin direction; it is not the (square-root) variance of an operator, nor the square-root variance of some dataset.) Often prior information from state preparation (and the value of $\theta$) is used to ensure that the measurement statistics have low variance without reducing the uncertainty in estimating $\theta$. (A numerical sketch of the two data-sets above is given after this list.)
2. (2)
Replace the signal gradient with the contrast: The measurement signal (cf. the derivative in Eq. (5)) is replaced with a constant term that does not depend on $\theta$ or the measurement basis. A common error when defining the measurement precision is to replace the gradient with the spin
vector length, characterised by the ‘contrast’, ‘visibility’, ‘coherence’ or
‘Wineland squeezing parameter’ [Meyer2001, Esteve2008, Gross2010, Leroux2010,
Leroux2010a, Riedel2010, Hosten2016, PedrozoPenafiel2020, Malia2020,
Malia2022]. Again a measurement basis is chosen to reduce the measurement noise without accounting for the commensurate reduction in signal. I.e. the experiment is designed so that $\mathrm{Pr}\left[X|\theta,t\right]\rightarrow 0$, but the accompanying loss of signal, $\frac{\partial\mathrm{Pr}\left[X|\theta,t\right]}{\partial\theta}\rightarrow 0$, is not fully accounted for; the gradient is often assumed to be constant (see the numerical sketch after this list). To some extent, the operation regime of Advanced LIGO [Aasi2015, Tse2022] and QND proposals [Braginsky1980, Kimble2001] suffer from this error.
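To make the cancellation concrete, here is a minimal numerical sketch (an editorial illustration, not taken from the original analysis) using the binary-outcome information expression written out in Appendix 4 as Eq. (5). For a projective spin measurement with $\mathrm{Pr}[1|\theta]=\cos^{2}(\theta/2)$, choosing an operating point with low measurement noise buys nothing, because the signal gradient falls by exactly the same factor.

```python
import numpy as np

# Editorial sketch: for Pr[1|theta] = cos^2(theta/2), the measurement
# noise Pr(1-Pr) and the squared signal gradient |dPr/dtheta|^2 are both
# sin^2(theta)/4, so the per-shot information (Eq. (5)) and the
# uncertainty after n shots are the same at every operating point.
n = 10
for theta in (0.1, 0.5, 1.0, np.pi / 2, 3.0):
    p = np.cos(theta / 2) ** 2
    grad = -np.sin(theta) / 2                  # dPr/dtheta
    noise = p * (1 - p)                        # binomial measurement noise
    info = grad ** 2 / noise                   # Eq. (5): equals 1 everywhere
    dtheta = np.sqrt(noise) / abs(grad) / np.sqrt(n)
    print(f"theta={theta:.2f}  noise={noise:.4f}  info={info:.4f}  "
          f"uncertainty={dtheta:.4f}")         # always 1/sqrt(10) ~ 0.316
```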
There are valid questions on how one should interpret this proof. For instance, it only addresses fundamental quantum measurement noise (footnote 14: It also neglects measurement back-action onto the signal. However, back-action is minimised when using a single particle sensor.); other technical sources of experimental noise should be reduced. One question is to what extent this is possible, and whether there is some practical regime where squeezing can be of advantage. As we note in App. 3, invoking this explanation is tantamount to saying quantum mechanics is not correct. Perhaps a more nuanced interpretation is to note that no
experiment can reach the precision bounds set by quantum mechanics. Reaching
these bounds requires perfect, instantaneous measurements with no decoherence.
But you can’t have it both ways. Claims that squeezing can surpass the SQL do
not present that message, indeed quite the opposite.
## Conclusion
At a fundamental level, quantum mechanics dictates that measurement
uncertainty is a statistical sampling phenomenon. One drawback of using
quantum correlations is that it provides fewer samples, and this can lead to worse uncertainty. In particular, some statistical analyses of measurement precision mistakenly treat the fundamental randomness of measurement observations as a noise which should be reduced. Here we have shown that
approaches to improve the measurement uncertainty by reducing this noise
cannot succeed. To misquote Rolf Landauer – The signal is the noise!
By relating metrological performance to the number of separable states we have
proved that squeezed ensembles cannot outperform unentangled ensembles in
sensing. The proof was inspired by the proposition of a general quantum
mechanical uncertainty limit per wavefunction [McGuinness2021a]. Here
wavefunctions are considered the fundamental information unit in computers and
sensors, not particles. My position is that this information-per-unit-time limit holds for all entangling procedures, not just squeezing, and applies to
general tasks in quantum information like computation. Finally, it should be
apparent that in the field of quantum physics, the peer-review process has
failed (see also [McGuinness2022, McGuinness2023, McGuinness2023a]) and this
has led to a loss of scientific progress.
## Appendix
### 1. Photon interferometry and vacuum noise
In his seminal paper on quantum mechanical noise in an interferometer, Caves
noted a fundamental noise source that “can be attributed to vacuum (zero-
point) fluctuations in the electromagnetic field, which enter the
interferometer from the unused input port” [Caves1981]. One typical physical
description of this noise is that when an electromagnetic wave enters the
interferometer at the first beam-splitter, vacuum fluctuations entering the
unused port cause an unequal splitting of the electromagnetic field. Caves
noted an equivalent view that treats photons as particles and attributes this
imbalance to the “random scattering of the input photons at the beam-
splitter”. As a result more of the input light is directed to one arm of the
interferometer and less into the other, in turn affecting the output light
intensity at both ports. Yet another description is that photon shot-noise
randomises the arrival time of photons on the detector [Tse2019]. It is
commonly understood that this noise cannot be removed, except through
inserting squeezed vacuum into the unused input port to reduce these
fluctuations. Here I present a different physical description of the
interferometer, one that changes the emphasis of the noise and, I believe, helps better predict experimental results.
As in the main text, one can learn a lot by reducing the experiment down to
its individual components and providing a physical description of this basic
experiment. So what happens when a single photon enters the interferometer?
Using a particle description of the photon, which randomly scatters at the
beam-splitter, we encounter a problem. The photon only traverses one arm, thus
there is no interference. No matter the path difference, the photon exits the
unused port with 50% probability. This is not what is observed experimentally.
Furthermore, the idea of each photon traversing just one arm of the
interferometer leads one to infer that the output signal depends on
interference between separate photons. This does not capture the basics; rather it is interference of a photon with itself that is relevant (footnote 15: More precisely, it is interference between basis states of a single wavefunction, which can include more than one photon for entangled states). I also find the
suggestion that vacuum fluctuations somehow prevent one from creating an equal
superposition state hard to accept – the creation of equal superposition
states is a basic concept in quantum metrology and quantum information
processing. Importantly, this description also incorrectly predicts
experimental results. For example, the observed probability (up to
experimental imperfections) to detect a photon at the unused port is given by
$p=\sin^{2}(\phi/2)$, where $\phi$ is the relative phase difference. If vacuum
fluctuations always enter the unused port to randomise the observed photon
statistics, then how can one explain this observation? Take $\phi=0$. Without
inserting any squeezed vacuum into the unused port, one can predict the
experimental outcome with remarkable accuracy. Fringe visibilities in excess
of 90% have been demonstrated so far for single photon interference. This
should not be possible if vacuum fluctuations of order $\sqrt{N}$ are a real
and fundamental source of noise.
A different description to those presented so far is to assume the outgoing
photon wavefunction at the beam-splitter is a perfect superposition of basis
states (as far as technical tolerances allow), so that a relative phase
difference between returning basis states determines their complex
coefficients (footnote 16: Here basis states are eigenstates of photon number in each mode of the interferometer). A photodetector, placed at the unused port,
performs a measurement of the photon wavefunction, and detects a photon with
probability given by the squared amplitude of the state in this mode. In this
description, there is no noise in the interferometer which we need to remove.
The noise is a statistical effect, due to probabilistic measurement and is the
result of the Born rule. When multiple independent photons are inserted into
the interferometer, the basis states of each individual wavefunction interfere
upon return, and the summation of many individual probabilistic measurements
yield the detected signal. We do not need to worry about balancing the light
intensity in each arm of the interferometer, except to manufacture an ideal
beam-splitter. Note, in this description, if the interferometer is operated so
that the unused port is dark, then although laser power noise and quantum projection noise are reduced [Itano1993], so is the signal, since
$\frac{\partial p}{\partial\phi}\rightarrow 0$.
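A rough numerical illustration of this description (my own sketch; the only physics input is the Born-rule probability $p=\sin^{2}(\phi/2)$ quoted above): the detector statistics at the unused port are generated purely by independent probabilistic measurements, with no vacuum-fluctuation term inserted anywhere.

```python
import numpy as np

# Editorial sketch: each photon's wavefunction splits into a superposition
# at the beam-splitter, and the detector at the unused port clicks with
# probability p = sin^2(phi/2) (Born rule). The only "noise" is binomial
# sampling over independent photons; no sqrt(N) vacuum term is added.
rng = np.random.default_rng(seed=1)
n_photons = 100_000
for phi in (0.0, np.pi / 4, np.pi / 2, np.pi):
    p = np.sin(phi / 2) ** 2
    clicks = rng.binomial(n_photons, p)
    shot = np.sqrt(n_photons * p * (1 - p))
    print(f"phi={phi:.2f}  expected={n_photons * p:9.1f}  "
          f"observed={clicks:6d}  shot-noise~{shot:.1f}")
# At phi = 0 the unused port stays perfectly dark, consistent with the
# high single-photon fringe visibilities quoted above.
```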
To convincingly demonstrate that vacuum fluctuations are not a fundamental
noise source in photon interferometry, we can go one step further and remove
the beam-splitter, thereby preventing vacuum fluctuations from entering the
interferometer. To illustrate the idea, take photons produced from a pulsed
laser, and replace the beam-splitter with a flippable (double-sided) mirror,
starting flipped down so that incident photons enter the longitudinal arm of
the interferometer, after which the mirror is flipped up, so that the second
pulse enters the transverse arm. The mirror remains in this position so that
photons returning from the longitudinal arm are sent out the antisymmetric
(unused) port, after which it is flipped down allowing photons returning from
the transverse arm to output at the same port. Now we do want to compare the
relative phase between separated optical pulses. However, as optical
photodiodes are sensitive only to the light intensity and not the phase, this
mode of operation seems forbidden. It turns out that coherent ensembles of
spins can be used as phase sensitive detectors.
As a first approximation, let’s treat the photodetector quantum mechanically
and the optical pulses as classical control fields. In this description,
gravitational waves change the relative phase of control fields acting on spin
qubits. The first (longitudinal) optical pulse performs a $\pi/2$-rotation on
the ensemble, where the optical phase is stored coherently in the spin
direction of the ensemble. This phase can be ‘readout’ with the second
(transverse) optical pulse which performs a $\pi/2$-rotation around an axis
determined by the relative phase of the two pulses (see [McGuinness2021a] for
a demonstration with microwave photons). Finally the spin state along $z$ should be projectively read out. For a single spin, the probability to remain
in the initial $\left|0\right>$ state is:
$p(\left|0\right>)\approx\sin^{2}(\phi/2)$, where $\phi$ now refers to the
phase difference between the two pulses. If the spin energy transition is
perfectly resonant to the laser frequency, time-delays between the pulses have
no effect on the final spin state.
An advantage of treating spins rather than photons, as the measuring device,
is that it is much easier to perform quantum non-demolition (QND) measurements
on spins. Not only that, sensitivity to power fluctuations can be removed by
operating near $\phi=\pi$ so that $p(\left|0\right>)\approx 1$ and the spins
are in a ‘dark fringe’. Consider fluctuations slow enough so that both the
longitudinal and transverse pulses have the same intensity. Now instead of
$\pi/2$-rotations, the spins are rotated through an angle $\Theta$. The
$\pi$-phase shift of the second pulse ensures the spins are brought back to
$\left|0\right>$. There is a fundamental physical principle behind this
observation, and it allows quadratic suppression of all amplitude
fluctuations. The photon wavefunction is mapped to the spin state, so that
conjugate observables of the photon wavefunction (phase and amplitude) are
mapped to conjugate spin observables ($\hat{S}_{z}$ and $\hat{S}_{x}$).
Measurements that obtain information on the photon phase cannot also reveal information on, and are therefore insensitive to, the photon amplitude.
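A minimal sketch of this cancellation (my own construction using standard two-level rotation matrices; pulse shapes, detunings and technical imperfections are ignored): two pulses of equal area $\Theta$, the second offset in phase by $\pi+\delta$, return the spin to $\left|0\right>$ exactly when $\delta=0$, whatever the value of $\Theta$, and respond only quadratically in $\delta$ near the dark fringe.

```python
import numpy as np

# Editorial sketch: R(phi, theta) rotates the spin by angle theta about
# an axis at angle phi in the x-y plane.
def rot(phi, theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s * np.exp(-1j * phi)],
                     [-1j * s * np.exp(1j * phi), c]])

for theta in (np.pi / 2, np.pi / 3, 1.0):   # pulse-area (amplitude) errors
    for delta in (0.0, 1e-2, 1e-1):         # phase offset from the dark fringe
        u = rot(np.pi + delta, theta) @ rot(0.0, theta)
        p1 = abs(u[1, 0]) ** 2              # probability to leave |0>
        print(f"Theta={theta:.3f}  delta={delta:.0e}  p(|1>)={p1:.2e}")
# p(|1>) vanishes for any Theta at delta = 0 (amplitude noise cancels),
# and grows only quadratically, ~ sin^2(Theta) * delta^2 / 4, near the
# dark fringe.
```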
To see this, assume we only want to detect gravitational waves along the
longitudinal axis, which corresponds to measuring the phase $\varphi$ of the
first optical pulse. The first pulse is mapped to the spin state resulting in:
$\left|\psi\right>=\cos(\Theta)\left|0\right>+e^{-{i\mkern
1.0mu}\varphi}\sin(\Theta)\left|1\right>.$
The pulse amplitude (proportional to $\Theta$) is mapped to $S_{z}$ and (the cosine of) $\varphi$ is mapped to $S_{x}$. As phase and amplitude are conjugate
observables, a $\hat{S}_{z}$ measurement gives no information on $\varphi$ –
one observes $p(\left|0\right>)=\cos^{2}(\Theta)$,
$p(\left|1\right>)=\sin^{2}(\Theta)$. Measurement of the laser phase
$\varphi$, requires measurement of a conjugate observable. In order to perform
such a measurement, we do not need to send an optical pulse into the
transverse arm of the interferometer, we just need to implement a
$\hat{S}_{x}$ (or $\hat{S}_{y}$) measurement. Although there are technical
challenges in performing such a perfect measurement, the postulates of quantum mechanics assume that it is possible. Furthermore, we do not need to
analyse the back-action this measurement induces on the mirror and there is no
radiation pressure trade-off. We have now described a setup that allows
realisation of the QND type measurements originally proposed by Braginsky,
Vorontsov and Thorne [Braginsky1980] and more recently by Kimble et al.
[Kimble2001]. In the language of uncertainty squeezing and QND measurements,
one would say that the meter remains in an eigenstate, and measurements of
this state have minimum uncertainty. However, when the measurement basis is
chosen to make the readout noise zero, the meter response is quadratic to
phase shifts and is insensitive to small phase shifts.
Finally, one should again consider the case where just a single photon enters
the interferometer and is detected with just a single spin. The spin-photon
interaction should be tailored so that the combined system is in an entangled
superposition:
$\frac{1}{\sqrt{2}}\left(\left|0_{S}\right>\left|1_{P}\right>+e^{-{i\mkern
1.0mu}\varphi}\left|1_{S}\right>\left|0_{P}\right>\right)$, where the $S$, $P$
subscripts denote the spin and photon states respectively. In this scenario
there is the outstanding issue of defining the phase of a single photon. I
argue that $\varphi$ is a physical property of the experiment, and that it can
be observed by interfering this state with a suitable (single) photon state.
Although further details are required to make this discussion rigorous, that
does not make it useless. All physical descriptions involve some
approximations, the key is to find which ones are critical to obtaining
accurate predictions. One benefit of this perspective is to demonstrate that
vacuum fluctuations can be removed from an interferometer. We have reduced the
problem of gravitational wave detection to estimating spin direction, a
problem which is explicitly addressed in the main text. Of course, vacuum
fluctuations are replaced with something else – quantum projection noise when
the spin state is readout – but as we have shown, this noise cannot be removed
through squeezing. Furthermore, this shifts the focus: we should concentrate on reducing technical noise – laser power fluctuations, beam-splitter imperfections, detector efficiency. Most importantly, there is an achievable parameter space where the
predictions of this analysis diverge greatly from treatments of vacuum
fluctuations. For low input power into LIGO, where the number of photons is small and radiation pressure is very small, we can define the minimum possible sensitivity per unit time that one can achieve with $N$ input photons into the interferometer. If we now implement squeezing and use the same number of input photons into the interferometer and the same measurement time, we can ask whether the interferometer sensitivity can surpass this limit. Our analysis shows that the answer to this question is no.
### 2. A confusing history of early ideas to surpass the SQL
The idea of using special projective measurements (not states), with reduced
noise originated in discussions on improving the sensitivity of gravitational
wave detectors [Braginsky1974, Braginsky1977, Unruh1979, Braginsky1996]. These
low noise measurements, called quantum non-demolition (QND) measurements, are
performed by choosing a measurement operator so that the meter is in an
eigenstate of the observable and the variance of the measurement observable
goes to zero [Unruh1978, Braginsky1980]. However no entanglement is required
to observe such a variance since separable eigenstates also yield zero
variance results. One unfortunate relic of this early analysis which persists
today is that the SQL is often defined as the minimum obtainable variance on
one observable when the variance equals that of another conjugate observable.
Often this statement is incorrectly reduced to saying that the uncertainty in
measuring spin-direction is equivalent to the spin noise, giving the SQL:
$\Delta\accentset{\rightharpoonup}{S}=\sqrt{|S|}/2$. On this (incorrect) reading, when the spin noise is $<\sqrt{|S|}/2$, the SQL for measuring spin direction has been surpassed.
Initially the idea was to take advantage of better measurements, not entangled states; however, around the time of Caves’ proposal for squeezed vacuum fluctuations in the context of photon interferometry [Caves1981], the emphasis shifted to states of the meter with reduced uncertainty. These methods were then
extended to spin states in Ramsey interferometers, called spin-squeezed states
[Kitagawa1991, Wineland1992, Kitagawa1993].
We can summarize some reasons why QND measurements do not surpass the SQL.
First, the requirement of perfect prior knowledge about the state of the meter
in order to perform a QND measurement is self-defeating (footnote 17: If we are being pedantic, one can learn _which_ eigenstate the meter is in, and perform discrimination). Put another way, if we want the noise to remain low, the
meter must remain in an eigenstate, but if the meter simply remains in an
eigenstate then nothing is happening?! Also there is an inherent lack of
recognition of the difference between ‘uncertainty’ and ‘noise’. In general,
QND schemes neglected to consider that the measurement signal and measurement
noise are not independent, measurements with reduced noise – where the meter
is near an eigenstate of the measurement operator – also have less signal. In
fact, Wootters showed in 1981 [Wootters1981], that even in a $2^{N}$
dimensional Hilbert space the optimal measurement yields a probability
function of the form:
$\mathrm{Pr}\left[1|\theta\right]=\mathrm{cos^{2}}\left[\frac{m}{2}(\theta-\theta_{0})\right]$,
where $m$ is an integer and $\theta_{0}$ is a constant; thus when the
probability to measure an observable goes to one or zero, the gradient
disappears (Unruh used related arguments in response to initial proposals [Unruh1978]). Moreover, Itano et al. noted that since projective measurements
are not perfect, it is often preferable to choose a measurement that maximises
projection noise [Itano1993]. The message here is that simply reducing
measurement noise does not guarantee better precision, and in general
measurements which maximise noise are better. Finally, Giovannetti, Lloyd and
Maccone proved, for a restricted set of Hamiltonians, that entangled states
and not measurements are required to surpass the SQL [Giovannetti2006].
Another relic of early suggestions to use QND measurements to surpass the SQL
is their current (footnote 18: It seems that my characterisation of QND measurements as a discounted historical idea is not accurate. Carlton Caves informs me that LIGO are still working on QND measurements as outlined here [Kimble2001].) use
in the preparation of squeezed spin states [Kuzmich2000, Appel2009,
Koschorreck2010, LouchetChauvet2010, SchleierSmith2010, Sewell2012,
Bohnet2014, Cox2016, Hosten2016, Hosten2016a, Braverman2019, Malia2020,
Greve2022, Malia2022]. The idea is to place an ensemble of spins initialised
in $\left|0\right>^{\otimes N}$ in an optical cavity, where a $\pi/2$-pulse
rotates $\accentset{\rightharpoonup}{S}$ into the $x-y$ plane. A QND
measurement is then performed on the ensemble to project a sub-ensemble of the
spins to either $\left|0\right>$ or $\left|1\right>$. This measurement is
generally a dispersive population measurement of the spins, where an optical
probe pulse is inserted into the cavity and the phase (frequency) shift of the
output light is recorded. The shift is proportional to the number of atoms
projected into $\left|0\right>,\left|1\right>$. Apparently, the ensemble
contains an entangled state with less noise and therefore improved measurement
precision.
I strongly disagree with this description. After the QND measurement, I would
describe a single state in the ensemble as being in a probability mixture of
$\left|0\right>,\left|1\right>$ and $\left|x\right>$ (footnote 19: Actually, not exactly $\left|x\right>$; the electromagnetic field of the optical probe slightly rotates this state around $z$). To refute this description at least
one piece of information is required: a definition of the phase relationship
between entangled basis states. For example, one can write an entangled Bell
state as: $(\left|00\right>+\left|11\right>)/\sqrt{2}$, and although it is
generally overlooked, a well-defined phase between the states
$\left|00\right>$ and $\left|11\right>$ is required to generate and make use
of this entangled state (in this example the state can be written
$(\left|00\right>+e^{{i\mkern 1.0mu}\phi}\left|11\right>)/\sqrt{2}$ with
$\phi=0$). Otherwise we have a mixed state and the best we can say is that for
any measurement we will record either a ‘0’ or a ‘1’, each with 50%
probability. None of the papers that use QND preparation of a “squeezed” spin
state define or make use of any phase relationship between the entangled basis
states. It is true that knowledge of the phase of the initial $\pi/2$-pulse
and the intensity of the probe light is required when measuring these states, but
these parameters do not determine $\phi$ (see Assumptions and FAQ’s sections
below).
### 3. Proof assumptions
For completeness, we list some assumptions of the proof below. In particular
we assume:
* The postulates of quantum mechanics.
* The entire dependence on $\theta$ is encoded in $\hat{H}(\theta,t)$. The
starting state $\left|\psi_{0}\right>$ and the measurement operator are
independent of $\theta$.
* Apart from the initial state vectors that evolve to
$\left|\psi_{SS}(\theta,t)\right>,\left|\psi_{1}(\theta,t)\right>$, everything
else in the external universe is the same when comparing unentangled and
entangled ensembles. The prior information on $\theta$ and the ability to
apply control fields is the same. In fact, my understanding is that the
squeezing community claim even unitary evolution is the same for these two
states, therefore the only difference of consequence is the starting state
(and the measurement).
* Each copy of the same state is independent and measurements on copies of the
same state are identical. This assumption ensures that any enhancement over
the SQL comes solely from entanglement, not other correlations, additional
information or different control on copies.
Comment on Assumption 1: I discuss one assumption of the analysis presented
here and use a simple example to illustrate how it is critical to the proof.
The assumption is the (Born rule) postulate of quantum mechanics – the
probability of obtaining a measurement outcome $m$ when measuring a state
$\left|\psi\right>$ is equal to
$p(m)=\left<\psi\right|\hat{M}^{\dagger}_{m}\hat{M}_{m}\left|\psi\right>$,
where $\\{\hat{M}_{m}\\}$ is the collection of self-adjoint measurement
operators associated with outcomes $m$ (Postulate 3 in [Nielsen2000]). This
postulate does not account for the fact that measurements are never perfect in
experiments and that some quantum states are easier to measure.
If we compare readout of the following spin states, performed by scattering
photons and counting the total number of photons detected (a collective spin
measurement):
$\displaystyle\frac{1}{\sqrt{2}}(\left|0\right>+\left|1\right>)\qquad\text{and}\qquad\frac{1}{\sqrt{2}}(\left|00\right>+\left|11\right>)$
then the second state is easier to measure. For readout along $z$, the photon
difference for measurements of the second state is twice that when reading out
the first state (footnote 20: One could argue that these measurement results have greater variance, i.e. the variance of the operator $(\Delta S_{z})^{2}$ is greater for the second state. Apart from the fact that we are comparing two different operators here, at a fundamental level in terms of the amount of information provided, I disagree. If the measurements are perfect, we can take readout of the first state and multiply by an arbitrarily large number to increase the variance (or multiply by a small number to reduce the variance). In terms of measurement statistics, this does not change the information contained in the measurements.) Whenever state readout is not perfect, then (under the fair sampling assumption) the second state provides more information. A few points of emphasis: while this is a practical limitation in
every real experiment, the proposition that it represents a fundamental
precision limit in quantum squeezing is equivalent to saying quantum mechanics
is incorrect. Also, increasing the readout fidelity removes this loophole
since the information difference reduces as the readout improves. Importantly, in the spin squeezing literature, the second state (or its extension to the $N$ particle Hilbert space) is not being used; the following is used instead:
$\frac{1}{\sqrt{2(1+\epsilon^{2})}}\left(\epsilon\left|00\right>+\left(\left|10\right>+\left|01\right>\right)+\epsilon\left|11\right>\right)$
where $\epsilon$ is real, $\epsilon<1$ for a squeezed state, $\epsilon\ll 1$
in the limit of strong squeezing, and $\left|10\right>$, $\left|01\right>$
cannot be distinguished by a collective spin measurement along $z$
[Kitagawa1993, Ma2011, Andre2002, Vuletic2023] (see FAQ’s section).
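As a toy illustration of the fidelity point (my own model, not one used in the cited papers): if single-spin readout flips the binary outcome with probability $1-F$, the extracted information degrades for $F<1$ and recovers as $F\rightarrow 1$, which is the sense in which improving the readout closes the loophole.

```python
import numpy as np

# Editorial toy model: imperfect readout flips the binary outcome with
# probability 1 - F, so the observed probability becomes
#   Pr'[1|theta] = F*p + (1-F)*(1-p),  with p = cos^2(theta/2).
theta = 1.0
p = np.cos(theta / 2) ** 2
g = -np.sin(theta) / 2                  # dp/dtheta
ideal = g ** 2 / (p * (1 - p))          # = 1 for this Pr[1|theta]
for F in (0.9, 0.99, 0.999, 1.0):
    pp = F * p + (1 - F) * (1 - p)
    gg = (2 * F - 1) * g                # chain rule on Pr'
    info = gg ** 2 / (pp * (1 - pp))
    print(f"F={F}: info = {info:.4f}  (ideal = {ideal:.4f})")
# The information deficit shrinks monotonically as F -> 1.
```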
Assumptions for physical interpretations:
The following assumptions are not required to derive the mathematical
inequalities, but if one then uses those inequalities to claim that the
precision of a squeezed device cannot surpass the precision of an ideal
unentangled ensemble, that argument requires further assumptions. I.e. when
going from the maths to physical interpretations of the proof one makes
implicit use of these assumptions.
* In general (quantum) measuring devices are not initialised in
$\left|\psi_{0}\right>$, therefore we should account for the time and
resources required to prepare this state. Using $\left|\psi_{0,1}\right>$,
$\left|\psi_{0,SS}\right>$ to differentiate between $\left|\psi_{0}\right>$
for a single spin and a squeezed state respectively, then we assume the time
and resources to generate $\left|\psi_{0,1}\right>$ is not greater than
$\left|\psi_{0,SS}\right>$. In every proposal and experiment that I am aware
of, at least one copy of $\left|\psi_{0,1}\right>$ is used to create
$\left|\psi_{0,SS}\right>$. For example, a $\pi/2$-rotation is performed on at
least one spin as the first step in creating entangled spin states. To create
squeezed photon states, single photons are inserted into a non-linear optical
element, or the interferometer together with an additional field. Thus the
assumption that, in general it is more technically demanding and time-consuming to generate the ideal $\left|\psi_{0,SS}\right>$ compared to the ideal $\left|\psi_{0,1}\right>$, seems correct. In fact, when generating entangled NOON, GHZ and CAT type states, (I claim) it is precisely because the
converse assumption does not hold that these physical devices do not
outperform a single particle. I would like to remove this assumption, but have
been unable to find a rigorous proof. One approach is to compare the distance
between the initialised basis state $\left|0\right>^{\otimes N}$ and
$\left|\psi_{0,1}\right>$ or $\left|\psi_{0,SS}\right>$. However,
$\mathrm{Dist}\left[\left|0\right>,\left|\psi_{0,1}\right>\right]\nless\mathrm{Dist}\left[\left|0\right>^{\otimes
N},\left|\psi_{0,SS}\right>\right]$ for any distance metric I have found.
* A single spin produces no greater measurement back-action onto the signal
compared to a single squeezed state. Again it seems like this assumption can
be made rigorous by proving that a single spin produces minimal back-action
for the amount of information extracted, and showing this is not the case for
a single squeezed state. We can remove this assumption with respect to LIGO,
since a single photon has lower radiation pressure than a squeezed state. Our
treatment of the SQL assumes that measurement back-action is zero. I.e. we
treat the signal classically and assume it has a fixed value that does not
vary. This is standard in theoretical analyses of the Heisenberg limit,
especially metrology with spins. For LIGO, this is the low light power
operation regime. Again we emphasise that, as squeezed states produce more back-action than a single photon, our results also apply in the high power regime.
* In going from (Fisher) information to measurement uncertainty, we make
assumptions on the estimator obtained from measurements of single spin
compared to measurements of a squeezed state. Note, we do not need to assume
that the minimal variance unbiased estimator exists [Kay1993], which may not
always be the case. For readout of $\left|\psi_{1}(\theta,t)\right>$ we just
need the existence of any estimator with lower mean squared error than the
best estimator obtained from readout of $\left|\psi_{SS}(\theta,t)\right>$. In
practice, this is easy to check, since state readout of a single spin only
gives two outcomes and finding the estimator is straightforward.
* We assume that the information limit for ideal measurements of a single spin
can be reached. More precisely, if the bound is not tight, meaning there is a gap between theory and practice, we assume that squeezed states cannot fit in this gap. In general, if we define $t$ as the total measurement time, then the finite $\pi/2$-rotation time required to create $\left|\psi_{0}\right>$ and the finite readout time prevent $\mathrm{I}_{\mathrm{1}}\left[\theta,t\right]$
from being reached. However this is also the case for squeezed states. We also
assume no other technical noise or decoherence, or more precisely that
measurements of single particle states are less susceptible to this noise.
Importantly, this assumption allows us to define
$\mathrm{I}_{\mathrm{1}}\left[\theta,t\right]$, and we note these assumptions
are generally used in derivations of the SQL and $\Delta\tilde{\theta}_{1}$.
Going beyond this assumption seems to require a modification of the postulates of quantum mechanics. That is not a position I advocate.
### 4. Alternative proofs of a noise independent uncertainty limit
The following gives some sketches for alternative approaches to obtaining the
same proof as in the main text. They are not rigorous, and more work is required to settle the technical details.
1. We could make the observation that most experimental implementations of
squeezing such as quadrature detection in optical interferometers or spin
readout in Ramsey interferometers result in a binary outcome measurement. Thus
we can use the expression given in Eq. (5):
$\mathrm{I}\left[\theta,t\right]=\left|\frac{\partial\mathrm{Pr}\left[1|\theta,t\right]}{\partial\theta}\right|^{2}\frac{1}{\mathrm{Pr}\left[1|\theta,t\right]\left(1-\mathrm{Pr}\left[1|\theta,t\right]\right)}.$
It is left as an exercise to show that any modifications to
$\mathrm{Pr}\left[1|\theta,t\right]$ that increase
$\mathrm{I}\left[\theta,t\right]$ necessarily increase
$\left|\frac{\partial\mathrm{Pr}\left[1|\theta,t\right]}{\partial\theta}\right|$.
More involved is to show that this is the case even for probability
distributions taking $2^{N}$ discrete values, thus we do not need to restrict
ourselves to a two-dimensional probability space.
2. We can follow Wootters [Wootters1981] and show that
$\mathrm{I}\left[\theta,t\right]$ is a distance metric on state vectors,
characterising the (angular) distance between the start and final state. As
this distance is parameterized by the state evolution it follows that if state
evolution is the same for two states, then the information is also the same.
Equivalently, the information provided by a path of states is given by the
gradient vector. We need to use the fact that state vectors are normalised, so
they have the same length.
3. In Refs. [Wootters1981, Braunstein1994, Childs2000, Giovannetti2006,
Jones2010, Pang2017] amongst others, an uncertainty bound depending only on
the maximal difference in eigenvalues of
$\frac{\partial\hat{H}(\theta,t)}{\partial\theta}$ is provided. Minimising the
uncertainty requires placing the meter in an equal superposition of states
with extremal eigenvalues, $\mu_{\mathrm{max}}(t),\;\mu_{\mathrm{min}}(t)$:
$\left|\psi(t)\right>=\frac{1}{\sqrt{2}}\left(\left|\mu_{\mathrm{max}}(t)\right>+e^{{i\mkern
1.0mu}\Phi}\left|\mu_{\mathrm{min}}(t)\right>\right),$
where $\Phi$ is an arbitrary phase. The optimal state is isomorphic to a
single spin, since it is just a 2-dimensional space and the measurement
uncertainty per unit time is limited by the maximum response speed of the
meter to changes in $\theta$.
4. We could observe that the uncertainty on $\theta$ in Eq. (1) is neatly divided
into two time-zones (this is in contrast to Eq. (5)). The first contribution
is during sensor interaction with the signal and the second occurs after
interaction and is just readout of the sensor. Now the benefit from squeezing
closely resembles the original proposals to surpass the SQL (see Appendix 2).
But Giovannetti, Lloyd and Maccone proved that entanglement cannot provide any
benefit at the measurement stage for a restricted class of Hamiltonians
[Giovannetti2006]. For a rigorous proof we also need to show that the two
terms in Eq. (1) are independent. For example, it could be that states which interact faster also happen to be states that cannot be measured well. This happens to be true when we consider the meter as an ensemble of spins: putting the whole ensemble in a
NOON state reduces the ability to measure the direction of the meter as
compared to $N$ copies of the same state. However this is not the case for
squeezed states. The temporal separation achieves this to the extent required,
since after interaction, the first term is fixed and we do not yet need to
make any decision on the measurement.
5. We could apply results from quantum state tomography, where the uncertainty in
estimating the spin direction of an unknown state just depends on the number
of copies of the state [ODonnell2016, Aaronson2018].
6. We could observe the hint provided by looking at state discrimination, i.e.
when we know that $\theta$ can take only one of two discrete values. Childs,
Renes and Preskill [Childs2000] showed that driving the state to one of two
orthogonal states minimises the discrimination uncertainty and the best one
can do is to drive as quickly as possible. There is no benefit to reducing
readout noise here since it already goes to zero.
### 5. Proof for average expected information and mixed states
The key to these extended proofs is that we are dealing with probability
distributions over non-negative real numbers, so we don’t have to worry about
any complex number magic. When comparing summations of non-negative real numbers in which each term of one sum is no greater than the corresponding term of the other, the inequality carries over to the sums; for mixed states the same role is played by convexity of the Fisher information.
Mixed states: To prove the claim when the squeezed state is a mixed state
$\rho_{0,SS}$, we can write the initial state as a probability distribution over $L$ pure states, with probabilities $p_{l}$: $\rho_{0,SS}=\sum_{l=1}^{L}p_{l}\left|\psi_{0,SS}\right>_{l}\left<\psi_{0,SS}\right|_{l}$. As
we have assumed that the initial state does not depend on $\theta$, we can use
the property that the Fisher information is convex over mixed states when
$p_{l}$ is independent of $\theta$ [Liu2020].
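A quick numerical sanity check of this convexity in its classical form (editorial sketch; the two binary-outcome distributions below are arbitrary choices): the Fisher information of a mixture never exceeds the corresponding mixture of the informations.

```python
import numpy as np

# Editorial sketch: for binary-outcome distributions p1, p2 depending on
# theta, the Fisher information of the mixture lam*p1 + (1-lam)*p2 never
# exceeds the mixture of the individual informations.
def fisher(p, g):
    return g ** 2 / (p * (1 - p))

theta, lam = 0.9, 0.4
for c1, c2 in [(0.0, 0.5), (0.0, 1.5), (0.2, 2.0)]:
    p1, g1 = np.cos((theta - c1) / 2) ** 2, -np.sin(theta - c1) / 2
    p2, g2 = np.cos((theta - c2) / 2) ** 2, -np.sin(theta - c2) / 2
    pm, gm = lam * p1 + (1 - lam) * p2, lam * g1 + (1 - lam) * g2
    avg = lam * fisher(p1, g1) + (1 - lam) * fisher(p2, g2)
    print(f"I_mix = {fisher(pm, gm):.4f}  <=  mixture of I = {avg:.4f}")
```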
Average expected information: Eq. (3) just provides the information around a
given value of $\theta$. The expected information, averaged over the prior
probability distribution of $\theta$ is:
$\langle\mathrm{I}\left[\theta,t\right]\rangle=\int\mathrm{d}\Theta\,p(\theta=\Theta)\mathrm{I}\left[\Theta,t\right],\quad\langle\mathrm{I}\left[\theta,t\right]\rangle=\sum_{j=1}^{J}p(\theta=\Theta_{j})\mathrm{I}\left[\Theta_{j},t\right]$
(6)
where $p(\theta=\Theta)$ is the prior probability that $\theta$ takes on some
value $\Theta$. The first expression is the expected information when the prior probability is continuous, and the second when $\theta$ can take $J$ discrete values.
Given that the information on any value of $\theta$ is not greater than that of a single spin, we can show that this holds for the expected information averaged
over the probability distribution of $\theta$. First we note that the optimal
state is independent of the value of $\theta$ and therefore also independent
of the prior probability of $\theta$. It is not however the case that the
optimal measurement is independent of prior probability distribution of
$\theta$. To additionally prove that no measurement can produce more
information, we use the result of Giovannetti, Lloyd and Maccone who showed
that entanglement does not provide any advantage at the measurement stage
[Giovannetti2006]. Thus we only need to consider the amount of information
encoded in the state (before measurement).
### 6. Proof - Arbitrary time-dependent Hamiltonian: $H(\theta,t)$
We motivate the proof by noting that if we could solve the unitary evolution
of a single spin and show that it remains in the optimal state for all
evolution times, then this would imply
$\left|\left<\psi_{1}(\theta,t)\right|\left(\frac{\mathrm{d}\left|\psi_{1}(\theta,t)\right>}{\mathrm{d}\theta}\right)\right|^{2}=0$.
However, without a general method of solving time-evolution, even when
restricting ourselves to a single spin, we cannot make that statement. An
equivalent approach is to use the information bound proved by Pang and Jordan
for general time-dependent Hamiltonians [Pang2017]:
$\mathrm{I}\left[\theta,t\right]\leq\left|\int^{t}_{0}\mathrm{d}t^{\prime}\left(\mu_{\mathrm{max}}(t^{\prime})-\mu_{\mathrm{min}}(t^{\prime})\right)/\hbar\,\right|^{2},$
(7)
where $\mu_{\mathrm{max}}(t),\;\mu_{\mathrm{min}}(t)$ are the maximum, minimum
eigenvalues of $\frac{\partial\hat{H}(\theta,t)}{\partial\theta}$. Eq. (7)
expresses a simple concept, the information per unit time is limited by the
maximum response speed of the meter to changes in $\theta$. This quantity is
characterised by the eigenvalues of
$\frac{\partial\hat{H}(\theta,t)}{\partial\theta}$ (c.f. how the eigenvalues
of $\hat{H}$ determine the evolution speed of any quantum state). Minimising
the uncertainty requires placing the meter in an equal superposition of states
with extremal eigenvalues, $\mu_{\mathrm{max}}(t),\;\mu_{\mathrm{min}}(t)$:
$\left|\psi^{\mathrm{Opt}}(t)\right>=\frac{1}{\sqrt{2}}\left(\left|\mu_{\mathrm{max}}(t)\right>+e^{{i\mkern
1.0mu}\Phi}\left|\mu_{\mathrm{min}}(t)\right>\right),$
where $\Phi$ is an arbitrary phase [Braunstein1996, Giovannetti2006,
Childs2000].
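As a concrete check (editorial sketch, assuming $\hat{H}(\theta,t)=\theta\hat{S}_{z}$ and $\hbar=1$, so that $\partial\hat{H}/\partial\theta=\hat{S}_{z}$ has eigenvalues $\pm 1/2$ and Eq. (7) reads $\mathrm{I}\leq t^{2}$): the equal superposition of the extremal eigenstates saturates the bound.

```python
import numpy as np

# Editorial check: I = 4(<dpsi|dpsi> - |<psi|dpsi>|^2), evaluated by
# finite differences, equals the Eq. (7) bound t^2 for the optimal state.
t, theta, d = 2.0, 0.7, 1e-6

def psi(th):
    # |psi(theta, t)> = (e^{-i th t/2}|0> + e^{+i th t/2}|1>) / sqrt(2)
    return np.array([np.exp(-1j * th * t / 2),
                     np.exp(+1j * th * t / 2)]) / np.sqrt(2)

dpsi = (psi(theta + d) - psi(theta - d)) / (2 * d)
info = 4 * (np.vdot(dpsi, dpsi) - abs(np.vdot(psi(theta), dpsi)) ** 2).real
print(f"I = {info:.6f}, bound = {t ** 2:.6f}")   # both equal t^2 = 4
```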
If we could solve the unitary evolution under $\hat{H}(\theta,t)$, show that
$\left|\psi_{1}(\theta,t)\right>=\left|\psi^{\mathrm{Opt}}(t)\right>$, and
find a projective measurement that saturates the bound, then we can saturate
Eq. (7) and show that
$\mathrm{I}_{\mathrm{1}}\left[\theta,t\right]=4\left(\frac{\mathrm{d}\left<\psi_{1}(\theta,t)\right|}{\mathrm{d}\theta}\right)\left(\frac{\mathrm{d}\left|\psi_{1}(\theta,t)\right>}{\mathrm{d}\theta}\right)$.
Unfortunately in general this is not possible: for example when both the
eigenvalues and eigenstates of $\hat{H}$ depend on $\theta$. Therefore we need
to modify our claim to address not the gradient of $\left|\psi\right>$, but
rather the projection of
$\frac{\mathrm{d}\left|\psi\right>}{\mathrm{d}\theta}$ orthogonal to
$\left|\psi\right>$:
$\left(\frac{\mathrm{d}\left|\psi\right>}{\mathrm{d}\theta}\right)_{\perp}\equiv\frac{\mathrm{d}\left|\psi\right>}{\mathrm{d}\theta}-\left|\psi\right>\left<\psi\right|\left(\frac{\mathrm{d}\left|\psi\right>}{\mathrm{d}\theta}\right)$.
Claim: If
$\left(\frac{\mathrm{d}\left<\psi_{SS}(\theta,t)\right|}{\mathrm{d}\theta}\right)_{\perp}\left(\frac{\mathrm{d}\left|\psi_{SS}(\theta,t)\right>}{\mathrm{d}\theta}\right)_{\perp}=\left(\frac{\mathrm{d}\left<\psi_{1}(\theta,t)\right|}{\mathrm{d}\theta}\right)_{\perp}\left(\frac{\mathrm{d}\left|\psi_{1}(\theta,t)\right>}{\mathrm{d}\theta}\right)_{\perp}$
then
$\mathrm{I}_{\mathrm{SS}}\left[\theta,t\right]=\mathrm{I}_{\mathrm{1}}\left[\theta,t\right]$.
Proof: As noted by Braunstein, Caves and Milburn [Braunstein1996]
$\left(\frac{\mathrm{d}\left<\psi(\theta,t)\right|}{\mathrm{d}\theta}\right)_{\perp}\left(\frac{\mathrm{d}\left|\psi(\theta,t)\right>}{\mathrm{d}\theta}\right)_{\perp}=\left(\frac{\mathrm{d}\left<\psi(\theta,t)\right|}{\mathrm{d}\theta}\right)\left(\frac{\mathrm{d}\left|\psi(\theta,t)\right>}{\mathrm{d}\theta}\right)-\left|\left<\psi(\theta,t)\right|\left(\frac{\mathrm{d}\left|\psi(\theta,t)\right>}{\mathrm{d}\theta}\right)\right|^{2}$
and Eq. (4) proves the claim. This statement is equivalent to looking at the
data-set $\\{X_{i}|\theta,t\\}$ resulting from projective measurements of
$\left|\psi_{SS}(\theta,t)\right>$. If this data-set shows no greater
dependence on $\theta$ than a single spin, then it cannot contain
fundamentally more information on $\theta$ than projective measurements of a
single spin.
### 7. FAQs
I find it difficult to accept your statement that the whole field is wrong
without more evidence. Could you help point out some specific errors in papers?
I have started doing that for many experiments, see here [McGuinness2021,
McGuinness2022, McGuinness2023, McGuinness2023a]. Let’s go through an explicit
example for spin-squeezing. All you need to do is ask some simple questions
about what is going on and see if the answers make sense. Some questions that
might help:
> In spin squeezing, what entangled state is being created?
It is surprisingly difficult to find a direct answer to this question. In
place of a mathematical definition of the squeezed state, a pictorial
representation on the Bloch sphere is often used. For a system consisting of
two particles, Mark Kasevich and Vladan Vuletić inform me that the state that
everyone is preparing is [Vuletic2023]:
$\frac{1}{\sqrt{2(1+\epsilon^{2})}}\left(\epsilon{\color{green}\left|00\right>}+\left({\color{red}\left|10\right>+\left|01\right>}\right)+\epsilon{\color{green}\left|11\right>}\right)$
(8)
where $\epsilon$ is real, $\epsilon<1$ for a squeezed state, $\epsilon\ll 1$
in the limit of strong squeezing, and $\left|10\right>$, $\left|01\right>$
cannot be distinguished by a collective spin measurement along $z$ (I have
added colours for emphasis). Equivalent definitions and the extension to $N$
particles are given in [14, 43, 97].
Let’s assume this is the input state to a Ramsey interferometer, where a
relative phase between basis states accumulates and is compared to the phase
of the final readout pulse in the interferometer. A few characteristics of
this state to note.
* The green basis states accumulate a relative phase twice that of a single
particle. If the $N$ particle squeezed state has components:
${\color{green}\left|0\right>^{\otimes N}}$ and ${\color{green}\left|1\right>^{\otimes N}}$, they accumulate a relative phase $N$ times that of a single particle.
* The relative phase accumulated by the red basis states is not observable by a
collective spin measurement along $z$. In fact, a perfect measurement of these
basis states always gives the same observable $S_{z}=0$, independent of the
phase of the final readout pulse.
* For $\epsilon=1$, although one might say the state in Eq. (8) is not squeezed,
it is still entangled. On average, we would expect to observe Ramsey fringes
with a different contrast and period compared to the same number of
independent spins. For a two-spin ensemble, if the unentangled ensemble has a
period of $2\pi$ and the fringe contrast goes from $1$ to $-1$, then the
entangled ensemble should have a period of $\pi$ and fringes going from $1/2$
to $-1/2$.
* When squeezing is increased, i.e. $\epsilon\rightarrow 0$, we would expect to
see contrast of the Ramsey fringes disappear when performing a collective spin
measurement. It is true that the measurement variance is reduced, since we
always obtain the same result $S_{z}=0$, but this gives no information on the
phase of the final readout pulse. Think about what is happening here,
squeezing just reduces the amplitude of the Ramsey fringes, without affecting
the period. How can this improve the measurement precision?
Again, points to emphasise: measurements of this state give completely different data compared to measurements of the same number of unentangled spins. In particular, the period is reduced and the contrast is reduced by the
same amount. Also, when squeezing is increased, although we observe that the
measurement variance reduces it is clear that this leads to worse phase
estimation.
If we accept that the squeezed state defined above is actually being created,
the next question one could ask is:
> What experimental evidence is presented, to verify the creation of this
> state?
Note, in the following papers, a collective spin measurement along $z$ is
performed on the spin squeezed state [Appel2009, Gross2010, Leroux2010,
Leroux2010a, LouchetChauvet2010, SchleierSmith2010, Bohnet2014, Muessel2014,
Strobel2014, Cox2016, Hosten2016, Linnemann2016, Braverman2019, Malia2020,
Bao2020, PedrozoPenafiel2020, Greve2022, Malia2022, Riedel2010, Hosten2016a,
Colombo2022]. No evidence is presented of a state with the above
characteristics (except that the Ramsey fringe contrast reduces and the
measurement variance reduces). Most strikingly, when a plot of the measurement
observable vs. readout phase is presented, the period is the same as for the
unentangled ensemble, see:
1. Figure 2 of [Gross2010]
2. Figure 2 of [Leroux2010]. Data is only presented for the variance as a function of rotation angle.
3. Figure 5 of [LouchetChauvet2010]
4. Supplementary Figure A6 of [SchleierSmith2010]
5. Figure 2 and Figure 3c of [Bohnet2014]
6. Figure 1c and Figure 2 of [Muessel2014]. No data is presented for the unsqueezed ensemble, but the expected dependence on the readout phase is the same as presented for the squeezed ensemble (a period of $2\pi$).
7. Figure 4A of [Strobel2014]. This data corresponds to measurements on a state, which is prepared as a spin squeezed state and evolved for 25 ms. After 25 ms squeezing is lost, although entanglement remains?! In Figure 2, data is presented for squeezing as a function of tomography angle.
8. Figure 3(d) of [Cox2016]. A plot of spin noise as a function of the phase $\psi$ of the final readout pulse.
9. Figure 2 of [Linnemann2016] where a $2\pi$ phase dependence is observed. See also Figure 3(a) inset.
10. Figure 2 of [Braverman2019], where data is presented for the variance as a function of rotation angle.
11. Figure 2c of [PedrozoPenafiel2020], a plot of variance as a function of tomography angle is presented for the spin squeezed state.
12. Figure 2a of [Riedel2010], the same angular dependence is observed for squeezed and unsqueezed ensembles.
13. Figure 4c of [Colombo2022] where the phase response for a squeezed and unsqueezed ensemble are directly compared. See also Figure 4b for a plot of spectroscopic enhancement as a function of rotation angle for the spin squeezed state.
Note that for measurements of noise or variance, twice the angular or phase
dependence is observed compared to population measurements (i.e. half the
period). The same is also observed for measurements of unentangled ensembles
when the correct rotation direction is chosen. Sometimes the sensor response
is pictorially represented, and no experimental data is presented, see:
1. Figure 1 of [Appel2009]
2. Figure 2 of [Leroux2010a]
In fact, these two points – no mathematical definition of the squeezed state, and a spin response the same as that of an unentangled state – are what forced me to take the approach I did in the main text.
What should be abundantly clear, is that the squeezed state defined in Eq. (8)
(or the $N$ particle equivalent) is not created. So, the natural question to
ask is
> What state is being created?
Summarizing the experimental evidence that has been presented: collective spin measurements of “squeezed” ensembles have the same phase dependence as unentangled ensembles, with lower contrast. In addition, the measurement
noise (spin variance) is observed to reduce for particular readout angles. An
important piece of information that the above papers do not mention, is that
the measurement noise (spin variance) also reduces for particular readout
angles of unentangled ensembles (see for example Figure 11 of [Itano1993]).
Despite many papers claiming the contrary, it is not correct that reduced
noise for particular projective measurements is evidence of squeezing or
entanglement. Exactly the same dependence is seen for measurements on
unentangled systems.
Finally, we can read the description of how the “squeezed” state is created.
In the following papers, a “squeezed” state is prepared by performing a QND
measurement on the ensemble, i.e. a projective measurement of the spin
population along $z$ [Appel2009, Leroux2010, Leroux2010a, LouchetChauvet2010,
SchleierSmith2010, Bohnet2014, Cox2016, Hosten2016, Braverman2019, Malia2020,
Bao2020, PedrozoPenafiel2020, Greve2022, Malia2022, Hosten2016a, Colombo2022].
In general, this measurement is weak, in that it only projects a small
proportion of spins to the $\left|0\right>,\left|1\right>$ eigenstates of $z$.
After performing the QND, the authors state that the ensemble is now in an
entangled “squeezed” state.
Based on the evidence presented in these papers, this is definitely not the
description I would use. I would describe some of the spins as being in
$\left|0\right>$ or $\left|1\right>$ conditional on the QND measurement
result. That is basic quantum mechanics, after a projective measurement, the
system is in an eigenstate of the measurement observable. The spins that are not measured are unaffected, except to experience an a.c. Stark shift from the QND measurement. So, this is my explanation of what is happening in the above papers. The spins are prepared in an unentangled state:
$\left|x\right>^{\otimes N}$. Some of these spins are projected to
$\left|0\right>$, $\left|1\right>$. The resulting ensemble can be described as
one where most of the spins are initialised to $\left|x\right>$ and some are
in $\left|0\right>$, $\left|1\right>$. This ensemble is then used to perform
Ramsey interferometry. The resulting contrast is worse than an ensemble
initialised to $\left|x\right>^{\otimes N}$ and the phase response is the
same. As far as I am aware, this is the only description that fits to all of
the evidence presented and is theoretically consistent.
Why do you restrict your analysis to (Fisher) information, why not analyse the
actual measurement uncertainty? Are you hiding something behind this
definition?
I analyse the information and not the measurement uncertainty, for a reason
that is both somewhat uninteresting and at the same time points to a bigger
issue in quantum metrology. The reason is that it is easy to violate bounds on
the measurement uncertainty, whereas the same is not true for the information.
This makes the uncertainty bound non-rigorous.
In general, the following inequality is used to relate information (from a
single measurement) to the estimation uncertainty [Braunstein1996,
Giovannetti2004, Giovannetti2006, DemkowiczDobrzanski2012, Pang2014, Yuan2015,
Pang2017]:
$\langle(\Delta\tilde{\theta})^{2}\rangle\geq\frac{1}{\mathrm{I}\left[\theta,t\right]},$
(9)
where $\langle(\Delta\tilde{\theta})^{2}\rangle$ is the expected mean squared
error of the estimator. Thus the uncertainty $\Delta\tilde{\theta}$ is the
positive square-root of the expected mean squared error of the estimator.
For $t=0$, no information is obtained on the signal $\theta$, thus one would
expect the uncertainty to be infinite. However if we have any amount of prior
information on $\theta$, then the uncertainty bound is violated, because at
$t=0$ the uncertainty is not infinite, instead it is given by our prior
uncertainty on $\theta$. Even if we take into account the prior information,
and modify Eq. (9) to express a relative uncertainty reduction (in comparison
to the prior uncertainty), it is possible to violate the bound. Take the
following discrete prior probability distribution of $\theta$:
$p(\theta=\pi/2)=1/2;\quad\quad p(\theta=\pi)=1/2,$
i.e. $\theta$ can take one of two values with equal probability. For this
prior, our initial uncertainty in $\theta$ is $\pi/4$. Even taking into
account this information, the uncertainty bound can be violated by an
arbitrary amount. To do so, we need to drive a single spin sensor to one of two
orthogonal states (e.g. $\left|0\right>$, $\left|1\right>$) depending on the
value of $\theta$, so that if $\theta=\pi/2$, the spin ends up in
$\left|0\right>$ and if $\theta=\pi$, the spin ends up in $\left|1\right>$. If
the readout of the spin is perfect, then regardless of how long it takes to
perform unitary evolution, the final uncertainty on $\theta$ is zero. Similar arguments have been used to obtain violations of the Heisenberg time-energy uncertainty relation [Aharonov2002] and as evidence that quantum
computers can exponentially outperform classical computers [Atia2017].
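The two-value prior example can be written out as a one-step Bayesian update (editorial sketch): a single perfect measurement collapses the posterior, so the estimation error is exactly zero regardless of the evolution time, in conflict with Eq. (9).

```python
import numpy as np

# Editorial sketch: theta is pi/2 or pi with equal prior probability, and
# the sensor is driven to orthogonal states, so perfect readout gives
# outcome 0 iff theta = pi/2. One measurement collapses the posterior.
prior = {np.pi / 2: 0.5, np.pi: 0.5}
likelihood = {np.pi / 2: (1.0, 0.0), np.pi: (0.0, 1.0)}  # Pr[outcome|theta]
outcome = 1                                              # suppose we read '1'
post = {th: prior[th] * likelihood[th][outcome] for th in prior}
norm = sum(post.values())
post = {th: pr / norm for th, pr in post.items()}
print(post)   # {pi/2: 0.0, pi: 1.0} -> the uncertainty on theta is zero
```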
These examples show that if we really want to obtain a rigorous bound on the
measurement uncertainty, we need to be careful in considering the prior
information, because the final uncertainty depends on both the information
provided from the measurement and our prior uncertainty. Mathematically, the
reason why we can violate the Cramer-Rao uncertainty bound in the above
example is because the probability distribution $\mathrm{Pr}\left[X|\theta,t\right]$ does not satisfy the
regularity condition
$\mathbb{E}\left[\frac{\partial\mathrm{ln}\left(\mathrm{Pr}\left[X|\theta,t\right]\right)}{\partial\theta}\right]=0\quad\mathrm{for\;all}\;\theta$
where the expectation is taken with respect to
$\mathrm{Pr}\left[X|\theta,t\right]$ (see section on Cramer-Rao lower bound in
[Kay1993]). This condition is assumed in derivation of the uncertainty bound
on the mean squared error of the estimator.
Performing the above analysis adds complexity and forces us to put
restrictions on the prior probability distribution to avoid pathological
priors. This also makes the analysis less general. Most importantly it is
distracting and does not address the issue at stake. We don’t care about the
prior probability distribution and how that impacts the final measurement
uncertainty; what we want to know is the following:
> Do measurements of entangled squeezed states give more information on a
> signal than measurements of unentangled states?
The resulting uncertainty may violate the bound of Eq. (9), but unless
something strange occurs to violate our understanding of information laws, we
can say that if the answer is no, then measurements of entangled squeezed states will not provide a lower uncertainty than measurements of unentangled states (see also the discussion in App. 3).
What about the Heisenberg uncertainty relations for conjugate observables? Don’t they disprove this work? Are you saying they are wrong?
Instead of operating in the Schrödinger picture we could analyse how unitary
evolution acts as a mapping of quantum states in the Heisenberg picture. Then,
for $\hat{H}(\theta,t)=\theta\hat{H}$, we have:
$\frac{\mathrm{d}\hat{U}(\theta,t)\left|\psi_{0}\right>}{\mathrm{d}\theta}=-{i\mkern
1.0mu}t\hat{H}\,\hat{U}(\theta,t)/\hbar\left|\psi_{0}\right>$. As $\hat{U}$
and $\hat{H}$ commute, expressing $\left|\psi_{0}\right>$ in terms of the
eigenstates of $\hat{H}$, we have:
$\begin{split}&\left(\frac{\mathrm{d}\left<\psi(\theta,t)\right|}{\mathrm{d}\theta}\right)\left(\frac{\mathrm{d}\left|\psi(\theta,t)\right>}{\mathrm{d}\theta}\right)=\left<\psi_{0}\right|\left(-{i\mkern 1.0mu}t\hat{H}\,\hat{U}(\theta,t)/\hbar\right)^{\dagger}\left(-{i\mkern 1.0mu}t\hat{H}\,\hat{U}(\theta,t)/\hbar\right)\left|\psi_{0}\right>\\
&=(t/\hbar)^{2}\sum_{k=1}^{K}\left<\psi_{E_{k}}\right|\left(\mathrm{Exp}\left[{i\mkern 1.0mu}\theta tE_{k}/\hbar\right]E_{k}\alpha_{k}^{\ast}\right)\left(\alpha_{k}E_{k}\mathrm{Exp}\left[-{i\mkern 1.0mu}\theta tE_{k}/\hbar\right]\right)\left|\psi_{E_{k}}\right>\\
&=(t/\hbar)^{2}\sum_{k=1}^{K}E_{k}^{2}|\alpha_{k}|^{2},\end{split}$
and
$\left|\left<\psi(\theta,t)\right|\left(\frac{\mathrm{d}\left|\psi(\theta,t)\right>}{\mathrm{d}\theta}\right)\right|^{2}=\left|-{i\mkern 1.0mu}t/\hbar\left<\psi_{0}\right|\hat{U}^{\dagger}\hat{H}\,\hat{U}\left|\psi_{0}\right>\right|^{2}=(t/\hbar)^{2}\left(\sum_{k=1}^{K}E_{k}|\alpha_{k}|^{2}\right)^{2}.$
Thus, for this Hamiltonian, maximising information on $\theta$ means maximising the variance of $\hat{H}$ (in general, we want to maximise the variance of the generator of translations with respect to $\theta$):
$(\Delta\hat{H})^{2}\equiv\langle\hat{H}^{2}\rangle-\langle\hat{H}\rangle^{2}$
with respect to $\left|\psi_{0}\right>$. For unitary evolution this also means
maximising the variance of $\hat{H}$ on the output state. We have:
$\mathrm{I}\left[\theta\right]=4\left(\Delta\hat{H}\right)^{2}.$
Due to the correspondence between different pictures, increasing
$\left(\Delta\hat{H}\right)^{2}$ in this context only comes about through
increasing the state response to $\theta$. In this analysis, it is incorrect to interpret $\left(\Delta\hat{H}\right)^{2}$ as the uncertainty in estimating the energy eigenvalues or the Hamiltonian $\hat{H}$. It is the variance of the
measurement results, assuming perfect measurements. Importantly, we see that
states with large variance in measurement outcomes (i.e. noise) give the most
information on $\theta$. This is in direct contradiction with the analysis
provided by the squeezing community.
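As a sanity check on $\mathrm{I}[\theta]=4(\Delta\hat{H})^{2}$ (restoring the factors of $t/\hbar$ worked out above), here is a minimal sketch of our own, with randomly chosen eigenvalues $E_{k}$ and amplitudes $\alpha_{k}$: it compares the closed form $4(t/\hbar)^{2}(\langle\hat{H}^{2}\rangle-\langle\hat{H}\rangle^{2})$ against a finite-difference evaluation of $4[\langle\partial_{\theta}\psi|\partial_{\theta}\psi\rangle-|\langle\psi|\partial_{\theta}\psi\rangle|^{2}]$.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 5
E = rng.normal(size=K)                      # eigenvalues E_k of H
a = rng.normal(size=K) + 1j * rng.normal(size=K)
a /= np.linalg.norm(a)                      # amplitudes alpha_k of |psi_0>, normalised

def psi(theta, t=1.0):
    """|psi(theta,t)> in the eigenbasis of H: e^{-i theta t E_k} alpha_k (hbar = 1)."""
    return np.exp(-1j * theta * t * E) * a

# Closed form from the text: I = 4 (t/hbar)^2 (<H^2> - <H>^2), here with t = hbar = 1
p = np.abs(a) ** 2
I_exact = 4 * (np.sum(p * E**2) - np.sum(p * E) ** 2)

# Finite-difference check of 4[<dpsi|dpsi> - |<psi|dpsi>|^2]
eps, th = 1e-5, 0.7
dpsi = (psi(th + eps) - psi(th - eps)) / (2 * eps)
I_fd = 4 * (np.vdot(dpsi, dpsi).real - np.abs(np.vdot(psi(th), dpsi)) ** 2)
print(I_exact, I_fd)                        # should agree to high accuracy
```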
There are some technical aspects of this analysis which mean that working in the Schrödinger picture is preferable. In the main text, we assume that two state vectors have the same dependence on $\theta$. However, in the Heisenberg picture, the dependence is included in the operator, not the state vector, so we cannot make that assumption. Also, when comparing a single spin with a squeezed state, we are comparing the variance of two different operators with different dimension. We don’t have that issue in the Schrödinger picture because we are comparing inner products on normalised vectors; even when the vector spaces have different dimensions, this is still fine. On a related point, defining the measurement variance is a tricky endeavour. Processing of raw experimental data is always required to obtain meaningful estimates; we don’t just apply an $\hat{H}$ operator to our system and receive the outcome $E_{k}$. There is nothing to stop us from multiplying the data by a large number and artificially increasing the variance. This operation does not improve our measurement precision, and it can be hard to identify this artificial enhancement and separate it from actual improvements.
Take a phase estimation experiment. In one experiment, the state to be
measured is $\left(\left|1\right>+e^{{i\mkern
1.0mu}\varphi}\left|0\right>\right)/\sqrt{2}$. Compare to another experiment,
with the state $\left(\left|11\right>+e^{{i\mkern
1.0mu}\varphi}\left|00\right>\right)/\sqrt{2}$. The measurement outcomes of
the second experiment produce twice as many photons as the first. But in the
limit of perfect fidelity readout, these extra photons do not provide more
information on $\varphi$ – they are correlated! The squeezing community might claim that, because this measurement observable has twice the variance of the single-spin state, measuring a conjugate observable would yield a better uncertainty. But this argument is incorrect in many respects. For one, it
is a simple rescaling error. The error becomes even harder to identify when we
start talking about measurements on ensembles where the contribution of each
state gets blurred out. It is much easier to pick up on this error when
comparing individual normalised states in the Schrödinger picture (see the
comment on Assumption 1 in App. 3).
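A quick numerical illustration of this point (our own sketch; the states are exactly the two above, and the figure of merit is the standard pure-state quantum Fisher information): both states carry the same information on $\varphi$, despite the second producing twice the photons.

```python
import numpy as np

def qfi(psi_of_phi, phi=0.3, eps=1e-6):
    """Pure-state quantum Fisher information 4[<dpsi|dpsi> - |<psi|dpsi>|^2]."""
    d = (psi_of_phi(phi + eps) - psi_of_phi(phi - eps)) / (2 * eps)
    psi = psi_of_phi(phi)
    return 4 * (np.vdot(d, d).real - np.abs(np.vdot(psi, d)) ** 2)

# (|1> + e^{i phi}|0>)/sqrt(2), in the basis {|0>, |1>}
one_qubit = lambda phi: np.array([np.exp(1j * phi), 1.0]) / np.sqrt(2)
# (|11> + e^{i phi}|00>)/sqrt(2), in the basis {|00>, |01>, |10>, |11>}
two_qubit = lambda phi: np.array([np.exp(1j * phi), 0, 0, 1.0]) / np.sqrt(2)

print(qfi(one_qubit), qfi(two_qubit))  # both ~1: the correlated photon adds nothing
```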
|
# Pure anti-de Sitter supergravity and the conformal bootstrap
Luis F. Alday Mathematical Institute, University of Oxford, Woodstock Road,
Oxford, OX2 6GG, UK Shai M. Chester Department of Particle Physics and
Astrophysics, Weizmann Institute of Science, Rehovot, Israel
###### Abstract
We consider graviton scattering in maximal supergravity on Anti-de Sitter
space (AdS) in $d+1$ dimensions for $d=3,4,\text{and $6$}$ with no extra
compact spacetime factor. Holography suggests that this theory is dual to an
exotic maximally supersymmetric conformal field theory (CFT) in $d$ dimensions
whose only light single trace operator is the stress tensor. This contrasts
with more standard cases like Type IIB string theory on $AdS_{5}\times S^{5}$
dual to $\mathcal{N}=4$ Super-Yang-Mills, where the CFT has light single trace
operators for each Kaluza-Klein mode on $S^{5}$. We compute the 1-loop
correction to the pure AdSd+1 theory in a small Planck length expansion, which
is dual to the large central charge expansion in the CFT. We find that this
correction saturates the most general non-perturbative conformal bootstrap
bounds on this correlator in the large central charge regime for $d=3,4,6$,
while the 1-loop correction to CFTs with string/M-theory duals all lie inside
the allowed region.
## I Introduction
The AdS/CFT duality relates quantum gravity on Anti-de Sitter (AdS) space in
$d+1$ dimensions times a compact spacetime factor, to certain supersymmetric
CFTs in $d$ dimensions Maldacena:1997re . In the simplest examples, the
compact space is simply a sphere whose radius is comparable to that of AdS, and the CFT is
maximally supersymmetric. Compactifying the graviton on the sphere generates
an infinite tower of Kaluza-Klein (KK) modes in AdS, which are dual to light
single trace operators in the CFT. It is an open question if holographic duals
exist where the radius of the sphere is parametrically smaller than that of
AdS, so that these extra dimensions would be small (See Alday:2019qrf ;
Gopakumar:2022kof for a recent discussion). In the most extreme case, there
would simply be no compact factor at all, and the only single trace operators
in the dual CFT would be the stress tensor multiplet. No such pure AdS theory
has been constructed, despite much effort Witten:2007kt ; Maloney:2007ud ;
Hellerman:2009bu ; Keller:2014xba ; Collier:2016cls ; Afkhami-Jeddi:2019zci ;
Hartman:2019pcd ; Maxfield:2020ale ; Afkhami-Jeddi:2020ezh ; Maloney:2020nni .
We will address this question by studying the stress tensor four-point
function, which is dual to scattering of gravitons in the bulk, in maximally
supersymmetric CFTs in $d=3,4,6$ dimensions. Consider the large central charge
$c$ expansion of this correlator, where $c$ is defined as the coefficient of
the stress-tensor two-point function, and is related to the bulk as
$\begin{split}c\sim(L_{\text{AdS}}/\ell_{\text{Planck}})^{D-2}\,,\end{split}$
(1)
where $L_{\text{AdS}}$ is the radius of the AdSd+1 factor, and
$\ell_{\text{Planck}}$ is the Planck length of the full $D$-dimensional bulk
spacetime, including a possible compact factor. We can define the correlator
$\mathcal{G}$ in any such theory to any order in $1/c$ as
$\begin{split}{\mathcal{G}}&={\mathcal{G}}^{(0)}+c^{-1}{\mathcal{G}}^{R}+c^{-2}({\mathcal{G}}^{R|R}+\kappa{\mathcal{G}}^{R^{4}})+\dots\\\
&\dots+c^{-\frac{D+4}{D-2}}{\mathcal{G}}^{R^{4}}+c^{-\frac{D+8}{D-2}}{\mathcal{G}}^{D^{4}R^{4}}+\dots\,,\end{split}$
(2)
where in the first line we wrote the tree level supergravity term $\mathcal{G}^{R}$ and the 1-loop term ${\mathcal{G}}^{R|R}$ with supergravity vertices $R$, while in the second line we wrote tree level higher derivative corrections that are allowed by supersymmetry. (In CFTs dual to M-theory the lowest correction $R^{4}$ scales as $c^{-5/3}$, and was computed in Chester:2018aca ; Chester:2018dga . In CFTs dual to string theory, this coefficient scales like $c^{-7/4}$ at finite string coupling, and was computed for Type IIA in Binder:2019mpb , and Type IIB in Chester:2019jas . The $D^{4}R^{4}$ term has also been computed for M-theory in Binder:2018yvd , and for Type IIB in Chester:2020vyz .) The expansion also includes 1-loop terms with such higher derivative vertices, as well as higher loop terms. (The distinction between tree and loop is ambiguous, since $c\sim(L_{\text{AdS}}/\ell_{\text{Planck}})^{D-2}$ is the only expansion parameter, but at low orders for some $D$ they can be distinguished by the powers of $1/c$.) The $\mathcal{G}^{R|R}$ term has a $\mathcal{G}^{R^{4}}$ type contact term with coefficient $\kappa$ as long as the scaling of the $R^{4}$ tree level term is smaller than $R|R$, which is the case for string and M-theory with $D=10,11$, respectively, but is not for the pure AdSd+1 theory where $D=d+1$ and $d=3,4,6$. (This contact term has been fixed for M-theory on $AdS_{4}\times S^{7}/\mathbb{Z}_{k}$ Alday:2021ymb ; Alday:2022rly and Type IIB on $AdS_{5}\times S^{5}/\mathbb{Z}_{k}$ Chester:2019pvm ; Alday:2021vfb for $k=1,2$.) All tree and loop supergravity terms $\mathcal{G}^{R|R|\dots}$ can be computed iteratively using the analytic bootstrap Rastelli:2017udc ; Aharony:2016dwx , but to fix the higher derivative corrections as well as loop contact terms such as $\kappa{\mathcal{G}}^{R^{4}}$, we need a UV completion like string/M-theory. These terms only affect CFT data with finite spin Heemskerk:2009pn , so at any given order in $1/c$ we can unambiguously determine an infinite set of CFT data for AdSd+1 duals with any (or no) compact factor. Whether or not a pure AdSd+1 theory is also defined non-perturbatively in $c$ is a separate question that we will address in the conclusion.
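As a quick arithmetic check on the exponents in (2) (a few lines of our own, not from the analysis): the $R^{4}$ term enters at $c^{-(D+4)/(D-2)}$, which reproduces the $c^{-5/3}$ and $c^{-7/4}$ scalings quoted above for M-theory ($D=11$) and string theory ($D=10$), and gives $c^{-3}$ for pure AdS5 ($D=5$), the order quoted in the Discussion.

```python
from fractions import Fraction

# Power of 1/c for the R^4 tree term in (2): (D+4)/(D-2)
for D in (11, 10, 4, 5, 7):   # M-theory, string theory, pure AdS_{d+1} for d=3,4,6
    print(D, Fraction(D + 4, D - 2))
# D=11 -> 5/3, D=10 -> 7/4 (matching the scalings quoted above); D=5 -> 3,
# so the pure AdS5 R^4 term competes with higher loops at order c^{-3}.
```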
The tree level supergravity correction $\mathcal{G}^{R}$ at order $1/c$ is
unaffected by a compact spacetime factor Rastelli:2017udc ; Rastelli:2017ymc ;
Zhou:2017zaw ; Alday:2020dtb , but higher loop terms starting with
$\mathcal{G}^{R|R}$ at order $1/c^{2}$ are sensitive to the number of KK modes
Aharony:2016dwx . We will compute this 1-loop term for pure AdSd+1 theories in
$d=3,4,6$ using the analytic bootstrap, which allows us to extract all CFT
data to $O(c^{-2})$. We can then compare this $O(c^{-2})$ data to non-
perturbative numerical bootstrap bounds Beem:2013qxa ; Beem:2015aoa ;
Chester:2014fya ; Chester:2014mea , which apply to any maximally
supersymmetric CFT, and can be computed for any $c$. We find that for all
$d=3,4,6$, the pure AdSd+1 1-loop correction precisely saturates the bootstrap
bounds in the large $c$ regime.
The 1-loop correction has also been computed for maximally supersymmetric CFTs with string/M-theory duals. In 3d, these CFTs are $U(N)_{k}\times U(N)_{-k}$ ABJM theory with $k=1,2$ Aharony:2008ug , which is dual to M-theory on $AdS_{4}\times S^{7}/\mathbb{Z}_{k}$ with $c\sim N^{3/2}$. (The $U(N)_{2}\times U(N+1)_{-2}$ theory also has maximal supersymmetry, but this shift of the gauge factor does not matter in the large $N$ limit. When $k>2$, the theory has $\mathcal{N}=6$ supersymmetry.) In 4d, they are $\mathcal{N}=4$ super-Yang-Mills (SYM) with gauge group $SU(N)$ or $SO(N)$, which is dual to Type IIB string theory on $AdS_{5}\times S^{5}$ or $AdS_{5}\times S^{5}/\mathbb{Z}_{2}$ with $c\sim N^{2}$ Maldacena:1997re , respectively. (The $USp(2N)$ gauge group is also allowed, but is similar to $SO(N)$ in the large $N$ limit.) In 6d, they are $A_{N-1}$ or $D_{N}$ $(2,0)$ theories Witten:1995zh , which are dual to $AdS_{7}\times S^{4}$ or $AdS_{7}\times S^{4}/\mathbb{Z}_{2}$ with $c\sim N^{3}$ Witten:1998xy ; Aharony:1998rm , respectively. (There are also $(2,0)$ theories constructed from exceptional groups, but these do not have a large $N$ limit.) The 1-loop corrections were computed in these various cases in Alday:2017xua ; Aprile:2017bgs ; Alday:2020tgi ; Alday:2021ymb ; Alday:2021vfb ; Alday:2022rly . In all cases, we find that these corrections lie inside the allowed region of the bootstrap bounds for the same regime of large $c$ where the pure AdSd+1 theory saturates the bound.
The rest of this paper is organized as follows. In Section II, we review the
constraints of maximal superconformal symmetry on the stress tensor four-point
function for $d=3,4,6$. In Section III we consider the large $c$ expansion of
this correlator and compute the 1-loop correction to pure AdSd+1 supergravity.
In Section IV we compare this correction, and the previously computed 1-loop
corrections for string/M-theory duals, to non-perturbative numerical conformal
bootstrap bounds in the large $c$ regime. We end with a discussion of our
results in Section V.
## II Stress tensor correlator
We begin by reviewing the constraints of maximal supersymmetry in $d=3,4,6$ on
the stress tensor correlator. We consider the superconformal primary $S(x)$,
which is a scalar with $\Delta=d-2$ that transforms in the symmetric traceless
representation of the R-symmetry group $SO(8)_{R}$, $SO(6)_{R}$, and
$SO(5)_{R}$ for 3d, 4d, and 6d, respectively. Conformal symmetry and R-symmetry fix
the four-point function to take the form
$\begin{split}&\langle
S(x_{1},Y_{1})S(x_{2},Y_{2})S(x_{3},Y_{3})S(x_{4},Y_{4})\rangle=\\\
&\qquad\qquad\qquad\frac{(Y_{1}\cdot Y_{2})^{2}(Y_{3}\cdot
Y_{4})^{2}}{|x_{12}|^{2(d-2)}|x_{34}|^{2(d-2)}}\mathcal{G}(U,V;\sigma,\tau)\,,\end{split}$
(3)
where we define the cross ratios
$\begin{split}&U\equiv\frac{{x}_{12}^{2}{x}_{34}^{2}}{{x}_{13}^{2}{x}_{24}^{2}}\,,\qquad
V\equiv\frac{{x}_{14}^{2}{x}_{23}^{2}}{{x}_{13}^{2}{x}_{24}^{2}}\,,\\\
&\sigma\equiv\frac{(Y_{1}\cdot Y_{3})(Y_{2}\cdot Y_{4})}{(Y_{1}\cdot
Y_{2})(Y_{3}\cdot Y_{4})}\,,\qquad\tau\equiv\frac{(Y_{1}\cdot
Y_{4})(Y_{2}\cdot Y_{3})}{(Y_{1}\cdot Y_{2})(Y_{3}\cdot Y_{4})}\,,\end{split}$
(4)
with $x_{ij}\equiv x_{i}-x_{j}$, and $Y_{i}$ are null polarization vectors
that encode the R-symmetry indices. The constraints from supersymmetry are
given by the superconformal Ward identities Dolan:2004mu , which can be
satisfied by expanding $\mathcal{G}$ in superconformal blocks (in 4d and 6d, we can also satisfy these Ward identities by writing $\mathcal{G}(U,V;\sigma,\tau)$ in terms of a differential operator $\Upsilon(U,V,\partial_{U},\partial_{V},\sigma,\tau)$ acting on a reduced correlator $\mathcal{H}(U,V)$, which is then an R-symmetry singlet) as
$\begin{split}\mathcal{G}(U,V;\sigma,\tau)=\sum_{\mathcal{M}}\lambda^{2}_{\mathcal{M}}\mathfrak{G}_{\mathcal{M}}(U,V;\sigma,\tau)\,,\end{split}$
(5)
where ${\cal M}$ runs over all the supermultiplets appearing in the $S\times
S$ OPE, the $\lambda^{2}_{\mathcal{M}}$ are the squared OPE coefficients for
each such supermultiplet $\mathcal{M}$, and the explicit form of the
superblocks can be found for each $d$ in Dolan:2004mu ; Beem:2016wfs ;
Beem:2015aoa ; Chester:2014fya . In Appendix A, for each $d$ we summarize the
multiplets $\mathcal{M}$ that appear, which we label by the scaling dimension
$\Delta$, the spin $\ell$, and the R-symmetry representation of the
superprimary. We exclude free theory multiplets, which for $d=4,6$ restricts
us to interacting theories (in 3d, the free theory multiplet is identical to the unitarity bound of the long multiplet, so it cannot be excluded kinematically). The $S\times S$ OPE includes long multiplets in the singlet of
the R-symmetry group with even spin $\ell$ and scaling dimension
$\Delta>d-2+\ell$, as well as protected multiplets such as the stress tensor
with fixed $\Delta$. The stress tensor $\lambda^{2}$ is fixed by the conformal
Ward identity Osborn:1993cr to be inversely proportional to the central
charge coefficient $c$ of the stress tensor two-point function:
$\begin{split}\lambda^{2}_{\text{stress}}\propto 1/c\,,\end{split}$ (6)
where the proportionality constant is fixed in 4d so that $c$ is the conformal
anomaly Beem:2016wfs , in 6d so that a free tensor multiplet has $c=1$
Beem:2015aoa , and in 3d so that the free theory has $c=16$ Chester:2014fya .
In 4d and 6d, the existence of a protected 2d chiral algebra Beem:2013sza
fixes $\lambda^{2}_{\mathcal{M}}\propto 1/c$ for certain protected multiplets,
while the remaining protected multiplets $\mathcal{M}_{\text{prot}}$ have
$\lambda^{2}$ that remain unconstrained.
An important non-perturbative constraint on the four-point function can be
derived by swapping $1\leftrightarrow 3$ in (3), which yields the crossing
equations
$\begin{split}{\mathcal{G}}(U,V;\sigma,\tau)=\frac{U^{d-2}}{V^{d-2}}\tau^{2}{\mathcal{G}}(V,U;\sigma/\tau,1/\tau)\,,\end{split}$
(7)
which we will now use to constrain the correlator.
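The position-space part of this crossing relation is easy to verify numerically. The following sketch (our own check; it tests only the cross ratios of (4), not the R-symmetry factors $\sigma,\tau$) confirms that swapping $x_{1}\leftrightarrow x_{3}$ exchanges $U\leftrightarrow V$.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(4, 4))                  # four generic points in R^4

def sq(i, j):                                # squared distance x_ij^2
    return np.sum((x[i] - x[j]) ** 2)

U = sq(0, 1) * sq(2, 3) / (sq(0, 2) * sq(1, 3))
V = sq(0, 3) * sq(1, 2) / (sq(0, 2) * sq(1, 3))

x[[0, 2]] = x[[2, 0]]                        # swap x_1 <-> x_3
U2 = sq(0, 1) * sq(2, 3) / (sq(0, 2) * sq(1, 3))
V2 = sq(0, 3) * sq(1, 2) / (sq(0, 2) * sq(1, 3))
print(np.isclose(U2, V), np.isclose(V2, U))  # True True: 1 <-> 3 exchanges U and V
```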
## III One-loop from tree level
We will now restrict to the pure AdSd+1 theory, and consider the large $c$
expansion of the correlator ${\mathcal{G}}$ shown in (2), where we expand long
multiplet CFT data as
$\begin{split}\Delta_{n,\ell}&=2(d-2)+2n+\ell+\gamma^{R}_{n,\ell}/c+\gamma^{R|R}_{n,\ell}/c^{2}+\dots\,,\\\
\lambda_{n,\ell}^{2}&=(\lambda^{(0)}_{n,\ell})^{2}+(\lambda^{R}_{n,\ell})^{2}/c+(\lambda^{R|R}_{n,\ell})^{2}/c^{2}+\dots\,.\\\
\end{split}$ (8)
A similar expansion exists for the OPE coefficients of the protected
operators, although of course their scaling dimensions are fixed. The long
multiplets that appear in (8) are all double trace operators $[SS]_{n,\ell}$
of the schematic form
$\begin{split}[SS]_{n,\ell}=S\Box^{n}\partial_{\mu_{1}}\dots\partial_{\mu_{\ell}}S\,,\end{split}$
(9)
with $\Delta^{(0)}_{n,\ell}=2(d-2)+2n+\ell$ in the $c\to\infty$ generalized
free field theory (GFFT). Note that if the bulk theory had a compact factor,
e.g. $AdS_{5}\times S^{5}$, then we could use the higher KK modes to construct
more such long operators, which would be degenerate in the GFFT and thus mix
in the $1/c$ expansion. The GFFT and tree correlators, which are insensitive
to the bulk factor, were computed in each $d$ in Dolan:2001tt ; Zhou:2017zaw ;
Heslop:2004du ; Arutyunov:2002ff and used to extract tree level data, which
we summarize in Table 1. For theories with higher KK modes, we can only
extract the average long multiplet anomalous dimensions
$\langle\lambda^{2}_{n,\ell}\gamma_{n,\ell}^{R}\rangle$, due to the degeneracy
at GFFT. For protected multiplets, we can obtain the unique CFT data for all
such large $c$ theories.
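As a consistency check (ours) on the GFFT dimensions: evaluating $\Delta^{(0)}_{n,\ell}=2(d-2)+2n+\ell$ at the quantum numbers used later reproduces the leading terms of the $\Delta_{0,\ell}$ rows in Table 1.

```python
# GFFT dimensions Delta^(0)_{n,l} = 2(d-2) + 2n + l from (9)
for d, n, l in [(3, 0, 2), (4, 0, 2), (6, 0, 2), (6, 0, 4)]:
    print(d, l, 2 * (d - 2) + 2 * n + l)   # 4, 6, 10, 12: leading terms in Table 1
```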
At 1-loop level, we can expand the superblock expansion (5) to get
$\begin{split}&\mathcal{G}^{R|R}=\sum_{n=0}^{\infty}\sum_{\ell\in\text{Even}}\Big{[}\frac{1}{8}{(\lambda^{(0)}_{n,\ell})^{2}(\gamma^{R}_{n,\ell})^{2}}\log^{2}U\\\
&+{(\lambda^{(0)}_{n,\ell})^{2}\gamma^{R|R}_{n,\ell}}\frac{\log
U}{2}+\dots\Big{]}\mathfrak{G}_{n,\ell}+\sum_{{\mathcal{M}}_{\text{prot}}}(\lambda^{R|R}_{\mathcal{M}})^{2}\mathfrak{G}_{\mathcal{M}}\,,\end{split}$
(10)
where the ellipses refer to other combinations of tree and loop data,
and recall that ${\mathcal{M}}_{\text{prot}}$ denotes protected multiplets
whose OPE coefficients are not $1/c$ exact. The significance of the
$\log^{2}U$ term is that it is the only term at this order that has a double
discontinuity (DD) as $U\to 0$ (this is true for every known maximally supersymmetric CFT in $d=3,4,6$ except the $U(N)_{1}\times U(N)_{-1}$ ABJM theory, for which additional contributions come from odd twist long multiplet OPE coefficients, which can also be computed from tree level data; see Alday:2022rly for more details). The Lorentzian inversion formula Caron-Huot:2017vep shows that all CFT data with sufficiently large $\ell$ can be extracted from the DD as $V\to 0$, so we can obtain this DD from the $\log^{2}U$ terms after applying crossing (7). For instance, we can compute
the 1-loop correction to the OPE coefficient of 3d protected multiplets
${(A,+)_{\ell}}$ as
$\begin{split}&(\lambda^{R|R}_{(A,+)_{\ell}})^{2}=\frac{12(2\ell+5)\Gamma(\ell+3)^{4}}{\Gamma\left(\ell+\frac{5}{2}\right)^{2}\Gamma\left(\ell+\frac{7}{2}\right)^{2}}\\\
&\times\int_{0}^{1}\frac{d\bar{z}}{\bar{z}}g_{\ell+4,\ell+2}(\bar{z})\text{dDisc}[{\cal
G}^{[0040]}(z\bar{z},1-\bar{z})|_{z}]\,,\end{split}$ (11)
where ${\cal G}^{[0040]}|_{z}$ is the leading twist term in the highest weight
representation of $SO(8)_{R}$, we define the lightcone blocks
$g_{\Delta,\ell}(z)$ in Appendix C, and we introduce the variables
$U=z\bar{z}$ and $V=(1-z)(1-\bar{z})$. We compute dDisc acting on
$\log^{2}V\sim\log^{2}(1-\bar{z})$ as
$\begin{split}{\rm
dDisc}\,[f(z,\bar{z})\log^{2}(1-\bar{z})]=4\pi^{2}f(z,\bar{z})\,,\end{split}$
(12)
where we assume $f(z,\bar{z})$ is analytic as $\bar{z}\to 1$ (i.e. $V\to 0$ in
a small $U$ expansion). We give the inversion formulae for the other CFT data
in Appendix C. Note that in the string/M-theory cases, the inversion formula
does not converge for low spins, which corresponds to the existence of the
contact terms $\kappa\mathcal{G}^{R^{4}}$ in (2). In the pure AdSd+1 case we
do not have such contact terms as discussed above, so we can in fact extract
all CFT data at 1-loop order.
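To make the structure of these inversion integrals concrete, here is a minimal numerical skeleton (ours) of (11), using the lightcone block normalization of (24) in Appendix C and the dDisc rule (12). The double discontinuity inserted below is a toy function chosen only so the integral converges; the actual resummed 1-loop expressions are in the attached Mathematica file, so the output here is illustrative rather than a paper value.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyp2f1, factorial

def g_lc(Delta, ell, zb):
    """3d lightcone block in the normalization of (24), with zbar = 1 - V."""
    pref = gamma(ell + 0.5) / (4.0**Delta * np.sqrt(np.pi) * factorial(ell))
    h = (Delta + ell) / 2.0
    return pref * zb**ell * hyp2f1(h, h, Delta + ell, zb)

def invert(ell, dDisc):
    """Skeleton of (11): prefactor times int_0^1 dzb/zb g_{l+4,l+2}(zb) dDisc(zb)."""
    pref = 12 * (2 * ell + 5) * gamma(ell + 3) ** 4 / (
        gamma(ell + 2.5) ** 2 * gamma(ell + 3.5) ** 2)
    val, _ = quad(lambda zb: g_lc(ell + 4, ell + 2, zb) * dDisc(zb) / zb, 0, 1)
    return pref * val

# Toy input (NOT the paper's dDisc): dDisc[f log^2(1-zb)] = 4 pi^2 f, as in (12)
toy_dDisc = lambda zb: 4 * np.pi**2 * (1 - zb)
print(invert(0, toy_dDisc))
```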
To apply these inversion formulae, we need to compute the $\log^{2}U$ terms in
(10) for finite $U$, expand to leading order in $U$ in the crossed channel (7), and
perform the integral of the resulting resummed $V\sim 1-\bar{z}$ expression.
We compute the $\log^{2}U$ terms in a small $U$ expansion using the ansatz
$\begin{split}&\frac{1}{8}\sum_{n=0}^{\infty}\sum_{\ell\in\text{Even}}{(\lambda^{(0)}_{n,\ell})^{2}(\gamma^{R}_{n,\ell})^{2}}\mathfrak{G}_{n,\ell}=\sum_{n=0}^{\infty}U^{d-2+n}\Big{[}p_{1}\\\
&+p_{2}\log
V+p_{3}\log^{2}V+p_{4}\text{Li}_{2}(1-V)\Big{]}+\dots\,,\end{split}$ (13)
where here we showed the singlet channel, while the dots denote the other
R-symmetry channels $R$ that will start at higher powers of $U$ and have
nontrivial $\sigma,\tau$ dependence given by the structures
$Y_{R}(\sigma,\tau)$, as given in Eq. B.14 of Nirschl:2004pa . The
coefficients $p_{i}$ are polynomials in $V$ divided by monomials in $V$. We
then perform crossing, expand to leading order in $U$, and resum the expansion in
$V\sim 1-\bar{z}$ to get the relevant DDs. The final expressions are inverse
trigonometric functions of $\bar{z}$ times high degree polynomials in
$\bar{z}$, whose explicit form we give in the attached Mathematica file. We
then plug these into the inversion formula to obtain the 1-loop correction to
CFT data. For the lowest spin in each multiplet we find
$\begin{split}(\lambda^{R|R}_{(B,+)})^{2}&=793.76\,,\qquad(\lambda^{R|R}_{(A,+)_{0}})^{2}=97.766\,,\\\
(\lambda^{R|R}_{(B,2)})^{2}&=3968.8\,,\qquad\;(\lambda^{R|R}_{(A,2)_{1}})^{2}=570.50\,,\\\
\gamma^{R|R}_{0,0}&=21555\,,\qquad\qquad\,\Delta^{R|R}_{3d,2}=2713.6\,,\\\
\end{split}$ (14)
where here we show 5 digits of precision, but we can compute to arbitrary precision. In 4d, the only nontrivial data is the anomalous dimensions, which
were already computed for pure AdS5 in Alday:2017xua for $\ell\geq 0$:
$\begin{split}\gamma^{R|R}_{0,\ell}=\frac{24\left(7\ell^{5}+116\ell^{4}+725\ell^{3}+2044\ell^{2}+2292\ell+288\right)}{(\ell+1)^{2}(\ell+2)(\ell+3)(\ell+4)(\ell+5)(\ell+6)^{3}}\,.\end{split}$
(15)
In 6d, we compute the lowest few spins for the multiplets with non-trivial
$1/c$ expansions to get
$\begin{split}(\lambda^{R|R}_{\mathcal{B}[0,2]_{1}})^{2}&=-4.2372\,,\qquad(\lambda^{R|R}_{\mathcal{B}[0,2]_{3}})^{2}=-0.1531\,,\\\
\gamma^{R|R}_{0,0}&=-54695\,,\qquad\qquad\;\Delta^{R|R}_{6d,2}=-644.25\,,\\\
\gamma^{R|R}_{0,4}&=-18.918\,,\qquad\;\,(\lambda^{R|R}_{\mathcal{D}[0,4]})^{2}=-822.70\,.\end{split}$
(16)
## IV Numerical conformal bootstrap
We will now compare these 1-loop corrections to the numerical bootstrap bounds
on CFT data in the stress tensor correlator for $d=3,4,6$, which were computed
for $d=3,4$ in Alday:2021ymb ; Alday:2021vfb , and which we compute now for 6d
following Beem:2015aoa . These bounds come from optimizing the infinite set of
constraints imposed by the crossing equations (7) on the superblock expansion
in (5), for more details in each case see the original works Beem:2013qxa ;
Beem:2015aoa ; Chester:2014fya , and Poland:2018epd ; Chester:2019wfx ;
Simmons-Duffin:2016gjk ; Poland:2022qrs for recent reviews. The convergence
of these bounds is monotonic and given by the parameter $\Lambda$ originally
defined in Chester:2014fya , which counts how many derivatives are used in the
expansion of conformal blocks around the crossing symmetric point (for comparison, the most precise Ising model bounds were computed with $\Lambda=43$ in Landry:2019qug , while all the bounds shown here use at least twice that precision). These bounds apply to any theory with maximal
supersymmetry in the given $d$ and are computed as a function of $c$, which is
related to the stress tensor OPE coefficient as in (6). Since these bounds are
non-perturbative in $c$, we will look at the large $c$ regime where we expect
the $1/c$ expansion of the previous section to be good. The large $c$
expansion of CFT data is asymptotic, which means that after a few orders the
expansion will actually get worse, unless we look at very large values of $c$.
We observe that the $1/c^{2}$ corrections get smaller relative to $1/c$ tree
corrections as the spin increases, which implies that the asymptotic expansion
is getting more accurate at this order. We do not want to look at very high
spin data, however, because then the difference between each order will be
hard to observe. As a compromise, we will focus on the lowest spin CFT data
for which the Lorentzian inversion converges for the string/M-theory CFTs. We
summarize the comparison of the analytic $1/c$ expansion to fits in the large
$c$ regime of the bootstrap bounds in Table 1. (A rough diagnostic for the error of these fits is how closely the $1/c$ tree level correction matches the known answer; the range of $c$ used for the fits was chosen to give such a tree level match, so that the 1-loop term is then a prediction.)
3d: | $\Delta_{0,2}$: Exact | $4-49.931/c+2713.6/c^{2}$
---|---|---
| Fit | $3.99996-49.82/c+2619.4/c^{2}$
| $\lambda^{2}_{(A,2)_{1}}$: Exact | $9.7523-98.764/c+570.43/c^{2}$
| Fit | $9.7523-98.772/c+580.443/c^{2}$
| $\lambda^{2}_{(A,+)_{0}}$: Exact | $7.1111+48.448/c+97.768/c^{2}$
| Fit | $7.1111+48.445/c+103.35/c^{2}$
4d: | $\Delta_{0,2}$: Exact | $6-1/c+0.12976/c^{2}$
| Fit | $6.0000-0.99929/c+0.14718/c^{2}$
6d: | $\Delta_{0,2}$: Exact | $10-10.909/c-258.79/c^{2}$
| Fit | $10.000-11.209/c+270.96/c^{2}$
| $\Delta_{0,4}$: Exact | $12-3.1648/c-17.157/c^{2}$
| Fit | $12.000-3.1956/c-17.832/c^{2}$
| $\lambda^{2}_{\mathcal{B}[02]_{1}}$: Exact | $0.75757-0.98484/c-4.2372/c^{2}$
| Fit | $0.75757-0.98009/c-3.9446/c^{2}$
| $\lambda^{2}_{\mathcal{B}[02]_{3}}$: Exact | $0.43076-0.15440/c-0.15313/c^{2}$
| Fit | $0.43076-0.15432/c-0.17448/c^{2}$
Table 1: Fits of the numerical bootstrap bounds at large $c$, compared to
exact $O(1/c^{2})$ values for the pure AdSd+1 theory for $d=3,4,6$.
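The "Fit" rows are obtained by fitting the bound curves to a quadratic polynomial in $1/c$ over the large-$c$ window. Here is a minimal sketch of this step (ours; the input below is a hypothetical stand-in sampled from the 4d "Exact" row, since the actual inputs are the numerical bootstrap bounds themselves):

```python
import numpy as np

# Hypothetical stand-in for the bound data: sample the 4d "Exact" curve of
# Table 1; in practice the inputs are the numerical bootstrap bounds at each c.
c = np.geomspace(1e2, 1e4, 40)
bound = 6 - 1 / c + 0.12976 / c**2

a2, a1, a0 = np.polyfit(1 / c, bound, deg=2)   # fit a0 + a1/c + a2/c^2
print(a0, a1, a2)                              # recovers ~6, -1, 0.12976
```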
Figure 1: Upper and lower numerical bootstrap bounds (in black) on the
$\lambda_{(A,+)_{0}}^{2}$ and $\lambda_{(A,2)_{1}}^{2}$ OPE coefficients, as
well as upper bounds on the scaling dimension $\Delta_{0,2}$ of the lowest
dimension spin 2 long multiplet, made with precision $\Lambda=83$. These
bounds apply to any 3d $\mathcal{N}=8$ CFT, and are plotted in terms of the
stress-tensor coefficient $c$ in the large $c$ regime, where $c=16$ for the
free theory. The gray dotted line denotes the large $c$ expansion to order
tree level supergravity $O(c^{-1})$, which does not depend on the compact
factor in the bulk. The purple, blue, and orange dashed lines also include the
1-loop supergravity correction $O(c^{-2})$ on $AdS_{4}\times S^{7}$,
$AdS_{4}\times S^{7}/\mathbb{Z}_{2}$, and AdS4, respectively.
We start with the bounds on 3d $\mathcal{N}=8$ CFTs, which were computed
with $\Lambda=83$. In Figure 1 we show upper and lower bounds on OPE
coefficients for the protected $(A,+)_{\ell}$ and $(A,2)_{\ell}$ multiplets
for the lowest spins $\ell=0$ and $\ell=1$, respectively. Both upper and lower
bounds exist for the OPE coefficients, because their protected scaling
dimensions $\Delta=\ell+2$ are separated from the continuum of long
multiplets. The lower bounds are the nontrivial bounds in this case, as the
upper bounds simply interpolate between the GFFT values at $c\to\infty$ and
the free theory values at $c=16$. We also show upper bounds on the scaling dimension $\Delta_{0,\ell}$ of the lowest-dimension long multiplet with spin
$\ell=2$. We compare these bounds to the 1-loop data for the pure AdS4 theory
as given in (14), as well as for the $AdS_{4}\times S^{7}$ and $AdS_{4}\times
S^{7}/\mathbb{Z}_{2}$ theories Alday:2021ymb ; Alday:2022rly , which we review
in Appendix B. We find that the pure AdS4 1-loop correction at $1/c^{2}$
noticeably improves the universal tree correction at $1/c$ and approximately
saturates the numerical bounds, unlike the $AdS_{4}\times S^{7}$ and
$AdS_{4}\times S^{7}/\mathbb{Z}_{2}$ 1-loop corrections, which lie inside the
allowed region.
Figure 2: Upper bounds (in black) on the scaling dimension $\Delta_{0,2}$ of
the lowest dimension spin 2 long multiplet, made with precision $\Lambda=123$.
These bounds apply to any interacting 4d $\mathcal{N}=4$ CFT, and are plotted
in terms of the stress-tensor coefficient $c$ in the large $c$ regime, where
$c=3/4$ for the minimal interacting theory $SU(2)$ SYM. The gray dotted line
denotes the large $c$ expansion to order tree level supergravity $O(c^{-1})$,
which does not depend on the compact factor in the bulk. The purple, blue, and
orange dashed lines also include the 1-loop supergravity correction
$O(c^{-2})$ on $AdS_{5}\times S^{5}$, $AdS_{5}\times S^{5}/\mathbb{Z}_{2}$,
and AdS5, respectively.
Next, we consider the bounds on 4d $\mathcal{N}=4$ CFTs, which were computed
with $\Lambda=123$. In Figure 2 we show upper bounds on the scaling dimension $\Delta_{0,\ell}$ of the lowest-dimension long multiplet with spin $\ell=2$.
We compare these bounds to the 1-loop data for the pure AdS5 theory as given
in (15), as well as for the $AdS_{5}\times S^{5}$ and $AdS_{5}\times
S^{5}/\mathbb{Z}_{2}$ theories Aprile:2017bgs ; Alday:2017xua ; Alday:2021vfb
, which we review in Appendix B. Again, we find that the pure AdS5 1-loop
correction saturates the numerical bounds noticeably better than the tree,
$AdS_{5}\times S^{5}$, or $AdS_{5}\times S^{5}/\mathbb{Z}_{2}$ expressions.
The correction is particularly striking in this case, as the tree level
correction lies below the upper bound, and only the pure AdS5 1-loop
correction is positive.
Figure 3: Upper and lower numerical bootstrap bounds (in black) on the
$\lambda_{\mathcal{B}[02]_{\ell}}^{2}$ OPE coefficients for $\ell=1,3$, as
well as upper bounds on the scaling dimension $\Delta_{0,\ell}$ of the lowest
dimension $\ell=2,4$ long multiplets, made with precision $\Lambda=91$. These
bounds apply to any interacting 6d $(2,0)$ CFT, and are plotted in terms of
the stress-tensor coefficient $c$ for $c\geq 25$, which is the value for the
minimal interacting theory $A_{1}$. The gray dotted line denotes the large $c$
expansion to order tree level supergravity $O(c^{-1})$, which does not depend
on the compact factor in the bulk. The purple, blue, and orange dashed lines
also include the 1-loop supergravity correction $O(c^{-2})$ on $AdS_{7}\times
S^{4}$, $AdS_{7}\times S^{4}/\mathbb{Z}_{2}$, and AdS7, respectively.
Finally, we consider the bounds on 6d $(2,0)$ CFTs, which we computed with
$\Lambda=91$ (see Lemos:2021azv for a recent numerical bootstrap study of this correlator that compared the bounds to the finite $c$ inversion formula iteratively acted on the protected CFT data). The bootstrap is
generically less converged as $d$ increases, so in this case we show bounds on
two low values of spin for each nontrivial multiplet to show the improvement
of the match. Since $c$ is generically bigger for physical 6d CFTs, e.g. the
minimal interacting CFT is the $A_{1}$ theory with $c=25$ Beem:2015aoa , we
plot the entire allowed range of $c$. In Figure 3 we show upper bounds on the
OPE coefficients for the protected $\mathcal{B}[0,2]_{\ell}$ multiplet for the
lowest spins $\ell=1,3$. While we cannot compute lower bounds as in 3d,
because this multiplet is not separated from the continuum of long multiplets,
the upper bound in this case is now nontrivial. We also show upper bounds on
the scaling dimension $\Delta_{0,\ell}$ of the lowest-dimension long multiplets
with spin $\ell=2,4$. We compare these bounds to the 1-loop data for the pure
AdS7 theory as given in (16), as well as for the $AdS_{7}\times S^{4}$ and
$AdS_{7}\times S^{4}/\mathbb{Z}_{2}$ theories Alday:2020tgi , which we review
in Appendix B. Again, we find that the pure AdS7 1-loop correction saturates the numerical bounds noticeably better than the tree, $AdS_{7}\times S^{4}$, or
$AdS_{7}\times S^{4}/\mathbb{Z}_{2}$ expressions. We also computed a lower bound on $c$ (i.e. an upper bound on the stress tensor OPE coefficient), which applies to any interacting 6d $(2,0)$ CFT, and got
$\begin{split}c\geq 21.6441\,,\end{split}$ (17)
which is weaker than the bound $c\gtrsim 25$ conjectured in Beem:2015aoa .
This latter bound was found by extrapolating bounds computed at lower values
of $\Lambda$ to $\Lambda\to\infty$, and was used as evidence that these
general bootstrap bounds were saturated by the physical $A_{1}$ theory with
$c=25$. We use a different definition of $\Lambda$ than Beem:2015aoa (compare our definition in 6.13 of Chester:2014fya to their definition in 5.9 of Beem:2015aoa ; we thank Balt van Rees for pointing this out), so it is hard to check their conjectured extrapolation against our bound. However, in both 3d and 4d we know that the general bounds are not saturated by the string/M-theory duals with the smallest such values of $c$, so it seems likely that this general 6d bound is also not saturated by the $A_{1}$ theory even at $\Lambda\to\infty$. (For the 3d $\mathcal{N}=8$ stress tensor bootstrap, a kink was found at $c\approx 22.2735$ even at the high value of $\Lambda=43$ Agmon:2019imm , which is close to but different from the lowest known interacting theory, $U(2)_{2}\times U(1)_{-1}$ ABJ with $c\approx 21.3333$. The value of $c$ at this kink was also shown to be the lowest value allowed by a mixed correlator bootstrap that kinematically ruled out $U(2)_{2}\times U(1)_{-1}$ ABJ, which strongly suggests that even at infinite $\Lambda$ the kink will not correspond to this theory. For the 4d stress tensor bootstrap at $c=3/4$, corresponding to the lowest interacting $SU(2)$ SYM theory, it was shown in Chester:2021aun that bounds obtained after imposing localization constraints for this theory strictly rule out the more general bounds like those in this paper.)
## V Discussion
Our results show that pure AdSd+1 maximal supergravity saturates the most
general non-perturbative bootstrap bounds in the large $c$ regime, while CFTs
with string/M-theory duals lie in the allowed region. This suggests that to
study the latter theories, one needs to disallow the existence of the pure
AdSd+1 theory by either looking at mixed correlators with other single trace
operators Agmon:2019imm ; Bissi:2020jve , or imposing theory specific
constraints like supersymmetric localization Pestun:2016zxk . Indeed, in 3d
one can strengthen these general bootstrap bounds by inputting the OPE
coefficients of the $(B,2)$ and $(B,+)$ multiplets for the $U(N)_{k}\times
U(N)_{-k}$ ABJM theory for $k=1,2$, as computed to all orders in $1/N$ using
localization in Agmon:2017xes , in which case the 1-loop data for the dual
$AdS_{4}\times S^{7}/\mathbb{Z}_{k}$ theories then saturates the bounds
Alday:2021ymb ; Alday:2022rly . In 4d, one can input the two localization
inputs for $SU(N)$ SYM derived in Binder:2019jwn ; Chester:2020dja , which are
a function of the complexified coupling $\tau$, in which case the bounds
in Chester:2021aun match 4-loop weak coupling results Fleury:2019ydf in the
appropriate regime, and exclude the general bootstrap bounds shown here for
all $\tau$. In 6d there is no localization, but for correlators of single
trace operators other than the stress tensor one can input nontrivial OPE
coefficients given by the protected 2d chiral algebra Beem:2014kka ;
Chester:2018dga for the $A_{N-1}$ or $D_{N}$ theories.
We can also use the general bootstrap bounds themselves to further study the
pure AdSd+1 theory, assuming it continues to saturate the bounds to higher
order in $1/c$. In particular, by applying a fit to the large $c$ regime of
the numerical bounds, one could read off higher derivative corrections to
supergravity such as the $\mathcal{G}^{R^{4}}$ term discussed in the
introduction, to help determine a putative UV completion. Since
$\mathcal{G}^{R^{4}}$ occurs at the same order as higher loop corrections in
some cases, e.g. $c^{-3}$ for pure AdS5 (2), it will be necessary to compute
these higher loops, as was recently done for the 2-loop correction on
$AdS_{5}\times S^{5}$ Huang:2021xws ; Drummond:2022dxw . The pure AdSd+1 case
should be much easier due to the lack of mixing, and so could even guide the
calculation in the more physical cases with compact factors. More ambitiously,
we can non-perturbatively define the pure AdSd+1 theory as whatever saturates
the bootstrap bounds at finite $c$; it would be fascinating to find
independent evidence for or against the existence of such a theory.
Finally, we can ask what theory saturates the stress tensor correlator
bootstrap bound with less than maximal supersymmetry. In 3d, the
$\mathcal{N}=6$ bootstrap bounds were found in Binder:2020ckj ; Binder:2021cif
to be saturated by $U(1)_{2N}\times U(1+N)_{-2N}$ ABJ theory Aharony:2008gk
for all $N$, which has a vector-like large $N$ limit dual to supersymmetric
higher spin gravity Chang:2012kt ; Aharony:2020omh ; Aharony:2021ovo . With no
supersymmetry, it was observed in El-Showk:2014dwa ; ElShowk:2012ht ;
Kos:2013tga that critical $O(N)$ vector models saturate the bound on $c$ (see Chester:2015lej ; Chester:2015qca for similar results on $\mathcal{N}=2$ critical $O(N)$ vector models), so it is likely that the 3d stress tensor correlator bounds in general are saturated by interacting vector model CFTs. In higher dimensions, however, there are no interacting unitary vector models (the critical $O(N)$ vector model can be defined also in $4<d<6$ Fei:2014yja , but it is non-unitary Giombi:2019upv ; nonetheless, it can be non-rigorously bootstrapped with some success Chester:2014gqa ; Li:2016wdp ; Nakayama:2014yia ), so it is possible that the most general non-
supersymmetric stress tensor bounds could be saturated by pure AdSd+1 Einstein
gravity with $d>3$. It would be fascinating to check this by generalizing the
non-supersymmetric stress tensor bootstrap in 3d Dymarsky:2017yzx to higher
$d$. If such non-supersymmetric pure AdSd+1 theories exist for any $d$, then
they suggest that unitary interacting CFTs can be constructed for any $d$,
unlike supersymmetric CFTs which only exist for $d\leq 6$.
## Acknowledgments
We thank Anatoly Dymarsky, Balt van Rees, Joao Penedones, and Leonardo
Rastelli for useful conversations, Himanshu Raj for collaboration on related
projects, and Ofer Aharony for reviewing the manuscript. We also thank the
organizers of the 2022 Bootstrap conference in Porto, during which this
project was completed. SMC is supported by the Weizmann Senior Postdoctoral
Fellowship. The work of LFA is supported by the European Research Council
(ERC) under the European Union’s Horizon 2020 research and innovation
programme (grant agreement No 787185). LFA is also supported in part by the
STFC grant ST/T000864/1. The authors would like to acknowledge the use of the
WEXAC cluster in carrying out this work.
## Appendix A Multiplets
Type | $(\Delta,\ell)$ | ${SO}(8)_{R}$ irrep | spin $\ell$ | $1/c$ exact
---|---|---|---|---
$(B,+)$ | $(2,0)$ | ${\bf 294}_{c}=[0040]$ | $0$ | no
$(B,2)$ | $(2,0)$ | ${\bf 300}=[0200]$ | $0$ | no
$(B,+)$ | $(1,0)$ | ${\bf 35}_{c}=[0020]$ | $0$ | yes
$(A,+)$ | $(\ell+2,\ell)$ | ${\bf 35}_{c}=[0020]$ | even | no
$(A,2)$ | $(\ell+2,\ell)$ | ${\bf 28}=[0100]$ | odd | no
Long | $\Delta>\ell+1$ | ${\bf 1}=[0000]$ | even | no
Id | $(0,0)$ | ${\bf 1}=[0000]$ | even | N/A
Table 2: The possible superconformal multiplets in the $S\times S$ OPE for 3d
$\mathcal{N}=8$ CFTs. The quantum numbers are those of the superconformal
primary in each multiplet.
In this appendix we review the supermultiplets that appear in the OPE $S\times
S$ for $d=3,4,6$ interacting theories. In 3d, $S$ is a $(B,+)$ type multiplet
that transforms in the $[0020]$ of $SO(8)_{R}$, and we show the possible
multiplets in Table 2. In this case, none of the protected multiplets are
$1/c$ exact except trivially the stress tensor multiplet itself.
Type | $(\Delta,\ell)$ | ${SU}(4)_{R}$ irrep | spin $\ell$ | $1/c$ exact
---|---|---|---|---
$\mathcal{B}$ | $(2,0)$ | ${\bf 20^{\prime}}=[020]$ | $0$ | yes
$\mathcal{B}$ | $(4,0)$ | ${\bf 105}=[040]$ | $0$ | yes
$\mathcal{B}$ | $(4,0)$ | ${\bf 84}=[202]$ | $0$ | yes
$\mathcal{C}$ | $(\ell+4,\ell)$ | ${\bf 20^{\prime}}=[020]$ | even | yes
$\mathcal{C}$ | $(\ell+4,\ell)$ | ${\bf 15}=[101]$ | odd | yes
Long | $\Delta>\ell+2$ | ${\bf 1}=[000]$ | even | no
Id | $(0,0)$ | ${\bf 1}=[000]$ | even | N/A
Table 3: The possible superconformal multiplets in the $S\times S$ OPE for 4d
$\mathcal{N}=4$ CFTs. The quantum numbers are those of the superconformal
primary in each multiplet, and for familiarity we use $SU(4)$ conventions for
the Dynkin labels.
In 4d, $S$ is a $\mathcal{B}$ type multiplet in the $[020]$ of $SU(4)_{R}$,
and we show the possible multiplets in Table 3. Here, there are no non-trivial
protected multiplets.
Type | $(\Delta,\ell)$ | ${SO}(5)_{R}$ irrep | spin $\ell$ | $1/c$ exact
---|---|---|---|---
$\mathcal{D}$ | $(4,0)$ | ${\bf 14}=[20]$ | $0$ | yes
$\mathcal{D}$ | $(8,0)$ | ${\bf 35^{\prime}}=[04]$ | $0$ | no
$\mathcal{D}$ | $(8,0)$ | ${\bf 55}=[40]$ | $0$ | yes
$\mathcal{B}$ | $(\ell+8,\ell)$ | ${\bf 14}=[20]$ | even | yes
$\mathcal{B}$ | $(\ell+8,\ell)$ | ${\bf 10}=[02]$ | odd | no
Long | $\Delta>\ell+6$ | ${\bf 1}=[00]$ | even | no
Id | $(0,0)$ | ${\bf 1}=[00]$ | even | N/A
Table 4: The possible superconformal multiplets in the $S\times S$ OPE for 6d
$(2,0)$ CFTs. The quantum numbers are those of the superconformal primary in
each multiplet.
In 6d, $S$ is a $\mathcal{D}$ type multiplet in the $[20]$ of $SO(5)_{R}$, and
we show the possible multiplets in Table 4. Here, the non-trivial protected
multiplets are $\mathcal{D}[04]$ and $\mathcal{B}[02]_{\ell}$ with odd $\ell$,
which are identical to the long multiplets at their unitarity value $\Delta=\ell+6$.
## Appendix B CFT data
In this appendix, we collect previous results for 1-loop CFT data in $d=3,4,6$
for string/M-theory duals, which we will use in the main text. In 3d, the
1-loop corrections were computed for $U(N)_{k}\times U(N)_{-k}$ ABJM dual to
$AdS_{4}\times S^{7}/\mathbb{Z}_{k}$ for $k=1,2$ in Alday:2021ymb ;
Alday:2022rly to get for the $k=1$ theory
$\begin{split}&AdS_{4}\times
S^{7}:\qquad\qquad\qquad\quad\;\;\gamma^{R|R}_{0,2}=-39254.4\,,\\\
&(\lambda^{R|R}_{(A,+)_{0}})^{2}=513.49\,,\qquad(\lambda^{R|R}_{(A,2)_{1}})^{2}=5221.3\,,\\\
\end{split}$ (18)
and for the $k=2$ theory
$\begin{split}&AdS_{4}\times
S^{7}/\mathbb{Z}_{2}:\qquad\qquad\quad\;\;\;\,\gamma^{R|R}_{0,2}=-16740.9\,,\\\
&(\lambda^{R|R}_{(A,+)_{0}})^{2}=285.32\,,\qquad(\lambda^{R|R}_{(A,2)_{1}})^{2}=2239.9\,.\\\
\end{split}$ (19)
In 4d, the 1-loop corrections were computed for $\mathcal{N}=4$ SYM with
$SU(N)$ Alday:2017xua ; Aprile:2017bgs and $SO(N)$ Alday:2021vfb gauge group
dual to $AdS_{5}\times S^{5}$ and $AdS_{5}\times S^{5}/\mathbb{Z}_{2}$,
respectively, to get
$\begin{split}&AdS_{5}\times
S^{5}:\qquad\quad\;\;\gamma^{R|R}_{0,2}=-2.5625\,,\\\ &AdS_{5}\times
S^{5}/\mathbb{Z}_{2}:\qquad\gamma^{R|R}_{0,2}=-0.88851\,.\\\ \end{split}$ (20)
In 6d, the 1-loop corrections were computed for $A_{N-1}$ and $D_{N}$ CFTs
dual to $AdS_{7}\times S^{4}$ and $AdS_{7}\times S^{4}/\mathbb{Z}_{2}$,
respectively, to get for the former theory
$\begin{split}&AdS_{7}\times S^{4}:\\\
&\quad\;\;\,\gamma^{R|R}_{0,2}=-1171.1\,,\qquad\quad\;\;\;\;\gamma^{R|R}_{0,4}=-25.414\,,\\\
&(\lambda^{R|R}_{\mathcal{B}[02]_{1}})^{2}=-12.388\,,\qquad(\lambda^{R|R}_{\mathcal{B}[02]_{3}})^{2}=-0.18697\,,\end{split}$
(21)
and for the latter theory
$\begin{split}&AdS_{7}\times S^{4}/\mathbb{Z}_{2}:\\\
&\quad\;\;\;\,\gamma^{R|R}_{0,2}=-644.25\,,\qquad\quad\;\;\,\,\gamma^{R|R}_{0,4}=-18.918\,,\\\
&(\lambda^{R|R}_{\mathcal{B}[02]_{1}})^{2}=-7.6294\,,\qquad(\lambda^{R|R}_{\mathcal{B}[02]_{3}})^{2}=-0.15983\,.\end{split}$
(22)
## Appendix C Inversion formulae
In this appendix we collect the inversion formulae from Alday:2021ymb ;
Alday:2020tgi that we apply to the DDs computed for the 1-loop pure AdSd+1
correlator for $d=3,6$ to get the CFT data reported in the main text. Recall
that for 4d, the pure AdS5 results are already available from Alday:2017xua .
For 3d, the $(A,+)_{\ell}$ formula was given in (11), while the $(A,2)_{\ell}$
result can be extracted from the formula
$\begin{split}&\frac{12(\ell+1)^{2}(\ell+2)^{2}}{(2\ell+1)(2\ell+3)^{2}(2\ell+5)}\lambda^{2}_{(A,2)_{\ell-1}}\\\
&+\frac{2(\ell+2)(\ell+3)}{(2\ell+3)(2\ell+7)}\lambda^{2}_{(A,+)_{\ell}}+\frac{3}{4}\lambda^{2}_{(A,2)_{\ell+1}}=\\\
&\frac{12(2\ell+5)\Gamma(\ell+3)^{4}}{\Gamma\left(\ell+\frac{5}{2}\right)^{2}\Gamma\left(\ell+\frac{7}{2}\right)^{2}}\\\
&\times\int_{0}^{1}\frac{d\bar{z}}{\bar{z}}g_{\ell+4,\ell+2}(\bar{z})\text{dDisc}[{\cal
G}^{[0200]}(z\bar{z},1-\bar{z})|_{z}]\,,\end{split}$ (23)
after plugging in the results for $\lambda^{2}_{(A,+)_{\ell}}$, and using the
lightcone block with normalization
$\begin{split}&g_{\Delta,\ell}(1-V)=\frac{\Gamma(\ell+1/2)}{4^{\Delta}\sqrt{\pi}\ell!}(1-V)^{\ell}\\\
&\quad\times{}_{2}F_{1}\left(\frac{\Delta+\ell}{2},\frac{\Delta+\ell}{2},\Delta+\ell,1-V\right)\,.\\\
\end{split}$ (24)
The $(B,+)$ and $(B,2)$ OPE coefficients then correspond to the values
$\begin{split}\lambda^{2}_{(B,2)}=\lambda^{2}_{(A,2)_{-1}}\,,\qquad\lambda^{2}_{(B,+)}=\lambda^{2}_{(A,+)_{-2}}\,.\end{split}$
(25)
We can extract the anomalous dimension from the formula
$\begin{split}\gamma^{R|R}_{0,\ell}=&\frac{1}{(\lambda^{(0)}_{2,\ell})^{2}}\Big{(}4R^{[0040]}_{1,R|R}(\ell)+\frac{1}{2}\partial_{\ell}\big{[}(\lambda^{(0)}_{0,\ell})^{2}(\gamma^{R}_{0,\ell})^{2}\big{]}\\\
&-(\lambda^{R}_{0,\ell})^{2}\gamma^{R}_{0,\ell}\Big{)}\,,\end{split}$ (26)
where we have the inversion integral
$\begin{split}&R^{[0040]}_{1,R|R}(\ell)=\frac{512(\ell+1)(\ell+2)(2\ell+3)\Gamma(\ell+1)^{4}}{\Gamma\left(\ell+\frac{1}{2}\right)^{2}\Gamma\left(\ell+\frac{5}{2}\right)^{2}}\\\
&\times\int_{0}^{1}\frac{d\bar{z}}{\bar{z}}g_{\ell+6,\ell}(\bar{z})\text{dDisc}\left.{\cal
G}_{R|R}^{[0040]}(z\bar{z},1-\bar{z})\right|_{z^{3}\log z}\,,\end{split}$ (27)
and the tree and GFFT formula needed above take a more complicated form that
we give in the attached Mathematica file.
For 6d, it is convenient to solve the superconformal Ward identities by
writing $\mathcal{G}(U,V;\sigma,\tau)$ in (3) as
$\begin{split}\mathcal{G}(U,V;\sigma,\tau)=\mathcal{F}(U,V;\sigma,\tau)+\Upsilon\circ\mathcal{H}(U,V)\,,\end{split}$
(28)
where $\mathcal{F}$ is the free theory correlator, $\Upsilon$ is a complicated
differential operator defined in Dolan:2004mu , and $\mathcal{H}(U,V)$ is an
R-symmetry singlet called the reduced correlator. In terms of this reduced
correlator, we can extract the 1-loop OPE coefficient as
$\begin{split}&\lambda^{2}_{\mathcal{B}[02]_{\ell}}=-\frac{\pi(\ell+1)(\ell+4)\Gamma(\ell+5)\Gamma(\ell+7)}{2^{4\ell+19}(\ell+2)\Gamma\left(\ell+\frac{11}{2}\right)\Gamma\left(\ell+\frac{13}{2}\right)}\\\
&\times\int_{0}^{1}d\bar{z}\bar{z}^{4}g_{\ell+11,\ell+1}^{-2,0}(\bar{z})\text{dDisc}\left.\mathcal{H}(z\bar{z},1-\bar{z})\right|_{z^{0}}\,,\end{split}$
(29)
where we define the mixed lightcone block in the 6d normalization as
$\begin{split}&g_{\Delta,\ell}^{-2,0}(1-V)=(V-1)^{\ell}\\\
&\qquad\times{}_{2}F_{1}\left(\frac{\Delta+\ell+2}{2},\frac{\Delta+\ell}{2},\Delta+\ell,1-V\right)\,.\\\
\end{split}$ (30)
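For concreteness, a short sketch (ours) of this mixed block; the only inputs are (30) itself and scipy's hypergeometric function, so this is an illustration rather than production code.

```python
import numpy as np
from scipy.special import hyp2f1

def g_m20(Delta, ell, zb):
    """6d mixed lightcone block of (30), with zbar = 1 - V, so (V-1)^ell = (-zb)^ell."""
    return (-zb) ** ell * hyp2f1((Delta + ell + 2) / 2.0,
                                 (Delta + ell) / 2.0, Delta + ell, zb)

print(g_m20(11, 1, 1e-6))   # ~ -1e-6: dominated by the leading (V-1)^ell factor
```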
The $\mathcal{D}[04]$ OPE coefficient then corresponds to the limit
$\begin{split}\lambda^{2}_{\mathcal{D}[04]}=\lim_{\ell\to-1}(\ell+1)\lambda^{2}_{\mathcal{B}[02]_{\ell}}\,.\end{split}$
(31)
We can extract the anomalous dimension from the formula
$\begin{split}&\gamma^{R|R}_{0,\ell}=\hat{\gamma}^{R|R}_{0,\ell}+\gamma^{\text{extra}}_{\ell}\,,\\\
&\gamma^{\text{extra}}_{\ell}\equiv-\frac{298598400(2\ell+11)\left(\ell^{2}+11\ell+14\right)}{(\ell+1)^{3}(\ell+2)^{3}(\ell+9)^{3}(\ell+10)^{3}}\,,\end{split}$
(32)
where we compute
$\begin{split}&\hat{\gamma}^{R|R}_{0,\ell}=-\frac{45\sqrt{\pi}\,2^{-7-2\ell}\Gamma(\ell+5)}{(\ell+1)(\ell+2)(\ell+9)(\ell+10)\Gamma\left(\ell+\frac{13}{2}\right)}\\\ &\times\int_{0}^{1}d\bar{z}\bar{z}^{4}g_{\ell+11,\ell+1}^{-2,0}(\bar{z})\Big{[}\text{dDisc}\left.\mathcal{H}(z\bar{z},1-\bar{z})\right|_{\log z}\\\ &-\frac{3456(1-\bar{z})\left(397\bar{z}^{3}-2910\bar{z}^{2}+5730\bar{z}-3305\right)}{\bar{z}^{5}}\Big{]}\,.\end{split}$
(33)
## References
* (1) J. M. Maldacena, “The Large $N$ limit of superconformal field theories and supergravity,” Int. J. Theor. Phys. 38 (1999) 1113–1133, hep-th/9711200. [Adv. Theor. Math. Phys.2,231(1998)].
* (2) L. F. Alday and E. Perlmutter, “Growing Extra Dimensions in AdS/CFT,” 1906.01477.
* (3) R. Gopakumar, E. Perlmutter, S. S. Pufu, and X. Yin, “Snowmass White Paper: Bootstrapping String Theory,” 2202.07163.
* (4) E. Witten, “Three-Dimensional Gravity Revisited,” 0706.3359.
* (5) A. Maloney and E. Witten, “Quantum Gravity Partition Functions in Three Dimensions,” JHEP 02 (2010) 029, 0712.0155.
* (6) S. Hellerman, “A Universal Inequality for CFT and Quantum Gravity,” JHEP 08 (2011) 130, 0902.2790.
* (7) C. A. Keller and A. Maloney, “Poincare Series, 3D Gravity and CFT Spectroscopy,” JHEP 02 (2015) 080, 1407.6008.
* (8) S. Collier, Y.-H. Lin, and X. Yin, “Modular Bootstrap Revisited,” JHEP 09 (2018) 061, 1608.06241.
* (9) N. Afkhami-Jeddi, T. Hartman, and A. Tajdini, “Fast Conformal Bootstrap and Constraints on 3d Gravity,” JHEP 05 (2019) 087, 1903.06272.
* (10) T. Hartman, D. Mazáč, and L. Rastelli, “Sphere Packing and Quantum Gravity,” JHEP 12 (2019) 048, 1905.01319.
* (11) H. Maxfield and G. J. Turiaci, “The path integral of 3D gravity near extremality; or, JT gravity with defects as a matrix integral,” JHEP 01 (2021) 118, 2006.11317.
* (12) N. Afkhami-Jeddi, H. Cohn, T. Hartman, and A. Tajdini, “Free partition functions and an averaged holographic duality,” JHEP 01 (2021) 130, 2006.04839.
* (13) A. Maloney and E. Witten, “Averaging over Narain moduli space,” JHEP 10 (2020) 187, 2006.04855.
* (14) L. Rastelli and X. Zhou, “How to Succeed at Holographic Correlators Without Really Trying,” 1710.05923.
* (15) O. Aharony, L. F. Alday, A. Bissi, and E. Perlmutter, “Loops in AdS from Conformal Field Theory,” JHEP 07 (2017) 036, 1612.03891.
* (16) I. Heemskerk, J. Penedones, J. Polchinski, and J. Sully, “Holography from Conformal Field Theory,” JHEP 10 (2009) 079, 0907.0151.
* (17) L. Rastelli and X. Zhou, “Holographic Four-Point Functions in the $(2,0)$ Theory,” 1712.02788.
* (18) X. Zhou, “On Superconformal Four-Point Mellin Amplitudes in Dimension $d>2$,” 1712.02800.
* (19) L. F. Alday and X. Zhou, “All Holographic Four-Point Functions in All Maximally Supersymmetric CFTs,” Phys. Rev. X 11 (2021), no. 1 011056, 2006.12505.
* (20) C. Beem, L. Rastelli, and B. C. van Rees, “The $\mathcal{N}=4$ Superconformal Bootstrap,” Phys.Rev.Lett. 111 (2013), no. 7 071601, 1304.1803.
* (21) C. Beem, M. Lemos, L. Rastelli, and B. C. van Rees, “The (2, 0) superconformal bootstrap,” Phys. Rev. D93 (2016), no. 2 025016, 1507.05637.
* (22) S. M. Chester, J. Lee, S. S. Pufu, and R. Yacoby, “The $\mathcal{N}=8$ superconformal bootstrap in three dimensions,” JHEP 09 (2014) 143, 1406.4814.
* (23) S. M. Chester, J. Lee, S. S. Pufu, and R. Yacoby, “Exact Correlators of BPS Operators from the 3d Superconformal Bootstrap,” JHEP 03 (2015) 130, 1412.0334.
* (24) O. Aharony, O. Bergman, D. L. Jafferis, and J. Maldacena, “${\cal N}=6$ superconformal Chern-Simons-matter theories, M2-branes and their gravity duals,” JHEP 10 (2008) 091, 0806.1218.
* (25) E. Witten, “Some comments on string dynamics,” in STRINGS 95: Future Perspectives in String Theory, pp. 501–523, 7, 1995. hep-th/9507121.
* (26) E. Witten, “Baryons and branes in anti-de Sitter space,” JHEP 07 (1998) 006, hep-th/9805112.
* (27) O. Aharony, Y. Oz, and Z. Yin, “M theory on AdS(p) x S(11-p) and superconformal field theories,” Phys. Lett. B 430 (1998) 87–93, hep-th/9803051.
* (28) L. F. Alday and A. Bissi, “Loop Corrections to Supergravity on $AdS_{5}\times S^{5}$,” Phys. Rev. Lett. 119 (2017), no. 17 171601, 1706.02388.
* (29) F. Aprile, J. M. Drummond, P. Heslop, and H. Paul, “Quantum Gravity from Conformal Field Theory,” JHEP 01 (2018) 035, 1706.02822.
* (30) L. F. Alday, S. M. Chester, and H. Raj, “6d (2,0) and M-theory at 1-loop,” JHEP 01 (2021) 133, 2005.07175.
* (31) L. F. Alday, S. M. Chester, and H. Raj, “ABJM at strong coupling from M-theory, localization, and Lorentzian inversion,” JHEP 02 (2022) 005, 2107.10274.
* (32) L. F. Alday, S. M. Chester, and T. Hansen, “Modular invariant holographic correlators for $\mathcal{N}$ = 4 SYM with general gauge group,” JHEP 12 (2021) 159, 2110.13106.
* (33) L. F. Alday, S. M. Chester, and H. Raj, “M-theory on $AdS_{4}\times S^{7}$ at 1-loop and beyond,” 2207.11138.
* (34) F. A. Dolan, L. Gallot, and E. Sokatchev, “On four-point functions of 1/2-BPS operators in general dimensions,” JHEP 0409 (2004) 056, hep-th/0405180.
* (35) C. Beem, L. Rastelli, and B. C. van Rees, “More ${\mathcal{N}}=4$ superconformal bootstrap,” Phys. Rev. D96 (2017), no. 4 046014, 1612.02363.
* (36) H. Osborn and A. Petkou, “Implications of conformal invariance in field theories for general dimensions,” Annals Phys. 231 (1994) 311–362, hep-th/9307010.
# DURRNet: Deep Unfolded Single Image Reflection Removal Network
Jun-Jie Huang, Tianrui Liu, Zhixiong Yang, Shaojing Fu, Wentao Zhao, and Pier
Luigi Dragotti
###### Abstract
The single image reflection removal problem aims to divide a reflection-
contaminated image into a transmission image and a reflection image. It is a
canonical blind source separation problem and is highly ill-posed. In this
paper, we present a novel deep architecture called deep unfolded single image
reflection removal network (DURRNet), which combines the strengths of model-based and learning-based paradigms and therefore leads to a more interpretable deep architecture. Specifically, we first propose a model-based optimization formulation with a transform-based exclusion prior and then design an iterative algorithm with simple closed-form solutions for each sub-problem. With the deep unrolling technique, we build the
DURRNet with ProxNets to model natural image priors and ProxInvNets which are
constructed with invertible networks to impose the exclusion prior.
Comprehensive experimental results on commonly used datasets demonstrate that
the proposed DURRNet achieves state-of-the-art results both visually and
quantitatively.
## 1 Introduction
Single image reflection removal (SIRR) is a typical blind image separation
problem. It aims to decompose an image, which is captured through a glass and
is associated with reflections, into a transmission image and a reflection
image. The transmission image refers to the image content of the target scene
on the other side of the glass, and the reflection image refers to the image
content from another scene reflected by the glass. This is a highly ill-posed
problem and requires high-level understanding of the scene.
A reflection-contaminated color image $\mathbf{I}\in\mathbb{R}_{+}^{W\times
H\times 3}$ is usually assumed to be a linear combination of a transmission
image $\mathbf{T}\in\mathbb{R}_{+}^{W\times H\times 3}$ and a reflection image
$\mathbf{R}\in\mathbb{R}_{+}^{W\times H\times 3}$, i.e.,
$\mathbf{I}=\mathbf{T}+\mathbf{R}$, where $W$ and $H$ are the width and height
of the image, respectively. Decomposing $\mathbf{I}$ into $\mathbf{T}$ and
$\mathbf{R}$ is a highly ill-posed problem since there is an infinite number of feasible decompositions of the form
$\mathbf{I}=\left(\mathbf{T}+\mathbf{Q}\right)+\left(\mathbf{R}-\mathbf{Q}\right)$,
where $\mathbf{Q}$ is the shared image content between $\mathbf{T}$ and
$\mathbf{R}$. The purpose of image reflection removal is therefore to minimize the shared image content between the decomposed images while maintaining the naturalness of the estimated images.
In order to perform effective reflection removal, suitable priors should be exploited to constrain the problem. Model-based methods [12, 11, 14, 19, 1, 25] formulate the image reflection removal problem as an optimization problem with explicitly defined image priors, for example, the gradient sparsity prior. Model-based methods lead to highly interpretable mathematical formulations and optimization algorithms, though the end result may not be satisfactory when strong and complex reflections are present. On the other hand, methods based on deep learning [4, 20, 28, 24, 23, 22, 13] design task-specific deep network structures and loss functions to exploit data-driven priors. These priors can be learned from large-scale real training data or from the generation of faithful synthetic training data. However, deep-learning based methods are difficult to interpret, and a more principled approach to designing the network structures is needed.
Figure 1: The proposed Deep Unfolded Reflection Removal Layer (DURRLayer) based on deep unfolding. It consists of a transmission estimation network and a reflection estimation network. For each estimation network, a ProxNet updates the features and a ProxInvNet imposes the exclusion condition on the two estimated images.
In this paper, we propose a model-inspired deep network architecture for the image separation task using the deep unrolling technique. We first formulate the
single image reflection removal problem as a convolutional sparse coding
problem with sparsity priors and an exclusion prior, then we propose an
iterative algorithm based on proximal gradient descent to solve the problem.
By using the unfolding technique, we unroll an iteration of the proposed
iterative algorithm into a Deep Unfolded Reflection Removal Layer (DURRLayer)
as shown in Fig. 1. A model-driven multi-scale Deep Unfolded Reflection
Removal Network (DURRNet) is then constructed with DURRLayers in a multi-
resolution fashion. Facilitated by the model-driven deep network structure,
the proposed DURRNet is not only more interpretable, but also achieves high
quality reflection removal results.
The contribution of this paper is three-fold:
* We propose a single image reflection removal convolutional sparse coding model by exploiting the formation model of a reflection-contaminated image and a
transform-based exclusion loss. Based on proximal gradient descent, we propose
an iterative algorithm with simple computations.
* Based on the proposed iterative algorithm, we design a new deep network
architecture for single image reflection removal by unrolling the algorithm
into a deep network with learnable parameters. The proposed DURRNet consists of multiple scales of DURRLayers, which have an exact step-by-step correspondence with the underlying optimization algorithm and are therefore highly interpretable.
* Through extensive experiments, we demonstrate that the proposed DURRNet is
able to achieve effective single image reflection removal and obtains highly
competitive results compared to both the model-based and deep-learning based
single image reflection removal methods.
The rest of the paper is organized as follows: Section 2 reviews related single image reflection removal methods and algorithm unfolding. Section 3 presents the
model formulation, optimization algorithm design and the deep network
architecture of the proposed DURRNet. Section 4 demonstrates the experimental
results and comparisons. Section 5 concludes the paper.
## 2 Related Works
Model-based SIRR Methods [12, 11, 14, 19, 1, 25] formulate the image reflection removal problem as an optimization problem and solve it with optimization tools. The gradient sparsity prior of natural images has been exploited in [12, 11] to obtain decompositions with minimal edges and local features. The relative smoothness prior has been proposed in [14] since the reflected image is usually more blurred. In [25], a convex model which implies a partial differential equation with gradient thresholding is used to suppress the reflection from a single input image. The Laplacian fidelity prior and the $l_{0}$ gradient sparsity prior have been used in [1] to formulate the optimization problem for reflection suppression. In [19], a Gaussian Mixture Model (GMM) has been applied to model the patch prior and exploit the ghosting effects of reflections.
Deep-Learning-based SIRR Methods [4, 20, 28, 24, 23, 22, 13, 7] solve the reflection removal problem by designing proper deep network architectures and loss functions and by exploiting external real or synthetically generated training datasets. The Cascaded Edge and Image Learning Network (CEILNet) [4] consists of two cascaded CNNs, i.e., E-CNN and I-CNN for edge prediction and image reconstruction, respectively. In [28], the exclusion loss, perceptual loss and adversarial loss are proposed to regularize the learning of the reflection separation network. In [24], a bidirectional network (BDN) consisting of a cascaded deep network has been proposed to estimate the reflection image and use it to improve the estimation of the transmission image. ERRNet [22] proposes to utilize misaligned training data with an alignment-invariant loss. In [13], an Iterative Boost Convolutional LSTM Network (IBCLN) has been proposed to progressively separate the reflection-contaminated image into two image layers. In [7], a dual-stream decomposition network has been proposed to enable information exchange between different branches and achieves state-of-the-art single image reflection removal performance.
Deep Unfolding [16] aims to merge model-based and deep-learning based approaches for solving inverse problems (e.g., image restoration problems). The general idea is to design an iterative algorithm for the problem at hand and then convert certain steps of the iterative algorithm into learnable parameters. In the seminal work [5], Gregor and LeCun proposed to convert the iterative shrinkage-thresholding algorithm (ISTA) into a deep network by setting the dictionaries in ISTA as learnable parameters. In [26], ADMM-Net has been proposed to unfold the Alternating Direction Method of Multipliers (ADMM) algorithm for compressive sensing Magnetic Resonance Imaging (MRI) reconstruction. In [27], a deep unfolding network for single image super-resolution has been proposed by unfolding a Maximum-a-Posteriori (MAP) formulation via a half-quadratic splitting algorithm and interpreting the prior term as a denoiser. The Deep Unrolling for Blind Deblurring (DUBLID) network [15] unfolds a total-variation-based blind deconvolution algorithm and contains a very small number of learnable parameters. In [21], Wang et al. proposed a model-inspired rain removal deep unfolding network based on proximal gradient descent to simplify computations. Recently, Pu et al. [18] proposed a self-supervised deep unfolding network for separating X-ray images of artworks.
## 3 Proposed Method
In this section, we will first introduce the proposed model-based optimization
formulation for single image reflection removal and then we solve the
optimization using an iterative algorithm based on proximal gradient descent.
Finally we present the proposed Deep Unfolded Reflection Removal Network
(DURRNet) architecture based on the designed iterative algorithm and detail
the training strategy.
### 3.1 Model Formulation
A reflection-contaminated color image $\mathbf{I}\in\mathbb{R}_{+}^{W\times
H\times 3}$ can be expressed as a linear combination of a transmission image
and a reflection image [1]. Therefore, we can represent
the observed reflection-contaminated color image as
$\mathbf{I}=\mathbf{T}+\mathbf{R}$, where
$\mathbf{T}\in\mathbb{R}_{+}^{W\times H\times 3}$ and
$\mathbf{R}\in\mathbb{R}_{+}^{W\times H\times 3}$ are the transmission image
and the reflection image, respectively. $W$ and $H$ are the width and height
of the image.
The reflection image is usually considered as a blurred version of the
reflected scene due to the effect of the glass. With different
characteristics, $\mathbf{T}$ and $\mathbf{R}$ are assumed to have two
different representations over a transmission dictionary $\mathbf{D}_{T}$ and
a reflection dictionary $\mathbf{D}_{R}$, respectively. Based on the
Convolutional Sparse Coding (CSC) model [17, 3], we propose to formulate the reflection removal problem as:
$\displaystyle\underset{\mathbf{z}_{T},\mathbf{z}_{R}}{\min}$
$\displaystyle\frac{1}{2}\|\mathbf{I}-\sum_{i=1}^{N}\mathbf{D}_{T}^{i}\otimes\mathbf{z}_{T}^{i}-\sum_{j=1}^{N}\mathbf{D}_{R}^{j}\otimes\mathbf{z}_{R}^{j}\|_{F}^{2}+\lambda_{T}p_{T}(\mathbf{z}_{T})+\lambda_{R}p_{R}(\mathbf{z}_{R}),$
(1)
where $\mathbf{D}_{T}=[\mathbf{D}_{T}^{1},\cdots,\mathbf{D}_{T}^{N}]$ and
$\mathbf{D}_{R}=[\mathbf{D}_{R}^{1},\cdots,\mathbf{D}_{R}^{N}]$ are the
transmission convolutional dictionary and the reflection dictionary and
$\mathbf{z}_{T}=[\mathbf{z}_{T}^{1},\cdots,\mathbf{z}_{T}^{N}]$ and
$\mathbf{z}_{R}=[\mathbf{z}_{R}^{1},\cdots,\mathbf{z}_{R}^{N}]$ are the
features corresponding to $\mathbf{T}$ and $\mathbf{R}$, respectively. Here
$\otimes$ denotes the convolution operator and $N$ is the number of filters.
Moreover, $\lambda_{T}$, $\lambda_{R}$ are regularization parameters, and
$p_{T}(\cdot)$ and $p_{R}(\cdot)$ represent the prior terms for the features of $\mathbf{T}$ and $\mathbf{R}$, respectively.
The exclusion loss [28] is based on the idea that if two images do not contain shared content, then their edges and contours will only overlap in a small region. In [28], the exclusion loss is applied as a training loss function to facilitate the training of the image reflection removal network. It measures the degree of edge overlap between two
images in a multi-scale manner and can be expressed as:
$\displaystyle\mathcal{L}_{\text{e}}=\sum_{j=1}^{J}||\Psi(f^{\downarrow
j}(\mathbf{T}),f^{\downarrow j}(\mathbf{R}))||_{F},$ (2)
where
$\Psi(\mathbf{T},\mathbf{R})=\tanh(\beta_{T}|\nabla\mathbf{T}|)\odot\tanh(\beta_{R}|\nabla\mathbf{R}|)$,
$\beta_{T}$ and $\beta_{R}$ are normalization factors, moreover, $\odot$
denotes element-wise multiplication, $\nabla\mathbf{T}$ and $\nabla\mathbf{R}$
denote the gradients of $\mathbf{T}$ and $\mathbf{R}$, respectively. Finally,
$f^{\downarrow j}(\cdot)$ denotes the downsampling operation by a factor
$2^{j-1}$ with bilinear interpolation.
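To make Eq. (2) concrete, a minimal PyTorch sketch of the multi-scale exclusion loss is given below; the forward-difference gradient operator, the constant normalization factors and the default number of scales are illustrative assumptions rather than the exact choices of [28]:

```python
import torch
import torch.nn.functional as F

def gradients(x):
    """Forward-difference image gradients along height and width."""
    gx = x[:, :, 1:, :] - x[:, :, :-1, :]
    gy = x[:, :, :, 1:] - x[:, :, :, :-1]
    return gx, gy

def exclusion_loss(T, R, num_scales=3, beta_T=1.0, beta_R=1.0):
    """Multi-scale exclusion loss of Eq. (2) for batched (B, C, H, W) images."""
    loss = 0.0
    for _ in range(num_scales):
        for gT, gR in zip(gradients(T), gradients(R)):
            # Psi(T, R) = tanh(beta_T |grad T|) * tanh(beta_R |grad R|)
            psi = torch.tanh(beta_T * gT.abs()) * torch.tanh(beta_R * gR.abs())
            loss = loss + psi.pow(2).sum().sqrt()  # Frobenius norm of Psi
        # Downsample by a factor of 2 with bilinear interpolation for the next scale.
        T = F.interpolate(T, scale_factor=0.5, mode="bilinear", align_corners=False)
        R = F.interpolate(R, scale_factor=0.5, mode="bilinear", align_corners=False)
    return loss
```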
In our model, we aim to explicitly include the exclusion constraint in the optimization formulation for reflection removal; however, Eq. (2) does not lead to easy-to-compute solutions. Inspired by [10], which proposed a proximal-gradient algorithm for minimizing a Total-Variation-regularized least-squares cost functional, a transform-based exclusion loss has been proposed in [18]:
$\mathcal{L}_{\text{te}}(\mathbf{T},\mathbf{R})=\sum_{m=1}^{M}\|\left(\mathbf{W}_{m}\otimes\mathbf{T}\right)\odot\left(\mathbf{W}_{m}\otimes\mathbf{R}\right)\|_{1},$
(3)
where $\mathbf{W}=[\mathbf{W}_{1},\cdots,\mathbf{W}_{M}]$ denotes the high-
pass filters of a transform with $\mathbf{W}_{m}$ being the $m$-th filter.
This new formulation uses high-pass filters of a transform to extract high-
frequency information from the image and measures the element-wise correlation
between each pair of “edge” images in the $l_{1}$ norm. This enables a simple closed-form solution for the optimization problem.
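For illustration, Eq. (3) admits a very compact implementation; in the sketch below, the high-pass filter bank `W` is an assumed input (e.g., wavelet high-pass filters):

```python
import torch.nn.functional as F

def transform_exclusion_loss(T, R, W):
    """Transform-based exclusion loss of Eq. (3).

    T, R: (B, C, H, W) images; W: (M, C, k, k) high-pass filter bank
    (assumed given), so each output channel is one filtered "edge" image.
    """
    pad = W.shape[-1] // 2
    hT = F.conv2d(T, W, padding=pad)  # W_m (x) T for all m at once
    hR = F.conv2d(R, W, padding=pad)  # W_m (x) R
    # l1 norm of the element-wise product, summed over the M filters.
    return (hT * hR).abs().sum()
```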
Based on Eq. (1) and Eq. (3), we propose to formulate the reflection removal
problem as a convolutional sparse coding problem:
$\displaystyle\underset{\mathbf{z}_{T},\mathbf{z}_{R}}{\min}$
$\displaystyle\frac{1}{2}\|\mathbf{I}-\mathbf{D}_{T}\otimes\mathbf{z}_{T}-\mathbf{D}_{R}\otimes\mathbf{z}_{R}\|_{F}^{2}+\lambda_{T}p_{T}(\mathbf{z}_{T})+\lambda_{R}p_{R}(\mathbf{z}_{R})$
(4)
$\displaystyle+\kappa\mathcal{L}_{\text{te}}(\mathbf{D}_{T}\otimes\mathbf{z}_{T},\mathbf{D}_{R}\otimes\mathbf{z}_{R}),$
where with a slight abuse of notation, we denote
$\mathbf{D}_{T}\otimes\mathbf{z}_{T}=\sum_{i=1}^{N}\mathbf{D}_{T}^{i}\otimes\mathbf{z}_{T}^{i}$
and
$\mathbf{D}_{R}\otimes\mathbf{z}_{R}=\sum_{i=1}^{N}\mathbf{D}_{R}^{i}\otimes\mathbf{z}_{R}^{i}$,
and $\kappa$ is the regularization parameter for the exclusion term.
In Eq. (4), the transmission image and the reflection image are modelled as linear combinations of atoms from the transmission dictionary and the
reflection dictionary; the data fidelity term ensures the estimated
transmission image and the reflection image contain sufficient information of
the observed image; the two prior terms, $p_{T}(\mathbf{z}_{T})$ and
$p_{R}(\mathbf{z}_{R})$ regularize the features for the transmission and the
reflection image, and the transform-based exclusion term
$\mathcal{L}_{\text{te}}$ is used to further facilitate the separation of
image contents on the two images.
### 3.2 Optimization Algorithm
Based on the model formulation defined in Eq. (4), in this section, we design
an algorithm which iteratively solves simpler sub-problems for which we can provide closed-form solutions. Since the features $\bm{z}_{T}$ and $\bm{z}_{R}$
appear in the data fidelity term, the prior terms and the exclusion terms, it
is difficult to optimize all these terms jointly. Therefore, we introduce two auxiliary variables $\hat{\mathbf{T}}=\mathbf{D}_{T}\otimes\mathbf{z}_{T}$ and $\hat{\mathbf{R}}=\mathbf{D}_{R}\otimes\mathbf{z}_{R}$. With the Half-Quadratic Splitting (HQS) algorithm, Eq. (4) can then be reformulated as:
$\displaystyle\underset{\mathbf{z}_{T},\mathbf{z}_{R},\hat{\mathbf{T}},\hat{\mathbf{R}}}{\min}$
$\displaystyle\frac{1}{2}\|\mathbf{I}-\hat{\mathbf{T}}-\hat{\mathbf{R}}\|_{F}^{2}+\frac{\tau}{2}\|\hat{\mathbf{T}}-\mathbf{D}_{T}\otimes\mathbf{z}_{T}\|_{F}^{2}+\frac{\tau}{2}\|\hat{\mathbf{R}}-\mathbf{D}_{R}\otimes\mathbf{z}_{R}\|_{F}^{2}$
(5)
$\displaystyle+\lambda_{T}p_{T}(\mathbf{z}_{T})+\lambda_{R}p_{R}(\mathbf{z}_{R})+\kappa\sum_{m=1}^{M}\|(\mathbf{W}_{m}\otimes\hat{\mathbf{T}})\odot(\mathbf{W}_{m}\otimes\hat{\mathbf{R}})\|_{1},$
where $\tau$ is a regularization parameter. This formulation minimizes over
features $\mathbf{z}_{T},\mathbf{z}_{R}$ and the two auxiliary variables
$\hat{\mathbf{T}},\hat{\mathbf{R}}$. Based on Proximal Gradient Descent (PGD)
[2, 21], we propose an iterative algorithm to
sequentially update
$\mathbf{z}_{T},\mathbf{z}_{R},\hat{\mathbf{T}},\hat{\mathbf{R}}$ with simple
computations.
Updating $\mathbf{z}_{T}$: The sub-problem corresponding to $\mathbf{z}_{T}$
can be solved using quadratic approximation:
$\underset{\mathbf{z}_{T}}{\min}\frac{1}{2}\|\mathbf{z}_{T}-\left(\mathbf{z}_{T}^{(k)}-\eta_{1}\nabla
f(\mathbf{z}_{T}^{(k)})\right)\|_{F}^{2}+\frac{\eta_{1}\lambda_{T}}{\tau}p_{T}(\mathbf{z}_{T}),$
(6)
where $\eta_{1}$ denotes the step-size for updating, the superscript $(k)$
denotes the results from the $k$-th iteration, and
$f(\mathbf{z}_{T})=\frac{1}{2}\|\hat{\mathbf{T}}-\mathbf{D}_{T}\otimes\mathbf{z}_{T}\|_{F}^{2}$.
Therefore, its solution can be expressed as:
$\mathbf{z}_{T}^{(k+1)}=\text{prox}_{\eta_{1}\lambda_{T}/\tau}\left(\mathbf{z}_{T}^{(k)}-\eta_{1}\nabla
f(\mathbf{z}_{T}^{(k)})\right),$ (7)
where $\text{prox}_{\eta_{1}\lambda_{T}/\tau}(\cdot)$ is the proximal operator
corresponding to the prior term $p_{T}(\cdot)$, $\nabla
f(\mathbf{z}_{T}^{(k)})=-\mathbf{D}_{T}^{(k)}\otimes^{T}(\hat{\mathbf{T}}-\mathbf{D}_{T}^{(k)}\otimes\mathbf{z}_{T}^{(k)})$,
and $\otimes^{T}$ denotes the transposed convolution (the operation $\otimes^{T}$ can be implemented using the function “torch.nn.ConvTranspose2d” in PyTorch).
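As an illustration of Eq. (7), a sketch of one update of $\mathbf{z}_{T}$ is given below, with the dictionary stored as a convolution weight and `prox` standing for the proximal operator of $p_{T}(\cdot)$ (later realized by a learned ProxNet); the tensor shapes are assumptions for concreteness:

```python
import torch.nn.functional as F

def update_zT(zT, T_hat, D_T, prox, eta1):
    """One proximal-gradient step for z_T, Eq. (7).

    zT:   (B, N, H, W) features; T_hat: (B, 3, H, W) auxiliary image.
    D_T:  (3, N, k, k) dictionary weight, so D_T (x) z_T is F.conv2d(zT, D_T).
    prox: proximal operator of the prior p_T (assumed given).
    """
    pad = D_T.shape[-1] // 2
    residual = T_hat - F.conv2d(zT, D_T, padding=pad)        # T_hat - D_T (x) z_T
    grad = -F.conv_transpose2d(residual, D_T, padding=pad)   # grad f(z_T), via (x)^T
    return prox(zT - eta1 * grad)
```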
Updating $\mathbf{z}_{R}$: The updating rule of $\mathbf{z}_{R}$ is similar to
that of $\mathbf{z}_{T}$ and can be expressed as:
$\mathbf{z}_{R}^{(k+1)}=\text{prox}_{\eta_{2}\lambda_{R}/\tau}\left(\mathbf{z}_{R}^{(k)}-\eta_{2}\nabla
h(\mathbf{z}_{R}^{(k)})\right),$ (8)
where $\eta_{2}$ denotes the step-size for updating,
$\text{prox}_{\eta_{2}\lambda_{R}/\tau}(\cdot)$ is the proximal operator
corresponding to the prior term $p_{R}(\cdot)$, and $\nabla h(\mathbf{z}_{R}^{(k)})=-\mathbf{D}_{R}^{(k)}\otimes^{T}(\hat{\mathbf{R}}-\mathbf{D}_{R}^{(k)}\otimes\mathbf{z}_{R}^{(k)})$, where $h(\mathbf{z}_{R})=\frac{1}{2}\|\hat{\mathbf{R}}-\mathbf{D}_{R}\otimes\mathbf{z}_{R}\|_{F}^{2}$.
Updating $\hat{\mathbf{T}}$: The sub-problem with respect to
$\hat{\mathbf{T}}$ can be expressed as:
$\displaystyle\underset{\hat{\mathbf{T}}}{\min}$
$\displaystyle\frac{1}{2}\|\mathbf{I}-\hat{\mathbf{T}}-\hat{\mathbf{R}}\|_{F}^{2}+\frac{\tau}{2}\|\hat{\mathbf{T}}-\mathbf{D}_{T}\otimes\mathbf{z}_{T}\|_{F}^{2}+\kappa\sum_{m=1}^{M}\|(\mathbf{W}_{m}\otimes\hat{\mathbf{T}})\odot(\mathbf{W}_{m}\otimes\hat{\mathbf{R}})\|_{1}.$
(9)
The quadratic approximation of Eq. (9) can similarly be expressed as:
$\displaystyle\underset{\hat{\mathbf{T}}}{\min}$
$\displaystyle\frac{1}{2}\|\hat{\mathbf{T}}-(\hat{\mathbf{T}}^{(k)}-\eta_{3}\nabla
u(\hat{\mathbf{T}}^{(k)}))\|_{F}^{2}+\kappa\sum_{m=1}^{M}\|(\mathbf{W}_{m}\otimes\hat{\mathbf{R}}^{(k)})\odot(\mathbf{W}_{m}\otimes\hat{\mathbf{T}})\|_{1},$
(10)
where
$u(\hat{\mathbf{T}})=\frac{1}{2}\|\mathbf{I}-\hat{\mathbf{T}}-\hat{\mathbf{R}}^{(k)}\|_{F}^{2}+\frac{\tau}{2}\|\hat{\mathbf{T}}-\mathbf{D}_{T}\otimes\mathbf{z}_{T}^{(k+1)}\|_{F}^{2}$.
Therefore $\nabla
u(\hat{\mathbf{T}})=-(\mathbf{I}-\hat{\mathbf{R}}^{(k)}-\hat{\mathbf{T}})+\tau(\hat{\mathbf{T}}-\mathbf{D}_{T}\otimes\mathbf{z}_{T}^{(k+1)})$.
When optimizing with respect to $\hat{\mathbf{T}}$, the estimated reflection
image $\hat{\mathbf{R}}$ is assumed to be fixed. Therefore, the transform
coefficients of the reflection image $\mathbf{W}_{m}\otimes\hat{\mathbf{R}}$
in the proposed transform-based exclusion loss can be treated as an element-
wise regularization parameter for the transform coefficients
$\mathbf{W}_{m}\otimes\hat{\mathbf{T}}$ of the transmission image.
Consequently, the solution to Eq. (9) can be expressed in terms of the
proximal operator for the proposed transform-based exclusion loss:
$\hat{\mathbf{T}}^{(k+1)}=\sum_{m=1}^{M}\mathbf{W}_{m}^{\dagger}\otimes\mathcal{S}_{\kappa|\mathbf{W}_{m}\otimes\hat{\mathbf{R}}^{(k)}|}(\mathbf{W}_{m}\otimes\phi(\hat{\mathbf{T}}^{(k)})),$
(11)
where $\phi(\hat{\mathbf{T}}^{(k)})=\hat{\mathbf{T}}^{(k)}-\eta_{3}\nabla
u(\hat{\mathbf{T}}^{(k)})$ and $\mathbf{W}_{m}^{\dagger}$ denotes the inverse
filter of $\mathbf{W}_{m}$.
The proximal operator is the soft-thresholding operator applied to the transform coefficients of $\phi(\hat{\mathbf{T}}^{(k)})$. The soft-thresholds ${\kappa|\mathbf{W}_{m}\otimes\hat{\mathbf{R}}^{(k)}|}$ are position-dependent
and based on the transform coefficients of the estimated reflection image
$\hat{\mathbf{R}}^{(k)}$. After soft-thresholding, the updated transmission
image is reconstructed using inverse transform with the soft-thresholded
transform coefficients.
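A sketch of the update in Eq. (11) with a fixed analysis/synthesis filter pair is given below; in DURRNet the fixed filters are replaced by the forward and backward passes of an invertible network, and `W_syn` denotes assumed synthesis filters implementing $\mathbf{W}_{m}^{\dagger}$:

```python
import torch
import torch.nn.functional as F

def soft_threshold(x, thresh):
    """Element-wise soft-thresholding with a position-dependent threshold."""
    return torch.sign(x) * torch.clamp(x.abs() - thresh, min=0.0)

def update_T_hat(phi_T, R_hat, W, W_syn, kappa):
    """Proximal update of Eq. (11).

    phi_T: the gradient step phi(T_hat^(k)), computed beforehand.
    W:     (M, 3, k, k) analysis (high-pass) filters.
    W_syn: (3, M, k, k) synthesis filters implementing the inverse transform.
    """
    pad = W.shape[-1] // 2
    coeff_T = F.conv2d(phi_T, W, padding=pad)   # W_m (x) phi(T_hat)
    coeff_R = F.conv2d(R_hat, W, padding=pad)   # W_m (x) R_hat
    # Thresholds depend on the reflection coefficients at each position.
    shrunk = soft_threshold(coeff_T, kappa * coeff_R.abs())
    return F.conv2d(shrunk, W_syn, padding=pad)  # reconstruct the image
```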
Updating $\hat{\mathbf{R}}$: Similar to the updating rule for
$\hat{\mathbf{T}}$, we can express the solution to the sub-problem
corresponding to $\hat{\mathbf{R}}$ as follows:
$\hat{\mathbf{R}}^{(k+1)}=\sum_{m=1}^{M}\mathbf{W}_{m}^{\dagger}\otimes\mathcal{S}_{\kappa|\mathbf{W}_{m}\otimes\hat{\mathbf{T}}^{(k+1)}|}(\mathbf{W}_{m}\otimes\psi(\hat{\mathbf{R}}^{(k)})),$
(12)
where $\psi(\hat{\mathbf{R}}^{(k)})=\hat{\mathbf{R}}^{(k)}-\eta_{4}\nabla
v(\hat{\mathbf{R}}^{(k)})$ and $\nabla
v(\hat{\mathbf{R}})=-(\mathbf{I}-\hat{\mathbf{R}}-\hat{\mathbf{T}})+\tau(\hat{\mathbf{R}}-\mathbf{D}_{R}\otimes\mathbf{z}_{R}^{(k+1)})$.
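One full iteration of the resulting algorithm therefore alternates the four updates above. Schematically (a sketch only; the helper functions are placeholders for the closed-form updates just derived, with all fixed quantities gathered in `params`):

```python
def durr_iteration(zT, zR, T_hat, R_hat, I, params):
    """One iteration (k -> k+1) of the proposed iterative algorithm."""
    zT = update_zT(zT, T_hat, params)                   # Eq. (7)
    zR = update_zR(zR, R_hat, params)                   # Eq. (8)
    T_hat = update_T_hat(T_hat, R_hat, zT, I, params)   # Eq. (11), uses the new zT
    R_hat = update_R_hat(R_hat, T_hat, zR, I, params)   # Eq. (12), uses the new T_hat
    return zT, zR, T_hat, R_hat
```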
### 3.3 Deep Unfolded Reflection Removal Network (DURRNet)
Figure 2: The proposed Deep Unfolded Reflection Removal Network (DURRNet). It
consists of $S$ scales of DURRLayers to gradually estimate the transmission
and the reflection images from low-resolution scales to the resolution of the
input image. At each scale, there are $K$ stages of DURRLayers. $\downarrow 2$
and $\uparrow 2$ denotes bilinear interpolation by a factor of 0.5 and 2,
respectively.
In this section, by using the unfolding technique, we construct a model-driven
multi-scale Deep Unfolded Reflection Removal Network (DURRNet) with multiple
Deep Unfolded Reflection Removal Layers (DURRLayers). Each DURRLayer unrolls
an iteration of the proposed iterative algorithm for single image reflection
removal.
Overall Architecture: As shown in Fig. 2, the proposed DURRNet is designed in
a multi-resolution fashion. There are $S$ scales of DURRLayers to effectively
exploit information at different scales for separating the input image into a
transmission image and a reflection image. Each scale consists of $K$
DURRLayers. At the lowest scale, the initial transmission image
$\mathbf{T}_{S}$, reflection image $\mathbf{R}_{S}$ and features
$\mathbf{z}_{T,S},\mathbf{z}_{R,S}$ are initialized based on the down-sampled
input image and the hyper-column feature [28, 6] of the input image using bilinear interpolation by a factor $2^{S-1}$, respectively.
reflection images are initialized based on the $2$ times up-sampled version
estimated from its lower scale, and the features are initialized based on the
down-sampled hyper-column feature of the input image and the up-sampled
feature estimated from the lower scale. The multi-scale architecture performs
image separation in a coarse-to-fine manner and can therefore effectively
combine information from different scales.
DURRLayer: Fig. 1 shows the network structure for the proposed Deep Unfolded
Reflection Removal Layer (DURRLayer) which corresponds to one iteration of the
proposed iterative algorithm. The model-inspired DURRLayer ensures that the estimated transmission and reflection images can faithfully reconstruct the input image and that the prior information is properly imposed. For each image layer,
a proximal network ProxNet is used to impose the prior for the feature, and a
proximal network based on invertible network ProxInvNet is proposed to impose
the exclusion prior for each estimated image.
ProxNet: Similar to [21], the proximal operators for
$\mathbf{z}_{T}$ and $\mathbf{z}_{R}$ in Eq. (7) and (8) are represented by
two deep convolutional networks
$\text{ProxNet}_{\mathbf{\theta}_{\mathbf{z}_{T}}}(\cdot)$ and
$\text{ProxNet}_{\mathbf{\theta}_{\mathbf{z}_{R}}}(\cdot)$ whose parameters
are learned from the training dataset to well capture the prior information.
The updating rules for $\mathbf{z}_{T}$ and $\mathbf{z}_{R}$ can therefore be
expressed as:
$\begin{cases}\nabla
f(\mathbf{z}_{T}^{(k)})=-\mathbf{K}_{T}^{(k)}\otimes^{T}\left(\hat{\mathbf{T}}^{(k)}-\mathbf{D}_{T}^{(k)}\otimes\mathbf{z}_{T}^{(k)}\right),\\\
\mathbf{z}_{T}^{(k+1)}=\text{ProxNet}_{\mathbf{\theta}_{\mathbf{z}_{T}}}\left(\mathbf{z}_{T}^{(k)}-\nabla
f(\mathbf{z}_{T}^{(k)})\right),\\\ \end{cases}$ (13) $\begin{cases}\nabla
h(\mathbf{z}_{R}^{(k)})=-\mathbf{K}_{R}^{(k)}\otimes^{T}\left(\hat{\mathbf{R}}^{(k)}-\mathbf{D}_{R}^{(k)}\otimes\mathbf{z}_{R}^{(k)}\right),\\\
\mathbf{z}_{R}^{(k+1)}=\text{ProxNet}_{\mathbf{\theta}_{\mathbf{z}_{R}}}\left(\mathbf{z}_{R}^{(k)}-\nabla
h(\mathbf{z}_{R}^{(k)})\right),\end{cases}$ (14)
where the convolutional dictionaries $\mathbf{D}_{T}^{(k)}$,
$\mathbf{D}_{R}^{(k)}$, $\mathbf{K}_{T}^{(k)}$, and $\mathbf{K}_{R}^{(k)}$ and
the parameters of the proximal networks $\mathbf{\theta}_{\mathbf{z}_{T}}$ and
$\mathbf{\theta}_{\mathbf{z}_{R}}$ are learnable parameters.
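A minimal sketch of such a learned proximal operator is given below; the depth and channel width are placeholders, and the exact architecture used in DURRNet is the one shown in Fig. 4:

```python
import torch.nn as nn

class ProxNet(nn.Module):
    """A learned proximal operator acting on N-channel feature maps."""

    def __init__(self, channels=64, depth=3):
        super().__init__()
        layers = []
        for _ in range(depth):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, z):
        # Global skip connection: the network learns a residual correction to z.
        return z + self.body(z)
```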
Figure 3: The proposed Proximal Invertible Network
$\text{ProxInvNet}_{\mathbf{\theta}_{\mathbf{T}}}(\cdot,\cdot)$. The invertible network (InvNet) serves as an invertible transform: it transforms images to the coefficient domain using its forward pass and then transforms the coefficients back to the image domain using its backward pass.
ProxInvNet: For the proximal operator of the transform-based exclusion term, a direct option is to apply a wavelet transform to extract edge information, use a soft-thresholding operator to suppress common content and then reconstruct the image using the inverse wavelet transform. However, a fixed transform may not be sufficiently flexible to handle complex reflections. Inspired by the use of invertible networks as learnable invertible transforms [8, 9], we propose to use invertible networks to construct a learnable proximal operator $\text{ProxInvNet}_{\mathbf{\theta}}(\cdot)$ for imposing the exclusion condition. The forward pass of the invertible network serves as the forward transform, and the backward pass then serves as the corresponding inverse transform. The updating rules for $\hat{\mathbf{T}}$ and $\hat{\mathbf{R}}$ can be expressed as:
$\begin{cases}\mathcal{E}_{\mathbf{T}}^{(k+1)}=\hat{\mathbf{T}}^{(k)}-\mathbf{D}_{T}^{(k)}\otimes\mathbf{z}_{T}^{(k+1)},\\\
\phi(\hat{\mathbf{T}}^{(k)})=\hat{\mathbf{T}}^{(k)}+\eta_{T}\left((\mathbf{I}-\hat{\mathbf{R}}^{(k)}-\hat{\mathbf{T}}^{(k)})-\tau_{T}\mathcal{E}_{\mathbf{T}}^{(k+1)}\right),\\\
\hat{\mathbf{T}}^{(k+1)}=\text{ProxInvNet}_{\mathbf{\theta}_{\mathbf{T}}}\left(\phi(\hat{\mathbf{T}}^{(k)}),\hat{\mathbf{R}}^{(k)}\right),\\\
\end{cases}$ (15)
$\begin{cases}\mathcal{E}_{\mathbf{R}}^{(k+1)}=\hat{\mathbf{R}}^{(k)}-\mathbf{D}_{R}^{(k)}\otimes\mathbf{z}_{R}^{(k+1)},\\\
\psi(\hat{\mathbf{R}}^{(k)})=\hat{\mathbf{R}}^{(k)}+\eta_{R}\left((\mathbf{I}-\hat{\mathbf{R}}^{(k)}-\hat{\mathbf{T}}^{(k+1)})-\tau_{R}\mathcal{E}_{\mathbf{R}}^{(k+1)}\right),\\\
\hat{\mathbf{R}}^{(k+1)}=\text{ProxInvNet}_{\mathbf{\theta}_{\mathbf{R}}}\left(\psi(\hat{\mathbf{R}}^{(k)}),\hat{\mathbf{T}}^{(k+1)}\right),\end{cases}$
(16)
where the convolutional dictionaries $\mathbf{D}_{T}^{(k)}$ and
$\mathbf{D}_{R}^{(k)}$, and step size parameter $\eta_{T}$, $\eta_{R}$,
$\tau_{T}$, and $\tau_{R}$ and the parameters of the proximal invertible
networks $\mathbf{\theta}_{\mathbf{T}}$ and $\mathbf{\theta}_{\mathbf{R}}$ are
learnable parameters.
Fig. 3 shows the diagram for
$\text{ProxInvNet}_{\mathbf{\theta}_{\mathbf{T}}}(\cdot,\cdot)$. The forward pass of the invertible network is applied as the forward transform to extract features from $\phi(\hat{\mathbf{T}}^{(k)})$ and $\hat{\mathbf{R}}^{(k)}$. In
the Threshold Network (ThreNet), the feature of $\hat{\mathbf{R}}$ will be
concatenated with that of $\phi(\hat{\mathbf{T}}^{(k)})$ and then they pass
through a convolutional network with residual blocks to generate corrections
for the feature of $\hat{\mathbf{T}}$. The updated feature of
$\hat{\mathbf{T}}$ will then be converted back to image domain using the
backward pass of the invertible networks. Similar operations can be performed
when updating $\hat{\mathbf{R}}$. The forward and backward pass of the
invertible networks are constructed by the same set of $P$ pairs of prediction
and updater networks (PUNets); for details, please refer to [9].
Figure 4: The network architectures for ProxNet, PUNet and ThreNet used to
construct the proposed DURRLayer. The blue and green blocks represent
convolutional layers and ReLU activation layers, respectively. The yellow
blocks represent residual blocks.
### 3.4 Training Details
Apart from the proposed exclusion loss we introduced in Section 3.1, we adopt
the reconstruction loss and the perceptual loss [28] for
training:
$\mathcal{L}=\mathcal{L}_{{r}}+\lambda_{e}\mathcal{L}_{{e}}+\lambda_{p}\mathcal{L}_{{p}},$
(17)
where $\lambda_{e}=0.01$ and $\lambda_{p}=0.01$ are regularization parameters.
The reconstruction loss $\mathcal{L}_{\text{r}}$ is applied to the estimated
transmission image $\widehat{\mathbf{T}}$ and reflection image
$\widehat{\mathbf{R}}$ as well as the reconstructed image based on the final
features:
$\displaystyle\mathcal{L}_{{r}}=$
$\displaystyle\|\mathbf{T}-\widehat{\mathbf{T}}\|_{2}^{2}+\|\mathbf{T}-\mathbf{D}_{T}\otimes\mathbf{z}_{T}\|_{2}^{2}+\|\mathbf{R}-\hat{\mathbf{R}}\|_{2}^{2}+\|\mathbf{R}-\mathbf{D}_{R}\otimes\mathbf{z}_{R}\|_{2}^{2}.$
(18)
The perceptual loss [28] is used to encourage high perceptual quality in the estimated images by minimizing the $l_{1}$ difference
between the VGG features of the estimated and the ground-truth images:
$\mathcal{L}_{{p}}=\|\tau(\mathbf{T})-\tau(\widehat{\mathbf{T}})\|_{1}+\|\tau(\mathbf{R})-\tau(\widehat{\mathbf{R}})\|_{1},$
(19)
where $\tau(\cdot)$ denotes the features of the VGG-19 model pretrained on
ImageNet dataset.
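Putting Eqs. (17)–(19) together, the training objective can be assembled as in the sketch below, where `vgg` is an assumed feature extractor for the pretrained VGG-19 model and `exclusion_loss` refers to Eq. (2):

```python
def total_loss(T, R, T_hat, R_hat, T_feat, R_feat, vgg, exclusion_loss,
               lam_e=0.01, lam_p=0.01):
    """Training objective of Eq. (17).

    T_hat, R_hat: estimated images; T_feat = D_T (x) z_T and
    R_feat = D_R (x) z_R are reconstructed from the final features (Eq. (18)).
    """
    l_rec = ((T - T_hat) ** 2).sum() + ((T - T_feat) ** 2).sum() \
          + ((R - R_hat) ** 2).sum() + ((R - R_feat) ** 2).sum()
    l_exc = exclusion_loss(T_hat, R_hat)                   # Eq. (2)
    l_per = (vgg(T) - vgg(T_hat)).abs().sum() \
          + (vgg(R) - vgg(R_hat)).abs().sum()              # Eq. (19)
    return l_rec + lam_e * l_exc + lam_p * l_per
```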
## 4 Experimental Results
### 4.1 Implementation Details
The proposed method is implemented with PyTorch, and the models are optimized with the Adam optimizer with an initial learning rate of $10^{-4}$, which is decayed by a factor of 0.5 at epochs 10, 15, and 20. The total number of epochs is 25, and an early stopping strategy is used. The experiments were performed on a computer with an RTX 3090 Ti GPU.
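In PyTorch terms, this schedule corresponds to the following setup (a sketch; `model`, `train_loader` and `compute_loss` are placeholder names):

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Halve the learning rate at epochs 10, 15 and 20, as described above.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[10, 15, 20], gamma=0.5)

for epoch in range(25):
    for I, T, R in train_loader:      # input and ground-truth image pair
        optimizer.zero_grad()
        T_hat, R_hat = model(I)
        loss = compute_loss(T, R, T_hat, R_hat)   # the objective of Eq. (17)
        loss.backward()
        optimizer.step()
    scheduler.step()
```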
The number of scales $S$ in DURRNet is set to 4 and the number of DURRLayer
stages in each scale is set to 2. The number of feature channels is set to 64.
The forward and backward passes of the invertible networks consist of
$P=2$ pairs of PUNets. The network architectures of ProxNet, PUNet, ThreNet
used to construct DURRLayer are illustrated in Fig. 4. All the networks are
constructed using convolutional layers, ReLU layers and Residual blocks.
Table 1: Quantitative comparisons on the Real20 testing dataset [28] of different methods. (The best scores are in bold.)

| Metrics | CEILNet | Zhang et al. | BDN | IBCLN | YTMT | DURRNet |
|---|---|---|---|---|---|---|
| PSNR | 18.45 | 22.55 | 18.41 | 21.86 | 23.26 | **23.61** |
| SSIM | 0.690 | 0.788 | 0.726 | 0.762 | **0.806** | 0.804 |
Table 2: Quantitative comparisons on the Nature testing dataset [13] of different methods. (The best scores are in bold.)

| Metrics | CEILNet-F | Zhang et al. | BDN-F | IBCLN | YTMT | DURRNet |
|---|---|---|---|---|---|---|
| PSNR | 19.33 | 19.56 | 18.92 | 23.57 | 23.85 | **24.29** |
| SSIM | 0.745 | 0.736 | 0.737 | 0.783 | **0.810** | 0.806 |
Figure 5: Visual comparisons of the estimated transmission images (rows 1, 3 and 5) and the estimated reflection images (rows 2, 4 and 6) of different single image reflection removal methods on the Real20 dataset [28]. Panels: (a) Zhang et al., (b) BDN, (c) IBCLN, (d) DURRNet, (e) GT. The last column shows the ground-truth transmission and reflection images for reference.
### 4.2 Comparison with State-of-the-Art Methods
In this section, we quantitatively and visually compare our DURRNet with other
single image reflection removal methods, including the CEILNet method [4], Zhang et al.'s method [28], the BDN method [24], the IBCLN method [13] and the YTMT method [7]. Table 1 shows the quantitative evaluation results of different single image reflection removal methods evaluated on the Real20 dataset [28]. The training data consist of reflection-contaminated images synthetically generated from 7643 image pairs from the PASCAL VOC dataset, following the settings in CEILNet [4], together with 90 pairs of real images from [28]. The testing dataset contains 20 images from Real20 [28]. From Table 1, we can see that on the Real20 dataset the proposed DURRNet achieves significantly better PSNR compared to the other methods and achieves a similar SSIM result to the YTMT method.
Table 2 shows the quantitative comparison results on the Nature testing dataset [13]. The comparison follows the settings in [13]. An additional 200 training image pairs from the Nature training dataset [13] were used for training, and the other models (with the suffix “-F”) were fine-tuned on the Nature training dataset for fair comparisons. We can see that the proposed DURRNet achieves the highest PSNR
value and the second best SSIM value among all the methods.
For visual comparisons, Fig. 5 shows the estimated transmission and reflection
images by different methods on 3 exemplar images from the Real20 dataset [28]. This is a challenging dataset since the input
images contain different reflection patterns and the region of overlap is
large. From Fig. 5, we can see that the proposed DURRNet is able to recover
natural looking transmission and reflection images. This could be due to the
fact that the proposed deep unfolded network architecture takes the image
formation model into consideration and prior information has been properly
imposed into the network architecture. Among the comparison methods, Zhang et al.'s method is able to separate most reflections in the input but may generate images with visible artifacts; the BDN method does not successfully remove strong reflections and usually generates reflection images with too much transmission image content; and the IBCLN method struggles to separate large overlapping reflections.
Fig. 6 further shows visual comparisons on the Real45 dataset [4], which does not contain ground-truth images for reference.
We can see that the proposed DURRNet is able to properly separate the
reflection image content from the input reflection-contaminated image and the
separated reflection images contain little information from the transmission
image.
Figure 6: Visual comparisons of different single image reflection removal methods on the Real45 dataset [4]. Panels: (a) Input, (b) Zhang et al., (c) BDN, (d) IBCLN, (e) DURRNet. Rows 1 and 3 show the estimated transmission images; rows 2 and 4 show the estimated reflection images.
### 4.3 Ablation Studies
The effectiveness of ProxNet/ProxInvNet: In the proposed DURRLayer, the main
network components are the ProxNet and the ProxInvNet, which are used to impose the natural image prior and the exclusion prior, respectively. To understand their
functionalities, we perform ablation studies on these network components.
Table 3: Quantitative performance of the proposed DURRNet with different variations. The performance of the different models is evaluated on the Real20 dataset [28].

| Settings | DURRNet | w/o ProxNet | w/o ProxInvNet | $(S,K)=(1,8)$ | $(S,K)=(2,4)$ |
|---|---|---|---|---|---|
| PSNR | 23.61 | 22.63 | 22.61 | 22.47 | 22.74 |
| SSIM | 0.803 | 0.787 | 0.788 | 0.787 | 0.794 |
From Table 3, we can see that when ProxNets or ProxInvNets are removed from
DURRNet there is approximately a 1 dB drop in PSNR. Therefore they are both
essential components of the proposed DURRNet. To further visualize the
functionality of ProxNet and ProxInvNet, Fig. 7 shows the single image
reflection removal results of DURRNet w/o ProxNet, DURRNet w/o ProxInvNet, and the complete DURRNet model. We can see that when either
ProxNets or ProxInvNets are disabled, the model can still produce relatively
good results. This could be due to the network architecture design for ProxNet
which includes a global skip connection, and ProxInvNet which adopts
invertible networks as learnable transforms. In Fig. 7 (b), when ProxNets are disabled, the model has difficulty localizing the reflection region, and in Fig. 7 (c), when ProxInvNets are disabled, the model has difficulty dealing with the contour regions of the reflections.
Figure 7: Visualization of the different effects of ProxNets and ProxInvNets in the proposed DURRNet. Panels: (a) Input, (b) w/o ProxNet, (c) w/o ProxInvNet, (d) DURRNet.
The effectiveness of Multi-scale Architecture: As shown in Fig. 2, the
proposed DURRNet consists of $S$ scales of DURRLayers to progressively
estimate the transmission image and the reflection image from low-resolution
scales to high-resolution scales. In Table 3, we further analyze the effectiveness of the multi-scale architecture. From the table, we can see that when the total number of DURRLayer stages is fixed, i.e., $S\times K=8$,
the proposed DURRNet (with $(S,K)=(4,2)$) achieves the best performance
compared to other configurations, e.g., $(S,K)=(1,8)$ and $(S,K)=(2,4)$. This
indicates that the multi-scale architecture can effectively and efficiently
integrate information from different scales.
## 5 Conclusions
In this paper, we proposed a novel model-inspired single image reflection
removal network named Deep Unfolded Reflection Removal Network (DURRNet). The
proposed DURRNet is designed using the deep unfolding technique and has a clear interpretation. The image formation model and priors have been explicitly
embedded into the design of the DURRNet architecture. Within each DURRLayer,
ProxNets are used to model natural image priors and ProxInvNets which are
constructed with invertible networks are used to impose the exclusion prior.
Experimental results show that the proposed DURRNet is able to recover high-quality transmission and reflection images both quantitatively and visually.
## References
* [1] N. Arvanitopoulos, R. Achanta, and S. Susstrunk. Single image reflection suppression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4498–4506, 2017.
* [2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM journal on imaging sciences, 2(1):183–202, 2009.
* [3] H. Bristow, A. Eriksson, and S. Lucey. Fast convolutional sparse coding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 391–398, 2013.
* [4] Q. Fan, J. Yang, G. Hua, B. Chen, and D. Wipf. A generic deep architecture for single image reflection removal and image smoothing. In Proceedings of the IEEE International Conference on Computer Vision, pages 3238–3247, 2017.
* [5] K. Gregor and Y. LeCun. Learning fast approximations of sparse coding. In Proceedings of the 27th international conference on international conference on machine learning, pages 399–406, 2010.
* [6] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 447–456, 2015.
* [7] Q. Hu and X. Guo. Trash or treasure? an interactive dual-stream strategy for single image reflection separation. Advances in Neural Information Processing Systems, 34, 2021.
* [8] J.-J. Huang and P. L. Dragotti. LINN: Lifting inspired invertible neural network for image denoising. In 2021 29th European Signal Processing Conference (EUSIPCO), pages 636–640, 2021.
* [9] J.-J. Huang and P. L. Dragotti. WINNet: Wavelet-inspired invertible network for image denoising. arXiv preprint arXiv:2109.06381, 2021.
* [10] U. S. Kamilov. Parallel proximal methods for total variation minimization. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4697–4701. IEEE, 2016.
* [11] A. Levin and Y. Weiss. User assisted separation of reflections from a single image using a sparsity prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(9):1647–1654, 2007.
* [12] A. Levin, A. Zomet, and Y. Weiss. Separating reflections from a single image using local features. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004., volume 1, pages I–I. IEEE, 2004.
* [13] C. Li, Y. Yang, K. He, S. Lin, and J. E. Hopcroft. Single image reflection removal through cascaded refinement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3565–3574, 2020.
* [14] Y. Li and M. S. Brown. Single image layer separation using relative smoothness. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2752–2759, 2014.
* [15] Y. Li, M. Tofighi, J. Geng, V. Monga, and Y. C. Eldar. Efficient and interpretable deep blind image deblurring via algorithm unrolling. IEEE Transactions on Computational Imaging, 6:666–681, 2020.
* [16] V. Monga, Y. Li, and Y. C. Eldar. Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing. IEEE Signal Processing Magazine, 38(2):18–44, 2021.
* [17] V. Papyan, Y. Romano, and M. Elad. Convolutional neural networks analyzed via convolutional sparse coding. The Journal of Machine Learning Research, 18(1):2887–2938, 2017.
* [18] W. Pu, J.-J. Huang, B. Sober, N. Daly, C. Higgitt, I. Daubechies, P. L. Dragotti, and M. Rodrigues. Mixed X-ray image separation for artworks with concealed designs. arXiv preprint arXiv:2201.09167, 2022.
* [19] Y. Shih, D. Krishnan, F. Durand, and W. T. Freeman. Reflection removal using ghosting cues. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3193–3201, 2015.
* [20] R. Wan, B. Shi, L.-Y. Duan, A.-H. Tan, and A. C. Kot. Benchmarking single-image reflection removal algorithms. In Proceedings of the IEEE International Conference on Computer Vision, pages 3922–3930, 2017.
* [21] H. Wang, Q. Xie, Q. Zhao, and D. Meng. A model-driven deep neural network for single image rain removal. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3100–3109, 2020.
* [22] K. Wei, J. Yang, Y. Fu, D. Wipf, and H. Huang. Single image reflection removal exploiting misaligned training data and network enhancements. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8178–8187, 2019.
* [23] Q. Wen, Y. Tan, J. Qin, W. Liu, G. Han, and S. He. Single image reflection removal beyond linearity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3771–3779, 2019.
* [24] J. Yang, D. Gong, L. Liu, and Q. Shi. Seeing deeply and bidirectionally: A deep learning approach for single image reflection removal. In Proceedings of the european conference on computer vision (ECCV), pages 654–669, 2018.
* [25] Y. Yang, W. Ma, Y. Zheng, J.-F. Cai, and W. Xu. Fast single image reflection suppression via convex optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8141–8149, 2019.
* [26] Y. Yang, J. Sun, H. Li, and Z. Xu. Deep ADMM-net for compressive sensing mri. In Proceedings of the 30th international conference on neural information processing systems, pages 10–18, 2016.
* [27] K. Zhang, L. V. Gool, and R. Timofte. Deep unfolding network for image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3217–3226, 2020.
* [28] X. Zhang, R. Ng, and Q. Chen. Single image reflection separation with perceptual losses. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4786–4794, 2018.
# Leveraging Prior Knowledge in Reinforcement Learning via
Double-Sided Bounds on the Value Function
Jacob Adamczyk, Stas Tiomkin, Rahul V. Kulkarni
###### Abstract
An agent’s ability to leverage past experience is critical for efficiently
solving new tasks. Approximate solutions for new tasks can be obtained from
previously derived value functions, as demonstrated by research on transfer
learning, curriculum learning, and compositionality. However, prior work has
primarily focused on using value functions to obtain zero-shot approximations
for solutions to a new task. In this work, we show how an arbitrary
approximation for the value function can be used to derive double-sided bounds
on the optimal value function of interest. We further extend the framework
with error analysis for continuous state and action spaces. The derived
results lead to new approaches for clipping during training which we validate
numerically in simple domains.
## Introduction
The field of reinforcement learning (RL) has seen impressive successes
(Degrave et al. 2022; Schrittwieser et al. 2020; Vinyals et al. 2019; Silver
et al. 2018) in recent years due to the development of novel algorithms in
combination with deep learning architectures. However, for complex tasks, the
amount of training time required for learning an optimal solution from scratch
can be prohibitively large and thus presents a significant obstacle to further
development. To address this challenge, approaches that leverage prior
knowledge to efficiently calculate policies for new tasks are needed. While
policies generated from prior solutions may not be the optimal policies for
the new tasks, they can serve as useful approximations that reduce training
time. Correspondingly, there is a need to develop approaches that further
leverage the use of approximations based on prior knowledge to address the
problem of solving new tasks.
Previous work has focused on addressing this problem using different
approaches such as transfer learning, curriculum learning, and
compositionality. In particular, we consider value-based RL approaches,
wherein the agent’s goal is to learn the expected value of every state and
action pair. Given this value function, $Q(s,a)$, the agent can act optimally
by choosing actions which maximize its expected future returns. In many
instances, the agent has an estimate for the value function before training
begins. For example, in the case of curriculum learning, the agent has the
$Q$-values for previously learned (progressively more challenging) tasks. In
the case of compositional or hierarchical RL, the agent can combine knowledge
by applying a function on subtasks’ $Q$-values. When using an exploratory
skill-acquisition approach such as DIAYN (Eysenbach et al. 2019) or CSD (Park
et al. 2023), the agent obtains $Q$-values for a diverse set of skills. Even
in cases where an initial estimate is not explicitly provided, the agent can
provide itself an estimate by using Q-values that were obtained during the
ongoing learning phase (bootstrapping).
An underlying question in these scenarios is the following: How can the agent
use the known value function estimate(s) for solving a new target task? Does
the estimate only serve as a zero-shot approximation or is there additional
useful information that can be extracted from it?
In the work of (Adamczyk et al. 2023a), the authors show that there exists a
method of “closing the gap” between any estimate ($Q^{*}(s,a)$) and any target
($\widetilde{Q}^{*}(s,a)$) task (with an accessible reward function) in
entropy-regularized RL. This statement is facilitated by the work of (Cao,
Cohen, and Szpruch 2021) which can be used to show that any estimate can be
viewed as an optimal value function corresponding to a suitably defined reward
function. Here, we show that since the gap between the target and estimated
value functions: $\widetilde{Q}^{*}(s,a)-Q^{*}(s,a)=K^{*}(s,a)$ is itself an
optimal value function, it can be bounded. As a consequence, instead of
providing only a zero-shot approximation or a warmstart for training the
target task, we show that the estimates available to the agent also provide a
double-sided bound on the optimal $Q$-values being learned.
A schematic illustration of our approach is provided in Fig. 1. Starting with
an estimate of the optimal value function and samples of the reward function,
we derive double-sided bounds on the true optimal value function. We find that
applying these bounds during training improves the agent’s training
performance and allows an additional method for monitoring convergence. We
provide further theoretical analysis on continuous state-action spaces,
relevant for the function approximator (FA) setting in Deep RL.
Main contributions
The main contributions of our work, applicable to both standard and entropy-
regularized RL, are:
1. Development of a general framework for bounding optimal value functions based
on prior knowledge.
2. Extension of derived results to include theoretical error analysis in
continuous state-action spaces.
3. Demonstration of value-based clipping methods as practical applications of the
derived theoretical results.
Figure 1: Schematic illustration of the main contribution of this work. Given
any approximation (red curve) to the optimal value function of interest (black
curve), we derive double-sided bounds (blue curves) that lead to clipping approaches during training. That is, based solely on the current approximation for $Q(s,a)$, we bound the unknown optimal value function $Q^{*}(s,a)$. In the right panel, we show the
different clipping methods, which are described further in the “Experimental
Validation” section. In “Hard Clipping”, the target is replaced with the
exceeded bound; in “Soft Clipping”, an additional loss term is appended to the
Bellman loss, proportional to the magnitude of the bound violation; in
“Smoothed Clipping”, the target update is replaced with a weighted average of
the original value and the exceeded bound.
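A sketch of how the three clipping variants could enter a temporal-difference loss is given below; the function names, the squared-error loss and the mixing weight are illustrative assumptions, not the exact implementation used in the experiments:

```python
import torch
import torch.nn.functional as F

def clipped_td_loss(q_pred, td_target, lower, upper,
                    mode="hard", alpha=0.5, weight=1.0):
    """TD loss with double-sided bounds (lower, upper) on the value function."""
    if mode == "hard":
        # Hard clipping: replace the target with the bound it exceeds.
        target = torch.clamp(td_target, min=lower, max=upper)
        return F.mse_loss(q_pred, target)
    if mode == "smoothed":
        # Smoothed clipping: weighted average of the original target
        # and the exceeded bound.
        clipped = torch.clamp(td_target, min=lower, max=upper)
        return F.mse_loss(q_pred, alpha * td_target + (1 - alpha) * clipped)
    # Soft clipping: keep the target, add a penalty proportional to
    # the magnitude of the bound violation.
    violation = F.relu(q_pred - upper) + F.relu(lower - q_pred)
    return F.mse_loss(q_pred, td_target) + weight * violation.mean()
```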
There are multiple applications that arise from the derivation of such double-
sided bounds. The bounds (1) allow confinement of FA training to a limited
output range, (2) provide a mechanism to choose the “best” skill from a pre-
trained set of skills and (3) establish a framework that provides insights
into and extends previous results on exact compositions of value functions.
## Preliminaries
For the theoretical setup, we consider initially the case of finite, discrete
state and action spaces, and we will subsequently extend our analysis to
continuous spaces. In this setting, the reinforcement learning (RL) problem is
modeled by a Markov Decision Process (MDP) represented as a tuple
$\langle\mathcal{S},\mathcal{A},p,r,\gamma\rangle$ where $\mathcal{S}$ is the
set of available states; $\mathcal{A}$ is the set of possible actions;
$p:\mathcal{S}\times\mathcal{A}\to\mathcal{S}$ is the transition function
(dynamics); $r:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ is a (bounded)
reward function which associates a reward (or cost) with each state-action
pair; and $\gamma\in(0,1)$ is a discount factor which discounts future rewards
and assures convergence of the total reward for an infinitely long trajectory.
The objective in standard (un-regularized) RL is to find an optimal policy
that maximizes expected rewards collected by the agent, i.e.
$\pi^{*}=\arg\max_{\pi}\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})\right].$
(1)
An important generalization is entropy-regularized RL (Ziebart 2010), which
augments the un-regularized RL objective (Eq. (1)) by including an entropic
regularization term which penalizes deviations from a pre-specified reference policy:
$\pi^{*}=\arg\max_{\pi}\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}\left(r_{t}-\frac{1}{\beta}\log\left(\frac{\pi(a_{t}|s_{t})}{\pi_{0}(a_{t}|s_{t})}\right)\right)\right]$
where $\pi_{0}(a|s)$ is the fixed prior policy. The additional control cost
discourages the agent from choosing policies that deviate too much from this
prior policy. Importantly, entropy-regularized MDPs lead to stochastic optimal
policies that are provably robust to perturbations of rewards and dynamics
(Eysenbach and Levine 2022), making them a more suitable approach to real-world problems.
The solution to the RL problem is defined by its optimal action-value function
($Q^{*}(s,a)$) from which one can derive the aforementioned optimal policy
$\pi^{*}(a|s)$. For both un-regularized and entropy-regularized RL, the
optimal value function can be obtained by iterating a recursive Bellman
equation. In un-regularized RL, the Bellman optimality equation is given by
(Sutton and Barto 2018):
$Q^{*}(s,a)=r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim{}p(\cdot|s,a)}\max_{a^{\prime}}\left(Q^{*}(s^{\prime},a^{\prime})\right).$
(2)
The entropy term in the objective function of entropy-regularized RL modifies
the previous optimality equation in the following way (Ziebart 2010; Haarnoja
et al. 2018b):
$Q^{*}(s,a)=r(s,a)+\frac{\gamma}{\beta}\mathbb{E}_{s^{\prime}\sim{}p}\log\mathbb{E}_{a^{\prime}\sim{}\pi_{0}}e^{\beta
Q^{*}(s^{\prime},a^{\prime})}.$ (3)
The regularization parameter $\beta$ can be interpreted as analogous to an inverse temperature; its value controls the degree of stochasticity in the optimal policy. In the entropy-regularized setting,
$Q^{*}$ is referred to as the optimal “soft” action-value function. For
brevity, we will hereon refer to $Q^{*}$ simply as the value function.
## Prior Work
The importance of double-sided bounds on value functions has been explored in
prior work. In this section we review a set of the most relevant prior works
(Nemecek and Parr 2021; Kim, Park, and Kim 2022; Haarnoja et al. 2018a;
Adamczyk et al. 2023b; Todorov 2009; Van Niekerk et al. 2019; Tasse, James,
and Rosman 2020; Lee et al. 2021). We contrast the existing works with regard
to the following features: i) the assumption about composition and/or
transformation of known solutions in the derivation of bounds, ii) the
requirement for additional samples needed to derive bounds, iii) the
generality and applicability of bounds to un-regularized RL and entropy-
regularized RL, and to deterministic and stochastic dynamics, iv) double or
single-sided bounds.
In (Nemecek and Parr 2021), the authors have derived double-sided bounds on
the state value function $V(s)$ by the positive conical combination of subtask
rewards. The method in (Nemecek and Parr 2021) requires additional samples for
first learning the successor features before then deriving the double-sided
bounds for a downstream task. The applicability of (Nemecek and Parr 2021) is
limited to un-regularized RL.
The aforementioned work was subsequently extended by (Kim, Park, and Kim
2022), where, in the same GPI setting, they present double-sided bounds on
$Q$-values for linear combinations of subtask reward functions. They introduce
the notion of “soft clipping” which we adapt to our setting (details in the
“Experimental Validation” section), but it was not demonstrated in practice.
Similarly to (Nemecek and Parr 2021), the method in (Kim, Park, and Kim 2022)
must first learn the successor features, and it is limited to un-regularized RL.
The previous two works were focused on the standard (un-regularized)
reinforcement learning setting. However, the double-sided bounds presented in
(Haarnoja et al. 2018a)’s Lemma 1 are derived for the MaxEnt setting, for the
case of convex reward combinations. It is worth noting that the lower bound in
this case must be learned (the $C$ function). Extending these results to other
more general classes of functional composition, (Adamczyk et al. 2023b)
provides double-sided bounds for both entropy-regularized and un-regularized
RL. However, one side of the bound in all cases must be learned as well.
Finally, multiple prior works have focused on specific examples of
compositionality for which exact results can be obtained for the optimal value
function. These results typically involve multiple limiting assumptions on the structure of reward functions, the nature of transition dynamics, and specific forms for the composition function (Todorov 2009; Van Niekerk et al. 2019; Tasse, James, and Rosman 2020). In a broader context, (Lee et al. 2021)
proposes to bound “Bellman updates”, which improves the stability of training
and sample efficiency in entropy-regularized RL. However, the method in (Lee
et al. 2021) does not leverage known solutions for new tasks, instead using a
parallel ensemble of learners for variance estimation.
In the current work we propose a novel method for the derivation of double-
sided bounds, which is not limited to a particular type of composition or
transformation of prior solution(s), and is valid for an arbitrary function.
Our method is a “zero-shot” approach for deriving double-sided bounds – it
does not require additional samples beyond those collected by the learning
agent. It is applicable to both standard and entropy-regularized RL, to
deterministic and stochastic environments, and to discrete and continuous
domains. The theoretical results are provided in the following “Results”
section, and in the “Applications” section we demonstrate the applications of
the theory in simple domains, leaving large scale experiments to future work.
## Results
In this section, we focus on entropy-regularized (MaxEnt) RL, the case
considered in (Adamczyk et al. 2023a). The analogous results for un-
regularized RL (which can be considered as a limiting case of entropy-
regularized RL) are provided later. The proofs of all results shown can be
found in the Appendix.
Our main result provides double-sided bounds on the optimal $Q$ function. We
emphasize that any (bounded) function
$Q:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ can be used to generate a bound.
We suggestively use the notation “$Q$” for this otherwise arbitrary function
to note that it can be derived from a previous tasks’ solution, an estimate,
or other ansatz (e.g. composition or hierarchical function) of subtask
$Q$-values.
###### Theorem 4.1.
Consider an entropy-regularized MDP
$\langle\mathcal{S},\mathcal{A},p,r,\gamma,\beta\rangle$ with (unknown)
optimal value function $Q^{*}(s,a)$. Let an estimate for the value function
$Q(s,a)$ be given. Denote
$V(s)~{}\doteq~{}1/\beta\log\operatorname*{\mathbb{E}}_{a\sim\pi_{0}}\exp\beta
Q(s,a)$.
The optimal value function $Q^{*}(s,a)$ is then bounded by:
$\displaystyle Q^{*}(s,a)$ $\displaystyle\geq
r(s,a)+\gamma\left(\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}V(s^{\prime})+\frac{\inf\Delta}{1-\gamma}\right)$
(4a) $\displaystyle Q^{*}(s,a)$ $\displaystyle\leq
r(s,a)+\gamma\left(\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}V(s^{\prime})+\frac{\sup\Delta}{1-\gamma}\right)$
(4b)
where
$\Delta(s,a)\doteq
r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}V(s^{\prime})-Q(s,a).$
In Eq. (4a) and (4b), the $\inf$ and $\sup$ are taken over the continuous
state-action space $\mathcal{S}\times\mathcal{A}$.
During training, the Bellman loss $\mathcal{L}=||\Delta||^{2}\to 0$, implying
that $\inf\Delta\to 0$ and $\sup\Delta\to 0$; hence the bounds in Eqs. (4a) and (4b)
will become tight upon convergence of the soft action-value function. We note
that this is generally not the case for un-regularized RL, as will be
discussed later.
In principle, given some assumptions on the structure of the reward function
or dynamics, it is possible to tighten these bounds. As an example, we provide
a tighter lower bound when the MDP always has an “identity” action allowing
the agent to return to the same state:
###### Lemma 4.1a.
Consider an entropy-regularized MDP
$\langle\mathcal{S},\mathcal{A},p,r,\gamma,\beta\rangle$ with (unknown)
optimal value function $Q^{*}(s,a)$. Let an estimate for the value function
$Q(s,a)$ be given. Denote
$V(s)~{}\doteq~{}1/\beta\log\operatorname*{\mathbb{E}}_{a\sim\pi_{0}}\exp\beta
Q(s,a)$. Suppose there exists an “identity” action
$a_{\emptyset}(s)\in\mathcal{A}$ for each state, which deterministically
transitions the agent to the same state:
$p(s^{\prime}|s,a_{\emptyset}(s))=\delta(s^{\prime}-s)$ for all
$s\in\mathcal{S}$.
Then the lower bound on the optimal value function $Q^{*}(s,a)$ can be
improved:
$Q^{*}(s,a)\geq
r(s,a)+\gamma\left(V(s^{\prime})+\frac{1}{1-\gamma}\Delta(s^{\prime},a_{\emptyset})\right)$
(5)
In the Appendix, we show that the lower bound of Eq. (5) is indeed tighter
than Eq. (4a) at all state-actions except the minimizer
$(s^{*},a^{*})=\textrm{arginf}\ \Delta(s,a)$.
As an alternative, in practice, one can replace the $\inf$ and $\sup$ in the
previous results by a $\min$ and $\max$, respectively, over the finite dataset
provided (e.g. the current batch of replay data). Although not exact, this
substitution becomes increasingly accurate for large batch sizes. We employ
this substitution in the experiments shown in section Experimental Validation.
Nevertheless, we provide an exact extension of our results in the subsequent
section for sufficiently well-behaved state-action spaces.
In a similar manner, we may also bound the rate of suboptimality induced by
using the policy derived from some estimate $Q(s,a)$:
###### Corollary 4.2 (Suboptimality Bounds).
Let policy $\pi(a|s)$ be given with soft value $Q^{\pi}(s,a)$. The rate of the
suboptimality gap, $Q^{*}(s,a)-Q^{\pi}(s,a)$, is then bounded between
$\inf_{(s,a)}d(s,a)\leq\frac{Q^{*}(s,a)-Q^{\pi}(s,a)}{H}\leq\sup_{(s,a)}d(s,a)$
(6)
where $d(s,a)\doteq
r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}}V^{\pi}(s^{\prime})-Q^{\pi}(s,a)$,
$V^{\pi}(s)\doteq\beta^{-1}\log\operatorname*{\mathbb{E}}_{a}\exp\beta Q^{\pi}(s,a)$ is
the soft state-value function, and $H=(1-\gamma)^{-1}$ is the effective time
horizon.
This result implies that any policy with a known soft value function has a
(lower and upper) bounded suboptimality. The typically-stated objective of
minimizing the Bellman loss can be understood as minimizing the suboptimality
suffered by the induced policy $\pi\propto\exp\beta Q$.
We conclude this section by showing that a new Bellman operator, which
includes clipping when applicable, converges to the optimal $Q$ function:
###### Theorem 4.3.
Let the functions $L(s,a),U(s,a)$ be lower and upper bounds on the optimal
value function: $L(s,a)~{}\leq~{}Q^{*}(s,a)~{}\leq U(s,a)$ for all
$s\in\mathcal{S}$ and $a\in\mathcal{A}$. The clipped Bellman operator,
$\mathcal{B}_{C}Q(s,a)~{}:=~{}\max\left(\min\left(\mathcal{B}Q(s,a),U(s,a)\right),L(s,a)\right)$
converges to the optimal value function
$Q^{*}(s,a)~{}=~{}\mathcal{B}_{C}^{\infty}Q(s,a)$.
This result shows that updates with clipping are guaranteed to converge to the
same solution. We experimentally demonstrate this in Fig. 3.
### Error Propagation in Continuous Spaces
The bounds presented in the previous section, though exact, are often
intractable due to the required global extremization over continuous state-
action spaces. One cannot access the global extrema of $\Delta$ given only
finitely many samples in state-action space. Thus, we provide the following
bounds, allowing for the extension of our results to (sufficiently well-
behaved) continuous spaces. In this section, we loosen those bounds by
relaxing the required extremization with a simpler optimization over a given
discrete batch of replay data.
We begin with some helpful definitions.
###### Definition 1.
A function $\bar{X}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ is an
$\varepsilon$-optimal approximation of
$X(s,a):\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ if it satisfies
$\left\lvert\bar{X}(s,a)-X(s,a)\right\rvert\leq\varepsilon$ for all
$s\in\mathcal{S},a\in\mathcal{A}$.
###### Definition 2.
The diameter of a bounded metric space, $\mathcal{X}$, endowed with a metric
$d(\cdot,\cdot)\to\mathbb{R}_{\geq 0}$ is a constant $D\in\mathbb{R}_{>0}$
such that $d(x_{1},x_{2})\leq D$ for all $x_{1},x_{2}\in\mathcal{X}$.
###### Lemma 4.4.
Let $\mathcal{S}\times\mathcal{A}$ be a bounded metric space with diameter
$D$, and let $r:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ be
$L_{r}$-Lipschitz (w.r.t. the same metric). Then the global extrema of
$r(s,a)$ on $\mathcal{S}\times\mathcal{A}$ are bounded as follows:
$\displaystyle\sup_{s\in\mathcal{S},a\in\mathcal{A}}r(s,a)$
$\displaystyle\leq\min_{(s,a)\in\mathcal{D}}r(s,a)+L_{r}D$
$\displaystyle\inf_{s\in\mathcal{S},a\in\mathcal{A}}r(s,a)$
$\displaystyle\geq\max_{(s,a)\in\mathcal{D}}r(s,a)-L_{r}D$
where $\mathcal{D}$ is the dataset of $(s,a)$ tuples available for querying
the magnitude of $r$ (e.g. the current batch or buffer).
As an example, in the case that one uses the simple upper bound,
$Q(s,a)\leq\frac{1}{1-\gamma}\sup r(s,a)$, over a finite-sized batch of replay
experience $\\{s_{i},a_{i},r_{i},s_{i+1}\\}_{i=1}^{T}$, one can bound the
(intractable) $\sup$ which is taken over all state-action space: $\sup
r(s,a)\leq\min_{i}r_{i}+L_{r}||(D_{\mathcal{S}},D_{\mathcal{A}})||_{p}$.
In the case of continuous spaces, we cannot calculate the state-value function
directly, so one typically resorts to actor-critic methods (Haarnoja et al.
2018b) where a policy network $\pi$ and value network $Q$ are trained
together. In this case, one must calculate the entropy-regularized state-value
function as
$V^{\pi}(s)=\operatorname*{\mathbb{E}}_{a\sim{}\pi}\left[Q^{\pi}(s,a)-\beta^{-1}\log\pi(a|s)\right]$.
However, the expectation over continuously many actions is intractable in the
general case. A standard solution is to parameterize the policy network with a simple but expressive distribution at each state, for instance a Gaussian actor $\mathcal{N}(\mu(s),\sigma(s))$. With knowledge of the means and
variances, the sampling error can be bounded as we show below.
###### Theorem 4.5.
Let an entropy-regularized MDP be given with an $L_{Q}$-Lipschitz value
function $\bar{Q}^{\pi}$. Using a Gaussian parameterization for the associated
policy $\pi(\cdot|s)=\mathcal{N}(\mu(s),\sigma(s))$, suppose that
$\bar{Q}^{\pi}$ is an $\varepsilon$-optimal approximation of the policy’s true
value, $Q^{\pi}$.
By estimating the state-value function as:
$\bar{V}^{\pi}(s)=\bar{Q}^{\pi}(s,\mu)-\frac{1}{\beta}\operatorname*{\mathbb{E}}_{a\sim{}\pi}\log\frac{\pi(a|s)}{\pi_{0}(a|s)},$
(7)
the error in using such an approximation is upper bounded:
$|\bar{V}^{\pi}(s)-V^{\pi}(s)|\leq\sqrt{\frac{2}{\pi}}L_{Q}\sigma(s)e^{-\mu(s)^{2}/2\sigma(s)^{2}}+\varepsilon$
In the case that the function $Q$ used is an optimal value function for an
$(L_{r},L_{p})$-Lipschitz task, with a policy whose variance is lower bounded
$\sigma(s)\geq\sigma_{\text{min}}$ and $\gamma L_{p}(1+L_{\mathcal{N}})<1$,
where $L_{\mathcal{N}}=\sigma_{\text{min}}^{-2}(2\pi e)^{-1/2}$ is the
Lipschitz constant of the Gaussian distribution, then the Lipschitz constant
for $Q$ can be computed as:
$L_{Q}=\frac{L_{r}+\gamma L_{p}(\beta\sigma_{\min})^{-1}}{1-\gamma
L_{p}(1+L_{\mathcal{N}})}.$ (8)
As the policy becomes deterministic ($\sigma\to 0$), in the un-regularized
limit ($\beta\sigma\to\infty$), the error reduces to zero as expected (since
accurately sampling a deterministic policy only requires one action). Further,
the Lipschitz constant in Eq. (8) matches that of the un-regularized case
(Rachelson and Lagoudakis 2010). Although the expectation in Eq. (7) appears
intractable, the Gaussian parameterization allows it to be calculable, since
the entropy of the policy only depends on its variance. Under the stated
hypotheses, this allows us to translate our bounds in Theorem 4.1 to the
continuous setting. However, satisfying these hypotheses (e.g. the restriction
on $\gamma$) may be challenging in practice. One way of circumventing this is
to consider works such as (Fazlyab et al. 2019), where one can estimate the
Lipschitz constant of the neural net ($Q$-function) being used to generate
bounds.
We note that with the Gaussian policy parameterization, the relative entropy
(second term in Eq. (7)) can be computed exactly from the mean action. In
principle, the analysis may be extended to other policy parameterizations. For
simplicity, the analysis is carried out for single-dimensional action spaces
in the $p=1$ norm, which is easily generalized to other contexts.
These results allow us to derive the following upper and lower bounds in
continuous spaces (an extension of Theorem 4.1), when the $Q$-function used
for deriving $\Delta$ is known to be $L_{Q}$-Lipschitz, or is optimal for an
($L_{r},L_{p}$)-Lipschitz MDP:
###### Theorem 4.6.
Let the $L_{Q}$-Lipschitz value function $Q^{\pi}$ and corresponding Gaussian
policy $\pi(\cdot|s)=\mathcal{N}(\mu(s),\sigma(s))$ be given, where $Q^{\pi}$
is an $\varepsilon$-optimal estimate of the true policy’s value function. For
an $(L_{r},L_{p})$-Lipschitz task with (unknown) optimal value function
$Q^{*}$, let $\bar{V}^{\pi}$ be the one-point estimate of the (known) value
function $Q^{\pi}$, and denote
$\bar{\Delta}(s,a)=r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}\bar{V}^{\pi}(s^{\prime})-Q^{\pi}(s,a)$.
Then:
$\displaystyle Q^{*}(s,a)\leq
r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}\left[\bar{V}^{\pi}(s^{\prime})+A(s^{\prime})\right]$
$\displaystyle\hskip
10.00002pt+\frac{\gamma}{1-\gamma}\left(\min_{(s,a)\in\mathcal{D}}\left(\bar{\Delta}(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}A(s^{\prime})\right)+L_{\Delta}D\right)$
$\displaystyle Q^{*}(s,a)\geq
r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}\left[\bar{V}^{\pi}(s^{\prime})-A(s^{\prime})\right]$
$\displaystyle\hskip
10.00002pt+\frac{\gamma}{1-\gamma}\left(\max_{(s,a)\in\mathcal{D}}\left(\bar{\Delta}(s,a)-\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}A(s^{\prime})\right)-L_{\Delta}D\right)$
where we let
$A(s)=\sqrt{\frac{2}{\pi}}L_{Q}\sigma(s)e^{-\mu(s)^{2}/2\sigma(s)^{2}}+\varepsilon$
and $L_{\Delta}=\max\left\\{L_{r},L_{Q},\gamma
L_{p}\left(L_{Q}(1+L_{\mathcal{N}})+(\beta\sigma_{\text{min}})^{-1}\right)\right\\}$
and $D$ denotes the diameter of the state-action space.
### Extension to Un-Regularized RL
Although the previous results have been discussed in the context of entropy-
regularized RL, it is possible to extend them to the un-regularized
($\beta\to\infty$) domain as well, with the replacement $\Delta\to\Delta^{\prime}\doteq r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}}V(s^{\prime})-V(s)$. This
can be understood as taking the estimated state-value function $V(s)$ to
generate a potential function for shaping (Ng, Harada, and Russell 1999) the
original reward function $r(s,a)$, with $\Delta^{\prime}$ now representing
this shaped reward. The corresponding value functions are then related by Eq.
(3) in (Ng, Harada, and Russell 1999) which can be seen as the analog of
Theorem 1 in (Adamczyk et al. 2023a) for the un-regularized case. In the
Appendix, we show that replacing $\Delta\to\Delta^{\prime}$ in Theorem 4.1,
leaves Eq. (4a) and (4b) valid for the un-regularized case. In this case, as
the Bellman loss decreases, $\mathcal{L}\to 0$, there is no guarantee that
$\Delta^{\prime}\to 0$ as in the regularized case. Interestingly, we
nevertheless find that in the un-regularized case, the clipping does occur,
and the magnitude of bound violations decreases throughout training. We use
this form (un-regularized RL double-sided clipping) for the FA experiments
shown in the next section.
The preceding extension to un-regularized RL can be generalized to address an
open problem in research on compositionality. Specifically, we can now address
a question posed by (Nemecek and Parr 2021) concerning the possibility of
composing prior solutions in un-regularized RL. We can address this question
by deriving an extension of Theorem 10 in (Adamczyk et al. 2023a) to the case
of un-regularized RL.
###### Theorem 4.8.
Given a set of primitive tasks $\\{\mathcal{T}_{j}\\}$ with corresponding
optimal value functions $\\{Q_{j}^{*}\\}$, denote $\widetilde{Q}^{*}$ as the
optimal value function for the composition of $\\{\mathcal{T}_{j}\\}$ under
the composition function $f:\mathbb{R}^{M}\to\mathbb{R}$.
Define $K^{*}$ as the optimal value function for a task with reward function
$\kappa$ defined by:
$\displaystyle\kappa(s,a)=f(\\{r_{j}(s,a)\\})+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}}V_{f}(s^{\prime})-V_{f}(s)$
$V_{f}(s)=\max_{a}f\left(\\{Q_{j}^{*}(s,a)\\}\right)$
Then, the optimal value functions $\widetilde{Q}^{*}$ and $K^{*}$ are related
by:
$\widetilde{Q}^{*}(s,a)=V_{f}(s)+K^{*}(s,a)$ (9)
Thus, multiple primitive tasks can indeed be composed (via $V_{f}$) and
subsequently corrected (via $K^{*}$) in un-regularized RL.
## Applications
The framework developed in this work has applications on both theoretical and
experimental fronts. In this section, we discuss some applications relating to
compositionality and approaches to clipping.
### Exact Composition in Entropy-Regularized RL
One application of the framework developed is to provide new insights and
extensions of previously derived results for value function compositions, as
seen in Theorem 5.1 below. Previous work (Van Niekerk et al. 2019) on entropy-
regularized RL has shown that, for a specific choice of composition function,
an exact expression for the optimal value function of interest can be derived.
This result can be rederived from a different perspective and also extended to
a broader class of compositions using the framework developed. Specifically,
we use the composition of value functions for previously solved tasks as an
estimate for the optimal value function of the composite task. Then, using
this estimate in combination with Theorem 4.1, we derive conditions such that
both of the bounds can be saturated with $\Delta(s,a)=0$, thereby giving an
exact composition.
Using this approach, we are able to extend the results of (Van Niekerk et al.
2019), who find an instance of exact composition in entropy-regularized RL for
tasks with absorbing states. Our derivation (see Appendix) provides new
insight into why specific choices of reward compositions lead to exact
compositions of optimal value functions.
###### Theorem 5.1.
Consider $m$ solved tasks in the entropy-regularized setting, with reward
functions $\\{r_{1},\dotsc,r_{m}\\}$ varying only on the set of absorbing
states. Assume all tasks are given with the same deterministic dynamics. Given
a set of non-negative weights $w_{j}$, consider a new task with the same
reward function for the interior (i.e. non-absorbing) states and with reward
function for the absorbing states given by
$\widetilde{r}(s,a)=\tau\log\sum_{j=1}^{m}w_{j}e^{r_{j}(s,a)/\tau}.$ (10)
Then, the optimal value function for such a task is given by:
$\widetilde{Q}(s,a)=\tau\log\sum_{j=1}^{m}w_{j}e^{Q_{j}(s,a)/\tau}.$ (11)
A detailed derivation of the result is provided in the Appendix; in the
following we note some key points. We consider the setting discussed in (Van
Niekerk et al. 2019) (undiscounted, deterministic dynamics with rewards
varying only on the absorbing states for the solved tasks). By analyzing the
exponentiated version of the backup equation for the solved tasks, we obtain a
general class of reward compositions and value function compositions that
satisfy the same form of backup equation. The extension from previous work is
that the weights no longer need to be normalized to unity.
Figure 2: The discrete maze considered for the tabular experiments. The agent begins at the green circle, and the yellow star is the only rewarding state. The action space consists of the cardinal directions, and the state is encoded by the location on the grid. At each step, the agent receives a small penalty if it has not reached the goal. $\gamma=0.98$, $\beta=0.1$. On the left plot, we show the optimal value function $V(s)$ (blue indicates high value). On the right plot, we show the greedy policy extracted from the optimal action value function $\text{argmax}_{a}Q(s,a)$.

Figure 3: $Q$-values during training with respect to the derived bounds. The error is the maximum difference between consecutive Bellman updates. (Note the $\log$-scaled axes.)
### Experimental Validation
In the following experiments, we study the utility of clipping based on our
theoretical results. For simplicity, we highlight the results on a simple
discrete environment. Without any external estimates for the $Q$ function, we
use the estimate given by the previous step’s $Q$-function.
#### Tabular Experiments
In the tabular case, since we have access to the $Q$-table and we perform
exact updates, we simply clip the updated $Q$-table according to the derived
bounds. In Fig. 3 we show the results of training in a simple maze environment
(Fig. 2). In experiments across different sized environments, and with various
levels of stochasticity, we universally find the increase in convergence speed
shown in the inset plot of Fig. 3. In the main plot of Fig. 3, we depict the
mean $Q$ values over all $(s,a)$ pairs. We find that violations of the upper bound (over-optimism) occur across many tabular domains. In this experiment, we use
stochastic transition dynamics with a $50\%$ probability of taking the
intended action and $25\%$ probability of taking an action perpendicular to
that intended. As claimed previously, we see that as the Bellman loss reduces
(inset plot), the double-sided bounds become tight (blue and orange lines
converge).
#### Function Approximator Experiments
In the DQN algorithm used, a target network is employed for stability. We can
therefore also use the target network to derive another set of bounds on the
true $Q$-values (cf. Appendix for the un-regularized RL bounds corresponding
to those given in Theorem 4.1). Since both bounds must hold, we take the
tightest bound possible. In general, given many sources of an estimate
$Q$-function, one can collectively use them to obtain the tightest bound
possible.
Figure 4: Reward curves for the MountainCar environment. We fine tune each
method’s hyperparameters, and average over 20 random initializations. The
$95\%$ confidence intervals are shaded for each method.
The derived bounds can be implemented using different approaches for clipping
of the value function during training. We highlight the different methods used
below, inspired by the methods used in (Kim, Park, and Kim 2022; Adamczyk et
al. 2023b):
(0) No Clipping: The standard training scheme for DQN is implemented, with no
clipping.
(1) Hard Clipping: At each backward pass to the function approximator we
enforce the following bounds on the target value:
$Q(s,a)\xleftarrow[]{}\hat{Q}_{\textrm{clip}}(s,a)$ (12)
where L and U denote the lower and upper bounds derived in Theorem 4.1, and
$\hat{Q}_{\textrm{clip}}\doteq\min\\{\max\\{r(s,a)+\gamma V(s^{\prime}),\
\text{L}(s,a)\\},\text{U}(s,a)\\}$ (13)
(2) Soft Clipping: An additional term, the “clipping loss”, is added to the
function approximator’s loss function. The clipping loss is defined as
$\mathcal{L}_{\textrm{clip}}=\left\lvert
Q(s,a)-\hat{Q}_{\textrm{clip}}(s,a)\right\rvert$ (14)
This gives a total loss of
$\mathcal{L}=\mathcal{L}_{\text{Bellman}}+\eta\mathcal{L}_{\text{clip}}$. The
hyperparameter $\eta$ weights the relative importance of the bound violations
against the Bellman error. In principle it can be tuned, but we choose to fix
$\eta=10^{-5}$ for all experiments, ensuring
$\mathcal{L}_{\text{Bellman}}\sim{}\eta\mathcal{L}_{\text{clip}}$.
Alternatively, one can view this as equivalent to providing a bonus to the
reward function for states with high bound violation. This is analogous to the
UCB-style bonus applied in (Lee et al. 2021).
(3) Smoothed Clipping: The updated $Q$-values are set as an average between
those given by Hard Clipping and No Clipping, with a relative weight factor
inversely related to the bound violations.
$\displaystyle Q(s,a)\xrightarrow[]{}(1-\tau)\left(r(s,a)+\gamma
V(s^{\prime})\right)+\tau\hat{Q}_{\textrm{clip}}(s,a)$
where
$\tau=\frac{\mathcal{L}_{\text{clip}}}{1+\mathcal{L}_{\text{clip}}}$ (15)
We note that when the bound violations are zero, the standard update rule is
recovered. This value for $\tau$ is chosen to set the relative weight of the
two terms to match the magnitude of bound violations:
$\tau/(1-\tau)=\mathcal{L}_{\text{clip}}$. Therefore, the clipped values will
be preferred over the standard update rule, in direct proportion to the bound
violations.
Figure 4 indicates that clipping is able to improve the stability and speed of
training in the MountainCar environment. Here, we use a bootstrapped estimate
of $Q(s,a)$ (that is, the target $Q$-network is bounded by the actively
trained $Q$-network).
## Discussion
In summary, we have established a general theoretical framework for deriving
double-sided bounds in reinforcement learning. We have explored the use of the
double-sided bounds in tabular domains, finding that application of the bounds
through clipping is able to speed up training. We also provide some
preliminary exploration in the FA domain where new experimental methods for
clipping were presented. Furthermore, beyond the theoretical contributions, we
believe the current work has the potential to open new directions of research
as outlined below.
While the derived bounds are applicable generally to any value function
estimate and for arbitrary transition dynamics, it is possible that they are
tightened for specific classes of the estimates and restrictions on the
dynamics or structure of reward functions. For example, in (Adamczyk et al.
2023b) which analyzed compositions in RL, it was shown that one side of the
bound can be simplified further for specific classes of functional
transformations or compositions. In future work, it would be interesting to
explore under what conditions the bounds may be further simplified or
tightened.
Other promising avenues for future research include: (i) combining our results
with ensemble methods such as SUNRISE (Lee et al. 2021) which can lead to
tighter bounds on the value function, as more estimates are used to derive the
double-sided bounds in Theorem 4.1, (ii) using bound violations as a proxy for
the best prior task to transfer (minimizing bound violations) when multiple
prior solutions are known, (iii) implementing a dynamic schedule for the soft
clipping weight parameter, similar to the approach in (Haarnoja et al. 2018b)
which includes learning a dynamical temperature parameter.
The extension of (Van Niekerk et al. 2019)’s Theorem 2 (shown above as Theorem 5.1) for value function composition was proved for the case of deterministic
dynamics in this work. However, it still remains an open question as to
whether this result is generalizable to other domains, e.g. stochastic
dynamics. Moreover, other composition methods may yield exact results for the
composite task’s value function (cf. (Tasse, James, and Rosman 2020, 2021)).
It will be of interest to see if the framework developed in this work can be
used to provide insight into the different conditions under which exact
compositions can be obtained.
Considering further the composition of multiple previously solved tasks, one
can consider the problem of learning a composition function $f$, which takes
into account the derived bounds. As a learning objective, one could use the
magnitude of the difference in bounds, to learn a function $f$ which can be
considered an “optimal composition” (e.g. related to (Rusu et al. 2016)).
The framework established in this work can be used to obtain bounds for
optimal value functions in general settings, not just limited to the
composition of tasks. Specifically, we can use any estimate for the optimal
value function as the base knowledge and use the derived results to obtain
bounds on the exact optimal value function. In combination with the regret
bound derived in this work, iterations of PE/PI can serve as the initial steps
in an iterative procedure for progressively improving the bounds to obtain
improved approximate solutions. The development of such iterative procedures
will be explored in future work.
## Technical Appendix
In this technical appendix, we provide further discussion on experimental
details and give proofs for all the results shown in the main text.
### Experiments
In the tabular setting, we perform exact updates of the Bellman backup
equation for entropy-regularized RL. At each update step, we calculate the
bounds given by Theorem 4.1, which are exact in this case. Then we perform
Hard Clipping, by following Eq. (13) in the main text. Interestingly, we see
that as the upper bound becomes tight, the $Q$-values are constantly saturated
by this value. The departure of the No Clipping and Hard Clipping $Q$-values
is also evident in the reduction of error ($\ell_{\infty}$ distance) between
consecutive iterations.
To explore the utility of clipping in function approximator (FA) systems, we
use a DQN learning algorithm (Raffin et al. 2021), while applying and
monitoring clipping given by the bounds in Theorem 4.1 for un-regularized RL.
In particular, we continuously bootstrap by using the previous estimate of the
$Q$-function to generate the bounds, and we clip the target network’s output
value accordingly. In particular, we extract bounds from both the target
network and $Q$-network at each step, and take the tighter of the two bounds.
For continuous spaces, we use the estimate $\sup
r(s,a)\approx\max_{i\in\mathcal{D}}r_{i}$, where the $\max$ is taken over the
current batch (and similarly for $\inf r(s,a)$). We consider the three
clipping methods described in the “Experimental Validation” section of the main text.
We have also performed the same experiment, with a fixed learning rate, for
the Mountain-Car environment (Brockman et al. 2016). These experiments share
the hyperparameters shown in Table 1 and are averaged over 25 runs.
Figure 5: Mountain-Car learning curves for a fixed learning rate
$\alpha=0.004$. The mean bound violations and episode rewards throughout
training are shown for each clipping method. In the right panel, we plot the
total bound violations (magnitude of over- or under-estimation of $Q$ based on
the allowed upper and lower bounds). We find that bound violations decrease
during training (most quickly for hard and smoothed clipping), which
corresponds to better performance in terms of the mean evaluation reward (left
plot).
We use $\epsilon$-greedy exploration, with a linear schedule from $1.0$ to
$0.07$ after $20\%$ of the total ($N=500\textrm{k}$) timesteps. The remaining
hyperparameters (shared by all clipping methods) are listed below.
Hyperparameter | Value
---|---
Learning Rate | 0.004
Batch Size | 128
Buffer Size | 10,000
Discount Factor, $\gamma$ | 0.98
Gradient Steps | 8
Policy Architecture | $(256,256)$
“Learning Starts” | 1,000
Polyak Update, $\tau$ | 1.0
Target Update Interval | 600
Training Frequency | 16
Table 1: Hyperparameters shared by all Deep Q Networks. These are the
hyperparameters published by the authors of the algorithm used (Raffin et al.
2021): https://huggingface.co/sb3/dqn-MountainCar-v0.
### Proofs
In this section we provide proofs of the theoretical results in the main text.
Each proof is prefaced with a restatement of the theorem for the reader’s
convenience.
We begin with a helpful lemma which bounds the optimal action-value function
$Q^{*}(s,a)$ for any task. We note that these bounds hold for both un-
regularized RL and entropy-regularized RL.
###### Lemma A.
For a task with reward function $r(s,a)$, discount factor $\gamma$, the (soft)
optimal action-value function $Q^{*}(s,a)$ satisfies:
$\displaystyle Q^{*}(s,a)$ $\displaystyle\geq
r(s,a)+\gamma\frac{\inf_{s,a}r(s,a)}{1-\gamma}$ $\displaystyle Q^{*}(s,a)$
$\displaystyle\leq r(s,a)+\gamma\frac{\sup_{s,a}r(s,a)}{1-\gamma}$
We will prove the upper bound for un-regularized RL, but the proof is
identical in entropy-regularized RL and for the lower bound.
###### Proof.
The proof follows from induction on the Bellman backup equation:
$Q^{(n+1)}(s,a)=r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p(s^{\prime}|s,a)}\max_{a^{\prime}}\left(Q^{(n)}(s^{\prime},a^{\prime})\right)$
(16)
The result we aim to prove is the following:
$\displaystyle Q^{(n)}(s,a)$ $\displaystyle\geq
r(s,a)+\gamma\frac{1-\gamma^{n}}{1-\gamma}\inf_{s,a}r(s,a)$ $\displaystyle
Q^{(n)}(s,a)$ $\displaystyle\leq
r(s,a)+\gamma\frac{1-\gamma^{n}}{1-\gamma}\sup_{s,a}r(s,a)$
Since $\lim_{n\to\infty}Q^{(n)}(s,a)=Q^{*}(s,a)$ and $\gamma\in(0,1)$ the
desired result will follow from this limit.
We set $Q^{(0)}(s,a)=r(s,a)$. The base case ($n=1$) holds as:
$\displaystyle Q^{(1)}(s,a)$
$\displaystyle=r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p(s^{\prime}|s,a)}\max_{a^{\prime}}\left(Q^{(0)}(s^{\prime},a^{\prime})\right)$
$\displaystyle=r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p(s^{\prime}|s,a)}\max_{a^{\prime}}r(s^{\prime},a^{\prime})$
$\displaystyle\leq r(s,a)+\gamma\sup_{s,a}r(s,a)$
$\displaystyle=r(s,a)+\gamma\frac{1-\gamma^{1}}{1-\gamma}\sup_{s,a}r(s,a)$
We proceed in proving the upper bound. For brevity we shall denote
$\sup_{s,a}r(s,a)\doteq R$. The inductive hypothesis is
$Q^{(n)}(s,a)\leq r(s,a)+\gamma\frac{1-\gamma^{n}}{1-\gamma}R.$ (17)
To prove that the inequality holds for $n+1$, we use the Bellman backup
equation:
$\displaystyle Q^{(n+1)}(s,a)$ $\displaystyle\leq
r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}}\max_{a^{\prime}}\left(r(s^{\prime},a^{\prime})+\gamma\frac{1-\gamma^{n}}{1-\gamma}R\right)$
$\displaystyle\leq
r(s,a)+\gamma\left(R+\gamma\frac{1-\gamma^{n}}{1-\gamma}R\right)$
At this point, if the dynamics model were known then one could improve this
bound by including the next term,
$\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p(s^{\prime}|s,a)}\max_{a^{\prime}}r(s^{\prime},a^{\prime})$,
which we instead bound by $R$. Continuing without this term, we have
$\displaystyle Q^{(n+1)}(s,a)$ $\displaystyle\leq
r(s,a)+\gamma\left(R+\gamma\frac{1-\gamma^{n}}{1-\gamma}R\right)$
$\displaystyle=r(s,a)+\gamma\frac{1-\gamma^{n+1}}{1-\gamma}R$
which completes the proof of the inductive step. As stated above, this
completes the proof of the upper bound by taking the limit $n\to\infty$.
The lower bound follows similarly by swapping all inequalities. The same proof
also holds for the soft Bellman backup equation. ∎
We now proceed with the proof of the first result, Theorem 4.1. We do so by
applying Lemma A to the $K^{*}$ function of (Adamczyk et al. 2023a)’s Theorem
1.
###### Theorem 4.1.
Consider an entropy-regularized MDP
$\langle\mathcal{S},\mathcal{A},p,r,\gamma,\beta\rangle$ with (unknown)
optimal value function $Q^{*}(s,a)$. Let an estimate for the value function
$Q(s,a)$ be given. Denote
$V(s)~{}\doteq~{}1/\beta\log\operatorname*{\mathbb{E}}_{a\sim\pi_{0}}\exp\beta
Q(s,a)$.
The optimal value function $Q^{*}(s,a)$ is then bounded by:
$\displaystyle Q^{*}(s,a)$ $\displaystyle\geq
r(s,a)+\gamma\left(\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}V(s^{\prime})+\frac{\inf\Delta}{1-\gamma}\right)$
(18a) $\displaystyle Q^{*}(s,a)$ $\displaystyle\leq
r(s,a)+\gamma\left(\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}V(s^{\prime})+\frac{\sup\Delta}{1-\gamma}\right)$
(18b)
where
$\Delta(s,a)\doteq
r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}V(s^{\prime})-Q(s,a).$
In Eq. (18a) and (18b), the $\inf$ and $\sup$ are taken over the continuous
state-action space $\mathcal{S}\times\mathcal{A}$.
###### Proof.
As a point of notation, $\widetilde{r}(s,a)$ in (Adamczyk et al. 2023a) is the
same as our $r(s,a)$. Using Theorem 1 of (Adamczyk et al. 2023a), we have
$Q^{*}(s,a)=Q(s,a)+K^{*}(s,a)$ (19)
where $K^{*}$ is the optimal soft action value function corresponding to a
task with reward function $\Delta(s,a)\doteq
r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p(\cdot|s,a)}V(s^{\prime})-Q(s,a)$.
By applying Lemma A on the value function $K^{*}$, we arrive at the stated
result in Eq. (18b):
$\displaystyle Q^{*}(s,a)$ $\displaystyle=Q(s,a)+K^{*}(s,a)$
$\displaystyle\leq Q(s,a)+\Delta(s,a)+\gamma\frac{\sup\Delta}{1-\gamma}$
$\displaystyle=Q(s,a)+r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p(\cdot|s,a)}V(s^{\prime})$
$\displaystyle\hskip 40.00006pt-Q(s,a)+\gamma\frac{\sup\Delta}{1-\gamma}$
$\displaystyle=r(s,a)+\gamma\left(\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p(\cdot|s,a)}V(s^{\prime})+\frac{\sup\Delta}{1-\gamma}\right).$
A similar proof holds for the lower bound. ∎
###### Lemma 4.1a.
Consider an entropy-regularized MDP
$\langle\mathcal{S},\mathcal{A},p,r,\gamma,\beta\rangle$ with (unknown)
optimal value function $Q^{*}(s,a)$. Let an estimate for the value function
$Q(s,a)$ be given. Denote
$V(s)~{}\doteq~{}1/\beta\log\operatorname*{\mathbb{E}}_{a\sim\pi_{0}}\exp\beta
Q(s,a)$. Suppose there exists an “identity” action
$a_{\emptyset}(s)\in\mathcal{A}$ for each state, which deterministically
transitions the agent to the same state:
$p(s^{\prime}|s,a_{\emptyset}(s))=\delta(s^{\prime}-s)$ for all
$s\in\mathcal{S}$.
Then the lower bound on the optimal value function $Q^{*}(s,a)$ can be
improved:
$Q^{*}(s,a)\geq
r(s,a)+\gamma\left(V(s^{\prime})+\frac{1}{1-\gamma}\Delta(s^{\prime},a_{\emptyset})\right)$
(20)
###### Proof.
The lower bound in Theorem 4.1 can be tightened by noting that the value
function (in both un-regularized and entropy-regularized RL) satisfies a
variational form:
$Q^{*}(s,a)=\sup_{\pi}Q^{\pi}(s,a)$ (21)
where
$Q^{\pi}(s,a)=\operatorname*{\mathbb{E}}_{p,\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})\biggr{|}\
s_{0}=s,a_{0}=a\right]$
and
$Q^{\pi}(s,a)=\operatorname*{\mathbb{E}}_{p,\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}\left(r(s_{t},a_{t})-\frac{1}{\beta}\log\frac{\pi(a_{t}|s_{t})}{\pi_{0}(a_{t}|s_{t})}\right)\right]$
for standard and entropy-regularized RL, respectively (we have dropped the
initial state-action conditioning in the latter equation for brevity).
Therefore, one can supply any policy $\pi$ into the objective $Q^{\pi}$ to
obtain a lower bound on the optimal value function. However, the expectation
(policy evaluation) is difficult to perform in practice because it corresponds
to the solution to another Bellman equation (Sutton and Barto 2018).
Nevertheless, for particular choices of the input policy $\pi$, one can obtain
a simplified expression for $Q^{\pi}$ leading to a tractable lower bound. With
this in mind, we choose the deterministic “identity policy”,
$\pi_{\emptyset}$, defined as:
$\pi_{\emptyset}(a|s)=\delta(a-a_{\emptyset}(s))$ (22)
where $a_{\emptyset}(s)$ is the action (for a given state $s\in\mathcal{S}$)
such that
$p(s^{\prime}|s,a_{\emptyset}(s))=\delta(s^{\prime}-s).$ (23)
In other words, the identity policy is a deterministic policy which
transitions the agent back to the same state. We note that this requires the
transition dynamics of the task to be deterministic (at least, for this
identity action).
With this in mind, we must evaluate the objective
$Q^{\pi_{\emptyset}}~{}=~{}\hat{Q}^{\pi_{\emptyset}}+S^{\pi_{\emptyset}}$,
which we split between the reward and entropic terms. First, we note that
since $\pi_{\emptyset}$ is deterministic, the relative entropy term satisfies
$S^{\pi_{\emptyset}}=\operatorname*{\mathbb{E}}_{p,\pi_{\emptyset}}\left[\sum_{t=0}^{\infty}\gamma^{t}\log\frac{\pi_{\emptyset}(a_{t}|s_{t})}{\pi_{0}(a_{t}|s_{t})}\right]=0.$
(24)
Therefore, it suffices to evaluate the reward contributions alone which can be
done as follows:
$\displaystyle\widehat{Q}^{\pi_{\emptyset}}(s,a)$
$\displaystyle=\operatorname*{\mathbb{E}}_{p,\pi_{\emptyset}}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})\biggr{|}\
s_{0}=s,a_{0}=a\right]$ $\displaystyle=r(s_{0},a_{0})+\gamma
r(s_{1},a_{\emptyset})+\gamma^{2}r(s_{1},a_{\emptyset})+\dots$
$\displaystyle=r(s_{0},a_{0})+\frac{\gamma}{1-\gamma}r(s_{1},a_{\emptyset})$
We see that the determinism of transitions arising from non-identity actions
is required for the first step away from the initial condition. Therefore, we
have $Q^{*}(s,a)\geq r(s,a)+\frac{\gamma}{1-\gamma}r(s^{\prime},a_{\emptyset})$.
Now, applying this result to the auxiliary task with optimal value function
$K^{*}$:
$K^{*}(s,a)\geq\Delta(s,a)+\frac{\gamma}{1-\gamma}\Delta(s^{\prime},a_{\emptyset}).$
(25)
Inserting this bound into Theorem 1 of (Adamczyk et al. 2023a), we find:
$\displaystyle Q^{*}(s,a)$ $\displaystyle\geq
Q(s,a)+\Delta(s,a)+\frac{\gamma}{1-\gamma}\Delta(s^{\prime},a_{\emptyset})$
$\displaystyle=r(s,a)+\gamma\left(V(s^{\prime})+\frac{1}{1-\gamma}\Delta(s^{\prime},a_{\emptyset})\right)$
∎
As claimed in the main text, we now show that this lower bound is tighter than
the previous one in Eq. 18a of the main text. Since
$\Delta(s^{\prime},a_{\emptyset})\geq\inf\Delta(s,a)$, this bound can be
saturated only for the initial state-action $(s,a)$ which transitions the
agent to $s^{\prime}=s^{*}$, the state in which the global reward function
$\Delta$ attains its minimum.
###### Corollary 4.2 (Suboptimality Bounds).
Let policy $\pi(a|s)$ be given with soft value $Q^{\pi}(s,a)$. The rate of the
suboptimality gap, $Q^{*}(s,a)-Q^{\pi}(s,a)$, is then bounded between
$\inf_{(s,a)}d(s,a)\leq\frac{Q^{*}(s,a)-Q^{\pi}(s,a)}{H}\leq\sup_{(s,a)}d(s,a)$
(26)
where $d(s,a)\doteq
r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}}V^{\pi}(s^{\prime})-Q^{\pi}(s,a)$,
$V^{\pi}(s)\doteq\beta^{-1}\log\operatorname*{\mathbb{E}}_{a}\exp\beta Q^{\pi}(s,a)$ is
the soft state-value function, and $H=(1-\gamma)^{-1}$ is the effective time
horizon.
###### Proof.
Consider a task with the auxiliary reward function
$\tilde{d}(s,a)~{}\doteq~{}Q^{\pi}(s,a)~{}-~{}\frac{\gamma}{\beta}\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}\log\operatorname*{\mathbb{E}}_{a^{\prime}\sim{}\pi}\exp\beta Q^{\pi}(s^{\prime},a^{\prime}).$
By (Cao, Cohen, and Szpruch 2021), this task’s corresponding optimal value function is $Q_{\tilde{d}}^{*}(s,a)=Q^{\pi}(s,a)$. We see that the suboptimality gap $Q^{*}-Q^{\pi}$ is nothing but the soft value function $K^{*}(s,a)$ (Adamczyk et al. 2023a) for a task with reward function $d(s,a)=r(s,a)-\tilde{d}(s,a)$, as defined in the statement above. Applying the simple bounds $H\inf d(s,a)\leq K^{*}(s,a)\leq H\sup d(s,a)$ yields the stated result, with $H=(1-\gamma)^{-1}$ being the effective time horizon. ∎
###### Theorem 4.3.
Let the functions $L(s,a),U(s,a)$ be lower and upper bounds on the optimal
value function: $L(s,a)~{}\leq~{}Q^{*}(s,a)~{}\leq U(s,a)$ for all
$s\in\mathcal{S}$ and $a\in\mathcal{A}$. The clipped Bellman operator,
$\mathcal{B}_{C}Q(s,a)~{}:=~{}\max\left(\min\left(\mathcal{B}Q(s,a),U(s,a)\right),L(s,a)\right)$
converges to the optimal value function
$Q^{*}(s,a)~{}=~{}\mathcal{B}_{C}^{\infty}Q(s,a)$.
###### Proof.
We first show convergence of the operator $\mathcal{B}_{C}$, then show that it
converges to the same fixed point. For convergence, it suffices to show that
$|\mathcal{B}_{C}Q(s,a)-Q^{*}(s,a)|\leq\gamma|Q(s,a)-Q^{*}(s,a)|$.
There are three cases for the magnitude of $\mathcal{B}Q(s,a)$ relative to the
upper and lower bounds:
1. 1.
$\mathcal{B}Q(s,a)\in(L(s,a),U(s,a))$
2. 2.
$\mathcal{B}Q(s,a)\in(-\infty,L(s,a))$
3. 3.
$\mathcal{B}Q(s,a)\in(U(s,a),\infty)$
In the first case, clipping does not occur and hence
$\mathcal{B}_{C}Q(s,a)=\mathcal{B}Q(s,a)$, which contracts with rate
$\gamma$. In the second case, we can write $\mathcal{B}Q(s,a)=L(s,a)-\chi(s,a)$ where $\chi(s,a):=L(s,a)-\mathcal{B}Q(s,a)>0$ is referred to as the “bound violation”. Then,
$\displaystyle\ \ \ \ \ |\mathcal{B}_{C}Q(s,a)-Q^{*}(s,a)|$
$\displaystyle=|Q^{*}(s,a)-\mathcal{B}_{C}Q(s,a)|$
$\displaystyle=|Q^{*}(s,a)-L(s,a)|$
$\displaystyle\leq|Q^{*}(s,a)-L(s,a)+\chi(s,a)|$
$\displaystyle=|Q^{*}(s,a)-(L(s,a)-\chi(s,a))|$
$\displaystyle=|Q^{*}(s,a)-\mathcal{B}Q(s,a)|$
$\displaystyle\leq\gamma|Q(s,a)-Q^{*}(s,a)|$
A similar proof holds for case 3.
By the Banach fixed point theorem, it follows that repeated application of
$\mathcal{B}_{C}$ converges to a fixed point. It is clear that the fixed point
for $\mathcal{B}$ is also a fixed point for $\mathcal{B}_{C}$, and since it is
unique, we have
$\mathcal{B}_{C}^{\infty}Q(s,a)=\mathcal{B}^{\infty}Q(s,a)=Q^{*}(s,a)$. ∎
### Error Analysis for Continuous Spaces
In this subsection, we turn to those results specific to the bounds in
continuous spaces and their error analysis, based on Lipschitz-continuity.
###### Lemma 4.4.
Let $\mathcal{S}\times\mathcal{A}$ be a bounded metric space with diameter
$D$, and let $r:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ be
$L_{r}$-Lipschitz (w.r.t. the same metric). Then the global extrema of
$r(s,a)$ on $\mathcal{S}\times\mathcal{A}$ are bounded as follows:
$\displaystyle\sup_{s\in\mathcal{S},a\in\mathcal{A}}r(s,a)$
$\displaystyle\leq\min_{(s,a)\in\mathcal{D}}r(s,a)+L_{r}D$
$\displaystyle\inf_{s\in\mathcal{S},a\in\mathcal{A}}r(s,a)$
$\displaystyle\geq\max_{(s,a)\in\mathcal{D}}r(s,a)-L_{r}D$
where $\mathcal{D}$ is the dataset of $(s,a)$ tuples available for querying
the magnitude of $r$ (e.g. the current batch or buffer).
Figure 6: Depiction of a continuous state-action space with a finite set of
samples (black points) used to bound the global extrema (star). The diameter
of the space is depicted in red. The distance between each sample and the
global extrema (dashed lines) is always less than the diameter (solid red
line) of the space. Since the growth of the function is linearly bounded by
Lipschitz continuity, we can derive a bound on the value of the global extrema
given the finitely many samples.
###### Proof.
We prove the upper bound on the supremum, the lower bound on the infimum
follows similarly.
Let $\mathcal{S}\times\mathcal{A}$ be a bounded metric space endowed with the
$p$-product metric (for simplicity) and let
$r:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ (the function for which we wish
to find the global extrema) be $L_{r}$-Lipschitz continuous. Let the diameters
of state and action space be given: $D_{\mathcal{S}},D_{\mathcal{A}}$. Suppose
a finite set of samples $\mathcal{D}\subset\mathcal{S}\times\mathcal{A}$ is
given. Denote $\sup_{s\in\mathcal{S},a\in\mathcal{A}}r(s,a)=r(s^{*},a^{*})$.
For each $(s,a)\in\mathcal{D}$, the following holds, since the reward function $r$ is $L_{r}$-Lipschitz in the $d$ metric:
$\displaystyle r(s^{*},a^{*})-r(s,a)$ $\displaystyle=|r(s^{*},a^{*})-r(s,a)|$ $\displaystyle\leq L_{r}d\left((s^{*},a^{*}),(s,a)\right)$ $\displaystyle\leq L_{r}D$
In practice, the distance between the extremum and an arbitrary point is unknown, and a generally applicable (albeit loose) bound on this distance is simply the diameter of the space, $D=||(D_{\mathcal{S}},D_{\mathcal{A}})||_{p}$. This leads to the following bound:
$r(s^{*},a^{*})\leq r(s,a)+L_{r}D.$ (27)
Since each $(s,a)\in\mathcal{D}$ provides such a bound, we can take the best
one (i.e. the minimum over all points in the subset $\mathcal{D}$), recovering
the stated bound:
$r(s^{*},a^{*})\leq\min_{(s,a)\in\mathcal{D}}r(s,a)+L_{r}D.$ (28)
In case the calculation $d((s_{1},a_{1}),(s_{2},a_{2}))$ is feasible, one can
replace the diameter with the furthest distance from the point in question to
any other point in the (bounded) set:
$r(s^{*},a^{*})\leq\min_{(s,a)\in\mathcal{D}}\left(r(s,a)+L_{r}\sup_{s^{\prime},a^{\prime}}d((s,a),(s^{\prime},a^{\prime}))\right)$
where the $\sup$ is over all $(s^{\prime},a^{\prime})\in\mathcal{S}\times\mathcal{A}$. This follows by a similar argument as given above:
$\displaystyle r(s^{*},a^{*})-r(s,a)$ $\displaystyle=|r(s^{*},a^{*})-r(s,a)|$
$\displaystyle\leq L_{r}d\left((s^{*},a^{*}),(s,a)\right)$ $\displaystyle\leq
L_{r}\sup_{(s^{\prime},a^{\prime})\in\mathcal{S}\times\mathcal{A}}d((s,a),(s^{\prime},a^{\prime}))$
This provides a tighter bound but is less tractable in practice. ∎
We now provide some preliminary results on Lipschitz MDPs which facilitate the
proofs of the subsequent results. The following result proves Lipschitz
continuity of the value function in un-regularized RL, provided by (Rachelson
and Lagoudakis 2010).
###### Theorem 4.5a (Rachelson and Lagoudakis).
Given an $(L_{r},L_{p})$-Lipschitz continuous MDP and an $L_{\pi}$-Lipschitz
continuous, stationary policy $\pi$, if $\gamma L_{p}(1+L_{\pi})<1$, then the
infinite horizon, $\gamma$-discounted value function $Q^{\pi}$ is
$L_{Q}$-Lipschitz continuous, with:
$L_{Q}=\frac{L_{r}}{1-\gamma L_{p}(1+L_{\pi})}$ (29)
We will extend this result to the case of entropy-regularized RL where the
policy’s entropy plays a role. To extend it to the entropy-regularized case,
we begin with (and following the notation of) Lemma 1 in (Rachelson and
Lagoudakis 2010). Since the entropy of the policy appears in the calculation
of the state-value function, we require a tractable policy class. We use the
Gaussian parameterization due to its widespread use (Haarnoja et al. 2018b;
Raffin et al. 2021).
###### Lemma 4.5b.
In entropy-regularized RL, given an $L_{Q}$-Lipschitz continuous $Q$-function
$Q^{\pi}$ denoting the soft value of a Gaussian policy
$\pi(\cdot|s)\sim{}\mathcal{N}\left(\mu(s),\sigma(s)\right)$, the
corresponding value function $V^{\pi}(s)$ is $L$-Lipschitz continuous, with:
$L=L_{Q}(1+L_{\mathcal{N}})+\frac{1}{\beta\sigma_{\text{min}}},$ (30)
where $\sigma_{\text{min}}=\min_{s}\sigma(s)$ and
$L_{\mathcal{N}}=\sigma_{\text{min}}^{-2}(2\pi e)^{-1/2}$ is the maximum
Lipschitz constant of the Gaussian density across all states.
###### Proof.
As in SAC (Haarnoja et al. 2018b; Raffin et al. 2021) we assume a Gaussian
parameterization with bounded variance $\sigma(s)\geq\sigma_{\textrm{min}}$.
We begin by finding the Lipschitz constant for $V^{\pi}(s)$ in the entropy-
regularized setting. Using the definition of the soft state-value function
(Haarnoja et al. 2018b),
$\displaystyle\big{|}V^{\pi}(s)-V^{\pi}(\hat{s})\big{|}$
$\displaystyle\leq\bigg{|}\operatorname*{\mathbb{E}}_{a\sim{}\pi}Q^{\pi}(s,a)-\operatorname*{\mathbb{E}}_{a\sim{}\pi}Q^{\pi}(\hat{s},a)\biggr{|}$
$\displaystyle+\beta^{-1}\biggl{|}\left(\mathbb{H}\left[\pi(\cdot|s)\right]-\mathbb{H}\left[\pi(\cdot|\hat{s})\right]\right)\bigg{|}$
$\displaystyle=\bigg{|}\operatorname*{\mathbb{E}}_{a\sim{}\pi}Q^{\pi}(s,a)-\operatorname*{\mathbb{E}}_{a\sim{}\pi}Q^{\pi}(\hat{s},a)\biggr{|}+\beta^{-1}\biggl{|}\log\frac{\sigma(s)}{\sigma(\hat{s})}\bigg{|}$
$\displaystyle\leq\bigg{|}\operatorname*{\mathbb{E}}_{a\sim{}\pi}Q^{\pi}(s,a)-\operatorname*{\mathbb{E}}_{a\sim{}\pi}Q^{\pi}(\hat{s},a)\biggr{|}+\beta^{-1}\big{|}\log\sigma(s)-\log\sigma(\hat{s})\big{|}$
$\displaystyle\leq
L_{Q}(1+L_{\pi})\big{|}s-\hat{s}\big{|}+\frac{1}{\beta\sigma_{\text{min}}}\big{|}s-\hat{s}\big{|}$
$\displaystyle=\left(L_{Q}(1+L_{\pi})+\frac{1}{\beta\sigma_{\text{min}}}\right)\big{|}s-\hat{s}\big{|}.$
The second line follows from the entropy of the Gaussian distribution. The
fourth line follows from (Rachelson and Lagoudakis 2010) and from the
Lipschitz-continuity of $\log(\cdot)$ on the domain
$(\sigma_{\text{min}},\infty)$. In practice, one must choose some
$\sigma_{\text{min}}$ to ensure numerical stability. In the limit $\sigma_{\text{min}}\to 0$ with $\beta\sigma_{\min}\to\infty$, the policy becomes deterministic, the objective reduces to un-regularized RL, and the previous result is recovered.
Since the Gaussian density is differentiable everywhere, its Lipschitz constant $L_{\mathcal{N}}=\sigma^{-2}(2\pi e)^{-1/2}$ is easily found as the maximum magnitude of its first derivative. Since we are interested
in a globally applicable Lipschitz constant, we take the upper bound given by
$\sigma_{\text{min}}$. Substituting $L_{\pi}=L_{\mathcal{N}}$ above gives the
stated result. ∎
Now, we extend Lemma 2 of (Rachelson and Lagoudakis 2010) to the entropy-
regularized setting with a Gaussian policy:
###### Lemma 4.5c.
Given an $(L_{r},L_{p})$-Lipschitz continuous entropy-regularized MDP and a Gaussian policy with bounded variance $\sigma(s)\geq\sigma_{\text{min}}$, the $n$-step, finite horizon, $\gamma$-discounted soft value function $Q^{\pi}_{n}$ is $L_{Q_{n}}$-Lipschitz continuous and $L_{Q_{n}}$ obeys the
recurrence relation
$L_{Q_{n+1}}=L_{r}+\gamma\left((1+L_{\mathcal{N}})L_{Q_{n}}+(\beta\sigma_{\text{min}})^{-1}\right)L_{p}$
###### Proof.
The proof is identical to that of Lemma 2 in (Rachelson and Lagoudakis 2010)
except the penultimate line, where we instead use the Lipschitz constant
computed for $V^{\pi}(s)$ in Lemma 4.5b:
$\displaystyle\left\lvert
Q^{\pi}_{n+1}(s,a)-Q^{\pi}_{n+1}(\hat{s},\hat{a})\right\rvert$
$\displaystyle\leq\left(L_{r}+\gamma
L_{V_{n}}L_{p}\right)\left(|s-\hat{s}|+|a-\hat{a}|\right)$
$\displaystyle=\left(L_{r}+\gamma\left(L_{Q_{n}}(1+L_{\mathcal{N}})+\frac{1}{\beta\sigma_{\text{min}}}\right)L_{p}\right)\times$
$\displaystyle\hskip 130.0002pt\left(|s-\hat{s}|+|a-\hat{a}|\right)$
$\displaystyle=L_{Q_{n+1}}\left(|s-\hat{s}|+|a-\hat{a}|\right).$
∎
We are now ready to prove the extension of Theorem 4.5a to entropy-regularized RL:
###### Theorem 4.5d.
Given an $(L_{r},L_{p})$-Lipschitz continuous MDP and a Gaussian policy
$\mathcal{N}(\mu(s),\sigma(s))$ with bounded variance
$\sigma(s)\geq\sigma_{\text{min}}$, if $\gamma L_{p}(1+L_{\mathcal{N}})<1$, then the
infinite horizon, $\gamma$-discounted value function $Q^{\pi}$ is
$L_{Q}$-Lipschitz continuous, with:
$L_{Q}=\frac{L_{r}+\gamma L_{p}(\beta\sigma_{\min})^{-1}}{1-\gamma
L_{p}(1+L_{\mathcal{N}})}$ (31)
###### Proof.
We follow the same steps as given in the proof of Theorem 1 of (Rachelson and
Lagoudakis 2010), concluding by considering the recurrence relation in the
convergent limit $L_{Q_{n}}\to L_{Q}$:
$L_{Q}=L_{r}+\gamma\left((1+L_{\mathcal{N}})L_{Q}+(\beta\sigma_{\text{min}})^{-1}\right)L_{p}$
(32)
Solving for $L_{Q}$ yields
$L_{Q}=\frac{L_{r}+\gamma L_{p}(\beta\sigma_{\min})^{-1}}{1-\gamma
L_{p}(1+L_{\mathcal{N}})}.$ (33)
∎
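For illustration, the recurrence of Lemma 4.5c can be iterated numerically and compared against the closed form (33); the constants below are arbitrary choices satisfying the contraction condition $\gamma L_{p}(1+L_{\mathcal{N}})<1$:

```python
import numpy as np

L_r, L_p, gamma = 1.0, 0.5, 0.9                       # illustrative constants
beta, sigma_min = 5.0, 1.0
L_N = sigma_min**-2 * (2 * np.pi * np.e) ** -0.5      # Gaussian Lipschitz constant

assert gamma * L_p * (1 + L_N) < 1                    # contraction condition

L_Q = 0.0                                             # L_{Q_0}
for _ in range(500):                                  # iterate Lemma 4.5c
    L_Q = L_r + gamma * ((1 + L_N) * L_Q + 1.0 / (beta * sigma_min)) * L_p

L_closed = (L_r + gamma * L_p / (beta * sigma_min)) / (1 - gamma * L_p * (1 + L_N))
assert np.isclose(L_Q, L_closed)                      # matches Eq. (33)
```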
###### Theorem 4.5.
Let an entropy-regularized MDP be given with an $L_{Q}$-Lipschitz value
function $\bar{Q}^{\pi}$. Using a Gaussian parameterization for the associated
policy $\pi(\cdot|s)=\mathcal{N}(\mu(s),\sigma(s))$, suppose that
$\bar{Q}^{\pi}$ is an $\varepsilon$-optimal approximation of the policy’s true
value, $Q^{\pi}$.
By estimating the state-value function as:
$\bar{V}^{\pi}(s)=\bar{Q}^{\pi}(s,\mu)-\frac{1}{\beta}\operatorname*{\mathbb{E}}_{a\sim{}\pi}\log\frac{\pi(a|s)}{\pi_{0}(a|s)},$
(34)
the error in using such an approximation is upper bounded:
$|\bar{V}^{\pi}(s)-V^{\pi}(s)|\leq\sqrt{\frac{2}{\pi}}L_{Q}\sigma(s)e^{-\mu(s)^{2}/2\sigma(s)^{2}}+\varepsilon$
In the case that the function $Q$ used is an optimal value function for an
$(L_{r},L_{p})$-Lipschitz task, with a policy whose variance is lower bounded
$\sigma(s)\geq\sigma_{\text{min}}$ and $\gamma L_{p}(1+L_{\mathcal{N}})<1$,
where $L_{\mathcal{N}}=\sigma_{\text{min}}^{-2}(2\pi e)^{-1/2}$ is the
Lipschitz constant of the Gaussian distribution, then the Lipschitz constant
for $Q$ can be computed as:
$L_{Q}=\frac{L_{r}+\gamma L_{p}(\beta\sigma_{\min})^{-1}}{1-\gamma
L_{p}(1+L_{\mathcal{N}})}.$ (35)
###### Proof.
We first note that although the relative entropy appears in Eq. (7), we will
substitute it with the entropy alone. This is the typical scenario for MaxEnt
RL, where the prior policy is ignored. However, in the case of a Gaussian-
parameterized prior policy, the remaining term
$\operatorname*{\mathbb{E}}_{a\sim{}\pi}\log\pi_{0}(a|s)$ has an analytical
form. Continuing with the entropy, we see that if the variance is known, it is
easily expressed as:
$\mathbb{H}[\mathcal{N}(\mu,\sigma)]=\frac{1}{2}\log(2\pi\sigma^{2})+\frac{1}{2}.$
(36)
As an alternative to the variance, the log-probability of the mean is
sometimes used in the parameterization (Raffin et al. 2021); it encodes the
same information:
$-\log(p(\mu))=-\log\left(\frac{1}{\sqrt{2\pi\sigma^{2}}}\right)=\mathbb{H}[\mathcal{N}(\mu,\sigma)]-\frac{1}{2}.$
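Both identities are easily confirmed numerically (a minimal sketch, assuming SciPy is available; the values of $\mu$ and $\sigma$ are arbitrary):

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 1.3, 0.6                                  # arbitrary values
H_closed = 0.5 * np.log(2 * np.pi * sigma**2) + 0.5   # Eq. (36)

assert np.isclose(norm(mu, sigma).entropy(), H_closed)        # Gaussian entropy
assert np.isclose(-norm(mu, sigma).logpdf(mu), H_closed - 0.5)  # -log p(mu)
```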
Therefore, we only take into account the error in the first term, the
estimation of $\operatorname*{\mathbb{E}}_{a\sim{}\pi}Q^{\pi}(s,a)$ given only
the mean action $\mu$. We drop the $s$ dependence, denoting $\mu=\mu(s)$ and
$\sigma=\sigma(s)$.
$\displaystyle\ \ \ \ \left\lvert\bar{V}^{\pi}(s)-V^{\pi}(s)\right\rvert$
$\displaystyle\leq\left\lvert\operatorname*{\mathbb{E}}_{a\sim{}\pi}Q^{\pi}(s,a)-Q^{\pi}(s,\mu)\right\rvert+\left\lvert Q^{\pi}(s,\mu)-\bar{Q}^{\pi}(s,\mu)\right\rvert$
$\displaystyle\leq\operatorname*{\mathbb{E}}_{a\sim{}\pi}\left|Q^{\pi}(s,a)-Q^{\pi}(s,\mu)\right|+\varepsilon$
$\displaystyle\leq\operatorname*{\mathbb{E}}_{a\sim{}\pi}L_{Q}|a-\mu|+\varepsilon$
$\displaystyle=\frac{L_{Q}}{\sqrt{2\pi\sigma^{2}}}\int_{-\infty}^{\infty}e^{-\frac{(a-\mu)^{2}}{2\sigma^{2}}}|a-\mu|da+\varepsilon$
$\displaystyle=\frac{L_{Q}}{\sqrt{2\pi\sigma^{2}}}2\sigma^{2}e^{-\mu^{2}/2\sigma^{2}}+\varepsilon$
$\displaystyle=\sqrt{\frac{2}{\pi}}L_{Q}\sigma
e^{-\mu^{2}/2\sigma^{2}}+\varepsilon$
Here we have used the one-dimensional absolute-value norm for actions, but the
result can be readily extended to other choices of metric on the action space.
The third inequality follows from the $Q$ function being $L_{Q}$-Lipschitz
continuous, and the final part of the statement follows from substituting in
Theorem 4.5d for $L_{Q}$. ∎
Interestingly, this result shows that there is a maximum potential error
incurred in each iteration of policy evaluation, with a non-trivial dependence
on the variance of the policy distribution.
To prove Theorem 4.6 we first provide some lemmas detailing the error analysis
for the $V^{\pi}(s)$ and $\Delta(s,a)$ terms appearing in the double-sided
bounds of Theorem 4.1 and Lemma 4.4; both of which are prone to estimation
errors.
###### Lemma 4.6a.
The maximum error in replacing $\Delta$ with $\bar{\Delta}$ (as defined in
Theorem 4.6, i.e. by using the one-point estimate for the expected $Q$-value)
is upper bounded:
$|\Delta(s,a)-\bar{\Delta}(s,a)|\leq\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}A(s^{\prime})$
where we introduce the shorthand
$A(s)=\sqrt{\frac{2}{\pi}}L_{Q}\sigma(s)e^{-\mu(s)^{2}/2\sigma(s)^{2}}+\varepsilon$,
consistent with Theorem 4.6.
###### Proof.
$\displaystyle\ \ \ \ |\Delta(s,a)-\bar{\Delta}(s,a)|$
$\displaystyle=\gamma\big{|}\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}\left(V(s^{\prime})-\bar{V}(s^{\prime})\right)\big{|}$
$\displaystyle\leq\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}\big{|}V(s^{\prime})-\bar{V}(s^{\prime})\big{|}$
$\displaystyle\leq\gamma\left(\sqrt{\frac{2}{\pi}}L_{Q}\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}\sigma(s^{\prime})e^{-\mu(s^{\prime})^{2}/2\sigma(s^{\prime})^{2}}+\varepsilon\right)$
∎
###### Lemma 4.6b.
The reward function $\Delta$ generated from an $L_{Q}$-Lipschitz continuous
function $Q(s,a)$,
$\Delta(s,a)\doteq r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}}V(s^{\prime})-Q(s,a)$
with ($L_{r},L_{p}$)-Lipschitz rewards and dynamics, is Lipschitz continuous
with
$L_{\Delta}=\max\left\\{L_{r},L_{Q},\gamma
L_{p}\left(L_{Q}(1+L_{\mathcal{N}})+(\beta\sigma_{\text{min}})^{-1}\right)\right\\}.$
###### Proof.
The function
$r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}}\bar{V}(s^{\prime})-Q(s,a),$
being a sum of Lipschitz continuous functions, is itself Lipschitz continuous,
with the Lipschitz constant being the maximum of all terms’ Lipschitz
constants:
$L_{\Delta}=\max\left\\{L_{r},L_{Q},\gamma L_{p}L_{V}\right\\},$ (37)
where $L_{V}$ is given in Lemma 4.5b. Since the relative magnitude of each
Lipschitz constant is unknown a priori, we can make no further simplification
without additional assumptions. ∎
Now we are positioned to prove Theorem 4.6, the double-sided bounds on the
soft $Q$-function with estimation errors included.
###### Theorem 4.6.
Let the $L_{Q}$-Lipschitz value function $Q^{\pi}$ and corresponding Gaussian
policy $\pi(\cdot|s)=\mathcal{N}(\mu(s),\sigma(s))$ be given, where $Q^{\pi}$
is an $\varepsilon$-optimal estimate of the true policy’s value function. For
an $(L_{r},L_{p})$-Lipschitz task with (unknown) optimal value function
$Q^{*}$, let $\bar{V}^{\pi}$ be the one-point estimate of the (known) value
function $Q^{\pi}$, and denote
$\bar{\Delta}(s,a)=r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}\bar{V}^{\pi}(s^{\prime})-Q^{\pi}(s,a)$.
Then:
$\displaystyle Q^{*}(s,a)\leq
r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}\left[\bar{V}^{\pi}(s^{\prime})+A(s^{\prime})\right]$
$\displaystyle\hskip
10.00002pt+\frac{\gamma}{1-\gamma}\left(\min_{(s,a)\in\mathcal{D}}\left(\bar{\Delta}(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}A(s^{\prime})\right)+L_{\Delta}D\right)$
$\displaystyle Q^{*}(s,a)\geq
r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}\left[\bar{V}^{\pi}(s^{\prime})-A(s^{\prime})\right]$
$\displaystyle\hskip
10.00002pt+\frac{\gamma}{1-\gamma}\left(\max_{(s,a)\in\mathcal{D}}\left(\bar{\Delta}(s,a)-\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}A(s^{\prime})\right)-L_{\Delta}D\right)$
where we let
$A(s)=\sqrt{\frac{2}{\pi}}L_{Q}\sigma(s)e^{-\mu(s)^{2}/2\sigma(s)^{2}}+\varepsilon$
and $L_{\Delta}=\max\left\\{L_{r},L_{Q},\gamma
L_{p}\left(L_{Q}(1+L_{\mathcal{N}})+(\beta\sigma_{\text{min}})^{-1}\right)\right\\}$
and $D$ denotes the diameter of the state-action space.
###### Proof.
We will prove the upper bound, with the lower bound following accordingly.
Beginning with the exact form in Theorem 4.1, the main idea is to propagate
the errors due to the single-point estimation for $\bar{V}$, the resulting
error in the calculation of $\Delta$ itself, and the $\sup(\Delta)$
estimation.
$\displaystyle Q(s,a)$ $\displaystyle\leq
r(s,a)+\gamma\left(\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}V^{\pi}(s^{\prime})+\frac{\sup\Delta(s,a)}{1-\gamma}\right)$
$\displaystyle\leq
r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}\biggr{[}\big{|}V^{\pi}(s^{\prime})-\bar{V}^{\pi}(s^{\prime})\big{|}+\bar{V}^{\pi}(s^{\prime})\biggr{]}$
$\displaystyle\hskip
40.00006pt+\frac{\gamma}{1-\gamma}\left(\min_{(s,a)\in\mathcal{D}}\Delta(s,a)+L_{\Delta}D\right)$
$\displaystyle\leq
r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}\left[\bar{V}^{\pi}(s^{\prime})+A(s^{\prime})\right]$
$\displaystyle\hskip
40.00006pt+\frac{\gamma}{1-\gamma}\left(\min_{(s,a)\in\mathcal{D}}\Delta(s,a)+L_{\Delta}D\right)$
$\displaystyle\leq
r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}\left[\bar{V}^{\pi}(s^{\prime})+A(s^{\prime})\right]$
$\displaystyle+\frac{\gamma}{1-\gamma}\left(\min_{(s,a)\in\mathcal{D}}\left(\bar{\Delta}(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}A(s^{\prime})\right)+L_{\Delta}D\right)$
where
$A(s)=\sqrt{\frac{2}{\pi}}L_{Q}\sigma(s)e^{-\mu(s)^{2}/2\sigma(s)^{2}}+\varepsilon$
and $L_{\Delta}=\max\left\\{L_{r},L_{Q},\gamma L_{p}L_{V}\right\\}$. The
second line follows from Lemma 4.4, the third line follows from Theorem 4.5,
and the fourth line follows from Lemma 4.6a. ∎
### Un-Regularized RL
We now turn to proofs of the analogous results in standard (un-regularized)
RL. We begin by using (Ng, Harada, and Russell 1999) to connect to the results
of (Adamczyk et al. 2023a) and (Cao, Cohen, and Szpruch 2021). In
un-regularized RL, Theorem 1 of (Adamczyk et al. 2023a) holds:
###### Theorem 6.1a (Ng, Harada, and Russell).
Let a (standard RL) primitive task $\mathcal{T}$ with reward function $r$ be
given, with the optimal value function $V^{*}(s)$. Consider another (standard
RL) task, $\widetilde{\mathcal{T}}$ with reward function $\widetilde{r}$, with
an unknown optimal action-value function, $\widetilde{Q}^{*}$. Define
$\kappa(s,a)\doteq\widetilde{r}(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}}V^{*}(s^{\prime})-V^{*}(s)$.
Denote the optimal action-value function $K^{*}$ as the solution of the
following Bellman optimality equation
$K^{*}(s,a)=\kappa(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}\max_{a^{\prime}}K^{*}(s^{\prime},a^{\prime})$
(38)
Then,
$\widetilde{Q}^{*}(s,a)=V^{*}(s)+K^{*}(s,a)$ (39)
###### Proof.
Since $\kappa(s,a)$ is simply the reward function $\widetilde{r}(s,a)$ shaped
by the potential function $V^{*}(s)$, this is simply a re-writing of Eq. (3)
in (Ng, Harada, and Russell 1999). ∎
Now we provide a lemma before proving a similar result for compositions.
Motivated by (Cao, Cohen, and Szpruch 2021)’s Theorem 1, we provide the same
result for standard (un-regularized) RL:
###### Lemma 4.8a.
Let $Q(s,a)$ be given. Define $V^{*}(s)=\max_{a}Q(s,a)$ as the corresponding
state value function for an un-regularized RL task. Then
$R(s,a)=Q(s,a)-\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}V^{*}(s^{\prime})$
(40)
is the reward function for a task with optimal action-value function
$Q^{*}(s,a)=Q(s,a)$.
###### Proof.
The proof is trivial, given by rearrangement of the Bellman optimality
equation. ∎
###### Theorem 4.8.
Given a set of primitive tasks $\\{\mathcal{T}_{j}\\}$ with corresponding
optimal value functions $\\{Q_{j}^{*}\\}$, denote $\widetilde{Q}^{*}$ as the
optimal value function for the composition of $\\{\mathcal{T}_{j}\\}$ under
the composition function $f:\mathbb{R}^{M}\to\mathbb{R}$.
Define $K^{*}$ as the optimal value function for a task with reward function
$\kappa$ defined by:
$\displaystyle\kappa(s,a)=f(\\{r_{j}(s,a)\\})+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}}V_{f}(s^{\prime})-V_{f}(s)$
$V_{f}(s)=\max_{a}f\left(\\{Q_{j}^{*}(s,a)\\}\right)$
Then, the optimal value functions $\widetilde{Q}^{*}$ and $K^{*}$ are related
by:
$\widetilde{Q}^{*}(s,a)=V_{f}(s)+K^{*}(s,a)$ (41)
###### Proof.
Let $f\left(\\{Q_{j}^{*}(s,a)\\}\right)$ stand for the primitive task’s
solution, as in Theorem 6.1a. Then, by Lemma 4.8a, such a value function is
optimal for an un-regularized RL task with reward function
$R(s,a)=f\left(\\{Q_{j}^{*}(s,a)\\}\right)-\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}V_{f}(s^{\prime})$,
where $V_{f}(s)=\max_{a}f\left(\\{Q_{j}^{*}(s,a)\\}\right)$. By Theorem 6.1a,
the corrective task has a reward function
$\kappa(s,a)=f\left(\\{r_{j}(s,a)\\}\right)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}}V_{f}(s^{\prime})-V_{f}(s)$
(42)
with corresponding optimal value function $K^{*}(s,a)$, related to
$\widetilde{Q}^{*}(s,a)$ by
$\widetilde{Q}^{*}(s,a)=V_{f}(s)+K^{*}(s,a)$ (43)
Again, this result can be seen as (Ng, Harada, and Russell 1999)’s reward
shaping with a potential function $\Phi(s)=V_{f}(s)$. ∎
We now note that Lemma A applies to the cases of Theorems 6.1a and 4.8, which
results in double-sided bounds given any estimate of the state value function
$V(s)$:
###### Theorem 4.9.
Consider a (standard RL) task with reward function $r(s,a)$ and (unknown)
optimal value function $Q^{*}(s,a)$. Let an estimate for the state value
function be given as $V(s)$.
The optimal value function $Q^{*}(s,a)$ is then bounded by:
$\displaystyle Q^{*}(s,a)$ $\displaystyle\geq
r(s,a)+\gamma\left(\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}V(s^{\prime})+\frac{\inf\Delta}{1-\gamma}\right)$
(44) $\displaystyle Q^{*}(s,a)$ $\displaystyle\leq
r(s,a)+\gamma\left(\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}V(s^{\prime})+\frac{\sup\Delta}{1-\gamma}\right)$
(45)
where
$\Delta(s,a)\doteq
r(s,a)+\gamma\operatorname*{\mathbb{E}}_{s^{\prime}\sim{}p}V(s^{\prime})-V(s).$
In Eqs. (44) and (45), the $\inf$ and $\sup$ are taken over the continuous
state-action space $\mathcal{S}\times\mathcal{A}$.
###### Proof.
The proof is identical to that of Theorem 4.1, except with the proper
replacement of $\Delta$. ∎
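As an illustration of Theorem 4.9, the following sketch (a toy random tabular MDP of our own construction, not from the paper) verifies the double-sided bounds against the exact $Q^{*}$ computed by value iteration; the sizes, rewards, and value estimate are all arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 5, 3, 0.9
r = rng.uniform(0.0, 1.0, (nS, nA))                  # reward table r(s, a)
p = rng.dirichlet(np.ones(nS), (nS, nA))             # dynamics p(s' | s, a)

Q = np.zeros((nS, nA))                               # exact Q* via value iteration
for _ in range(5000):
    Q = r + gamma * p @ Q.max(axis=1)

V_est = rng.uniform(0.0, 5.0, nS)                    # arbitrary estimate V(s)
delta = r + gamma * p @ V_est - V_est[:, None]       # Delta(s, a)

lower = r + gamma * (p @ V_est + delta.min() / (1 - gamma))   # Eq. (44)
upper = r + gamma * (p @ V_est + delta.max() / (1 - gamma))   # Eq. (45)
assert np.all(lower - 1e-8 <= Q) and np.all(Q <= upper + 1e-8)
```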
## Exact composition in entropy regularized RL
Here, we provide a new proof and extension of Theorem 2 in (Van Niekerk et al.
2019) to highlight that our results can provide new insight into exact
compositions in entropy-regularized RL.
To align with the assumptions of (Van Niekerk et al. 2019), we consider the
undiscounted, finite horizon setting with deterministic dynamics. We first
note the observation which forms the starting point of our analysis: the
difference between the true optimal value function ($Q^{*}(s,a)$),
corresponding to reward function $r(s,a)$, and any estimate of the value
function ($Q(s,a)$) can itself be represented as another optimal value
function, with the corresponding reward function given by (Adamczyk et al.
2023a):
$\Delta(s,a)\doteq r(s,a)+\gamma V(s^{\prime})-Q(s,a)$
It is straightforward to show that this observation remains valid in the
undiscounted ($\gamma=1$) setting as well. Now, if the estimate of the value
function is exact, we must have $\Delta(s,a)=0$. In the following, we
determine conditions which lead to $\Delta(s,a)=0$ and correspondingly to
exact compositions.
###### Proof.
We consider $M$ solved tasks with reward functions $\\{r_{1},\dotsc,r_{M}\\}$
varying only on the set of absorbing states ($s\in\mathcal{G}$). Let
$Q_{i}(s,a)$ denote the optimal value function for the $i^{\mathrm{th}}$ task.
Consider the composite task with the following reward structure:
* •
For the absorbing states ($s\in\mathcal{G}$), the reward function is given by
the reward composition function $\widetilde{r}(s,a)=g(\\{r_{i}(s,a)\\})$.
* •
For the interior states ($s\not\in\mathcal{G}$), the reward function is taken
to be the same as the solved tasks and will be denoted by $r(s,a)$.
For the composite task defined in this way, we wish to determine if the
corresponding optimal value function can be expressed exactly as some global
composition of the known value functions for the solved tasks, denoted by
$f(\\{Q_{i}(s,a)\\})$. In other words, the estimate of the optimal value
function is given by $f(\\{Q_{i}(s,a)\\})$, and we will show how a specific
form for $f$ corresponds to $\Delta(s,a)=0$ (exact composition).
In the following, we will first show that we must have $f=g$, i.e., the value
composition function must be identical to the reward composition function for
the absorbing states. We will then determine a specific form of
$f(\\{Q(s,a)\\})$ such that the corresponding reward function (i.e.
$f(\\{Q(s,a)\\})-V_{f}(s^{\prime})$, by (Cao, Cohen, and Szpruch 2021)) is
equal to the reward function for the composite task ($\widetilde{r}(s,a)$),
thus yielding $\Delta(s,a)=0$. We will do so by deriving the soft back-up
equation for $f(\\{Q(s,a)\\})$ using the soft back-up equations for the
subtasks.
We begin by observing that, on the absorbing set $\mathcal{G}$, we have
$r(s,a)=Q(s,a)$ for all $s\in\mathcal{G}$, implying that
$\widetilde{Q}(s,a)=\widetilde{r}(s,a)=g(\\{r_{i}(s,a)\\})=g(\\{Q_{i}(s,a)\\})$.
Thus, for exact composition on the absorbing set $\mathcal{G}$, the value
composition function must be the same as the reward composition function
(i.e., $f=g$), for any reward composition function $g$. Since we are interested in a
global value composition function, this means that the reward composition
function $g$ also determines the composition function $f(\\{Q(s,a)\\})$ for
states $s\not\in\mathcal{G}$. However, for arbitrary choices of $g$, the
corresponding $f(\\{Q(s,a)\\})$ will not, in general, correspond to the exact
optimal value function for states $s\not\in\mathcal{G}$.
We now consider a special class of reward composition functions $g$, such that
the corresponding value composition function $f$ is an exact composition
globally. Consider $g$ such that we have, for the absorbing states $s$,
$e^{\widetilde{r}(s,a)}=\sum_{i}w_{i}e^{r_{i}(s,a)}$ (46)
with weights $w_{i}>0$, where we have set the temperature to $1$ for simplicity.
For deterministic dynamics, focusing on the non-absorbing states (i.e.,
$s\not\in\mathcal{G}$), the soft backup equation for subtask $i$ can be
expressed as
$e^{Q_{i}(s,a)}=e^{r_{i}(s,a)}e^{V_{i}(s^{\prime})}.$ (47)
Since the subtask reward functions are identical for $s\not\in\mathcal{G}$,
this simplifies to
$e^{Q_{i}(s,a)}=e^{r(s,a)}e^{V_{i}(s^{\prime})}.$ (48)
Since the state space consists of disjoint absorbing and non-absorbing sets
(i.e., boundary and interior states, as in (Todorov 2009)), we can split into
two cases: pairs $(s,a)$ that transition to some $s^{\prime}\in\mathcal{G}$,
and all others.
Now, consider the backup equation for each subtask, where we split the
successor states into $s^{\prime}\in\mathcal{G}$ and $s^{\prime}\not\in\mathcal{G}$.
$\displaystyle e^{Q_{i}(s,a)}$ $\displaystyle=e^{r(s,a)}\times$
$\displaystyle\left(\sum_{s^{\prime}\in\mathcal{G}}p(s^{\prime}|s,a)e^{V_{i}(s^{\prime})}+\sum_{s^{\prime}\not\in\mathcal{G}}p(s^{\prime}|s,a)e^{V_{i}(s^{\prime})}\right)$
But for $s^{\prime}\in\mathcal{G}$, the state value function is simply
$V_{i}(s^{\prime})=r_{i}(s^{\prime})$. Thus we have
$\displaystyle e^{Q_{i}(s,a)}$ $\displaystyle=e^{r(s,a)}\times$ (49)
$\displaystyle\left(\sum_{s^{\prime}\in\mathcal{G}}p(s^{\prime}|s,a)e^{r_{i}(s^{\prime})}+\sum_{s^{\prime}\not\in\mathcal{G}}p(s^{\prime}|s,a)e^{V_{i}(s^{\prime})}\right)$
Now, since we have $f=g$, the optimal value composition function is given by
$e^{f(\\{Q_{i}(s,a)\\})}=\sum_{i}w_{i}e^{Q_{i}(s,a)}$ (50)
Multiplying each of the subtask backup equations (above) by the respective
weight ($w_{i}$) and summing up we obtain
$\displaystyle e^{f(\\{Q_{i}(s,a)\\})}=e^{r(s,a)}\times$
$\displaystyle\sum_{i}w_{i}\biggl{(}\sum_{s^{\prime}\in\mathcal{G}}p(s^{\prime}|s,a)e^{r_{i}(s^{\prime})}+\sum_{s^{\prime}\not\in\mathcal{G}}p(s^{\prime}|s,a)e^{V_{i}(s^{\prime})}\biggr{)}.$
Now we observe that for $f$ as defined above, the soft state-value function
$V_{f}(s^{\prime})$ derived from $f(\\{Q_{i}\\})$ satisfies:
$\displaystyle e^{V_{f}(s^{\prime})}$
$\displaystyle=\operatorname*{\mathbb{E}}_{a^{\prime}\sim{}\pi_{0}}~{}e^{f(\\{Q_{i}(s^{\prime},a^{\prime})\\})}$
$\displaystyle=\operatorname*{\mathbb{E}}_{a^{\prime}\sim{}\pi_{0}}\sum_{i}w_{i}~{}e^{Q_{i}(s^{\prime},a^{\prime})}$
$\displaystyle=\sum_{i}w_{i}~{}\operatorname*{\mathbb{E}}_{a^{\prime}\sim{}\pi_{0}}~{}e^{Q_{i}(s^{\prime},a^{\prime})}$
$\displaystyle=\sum_{i}w_{i}~{}e^{V_{i}(s^{\prime})}.$
Using the above, we obtain
$\displaystyle e^{f(\\{Q_{i}(s,a)\\})}=e^{r(s,a)}\times$
$\displaystyle\left(\sum_{s^{\prime}\in\mathcal{G}}p(s^{\prime}|s,a)e^{\widetilde{r}(s^{\prime})}+\sum_{s^{\prime}\not\in\mathcal{G}}p(s^{\prime}|s,a)e^{V_{f}(s^{\prime})}\right)$
Comparing the above equation with the form of the subtask backup equation
(49), we conclude that $f(\\{Q_{i}(s,a)\\})$ (defined in Eq. (50)) is the exact
optimal value function for the composite task with reward function
$\widetilde{r}(s,a)$ for the absorbing states ($s\in\mathcal{G}$) and $r(s,a)$
for the non-absorbing states ($s\not\in\mathcal{G}$). The result stated in the
main text (Theorem 5.1) follows, given that
$\widetilde{Q}(s,a)=f(\\{Q_{i}(s,a)\\})$. ∎
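The key step above, $e^{V_{f}}=\sum_{i}w_{i}e^{V_{i}}$ for the weighted exponential composition, can also be verified numerically; the following sketch uses random soft $Q$-tables and a uniform prior policy (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nA, M = 4, 6, 3                                # states, actions, subtasks
Q = rng.normal(size=(M, nS, nA))                   # subtask soft Q-tables
w = rng.uniform(0.1, 1.0, M)                       # positive weights w_i
pi0 = np.full(nA, 1.0 / nA)                        # uniform prior policy

V = np.log(np.exp(Q) @ pi0)                        # V_i(s) = log E_{a~pi0} e^{Q_i}
f_Q = np.log(np.einsum("m,msa->sa", w, np.exp(Q))) # f({Q_i}) = log sum_i w_i e^{Q_i}
V_f = np.log(np.exp(f_Q) @ pi0)                    # soft value of the composition

assert np.allclose(np.exp(V_f), w @ np.exp(V))     # e^{V_f} = sum_i w_i e^{V_i}
```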
## References
* Adamczyk et al. (2023a) Adamczyk, J.; Arriojas, A.; Tiomkin, S.; and Kulkarni, R. V. 2023a. Utilizing Prior Solutions for Reward Shaping and Composition in Entropy-Regularized Reinforcement Learning. _Proceedings of the AAAI Conference on Artificial Intelligence_ , 37(6): 6658–6665.
* Adamczyk et al. (2023b) Adamczyk, J.; Makarenko, V.; Arriojas, A.; Tiomkin, S.; and Kulkarni, R. V. 2023b. Bounding the optimal value function in compositional reinforcement learning. In Evans, R. J.; and Shpitser, I., eds., _Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence_ , volume 216 of _Proceedings of Machine Learning Research_ , 22–32. PMLR.
* Brockman et al. (2016) Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; and Zaremba, W. 2016. Openai gym. _arXiv preprint arXiv:1606.01540_.
* Cao, Cohen, and Szpruch (2021) Cao, H.; Cohen, S.; and Szpruch, Ł. 2021. Identifiability in inverse reinforcement learning. _Advances in Neural Information Processing Systems_ , 34: 12362–12373.
* Degrave et al. (2022) Degrave, J.; Felici, F.; Buchli, J.; Neunert, M.; Tracey, B.; Carpanese, F.; Ewalds, T.; Hafner, R.; Abdolmaleki, A.; de Las Casas, D.; et al. 2022. Magnetic control of tokamak plasmas through deep reinforcement learning. _Nature_ , 602(7897): 414–419.
* Eysenbach et al. (2019) Eysenbach, B.; Gupta, A.; Ibarz, J.; and Levine, S. 2019. Diversity is all you need: Learning skills without a reward function. _International Conference on Learning Representations_.
* Eysenbach and Levine (2022) Eysenbach, B.; and Levine, S. 2022. Maximum Entropy RL (Provably) Solves Some Robust RL Problems. In _International Conference on Learning Representations_.
* Fazlyab et al. (2019) Fazlyab, M.; Robey, A.; Hassani, H.; Morari, M.; and Pappas, G. 2019. Efficient and accurate estimation of lipschitz constants for deep neural networks. _Advances in Neural Information Processing Systems_ , 32.
* Haarnoja et al. (2018a) Haarnoja, T.; Pong, V.; Zhou, A.; Dalal, M.; Abbeel, P.; and Levine, S. 2018a. Composable deep reinforcement learning for robotic manipulation. In _2018 IEEE international conference on robotics and automation (ICRA)_ , 6244–6251. IEEE.
* Haarnoja et al. (2018b) Haarnoja, T.; Zhou, A.; Abbeel, P.; and Levine, S. 2018b. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In _International conference on machine learning_ , 1861–1870. PMLR.
* Kim, Park, and Kim (2022) Kim, J.; Park, S.; and Kim, G. 2022. Constrained GPI for Zero-Shot Transfer in Reinforcement Learning. In Koyejo, S.; Mohamed, S.; Agarwal, A.; Belgrave, D.; Cho, K.; and Oh, A., eds., _Advances in Neural Information Processing Systems_ , volume 35, 4585–4597. Curran Associates, Inc.
* Lee et al. (2021) Lee, K.; Laskin, M.; Srinivas, A.; and Abbeel, P. 2021. Sunrise: A simple unified framework for ensemble learning in deep reinforcement learning. In _International Conference on Machine Learning_ , 6131–6141. PMLR.
* Nemecek and Parr (2021) Nemecek, M.; and Parr, R. 2021. Policy caches with successor features. In _International Conference on Machine Learning_ , 8025–8033. PMLR.
* Ng, Harada, and Russell (1999) Ng, A. Y.; Harada, D.; and Russell, S. 1999. Policy invariance under reward transformations: Theory and application to reward shaping. In _Proceedings of the 16th International Conference on Machine Learning_ , volume 99, 278–287.
* Park et al. (2023) Park, S.; Lee, K.; Lee, Y.; and Abbeel, P. 2023. Controllability-Aware Unsupervised Skill Discovery. arXiv:2302.05103.
* Rachelson and Lagoudakis (2010) Rachelson, E.; and Lagoudakis, M. G. 2010. On the Locality of Action Domination in Sequential Decision Making. In _11th International Symposium on Artificial Intelligence and Mathematics (ISIAM 2010)_ , 1–8. Fort Lauderdale, US.
* Raffin et al. (2021) Raffin, A.; Hill, A.; Gleave, A.; Kanervisto, A.; Ernestus, M.; and Dormann, N. 2021. Stable-Baselines3: Reliable Reinforcement Learning Implementations. _Journal of Machine Learning Research_ , 22(268): 1–8.
* Rusu et al. (2016) Rusu, A. A.; Rabinowitz, N. C.; Desjardins, G.; Soyer, H.; Kirkpatrick, J.; Kavukcuoglu, K.; Pascanu, R.; and Hadsell, R. 2016. Progressive neural networks. _arXiv preprint arXiv:1606.04671_.
* Schrittwieser et al. (2020) Schrittwieser, J.; Antonoglou, I.; Hubert, T.; Simonyan, K.; Sifre, L.; Schmitt, S.; Guez, A.; Lockhart, E.; Hassabis, D.; Graepel, T.; et al. 2020. Mastering atari, go, chess and shogi by planning with a learned model. _Nature_ , 588(7839): 604–609.
* Silver et al. (2018) Silver, D.; Hubert, T.; Schrittwieser, J.; Antonoglou, I.; Lai, M.; Guez, A.; Lanctot, M.; Sifre, L.; Kumaran, D.; Graepel, T.; Lillicrap, T.; Simonyan, K.; and Hassabis, D. 2018. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. _Science_ , 362(6419): 1140–1144.
* Sutton and Barto (2018) Sutton, R. S.; and Barto, A. G. 2018. _Reinforcement learning: An introduction_. MIT press.
* Tasse, James, and Rosman (2020) Tasse, G. N.; James, S.; and Rosman, B. 2020. A Boolean task algebra for reinforcement learning. _Advances in Neural Information Processing Systems_ , 33: 9497–9507.
* Tasse, James, and Rosman (2021) Tasse, G. N.; James, S.; and Rosman, B. 2021. Generalisation in Lifelong Reinforcement Learning through Logical Composition. In _Deep RL Workshop NeurIPS 2021_.
* Todorov (2009) Todorov, E. 2009. Compositionality of optimal control laws. _Advances in Neural Information Processing Systems_.
* Van Niekerk et al. (2019) Van Niekerk, B.; James, S.; Earle, A.; and Rosman, B. 2019. Composing value functions in reinforcement learning. In _International conference on machine learning_ , 6401–6409. PMLR.
* Vinyals et al. (2019) Vinyals, O.; Babuschkin, I.; Czarnecki, W. M.; Mathieu, M.; Dudzik, A.; Chung, J.; Choi, D. H.; Powell, R.; Ewalds, T.; Georgiev, P.; et al. 2019. Grandmaster level in StarCraft II using multi-agent reinforcement learning. _Nature_ , 575(7782): 350–354.
* Ziebart (2010) Ziebart, B. D. 2010. _Modeling purposeful adaptive behavior with the principle of maximum causal entropy_. PhD Dissertation, Carnegie Mellon University.
# The Atari Disk, a Metal-Poor Stellar Population in the Disk System of the
Milky Way
Mohammad K. Mardini Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa,
Chiba 277-8583, Japan Institute for AI and Beyond, The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan Anna Frebel Department of
Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts
Institute of Technology, Cambridge, MA 02139, USA Anirudh Chiti Department of
Astronomy $\&$ Astrophysics, University of Chicago, 5640 S Ellis Avenue,
Chicago, IL 60637, USA Kavli Institute for Cosmological Physics, University
of Chicago, Chicago, IL 60637, USA Yohai Meiron SciNet High Performance
Computing Consortium, University of Toronto, 661 University Ave., Toronto, ON
M5G 1M1, Canada Kaley V. Brauer Department of Physics and Kavli Institute for
Astrophysics and Space Research, Massachusetts Institute of Technology,
Cambridge, MA 02139, USA Xiaowei Ou Department of Physics and Kavli Institute
for Astrophysics and Space Research, Massachusetts Institute of Technology,
Cambridge, MA 02139, USA Mohammad K. Mardini<EMAIL_ADDRESS>
###### Abstract
We have developed a chemo-dynamical approach to assign 36,010 metal-poor
SkyMapper stars to various Galactic stellar populations. Using two independent
techniques (velocity and action space behavior), $Gaia$ EDR3 astrometry, and
photometric metallicities, we selected stars with the characteristics of the
”metal-weak” thick disk population by minimizing contamination by the
canonical thick disk or other Galactic structures. This sample comprises 7,127
stars, spans a metallicity range of $-3.50<$$[\mathrm{Fe}/\mathrm{H}]$
$<-0.8$, and has a systematic rotational velocity of $\langle
V_{\phi}\rangle=154$ km s-1 that lags that of the thick disk. Orbital
eccentricities have intermediate values between typical thick disk and halo
values. The scale length is $h_{R}=2.48^{+0.05}_{-0.05}$ kpc and the scale
height is $h_{Z}=1.68^{+0.19}_{-0.15}$ kpc. The metallicity distribution
function is well fit by an exponential with a slope of $\Delta\log{\rm
N}/\Delta[\mathrm{Fe}/\mathrm{H}]=1.13\pm 0.06$. Overall, we find a
significant metal-poor component consisting of 261 SkyMapper stars with
$[\mathrm{Fe}/\mathrm{H}]$$<-2.0$. While our sample contains only eleven stars
with $[\mathrm{Fe}/\mathrm{H}]$ $\lesssim-3.0$, investigating the JINAbase
compilation of metal-poor stars reveals another 18 such stars (five have
$[\mathrm{Fe}/\mathrm{H}]$$<-4.0$) that kinematically belong to our sample.
These distinct spatial, kinematic and chemical characteristics strongly
suggest this metal-poor, phase-mixed kinematic sample to represent an
independent disk component with an accretion origin in which a massive dwarf
galaxy radially plunged into the early Galactic disk. Going forward, we
propose to call the metal-weak thick disk population the Atari disk, given
its likely accretion origin and in reference to its sharing space with the
Galactic thin and thick disks.
Galaxy: formation – Galaxy: structure – Galaxy: disk – Galaxy: kinematics and
dynamics – Galaxy: abundances
††journal: ApJ
## 1 Introduction
The existence of chemo-dynamically distinguishable components of the Galactic
disk was first proposed several decades ago, in which the “thick disk” was
introduced as a distinct component of the Milky Way disk (e.g., Gilmore &
Reid, 1983). Many studies have investigated in detail the nature of this
component, which is considered the “canonical thick disk” by determining its
age (older than 8 Gyr; Kilic et al., 2017), velocity dispersion
($\sigma_{z}\approx 35\,$km s-1 Norris, 1993), metallicity distribution
(peaking at $[\mathrm{Fe}/\mathrm{H}]$
$\approx-0.5$111$[\mathrm{Fe}/\mathrm{H}]$=
$\log_{10}(N_{\text{Fe}}/N_{\text{H}})_{\star}-\log_{10}(N_{\text{Fe}}/N_{\text{H}})_{\sun}$;
Kordopatis et al. 2011), and relative abundance ([X/Fe]) trends (see; Bensby
et al., 2005). In addition, a seemingly more metal-poor
($[\mathrm{Fe}/\mathrm{H}]$ $<-0.8$) stellar population within this canonical
thick disk was identified (Norris et al., 1985; Morrison, 1990), and termed
the “metal-weak thick disk” (MWTD) (e.g., Chiba & Beers, 2000).
While various properties of the canonical thick disk could be conclusively
determined, the metal-weak thick disk remained insufficiently studied, likely
due to its somewhat elusive nature. For example, several open questions remain
regarding its nature – what are the upper and lower $[\mathrm{Fe}/\mathrm{H}]$
bounds characterizing the metal-weak thick disk? How did it form and evolve?
Is it mainly the metal-poor tail of, and hence associated with, the canonical
thick disk, or actually a separate component of the Milky Way’s disk? Several
clues from recent chemo-dynamical analyses (Carollo et al., 2019; An & Beers,
2020) suggested that the metal-weak thick disk is independent of the
canonical thick disk, with plausibly distinct spatial, kinematic, chemical,
and age distributions.
Moreover, recent reports of very and extremely metal-poor stars being part of
the Milky Way disk system have provided further insights into, and questions
about, the formation of the Galactic disk system and the Milky Way itself
(Sestito et al., 2019; Carter et al., 2020; Cordoni et al., 2020; Di Matteo et
al., 2020; Venn et al., 2020). The existence of these low-metallicity stars in
the disk could be a signature of an early component of this disk system,
assembled from a massive building block(s) entering the proto-Milky Way.
Alternatively, these stars might have formed in the early disk system, which
was later dynamically heated.
More generally, investigating thick disk origin scenarios through metal-poor
stellar samples of the disk may shed light on the nature and origin of the
metal-weak thick disk; for instance, in gauging whether metal-weak thick disk
stars have consistent behavior(s) or an implied origin that aligns with stars
belonging to the canonical thick disk. In this paper, we implement several
approaches using kinematics derived from the Gaia mission (Gaia Collaboration
et al., 2016, 2020) and photometric metallicities (Chiti et al., 2020, 2021a)
obtained from using public SkyMapper DR2 data (Onken et al., 2020) to select a
clean and representative sample of metal-poor stars of the metal-weak thick
disk.
In addressing to what extent the metal-weak thick disk can be viewed as a
component distinct from the canonical thick disk, in order to learn about its
early formation and evolution, we found that it is indeed characterizable as a
distinct spatial, kinematic, and chemical stellar component. While it appears
independent of the thick disk, this disk component remains described by its
low-metallicity stellar content, as originally envisioned with the description
of “metal-weak thick disk”. To account for the different nature of this
component, we propose to call it the Atari disk (with Atari 辺り meaning “in the
neighborhood” or “nearby” in Japanese), in reference to it sharing close space
with the Galactic thin and thick disks. This paper explores a full
characterization of the nature of the Atari disk which appears to have an
accretion origin in which a massive dwarf galaxy plunged into the early
Galactic disk.
## 2 Sample Selection and Quality Assessment
To build a representative sample of Atari disk/MWTD stars, we applied the
following procedure. We used the photometric metallicity catalog presented in
Chiti et al. (2021a), which provides metallicities
($[\mathrm{Fe}/\mathrm{H}]$) for $\sim 280,000$ relatively bright (g
$\leqslant 17$) and cool (0.35 $<$ $g-i$ $<$ 1.20) giants using metallicity-
sensitive photometry from SkyMapper DR2 (Onken et al., 2020). We then limited
the sample to $g-i>0.65$ and random metallicity uncertainties $<0.5$ dex,
following Chiti et al. (2021b), to ensure a high-quality sample of photometric
metallicities. We cross-matched this sample with the early third data release
of the Gaia mission (Gaia EDR3, Gaia Collaboration et al., 2020; Lindegren et
al., 2020a) to collect five-parameter astrometric solutions (sky positions:
$\alpha$, $\delta$, proper motions: $\mu_{\alpha}\cos\delta$, $\mu_{\delta}$,
and parallaxes: $\varpi$). For sources typical of our sample (e.g., brighter
than G = 17 mag), Gaia EDR3 provides meaningfully more accurate astrometric
measurements than Gaia DR2. For instance, the parallax errors
($\Delta\varpi$) in our sample improve by $20\%$, and the proper motion
uncertainties improve by a factor of two.
In addition to these improvements, Lindegren et al. (2020a) introduced several
quality cuts for the selection of reliable astrometric solutions. We thus
apply the following restrictions based on the Gaia EDR3 quality flags which
reduces our sample to 169,530 stars:
* •
astrometric_excess_noise ($<$ 1 mas): Higher values might indicate that the
astrometric solution for the target has failed and/or that the star is in a
multiple system for which the single-object solution is not reliable (without
filtering on the astrometric excess noise, artefacts might be present; see
Appendix C of Lindegren et al., 2018). This also accounts for the fact that
our metallicity technique may fail for binaries.
* •
parallax_over_error ($\geqslant$ 5): Ensures reliable distance measurements
(i.e., 20% uncertainty or better).
For reference, typical uncertainties in the parallaxes and proper motions of
the resulting sample of stars are 0.01 mas and 0.02 mas yr-1, respectively.
To calculate the full space motions of our sample, line-of-sight radial
velocities (RV) are required. About $\sim$7 million stars have RV measurements
in the $Gaia$ DR2 catalog which is similar to what is available in Gaia EDR3.
Yet only $\sim 19\,\%$ of our sample have any of these RV measurements. We
apply an additional quality cut (dr2_radial_velocity_error $<3.0$ km s-1) to
conservatively select stars with reliable RV values. This results in a sample
of 28,714 stars. To further increase the size of our sample, we collected
additional high-quality RV data from other surveys. We acquired 311, 1581,
771, and 4905 unique measurements from the APOGEE DR16, LAMOST DR6, RAVE DR5,
and GALAH DR3 surveys, respectively (Majewski et al., 2017; Cui et al., 2012;
Kunder et al., 2017; Buder et al., 2021). For stars with multiple
spectroscopic RV measurements, we keep the one with the highest S/N. After
including these datasets, the final sample of stars with available RV
measurements increases to 36,010.
We followed Lindegren et al. (2020b) by assuming that additional parallax zero
point ($\varpi_{zp}$) corrections are required for each star. These
corrections utilize the magnitude, color, and ecliptic latitude of each source
to compute an individual $\varpi_{zp}$ correction for each star in our sample.
For our sample, $\varpi_{zp}$ ranges from $-0.047$ to $0.004$ mas, as
shown in the upper panel of Figure 1. We obtained corrected parallaxes
($\varpi_{corr}$) by subtracting the estimated $\varpi_{zp}$ from the $Gaia$
EDR3 parallaxes ($\varpi_{corr}=\varpi-\varpi_{zp}$).
Figure 1: Top panel: Distribution of our calculated parallax zero points for
our final sample of 36,010 stars with available RV measurements. Bottom panel:
Calculated parallax distances with the zero-point correction (black dots),
without the zero-point correction (gray dots), and the Bailer-Jones et al.
(2021) values (blue dots) as a function of the mean value distances calculated
using a space density prior. The blue solid line represents the one-to-one
relation.
Stellar distances derived from directly inverting these corrected parallaxes
($d=1/\varpi$) should principally be reliable and not introduce additional
biases (e.g., Mardini et al., 2019a, b). However, as an additional check, we
calculated the distance of each star in our sample by implementing a Monte
Carlo simulation with an exponentially decreasing space density prior, as
presented in Bailer-Jones et al. (2018); we label these “SDP” distances (at
the time we started this project, the catalog of Bailer-Jones et al. 2021 was
not yet public). For this, we generated 10,000 realizations for each star
assuming a normal distribution with $\varpi$ as the central value and the
dispersion given by $\Delta\varpi$. We adopt the mean value as our final
distance estimate. The lower panel of Figure 1 shows a direct comparison
between distances calculated by inverting the parallax, our SDP approach, and
the distances in Bailer-Jones et al. (2021). For the parallax distances, we
show two versions, one obtained without the zero-point correction, and one
after the zero-point correction was applied.
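For concreteness, a simplified sketch of this Monte Carlo procedure is given below; the scale length $L$ and the prior-weighting scheme are illustrative stand-ins of our own, and the paper's exact realization-by-realization bookkeeping may differ:

```python
import numpy as np

rng = np.random.default_rng(42)

def sdp_distance(plx_mas, plx_err_mas, zp_mas, L_kpc=1.35, n=10_000):
    """Mean distance (kpc) from n corrected-parallax realizations,
    re-weighted by an exponentially decreasing space density prior."""
    plx = rng.normal(plx_mas - zp_mas, plx_err_mas, n)  # corrected draws (mas)
    d = 1.0 / plx[plx > 0]                              # naive distances (kpc)
    w = d**2 * np.exp(-d / L_kpc)                       # EDSD prior weights
    return np.average(d, weights=w)

# e.g. a star with plx = 0.30 mas, error 0.03 mas, zero point -0.03 mas:
print(sdp_distance(0.30, 0.03, -0.03))                  # roughly 1/0.33 ~ 3 kpc
```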
Out to 3 kpc, all three distance measurements agree reasonably well. Beyond
that, the un-corrected parallax distances are overestimated (the effect is
prominent from 1-8 kpc) compared to SDP distances, with the effect becoming
worse at the largest distances. However, the corrected parallax distances show
excellent agreement with the SDP distances. We adopt the SDP distances for our
entire sample, since they are more statistically vetted, though we note that
the differences between those distances and the corrected parallax distances
are minor. Table 1 then lists $Gaia$ source ID, velocities, SDP distances, and
orbital actions for each star of our final sample.
Table 1: Stellar Parameters and $Gaia$ Astrometric Solutions
source_id | parallax | $\mu_{\alpha}\cos(\delta)$ | $\mu_{\delta}$ | RV | l | b | $d_{\text{SDP}}$ | $[\mathrm{Fe}/\mathrm{H}]$ | $L_{z}$ | $J_{r}$ | $J_{z}$
---|---|---|---|---|---|---|---|---|---|---|---
| (mas) | (mas yr-1) | (mas yr-1) | (km s-1) | (deg) | (deg) | (kpc) | (dex) | (kpc km s-1) | (kpc km s-1) | (kpc km s-1)
2334983060942097664 | 0.671 | 2.330 | $-$28.582 | 70.27 | 37.245 | $-$78.457 | 1.49 | $-$0.86 | 542.99 | 389.25 | 192.81
4918304837496677248 | 0.304 | 4.711 | $-$0.954 | 53.08 | 314.997 | $-$56.947 | 3.31 | $-$0.79 | 1305.19 | 26.08 | 163.45
4901413310941027072 | 1.125 | 32.894 | $-$1.159 | 119.69 | 312.286 | $-$52.664 | 0.89 | $-$1.12 | 1015.84 | 158.48 | 198.71
2421111311440455168 | 0.198 | $-$2.679 | $-$9.637 | $-$104.37 | 80.127 | $-$71.531 | 5.09 | $-$3.20 | 607.15 | 696.21 | 236.33
2422492847800684416 | 0.300 | $-$0.500 | $-$23.563 | $-$110.44 | 83.613 | $-$70.002 | 3.36 | $-$2.42 | $-$607.64 | 753.16 | 219.27
2339756040919188480 | 0.257 | 5.320 | $-$7.972 | 69.20 | 47.902 | $-$77.903 | 3.93 | $-$1.25 | 605.90 | 238.43 | 356.76
4991401092065999360 | 0.376 | 0.444 | 3.777 | 20.18 | 328.375 | $-$68.950 | 2.67 | $-$0.81 | 2081.24 | 131.16 | 145.21
2340071806914929920 | 0.517 | $-$4.974 | $-$34.741 | $-$97.15 | 50.676 | $-$77.698 | 1.96 | $-$1.97 | $-$215.55 | 884.75 | 90.11
4995919432020493440 | 0.475 | $-$2.472 | $-$1.622 | $-$34.00 | 333.978 | $-$71.668 | 2.11 | $-$0.89 | 1867.50 | 54.88 | 102.31
2341853840385868288 | 0.320 | 5.335 | $-$19.603 | $-$110.72 | 62.991 | $-$76.206 | 3.15 | $-$1.74 | $-$508.89 | 376.26 | 137.58
2314830593353777280 | 0.727 | 30.731 | $-$7.730 | $-$18.87 | 13.964 | $-$78.469 | 1.38 | $-$0.92 | 852.38 | 467.33 | 36.53
4994799132751744128 | 0.354 | 0.902 | $-$4.096 | $-$4.87 | 331.310 | $-$70.548 | 2.84 | $-$0.88 | 1411.96 | 37.25 | 127.50
4688252950170144640 | 0.437 | 22.078 | $-$2.191 | $-$28.16 | 307.226 | $-$41.562 | 2.29 | $-$1.06 | 1303.86 | 427.46 | 60.41
2320839596198507648 | 0.604 | 5.558 | $-$11.909 | 9.12 | 14.965 | $-$78.575 | 1.66 | $-$1.05 | 1118.50 | 145.25 | 49.47
2340104929702744192 | 0.185 | $-$1.568 | $-$12.358 | $-$59.60 | 51.845 | $-$77.765 | 5.44 | $-$1.34 | $-$49.71 | 827.14 | 429.74
4901401907804117632 | 0.693 | 43.595 | $-$24.514 | 7.91 | 312.082 | $-$52.667 | 1.44 | $-$1.39 | $-$19.81 | 874.88 | 94.39
2340079091179498496 | 0.732 | $-$12.163 | $-$11.067 | 27.42 | 50.690 | $-$77.917 | 1.38 | $-$0.93 | 1792.91 | 178.37 | 47.83
Note. — Parallax is the corrected parallax based on Lindegren et al. (2020b).
$d_{\text{SDP}}$ is the mean value of the 10,000 realizations and
$[\mathrm{Fe}/\mathrm{H}]$ adopted from Chiti et al. (2021a). The complete
version of Table 1 is available online only. A short version is shown here to
illustrate its form and content. Distance and actions are rounded to two
digits, but are given at full numerical precision in the online table.
## 3 Derivation of Kinematic Parameters
### 3.1 Position and Velocity Transformation
We transform Galactic longitude ($l$), Galactic latitude ($b$), and distance
to the Sun ($d$) to rectangular Galactocentric coordinates ($X,Y,Z$) using the
following coordinate transformations:
$\displaystyle X=R_{\odot}-d\,\cos(l)\,\cos(b)$ $\displaystyle
Y=-d\,\sin(l)\,\cos(b)$ (1) $\displaystyle Z=d\,\sin(b),$
where the Sun is located at R${}_{\odot}=8.178\pm 0.013$ kpc from the Galactic
center (Gravity Collaboration et al., 2019); $X$ is taken to be oriented
toward $l$=0∘, $Y$ toward $l$=90∘, and $Z$ toward the north Galactic pole.
We transform $\mu_{\alpha}\cos\delta$, $\mu_{\delta}$, and RV measurements to
rectangular Galactic ($U,V,W$) velocities with respect to the Local Standard
of Rest (LSR). $U$ is oriented toward the Galactic center, $V$ in the
direction of Galactic rotation, and $W$ toward the North Galactic pole. We
adopt the peculiar motion of the Sun ($U_{\odot}=11.1\pm 0.72$ km s-1,
$V_{\odot}=12.24\pm 0.47$ km s-1, and $W_{\odot}=7.25\pm 0.36$ km s-1) from
Schönrich et al. (2010), and a maximum height of $z_{\odot}=20.8\pm 0.3$ pc of
the Sun (Bennett & Bovy, 2019) above the plane. We take VLSR = $220$ km s-1
from Kerr & Lynden-Bell (1986)444Using more recent values (e.g., 232.8 $\pm$
3.0 km s-1; McMillan 2017) did not produce large discrepancies in the Galactic
component classifications/membership. However, using such higher LSR value
would shift the $<V_{\phi}>$ by 10 km s-1, which might create some confusion
for the reader once we compare our calculated $<V_{\phi}>$ with literature
values calculated using LSR = 220 km s-1.
We transform $U,V,W$ to velocities in cylindrical Galactocentric coordinates
($V_{R},V_{\phi},V_{z}$) using the following coordinate transformations:
$\displaystyle V_{R}=U\cos(\phi)+(V+V_{rot})\sin(\phi)$ $\displaystyle
V_{\phi}=(V+V_{rot})\cos(\phi)-U\sin(\phi)$ (2) $\displaystyle V_{z}=W$
where $\cos(\phi)=X/\sqrt{X^{2}+Y^{2}}$,
$\sin(\phi)=Y/\sqrt{X^{2}+Y^{2}}$, $V_{rot}$ is the circular velocity of the
LSR, and objects with $V_{\phi}>0$ km s-1 are retrograde.
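A direct implementation of Eqs. (1) and (2) is straightforward (a sketch using the solar parameters quoted above; the conversion of proper motions and RV into the rectangular $U,V,W$ velocities, e.g., via astropy, is omitted here):

```python
import numpy as np

R_SUN = 8.178      # kpc (Gravity Collaboration et al. 2019)
V_LSR = 220.0      # km/s (Kerr & Lynden-Bell 1986)

def galactocentric_xyz(l_deg, b_deg, d_kpc):
    """Rectangular Galactocentric coordinates, Eq. (1)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    x = R_SUN - d_kpc * np.cos(l) * np.cos(b)
    y = -d_kpc * np.sin(l) * np.cos(b)
    z = d_kpc * np.sin(b)
    return x, y, z

def cylindrical_velocities(x, y, u, v, w):
    """Cylindrical velocities, Eq. (2); (u, v, w) are LSR-relative, in km/s."""
    R = np.hypot(x, y)
    cos_phi, sin_phi = x / R, y / R
    v_R = u * cos_phi + (v + V_LSR) * sin_phi
    v_phi = (v + V_LSR) * cos_phi - u * sin_phi
    return v_R, v_phi, w
```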
### 3.2 Orbital Parameters
We used galpy and a scaled version of MWPotential2014 potential (Bovy, 2015)
to derive orbital parameters (rperi, rapo, and Zmax) for each star. The
modified MWPotential2014 contains (i) a potential based on a virial mass of
$M_{200}=1.4\times 10^{12}\,M_{\odot}$ instead of a canonical, shallower NFW
profile, and (ii) a concentration parameter ($c=8.25$) that matches the
rotation curve of the Milky Way. This modification helps overcome an issue of
erroneously identifying unbound stars, a known issue of the original
MWPotential2014 potential.
We define the total orbital energy as
$E=(1/2)\mbox{\boldmath$v$}^{2}+\Phi(\mbox{\boldmath$x$})$ and set $E=0$ at a
very large distance from the Galactic center. We define the eccentricity as
$e=(r_{\mathrm{apo}}-r_{\mathrm{peri}})/(r_{\mathrm{apo}}+r_{\mathrm{peri}})$
and the vertical angular momentum component as $L_{z}=R\times V_{\phi}$ (see
Mackereth & Bovy, 2018, for more details). The distance from the Galactic
center (cylindrical radius) is set by $R=\sqrt{X^{2}+Y^{2}}$. We calculate
these orbital parameters based on the starting point obtained from the
observations via a Markov Chain Monte Carlo sampling method, assuming normally
distributed input parameters around their observed values. We generate 10,000
realizations based on the observed input for each star to obtain medians and
standard deviations of all kinematic parameters and to infer their values and
associated uncertainties. We note that we use these orbital properties in all
of the remaining Sections in the paper except in Section 4.1, where we follow
a separate approach to assign stars to their Galactic components.
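A minimal galpy sketch of this computation is given below; the input observables and the halo re-scaling factor are illustrative placeholders rather than our exact configuration:

```python
import numpy as np
from astropy import units as u
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

pot = [p for p in MWPotential2014]
pot[2] *= 1.5          # heavier halo (illustrative re-scaling of the NFW term)

# [RA (deg), Dec (deg), d (kpc), pm_RA cos(Dec), pm_Dec (mas/yr), RV (km/s)]
o = Orbit([229.0, -1.1, 2.1, 5.3, -7.9, 69.2], radec=True)   # made-up star
o.integrate(np.linspace(0.0, 10.0, 10001) * u.Gyr, pot)

print(o.rperi(), o.rap(), o.zmax(), o.e())   # r_peri, r_apo, Z_max, eccentricity
```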
## 4 Identification of the Metal-Weak Thick Disk/Atari Disk and other
Galactic Components
The Galactic thick disk has been extensively studied as part of learning about
the formation and evolution of the Milky Way system. Previous studies
(beginning with Gilmore & Reid 1983 and the many others over the last several
decades) all selected member stars assuming the thick disk to be a one-
component, single population. However, it has long been suspected (e.g.,
Norris et al. 1985) that a small portion of this canonical thick disk might
actually be a separate, more metal-poor component that was eventually termed
the “metal-weak thick disk” (Chiba & Beers, 2000). It was found that it has a
mean rotational velocity $\langle V_{\phi}\rangle\,\sim 150$ km s-1 (e.g.,
Carollo et al., 2019). This presents a notable lag compared to the rotational
velocity of the canonical thick disk with $\langle V_{\phi}\rangle\,\sim 180$
km s-1 (e.g., Carollo et al., 2019). Yet, more details remained outstanding to
fully characterize this elusive body of metal-poor disk stars.
Beers et al. (2014) suggested specific criteria to select MWTD stars, i.e.
$Z_{max}\leqslant 3$ kpc and $-1.8\leqslant$
$[\mathrm{Fe}/\mathrm{H}]$$\leqslant-0.8$. More recently Naidu et al. (2020)
also suggested a MWTD selection criteria, namely $-2.5\leqslant$
$[\mathrm{Fe}/\mathrm{H}]$$\leqslant-0.8\land(0.25<\rm{[\alpha/Fe]}<0.45)\land\
(J_{\phi}/\sqrt{J_{\rm{\phi}}^{2}+J_{\rm{z}}^{2}+J_{\rm{R}}^{2}}<-0.5)$. The
chosen lower metallicity bounds aim to avoid possible contamination with the
metal-poor halo. This prevents exploration of the potential existence of
extremely low-metallicity stars typically associated with the halo within the
MWTD (hereafter Atari). If the Atari disk has an accretion origin, it is
principally expected that at least some extremely metal-poor stars should have
been brought in from the early evolution of the progenitor.
Another selection has also been suggested by Carollo et al. (2019), based on
enhanced $\alpha$-abundances and an angular momentum of L${{}_{z}}\sim 1200$
${\rm kpc}$ km s-1 (the less-prograde group in their Figure 1(a)), to
characterize the Atari disk. However, using angular momentum as the sole
discriminator can only select stars within a given radial bracket, as Lz
varies as a function of Galactic radius $R$ (due to a roughly constant
rotational velocity throughout the outer Galactic parts). For instance, for a
sample restricted to the solar vicinity around R$\sim$8 kpc, and using the
suggested rotational velocity of V${}_{\phi}=150$ km s-1, the resulting
angular momentum is Lz = $1200$ ${\rm kpc}$ km s-1. But for a more distant
sample, e.g., at $3<$R$<5$ kpc, Lz peaks between $462$ and $770$ ${\rm kpc}$
km s-1 (see Figure 2).
Figure 2: Distribution of the angular momenta of our final SMSS Atari disk
sample for different Galactic radii (R) cuts. The distributions have the same
x-axis range to allow visual comparisons of the shift of the peak of the Lz
distributions between different R bins. The selection procedure for this
sample is explained in Section 4.1 and 4.2.
These different selection approaches show that it remains difficult to cleanly
select Atari disk samples given that candidate stars have very similar
chemical and kinematic properties to those of the canonical thick disk. In the
following, we thus explore a different identification process to characterize
the Atari disk, based on two different techniques (space velocities and
behavior in action space) with the aim of selecting a representative, clean
sample. We start by identifying stars in the thick disk using both of these
methods, and then apply a metallicity cut to isolate the Atari disk sample. This
approach returns a sample of stars with kinematic properties in line with what
was previously identified as the MWTD and thus allows us to more firmly
establish the properties of this elusive Galactic component, including its
low-metallicity member stars.
### 4.1 Galactic Space Velocities Approach
In order to select the traditional thin disk, thick disk and halo components,
we adopt the kinematic analysis method presented in Bensby et al. (2003) that
assumes the Galactic space velocities to have Gaussian distributions defined
as follows:
$\displaystyle f(U,V,W)=k\cdot\exp\left(-\frac{U_{\textrm{LSR}}^{2}}{2\sigma_{U}^{2}}-\frac{(V_{\textrm{LSR}}-V_{\textrm{asym}})^{2}}{2\sigma_{V}^{2}}-\frac{W_{\textrm{LSR}}^{2}}{2\sigma_{W}^{2}}\right)$
(3)
where
$\displaystyle k=\frac{1}{(2\pi)^{3/2}\sigma_{U}\sigma_{V}\sigma_{W}}$
The expressions $\sigma_{U}$, $\sigma_{V}$, and $\sigma_{W}$ denote the
characteristic dispersions of each Galactic velocity component. The
$V_{\textrm{asym}}$ denotes the asymmetric drift. We adopt these values from
Table 1 in Bensby et al. (2003).
To calculate the relative likelihood for a given star of being a member of a
specific Galactic population, we take into account the observed number
densities (thin disk $X_{D}=0.8649$, thick disk $X_{TD}=0.13$, and halo
$X_{H}=0.0051$) in the solar neighborhood vicinity (which we assume to be $\pm
3$ kpc from the Sun) as reported in Jurić et al. (2008). Therefore, the
relative probabilities for the thick disk-to-thin disk (TD/D) and thick disk-
to-halo (TD/H) ratios are defined as follows:
$\displaystyle\textrm{TD/D}=\frac{X_{\textrm{TD}}\cdot
f_{\textrm{TD}}}{X_{\textrm{D}}\cdot f_{\textrm{D}}}$
$\displaystyle\textrm{TD/H}=\frac{X_{\textrm{TD}}\cdot
f_{\textrm{TD}}}{X_{\textrm{H}}\cdot f_{\textrm{H}}}$
Following these TD/D and TD/H definitions, we assign every star that has a membership
probability of TD/D $>2.0$ to the Galactic thick disk, while stars with TD/D
$<0.5$ are assigned to the Galactic thin disk. Furthermore, we exclude all
stars with TD/H $<10.0$ from the thick disk sample to minimize any possible
contamination with halo stars. Our selection results in 10,588 thick disk
stars, 2,571 thin disk stars, and 15,096 halo stars. Figure 3 shows a Toomre
diagram of all these Galactic components, with typical halo stars having
$v_{{\rm tot}}>180\,{\rm km\,s^{-1}}$, and thick disk stars having $70\,{\rm
km\,s^{-1}}<v_{{\rm tot}}<180\,{\rm km\,s^{-1}}$. We note that discarding
these low TD/H stars produces the small gap between the distributions of the
thick disk (red) and halo (yellow) samples in Figure 3.
Figure 3: Toomre diagram for our halo, thick disk, and thin disk stars in
yellow, red, and gray points, respectively. Blue dashed curves denote $v_{{\rm
tot}}=\sqrt{U_{{\rm LSR}}^{2}+V_{{\rm LSR}}^{2}+W_{{\rm LSR}}^{2}}=70$ and
$180\,{\rm km\,s^{-1}}$.
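A sketch of this membership calculation is given below; the dispersions, asymmetric drifts, and number densities are placeholders standing in for the values adopted from Table 1 of Bensby et al. (2003) and Jurić et al. (2008):

```python
import numpy as np

# (sigma_U, sigma_V, sigma_W, V_asym) in km/s and number density X per
# population -- illustrative placeholders only.
POPS = {
    "thin":  dict(sU=35.0,  sV=20.0, sW=16.0, Vasym=-15.0,  X=0.8649),
    "thick": dict(sU=67.0,  sV=38.0, sW=35.0, Vasym=-46.0,  X=0.13),
    "halo":  dict(sU=160.0, sV=90.0, sW=90.0, Vasym=-220.0, X=0.0051),
}

def xf(U, V, W, sU, sV, sW, Vasym, X):
    """X times the Gaussian velocity distribution f(U, V, W) of Eq. (3)."""
    k = 1.0 / ((2.0 * np.pi) ** 1.5 * sU * sV * sW)
    return X * k * np.exp(-U**2 / (2 * sU**2)
                          - (V - Vasym)**2 / (2 * sV**2)
                          - W**2 / (2 * sW**2))

def classify(U, V, W):
    p = {name: xf(U, V, W, **pars) for name, pars in POPS.items()}
    td_d = p["thick"] / p["thin"]          # TD/D
    td_h = p["thick"] / p["halo"]          # TD/H
    if td_d > 2.0 and td_h >= 10.0:
        return "thick disk"
    return "thin disk" if td_d < 0.5 else "other/halo"

print(classify(-40.0, -50.0, 30.0))
```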
### 4.2 Orbital Properties Approach
In the previous subsection, we have identified thick disk stars by selecting
its highly likely members (high relative probabilities) according to stellar
velocities. Here, we develop another method to identify the traditional
Galactic components based on the probability distribution functions of the
action integrals of our sample, following the procedures presented in Piffl et
al. (2014) and Posti et al. (2018).
The mass-normalized distribution function (DF) of the stellar halo is assumed
to have the following form:
$\displaystyle
f_{\mathrm{halo}}(J_{r},J_{z},L_{z})=f_{0}\left[1+\frac{J_{r}+J_{z}+|L_{z}|}{J_{0}}\right]^{\beta_{*}}$
(5)
where $f_{0}\approx
0.09\,\mathrm{M}_{\odot}\mathrm{Gyr}^{3}\,\mathrm{kpc}^{-6}$ denotes a
normalization constant that results in a total stellar halo mass of
$M_{\mathrm{halo}}=5\times 10^{8}\,\mathrm{M}_{\odot}$ (we note that adopting
different total mass estimates, e.g., $1.3\times 10^{9}$ M⊙, Mackereth &
Bovy 2020, would not change our halo membership assignments). The constant
$J_{0}=511\,\mathrm{kpc}^{2}\,\mathrm{Gyr}^{-1}$ controls the core in the
center of the stellar halo, and the power-law index $\beta_{*}=-4$ is chosen
to set a reasonable density profile in the solar neighbourhood.
Most of the stellar mass is assumed to lie within the thin and thick disk
components, and follows quasi-isothermal DFs (Binney, 2010) with the basic
form
$\begin{split}f_{\mathrm{disk}}(J_{r},J_{z},L_{z})=\frac{\Omega\Sigma\nu}{2\pi^{2}\sigma_{r}^{2}\sigma_{z}^{2}\kappa}\,[1+\tanh(L_{z}/L_{0})]\,\times\\\
\exp(-\kappa J_{r}/\sigma_{r}^{2}-\nu J_{z}/\sigma_{z}^{2})\end{split}$ (6)
$\Omega$ is the circular frequency, $\kappa$ and $\nu$ are the radial and
vertical epicycle frequencies, respectively, of a planar circular orbit with
angular momentum $L_{z}$ (these quantities are related to the potential
through its spatial derivatives, see Chapter 3.2.3 of Binney & Tremaine 2008).
The surface density $\Sigma$ and the velocity dispersions ($\sigma_{r}$ and
$\sigma_{z}$) are similarly functions of $L_{z}$, as they depend on the radius
of a planar circular orbit in the potential. We adopt
L${}_{0}=10\,\mathrm{kpc\,km\,s^{-1}}$ assuming that L0 should not be bigger
than the typical angular momentum of a star in the bulge/bar. We then derive
separate forms of DFs for each the thin and thick disk component through
different adopted forms of the parameters (e.g., in $\sigma_{r}$,
$\sigma_{z}$), exactly following Piffl et al. (2014).
These DFs are calculated using the parameters given in Binney & Sanders
(2016); these are the best-fitting parameters given the assumed forms of the
DFs, and other assumptions related to the kinematics of RAVE DR1 stars
(Steinmetz et al., 2006) and the resulting mass distribution. (It is worth
noting that adopting different literature parameters can meaningfully change
the relative fraction of Galactic thin disk vs. thick disk stars.) The mass
distribution has five components: thin and thick stellar disks, a gas disk, a
flattened stellar bulge, and a spherical dark matter halo (the stellar halo is
neglected due to its relatively low mass), exactly following the form in Piffl
et al. (2014). We calculated the potential by numerically solving the Poisson
equation given the mass distribution, and we then were able to evaluate the
DFs for every J. In the case of the thin disk, $\sigma_{r}$ and $\sigma_{z}$
are assumed to additionally depend on time (the velocity-dispersion functions
increase with stellar age), and the DF is evaluated through a weighted time
integration (again, following Piffl et al. 2014).
Note also that an additional order-unity multiplicative term in the quasi-
isothermal DF is found by Binney (2010). It is not used here as that term is
needed to control the asymmetry of probabilities with respect to the direction
of rotation (sign of $L_{z}$) that is not constrained by Piffl et al. (2014).
Instead, Piffl et al. (2015) use a refined way to calculate the quasi-
isothermal DFs by iteratively inputting a newly calculated potential from the
DF back into Equation (6) until convergence is achieved.
In order to evaluate each of the three DFs for each star in our sample, the
actions have to be calculated. For internal consistency of this method, we use
the same potential that was used to derive the disk DFs; for the action
calculation, we adopt a spherically symmetric ad-hoc approximation to it:
$\displaystyle\Phi_{\mathrm{approx}}(r)=-\Phi_{0,\mathrm{fit}}\frac{r_{\mathrm{fit}}}{r}\left[1-\frac{1}{\left(1+r/r_{\mathrm{fit}}\right)^{\beta_{\mathrm{fit}}-3}}\right]$
(7)
This corresponds to the analytical potential of a $\beta$-model presented in
Zhao (1996), where $\Phi_{0,\mathrm{fit}}=2.08\times
10^{6}\,\mathrm{kpc}^{2}\>\mathrm{Gyr}^{-2}$,
$r_{\mathrm{fit}}=6.63\,\mathrm{kpc}$, and $\beta_{\mathrm{fit}}=3.148$. The
approximate potential is accurate to within 6% everywhere inside the virial
radius. The actions are calculated using the formulae in Binney & Tremaine
(2008, chapter 3.5.2).
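A minimal sketch of Equation (7) with the quoted fit constants:

```python
# The Zhao (1996) beta-model potential of Equation (7), in kpc^2/Gyr^2.
PHI0_FIT = 2.08e6    # kpc^2 Gyr^-2
R_FIT = 6.63         # kpc
BETA_FIT = 3.148

def phi_approx(r):
    """Approximate spherically symmetric potential; r in kpc."""
    x = r / R_FIT
    return -PHI0_FIT / x * (1.0 - (1.0 + x) ** (3.0 - BETA_FIT))

# e.g. phi_approx(8.0) is about -1.9e5 kpc^2 Gyr^-2.
```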
Finally, the probability to find a star in a phase space volume
$\mathrm{d}^{3}\bm{J}$ around $\bm{J}_{i}$ is proportional to the value of the
DF at this point divided by the total mass of the component. Therefore,
relative probabilities are:
$\displaystyle\mathrm{TD/D}=\frac{f_{\mathrm{thick}}(\bm{J}_{i})/M_{\mathrm{thick}}}{f_{\mathrm{thin}}(\bm{J}_{i})/M_{\mathrm{thin}}}$
(8)
and
$\displaystyle\mathrm{TD/H}=\frac{f_{\mathrm{thick}}(\bm{J}_{i})/M_{\mathrm{thick}}}{f_{\mathrm{halo}}(\bm{J}_{i})/M_{\mathrm{halo}}}$
(9)
where $M_{\mathrm{thin}}=2.86\times 10^{10}\>\mathrm{M}_{\odot}$,
$M_{\mathrm{thick}}=1.17\times 10^{10}\>\mathrm{M}_{\odot}$, and
$M_{\mathrm{halo}}=5\times 10^{8}\,\mathrm{M}_{\odot}$.
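Given the DF values evaluated at a star's actions, Equations (8) and (9) reduce to simple ratios; the helper below (our own naming) illustrates this:

```python
# A minimal sketch of the relative membership probabilities of
# Equations (8) and (9), given DF values at a star's actions J_i.
M_THIN, M_THICK, M_HALO = 2.86e10, 1.17e10, 5.0e8  # Msun

def membership_ratios(f_thin, f_thick, f_halo_val):
    """Return (TD/D, TD/H) for one star."""
    td_d = (f_thick / M_THICK) / (f_thin / M_THIN)
    td_h = (f_thick / M_THICK) / (f_halo_val / M_HALO)
    return td_d, td_h
```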
Using this approach and the same probability thresholds as in the previous
section results in the selection of 15,521 thick disk stars, 3,278 thin disk
stars, and 15,289 halo stars. These results are in good agreement; for
example, the two methods select the main bulk (more than $\sim 87\%$) of each
Galactic component obtained by the other selection technique in Section 4.1.
To construct a clean Atari disk sample, we then adopt an inclusion method by
first selecting all thick disk stars that are common to both selection
methods. Then, we only include stars with photometric
$[\mathrm{Fe}/\mathrm{H}]$$<-0.8$, following the upper limit of the
metallicity criteria in Beers et al. (2014) and Naidu et al. (2020) to isolate
the Atari disk. This results in a sample of 7,127 stars, which we hereby refer
to as the Atari sample. We find that 261 stars in our Atari disk sample have
$[\mathrm{Fe}/\mathrm{H}]$ $\leq-2.0$.
We decided to further assess the quality of our Atari disk sample via an
independent check of our selection procedure, based on the spatial
distribution of our Atari disk sample. We first considered the $Z_{max}$
distribution of the sample to identify any outliers (stars with high $Z_{max}$)
that can plausibly be associated with the halo. The halo becomes more
pronounced at $Z_{max}>3$ kpc, while the thin disk is confined to
$Z_{max}<0.8$ kpc. The vast majority of our Atari disk sample lies in the
range of $0.8\leq Z_{max}\leq 3$ kpc, which suggests that this sample
predominantly includes objects not belonging to the halo or thin disk but
rather in between, and thus more consistent with the thick disk.
As a second check, we then computed orbital histories for the past 10 Gyr for
all stars following Mardini et al. (2020) to further validate their stellar
membership to the Atari disk. Again, the vast majority of stars have orbital
properties similar to that of the canonical thick disk. This agrees with what
was suggested by the Zmax values, that contamination from the metal-poor halo
or thin disk is low.
We do find that 439 stars in our sample of 7,127 Atari disk stars lie outside
of the 0.8 kpc $\leq$ $Z_{max}\leq 3.0$ kpc range. We find that the long-term
orbits of these stars largely reflect thick disk characteristics, i.e., their
orbital energies of $E<-0.9$ km$^{2}$ s$^{-2}$, eccentricities of $0.3<e<0.5$,
and distances of $r_{\mathrm{apo}}<5$ kpc very much align with thick disk
kinematics. Only 137 of these 439 outlier stars have orbital histories not
consistent with the thick disk. We do, however, find that all these stars have
putative thick disk membership, as evidenced by their TD/D ratios derived in
Sections 4.1 and 4.2, which range from $10$ to $10^{5}$ and from $10$ to
$10^{8}$, respectively.
We thus conclude that our Atari disk sample is clean at the 98% level, given
the 137/7127 stars that do not have orbital histories consistent with thick
disk-like motions. Accordingly, at most a small number of stars with halo-like
or thin disk-like kinematics are coincidentally selected by our technique.
Considering this a representative sample of the Atari disk, we now assess
various characteristics to describe this elusive Galactic component.
### 4.3 Simple Atari disk star selection recipe
The bulk of our Atari disk sample has distinct kinematic parameters that are
not associated with the general properties of the canonical thick disk. For
example, the canonical thick disk is known to have a rotational velocity
$V_{\phi}\approx 180$ km s$^{-1}$, which lags the $V_{\rm LSR}$ by $\sim 40$ km
s$^{-1}$. But our Atari disk sample has a mean rotational velocity of
$V_{\phi}=154$ km s$^{-1}$. Also, $\sim 20\%$ of our
Atari disk sample have orbital eccentricities above the typical range of
orbital eccentricities reported in the literature for the canonical thick disk
(see Table 2).
Given the complex and involved nature of our selection procedures in Sections
4.1 and 4.2, we also attempted to develop a simplified procedure that would
allow the selection of Atari disk stars from other existing and future stellar
samples with more ease. We suggest the following. Stars that fulfil the
criteria
$[\mathrm{Fe}/\mathrm{H}]<-0.8\ \land\ Z_{max}<3.0\ \land\
(J_{\phi}/\sqrt{J_{\rm{\phi}}^{2}+J_{\rm{z}}^{2}+J_{\rm{R}}^{2}}<-0.98)\
\land\ (0.3<e<0.6)\ \land\ (140<V_{\phi}<160)$
will be Atari disk stars with high likelihood, albeit they will not yield an
all-encompassing sample of Atari disk stars. Applying these criteria to our
initial SMSS sample (36,010 stars), we find that 84% of the stars recovered
from our simple selection are also in the Atari disk sample. We investigated
the nature of the remaining 16% of stars found with the simple selection
recipe; our calculated membership probabilities suggest that these
contaminants are equally likely to belong to the thin disk and the halo.
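As an illustration, the recipe can be expressed as a single boolean mask over precomputed quantities; the sketch below (our own naming) follows the sign convention of the criteria as written, in which prograde orbits have negative $J_{\phi}$:

```python
# A minimal sketch of the simple Atari disk selection recipe.
# Inputs are arrays: [Fe/H], Z_max (kpc), actions (consistent units),
# eccentricity, and V_phi (km/s).
import numpy as np

def atari_candidate_mask(feh, z_max, J_R, J_phi, J_z, ecc, v_phi):
    J_tot = np.sqrt(J_phi**2 + J_z**2 + J_R**2)
    return ((feh < -0.8)
            & (z_max < 3.0)
            & (J_phi / J_tot < -0.98)   # nearly circular, prograde orbits
            & (ecc > 0.3) & (ecc < 0.6)
            & (v_phi > 140.0) & (v_phi < 160.0))
```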
## 5 Properties of the Atari disk
In this Section, we aim to establish the kinematic properties of the Atari
disk using our representative sample as selected in the previous section.
Specifically, we investigate the scale length, scale height, and correlations
between several variables (e.g., metallicity, eccentricity, rotational
velocity) to characterize the nature of this component. Table 2 lists our
derived properties of the Atari disk, along with those of other galactic
populations for comparison.
Table 2: Orbital properties of the Galactic thin disk, thick disk, and inner
halo
Parameter | Unit | Thin disk | Thick disk | Inner halo | Atari disk
---|---|---|---|---|---
$h_{R}$ | (kpc) | 2.6$^{a}$ - 3.0$^{b}$ | 2.0$^{b}$ - 3.0$^{c}$ | $\cdots$ | 2.48 $\pm$ 0.05
$h_{Z}$ | (kpc) | 0.14$^{d}$ - 0.36$^{e}$ | 0.5$^{e}$ - 1.1$^{e}$ | $\cdots$ | 1.68${}^{+0.19}_{-0.15}$
$<V_{\phi}>$ | (km s$^{-1}$) | 208$^{e}$ | 182$^{g}$ | 0$^{f}$ | 154 $\pm$ 1
$Z_{max}$ | (kpc) | $<0.8$$^{h}$ | $0.8$ - $3.0$$^{g}$ | $>3.0$$^{g}$ | $<$ 3.0
$e$ | $\cdots$ | $<0.14$$^{g}$ | 0.3 - 0.5$^{g}$ | $>$ 0.7$^{g}$ | 0.30 - 0.7
References: (a) Jurić et al. 2008; (b) Li & Zhao 2017; (c) Li et al. 2018; (d) Sanders & Binney 2015; (e) Recio-Blanco et al. 2014; (f) Carollo et al. 2010; (g) Lee et al. 2011; (h) Anders et al. 2014.
### 5.1 Scale Length
Measurements of the scale length ($h_{R}$) and scale height ($h_{Z}$) are
important to trace the structure, size, mass distribution, and radial
luminosity profile of the Galactic disk components (e.g., Dehnen & Binney,
1998). In order to calculate $h_{R}$ and $h_{Z}$ of our Atari disk sample, we
solve the fundamental collisionless Boltzmann equation of axisymmetric
systems, which is expressed as the following (see equation 4.12; Binney &
Tremaine, 2008):
$\displaystyle\frac{\partial f}{\partial t}$
$\displaystyle+v_{R}\frac{\partial f}{\partial
R}+\frac{v_{\phi}}{R^{2}}\frac{\partial f}{\partial\phi}+v_{z}\frac{\partial
f}{\partial z}-\left(\frac{\partial\Phi}{\partial
R}-\frac{v_{\phi}^{2}}{R^{3}}\right)\frac{\partial f}{\partial v_{R}}$
$\displaystyle-\frac{\partial\Phi}{\partial\phi}\frac{\partial f}{\partial
v_{\phi}}-\frac{\partial\Phi}{\partial z}\frac{\partial f}{\partial v_{z}}=0,$
(10)
where $f$ is the distribution function (the phase-space density of stars), and
$\Phi$ is the gravitational potential. It is then convenient to derive the
Jeans equation
from the Boltzmann equation in the radial and Z-component directions as the
following (see equation 9; Gilmore et al., 1989):
$\displaystyle\rho
K_{R}=\frac{1}{R}\frac{\partial(R\rho\sigma^{2}_{V_{R}})}{\partial
R}+\frac{\partial(\rho\sigma^{2}_{V_{R,Z}})}{\partial
Z}-\frac{\rho\sigma^{2}_{V_{\phi}}}{R}-\frac{\rho}{R}\bar{V_{\phi}}^{2}$ (11)
$\displaystyle\rho K_{Z}=\frac{\partial(\rho\sigma^{2}_{V_{Z}})}{\partial
Z}+\frac{1}{R}\frac{\partial(R\rho\sigma^{2}_{V_{R,Z}})}{\partial R}$ (12)
where $\rho(R,Z)$ is the space density of the stars in the thick disk, and
$K_{R}$= $\frac{\partial\phi}{\partial R}$, and $K_{Z}$=
$\frac{\partial\phi}{\partial Z}$ are the derivatives of the potential.
Assuming an exponential density profile, the radial Jeans equation can be
rewritten as follows (Li et al., 2018):
$\displaystyle\frac{\sigma^{2}_{V_{\phi}}}{\sigma^{2}_{V_{R}}}-2+\frac{2R}{h_{R}}-\frac{V_{c}^{2}-\bar{V_{\phi}}^{2}}{\sigma^{2}_{V_{R}}}+\frac{\sigma^{2}_{V_{Z}}}{\sigma^{2}_{V_{R}}}=0$
(13)
where $h_{R}$ is the scale length. By substituting our calculated velocity
dispersions from the Atari disk sample, within $\approx 3$ kpc of the Sun in
the cylindrical $R$ coordinate and $\approx 2$ kpc above or below the Galactic
plane (6,347 stars), into Equation 13, we obtain a radial scale
length of $h_{R}=2.48$ kpc. Calculating the scale length using different
metallicity bins shows a small increase from 2.38 kpc among the higher
metallicity stars up to 2.91 kpc for the low-metallicity stars. The results
are detailed in Table 3. In general, these results point to the Atari disk
being comparable in size in the radial direction to the thick and thin disk.
For reference, the scale length of the canonical thick disk has been measured
as 2.0 kpc (Bensby et al., 2011), 2.2 kpc (Carollo et al., 2010), and 2.31 kpc
(Sanders & Binney, 2015), although larger values have also been reported
previously (Chiba & Beers, 2000; de Jong et al., 2010). Thin disk values refer
to an overall similar spatial distribution although it is likely somewhat more
extended ($h_{R}>3.0$ kpc; e.g., Bensby et al. 2011; Sanders & Binney 2015).
See Table 2 for further details.
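For reference, Equation (13) can be solved for $h_{R}$ in closed form; the sketch below uses illustrative input values, not our measured dispersions:

```python
# A minimal sketch of solving Equation (13) for the scale length h_R.

def scale_length(R, sig_R, sig_phi, sig_z, v_phi_mean, v_circ):
    """h_R in kpc; velocities in km/s, R in kpc."""
    denom = (2.0
             + (v_circ**2 - v_phi_mean**2) / sig_R**2
             - sig_phi**2 / sig_R**2
             - sig_z**2 / sig_R**2)
    return 2.0 * R / denom

# Illustrative numbers only: R = 8 kpc, dispersions (70, 50, 45) km/s,
# <V_phi> = 154 km/s, V_c = 232 km/s  ->  h_R ~ 2.2 kpc.
print(scale_length(8.0, 70.0, 50.0, 45.0, 154.0, 232.0))
```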
### 5.2 Scale Height
Assuming an exponential density distribution and constant surface density
($\sigma^{2}_{V_{R,Z}}\approx 0$; $\sigma^{2}_{V_{R,Z}}$ is negligibly small
compared to the remaining terms in Eq. 14, as described in Gilmore et al.
1989), Equation 12 can be rewritten as follows:
$\displaystyle\frac{\partial\ln{\sigma^{2}_{V_{Z}}}}{\partial
Z}-\frac{1}{h_{Z}}+\frac{K_{Z}}{\sigma^{2}_{V_{Z}}}=0$ (14)
where $h_{Z}$ is the scale height. By substituting $K_{Z}=2\pi G\times
71\,M_{\odot}$ pc$^{-2}$ at $|z|=1.1$ kpc (see equation 4 in Kuijken & Gilmore
1991), the relevant velocity dispersions, and their gradients into Equation
14, the scale height can be obtained.
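As a sketch, Equation (14) yields $1/h_{Z}$ directly once $K_{Z}$ is converted to velocity units; the dispersion and gradient values below are illustrative only:

```python
# A minimal sketch of solving Equation (14) for the scale height h_Z.
import numpy as np

G_PC = 4.30091e-3                          # G in pc (km/s)^2 / Msun
K_Z = 2.0 * np.pi * G_PC * 71.0 * 1.0e3    # (km/s)^2 kpc^-1 at |z| = 1.1 kpc

def scale_height(sigma_vz, dlnsig2_dz):
    """h_Z in kpc; sigma_vz in km/s, d ln(sigma_vz^2)/dZ in kpc^-1."""
    return 1.0 / (dlnsig2_dz + K_Z / sigma_vz**2)

# Illustrative: sigma_vz = 55 km/s with a flat profile gives h_Z ~ 1.6 kpc.
print(scale_height(55.0, 0.0))
```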
We applied this technique to derive scale heights for both the original
velocity-selected sample of 7,451 stars (see Section 4.1) as well as the
action-selected sample of 10,351 stars, using the same spatial selection as in
Section 5.1. By design, the velocity selection method employed in our study
sets out to select stars roughly within the spatial distribution of the thick
disk (in the $z$-direction, by using $\sigma_{W}=35$ km s$^{-1}$; see Section
4.1). This might lead to a bias when attempting to use the velocity-selected
sample to determine the scale height. Table 3 shows the results. Using the
action-selection sample, we then derive 1.68 kpc for the scale height of the
Atari disk. Restricting the sample to stars with
$-1.2<[\mathrm{Fe}/\mathrm{H}]<-0.8$, we find $h_{Z}=1.92$ kpc. However, stars
with $-1.5<[\mathrm{Fe}/\mathrm{H}]<-1.2$ suggest a lower value of $\sim 1.37$
kpc. Stars with even lower metallicity once again follow a wider distribution
with larger scale heights. However, the larger uncertainty associated with the
calculated $h_{Z}$ in this metallicity bin stems from the low number of stars
available to accurately calculate the slope of the velocity dispersion (see
term 1 of Equation 14). We also investigated the idea that these different
$h_{Z}$ values might be due to possible contamination from other accreted
substructures. To address this question, we investigated the E-L$_z$ space of
each of the different $[\mathrm{Fe}/\mathrm{H}]$ bins used. Overplotting these
E-L$_z$
distributions on our Figure 9 suggests no significant overlap with any of the
other accreted substructures. Also, we performed a 2D Gaussian mixture model
fitting in E-Lz space for each of these $[\mathrm{Fe}/\mathrm{H}]$ bins and
found that the E-Lz distribution in each $[\mathrm{Fe}/\mathrm{H}]$ bin could
be reasonably fit by one Gaussian. This suggests no obvious substructure
contaminating our sample at various $[\mathrm{Fe}/\mathrm{H}]$ bins.
At face value, the $h_{Z}$ values calculated for the action-selected sample
are about 0.2 to 0.5 kpc larger than what we find for the velocity-selected
sample, as can be seen in Table 3. While there is a small change in $h_{Z}$
for the highest metallicity bin ($-1.2<[\mathrm{Fe}/\mathrm{H}]<-0.8$)
compared to the whole Atari disk sample, the low number of stars in the other
metallicity bins (i.e., large uncertainties) prevents us from determining any
scale height gradient with metallicity. For comparison, using a chemo-
dynamical selection, Carollo et al. (2010) find a scale height of
$h_{Z}=1.36$ kpc. Their sample had $[\mathrm{Fe}/\mathrm{H}]<-1.0$, ranging
down to below $[\mathrm{Fe}/\mathrm{H}]=-1.5$. Such a low value corresponds to
what we obtain for our metallicity bin of
$-1.5<[\mathrm{Fe}/\mathrm{H}]<-1.2$. Based on our comprehensive analysis,
this value may well depend on the metallicity of the chosen sample.
To further quantify the bias introduced by the velocity method, we reran our
analysis with an increased $\sigma_{z}=45$ km s$^{-1}$ and $\sigma_{z}=55$
km s$^{-1}$. Our intention was to learn whether increasing the initial spatial
distribution would impact the scale height to the extent of matching that of
the action method. While choosing $\sigma_{z}=55$ km s$^{-1}$ did indeed result
in scale height increases, the values of the action-selected sample were not
entirely reached. At the same time, however, the halo contamination rate
drastically increased, suggesting that loosening the velocity selection
criterion was detrimental to our overall science goal of accurately selecting
a high-confidence sample of Atari disk stars. To avoid this bias, we thus
chose to use the scale height value obtained from the action-selection sample
only. For the remainder of the analysis, we then kept the common sample as
originally selected with $\sigma_{z}=35$ km s$^{-1}$.
Interestingly, a scale height of $h_{Z}\sim 1.7$ kpc derived for the whole
metallicity range is significantly more extended than what is measured for the
canonical thick disk, which has $h_{Z}\sim 0.5$ to 1 kpc. More recent papers
have reported progressively shorter scale heights (see Table 2), which
suggests that the scale height of the Atari disk could be up to three times
that of the thick disk. Considering a more closely matching metallicity range
for these two populations, the Atari disk scale height for stars with
$-1.2<[\mathrm{Fe}/\mathrm{H}]<-0.8$ is $\sim$1.9 kpc, which is about two to
four times that of the thick disk. But even the lowest value derived for stars
with $-1.5<[\mathrm{Fe}/\mathrm{H}]<-1.2$, $\sim 1.4$ kpc, is still larger
than the scale height of the thick disk. Values for other metallicity bins can
be found in Table 3 for additional comparisons. Overall, this robustly
suggests the Atari disk to be generally significantly more extended than the
thick disk in the $z$-direction.
Table 3: Scale Lengths and Scale Heights for Different Metallicity Bins
Sample | Metallicity bin | $N_{\rm stars}$ | scale length | scale height
---|---|---|---|---
| | | (kpc) | (kpc)
Common | $-1.2\leq{\rm[Fe/H]}<-0.8$ | 4,868 | $2.38^{+0.05}_{-0.05}$ | $1.75^{+0.17}_{-0.14}$
| $-1.5\leq{\rm[Fe/H]}<-1.2$ | 835 | $2.62^{+0.09}_{-0.09}$ | $1.36^{+0.31}_{-0.23}$
| $-1.8\leq{\rm[Fe/H]}<-1.5$ | 314 | $2.98^{+0.15}_{-0.15}$ | $1.41^{+1.37}_{-0.52}$
| $-3.5\leq{\rm[Fe/H]}<-1.8$ | 268 | $2.91^{+0.16}_{-0.16}$ | $2.03^{+2.82}_{-0.95}$
| $-3.5\leq{\rm[Fe/H]}<-0.8$ | 6,347 | $2.48^{+0.05}_{-0.05}$ | $1.67^{+0.20}_{-0.16}$
Velocity | $-1.2\leq{\rm[Fe/H]}<-0.8$ | 5,570 | $2.64^{+0.04}_{-0.04}$ | $1.43^{+0.18}_{-0.15}$
| $-1.5\leq{\rm[Fe/H]}<-1.2$ | 993 | $3.30^{+0.10}_{-0.10}$ | $1.15^{+0.19}_{-0.16}$
| $-1.8\leq{\rm[Fe/H]}<-1.5$ | 414 | $4.00^{+0.11}_{-0.12}$ | $1.33^{+1.14}_{-0.48}$
| $-3.5\leq{\rm[Fe/H]}<-1.8$ | 394 | $4.14^{+0.09}_{-0.09}$ | $1.68^{+1.32}_{-0.56}$
| $-3.5\leq{\rm[Fe/H]}<-0.8$ | 7,451 | $3.00^{+0.05}_{-0.05}$ | $1.39^{+0.19}_{-0.16}$
Action | $-1.2\leq{\rm[Fe/H]}<-0.8$ | 7,694 | $2.51^{+0.04}_{-0.04}$ | $1.92^{+0.17}_{-0.15}$
| $-1.5\leq{\rm[Fe/H]}<-1.2$ | 1,497 | $2.95^{+0.08}_{-0.08}$ | $1.37^{+0.32}_{-0.22}$
| $-1.8\leq{\rm[Fe/H]}<-1.5$ | 707 | $3.30^{+0.12}_{-0.12}$ | $1.63^{+1.19}_{-0.49}$
| $-3.5\leq{\rm[Fe/H]}<-1.8$ | 639 | $3.38^{+0.12}_{-0.12}$ | $2.08^{+3.15}_{-1.10}$
| $-3.5\leq{\rm[Fe/H]}<-0.8$ | 10,351 | $2.70^{+0.04}_{-0.04}$ | $1.68^{+0.19}_{-0.15}$
Additional metallicity bins |
Common | $-0.9\leq{\rm[Fe/H]}<-0.8$ | 1,960 | $2.32^{+0.06}_{-0.06}$ | $1.64^{+0.47}_{-0.30}$
| $-1.1\leq{\rm[Fe/H]}<-0.9$ | 2,161 | $2.41^{+0.06}_{-0.06}$ | $2.03^{+0.87}_{-0.48}$
| $-1.3\leq{\rm[Fe/H]}<-0.8$ | 5,264 | $2.40^{+0.04}_{-0.05}$ | $1.73^{+0.18}_{-0.15}$
| $-3.5\leq{\rm[Fe/H]}<-1.1$ | 1,964 | $2.73^{+0.07}_{-0.07}$ | $1.59^{+0.56}_{-0.34}$
| $-3.5\leq{\rm[Fe/H]}<-1.3$ | 1,046 | $2.85^{+0.09}_{-0.08}$ | $1.52^{+0.59}_{-0.35}$
| $-3.5\leq{\rm[Fe/H]}<-1.4$ | 786 | $2.92^{+0.10}_{-0.10}$ | $1.84^{+1.59}_{-0.60}$
Velocity | $-0.9\leq{\rm[Fe/H]}<-0.8$ | 2,231 | $2.51^{+0.07}_{-0.07}$ | $1.40^{+0.25}_{-0.20}$
| $-1.1\leq{\rm[Fe/H]}<-0.9$ | 2,486 | $2.71^{+0.06}_{-0.06}$ | $1.64^{+0.58}_{-0.34}$
| $-1.3\leq{\rm[Fe/H]}<-0.8$ | 6,040 | $2.70^{+0.05}_{-0.05}$ | $1.40^{+0.18}_{-0.15}$
| $-3.5\leq{\rm[Fe/H]}<-1.1$ | 2,433 | $3.61^{+0.07}_{-0.07}$ | $1.30^{+0.36}_{-0.24}$
| $-3.5\leq{\rm[Fe/H]}<-1.3$ | 1,369 | $3.96^{+0.09}_{-0.09}$ | $1.37^{+0.48}_{-0.30}$
| $-3.5\leq{\rm[Fe/H]}<-1.4$ | 1,064 | $4.06^{+0.08}_{-0.08}$ | $1.61^{+0.97}_{-0.46}$
Action | $-0.9\leq{\rm[Fe/H]}<-0.8$ | 3,065 | $2.43^{+0.05}_{-0.05}$ | $1.65^{+0.28}_{-0.21}$
| $-1.1\leq{\rm[Fe/H]}<-0.9$ | 3,422 | $2.53^{+0.06}_{-0.06}$ | $2.37^{+0.47}_{-0.35}$
| $-1.3\leq{\rm[Fe/H]}<-0.8$ | 8,353 | $2.55^{+0.05}_{-0.05}$ | $1.80^{+0.17}_{-0.15}$
| $-3.5\leq{\rm[Fe/H]}<-1.1$ | 3,760 | $3.09^{+0.06}_{-0.06}$ | $1.74^{+0.64}_{-0.37}$
| $-3.5\leq{\rm[Fe/H]}<-1.3$ | 2,223 | $3.22^{+0.07}_{-0.07}$ | $1.83^{+0.77}_{-0.42}$
| $-3.5\leq{\rm[Fe/H]}<-1.4$ | 1,740 | $3.27^{+0.08}_{-0.08}$ | $2.14^{+2.25}_{-0.79}$
Velocity (45) | $-1.2\leq{\rm[Fe/H]}<-0.8$ | 5,794 | $2.68^{+0.05}_{-0.04}$ | $1.49^{+0.24}_{-0.19}$
| $-1.5\leq{\rm[Fe/H]}<-1.2$ | 1,108 | $3.44^{+0.09}_{-0.10}$ | $1.44^{+0.36}_{-0.24}$
| $-1.8\leq{\rm[Fe/H]}<-1.5$ | 467 | $4.11^{+0.12}_{-0.12}$ | $1.26^{+1.43}_{-0.48}$
| $-3.5\leq{\rm[Fe/H]}<-1.8$ | 447 | $4.25^{+0.09}_{-0.10}$ | $2.82^{+4.61}_{-1.61}$
| $-0.9\leq{\rm[Fe/H]}<-0.8$ | 2,288 | $2.55^{+0.06}_{-0.06}$ | $1.39^{+0.20}_{-0.16}$
| $-1.1\leq{\rm[Fe/H]}<-0.9$ | 2,617 | $2.74^{+0.06}_{-0.06}$ | $1.71^{+0.54}_{-0.33}$
| $-3.5\leq{\rm[Fe/H]}<-1.1$ | 2,688 | $3.71^{+0.06}_{-0.07}$ | $1.65^{+0.58}_{-0.33}$
Velocity (55) | $-1.2\leq{\rm[Fe/H]}<-0.8$ | 5,794 | $2.73^{+0.05}_{-0.05}$ | $1.79^{+0.44}_{-0.29}$
| $-1.5\leq{\rm[Fe/H]}<-1.2$ | 1,108 | $3.56^{+0.10}_{-0.09}$ | $1.48^{+0.30}_{-0.22}$
| $-1.8\leq{\rm[Fe/H]}<-1.5$ | 467 | $4.34^{+0.10}_{-0.10}$ | $1.53^{+1.84}_{-0.61}$
| $-3.5\leq{\rm[Fe/H]}<-1.8$ | 447 | $4.42^{+0.10}_{-0.10}$ | $4.56^{+8.32}_{-4.56}$
| $-0.9\leq{\rm[Fe/H]}<-0.8$ | 2,386 | $2.59^{+0.06}_{-0.06}$ | $1.45^{+0.21}_{-0.17}$
| $-1.1\leq{\rm[Fe/H]}<-0.9$ | 2,647 | $2.79^{+0.06}_{-0.07}$ | $2.26^{+1.49}_{-0.63}$
| $-3.5\leq{\rm[Fe/H]}<-1.1$ | 2,821 | $3.88^{+0.07}_{-0.07}$ | $2.09^{+0.57}_{-0.36}$
Note. — We list results for various [Fe/H] bins for comparisons with other
studies and sample sizes.
### 5.3 Correlation between the radial and vertical distances and metallicity
The spatial-metallicity correlation of metal-poor stars in the Galactic disk
places observational constraints on our understanding of the formation and
evolution of the Milky Way system. The mean metallicity
($\langle[\mathrm{Fe}/\mathrm{H}]\rangle$) of stars at a particular region in
the disk primarily depends on the gas accretion rate, the chemical composition
of the early interstellar gas, and subsequent evolution of stars at that
region. To investigate the presence of any correlation between metallicity and
radial distance from the Galactic center ($R$), we apply a simple linear fit
to the individual measurements of R vs. $[\mathrm{Fe}/\mathrm{H}]$ in our
Atari disk sample. The top panel of Figure 4 presents a 2D histogram of the
$R$ distribution of the Atari disk sample as a function of
$[\mathrm{Fe}/\mathrm{H}]$. The points and error bars represent the mean value
and standard error of $R$ in bins of 0.20 dex, for visualization purposes. The
slope of the dashed line represents a positive radial metallicity gradient
(${\rm\partial R/\partial}[\mathrm{Fe}/\mathrm{H}]=0.73\pm 0.05$ kpc ${\rm
dex^{-1}}$). This result is different from what has been found for the
canonical thick disk, whose radial metallicity gradient is essentially flat.
Recio-Blanco et al. (2014) used 1,016 stars from the Gaia-ESO DR1 to
chemically separate the disk components and found
${\rm\partial[\mathrm{Fe}/\mathrm{H}]/\partial R}=+0.006\pm 0.008$ for the
thick disk. Peng et al. (2018) used a kinematic approach to separate 10,520
stars taken from the South Galactic Cap u-band Sky Survey and SDSS/SEGUE data
and found ${\rm\partial[\mathrm{Fe}/\mathrm{H}]/\partial R}=-0.001\pm 0.020$.
We note that the above studies have a higher metallicity range
($[\mathrm{Fe}/\mathrm{H}]\gtrsim-1.2$), caveating a direct comparison to our
results.
Figure 4: Top: Radial metallicity gradient as a function of Galactocentric
radial distance of the Atari disk sample. Bottom: Vertical metallicity
gradient of the Atari disk as a function of $[\mathrm{Fe}/\mathrm{H}]$. Error
bars denote the standard deviation in each bin and show the statistical
uncertainty only.
We also test for the presence of a $[\mathrm{Fe}/\mathrm{H}]$ trend with
absolute vertical distance from the Galactic plane ($|Z|$), using the same
technique as for deriving the radial gradient, including using bin sizes of
0.2 dex for visualization purposes. The bottom panel of Figure 4 shows the
correlation between $[\mathrm{Fe}/\mathrm{H}]$ and $|Z|$ and the best fit
line, which represents a vertical metallicity gradient of
(${\rm\partial|Z|/\partial}[\mathrm{Fe}/\mathrm{H}]=-0.45\pm 0.03$ kpc ${\rm
dex^{-1}}$). On average, the more metal-poor stars of the Atari disk are
preferentially found at high $|Z|$ values. The relatively more metal-rich
stars are on average located at $|Z|\lesssim 2$ kpc. Note that only stars
within 5 kpc were included in both of these analyses, following Chiti et al.
(2021b), to avoid selection effects toward more metal-poor stars at larger
distances.
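The gradients quoted in this subsection come from simple linear fits of position against $[\mathrm{Fe}/\mathrm{H}]$; a minimal sketch of such a fit (our own implementation, not the code used in this work) is:

```python
# A minimal sketch of the linear gradient fits: slope of coord vs. [Fe/H]
# with its formal standard error from the fit covariance.
import numpy as np

def metallicity_gradient(feh, coord):
    """Fit coord = a*[Fe/H] + b; return slope a and its standard error."""
    coeffs, cov = np.polyfit(feh, coord, deg=1, cov=True)
    return coeffs[0], np.sqrt(cov[0, 0])

# e.g. slope, err = metallicity_gradient(feh, R_kpc) for the radial gradient,
# or metallicity_gradient(feh, np.abs(Z_kpc)) for the vertical one.
```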
Following the last point, we investigated a few further avenues to assess to
what extent selection effects affect our observed spatial-metallicity
correlations. There are two primary ways in which selection effects could bias
our gradients: (1) metal-poor stars are brighter than more metal-rich stars in
the SkyMapper $v$ filter, making the most distant stars preferentially metal-
poor; and (2) based on our exclusion of regions of high reddening in the
initial sample (see Chiti et al., 2021a, b, for more details), there is an
exclusion of low $Z$ stars at $R$ close to the Galactic center; this could
lead to an artificial $R$-$[\mathrm{Fe}/\mathrm{H}]$ gradient given that lower
metallicity stars are at high $Z$. The effect of (1) is generally accounted
for by only considering stars within 5 kpc from the Sun, which is the distance
within which the SkyMapper filter should not significantly preferentially
select metal-poor stars at large distances. To more stringently test this
effect, we restrict our sample to stars within 4 kpc and still find a positive
$R$-$[\mathrm{Fe}/\mathrm{H}]$ gradient. Restricting the sample to $<$2 kpc
results in no statistically significant gradient, but this is not necessarily
surprising because we lose sensitivity to any gradient by restricting our
sample to only nearby stars. We investigate the effect of (2) by searching for
a gradient in $R$-$[\mathrm{Fe}/\mathrm{H}]$ at various $Z$ bins ($0.1<Z<1.1$
with 0.2 kpc bins and $1.1<Z<3.0$ with 0.4 kpc bins). In general, this
analysis still leads to positive gradients at a given Z range, suggesting that
(2) is not a significant effect. We note that the
$R$-$[\mathrm{Fe}/\mathrm{H}]$ gradient appears not to be significant at the
lowest metallicities ([Fe/H] $<-1.4$).
### 5.4 Gradients with Rotational Velocity
We investigate the variation of rotational velocity $V_{\phi}$ versus
$[\mathrm{Fe}/\mathrm{H}]$, $R$, and $|Z|$. The top panel of Figure 5 shows a
density plot of the rotational velocity versus the metallicity of stars in our
Atari disk sample. The mean values and standard errors of $V_{\phi}$ in
metallicity bins of 0.2 dex are overplotted. There is an overall positive
rotational velocity gradient as a function of $[\mathrm{Fe}/\mathrm{H}]$ of
${\rm\partial V_{\phi}/\partial}[\mathrm{Fe}/\mathrm{H}]=13.22\pm 1.57$ km
s$^{-1}$ ${\rm dex^{-1}}$. The lower left panel of Figure 5 shows the
rotational velocity gradient in the radial direction, ${\rm\partial
V_{\phi}/\partial}R=-2.6\pm 0.4$ km s$^{-1}$ ${\rm kpc^{-1}}$. While the
detection is statistically significant, the magnitude of the gradient (a few
km s$^{-1}$ per kpc) is small. There is also a negative correlation between
$V_{\phi}$ and $|Z|$ of ${\rm\partial V_{\phi}/\partial}|z|=-8.96\pm 0.75$ km
s$^{-1}$ ${\rm kpc^{-1}}$.
Previous studies have found negative and positive slopes for the rotational
velocity-metallicity gradient for the thin and thick disk populations,
respectively. For reference, Lee et al. (2011) and Guiglion et al. (2015) used
the chemical abundance approach to assign stellar population membership for
17,277 and 7,800 stars, respectively. Using the thick disk samples in their
studies, they reported rotational velocity gradients of ${\rm\partial
V_{\phi}/\partial}[\mathrm{Fe}/\mathrm{H}]=+45.8\pm 2.9$ km s$^{-1}$ ${\rm
dex^{-1}}$ and ${\rm\partial V_{\phi}/\partial}[\mathrm{Fe}/\mathrm{H}]=+49\pm
10$ km s$^{-1}$ ${\rm dex^{-1}}$, respectively. In contrast, Allende Prieto et
al. (2016) used 3,621 APOGEE stars to measure ${\rm\partial
V_{\phi}/\partial}[\mathrm{Fe}/\mathrm{H}]=-18\pm 2$ km s$^{-1}$ ${\rm
dex^{-1}}$. It is worth mentioning again that in all of these aforementioned
studies the metallicity range was $[\mathrm{Fe}/\mathrm{H}]>-1.0$, and so a
direct comparison of results might not necessarily be accurate.
Figure 5: Top: Rotational velocity as a function of $[\mathrm{Fe}/\mathrm{H}]$,
resulting in a gradient of ${\rm\partial
V_{\phi}/\partial}[\mathrm{Fe}/\mathrm{H}]=13.22\pm 1.57$ km s$^{-1}$ ${\rm
dex^{-1}}$. Bottom left: Rotational velocity as a function of the
Galactocentric radial distance $R$, with a gradient of ${\rm\partial
V_{\phi}/\partial}R=-2.6\pm 0.4$ km s$^{-1}$ ${\rm kpc^{-1}}$. Bottom right:
Rotational velocity as a function of the vertical distance $|Z|$, with a
gradient of ${\rm\partial V_{\phi}/\partial}|z|=-8.96\pm 0.75$ km s$^{-1}$
${\rm kpc^{-1}}$. Error bars denote the standard deviations throughout.
### 5.5 Orbital Eccentricity
For our Atari disk sample, we investigate the relation between the orbital
eccentricity and $[\mathrm{Fe}/\mathrm{H}]$, $R$, and $|Z|$. Figure 6 shows
the observed trends of the orbital eccentricity as a function of
$[\mathrm{Fe}/\mathrm{H}]$, $R$, and $|Z|$. The top panel in Figure 6 shows
eccentricity versus $[\mathrm{Fe}/\mathrm{H}]$. The results suggest that the
orbital eccentricity increases as the metallicity decreases, with the most
metal-poor stars having fairly eccentric orbits. The best fit yields a slope
of ${\rm\partial e/\partial}{[\mathrm{Fe}/\mathrm{H}]}=-0.05\pm 0.01$
dex$^{-1}$. The lower left panel shows no significant overall correlation
between the orbital eccentricity and $R$. The lower right panel shows that the
orbital eccentricity varies only mildly with $|Z|$, with ${\rm\partial
e/\partial}Z=+0.01\pm 0.002$ kpc$^{-1}$. In general, our Atari disk stars
exhibit different orbital eccentricity trends with $[\mathrm{Fe}/\mathrm{H}]$,
$R$, and $|Z|$ from those reported in the literature for the more metal-rich
stars in the canonical thick disk (see Lee et al., 2011, figure 9).
Figure 6: Top: Orbital eccentricities as a function of
$[\mathrm{Fe}/\mathrm{H}]$ with a gradient of ${\rm\partial
e/\partial}[\mathrm{Fe}/\mathrm{H}]=-0.05\pm 0.01$ ${\rm dex^{-1}}$. Bottom
left: $e$ as a function of the Galactocentric radial distance $R$, with a flat
trend of ${\rm\partial e/\partial}R=0.00\pm 0.00$ ${\rm kpc^{-1}}$. Bottom
right: $e$ as a function of the vertical distance $|Z|$, with a positive
gradient of ${\rm\partial e/\partial}|z|=+0.01\pm 0.002$ ${\rm kpc^{-1}}$.
Error bars denote the standard deviation throughout.
## 6 Comparisons with formation models
Compared to the thick and thin disks, the Atari disk has not been extensively
studied or been regarded as a separate component of the Galactic disk until
recently (Carollo et al., 2019; An & Beers, 2020). Accordingly, there are no
detailed theoretical Atari disk formation scenarios discussed in the
literature. In the absence of such, we will compare our observed
characteristics of the Atari disk with predictions from the four main
formation scenarios for a structurally distinct thick disk, as well as models
with predictions regarding eccentricities, to gain insights into how the Atari
disk formed and evolved.
### 6.1 Comparison with predictions of thick disk formation models based on
$[\mathrm{Fe}/\mathrm{H}]$ gradients
We use the properties detailed in Section 5 to assess four main formation
scenarios for a structurally distinct thick disk following Li et al. (2018) to
learn about the origin of the Atari disk. We discuss each in detail below.
1\. Disk heating. This scenario posits the dynamical heating of a pre-existing
disk due to minor mergers. The disk will maintain its chemical or kinematic
gradients (Quinn et al., 1993; Kazantzidis et al., 2008) even after the
merger(s). We observe a positive radial metallicity gradient and a negative
vertical metallicity gradient for our Atari disk sample. We note that direct
comparisons of the magnitude of our gradients are not possible to other
studies in the literature due to the upper metallicity limit
($[\mathrm{Fe}/\mathrm{H}]<-0.75$) of our SMSS sample. However, our detection
of a correlation between radial distance and metallicity is generally not
seen in studies of the thick disk in the context of this formation scenario
(e.g., Recio-Blanco et al., 2014; Peng et al., 2018), which disfavors this
interpretation.
2\. Gas-rich merger. At high redshifts, dwarf galaxies were likely all gas
rich with few stars formed, including those that merged with the early Milky
Way. Any gas-rich deposit into the Milky Way’s highly turbulent early disk
would be expected to have triggered star formation (Brook et al., 2004,
2007). The subsequent stars that formed from this merger should likely show no
obvious clumpy distribution in the integrals-of-motion space. Also, we would
expect the subsequent stars to have formed on a short timescale within the
disk following a gas-rich merger, suggesting a flat metallicity behavior (no
gradient) (Cheng et al., 2012). However, we do observe a gradient in our
sample (see Figure 4). Thus, it is unlikely that the Atari population formed
in a star formation episode after a gas-rich merger, although it could very
likely be associated with the metal-poor stars that formed in an accreted
galaxy before infall.
It is then interesting to consider the existence of significant numbers of
metal-poor stars with $[\mathrm{Fe}/\mathrm{H}]<-2.5$ in this context. These
stars do support accretion as the origin scenario of the Atari disk, as
opposed to star formation following a gas-rich merger, which would lead to a
population of higher metallicity stars with an average of
$[\mathrm{Fe}/\mathrm{H}]\approx-0.6$. It would thus take a merger(s)
injecting gas $\approx 1,000$ times more metal-poor to bring down the
$[\mathrm{Fe}/\mathrm{H}]$ of the disk’s interstellar medium enough to allow
the formation of such low-metallicity stars post-merger. This seems unlikely
to have occurred. However, such primitive stars may have easily formed in
early low-mass systems which were accreted first by neighboring, more massive
systems and eventually into the massive progenitor of the Atari disk.
3\. Direct accretion. Cosmological simulations have shown that a direct
accretion of dwarf-like galaxies coming in from specific directions can build
up a thick disk by donating their content in a planar configuration (for more
details about such simulations, see Abadi et al., 2003a, b). Either one major
merger with a massive satellite or the accretion of a number of smaller
systems would result in spatially distinct populations as measurable by
differences in $h_{R}$ and $h_{Z}$ (see Gilmore et al., 2002). Correlations
between $[\mathrm{Fe}/\mathrm{H}]$ and values for $h_{R}$ and $h_{Z}$ may
indeed principally indicate multiple populations since such a scenario would
deposit stars (not just gas) into the early Galactic disk that were formed
within the progenitor system before the merger event. For example, an ex-situ
Milky Way population would display a larger scale length compared to that of a
population formed by an in-situ scenario (Amôres et al., 2017). These stars
(now present in the disk) would also still share similar integrals of motion.
Finally, there would also be an expected observable metallicity gradient, both
vertically and radially due to the different origins of the stars (accreted
vs. in-situ formed stars). Eccentricities would also be broadly distributed,
over a wider range (Sales et al., 2009).
As can be seen in Figure 4, our Atari disk sample displays spatial metallicity
gradients in both the vertical and radial direction. It also shows a broad
range of eccentricities, as can be seen in Figure 6. We also find a moderate
correlation of the scale length as a function of $[\mathrm{Fe}/\mathrm{H}]$
for stars in the solar neighborhood vicinity, from $h_{R}\sim 2.4$ to 2.9 kpc
(see
Table 3). The existence of this gradient aligns with predictions for an ex-
situ population to have an increased scale length compared to an in-situ one
(Amôres et al., 2017), and supports the Atari disk to have an accretion
origin. Unfortunately, the picture is less obvious regarding the behavior of
the scale height. As our present Atari disk data are not sufficient to
provide strong evidence on the existence of a scale height metallicity
gradient, we strongly recommend future studies with larger samples to further
quantify this issue.
The direct accretion explanation is also supported by simulations of Milky
Way-like galaxies in the IllustrisTNG simulation (Nelson et al., 2019). We
analyze 198 simulated Milky Way analogs from Illustris TNG50, defined as disk
galaxies with stellar masses of $M_{*}=10^{10.5-11.2}\,M_{\odot}$ in relative
isolation at $z=0$ (originally identified by Engler et al., 2021; Pillepich et
al., 2021). Milky Way analogs are also reported in Illustris TNG100 (e.g.,
Mardini et al., 2020). When defining the thick disk of these galaxies, we look
exclusively at star particles vertically located between 0.75 and 3 kpc from
the plane of the galaxy (excluding the area dominated by the thin disk or
halo) and radially located between 3 and 15 kpc from the center of the galaxy
(excluding the area dominated by the bulge). We trace back the origin of the
star particles in the thick disk at $z=0$ and find that the vast majority of
stars were formed ex-situ: $95^{+3}_{-10}\%$ of the thick disk stars in each
Milky Way analog have accretion origins. We also calculated
$[\mathrm{Fe}/\mathrm{H}]$ vs. radial distance gradients for the ex-situ thick
disk populations. Among the 198 “Milky Way thick disks”, 71 of them have a
positive [Fe/H] vs. radial distance gradient for their ex-situ population.
This corresponds to $\sim$36% of the simulated ex-situ thick disk populations. Several of the
simulated ex situ thick disk populations have gradients that exactly match the
Atari disk observations.
This percentage remains consistently high when we consider lower metallicity
stars; for stars with $[\mathrm{Fe}/\mathrm{H}]$$<-0.8$, $96^{+3}_{-11}\%$ of
the thick disk stars have accretion origins. This trend is supported by the
results of Abadi et al. (2003b).
In summary, the behavior of the stellar spatial distributions, together with
the eccentricity distribution, the scale length, and plausibly also the scale
height lend support to a scenario in which the Atari disk formed by accretion
event(s) similar to those studied in Abadi et al. (2003b) as well as the
IllustrisTNG simulation.
4\. Radial migration. A radial migration scenario suggests that early
dynamical interactions occurred when the metallicity of the interstellar
medium was relatively low and the $\alpha$-abundance ratios of the disk stars
were high. Specifically, interactions of the disk with the spiral arms can
dilute the metallicity gradient by rearranging the orbital motion of stars
(Schönrich & Binney, 2009). This would lead to an exchange of thin and thick
disk stars. Any outward migration places stars on higher orbits above or below
the Galactic plane. By now, these migrated stars would have had enough time to
also experience significant orbital mixing, thus contributing to the
flattening of the gradient.
Accordingly, only a small or even no correlation between the rotational
velocity and the metallicity is expected in this scenario. Our sample does
show a significant correlation, as can be seen in Figure 5, suggesting that
radial migration has not played a major role in the (more recent) evolution of
the Atari disk.
### 6.2 Comparison with predictions of thick disk formation models based on
orbital eccentricities
In addition to metallicity gradients, information can also be gained from the
distribution of the stellar orbital eccentricities (e.g., Sales et al., 2009).
In the following, we consider eccentricity distribution predictions and
compare them with our results in the context of the chemo-dynamic constraints
already discussed in Section 6.1. Sales et al. (2009) predict (their Figure 3)
that
(i) a notable peak at low eccentricity should be present in the radial
migration and gas-rich scenarios ($e\sim$0.2-0.3),
(ii) the accretion scenario has an eccentricity distribution that is broadly
distributed but with a peak shifted towards higher values, and
(iii) the disk heating scenario has two peaks, with the primary one at
$e\sim$ 0.2-0.3 and a secondary one located at $e\sim 0.8$ (this secondary
peak is from the debris of an accreted/merged luminous satellite; if the disk
heating is due to the merging of subhalo(s), this secondary peak might not
exist).
Figure 7: Orbital eccentricity distribution for our Atari disk sample, using
the scale length of $h_{R}=2.48\pm 0.15$ kpc and scale height of
$h_{Z}=1.67\pm 0.15$ kpc in the ranges of $1<|Z/h_{Z}|<3$ and $2<|R/h_{R}|<3$,
corresponding to figure 3 in Sales et al. (2009). The solid black line
represents the superposition of the individual Gaussians. Upper panel: best
fit using two Gaussians with peaks at $e$ = 0.30 and 0.49. Lower panel: best
fit using three Gaussians with peaks at $e=$ 0.16, 0.33, and 0.51.
In Figure 7, we show the orbital eccentricity distributions of our Atari disk
sample and best-fitting Gaussians. We can reproduce the observed distribution
with two Gaussians in the upper panel (with peaks at $e=$0.33 and 0.54), and
with three Gaussians in the lower panel (with peaks at $e=$0.24, 0.40, and
0.61). The significant number of stars with $e>0.4$ in our sample argues
against the importance of radial migration and against these stars having
originated from star formation following a gas-rich merger, when considering
the formation and evolution of the Atari disk. Instead, the presence of two or
three broad, well separated peaks that fit the observed distribution quite
well supports the prediction of the direct accretion model, in that the
eccentricity distribution is quite broad.
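A decomposition like that of Figure 7 can be sketched with a one-dimensional Gaussian mixture; the use of scikit-learn below is our assumption, as the fitting tool is not stated here:

```python
# A minimal sketch of decomposing the eccentricity distribution into
# two or three Gaussian components, as in Figure 7.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_eccentricity_peaks(ecc, n_components):
    """Fit an n-component Gaussian mixture; return sorted peak positions."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(np.asarray(ecc).reshape(-1, 1))
    return np.sort(gmm.means_.ravel())

# e.g. fit_eccentricity_peaks(e, 2) and fit_eccentricity_peaks(e, 3) give
# the two- and three-component peak locations.
```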
We note that our distribution may qualitatively align with the disk heating
scenario, since we note a peak at $e\sim$0.3 and one at higher $e\sim 0.6$.
However, the higher-eccentricity peak is not located quite as high as
$e=0.8$, and the distribution can be well described by more than two
underlying Gaussian distributions. Consequently, the disk heating scenario
might play a role in the formation and evolution of the Atari disk, but the
discussion in Section 6.1 and the overall broad eccentricity distribution
suggest that an accretion scenario might be the dominant channel.
## 7 Findings and Conclusions
Our detailed kinematic investigation of metal-poor stars selected from
SkyMapper DR2 that are located in the Galactic disk has allowed us to identify
the Atari disk and learn about its characteristics and speculate on its
origin. In this Section, we synthesize our findings across Sections 5 and 6
and comment on other chemical characteristics of the Atari disk.
### 7.1 Kinematic & Spatial characterization of the Atari disk
We have assessed and characterized the Atari disk with a new sample of 7,127
low-metallicity member stars and have outlined some of its properties in
Section 5. The main findings regarding the spatial distribution of the stars
are as follows.
Our detailed study confirms earlier claims (Carollo et al., 2019) of a notable
velocity lag of the Atari disk compared to the canonical thick disk. The Atari
disk has a well defined mean velocity of $V_{\phi}\approx 154$ km s$^{-1}$ and
FWHM = 63.9 km s$^{-1}$, with individual values ranging from about 80 to 250
km s$^{-1}$, as can be seen in Figure 5. A $V_{\phi}$ distribution with a
distinct net rotation characterizes a disk population, rather than a halo
population. Our extensive kinematic selection results also align with previous
findings (Carollo et al., 2019) of a peak in angular momentum of $L_{z}\sim
1200$ kpc km s$^{-1}$ when restricting our sample to stars with $R=7$-$9$ kpc
from the Galactic center (due to $L_{z}$ increasing with increasing $R$,
assuming a constant rotational velocity). Correspondingly, other $R$ brackets
($R$ = 3-5, 5-7, 9-11 kpc) have lower or higher $L_{z}$ values (see Figure 2).
The eccentricities of our Atari disk sample cover a broad range of values
ranging from $e\sim 0.0$ to 1.0. The bulk of the stars have $e\sim 0.3$ to 0.7
which appears to be a range between that of the canonical thick disk and the
Galactic halo. A notable fraction of our stars have eccentricities different
from typical canonical thick disk values (see Table 2). There is no
significant sub-population of Atari disk stars with $e$ = 0.7-1 (only 61
stars), suggesting, again, that the Atari disk eccentricities range between
typical thick disk and halo eccentricities.
The velocity lag and the range of eccentricities offer strong support to the
origin scenario in which the Atari disk forms as a result of a major accretion
event in which a satellite (or satellites) plunged into the Galactic disk at
early times while coming from a specific direction (Abadi et al., 2003b; Sales
et al., 2009). For comparison, a gas-rich merger is favored as the formation
scenario for the canonical thick disk. This alone highlights distinct
differences between the nature of these two populations.
An accretion scenario for the Atari disk may also principally be supported by
a variable scale length and height with metallicity. However, investigating
this point with Milky Way mass galaxies in the IllustrisTNG simulation (Nelson
et al., 2019) indicates that while accretion history does affect the scale
height, other factors also play a role. Observationally, we do indeed find a
small increase in scale length with decreasing metallicity, from around 2.4
kpc at $[\mathrm{Fe}/\mathrm{H}]\sim-1.0$ to nearly 3 kpc at
$[\mathrm{Fe}/\mathrm{H}]\sim-1.6$.
As discussed in Section 5, the behavior of the scale height with decreasing
metallicity is somewhat inconclusive due to significant uncertainties (arising
from small sample sizes and difficulties in measuring the first term in
Equation 14). Therefore, we strongly recommend future studies of larger Atari
disk data attempting to investigate the existence of a $h_{z}$ gradient with
$[\mathrm{Fe}/\mathrm{H}]$.
### 7.2 Very and extremely metal-poor stars in the Atari disk as identified
in our SMSS sample
The $[\mathrm{Fe}/\mathrm{H}]$ behavior of the Atari disk appears to be
significantly different from that of the canonical thick disk, as it stretches
to much lower $[\mathrm{Fe}/\mathrm{H}]$, not unlike what is canonically found
in the (inner and outer) halo populations.
We searched our Atari disk sample for low-metallicity stars and identified 261
stars with $[\mathrm{Fe}/\mathrm{H}]<-2.0$ (4% of our sample), 55 stars with
$[\mathrm{Fe}/\mathrm{H}]<-2.5$ (1% of our sample), and 7 stars with
$[\mathrm{Fe}/\mathrm{H}]<-3.0$ (0.1% of our sample). We list stars with
$[\mathrm{Fe}/\mathrm{H}]<-2.5$ in Table 4, along with any available
literature metallicities. To check again whether these stars could be halo
stars, we inspected the long-term orbital histories ($Z_{max}$ and orbital
eccentricity) of these objects. The bulk of these stars have $Z_{max}<3$ kpc,
suggesting that they are indeed not part of the halo population. The stars
with higher $Z_{max}$ values appear not to have eccentricities exceeding
$e\sim 0.6$, again arguing against halo membership. This leaves the question
of whether our sample contains any thick disk stars. However, we find these
low-metallicity stars to generally have eccentricities or $Z_{max}$ values too
high to be associated with the thick disk (as shown in Table 2).
Of our 55 Atari disk stars with $[\mathrm{Fe}/\mathrm{H}]<-2.5$, $\sim 60$%
are readily found in the Simbad database (Wenger et al., 2000). Table 4 lists
the literature metallicities and corresponding references. We note that four
stars have also previously been classified to have disk-type kinematics, as is
noted in Table 4. Overall, for these stars with
$[\mathrm{Fe}/\mathrm{H}]<-2.5$ (excluding those with measurements from the
GALAH survey), the photometric $[\mathrm{Fe}/\mathrm{H}]$ estimates from Chiti
et al. (2021a) agree very reasonably with those from the literature, with a
mean difference of $0.11\,\pm\,0.05$ dex.
Several re-discovered stars display interesting chemical abundance patterns.
Five stars are limited-$r$ stars with light neutron-capture element
enhancements (Frebel, 2018; Hansen et al., 2018), two stars are mildly
$r$-process enhanced (Hansen et al., 2018; Ezzeddine et al., 2020) and two are
carbon-enhanced metal-poor (CEMP) stars. Of the seven stars with
$[\mathrm{Fe}/\mathrm{H}]$$<-3.0$, two are already known in the literature.
One was analyzed by the R-Process Alliance (Sakari et al., 2018) and found to
be a CEMP star, and the other was studied by Schlaufman & Casey (2014). Of
the remaining five, we have observed one star and chemical abundance results
will be reported in X. Ou et al. (in prep.).
We also decided to search the two original parent samples (the individual
action and velocity-based selection samples) for additional metal-poor stars.
We find eleven and seven more very and extremely metal-poor stars in the
sample, respectively. Of those extra eleven action-selected stars, eight
appear to have likely halo kinematics ($Z_{max}>3.0$ kpc and/or $e>0.7$); five
of these have $Z_{max}<4$ kpc but large eccentricities ($e>0.65$). We add the
remaining three stars to our Atari disk sample. Of the seven velocity-selected
stars, six have likely halo kinematics ($Z_{max}>3.0$ kpc and/or $e>0.7$). We
add only the remaining one to our sample. The stars are listed in Table 4.
While these four metal-poor stars were not selected into our final common
sample, which has the highest likelihood of being the most representative of
Atari disk stars, we note that it is highly likely that they do belong to the
Atari disk. They clearly do not belong to another population, as per their kinematic
properties. Hence, when searching for the most metal-poor stars it is critical
to consider these two selection methods individually as well, given the rarity
of such stars.
The existence of large numbers of bona-fide very and extremely metal-poor
stars in the Atari disk significantly supports an accretion scenario. A
massive progenitor system must have undergone at least a limited amount of
(early) chemical evolution that produced an early population of low-
metallicity stars. For comparison, the thick disk is not known for having many
such metal-poor stars, and the gas-rich merger scenario would not support the
existence of a large fraction either. We discuss possible scenarios for the
nature of the potential progenitor further below.
Table 4: Very and extremely metal-poor SMSS stars with Atari disk kinematics
R.A. (J2000) | Decl. (J2000) | Gaia ID | $[\mathrm{Fe}/\mathrm{H}]$phot | $[\mathrm{Fe}/\mathrm{H}]$lit | Zmax [kpc] | e | References for $[\mathrm{Fe}/\mathrm{H}]$lit
---|---|---|---|---|---|---|---
12 35 57.41 | $-$34 40 19.70 | 6158268802159564544 | $-$3.00 | $\cdots$ | 4.10 | 0.45 |
06 51 52.43 | $-$25 07 02.22 | 2921777156667244032$^{b}$ | $-$3.02 | $\cdots$ | 0.89 | 0.21 |
20 05 28.77 | $-$54 31 25.97 | 6473118900280458240 | $-$3.04 | $-$3.01 | 4.10 | 0.18 | Schlaufman & Casey (2014)
16 23 29.34 | $-$65 17 53.33 | 5827787046034460672$^{b}$ | $-$3.04 | $-$2.31 | 3.99 | 0.31 | Buder et al. (2021)
09 29 49.73 | $-$29 05 59.03 | 5633365176579363584 | $-$3.10 | $-$2.88 | 0.90 | 0.45 | Sakari et al. (2018)
19 25 0.04 | $-$15 57 43.35 | 4181010754108521472$^{c}$ | $-$3.14 | $\cdots$ | 1.49 | 0.28 |
18 11 39.41 | $-$46 45 43.50 | 6707545022835614464 | $-$3.24 | $\cdots$ | 1.90 | 0.42 |
21 48 07.05 | $-$43 43 23.74 | 6565897654232474240 | $-$3.24 | $\cdots$ | 4.80 | 0.57 |
14 59 14.59 | $-$10 49 42.34 | 6313811313365806208 | $-$3.31 | $\cdots$ | 3.70 | 0.41 |
23 30 19.62 | $-$08 13 15.20 | 2438343952886623744 | $-$3.48 | $-$3.20 | 4.20 | 0.56 | X. Ou et al. 2022 (in prep.)
15 45 12.76 | $-$31 05 29.22 | 6016676026902257280$^{b}$ | $-$3.53 | $\cdots$ | 2.10 | 0.22 |
Note. — A short version is shown here to illustrate the table form and
content, but the full content is accessible in the online table.
Table 5: Very and extremely metal-poor stars with Atari disk kinematics from the literature (collected through JINAbase)
R.A. (J2000) | Decl. (J2000) | Simbad identifier | $[\mathrm{Fe}/\mathrm{H}]$lit | $[\mathrm{C}/\mathrm{Fe}]$ | Zmax [kpc] | e | Ref
---|---|---|---|---|---|---|---
16 28 56.15 | $-$10 14 57.10 | 2MASS J16285613$-$1014576 | $-$2.00 | 0.24 | 0.58 | 0.43 | Sakari et al. (2018)
18 28 43.44 | $-$84 41 34.81 | 2MASS J18284356$-$8441346 | $-$2.03 | $-$0.39 | 3.19 | 0.32 | Ezzeddine et al. (2020)
05 52 15.78 | $-$39 53 18.47 | TYC 7602-1143-1 | $-$2.05 | $\cdots$ | 1.33 | 0.69 | Ruchti et al. (2011)
00 25 50.30 | $-$48 08 27.07 | 2MASS J00255030$-$4808270 | $-$2.06 | 0.27 | 1.61 | 0.53 | Barklem et al. (2005)
14 10 15.84 | $-$03 43 55.20 | 2MASS J14101587$-$0343553 | $-$2.06 | $-$0.09 | 1.17 | 0.71 | Sakari et al. (2018)
02 21 55.60 | $-$54 10 14.40 | 2MASS J02215557$-$5410143 | $-$2.09 | 0.00 | 3.16 | 0.33 | Holmbeck et al. (2020)
23 05 50.54 | $-$25 57 22.29 | CD $-$26∘ 16470 | $-$2.13 | $\cdots$ | 2.75 | 0.34 | Ruchti et al. (2011)
13 43 26.70 | $+$15 34 31.10 | HD119516 | $-$2.16 | $\cdots$ | 1.07 | 0.52 | For & Sneden (2010)
15 54 27.29 | $+$00 21 36.90 | 2MASS J15542729+0021368 | $-$2.18 | 0.42 | 0.88 | 0.51 | Sakari et al. (2018)
03 46 45.72 | $-$30 51 13.32 | HD23798 | $-$2.22 | $\cdots$ | 0.83 | 0.54 | Roederer et al. (2010)
15 02 38.50 | $-$46 02 06.60 | 2MASS J15023852$-$4602066 | $-$2.23 | $-$0.16 | 0.97 | 0.30 | Holmbeck et al. (2020)
04 01 49.00 | $-$37 57 53.40 | 2MASS J04014897$-$3757533 | $-$2.28 | $-$0.30 | 2.61 | 0.52 | Holmbeck et al. (2020)
23 02 15.75 | $-$33 51 11.03 | 2MASS J23021574$-$3351110 | $-$2.29 | 0.37 | 2.08 | 0.44 | Barklem et al. (2005)
11 41 08.90 | $-$45 35 28.00 | 2MASS J11410885$-$4535283 | $-$2.32 | $\cdots$ | 1.33 | 0.63 | Ruchti et al. (2011)
09 29 49.74 | $-$29 05 59.20 | 2MASS J09294972$-$2905589 | $-$2.32 | 0.11 | 0.92 | 0.46 | Sakari et al. (2018)
19 16 18.20 | $-$55 44 45.40 | 2MASS J19161821$-$5544454 | $-$2.35 | $-$0.80 | 2.37 | 0.49 | Hansen et al. (2018)
22 49 23.56 | $-$21 30 29.50 | TYC 6393-564-1 | $-$2.38 | $\cdots$ | 2.96 | 0.54 | Ruchti et al. (2011)
00 31 16.91 | $-$16 47 40.79 | HD2796 | $-$2.40 | $-$0.48 | 1.13 | 0.68 | Mardini et al. (2019b)
11 58 01.28 | $-$15 22 18.00 | 2MASS J11580127$-$1522179 | $-$2.41 | 0.62 | 3.62 | 0.38 | Sakari et al. (2018)
15 14 18.90 | $+$07 27 02.80 | 2MASS J15141890+0727028 | $-$2.42 | 0.47 | 3.66 | 0.55 | Roederer et al. (2010)
05 10 35.47 | $-$15 51 38.30 | UCAC4 371-007255 | $-$2.43 | $\cdots$ | 0.74 | 0.60 | Cohen et al. (2013)
16 10 31.10 | $+$10 03 05.60 | 2MASS J16103106+1003055 | $-$2.43 | 0.53 | 2.49 | 0.23 | Hansen et al. (2018)
05 51 42.14 | $-$33 27 33.76 | TYC 7062-1120-1 | $-$2.46 | $\cdots$ | 1.71 | 0.68 | Holmbeck et al. (2020)
23 16 30.80 | $-$35 34 35.90 | BPS CS30493$-$0071 | $-$2.46 | $-$0.01 | 1.26 | 0.40 | Roederer et al. (2014)
01 07 31.23 | $-$21 46 06.50 | UCAC4 342-001270 | $-$2.55 | $\cdots$ | 3.28 | 0.11 | Cohen et al. (2013)
18 36 23.20 | $-$64 28 12.50 | 2MASS J18362318$-$6428124 | $-$2.57 | 0.10 | 1.37 | 0.24 | Hansen et al. (2018)
18 40 59.85 | $-$48 41 35.30 | 2MASS J18405985$-$4841353 | $-$2.58 | 0.60 | 1.50 | 0.48 | Ezzeddine et al. (2020)
18 36 12.12 | $-$73 33 44.17 | 2MASS J18361214$-$7333443 | $-$2.61 | 0.10 | 2.13 | 0.48 | Ezzeddine et al. (2020)
01 49 07.94 | $-$49 11 43.16 | CD $-$49∘ 506 | $-$2.65 | $\cdots$ | 2.75 | 0.49 | Ruchti et al. (2011)
09 47 19.20 | $-$41 27 04.00 | 2MASS J09471921$-$4127042 | $-$2.67 | $-$0.42 | 2.40 | 0.35 | Holmbeck et al. (2020)
21 51 45.74 | $-$37 52 30.88 | 2MASS J21514574$-$3752308 | $-$2.76 | 0.38 | 1.87 | 0.35 | Beers et al. (1992)
22 24 00.14 | $-$42 35 16.05 | 2MASS J22240014$-$4235160 | $-$2.77 | 0.14 | 2.45 | 0.49 | Roederer et al. (2014)
15 56 28.74 | $-$16 55 33.40 | SMSS J155628.74$-$165533.4 | $-$2.79 | 0.36 | 2.15 | 0.36 | Jacobson et al. (2015)
22 02 16.36 | $-$05 36 48.40 | 2MASS J22021636$-$0536483 | $-$2.80 | $-$0.25 | 3.34 | 0.58 | Hansen et al. (2018)
03 01 00.70 | $+$06 16 31.87 | BPS CS31079$-$0028 | $-$2.84 | $\cdots$ | 1.34 | 0.62 | Roederer et al. (2010)
04 19 45.54 | $-$36 51 35.92 | 2MASS J04194553$-$3651359 | $-$2.89 | 0.06 | 3.58 | 0.52 | Roederer et al. (2014)
21 20 28.65 | $-$20 46 22.90 | BPS CS29506$-$0007 | $-$2.94 | $\cdots$ | 1.02 | 0.45 | Roederer et al. (2014)
14 35 58.50 | $-$07 19 26.50 | 2MASS J14355850$-$0719265 | $-$2.99 | $-$0.40 | 3.02 | 0.45 | Ezzeddine et al. (2020)
20 42 48.77 | $-$20 00 39.37 | BD $-$20$^{\circ}$ 6008 | $-$3.05 | $-$0.43 | 1.40 | 0.32 | Roederer et al. (2014)
06 30 55.57 | $+$25 52 43.81 | SDSS J063055.57+255243.7$^{a}$ | $-$3.05 | $\cdots$ | 0.74 | 0.50 | Aoki et al. (2013)
13 03 29.48 | $+$33 51 09.14 | 2MASS J12222802+3411318 | $-$3.05 | $-$0.18 | 3.93 | 0.61 | Lai et al. (2008)
00 20 16.20 | $-$43 30 18.00 | UCAC4 233-000355 | $-$3.07 | 3.02 | 2.70 | 0.52 | Cohen et al. (2013)
13 19 47.00 | $-$04 23 10.25 | TYC 4961-1053-1$^{a}$ | $-$3.10 | $-$0.52 | 4.50 | 0.30 | Hollek et al. (2011)
12 45 02.68 | $-$07 38 46.95 | SDSS J124502.68$-$073847.0$^{a}$ | $-$3.17 | 2.54 | 4.00 | 0.50 | Aoki et al. (2013)
14 16 04.71 | $-$20 08 54.08 | 2MASS J14160471$-$2008540 | $-$3.20 | 1.44 | 1.92 | 0.54 | Barklem et al. (2005)
13 22 35.36 | $+$00 22 32.60 | UCAC4 452-052732 | $-$3.38 | $\cdots$ | 2.98 | 0.45 | Cohen et al. (2013)
23 21 21.56 | $-$16 05 05.65 | HE 2318$-$1621$^{a}$ | $-$3.67 | 0.54 | 3.64 | 0.58 | Placco et al. (2020)
11 18 35.88 | $-$06 50 45.02 | TYC 4928-1438-1$^{a}$ | $-$3.73 | 0.08 | 4.71 | 0.31 | Hollek et al. (2011)
13 02 56.24 | $+$01 41 52.12 | UCAC4 459-050836 | $-$3.88 | 1.34 | 2.80 | 0.26 | Barklem et al. (2005)
09 47 50.70 | $-$14 49 07.00 | HE 0945$-$1435$^{a}$ | $-$3.90 | $<$2.03 | 0.76 | 0.57 | Hansen et al. (2015)
10 55 19.28 | $+$23 22 34.02 | SDSS J105519.28+232234.0$^{a}$ | $-$4.00 | $<$0.70 | 2.20 | 0.45 | Aguado et al. (2017)
14 26 40.33 | $-$02 54 27.49 | HE 1424$-$0241$^{a}$ | $-$4.05 | $<$0.63 | 3.50 | 0.41 | Cohen et al. (2007)
12 47 19.47 | $-$03 41 52.50 | SDSS J124719.46$-$034152.4 | $-$4.11 | $<$1.61 | 1.71 | 0.24 | Caffau et al. (2013)
12 04 41.39 | $+$12 01 11.52 | SDSS J120441.38+120111.5$^{a}$ | $-$4.34 | $<$1.45 | 3.35 | 0.39 | Placco et al. (2015)
10 29 15.15 | $+$17 29 27.88 | SDSS J102915.14+172927.9 | $-$4.99 | $<$0.70 | 2.47 | 0.06 | Caffau et al. (2011)
$^{a}$Stars that have too large uncertainties in their Gaia EDR3 astrometric data to be useful.
### 7.3 Very and extremely metal-poor stars in the Atari disk as identified
in literature samples
Knowing about the existence of very and extremely metal-poor stars in the
Atari disk, we also applied our selection procedure to known samples of metal-
poor stars to identify additional ones. The SMSS sample used in the analysis
presented here does not cover, e.g., Northern-hemisphere stars, warm low-
metallicity stars, extremely metal-poor stars with
$[\mathrm{Fe}/\mathrm{H}]<-3.5$, or very faint stars. This leaves room for
more discoveries.
Hence, we chose to investigate the entire data set compiled in JINAbase
(Abohalima & Frebel, 2018); the latest version is publicly available on
GitHub (https://github.com/Mohammad-Mardini/JINAbase). We cross-matched all
stars with $[\mathrm{Fe}/\mathrm{H}]$$<-2.0$ (2,302 stars) in the JINAbase
catalog with Gaia EDR3 and applied the same quality cuts
(astrometric_excess_noise $<$ 1 $\mu$as and parallax_over_error
$\geqslant$ 5). We then collected radial velocities for these stars if they
were not already listed in JINAbase. This resulted in a sample of 1,098 stars,
from which we identified a total of 47 Atari disk stars (5%) with
$[\mathrm{Fe}/\mathrm{H}]<-2.0$. Of those, 22 stars have
$[\mathrm{Fe}/\mathrm{H}]<-2.5$, eight have $[\mathrm{Fe}/\mathrm{H}]<-3.0$,
and two have $[\mathrm{Fe}/\mathrm{H}]<-4.0$. A number of stars show
interesting chemical abundance features. Table 5 lists all of these Atari disk
stars. They have a mean $\langle V_{\phi}\rangle=152$ km s-1, highly
consistent with the mean velocity of 154 km s-1 of our full SMSS Atari disk
sample (see Table 2). Eccentricities range from about 0.05 to 0.7, with
typical values around 0.4 to 0.5.
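For concreteness, this selection step can be sketched in a few lines of Python, assuming the JINAbase table and its Gaia EDR3 cross-match have already been merged into a single pandas DataFrame; the file name and column names below are illustrative placeholders, not the actual catalog schema.

```python
import pandas as pd

# Hypothetical merged JINAbase x Gaia EDR3 table; names are placeholders.
stars = pd.read_csv("jinabase_x_gaia_edr3.csv")

# Metallicity pre-selection ([Fe/H] < -2.0) and the astrometric quality
# cuts quoted in the text (astrometric_excess_noise < 1 uas and
# parallax_over_error >= 5); Gaia reports the excess noise in mas.
good = stars[
    (stars["feh"] < -2.0)
    & (stars["astrometric_excess_noise"] < 1e-3)  # 1 uas expressed in mas
    & (stars["parallax_over_error"] >= 5.0)
]
print(f"{len(good)} of {len(stars)} stars survive the cuts")
```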
During our investigation of the literature sample as collected from JINAbase,
we noticed a number of stars (e.g., faint ones) with low proper motion
uncertainties but highly uncertain parallaxes, leading to their exclusion
based on the adopted Gaia astrometry quality cut. In order not to miss
additional extremely metal-poor stars merely due to insufficient data quality,
we thus opted to carry out an additional probability analysis to identify any
potential Atari disk members. We drew 10,000 realizations of the 6-D astrometry
of each JINAbase star with $[\mathrm{Fe}/\mathrm{H}]$$<-3$, assuming a normal
distribution for each measured quantity. We then reran the whole analysis
using these realizations to determine the most likely membership among halo,
thin disk, and thick-disk-like kinematic behavior. We only identified nine
stars with $[\mathrm{Fe}/\mathrm{H}]$ $\sim-3.0$. Of those, three stars have
$[\mathrm{Fe}/\mathrm{H}]\lesssim-4.0$. We also identified three stars with
$[\mathrm{Fe}/\mathrm{H}]$ $<-4.0$ with a 50% probability of being part of
the Atari disk. However, upon further inspection, we find both their Zmax and
eccentricities to be too high for them to be bona-fide Atari disk stars. We
thus do not include them in our sample.
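A minimal sketch of this Monte Carlo resampling step, assuming Gaussian and uncorrelated errors on the astrometric quantities (the full Gaia covariance matrix could be used instead), is given below; `classify_orbit` is a hypothetical stand-in for the kinematic classifier of Section 4.

```python
import numpy as np

rng = np.random.default_rng(42)
N_DRAW = 10_000

def resample_memberships(star, classify_orbit):
    """star: dict mapping each 6-D astrometric quantity to a
    (value, sigma) pair. classify_orbit: hypothetical stand-in for
    the kinematic classifier; returns 'halo', 'thin', or 'thick'."""
    draws = {key: rng.normal(mu, sigma, N_DRAW)
             for key, (mu, sigma) in star.items()}
    labels = [
        classify_orbit(ra=draws["ra"][i], dec=draws["dec"][i],
                       parallax=draws["parallax"][i],
                       pmra=draws["pmra"][i], pmdec=draws["pmdec"][i],
                       rv=draws["rv"][i])
        for i in range(N_DRAW)
    ]
    # Fraction of realizations assigned to each Galactic component
    values, counts = np.unique(labels, return_counts=True)
    return dict(zip(values, counts / N_DRAW))
```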
### 7.4 The metallicity distribution function of the Atari disk
The upper panel of Figure 8 shows the metallicity distribution function (MDF)
for our final Atari disk sample (green histogram), as well as for the
velocity-selection (red histogram) and action-selection methods (gray
histogram). The distributions look very similar, albeit with different overall
numbers. Our main sample shows an exponential decrease in the number of stars
with decreasing $[\mathrm{Fe}/\mathrm{H}]$, but with stars reaching down to
$[\mathrm{Fe}/\mathrm{H}]$ $\sim-3.5$, unlike what has been found for the
canonical thick disk. The distributions of the two parent samples support this
overall behavior.
The inset in Figure 8 shows just the metal-poor tail
($[\mathrm{Fe}/\mathrm{H}]$ $<-2.5$) of the MDFs, with a best-fitting
exponential ($\Delta\log{\rm N}/\Delta[\mathrm{Fe}/\mathrm{H}]=1.13\pm 0.06$).
The best-fitting exponential curve (dashed black line) drops to zero at
$[\mathrm{Fe}/\mathrm{H}]\approx-4.0$, supporting the existence of only a
handful of Atari disk stars (as identified in the literature) with
$[\mathrm{Fe}/\mathrm{H}]\approx-4.0$ (see Table 5).
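The slope of such a tail can be estimated with a linear least-squares fit of $\log{\rm N}$ against $[\mathrm{Fe}/\mathrm{H}]$; a minimal sketch (the input file and the 0.2 dex bin width are assumptions) reads:

```python
import numpy as np

# feh: metallicities of the Atari disk sample (hypothetical input file)
feh = np.loadtxt("atari_feh.txt")
tail = feh[feh < -2.5]

# Histogram the metal-poor tail; 0.2 dex bins are an assumption.
counts, edges = np.histogram(tail, bins=np.arange(-4.2, -2.4, 0.2))
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0                      # avoid log10(0)

# Linear fit: log10 N = slope * [Fe/H] + const
slope, const = np.polyfit(centers[mask], np.log10(counts[mask]), 1)
print(f"Delta logN / Delta [Fe/H] = {slope:.2f}")
```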
The lower panel of Figure 8 then shows the very metal-poor tail
($[\mathrm{Fe}/\mathrm{H}]$$<-2.5$) of our Atari disk sample (green histogram)
in comparison with the stars from the literature (blue histogram) that we
identified as Atari disk stars (see Section 7.3). Both samples show that the
Atari disk contains a number of stars with $[\mathrm{Fe}/\mathrm{H}]$$<-3.0$,
with the literature sample containing stars with $[\mathrm{Fe}/\mathrm{H}]<-4.0$
(in agreement with our best-fitting exponential curve) and even
$[\mathrm{Fe}/\mathrm{H}]\approx-5.0$.
The MDF currently shows no clear peak over the range up to
$[\mathrm{Fe}/\mathrm{H}]=-0.8$. However, there is likely increasing
contamination by canonical thick disk stars as [Fe/H] rises above $\sim-1.0$.
Assuming that the upper bound on the mean metallicity of the Atari disk is set
by [Fe/H] = $-0.8$ (following the simple selection recipe presented here and
previous Atari disk selections in Beers et al. 2014 and Naidu et al. 2020), we
estimate a conservative upper limit to the stellar mass of the progenitor
system of $\sim 10^{9}$ M⊙ from the mass-metallicity relation in Kirby et al.
(2013). The progenitor mass is likely much lower than this value, though, as
the Atari disk ought to be dwarfed by the thick disk (which has a mass of 1.17
$\times 10^{10}$ M⊙), since the Atari disk kinematic signature is only
detectable relative to the thick disk in the low-metallicity regime.
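As a sanity check, the quoted upper limit follows directly if one assumes the Kirby et al. (2013) stellar mass-metallicity relation in its commonly quoted form (the exact coefficients here are our assumption):

$\langle[\mathrm{Fe}/\mathrm{H}]\rangle=-1.69+0.30\,\log_{10}\left(\frac{M_{\ast}}{10^{6}\,\mathrm{M}_{\odot}}\right)\;\Rightarrow\;\log_{10}\left(\frac{M_{\ast}}{10^{6}\,\mathrm{M}_{\odot}}\right)=\frac{-0.8+1.69}{0.30}\approx 2.97,$

i.e., $M_{\ast}\approx 9\times 10^{8}\,\mathrm{M}_{\odot}\sim 10^{9}$ M⊙, consistent with the quoted conservative upper limit.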
Figure 8: Top: Metallicity distribution function (MDF) for stars with
$[\mathrm{Fe}/\mathrm{H}]$$<-0.8$ of the action-selected method (gray
histogram), the velocity-selected method (red histogram), and our Atari disk
sample (green histogram). The inset figure shows the MDFs of stars with
$[\mathrm{Fe}/\mathrm{H}]$$<-2.5$ and the best-fitting exponential
($\Delta\log{\rm N}/\Delta[\mathrm{Fe}/\mathrm{H}]=1.13\pm 0.06$). Bottom: MDF
of the metal-poor tail (with $[\mathrm{Fe}/\mathrm{H}]$$<-2.5$) of our Atari
disk sample (green histogram) compared with the sample of stars that we
identify as Atari disk stars from JINAbase (blue histogram).
### 7.5 Chemical abundance characteristics of the Atari disk
An accretion origin of the Atari disk would imply that distinct chemical
abundance signatures may be identifiable among Atari disk stars, in particular
at the lowest metallicities. We briefly comment on findings as obtained from
the available literature abundances. A more complete discussion will be
presented in X. Ou et al. (2022, in prep.).
Besides the fact that a significant number of stars with
$[\mathrm{Fe}/\mathrm{H}]$$<-3.0$ seem to belong to the Atari disk, we find
several interesting chemical signatures. The 56 (47 + 9 stars with low quality
parallaxes) JINAbase-selected stars with $[\mathrm{Fe}/\mathrm{H}]<-2.0$
display an average [$\alpha$/Fe] $\geqslant 0.3$ dex, which is a feature of
enrichment by core-collapse supernovae and is generally seen in more metal-
poor stars. This enhanced $\alpha$-abundance behavior is unlike that of the
thick disk, which generally shows [$\alpha$/Fe] $\lesssim 0.2$ dex. However,
we note that the lower [$\alpha$/Fe] of the thick disk may simply reflect its
higher metallicity range compared to that of the Atari disk.
Carbon enhancement among metal-poor stars is regarded as a signature of very
early star formation and commonly found among the most metal-poor halo stars
(e.g., Frebel & Norris 2015). Of the 17 stars with $[\mathrm{Fe}/\mathrm{H}]$
$\lesssim-3.0$, three are CEMP stars with $[\mathrm{C}/\mathrm{Fe}]$$>0.7$
(17%). If we were to also count the two additional stars with upper limits
that do not exclude a CEMP-nature, the fraction would increase to 29%.
Interestingly, none of the five stars with $[\mathrm{Fe}/\mathrm{H}]$
$\lesssim-4.0$ appear to be carbon-enhanced at face value, although the two
stars with upper limits on carbon are in this metallicity group. If they were
indeed CEMP stars, the fraction of CEMP stars could be as high as 41%. For
comparison, in the halo, 24% of stars with $[\mathrm{Fe}/\mathrm{H}]$ $<-2.5$
are CEMP stars, 43% at $[\mathrm{Fe}/\mathrm{H}]$ $<-3.0$, 60% at
$[\mathrm{Fe}/\mathrm{H}]$ $<-3.5$, and 81% at $[\mathrm{Fe}/\mathrm{H}]$
$<-4.0$ (Placco et al., 2014). It thus remains to be seen what the CEMP
fraction is among the lowest metallicity stars in the Atari disk, but the
existence of at least three CEMP stars with $[\mathrm{Fe}/\mathrm{H}]$
$\lesssim-3.0$ points to inhomogeneous enrichment driven by Population III
faint supernovae (Umeda & Nomoto, 2003) within the earliest star-forming
systems, which offers additional support for an accretion origin of the Atari
disk.
### 7.6 On the origin and history of the Atari disk
Overall, we find that the Atari disk is principally disk-y in nature, as it
appears to be confined to a somewhat puffed-up, disk-like structure. This is
illustrated by the fact that the long-term orbital evolution of Atari disk
stars, including the most metal-poor ones, shows them to remain within
Z${}_{\text{max}}$ = 3 kpc. The Atari disk appears to be distinct from the
canonical thick disk due to its rotational velocity lagging by $\sim 30$ km
s-1 and its distinct peak in angular momentum at a given radius. Moreover,
Atari disk stars exhibit a wider range of eccentricities than canonical thick
disk stars, and the Atari disk stellar population exhibits a significant low-
metallicity tail.
Based on our discussion in Section 6, the origin of the Atari disk likely
stems from an early accretion event. However, going forward, it will be
important to compare our findings with results from tailored theoretical
formation models for the Atari disk. In particular, cosmological simulations
focusing on disk formation will be able to shed more light on how multiple
disk components (e.g., the Atari disk) may form and evolve, and on the nature
of any transitory stellar populations between the components. We isolated our
sample to $[\mathrm{Fe}/\mathrm{H}]$$<-0.8$ to preferentially include Atari
disk stars, but future observational investigations may be able to remove this
metallicity criterion if a sufficiently pure dynamical criterion were
established. An improved selection criterion is particularly desirable, as an
accretion scenario may support the existence of higher
$[\mathrm{Fe}/\mathrm{H}]$ stars, depending on the chemical evolution of the
progenitor system.
To investigate whether any currently known accreted structures could feasibly
be related to the Atari disk, we compared the kinematic properties of the
Atari disk to those of several recently identified structures (see Figure 9).
We list some comparisons below:
Gaia-Sausage-Enceladus: The Gaia-Sausage-Enceladus (GSE) was identified using
varied selection methods (e.g., Belokurov et al., 2018; Helmi, 2020), which
result in differing degrees of contamination with overlapping structures.
However, the GSE stars cover a narrow range in rotational velocity centered at
$\langle V_{\phi}\rangle=0$ km s-1 coupled with a broad $V_{r}$ distribution.
The orbital eccentricities typically are $e>0.8$, and the GSE has a narrow MDF
that peaks at $[\mathrm{Fe}/\mathrm{H}]$$\approx-1.17$ (Feuillet et al.,
2020). A comparison of these properties with the kinematic properties of our
Atari disk sample readily shows differences, suggesting no association with
the GSE structure.
Kraken: The Kraken is the largest ($2\times 10^{8}$ M⊙) and oldest ($\approx
11$ Gyr) galactic merger in the Milky Way’s history, as described in Kruijssen
et al. (2020). The spatial boundaries of the Kraken remnants are not well
constrained. However, field stars originating from massive satellites reside
deeper in the gravitational potential of the Milky Way with a clear separation
due to dynamical friction (Amorisco, 2017). This suggests that the Kraken’s
debris would settle in low-energy and more eccentric orbits ($e>0.5$). In
contrast, our Atari disk sample extends to higher Galactocentric distances and
contains a considerable number of stars with near-circular orbits. Hence, the
Kraken appears to be unrelated to the Atari disk.
Heracles: Heracles is a stellar population in the inner Milky Way
(R${}_{GC}<4$ kpc) identified from the SDSS/APOGEE survey DR16 (Horta et al.,
2021) with an accretion origin. Looking at the integrals of motion space of
Heracles (wide range of orbital energies centered around L${}_{z}\approx 0$;
see figure 8 in Horta et al. 2021) and of the Atari disk (narrow range of
orbital energies with a wide range of L${}_{z}$), as currently identified,
suggests no immediate association. At face value, the less evolved part of
Heracles could occupy higher orbital energy values, similar to what we found
for the Atari disk. However, it is still unclear how to reconcile the
discrepant $L_{z}$ distributions. This makes an association between the two
populations seem unlikely.
Nyx: Nyx is a prograde (V${}_{r}\approx 134$ km s-1, V${}_{\phi}\approx 130$
km s-1, V${}_{\theta}\approx 53$ km s-1) stellar stream spatially located
close to the Galactic disk, originally identified in position-velocity space
(Necib et al., 2020). For a direct comparison with the kinematic properties of
Nyx, we calculated the spherical velocity components (V${}_{r}$, V${}_{\phi}$,
V${}_{\theta}$) for our Atari disk sample using the Galactocentric spherical
coordinates described in Appendix B of Binney & Tremaine (2008).
Interestingly, the mean rotational velocity of our Atari disk sample
(V${}_{\phi}\approx 150$ km s-1) somewhat overlaps with that of Nyx
(V${}_{\phi}\approx 130$ km s-1). However, the mean radial velocity of our
Atari disk sample ($\langle\text{V}_{r}\rangle\sim$ 10 km s-1) is in stark
disagreement with V${}_{r}\approx 134$ km s-1 for Nyx. Nyx also has a mean
eccentricity of $e=0.68$, somewhat distinct from the eccentricity distribution
of the Atari disk.
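A minimal numpy sketch of this coordinate conversion, assuming Galactocentric Cartesian positions (x, y, z) and velocities (vx, vy, vz) are already in hand and using the standard textbook formulas, is:

```python
import numpy as np

def spherical_velocities(x, y, z, vx, vy, vz):
    """Convert Galactocentric Cartesian velocities to spherical
    components (v_r, v_phi, v_theta), with theta measured from the
    z-axis; standard textbook formulas."""
    R = np.hypot(x, y)                        # cylindrical radius
    r = np.sqrt(x**2 + y**2 + z**2)           # spherical radius
    v_r = (x * vx + y * vy + z * vz) / r      # radial component
    v_phi = (x * vy - y * vx) / R             # azimuthal (rotation)
    v_theta = (z * (x * vx + y * vy) - R**2 * vz) / (r * R)  # polar
    return v_r, v_phi, v_theta
```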
Given some similarities between the properties of Nyx and the Atari disk, we
further tested their association by compiling the 6-D phase-space information
of the Nyx sample (Necib, priv. comm.) and running it through our
classification algorithm from Section 4. Essentially all of the Nyx stars are
classified as being associated with the Galactic halo, not the Atari disk. We
thus conclude that the Nyx stellar stream is most likely not associated with
the Atari disk.
We also separately investigated the V${}_{r}$ distribution of our Atari disk
sample. If this distribution shows two peaks with mean values of equal
magnitude but opposite sign, it is a sign of a radial merger event having led
to the formation of this structure. At the same time, the two $V_{r}$ peaks
should display near-identical mean rotational (V${}_{\phi}$) and polar
(V${}_{\theta}$) velocity values. This has been shown to be the case for,
e.g., the GSE (Belokurov et al., 2018), Nyx (Necib et al., 2020), and others.
To test for this scenario, for a well-defined sample of Atari disk stars
within the solar radius (7 kpc $<$ R $<$ 9 kpc), we performed a 3D Gaussian
mixture model fit over the velocity space (V${}_{r}$, V${}_{\phi}$, and
V${}_{\theta}$) of our Atari disk sample (2,874 stars). This yielded two
Gaussian distributions that peak at V${}_{r}$ = 42 km s-1 and $-43$ km s-1.
These two peaks also have V${}_{\phi}$ = 145 km s-1 and 150 km s-1, and
V${}_{\theta}$ = 0.67 km s-1 and 0.95 km s-1, respectively. The surprisingly
good match of the two peaks' values of V${}_{r}$ (in magnitude), V${}_{\phi}$,
and V${}_{\theta}$ strongly suggests that the Atari disk was formed through a
radial merger.
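Such a fit can be sketched with scikit-learn (an assumed tool choice; the velocity array could be built, e.g., from the conversion sketched above):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# V: (N, 3) array of (v_r, v_phi, v_theta) for stars with
# 7 kpc < R < 9 kpc; hypothetical input file.
V = np.load("atari_velocities.npy")

gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(V)
for mean in gmm.means_:
    print("peak: v_r={:7.1f}, v_phi={:7.1f}, v_theta={:7.1f} km/s"
          .format(*mean))
```

The two fitted means can then be compared for the sign-flipped V${}_{r}$ and near-identical V${}_{\phi}$ and V${}_{\theta}$ discussed above.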
Finally, we compared the location of our Atari disk sample in $E-L_{z}$ to the
locations of other Galactic structures, following the right panel of figure 4
in Naidu et al. (2020). For this comparison, we recalculated the orbital
energies of our Atari disk sample and of the Nyx stream as described in Naidu
et al. (2020), for consistency. Figure 9 shows a schematic view of the
approximate locations in $E-L_{z}$ space of our Atari disk sample (blue
ellipsoid) and of the other stellar structures. We also added the approximate
location of Nyx (dark cyan ellipsoid).
Notably, the Atari disk does not overlap with the Kraken, the Helmi stream,
or the GSE. However, it partially overlaps with Wukong, the in-situ halo, and
the $\alpha$-rich disk (a.k.a. the canonical thick disk). But the wider spread
in the orbital energy and L${}_{z}$ of our Atari disk sample rules out any
close association with the in-situ halo or the Wukong structures. Also, the
rotational velocity lag of our Atari disk sample with respect to the thick
disk rules out a full association with that component. Overall, the unique
location of the Atari disk in $E-L_{z}$ space further supports its definition
as its own component or structure within the Milky Way.
Figure 9: Schematic approximate location of various galactic structures in the
$E-L_{z}$ space adopted from Naidu et al. (2020). The approximate location of
our Atari disk sample is highlighted by the blue ellipsoid. Our Atari disk
sample overlaps with the $E-L_{z}$ space of the Wukong, the in-situ halo, and
the $\alpha$-rich disk (canonical thick disk).
We conclude that the Atari disk is a unique, ancient low-metallicity component
located within the Galactic disk that is not associated with any other
currently known structure, given its distinct properties (e.g., velocities,
eccentricities, metallicities, and location in $E-L_{z}$). Looking ahead, it
will be important to further study this component. More detailed information
on the chemical abundances of Atari disk stars, especially those with
$[\mathrm{Fe}/\mathrm{H}]<-3.0$, could reveal meaningful insights into the
nature of the progenitor system (mass, star formation history, accretion
time), which may have formed quickly and grown significantly over a short
period before merging with the proto-Milky Way. The massive structures
identified in the Galactic halo, such as the GSE, seem to account for the
total mass from which the early Milky Way grew. However, these mass estimates
did not consider any additional structures potentially hiding in the disk
(Naidu et al., 2020; Kruijssen et al., 2020). It thus appears that the Atari
disk adds to the observed tally of Galactic structures with massive
progenitors that will need to be taken into consideration when establishing
the early accretion history of the Milky Way.
## 8 Summary
In this extensive chemo-dynamic study, we have comprehensively characterized
the Atari disk as a separate component of the Milky Way’s disk. Below, we
highlight our main conclusions regarding the nature and origin of the Atari
disk:
* •
We developed a dynamical approach to statistically assign 36,010 low-
metallicity stars selected from SkyMapper DR2 to the Galactic thin disk, thick
disk, and halo populations. We utilized two independent probability
distribution function approaches, using the action integrals and a velocity-
based method (following Bensby et al. 2003), to isolate a clean Atari disk
sample while also minimizing the contamination by Galactic halo members and
thin and thick disk stars. Our clean Atari disk sample comprises 7,127 stars,
all with $-3.5<$ $[\mathrm{Fe}/\mathrm{H}]$$<-0.8$.
* •
We find the Atari disk to have a scale length of $h_{R}=2.48\,\pm\,0.15$ kpc
and scale height of $h_{Z}=1.67\,\pm\,0.15$ kpc. The metallicity distribution
of the Atari disk has notable correlations with $|Z|$, $V_{\phi}$, $e$, and
$R$. The Atari disk sample shows a mean rotational velocity of
V${}_{\phi}\approx 154$ km s-1 and a broad eccentricity distribution that
peaks at $e=0.45$. The Atari disk sample contains a number of stars on
higher-eccentricity orbits than the canonical thick disk. It remains to be
seen to what extent the scale length and scale height are dependent on
metallicity.
* •
Based on our understanding of the nature of the Atari disk and the properties
of our sample, we also developed a simple recipe that could be readily applied
to any sample to single out Atari disk stars.
* •
Utilizing photometric metallicities adopted from Chiti et al. (2021a) (in
combination with high quality Gaia EDR3 astrometric solutions), in our clean
Atari disk sample of 7,127 stars, we identify 261 stars with
$[\mathrm{Fe}/\mathrm{H}]$ $<-2.0$, 66 stars with
$[\mathrm{Fe}/\mathrm{H}]\lesssim-2.5$, and 11 stars with
$[\mathrm{Fe}/\mathrm{H}]\lesssim-3.0$. Also, through an additional search, we
find 17 stars with $[\mathrm{Fe}/\mathrm{H}]\lesssim-3.0$ and five stars with
$[\mathrm{Fe}/\mathrm{H}]\lesssim-4.0$ in the literature (collected through
JINAbase) to be associated with the Atari disk. All these metallicities are
below the long-standing metallicity floor of
${[\mathrm{Fe}/\mathrm{H}]}=-2.35$ (Beers et al. 2002) of the thick disk. In
fact, the discovery of these extremely and ultra-metal-poor stars opens a
window to studying the nature and formation history of the proto-disk of our
Galaxy.
* •
Comparing our results with predictions from the four popular scenarios for the
formation and evolution of the thick disk (disk heating, gas-rich merger,
direct accretion, and radial migration), we conclude that the Atari disk may
have formed through accretion, analogous to the direct accretion scenario
suggested for the canonical thick disk. Significant roles played by the other
mechanisms in forming the Atari disk are observationally disfavored. This
strongly argues for the need for tailored models that attempt to explain the
observed properties, in order to further reveal the origin and history of the
Atari disk and its relation to the other disk components.
* •
We quantified the shape of the MDF for our Atari disk sample. It is well fit
by an exponential profile with a slope of $\Delta\log{\rm
N}/\Delta[\mathrm{Fe}/\mathrm{H}]=1.13\pm 0.06$ over the entire metallicity
range of our sample, reaching down to $[\mathrm{Fe}/\mathrm{H}]\sim-4.0$, in
line with several ultra-metal-poor stars being identified as members of the
Atari disk. The MDF currently shows no clear peak, which may be caused by the
likely increasing contamination by canonical thick disk stars as [Fe/H] rises
above $\sim-1.0$. The mass of the Atari disk is likely lower than $\sim
10^{9}$ M⊙, both because it ought to be dwarfed by the canonical thick disk
and from the mass-metallicity relation assuming an upper bound of
$\langle$[Fe/H]$\rangle$ = $-0.8$.
* •
We have investigated possible direct associations of our Atari disk component
with the following Milky Way structures: Gaia-Sausage-Enceladus, Kraken,
Heracles, and Nyx, by comparing their phase-space parameters and properties in
the $E-L_{z}$ plane. These comparisons provide no strong evidence that the
Atari disk is associated with any of these Galactic structures.
* •
This study highlights the need for more extensive formation modeling of the
Galactic disk system and its history, for cosmological simulations of the
early Milky Way, and for precise future observations of Atari disk stars. All
these approaches will be required to further investigate in even more detail
the observed chemo-dynamical properties of the Atari disk in order to
comprehensively reconstruct its origin and subsequent evolution. Quantifying
its role within the early formation of the Galaxy will have important
ramifications for understanding the history of our Milky Way.
We thank John E. Norris, Alexander Ji, Lina Necib, Tilman Hartwig, Miho
Ishigaki, Chengdong Li, and Oudai Oweis for fruitful discussions about stellar
populations. This work is supported by Basic Research Grant (Super AI) of
Institute for AI and Beyond of the University of Tokyo. A.F. acknowledges
support from NSF grant AST-1716251, and thanks the Wissenschaftskolleg zu
Berlin for their wonderful Fellow’s program and generous hospitality. This
work has made use of data from the European Space Agency (ESA) mission Gaia
(https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and
Analysis Consortium (DPAC,
https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has
been provided by national institutions, in particular the institutions
participating in the Gaia Multilateral Agreement. The national facility
capability for SkyMapper has been funded through ARC LIEF grant LE130100104
from the Australian Research Council, awarded to the University of Sydney, the
Australian National University, Swinburne University of Technology, the
University of Queensland, the University of Western Australia, the University
of Melbourne, Curtin University of Technology, Monash University and the
Australian Astronomical Observatory. SkyMapper is owned and operated by The
Australian National University’s Research School of Astronomy and
Astrophysics. The survey data were processed and provided by the SkyMapper
Team at ANU. The SkyMapper node of the All-Sky Virtual Observatory (ASVO) is
hosted at the National Computational Infrastructure (NCI). Development and
support of the SkyMapper node of the ASVO have been funded in part by
Astronomy Australia Limited (AAL) and the Australian Government through the
Commonwealth’s Education Investment Fund (EIF) and National Collaborative
Research Infrastructure Strategy (NCRIS), particularly the National eResearch
Collaboration Tools and Resources (NeCTAR) and the Australian National Data
Service Projects (ANDS). Funding for RAVE has been provided by the Australian
Astronomical Observatory; the Leibniz-Institut fuer Astrophysik Potsdam (AIP);
the Australian National University; the Australian Research Council; the
French National Research Agency; the German Research Foundation (SPP 1177 and
SFB 881); the European Research Council (ERC-StG 240271 Galactica); the
Istituto Nazionale di Astrofisica at Padova; The Johns Hopkins University; the
National Science Foundation of the USA (AST-0908326); the W. M. Keck
foundation; the Macquarie University; the Netherlands Research School for
Astronomy; the Natural Sciences and Engineering Research Council of Canada;
the Slovenian Research Agency; the Swiss National Science Foundation; the
Science $\&$ Technology Facilities Council of the UK; Opticon; Strasbourg
Observatory; and the Universities of Groningen, Heidelberg and Sydney. This
work made use of the Third Data Release of the GALAH Survey (Buder et al.,
2021). The GALAH Survey is based on data acquired through the Australian
Astronomical Observatory, under programs: A/2013B/13 (The GALAH pilot survey);
A/2014A/25, A/2015A/19, A2017A/18 (The GALAH survey phase 1); A2018A/18 (Open
clusters with HERMES); A2019A/1 (Hierarchical star formation in Ori OB1);
A2019A/15 (The GALAH survey phase 2); A/2015B/19, A/2016A/22, A/2016B/10,
A/2017B/16, A/2018B/15 (The HERMES-TESS program); and A/2015A/3, A/2015B/1,
A/2015B/19, A/2016A/22, A/2016B/12, A/2017A/14 (The HERMES K2-follow-up
program). We acknowledge the traditional owners of the land on which the AAT
stands, the Gamilaraay people, and pay our respects to elders past and
present. This paper includes data that has been provided by AAO Data Central
(datacentral.aao.gov.au). The Guoshoujing Telescope (the Large Sky Area Multi-
Object Fiber Spectroscopic Telescope; LAMOST) is a National Major Scientific
Project built by the Chinese Academy of Sciences. Funding for the project has
been provided by the National Development and Reform Commission. LAMOST is
operated and managed by the National Astronomical Observatories, Chinese
Academy of Sciences. This research has made use of the SIMBAD database,
operated at CDS, Strasbourg, France.
## References
* Abadi et al. (2003a) Abadi, M. G., Navarro, J. F., Steinmetz, M., & Eke, V. R. 2003a, ApJ, 591, 499, doi: 10.1086/375512
* Abadi et al. (2003b) —. 2003b, ApJ, 597, 21, doi: 10.1086/378316
* Abohalima & Frebel (2018) Abohalima, A., & Frebel, A. 2018, ApJS, 238, 36, doi: 10.3847/1538-4365/aadfe9
* Aguado et al. (2017) Aguado, D. S., González Hernández, J. I., Allende Prieto, C., & Rebolo, R. 2017, A&A, 605, A40, doi: 10.1051/0004-6361/201730654
* Allende Prieto et al. (2016) Allende Prieto, C., Kawata, D., & Cropper, M. 2016, A&A, 596, A98, doi: 10.1051/0004-6361/201629787
* Amôres et al. (2017) Amôres, E. B., Robin, A. C., & Reylé, C. 2017, A&A, 602, A67, doi: 10.1051/0004-6361/201628461
* Amorisco (2017) Amorisco, N. C. 2017, MNRAS, 464, 2882, doi: 10.1093/mnras/stw2229
* An & Beers (2020) An, D., & Beers, T. C. 2020, ApJ, 897, 39, doi: 10.3847/1538-4357/ab8d39
* Anders et al. (2014) Anders, F., Chiappini, C., Santiago, B. X., et al. 2014, A&A, 564, A115, doi: 10.1051/0004-6361/201323038
* Aoki et al. (2013) Aoki, W., Beers, T. C., Lee, Y. S., et al. 2013, AJ, 145, 13, doi: 10.1088/0004-6256/145/1/13
* Bailer-Jones et al. (2021) Bailer-Jones, C. A. L., Rybizki, J., Fouesneau, M., Demleitner, M., & Andrae, R. 2021, AJ, 161, 147, doi: 10.3847/1538-3881/abd806
* Bailer-Jones et al. (2018) Bailer-Jones, C. A. L., Rybizki, J., Fouesneau, M., Mantelet, G., & Andrae, R. 2018, AJ, 156, 58, doi: 10.3847/1538-3881/aacb21
* Barklem et al. (2005) Barklem, P. S., Christlieb, N., Beers, T. C., et al. 2005, A&A, 439, 129, doi: 10.1051/0004-6361:20052967
* Beers et al. (2002) Beers, T. C., Drilling, J. S., Rossi, S., et al. 2002, AJ, 124, 931, doi: 10.1086/341377
* Beers et al. (2014) Beers, T. C., Norris, J. E., Placco, V. M., et al. 2014, ApJ, 794, 58, doi: 10.1088/0004-637X/794/1/58
* Beers et al. (1992) Beers, T. C., Preston, G. W., & Shectman, S. A. 1992, AJ, 103, 1987, doi: 10.1086/116207
* Belokurov et al. (2018) Belokurov, V., Erkal, D., Evans, N. W., Koposov, S. E., & Deason, A. J. 2018, MNRAS, 478, 611, doi: 10.1093/mnras/sty982
* Bennett & Bovy (2019) Bennett, M., & Bovy, J. 2019, MNRAS, 482, 1417, doi: 10.1093/mnras/sty2813
* Bensby et al. (2011) Bensby, T., Alves-Brito, A., Oey, M. S., Yong, D., & Meléndez, J. 2011, ApJ, 735, L46, doi: 10.1088/2041-8205/735/2/L46
* Bensby et al. (2003) Bensby, T., Feltzing, S., & Lundström, I. 2003, A&A, 410, 527, doi: 10.1051/0004-6361:20031213
* Bensby et al. (2005) Bensby, T., Feltzing, S., Lundström, I., & Ilyin, I. 2005, A&A, 433, 185, doi: 10.1051/0004-6361:20040332
* Binney (2010) Binney, J. 2010, MNRAS, 401, 2318, doi: 10.1111/j.1365-2966.2009.15845.x
* Binney & Sanders (2016) Binney, J., & Sanders, J. L. 2016, Astronomische Nachrichten, 337, 939, doi: 10.1002/asna.201612403
* Binney & Tremaine (2008) Binney, J., & Tremaine, S. 2008, Galactic Dynamics: Second Edition (Princeton University Press)
* Bovy (2015) Bovy, J. 2015, ApJS, 216, 29, doi: 10.1088/0067-0049/216/2/29
* Brook et al. (2007) Brook, C., Richard, S., Kawata, D., Martel, H., & Gibson, B. K. 2007, ApJ, 658, 60, doi: 10.1086/511056
* Brook et al. (2004) Brook, C. B., Kawata, D., Gibson, B. K., & Freeman, K. C. 2004, ApJ, 612, 894, doi: 10.1086/422709
* Buder et al. (2021) Buder, S., Sharma, S., Kos, J., et al. 2021, MNRAS, doi: 10.1093/mnras/stab1242
* Caffau et al. (2011) Caffau, E., Bonifacio, P., François, P., et al. 2011, Nature, 477, 67, doi: 10.1038/nature10377
* Caffau et al. (2013) Caffau, E., Bonifacio, P., Sbordone, L., et al. 2013, A&A, 560, A71, doi: 10.1051/0004-6361/201322488
* Carollo et al. (2010) Carollo, D., Beers, T. C., Chiba, M., et al. 2010, ApJ, 712, 692, doi: 10.1088/0004-637X/712/1/692
* Carollo et al. (2019) Carollo, D., Chiba, M., Ishigaki, M., et al. 2019, ApJ, 887, 22, doi: 10.3847/1538-4357/ab517c
* Carter et al. (2020) Carter, C., Conroy, C., Zaritsky, D., et al. 2020, arXiv e-prints, arXiv:2012.00036. https://arxiv.org/abs/2012.00036
* Cheng et al. (2012) Cheng, J. Y., Rockosi, C. M., Morrison, H. L., et al. 2012, ApJ, 746, 149, doi: 10.1088/0004-637X/746/2/149
* Chiba & Beers (2000) Chiba, M., & Beers, T. C. 2000, AJ, 119, 2843, doi: 10.1086/301409
* Chiti et al. (2020) Chiti, A., Frebel, A., Jerjen, H., Kim, D., & Norris, J. E. 2020, ApJ, 891, 8, doi: 10.3847/1538-4357/ab6d72
* Chiti et al. (2021a) Chiti, A., Frebel, A., Mardini, M. K., et al. 2021a, ApJS, 254, 31, doi: 10.3847/1538-4365/abf73d
* Chiti et al. (2021b) Chiti, A., Mardini, M. K., Frebel, A., & Daniel, T. 2021b, ApJ, 911, L23, doi: 10.3847/2041-8213/abd629
* Cohen et al. (2013) Cohen, J. G., Christlieb, N., Thompson, I., et al. 2013, ApJ, 778, 56, doi: 10.1088/0004-637X/778/1/56
* Cohen et al. (2007) Cohen, J. G., McWilliam, A., Christlieb, N., et al. 2007, ApJ, 659, L161, doi: 10.1086/518031
* Cordoni et al. (2020) Cordoni, G., Da Costa, G. S., Yong, D., et al. 2020, MNRAS, doi: 10.1093/mnras/staa3417
* Cui et al. (2012) Cui, X.-Q., Zhao, Y.-H., Chu, Y.-Q., et al. 2012, Research in Astronomy and Astrophysics, 12, 1197, doi: 10.1088/1674-4527/12/9/003
* de Jong et al. (2010) de Jong, J. T. A., Yanny, B., Rix, H.-W., et al. 2010, ApJ, 714, 663, doi: 10.1088/0004-637X/714/1/663
* Dehnen & Binney (1998) Dehnen, W., & Binney, J. 1998, MNRAS, 294, 429, doi: 10.1046/j.1365-8711.1998.01282.x
* Di Matteo et al. (2020) Di Matteo, P., Spite, M., Haywood, M., et al. 2020, A&A, 636, A115, doi: 10.1051/0004-6361/201937016
* Engler et al. (2021) Engler, C., Pillepich, A., Pasquali, A., et al. 2021, MNRAS, 507, 4211, doi: 10.1093/mnras/stab2437
* Ezzeddine et al. (2020) Ezzeddine, R., Rasmussen, K., Frebel, A., et al. 2020, ApJ, 898, 150, doi: 10.3847/1538-4357/ab9d1a
* Feuillet et al. (2020) Feuillet, D. K., Feltzing, S., Sahlholdt, C. L., & Casagrande, L. 2020, MNRAS, 497, 109, doi: 10.1093/mnras/staa1888
* For & Sneden (2010) For, B.-Q., & Sneden, C. 2010, AJ, 140, 1694, doi: 10.1088/0004-6256/140/6/1694
* Frebel (2018) Frebel, A. 2018, Annual Review of Nuclear and Particle Science, 68, 237, doi: 10.1146/annurev-nucl-101917-021141
* Frebel & Norris (2015) Frebel, A., & Norris, J. E. 2015, ARA&A, 53, 631, doi: 10.1146/annurev-astro-082214-122423
* Gaia Collaboration et al. (2020) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2020, arXiv e-prints, arXiv:2012.01533. https://arxiv.org/abs/2012.01533
* Gaia Collaboration et al. (2016) Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, A&A, 595, A1, doi: 10.1051/0004-6361/201629272
* Gilmore & Reid (1983) Gilmore, G., & Reid, N. 1983, MNRAS, 202, 1025, doi: 10.1093/mnras/202.4.1025
* Gilmore et al. (1989) Gilmore, G., Wyse, R. F. G., & Kuijken, K. 1989, ARA&A, 27, 555, doi: 10.1146/annurev.aa.27.090189.003011
* Gilmore et al. (2002) Gilmore, G., Wyse, R. F. G., & Norris, J. E. 2002, ApJ, 574, L39, doi: 10.1086/342363
* Gravity Collaboration et al. (2019) Gravity Collaboration, Abuter, R., Amorim, A., et al. 2019, A&A, 625, L10, doi: 10.1051/0004-6361/201935656
* Guiglion et al. (2015) Guiglion, G., Recio-Blanco, A., de Laverny, P., et al. 2015, A&A, 583, A91, doi: 10.1051/0004-6361/201525883
* Hansen et al. (2015) Hansen, T., Hansen, C. J., Christlieb, N., et al. 2015, ApJ, 807, 173, doi: 10.1088/0004-637X/807/2/173
* Hansen et al. (2018) Hansen, T. T., Holmbeck, E. M., Beers, T. C., et al. 2018, ApJ, 858, 92, doi: 10.3847/1538-4357/aabacc
* Helmi (2020) Helmi, A. 2020, ARA&A, 58, 205, doi: 10.1146/annurev-astro-032620-021917
* Hollek et al. (2011) Hollek, J. K., Frebel, A., Roederer, I. U., et al. 2011, ApJ, 742, 54, doi: 10.1088/0004-637X/742/1/54
* Holmbeck et al. (2020) Holmbeck, E. M., Hansen, T. T., Beers, T. C., et al. 2020, ApJS, 249, 30, doi: 10.3847/1538-4365/ab9c19
* Horta et al. (2021) Horta, D., Schiavon, R. P., Mackereth, J. T., et al. 2021, MNRAS, 500, 1385, doi: 10.1093/mnras/staa2987
* Jacobson et al. (2015) Jacobson, H. R., Keller, S., Frebel, A., et al. 2015, ApJ, 807, 171, doi: 10.1088/0004-637X/807/2/171
* Jurić et al. (2008) Jurić, M., Ivezić, Ž., Brooks, A., et al. 2008, ApJ, 673, 864, doi: 10.1086/523619
* Kazantzidis et al. (2008) Kazantzidis, S., Bullock, J. S., Zentner, A. R., Kravtsov, A. V., & Moustakas, L. A. 2008, ApJ, 688, 254, doi: 10.1086/591958
* Kerr & Lynden-Bell (1986) Kerr, F. J., & Lynden-Bell, D. 1986, MNRAS, 221, 1023, doi: 10.1093/mnras/221.4.1023
* Kilic et al. (2017) Kilic, M., Munn, J. A., Harris, H. C., et al. 2017, ApJ, 837, 162, doi: 10.3847/1538-4357/aa62a5
* Kirby et al. (2013) Kirby, E. N., Cohen, J. G., Guhathakurta, P., et al. 2013, The Astrophysical Journal, 779, 102, doi: 10.1088/0004-637x/779/2/102
* Kordopatis et al. (2011) Kordopatis, G., Recio-Blanco, A., de Laverny, P., et al. 2011, A&A, 535, A107, doi: 10.1051/0004-6361/201117373
* Kruijssen et al. (2020) Kruijssen, J. M. D., Pfeffer, J. L., Chevance, M., et al. 2020, MNRAS, 498, 2472, doi: 10.1093/mnras/staa2452
* Kuijken & Gilmore (1991) Kuijken, K., & Gilmore, G. 1991, ApJ, 367, L9, doi: 10.1086/185920
* Kunder et al. (2017) Kunder, A., Kordopatis, G., Steinmetz, M., et al. 2017, AJ, 153, 75, doi: 10.3847/1538-3881/153/2/75
* Lai et al. (2008) Lai, D. K., Bolte, M., Johnson, J. A., et al. 2008, ApJ, 681, 1524, doi: 10.1086/588811
* Lee et al. (2011) Lee, Y. S., Beers, T. C., An, D., et al. 2011, ApJ, 738, 187, doi: 10.1088/0004-637X/738/2/187
* Li & Zhao (2017) Li, C., & Zhao, G. 2017, ApJ, 850, 25, doi: 10.3847/1538-4357/aa93f4
* Li et al. (2018) Li, C., Zhao, G., Zhai, M., & Jia, Y. 2018, ApJ, 860, 53, doi: 10.3847/1538-4357/aac50f
# Omni SCADA Intrusion Detection Using Deep Learning Algorithms
Jun Gao, Luyun Gan, Fabiola Buschendorf, Liao Zhang, Hua Liu, Peixue Li,
Xiaodai Dong and Tao Lu
###### Abstract
We investigate deep learning based omni intrusion detection systems (IDS) for
supervisory control and data acquisition (SCADA) networks that are capable of
detecting both temporally uncorrelated and correlated attacks. Among the IDSs
developed in this paper, a feedforward neural network (FNN) detects temporally
uncorrelated attacks at an F1 score of 99.967${\pm}$0.005% but correlated
attacks at as low as 58${\pm}$2%. In contrast, long short-term memory (LSTM)
detects correlated attacks at 99.56${\pm}$0.01% and uncorrelated attacks at
99.3${\pm}$0.1%. Combining LSTM and FNN through an ensemble approach further
improves the IDS performance, with an F1 score of 99.68${\pm}$0.04% regardless
of the temporal correlations among the data packets.
###### Index Terms:
Feedforward Neural Networks, Multilayer Perceptron, Intrusion detection,
Network security, SCADA systems, Supervised learning, LSTM, IDS, Modbus,
Denial of Service (DoS).
††Manuscript received June 14, 2019; revised XY, 2019. This work is supported
in part by the Natural Sciences and Engineering Research Council of Canada
(NSERC) Discovery Grant (Grant No. RGPIN-2015-06515), the Mitacs Globalink
program, and Nvidia Corporation TITAN-X GPU grant. (Corresponding author: Tao
Lu)††J. Gao, L. Gan, L. Zhang, X. Dong and T. Lu are with the Department of
Electrical and Computer Engineering, University of Victoria, EOW 448, 3800
Finnerty Rd., Victoria, British Columbia, V8P 5C2, Canada, (e-mail:
{jungao,luyun,liao,xdong,taolu}@uvic.ca)††F. Buschendorf was with the
Department of Computer Science, University of Goettingen, Germany, (e-mail:
fabiola.buschendorf@protonmail.com)††H. Liu and P. Li are with Fortinet
Technology Inc., 899 Kifer Road, Sunnyvale, California 94086, USA, (e-mail:
<EMAIL_ADDRESS>
## I Introduction
Supervisory control and data acquisition (SCADA) is a well-established
industrial system used to automate and monitor processes and to gather data
from remote or local equipment such as programmable logic controllers (PLC),
remote terminal units (RTU) and human-machine interfaces (HMI). SCADA became
popular in the 1960s for power plants, water treatment [1], and oil pipelines
[2], which were usually disconnected from the Internet and made use of
hardware devices running proprietary protocols. The network was secured from
harmful attacks by its obscurity, and thus security measures were barely
implemented. However, as more and more SCADA systems adopt the Modbus protocol
over TCP and are accessible via the Internet, they become vulnerable to
cyberattacks. In 2010, Stuxnet [3] spread across the world and damaged Iranian
nuclear facilities. Since then, the need for industrial network security has
become urgent.
To safeguard SCADA networks, an intrusion detection system (IDS) needs to be
implemented. An IDS can be signature-based or anomaly-based. Traditionally,
signature-based IDSs have been the mainstream approach to detecting SCADA
attacks. A signature-based IDS identifies specific patterns in traffic data to
detect malicious activities and can be implemented as policy rules in IDS
software such as Snort [4, 5]. Ref. [6] investigates a set of attacks against
Modbus and designs rules to detect them. Ref. [7] proposes a state-relation-
based IDS (SRID) to increase the accuracy and decrease the false negative rate
in denial-of-service (DoS) detection. However, these detection methods are
complicated and only valid for specific scenarios. Overall, as found in
previous research, a signature-based IDS is only efficient at finding known
attacks, and its performance relies heavily on experts' knowledge and
experience.
An anomaly-based IDS [8] overcomes these challenges by introducing machine
learning to identify attack patterns from data. It is also widely used in
other applications such as mobile data misuse detection [9], software [10] and
wireless sensor security [11]. Several machine learning algorithms have been
proposed for anomaly-based IDSs. Linda et al. [12] tailored a neural network
model with error back-propagation and Levenberg-Marquardt learning rules in
their IDS. Rrushi and Kang [13] combined logistic regression and maximum
likelihood estimation to detect anomalies in process control networks.
Poojitha et al. [14] trained a feedforward neural network (FNN) to classify
intrusions on the KDD99 dataset and an industrial control system dataset.
Zhang et al. [15] used a support vector machine and an artificial immune
system to identify malicious network traffic in the smart grid. Maglaras and
Jiang [16] developed a one-class support vector machine module to train on
network traces off-line and detect intrusions on-line. All these machine
learning algorithms are excellent at learning attack patterns from in-packet
features. None of them, however, takes into account the temporal features
between packets, and thus they will not perform well on attacks such as DoS,
which have a strong temporal dependence.
DoS attacks are among the most popular attacks to slow down or even crash
SCADA networks. Most devices in SCADA operate in low-power mode with limited
capacity and are vulnerable to DoS [17]. To date, various DoS types, including
spoofing [18], flooding and smurfing [19], have been reported. Among all types
of DoS, flooding DoS is widely exploited: hackers send a massive number of
packets to jam the target network. In [20], the author exploits the TCP SYN
flooding attack against a vulnerability of TCP transmission using the hping
DoS attack tool. Flooding DoS, along with all other DoS, is difficult to
detect because the in-packet features extracted from each data packet may not
display any suspicious pattern [21].
Similar to DoS, man-in-the-middle (MITM) is another attack that is hard to
detect by observing in-packet features alone. It is more efficient to detect
such attacks by observing inter-packet patterns in the time domain.
Anomaly-based IDSs for DoS and MITM have become popular along with the
advances of machine learning. For example, in [22], an auto-associative kernel
regression (AAKR) model coupled with the sequential probability ratio test
(SPRT) is implemented to detect DoS. The result is not satisfactory because
the regression model does not take the temporal signatures of DoS into
consideration. In [23], an FNN is used to classify abnormal packets in SCADA
with 85% accuracy for MITM-based random response injection and 90% accuracy
for DoS-based random response injection attacks, but only 12% for replay-based
attacks. The authors exploit various attacks, including DoS and man-in-the-
middle (MITM) attacks, in a testbed built on Modbus/RTU instead of Modbus/TCP.
In [24], the authors propose a one-class support vector machine (OCSVM)
combined with a k-means clustering method to detect DoS. They set flags on
every 10 packets to reflect time-series relationships, but such handcrafted
features may be easily bypassed by expert attackers.
To detect temporally correlated attacks such as flooding DoS and MITM, one
should capture the temporal anomalies of these attacks. However, the above-
mentioned IDSs are not designed to extract temporal patterns from packet
sequences. A more practical approach is to implement an IDS with time series
analysis capability.
Recurrent neural networks (RNN) are machine learning models that incorporate
the recognition of temporal patterns. Among all RNN models, long short-term
memory (LSTM) has gained popularity in applications from speech recognition
[25] and music composition [26] to machine translation [27]. It is designed to
predict future events from the information in previous time steps and is
therefore suitable for detecting attacks with temporal correlation. For
example, Ref. [28] applied LSTM to distributed DoS detection with a high
success rate. In [29] the authors also developed a time-series anomaly
detector based on LSTM [30] networks to enhance IDS performance and applied
this framework to the dataset in [31]. However, the number of DoS attacks in
that dataset is relatively small and the time interval of the DoS attacks is
too long, making the detection inefficient.
Despite their excellent performance in detecting temporally correlated attacks
such as DoS and MITM, the capacity of RNNs to detect temporally uncorrelated
attacks is limited compared to other types of machine learning algorithms such
as FNNs. In this paper, utilizing the advantages of both RNN and FNN while
avoiding their disadvantages, we implement an omni IDS that can detect all
attacks regardless of their temporal dependence. On a SCADA testbed [17], we
demonstrate that our IDS reaches the highest performance against all attacks
compared to IDSs that employ an RNN or FNN alone.
## II SCADA Testbed and Data Synthesis
Our IDS is tested on a simulated SCADA testbed. A simulated network has the
advantage of being easy to maintain, change and operate, and is less costly
than a network of real devices. A software testbed, which simulates a SCADA
industrial network and emulates the attacks, was built by L. Zhang [17] on the
basis of the work of T. Morris [32]. In the past, several preliminary studies
on SCADA security were conducted on this testbed [33, 34]. The attack target
is a simple SCADA network consisting of two tanks using Modbus over TCP. The
liquid level of the tanks is controlled by pumps and measured by sensors via
Modbus control information. The purpose of this network is to attract hackers
and study possible defense methods. Such a system is called a honeypot, as it
fools the attacker while studying the attacker's behaviour. This tank system
was developed with the MBLogic HMIBuilder and HMIServer toolkit [35] and has
been extended by L. Zhang in [17]. The HMI's purpose is to pull data from the
sensors or send the desired pump speed to the motor periodically. The back end
of the HMI is a PLC while the front end is a web browser.
As this system is simulated, we make use of four virtual machines, as shown in
Fig. 1. The SCADA system runs on a Modbus master and several slaves. The HMI
is deployed on a virtual host called Nova; thus we refer to this host as the
Modbus master. In order to extend the network, some Modbus slaves such as PLCs
are simulated by the HoneyD software [36], which provides a more realistic
honeypot. The role of a Modbus slave is to process commands from the master by
pulling sensory data about the tank system from the PLCs and sending it back
to the master.
Figure 1: Testbed architecture [17]
The data needed to feed the neural network is generated by an attack machine,
a virtual host named Kali. Kali is a Debian-derived Linux distribution used
for penetration testing that features many attack and defense tools. In
addition to the message exchange between the Modbus master (Nova) and its
slaves, we can launch normal traffic mixed with various attacks from Kali. A
command line tool, Modpoll [37], is used to send Modbus instructions to the
PLC, which controls sensitive tank system variables. An example Modpoll
instruction which sends a pump speed of 5 to the system looks like this:
$ modpoll -0 -r 32210 10.0.0.5 5
The command addresses a simulated PLC with an IP address of 10.0.0.5 and a
register address which contains either a threshold value (registers 42212 -
42215), the current pump speed (32210), or the tank level (42210, 42211),
measured by the sensors. Modpoll sends Modbus requests with function code 16
to attempt a write action to the specified registers. By modifying the pump
speed the attackers can exceed the allowed tank level and create serious
damage to the system. A script on Kali randomly chooses between these normal
and malicious Modbus instructions and launches a Modpoll instruction with
another randomly chosen parameter. This ensures the desired distribution of
attack/non-attack data; a minimal sketch of such a launcher is given below.
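The following Python sketch illustrates the idea; the benign and malicious speed ranges, the attack probability, and the pacing are illustrative assumptions rather than the actual script.

```python
import random
import subprocess
import time

PLC_IP = "10.0.0.5"
NORMAL_PUMP_SPEEDS = range(1, 6)        # assumed benign speed range
MALICIOUS_PUMP_SPEEDS = range(50, 100)  # assumed out-of-range speeds

def launch_once(attack_ratio=0.3):
    """Send one Modpoll write; with probability attack_ratio it is a
    malicious (over-speed) command, otherwise a normal one."""
    malicious = random.random() < attack_ratio
    speed = random.choice(
        MALICIOUS_PUMP_SPEEDS if malicious else NORMAL_PUMP_SPEEDS)
    # Register 32210 holds the current pump speed (see text).
    subprocess.run(
        ["modpoll", "-0", "-r", "32210", PLC_IP, str(speed)],
        check=False)
    return malicious

for _ in range(1000):
    launch_once()
    time.sleep(random.uniform(0.1, 1.0))  # assumed pacing
```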
The traffic is recorded by the fourth virtual machine, referred to as the
“Defense Wall”, which operates in bridge mode and is thus invisible to the
attacker. With PyShark we capture the traffic between Nova and the Modbus
slaves and between the attacker machine Kali and the PLCs. During this process
we can label each packet as malicious or normal.
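A minimal capture-and-label sketch with PyShark might look as follows; the interface name, display filter, attacker address, and the simplistic labeling rule are illustrative assumptions, not the exact logic of our IDS.

```python
import pyshark

# Bridge interface on the "Defense Wall" VM; the name is an assumption.
capture = pyshark.LiveCapture(interface="br0",
                              display_filter="modbus || tcp")

ATTACKER_IP = "10.0.0.99"   # hypothetical address of the Kali host

for pkt in capture.sniff_continuously(packet_count=1000):
    try:
        src = pkt.ip.src
    except AttributeError:   # non-IP frames carry no label
        continue
    # Simplistic illustrative labeling: traffic sourced from the
    # attacker VM is marked malicious; the real labeling uses the
    # launcher script's ground truth.
    label = "malicious" if src == ATTACKER_IP else "normal"
    print(pkt.sniff_time, src, pkt.highest_layer, label)
```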
### II-A Features extracted from the data packets
In our testbed, we use a self-developed IDS installed on the “Defense Wall” to
extract 19 features from each captured data packet. They are listed below:
1. Source IP address;
2. Destination IP address;
3. Source port number;
4. Destination port number;
5. TCP sequence number;
6. Transaction identifier, set by the client to uniquely identify each request;
7. Function code, identifying the Modbus function used;
8. Reference number of the specified register;
9. Modbus register data;
10. Modbus exception code;
11. Time stamp;
12. Relative time;
13. Highest threshold;
14. Lowest threshold;
15. High threshold;
16. Low threshold;
17. Pump speed;
18. Tank 1 water level;
19. Tank 2 water level.
Here, the “Relative time” represents the time in seconds for packets relative
to the first packet in the same TCP session. To reduce the periodicity of this
feature, we reset it to zero when “Relative time” reaches 3,000 seconds.
In our IDS, we adopt feature scaling of each feature $x$ in the dataset
according to
$x^{\prime}=\frac{x-\bar{x}}{\sigma_{x}}$ (1)
where $\bar{x}$ and $\sigma_{x}$ are the mean and standard deviation of the
original feature $x$, and $x^{\prime}$ is the re-scaled feature with zero mean
and unit variance.
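Eq. (1) is ordinary z-score standardization; a direct numpy sketch (with the statistics computed on the training set only, our assumption of standard practice) is:

```python
import numpy as np

def fit_scaler(X_train):
    """Column-wise mean/std of the training features (Eq. 1)."""
    mean = X_train.mean(axis=0)
    std = X_train.std(axis=0)
    std[std == 0.0] = 1.0      # guard against constant features
    return mean, std

def transform(X, mean, std):
    return (X - mean) / std    # zero mean, unit variance per feature
```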
### II-B Types of attacks in our datasets
Figure 2: Data packet types distribution in Dataset I, II and online script.
The ones with a superscript “*” are temporally correlated attacks.
Using our scripts, we created two datasets. As illustrated in Fig. 2, in
addition to “Normal” data packets, Dataset I contains attacks that are
uncorrelated in the time domain while Dataset II contains temporally
correlated attacks. We have incorporated 10 attacks in our testbed; seven of
them are temporally uncorrelated while the remaining three are correlated. The
temporally uncorrelated attacks are “Pump Speed” (Pump), “Tank 1 Level” (T1),
“Tank 2 Level” (T2), “Threshold Highest” (HH), “Threshold Lowest” (LL),
“Threshold High” (H) and “Threshold Low” (L), whose detailed descriptions can
be found in [17, 32].
Among the temporally correlated attacks, two types of flooding DoS attacks are
included [31]. The first, labelled “Scan flooding” (SCAN), sends a massive
number of scan commands, increasing the latency of communications between the
HMI and the sensors in SCADA. The second type, labelled “Incorrect CRC” (CRC),
sends a massive number of packets with incorrect cyclic redundancy checks
(CRC) to cause latency at the master.
Another temporally correlated attack included in this testbed is the “Man-in-
the-middle” (MITM) attack. It is an eavesdropping attack where the attacker
secretly monitors the communication traffic between two parties. Here, the
MITM attack is launched by Ettercap [38] using ARP spoofing [39]. One
effective way to detect ARP spoofing is to inspect the Media Access Control
(MAC) addresses in layer 2 of the OSI model. However, most network IDSs (NIDS)
do not support layer-2 protocols such as ARP. Even Snort requires an ARP spoof
preprocessor [40] to collect MAC address information to detect ARP spoofing.
Besides, the victim host of an ARP spoofing attack experiences packet
retransmissions, and for SCADA networks packet retransmissions or delays may
cause great damage. Therefore, the IDS should raise an alert when it detects
either an MITM attack or packet retransmissions. To make the IDS robust in
detecting both MITM and packet retransmissions, we remove the MAC address
feature, which was used for labeling the MITM attack, from the datasets used
to train the neural networks.
In the first stage, the FNN and LSTM IDSs are trained as binary classifiers
that only separate attacks from normal traffic, and are tested on these
datasets separately for performance comparison. In the on-line phase, these
two IDSs, along with our FNN-LSTM ensemble IDS, are trained as multi-class
classifiers on the combined datasets to predict the various types of attacks
from normal traffic and are deployed on the testbed. In addition, we also
implement a script that can launch realtime attacks for online testing. The
online script randomly launches normal traffic and temporally uncorrelated and
correlated attacks with the ratios shown in the table to examine the omni-
detection capability of the different IDSs.
## III IDS Implementation
In this paper, we implemented three IDSs: a conventional FNN, an LSTM and a
FNN-LSTM ensemble IDS. We use Keras [41] to implement TensorFlow [42] based
machine learning models and the Adam optimizer [43] to train them. The
structures of these IDSs are detailed in the following subsections.
### III-A FNN IDS
Figure 3: (a) The schematics of the FNN IDS; (b) details of each neuron in the FNN.
The basic structure of the FNN IDS is illustrated in Fig. 3. A typical FNN is
formed by an input layer, an output layer and one or more hidden layers in
between. Each layer has a number of neurons that take the outputs of the
neurons in the previous layer as input and produce outputs for the neurons in
the next layer. In our case, the inputs are the scaled and normalized features
extracted from the data packets, and the outputs are the predictions of
attacks and normal events.
Mathematically, the FNN can be expressed as:
$\begin{array}[]{rcl}\textbf{z}^{(1)}&=&\textbf{W}^{(1)}\textbf{x}+\textbf{b}^{(1)},\quad\textbf{h}_{1}=f_{h}(\textbf{z}^{(1)})\\ \textbf{z}^{(2)}&=&\textbf{W}^{(2)}\textbf{h}_{1}+\textbf{b}^{(2)},\quad\textbf{h}_{2}=f_{h}(\textbf{z}^{(2)})\\ &\vdots&\\ \textbf{z}^{(N+1)}&=&\textbf{W}^{(N+1)}\textbf{h}_{N}+\textbf{b}^{(N+1)},\quad\hat{\textbf{y}}=\textbf{z}^{(N+1)}\end{array}$ (2)
where $N$ is the number of hidden layers, $f_{h}$ is the ReLU activation
function, and $\textbf{W}^{(1)},\textbf{W}^{(2)},...,\textbf{W}^{(N+1)}$,
$\textbf{b}^{(1)},\textbf{b}^{(2)},...,\textbf{b}^{(N+1)}$ are the parameters
to be trained. Here we use softmax cross entropy as our loss function, which
can be expressed as
$f_{L}(\hat{\textbf{y}},\textbf{y})=-\sum_{i=1}^{C}\textbf{y}_{i}\log(f_{s}(\hat{\textbf{y}}_{i}))$ (3)
where $\hat{\textbf{y}}$ is the predicted label and $\textbf{y}$ is the ground
truth, $C$ is the number of possible classes, $\textbf{y}_{i}$ and
$\hat{\textbf{y}}_{i}$ are the actual and predicted labels belonging to class
$i$, and $f_{s}$ is the softmax function.
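A minimal Keras sketch of this FNN is given below, matching Eqs. (2) and (3): one ReLU hidden layer (the count selected in Subsection IV-A) followed by a linear output layer whose logits feed a softmax cross-entropy loss. The hidden-layer width of 64 is an assumption, not a value reported here.

```python
from tensorflow import keras

def build_fnn(n_features, n_classes, n_hidden=64):
    """FNN per Eqs. (2)-(3); n_hidden=64 is an assumed width."""
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(n_hidden, activation="relu"),  # h1 = f_h(W1 x + b1)
        keras.layers.Dense(n_classes),                    # logits: y_hat = z^(N+1)
    ])
    # from_logits=True applies the softmax of Eq. (3) inside the loss.
    model.compile(optimizer="adam",
                  loss=keras.losses.CategoricalCrossentropy(from_logits=True),
                  metrics=["accuracy"])
    return model
```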
### III-B LSTM IDS
The LSTM is built from a collection of single LSTM cells [31]. The structure
of a single LSTM cell is shown in Fig. 4a. Each LSTM cell has 3 gates: an
input gate, a forget gate and an output gate. The input gate selects useful
information and passes it to the cell state, irrelevant information is
discarded by the forget gate, and the output gate produces the activation
output $o_{t}$. A hidden state vector $h_{t}$ is passed on to the next time
step.
Figure 4: The structure of (a) a single LSTM cell and (b) the LSTM network.
The following equations represent the processes of a single LSTM cell:
$\begin{array}[]{rcl}\textbf{f}_{t}&=&\sigma(\textbf{W}_{f}x_{t}+\textbf{U}_{f}h_{t-1}+\textbf{b}_{f})\\ \textbf{i}_{t}&=&\sigma(\textbf{W}_{i}x_{t}+\textbf{U}_{i}h_{t-1}+\textbf{b}_{i})\\ \textbf{o}_{t}&=&\sigma(\textbf{W}_{o}x_{t}+\textbf{U}_{o}h_{t-1}+\textbf{b}_{o})\\ \textbf{c}_{t}&=&\textbf{f}_{t}\circ\textbf{c}_{t-1}+\textbf{i}_{t}\circ\sigma_{g}(\textbf{W}_{c}x_{t}+\textbf{U}_{c}h_{t-1}+\textbf{b}_{c})\\ \textbf{h}_{t}&=&\textbf{o}_{t}\circ\sigma_{g}(\textbf{c}_{t})\end{array}$ (4)
where $\sigma_{g}$ is the hyperbolic tangent function, $\sigma$ is the sigmoid
function, and $\circ$ denotes the element-wise product. $\textbf{W}$,
$\textbf{U}$ and $\textbf{b}$ are the weight matrices and bias vectors of the
gates.
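To make Eq. (4) concrete, the NumPy sketch below computes one time step of a single LSTM cell; the dictionary-based parameter layout is our own choice for readability.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(x_t, h_prev, c_prev, W, U, b):
    """One step of Eq. (4); W, U, b are dicts keyed by gate name."""
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])  # forget gate
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])  # input gate
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])  # output gate
    c_t = f_t * c_prev + i_t * np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])
    h_t = o_t * np.tanh(c_t)  # hidden state passed to the next time step
    return h_t, c_t
```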
As shown in Fig. 4b, the LSTM IDS consists of two LSTM layers with 10 LSTM
cells in each layer. An activation layer with the sigmoid activation function
is placed after the last LSTM layer. The vector $\{x_{1},x_{2},...,x_{t}\}$ is
the input containing the features of the packets within $t$ time steps; the
dataset is reshaped into this format and fed into the LSTM model. In our
model, we set $t=10$. The loss function of this model is binary cross entropy
and the optimizer is the Adam optimizer [44].
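A minimal Keras sketch of this architecture follows; reading the final activation layer as a single sigmoid output unit for binary classification is an assumption.

```python
from tensorflow import keras

def build_lstm_ids(n_features, time_steps=10):
    """Two LSTM layers of 10 cells each with a sigmoid output, per Fig. 4b."""
    model = keras.Sequential([
        keras.layers.Input(shape=(time_steps, n_features)),
        keras.layers.LSTM(10, return_sequences=True),  # first LSTM layer
        keras.layers.LSTM(10),                         # second LSTM layer
        keras.layers.Dense(1, activation="sigmoid"),   # sigmoid activation layer
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Each training sample is the feature matrix of 10 consecutive packets, i.e. the dataset must be reshaped to (samples, 10, n_features) before fitting.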
### III-C FNN-LSTM Ensemble IDS
Figure 5: Ensemble Model.
Our FNN-LSTM ensemble IDS aims to combine the advantages of both the FNN and
the LSTM while avoiding their weaknesses [45]. The schematic of this model is
shown in Fig. 5. In this model, the data packet features are fed into the FNN
and the LSTM simultaneously to predict attacks as a multi-class classifier.
The output labels of both are concatenated to form the input of a multilayer
perceptron which, through training, learns to vote for the best prediction for
the data packet under investigation.
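A sketch of this wiring with the Keras functional API is given below; the MLP width is an assumption, and in practice the FNN and LSTM base models would be trained first, as described in Section IV.

```python
from tensorflow import keras

def build_ensemble(fnn, lstm, n_classes):
    """Concatenate the outputs of trained FNN and LSTM models and let a
    small MLP vote; the MLP width of 32 is an assumed value."""
    x_fnn = keras.Input(shape=fnn.input_shape[1:])    # per-packet features
    x_lstm = keras.Input(shape=lstm.input_shape[1:])  # packet sequences
    merged = keras.layers.Concatenate()([fnn(x_fnn), lstm(x_lstm)])
    h = keras.layers.Dense(32, activation="relu")(merged)
    out = keras.layers.Dense(n_classes, activation="softmax")(h)
    return keras.Model(inputs=[x_fnn, x_lstm], outputs=out)
```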
## IV Experiments and Results
To demonstrate their capabilities for detecting attacks with and without
temporal correlation, we first implement the FNN and LSTM IDSs to establish
references for comparison. At this stage, the IDSs only perform binary
classification to predict whether the data packet under investigation is
normal (labeled “0”) or an attack (labeled “1”). Consequently, the sigmoid
function
$\sigma(z)=\frac{e^{z}}{1+e^{z}}$ (5)
is selected as the activation function. Here, $z$ is the output of the
previous LSTM layer.
### IV-A Hyperparameter tuning
Both IDSs are trained on 70% of randomly chosen samples from the two datasets
and tested on the remaining 30%, following a 10-fold training/testing
procedure, so that the averages and standard deviations of the figures of
merit, including precision, recall and $\mathrm{F_{1}}$, can be used for
evaluation.
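The evaluation loop can be sketched as follows with scikit-learn, reading the 10-fold procedure as 10 repeated stratified 70/30 splits; the training settings inside the loop are placeholders, not the exact configuration used here.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score
from sklearn.model_selection import StratifiedShuffleSplit

def ten_fold_metrics(build_model, X, y, folds=10):
    """Mean and std of precision/recall/F1 over repeated 70/30 splits."""
    splitter = StratifiedShuffleSplit(n_splits=folds, test_size=0.3)
    scores = []
    for train_idx, test_idx in splitter.split(X, y):
        model = build_model()
        model.fit(X[train_idx], y[train_idx], epochs=3, verbose=0)  # placeholder settings
        y_hat = (model.predict(X[test_idx]) > 0.5).astype(int).ravel()
        scores.append([precision_score(y[test_idx], y_hat),
                       recall_score(y[test_idx], y_hat),
                       f1_score(y[test_idx], y_hat)])
    scores = np.array(scores)
    return scores.mean(axis=0), scores.std(axis=0)
```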
To determine the number of hidden layers necessary for our FNN, we computed
$\mathrm{F_{1}}$ with 0, 1 and 2 hidden layers, obtaining 99.22%, 99.96% and
99.97% respectively. Employing 1 hidden layer thus increases
$\mathrm{F_{1}}$ by more than 0.7%, while adding a second hidden layer brings
only a minimal further improvement. Therefore, we select 1 hidden layer in our
FNN implementation.
In addition, to circumvent overfitting, we adopt an early stopping procedure
for the FNN: optimization stops once the number of epochs whose relative loss
difference from the preceding epoch is less than $\mathrm{10^{-6}}$ reaches 35
[46]. The LSTM similarly stops early, with the maximum number of epochs capped
at 3.
In the implementation of the LSTM, we connect 10 LSTM cells in the input
layer, where the features from 10 consecutive data packets are entered into
the cells to predict whether the last packet is normal or an attack. In
training, we adopt mini-batches with a batch size of $\mathrm{1,000}$.
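One way to approximate these stopping rules with Keras' built-in callback is sketched below; note that `EarlyStopping` counts consecutive non-improving epochs, which is close to, but not exactly, the criterion stated above.

```python
from tensorflow import keras

# FNN: stop once the training loss has improved by less than 1e-6
# for 35 epochs (an approximation of the rule described above).
fnn_stop = keras.callbacks.EarlyStopping(monitor="loss",
                                         min_delta=1e-6, patience=35)

# fnn.fit(X_train, y_train, epochs=1000,
#         callbacks=[fnn_stop], batch_size=1000)  # mini-batches of 1,000

# LSTM: simply cap training at 3 epochs.
# lstm.fit(X_seq, y_seq, epochs=3, batch_size=1000)
```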
### IV-B Detection of temporally uncorrelated attacks
We use Dataset I, described in Section II, to compare the detection
capabilities of the FNN and LSTM for temporally uncorrelated attacks. To
verify the models, learning curves are plotted in Fig. 6, showing the training
and testing losses as functions of the number of training samples. Here, the
averages and standard deviations over 10-fold training/testing are represented
by circle markers and error bars respectively. As shown, with training samples
exceeding 40,000, the FNN training and testing losses (blue dashed lines)
start to converge, while the LSTM (red solid lines) converges at sample sizes
larger than 60,000. Overall, this confirms that the number of samples in
Dataset I is sufficient for the training and testing of our IDSs.
Figure 6: Learning curves of FNN and LSTM using the temporally-uncorrelated-attacks dataset (Dataset I).

TABLE I: Comparison of temporally-uncorrelated-attacks detection $\mathrm{(\%)}$.

| | Precision | Recall | F1
---|---|---|---
FNN | $\mathrm{99.996{\pm}0.006}$ | $\mathrm{99.84{\pm}0.05}$ | $\mathrm{99.92{\pm}0.03}$
LSTM | $\mathrm{99.88{\pm}0.06}$ | $\mathrm{98.7{\pm}0.4}$ | $\mathrm{99.3{\pm}0.1}$
TABLE II: Confusion matrices of temporally-uncorrelated-attacks detection using Dataset I (averaged over 10 trials).

| | Predicted Normal | Predicted Attacks
---|---|---
Actual Normal, FNN | $\mathrm{69,845.4}$ | $\mathrm{0.6}$
Actual Normal, LSTM | $\mathrm{69,902.2}$ | $\mathrm{22.8}$
Actual Attacks, FNN | $\mathrm{30.7}$ | $\mathrm{19,741.3}$
Actual Attacks, LSTM | $\mathrm{241.9}$ | $\mathrm{19,448.1}$
After the IDSs are trained, we use the 30% of samples in Dataset I for 10-fold
testing. As shown in Tables I and II, on average the FNN mislabels only 0.6 of
the 69,846 normal data packets as attacks and only 30.7 of the 19,772 actual
attacks as normal traffic, yielding precision, recall and $\mathrm{F_{1}}$ of
$\mathrm{99.996{\pm}0.006\%}$, $\mathrm{99.84{\pm}0.05\%}$ and
$\mathrm{99.92{\pm}0.03\%}$. In comparison, the LSTM mislabels 22.8 normal
packets as attacks and 241.9 attacks as normal packets, resulting in figures
of merit of $\mathrm{99.88{\pm}0.06\%}$, $\mathrm{98.7{\pm}0.4\%}$ and
$\mathrm{99.3{\pm}0.1\%}$. This comparison demonstrates that the FNN
outperforms the LSTM in detecting temporally uncorrelated attacks, where
recognition of in-packet feature patterns is critical.
### IV-C Detection of temporally correlated attacks
Figure 7: Learning curves of FNN and LSTM using the temporally-correlated-attacks dataset (Dataset II).

TABLE III: Comparison of temporally-correlated-attacks detection $\mathrm{(\%)}$.

| | Precision | Recall | F1
---|---|---|---
FNN | $\mathrm{73{\pm}2}$ | $\mathrm{49{\pm}4}$ | $\mathrm{58{\pm}2}$
LSTM | $\mathrm{99.60{\pm}0.01}$ | $\mathrm{99.52{\pm}0.02}$ | $\mathrm{99.56{\pm}0.01}$
TABLE IV: Confusion matrices of temporally-correlated-attacks detection using Dataset II (averaged over 10 trials).

| | Predicted Normal | Predicted Attacks
---|---|---
Actual Normal, FNN | $\mathrm{28,668.3}$ | $\mathrm{5,044.7}$
Actual Normal, LSTM | $\mathrm{33,504.0}$ | $\mathrm{105.0}$
Actual Attacks, FNN | $\mathrm{13,510.4}$ | $\mathrm{13,169.6}$
Actual Attacks, LSTM | $\mathrm{128.4}$ | $\mathrm{26,652.6}$
In this subsection, the FNN and LSTM are re-trained and tested using Dataset
II to compare their detection of temporally correlated attacks. Again, the
learning curves in Fig. 7 show that both the FNN (blue dashed lines) and the
LSTM (red solid lines) converge once the training samples exceed 10,000, while
the LSTM clearly shows a lower testing loss. This confirms that our dataset is
sufficient to generalize the IDS models.
The performance of each model is compared in Tables III and IV. As shown, the
FNN is inefficient in detecting temporally correlated attacks, with precision,
recall and $\mathrm{F_{1}}$ scores as low as $\mathrm{73{\pm}2\%}$,
$\mathrm{49{\pm}4\%}$ and $\mathrm{58{\pm}2\%}$ respectively. In particular,
5,044.7 of the 33,713 normal packets are mislabelled as attacks, while
13,510.4 of the 26,680 actual attacks are mislabelled as normal traffic. It is
evident that the poor performance of the FNN is caused by its inability to
capture inter-packet features. In contrast, the LSTM displays an outstanding
performance, with the corresponding figures of merit at
$\mathrm{99.60{\pm}0.01\%}$, $\mathrm{99.52{\pm}0.02\%}$ and
$\mathrm{99.56{\pm}0.01\%}$: only 105.0 normal packets are mislabelled as
attacks and 128.4 attack packets are mislabelled as normal traffic. As
expected, the LSTM outperforms the FNN in detecting temporally correlated
attacks due to its inherent ability to observe data patterns in the time
domain.
### IV-D Omni attacks detection
TABLE V: Macro-average comparison of omni-attacks detection $\mathrm{(\%)}$.

| | Precision | Recall | F1
---|---|---|---
FNN | $\mathrm{88{\pm}1}$ | $\mathrm{89.2{\pm}0.8}$ | $\mathrm{87.4{\pm}0.6}$
LSTM | $\mathrm{99.54{\pm}0.03}$ | $\mathrm{99.01{\pm}0.07}$ | $\mathrm{99.27{\pm}0.05}$
Ensemble | $\mathrm{99.76{\pm}0.05}$ | $\mathrm{99.57{\pm}0.03}$ | $\mathrm{99.68{\pm}0.04}$
Figure 8: (a) Precision, (b) recall and (c) $\mathrm{F_{1}}$ of individual attacks in omni-attacks detection.
Recognizing the complementary strengths of the FNN and LSTM IDSs in detecting
temporally uncorrelated and correlated attacks respectively, we here combine
the advantages of both into an omni-attacks detector through an ensemble
approach. The structure of the FNN-LSTM ensemble is described in Subsection
III-C. To implement it, we first remodel the FNN and LSTM into multi-class
classifiers so that different attacks can be distinguished. Datasets I and II
are combined and used to train the FNN and LSTM independently. The outputs of
both are then concatenated to form the input features of a multilayer
perceptron for training. After training, the FNN, LSTM and FNN-LSTM ensemble
IDSs are integrated into our SCADA testbed to detect and classify attacks. The
traffic is generated online using the script that produces a pre-determined
ratio of normal traffic, temporally correlated attacks and temporally
uncorrelated attacks, as described in Fig. 2. To estimate the figures of
merit, we evenly divide the predicted labels into 10 portions and compute the
average and standard deviation of the macro-averaged precision, recall and
$\mathrm{F_{1}}$. As shown in Table V, among the three IDSs, the FNN achieves
the lowest performance, with macro-averaged figures of merit of
$\mathrm{88{\pm}1\%}$, $\mathrm{89.2{\pm}0.8\%}$ and $\mathrm{87.4{\pm}0.6\%}$,
while the LSTM reaches $\mathrm{99.54{\pm}0.03\%}$, $\mathrm{99.01{\pm}0.07\%}$
and $\mathrm{99.27{\pm}0.05\%}$. The FNN-LSTM ensemble IDS outperforms both,
with figures of merit of $\mathrm{99.76{\pm}0.05\%}$,
$\mathrm{99.57{\pm}0.03\%}$ and $\mathrm{99.68{\pm}0.04\%}$. The detailed
analysis in Fig. 8 further confirms that the under-performance of the FNN
(yellow bars) is due to the mislabelling of temporally correlated attacks
(MITM, CRC and SCAN), while the performance of the LSTM (red bars) is degraded
by temporally uncorrelated attacks (“Pump Speed” (Pump), “Tank 1 Level” (T1),
“Threshold High” (H), etc.). Overall, the FNN-LSTM ensemble consistently
outperforms both across all types of attacks.
## V Conclusion
In this paper, we demonstrated that the FNN-LSTM ensemble IDS can detect all
types of cyberattacks regardless of their temporal correlation. In contrast,
the FNN only performs well on temporally uncorrelated attacks, while the LSTM
is relatively weak on uncorrelated attacks. In future research, we will
further improve our model through field trials.
## Acknowledgment
## References
* [1] S. Adepu and A. Mathur, “An investigation into the response of a water treatment system to cyber attacks,” in _17th IEEE International Symposium on High Assurance Systems Engineering (HASE 2016)_ , pp. 141–148, 2016.
* [2] H. Huang, W. Zhang, G. Qi, S. Ma, Y. Yang, F. Yan, and P. Chen, “Research on accident inversion and analysis method of the oil and gas pipeline SCADA system,” in _2014 Sixth International Conference on Measuring Technology and Mechatronics Automation_ , pp. 492–496, 2014.
* [3] S. Karnouskos, “Stuxnet worm impact on industrial cyber-physical system security,” in _IEEE 37th Annual Conference of the Industrial Electronics Society (IECON 2011)_ , pp. 4490–4494, 2011.
* [4] M. Roesch, “Snort - lightweight intrusion detection for networks,” in _Proceedings of the 13th USENIX Conference on System Administration_ , ser. LISA ’99. Berkeley, CA, USA: USENIX Association, pp. 229–238, 1999.
* [5] M. A. Aydın, A. H. Zaim, and K. G. Ceylan, “A hybrid intrusion detection system design for computer network security,” _Computers & Electrical Engineering_, vol. 35, no. 3, pp. 517 – 526, 2009.
* [6] W. Gao and T. Morris, “On cyber attacks and signature based intrusion detection for modbus based industrial control systems,” _Journal of Digital Forensics, Security and Law_ , Vol. 9, No. 1, Article. 3, 2014.
* [7] Y. Wang, Z. Xu, J. Zhang, L. Xu, H. Wang, and G. Gu, “SRID: State relation based intrusion detection for false data injection attacks in SCADA,” in _Computer Security - ESORICS 2014_ , pp. 401–418, 2014.
* [8] V. Jyothsna, V. V. Rama Prasad, and K. Munivara Prasad, “A review of anomaly based intrusion detection systems,” _International Journal of Computer Applications_ , vol. 28, pp. 26–35, 08 2011.
* [9] D. Damopoulos, S. A. Menesidou, G. Kambourakis, M. Papadaki, N. Clarke, and S. Gritzalis, “Evaluation of anomaly-based IDS for mobile devices using machine learning classifiers,” _Security and Communication Networks_ , vol. 5, no. 1, pp. 3–14, 2012.
* [10] G. Nascimento and M. Correia, “Anomaly-based intrusion detection in software as a service,” in _2011 IEEE/IFIP 41st International Conference on Dependable Systems and Networks Workshops (DSN-W)_ , pp. 19–24, June 2011.
* [11] M. S. Islam and S. A. Rahman, “Anomaly intrusion detection system in wireless sensor networks: security threats and existing approaches,” _International Journal of Advanced Science and Technology_ , vol. 36, no. 1, pp. 1–8, 2011.
* [12] O. Linda, T. Vollmer, and M. Manic, “Neural network based intrusion detection system for critical infrastructures,” in _2009 International Joint Conference on Neural Networks_ , pp. 1827–1834, 2009.
* [13] J. Rrushi and K.-D. Kang, “Detecting anomalies in process control networks,” _Critical Infrastructure Protection III_ , pp. 151–165, 2009.
* [14] G. Poojitha, K. N. Kumar, and P. J. Reddy, “Intrusion detection using artificial neural network,” in _2010 International Conference on Computing, Communication and Networking Technologies (ICCCNT)_, pp. 1–7, 2010.
* [15] Y. Zhang, L. Wang, W. Sun, R. C. Green II, and M. Alam, “Distributed intrusion detection system in a multi-layer network architecture of smart grids,” _IEEE Transactions on Smart Grid_ , vol. 2, no. 4, pp. 796–808, 2011.
* [16] L. A. Maglaras and J. Jiang, “Intrusion detection in scada systems using machine learning techniques,” in _2014 Science and Information Conference_. London, pp. 626–631, 2014.
* [17] L. Zhang, “An implementation of scada network security testbed,” _Master’s thesis, University of Victoria, Victoria, BC_ , 2015.
* [18] L. T. Heberlein and M. Bishop, “Attack class: address spoofing,” in _Proceedings of the 19th National Information Systems Security Conference_ , 1997.
* [19] S. Kumar, “Smurf-based distributed denial of service (ddos) attack amplification in internet,” in _Second International Conference on Internet Monitoring and Protection (ICIMP 2007)_ , San Jose, CA, pp. 25–25, 2007.
* [20] B. Chen, N. Pattanaik, A. Goulart, K. Butler-Purry, and D. Kundur, “Implementing attacks for modbus/tcp protocol in a real-time cyber physical system test bed,” _Proceedings - CQR 2015: 2015 IEEE International Workshop Technical Committee on Communications Quality and Reliability_ , pp. 1–6, 2015.
* [21] N. Sayegh, A. Chehab, I. H. Elhajj, and A. Kayssi, “Internal security attacks on scada systems,” in _2013 Third International Conference on Communications and Information Technology (ICCIT)_ , pp. 22–27, 2013.
* [22] D. Yang, A. Usynin, and J. Hines, “Anomaly-based intrusion detection for SCADA systems,” in _Proceedings of the 5. International Topical Meeting on Nuclear Plant Instrumentation Controls, and Human Machine Interface Technology_ , vol. 43, no. 47, pp. 797–803, 2006.
* [23] W. Gao, T. Morris, B. Reaves, and D. Richey, “On scada control system command and response injection and intrusion detection,” _2010 eCrime Researchers Summit_ , Dallas, TX, pp. 1-9, 2010.
* [24] L. A. Maglaras, J. Jiang, and T. Cruz, “Integrated ocsvm mechanism for intrusion detection in SCADA systems,” _Electronics Letters_ , vol. 50, no. 25, pp. 1935–1936, 2014.
* [25] A. Graves, A.-r. Mohamed, and G. Hinton, “Speech recognition with deep recurrent neural networks,” _arXiv e-prints_, arXiv:1303.5778, Mar. 2013.
* [26] D. Eck and J. Schmidhuber, “A first look at music composition using lstm recurrent neural networks,” Technical Report. Istituto Dalle Molle Di Studi Sull Intelligenza Artificiale, 2002.
* [27] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” _arXiv e-prints_, arXiv:1409.3215, Sep. 2014.
* [28] C. D. McDermott, F. Majdani, and A. V. Petrovski, “Botnet detection in the internet of things using deep learning approaches,” in _2018 International Joint Conference on Neural Networks (IJCNN)_ , Rio de Janeiro, pp. 1–8, 2018.
* [29] C. Feng, T. Li, and D. Chana, “Multi-level anomaly detection in industrial control systems via package signatures and lstm networks,” in _2017 47th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)_ , Denver, CO, pp. 261–272, 2017.
* [30] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” _Neural Comput._ , vol. 9, no. 8, pp. 1735–1780, Nov. 1997.
* [31] T. Morris and W. Gao, “Industrial control system traffic data sets for intrusion detection research,” in _Critical Infrastructure Protection VIII_, J. Butts and S. Shenoi, Eds., IFIP Advances in Information and Communication Technology, vol. 441, pp. 65–78, 2014.
* [32] T. Morris, A. Srivastava, B. Reaves, W. Gao, K. Pavurapu, and R. Reddi, “A control system testbed to validate critical infrastructure protection concepts,” _International Journal of Critical Infrastructure Protection_ , vol. 4, no. 2, pp. 88 – 103, 2011.
* [33] H. Wang, T. Lu, X. Dong, P. Li, and M. Xie, “Hierarchical online intrusion detection for scada networks,” _arXiv_ , p. 1611.09418, 2016.
* [34] S. Patel, “IEC-61850 Protocol Analysis and Online Intrusion Detection System for SCADA Networks using Machine Learning,” Master’s thesis, University of Victoria, Victoria, BC, 2017.
* [35] MBLogic. Mblogic homepage. [Online]. Available: http://mblogic.sourceforge.net/index.html
* [36] N. Provos. Honeyd. [Online]. Available: http://www.honeyd.org/
* [37] ProconX Pty Ltd. Modpoll modbus master simulator. [Online]. Available: http://www.modbusdriver.com/modpoll.html
* [38] Ettercap, a comprehensive suite for man in the middle attacks. [Online]. Available: http://openmaniak.com/ettercap.php
* [39] Z. Trabelsi and W. El-Hajj, “Arp spoofing: A comparative study for education purposes,” in _2009 Information Security Curriculum Development Conference_ , ser. InfoSecCD ’09. New York, NY, USA: ACM, pp. 60–66, 2009.
* [40] Snort ARP spoof preprocessor. [Online]. Available: http://manual-snort-org.s3-website-us-east-1.amazonaws.com/node17.html
* [41] F. Chollet _et al._ , “Keras,” https://keras.io, 2015.
* [42] M. Abadi _et al._, “TensorFlow: A system for large-scale machine learning,” in _12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16)_, pp. 265–283, 2016.
* [43] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” _arXiv preprint arXiv:1412.6980_ , 2014.
* [44] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” _CoRR_ , vol. abs/1412.6980, 2014.
* [45] T. G. Dietterich, “Ensemble methods in machine learning,” in _Proceedings of the First International Workshop on Multiple Classifier Systems_ , ser. MCS ’00. London, UK, UK: Springer-Verlag, pp. 1–15, 2000.
* [46] L. Prechelt, “Early stopping-but when?” in _Neural Networks: Tricks of the Trade_. London, UK, UK: Springer-Verlag, pp. 55–69, 2012.